In Hadoop, the same key cannot be mapped to multiple reducers. However, the keys can be partitioned so that the reducers are more or less evenly loaded. For this, the input data should be sampled and the partition boundaries chosen accordingly.
Check the Yahoo paper for more details on writing a custom partitioner.
The Yahoo Sort code is in the org.apache.hadoop.examples.terasort package.
Let's say key A has 10 rows in the input, B has 20, C has 30, and D has 60. Then keys A, B, and C can be sent to reducer 1 and key D to reducer 2, so that the load on the reducers is evenly distributed. To choose the partition boundaries, the input has to be sampled to learn how the keys are distributed.
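The range-partitioning idea above can be sketched as plain Java, independent of the Hadoop Partitioner API (the class and method names here are illustrative, not Hadoop's): split points derived from sampling decide which reducer a key goes to.

```java
import java.util.Arrays;

// Hypothetical sketch of range partitioning. A sampling pass would pick
// split points so each partition holds roughly the same number of rows;
// here the single split point "D" sends A, B, C to reducer 0 and the
// heavy key D to reducer 1, matching the example in the text.
public class RangePartitionSketch {
    // Sorted split points, assumed to come from an input-sampling pass.
    static final String[] SPLIT_POINTS = { "D" };

    static int getPartition(String key) {
        int idx = Arrays.binarySearch(SPLIT_POINTS, key);
        // binarySearch returns -(insertionPoint) - 1 when the key is absent.
        return idx >= 0 ? idx + 1 : -(idx + 1);
    }

    public static void main(String[] args) {
        for (String k : new String[] { "A", "B", "C", "D" }) {
            System.out.println(k + " -> reducer " + getPartition(k));
        }
    }
}
```

Hadoop's TeraSort uses the same trick with a trie over sampled split points for speed; the binary search above is the simplest form of it.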
Here are some more suggestions to make the job complete faster.
Specify a combiner on the JobConf to reduce the number of records sent to the reducers. This also reduces the network traffic between the map and reduce tasks. Note, however, that the framework gives no guarantee that the combiner will actually be invoked.
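On a JobConf this is wired up with `conf.setCombinerClass(...)`, typically reusing the reducer class. What the combiner buys you can be sketched without the Hadoop API (the class below is illustrative, not part of Hadoop): map output is pre-aggregated on the map side before it crosses the network.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of what a combiner does for a word-count style job: many
// ("tools", 1) pairs emitted by one mapper collapse into a single
// ("tools", count) pair, so far fewer records reach the reducer.
public class CombinerSketch {
    static Map<String, Integer> combine(String[] mapOutputKeys) {
        Map<String, Integer> combined = new HashMap<>();
        for (String key : mapOutputKeys) {
            combined.merge(key, 1, Integer::sum);
        }
        return combined;
    }

    public static void main(String[] args) {
        // Four map output records shrink to two before the shuffle.
        String[] keys = { "tools", "tools", "tools", "maps" };
        System.out.println(combine(keys));
    }
}
```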
Also, since the data is skewed (some keys, say 'tools', are repeated over and over), you might want to increase the number of reduce tasks to complete the job faster. This ensures that while one reducer is processing 'tools', the remaining keys are being processed by other reducers in parallel.
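The number of reduce tasks is set on the JobConf with `conf.setNumReduceTasks(n)`. The effect can be sketched with the default HashPartitioner's formula (the class below is a stand-alone illustration, not Hadoop code): a hot key still lands on a single reducer, but with more reduce tasks the other keys spread out instead of queueing behind it.

```java
// Sketch of how reduce-task count affects load. With 1 reduce task every
// key (including the hot key 'tools') goes to the same reducer; with 4,
// the non-hot keys can land on other reducers and run in parallel.
public class ReducerLoadSketch {
    // Same formula Hadoop's default HashPartitioner uses:
    // a non-negative hash of the key, modulo the number of reduce tasks.
    static int partition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        String[] keys = { "tools", "maps", "search", "mail" };
        for (int tasks : new int[] { 1, 4 }) {
            System.out.print(tasks + " reduce task(s):");
            for (String k : keys) {
                System.out.print(" " + k + "->" + partition(k, tasks));
            }
            System.out.println();
        }
    }
}
```

Note that adding reduce tasks does not split a single hot key; for that you'd need the sampling-based custom partitioner described earlier.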