This was a long-standing issue in HDFS, but there is a newer tool called DiskBalancer. It lets you generate a plan file that describes how data should be moved from disk to disk within a datanode, and then ask that datanode to execute the plan.
If one disk is over-utilized, writes will fail whenever the datanode happens to pick that disk, so you want data distributed evenly across all the disks. That is what DiskBalancer does for you: it computes how much data to move between disks, balancing volumes of the same storage type against each other.
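A typical workflow looks like the sketch below. This assumes a Hadoop 3.x cluster; the hostname is a placeholder, and depending on your version you may first need to set `dfs.disk.balancer.enabled` to `true` in `hdfs-site.xml`:

```shell
# 1. Generate a plan for a specific datanode; this writes a
#    <hostname>.plan.json file describing the proposed moves.
hdfs diskbalancer -plan datanode1.example.com

# 2. Execute the plan on that datanode (use the plan path
#    printed by the previous command).
hdfs diskbalancer -execute /system/diskbalancer/datanode1.example.com.plan.json

# 3. Check progress of the running plan.
hdfs diskbalancer -query datanode1.example.com
```

Note that DiskBalancer moves data between disks on a single datanode; it is different from the classic HDFS Balancer, which moves blocks between datanodes.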