It seems you are conflating the terms "contiguous" and "sequential". We have sequential reads/writes (from/to disk) and contiguous disk space allocation.
A single HDFS block (64 MB by default) is written to disk sequentially. Therefore there is a fair chance that the data will land in contiguous space on disk (multiple disk blocks next to each other), so fragmentation will be much lower than with random disk writes.
Furthermore, sequential reads/writes are much faster than random reads/writes, which require multiple disk seeks.
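You can get a feel for the difference yourself with a small sketch like the one below (not HDFS code, just an illustration): it reads the same file once sequentially and once in a shuffled order that forces a seek before every read. Note that on a small file the OS page cache can mask the gap; on spinning disks and larger files the random pass is typically far slower.

```python
import os
import random
import tempfile
import time

CHUNK = 4096
N_CHUNKS = 2048  # 8 MB test file; illustrative size, not an HDFS block

# Create a temporary file filled with random bytes to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(CHUNK * N_CHUNKS))
    path = f.name

def sequential_read(path):
    """Read the file front to back in fixed-size chunks."""
    total = 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            total += len(buf)
    return total

def random_read(path):
    """Read the same chunks in shuffled order, seeking before each read."""
    offsets = [i * CHUNK for i in range(N_CHUNKS)]
    random.shuffle(offsets)
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    return total

t0 = time.perf_counter()
seq_bytes = sequential_read(path)
t1 = time.perf_counter()
rnd_bytes = random_read(path)
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.4f}s, random: {t2 - t1:.4f}s")
os.unlink(path)
```

Both passes read exactly the same bytes; only the access pattern differs, which is what isolates the cost of seeking.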
To learn more about this, search for "Difference between sequential write and random write".
Hope this answers your query to some extent.