HDFS is a block-structured file system: each file is divided into blocks of a fixed size, which is 128 MB by default. Let's take an example to understand how HDFS stores files as data blocks.
Suppose a person wants to store a 380 MB file in the Hadoop distributed file system. HDFS will divide the file into three blocks, because 380 MB divided by 128 MB, the default size of each data block, is approximately three.
So the first block will occupy 128 MB, the second block will also occupy 128 MB, and the third block will occupy the remaining 124 MB of the file. After the file has been divided into data blocks, these blocks are distributed across the DataNodes present in the Hadoop cluster.
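The splitting described above can be sketched in a few lines of Python. This is an illustration of the arithmetic, not Hadoop's actual implementation; the function name and the use of whole megabytes are assumptions for clarity.

```python
# Illustrative sketch of how a file is divided into HDFS-style blocks.
# Not Hadoop code: function name and MB granularity are assumptions.
BLOCK_SIZE_MB = 128  # HDFS default block size

def split_into_blocks(file_size_mb, block_size_mb=BLOCK_SIZE_MB):
    """Return the sizes (in MB) of the blocks a file would be split into."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        # Each block is full-size except possibly the last one.
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

print(split_into_blocks(380))  # [128, 128, 124]
```

Running this for a 380 MB file reproduces the example: two full 128 MB blocks plus a final 124 MB block.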
A small pictorial example is given below; I hope it will be helpful.