A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then passed as input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system.
Now, as you said, the data is spread across four nodes, so the MapReduce job will run on all four of them. One clarification: the DataNode daemon itself does not execute tasks; it only stores data. The map and reduce tasks are executed by a compute daemon (the NodeManager in YARN, or the TaskTracker in MRv1) that typically runs on the same machine, which lets each map task process the data block stored locally. So each node's chunk of data is fed to a mapper, which produces intermediate output; the framework then sorts and shuffles that output and hands it to the reducer as input to produce the final result. This is where the reduce task takes place.
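To make the flow concrete, here is a minimal single-process sketch of the map → sort/shuffle → reduce pipeline described above, using a hypothetical word-count job over four chunks standing in for the four nodes. The function names (`map_task`, `shuffle_and_sort`, `reduce_task`) are illustrative, not part of the Hadoop API; the framework performs these steps in a distributed fashion.

```python
from collections import defaultdict

def map_task(chunk):
    """Emit (word, 1) pairs for one input split (runs in parallel per chunk)."""
    for word in chunk.split():
        yield (word, 1)

def shuffle_and_sort(mapped_pairs):
    """Group map outputs by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reduce_task(key, values):
    """Sum the counts for one key to produce the final output."""
    return (key, sum(values))

# Four "nodes", each holding one chunk of the input data-set.
chunks = ["big data big", "data map reduce", "map map big", "reduce data"]

mapped = [pair for chunk in chunks for pair in map_task(chunk)]
result = dict(reduce_task(k, vs) for k, vs in shuffle_and_sort(mapped))
print(result)  # {'big': 3, 'data': 3, 'map': 3, 'reduce': 2}
```

In a real cluster, each `map_task` call would run near the node holding its chunk, and the sort/shuffle step would move data over the network so that all values for one key land on the same reducer.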