Hadoop doesn't copy the blocks to the node running the map task; instead, the blocks are streamed from the datanode to the task node (in some sensible transfer chunk size such as 4 KB). So in the example you give, the map task that processes the first block will read the entire first block and then stream-read the second block until it finds the end-of-line character. So it's probably 'mostly' local.
How much of the second block is read depends on how long the line is. It's entirely possible for a file split over 3 blocks to be processed by 3 map tasks, with the second map task processing essentially no records (while still reading all the data from block 2 and some of block 3) if a line starts in block 1 and ends in block 3.
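To make the boundary rule concrete, here's a minimal Python sketch of the logic (modeled loosely on Hadoop's LineRecordReader, but with made-up data and split offsets): a split skips its first partial line unless it starts at offset 0, and reads past its own end to finish its last line.

```python
def read_split(data: bytes, start: int, end: int):
    """Return the lines 'owned' by the split [start, end).

    A split skips the first (partial) line unless it starts at offset 0,
    and reads past `end` to complete the last line it started.
    """
    pos = start
    if start != 0:
        # The partial line we land in belongs to the previous split; skip it.
        nl = data.find(b"\n", start)
        pos = len(data) if nl == -1 else nl + 1
    lines = []
    while pos < end and pos < len(data):
        nl = data.find(b"\n", pos)
        stop = len(data) if nl == -1 else nl + 1
        lines.append(data[pos:stop])
        pos = stop  # may run past `end` to finish the last line
    return lines

# A short first line, then one long line spanning the 2nd and 3rd "blocks"
# (block size 10 here, purely for illustration).
data = b"aaa\n" + b"b" * 20 + b"\n"
first = read_split(data, 0, 10)    # reads line 1, then streams on to finish line 2
second = read_split(data, 10, 20)  # skips to the next newline -> no records at all
third = read_split(data, 20, len(data))
```

Note how the middle split reads data but emits zero records, which is exactly the "second map task essentially processing no records" case above.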
Hope this makes sense