In Hadoop MapReduce, the input data sits on disk: you perform a map and a reduce, and the result goes back to disk. Apache Spark allows more complex pipelines. Maybe you need to map twice but don't need to reduce at all. Maybe you need to reduce, then map, then reduce again. The Spark API makes it very intuitive to set up complex pipelines with dozens of steps.
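For instance, here is a minimal sketch of such a pipeline in Scala (the input path, field layout, and output path are made up purely for illustration):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("pipeline"))

    // Parse, filter, re-shape, aggregate, then format: several steps, one pipeline.
    val totals = sc.textFile("hdfs:///logs/events.csv")        // hypothetical input
      .map(_.split(","))                                       // parse each line into fields
      .filter(fields => fields(2) == "purchase")               // keep only purchase events
      .map(fields => (fields(0), fields(3).toDouble))          // (userId, amount)
      .reduceByKey(_ + _)                                      // total amount per user
      .map { case (user, total) => s"$user\t$total" }          // format the output line

    totals.saveAsTextFile("hdfs:///output/totals-per-user")    // hypothetical output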
You could implement the same complex pipeline with MapReduce too, but then between each stage you write to disk and read it back. Spark avoids this overhead when possible. Keeping data in memory is one way, but very often even that is not necessary: one stage can simply pass its computed records to the next stage without ever storing the whole dataset anywhere.
This is not an option with MapReduce, because one MapReduce job knows nothing about the next: it has to complete fully before the next one can start. That is why Spark can be more efficient for complex computations.
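Continuing the sketch above, Spark's cache() shows the in-memory side of this: an intermediate result can be kept around and reused by several later computations without going back to disk.

    // Cache the parsed records so both actions below reuse them from memory
    // instead of re-reading and re-parsing the input file.
    val parsed = sc.textFile("hdfs:///logs/events.csv")
      .map(_.split(","))
      .cache()

    val totalEvents = parsed.count()                        // first action fills the cache
    val uniqueUsers = parsed.map(_(0)).distinct().count()   // second action is served from memory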
The API, especially in Scala, is very clean too. A classic MapReduce is often a single line. It's very empowering to use.
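Word count is the canonical example; a sketch of it with the Scala API (input path made up):

    // The whole classic MapReduce, map then reduce, in one chained expression.
    val counts = sc.textFile("hdfs:///data/books.txt")
      .flatMap(_.split("\\s+"))       // map: break lines into words
      .map(word => (word, 1))         // emit (word, 1) pairs
      .reduceByKey(_ + _)             // reduce: sum the counts per word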