I have a huge table in a relational database that users work with every day (CRUD operations and search).
Now there is a new task: build a huge aggregate report, on demand, over a one- to two-year period, and do it fast. All the table records for the last two years are too big to fit in memory, so I should split the computation into chunks, right? (A rough sketch of what I mean is below.)
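To clarify what I mean by chunking, here is a minimal sketch in Python using the standard-library `sqlite3` module, assuming a hypothetical `orders(id, created_at, amount)` table; the table name and columns are just placeholders for my real schema. It aggregates with keyset pagination so only one chunk of rows is in memory at a time:

```python
import sqlite3

CHUNK_SIZE = 10_000  # rows fetched per chunk; tune to available memory

def aggregate_total(conn, date_from, date_to):
    """Sum `amount` over a date range without loading all rows at once."""
    total = 0.0
    last_id = 0  # keyset cursor: highest id seen so far
    while True:
        rows = conn.execute(
            "SELECT id, amount FROM orders "
            "WHERE id > ? AND created_at BETWEEN ? AND ? "
            "ORDER BY id LIMIT ?",
            (last_id, date_from, date_to, CHUNK_SIZE),
        ).fetchall()
        if not rows:
            break  # no more rows in the range
        total += sum(amount for _, amount in rows)
        last_id = rows[-1][0]  # advance the cursor past this chunk
    return total

# usage: conn = sqlite3.connect("mydb.sqlite")
#        print(aggregate_total(conn, "2023-01-01", "2024-12-31"))
```

This works for a single sum, but my real report needs many aggregates over the same range, which is why I'm wondering whether a dedicated framework would be a better fit.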
I don't want to reinvent the wheel, so my question is: are distributed processing systems like Hadoop suited to this kind of task?