When not using a temporary table, I assume the data is written to an HDFS file. Since Spark normally does in-memory processing, this would differ from Spark's usual approach: the data would have to be read back from the file for any further processing. Is this assumption correct?
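
For concreteness, here is a minimal PySpark sketch of the two approaches I mean (the input path, output path, and column names are made up for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("temp-view-vs-hdfs").getOrCreate()
df = spark.read.json("/data/events.json")  # hypothetical input path

# Option A: temporary view. This just registers a name for the DataFrame's
# logical plan; queries against it run through Spark's in-memory execution
# (df.cache() would additionally pin the data in executor memory).
df.createOrReplaceTempView("events")
in_memory_count = spark.sql(
    "SELECT COUNT(*) FROM events WHERE status = 'ok'"
).collect()

# Option B: write to HDFS and read back. The data is materialized on disk,
# so further processing starts with a fresh read from the file.
df.write.mode("overwrite").parquet("hdfs:///tmp/events_parquet")  # hypothetical path
from_disk_count = (
    spark.read.parquet("hdfs:///tmp/events_parquet")
    .filter("status = 'ok'")
    .count()
)
```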