To integrate Hadoop with Spark, you first need to configure a cluster. After that, make sure the following software is installed on your system.
After installing this software, open your Jupyter notebook and import findspark as follows.
import findspark
findspark.init('Replace Spark Path')
You can now run Spark jobs from the notebook just as you did in your earlier machine learning work.
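To see why the findspark step matters, here is a rough sketch of what findspark.init() does under the hood: it locates your Spark installation (from the path you pass or the SPARK_HOME environment variable) and puts Spark's bundled Python bindings on the interpreter's import path. The function name init_spark and the example path /opt/spark are illustrative assumptions, not part of the real findspark API.

```python
import os
import sys

def init_spark(spark_home=None):
    # Hypothetical sketch of findspark.init(): resolve the Spark
    # install path, export SPARK_HOME, and expose Spark's Python API.
    spark_home = spark_home or os.environ.get("SPARK_HOME")
    if spark_home is None:
        raise ValueError("Pass the Spark path or set SPARK_HOME first")
    os.environ["SPARK_HOME"] = spark_home
    # Spark ships its Python API under $SPARK_HOME/python; adding it to
    # sys.path is what makes `import pyspark` work inside the notebook.
    sys.path.insert(0, os.path.join(spark_home, "python"))
    return spark_home

# Example with an assumed install location:
home = init_spark("/opt/spark")
```

Once the real findspark.init() has run, `import pyspark` succeeds and you can build a SparkSession as usual.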