If you're using YARN, there's a REST API I'd use before screen scraping the job tracker: see "Hadoop YARN - Introduction to the web services REST API's". If you're on 1.3, I don't know of anything equivalent. There is a bug open on Apache's Jira asking for such a feature, but it's marked as resolved in MRv2, so I wouldn't expect any progress towards it on the 1.x line.
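To give a feel for the YARN route, here's a sketch that hits the ResourceManager's `/ws/v1/cluster/apps` endpoint and pulls out a few fields per application. The hostname, port, and the exact fields are assumptions based on the default RM web UI setup; adjust for your cluster.

```python
# Sketch: query the YARN ResourceManager REST API for applications.
# "resourcemanager" and port 8088 are assumed defaults -- adjust as needed.
import json
from urllib.request import urlopen

def list_apps(rm_host="resourcemanager", port=8088, state="RUNNING"):
    """Fetch applications from the RM's /ws/v1/cluster/apps endpoint."""
    url = f"http://{rm_host}:{port}/ws/v1/cluster/apps?state={state}"
    with urlopen(url) as resp:
        return parse_apps(resp.read())

def parse_apps(payload):
    """Pull (id, name, state, progress) out of the RM's JSON response."""
    doc = json.loads(payload)
    apps = (doc.get("apps") or {}).get("app") or []
    return [(a["id"], a["name"], a["state"], a["progress"]) for a in apps]

# Trimmed example of the JSON shape the endpoint returns:
sample = b'''{"apps": {"app": [
  {"id": "application_1", "name": "wordcount",
   "state": "RUNNING", "progress": 42.0}]}}'''
print(parse_apps(sample))  # -> [('application_1', 'wordcount', 'RUNNING', 42.0)]
```

Polling that endpoint on a timer gets you basic job-flow tracking without any scraping.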
Regarding Ganglia/Nagios: the pair doesn't track job flow; it tracks the health of the system. If either has job-tracking capability buried among its innards, I haven't found it.
You can either scrape the information from the JobTracker web UI (for tasks) or write a small Java program that uses the APIs to connect to the JobTracker and poll it for the information. In terms of HDFS events, you'll need to tail and parse the log file, or possibly scrape some of the information from the NameNode web UI. You could also use JMX to get metrics from each of the DataNodes, depending on what you're after.
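On the JMX route: Hadoop daemons also expose their MBeans as JSON over HTTP via a `/jmx` servlet, which is easier to poll than raw JMX. A rough sketch, assuming Hadoop 1.x default web ports (50070 for the NameNode, 50075 for DataNodes) and an illustrative bean name; verify both against your deployment:

```python
# Sketch: pull metrics from a Hadoop daemon's /jmx servlet over HTTP.
# Ports and the bean name below are assumptions -- check your cluster.
import json
from urllib.request import urlopen

def fetch_jmx(host, port):
    """Fetch the JSON MBean dump from http://host:port/jmx."""
    with urlopen(f"http://{host}:{port}/jmx") as resp:
        return json.loads(resp.read())["beans"]

def find_bean(beans, name_fragment):
    """Return the first MBean whose name contains name_fragment."""
    for bean in beans:
        if name_fragment in bean.get("name", ""):
            return bean
    return None

# Trimmed example of the /jmx response shape (values are made up):
sample = {"beans": [{"name": "Hadoop:service=DataNode,name=FSDatasetState",
                     "Remaining": 1073741824, "Capacity": 2147483648}]}
bean = find_bean(sample["beans"], "FSDatasetState")
print(bean["Remaining"])  # -> 1073741824
```

Looping `fetch_jmx` over your DataNode hosts gives you per-node capacity/usage numbers without touching the NameNode UI.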