How to control logging functionality in Hadoop?


Hadoop uses its default log4j.properties file to control logging. My use case is to control the logs generated by my own classes.

Hadoop daemons like the JobTracker, TaskTracker, NameNode and DataNode use the log4j.properties file from their respective host node's Hadoop conf directory. There the rootLogger is set to "INFO,console", which logs all messages at level INFO and above to the console.

I trigger Hadoop jobs using an Oozie workflow. I tried passing my custom log4j.properties file to the job by setting the -Dlog4j.configuration=path/to/log4j.properties system property, but it does not work: the job still picks up the properties from the default file.

I am not supposed to touch the default log4j.properties file.

I am using Oozie v3.1.3-incubating, Hadoop v0.20 and Cloudera CDH v4.0.1.

How can I override the default log4j.properties file, or otherwise control the logging for my classes?

Nov 12, 2018 in Big Data Hadoop by Neha

1 answer to this question.


Logs are distributed across your cluster, but by logging them to the rootLogger you should be able to see them via the JobTracker's web UI.

If you want to use rolling files, you will have a hard time retrieving them later (again, because they are distributed across your task nodes).

If you want to dynamically set log levels, this should be simple enough:

import java.io.IOException;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// Inside your Mapper (new API org.apache.hadoop.mapreduce.Mapper):
public static Logger log = Logger.getLogger(MyMapper.class);

@Override
protected void setup(Context context) throws IOException,
        InterruptedException {
    // Only WARN and above from this class's logger will be emitted.
    log.setLevel(Level.WARN);
}
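
If you would rather not hard-code the level, a small variation is to read it from the job configuration so it can be changed per run. This is only a sketch: the property name my.log.level below is made up for illustration, not a standard Hadoop setting; pass it with -Dmy.log.level=DEBUG or set it in the Oozie action configuration.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class MyMapper extends Mapper<LongWritable, Text, Text, Text> {

    private static final Logger log = Logger.getLogger(MyMapper.class);

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // "my.log.level" is a made-up job property; default to WARN when it is not set.
        String level = context.getConfiguration().get("my.log.level", "WARN");
        log.setLevel(Level.toLevel(level, Level.WARN));
    }

    // map() omitted; the logger behaves as usual, e.g. log.debug("record: " + value);
}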

If you want to add your own appenders, you should be able to do that programmatically as well, as sketched below.
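
For example, here is a minimal sketch (using the log4j 1.x API that ships with Hadoop) that attaches a rolling file appender from code. The class name, file name and layout pattern are placeholders, and keep in mind that each task attempt writes the file on its own local node.

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class TaskLogging {

    private static final Logger log = Logger.getLogger(TaskLogging.class);

    // Attach a rolling file appender to this class's logger (call once, e.g. from setup()).
    public static void addRollingAppender() {
        RollingFileAppender appender = new RollingFileAppender();
        appender.setFile("mymapper.log");   // placeholder; written on the task's local node
        appender.setMaxFileSize("10MB");
        appender.setMaxBackupIndex(3);
        appender.setLayout(new PatternLayout("%d{ISO8601} %p %c: %m%n"));
        appender.activateOptions();         // opens the file after the properties are set

        log.addAppender(appender);
        log.setLevel(Level.DEBUG);
    }
}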

answered Nov 12, 2018 by Frankie
