Can't see the prints from the map tasks - java

I have set mapred-site.xml [1] and log4j.properties [2] to show debug logs, and I have added System.out.println calls in my map class [3], but I don't see anything getting printed. I have looked in the Hadoop daemon logs and in the task logs, but I see nothing printed from the map tasks. I don't understand why this is happening. Any help?
[1] mapred-site.xml
~/Programs/hadoop$ cat etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
(...)
<property> <name>mapreduce.map.log.level</name> <value>DEBUG</value> </property>
</configuration>
[2] log4j.properties
hadoop.root.logger=DEBUG,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# Daily Rolling File Appender
#
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
#
# TaskLog Appender
#
#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#
# HDFS block state change log from block manager
#
# Uncomment the following to suppress normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=WARN
#
#Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
#
# hadoop configuration logging
#
# Uncomment the following line to turn off configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
# Custom Logging levels
log4j.logger.org.apache.hadoop=DEBUG
# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
# AWS SDK & S3A FileSystem
log4j.logger.com.amazonaws=ERROR
log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=DEBUG,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=20
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
[3] My map class:
static abstract class SampleMapBase
        extends Mapper<Text, Text, Text, Text> {
    private long total;
    private long kept = 0;
    private float keep;

    protected void setKeep(float keep) {
        this.keep = keep;
    }

    protected void emit(Text key, Text val, OutputCollector<Text, Text> out)
            throws IOException {
        System.out.println("Key: " + key.toString() + " Value: " + val.toString());
        System.out.println("Key: " + key.getClass() + " Value: " + val.getClass());
        ++total;
        while ((float) kept / total < keep) {
            ++kept;
            out.collect(key, val);
        }
    }
}
public static class MySampleMapper extends SampleMapBase {
    String inputFile;

    public void setup(Context context) {
        this.inputFile = ((FileSplit) context.getInputSplit()).getPath().toString();
        setKeep(context.getConfiguration().getFloat("hadoop.sort.map.keep.percent",
                (float) 100.0) / (float) 100.0);
    }

    public void map(Text key, Text val,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        emit(key, val, output);
        System.out.println("EMIT");
    }

    public static void main(String[] args) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(new Configuration(), args);
        String[] otherArgs = parser.getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: webdatascan -Dfile.path=<path> [<in>...] <out>");
            System.exit(2);
        }
        System.setProperty("file.path", parser.getConfiguration().get("file.path"));
        List<String> remoteHosts = new ArrayList<String>(); // directory where the map output is remotely
        remoteHosts.add(Inet4Address.getLocalHost().getHostAddress());

        // first map tasks
        JobConf conf = MyWebDataScan.addJob(1, true, true);
        Job job = new Job(conf, conf.getJobName());
        job.setJarByClass(MyWebDataScan.class);
        job.setInputFormatClass(org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.class);
        job.setOutputFormatClass(org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.class);
        job.setMapperClass(MySampleMapper.class);
        job.setReducerClass(MySampleReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        Path[] inputPaths = new Path[otherArgs.length - 1];
        for (int i = 0; i < otherArgs.length - 1; ++i) { inputPaths[i] = new Path(otherArgs[i]); }
        Path outputPath = new Path(otherArgs[otherArgs.length - 1]);
        FileInputFormat.setInputPaths(conf, inputPaths);
        FileOutputFormat.setOutputPath(conf, outputPath);
        job.getConfiguration().set("mapred.output.dir", outputPath.toString());
        JobClient.runJob(job);
    }
}

You have mixed up the old and new APIs. Your input and output format classes come from the new API, but your map function is written against the old API. If you have imported org.apache.hadoop.mapreduce.Mapper, then the map function should have a signature like this:
private static class RecordMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {
    public void map(LongWritable lineOffset, Text record, Context output)
            throws IOException, InterruptedException {
        output.write(new Text("Count"), new LongWritable(1));
    }
}
Because you declare your map function with the old-API signature shown below, it never overrides the new API's map(Text, Text, Context) and is therefore never invoked; the default (identity) map implementation runs instead, as explained in this video:
public void map(Text key,
                Text val,
                OutputCollector<Text, Text> output,
                Reporter reporter)
        throws IOException
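For reference, here is a minimal sketch (not the poster's actual code) of how MySampleMapper could be rewritten purely against the new API so that the override is actually invoked; the property name is kept from the question and the sampling logic is omitted for brevity:

public static class MySampleMapper extends Mapper<Text, Text, Text, Text> {

    private float keep;

    @Override
    protected void setup(Context context) {
        keep = context.getConfiguration()
                .getFloat("hadoop.sort.map.keep.percent", 100.0f) / 100.0f;
    }

    @Override
    protected void map(Text key, Text val, Context context)
            throws IOException, InterruptedException {
        // With this signature the method really overrides Mapper.map, so it runs.
        System.out.println("Key: " + key + " Value: " + val);
        context.write(key, val);
    }
}

Note that even once the mapper runs, its System.out output lands in the per-task-attempt stdout file (viewable through the JobHistory/ResourceManager web UI), not in the daemon logs the question was checking.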

Related

log4j2 with pax-logging: can't use values from StructuredDataMessage

I'm using pax-logging-api along with pax-logging-log4j2 for logging from my OSGi bundles. I would like to utilize Log4J2's StructuredDataMessage (using EventLogger) to write some messages to a database. However, I'm unable to read the values I put into the StructuredDataMessage from the appenders when using Pax Logging.
The following works in a non-OSGi project using the Log4J2 libraries directly:
log4j2.properties:
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %m%n
appender.event.type = Console
appender.event.name = event
appender.event.layout.type = PatternLayout
appender.event.layout.pattern = %marker ${sd:id} ${sd:testKey} %n %m%n
rootLogger.level = debug
rootLogger.appenderRef.console.ref = STDOUT
logger.event.name = EventLogger
logger.event.level = debug
logger.event.appenderRef.console.ref = event
logger.event.additivity = false
Test.java:
public class Test {
    private static final Logger LOGGER = LogManager.getLogger(Test.class);

    public static void main(String[] args) {
        StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
        msg.put("testKey", "testValue");
        LOGGER.info(msg);
        EventLogger.logEvent(msg);
    }
}
Output:
1 testValue event [1 testKey="testValue"] message
EVENT 1 testValue
event [1 testKey="testValue"] message
Note that the event appender properly dereferenced the sd keys from the StructuredDataMessage.
However, the following does not work in OSGi with pax-logging:
org.ops4j.pax.logging.cfg:
log4j2.appender.console.type = Console
log4j2.appender.console.name = STDOUT
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = %m%n
log4j2.appender.event.type = Console
log4j2.appender.event.name = event
log4j2.appender.event.layout.type = PatternLayout
log4j2.appender.event.layout.pattern = %marker \$\\\{sd:id\} \$\\\{sd:testKey\} %n %m%n
log4j2.rootLogger.level = debug
log4j2.rootLogger.appenderRef.console.ref = STDOUT
log4j2.logger.event.name = EventLogger
log4j2.logger.event.level = debug
log4j2.logger.event.appenderRef.console.ref = event
log4j2.logger.event.additivity = false
Test.java:
public class Test implements BundleActivator {
    private static final Logger LOGGER = LogManager.getLogger(Test.class);

    @Override
    public void start(BundleContext context) throws Exception {
        StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
        msg.put("testKey", "testValue");
        LOGGER.info(msg);
        EventLogger.logEvent(msg, Level.INFO);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
    }
}
Output:
event [1 testKey="testValue"] message
EVENT ${sd:id} ${sd:testKey}
event [1 testKey="testValue"] message
Is there a trick to getting this to work in pax-logging? I am able to access values from the MDC using \$\\\{ctx:key\} when applicable, so I'm assuming the syntax is similar. I've also tried using the lookups in patterns for RoutingAppender, FileAppender, etc. to no avail.
Thanks in advance!
Edit: I'm using the latest version of pax-logging-api and pax-logging-log4j2 (1.11.3)
OK, this is not yet a definitive answer - a comment is simply too short to describe what happens.
The stack trace with your call is:
"pipe-restart 238#10666" prio=5 tid=0xc3 nid=NA runnable
java.lang.Thread.State: RUNNABLE
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog0(PaxLoggerImpl.java:354)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog(PaxLoggerImpl.java:337)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:233)
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:209)
at org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage(Log4jv2Logger.java:162)
at org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:2102)
at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2190)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2144)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2127)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1828)
at org.apache.logging.log4j.EventLogger.logEvent(EventLogger.java:56)
at grgr.test.ActivatorLogging.start(ActivatorLogging.java:39)
...
org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage() is the bridge between the logging facade and the logging backend.
Remember - with pax-logging you can, say, use the Commons Logging facade with a Log4J1 backend, or the Log4j2 facade (which is what you're doing) with e.g. a Logback backend.
That's why org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage() does this:
} else if (level.intLevel() >= Level.INFO.intLevel()) {
    m_delegate.inform(paxMarker, message.getFormattedMessage(), t, fqcn);
and your structured message is changed into the plain String event [1 testKey="testValue"] message.
Only then are the configured appenders called - and an appender whose layout extracts structured data can't find any, because the structured message has already been converted to a plain String.
These 3 lines:
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:233)
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:209)
at org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage(Log4jv2Logger.java:162)
do the crossing from pax-logging-api (the facade) through TrackingLogger (the bridge) to pax-logging-log4j2 (the backend), losing the structured information along the way.
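To make that concrete, a small illustrative snippet (plain Log4j2 API, not pax-logging code): once the message has been flattened to its formatted text, the structured fields are gone.

StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
msg.put("testKey", "testValue");

// What the bridge effectively passes on: only the formatted text.
String flattened = msg.getFormattedMessage();
// flattened is now the plain String: event [1 testKey="testValue"] message
// A ${sd:testKey} lookup needs the original StructuredDataMessage on the
// LogEvent; given only a String message it has nothing to resolve against.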
I've created https://ops4j1.jira.com/browse/PAXLOGGING-302 and hope to do something about it soon.
EDIT1
The key is that in org.apache.logging.log4j.core.lookup.StructuredDataLookup#lookup(), this condition is true:
if (event == null || !(event.getMessage() instanceof StructuredDataMessage)) {
    return null;
}
EDIT2
I've just fixed https://ops4j1.jira.com/browse/PAXLOGGING-302 and this test proves it works:
Logger logger = LogManager.getLogger("my.logger");
logger.info(new StructuredDataMessage("1", "hello!", "typeX").with("key1", "sd1"));
logger.info(new StringMapMessage().with("key1", "map1"));
List<String> lines = readLines();
assertTrue(lines.contains("my.logger/org.ops4j.pax.logging.it.Log4J2MessagesIntegrationTest typeX/sd1 [INFO] typeX [1 key1=\"sd1\"] hello!"));
assertTrue(lines.contains("my.logger/org.ops4j.pax.logging.it.Log4J2MessagesIntegrationTest ${sd:type}/map1 [INFO] key1=\"map1\""));
the configuration is:
log4j2.appender.console.type = Console
log4j2.appender.console.name = console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = %c/%C ${sd:type}/${map:key1} [%p] %m%n
log4j2.rootLogger.level = info
log4j2.rootLogger.appenderRef.file.ref = console
(you have to use the escape sequences you've used, if configuring via etc/org.ops4j.pax.logging.cfg in Karaf).

log4j2 properties configuration to put messages of different severities in different log files

I have tried all the possibilities, but my log4j2.properties configuration is not working properly.
I'm trying to create different files for info, error and debug. The files do get created, but no log is written to them. I spent a whole day on this. Please help.
name=PropertiesConfig
property.filename = c:\\logs
appenders = console, file, debug
appender.console.type = Console
appender.console.name = consoleLogger
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
appender.file.type = RollingFile
appender.file.name = RollingFile
appender.file.level = info
appender.file.fileName = ${filename}/info.log
appender.file.filePattern = ${filename}.%d{yyyy-MM-dd}
appender.file.layout.type = PatternLayout
appender.file.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
appender.file.policies.type = Policies
appender.file.policies.time.type = TimeBasedTriggeringPolicy
appender.file.policies.time.interval = 1
appender.debug.type = RollingFile
appender.debug.name = RollingFile
appender.debug.level = debug
appender.debug.fileName = ${filename}/debug.log
appender.debug.filePattern = ${filename}.%d{yyyy-MM-dd}
appender.debug.layout.type = PatternLayout
appender.debug.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n
appender.debug.policies.type = Policies
appender.debug.policies.time.type = TimeBasedTriggeringPolicy
appender.debug.policies.time.interval = 1
loggers = file,debug
logger.file.name = infoLogger
logger.file.level = info
logger.file.appenderRefs = RollingFile
logger.file.additivity = true
logger.file.appenderRef.rolling.ref = RollingFile
logger.debug.name = debugLogger
logger.debug.level = debug
logger.debug.appenderRefs = RollingFile
logger.debug.additivity = false
logger.debug.appenderRef.rolling.ref = RollingFile
rootLogger.level = INFO
rootLogger.appenderRefs = console, file, debug
rootLogger.appenderRef.stdout.ref = INFO
Finally I got it working. The example below creates an individual file each for error and debug, and all the logs are written to the log file under the d:\logs directory.
pom.xml
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.9.1</version>
</dependency>
<dependency>
<groupId>com.lmax</groupId>
<artifactId>disruptor</artifactId>
<version>3.3.6</version>
</dependency>
and the property file configuration is...
log4j2.properties
name=PropertiesConfig
#output folder
property.filename = d:\\logs
appenders = console, debugLoggerAppender, commonLoggerAppender, errorLoggerAppender
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
#common appender
appender.commonLoggerAppender.type = File
appender.commonLoggerAppender.name = RollingFile
appender.commonLoggerAppender.fileName=${filename}/log.log
appender.commonLoggerAppender.layout.type=PatternLayout
appender.commonLoggerAppender.layout.pattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
#error appender
appender.errorLoggerAppender.type = RandomAccessFile
appender.errorLoggerAppender.name = RandomAccessFile
appender.errorLoggerAppender.fileName=${filename}/error.log
appender.errorLoggerAppender.layout.type=PatternLayout
appender.errorLoggerAppender.layout.pattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
#using LevelRangeFilter to only log error levels.
appender.errorLoggerAppender.filter.threshold.type = LevelRangeFilter
appender.errorLoggerAppender.filter.threshold.minLevel = error
appender.errorLoggerAppender.filter.threshold.maxLevel = error
#debug appender
appender.debugLoggerAppender.type = File
appender.debugLoggerAppender.name = LOGFILE
appender.debugLoggerAppender.fileName=${filename}/debug.log
appender.debugLoggerAppender.layout.type=PatternLayout
appender.debugLoggerAppender.layout.pattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
#using LevelRangeFilter to only log debug levels.
appender.debugLoggerAppender.filter.threshold.type = LevelRangeFilter
appender.debugLoggerAppender.filter.threshold.minLevel = debug
appender.debugLoggerAppender.filter.threshold.maxLevel = debug
# creating only one logger, we can use this with multiple appenders.
loggers=fileLogger
# this is a package name; this package and all of its child packages will use this logger
logger.fileLogger.name=com.fis.bsa
# logger base level
logger.fileLogger.level = debug
logger.fileLogger.appenderRefs = debugLoggerAppender,commonLoggerAppender, errorLoggerAppender
logger.fileLogger.appenderRef.debugLoggerAppender.ref = LOGFILE
logger.fileLogger.appenderRef.commonLoggerAppender.ref = RollingFile
logger.fileLogger.appenderRef.errorLoggerAppender.ref = RandomAccessFile
rootLogger.level = info
#rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
The property below should be set to use AsyncLogger (note that the key is Log4jContextSelector, without the -D prefix, when set via System.setProperty):
System.setProperty("Log4jContextSelector", "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
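Equivalently (and this is where the -D prefix belongs), the selector can be passed as a JVM argument:

java -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector ...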
If I understand you correctly, you want to create a log file for only one log level. This can be a little tricky to achieve (but can be done, as explained below), because log levels are defined as minimum levels.
For example, when you set a logger's level to debug, it logs debug and all levels above it (info, warn, error, fatal). So we have to somehow disable logging for the other levels. To log only the debug level I used LevelRangeFilter, which defines a level range for the appender. The configuration can be done as below. (I only added the info and debug levels, but you can add other levels if you want to.)
https://logging.apache.org/log4j/2.x/log4j-core/apidocs/org/apache/logging/log4j/core/filter/LevelRangeFilter.html
name=PropertiesConfig
#output folder
property.filename = c:\\Testlogs
appenders = console, debugLoggerAppender, infoLoggerAppender
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
#debug appender
appender.debugLoggerAppender.type = File
appender.debugLoggerAppender.name = DEBUGLOGFILE
appender.debugLoggerAppender.fileName=${filename}/debug.log
appender.debugLoggerAppender.layout.type=PatternLayout
appender.debugLoggerAppender.layout.pattern=[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
#using LevelRangeFilter to only log debug levels.
appender.debugLoggerAppender.filter.threshold.type = LevelRangeFilter
appender.debugLoggerAppender.filter.threshold.minLevel = debug
appender.debugLoggerAppender.filter.threshold.maxLevel = debug
#info appender
appender.infoLoggerAppender.type = File
appender.infoLoggerAppender.name = INFOLOGFILE
appender.infoLoggerAppender.fileName=${filename}/info.log
appender.infoLoggerAppender.layout.type=PatternLayout
appender.infoLoggerAppender.layout.pattern=[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
#using LevelRangeFilter to only log info levels.
appender.infoLoggerAppender.filter.threshold.type = LevelRangeFilter
appender.infoLoggerAppender.filter.threshold.minLevel = info
appender.infoLoggerAppender.filter.threshold.maxLevel = info
# creating only one logger, we can use this with multiple appenders.
loggers=fileLogger
# this is a package name; this package and all of its child packages will use this logger
logger.fileLogger.name=log.test
# logger base level
logger.fileLogger.level = debug
logger.fileLogger.appenderRefs = debugLoggerAppender,infoLoggerAppender
logger.fileLogger.appenderRef.debugLoggerAppender.ref = DEBUGLOGFILE
logger.fileLogger.appenderRef.infoLoggerAppender.ref = INFOLOGFILE
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
With this configuration
logger.debug("This is debug"); // this will be logged to console and debug file
logger.info("This is info"); // this will be logged to console and info file
logger.error("This is error"); // this will be only logged to console
Also check this thread; it shows how this can be done with XML configuration: how to log only one level with log4j2?

Set messages pattern in Log4J

I have this Log4J configuration, which writes messages to a log file:
log = Logger.getLogger(LogMessages.class.getName());
BasicConfigurator.configure(); // Basic configuration for Log4J 1.x
ConsoleAppender console = new ConsoleAppender(); //create appender
//configure the appender
String PATTERN = "%d{DATE} [%p|%c|%C{1}] %m%n";
console.setLayout(new PatternLayout(PATTERN));
console.setThreshold(Level.FATAL);
console.activateOptions();
//add appender to any Logger (here is root)
Logger.getRootLogger().addAppender(console);
DailyRollingFileAppender fa = new DailyRollingFileAppender();
fa.setName("FileLogger");
fa.setFile("log" + File.separator + "messages.log");
fa.setDatePattern("'.'yyyy-MM-dd");
fa.setLayout(new PatternLayout("%d %-5p [%c{1}] %m%n"));
fa.setThreshold(Level.DEBUG);
fa.setAppend(true);
fa.activateOptions();
//add appender to any Logger (here is root)
Logger.getRootLogger().addAppender(fa); //repeat with all other desired appenders
This is the generated output:
2013-12-26 10:19:27,501 WARN [LogMessages] test_message
I would like to generate message like this:
2013-12-26 10:19:27 WARN test_message
How can I remove [LogMessages] from the messages?
For the file logging, change the setLayout invocation to drop the [%c{1}] conversion (which prints the last component of the logger name) and format the date as requested:
fa.setLayout(new PatternLayout("%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n"));
For the console logging, use the same pattern:
String PATTERN = "%d{yyyy-MM-dd HH:mm:ss} %-5p %m%n";
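With that pattern, the sample line from the question renders as:

2013-12-26 10:19:27 WARN  test_message

which matches the desired output.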

Log4j file size

I am using log4j for logging in my Java project. I create three types of log files, called "info", "debug" and "error", and I have set a maximum size for each file, but they are not being limited to the size I set.
The code for the debug file is:
public class DebugAppender extends RollingFileAppender {

    /** Creates a new instance of DebugAppender */
    public DebugAppender(String debuglogname, String path, String logfilesize) {
        String Filename = null;
        if (path == null || path.equalsIgnoreCase("")) {
            Filename = "Logs" + File.separator + debuglogname + ".log";
        } else {
            Filename = path + File.separator + "Logs" + File.separator + debuglogname + ".log";
        }
        super.setFile(Filename);
        setMaxFileSize(logfilesize);
        setMaxBackupIndex(2);
        setAppend(true);
        setLayout(new PatternLayout("%d{DATE} %-5p %c{1} : %m%n"));
        activateOptions();
    }

    public String getFName() {
        return super.getFile();
    }
}
The debug logger is invoked like this:
Logger debugLogger = Logger.getLogger(LogHandler.class.getName() + "Debug");
DebugAppender objDebugAppender = new DebugAppender(debugFileName, sTempPath, "1MB");
debugLogger.addAppender(objDebugAppender);
We set the size to "1MB", but none of the three files is limited to 1 MB; they grow to GBs. How can I limit their size? If any setting needs to be changed, please let me know.
Thanks
You have to set the file size of the log file through the log4j properties file:
log4j.appender.file.MaxFileSize=1MB
log4j.appender.file.MaxBackupIndex=1
In XML format it would be:
<param name="MaxFileSize" value="1MB" />
<param name="MaxBackupIndex" value="1" />
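For context, a minimal sketch of a complete properties file wiring these two settings to a RollingFileAppender (the appender name "file" and the paths here are illustrative, not taken from the question):

log4j.rootLogger=DEBUG, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=Logs/debug.log
log4j.appender.file.MaxFileSize=1MB
log4j.appender.file.MaxBackupIndex=1
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %-5p %c{1} : %m%n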

Log4j : Creating/Modifying appenders at runtime, log file recreated and not appended

I want to create and enable an appender for a particular method call, MyMethod(), whose log output is supposed to go to a file at "logFilePath".
I do not want to include this appender in the XML configuration file, so I create it at runtime.
First, I tried modifying the logger properties at runtime and then calling activateOptions, e.g. setting the level to DEBUG beforehand and to OFF in the finally block, so that output is logged only while the method is in use. That didn't work.
My problem is that the appender recreates the file every time instead of appending to the same file, even though setAppend is true.
I am not very familiar with log4j, so please feel free to suggest an alternative approach.
The following is sample code to explain what I am trying to do.
private static FileAppender createNewAppender(String logFilePath) {
    FileAppender appender = new FileAppender();
    appender.setName("MyFileAppender");
    appender.setLayout(new PatternLayout("%d %-5p [%c{1}] %m%n"));
    appender.setFile(logFilePath);
    appender.setAppend(true);
    appender.setThreshold(Level.INFO);
    appender.activateOptions();
    Logger.getRootLogger().addAppender(appender);
    return appender;
}

private static void removeAppender() {
    Logger.getRootLogger().removeAppender(fileAppender); // ("MyFileAppender");
}
I call the above methods in the following way:
private static FileAppender fileAppender = null;

private static void myMethod(String logFilePath) {
    try {
        fileAppender = createNewAppender(logFilePath);
        someOperation();
    } finally {
        removeAppender();
        fileAppender = null;
    }
}
Very easy: just create a method and add this:
String targetLog = "wherever you want your log";
FileAppender apndr = new FileAppender(new PatternLayout("%d %-5p [%c{1}] %m%n"), targetLog, true);
logger.addAppender(apndr);
logger.setLevel(Level.ALL);
Then in any method where you need to log, just do this:
logger.error("your error here");
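Wrapped up as a compilable method, the same idea might look like this (the logger name and path are illustrative; the FileAppender constructor throws IOException, so it is declared here):

import java.io.IOException;
import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public final class MethodLogging {
    // Creates a logger that appends to targetLog (the third constructor
    // argument 'true' opens the file in append mode).
    static Logger createMethodLogger(String targetLog) throws IOException {
        Logger logger = Logger.getLogger("MyMethodLogger");
        FileAppender apndr = new FileAppender(
                new PatternLayout("%d %-5p [%c{1}] %m%n"), targetLog, true);
        logger.addAppender(apndr);
        logger.setLevel(Level.ALL);
        return logger;
    }
}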
I do the following from Scala (it is basically the same): set my root logging level to TRACE, but set the threshold on my global appenders to INFO.
# Root logger option
log4j.rootLogger=TRACE, file, stdout
# log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=./log.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=1
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss} %m%n
log4j.appender.file.Threshold=INFO
# log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %m%n
log4j.appender.stdout.Threshold=INFO
Then in the class I want to log:
private def set_debug_level(debug: String) {
  import org.apache.log4j._

  def create_appender(level: Level) {
    val console_appender = new ConsoleAppender()
    val pattern = "%d %p [%c,%C{1}] %m%n"
    console_appender.setLayout(new PatternLayout(pattern))
    console_appender.setThreshold(level)
    console_appender.activateOptions()
    Logger.getRootLogger().addAppender(console_appender)
  }

  debug match {
    case "TRACE" => create_appender(Level.TRACE)
    case "DEBUG" => create_appender(Level.DEBUG)
    case _ => // just ignore other levels
  }
}
So basically, since I set the threshold of my new appender to TRACE or DEBUG, it will actually append. If I change the root logger to another level, it will not log anything below that level.
