Log4j file size - java

I am using log4j for logging in my Java project. I create three kinds of log files, called "info", "debug" and "error", and I have set a maximum size for each file, but the files are not being limited to that size.
The code for the debug appender is:
import java.io.File;

import org.apache.log4j.PatternLayout;
import org.apache.log4j.RollingFileAppender;

public class DebugAppender extends RollingFileAppender {

    /** Creates a new instance of DebugAppender */
    public DebugAppender(String debuglogname, String path, String logfilesize) {
        String filename;
        if (path == null || path.equalsIgnoreCase("")) {
            filename = "Logs" + File.separator + debuglogname + ".log";
        } else {
            filename = path + File.separator + "Logs" + File.separator + debuglogname + ".log";
        }
        super.setFile(filename);
        setMaxFileSize(logfilesize);
        setMaxBackupIndex(2);
        setAppend(true);
        setLayout(new PatternLayout("%d{DATE} %-5p %c{1} : %m%n"));
        activateOptions();
    }

    public String getFName() {
        return super.getFile();
    }
}
The debug logger is created like this:
Logger debugLogger = Logger.getLogger(LogHandler.class.getName() + "Debug");
DebugAppender objDebugAppender = new DebugAppender(debugFileName, sTempPath, "1MB");
debugLogger.addAppender(objDebugAppender);
We set the size to "1MB", but none of the three files is limited to 1 MB; they grow to several GB. How can I limit their size? If any additional setting is needed, please let me know.
Thanks

You have to set the maximum file size and backup count of the appender through the log4j properties file:
log4j.appender.file.MaxFileSize=1MB
log4j.appender.file.MaxBackupIndex=1

In the XML configuration it would be:
<param name="MaxFileSize" value="1MB" />
<param name="MaxBackupIndex" value="1" />


Can't see the prints from the map tasks

I have set up mapred-site.xml [1] and log4j.properties [2] to show debug-level logs, and I have added System.out.println calls to my map class [3], but I don't see anything getting printed. I have looked at the Hadoop logs and at the task logs, but nothing from the map tasks is printed there. I don't understand why. Any help?
[1] mapred-site.xml
~/Programs/hadoop$ cat etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
(...)
<property> <name>mapreduce.map.log.level</name> <value>DEBUG</value> </property>
</configuration>
[2] log4j.properties
hadoop.root.logger=DEBUG,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hadoop.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=ALL
# Null Appender
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
#
# Rolling File Appender - cap space usage at 5gb.
#
hadoop.log.maxfilesize=256MB
hadoop.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=${hadoop.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hadoop.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# Daily Rolling File Appender
#
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rollover at midnight
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex=30
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
#
# console
# Add "console" to rootlogger above if you want to use this
#
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
#
# TaskLog Appender
#
#Default values
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
#
# HDFS block state change log from block manager
#
# Uncomment the following to suppress normal block state change
# messages from BlockManager in NameNode.
#log4j.logger.BlockStateChange=WARN
#
#Security appender
#
hadoop.security.logger=INFO,NullAppender
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth-${user.name}.audit
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
#
# Daily Rolling Security appender
#
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
#
# hadoop configuration logging
#
# Uncomment the following line to turn off configuration deprecation warnings.
# log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
#
# hdfs audit logging
#
hdfs.audit.logger=INFO,NullAppender
hdfs.audit.log.maxfilesize=256MB
hdfs.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
#
# mapred audit logging
#
mapred.audit.logger=INFO,NullAppender
mapred.audit.log.maxfilesize=256MB
mapred.audit.log.maxbackupindex=20
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.MaxFileSize=${mapred.audit.log.maxfilesize}
log4j.appender.MRAUDIT.MaxBackupIndex=${mapred.audit.log.maxbackupindex}
# Custom Logging levels
log4j.logger.org.apache.hadoop=DEBUG
# Jets3t library
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
# AWS SDK & S3A FileSystem
log4j.logger.com.amazonaws=ERROR
log4j.logger.com.amazonaws.http.AmazonHttpClient=ERROR
log4j.logger.org.apache.hadoop.fs.s3a.S3AFileSystem=WARN
#
# Event Counter Appender
# Sends counts of logging messages at different severity levels to Hadoop Metrics.
#
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
#
# Job Summary Appender
#
# Use following logger to send summary to separate file defined by
# hadoop.mapreduce.jobsummary.log.file :
# hadoop.mapreduce.jobsummary.logger=DEBUG,JSA
#
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
hadoop.mapreduce.jobsummary.log.maxfilesize=256MB
hadoop.mapreduce.jobsummary.log.maxbackupindex=20
log4j.appender.JSA=org.apache.log4j.RollingFileAppender
log4j.appender.JSA.File=${hadoop.log.dir}/${hadoop.mapreduce.jobsummary.log.file}
log4j.appender.JSA.MaxFileSize=${hadoop.mapreduce.jobsummary.log.maxfilesize}
log4j.appender.JSA.MaxBackupIndex=${hadoop.mapreduce.jobsummary.log.maxbackupindex}
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.logger.org.apache.hadoop.mapred.JobInProgress$JobSummary=${hadoop.mapreduce.jobsummary.logger}
log4j.additivity.org.apache.hadoop.mapred.JobInProgress$JobSummary=false
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${hadoop.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
[3] My map class:
static abstract class SampleMapBase
        extends Mapper<Text, Text, Text, Text> {
    private long total;
    private long kept = 0;
    private float keep;

    protected void setKeep(float keep) {
        this.keep = keep;
    }

    protected void emit(Text key, Text val, OutputCollector<Text, Text> out)
            throws IOException {
        System.out.println("Key: " + key.toString() + " Value: " + val.toString());
        System.out.println("Key: " + key.getClass() + " Value: " + val.getClass());
        ++total;
        while ((float) kept / total < keep) {
            ++kept;
            out.collect(key, val);
        }
    }
}

public static class MySampleMapper extends SampleMapBase {
    String inputFile;

    public void setup(Context context) {
        this.inputFile = ((FileSplit) context.getInputSplit()).getPath().toString();
        setKeep(context.getConfiguration().getFloat("hadoop.sort.map.keep.percent", (float) 100.0) / (float) 100.0);
    }

    public void map(Text key, Text val,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        emit(key, val, output);
        System.out.println("EMIT");
    }

    public static void main(String[] args) throws Exception {
        GenericOptionsParser parser = new GenericOptionsParser(new Configuration(), args);
        String[] otherArgs = parser.getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: webdatascan -Dfile.path=<path> [<in>...] <out>");
            System.exit(2);
        }
        System.setProperty("file.path", parser.getConfiguration().get("file.path"));
        List<String> remoteHosts = new ArrayList<String>(); // directory where the map output is remotely
        remoteHosts.add(Inet4Address.getLocalHost().getHostAddress());

        // first map tasks
        JobConf conf = MyWebDataScan.addJob(1, true, true);
        Job job = new Job(conf, conf.getJobName());
        job.setJarByClass(MyWebDataScan.class);
        job.setInputFormatClass(org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat.class);
        job.setOutputFormatClass(org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.class);
        job.setMapperClass(MySampleMapper.class);
        job.setReducerClass(MySampleReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        Path[] inputPaths = new Path[otherArgs.length - 1];
        for (int i = 0; i < otherArgs.length - 1; ++i) { inputPaths[i] = new Path(otherArgs[i]); }
        Path outputPath = new Path(otherArgs[otherArgs.length - 1]);
        FileInputFormat.setInputPaths(conf, inputPaths);
        FileOutputFormat.setOutputPath(conf, outputPath);
        job.getConfiguration().set("mapred.output.dir", outputPath.toString());
        JobClient.runJob(job);
    }
}
You have mixed up the old API and the new API. Your input and output format classes come from the new API, but your map function is written against the old API. If you have imported org.apache.hadoop.mapreduce.Mapper, then the map function should have a signature like this:
private static class RecordMapper
        extends Mapper<LongWritable, Text, Text, LongWritable> {
    public void map(LongWritable lineOffset, Text record, Context output)
            throws IOException, InterruptedException {
        output.write(new Text("Count"), new LongWritable(1));
    }
}
Because you declare the map function as shown below, it will never be invoked; the framework falls back to the default (identity) map function instead, as explained in this video:
public void map(Text key,
                Text val,
                OutputCollector<Text, Text> output,
                Reporter reporter)
        throws IOException
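For illustration, here is a minimal sketch (an assumption, not the poster's final code) of how the question's mapper could be rewritten against the new API so that the prints and emits actually run; the class name and the hadoop.sort.map.keep.percent property come from the question, the rest is illustrative:
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MySampleMapper extends Mapper<Text, Text, Text, Text> {
    private long total;
    private long kept = 0;
    private float keep;

    @Override
    protected void setup(Context context) {
        // same sampling ratio as in the question
        keep = context.getConfiguration()
                .getFloat("hadoop.sort.map.keep.percent", 100.0f) / 100.0f;
    }

    @Override
    protected void map(Text key, Text val, Context context)
            throws IOException, InterruptedException {
        // These prints go to the task's stdout log and show up in the per-task logs.
        System.out.println("Key: " + key + " Value: " + val);
        ++total;
        while ((float) kept / total < keep) {
            ++kept;
            context.write(key, val);
        }
    }
}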

GROOVY + Slf4j + rotation and backup log files

The following Groovy script generates the C:\tmp\groovy.log log file via Slf4j. The script works fine and the log grows each time we call log.info, log.debug, etc.
The problem is that the log file keeps growing and gets larger every day. I wonder how to add a rotation mechanism to my script.
For example, when the log file reaches 100K, I would like it to be backed up as a zip file (shrinking it to, say, 10K). Is that possible?
If yes, please advise how to change the Groovy script to add this backup capability.
@Grab('org.slf4j:slf4j-api:1.6.1')
@Grab('ch.qos.logback:logback-classic:0.9.28')

import org.slf4j.*
import groovy.util.logging.Slf4j
import ch.qos.logback.core.*
import ch.qos.logback.classic.encoder.*

// Use annotation to inject log field into the class.
@Slf4j
class Family {

    static {
        new FileAppender().with {
            name = 'file appender'
            file = 'C:\\tmp\\groovy.log'
            context = LoggerFactory.getILoggerFactory()
            encoder = new PatternLayoutEncoder().with {
                context = LoggerFactory.getILoggerFactory()
                pattern = "%date{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n"
                start()
                it
            }
            start()
            log.addAppender(it)
        }
    }

    def father() {
        log.debug 'car engine is hot'
        log.error 'my car is stuck'
    }

    def mother() {
        log.debug 'dont have a water in the kitchen'
        log.error 'Cant make a cake'
    }
}

def helloWorld = new Family()
helloWorld.father()
helloWorld.mother()
You can use RollingFileAppender instead of FileAppender (plus an import of ch.qos.logback.core.rolling.*) and set the RollingPolicy you need, e.g.:
...
@Slf4j
class Family {

    static {
        def appender = new RollingFileAppender()
        appender.with {
            name = 'file appender'
            file = 'C:\\tmp\\groovy.log'
            context = LoggerFactory.getILoggerFactory()
            // the policy that rolls the files
            rollingPolicy = new TimeBasedRollingPolicy().with {
                context = LoggerFactory.getILoggerFactory()
                parent = appender            // the rolling policy needs a reference back to its appender
                // file name pattern for the rolled files
                fileNamePattern = 'C:\\tmp\\groovy.%d{yyyy-MM-dd}.%i.log'
                // the maximum number of rolled files to keep
                maxHistory = 10
                timeBasedFileNamingAndTriggeringPolicy = new SizeAndTimeBasedFNATP().with {
                    context = LoggerFactory.getILoggerFactory()
                    // the max size of each rolled file
                    maxFileSize = '3MB'
                    it
                }
                start()
                it
            }
            encoder = new PatternLayoutEncoder().with {
                context = LoggerFactory.getILoggerFactory()
                pattern = "%date{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n"
                start()
                it
            }
            start()
            log.addAppender(it)
        }
    }
...
If the fileNamePattern ends in .zip or .gz, logback will also compress each rolled file automatically, which gives you the zip backup you asked about. Hope this helps,

java logging writing multiple times for each thread in polling system

I have a threaded polling system in Java, and I have included separate logging for each thread. But the FileAppender instance is not closed when the thread finishes, so on the next poll, when the thread runs again, there are two FileAppender instances and everything is logged twice; on the poll after that it is logged three times, and so on.
Here is my code:
public void run()
{
    String parent = "Autobrs", name = "Pool";
    name = name + poolId;
    String loggerName = parent + "." + name;
    Logger log = Logger.getLogger(loggerName);

    DailyRollingFileAppender fileApp = new DailyRollingFileAppender();
    fileApp.setName("Autobrs." + loggerName + "_FileAppender");
    fileApp.setFile("c:\\java\\" + name + ".log");
    fileApp.setLayout(new PatternLayout("%d{yyyy-MM-dd HH:mm:ss} (%0F:%L) %3x - %m%n"));
    fileApp.setThreshold(Level.toLevel("debug"));
    fileApp.setAppend(true);
    fileApp.activateOptions();
    log.addAppender(fileApp);
}
I don't know how to handle this. Please help.
Since FileAppender inherits close() from WriterAppender, as stated here, use
fileApp.close()
String parent = "Autobrs", name = "Pool";
name = name + poolId;
String loggerName = parent + "." + name;
Logger log = Logger.getLogger(loggerName);
You might also have a memory leak here: Logger.getLogger() keeps a reference to every logger it creates.
Best practice would be to have one file appender, configured by the main thread, and let each run() include the poolId (or similar) in its messages, i.e. have multiple threads write to a common logger and appender.
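If you do want to keep a separate appender per pool, a minimal sketch of a guard (reusing loggerName, name and the appender name from the code above) is to check whether the logger already has that appender before creating another one:
Logger log = Logger.getLogger(loggerName);
String appenderName = "Autobrs." + loggerName + "_FileAppender";

// Attach a new appender only if this logger does not already have one with that name,
// so repeated run() calls do not end up logging every message twice, three times, and so on.
if (log.getAppender(appenderName) == null) {
    DailyRollingFileAppender fileApp = new DailyRollingFileAppender();
    fileApp.setName(appenderName);
    fileApp.setFile("c:\\java\\" + name + ".log");
    fileApp.setLayout(new PatternLayout("%d{yyyy-MM-dd HH:mm:ss} %m%n"));
    fileApp.setAppend(true);
    fileApp.activateOptions();
    log.addAppender(fileApp);
}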

Log4j : Creating/Modifying appenders at runtime, log file recreated and not appended

I want to create and enable an appender for a particular method, MyMethod(), whose log output is supposed to go to a file at "logFilePath".
I do not want to include this appender in the XML configuration file, so I thought I would create it at run time.
First, I tried modifying the logger properties at runtime and then calling activateOptions, e.g. setting the level to DEBUG before the call and to OFF in the finally block, so that output is logged only while the method is in use. That didn't work.
My problem is that the appender recreates the file every time instead of appending to the existing one, even though setAppend is true.
I am not very familiar with log4j, so please feel free to suggest an alternative approach.
The following sample code shows what I am trying to do:
private static FileAppender createNewAppender(String logFilePath) {
    FileAppender appender = new FileAppender();
    appender.setName("MyFileAppender");
    appender.setLayout(new PatternLayout("%d %-5p [%c{1}] %m%n"));
    appender.setFile(logFilePath);
    appender.setAppend(true);
    appender.setThreshold(Level.INFO);
    appender.activateOptions();
    Logger.getRootLogger().addAppender(appender);
    return appender;
}

private static void removeAppender() {
    Logger.getRootLogger().removeAppender(fileAppender); // ("MyFileAppender");
}
I call the above methods in the following way:
private static FileAppender fileAppender = null;

private static void myMethod(String logFilePath) {
    try {
        fileAppender = createNewAppender(logFilePath);
        someOperation();
    } finally {
        removeAppender();
        fileAppender = null;
    }
}
Very easy: just create a method and add this:
String targetLog = "where ever you want your log";
// FileAppender(Layout, String fileName, boolean append) throws IOException; append = true keeps the existing file
FileAppender apndr = new FileAppender(new PatternLayout("%d %-5p [%c{1}] %m%n"), targetLog, true);
logger.addAppender(apndr);
logger.setLevel(Level.ALL);
Then in any method where you need to log, just do this:
logger.error("your error here");
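Tying this back to the question: here is a minimal sketch, assuming log4j 1.x and illustrative names (this is not the asker's actual code), of attaching an appending FileAppender only around one piece of work and detaching it afterwards:
import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class ScopedFileLogging {

    // Runs 'work' while a file appender is attached to the root logger, then detaches it.
    public static void runWithFileLog(String logFilePath, Runnable work) throws IOException {
        Logger root = Logger.getRootLogger();
        // append = true, so repeated calls keep writing to the same file instead of recreating it
        FileAppender appender = new FileAppender(
                new PatternLayout("%d %-5p [%c{1}] %m%n"), logFilePath, true);
        appender.setName("MyFileAppender");
        appender.setThreshold(Level.INFO);
        root.addAppender(appender);
        try {
            work.run();
        } finally {
            root.removeAppender(appender);
            appender.close(); // flush and release the file handle
        }
    }
}
A usage example would be runWithFileLog("/tmp/mymethod.log", () -> someOperation()); only the log records produced during someOperation() end up in that file.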
I do the following from Scala (basically the same):
Set my root logging level to TRACE, but set the threshold on my global appenders to INFO.
# Root logger option
log4j.rootLogger=TRACE, file, stdout
# log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=./log.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=1
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{HH:mm:ss} %m%n
log4j.appender.file.Threshold=INFO
# log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %m%n
log4j.appender.stdout.Threshold=INFO
Then in the class I want to log:
private def set_debug_level(debug: String) {
  import org.apache.log4j._

  def create_appender(level: Level) {
    val console_appender = new ConsoleAppender()
    val pattern = "%d %p [%c,%C{1}] %m%n"
    console_appender.setLayout(new PatternLayout(pattern))
    console_appender.setThreshold(level)
    console_appender.activateOptions()
    Logger.getRootLogger().addAppender(console_appender)
  }

  debug match {
    case "TRACE" => create_appender(Level.TRACE)
    case "DEBUG" => create_appender(Level.DEBUG)
    case _ => // just ignore other levels
  }
}
So basically, since I set the threshold of the new appender to TRACE or DEBUG, it will actually append those messages. If I changed the root logger to another level, it would not log anything below that level.

log4j log problem

I am writing a servlet. I have several classes, and I want the logs of some of them to be separated from each other.
Here is the log4j configuration file:
log4j.rootLogger=INFO, CONSOLE, SearchPdfBill, Scheduler
# CONSOLE is set to be a ConsoleAppender.
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
# LOGFILE is set to be a file
log4j.appender.SearchPdfBill=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SearchPdfBill.File = /bps/app/BpsPdfBill/BpsPdfBill.ear/BpsPdfBill.war/WEB-INF/logs/BpsPdfBill.log
#log4j.appender.SearchPdfBill.File = E:\\Workspace\\Eclipse_Workspace\\BpsPdfBill\\log\\BpsPdfBill.log
log4j.appender.SearchPdfBill.Append = true
log4j.appender.SearchPdfBill.DatePattern = '.'yyyy-MM-dd
log4j.appender.SearchPdfBill.layout=org.apache.log4j.PatternLayout
log4j.appender.SearchPdfBill.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
# LOGFILE is set to be a file
log4j.appender.Scheduler=org.apache.log4j.DailyRollingFileAppender
log4j.appender.Scheduler.File = /bps/app/BpsPdfBill/BpsPdfBill.ear/BpsPdfBill.war/WEB-INF/logs/Schedule.log
#log4j.appender.Scheduler.File = E:\\Workspace\\Eclipse_Workspace\\BpsPdfBill\\log\\BpsPdfBill.log
log4j.appender.Scheduler.Append = true
log4j.appender.Scheduler.DatePattern = '.'yyyy-MM-dd
log4j.appender.Scheduler.layout=org.apache.log4j.PatternLayout
log4j.appender.Scheduler.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
I set up a logger here:
String logDir = conf.getInitParameter("log_file_path");
if (logDir == null) {
    initErrMsg = "Param - log_file_path cannot be empty";
    throw new ServletException(initErrMsg);
}

if ((logger = Logger.getLogger(SearchPdfBill.class)) != null) {
    //writeLog("Initializing log4j.");
    conf.getServletContext().log("Log4j initialized.");
} else {
    conf.getServletContext().log("Cannot initialize log4j properly.");
}
And another logger here:
logDir = sc.getInitParameter("log_file_path");
if (logDir == null) {
    initErrMsg = "Param - log_file_path cannot be empty";
    try {
        throw new ServletException(initErrMsg);
    } catch (ServletException e) {
        conditionalWriteLog(logEnabled, e.getMessage());
    }
}

if ((logger = Logger.getLogger(Scheduler.class)) != null) {
    //writeLog("Initializing log4j.");
    conditionalWriteLog(logEnabled, "Log4j initialized.");
} else {
    conditionalWriteLog(logEnabled, "Cannot initialize log4j properly.");
}
However, the two loggers end up logging the same thing: every message is written to both files identically. Why?
I think the configuration file is probably wrong, but I don't know where. Can somebody help me correct it?
You need to define those two loggers in your configuration file. Are those classes inside a package? That matters for how you configure them. Say the package is com.gunbuster:
log4j.category.com.gunbuster.SearchPdfBill=INFO, SearchPdfBill
log4j.additivity.com.gunbuster.SearchPdfBill=false
log4j.category.com.gunbuster.Scheduler=INFO, Scheduler
log4j.additivity.com.gunbuster.Scheduler=false
The additivity setting prevents those loggers from also sending their output to the rootLogger's CONSOLE appender. Also, you should make the first line:
log4j.rootLogger=INFO, CONSOLE
so that the rootLogger does not add entries to those files.
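Putting it together, a sketch of the relevant part of the corrected configuration (still assuming the hypothetical com.gunbuster package) would be:
log4j.rootLogger=INFO, CONSOLE

log4j.category.com.gunbuster.SearchPdfBill=INFO, SearchPdfBill
log4j.additivity.com.gunbuster.SearchPdfBill=false

log4j.category.com.gunbuster.Scheduler=INFO, Scheduler
log4j.additivity.com.gunbuster.Scheduler=false

# The SearchPdfBill and Scheduler appender definitions stay exactly as in the question.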
