log4j to database - java

I use log4j to report errors to a database:
log4j.rootLogger=DEBUG, CUBRID
# CUBRID Database
log4j.appender.CUBRID = org.apache.log4j.jdbc.JDBCAppender
log4j.appender.CUBRID.driver = org.postgresql.Driver
log4j.appender.CUBRID.user = postgres
log4j.appender.CUBRID.password = postgres
log4j.appender.CUBRID.URL = jdbc:postgresql://localhost:5432/logs
log4j.appender.CUBRID.sql = INSERT INTO LOGS VALUES('%x','%d{yyyy-MM-dd HH:mm:ss.SSS}','%C','%p','%m')
log4j.appender.CUBRID.layout=org.apache.log4j.PatternLayout
I have this code:
public class LogTest extends BaseDAO<Object> {
    public void show()
    {
        Logger log = Logger.getLogger(LogTest.class.getName());
        log.info("Application started");
        log.warn("Application running");
        try {
            if ( 1 / 0 == 0 ) {
                System.out.println("Waiting...");
            }
        } catch( Exception ex) {
            log.error("Error message", ex);
        }
        log.fatal("Application finished");
    }
}
Everything is written to the database, but it stores every log message. I'd like it to write only the messages I log explicitly with calls like log.warn(), log.fatal(), etc.
How can I do that?

This can be done by adjusting the log level. Try changing the value like this:
log4j.rootLogger=WARN, CUBRID

You'll have to change the log level to something other than DEBUG. With a WARN threshold, only WARN, ERROR, and FATAL events reach the appender (log4j orders levels DEBUG < INFO < WARN < ERROR < FATAL).
log4j.rootLogger=WARN, CUBRID
This tutorial is very comprehensive: Short introduction to log4j.
To log only your own INFO messages while still allowing other events that are >= WARN, change to the following:
log4j.rootLogger=WARN, CUBRID
log4j.logger.LogTest = INFO, CUBRID
log4j.additivity.LogTest = false
# CUBRID Database
log4j.appender.CUBRID = org.apache.log4j.jdbc.JDBCAppender
log4j.appender.CUBRID.driver = org.postgresql.Driver
log4j.appender.CUBRID.user = postgres
log4j.appender.CUBRID.password = postgres
log4j.appender.CUBRID.URL = jdbc:postgresql://localhost:5432/logs
log4j.appender.CUBRID.sql = INSERT INTO LOGS VALUES('%x','%d{yyyy-MM-dd HH:mm:ss.SSS}','%C','%p','%m')
log4j.appender.CUBRID.layout=org.apache.log4j.PatternLayout
Here I am assuming that LogTest is not part of any package. If you have put it in a package, then change the two lines to include the package (here I use my.package):
log4j.logger.my.package.LogTest = INFO, CUBRID
log4j.additivity.my.package.LogTest = false
Now if you want to apply the INFO level on all your loggers in your package then do this:
log4j.logger.my.package = INFO, CUBRID
log4j.additivity.my.package = false
Or even on top level package:
log4j.logger.my = INFO, CUBRID
log4j.additivity.my = false
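As a quick illustration of how logger names line up with these configuration keys: log4j derives a logger's name from the fully qualified class name and resolves levels hierarchically. A minimal sketch (using com.example as a stand-in, since the literal my.package would not compile in Java - package is a reserved word):

package com.example;

import org.apache.log4j.Logger;

public class LogTest {
    // The logger name is the fully qualified class name, "com.example.LogTest",
    // so an entry like log4j.logger.com.example = INFO, CUBRID applies to it.
    private static final Logger log = Logger.getLogger(LogTest.class);

    public void show() {
        log.debug("dropped - below the INFO threshold configured for com.example");
        log.info("written - INFO is enabled for this logger");
        log.warn("also written - WARN exceeds INFO");
    }
}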

Related

log4j2 with pax-logging: can't use values from StructuredDataMessage

I'm using pax-logging-api along with pax-logging-log4j2 for logging from my OSGi bundles. I would like to utilize Log4J2's StructuredDataMessage (using EventLogger) to write some messages to a database. However, I'm unable to read the values I put into the StructuredDataMessage from the appenders when using Pax Logging.
The following works in a non-OSGi project using the Log4J2 libraries directly:
log4j2.properties:
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %m%n
appender.event.type = Console
appender.event.name = event
appender.event.layout.type = PatternLayout
appender.event.layout.pattern = %marker ${sd:id} ${sd:testKey} %n %m%n
rootLogger.level = debug
rootLogger.appenderRef.console.ref = STDOUT
logger.event.name = EventLogger
logger.event.level = debug
logger.event.appenderRef.console.ref = event
logger.event.additivity = false
Test.java:
import org.apache.logging.log4j.EventLogger;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StructuredDataMessage;

public class Test {
    private static final Logger LOGGER = LogManager.getLogger(Test.class);

    public static void main(String[] args) {
        StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
        msg.put("testKey", "testValue");
        LOGGER.info(msg);
        EventLogger.logEvent(msg);
    }
}
Output:
1 testValue event [1 testKey="testValue"] message
EVENT 1 testValue
event [1 testKey="testValue"] message
Note that the event appender properly dereferenced the sd keys from the StructuredDataMessage.
However, the following does not work in OSGi with pax-logging:
org.ops4j.pax.logging.cfg:
log4j2.appender.console.type = Console
log4j2.appender.console.name = STDOUT
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = %m%n
log4j2.appender.event.type = Console
log4j2.appender.event.name = event
log4j2.appender.event.layout.type = PatternLayout
log4j2.appender.event.layout.pattern = %marker \$\\\{sd:id\} \$\\\{sd:testKey\} %n %m%n
log4j2.rootLogger.level = debug
log4j2.rootLogger.appenderRef.console.ref = STDOUT
log4j2.logger.event.name = EventLogger
log4j2.logger.event.level = debug
log4j2.logger.event.appenderRef.console.ref = event
log4j2.logger.event.additivity = false
Test.java:
import org.apache.logging.log4j.EventLogger;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StructuredDataMessage;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Test implements BundleActivator {
    private static final Logger LOGGER = LogManager.getLogger(Test.class);

    @Override
    public void start(BundleContext context) throws Exception {
        StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
        msg.put("testKey", "testValue");
        LOGGER.info(msg);
        EventLogger.logEvent(msg, Level.INFO);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
    }
}
Output:
event [1 testKey="testValue"] message
EVENT ${sd:id} ${sd:testKey}
event [1 testKey="testValue"] message
Is there a trick to getting this to work in pax-logging? I am able to access values from the MDC using \$\\\{ctx:key\} when applicable, so I'm assuming the syntax is similar. I've also tried using the lookups in patterns for RoutingAppender, FileAppender, etc. to no avail.
Thanks in advance!
Edit: I'm using the latest version of pax-logging-api and pax-logging-log4j2 (1.11.3)
OK, it's not yet a definitive answer - a comment is simply too short to describe what happens.
The stack trace with your call is:
"pipe-restart 238#10666" prio=5 tid=0xc3 nid=NA runnable
java.lang.Thread.State: RUNNABLE
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog0(PaxLoggerImpl.java:354)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.doLog(PaxLoggerImpl.java:337)
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:233)
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:209)
at org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage(Log4jv2Logger.java:162)
at org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:2102)
at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2190)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2144)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2127)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1828)
at org.apache.logging.log4j.EventLogger.logEvent(EventLogger.java:56)
at grgr.test.ActivatorLogging.start(ActivatorLogging.java:39)
...
org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage() is the bridge between the logging facade and the logging backend.
Remember - with pax-logging you can, say, use the Commons Logging facade with a Log4J1 backend, or the Log4j2 facade (which is what you're doing) with, e.g., a Logback backend.
That's why org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage() does this:
} else if (level.intLevel() >= Level.INFO.intLevel()) {
    m_delegate.inform(paxMarker, message.getFormattedMessage(), t, fqcn);
and your structured message is changed into the String event [1 testKey="testValue"] message.
Only then are the configured appenders called - and the appender whose layout extracts structured data can't find anything, because the structured message has already been converted to a plain String.
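To see concretely what gets lost, here is a tiny standalone sketch (using the log4j-api classes directly, outside OSGi; the class name is ours) of what the bridge actually forwards:

import org.apache.logging.log4j.message.StructuredDataMessage;

public class FormattedMessageDemo {
    public static void main(String[] args) {
        StructuredDataMessage msg = new StructuredDataMessage("1", "message", "event");
        msg.put("testKey", "testValue");
        // The backend only ever receives this flattened String; the key/value
        // map that the ${sd:...} lookup needs is no longer attached.
        System.out.println(msg.getFormattedMessage());
        // prints: event [1 testKey="testValue"] message
    }
}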
These 3 lines:
at org.ops4j.pax.logging.log4j2.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:233)
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:209)
at org.ops4j.pax.logging.log4jv2.Log4jv2Logger.logMessage(Log4jv2Logger.java:162)
do the crossing from pax-logging-api (facade) through TrackingLogger (bridge) to pax-logging-log4j2 (backend), losing the structured information in between.
I've created https://ops4j1.jira.com/browse/PAXLOGGING-302 and hope to do something about it soon.
EDIT1
The key is that in org.apache.logging.log4j.core.lookup.StructuredDataLookup#lookup(), this condition is true:
if (event == null || !(event.getMessage() instanceof StructuredDataMessage)) {
    return null;
}
EDIT2
I've just fixed https://ops4j1.jira.com/browse/PAXLOGGING-302 and this test proves it works:
Logger logger = LogManager.getLogger("my.logger");
logger.info(new StructuredDataMessage("1", "hello!", "typeX").with("key1", "sd1"));
logger.info(new StringMapMessage().with("key1", "map1"));
List<String> lines = readLines();
assertTrue(lines.contains("my.logger/org.ops4j.pax.logging.it.Log4J2MessagesIntegrationTest typeX/sd1 [INFO] typeX [1 key1=\"sd1\"] hello!"));
assertTrue(lines.contains("my.logger/org.ops4j.pax.logging.it.Log4J2MessagesIntegrationTest ${sd:type}/map1 [INFO] key1=\"map1\""));
the configuration is:
log4j2.appender.console.type = Console
log4j2.appender.console.name = console
log4j2.appender.console.layout.type = PatternLayout
log4j2.appender.console.layout.pattern = %c/%C ${sd:type}/${map:key1} [%p] %m%n
log4j2.rootLogger.level = info
log4j2.rootLogger.appenderRef.file.ref = console
(you have to use the escape sequences you've used, if configuring via etc/org.ops4j.pax.logging.cfg in Karaf).
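For example, applied to the test configuration above, the pattern in etc/org.ops4j.pax.logging.cfg would presumably need the same escaping as in the question:
log4j2.appender.console.layout.pattern = %c/%C \$\\\{sd:type\}/\$\\\{map:key1\} [%p] %m%n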

How to disable Spring Boot Hibernate "Listing entities" option?

We have a Spring Boot RESTful web app. Whenever data is submitted to an endpoint, the log is completely filled with data that represents the state of the database entities involved in the submission process.
Looking in the Log I see this:
2019-11-01 10:50:44.686 DEBUG [https-jsse-nio-8443-exec-2] o.h.i.u.EntityPrinter [EntityPrinter.java:102] Listing entities:
And then every entity involved in the transaction is printed out ... even the binary data for images and such.
I know it is a DEBUG statement, but there has to be a way to be in DEBUG mode without all that useless data being printed to the log. How can I turn off the "Listing entities" feature?
Are you running your application in DEBUG log level mode?
Try setting the log level of package org.hibernate.internal.util to INFO in your application.yml file.
logging:
  level:
    org.hibernate.internal.util: INFO
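If you configure logging in application.properties rather than YAML, the equivalent single line is:
logging.level.org.hibernate.internal.util=INFO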
The log in question is generated by the EntityPrinter class (source), which belongs to the org.hibernate.internal.util package, as part of the toString() method shown below. Note that the method returns immediately unless DEBUG is enabled, which is why raising the package's level to INFO silences it.
public void toString(Iterable<Map.Entry<EntityKey, Object>> entitiesByEntityKey) throws HibernateException {
    if ( !LOG.isDebugEnabled() || !entitiesByEntityKey.iterator().hasNext() ) {
        return;
    }
    LOG.debug( "Listing entities:" );
    int i = 0;
    for ( Map.Entry<EntityKey, Object> entityKeyAndEntity : entitiesByEntityKey ) {
        if ( i++ > 20 ) {
            LOG.debug( "More......" );
            break;
        }
        LOG.debug( toString( entityKeyAndEntity.getKey().getEntityName(), entityKeyAndEntity.getValue() ) );
    }
}

GROOVY + Slf4j + rotation and backup log files

The following Groovy script generates the C:\tmp\groovy.log log file via Slf4j.
The script works fine, and the log grows each time we call log.info/log.debug, etc.
The problem is that the log file keeps growing, becoming larger every day.
I wonder how to add a rotation mechanism to my script.
For example, when the log file reaches 100K, I would like it backed up as a zip file (reducing the size to, for example, 10K).
Is this possible?
If yes, please advise how to change the Groovy script to add this backup capability.
@Grab('org.slf4j:slf4j-api:1.6.1')
@Grab('ch.qos.logback:logback-classic:0.9.28')
import org.slf4j.*
import groovy.util.logging.Slf4j
import ch.qos.logback.core.*
import ch.qos.logback.classic.encoder.*

// Use annotation to inject log field into the class.
@Slf4j
class Family {
    static {
        new FileAppender().with {
            name = 'file appender'
            file = 'C:\\tmp\\groovy.log'
            context = LoggerFactory.getILoggerFactory()
            encoder = new PatternLayoutEncoder().with {
                context = LoggerFactory.getILoggerFactory()
                pattern = "%date{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n"
                start()
                it
            }
            start()
            log.addAppender(it)
        }
    }

    def father() {
        log.debug 'car engine is hot'
        log.error 'my car is stuck'
    }

    def mother() {
        log.debug 'dont have a water in the kitchen'
        log.error 'Cant make a cake'
    }
}

def helloWorld = new Family()
helloWorld.father()
helloWorld.mother()
You can try RollingFileAppender instead of FileAppender and set the RollingPolicy you need. Note that each with block must return the object it configures, and the policy must be attached to its appender and started. For example:
...
// additionally needed import for the rolling classes:
import ch.qos.logback.core.rolling.*

@Slf4j
class Family {
    static {
        new RollingFileAppender().with { appender ->
            appender.name = 'file appender'
            appender.file = 'C:\\tmp\\groovy.log'
            appender.context = LoggerFactory.getILoggerFactory()
            // the policy to roll files
            appender.rollingPolicy = new TimeBasedRollingPolicy().with { policy ->
                policy.context = LoggerFactory.getILoggerFactory()
                // the policy must know the appender it rolls for
                policy.parent = appender
                // file name pattern for the rolled files
                policy.fileNamePattern = 'C:\\tmp\\groovy.%date{yyyy-MM-dd}.%i.log'
                // the maximum number of rolled files to keep
                policy.maxHistory = 10
                policy.timeBasedFileNamingAndTriggeringPolicy = new SizeAndTimeBasedFNATP().with { fnatp ->
                    fnatp.context = LoggerFactory.getILoggerFactory()
                    // the max size of each rolled file
                    fnatp.maxFileSize = '3MB'
                    fnatp
                }
                policy.start()
                policy
            }
            appender.encoder = new PatternLayoutEncoder().with {
                context = LoggerFactory.getILoggerFactory()
                pattern = "%date{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg%n"
                start()
                it
            }
            appender.start()
            log.addAppender(appender)
        }
    }
...
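Note that logback can also compress the rolled files for you, which covers the zip part of the question: if the fileNamePattern ends in .zip or .gz (for example 'C:\\tmp\\groovy.%date{yyyy-MM-dd}.%i.log.zip'), each rolled file is archived automatically, assuming your logback version supports compression together with size-and-time-based rolling.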
Hope this helps,

How to configure the "run on your server" Spotlight to give the same output as the Spotlight demo on the web?

I am using the "run on your server" Spotlight, but I couldn't configure the properties file so that it produces the same output as the demo does.
Here is a screenshot of the demo showing the parameters it uses:
They look like this on my side (from the server.properties file stored locally on my machine):
org.dbpedia.spotlight.web.rest.uri = http://localhost:2222/rest
org.dbpedia.spotlight.index.dir = data/index
org.dbpedia.spotlight.spot.dictionary = data/spotter.dict
jcs.default.cacheattributes.MaxObjects = 5000
org.dbpedia.spotlight.tagging.hmm = data/pos-en-general-brown.HiddenMarkovModel
org.dbpedia.spotlight.sparql.endpoint = http://dbpedia.org/sparql
org.dbpedia.spotlight.sparql.graph = http://dbpedia.org
# Configurations for the CoOccurrenceBasedSelector
# From: http://spotlight.dbpedia.org/download/release-0.5/spot_selector.tgz
# org.dbpedia.spotlight.spot.cooccurrence.datasource = ukwac
# org.dbpedia.spotlight.spot.cooccurrence.database.jdbcdriver = org.hsqldb.jdbcDriver
# org.dbpedia.spotlight.spot.cooccurrence.database.connector = jdbc:hsqldb:file:data/spotsel/ukwac_candidate;shutdown=true&readonly=true
# org.dbpedia.spotlight.spot.cooccurrence.database.user = sa
# org.dbpedia.spotlight.spot.cooccurrence.database.password =
# org.dbpedia.spotlight.spot.cooccurrence.classifier.unigram = data/spotsel/ukwac_unigram.model
# org.dbpedia.spotlight.spot.cooccurrence.classifier.ngram = data/spotsel/ukwac_ngram.model
# Other possible values: AtLeastOneNounSelector,CoOccurrenceBasedSelector,NESpotter
org.dbpedia.spotlight.spot.spotters = LingPipeSpotter
# org.dbpedia.spotlight.spot.opennlp.dir = opennlp
# Info for context searcher
org.dbpedia.spotlight.language = English
org.dbpedia.spotlight.lucene.analyzer = SnowballAnalyzer
# Choose between jdbc or lucene for DBpedia Resource creation. Also, if the jdbc throws an error, lucene will be used.
# org.dbpedia.spotlight.core.database = jdbc
# org.dbpedia.spotlight.core.database.jdbcdriver = org.hsqldb.jdbcDriver
# org.dbpedia.spotlight.core.database.connector = jdbc:hsqldb:file:data/database/spotlight-db;shutdown=true&readonly=true
# org.dbpedia.spotlight.core.database.user = sa
# org.dbpedia.spotlight.core.database.password =
# List of disambiguators to load: Document,Occurrences,CuttingEdge,Default
org.dbpedia.spotlight.disambiguate.disambiguators = Default,Document
# From http://spotlight.dbpedia.org/download/release-0.5/candidate-index-full.tgz
# org.dbpedia.spotlight.candidateMap.dir = /fastdata/spotlight/3.7/candidateIndexTitRedDis
The quickstart comes with tiny versions of the index and spotter.dict.
If you want the same results as our demo web server, you need to download the larger files, which are several gigabytes in size.
You can either overwrite your index and spotter.dict, or change the config to point to the new files.
See http://github.com/dbpedia-spotlight/dbpedia-spotlight/wiki/Downloads
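For example, after downloading and unpacking the larger files, you would update these two entries to point at them (the paths below are placeholders; substitute wherever you unpacked the downloads):
org.dbpedia.spotlight.index.dir = /data/spotlight/index
org.dbpedia.spotlight.spot.dictionary = /data/spotlight/spotter.dict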

log4j log problem

I am writing a servlet.
I have several classes, and I want the logs of some of them to be kept separate from each other.
Here is the log4j configuration file:
log4j.rootLogger=INFO, CONSOLE, SearchPdfBill, Scheduler
# CONSOLE is set to be a ConsoleAppender.
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
# SearchPdfBill is set to be a file
log4j.appender.SearchPdfBill=org.apache.log4j.DailyRollingFileAppender
log4j.appender.SearchPdfBill.File = /bps/app/BpsPdfBill/BpsPdfBill.ear/BpsPdfBill.war/WEB-INF/logs/BpsPdfBill.log
#log4j.appender.SearchPdfBill.File = E:\\Workspace\\Eclipse_Workspace\\BpsPdfBill\\log\\BpsPdfBill.log
log4j.appender.SearchPdfBill.Append = true
log4j.appender.SearchPdfBill.DatePattern = '.'yyyy-MM-dd
log4j.appender.SearchPdfBill.layout=org.apache.log4j.PatternLayout
log4j.appender.SearchPdfBill.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
# Scheduler is set to be a file
log4j.appender.Scheduler=org.apache.log4j.DailyRollingFileAppender
log4j.appender.Scheduler.File = /bps/app/BpsPdfBill/BpsPdfBill.ear/BpsPdfBill.war/WEB-INF/logs/Schedule.log
#log4j.appender.Scheduler.File = E:\\Workspace\\Eclipse_Workspace\\BpsPdfBill\\log\\BpsPdfBill.log
log4j.appender.Scheduler.Append = true
log4j.appender.Scheduler.DatePattern = '.'yyyy-MM-dd
log4j.appender.Scheduler.layout=org.apache.log4j.PatternLayout
log4j.appender.Scheduler.layout.ConversionPattern=<%p> %c{1}.%t %d{HH:mm:ss} - %m%n
I set up a logger here:
String logDir = conf.getInitParameter("log_file_path");
if (logDir == null) {
    initErrMsg = "Param - log_file_path cannot be empty";
    throw new ServletException(initErrMsg);
}
if ((logger = Logger.getLogger(SearchPdfBill.class)) != null) {
    //writeLog("Initializing log4j.");
    conf.getServletContext().log("Log4j initialized.");
} else {
    conf.getServletContext().log("Cannot initialize log4j properly.");
}
And another logger here:
logDir = sc.getInitParameter("log_file_path");
if (logDir == null) {
    initErrMsg = "Param - log_file_path cannot be empty";
    try {
        throw new ServletException(initErrMsg);
    } catch (ServletException e) {
        // TODO Auto-generated catch block
        conditionalWriteLog(logEnabled, e.getMessage());
    }
}
if ((logger = Logger.getLogger(Scheduler.class)) != null) {
    //writeLog("Initializing log4j.");
    conditionalWriteLog(logEnabled, "Log4j initialized.");
} else {
    conditionalWriteLog(logEnabled, "Cannot initialize log4j properly.");
}
However, it ends up that the two loggers log the same thing. Every message is written to both files identically. Why?
I think the configuration file is probably wrong, but I don't know where. Can somebody help me correct it?
You need to define those two loggers in your configuration file. Are those classes within a package? That is important to how you will configure them. Say the package is com.gunbuster:
log4j.category.com.gunbuster.SearchPdfBill=INFO, SearchPdfBill
log4j.additivity.com.gunbuster.SearchPdfBill=false
log4j.category.com.gunbuster.Scheduler=INFO, Scheduler
log4j.additivity.com.gunbuster.Scheduler=false
The additivity setting prevents those loggers from also sending their output to the rootLogger's CONSOLE appender. Also, you should make the first line:
log4j.rootLogger=INFO, CONSOLE
So that the rootLogger does not add entries into those files.
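As a quick sanity check, here is a minimal sketch of the expected behavior with that configuration (assuming the package is com.gunbuster, as above):

import org.apache.log4j.Logger;

public class LoggingCheck {
    public static void main(String[] args) {
        // These names match the log4j.category entries, so each message goes
        // only to its own file; additivity=false keeps it off the CONSOLE.
        Logger search = Logger.getLogger("com.gunbuster.SearchPdfBill");
        Logger scheduler = Logger.getLogger("com.gunbuster.Scheduler");
        search.info("appears only in BpsPdfBill.log");
        scheduler.info("appears only in Schedule.log");
    }
}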
