How to run mongodb instance in ERROR log level mode? - java

I am not able to run a mongodb instance at ERROR log level. According to the MongoDB documentation, the default verbosity is 0, which includes informational messages, and increasing the verbosity to 1-5 adds debug-level messages. I need only error messages to be logged in my log file. I am currently using mongodb-3.6.3 with the Java driver on the client side.
Is there any way to do this? If yes, how? I've already tried reducing the logs by adding quiet = true in the config file, but a lot of unnecessary log output is still generated.

Add this line to your application.properties file and check the console output after running any MongoRepository query.
logging.level.org.springframework.data.mongodb.core.MongoTemplate=ERROR
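The property above only covers Spring Data's MongoTemplate. If the noise comes from the driver itself, note that the MongoDB Java driver logs through SLF4J under the org.mongodb.driver prefix, so (as a sketch, assuming a Logback backend) a logger entry like this should suppress everything below ERROR:

```xml
<!-- logback.xml fragment; assumes SLF4J is backed by Logback -->
<logger name="org.mongodb.driver" level="ERROR"/>
```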

Related

GCP Log explorer shows wrong severity level of log records

I am running a Java application in GKE and monitoring logs in Log Explorer. The Java application writes logs to stdout and, as far as I understand, the GKE agent parses them and sends them to Log Explorer. What I found is that Log Explorer shows WARN and ERROR messages with severity INFO.
I figured out that I can't change the default log parser, so I configured Logback to emit the Java logs in a JSON format suitable for GCP (I used the implementation from this answer); here is an example:
{"message":"2022-02-17 12:42:05.000 [QuartzScheduler_Worker-8] DEBUG some debug message","timestamp":{"seconds":1645101725,"nanos":0},"thread":"QuartzScheduler_Worker-8","severity":"DEBUG"}
{"message":"2022-02-17 12:42:05.008 [QuartzScheduler_Worker-8] INFO some info message","timestamp":{"seconds":1645101725,"nanos":8000000},"thread":"QuartzScheduler_Worker-8","severity":"INFO"}
{"message":"2022-02-17 12:42:05.009 [QuartzScheduler_Worker-8] ERROR some error message","timestamp":{"seconds":1645101725,"nanos":9000000},"thread":"QuartzScheduler_Worker-8","severity":"ERROR"}
But it didn't help at all.
Please point out where I am wrong with the JSON format, or whether I need to configure something additionally on the GCP side. I've checked the official doc regarding the log JSON format and I don't understand what I am missing.
According to the documentation (link 1 and link 2):
Severities: By default, logs written to the standard output are on the INFO level and logs written to the standard error are on the ERROR level. Structured logs can include a severity field, which defines the log's severity.
If you're using Google Kubernetes Engine or the App Engine flexible environment, you can write structured logs as JSON objects serialized on a single line to stdout or stderr. The Logging agent then sends the structured logs to Cloud Logging as the jsonPayload of the LogEntry structure.
If the manual implementation is not working, you may try to:
Directly send logs to Cloud Logging API
Use this official Java Logback lib (note: it's currently a work in progress)
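As a minimal sanity check of what the Logging agent expects (a sketch, not GCP-specific code), a structured entry only needs to be a single-line JSON object on stdout with a top-level severity field, which the agent maps onto the LogEntry severity:

```java
public class StructuredLog {
    // build a single-line JSON entry with a top-level "severity" field;
    // naive string concatenation here, so messages must not contain quotes
    static String entry(String severity, String message) {
        return "{\"severity\":\"" + severity + "\",\"message\":\"" + message + "\"}";
    }

    public static void main(String[] args) {
        // prints {"severity":"ERROR","message":"some error message"}
        System.out.println(entry("ERROR", "some error message"));
    }
}
```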

Can we skip some errors in error log file using log4j2?

I get an error when I have duplicates; it doesn't affect my application. I want to skip those errors in my error log file. Is it possible to skip specific errors, and if so, how can I do that? I am using MuleSoft.
Mule (3.8.x) uses SLF4J as the logging frontend framework and Log4j2 as the backend implementation. If the errors appear in the application's log, chances are that you can set that particular category to OFF.
Of course, rather than just ignoring the errors, it would be preferable to analyze the root cause (why are there duplicates?).
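As a sketch of setting a category to OFF (com.example.dup is a hypothetical logger name; use the category that actually appears in your log lines), the Loggers section of log4j2.xml would look like:

```xml
<Loggers>
    <!-- com.example.dup is a placeholder; replace with the category from your log -->
    <Logger name="com.example.dup" level="OFF"/>
    <Root level="info">
        <AppenderRef ref="file"/>
    </Root>
</Loggers>
```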

Is there a way to turn off messages from zookeeper?

I am using ZooKeeper successfully, but it keeps printing status updates and warnings to the shell, which is making it harder to debug my program (which is not working as well as ZooKeeper is). Is there an easy way to turn that off in ZooKeeper without going into the source? Or is there a way to run a Java program so that only the executing program gets to print to the shell?
Isn't the 'Logging' chapter of the ZooKeeper administrator's guide what you actually want?
ZooKeeper uses log4j, so it is a pretty standard logging approach with a lot of configuration flexibility available.
By default ZooKeeper emits messages of INFO or higher severity, and it uses log4j for logging. So set the logging level to a higher severity in your log4j.properties (assuming you provided the path to the .properties file, or it's in the working directory).
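A minimal log4j.properties along those lines (a sketch; the appender name CONSOLE and the pattern are arbitrary choices):

```properties
# raise the root threshold so only ERROR and above are emitted
log4j.rootLogger=ERROR, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```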
There is a similar post on avoiding ZooKeeper log messages, using the C client API, like this:
zoo_set_log_stream(fopen("/dev/null", "w")); /* discard the C client's log output */
This will turn off all output from ZooKeeper.

how to get tomcat log output?

I have a web app written with the java/spring/hibernate stack, and I have several pieces of code in the app that print out debugging information. For example, I have hibernate's "show_sql" attribute set to "true" so that it shows me the queries it is executing. Another example is whenever an exception is caught, its stack trace is printed out to console.
Now, I have moved my WAR to the production server, which is running Tomcat 7.0.42. However, I am having a problem getting Hibernate or MySQL queries to execute, so I need to debug it. The problem is that catalina.out only shows very minimal messages; there is no Hibernate output or error stack trace. In fact, none of the logs in the logs/ directory show output from Hibernate or exception stack traces.
So my question is how do I get the same output on the server as I get when I'm running my web app locally?
Assuming you're logging to System.out or System.err, fiddle with logging.properties in ${catalina.home}/conf.
If you're using a proper logger (you should be!), you'll need to fiddle with the appropriate config file for that logger instead.
Hibernate's show_sql prints to standard output (System.out). If it's not in logs/catalina.out, either the parameter is false or you've set up Tomcat to redirect System.out somewhere else.
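A more controllable alternative to show_sql (a sketch: Hibernate emits generated SQL under the org.hibernate.SQL logger category) is to route the queries through the logging framework rather than raw stdout:

```properties
# log4j.properties fragment: log generated SQL through the logger instead of System.out
log4j.logger.org.hibernate.SQL=DEBUG
# also log bind parameter values (very verbose)
log4j.logger.org.hibernate.type=TRACE
```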

Logging using Log4j taking long time

I am using the Log4j logging framework to insert logs into an Oracle database, but the insert query in the log4j properties file is taking a long time to execute and making the application very slow. When I removed the logging statements from the Java code, the application worked fine. At first I thought the insertion into the database was taking the time, but writing the log to an external file also takes a long time.
Can anyone please suggest a solution?
Thank You,
Dhaval Maheshwari.
If your application is under development, the log level should be DEBUG, and before logging you should check isDebugEnabled() and only then build and log your string.
But if your application is in production, the log level should be INFO and you must log minimal information to the log file.
Always use at least two log levels in your application: one for debugging mode (the development environment) and another for production mode, and the production log should be minimal.
This is the way you can speed up your application.
And a second thing: if you want to persist your logs into a database, create a scheduler task whose responsibility is reading logs from the flat file and persisting them into the database, and schedule it to run only once a day.
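The isDebugEnabled() guard can be sketched like this (using java.util.logging for a self-contained example; log4j's isDebugEnabled() plays the same role as isLoggable(Level.FINE) here):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardDemo {
    private static final Logger LOG = Logger.getLogger(GuardDemo.class.getName());

    // returns true when the debug-level message would be skipped entirely
    static boolean debugSkipped() {
        LOG.setLevel(Level.INFO);               // production-style level
        if (LOG.isLoggable(Level.FINE)) {       // analogous to log4j's isDebugEnabled()
            LOG.fine("expensive debug detail"); // never built at INFO level
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // prints: debug skipped: true
        System.out.println("debug skipped: " + debugSkipped());
    }
}
```

The point of the guard is that the (possibly expensive) message string is never constructed when the level is disabled.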
I suggest not following the technique you are using now.
First of all, I am not sure why you are trying to log the output of log4j to a database.
Anyway, if it is really necessary, try something like this: let log4j write to a file as usual, and later run a thread that dumps the file from disk into the database as a batch process once the file is closed.
In this case your application is decoupled from the latency of the database.
There are other solutions using JMS, where you write the entries to a JMS queue and a consumer on the other end reads the queue and writes them to the database.
It depends on the kind of problem you are trying to solve, though.
See if it helps.
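The queue-based decoupling can be sketched in-JVM with a BlockingQueue standing in for the JMS queue (the class and the List "database" are illustrative placeholders, not a real persistence layer):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class LogPipeline {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> persisted = new ArrayList<>();  // stand-in for the database

    void log(String line) {        // producer side: called by the app, returns immediately
        queue.offer(line);
    }

    void drainBatch() {            // consumer side: drain and "insert" as one batch
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        persisted.addAll(batch);   // a real consumer would run one batched INSERT here
    }

    public static void main(String[] args) {
        LogPipeline p = new LogPipeline();
        p.log("ERROR first");
        p.log("ERROR second");
        p.drainBatch();
        System.out.println(p.persisted.size()); // prints 2
    }
}
```

The application thread only pays the cost of an in-memory enqueue; the slow database write happens on the consumer's schedule.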
Logging has severity levels built in; in production, for example, only log application-level exceptions and errors (the ERROR level).
If the entries are tracking logs (such as user actions), don't write them to files; add them directly to the database. Hope this helps.
