Encountered a frustrating problem in our application today which came down to an ArrayIndexOutOfBoundsException being thrown. The exception's type was just about all that was logged, which is fairly useless (but, oh dear legacy app, we still love you, mostly). I've redeployed the application with a change that logs the stack trace during exception handling (and immediately found the root cause of the problem) and wondered why no one else did this before. Do you generally log the stack trace, and is there any reason you wouldn't do this?
Bonus points if you can explain (why, not how) the rationale behind having to jump through hoops in Java to get a string representation of a stack trace!
Some logs might contain sensitive data, and logging facilities are not necessarily secure enough to handle that data in production.
Logging too much can result in too much information, i.e. no information at all for the sysadmins. If their logs are filled with debug messages, they won't be able to recognize suspicious patterns. (Years ago I saw a system logging all system calls for security reasons. There were so many logs that nobody noticed when some unprivileged users started to become root.)
The best thing to do is to log everything with appropriate log levels, and to be able to adjust log levels in production (at least in Java that's not a big issue).
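For instance, with log4j 1.x a logger's threshold can be changed at runtime; a minimal sketch (the logger name "com.example.app" is illustrative, and in practice you would trigger this from an admin hook or JMX rather than main):

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class LogLevelSwitch {
        public static void main(String[] args) {
            Logger appLogger = Logger.getLogger("com.example.app");
            appLogger.setLevel(Level.DEBUG); // temporarily verbose while diagnosing
            // ... reproduce and diagnose the problem ...
            appLogger.setLevel(Level.WARN);  // back to production-quiet
        }
    }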
Please also see these questions:
Logging in Java and in general: Best Practices?
Best practices for Java logging from multiple threads?
Important things to consider here:
Handling sensitive data
Compacting exception messages and mailing them to the appropriate fixers
Logging only what is required, because logging is expensive in terms of space and time
I generally do log the stack trace, because it carries the information needed for troubleshooting/debugging the problem. It's the best thing next to a minidump and often leads to a solution simply by code inspection and identifying the problem.
BTW, I agree with sibidiba about the potential information disclosure of your app's internals that a full stack exposes: the function names, along with the stack call sequence, can tell a lot to an educated reader. This is the reason why some products log only the symbol address on the stack and rely on the devs to resolve the address to a name from internal PDBs.
But I reckon that logging text into files containing 1 line of error and 14 lines of stack makes it very difficult to navigate the error logs. It also causes problems in high-concurrency apps because the lock on the log file is held longer (or worse, the log files get interleaved). Having encountered these problems myself many times, along with other issues in supporting and troubleshooting deployments of my own apps, led me to create a service for logging errors at bugcollect.com. When designing the error collection policies I chose to collect the stack dumps every time, and to use the stacks as part of the bucket keys (to group errors that happen on the same stack into the same bucket), as sketched below.
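To make the bucketing idea concrete, here is a minimal sketch of deriving a bucket key from a throwable - exception type plus the class.method call sequence, ignoring line numbers so rebuilt binaries map to the same bucket (this is the general idea, not bugcollect.com's actual policy):

    import java.util.Arrays;
    import java.util.stream.Collectors;

    public class Buckets {
        // Errors thrown on the same stack share a key and land in the same bucket.
        static String bucketKey(Throwable t) {
            String frames = Arrays.stream(t.getStackTrace())
                    .map(e -> e.getClassName() + "." + e.getMethodName())
                    .collect(Collectors.joining("|"));
            return Integer.toHexString((t.getClass().getName() + "|" + frames).hashCode());
        }
    }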
Restrictions on logging are often pushed through when developers log too liberally and sysadmins discover that the app, once put under a production load, thrashes and fills the HD with huge log files. It can then be hard to convince them that you've seen the error of your ways and have reduced logging (or adjusted log levels) sufficiently but really need those remaining log entries.
For us it is very simple: If there is an unexpected exception thrown, we log the stack trace along with as telling a message as possible.
My guess is that the developer who wrote the original code in the question simply wasn't experienced enough to know that the message alone is not enough. I thought so too, once.
The reason it is convoluted to get a stack trace as a string is that there is no StringPrintWriter in the JRE - I believe the line of thinking has been that the platform provides a lot of orthogonal building blocks which you then combine as needed. You have to assemble the needed PrintWriter yourself.
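In practice the standard recipe is to combine a StringWriter with a PrintWriter, which is exactly what the missing StringPrintWriter would have done for you:

    import java.io.PrintWriter;
    import java.io.StringWriter;

    public class StackTraces {
        static String asString(Throwable t) {
            StringWriter sw = new StringWriter();
            PrintWriter pw = new PrintWriter(sw);
            t.printStackTrace(pw);
            pw.flush(); // not strictly needed for a StringWriter, but harmless
            return sw.toString();
        }
    }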
Bonus points if you can explain (why, not how) the rationale behind having to jump through hoops in Java to get a string representation of a stack trace!
Shouldn't you just log the throwable instead of jumping through hoops to print the stack trace? Like this: log.error("Failed to deploy!", ex). Given a throwable, Log4J will print both the error message obtained via getMessage() and the stack trace.
What I've seen a lot is code logging an exception like this:
LOG.error(ex);
Because log4j accepts an Object as the first argument, it will log the String representation of the Exception, which is often only the name of the class. This is usually just an oversight on the developer's part. It's better to log an error like this:
LOG.error("foo happened", ex);
...so that, if configured properly, the logging framework will log the stack trace.
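Put side by side, with a hypothetical operation doFoo() and a log4j 1.x Logger, the difference looks like this:

    import org.apache.log4j.Logger;

    public class Example {
        private static final Logger LOG = Logger.getLogger(Example.class);

        void process() {
            try {
                doFoo(); // hypothetical operation that may fail
            } catch (RuntimeException ex) {
                LOG.error(ex);                 // logs only ex.toString() - no stack trace
                LOG.error("foo happened", ex); // logs the message plus the full stack trace
            }
        }

        private void doFoo() { throw new RuntimeException("boom"); }
    }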
Related
I'm a systems engineer, and I've got multiple JBoss 5 instances and a complete mess of deployed wars. I know the log4j.xml file to a decent degree and have it sending all logs to localhost rsyslog. Now the problem...
Basically the code could be a textbook example of extremely horrific practices: regular, expected situations go unhandled and are sent to log4j as WARN, INFO or ERROR... Basically the entire logs have become meaningless and wasteful, and I'm unable to get the developers to do anything to fix this... there may be a future time when I can make my case, but for now I'm totally stuck with this pile.
I simply no longer want to accept 150MiB of nonsense being forwarded to centralized logging; the whole thing has made even identifying MY own system problems difficult. I know everything I can do on the system level, very well in fact, and have done everything to isolate and mitigate the consequences... but that's not enough. What can I do now on the JBoss/log4j/Java side?
I can't cut off all logging; I need to at least have the parts they didn't screw up logged. So with that said, how do I stop these... and what is even the correct name for this type of log output?
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:94)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:152)
at org.apache.commons.httpclient.methods.StringRequestEntity.writeRequest(StringRequestEntity.java:146)
at org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
Is there any way I can somehow just get rid of all the lines that start with "at"? Is that a debugging feature of some sort? Do they have these in production builds by nature? Really? I mean, if I had the error message and just none of the "at" lines, I'd be much more comfortable.
If a programmer developed a Java application (say, a Swing application) without any log files (poor coding standards...), and at some point the application crashes,
then how will one track down the cause of the crash?
Note: I faced this scenario as a question in an interview.
I replied that maybe it can possibly be tracked from the JVM (not sure...).
Please, can anyone tell me how to track down the issue?
If there has been an exception, generally you can examine the stack trace in the standard output (or standard error) - if you are in an IDE, in the console - as that is the default target for JVM output. However, in the case of "poor coding standards", as you say, the exception could be caught without printing the stack trace or re-throwing it to upper levels...
Besides looking on the console for the stack trace, another option would be to reproduce the problem in the development environment and then add logging information or debug it.
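If you can change even one line of the application, installing a default uncaught-exception handler is a cheap way to make future crashes visible; a minimal sketch (where you send the trace - file, dialog, mail - is up to you):

    public class Main {
        public static void main(String[] args) {
            // Catches exceptions that would otherwise kill a thread with
            // nothing but a console message nobody is watching.
            Thread.setDefaultUncaughtExceptionHandler((thread, ex) -> {
                System.err.println("Uncaught exception in thread " + thread.getName());
                ex.printStackTrace(); // in a real app, persist this somewhere durable
            });
            // ... start the Swing application here ...
        }
    }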
I am using slf4j with the log4j binding for logging in my Java project. Currently I have 2 appenders, FILE and CONSOLE.
I want to know the following 2 things:
Does using multiple appenders (in this case CONSOLE and FILE) cause a performance issue in logging?
When would somebody want to use both the CONSOLE and FILE appenders?
When writing to CONSOLE and FILE, you are writing to 2 different streams. In a multithreaded system, the performance hit will not be much, but with big volumes it is still apparent.
From the log4j manual:
The typical cost of actually logging is about 100 to 300 microseconds.
This includes building the statement and writing it, but the time taken for writing will still be apparent if you are logging heavily: at 300 microseconds per statement, 10,000 log statements cost a full 3 seconds.
But you need to ask a more basic question - Why are you logging?
to keep track of what is going on
to find out errors
The CONSOLE is not useful for the first purpose, as the logs are not saved anywhere. If the logging is heavy and all the logs are sent to the CONSOLE, the volume will make the output on the console unreadable, so purpose 2 is also defeated.
IMO it makes much more sense to read logs from a file using something like less. As a general practice, log to file and, if you must, log only the ERROR messages to console: a few ERROR messages are an indicator of something going wrong, whereas hundreds of log lines on the console are just junk, as you cannot make any sense of them when the console is refreshing so rapidly.
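That practice translates directly into configuration; a minimal log4j 1.x properties sketch (file name and patterns are illustrative):

    # Everything at INFO and above goes to the file...
    log4j.rootLogger=INFO, FILE, CONSOLE

    log4j.appender.FILE=org.apache.log4j.RollingFileAppender
    log4j.appender.FILE.File=app.log
    log4j.appender.FILE.MaxFileSize=10MB
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=%d %-5p %c - %m%n

    # ...while the console only ever sees ERROR and above.
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.Threshold=ERROR
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%-5p %c - %m%n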
TL;DR
The cost might not be much, but why incur an added cost when you are getting no added advantage?
Read these links on log4j performance:
log4j-performance
log4j-decreased application performance
log4j appenders
I challenge you to notice any performance change.
For instance, you might want a daemon application to log both to the console and to a file. It does not seem to be such an uncommon behavior.
Is there a tool that is able to collect and count (Java) stack traces in a large log file, such that you get an overview of which errors occur most often?
I am not aware of any automatic tool, but LogMX will give you a nice clean overview of your log file with search options.
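Failing a ready-made tool, a crude counter is easy to write yourself. A minimal Java sketch that groups traces by their "at ..." frames and prints the ten most frequent (the header-detection heuristic is deliberately naive, and the log file path comes from the command line):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class TraceCounter {
        public static void main(String[] args) throws IOException {
            Map<String, Integer> counts = new HashMap<>();
            StringBuilder current = null;
            for (String raw : Files.readAllLines(Paths.get(args[0]))) {
                String line = raw.trim();
                if (line.startsWith("at ") || line.startsWith("Caused by:")) {
                    // Frame line: extend the trace currently being collected.
                    if (current != null) current.append(line).append('\n');
                } else {
                    // Any other line ends the current trace; count it.
                    if (current != null) counts.merge(current.toString(), 1, Integer::sum);
                    // Naive heuristic: a line mentioning an exception starts a new trace.
                    current = line.contains("Exception") ? new StringBuilder(line + "\n") : null;
                }
            }
            if (current != null) counts.merge(current.toString(), 1, Integer::sum);
            counts.entrySet().stream()
                  .sorted((a, b) -> b.getValue() - a.getValue())
                  .limit(10)
                  .forEach(e -> System.out.println(e.getValue() + "x:\n" + e.getKey()));
        }
    }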
This probably isn't the best answer, but I am going to try to answer the spirit of your question. You should try Dynatrace. It's not free and it doesn't work with log files per se, but it can get you very detailed reports of what types of exceptions are thrown, from where and when, on top of a lot of other info.
I'm not too sure if there is a tool available to evaluate log files, but you may have more success with a tool like AppDynamics. This is a monitoring tool which can be used to evaluate live application performance and can be configured to monitor exception frequency.
Good luck.
Mark.
We are working on a large Java program that was converted from a Forte application. During the day we are getting blocking SPIDs on the server. We had a DBA visit yesterday, and he set up a profiler template to run to catch the locking/blocking action. When we run this profile, the blocking problem goes away. Why?
This application is distributed using RMI and has around 70 users. We are using SQL Server 2000 and Windows 2000 servers to keep compatibility with a bunch of old VB helper applications.
We have traced the blocking down to a specific screen and stored procedure, but now we can't get the errors to happen with Profiler running.
Thanks for any help!
Theo
The good old Heisenberg debugger problem.
Any profiler does two things: it adds code in place to invoke the debugger, and it stores data. The first can thwart optimizers, and the second can change the timing of something, causing a race condition to go away.
This blocking SPID problem seems to show up on Google a lot; the reason appears to be that it occurs when some resource is locked while another process wants it, so a timing issue sounds likely.
Microsoft has an article on how to deal with the problem.
Just a collection of random thoughts... I've seen traces take a server down, but never make things better.
What trace template are you using? (These are taken from SQL Server 2005 tools, sorry)
The "Standard (default)" one tracks high levels calls and logon/logout
The "TSQL_SPs" tracks statement calls which would be a lot more intrusive
Is it binary and guaranteed, too? Trace on = no blocks, trace off = blocks, or is it an unlucky coincidence? When you're all watching the DBA, does someone stop clicking in the client and come over to watch?
Is something else being switched off as part of the trace? That is, are you using Profiler or a scripted trace (lots of sp_trace_set% statements)? In a scripted trace, there may be something that switches something else off.