How to refactor logging statements in sibling classes? - java

As a system evolves, logging statements get changed to meet new requirements, and ideally the logging statements that share identical or very similar context should be changed consistently. But in many cases it's hard for developers to remember that all of them exist, so they may change only a portion of them and forget to change the others consistently.
Take this Java code snippet as an example: there are two sibling classes (ChildClassA, ChildClassB) which both extend the same superclass (ParentClass), and they have a pair of similar methods which serve similar functions and contain the same logging statements.
public class ChildClassA extends ParentClass {
    public void processShellCommand() {
        ...
        logger.error("Error initializing command, field " + field.getName() + " is not accessible.");
        ...
    }
}

public class ChildClassB extends ParentClass {
    public void processNetworkCommand() {
        ...
        logger.error("Error initializing command, field " + field.getName() + " is not accessible.");
        ...
    }
}
Is there a solution, such as a tool or documentation, that can help keep such logging statements consistent when they change?

When it comes to logging, I think you should really try to avoid putting details in the log.[whatever_level]([message_text]) statement (at least when it comes to errors). Instead, create your own exception classes and put the message details in them. Having a filter/interceptor that deals with logging of unexpected exceptions is also good practice.
So in your code example, the subclasses would throw a typed exception, let's call it InitializingException(...). It is then up to the caller, or a filter, to deal with it and log it.
You should care about the logging parts of your code base the same way you do about business-logic code (one can argue it is part of it), so honor DRY (don't repeat yourself).
The same logic applies to debug and trace statements as well: you don't want to copy-paste the same message across the system, so general refactoring should be applied to avoid it. That said, I generally think a debug or trace message is more likely to change (it is up to the developer).
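A minimal sketch of that refactoring, assuming a hypothetical InitializingException and a shared guard in the parent class (the names are illustrative, not part of the original code base):

```java
public class LoggingRefactorSketch {

    // Hypothetical typed exception: the message text now lives in one place.
    static class InitializingException extends RuntimeException {
        InitializingException(String fieldName) {
            super("Error initializing command, field " + fieldName
                  + " is not accessible.");
        }
    }

    static abstract class ParentClass {
        // Shared guard both siblings call instead of duplicating logger.error.
        void requireAccessible(boolean accessible, String fieldName) {
            if (!accessible) {
                throw new InitializingException(fieldName);
            }
        }
    }

    static class ChildClassA extends ParentClass {
        void processShellCommand(boolean fieldAccessible) {
            requireAccessible(fieldAccessible, "shellField");
            // ... actual command processing ...
        }
    }

    public static void main(String[] args) {
        try {
            new ChildClassA().processShellCommand(false);
        } catch (InitializingException e) {
            // One caller/filter decides how to log; siblings stay consistent.
            System.out.println(e.getMessage());
        }
    }
}
```

If ChildClassB needs the same check, it calls the same guard, so the message can never drift between the siblings.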

Related

How to avoid creating custom exception classes whilst still throwing exceptions at an appropriate level of abstraction?

I'm reviewing my understanding of exception handling (in the context of Java), and trying to figure out what types of exceptions are most appropriate to throw. One comment that I'm regularly seeing is that it is generally better to avoid creating many custom exceptions - it is better to use the "widely understood" standard exceptions, and only "create a custom exception type when you need to annotate the exception with additional information to aid in the programmatic handling of the symptom."
However, this seems somewhat in contrast to the idea that you should "throw exceptions at the right level of abstraction". Looking at an example from Uncle Bob's 'Clean Code' the following examples are provided for an Employee class:
Bad: public TaxId getTaxId() throws EOFException
Good: public TaxId getTaxId() throws EmployeeDataNotAvailable
So, how do I reconcile these two "recommendations": that you should throw exceptions at the right level of abstraction, and that you should rarely create custom exception classes? In addition, when searching for information on the standard exceptions in Java, there is very little well-presented and formatted information on what standard exception classes are available. I'm looking for standard exceptions that would semantically still seem appropriate for calling classes, but I'm not finding much to go on. Of course you can find which exception classes are available in the JDK documentation, but the general lack of info and discussion online seems strange.
So, this is where I'm at right now. Any suggestions and comments are much appreciated!
The level of abstraction is judged to be right or wrong by the user of your code. To justify the existence of AException and BException, there should be a use case where the user differentiates between them, e.g.:
} catch (AException ae) {
    // do something
} catch (BException be) {
    // do something different
}
as opposed to always:
} catch (AException | BException e) {
    // do something
}
My experience is that real-world systems tend to go easy on the amount of logic that goes into the programmatic handling of the symptom.
I don't think there's a specific answer to your question. In my projects, I tend to follow these guidelines for custom exception classes:
If I encounter an exception in a method, I check whether it can be described by any of the subclasses of Exception or, if possible, a subclass of RuntimeException. The javadocs provide enough info about the basic classes that extend Exception and RuntimeException, and each exception class can have further subclasses of its own, e.g. IOException.
If there's no suitable subclass of Exception or RuntimeException, I create a custom exception class, or reuse one created earlier but with a distinct message. Usually I make these classes extend RuntimeException, to avoid forcing clients of the method to use try-catch blocks. If the client does need to handle the exception, then it should extend Exception instead.
The custom exception classes are associated to a process or specific event in the application.
If developing a business application, the name of the exception can be related to the business process you're working with. For example, if I'm developing a process that creates a bill from a set of input data (e.g. products, services, customer data, etc.), then I would provide at least 2 custom exception classes:
ElementNotFoundException, probably for not finding a specific kind of input, e.g. a missing product, or Customer#billingAddressLocation being null due to a bad migration of some customer's data.
BillGenerationException, thrown when there's a problem after collecting the necessary data, in the exact process of generating the bill.
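A minimal sketch of those two hypothetical classes and how they might be used in the billing example (all names and messages are illustrative):

```java
public class BillingExceptionsSketch {

    // Unchecked: a required input element was missing entirely.
    static class ElementNotFoundException extends RuntimeException {
        ElementNotFoundException(String element) {
            super("Missing input for bill generation: " + element);
        }
    }

    // Unchecked: the bill-generation step itself failed; keeps the cause.
    static class BillGenerationException extends RuntimeException {
        BillGenerationException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Hypothetical process: fail fast on missing input, wrap failures
    // that happen inside the actual generation step.
    static String generateBill(String customerBillingAddress) {
        if (customerBillingAddress == null) {
            throw new ElementNotFoundException("Customer#billingAddressLocation");
        }
        try {
            return "bill for " + customerBillingAddress;
        } catch (RuntimeException e) {
            throw new BillGenerationException("Could not generate bill", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(generateBill("42 Main St"));
    }
}
```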
It's quite a philosophical question.
But in general it means that you should create your own exceptions only after considering the existing ones.
Example:
If you use some external service and that service is unavailable, I wouldn't recommend throwing your own exception in that case, because every programmer after you will understand "Connection refused" or "Connection timed out" on the spot; whereas to check your custom exception after noticing it in production logs, a programmer will need to go to the source code and spend some time understanding it.
But if I see that my wrapper will be clearer in such a case, I add my wrapper.
There is no contradiction between using exceptions at the right level of abstraction and refraining from creating new exception classes. What you must do is choose the most appropriate existing exception class for the particular method you are interested in, if you can.
So if the clear meaning of a getTaxId method does not suggest that the method performs I/O, declaring it to throw an IOException of any kind would be inappropriate. You would then have to search the other existing exception classes for a more appropriate one. If you did not find such a class, you know it is appropriate to create a new exception class.
I think Uncle Bob is looking at the problem from the wrong end.
You throw an exception to unravel the call chain and inform a non-local piece of logic that something unexpected and detrimental happened and allow it to respond.
I can understand wrapping an EOFException and all sorts of bad data problems into some generic InvalidDataException but providing a specific EmployeeDataException seems like overkill.
It may be useful for the calling process to (say) know that there was a local data exception and not, for example, a lost connection. That way it could abandon a unit of work but realistically continue trying to process the next one.
So, do throw at an appropriate level of abstraction - for the catcher to respond usefully.
If you think about it, if you create a different exception for each object type, someone will have to maintain catchers for all object types in play!
Tomorrow a new exception called AddressDataException is introduced and various (obscure) catcher chains need that added as yet-another-data-exception category.
Of course the answer is to introduce a DataException category as super-class to all those specialised ones.
But as soon as you do that you'll change all the handlers to catch the generic exception and realise that the correct level of abstraction is a generic DataException because that's what is useful to the catcher.
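A sketch of the hierarchy that argument leads to, with all class names assumed for illustration: the specialised exceptions share a generic parent, and the catcher works at whichever level is actually useful to it.

```java
public class DataExceptionSketch {

    // Generic category the catchers actually care about.
    static class DataException extends RuntimeException {
        DataException(String message) { super(message); }
    }

    // Specialised variants; new ones can be added without touching catchers.
    static class EmployeeDataException extends DataException {
        EmployeeDataException(String message) { super(message); }
    }

    static class AddressDataException extends DataException {
        AddressDataException(String message) { super(message); }
    }

    public static void main(String[] args) {
        for (String unit : new String[] {"employee", "address"}) {
            try {
                if (unit.equals("employee")) {
                    throw new EmployeeDataException("bad employee record");
                }
                throw new AddressDataException("bad address record");
            } catch (DataException e) {
                // The generic level is enough: abandon this unit of work
                // and carry on with the next one.
                System.out.println("skipping " + unit + ": " + e.getMessage());
            }
        }
    }
}
```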

Is it a bad decision to introduce a method only for exception throwing?

I'm currently thinking about a concept for treating illegal states in my Java program.
I have introduced a class which performs processing steps on images.
public class ImageProcessor {
    public Image processImage(Image img) {
        // do some processing steps
    }
}
Now, I want to introduce another class which should check the image before the processing is done.
public class ImageChecker {
    public void checkImage(Image img) throws ImageNotProcessableException {
        // for example, if the image has a width of 0 the full process
        // should be aborted; throw an exception
        if (img.width == 0) throw new ImageNotProcessableException("width is 0");
        // other conditions also throwing exceptions
        // the image is fine, do nothing
    }
}
The two classes should be used in a Coordinator class.
public class Coordinator {
    public void coordinate(Image img) throws ImageNotProcessableException {
        // initialize both
        imageChecker.checkImage(img); // if no exception is thrown the process can run
        imageProcessor.processImage(img);
    }
}
The question is now: is this way of treating exceptions (defining a separate method for them) bad coding style? The idea behind this design was to keep the processing code free of exception-handling clutter. I want the coordinator to throw the exceptions, and I thought this could be a sensible way. What do you think: is this a good way, or is it an antipattern?
Thanks!
The validating method itself is a very good idea. You introduce very good separation of concerns - checking preconditions and validation in one place and actual processing in another.
However, the usage is incorrect. The ImageProcessor should eagerly call ImageChecker.checkImage() itself. Otherwise a client of your library might forget to call it and pass in an invalid image.
Also, @Damien_The_Unbeliever brings up a very good point about the structure of the code.
To make it as fancy as possible I would create an ImageProcessor interface with two implementations: one that performs the actual processing and a decorator implementation (ImageChecker) that performs validation and passes validated object to target ImageProcessor.
This way you can either use safe or fast (assuming validation is costly) implementation. Also in the future you might introduce other elements to the chain like caching or profiling.
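A sketch of that decorator arrangement (the Image type and all class names here are assumptions made so the example stands alone):

```java
public class ImageProcessingSketch {

    static class Image {
        int width;
        Image(int width) { this.width = width; }
    }

    interface ImageProcessor {
        Image process(Image img);
    }

    // "Fast" implementation: does the real work, pays no validation cost.
    static class ActualProcessor implements ImageProcessor {
        public Image process(Image img) {
            return img; // ... real processing steps ...
        }
    }

    // "Safe" decorator: validates, then hands off to the target processor.
    static class ImageChecker implements ImageProcessor {
        private final ImageProcessor delegate;
        ImageChecker(ImageProcessor delegate) { this.delegate = delegate; }
        public Image process(Image img) {
            if (img.width == 0) {
                throw new IllegalArgumentException("width is 0");
            }
            return delegate.process(img);
        }
    }

    public static void main(String[] args) {
        ImageProcessor safe = new ImageChecker(new ActualProcessor());
        safe.process(new Image(10)); // fine
        try {
            safe.process(new Image(0));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Callers who trust their input can use ActualProcessor directly; everyone else wraps it in the checker. Caching or profiling decorators would slot into the same chain.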
This is not unreasonable. Although, if checkImage exists for the sole purpose of checking whether it's OK to process an image and otherwise returns nothing, it would be reasonable to have it return a status code/object rather than throwing an exception and returning void, e.g.,
ImageStatus status = checkImage(image);
if (status.isOk()) {
    processImage(image);
}
This would be analogous to checking for divide by zero:
if (y != 0) {
    z = x / y;
}
Checked exceptions generally are better for situations where you can't confirm a priori whether something will succeed or fail until you try it, e.g., IOException.
I see 3 questions here, and it's useful to separate them.
Should the caller see a class called "ImageProcessor", or a class called "Coordinator"?
Is it good to do validation of inputs in a separate method from the main processing?
Should this code use checked exceptions or unchecked exceptions?
My answers would be:
Use the prettier name for the classes or interfaces you are exposing. So, here "ImageProcessor" is much nicer than "Coordinator". ("Coordinator" sounds like an implementation detail you don't want to expose.)
This is an implementation decision. It's fine to separate some validation logic into a separate method if it makes things cleaner. But, you need to be careful of falling into a trap of thinking it is possible to anticipate upfront everything that can possibly go wrong in later stages of processing. The method that does the actual processing still needs to be free to throw exceptions so it can describe failures as accurately as possible, no matter what the initial quick validation decided. Also, you want to avoid writing the validation code in two places, since that is unnecessary code duplication. So, I'm a bit skeptical about having this separation, but I'd need to see the code. More likely, it's better to separate the actual image processing itself into various sub-tasks, and do the validation at the point you need it.
This is the age-old Java checked-exceptions debate that will not go away in our lifetimes, and is impossible to summarize here. But I think the consensus has fairly strongly shifted toward using runtime exceptions rather than checked exceptions. Of course, legacy APIs still use checked exceptions, but for most new code, runtime exceptions are the better choice, since they allow more robust code. (Checked exceptions have the annoying property of often being wrapped, rewrapped, swallowed, or otherwise mangled before they can reach the exception handler, and they cause a lot of collateral damage along the way by making everyone's throws clauses longer.)

Java logging API overhead

I've read a bit about the various ways of logging a debugging message in Java, and coming from a C background my concern is as follows:
Those libraries claim minimal overhead when logging is disabled (such as in a production environment), but since the arguments to their log() functions are still evaluated, my concern is that the overhead in a real-world scenario will, in fact, not be negligible at all.
For example, a log(myobject.toString(), "info message") still has the overhead of evaluating myobject.toString(), which can be pretty big, even if the log function itself does nothing.
Does anyone have a solution to this issue?
PS: for those wondering why I mentioned a C background: C lets you use preprocessor macros and compile-time directives that completely remove all debugging-related code at compilation time, including macro parameters (which simply do not appear at all).
EDIT:
After having read the first batch of answers, it seems that Java clearly doesn't have anything that would do the trick (think logging the cosine of a number in a big loop in a mobile environment where every bit of CPU matters). So I'll add that I would even go for an IDE-based solution. My last resort would be building something like a "find all / replace" macro.
I first thought that maybe something grabbed from an aspect-oriented framework would help...
Anyone?
I think that the log4j FAQ does a good job of addressing this:
For some logger l, writing,
l.debug("Entry number: " + i + " is " + String.valueOf(entry[i]));
incurs the cost of constructing the message parameter, that is converting both integer i and entry[i] to a String, and concatenating intermediate strings. This, regardless of whether the message will be logged or not.
If you are worried about speed, then write
if (l.isDebugEnabled()) {
    l.debug("Entry number: " + i + " is " + String.valueOf(entry[i]));
}
This way you will not incur the cost of parameter construction if debugging is disabled for logger l. On the other hand, if the logger is debug enabled, you will incur the cost of evaluating whether the logger is enabled or not, twice: once in debugEnabled and once in debug. This is an insignificant overhead since evaluating a logger takes less than 1% of the time it takes to actually log a statement.
Using a guard clause is the general approach to avoiding string construction here.
Other popular frameworks, such as SLF4J, take the approach of using formatted strings / parameterized messages, so that the message is not evaluated unless needed.
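The same lazy-evaluation idea exists in the JDK itself: since Java 8, java.util.logging accepts a Supplier<String>, so the message body only runs if the level is actually enabled. A small self-contained sketch (the counter is just there to show the supplier is skipped):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {

    static int expensiveCalls = 0;

    // Stands in for a costly toString() or computation.
    static String expensive() {
        expensiveCalls++;
        return "computed";
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setLevel(Level.INFO); // FINE (debug-level) is disabled

        // Supplier form: the lambda only runs if FINE is enabled.
        log.fine(() -> "value = " + expensive());

        System.out.println("expensive() calls: " + expensiveCalls); // 0
    }
}
```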
Modern logging frameworks have variable replacement (SLF4J and Log4j 2 use {} as the placeholder). Your logging then looks something like this:
log.debug("The value of my first object is {} and of my second object is {}", firstObject, secondObject);
The toString() of the given objects will then only be executed when logging is set to debug. Otherwise the parameters are simply ignored.
The answer is pretty simple: don't call expensive methods in the log call itself. Plus, use guards around the logging call, if you can't avoid it.
if (logger.isDebugEnabled()) {
    logger.debug("this is an " + expensive() + " log call.");
}
And as others have pointed out, if you have formatting available in your logging framework (i.e., if you're using one modern enough to support it, which should be every one of them but isn't), you should rely on that to help defray expense at the point of logging. If your framework of choice does not support formatting already, then either switch or write your own wrapper.
You're right, evaluating the arguments to a log() call can add overhead that is unnecessary and could be expensive.
That's why most sane logging frameworks provide some string formatting functions as well, so that you can write stuff like this:
log.debug("Frobnicating {0}", objectWithExpensiveToString);
This way your only overhead is the call to debug(). If that level is deactivated, nothing more is done; if it is activated, the format string is interpreted, toString() is called on objectWithExpensiveToString, and the result is inserted into the format string before it is logged.
Some logging frameworks use MessageFormat-style placeholders ({0}), others use format()-style placeholders (%s), and yet others take a third approach.
You can use a funny way - a bit verbose, though - with assertions. With assertions turned on, there will be output and overhead, and with assertions turned off, no output and absolutely no overhead.
public static void main(String[] args) {
    assert returnsTrue(new Runnable() {
        @Override
        public void run() {
            // your logging code
        }
    });
}

public static boolean returnsTrue(Runnable r) {
    r.run();
    return true;
}
The returnsTrue() function is needed here because I know no better way of making an expression return true, and assert requires a boolean.

How to prove a bug in this piece of code via unit test

We have a habit in our company that when a bug is reported, we do the following steps:
Write a unit test that fails clearly showing the bug exists
Fix the bug
Re-run the test to prove the bug has been fixed
Commit the fix and the test to avoid regressions in the future
Now I came across a piece of legacy code with very easy bug. Situation looks like follows:
public final class SomeClass {
    ...
    public void someMethod(Parameter param) {
        try {
            if (param.getFieldValue("fieldName").equals("true")) { // Causes NullPointerException
                ...
            }
        } catch (Exception ex) {
            log.warn("Troubles ...", ex);
        }
    }
}
The problem here is that fieldName is not mandatory, so if not present, you get NPE. The obvious fix is:
if ("true".equals(param.getFieldValue("fieldName"))) {
    ...
}
My question is how to write a unit test that makes the method fail. If I pass in a message which doesn't contain the fieldName, it just logs the NPE but won't fail...
You may ask what the method actually does: I can test the effect the method has, but unfortunately it communicates with some remote system, so that would require a huge integration test, which seems like overkill for such a small and straightforward bug.
Note that it will be really hard, if not impossible, to make any changes to the code beyond those directly fixing the bug. So changing the code just to make it easier to test will probably not be an option. It's quite scary legacy code and everybody is really afraid to touch it.
You could do several things:
Stub the logger and check whether the error was logged as an indicator of whether the bug occurred. You can use TypeMock or Moles if the logger can't be easily replaced.
Refactor the part inside the try block into its own method, call only that method inside the try block, and make your unit test call that method as well. Now the exception will not be silently logged, and you can check whether or not it was thrown.
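A sketch of that second option, with a hypothetical Parameter stub added so the example stands alone:

```java
import java.util.Map;

public class SomeClassRefactored {

    // Hypothetical stand-in for the real Parameter type.
    static class Parameter {
        private final Map<String, String> fields;
        Parameter(Map<String, String> fields) { this.fields = fields; }
        String getFieldValue(String name) { return fields.get(name); }
    }

    // Original entry point keeps its catch-all behaviour.
    public void someMethod(Parameter param) {
        try {
            doProcess(param);
        } catch (Exception ex) {
            System.err.println("Troubles ... " + ex);
        }
    }

    // Extracted and package-visible: a unit test calls this directly,
    // so the NPE is no longer swallowed by the catch-all.
    void doProcess(Parameter param) {
        if (param.getFieldValue("fieldName").equals("true")) { // NPE if field absent
            // ...
        }
    }

    public static void main(String[] args) {
        // Via someMethod the failure is only logged, never thrown.
        new SomeClassRefactored().someMethod(new Parameter(Map.of()));
    }
}
```

A test against doProcess fails with the NPE before the fix, and passes once the comparison is flipped to "true".equals(...).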
I think the best practice would be to mock out the logger and assert that the logger has not been called for a pass. If it's a large change, I'm assuming the logger is used in a lot of places, which will help you with other tests in the future. For a quick fix, you could raise an event in the exception handler, but I don't think that's a very 'clean' way of doing it.
IMHO any catch-all is wrong. You want to catch the specific exceptions that you are looking for. Unit testing is also about making the code better, so you can change it to
catch (ExpectedException ex)
The log could be an injected service, meaning that SomeClass looks like this:
public final class SomeClass
{
    private final Logger log;

    public SomeClass(Logger log)
    {
        this.log = log;
    }
}
So in your unit test you could pass a fake Logger (possibly constructed with a mocking framework) which allows you to detect whether or not a warning was logged during the test.
If log is a global variable, you could make that global variable writable so that you can replace the logger in your tests in a similar way.
Yet another option is to add a Handler to log in your unit test for the purpose of detecting warn calls (assuming that you are using java.util.logging.Logger, but that doesn't seem to be the case because the method is warning, not warn).
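Putting the injected-logger idea together, here is a sketch with a hand-rolled logger interface (a mocking framework would do the same job; all names here are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

public class InjectedLoggerSketch {

    // Minimal logger abstraction so a test can observe warn() calls.
    interface Log {
        void warn(String msg, Throwable t);
    }

    static class RecordingLog implements Log {
        final List<String> warnings = new ArrayList<>();
        public void warn(String msg, Throwable t) { warnings.add(msg); }
    }

    static class SomeClass {
        private final Log log;
        SomeClass(Log log) { this.log = log; }

        void someMethod(String fieldValue) {
            try {
                if (fieldValue.equals("true")) { // NPE when fieldValue is null
                    // ...
                }
            } catch (Exception ex) {
                log.warn("Troubles ...", ex);
            }
        }
    }

    public static void main(String[] args) {
        RecordingLog log = new RecordingLog();
        new SomeClass(log).someMethod(null);
        // The test asserts on this instead of the (void) return value.
        System.out.println("warnings logged: " + log.warnings.size()); // 1
    }
}
```

The test passes a null/absent field, then asserts that exactly one warning was recorded; after the bug fix, the same call records none.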
Well, your company's method does mean you sometimes have to write a lot of extra tests, and you could argue whether this is the way to go for this "obvious easy bug". But you're doing it for a reason.
I'd look at it like this:
Your method should have a certain effect.
It does not have this effect at the moment.
If you want to fix it and test whether the method works, you need to check that effect.
Therefore, you probably should write the whole test, even involving the external systems, to prove the function works. Or, in this case, doesn't.
Add an additional log appender at warn level for this class/logger at the beginning of the test (and remove it at the end).
Then run someMethod and check whether the new appender is still empty (no exception) or has content (there was an exception).
You might consider using Boolean.parseBoolean(String) or "true".equalsIgnoreCase(text).
If you don't want this behaviour, you might want to add a test which shows that "True" and "TRUE" are treated as false.
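For reference, the case-sensitivity difference between the approaches (the JDK method is Boolean.parseBoolean, which is case-insensitive):

```java
public class BooleanParsingDemo {
    public static void main(String[] args) {
        String text = "True";
        System.out.println("true".equals(text));           // false: case-sensitive
        System.out.println(Boolean.parseBoolean(text));    // true: case-insensitive
        System.out.println("true".equalsIgnoreCase(text)); // true
    }
}
```

So switching from "true".equals(...) to either alternative silently changes how "True" and "TRUE" are treated, which is exactly what the suggested extra test pins down.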

Should the RequireThis check in Checkstyle be enabled?

One of the built-in Checkstyle checks is RequireThis, which will go off whenever you don't prepend this. to local field or method invocations. For example,
public final class ExampleClass {
    public String getMeSomething() {
        return "Something";
    }

    public String getMeSomethingElse() {
        // will violate Checkstyle; should be this.getMeSomething()
        return getMeSomething() + " else";
    }
}
I'm struggling with whether this check is justified. In the above example, the ExampleClass is final, which should guarantee that the "right" version of getMeSomething should be invoked. Additionally, there seem to be instances where you might want subclasses to override default behavior, in which case requiring "this" is the wrong behavior.
Finally, it seems like overly defensive coding behavior that only clutters up the source and makes it more difficult to see what is actually going on.
So before I suggest to my architect that this is a bad check to enable, I'm wondering if anyone else has enabled this check? Have you caught a critical bug as a result of a missing this?
The RequireThis rule does have a valid use in that it can prevent a possible bug in methods and constructors when it applies to fields. The code below is almost certainly a bug:
void setSomething(String something) {
    something = something;
}
Code like this will compile but will do nothing except reassign the value of the method parameter to itself. It is more likely that the author intended to do this:
void setSomething(String something) {
    this.something = something;
}
This is a typo that could happen and is worth checking for as it may help to prevent hard to debug problems if the code fails because this.something is not set much later in the program.
The checkstyle settings allow you to keep this useful check for fields while omitting the largely unnecessary check for methods by configuring the rule like this:
<module name="RequireThis">
    <property name="checkMethods" value="false"/>
</module>
When it comes to methods this rule has no real effect, because calling this.getMeSomething() or just getMeSomething() makes no difference to Java's method resolution. Calling this.getSomethingStatic() still works when the method is static; this is not an error, only a warning in various IDEs and static analysis tools.
I would definitely turn it off. Using this.foo() is non-idiomatic Java, and should therefore only be used when necessary, to signal that something special is going on in the code. For example, in a setter:
void setFoo(int foo) {this.foo = foo;}
When I read code that makes gratuitous use of this, I generally chalk it up to a programmer without a firm grasp of object-oriented programming, largely because I have generally seen that style of code from programmers who don't understand that this isn't required everywhere.
I'm frankly surprised to see this as a rule in CheckStyle's library.
Calling with "this." does not stop the invocation from calling an overridden method in a subclass, since this refers to "this object", not "this class". It should stop you from mistaking a static method for an instance method, though.
To be honest, that doesn't sound like a particularly common problem, I personally wouldn't think it was worth the trade-off.
Personally I wouldn't enable it. Mostly because whenever I read code I read it in an IDE (or something else that does smart code formatting). This means that the different kind of method calls and field accesses are formatted based on their actual semantic meaning and not based on some (possibly wrong) indication.
this. is not necessary for the compiler and when the IDE does smart formatting, then it's not necessary for the user either. And writing unnecessary code is just a source of errors in this code (in this example: using this. in some places and not using it in other places).
I would enable this check only for fields, because I like the extra information added by this. in front of a field.
See my (old) question: Do you prefix your instance variables with this in Java?
But for any other project, especially legacy ones, I would not activate it:
chances are the keyword this. is almost never used, meaning this check would generate tons of warnings;
naming collisions (like a field and a method with a similar name) are very rare, because current IDEs flag such code by default with a warning of their own.
