Make my logger efficient in my Java application - java

I am struggling with the following problem and would appreciate some help.
My application has a logger module. It takes the trace level and the message (as a string).
The messages often have to be constructed from different sources and/or in different ways (e.g. sometimes using String.format before logging, other times using the .toString methods of various objects, etc.). Therefore the way the messages are constructed cannot be generalized.
What I want is to make my logger module efficient. That means: a trace message should only be constructed if the current trace level would actually log it, and this without copy-pasted code all over my application.
In C/C++ this was very easy to achieve using macros:
#define LOG_IT(level, message) if(level>=App.actLevel_) LOG_MSG(message);
LOG_MSG and the string construction were only executed if the trace level enabled that message.
In Java, I can't find any similar possibility. What I want to achieve: the logging call should be a single line (no if-else copy-paste everywhere), and the string construction (an expensive operation) should only happen if necessary.
The only solution I know of is to surround every logger call with an if statement. But this is exactly what I avoided previously in the C++ app, and what I want to avoid in my current Java implementation.
My problem is that only Java 1.6 is available on the target system, so Supplier is not an option.
What can I do in Java? How can this C/C++ approach be reproduced easily?

Firstly, I would encourage you to read this if you're thinking about implementing your own logger.
Then, I'd encourage you to look at a well-established logging API such as SLF4J. Whilst it is possible to create your own, using a pre-existing API will save you time and effort, and above all provide you with more features and flexibility out of the box (e.g. file-based configuration, customisability; look at Mapped Diagnostic Context).
To your specific question, there isn't a simple way to do what you're trying to do. C/C++ are fundamentally different from Java in that the preprocessor allows for macros like the one you've created above. Java doesn't really have an easy-to-use equivalent, though there are projects that make use of compile-time code generation, which is probably the closest equivalent (e.g. Project Lombok, MapStruct).
The simplest way I know of to avoid expensive string building operations whilst logging is to surround the building of the string with a simple conditional:
if (logger.isTraceEnabled())
{
    // Really expensive operation here
}
Or, if you're using Java 8, the standard logging library (java.util.logging) has overloads that take a java.util.function.Supplier<String> argument, which will only be invoked if the current log level matches that of the logging method being called:
log.fine(() -> "Value is: " + getValue());
There is also currently a ticket open for SLF4j to implement this functionality here.
If you're really really set on implementing your own logger, the two above features are easy enough to implement yourself, but again I'd encourage you not to.
Edit: AspectJ compile-time weaving can be used to achieve something similar to what you're trying to do. It would allow you to wrap all your logging statements in a conditional check and remove the boilerplate.

Recent logging libraries, including java.util.logging, have a second form of the logging methods that takes a Supplier<String>.
e.g. log.info(() -> "Hello"); instead of log.info("Hello");.
The supplier's get() method is only called if the message actually has to be logged, so your string is only constructed in that case.
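As a minimal sketch of that behaviour with java.util.logging (the class name and the expensiveMessage() helper are made up for illustration):

import java.util.logging.Level;
import java.util.logging.Logger;

public class SupplierLoggingDemo {
    private static final Logger LOG = Logger.getLogger(SupplierLoggingDemo.class.getName());

    private static String expensiveMessage() {
        // Imagine heavy formatting or many toString() calls here.
        return "state=" + System.nanoTime();
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        // FINE is below the current level, so the supplier (and expensiveMessage) never runs.
        LOG.fine(() -> expensiveMessage());
        // INFO is enabled, so the supplier runs and the message is built.
        LOG.info(() -> expensiveMessage());
    }
}

This of course requires Java 8, which is exactly the constraint in the question; it is shown here only to illustrate what the Supplier overloads do.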

I think the most important thing to understand here is that the C/C++ macro solution does not save computational effort by skipping the construction of the logged message when the log level is such that the message would not be logged.
Why is that so? Simply because the macro approach makes the preprocessor substitute every usage of the macro:
LOG_IT(level, message)
with the code:
if(level>=App.actLevel_) LOG_MSG(message);
Substituting anything you passed as level and anything you passed as message along with the macro itself. The resulting code to be compiled will be exactly the same as if you had copied and pasted the macro code everywhere in your program. The only thing macros help you with is avoiding the actual copying and pasting, and making the code more readable and maintainable.
Sometimes they manage to do that; other times they make the code more cryptic and thus harder to maintain. In any case, macros do not provide deferred execution to save you from actually constructing the string, the way the Java 8 Logger class does by using lambda expressions. Java defers the execution of the body of a lambda until the last possible moment. In other words, the body of the lambda is executed after the if statement.
To go back to your example in C/C++: you, as a developer, would probably want the code to work regardless of the log level, so you would be forced to construct a valid string message and pass it to the macro; otherwise, at certain log levels, the program would crash! So, since the message string construction code must run before the call to the macro, you will execute it every time, regardless of the log level.
So, making the equivalent of your code is quite simple in Java 6! You just use the built-in Logger class. It provides support for logging levels automatically, so you do not need to create a custom implementation of them.
If what you are asking is how to implement deferred execution without lambdas, though, I do not think it is possible.
If you wanted real deferred execution in C/C++, you would have to make the logging code take a function pointer to a function returning the message string, execute the function passed via that pointer inside the if statement, and then call your macro passing not a string but a function that creates and returns the string! I believe the actual C/C++ code to do this is out of scope for this question... The key concept here is that C/C++ give you the tools for deferred execution simply because they support function pointers. Java does not support function pointers until Java 8.
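For completeness, a minimal Java 6 sketch of the guarded call that the built-in Logger makes easy (the class and the item parameter are illustrative only):

import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLoggingDemo {
    private static final Logger LOG = Logger.getLogger(GuardedLoggingDemo.class.getName());

    void process(Object item) {
        // The expensive concatenation only runs when FINE is actually enabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Processing item: " + item);
        }
    }
}

It is still the if guard the question wanted to avoid, but with java.util.logging it is one short, idiomatic line per call site.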

Related

Removal of a block of code during runtime

I need to remove a particular function or class from my Java code when it is being packaged into a .jar file using Maven. The caveat is that the function or class should stay in the source code.
Is there any way I can achieve this using Maven and/or any Java utilities?
(There are a lot of functions, roughly 400, and their implementations are very large as well, therefore commenting out the code is not an option.)
Okay, so the real problem is this:
We have a code base which includes certain parts that are not currently being used, but they may be used in the future, so we want to keep them in the code base, but we do not want them to be shipped to customers. (Due to, uhm, reasons.) What are the best practices for achieving this? Note that commenting them out would be impractical.
The proper way to achieve this is to extract all those parts into a separate module, and refrain from shipping that module.
The hacky way is to use a hard-coded feature flag.
A normal (non-hard-coded) feature flag is a boolean which controls a certain aspect of the behavior of our software. For example, if you build an mp3 player application, and you want to add support for the aac file format, but you do not want to ship support for it yet, then you might want to create a boolean supportAacFeatureFlag() method, and have all code that pertains to the aac file format invoke that method and check the return value before doing anything. It is important to note that this must be a method, not a constant, so that its value is not known at compilation time, because every single if statement that checks the value of a constant is bound to yield a "condition is always true" or "condition is always false" warning. The great benefit of feature flags over commenting-out code is that the code controlled by a feature flag must still compile, so it must be semantically correct. The problem with feature flags is that they do not eliminate the code; the code still gets shipped, it just does not get executed.
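For illustration, a sketch of such a non-hard-coded flag (the class, the system property name and playAac are invented for this example):

public final class Features {

    private Features() {}

    // Read at runtime (here from a system property), so the compiler cannot
    // treat it as a constant; the guarded code is compiled and shipped as usual.
    public static boolean supportAacFeatureFlag() {
        return Boolean.getBoolean("player.feature.aac");
    }
}

// Usage somewhere in the player code:
//     if (Features.supportAacFeatureFlag()) {
//         playAac(file);
//     }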
A hard-coded feature flag is a feature flag which is implemented using a constant. The constant condition warning will be issued in every single use of that flag, so it will have to be globally disabled. (That's why this approach is hacky: we normally want all warnings enabled.) The benefit of using a constant is that its value is known at compilation time, so even though the compiler will still compile the controlled code, it will refrain from emitting any bytecode for it, so the code essentially does not get shipped to customers. Only empty functions get shipped.
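And a sketch of the hard-coded variant (again, the names are invented). Because the flag is a compile-time constant, javac performs the conditional compilation described above and emits no bytecode for the guarded block:

public final class HardCodedFeatures {

    // Compile-time constant: flipping it to true requires a recompile.
    public static final boolean SUPPORT_AAC = false;

    private HardCodedFeatures() {}
}

// Usage: the body must still compile, but with SUPPORT_AAC == false
// no bytecode is generated for it, so it is effectively not shipped.
//     if (HardCodedFeatures.SUPPORT_AAC) {
//         playAac(file);
//     }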
Note that this is documented behavior of the Java compiler. In other languages like C++ and C# the compiler always emits all code, and you have to use other means of controlling code generation, like #defined symbols, which, in my opinion, are also very hacky.
An alternative that I am sure some people will opt for, but which I would strongly advise against, is to keep the unused code in a separate feature branch and remove it from the master branch. I would strongly advise against this because any refactorings applied to the master branch will not affect the feature branch, so the code will diverge and it will be a nightmare to integrate it in the future.

In java streams using .peek() is regarded as to be used for debugging purposes only, would logging be considered as debugging? [duplicate]

This question already has answers here:
In Java streams is peek really only for debugging?
So I have a list of objects, all or part of which I want to be processed, and I would like to log the objects that were processed.
Consider a fictional example:
List<ClassInSchool> classes;
classes
    .stream()
    .filter(verifyClassInSixthGrade())
    .filter(classHasNoClassRoom())
    .peek(classInSchool -> log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool))
    .forEach(findMatchingClassRoomIfAvailable());
Would using .peek() in this instance be regarded as unintended use of the API?
To further explain, in this question the key takeaway is: "Don't use the API in an unintended way, even if it accomplishes your immediate goal." My question is whether every use of peek, other than temporarily debugging your stream until you have verified the whole chain works as designed and then removing the .peek() again, is unintended use. In other words, whether using it as a means to log every object actually processed by the stream is considered unintended use.
The documentation of peek describes the intent as
This method exists mainly to support debugging, where you want to see the elements as they flow past a certain point in a pipeline.
An expression of the form .peek(classInSchool -> log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool)) fulfills this intent, as it is about reporting the processing of an element. It doesn’t matter whether you use a logging framework or just print statements, as in the documentation’s example, .peek(e -> System.out.println("Filtered value: " + e)). In either case, the intent matters, not the technical approach. If someone used peek with the intent to print all elements, it would be wrong, even if it used the same technical approach as the documentation’s example (System.out.println).
The documentation doesn’t mandate that you distinguish between a production environment and a debugging environment, and remove the peek usage for the former. Actually, your use would even fulfill that, as the logging framework allows you to mute the action via the configurable logging level.
I would still suggest keeping in mind that for some pipelines, inserting a peek operation may add more overhead than the actual operation (or hinder the JVM’s loop optimizations to such a degree). But if you are not experiencing performance problems, you may follow the old advice not to optimize unless you have a real reason…
peek should be avoided because for certain terminal operations it may not be called; see this answer. In that example it would probably be better to do the logging inside the action of forEach rather than using peek. Debugging in this situation means temporary code used for fixing a bug or diagnosing an issue.
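As a rough sketch of that alternative (reusing the question's placeholder methods, and assuming findMatchingClassRoomIfAvailable() returns a Consumer, as it does in the original snippet):

classes.stream()
       .filter(verifyClassInSixthGrade())
       .filter(classHasNoClassRoom())
       .forEach(classInSchool -> {
           // Log inside the terminal operation, so it runs exactly once per processed element.
           log.debug("Processing classroom {} in sixth grade without classroom.", classInSchool);
           findMatchingClassRoomIfAvailable().accept(classInSchool);
       });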
In java streams using .peek() is regarded as to be used for debugging purposes only, would logging be considered as debugging?
It depends on whether your logging code is going to be a permanent fixture of your code, or not.
Only you can really know the real purpose of your logging ...
Also note that the javadoc says:
In cases where the stream implementation is able to optimize away the production of some or all the elements (such as with short-circuiting operations like findFirst, or in the example described in count()), the action will not be invoked for those elements.
So, you are liable to find that in some circumstances peek won't be a reliable way to log (or debug) your pipeline.
In general, adding peek is liable to change the behavior of the pipeline and / or the JVM's ability to optimize it ... in a current or future generation JVM.
Eh, it's somewhat open to interpretation. Intent is something that's not always easy to determine.
I think the API note was mostly added to discourage an overzealous usage of peek when almost all desirable behaviour can be accomplished without it. It was just too useful for the developers to exclude it completely but they wanted to be clear that its inclusion was not to be taken as an unqualified endorsement; they saw the potential for misuse and they tried to address it.
I suspect - though I'm only speculating - that there were mixed opinions on whether to include it at all, and that including a version with a caveat in the JavaDoc was the compromise.
With that in mind, I think my suggestion for deciding when to use peek would simply be: don't use it unless you have a very good reason to.
In your case, you definitely don't have a good reason to. You're iterating over everything and passing the result to the method findMatchingClassRoomIfAvailable (well, presumably - your example wasn't very good). If you want to log something for each item in the stream, then just log it at the top of that method.
Is it misuse? I don't think so. Would I write it this way? No.

log4j/logback pass logger level as a parameter

I want to do something which seems really straightforward: just pass a lot of logging commands (maybe all, but particularly WARN and ERROR levels) through a method in a simple utility class. I want to do this in particular so that during testing I can suppress the actual log output by mocking the method which makes this call.
But I can't find out how, with ch.qos.logback.classic.Logger, to call a single method with the Level as a parameter... obviously I could use a switch statement based on this value, but some logging frameworks have a method or two which let you pass the logging Level as a parameter. It seems a bit "primitive" not to provide this.
The method might look a bit like this:
Logger.log(Level level, String msg)
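For illustration, a minimal sketch of the switch-based utility hinted at above (the class name, the Level enum and the delegation to the SLF4J facade are all assumptions of this example, not an existing API):

import org.slf4j.Logger;

public class LevelAwareLogger {

    public enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    private final Logger delegate;

    public LevelAwareLogger(Logger delegate) {
        this.delegate = delegate;
    }

    // Routes the message to the matching SLF4J method; being an ordinary
    // instance method, it can be mocked in tests to suppress real log output.
    public void log(Level level, String msg) {
        switch (level) {
            case TRACE: delegate.trace(msg); break;
            case DEBUG: delegate.debug(msg); break;
            case INFO:  delegate.info(msg);  break;
            case WARN:  delegate.warn(msg);  break;
            case ERROR: delegate.error(msg); break;
        }
    }
}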
Later
Having now looked up the "XY problem" I understand the scepticism about this question. Dynamic logging is considered bad, at least in Java (possibly less so in Python)... now I know and understand that the preferred route is to configure the logging appropriately for testing.
One minor point, though, if I may: although I haven't implemented this yet with this particular project, I generally find "just" tracing the stacktrace back to the beginning of the particular Thread insufficient, and this is what logback does (with Exceptions passed at WARN or ERROR levels). I plan to implement a system for recording "snapshots" of Threads when they run new Threads... which can then be listed (right back to the start of the app's first Thread) if an error occurs. This is, if you like, another justification for using something to "handle" outgoing log calls. I suppose that if I want to implement something like this I will instead have to try to extend some elements of logback in some way.

get the name of methods as they execute in a specific thread

In my Java application, I wish to get the name (and actual runtime class) of different methods as they execute in a specific thread. This is different from getting a regular stack trace: it's as if I were an observer watching the execution of my program and seeing the names of the different methods appear as they are run.
One (naive and practically infeasible) way is to put a print statement at the beginning of each method to print its name (class, line number, etc.). I can use the exception trace to get all the relevant info to print, BUT I would have to add at least one line to the beginning of each method, which is not very elegant. Furthermore, if I miss/forget to add a line at the beginning of any method, that method name would not be displayed. Also, this strategy is not future-proof. Is there any instrumentation or other technique that will help me accomplish this?
You should have a look at Aspect Oriented Programming (AOP) and the frameworks implementing it.
If you are using Spring, you can have a look at the SimpleTraceInterceptor, which will do exactly what you want: http://static.springsource.org/spring/docs/1.2.9/api/org/springframework/aop/interceptor/SimpleTraceInterceptor.html
Aside from AOP and the Spring interceptor mentioned in the other answer, you could use a trickier code-generation framework such as ASM or CGLIB. You'd be writing code to rewrite each of your classes at runtime and prepend these print instructions to the beginning of your methods. But that requires deeper knowledge of bytecode.
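If full bytecode rewriting is overkill, a lighter-weight illustration of the same interception idea is a JDK dynamic proxy. This is only a sketch (TracingProxy and the usage names are invented) and it works only for interface-typed methods:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class TracingProxy {

    private TracingProxy() {}

    @SuppressWarnings("unchecked")
    public static <T> T trace(final T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        // Print the thread, the actual runtime class and the method name as it executes.
                        System.out.println("[" + Thread.currentThread().getName() + "] "
                                + target.getClass().getName() + "." + method.getName());
                        return method.invoke(target, args);
                    }
                });
    }
}

// Usage (hypothetical): MyService s = TracingProxy.trace(new MyServiceImpl(), MyService.class);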

Finding the actual runtime call tree of a Java Program

Suppose I have a big program that consists of hundreds of methods. Depending on the nature of the input, the program flow changes.
Say I want to make a change to the original flow. It is a big hassle to find the call hierarchy/references and understand the flow.
Is there any solution for this within Eclipse, or a plugin? For example, I just need a log of method names in the order they were called. Then I don't need to worry about the methods that are not relevant to my given input.
Update: Using debug mode in Eclipse or adding print messages is not feasible. The program is sooooo big. :)
You could use AspectJ to log the name of all methods called without changing your original program.
See tracing for instance.
aspect SimpleTracing {
    pointcut tracedCall():
        call(void FigureElement.draw(GraphicsContext));

    before(): tracedCall() {
        System.out.println("Entering: " + thisJoinPoint);
    }
}
If all you want to know is what methods got called, rather than the precise order, you might consider using a test coverage tool. These tools instrument the source code to collect "this got executed" facts at various degrees of granularity (method call only, and/or every code block controlled by a conditional).
The SD Test Coverage Tool is a tool that will do this.
It won't collect the call graph or even the order of the calls.
If you want more control over the instrumentation, you can consider using the DMS Software Reengineering Toolkit. DMS will parse, transform, and prettyprint Java in arbitrary ways controlled by custom source-to-source program transformation rewrite rules. It would be easy to insert logging transformations into the start and exit of each method (and in fact this is almost exactly how the SD test coverage tool works). Given the raw enter-X and exit-X data, constructing the runtime call tree is a straightforward task.
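To make that last step concrete, here is a hypothetical sketch of turning a flat stream of enter/exit events into a call tree with a simple stack (the class and event format are made up; this is not part of DMS):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class CallTreeBuilder {

    public static final class Node {
        final String method;
        final List<Node> children = new ArrayList<Node>();
        Node(String method) { this.method = method; }
    }

    private final Node root = new Node("<root>");
    private final Deque<Node> stack = new ArrayDeque<Node>();

    public CallTreeBuilder() {
        stack.push(root);
    }

    // Called for each "enter X" event emitted by the instrumented code.
    public void enter(String method) {
        Node child = new Node(method);
        stack.peek().children.add(child);
        stack.push(child);
    }

    // Called for each "exit X" event; assumes enter/exit pairs are well nested.
    public void exit(String method) {
        stack.pop();
    }

    public Node getRoot() {
        return root;
    }
}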
