Log4j2, set log level at runtime for a specific thread - java

We have a webserver and multiple users log in to it. We generally set the log level to ERROR or INFO. But sometimes, for debugging purposes, we need to see more logs. There is a way to set the level at runtime, but that approach does not work well under heavy traffic: important logs get missed, and we don't know in advance how long we need to keep it that way. In log4j v1.2 I wrote a wrapper that simply skips the level check if the user id belongs to some TestUsersList, so it opens all logs for one particular user (a thread) only. A snippet is below:
public void trace(Object message) {
    // If the current thread's MDC marks this as a test user, log unconditionally.
    Object diagValue = MDC.get(LoggerConstants.IS_ANALYZER_NUMBER);
    if (valueToMatch.equals(diagValue)) { // some condition to check the test number
        forcedLog(FQCN, Level.TRACE, message, null);
        return;
    }
    // Otherwise fall back to the normal level check.
    if (repository.isDisabled(Level.TRACE_INT))
        return;
    if (Level.TRACE.isGreaterOrEqual(this.getEffectiveLevel()))
        forcedLog(FQCN, Level.TRACE, message, null);
}
But now that I have moved to log4j2, I don't want to write this wrapper again. Is there any built-in functionality that log4j2 provides for this?

This can be done with filters. Add a logger to the configuration that logs all the messages you want, then add a ThreadContextMapFilter that has a KeyValuePair for each user you want to log.
Then put the user ids in the Thread Context within the code.
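As a rough sketch of how the two pieces could fit together (the key name userId, the logger name, and the class are assumptions for illustration, not from the original answer):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class RequestHandler {

    /*
     * Assumed log4j2.xml fragment: a TRACE-level logger whose filters let
     * test users bypass the level check while everyone else stays at ERROR.
     *
     * <Logger name="com.example" level="TRACE" additivity="false">
     *     <Filters>
     *         <ThreadContextMapFilter onMatch="ACCEPT" onMismatch="NEUTRAL" operator="or">
     *             <KeyValuePair key="userId" value="testUser1"/>
     *             <KeyValuePair key="userId" value="testUser2"/>
     *         </ThreadContextMapFilter>
     *         <ThresholdFilter level="ERROR" onMatch="ACCEPT" onMismatch="DENY"/>
     *     </Filters>
     *     <AppenderRef ref="Console"/>
     * </Logger>
     */
    private static final Logger log = LogManager.getLogger(RequestHandler.class);

    public void handle(String userId) {
        // Make the user id visible to the filter for the current thread.
        ThreadContext.put("userId", userId);
        try {
            log.trace("full trace output, visible for test users only");
        } finally {
            ThreadContext.remove("userId"); // avoid leaking into pooled threads
        }
    }
}
```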

Related

Sending custom telemetry to Azure Insights

I have a Java Azure Function (runtime 3.2.0) in which I try to send some custom telemetry. I use the following code:
import com.microsoft.applicationinsights.TelemetryClient;
import com.microsoft.applicationinsights.TelemetryConfiguration;
import com.microsoft.applicationinsights.telemetry.SeverityLevel;

TelemetryConfiguration config = TelemetryConfiguration.getActive();
TelemetryClient telemetry = new TelemetryClient(config);
telemetry.trackEvent("Test Event");
telemetry.trackTrace("Test Trace", SeverityLevel.Warning);
telemetry.flush(); // not sure this is needed
When I check config.getInstrumentationKey(), it is the correct key for the Application Insights instance where I want to see the custom telemetry. However, I never receive the custom event and trace in my Insights. Also, config.isTrackingDisabled() is false and config.getChannel() seems to make sense.
Judging by all the code examples I have found, and by the official documentation as well, this seems to be all the code I need. When I use the logger from the ExecutionContext, logs appear in Application Insights, so my function has access to it. So I suspect I have overlooked some small but important fact, or some configuration of my function is not set correctly.
Can anybody help me get custom telemetry to work in my Java function?

How to store logs in a StringBuilder

I want to show the frontend user all the logs. How can I transport all the log statements to a frontend? For example:
void process() {
    // ...
    // currently this is shown in a file and in a console
    log.info("process called..");
}
How can I transport this log message to the frontend in an efficient manner? Should I append the logs into a StringBuilder? How can I do this with Log4j2?
Currently I have no JDBC store, but I could store all my logs in a NoSQL database. However, I cannot use the JDBCAppender (or the CassandraAppender). Should I avoid a Logger and do it myself:
Instead of
log.info("process called..");
I could use
user.addLog("process called..");
Would it be better to get the string value of log.info()? If so, how?
The best approach would be to store your logs in a database with a JDBCAppender. When the user requests the logs, you can decide how many of them to load and return in your response.
If you held all your logs in memory, e.g. in a StringBuffer, you could run out of memory and kill your application. Also, on a server restart all your logs would be lost. Both problems are prevented by storing the logs in a database and accessing them on demand.
If you really need a StringAppender for custom integration, you have to write it yourself, extending AbstractOutputStreamAppender.
Here is a blog post with code about it.
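For illustration only, a minimal in-memory appender might look like the sketch below. It extends AbstractAppender rather than AbstractOutputStreamAppender for brevity, assumes Log4j2 2.12+ (for the Property.EMPTY_ARRAY constructor), and is deliberately naive: the buffer grows without bound, which is exactly the trade-off described above. Registering it with the configuration is omitted.

```java
import java.io.Serializable;

import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.Property;

// Hypothetical in-memory appender; NOT production-safe (unbounded growth,
// all logs lost on restart).
public final class StringBuilderAppender extends AbstractAppender {

    private final StringBuilder buffer = new StringBuilder();

    public StringBuilderAppender(String name, Filter filter,
            Layout<? extends Serializable> layout) {
        super(name, filter, layout, true, Property.EMPTY_ARRAY);
    }

    @Override
    public synchronized void append(LogEvent event) {
        // With a PatternLayout, toSerializable() returns the formatted line.
        buffer.append(getLayout().toSerializable(event));
    }

    public synchronized String getLogs() {
        return buffer.toString();
    }
}
```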

Safe way to use batch listener

I am trying to use spring-kafka 1.3.x (1.3.3 and 1.3.4). What is not clear is whether there is a safe way to consume messages in batches without skipping a message (or set of messages) when an exception occurs, e.g. a network outage. My preference is also to leverage the container's capabilities as much as possible, so as to remain within the Spring framework rather than building a custom framework for dealing with this challenge.
I am setting the following properties on a ConcurrentMessageListenerContainer:
.setAckOnError(false);
.setAckMode(AckMode.MANUAL);
I am also setting the following kafka specific consumer properties:
enable.auto.commit=false
auto.offset.reset=earliest
If I set a RetryTemplate, I get a ClassCastException, since retry only works for non-batch consumers. The documentation states that retry is not available for batch listeners, so this may be OK.
I then set up a consumer such as this one:
```java
@KafkaListener(containerFactory = "containerFactory",
        groupId = "myGroup",
        topics = "myTopic")
public void onMessage(@Payload List<Entries> batchedData,
        @Header(required = false,
                value = KafkaHeaders.OFFSET) List<Long> offsets,
        Acknowledgment ack) {
    log.info("Working on: {}", offsets);
    int x = 1;
    if (x == 1) {
        log.info("Failure on: {}", offsets);
        throw new RuntimeException("mock failure");
    }
    // do nothing else for now
    // unreachable code
    ack.acknowledge();
}
```
When I send a message into the system to mock the exception above, the only visible action is that the listener reports the exception.
When I send another (new) message into the system, the container consumes the new message. The old message is skipped, since the offset is advanced to the next offset.
Since I have asked the container not to acknowledge (directly or indirectly), and since there are no other properties that I can see to tell the container not to advance, I am confused about why the container does advance.
What I noticed is that for a similar scenario, the recommendation is to upgrade to 2.1.x and use the container-stop capability that was added to the ContainerAwareErrorHandler there.
But if you are trapped on 1.3.x for the time being: is there a way, or a missing property, that can be used to ensure the container does not advance to the next message or batch of messages?
I can see an option to create a custom framework around the consumer in order to achieve the desired effect. But are there other options that are simpler and more Spring-friendly?
Thoughts?
From @garyrussell (spring-kafka GitHub project):
The offset has not been committed but the broker won't send the data again. You have to re-seek the topics/partitions.
2.1 provides the SeekToCurrentBatchErrorHandler which will re-seek automatically for you.
2.0 Added consumer-aware listeners, giving you access to the consumer (for seeking) in the listener.
With 1.3.x you have to implement ConsumerSeekAware and perform the seeks yourself (in the listener after catching the exception). Save off the ConsumerSeekCallback in a ThreadLocal.
You will need to add the partitions to your method signature; then seek to the lowest offset in the list for each partition.
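A rough sketch of that 1.3.x approach might look like the following. This is an illustration only, not a definitive implementation: Entries is the payload type from the question, the class and bean names are made up, and the actual batch processing is elided.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.ConsumerSeekAware;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

public class SeekingBatchListener implements ConsumerSeekAware {

    // The callback is registered per consumer thread, so keep it in a ThreadLocal.
    private final ThreadLocal<ConsumerSeekCallback> seekCallback = new ThreadLocal<>();

    @Override
    public void registerSeekCallback(ConsumerSeekCallback callback) {
        this.seekCallback.set(callback);
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // no-op
    }

    @Override
    public void onIdleContainer(Map<TopicPartition, Long> assignments,
            ConsumerSeekCallback callback) {
        // no-op
    }

    @KafkaListener(containerFactory = "containerFactory", groupId = "myGroup",
            topics = "myTopic")
    public void onMessage(@Payload List<Entries> batchedData, // Entries: the question's type
            @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
            @Header(KafkaHeaders.OFFSET) List<Long> offsets,
            Acknowledgment ack) {
        try {
            // process batchedData ...
            ack.acknowledge();
        } catch (RuntimeException e) {
            // Re-seek each partition to the lowest offset seen in this batch
            // so the records are redelivered instead of skipped.
            Map<TopicPartition, Long> lowest = new HashMap<>();
            for (int i = 0; i < offsets.size(); i++) {
                TopicPartition tp = new TopicPartition(topics.get(i), partitions.get(i));
                lowest.merge(tp, offsets.get(i), Math::min);
            }
            lowest.forEach((tp, offset) ->
                    seekCallback.get().seek(tp.topic(), tp.partition(), offset));
            throw e;
        }
    }
}
```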

SLF4J logging, different Levels

In SLF4J (logging), how do the levels differ in their characteristics? I.e., how is an ERROR message different from a DEBUG message?
import org.apache.log4j.Logger;

public class LogClass {
    private static final Logger log = Logger.getLogger(LogClass.class);

    public static void main(String[] args) {
        log.trace("Trace Message!");
        log.debug("Debug Message!");
        log.info("Info Message!");
        log.warn("Warn Message!");
        log.error("Error Message!");
        log.fatal("Fatal Message!");
    }
}
The output is the same regardless of the level, so is there any difference in the implementation:
Debug Message!
Info Message!
Warn Message!
Error Message!
Fatal Message!
If these levels produce the same kind of messages, then why doesn't the implementation have just one method with the level as a parameter?
Something like:
log("Level","msg");
Starting from the bottom: there is no real benefit to having a log(level, msg) method if you already have a dedicated method for each possible level. It would only help if you needed to log the same message at different levels, but that is bad practice, since a message should clearly fall into one specific category. And you can always choose how much logging output you get by specifying the level globally or at the package/class level.
The messages are exactly the same on each level; the only difference is whether a message makes it to the logging output or not, based on your configuration and on the purpose you give to each level.
The key purpose of having named levels is to enable you to debug at various depths. For example:
The INFO level can be used to log high-level information about the progress of the application during execution.
The DEBUG level is meant to go deeper than high-level information. At DEBUG level you can log more detail, including what is happening at a module or component level.
The TRACE level is even more granular. You can log messages such as entering and exiting a method, and what information is returned by each method.
The ERROR level is purely meant for logging errors and exceptions.
You need to be mindful of what kind of message gets logged at each respective level.
To answer your question: these levels can be controlled in log4j.properties or log4j.xml, where you specify the level at which the application should log. If everything goes well in the application, I would leave it at INFO level. If something goes wrong and I want to dig deeper, I would turn on the DEBUG or TRACE level.
Also, understand that when you log at DEBUG level, the INFO level logs will be printed as well. If you turn on the TRACE level, the DEBUG and INFO level logs will also be printed. If you turn on the INFO level, only logs at INFO level and above will be printed.
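As a small illustration of that threshold behaviour, here is a hedged log4j 1.x sketch (the same rules apply when the level is set in log4j.properties instead of programmatically):

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class ThresholdDemo {
    public static void main(String[] args) {
        BasicConfigurator.configure();      // simple console appender
        Logger log = Logger.getLogger(ThresholdDemo.class);

        log.setLevel(Level.INFO);           // threshold: INFO and above
        log.debug("Debug Message!");        // below the threshold, suppressed
        log.info("Info Message!");          // at the threshold, printed

        log.setLevel(Level.TRACE);          // lower the threshold
        log.trace("Trace Message!");        // now printed as well
    }
}
```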
I hope this brings some clarity.
Because it is easier for you, as a user, to use. Internally, the implementation might well have that very code.
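In fact, log4j 1.x does expose a generic, level-parameterized call alongside the convenience methods; a minimal sketch:

```java
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class GenericLogCall {
    private static final Logger log = Logger.getLogger(GenericLogCall.class);

    public static void main(String[] args) {
        // The generic, level-parameterized form...
        log.log(Level.WARN, "Warn Message!");
        // ...and the equivalent convenience method.
        log.warn("Warn Message!");
    }
}
```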

How to customize logging for Google App Engine Java?

Google App Engine Java uses java.util.logging to create log messages. I want to modify the log messages that are displayed in Developers Console - Monitoring - Logs. The idea is to add some additional output, like the username, without putting it into each log message manually:
log.info("user action");
should result in a logging output like
user "testuser": user action
Therefore I created my own Formatter:
import java.util.logging.Formatter;
import java.util.logging.LogRecord;

public class TestFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // find out username..
        return "user " + username + ": " + record.getMessage();
    }
}
Setting this as the formatter for the ConsoleHandler in logging.properties has no effect:
java.util.logging.ConsoleHandler.formatter = com.example.guestbook.TestFormatter
When deploying it on the local machine and trying to add it programmatically like this:
Logger rootLogger = Logger.getLogger("");
Handler[] handlers = rootLogger.getHandlers();
log.info("Handler[] size: " + handlers.length);
for (Handler h : handlers) {
    log.info(h.toString());
    h.setFormatter(new TestFormatter());
}
I get two handlers, a ConsoleHandler and a DevLogHandler. But setting the formatter results in no further logs being displayed. On GAE, instead, I get zero handlers.
When trying to access Logger.getGlobal() instead of Logger.getLogger(""), I get zero handlers on the local instance and a SecurityException: No permission to modify global on GAE. This exception already arises when merely trying to get the list of handlers.
Now my question: is there a way to modify the logs of the Developers Console in such a way? If yes, how?
This is a reply I got in the past to a Google ticket I opened for a similar question:
I would discourage tampering with the Loggers/Handlers used internally by GAE.
Besides that, the global Logger cannot be customized; you can try it with a Logger with a custom name instead.
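Following that advice, a hedged sketch of attaching the question's TestFormatter to a custom-named logger (rather than the root or global logger) might look like this; whether GAE's sandbox permits it is not guaranteed:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Logger;

public class GuestbookLogging {

    // A named logger can carry its own handler and formatter without
    // touching the handlers GAE manages internally.
    private static final Logger log = Logger.getLogger("com.example.guestbook");

    static {
        ConsoleHandler handler = new ConsoleHandler();
        handler.setFormatter(new TestFormatter()); // formatter from the question
        log.addHandler(handler);
        log.setUseParentHandlers(false); // avoid a second, unformatted copy
    }

    public static void main(String[] args) {
        log.info("user action"); // rendered by TestFormatter
    }
}
```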
