We have a log-collecting service that automatically splits messages at 64KB, but the split is not elegant at all. We print individual log messages as JSON blobs with some additional metadata, and sometimes these include large stack traces that we want to preserve in full.
So I was looking into writing a custom logger or appender wrapper that would take the message, split it into smaller chunks, and re-log it, but that's looking non-trivial.
Is there a simple way to configure logback to split up its messages into multiple separate messages if the size of the message is greater than some value?
Here is the appender configuration:
<!-- Sumo optimized rolling log file -->
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <Append>true</Append>
    <file>${log.dir}/${service.name}-sumo.log</file>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <timestamp>
                <fieldName>t</fieldName>
                <pattern>yyyy-MM-dd'T'HH:mm:ss.SSS'Z'</pattern>
                <timeZone>UTC</timeZone>
            </timestamp>
            <message/>
            <loggerName/>
            <threadName/>
            <logLevel/>
            <stackTrace>
                <if condition='isDefined("throwable.converter")'>
                    <then>
                        <throwableConverter class="${throwable.converter}"/>
                    </then>
                </if>
            </stackTrace>
            <mdc/>
            <tags/>
        </providers>
    </encoder>
    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
        <maxIndex>1</maxIndex>
        <FileNamePattern>${log.dir}/${service.name}-sumo.log.%i</FileNamePattern>
    </rollingPolicy>
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <MaxFileSize>256MB</MaxFileSize>
    </triggeringPolicy>
</appender>
<appender name="sumo" class="ch.qos.logback.classic.AsyncAppender">
    <queueSize>500</queueSize>
    <discardingThreshold>0</discardingThreshold>
    <appender-ref ref="file" />
</appender>
The solution I came up with is simply to wrap my logger in something that splits up messages nicely. Note that I'm primarily interested in splitting messages with Throwables, since those are what cause long messages.
It's written with lambdas, so it requires Java 8.
Also note this code is not fully tested; I'll update it if I find any bugs.
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;
import java.util.function.Consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.helpers.MarkerIgnoringBase;

public class MessageSplittingLogger extends MarkerIgnoringBase {
    // Target size is 64KB for the split. UTF-8 nominally has 1-byte characters, but some characters
    // use more than 1 byte, so leave some wiggle room. Also leave room for additional metadata.
    private static final int MAX_CHARS_BEFORE_SPLIT = 56000;
    private static final String ENCODING = "UTF-8";

    private final Logger LOGGER;

    public MessageSplittingLogger(Class<?> clazz) {
        this.LOGGER = LoggerFactory.getLogger(clazz);
    }

    private void splitMessageAndLog(String msg, Throwable t, Consumer<String> logLambda) {
        String combinedMsg = msg + (t != null ? "\nStack Trace:\n" + printStackTraceToString(t) : "");
        int totalMessages = combinedMsg.length() / MAX_CHARS_BEFORE_SPLIT;
        if (combinedMsg.length() % MAX_CHARS_BEFORE_SPLIT > 0) {
            totalMessages++;
        }
        int index = 0;
        int msgNumber = 1;
        while (index < combinedMsg.length()) {
            String messageNumber = totalMessages > 1 ? "(" + msgNumber++ + " of " + totalMessages + ")\n" : "";
            logLambda.accept(messageNumber + combinedMsg.substring(index, Math.min(index + MAX_CHARS_BEFORE_SPLIT, combinedMsg.length())));
            index += MAX_CHARS_BEFORE_SPLIT;
        }
    }

    /**
     * Get the stack trace as a String
     */
    private String printStackTraceToString(Throwable t) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            PrintStream ps = new PrintStream(baos, true, ENCODING);
            t.printStackTrace(ps);
            return baos.toString(ENCODING);
        } catch (UnsupportedEncodingException e) {
            return "Exception printing stack trace: " + e.getMessage();
        }
    }

    @Override
    public String getName() {
        return LOGGER.getName();
    }

    @Override
    public boolean isTraceEnabled() {
        return LOGGER.isTraceEnabled();
    }

    @Override
    public void trace(String msg) {
        LOGGER.trace(msg);
    }

    @Override
    public void trace(String format, Object arg) {
        LOGGER.trace(format, arg);
    }

    @Override
    public void trace(String format, Object arg1, Object arg2) {
        LOGGER.trace(format, arg1, arg2);
    }

    @Override
    public void trace(String format, Object... arguments) {
        LOGGER.trace(format, arguments);
    }

    @Override
    public void trace(String msg, Throwable t) {
        splitMessageAndLog(msg, t, LOGGER::trace);
    }

    // ... Similarly wrap calls to debug/info/error
}
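For illustration, a hypothetical call site would look like the sketch below. PaymentService and processPayment are made-up names, and it assumes the error(String, Throwable) overload is wrapped the same way as trace(String, Throwable) above:

public class PaymentService {
    // Made-up example class; the wrapper is constructed per class, like a normal SLF4J logger
    private static final MessageSplittingLogger LOG = new MessageSplittingLogger(PaymentService.class);

    public void processPayment() {
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            // A huge stack trace is re-logged as "(1 of N)", "(2 of N)", ... chunks,
            // each under the 56,000-character limit
            LOG.error("Payment processing failed", e);
        }
    }
}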
Related
I am currently struggling with masking data in the logs intercepted at the SOAP client. I have taken the approach of writing a customized PatternLayout:
public class PatternMaskingLayout extends ch.qos.logback.classic.PatternLayout {

    private Pattern multilinePattern;
    private final List<String> maskPatterns = new ArrayList<>();

    public void addMaskPattern(String maskPattern) {
        maskPatterns.add(maskPattern);
        multilinePattern = Pattern.compile(
                String.join("|", maskPatterns),
                Pattern.MULTILINE
        );
    }

    @Override
    public String doLayout(ILoggingEvent event) {
        return maskMessage(super.doLayout(event)); // calling the superclass method is required
    }

    private String maskMessage(String message) {
        if (multilinePattern == null) {
            return message;
        }
        StringBuilder sb = new StringBuilder(message);
        Matcher matcher = multilinePattern.matcher(sb);
        while (matcher.find()) {
            IntStream.rangeClosed(1, matcher.groupCount()).forEach(group -> {
                if (matcher.group(group) != null) {
                    IntStream.range(matcher.start(group), matcher.end(group))
                            .forEach(i -> sb.setCharAt(i, '*')); // replace each character with an asterisk
                }
            });
        }
        return sb.toString();
    }
}
My logback-spring.xml appender looks like:
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="app.example.monitoring.tracing.PatternMaskingLayout">
        <maskPattern>\"username\"\s*:\s*\"(.*?)\"</maskPattern>
        <pattern>
            ${LOGBACK_LOGGING_PATTERN:-%d{yyyy-MM-dd HH:mm:ss.SSS} ${LOG_LEVEL_PATTERN:-%5p} ${PID:- } --- [%15.15t] %logger{36} : %msg %replace(%ex){'\n','\\u000a'}%nopex%n}
        </pattern>
    </layout>
</appender>
I still cannot get the username masked. The XML field looks like <xa2:username>John</xa2:username>.
Does anyone have experience with this?
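For what it's worth: the maskPattern above only matches JSON-style "username" fields, not XML tags. A SOAP payload like the one shown would need an additional pattern whose capture group spans the tag body. An untested sketch, with the angle brackets escaped as entities for the XML config and the namespace prefix (xa2 here) treated as variable:

<maskPattern>&lt;\w+:username&gt;(.*?)&lt;/\w+:username&gt;</maskPattern>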
I am using Spring Boot in a project and am currently exploring the logging behaviour. For that I'm using the Zipkin service.
My logback.xml is as follows:
<appender name="STASH"
          class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>192.168.0.83:5000</destination>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <providers>
            <mdc /> <!-- MDC variables on the Thread will be written as JSON fields -->
            <context /> <!-- Outputs entries from logback's context -->
            <version /> <!-- Logstash json format version, the @version field in the output -->
            <logLevel />
            <loggerName />
            <pattern>
                <pattern>
                    {
                    "serviceName": "zipkin-demo"
                    }
                </pattern>
            </pattern>
            <threadName />
            <message />
            <logstashMarkers />
            <arguments />
            <stackTrace />
        </providers>
    </encoder>
</appender>
<root level="info">
    <appender-ref ref="STASH" />
</root>
Now, from what I have understood, to log in JSON you need a custom logger implementation, which I have done as follows:
public class CustomLogger {

    public static void info(Logger logger, Message message, String msg) {
        log(logger.info(), message, msg).log();
    }

    private static JsonLogger log(JsonLogger logger, Message message, String msg) {
        try {
            if (message == null) {
                return logger.field("Message", "Empty message").field("LogTime", new Date());
            }
            logger
                    .field("message", message.getMessage())
                    .field("id", message.getId());
            StackTraceElement ste = (new Exception()).getStackTrace()[2];
            logger.field(
                    "Caller",
                    ste.getClassName() + "." + ste.getMethodName() + ":" + ste.getLineNumber());
            return logger;
        } catch (Exception e) {
            return logger.exception("Error while logging", e).stack();
        }
    }
}
My message class is:
public class Message {
    private int id;
    private String message;
    // ... constructor, getters and setters
}
Now in my controller class I'm using my custom logger as:
static com.savoirtech.logging.slf4j.json.logger.Logger log = com.savoirtech.logging.slf4j.json.LoggerFactory.getLogger(Controller.class.getClass());
Message msg = new Message(1, "hello");
CustomLogger customLogger = new CustomLogger();
customLogger.info(log, msg, "message");
My logs end up in Kibana as:
"message": "{\"message\":\"hello\",\"id\":1,\"Caller\":\"<class_name>:65\",\"level\":\"INFO\",\"thread_name\":\"http-nio-3333-exec-2\",\"class\":\"<custom_logger_class>\",\"logger_name\":\"java.lang.Class\",\"#timestamp\":\"2018-07-20 19:02:14.090+0530\",\"mdc\":{\"X-B3-TraceId\":\"554c43b0275c3430\",\"X-Span-Export\":\"true\",\"X-B3-SpanId\":\"368702fffa40d2cd\"}}",
Now, instead of this, I want the message part of the log to be a nested JSON entry rather than an escaped string. Where am I going wrong?
I have searched extensively for a way to do this, but to no avail. Is it even possible?
I finally got it working. I just changed the logstash config file and added:
input
{
    tcp
    {
        port => 5000
        type => "java"
        codec => "json"
    }
}
filter
{
    if [type] == "java"
    {
        json
        {
            source => "message"
        }
    }
}
The filter part was missing earlier; without it, Logstash kept the JSON emitted by my logger as one escaped string in the message field instead of parsing it into separate fields.
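To illustrate, using the field values from the Kibana output above, the same event is now indexed with the logger's fields promoted to the top level, roughly:

{
    "message": "hello",
    "id": 1,
    "Caller": "<class_name>:65",
    "level": "INFO",
    "thread_name": "http-nio-3333-exec-2",
    ...
}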
I have a project running Java in a Docker image on Kubernetes. Logs are automatically ingested by the fluentd agent and end up in Stackdriver.
However, the format of the logs is wrong: multiline logs get put into separate log lines in Stackdriver, and all logs have the "INFO" log level, even though they are really warnings or errors.
I have been searching for information on how to configure logback to output the correct format for this to work properly, but I can find no such guide in the Google Stackdriver or GKE documentation.
My guess is that I should be outputting JSON of some form, but where do I find information on the format, or even a guide on how to properly set up this pipeline?
Thanks!
This answer contained most of the information I needed: https://stackoverflow.com/a/39779646
I have adapted the answer to fit my exact question and to fix some weird imports and code that seemed to have been deprecated.
logback.xml:
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="my.package.logging.GCPCloudLoggingJSONLayout">
                <pattern>%-4relative [%thread] %-5level %logger{35} - %msg</pattern>
            </layout>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
GCPCloudLoggingJSONLayout:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.spi.ILoggingEvent;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

import static ch.qos.logback.classic.Level.DEBUG_INT;
import static ch.qos.logback.classic.Level.ERROR_INT;
import static ch.qos.logback.classic.Level.INFO_INT;
import static ch.qos.logback.classic.Level.TRACE_INT;
import static ch.qos.logback.classic.Level.WARN_INT;

/**
 * GKE fluentd ingestion detective work:
 * https://cloud.google.com/error-reporting/docs/formatting-error-messages#json_representation
 * http://google-cloud-python.readthedocs.io/en/latest/logging-handlers-container-engine.html
 * http://google-cloud-python.readthedocs.io/en/latest/_modules/google/cloud/logging/handlers/container_engine.html#ContainerEngineHandler.format
 * https://github.com/GoogleCloudPlatform/google-cloud-python/blob/master/logging/google/cloud/logging/handlers/_helpers.py
 * https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
 */
public class GCPCloudLoggingJSONLayout extends PatternLayout {

    private static final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String doLayout(ILoggingEvent event) {
        String formattedMessage = super.doLayout(event);
        return doLayoutInternal(formattedMessage, event);
    }

    /**
     * For testing without having to deal with the complexity of super.doLayout().
     * Uses formattedMessage instead of event.getMessage().
     */
    private String doLayoutInternal(String formattedMessage, ILoggingEvent event) {
        GCPCloudLoggingEvent gcpLogEvent =
                new GCPCloudLoggingEvent(formattedMessage, convertTimestampToGCPLogTimestamp(event.getTimeStamp()),
                        mapLevelToGCPLevel(event.getLevel()), event.getThreadName());
        try {
            // Add a newline so that each JSON log entry is on its own line.
            // Note that it is also important that the JSON log entry does not span multiple lines.
            return objectMapper.writeValueAsString(gcpLogEvent) + "\n";
        } catch (JsonProcessingException e) {
            return "";
        }
    }

    private static GCPCloudLoggingEvent.GCPCloudLoggingTimestamp convertTimestampToGCPLogTimestamp(
            long millisSinceEpoch) {
        // Strip out just the milliseconds and convert them to nanoseconds
        int nanos = ((int) (millisSinceEpoch % 1000)) * 1_000_000;
        long seconds = millisSinceEpoch / 1000L; // remove the milliseconds
        return new GCPCloudLoggingEvent.GCPCloudLoggingTimestamp(seconds, nanos);
    }

    private static String mapLevelToGCPLevel(Level level) {
        switch (level.toInt()) {
            case TRACE_INT:
                return "TRACE";
            case DEBUG_INT:
                return "DEBUG";
            case INFO_INT:
                return "INFO";
            case WARN_INT:
                return "WARN";
            case ERROR_INT:
                return "ERROR";
            default:
                return null; /* This should map to no level in GCP Cloud Logging */
        }
    }

    /* Must be public for Jackson JSON conversion */
    public static class GCPCloudLoggingEvent {
        private String message;
        private GCPCloudLoggingTimestamp timestamp;
        private String thread;
        private String severity;

        public GCPCloudLoggingEvent(String message, GCPCloudLoggingTimestamp timestamp, String severity,
                String thread) {
            super();
            this.message = message;
            this.timestamp = timestamp;
            this.thread = thread;
            this.severity = severity;
        }

        public String getMessage() {
            return message;
        }

        public void setMessage(String message) {
            this.message = message;
        }

        public GCPCloudLoggingTimestamp getTimestamp() {
            return timestamp;
        }

        public void setTimestamp(GCPCloudLoggingTimestamp timestamp) {
            this.timestamp = timestamp;
        }

        public String getThread() {
            return thread;
        }

        public void setThread(String thread) {
            this.thread = thread;
        }

        public String getSeverity() {
            return severity;
        }

        public void setSeverity(String severity) {
            this.severity = severity;
        }

        /* Must be public for JSON marshalling logic */
        public static class GCPCloudLoggingTimestamp {
            private long seconds;
            private int nanos;

            public GCPCloudLoggingTimestamp(long seconds, int nanos) {
                super();
                this.seconds = seconds;
                this.nanos = nanos;
            }

            public long getSeconds() {
                return seconds;
            }

            public void setSeconds(long seconds) {
                this.seconds = seconds;
            }

            public int getNanos() {
                return nanos;
            }

            public void setNanos(int nanos) {
                this.nanos = nanos;
            }
        }
    }

    @Override
    public Map<String, String> getDefaultConverterMap() {
        return PatternLayout.defaultConverterMap;
    }
}
As I said earlier, the code was originally from another answer; I have just cleaned it up slightly to fit my use case better.
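For reference, each log line produced by this layout is a single-line JSON object along these lines (the values are illustrative, following the %-4relative [%thread] %-5level %logger{35} - %msg pattern above):

{"message":"1234 [main] INFO  com.example.App - starting up","timestamp":{"seconds":1532094734,"nanos":90000000},"thread":"main","severity":"INFO"}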
Google has provided a logback appender for Stackdriver. I have enhanced it a bit to include the thread name in the logging labels, so that entries can be searched more easily.
pom.xml
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-logging-logback</artifactId>
    <version>0.116.0-alpha</version>
</dependency>
logback-spring.xml
<springProfile name="prod-gae">
    <appender name="CLOUD"
              class="com.google.cloud.logging.logback.LoggingAppender">
        <log>clicktrade.log</log>
        <loggingEventEnhancer>com.jimmy.clicktrade.arch.CommonEnhancer</loggingEventEnhancer>
        <flushLevel>WARN</flushLevel>
    </appender>
    <root level="info">
        <appender-ref ref="CLOUD" />
    </root>
</springProfile>
CommonEnhancer.java
import ch.qos.logback.classic.spi.ILoggingEvent;
import com.google.cloud.logging.LogEntry.Builder;
import com.google.cloud.logging.logback.LoggingEventEnhancer;

public class CommonEnhancer implements LoggingEventEnhancer {

    @Override
    public void enhanceLogEntry(Builder builder, ILoggingEvent e) {
        builder.addLabel("thread", e.getThreadName());
    }
}
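With this in place every entry carries a thread label, so you can filter on it in the Logs Viewer with a query along these lines (assuming the standard label query syntax; the thread name is an example):

labels.thread="http-nio-8080-exec-1"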
Surprisingly, the logback appender in the Maven repository doesn't align with the GitHub repo source code, so I had to dig into the JAR's source. The latest published version is 0.116.0-alpha, while GitHub appears to be at 0.120:
https://github.com/googleapis/google-cloud-java/tree/master/google-cloud-clients/google-cloud-contrib/google-cloud-logging-logback
You can use the glogging library.
Simply add it as a dependency and use the provided layout in logback.xml:
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="io.github.aaabramov.glogging.GoogleLayout">
                <!-- You have a choice of which JSON encoder to use, or you can create your own by implementing the JsonEncoder interface -->
                <json>io.github.aaabramov.glogging.JacksonEncoder</json>
                <!-- OR -->
                <!-- <json>io.github.aaabramov.glogging.GsonEncoder</json> -->
                <!-- Optionally append "${prefix}/loggerName" labels -->
                <appendLoggerName>true</appendLoggerName>
                <!-- Optionally configure a prefix for labels -->
                <prefix>com.yourcompany</prefix>
                <!-- Provide whatever message pattern you like. -->
                <!-- Note: there is no longer any need to log timestamps & levels in the message; Google will pick them up from the dedicated fields. -->
                <pattern>%message %xException{10}</pattern>
            </layout>
        </encoder>
    </appender>

    <appender name="ASYNCSTDOUT" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT"/>
    </appender>

    <!-- Configure logging levels -->
    <logger name="com.github" level="DEBUG"/>

    <root level="DEBUG">
        <appender-ref ref="ASYNCSTDOUT"/>
    </root>
</configuration>
It will produce messages in a format that Google Stackdriver Logging will gladly accept:
{"timestamp":{"seconds":1629642099,"nanos":659000000},"severity":"DEBUG","message":"debug","labels":{"io.github.aaabramov/name":"Andrii","io.github.aaabramov/loggerName":"io.github.aaabramov.glogging.App"}}
Disclaimer: I am the author of this library.
Credits: https://stackoverflow.com/a/44168383/5091346
I am migrating my application from the log4j API to log4j2. During migration I found that custom pattern layouts, pattern parsers and pattern converters are used, and I am not aware of how to implement these using log4j2 plugins. Can anyone help me convert this custom layout, TestPatternLayout, to log4j2? Many thanks.
Below are the complete details of how the custom pattern layout is implemented with log4j.
TestPatternLayout:
public class TestPatternLayout extends PatternLayout {

    @Override
    protected PatternParser createPatternParser(String pattern) {
        return new TestPatternParser(pattern);
    }
}
TestPatternParser:
public class TestPatternParser extends PatternParser {

    private static final char Test_CHAR = 'e';
    private static final char DATETIME_CHAR = 'd';

    public TestPatternParser(String pattern) {
        super(pattern);
    }

    @Override
    protected void finalizeConverter(char c) {
        switch (c) {
            case Test_CHAR:
                currentLiteral.setLength(0);
                addConverter(new TestPatternConverter());
                break;
            default:
                super.finalizeConverter(c);
        }
    }
}
TestPatternConverter:
public class TestPatternConverter extends PatternConverter {

    @Override
    protected String convert(LoggingEvent event) {
        String testID = ObjectUtils.EMPTY_STRING;
        if (TestLogHandler.isTestLogEnabled()) {
            TestContextHolder contextHolder = TestLogHandler.getLatestContextHolderFromStack(event.getThreadName());
            if (contextHolder != null) {
                testID = contextHolder.getTestIDForThread(event.getThreadName());
            } else {
                testID = TestContextHolder.getTestIDForThread(event.getThreadName());
            }
        }
        return testID;
    }
}
Layout definition in log4j.xml:
<appender name="TEST_LOG_FILE" class="org.apache.log4j.RollingFileAppender">
    ...
    <layout class="com.test.it.logging.TestPatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c{1}] [TestId: %e] [%t] %m%n"/>
    </layout>
    ...
</appender>
You just need to create a new Plugin and extend LogEventPatternConverter:
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.pattern.ConverterKeys;
import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;
import org.apache.logging.log4j.core.pattern.PatternConverter;

@Plugin(name = "TestPatternConverter", category = PatternConverter.CATEGORY)
@ConverterKeys({"e"})
public final class TestPatternConverter extends LogEventPatternConverter {

    /**
     * Private constructor.
     * @param options options, may be null.
     */
    private TestPatternConverter(final String[] options) {
        super("TestId", "testId");
    }

    /**
     * Obtains an instance of pattern converter.
     *
     * @param options options, may be null.
     * @return instance of pattern converter.
     */
    public static TestPatternConverter newInstance(final String[] options) {
        return new TestPatternConverter(options);
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public void format(final LogEvent event, final StringBuilder toAppendTo) {
        String testID = ObjectUtils.EMPTY_STRING;
        if (TestLogHandler.isTestLogEnabled()) {
            TestContextHolder contextHolder = TestLogHandler.getLatestContextHolderFromStack(event.getThreadName());
            if (contextHolder != null) {
                testID = contextHolder.getTestIDForThread(event.getThreadName());
            } else {
                testID = TestContextHolder.getTestIDForThread(event.getThreadName());
            }
        }
        toAppendTo.append(testID);
    }
}
and update your configuration:
<Appender type="RollingFile" name="TEST_LOG_FILE" fileName="${filename}">
    <Layout type="PatternLayout">
        <Pattern>%d %-5p [%c{1}] [TestId: %e] [%t] %m%n</Pattern>
    </Layout>
</Appender>
That should be all. Please see Extending log4j2 for more details.
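One caveat worth noting: Log4j2 must be able to discover the plugin. When the log4j-core annotation processor runs at compile time this happens automatically; otherwise, one common approach is to list the plugin's package (com.test.it.logging here, per the layout class above) in the configuration element, e.g.:

<Configuration packages="com.test.it.logging">
    ...
</Configuration>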
I need to create a file that only logs TRACE and DEBUG events, but I'm unable to make it work (I don't know if it's even possible).
I have already tried searching for an option to invert the ThresholdFilter, or for a way to specify multiple levels on a LevelFilter, but with no success.
The only way I can think of to make this work is to create two appenders, exactly the same as one another, but with the LevelFilter in one set to TRACE and in the other to DEBUG.
Is there any other way to accomplish this? I'm not a big fan of duplicating code/configuration.
A simple approach might be to create a custom filter, as below.
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class CustomFilter extends Filter<ILoggingEvent> {

    // Pipe-separated list of level names, e.g. "TRACE|DEBUG", set from the XML config
    private String levels;
    private Level[] level;

    public String getLevels() {
        return levels;
    }

    public void setLevels(String levels) {
        this.levels = levels;
    }

    @Override
    public FilterReply decide(ILoggingEvent event) {
        if (level == null && levels != null) {
            setLevels(); // lazily parse the configured level names on the first event
        }
        if (level != null) {
            for (Level lev : level) {
                if (lev == event.getLevel()) {
                    return FilterReply.ACCEPT;
                }
            }
        }
        return FilterReply.DENY;
    }

    private void setLevels() {
        if (!levels.isEmpty()) {
            level = new Level[levels.split("\\|").length];
            int i = 0;
            for (String str : levels.split("\\|")) {
                level[i] = Level.valueOf(str);
                i++;
            }
        }
    }
}
Add the filter in logback.xml:
<configuration>
    <appender name="fileAppender1" class="ch.qos.logback.core.FileAppender">
        <filter class="com.swasthik.kp.logback.filters.CustomFilter">
            <levels>TRACE|DEBUG</levels>
        </filter>
        <file>c:/logs/kplogback.log</file>
        <append>true</append>
        <encoder>
            <pattern>%d [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="TRACE">
        <appender-ref ref="fileAppender1" />
    </root>
</configuration>
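Note that the root level is set to TRACE so that every event reaches the appender; the CustomFilter then accepts only the levels listed in <levels> (TRACE and DEBUG here) and denies everything else, so a single appender suffices and no configuration has to be duplicated.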