Spring Boot Logging override Colours - java

I wish to use different colours to distinguish between INFO, DEBUG and TRACE in Spring's ANSI-coloured logs, as they are currently all set to Green (see the table below).
From the docs here https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-logging-color-coded-output
Color coding is configured by using the %clr conversion word. In its simplest form, the converter colors the output according to the log level, as shown in the following example:
%clr(%5p)
The following table describes the mapping of log levels to colors:
Level | Color
------|-------
FATAL | Red
ERROR | Red
WARN  | Yellow
INFO  | Green
DEBUG | Green
TRACE | Green
It would appear I need to override the %clr conversion word, but I cannot find anything in the docs about that.
If it makes a difference I am using log4j2, and wish to build this into the application.

I don't see any conventional/documented way to override colors per log level. For example, the ColorConverter for log4j2 is not designed to be extended with that kind of option.
You could try to define your own log4j2 color plugin implementation, that is, a plugin implementation annotated with the log4j2 Plugin annotation:
@Plugin(name = "color", category = PatternConverter.CATEGORY)
But I am not sure that this works, or works reliably, since Spring already defines one for that conversion word.
For the record, see the ColorConverter source code for logback.
By the way, if it is enough, you could define a pattern by starting from the CONSOLE_LOG_PATTERN defined in the Spring Boot source code for log4j2:
<Property name="CONSOLE_LOG_PATTERN">%clr{%d{${sys:LOG_DATEFORMAT_PATTERN}}}{faint} %clr{${sys:LOG_LEVEL_PATTERN}} %clr{%pid}{magenta} %clr{---}{faint} %clr{[%15.15t]}{faint} %clr{%-40.40c{1.}}{cyan} %clr{:}{faint} %m%n${sys:LOG_EXCEPTION_CONVERSION_WORD}</Property>
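If per-level colours are the main goal, note that log4j2 also ships its own %highlight pattern converter (independent of Spring's %clr), which accepts a per-level style map. A minimal sketch, with colour choices of my own for DEBUG and TRACE (the output still depends on your console's ANSI support):
<Property name="CONSOLE_LOG_PATTERN">%highlight{%5p}{FATAL=red, ERROR=red, WARN=yellow, INFO=green, DEBUG=cyan, TRACE=blue} %m%n</Property>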

You can use the GrepConsole plugin (for IntelliJ IDEA) for it.

First, the %clr defined here is a logback conversion rule, and it is the one with the color logic.
This conversion rule is defined in the Spring class DefaultLogbackConfiguration.
There we can see that Spring adds the ColorConverter with the clr conversion word in the logback configurator.
From this exploration, my conclusion is that you have to create your own logback.xml file (this disables Spring's default configuration, according to the documentation). With this file in place, you can add your own conversion rule like this:
<conversionRule conversionWord="nanos"
converterClass="chapters.layouts.MySampleConverter" />
After that, you should change the logging.pattern.console property to use the conversion word you created above instead of %clr.
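For completeness, here is a minimal sketch of what such a converter class could look like; the class name MyHighlightingConverter and the colour choices are my own, and logback's ForegroundCompositeConverterBase does the ANSI escaping:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.pattern.color.ANSIConstants;
import ch.qos.logback.core.pattern.color.ForegroundCompositeConverterBase;

// Hypothetical converter: picks a different ANSI foreground colour per log level.
public class MyHighlightingConverter extends ForegroundCompositeConverterBase<ILoggingEvent> {

    @Override
    protected String getForegroundColorCode(ILoggingEvent event) {
        switch (event.getLevel().toInt()) {
            case Level.ERROR_INT:
                return ANSIConstants.RED_FG;
            case Level.WARN_INT:
                return ANSIConstants.YELLOW_FG;
            case Level.INFO_INT:
                return ANSIConstants.GREEN_FG;
            case Level.DEBUG_INT:
                return ANSIConstants.CYAN_FG;
            default:
                return ANSIConstants.BLUE_FG;
        }
    }
}
You would then register it with <conversionRule conversionWord="mycolor" converterClass="com.example.MyHighlightingConverter"/> and use %mycolor(%5p) in logging.pattern.console.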

Override property with my profile dev in Spring Boot 2.4

I tried to override the property
kafka.servers=s101lbakafpep1:9092,s102lbakafpep2:9092,s101lbakafpep3:9092
defined in my src/main/resources/config/application-kafka.properties file
with this value
kafka.servers=localhost:9092
defined in my src/main/resources/application-dev.properties file
I tried every possible combination from the Spring Boot docs, changing the order in my application.properties:
spring.profiles.active=config,health,planete,dgfip,mapping,kafka,dev
spring.profiles.active=dev,config,health,planete,dgfip,mapping,kafka
and setting spring.config.use-legacy-processing to true or false, or using .include; it's always the kafka config that wins.
It has not worked since I changed the Spring Boot version to 2.4.
Thanks for the very helpful hint @gviczai, it solved my problem loading and overriding configs from YAML files.
I completely missed the following sentence in the documentation, which made my unit tests fail because values were not overridden as they had been with Spring Boot 2.3.
Imports can be considered as additional documents inserted just below the document that declares them. They follow the same top-down ordering as regular multi-document files: An import will only be imported once, no matter how many times it is declared.
So if you want to override imported values, a new document has to be started after the import (--- in YAML, #--- in properties).
# imported-config.yaml
my-key: my-value

# application.yaml
spring:
  config:
    import:
      - classpath:imported-config.yaml

# before starting a new document the value cannot be modified; it would still be "my-value"
my-key: here-overriding-does-not-work
---
# after the start of the new document the value can be modified
my-key: my-overridden-value
In Spring Boot 2.4, configuration file handling is completely rethought and rewritten.
Long story short: forget the legacy profile-dependent documents. From now on, you use only one big application.properties file, but it can be divided into various profile-activated sections. These sections can then come from other files or even from documents at URLs - see cloud-config.
And the main rule is: definitions BELOW always overwrite definitions ABOVE. So be careful with the order in which the sections (and thus profiles) follow each other! ;)
You can separate the sections with "#---" and you can define which profile activates a section by providing "spring.config.activate.on-profile=<your_profile>".
So, in your case your application.properties should look like this:
my.property=anything
...
server.name=myserver
#in your 'default' section, you can activate any profile, so it will be active by default
spring.profiles.active=kafka
#---
spring.config.activate.on-profile=kafka
spring.config.import=application-kafka.properties
#---
spring.config.activate.on-profile=dev
spring.config.import=application-dev.properties
#---
spring.config.activate.on-profile=cloud
spring.config.import=optional:configserver:http://my.config.server:8080/cloud-config
Of course, you can use yaml file if you prefer. In this case the document separator is the standard "---".
Read more about this new paradigm of config file processing here: https://spring.io/blog/2020/08/14/config-file-processing-in-spring-boot-2-4
(And I guess the 'kafka' profile wins over 'dev' because 'k' comes AFTER 'd' in the alphabet... BTW, I think it is better not to name the imported documents according to the legacy profile-dependent "application-<profile>.properties" naming convention, because it may interfere with the profile-handling code. Better to be safe than sorry.)
Tip: Note that within the same 'document' (a section of a file is considered a document), even spring.config.import can overwrite previous values. So if you need to import multiple sources within the same section, use a comma-separated list:
spring.config.import=classpath:config/kafka.properties,classpath:db/postgres.properties
They're not in the same folder, and the run configuration probably specifies /config for the scan.
It's working again with spring-boot 2.5.6, so it was fixed in 2.5.x

How to mask log4j2 log messages

I am using log4j2 (version 2.5) and I am trying to write a message converter plugin which will mask some known patterns in the log message.
@Plugin(name = "CustomeMasking", category = "Converter")
@ConverterKeys({"m"})
public class MyCustomFilteringLayout extends LogEventPatternConverter {
}
When I run my web application with this plugin, I see this warning:
WARN Converter key 'm' is already mapped to 'class org.apache.logging.log4j.core.pattern.MessagePatternConverter'. Sorry, Dave, I can't let you do that! Ignoring plugin [class MyCustomFilteringLayout].
After exploring the log4j2 site, I found this reference:
If multiple Converters specify the same ConverterKeys, then the load order above determines which one will be used. For example, to override the %date converter which is provided by the built-in DatePatternConverter class, you would need to place your plugin in a JAR file in the CLASSPATH ahead of log4j-core.jar. This is not recommended; pattern ConverterKeys collisions will cause a warning to be emitted. Try to use unique ConverterKeys for your custom pattern converters.
I need help understanding how I can write my custom converter for m/msg. Is there a better way to do it?
Additional Details:
I have created a shaded jar for MyCustomFilteringLayout. The reason I am doing it this way is that I want to keep the masking logic separate from the application.
Updated
I have created a converter for my own key, which looks like this:
@Plugin(name = "CustomeMasking", category = "Converter")
@ConverterKeys({"cm"})
public class MyCustomFilteringLayout extends LogEventPatternConverter {
}
Now I can't write another converter for the same ConverterKeys (cm), can I?
Now my log4j2.xml has this pattern layout:
<PatternLayout>
  <Pattern>%d %p %c{1.} [%t] %cm %ex%n</Pattern>
</PatternLayout>
Your update solves the problem and answers the question of how to replace the built-in message converter with a custom one: it needs a unique key.
Sounds like you want to parameterize your pattern. Many patterns take an options parameter. You can use this to control behavior, so specifying %cm{key1} in your layout pattern will produce different results than %cm{key2}.
For an example of a converter that takes parameters, see the source code of the MdcPatternConverter.
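As an illustration, here is a minimal sketch of such a masking converter under the unique key cm; the class name, the regex, and the mask are hypothetical placeholders:
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.pattern.ConverterKeys;
import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;
import org.apache.logging.log4j.core.pattern.PatternConverter;

// Hypothetical masking converter, registered under the unique key "cm".
@Plugin(name = "CustomMasking", category = PatternConverter.CATEGORY)
@ConverterKeys({"cm"})
public final class CustomMaskingConverter extends LogEventPatternConverter {

    private CustomMaskingConverter(String[] options) {
        super("CustomMasking", "cm");
    }

    // log4j2 looks up this static factory method when it instantiates the plugin.
    public static CustomMaskingConverter newInstance(final String[] options) {
        return new CustomMaskingConverter(options);
    }

    @Override
    public void format(final LogEvent event, final StringBuilder toAppendTo) {
        // Mask anything that looks like a 16-digit card number (placeholder pattern).
        String message = event.getMessage().getFormattedMessage();
        toAppendTo.append(message.replaceAll("\\b\\d{16}\\b", "****************"));
    }
}
With this on the plugin classpath, the %cm pattern from the layout above emits the masked message instead of %m.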

merging multiple levels of configuration using typesafe/akka config

I'm looking at:
https://github.com/typesafehub/config
Let's say I want to have a default configuration, e.g. reference.conf, and then I want to have dev/prod overrides (two different application.conf's), and then I also wanted to have host-specific overrides that inherited from both the application.conf and ultimately the default reference.conf. How would I do this?
e.g., I'm imagining a directory structure something like:
resources/reference.conf
resources/prod/application.conf
resources/prod/master.conf
resources/prod/slave.conf
resources/dev/application.conf
resources/dev/master.conf
resources/dev/slave.conf
Or maybe it would be resources/dev/master/application.conf?
Somewhere I would specify an environment, i.e. maybe extracted from the hostname the application was started on.
If the application was master.dev.example.com, I'm expecting I should be able to do something like:
getConfigurations("dev/master.conf").withDefaultsFrom(
    getConfigurations("dev/application.conf").withDefaultsFrom(
        getConfigurations("reference.conf")))
But I'm having a hard time understanding what exactly that would look like using the given library.
I see I could set a config.resource system property, but it looks like that would only allow for one level of overrides (dev-application.conf -> reference.conf), not something like master-node.conf -> dev-application.conf -> reference.conf.
I see a .withFallback method, but that seems to be for mixing two kinds of configuration in a single file, not for chaining resources/files together.
Use multiple withFallback calls, with the configs that have the highest priority first. For example:
Config finalConfig =
    ConfigFactory.systemProperties()
        .withFallback(masterConfig)
        .withFallback(applicationConfig)
        .withFallback(referenceConfig);
Each of the configs like masterConfig would have been loaded with ConfigFactory.parseFile. You can also use ConfigFactory.load as a convenience, but the parseXXX methods give you more control over your hierarchy.
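A minimal sketch of how the whole hierarchy from the question could be wired up (the class name, the environment/role parameters, and the resource paths are assumptions based on the directory layout above):
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class LayeredConfigLoader {

    // e.g. load("dev", "master") for master.dev.example.com
    public static Config load(String env, String role) {
        Config hostConfig = ConfigFactory.parseResources(env + "/" + role + ".conf");
        Config appConfig = ConfigFactory.parseResources(env + "/application.conf");
        Config reference = ConfigFactory.parseResources("reference.conf");

        return ConfigFactory.systemProperties()
                .withFallback(hostConfig)
                .withFallback(appConfig)
                .withFallback(reference)
                .resolve(); // resolve ${...} substitutions against the merged tree
    }
}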

Java: which configuration framework to use?

I need to decide which configuration framework to use. At the moment I am choosing between properties files and XML files. My configuration needs some primitive grouping, e.g. in XML it would be something like:
<configuration>
  <group name="abc">
    <param1>value1</param1>
    <param2>value2</param2>
  </group>
  <group name="def">
    <param3>value3</param3>
    <param4>value4</param4>
  </group>
</configuration>
or a properties file (something similar to log4j.properties):
group.abc.param1 = value1
group.abc.param2 = value2
group.def.param3 = value3
group.def.param4 = value4
I need a bi-directional (read and write) configuration library/framework. A nice feature would be the ability to read out different configuration groups as different objects, so I could later pass them to different places, e.g. reading everything that belongs to group "abc" as one object and "def" as another. If that is not possible, I can of course split a single configuration object into smaller ones myself in the application initialization part.
Which framework would best fit for me?
Since you are saying that it is possible to also store objects in the config, I would suggest this:
http://commons.apache.org/configuration/
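For the grouping requirement, a minimal sketch with Commons Configuration (1.x API; the file name is a placeholder), using subset() to hand each group around as its own object:
import org.apache.commons.configuration.Configuration;
import org.apache.commons.configuration.PropertiesConfiguration;

public class GroupedConfigExample {
    public static void main(String[] args) throws Exception {
        // Keys follow the "group.abc.param1 = value1" layout from the question.
        PropertiesConfiguration config = new PropertiesConfiguration("app.properties");

        // Each group can be read out as its own Configuration object...
        Configuration abc = config.subset("group.abc");
        System.out.println(abc.getString("param1")); // "value1"

        // ...and the configuration is writable, too.
        config.setProperty("group.def.param4", "new-value");
        config.save();
    }
}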
The simplest way to do this would be to use Simple XML. It can bind XML to Java POJOs in a very simple manner. Also, it is much faster than other such XML binding frameworks.
http://simple.sourceforge.net
Only 270K with no dependencies.
Please take a look at this URL: http://issues.apache.org/jira/browse/CONFIGURATION-394
The configuration framework we're looking for is something on top of Apache Commons Configuration that must support concurrency, JMX, and most stores (e.g. .properties files, .xml files, or the Preferences API).
What the WebLogic team provides in its 'Administration Console' is interesting: through it you can make transactional (atomic) updates to configurations so that registered listeners are notified.
The Apache guys insist that this project is out of scope for Commons Configuration, maybe!
I've attached a simple configuration framework; please take a look.

Separate INFO and ERROR logs from java.util.logging

I'm configuring the logging for a Java application. What I'm aiming for is two logs: one for all messages and one for just messages above a certain level.
The app uses the java.util.logging.* classes: I'm using it as is, so I'm limited to configuration through a logging.properties file.
I don't see a way to configure two FileHandlers differently: the docs and examples I've seen set properties like:
java.util.logging.FileHandler.level = INFO
While I want two different Handlers logging at different levels to different files.
Any suggestions?
http://java.sun.com/j2se/1.4.2/docs/guide/util/logging/overview.html is helpful. You can only set one Level for any individual Logger (as you can tell from the setLevel() method on the logger). However, you can take the lower of the two levels, and then filter programmatically.
Unfortunately, you can't do this just with the configuration file. To switch with just the configuration file you would have to switch to something like log4j, which you've said isn't an option.
So I would suggest altering the logging in code, with Filters, with something like this:
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

class LevelFilter implements Filter {
    private final Level level;

    public LevelFilter(Level level) {
        this.level = level;
    }

    @Override
    public boolean isLoggable(LogRecord record) {
        // Let through only records strictly above the threshold level.
        return level.intValue() < record.getLevel().intValue();
    }
}
And then on the second handler, call setFilter(new LevelFilter(Level.INFO)) or whatever. If you want it to be file-configurable, you could use a logging properties setting you've made up yourself, and read it with the normal Properties methods.
I think the configuration code for setting up the two file handlers and the programmatic code is fairly simple once you have the design, but if you want more detail, add a comment and I'll edit.
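A minimal sketch of that setup, using the LevelFilter above (the file names and root-logger choice are placeholders):
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class TwoLogsSetup {
    public static void main(String[] args) throws Exception {
        Logger root = Logger.getLogger("");
        root.setLevel(Level.ALL);

        // First handler: everything goes to all.log.
        FileHandler allHandler = new FileHandler("all.log");
        root.addHandler(allHandler);

        // Second handler: only records above INFO go to important.log.
        FileHandler importantHandler = new FileHandler("important.log");
        importantHandler.setFilter(new LevelFilter(Level.INFO));
        root.addHandler(importantHandler);
    }
}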
I think you should be able to just subclass a handler and then override the methods to allow output to go to multiple files depending on the level of the message. This would be done by overriding the publish() method.
Alternatively, if you have to use the system-provided FileHandler, you could call setFilter() on it to inject your own filter into the mix and, in that filter code, send ALL messages to your other file and return true if the LogRecord level is INFO or higher, causing FileHandler.publish() to write it to the real file.
I'm not sure this is the way you should be using filters but I can't see why it won't work.
