I need to decide which configuration framework to use. At the moment I am deciding between properties files and XML files. My configuration needs some primitive grouping; in XML format it would be something like:
<configuration>
<group name="abc">
<param1>value1</param1>
<param2>value2</param2>
</group>
<group name="def">
<param3>value3</param3>
<param4>value4</param4>
</group>
</configuration>
or a properties file (something similar to log4j.properties):
group.abc.param1 = value1
group.abc.param2 = value2
group.def.param3 = value3
group.def.param4 = value4
I need a bi-directional (read and write) configuration library/framework. A nice feature would be the ability to read out different configuration groups as different objects, so I could later pass them to different places, e.g. reading everything that belongs to group "abc" as one object and "def" as another. If that is not possible, I can of course always split a single configuration object into smaller ones myself in the application initialization part.
Which framework would best fit for me?
Since you are saying that it is possible to also store objects in the config, I would suggest this:
http://commons.apache.org/configuration/
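For the grouped layout from the question, a minimal sketch of what reading and writing could look like, assuming the classic Commons Configuration 1.x API (the file name app.properties is made up):

import org.apache.commons.configuration.Configuration;
import org.apache.commons.configuration.PropertiesConfiguration;

public class GroupedConfigExample {
    public static void main(String[] args) throws Exception {
        // Read side: load the whole properties file.
        PropertiesConfiguration config = new PropertiesConfiguration("app.properties");

        // subset() returns a view of one group as its own Configuration object,
        // so "group.abc.param1" becomes just "param1" inside the view.
        Configuration abc = config.subset("group.abc");
        Configuration def = config.subset("group.def");
        String param1 = abc.getString("param1"); // "value1"

        // Write side: change a value and persist it back to the same file.
        config.setProperty("group.def.param4", "newValue");
        config.save();
    }
}

The XML flavour should work much the same way via XMLConfiguration, which is hierarchical, so you can pull out a sub-tree per group instead of a key prefix.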
The simplest way to do this would be to use Simple XML. It can bind XML to Java POJOs in a very simple manner. Also, it is much faster than other such XML binding frameworks.
http://simple.sourceforge.net
Only 270K with no dependencies.
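As a rough idea of what the binding could look like with Simple XML (class, field and file names here are made up, and the param elements are assumed to have fixed names):

import java.io.File;
import org.simpleframework.xml.Attribute;
import org.simpleframework.xml.Element;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.core.Persister;

@Root(name = "group")
public class GroupConfig {

    @Attribute
    private String name;

    @Element
    private String param1;

    @Element
    private String param2;

    public static void main(String[] args) throws Exception {
        Persister persister = new Persister();

        // Read: bind an XML file to the POJO.
        GroupConfig group = persister.read(GroupConfig.class, new File("group-abc.xml"));

        // Write: serialize the (possibly modified) POJO back to XML.
        persister.write(group, new File("group-abc.xml"));
    }
}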
Please take a look at this URL: http://issues.apache.org/jira/browse/CONFIGURATION-394
The configuration framework we're looking for is something on top of Apache Commons Configuration that must handle concurrency, expose JMX, and support most stores (e.g. .properties files, .xml files, or the Preferences API).
What the WebLogic team provides in its 'Administration Console' is interesting: through it you can make transactional (atomic) updates to configurations so that registered listeners are notified.
The Apache guys insist that this project is out of scope for Commons Configuration, maybe!
I've attached a simple configuration framework; please take a look.
I am using Infinispan v12.1 with Spring Boot v2.5.2 via org.infinispan:infinispan-spring-boot-starter-embedded. In our application we are using custom classes which we would like to cache (a very common case); however, it turned out that starting from v10 these classes need to be listed in the "allow list".
We are using infinispan.xml configuration passed via infinispan.embedded.config-xml property as advised by sample project.
Question: How is it possible to configure allow list globally for all caches by the means of XML configuration file?
I have considered the following options:
The system property infinispan.deserialization.allowlist.regexps (from ClassAllowList) is not a good choice, as the configuration would then be spread between the XML file and some other place. Moreover, if the property is renamed in a future Infinispan version, one would only notice when the application is run.
Defining <cache-container><serialization><allow-list> as per the documentation is not a good option either, because it would result in several identical per-cache XML configuration blocks.
The corresponding Java config for a Spring Boot application would be:
@org.springframework.context.annotation.Configuration
public class InfinispanConfiguration {

    @Bean
    public InfinispanGlobalConfigurationCustomizer globalCustomizer() {
        return builder -> builder.allowList().addRegexp("^org\\.mycompany\\.");
    }
}
P.S. The Javadoc in GlobalConfiguration assumes that there is a <default> XML section the configuration can be read from, but in fact the XML no longer supports it.
P.P.S. Arguably, the dots in the packages in SpringEmbeddedModule should be escaped and the regexps should start with ^, because ClassAllowList uses Matcher#find() (boolean regexMatch = compiled.stream().anyMatch(p -> p.matcher(className).find());):
serializationAllowList.addRegexps("^java\\.util\\..*", "^org\\.springframework\\..*");
I tried to override the property
kafka.servers=s101lbakafpep1:9092,s102lbakafpep2:9092,s101lbakafpep3:9092
defined in my src/main/resources/config/application-kafka.properties file
with this value
kafka.servers=localhost:9092
defined in my src/main/resources/application-dev.properties file
I tried every possible combination from reading the Spring Boot docs, changing in my application.properties the order of
spring.profiles.active=config,health,planete,dgfip,mapping,kafka,dev
spring.profiles.active=dev,config,health,planete,dgfip,mapping,kafka
and setting spring.config.use-legacy-processing to true or false, or using .include; it's always the kafka config that wins.
It has not worked since I changed the Spring Boot version to 2.4.
Thanks for the very helpful hint @gviczai, it solved my problem of loading and overriding configs from YAML files.
I had completely missed the following sentence in the documentation, which made my unit tests fail because values were not overridden as they had been with Spring Boot 2.3.
Imports can be considered as additional documents inserted just below the document that declares them. They follow the same top-down ordering as regular multi-document
files: An import will only be imported once, no matter how many times it is declared.
So if you want to override imported values a new document has to be started after the import (--- in yaml, #--- in properties).
# imported-config.yaml
my-key: my-value
# application.yaml
spring:
config:
import:
- classpath:imported-config.yaml
# before starting a new document the value can not be modified, it would still be "my-value"
my-key: here-overriding-does-not-work
---
# after the start of the new document the value can be modified
my-key: my-overridden-value
In Spring Boot 2.4, configuration file handling is completely rethought and rewritten.
Long story short: Forget the legacy profile-dependent documents. From now on, you have to use only one big application.properties file, but it can be divided into various profile-activated sections. These sections then can come from other files or even documents from URLs - see cloud-config.
And the main rule is: definitions BELOW always overwrite definitions ABOVE. So be careful with the order the sections (thus profiles) follow each other! ;)
You can separate the sections with "#---" and you can define which profile activates the section by providing "spring.config.activate.on-profile=<your_profile>"
So, in your case your application.properties should look like this:
my.property=anything
...
server.name=myserver
#in your 'default' section, you can activate any profile, so it will be active by default
spring.profiles.active=kafka
#---
spring.config.activate.on-profile=kafka
spring.config.import=application-kafka.properties
#---
spring.config.activate.on-profile=dev
spring.config.import=application-dev.properties
#---
spring.config.activate.on-profile=cloud
spring.config.import=optional:configserver:http://my.config.server:8080/cloud-config
Of course, you can use a YAML file if you prefer. In this case the document separator is the standard "---".
Read more about this new paradigm of config file processing here: https://spring.io/blog/2020/08/14/config-file-processing-in-spring-boot-2-4
(And I guess the 'kafka' profile wins over 'dev' because 'k' comes AFTER 'd' in the alphabet... BTW, I think it is better not to name the imported documents according to the legacy profile-dependent "application-<profile>.properties" naming convention, because it may interfere with the profile-handling code. Better to be safe than sorry.)
Tip: Note, that in the same 'document' (a section in the same file considered a document) even the spring.config.import can overwrite previous values. So if you need to import multiple sources within the same section, use a comma-separated list:
spring.config.import=classpath:config/kafka.properties,classpath:db/postgres.properties
They're not in the same folder, and the run configuration probably points at /config for the scan.
It's working again with spring-boot 2.5.6, so it was fixed in 2.5.x
I have a custom XML config defining a kind of network like this
S1 ---- O1 ---- O2 ---- O3 ---- T1
\
+--- O4 ---- O5 ------------ T2
\
S2---+- O6 --+- O7 ------------ T4
/ /
S3-+ /
/
S4 ------+
Where
S is some kind of data source, like a web socket
O is an operator processing the data
T is the target or data sink
These elements are represented with xml blocks like this:
<source name="S1" address="ws://example/1" type="websocket" dataType="double" />
<operator name="O6" type="threshold">
<input name="S1"/>
<input name="S2"/>
<input name="S3"/>
<property name="threshold" value="10.34" />
<property name="window" value="10.0" />
</operator>
<sink name="T1" type="database">
<input name="O3"/>
</sink>
The dependencies are constructor parameters. My example operator O6 would have a constructor like this:
class ThresholdOperator extends Operator<Boolean> {
public ThresholdOperator(
String name, // "O6"
List<DataSource> sources, // [S1, S2, S3]
double threshold, // 10.34
double window) { // 10.0
...
There could be multiple instances of this class with different constructor parameters. It is possible that a class has more than one constructor. The type parameter of the base class is the output type.
The type attribute determines what concrete class has to be instantiated. The dataType attribute of the source decides which kind of converter (here String to Double) should be injected.
To create the instances I need to figure out a dependency graph and start by instantiating the objects that have no other objects from my graph as dependencies (the sources in this case); then I would create the objects which depend only on objects created in the first step, and so on.
So I would basically reinvent something like Spring for my special use case. Is there a way to leverage Spring to create and wire objects in my case? A somewhat crude hack would be to transform my xml config to a beans.xml. But maybe there is a better way using BeanFactory or the like. Or would it be possible to create the Spring meta-model directly?
I'm using Spring 4.3 but the RC of Spring 5 could be an option, if it would help.
Another alternative not yet mentioned here is using XSLT.
The idea is to define an XSL that maps your domain-specific XML to Spring beans XML (XSLT+XPath should be more than enough to cover your case).
You can then read the domain-specific XML, transform it with that XSL and feed the result to Spring.
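As a rough sketch of that pipeline in plain Java (the file names network.xml and network-to-beans.xsl are assumptions):

import java.io.StringWriter;
import java.nio.charset.StandardCharsets;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.springframework.context.support.GenericXmlApplicationContext;
import org.springframework.core.io.ByteArrayResource;

public class XsltWiring {
    public static void main(String[] args) throws Exception {
        // 1. Transform the domain-specific XML into Spring beans XML.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("network-to-beans.xsl"));
        StringWriter beansXml = new StringWriter();
        transformer.transform(new StreamSource("network.xml"), new StreamResult(beansXml));

        // 2. Feed the generated beans XML to Spring without touching the file system.
        GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
        ctx.load(new ByteArrayResource(beansXml.toString().getBytes(StandardCharsets.UTF_8)));
        ctx.refresh();
    }
}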
Have a look at StaticApplicationContext. It is stated in the docs that it is:
Mainly useful for testing.
... but it is a full-fledged application context that has support for programmatic bean registration.
You can read your domain-specific xml and define beans based on it inside StaticApplicationContext.
This blog post can give you an idea on how to use StaticApplicationContext to define beans with references and constructor args.
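A minimal sketch of that idea using the names from the question; WebSocketSource is a made-up stand-in for your actual source class, and the literal values would of course come from your parsed XML:

import org.springframework.beans.factory.config.ConstructorArgumentValues;
import org.springframework.beans.factory.config.RuntimeBeanReference;
import org.springframework.beans.factory.support.GenericBeanDefinition;
import org.springframework.beans.factory.support.ManagedList;
import org.springframework.context.support.StaticApplicationContext;

public class NetworkContextBuilder {
    public static void main(String[] args) {
        StaticApplicationContext ctx = new StaticApplicationContext();

        // Sources have no dependencies within the graph, so they can be registered directly.
        ctx.registerSingleton("S1", WebSocketSource.class);
        ctx.registerSingleton("S2", WebSocketSource.class);
        ctx.registerSingleton("S3", WebSocketSource.class);

        // O6 gets its constructor arguments from the parsed <operator> element;
        // the inputs are bean references, so Spring resolves the wiring order.
        ManagedList<RuntimeBeanReference> inputs = new ManagedList<>();
        inputs.add(new RuntimeBeanReference("S1"));
        inputs.add(new RuntimeBeanReference("S2"));
        inputs.add(new RuntimeBeanReference("S3"));

        ConstructorArgumentValues args = new ConstructorArgumentValues();
        args.addIndexedArgumentValue(0, "O6");
        args.addIndexedArgumentValue(1, inputs);
        args.addIndexedArgumentValue(2, 10.34);
        args.addIndexedArgumentValue(3, 10.0);

        GenericBeanDefinition o6 = new GenericBeanDefinition();
        o6.setBeanClass(ThresholdOperator.class);
        o6.setConstructorArgumentValues(args);
        ctx.registerBeanDefinition("O6", o6);

        ctx.refresh();
        ThresholdOperator operator = ctx.getBean("O6", ThresholdOperator.class);
    }
}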
A simpler approach to instantiate your objects from the document would be to either
create an XML Schema describing your data format and use JAXB to generate your Java classes, or
annotate your existing Java classes with JAXB annotations (a rough sketch follows below).
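For the second option, a rough sketch of what JAXB annotations for the <operator> element could look like (class and field names are made up):

import java.io.File;
import java.util.List;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbSketch {

    @XmlRootElement(name = "operator")
    @XmlAccessorType(XmlAccessType.FIELD)
    public static class OperatorConfig {
        @XmlAttribute public String name;
        @XmlAttribute public String type;
        @XmlElement(name = "input") public List<InputRef> inputs;
        @XmlElement(name = "property") public List<PropertyConfig> properties;
    }

    @XmlAccessorType(XmlAccessType.FIELD)
    public static class InputRef {
        @XmlAttribute public String name;
    }

    @XmlAccessorType(XmlAccessType.FIELD)
    public static class PropertyConfig {
        @XmlAttribute public String name;
        @XmlAttribute public String value;
    }

    public static void main(String[] args) throws Exception {
        // Unmarshal a single <operator> element into the config POJO.
        OperatorConfig cfg = (OperatorConfig) JAXBContext.newInstance(OperatorConfig.class)
                .createUnmarshaller()
                .unmarshal(new File("operator.xml"));
        System.out.println(cfg.name + " has " + cfg.inputs.size() + " inputs");
    }
}

You would still have to turn these config POJOs into your real operator instances yourself, so this mainly replaces the XML parsing, not the wiring.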
The "crud" hack approach may be a better approach but instead of converting your config xml to beans xml file manually, I suggest you to look at the Extensible XML authoring approach.
The configuration parser, a.k.a. bean definition parser, allows you to build the bean definitions which will eventually be used your application's spring context to instantiate the beans.
This should also eliminate the needs of figuring out the dependency hierarchy manually and instantiation of objects yourself.
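As a very condensed, hypothetical sketch of the parser side (class names are made up, the META-INF/spring.handlers and spring.schemas plumbing is omitted, and only two of O6's four constructor arguments are shown):

import java.util.List;
import org.springframework.beans.factory.config.RuntimeBeanReference;
import org.springframework.beans.factory.support.BeanDefinitionBuilder;
import org.springframework.beans.factory.support.ManagedList;
import org.springframework.beans.factory.xml.AbstractSingleBeanDefinitionParser;
import org.springframework.beans.factory.xml.NamespaceHandlerSupport;
import org.springframework.util.xml.DomUtils;
import org.w3c.dom.Element;

// Registered for your namespace via META-INF/spring.handlers so Spring picks it up.
public class NetworkNamespaceHandler extends NamespaceHandlerSupport {
    @Override
    public void init() {
        registerBeanDefinitionParser("operator", new OperatorBeanDefinitionParser());
    }
}

class OperatorBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {
    @Override
    protected Class<?> getBeanClass(Element element) {
        // In reality you would map the "type" attribute to a concrete class here.
        return ThresholdOperator.class;
    }

    @Override
    protected void doParse(Element element, BeanDefinitionBuilder builder) {
        builder.addConstructorArgValue(element.getAttribute("name"));

        // Each <input name="..."/> becomes a bean reference; Spring then works out
        // the instantiation order of the whole graph for you.
        ManagedList<RuntimeBeanReference> inputs = new ManagedList<>();
        List<Element> inputElements = DomUtils.getChildElementsByTagName(element, "input");
        for (Element input : inputElements) {
            inputs.add(new RuntimeBeanReference(input.getAttribute("name")));
        }
        builder.addConstructorArgValue(inputs);
        // The <property> children (threshold, window) would be added the same way.
    }
}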
Hope this answers your question.
I'm looking at:
https://github.com/typesafehub/config
Let's say I want to have a default configuration, e.g. reference.conf, and then I want to have dev/prod overrides (two different application.conf's), and then I also wanted to have host-specific overrides that inherited from both the application.conf and ultimately the default reference.conf. How would I do this?
e.g., I'm imagining a directory structure something like:
resources/reference.conf
resources/prod/application.conf
resources/prod/master.conf
resources/prod/slave.conf
resources/dev/application.conf
resources/dev/master.conf
resources/dev/slave.conf
Or maybe it would be resources/dev/master/application.conf?
Somewhere I would specify an environment, i.e. maybe extracted from the hostname the application was started on.
If the application was master.dev.example.com, I'm expecting I should be able to do something like:
getConfigurations("dev/master.conf").withDefaultsFrom(
getConfigurations("dev/application.conf").withDefaultsFrom(
getConfigurations("resource.conf"))
But I'm having a hard time understanding what exactly that would look like using the given library.
I see I could set a config.resource system property, but it looks like that would only allow for one level of overrides, dev-application.conf -> resources.conf, not something like master-node.conf -> dev-application.conf -> resources.conf.
I see a .withFallback method, but that seems to be if I wanted to mix two kinds of configuration in a single file, not to chain resources/files together.
Use multiple withFallback with the configs that have the highest priority first. For example:
Config finalConfig =
    ConfigFactory.systemProperties()
        .withFallback(masterConfig)
        .withFallback(applicationConfig)
        .withFallback(referenceConfig);
Each of the configs like masterConfig would have been loaded with ConfigFactory.parseFile. You can also use ConfigFactory.load as a convenience, but the parseXXX methods give you more control over your hierarchy.
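For example, a sketch of how the individual configs might be loaded, assuming the directory layout from the question is available on the filesystem (the paths and the some.key lookup are assumptions):

import java.io.File;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class LayeredConfig {
    public static void main(String[] args) {
        // Host/role-specific file, then the environment's application file, then bundled defaults.
        Config masterConfig = ConfigFactory.parseFile(new File("resources/dev/master.conf"));
        Config applicationConfig = ConfigFactory.parseFile(new File("resources/dev/application.conf"));
        Config referenceConfig = ConfigFactory.parseResources("reference.conf");

        Config finalConfig = ConfigFactory.systemProperties()
                .withFallback(masterConfig)
                .withFallback(applicationConfig)
                .withFallback(referenceConfig)
                .resolve(); // resolve ${...} substitutions against the merged tree

        System.out.println(finalConfig.getString("some.key"));
    }
}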
I want to start using ehcache-1.2.3.jar in my project, but I want to know whether it is mandatory to build a cache.xml.
If yes, then why; and if no, in what situations might it be useful to build a cache.xml?
Following is sample code:
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

CacheManager.getInstance().addCache("MyCache");
Cache c = CacheManager.getInstance().getCache("MyCache");
Employee emp = new Employee();
emp.setEmpName("Ramji");
Element e = new Element("emp", emp);
c.put(e);
When you say cache.xml, do you mean ehcache.xml? If yes, configuration in an XML file is not mandatory, but it is recommended. If you think XML-based configuration doesn't work for you, you can do it through Java as well (refer to the URL below).
It is recommended to keep the configuration in an XML file because you may want to change it during deployment, for example.
Source: http://ehcache.org/documentation/2.8/configuration/configuration
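If you do go the programmatic route, a small sketch of what it could look like with the classic net.sf.ehcache API (the tuning numbers are made up):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ProgrammaticEhcache {
    public static void main(String[] args) {
        // No ehcache.xml on the classpath: create the manager and the cache in code.
        CacheManager manager = CacheManager.create();
        Cache cache = new Cache("MyCache", // name
                5000,  // maxElementsInMemory
                false, // overflowToDisk
                false, // eternal
                3600,  // timeToLiveSeconds
                1800); // timeToIdleSeconds
        manager.addCache(cache);

        cache.put(new Element("greeting", "hello"));
        System.out.println(manager.getCache("MyCache").get("greeting").getValue());
    }
}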