Spring Boot - Conditionally load a module at runtime or at compile time

First of all, I am new to Spring Boot so I am not sure if something like this is possible within the framework.
Let me describe my problem.
I have 10 code repositories. Each repository listens to a data stream, parses the data and updates a database. Due to maintainability issues I plan on bringing them under a single code repository. This new application will be generalized, and certain app-specific configurations (for example, which stream to connect to, or the database host) will be retrieved at run-time.
Theoretically, this would allow me to maintain a single code base but deploy it as 10 separate services based on configuration, which is what I need. However, there is a set of application-specific Java classes used to parse the retrieved data. To better understand this, refer to the diagram below.
Ideally, I need to still maintain these classes in the same repository, but as separate modules. Once the configurations are loaded, the app should be able to load the corresponding module into the application context and initialize the Java classes. The other modules will not be used.
Can I do something like this with Spring Boot? Alternatively, even a build-time solution is fine if I can create separate builds which can then be deployed separately.

I'm not sure I understood it well, but why not try Spring profiles (https://www.baeldung.com/spring-profiles)? You can define a different config for every service and, with spring.profiles.active, say at runtime which configuration to use.
Also something like this could be useful: https://www.baeldung.com/spring-reloading-properties
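For illustration, here is a minimal sketch of how profile-specific beans could look; StreamParser, ServiceAParser and ServiceBParser are made-up stand-ins for the app-specific parsing classes:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hypothetical app-specific parser types, one per original repository.
interface StreamParser { }
class ServiceAParser implements StreamParser { }
class ServiceBParser implements StreamParser { }

@Configuration
public class StreamModuleConfig {

    @Bean
    @Profile("service-a")   // registered only when spring.profiles.active=service-a
    public StreamParser serviceAParser() {
        return new ServiceAParser();
    }

    @Bean
    @Profile("service-b")   // registered only when spring.profiles.active=service-b
    public StreamParser serviceBParser() {
        return new ServiceBParser();
    }
}

Each deployment then only needs to be started with the matching --spring.profiles.active value; the parser beans of the other modules never enter the application context.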

Related

Neo4j-ogm: How to use different configuration (ogm.properties/java configuration) depending on environment?

I've been using an embedded neo4j server in my project so far.
Now I want to try out the new bolt protocol with a standalone server, however only for my deployed application. For convenience, I still want to use an embedded database when running from the IDE (permanent) or when running tests (impermanent).
In order to support this, I've migrated from the Java-based configuration to the use of an ogm.properties file. Depending on the environment I run in, I want to use the file which configures the respective driver/database location.
I have placed a default configuration in the root of my resources folder. However, I am not able to "override" this in other environments.
In order to do that, I placed a different ogm.properties in the root folder of the deployed application. This doesn't seem to work. This is the mechanism I previously used to provide different application.properties and logback.xml configurations.
Is this not supported by neo4j-ogm? If not, how can one achieve this? It also isn't (trivially) possible with the Java-based configuration.
I am a bit confused, since this doesn't sound like such an unlikely requirement...
You can use Spring profiles to configure different properties for different environments, and you can look here.
You can use application.properties (spring.profiles.active) to load a different profile, or a runtime argument if you are using Spring Boot with CommandLineRunner.
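As a rough sketch (the "embedded" profile name is just an example), the active profile can also be set programmatically from the main class:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class App {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(App.class);
        // Same effect as --spring.profiles.active=embedded on the command line
        // or spring.profiles.active=embedded in application.properties.
        app.setAdditionalProfiles("embedded");
        app.run(args);
    }
}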

Link to properties file outside webservice

I have a webservice that uses Java, REST, Jersey and runs on Tomcat 8. The webservice requires access to a database. Depending on where we are in the process, we may be using a test database, a production database or something else. Ideally we would like to be able to set which database to use without requiring a code change and recompile.
The approach we have tried is to have a properties file defining the database parameters and use an environment variable to point to the file. This has proved troublesome: first, we've had a hard time defining system properties on the Tomcat server that we can read from the application; also, it seems like all the files have to be defined on the classpath, i.e. already configured ahead of time and part of the codebase.
This seems like a fairly common scenario, so I'm sure there is a recommended way to handle situations like this?
Zack Macomber has a point here. Don't enable your app/service to look up its settings dynamically.
Make your build process dynamic instead.
Maven, Gradle and friends all provide simple ways to modify output depending on build parameters and or tasks/profiles.
In your code always link to the same file (name). The actual file will then be included based on your task and/or build environment. Test config for tests. Production config for production.
In many cases a complete recompilation is not necessary and will therefore be skipped (this depends on your tool, of course).
No code changes at all. Moreover the code will be dumb as hell as it does not need to know anything about context.
Especially when working on something with multiple people, this approach provides the most stable long-term solution. It is customizable for those who need some special, local config and, most importantly, transparent for all who don't need or don't want to know about runtime environment requirements!
We have a similar case. We have created a second web service on the same endpoint (/admin) which we call to set a few configuration parameters. We also have a DB for persisting the configuration once set. To make life easier, we also created a simple UI to set these values. The user configures the values in the UI, the UI calls the /admin web service, and the /admin service sets the configuration in memory (as properties) as well as in the DB. The main web service uses the properties as dynamic configuration.
Note: we use JWT-based authorization to prevent unauthorized access to /admin. But depending upon your need you can keep it unsecured, use basic HTTP auth, or go with something more detailed.
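A very rough JAX-RS sketch of the idea; the resource path, the in-memory map and the omitted DB persistence are all simplifications, and it assumes a JSON provider such as Jackson is on the classpath:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/admin")
public class AdminResource {

    // Hypothetical in-memory store; the real one would also persist to the DB.
    private static final Map<String, String> CONFIG = new ConcurrentHashMap<>();

    @POST
    @Path("/config")
    @Consumes(MediaType.APPLICATION_JSON)
    public Response updateConfig(Map<String, String> settings) {
        CONFIG.putAll(settings);   // update the in-memory properties
        // persist(settings);      // ...and write them to the DB (omitted here)
        return Response.ok().build();
    }

    public static String get(String key) {
        return CONFIG.get(key);    // used by the main web service as dynamic configuration
    }
}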
Not sure if in this particular case it is wise, but it is possible indeed to create a .properties file anywhere on the filesystem - and link it into your application by means of a Resources element.
https://tomcat.apache.org/tomcat-8.0-doc/config/resources.html
The Resources element represents all the resources available to the web application. This includes classes, JAR files, HTML, JSPs and any other files that contribute to the web application. Implementations are provided to use directories, JAR files and WARs as the source of these resources and the resources implementation may be extended to provide support for files stored in other forms such as in a database or a versioned repository.
You would need a PreResources element here, linking to a folder, the contents of which will be made available to the application at /WEB-INF/classes.
<Context antiResourceLocking="false" privileged="true" docBase="${catalina.home}/webapps/myapp">
  <Resources className="org.apache.catalina.webresources.StandardRoot">
    <!-- external res folder (contains settings.properties) -->
    <PreResources className="org.apache.catalina.webresources.DirResourceSet"
                  base="/home/whatever/path/config/"
                  webAppMount="/WEB-INF/classes" />
  </Resources>
</Context>
Your application now 'sees' the files in /home/whatever/path/config/ as if they were located at /WEB-INF/classes.
Typically, the Resources element is put inside a Context element. The Context element must be put in a file located at:
$CATALINA_BASE/conf/[enginename]/[hostname]/ROOT.xml
See https://tomcat.apache.org/tomcat-8.0-doc/config/context.html#Defining_a_context
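Since the external folder is mounted at /WEB-INF/classes, the application can then read the file from the classpath as usual; a minimal sketch, assuming the file is called settings.properties as in the snippet above:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class Settings {

    public static Properties load() throws IOException {
        Properties props = new Properties();
        // /WEB-INF/classes is on the webapp classpath, so the externally
        // mounted settings.properties is visible here.
        try (InputStream in = Settings.class.getResourceAsStream("/settings.properties")) {
            if (in == null) {
                throw new IOException("settings.properties not found on the classpath");
            }
            props.load(in);
        }
        return props;
    }
}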

Ease of rolling back from Spring Boot to regular Spring and viewing hybrid of Spring Context and configurations while using Spring Boot

I am assessing spring-boot and how I could migrate to using it.
One question I have is whether a project that uses spring boot can be converted easily back to a regular spring project which uses the traditional spring configuration files if that is required. This would be useful in my mind for a few reasons.
1) merging with legacy projects, because from what I have read, moving from legacy Spring to spring-boot is somewhat tedious.
2) Obtaining a view of the Spring application context file and webapp configuration files to understand what the actual configurations in use are.
Another question I have regards the lack of an application-context file: is there a way to have some kind of hybrid where there is still an application-context file that can be seen? Part of my concern is that spring-boot auto-configures components without us knowing and learning how they are configured and work together.
Spring Boot provides auto-configuration.
When @SpringBootApplication is encountered, it triggers a search of the CLASSPATH for a file called META-INF/spring.factories, which is just a regular text file that enumerates a list of Java configuration classes. Java configuration was introduced in 2006 and then merged into Spring 3 in 2009, so it's been around for a long time. These Java configuration classes define beans in the same way that XML does. Each class is annotated with @Configuration and therein you find beans defined using methods (factory methods, or provider methods) whose return value is managed and exposed via Spring. Each such provider method is annotated with @Bean. This tells Spring to consider the method and its return value the definition of the bean.
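For example, a plain Java configuration class of this kind might look roughly like this (MyService is just a made-up bean type):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical service type, purely for illustration.
class MyService { }

@Configuration
public class MyConfiguration {

    // Provider method: the return value is registered and managed as a Spring bean.
    @Bean
    public MyService myService() {
        return new MyService();
    }
}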
Spring Boot tries to launch all the Java configurations it sees enumerated in that text file. It tries to launch RabbitAutoConfiguration.class, which in turn provides beans for connecting to RabbitMQ, and so on. Of course, you don't want those beans in certain cases, so Spring Boot takes advantage of Spring Framework 4's @Conditional mechanism to conditionally register those beans only if certain conditions are met: is a type on the CLASSPATH, is a property exposed through the environment, has another bean of the same type already been defined by the user, etc. Spring Boot uses these to only create the RabbitMQ-specific beans if, for example, the dependencies that you would get from org.springframework.boot:spring-boot-starter-amqp are on the CLASSPATH. It also considers that the user may have provided a different implementation of RabbitTemplate in some other bean definition (be it XML or Java configuration), and uses that if it's there.
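This is not the actual RabbitAutoConfiguration source, but a simplified sketch of the pattern it uses:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(RabbitTemplate.class)            // only if the AMQP classes are on the CLASSPATH
public class SimplifiedRabbitConfiguration {

    @Bean
    @ConditionalOnMissingBean(RabbitTemplate.class)   // back off if the user defined their own
    public RabbitTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        return new RabbitTemplate(connectionFactory);
    }
}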
These java configuration classes are the same sort of Java configuration classes you would write without Spring Boot. BUT... WHY? 80% of the time, the auto-configuration that Spring Boot provides is going to be as good or better than the configuration you would write yourself. There are only so many ways to configure Spring MVC, Spring Data, Spring Batch, etc., and the wager you take using Spring Boot is that the leaders and engineers on those various projects can provide the most sensible 80%-case configuration that you probably don't care to write, anyway.
So, yes, you could use Spring Boot with existing applications, but you'd have to move those existing applications to Spring 4 (which is easy to do if you're using the spring-boot-starter-* dependencies) to take advantage of @Conditional. Spring Boot prefers: NO configuration, Java configuration, XML configuration, in that order.
If you have an existing application, I'd do the following:
Find out which dependencies you can remove from your Gradle/Maven build and have taken care of for you by the various spring-boot-starter-* dependencies.
Add @SpringBootApplication to a root component class. E.g., if your package is a.b.c, put a class Application in a.Application and annotate it with @SpringBootApplication.
You can run it as a Servlet 3 application or in an embedded servlet container. It might be easier to just run in a standard servlet container as you take baby steps. Go to http://start.spring.io and make sure to choose war in the packaging drop down. Copy the ServletInitializer class and the specification from the pom.xml to ensure that your application is a .war, not a .jar. Then, once everything works on Spring Boot, rm -rf the Initializer and then revert the Maven build to a simpler .jar using the Spring Boot plugin for extra win.
If your application has lots of existing configuration, import it using @Import(OldConfig.class) or @ImportResource("old-config.xml") on the a.Application configuration class. The auto-configuration will kick in, but it will see, for example, that you may have already defined a DataSource, so it'll plug that in in the right places. What I like to do now is just start the application up, see if everything's OK, then start removing code in my old Java or XML configuration. Don't remove your business code, but remove things related to turning on parts of Spring: anything that looks like @Enable..* or <...:annotation-driven/>. Restart and verify things still work. The goal is to let Spring Boot do as much of the heavy lifting as possible. Viewing the source is very handy here so you can see what Spring Boot will try to do for you. Spring Boot will log information on which @Conditional conditions evaluated to true for you. Simply provide -Ddebug=true when you start the application to see the output. You could also export DEBUG=true and then restart your IDE to run it, as long as the environment variable is visible in your shell.
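Combining the root component class and the legacy-configuration import described above, the starting point might look roughly like this (the package name and old-config.xml are placeholders):

package a;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

@SpringBootApplication
@ImportResource("classpath:old-config.xml")  // pull the legacy XML configuration into the context
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}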

JavaEE solution configuration best practices

We build 3-tier enterprise solutions that typically consist of several webapp and ejbjar modules that all talk to a DB and have several external integration points.
Each module typically needs its own configurations that can change over the solution's lifetime.
Deploying it becomes a nightmare because now we have 18 property files that must be remembered, copied over and configured, in addition to setting up data sources, queues, memory requirements, etc.
I'm hopeful but not optimistic that there can be a better way.
Some options we've considered/used, each with its pros and cons:
Use multiple Maven projects and continuous integration (e.g. Hudson or Jenkins) to build a configuration jar that includes all the property files for each environment (dev, qa, prod) and then bundle everything up as an EAR. But then things can't be easily changed in production when needed.
Put most of the settings in the DB and have a simple screen to modify it. Internally we can have a generic configuration service EJB that can read and modify the values. Each module can have a custom extended version that has specific getters and setters.
Version control all the property files then check it out on production and check it into a production branch after making changes.
With all of these you still need to configure data-sources and queues etc. in a container specific way :(
Consider binding a custom configuration object to JNDI, then looking that object up in your apps to configure them. Benefit: you can use a custom configuration object instead of a rather generic Map or Properties.
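A minimal sketch of the consuming side, assuming a custom AppConfig object has already been bound under a name such as java:comp/env/config/appConfig (both the type and the JNDI name are made up):

import java.io.Serializable;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Hypothetical typed configuration object that an admin (or deployment script) binds to JNDI.
class AppConfig implements Serializable {
    String streamUrl;
    String dbHost;
}

public final class ConfigLookup {

    public static AppConfig load() throws NamingException {
        // The JNDI name is just an example; use whatever name the object was bound under.
        return (AppConfig) new InitialContext().lookup("java:comp/env/config/appConfig");
    }
}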
Another way is to use JMX to configure the applications you need. Benefit: you can bind the objects you have to configure directly to the MBean server and then use well-known tools such as jconsole or VisualVM to configure the components of your application.
Both ways support dynamic reconfiguration of your applications at runtime. I would prefer using JMX.
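For instance, a standard MBean along these lines could be registered and then edited from jconsole or VisualVM (the names and the userLimit attribute are illustrative):

import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

// Standard MBean: the management interface must be named <ClassName>MBean.
interface RuntimeSettingsMBean {
    int getUserLimit();
    void setUserLimit(int userLimit);
}

public class RuntimeSettings implements RuntimeSettingsMBean {

    private volatile int userLimit = 123;

    public int getUserLimit() { return userLimit; }
    public void setUserLimit(int userLimit) { this.userLimit = userLimit; }

    public static void main(String[] args) throws Exception {
        RuntimeSettings settings = new RuntimeSettings();
        // Once registered, the userLimit attribute can be read and changed at
        // runtime from jconsole or VisualVM.
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(settings, new ObjectName("myapp:type=RuntimeSettings"));
        Thread.currentThread().join();  // keep the demo process alive
    }
}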
I've gone through several cycles of finding ways to do this. I still don't have a definite answer.
The last cycle ended up with a process based on properties files. The idea was that each server instance was configured with a single properties file that configured everything. That file was read by the startup scripts, to set memory parameters, by the app server, and by the application itself.
The key thing, though, was that this file was not managed directly. Rather, it was a product of the build process. We had a range of files for different purposes, kept in version control, and a build step which merged the appropriate ones. This lets you factor out commonalities that are shared along various axes.
For example, we had development, continuous integration, QA, UAT, staging, and production environments, each with its own database. Servers in different environments needed different database settings, but each server in a given environment used the same settings. So, there was something like a development-db.properties, qa-db.properties, and so on. In each environment, we had several kinds of servers - web servers, content management servers, batch process servers, etc. Each had JVM settings, for heap size and so on, that were different to other kinds of servers, but consistent between servers across environments. So, we had something like web-jvm.properties, cms-jvm.properties, batch-jvm.properties, and so on. We also had a way to have overrides for specific systems - production-cms-jvm.properties sort of thing. We also had a common.properties that set common properties, and sensible defaults which could be overridden where needed.
Our build process was actually a bit more complicated than just picking the right options from each set; we had a master file for each server in each environment which specified which other files to include. We allowed files to specify other files to include, so we could build a graph of imports to maximise reuse.
It ended up being quite complicated. Too complicated, I think. But it did work, and it did make it very, very easy to make changes affecting many servers in a controlled way. We even merged a set of input files from development, and another from operations, which contained sensitive information. It was a very flexible approach.
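In code terms, the merge step boils down to loading the files in order so that later files override earlier ones; a simplified sketch of that idea, using the file-name convention described above (the real process ran at build time and also followed an import graph):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public final class ConfigMerger {

    // Later files override earlier ones, mirroring common -> environment -> server-specific.
    public static Properties merge(String... files) throws IOException {
        Properties merged = new Properties();
        for (String file : files) {
            try (FileInputStream in = new FileInputStream(file)) {
                merged.load(in); // load() overwrites keys that already exist
            }
        }
        return merged;
    }

    public static void main(String[] args) throws IOException {
        Properties props = merge("common.properties", "qa-db.properties", "web-jvm.properties");
        props.store(System.out, "merged configuration");
    }
}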
I know this has already been answered and my answer is not necessarily generic, but here's my take on things:
Note, here I'm only considering system/resource properties, not application settings. In my view, application settings (such as a payment threshold) should be stored in a database, so that the system can be reconfigured without having to restart a service or cause downtime by re-deploying or re-reading a properties file.
For settings that impact on how different parts of a system connect with each other (such as web service endpoints, etc), I would make use of the JNDI tree.
Database connectivity and JMS connectivity would then be set-up using the Websphere console and can be managed by the Websphere administrators. These can also be created as JACL scripts which can be put into version control if necessary.
In addition to the JNDI resources, for additional properties, such as usernames for web service calls to a backend, etc, I would use Websphere "Name Space Bindings". These bindings can be edited using the Websphere console and accessed via JNDI using the "cell/persistent/mypassword" name.
So I could create the "mypassword" binding (a string), and the management for it falls to the Websphere admin (away from developer eyes or other people who should not have access to production systems), while the same EAR file can be used on dev, test, preproduction and production (which is preferable to having different EAR files for different systems, as it reduces the likelihood of other differences creeping in).
The Java code would then use a simple JNDI lookup (and possibly cache the value in memory).
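The lookup itself is a one-liner; here is a sketch with simple in-memory caching, using the "cell/persistent/mypassword" binding mentioned above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class NamespaceBindings {

    private static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    // Looks up a String name space binding, caching the value in memory.
    public static String get(String name) throws NamingException {
        String cached = CACHE.get(name);
        if (cached != null) {
            return cached;
        }
        String value = (String) new InitialContext().lookup(name);
        CACHE.put(name, value);
        return value;
    }
}

Usage would then be something like NamespaceBindings.get("cell/persistent/mypassword").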
Advantages over properties files:
Not having a "vulnerable" file that would need to be secured because system properties contain passwords.
Not having to add Java security policies to allow access to that file location
Advantages over database properties:
Not tied to having one database tied to an application server.
Hope that helps
Use multiple maven projects and continuous integration (eg. hudson or jenkins) to build a configuration jar that includes all the property files for each environment (dev, qa, prod) and then bundle everything up as an EAR. But then things can't be easily changed in production when needed.
I think the config should be in the database of the application instance. Your local machine config may be different to dev, and to QA, PROD, DR, etc.
What you need is a way of getting the config out of the database in a simple way.
I create a separate project with a provided dependency on Apache commons-configuration.
It has many ways of storing data, but I like databases, and the configuration lives in the database environment.
import javax.sql.DataSource;
import org.apache.commons.configuration.DatabaseConfiguration;

public class MYConfig extends DatabaseConfiguration {

    public MYConfig(DataSource datasource) {
        super(datasource, "TABLE_CONFIG", "PROP_KEY", "PROP_VALUE");
    }
}
Put most of the settings in the DB and have a simple screen to modify it. Internally we can have a generic configuration service EJB that can read and modify the values. Each module can have a custom extended version that has specific getters and setters.
Commons Configuration has a simple API; you may then write the GUI as you wish.
You can do the interface in any way you wish, or, as a quick win, have no interface at all.
Version control all the property files then check it out on production and check it into a production branch after making changes.
Version control is great. Add another DatabaseConfiguration using composition. The class you extend is the active config, and the composed one is the audit. There is another constructor that can take a version. Just overload the right methods to get the desired effect.
import javax.sql.DataSource;
import org.apache.commons.configuration.DatabaseConfiguration;

public class MYConfig extends DatabaseConfiguration {

    final DatabaseConfiguration audit;

    public MYConfig(DataSource datasource) {
        super(datasource, "TABLE_CONFIG", "PROP_KEY", "PROP_VALUE");
        audit = new DatabaseConfiguration(datasource, "TABLE_CONFIG_AUDIT", "PROP_KEY", "PROP_VALUE");
    }

    @Override
    public void addProperty(String key, Object value) {
        Object wasValue = super.getProperty(key);  // capture the previous value
        super.addProperty(key, value);
        audit.addProperty(key, wasValue);          // add version code
    }
}
http://commons.apache.org/proper/commons-configuration/
Use a simple database table (Section, Key, Value). Add "Version" if you need it, and wrap the entire thing in a simple ConfigurationService class with methods like getInt(String section, String key).
Not a lot of work, and it makes the application code very neat, and tweaking the configuration very easy.
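A bare-bones sketch of such a ConfigurationService, assuming a config table with section/key/value columns (table and column names are illustrative; the DataSource wiring is left out):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConfigurationService {

    private final DataSource dataSource;

    public ConfigurationService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int getInt(String section, String key) throws SQLException {
        return Integer.parseInt(getString(section, key));
    }

    public String getString(String section, String key) throws SQLException {
        String sql = "SELECT cfg_value FROM config WHERE cfg_section = ? AND cfg_key = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, section);
            stmt.setString(2, key);
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalArgumentException("No config value for " + section + "/" + key);
                }
                return rs.getString(1);
            }
        }
    }
}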
Interesting alternative config file format: write a Scala trait. Your config file can then just be a Scala file that you compile and evaluate when the server starts.
http://robey.lag.net//2012/03/26/why-config.html

Environment configuration management?

There is a team developing an enterprise application with a web interface: Java, Tomcat, Struts, MySQL, REST and LDAP calls to external services, and so on.
All configuration is stored in context.xml, a Tomcat-specific file that contains variables available via the servlet context and objects available via JNDI resources.
Developers have no access to production and QA platforms (as it should be) so context.xml is managed by support/sysadmin team.
Each release has config-notes.txt with instructions like:
please add "userLimit" variable to context.xml with value "123", rename "DB" resource to "fooDB" and add new database connection to our new server (you should know url and credentials) named "barDb"
That is not good.
Here is my idea how to solve it.
Each release has a special config file with required variable names, descriptions and default values (if any): even web.xml could be used.
Here is pseudo example:
foo=bar
userLimit=123
barDb=SET_MANUAL(connection to our new server)
And there is a special tool that support team runs against deployment artifact.
Look at it (text after ">" is typed by support guy):
Config for version 123 of artifact "mySever".
Enter your config file location> /opt/tomcat/context/myServer.xml
+"foo" value "bar" -- already exists and would not be changed
+"userLimit" value "123" -- adding new
+"barDb"(connection to our new server) please type> jdbc:mysql:host/db
Saving your file as /opt/tomcat/context/myServer.xml
Your environment is now configured to run myServer-123.
That will give us ability to deploy application on any environment and update configuration if needed.
Do you like my idea? What do you use for environment configuration management? Are there ready-to-use tools for that?
There are plenty of different strategies. All of them are good; it depends on what suits you best.
Build a single artifact and deploy configs to a separate location. The artifact could have placeholder variables and, on deployment, the config could be read in. Have a look at Spring's property placeholder (see the sketch after this list). It works fantastically for webapps that use Spring and doesn't involve getting ops involved.
Have an externalised property config that lives outside of the webapp. Keep the location constant and always read from the property config. Update the config at any stage and a restart will pick up the new values.
If you are modifying the environment (i.e. application server being used or user/group permissions) look at using the above methods with puppet or chef. Also have a look at managing your config files with these tools.
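For the property placeholder / externalised file options mentioned above, a minimal sketch (the file path and the db.url property are just examples):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

// Reads placeholders from a file kept at a constant path outside the webapp.
@Configuration
@PropertySource("file:/etc/myapp/app.properties")   // example location
public class ExternalConfig {

    // Resolved from the external properties file at startup; a restart picks up new values.
    @Value("${db.url}")
    private String dbUrl;

    // Required so that ${...} placeholders are resolved against the @PropertySource.
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}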
As for the whole question of whether devs should be given access to prod, it really depends on the company. For smaller companies where the dev is called every time there is a problem, regardless of whether that problem is server or application related, obviously devs require access to the box.
DevOps is not about giving devs access to the box; it's about giving devs the ability to use infrastructure as a service, the ability to spawn new instances with application X and config Y, and to push their applications into environments without ops. In a large company like ours, what it allows is the ability for devs to manage the application they put on a server. Operations shouldn't care what version is on there; that's our job. Their job is all about keeping the server up and running.
I strongly disagree with your remark that devs shouldn't have access to prod or staging environments. It's this kind of attitude that leads to teams working against each other instead of with each other.
But to answer your question: you are thinking about what is typically called continuous integration ( http://en.wikipedia.org/wiki/Continuous_integration ) and moving towards devops. Ideally you should aim for the magic "1 click automated deployment". The guys from Flickr wrote a lot of blogs (and books) about how they achieved that.
Anyhow... there are a lot of tools around that sector. You may want to have a look at things like Hudson/Jenkins or Puppet/Chef.
