Propagating configuration within the WAS cluster by means of MOM - java

I am developing an application that runs in a cluster environment on WebSphere AS. I am using several nodes, and sometimes I would like to change configuration settings on the fly and propagate them to all nodes within the cluster. I don't want to hold the config in the DB, or at least I would like to cache it at the node level and trigger a config refresh action that forces each node to refresh its config from some common ground (i.e. DB or network drive)
to avoid constant round-trips to the config storage.
Moreover, some configuration can't be stored in the DB; e.g. the log level needs to be applied to the logger object in each node separately.
I was thinking about using JMS Topics and a publish/subscribe approach to achieve that goal.
The idea is that each node would subscribe to the topic, and no matter which node initiates the config change, the modification would be propagated to all nodes within the cluster.
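Roughly, I imagine something like this minimal publisher (the JNDI names below are made up); each node would then register a MessageListener, e.g. an MDB on the same topic, that reloads its local config:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class ConfigChangePublisher {

    // Publishes a "config changed" event; every subscribed node reloads the given key.
    public void publishRefresh(String configKey) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConfigCF");   // assumed JNDI name
        Topic topic = (Topic) ctx.lookup("jms/ConfigTopic");                     // assumed JNDI name

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage("refresh");
            message.setStringProperty("configKey", configKey);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}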
Has anyone ever tried to do that in WAS, and are there any obstacles with this approach? If there are, or if you have any other suggestion on how to solve this problem, I would be very grateful for your help.
Tx in advance,
Marcin

Here are a few options to consider as alternatives to JMS -
Use Java EE environment entries. These are scoped to the application, and WAS will automatically propagate any changes to all servers on which the application is deployed. This is a good approach, since it is the standard Java EE approach to application configuration, provided it is robust enough for your use case (a small lookup sketch follows after these options).
Use a WebSphere Shared Library. This allows you to link your applications to static files external to your application (i.e. on the filesystem), such that they are available on your classpath. Although these files are located on the node file systems, there is a way that you can place these files in WebSphere's centralized configuration repository such that they are automatically propagated to all WAS nodes. For more details on this, see this answer.
Both of these options are optimized for static configuration; in other words, configuration settings that are intended to be set at assembly time or deployment time, or to be changed by system administrators. They are not typically used for values that change frequently, nor are they generally changed programmatically at runtime. WAS does, however, allow your applications to pick up changes to these settings in a rolling fashion, so that no application downtime is required.
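As a minimal illustration of the environment-entry option (the entry name and type below are made up), an entry declared in web.xml or ejb-jar.xml can be read through JNDI:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EnvEntryExample {

    // Assumes an <env-entry> named "maxRetries" of type java.lang.Integer is declared
    // in the deployment descriptor; the name and type are illustrative.
    public int readMaxRetries() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (Integer) ctx.lookup("java:comp/env/maxRetries");
    }
}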

Currently we solved the problem with perhaps not the prettiest approach, but the simplest one. Since we are using only two nodes, we can enter the web interface of a specific node and modify settings per node. It may not be very pretty, but for now it is the easiest way. The config is stored in the DB, and we are planning to trigger a config reload in each node and change the log level per node as well.

Related

Tomcat directory to save information across restarts and redeployment

I have the requirement to save some information across restarts and redeployments, i.e. write it to a file when Tomcat is shut down and restore it from the file when it is started. It's similar to the way Tomcat saves session information across restarts (see Persistence Across Restarts).
What's the correct directory for such a file?
What's the API to get the path to this directory?
I'm looking for a solution that works on different operating systems, works across redeployments, and does not require any setup or configuration tasks. It should be as simple as Tomcat's session persistence, which just works without any configuration.
Use a ServletContextListener to handle your backup plan.
ServletContextListener gives you contextDestroyed(..) and contextInitialized(..) callbacks.
For building the path when storing the file inside the Tomcat server, you can retrieve a base directory with getServletContext().getRealPath("/").
Note that getRealPath("/") returns the web application's root directory on the filesystem; adjust the path to whatever location suits you best.
Let me know whether this helps in your scenario.
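A minimal sketch of such a listener, assuming a properties file stored under the webapp's real path (which may not survive every kind of redeployment, so treat the location as an example). Register it in web.xml as a <listener>, or with @WebListener on Servlet 3.0+:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class StatePersistenceListener implements ServletContextListener {

    private static final String FILE_NAME = "app-state.properties";
    private final Properties state = new Properties();

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        File file = new File(sce.getServletContext().getRealPath("/"), FILE_NAME);
        if (file.exists()) {
            try (InputStream in = new FileInputStream(file)) {
                state.load(in);                       // restore state saved on the last shutdown
            } catch (IOException e) {
                sce.getServletContext().log("Could not restore state", e);
            }
        }
        sce.getServletContext().setAttribute("appState", state);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        File file = new File(sce.getServletContext().getRealPath("/"), FILE_NAME);
        try (OutputStream out = new FileOutputStream(file)) {
            state.store(out, "saved on shutdown");    // persist state before the context goes away
        } catch (IOException e) {
            sce.getServletContext().log("Could not save state", e);
        }
    }
}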

hazelcast: changing configuration programmatically doesn't work

I am unable to configure/change the Map properties (declared as part of the Hazelcast config in Spring) after the Hazelcast instance has started up. I am using Hazelcast integrated with Spring as the Hibernate second-level cache. I am trying to configure the properties of the map (like TTL) in an init method (annotated with @PostConstruct) which is called during Spring bean initialization.
There is not enough documentation; if there is, please point me to it.
Meanwhile I went through this post and found Hazelcast MapStoreConfig ignored.
But how does the Management Center change the config? Will it recreate a new instance again?
Is a Hazelcast instance lightweight, unlike a session factory? I assume not.
Please share your thoughts.
This is not yet supported. At the moment, JCache is the only data structure that can be configured on the fly.
However, you will most probably be able to destroy a proxy (a DistributedObject such as IMap, IQueue, ...), reconfigure it, and recreate it. At the time of recreation, though, you must make sure that every node sees the same configuration, for example by storing the configuration itself inside an IMap or something like that. You'll have to do some wrapping on your own.
PS: This is not officially supported and is an implementation detail that might change in later versions!
PPS: This feature has been on the roadmap for quite some time but hasn't made it into a release version yet; it is, however, still expected to get full support at some point in the future.
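A rough sketch of that destroy/reconfigure/recreate idea (Hazelcast 3.x package names; whether the new MapConfig is actually honoured on recreation is version-dependent and, as noted, not officially supported):

import com.hazelcast.config.MapConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class MapReconfigurer {

    // Destroys the named map, registers a new MapConfig with a different TTL,
    // and recreates the proxy. Every node must apply the same config.
    public IMap<Object, Object> recreateWithTtl(HazelcastInstance hz, String mapName, int ttlSeconds) {
        IMap<Object, Object> map = hz.getMap(mapName);
        map.destroy();                                  // drops the proxy and its data

        MapConfig mapConfig = new MapConfig(mapName);
        mapConfig.setTimeToLiveSeconds(ttlSeconds);
        hz.getConfig().addMapConfig(mapConfig);         // must happen identically on all nodes

        return hz.getMap(mapName);                      // proxy is recreated on next access
    }
}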

Elastic Beanstalk host specific application configuration

I have a Java web application I'm trying to refactor to work with the Elastic Beanstalk way of doing things. The application will be load balanced and have (for the moment) 2 hosts without taking any advantage of auto-scaling. The issue is that there are slight configuration differences between the nodes; in particular, authenticating to certain web services is done with different credentials to effectively double throughput, as there are per-account throttling restrictions.
Currently my application treats configuration separately from the archive so its relatively simple on fixed hosts where the configuration remains in a relatively static file path and deployment of the war files is all that is required.
Going down the Elastic Beanstalk path, I think I'll have to include all the configuration options inside the deployable artifact and somehow get the application to load the relevant host-specific configuration. The problem I have is deciding which configuration to load inside the application. I could use a physical aspect of the host, e.g. an IP address or instance ID, to load the relevant config:
/config-<InstanceID-1>.properties
/config-<InstanceID-2>.properties
This approach is totally flawed, given that if I create an entirely new environment in Beanstalk it would require me to update all the configuration files in the project to reflect the new instance IDs.
Has anyone come up with a good way of doing this in beanstalk?
If you have to have two different types of nodes, then you should consider an SOA architecture for your application.
Create two environments, environment-a and environment-b. Either set all properties for the environments through the AWS web console, or reuse your existing configuration files and just set the specific configuration file name for each environment.
#environment-a
PARAM1 = config-environment-a.properties
#environment-b
PARAM1 = config-environment-b.properties
You share the same code base and push to either environment with the -e modifier.
#push to environment-a
$ git aws.push -e environment-a
#push to environment-b
$ git aws.push -e environment-b
You can also create a git alias to push to both environments at the same time :-)
Now, the major benefit of the SOA approach is that you can scale and manage those environments separately. It is simple and elegant.
If you want something more complex and less elegant, use a simple token distribution service. On every environment initialization, send two messages to Amazon SQS, each containing a configuration name. Then pull those messages from SQS; each instance will get exactly one from the queue. Whichever configuration name the message contains, configure your node with that configuration. :-)
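A sketch of that token-distribution idea with the AWS SDK for Java; the queue URL and configuration names below are placeholders:

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import java.util.List;

public class ConfigTokenClient {

    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/config-tokens"; // placeholder URL
    private final AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

    // Run once when the environment is initialized: one message per node.
    public void seedTokens() {
        sqs.sendMessage(QUEUE_URL, "config-environment-a.properties");
        sqs.sendMessage(QUEUE_URL, "config-environment-b.properties");
    }

    // Each instance pulls exactly one token and configures itself with it.
    public String claimToken() {
        ReceiveMessageRequest request = new ReceiveMessageRequest(QUEUE_URL).withMaxNumberOfMessages(1);
        List<Message> messages = sqs.receiveMessage(request).getMessages();
        if (messages.isEmpty()) {
            throw new IllegalStateException("No configuration token available");
        }
        Message message = messages.get(0);
        sqs.deleteMessage(QUEUE_URL, message.getReceiptHandle()); // so no other node claims it
        return message.getBody();
    }
}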
Hope it helps.
Update after #vcetinick comment:
All still seems rather complex for what should be pretty simple.
That's why I suggested separate environments. You can build your own registration service: when a node comes up, it registers with the service and in return gets its configuration params. You keep the available configurations in a persistent DB. If a node dies and the service gets another registration request, the registration service can quickly check all registered nodes (because they all left their info during registration), and if any of the nodes is not responding, its configuration data is reassigned to the new node. And now you have a single point of failure on your hands :-)
Again, there might be other ways to approach that problem.

JavaEE solution configuration best practices

We build 3-tier enterprise solutions that typically consist of several webapp and EJB-JAR modules that all talk to a DB and have several external integration points.
Each module typically needs its own configurations that can change over the solution's life time.
Deploying it becomes a nightmare, because now we have 18 property files that someone must remember to copy over and configure, in addition to setting up data sources, queues, memory requirements, etc.
I'm hopeful but not optimistic that there can be a better way.
Some options we've considered/used, each with its pros and cons:
Use multiple maven projects and continuous integration (eg. hudson or jenkins) to build a configuration jar that includes all the property files for each environment (dev, qa, prod) and then bundle everything up as an EAR. But then things can't be easily changed in production when needed.
Put most of the settings in the DB and have a simple screen to modify it. Internally we can have a generic configuration service EJB that can read and modify the values. Each module can have a custom extended version that has specific getters and setters.
Version control all the property files then check it out on production and check it into a production branch after making changes.
With all of these you still need to configure data-sources and queues etc. in a container specific way :(
Consider binding a custom configuration object to JNDI, then looking up this object in your apps to configure them. Benefit: you can use a custom configuration object instead of a rather generic Map or Properties.
Another way is to use JMX to configure the applications you need. Benefit: you can bind the objects you have to configure directly to the MBean server and then use well-known tools such as jconsole or visualvm to configure components of your application.
Both ways support dynamic reconfiguration of your applications at runtime. I would prefer using JMX.
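To illustrate the JMX option, a minimal standard MBean (the object name and attribute are made up); once registered, the value can be changed at runtime from jconsole or visualvm:

// AppConfigMBean.java -- the management interface
public interface AppConfigMBean {
    int getUserLimit();
    void setUserLimit(int userLimit);
}

// AppConfig.java -- the implementation, registered with the platform MBean server
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

public class AppConfig implements AppConfigMBean {

    private volatile int userLimit = 100;

    public int getUserLimit() { return userLimit; }
    public void setUserLimit(int userLimit) { this.userLimit = userLimit; }

    public static void register() throws Exception {
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(new AppConfig(), new ObjectName("myapp:type=AppConfig"));
    }
}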
I've gone through several cycles of finding ways to do this. I still don't have a definite answer.
The last cycle ended up with a process based on properties files. The idea was that each server instance was configured with a single properties file that configured everything. That file was read by the startup scripts, to set memory parameters, by the app server, and by the application itself.
The key thing, though, was that this file was not managed directly. Rather, it was a product of the build process. We had a range of files for different purposes, kept in version control, and a build step which merged the appropriate ones. This lets you factor out commonalities that are shared along various axes.
For example, we had development, continuous integration, QA, UAT, staging, and production environments, each with its own database. Servers in different environments needed different database settings, but each server in a given environment used the same settings. So, there was something like a development-db.properties, qa-db.properties, and so on. In each environment, we had several kinds of servers - web servers, content management servers, batch process servers, etc. Each had JVM settings, for heap size and so on, that were different to other kinds of servers, but consistent between servers across environments. So, we had something like web-jvm.properties, cms-jvm.properties, batch-jvm.properties, and so on. We also had a way to have overrides for specific systems - production-cms-jvm.properties sort of thing. We also had a common.properties that set common properties, and sensible defaults which could be overridden where needed.
Our build process was actually a bit more complicated than just picking the right options from each set; we had a master file for each server in each environment which specified which other files to include. We allowed files to specify other files to include, so we could build a graph of imports to maximise reuse.
It ended up being quite complicated. Too complicated, I think. But it did work, and it did make it very, very easy to make changes affecting many servers in a controlled way. We even merged a set of input files from development, and another from operations, which contained sensitive information. It was a very flexible approach.
I know this has already been answered and my answer is not necessarily generic, but here's my take on things:
Note: here I'm only considering system/resource properties, not application settings. In my view, application settings (such as a payment threshold) should be stored in a database, so that the system can be reconfigured without having to restart a service or cause downtime by re-deploying or re-reading a properties file.
For settings that impact on how different parts of a system connect with each other (such as web service endpoints, etc), I would make use of the JNDI tree.
Database connectivity and JMS connectivity would then be set up using the WebSphere console and can be managed by the WebSphere administrators. These can also be created as JACL scripts, which can be put into version control if necessary.
In addition to the JNDI resources, for additional properties, such as usernames for web service calls to a backend, etc, I would use Websphere "Name Space Bindings". These bindings can be edited using the Websphere console and accessed via JNDI using the "cell/persistent/mypassword" name.
So I could create the "mypassword" binding (a string), and the management of it falls to the WebSphere admin (away from developer eyes or other people who should not have access to production systems), while the same EAR file can be used on dev, test, preproduction and production (which is preferable to having different EAR files for different systems, as the likelihood of other differences creeping in is reduced).
The Java code would then use a simple JNDI lookup (and possibly cache the value in memory).
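A sketch of that lookup with a naive in-memory cache; the binding name is the one from the example above:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class NameSpaceBindingReader {

    private volatile String cached;

    // Looks up the WebSphere string name space binding and caches it in memory.
    public String getMyPassword() throws NamingException {
        if (cached == null) {
            cached = (String) new InitialContext().lookup("cell/persistent/mypassword");
        }
        return cached;
    }
}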
Advantages over properties files:
Not having a "vulnerable" file that would need to be secured because system properties contain passwords.
Not having to add Java security policies to allow access to that file location
Advantages over database properties:
Not being tied to a database that is bound to a particular application server.
Hope that helps
Use multiple maven projects and continuous integration (eg. hudson or jenkins) to build a configuration jar that includes all the property files for each environment (dev, qa, prod) and then bundle everything up as an EAR. But then things can't be easily changed in production when needed.
I think the config should be in the database of the application instance. Your local machine config may be different to dev, and to QA, PROD, DR, etc.
What you need is a way of getting the config out of the database in a simple way.
I create a separate project with a provided dependency on Apache Commons Configuration.
It has many ways of storing data, but I like databases, so the configuration lives in the database environment.
import javax.sql.DataSource;
import org.apache.commons.configuration.DatabaseConfiguration;

public class MYConfig extends DatabaseConfiguration {

    public MYConfig(DataSource datasource) {
        super(datasource, "TABLE_CONFIG", "PROP_KEY", "PROP_VALUE");
    }
}
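Used roughly like this (the DataSource would come from JNDI or your connection pool; the key name is illustrative):

MYConfig config = new MYConfig(dataSource);
int userLimit = config.getInt("user.limit", 100);  // getInt with a default is inherited from AbstractConfiguration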
Put most of the settings in the DB and have a simple screen to modify it. Internally we can have a generic configuration service EJB that can read and modify the values. Each module can have a custom extended version that has specific getters and setters.
Commons Configuration has a simple API, so you can then write the GUI as you wish.
You can build the interface any way you want, or, as a quick win, have no interface at all.
Version control all the property files then check it out on production and check it into a production branch after making changes.
Version control is great. Add another DatabaseConfiguration using composition. The class you extend is the active config, with the composed one being the audit. There is another constructor that can take a version. Just override the right methods to get the desired effect.
import javax.sql.DataSource;
import org.apache.commons.configuration.DatabaseConfiguration;

public class MYConfig extends DatabaseConfiguration {

    // Composed configuration that records previous values as an audit trail
    private final DatabaseConfiguration audit;

    public MYConfig(DataSource datasource) {
        super(datasource, "TABLE_CONFIG", "PROP_KEY", "PROP_VALUE");
        audit = new DatabaseConfiguration(datasource, "TABLE_CONFIG_AUDIT", "PROP_KEY", "PROP_VALUE");
    }

    @Override
    public void addProperty(String key, Object value) {
        Object wasValue = super.getProperty(key);
        super.addProperty(key, value);
        audit.addProperty(key, wasValue); // record the previous value (add version code here)
    }
}
http://commons.apache.org/proper/commons-configuration/
Use a simple database table (Section, Key, Value). Add "Version" if you need it, and wrap the entire thing in a simple ConfigurationService class with methods like getInt(String section, String key), as sketched below.
It's not a lot of work, and it makes the application code very neat and tweaking the configuration very easy.
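A minimal sketch of such a service, assuming a plain JDBC DataSource; the table and column names are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConfigurationService {

    private final DataSource dataSource;

    public ConfigurationService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Reads one value from CONFIG(CONFIG_SECTION, CONFIG_KEY, CONFIG_VALUE); names are illustrative.
    public int getInt(String section, String key) {
        String sql = "SELECT CONFIG_VALUE FROM CONFIG WHERE CONFIG_SECTION = ? AND CONFIG_KEY = ?";
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, section);
            statement.setString(2, key);
            try (ResultSet rs = statement.executeQuery()) {
                if (!rs.next()) {
                    throw new IllegalArgumentException("Missing config: " + section + "/" + key);
                }
                return Integer.parseInt(rs.getString(1));
            }
        } catch (SQLException e) {
            throw new IllegalStateException("Could not read config: " + section + "/" + key, e);
        }
    }
}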
An interesting alternative config file format: write a Scala trait. Your config file can then just be a Scala file that you compile and evaluate when the server starts.
http://robey.lag.net//2012/03/26/why-config.html

Environment configuration management?

There is a team that develops an enterprise application with a web interface: Java, Tomcat, Struts, MySQL, REST and LDAP calls to external services, and so on.
All configuration is stored in context.xml, a Tomcat-specific file that contains variables available via the servlet context and objects available via JNDI resources.
Developers have no access to the production and QA platforms (as it should be), so context.xml is managed by the support/sysadmin team.
Each release has config-notes.txt with instructions like:
please add "userLimit" variable to context.xml with value "123", rename "DB" resource to "fooDB" and add new database connection to our new server (you should know url and credentials) named "barDb"
That is not good.
Here is my idea how to solve it.
Each release has a special config file with the required variable names, descriptions and default values (if any); even web.xml could be used.
Here is pseudo example:
foo=bar
userLimit=123
barDb=SET_MANUAL(connection to our new server)
And there is a special tool that the support team runs against the deployment artifact.
It looks like this (text after ">" is typed by the support guy):
Config for version 123 of artifact "myServer".
Enter your config file location> /opt/tomcat/context/myServer.xml
+"foo" value "bar" -- already exists and would not be changed
+"userLimit" value "123" -- adding new
+"barDb"(connection to our new server) please type> jdbc:mysql:host/db
Saving your file as /opt/tomcat/context/myServer.xml
Your environment is now configured to run myServer-123.
That will give us the ability to deploy the application on any environment and update the configuration if needed.
Do you like my idea? What do you use for environment configuration management? Are there ready-to-use tools for that?
There are plenty of different strategies. All of them are good; which one is right depends on what suits you best.
Build a single artifact and deploy configs to a separate location. The artifact could have placeholder variables and, on deployment, the config could be read in. Have a look at Spring's property placeholder (a minimal sketch follows after these options). It works fantastically for webapps that use Spring and doesn't require getting ops involved.
Have an externalised property config that lives outside of the webapp. Keep the location constant and always read from the property config. Update the config at any stage, and a restart will pick up the new values.
If you are modifying the environment (i.e. the application server being used, or user/group permissions), look at using the above methods with Puppet or Chef. Also have a look at managing your config files with these tools.
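To illustrate the property-placeholder option from the first strategy above, a minimal Java-config sketch; the file location and property name are placeholders that ops could point at an externalised config:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
@PropertySource("file:${config.dir}/myapp.properties")   // config.dir set per environment, e.g. as a system property
public class PlaceholderConfig {

    // Resolves ${...} placeholders against the registered property sources.
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Value("${userLimit}")
    private int userLimit;
}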
As for whether devs should be given access to prod, it really depends on the company. For smaller companies, where the dev is called every time there is a problem, regardless of whether that problem is server or application related, obviously devs require access to the box.
DevOps is not about giving devs access to the box; it's about giving devs the ability to use infrastructure as a service, the ability to spawn new instances with application X and config Y, and to push their applications into environments without ops. In a large company like ours, what it allows is for devs to manage the applications they put on a server. Operations shouldn't care what version is on there; that's our job. Their job is all about keeping the server up and running.
I strongly disagree with your remark that devs shouldn't have access to prod or staging environments. It's this kind of attitude that leads to teams working against each other instead of with each other.
But to answer your question: you are thinking about what is typically called continuous integration ( http://en.wikipedia.org/wiki/Continuous_integration ) and moving towards devops. Ideally you should aim for the magic "1 click automated deployment". The guys from Flickr wrote a lot of blogs (and books) about how they achieved that.
Anyhow, there's a lot of tooling around that sector. You may want to have a look at things like Hudson/Jenkins or Puppet/Chef.
