Passing env-specific properties into a webapp - java

We are looking at deployment options for our webapp's continuous integration system. We are building a single .war file. We need to deploy it across several different envs (e.g. DEV, QA, STAGE, etc.). AFAIK, there are two ways to pass in the env-specific properties:
First, use the -D option when starting Tomcat:
-Denv=DEV
This requires us to customize the catalina.sh script for every env.
Second, set an environment variable before launching Tomcat:
export env=DEV;
This requires us to tweak the deployment script for each env, and it is platform dependent (e.g. on Windows you'd have to do set env=DEV).
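In either case, the application-side code that reads the value is nearly identical. A minimal sketch, assuming the name env is used for both (the DEV fallback is just an example default):

// Resolve the environment name: the -D system property wins,
// then the OS environment variable, then a local default.
public final class EnvName {
    public static String get() {
        String env = System.getProperty("env");   // set via -Denv=DEV
        if (env == null || env.isEmpty()) {
            env = System.getenv("env");           // set via export env=DEV (or set env=DEV)
        }
        return (env != null) ? env : "DEV";
    }
}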
Can anyone tell me which of these two options is better? Or are there other, better options?

We have a web app which is deployed into 14 different environments. To manage this, we create a unique build for each environment.
We are using Maven as the build system, so each environment has a unique profile. The profile references a properties file used for filtering, which contains the configuration that varies across environments. This information is tokenized into Spring context files, web.xml, weblogic.xml, etc. (We do not filter source files, as this gets ugly fast.)
The second piece of this is our Jenkins CI server. We have a job for each environment which references the corresponding profile. This points to a well-known tag name in our Subversion repo. So the flow goes like this:
Development happens and code is committed to trunk.
When we need to build for some env, we create a tag called "latest" (removing the old copy if it exists). At the same time, we also create a tag with a date-time stamp, but this is optional.
Kick off the appropriate Jenkins build. This means we are not building off some developer's workstation, and the build is already configured, so there is nothing to remember.
Pull the artifact to deploy right out of Jenkins.
I've seen application configuration maintained within the app server itself, but unless you have something like glu to manage things, it quickly becomes a nightmare to maintain. This is especially true if you have different teams for development and ops.

What's a practicable way for automated configuration, versioning and deployment of Maven-based java applications? [closed]

We're maintaining a medium-sized code base consolidated into a single multi(multi)-module Maven project. Overall the whole build has up to ten output artifacts for different system components (web applications (.war), utilities (.jar), etc.).
Our deployment process so far is based on simple bash scripts that build the requested artifacts via Maven, tag the SCM repository with information about the artifacts, the target environment, and the current build timestamp, and then upload the artifacts to the chosen environment's application servers and issue commands to restart the running daemons.
Configuration of the built artifacts happens by means of Maven profiles and resource filtering, so our builds are specific to the target environment.
This process has served us well, but for various reasons I would like to move towards a more sophisticated approach. In particular, I would like to get rid of the bash scripts.
So what are the best practices regarding configuration, versioning and deployment of Maven-based Java applications?
Should our builds be environment agnostic and the configuration be done via config files on the target systems? If so, how would a developer ensure that new configuration options are included in the deployed config files on the various application servers?
Should we use Maven versioning, a.k.a. the Maven Release Plugin, to tag the various builds?
Is it a good idea to configure a CI server like Jenkins or TeamCity to build and optionally deploy our artifacts for us?
I like to think of there being two problem spaces:
building artifacts (ideally environment agnostic, as that means QA can take a hash of the artifact, run their tests on that artifact, and, when it comes time to deploy, verify the hash and know it's been QA'd; see the sketch after this list. If your build produces different artifacts depending on whether it is for QA's env, the staging env, or the production env, then you have to do more work to ensure the artifact going into production has been tested by QA and staged in staging)
shipping artifacts into an environment. Where that environment requires configuration of the artifacts, the shipping process should include that configuration, either by putting the appropriate configuration files in the target environment and letting the artifacts pick that up, or by cracking open the artifacts, configuring them, and sealing them back up (but in a repeatable and deterministic fashion)
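On the first point, the hash check that makes environment-agnostic artifacts so valuable is trivial to implement. A minimal sketch using only the JDK:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Computes the SHA-1 of a build artifact so QA's tested hash can be
// compared against the artifact being promoted to production.
public final class ArtifactHash {
    public static String sha1(Path artifact) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-1");
        try (InputStream in = Files.newInputStream(artifact)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}

If the hash of the artifact pulled from the repository matches the one QA recorded, it is byte-for-byte the artifact that was tested.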
Maven is designed for the first problem space. "The Maven way" is all about producing environment-agnostic build artifacts and publishing them to a binary artifact store. If you look at the Maven lifecycles, you will see that the phases stop after the artifact is deployed to the Maven repository (a binary artifact store). In short, Maven sees its job as done at that point. Additionally, there are lifecycle phases for unit testing and integration testing, both of which should be possible with an environment-agnostic artifact, but that is not the full set of testing that you require... Rather, to complete your testing you will need to actually deploy the built artifacts into a real environment.
Many people try to hijack Maven to move beyond its goal (myself included). For example you have the cargo-maven-plugin and the ship-maven-plugin, which touch on aspects beyond the Maven end game (i.e. after the artifact gets to the Maven repository). Of these, I feel personally that the ship-maven-plugin (which I wrote, hence my previous "myself included") is closest to use "after Maven", because by default it is designed to operate not on the -SNAPSHOT version of the project that you have checked out on disk, but rather on a release version of the same project that it pulls from the remote repository, e.g.
mvn ship:ship -DshipVersion=2.5.1
IMO, Cargo is aimed at use around the integration-test phase of the lifecycle, but again, you can hijack it for other purposes.
If you are producing shrink-wrapped software, i.e. the kind that a user buys and installs on their system, the installer program itself is designed to configure the application for the end user's environment. It is fine to have the Maven build produce the installer, because the actual installer is (at least somewhat) environment agnostic. OK, it may be a Microsoft Windows-only installer, or a Linux-only installer, but it does not care which user's machine it gets installed on.
Nowadays, though, we tend to concentrate more on Software as a Service, so we are deploying the software onto servers that we control. It becomes more tempting to go over to the "Maven dark side", where build profiles are used to tweak the internal configuration of the build artifacts (after all, we only have three environments we deploy to), and we are moving fast, so we don't want to take the time to make the application pick up the environment-specific configuration from outside the built artifact (sound familiar?). The reason I call this the dark side is that you really are fighting the way Maven wants to work... You are always wondering if the jar in the local repository was built with a different profile active, so you end up having to do full clean builds all the time. When it comes time to move from QA to staging, or from staging to production, you need to do a full build of the software... and all the unit and integration tests end up being run again (or you end up skipping them, and in turn skipping the sanity checks they may be providing on the artifacts they are building), so in effect you are making life harder and more complex... just for the sake of putting a few profiles into the Maven pom.xml... Just think: if you had followed the Maven way, you'd just take the artifact from the repository and move it along the different environments, unmodified, unmolested, and with MD5, SHA1 (and hopefully GPG) signatures to prove that it is the same artifact.
So, you ask, how do we code the shipping to production...
Well there are a number of ways to tackle this problem. All of them share a similar set of core principles, namely
keep the recipe for shipping to an environment in a source control system
the recipe should ideally have two parts: an environment-agnostic part and an environment-specific part.
You can use good old bash scripts, or you can use more "modern" tools such as Chef and Puppet, which are designed for this second problem space.
Recommendations
You should use the right tool for the right job.
If it were me, here is what I would do:
Cut releases with the Maven Release Plugin
The built artifacts should always be environment agnostic.
The built artifacts should contain "sensible defaults" for all configuration options. In other words, they should either blow up fast if a required configuration option with no sensible default is missing, or they should perform in a sensible way if an optional option is unspecified. An example of a required configuration option might be the database connection details (unless the app is happy to run with an in-memory DB). See the sketch after this list.
Pick a side in the Chef vs. Puppet war (it doesn't matter which side, and you can change sides if you want; if you have an Ant mindset, Chef may suit you better, if you like dependency-management magic, Puppet may suit you better)
Developers should have a say in defining the Chef/Puppet scripts for deployment, at least the environment-agnostic part of those scripts.
Operations should define the production-environment-specific details of the Chef/Puppet deployment
Keep all those scripts in SCM.
Use Jenkins, or any CI, to automate as many of the steps as possible. The Promoted Builds plugin for Jenkins is your friend.
Your end game is that every commit, provided it passes all required tests, *could* get deployed into production automatically (or perhaps with the gate of a person saying "go ahead")... note I am not saying that you actually do this for every commit, only that you could.
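To illustrate the "sensible defaults" recommendation above, here is a minimal sketch; the class and property names are hypothetical:

import java.util.Properties;

// Fail fast at startup on required options with no sensible default;
// fall back quietly for optional ones.
public final class AppConfig {
    private final Properties props;

    public AppConfig(Properties props) {
        this.props = props;
    }

    // Required option: blow up fast if it is missing.
    public String required(String key) {
        String value = props.getProperty(key);
        if (value == null || value.trim().isEmpty()) {
            throw new IllegalStateException("Missing required configuration option: " + key);
        }
        return value;
    }

    // Optional option: perform sensibly when unspecified.
    public String optional(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}

At startup the app would call something like config.required("jdbc.url"), so a misconfigured environment fails immediately rather than at 3 a.m. under load.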
What I have used in the past, and which worked well, is Apache Karaf + iPOJO together with version control, which in my case was Subversion (I would use Git today).
What the version control allowed me to do was deploy a versioned copy of Apache Karaf and my configuration files. Any changes made in development or on the production system (when something needed an urgent fix) would still be traced and could be checked in (including information about who made what change, and when).
What Apache Karaf supports is dynamic deployment of Maven libraries from your Maven repository, i.e. you have configuration files which specify the versions of the jars you want to release, and it will download them as required from your Maven repo and run them. iPOJO adds components on top of these modules, which you can configure using property values (again, versioned).
This assumes you have control of the end-to-end development to deployment, but can work very well even with multiple remote sites.

Approach to be followed for deploying the application

We have developed a web-based application in Java (Struts 2.0). Now we want to deploy the application. The client has a pre-UAT environment, a UAT environment, and a production environment.
For the pre-UAT deployment we have created a copy of our project and renamed it pre-UAT. We are planning to do the same for the UAT environment, and we already have one copy for development. So in all we will have 3 copies of our code.
I want to ask: is this approach correct, or what is the standard approach? This is not our final release; we are releasing one version first and will then work on other modules.
So can anyone please guide me on the approach to follow for creating these 3 different environments? Thanks in advance.
I am not sure what you refer to by "we will be having 3 copies of our code". If you are implying that you actually copied the code-base around multiple times, please stop reading and refer to this:
Why is "copy and paste" of code dangerous?
And once you finish reading, do some research about source control and how to use branching/tagging for concurrent development.
If you were referring to multi-environment deployment:
Assuming your application is designed correctly (and I'm treading very carefully here), one WAR file should be sufficient (you mentioned you're using Tomcat, so I conclude that your application is packaged as a WAR). The application code should be environment-independent and should read its environment-specific configuration from external resources, such as a database, configuration files, or JNDI.
If your application code is environment-independent, then all you need to do is simply deploy the WAR file to each of the environments (the same WAR file), plus the environment-specific set of external artifacts (such as configuration files).
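For the config-file flavour of this, here is a minimal sketch of how the same WAR can locate its environment-specific properties; the config.location property name is an assumption for this example (each server would set, say, -Dconfig.location=/etc/myapp/app.properties):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Same WAR everywhere: bundled defaults first, then overrides from
// the external file the environment points at.
public final class ExternalConfig {
    public static Properties load() throws IOException {
        Properties props = new Properties();
        // Defaults packaged inside the WAR.
        try (InputStream defaults = ExternalConfig.class.getResourceAsStream("/default.properties")) {
            if (defaults != null) {
                props.load(defaults);
            }
        }
        // Environment-specific overrides from outside the WAR.
        String location = System.getProperty("config.location");
        if (location != null) {
            try (InputStream external = new FileInputStream(location)) {
                props.load(external);   // environment-specific values win
            }
        }
        return props;
    }
}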

Deploying Java applications without including production credentials in VC

I am using Maven and Jenkins to manage deployment of my web application. Essentially:
When a deploy is triggered, the CI box checks the code out of version control.
If the code passes the tests, it triggers the Maven Release Plugin to build a versioned war and puts it in our local Nexus repo.
In the same build, it pulls the artifact from Nexus and copies it into Tomcat, triggering Tomcat to re-explode the war.
This works fine, and using this technique I can use Maven to replace the appropriate environment-specific configurations, so long as they are within the project. However, my sysadmin considers it a security risk to have production credentials in VC. Instead, we would prefer to store the production credentials on the production machines that will be using them. I can imagine writing a simple bash script to ssh into the service box and soft-link the conf file onto the classpath, but this seems like a pretty inelegant solution.
Is this reasonable? Is there a better/more standard way of achieving this? Is it actually a security risk to hold production credentials in VC?
You have your conf file on your production server at some location. This location could be a property too.
If there is no specific reason not to load it as a file from disk rather than as a resource from the classpath, you could create a separate Maven profile, production, that would filter the location, replacing it with the file path on your production server.
Yes, it's a security risk to have production credentials in version control. It frees your developers to do pretty much whatever they want to production. Regulations like HIPAA in medicine, PCI for e-commerce, or SOX for public US companies would frown on that. Your sysadmin is reasonable to frown on it as well.
The basic strategy is to externalize this configuration and have the deployment process roll in the environment specific data.
Having that information on the production server itself is an OK, but not great, solution. It's a good fit when you have just one target server. Once you have a bunch, there's a maintenance headache. Whenever env-specific data changes, it has to be updated on every server. You also need to be sure to keep only env-specific information in there, or else changes developers make in early environments may not be communicated to the sysadmin at deployment time, leading to production deployment errors.
This is where, I think, Jenkins lets you down from a continuous delivery perspective. Some of the commercial tools, including my company's uBuild/AnthillPro, formally track different environments and would securely let the sysadmin configure the production credentials and the developers configure the dev credentials within the tool. Likewise, application release automation tools like our uDeploy, which would pull builds out of Jenkins and deploy them, have this kind of per-environment configuration baked in.
In these scenarios, most of the property/XML files have generic config, and the deployment engine substitutes env-specific data in as it deploys.
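Even without a commercial tool, the substitution step itself is simple. A minimal sketch that replaces @key@ tokens in a generic template with values from an environment-specific properties file (the token syntax is an assumption; requires Java 11+ for readString/writeString):

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Resolves a generic config template against one environment's values.
public final class TokenSubstituter {
    public static void resolve(Path template, Path envProps, Path output) throws IOException {
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(envProps)) {
            props.load(in);
        }
        String text = Files.readString(template, StandardCharsets.UTF_8);
        for (String key : props.stringPropertyNames()) {
            text = text.replace("@" + key + "@", props.getProperty(key));
        }
        Files.writeString(output, text, StandardCharsets.UTF_8);
    }
}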
Adding a new tool for just this problem is probably overkill, but the basic strategy of externalizing environment-specific info into a central place where it can be looked up at deployment time could work. Since you're a Maven shop, you might consider stashing some of this in your Maven repo, in an area locked down so that only operations can access it. Then pull the latest config for the appropriate environment at deployment time.
You have a range of options here. Consider what varies by environment, what varies by server, what needs to be secured, and what changes over time on the dev side. And please, please, please sit down with your sysadmin and work out a solution together. You each have insight the other doesn't, and the end solution will be better for the cooperation.

Separate web.xml for development and production

My web.xml differs between the development and production environments. For example, in the development environment there is no need for security constraints.
Typically I deploy a new application version as follows:
Export Eclipse project to WAR.
Upload WAR to the server.
Redeploy.
The problem is that I have to manually uncomment security constraints in web.xml before exporting.
How do you solve this problem?
I have also come across the opinion in some articles that "web.xml is rarely changed". But how can web.xml not change if it is exported to the WAR on every update?
Thanks in advance!
If you can't use the same web.xml during development, I would automate the build process, use two web.xml files, and bundle the "right" one at build time depending on the target environment, as Brian suggested. But instead of Ant I'd choose Maven, because it will require less work IMHO, and it has a built-in feature called profiles that is perfect for managing environment-specific stuff like this.
In other words, I'd put the build under Maven 2 and use a production profile containing a specific maven-war-plugin configuration to build a WAR whose web.xml has the required security constraints. Another option would be to merge the development web.xml (Cargo can do that) to add the security constraints, but this is a somewhat more "advanced" solution (a bit more complex to put in place).
I would create a development and production deployment with different web.xml configs. Automate the building/maintenance of these via your build (Ant/Maven etc.) to keep control of the common elements required.
I had to solve this problem many times in the past, and ended up writing XMLTask, an Ant plugin which allows the modification of XML files without using normal text replacement (it's a lot cleverer than that) and without having to mess with XSLTs (it's a lot simpler than that). If you follow the above approach you may want to check this out. Here's an article I wrote about it.
Assuming that you're stuck with the idea of web.xml changing before deployment to production, then my likely approach would be to run the development web.xml through a simple XSL transform which "decorated" the web.xml with your production-only elements, such as security constraints. Assuming that you can hook this step into your build process, then the production-ready web.xml should appear during your export process.
However, it is generally a good idea not to have different web.xml across environments, it devalues your testing. Having the same value in all environments will reduce the risk of bugs appearing only in your production environment.
I converted my project to be built with Ant. The starting point was just this build.xml: http://tomcat.apache.org/tomcat-6.0-doc/appdev/build.xml.txt
The above build doesn't have the feature of copying in a different web.xml (based on, e.g., a property set when building), but you'll learn how to do that once you get a bit into Ant; it should be pretty easy.
As a nice side effect, deploying to a remote tomcat is now just a couple of clicks away from within Eclipse instead of Export->war and manually copying it to the server.
I would add the necessary infrastructure to allow a mechanical build, with Ant or Maven.
When THAT is done you can have your mechanical build create two targets, one for test and one for production.
You should however, strongly consider testing the same code as you have in production. You will be bitten otherwise.
I believe that a single war that works in multiple environments is a superior solution to baking a new one with a profile option for each of dev, qual, and prod. It is super annoying that there is no better mechanism for getting environment variables directly into web.xml without using a library like Spring.
One solution for web.xml environment configuration, given that your environment customization is related to filter init-params, such as:
<filter>
    <filter-name>CAS Filter</filter-name>
    <filter-class>edu.yale.its.tp.cas.client.filter.CASFilter</filter-class>
    <init-param>
        <param-name>edu.yale.its.tp.cas.client.filter.loginUrl</param-name>
        <param-value>https://<foo>:8443/login</param-value>
    ...
The particular filter class referenced above (CASFilter) is public. This means you can extend it with a custom adapter that adds in your environment configuration. This allows you to stay out of that nasty web.xml file.
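A hypothetical sketch of such an adapter, assuming the Servlet 3.0 API; the subclass name and the environment-prefix convention are inventions for this example:

import java.util.Enumeration;
import javax.servlet.FilterConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import edu.yale.its.tp.cas.client.filter.CASFilter;

// Prefers an environment-prefixed init-param (e.g. "QA." + name) over
// the plain one, so a single web.xml can carry every environment's
// values and -Denv picks the winner at startup.
public class EnvironmentAwareCASFilter extends CASFilter {
    @Override
    public void init(final FilterConfig config) throws ServletException {
        final String env = System.getProperty("env", "DEV");
        super.init(new FilterConfig() {
            public String getFilterName() { return config.getFilterName(); }
            public ServletContext getServletContext() { return config.getServletContext(); }
            public Enumeration<String> getInitParameterNames() { return config.getInitParameterNames(); }
            public String getInitParameter(String name) {
                String envSpecific = config.getInitParameter(env + "." + name);
                return (envSpecific != null) ? envSpecific : config.getInitParameter(name);
            }
        });
    }
}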

What is the best way to deal with environment specific configuration in java?

I have an application running in Tomcat that has a bunch of configuration files that are different for each environment it runs in (dev, testing, and production). But not every line in a config file will differ between environments, so there's invariably duplicated information that doesn't get updated when something changes.
Is there a good framework/library that collapses the separate files into one with environment-specific blocks? Or some other way of dealing with this?
Assign reasonable default values for all properties in the properties files distributed within your .war file.
Assign environment-specific values for the appropriate properties in webapp context (e.g. conf/server.xml or conf/Catalina/localhost/yourapp.xml)
Have your application check the context first (for the environment-specific values), and fall back on the defaults in the app's properties file if no override is found.
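A minimal sketch of that check-context-then-fall-back lookup, assuming the overrides are declared as <Environment> entries in the webapp context and the defaults live in a bundled app.properties:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Environment-specific values come from JNDI (the webapp context);
// anything not overridden falls back to the bundled defaults.
public final class ConfigLookup {
    private final Properties defaults = new Properties();

    public ConfigLookup() throws IOException {
        try (InputStream in = getClass().getResourceAsStream("/app.properties")) {
            if (in != null) {
                defaults.load(in);
            }
        }
    }

    public String get(String name) {
        try {
            // Matches an <Environment name="..." .../> entry in, e.g.,
            // conf/Catalina/localhost/yourapp.xml
            return (String) new InitialContext().lookup("java:comp/env/" + name);
        } catch (NamingException noOverride) {
            return defaults.getProperty(name);
        }
    }
}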
A Properties file is what I've always used. It's editable by hand as well as from your software, and the Properties object can read itself in from, and write itself out to, the filesystem. Here's the javadoc page:
http://java.sun.com/j2se/1.4.2/docs/api/java/util/Properties.html
If you use Maven, you can use its resource filtering abilities, along with profiles, to generate a properties file for each environment you're deploying into.
As an added bonus, Maven can also deploy your web app for you.
The duplication is not really a problem; having a central config file that the other files 'extend' is likely to cause more of a headache in the long term.
My advice is to use Ant to load (copy and move) the appropriate file(s) into place and then launch the app (bundled into the war?). Just have a different task for each environment. So you will have three config files (dev.config, test.config and production.config), which will be moved in to overwrite the config in the /WEB-INF folder depending on the task you are running.
I would suggest having a separate config file for the environment parameters alone if you want to avoid clutter. You will then have one more config file to manage; this is a trade-off between the number of config files and the complexity of each one.
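A minimal sketch of that split, layering a small per-environment file over the shared one so only values that genuinely differ are duplicated; the file names are hypothetical:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// common.properties holds the shared settings; env-dev.properties,
// env-test.properties, etc. hold only the per-environment overrides.
public final class LayeredConfig {
    public static Properties load(String env) throws IOException {
        Properties props = new Properties();
        try (InputStream common = LayeredConfig.class.getResourceAsStream("/common.properties")) {
            if (common != null) {
                props.load(common);
            }
        }
        try (InputStream overrides = LayeredConfig.class.getResourceAsStream("/env-" + env + ".properties")) {
            if (overrides != null) {
                props.load(overrides);   // later loads overwrite earlier keys
            }
        }
        return props;
    }
}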
