Two main ways to deploy a J2EE/Java Web app (in a very simplistic sense):
Deploy assembled artifacts to production box
Here, we create the .war (or whatever) elsewhere, configure it for production (possibly creating numerous artifacts for numerous boxes) and place the resulting artifacts on the production servers.
Pros: No dev tools on production boxes, can re-use artifacts from testing directly, staff doing deployment doesn't need knowledge of build process
Cons: two processes for creating and deploying artifacts; potentially complex configuration of pre-built artifacts could make process hard to script/automate; have to version binary artifacts
Build the artifacts on the production box
Here, the same process used day-to-day to build and deploy locally on developer boxes is used to deploy to production.
Pros: One process to maintain, and it's heavily tested/validated by frequent use. Potentially easier to customize configuration at artifact creation time rather than customizing a pre-built artifact afterward; no versioning of binary artifacts needed.
Cons: Potentially complex development tools needed on all production boxes; deployment staff needs to understand build process; you aren't deploying what you tested
I've mostly used the second process, admittedly out of necessity (no time/priority for another deployment process). Personally I don't buy arguments like "the production box has to be clean of all compilers, etc.", but I can see the logic in deploying what you've tested (as opposed to building another artifact).
However, Java Enterprise applications are so sensitive to configuration, it feels like asking for trouble having two processes for configuring artifacts.
Thoughts?
Update
Here's a concrete example:
We use OSCache, and enable the disk cache. The configuration file must live inside the .war file, and it references a file path. This path is different in every environment. The build process detects the user's configured location and ensures that the properties file placed in the war is correct for that environment.
If we were to use the build process for deployment, it would be a matter of creating the right configuration for the production environment (e.g. production.build.properties).
If we were to follow the "deploy assembled artifacts to the production box", we would need an additional process to extract the (incorrect) OSCache properties and replace it with one appropriate to the production environment.
This creates two processes to accomplish the same thing.
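For concreteness, the build-time approach described above could look roughly like this in Ant (the template name, the @CACHE_PATH@ token and the property names are invented for illustration; production.build.properties is the file name suggested above):

<!-- pick up this environment's settings; developers default to their own file,
     a production build passes -Denv=production -->
<property name="env" value="${user.name}"/>
<property file="${env}.build.properties"/>
<!-- stamp the environment-specific disk cache path into the copy that goes into the war -->
<copy file="config/oscache.properties.template"
      tofile="${war.staging}/WEB-INF/classes/oscache.properties" overwrite="true">
  <filterset>
    <filter token="CACHE_PATH" value="${cache.dir}"/>
  </filterset>
</copy>

Under "build on the production box" this same fragment simply runs there with production.build.properties; under "deploy assembled artifacts" the pre-built war's copy of oscache.properties has to be swapped afterwards instead.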
So, the questions are:
Is this avoidable without "compiling on production"?
If not, is this worth it? Is the value of "no compiling on production" greater than "Don't Repeat Yourself"?
I'm firmly against building on the production box, because it means you're using a different build than you tested with. It also means every deployment machine has a different JAR/WAR file. If nothing else, do a unified build just so that when bug tracking you won't have to worry about inconsistencies between servers.
Also, you don't need to put the builds into version control if you can easily map between a build and the source that created it.
Where I work, our deployment process is as follows. (This is on Linux, with Tomcat.)
Test changes and check into Subversion. (Not necessarily in that order; we don't require that committed code is tested. I'm the only full-time developer, so the SVN tree is essentially my development branch. Your mileage may vary.)
Copy the JAR/WAR files to a production server in a shared directory named after the Subversion revision number. The web servers only have read access.
The deployment directory contains relative symlinks to the files in the revision-named directories. That way, a directory listing will always show you what version of the source code produced the running version. When deploying, we update a log file which is little more than a directory listing. That makes roll-backs easy. (One gotcha, though; Tomcat checks for new WAR files by the modify date of the real file, not the symlink, so we have to touch the old file when rolling back.)
Our web servers unpack the WAR files onto a local directory. The approach is scalable, since the WAR files are on a single file server; we could have an unlimited number of web servers and only do a single deployment.
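The answer doesn't show the actual scripts, but the revision-directory-plus-symlink layout could be driven by something like this Ant fragment (the paths and property names are made up; on Linux this may just as well be a small shell script):

<!-- publish the build under a directory named after the Subversion revision -->
<copy file="${build.dir}/myapp.war" todir="/shared/releases/${svn.revision}"/>
<!-- point the deployment directory at it with a relative symlink -->
<symlink link="/shared/deploy/myapp.war"
         resource="../releases/${svn.revision}/myapp.war" overwrite="true"/>
<!-- on rollback, re-point the symlink and touch the old war;
     Tomcat watches the real file's modify date, not the symlink's -->
<touch file="/shared/releases/${previous.revision}/myapp.war"/>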
Most of the places I've worked have used the first method with environment specific configuration information deployed separately (and updated much more rarely) outside of the war/ear.
I highly recommend "Deploy assembled artifacts to production box", such as a war file. To that end, our developers use the same build script (Ant in our case) to construct the war in their development sandbox as is used to create the final artifact. This way the build is debugged along with the code itself, not to mention completely repeatable.
There are configuration services, like the heavyweight ZooKeeper, and most containers let you use JNDI to do some configuration. These will separate the configuration from the build, but can be overkill. However, they do exist. Much depends on your needs.
I've also used a process whereby the artifacts are built with placeholders for config values. When the WAR is deployed, it is exploded and the placeholders replaced with the appropriate values.
I would champion the use of a continuous integration solution that supports distributed builds. Code checked into your SCM can trigger builds (for immediate testing) and you can schedule builds to create artifacts for QA. You can then promote these artifacts to production and have them deployed.
This is currently what I am working on setting up, using AnthillPro.
EDIT: We are now using Hudson. Highly recommend!
If you are asking this question relative to configuration management, then your answer needs to be based on what you consider to be a managed artifact. From a CM perspective, it is an unacceptable situation to have some collection of source files work in one environment and not in another. CM is sensitive to environment variables, optimization settings, compiler and runtime versions, etc. and you have to account for these things.
If you are asking this question relative to repeatable process creation, then the answer needs to be based on the location and quantity of pain you are willing to tolerate. Using a .war file may involve more up-front pain in order to save effort in test and deployment cycles. Using source files and build tools may save up-front cost, but you will have to endure additional pain in dealing with issues late in the deployment process.
Update for concrete example
Two things to consider relative to your example.
A .war file is just a .zip file with an alternate extension. You could replace the configuration file in place using standard zip utilities (see the sketch at the end of this answer).
Potentially reconsider the need to put the configuration file within the .war file. Would it be enough to have it on the classpath, or to specify the properties on the command line at server startup?
Generally, I attempt to keep deployment configuration requirements specific to the deployment location.
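For example, with Ant's zip task in update mode (the file names here are illustrative), swapping the OSCache properties in a prebuilt war looks roughly like this:

<!-- a war is just a zip: overwrite one entry in place, no rebuild needed -->
<zip destfile="dist/myapp.war" update="true">
  <zipfileset file="config/production/oscache.properties"
              fullpath="WEB-INF/classes/oscache.properties"/>
</zip>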
Using one packaged war file for deploys is a good practice.
We use Ant to replace the values that differ between environments. We check the file in with a ### placeholder that gets replaced by our Ant script. The Ant script replaces the correct item in the file and then updates the war file before the deploy to each environment:
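<!-- Swap the ### placeholder in log4j.xml for this environment's log location -->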
<replace file="${BUILDS.ROOT}/DefaultWebApp/WEB-INF/classes/log4j.xml" token="###" value="${LOG4J.WEBSPHERE.LOGS}"/>
<!-- Update the war file. We don't want the source files in the war file. -->
<war basedir="${BUILDS.ROOT}/DefaultWebApp" destfile="${BUILDS.ROOT}/myThomson.war" excludes="WEB-INF/src/**" update="true"/>
To summarize: Ant does it all, and we use Anthill to manage Ant.
Ant builds the war file, replaces the file paths, updates the war file, then deploys to the target environment. One process; in fact, one click of a button in Anthill.
Related
I know that in the Grails framework, you can build a war file using
grails war (builds a production war)
or you can build an environment-specific war file using
grails test war
Now, what I am having trouble understanding is: if I build a war file using grails war but deploy it to the test environment (where -Dgrails.env=test), the war file built using the grails war command runs happily by picking up test environment settings (like pulling data from test urls instead of prod urls).
My question is: what is the point of building a war file using an environment-specific command (i.e. why use grails test war when the war file built using grails war works everywhere)?
Am I missing something obvious?
The reason for using an environment is that you may have code in your application that hooks into the build process and alters the resulting WAR based on the environment, such as reconfiguring some filters in web.xml. It's an extension point; you can use it if you need to.
Grails comes with three automatic environments: dev, test, prod. There are some defaults for the various "scripts", e.g. run-app runs dev, test-app runs test, and war builds a war for prod. These are there for convenience and make the most sense given developers' daily usage patterns; e.g. in testing the default is an in-memory db.
You can also add more environments as you see fit. E.g. having a staging or integration environment is common, so by providing such an env (maybe only some config or db changes) you can easily build a war file for the server your QA team uses.
Another use case is to just build a dev war, because there might be something odd with the war on the production server and you need to run the war against that odd Tomcat 6.x real-life environment, but with the dev settings against your db.
That said, there is still the config you can add via config files, but the environments give a rather sane setup for "all involved", as they are usually within version control.
And as a final step, you still have access to the environment in your own scripts/_Events.groovy hooks, where you might e.g. drop or add something that only makes sense for that exact environment (e.g. drop some jars, as they are on the server already).
In the end, this feature gives you some freedom to do what you want. Be glad if you never have to use it; but once you need it, you'll be glad it's there.
I wonder if this is a somewhat awkward way of thinking, but I couldn't really find any hint on the internet about my idea. Maybe I just did not phrase my question right, but anyhow, this is what I would like to do:
I have a complex application written in Java with Spring and Quartz and a whole load of dependencies. The application runs inside an Apache Tomcat servlet container. Now I know I can create a war file and deploy that to the production server machine (after our internal IT has installed and configured Tomcat on that machine), but I would like to do this a bit differently.
I would like Maven to create a pre-packaged Tomcat application server with all dependencies and configuration settings AND my application. In effect, all that would need to be done on the production system is copy the package (or zip or tar.gz or whatever is needed) to the server, unpack it in a directory of my or their choice, and fire up this local, isolated Tomcat. It would only run my application (which puts enough load on the machine anyway), and I could even go so far as to deploy a second variant, say for a different customer, in the directory next to the first one. Neither could interfere with the other, even if they use different versions with different dependencies.
Is it possible to do that? Is it a desirable approach or am I on the completely wrong track here?
What I think would be a benefit of this approach (apart from the point about incompatible dependencies or settings between two or more installations) is that I can hand the whole package over to our administration guys and they can simply deploy it to a server without needing to configure anything in Tomcat after installing it, and so on.
Any hints?
Create a Maven project as the parent project (type pom). Include your webapp as a module project (type war). Create another module project, maybe "myapp-standalone" (type jar), and include the embeddable Tomcat as a dependency. Write a starter class to launch the internal Tomcat (see executable jar / überjar). When building the app, copy the created war file into the jar, into Tomcat's webapp directory.
Your launcher class needs to make sure that the ports the embedded Tomcat will use are not already in use.
I have a war file deployed at a customer site. The war file contains a lib folder with dependent jars, e.g.
/lib/app-01.jar
/lib/spring-2.5.1.jar
/lib/somefile-1.2.jar
...
...
If we need to update, let's say, app-01.jar to app-02.jar, is there any elegant solution? How are these dependent jars packaged into a WAR file as an industry standard?
Is it a good idea to package those dependent jars without version numbers?
e.g.
/lib/app.jar
/lib/spring.jar
/lib/somefile.jar
...
...
EDIT NOTE:
Actually, the war is deployed to WebSphere, WebLogic, or Tomcat, on Windows or Linux platforms, and the customer's IT department is involved in the deployment.
Probably the most elegant solution is just to generate a new war, and deploy it.
Here are the reasons:
If you are worried about uptime, some application servers support side-by-side deployment. It means that you can deploy a new version and have it up at the same time as the old one, and stop the old one when no one is using it. (I used that on WebLogic about 5 years ago, so I suppose it is a common feature now.) But that kind of feature only works if you deploy a new .WAR version.
Probably the WAR was generated using Maven, Ant or Gradle, so changing the dependency version and doing a mvn package is usually faster and less error-prone than unzipping the WAR, changing it, and zipping it again.
All the application servers provide a "hot replace" feature that works by refreshing the class loader. It's fine for development, but in production it can be problematic (class loader leaks can be common, and problems caused by incorrect initialization or bad programming practices can give you unexpected bugs, like having two versions of a class).
About JAR file names: I recommend keeping the versions in the file names.
Most JARs contain version information inside META-INF/MANIFEST.MF, but if for some reason you have to know which versions your app is using... opening each JAR file to check the manifest is a lot of work.
As a final piece of advice: if you don't use any automated build tool, adopt one (take a look at Gradle, which is nice). Updating a library version usually consists of changing the version number in the build file and executing something like gradle deploy. Even if you are not the developer but the one in charge of devops, having an automated build is going to help you with deployments and updates.
In Tomcat I don't think the war is relevant after the files have been uncompressed. You could just ignore the war and extract the new/changed files into the correct webapp directory (the one with the same name as the war).
This question is somewhat similar to this one: Best way to deploy large *.war to tomcat, so it's a good read first, but keep reading my question; it's different at the end...
Using maven 2 my war files are awfully large (60M). I deploy them to a set of tomcat servers and just copying the files takes too long (it's about 1m per war).
On top of that I added an RPM layer that'll package the war in an RPM file (using Maven's rpm plugin). When the RPM is executed on the target machine it'll clean up, "install" the war (just copy it), stop and start Tomcat (that's how we do things here, no hot deploys) and set up a proper context file in place. This all works fine.
The problem, however, is that the RPM files are too large and slow to copy. What takes up almost the entire space is, naturally, the war file.
I haven't seen any off-the-shelf solution, so I'm thinking of implementing one myself. I'll describe it below, and this description will hopefully help explain the problem domain. I'll be happy to hear your thoughts on the planned solution, and better yet, to have you point me at other existing solutions and share random tips.
The war files contain:
Application jars
3rd party jars
resources (property files and other resources)
WEB-INF files such as JSPs, web.xml, struts.xml etc
Most of the space is taken by #2, the 3rd party jars.
The 3rd party jars are also installed on an internal nexus server we have in the company so I can take advantage of that.
You probably guessed that by now, so the plan is to create thin wars that'll include only the application jars (the ones authored by my company), resources and WEB-INF stuff, and to add smartness to the RPM install script so that it copies the 3rd-party jars when needed.
RPM allows you to run arbitrary scripts before or after installation, so the plan is to use mvn to write a list of 3rd-party dependencies when building the war and add it as a resource to the RPM; then, when installing the RPM, the installation script will run over the list of required 3rd-party jars and download from Nexus only the jars that don't already exist on the box.
The RPM will have to delete jars if they are not used.
The RPM will also have to either rebuild the war for Tomcat to explode, or add the 3rd-party jars to common/lib or something like that, although we have a few web-apps per Tomcat, so that would complicate things. Maybe explode the war itself and then copy the 3rd-party jars into WEB-INF/lib.
Your input is appreciated :)
We have a directory on the target machines with all the third-party jars we're using (about 110Mb). The jars use a naming convention that includes their version number (asm-3.2.jar, asm-2.2.3.jar, ...). When adding a new version of a third-party jar, we don't delete the older version.
When deploying, our jar files contain only the business logic classes and resources we compile in the build (no third party). The classpath is defined in the jar manifest, where we cherry-pick which third-party jars it should be using at runtime. We're doing that with Ant, no Maven involved, and we have more than 25 types of services in our system (very "SOA", though I dislike this over-buzzed word). That business logic jar is the only jar on the JVM classpath when starting the process, and it is also versioned by our code repo revision number. If you go back to an older revision (rollback) of our code that might be using an older third-party jar, it's still going to work, as we don't remove old jars. New third-party jars should be propagated to production machines before the business code that uses them is. But once they're there, they're not going to be re-pushed on each deployment.
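A rough Ant sketch of that manifest-driven classpath (the jar names, versions, directory and Main-Class are illustrative, and the real build presumably generates the Class-Path value from a curated list):

<jar destfile="dist/order-service-r4711.jar" basedir="classes">
  <manifest>
    <!-- cherry-pick exactly which third-party versions this revision runs against;
         Class-Path entries are resolved relative to the business logic jar's own location -->
    <attribute name="Class-Path" value="thirdparty/asm-3.2.jar thirdparty/spring-2.5.1.jar"/>
    <attribute name="Main-Class" value="com.example.orders.Main"/>
  </manifest>
</jar>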
Overall we lean towards simplicity (i.e. not OSGi) and we don't use Maven.
I would advise against your proposed plan. It sounds like a lot of moving pieces that are likely hard to test and that will make problems hard to diagnose when they arise.
We don't have the problem of "large" WARs, but we do have the problem that most of our WARs all need the exact same 3rd-party libraries on their classpath. The solution we went with (and it has worked very well) was to utilize OSGi to build our application modularly. We use Felix as our OSGi container, which runs inside of Tomcat. We then deploy all of our dependencies/libraries to Felix once. Then we deploy "thin" WARs which just reference OSGi dependencies by importing the packages they need from the bundles they care about.
This has a few other advantages:
Deploying new versions of OSGi bundles while the old ones are running is not an issue, which allows for zero downtime (similar to hot deploy).
If you need to upgrade one of your dependencies (e.g. Spring 2.5 -> 3.0), you only need to upgrade the Spring bundle running in OSGi; no need to deliver (or package) new WARs if the APIs did not change. This can all (once again) be done on a live running OSGi container, no need to turn anything off.
OSGI guarantees your bundles do not share classpaths. This helps keep your code cleaner because each WAR only needs knowledge of what it cares about.
Setting up your WARs to be "OSGi ready" is not trivial but it is well documented. Try checking out How to get started with OSGi or just Google for 3rd party tutorials. Trust me, the initial investment will save you much time and many headaches in the future.
It is probably best not to re-invent the modularity wheel if possible.
How can we define the term "deploy a package"?
Is it correct to say that to deploy a package means to follow a procedure in which we create the package, put it into a file system location, and make it visible to the compiler and VM with options like -classpath CLASSPATH, etc.?
How can we define the term "deploy a package"?
It depends on what kind of package you are talking about and what kind of deployment you are talking about.
For example, you probably wouldn't talk about deploying a Java package, because a Java package is not normally a sensible "unit of deployment". (Normally you would deploy a Java application or webapp, or possibly a Java library. And in the context of Maven, you would deploy an "artifact".)
If you are not talking about a Java package, what kind of package are you talking about?
Is it correct to say that to deploy a package means to follow a procedure in which we create the package, put it into a file system location, and make it visible to the compiler and VM with options like -classpath CLASSPATH, etc.?
That doesn't sound like a conventional definition of deployment to me. For a start, there is no standard file system location to deploy (for example) JAR files to.
I have become accustomed to the way maven uses the word deploy.
The deploy plugin is primarily used during the deploy phase, to add your artifact(s) to a remote repository for sharing with other developers and projects. This is usually done in an integration or release environment. It can also be used to deploy a particular artifact (e.g. a third party jar like Sun's non redistributable reference implementations).
I am not saying that your definition is incorrect, it just doesn't rhyme with my interpretation.
I would separate "build" from "deploy". Building would take source code and construct a deployable artefact. In this case we have some .java files, we compile them to .class files and (usually) put them in a JAR. The JAR is the thing that we deploy. In Java EE we might go a step further and put several JARs (and WARs ...) into an EAR and deploy that.
So deploying is making the deployable artefact executable in a runtime environment, in this case making the JAR visible to a chosen JVM. Quite possibly you might have many runtime environments, many customers, many machines. You build once, deploy many times.
In practice we often find that there's a little bit more to doing the deployment than just getting the JAR onto a Classpath. You often find you need to:
Remove previous versions of the JAR, possibly keeping them ready to be reinstated if something bad happens.
Make other resources available, e.g. databases
Do some environment specific configuration
Validate the deployment by running some kind of tests
Restart dependent components
Keep an audit trail of the deployment
In non-trivial cases it's often very useful to automate steps such as these using scripts.
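For illustration, a few of those steps can be strung together in an Ant target (every path, property and class name below is invented; in practice this is just as often a shell script or a CI job):

<target name="deploy">
  <!-- keep the previous version ready to be reinstated -->
  <copy file="${deploy.dir}/myapp.jar" tofile="${deploy.dir}/myapp.jar.previous" failonerror="false"/>
  <!-- install the new artefact and its environment-specific configuration -->
  <copy file="dist/myapp.jar" todir="${deploy.dir}" overwrite="true"/>
  <copy file="config/${env}/app.properties" todir="${deploy.dir}" overwrite="true"/>
  <!-- validate the deployment with a quick smoke test -->
  <java classname="com.example.SmokeTest" classpath="${deploy.dir}/myapp.jar" fork="true" failonerror="true"/>
  <!-- keep an audit trail -->
  <echo file="${deploy.dir}/deploy.log" append="true"
        message="deployed ${svn.revision} as ${user.name}${line.separator}"/>
</target>

The value of scripting it this way is mostly that the steps are run in the same order every time, by whoever happens to be deploying.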