I know that in Grails framework, you can build a war file using
grails war (builds a production war)
or you can build an environment-specific war file using
grails test war
Now, what I am having trouble understanding is this: if I build a war file using grails war but deploy it to the test environment (where -Dgrails.env=test is set), the war runs happily, picking up test environment settings (like pulling data from test URLs instead of prod URLs).
My question is: what is the point of building a war file using an environment-specific command? That is, why use grails test war when the war built using grails war works everywhere?
Am I missing something obvious?
The reason for using an environment is that you may have code in your application that hooks into the build process and alters the resulting WAR based on the environment, such as reconfiguring some filters in web.xml. It's an extension point; you can use it if you need it.
Grails provides three automatic environments: dev, test, prod. There are defaults for the various "scripts", e.g. run-app runs dev, test-app runs test, and war builds a war for prod. These defaults exist for convenience and match the daily usage patterns of most developers; e.g. in testing the default is an in-memory db.
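For illustration, here is roughly what the per-environment settings look like in grails-app/conf/DataSource.groovy (a minimal sketch; the databases and URLs are placeholders):

environments {
    development {
        dataSource {
            dbCreate = "create-drop"             // rebuild the schema on each run
            url = "jdbc:h2:mem:devDb"            // in-memory db for daily work
        }
    }
    test {
        dataSource {
            url = "jdbc:h2:mem:testDb"           // in-memory db for test-app
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:mysql://prod-host/appDb" // hypothetical production URL
        }
    }
}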
You can also add more environments as you see fit. E.g. having a staging or integration environment is common, so by providing such an env (maybe with only some config or db changes) you can easily build a war file for the server your QA team uses.
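A custom environment is selected by name on the command line, e.g.:

grails -Dgrails.env=staging war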
Another use case is to build a dev war: there might be something odd with the war on the production server, and you need to run the war against that odd Tomcat 6.x real-life environment, but with the dev settings against your db.
That said, there is still configuration you can add via external config files, but the environments give a rather sane setup for all involved, as they are usually kept under version control.
And as a final step, you still have access to the environment in your own scripts/_Events.groovy hooks, where you might e.g. drop or add something that only makes sense for that exact environment (e.g. drop some jars because they are already on the server).
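A hypothetical hook of that kind (eventCreateWarStart is one of Grails' build events; the jar pattern and the environment check are invented for the example):

// scripts/_Events.groovy
eventCreateWarStart = { warName, stagingDir ->
    if (grails.util.Environment.current.name == 'production') {
        // the production server already provides these jars, so drop them
        ant.delete {
            fileset(dir: "${stagingDir}/WEB-INF/lib", includes: "jdbc-driver-*.jar")
        }
    }
}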
In the end, this feature gives you some freedom to do what you want. Be glad if you never have to use it; but once you need it, you'll be glad it's there.
Related
Is the Spring Boot runnable jar file meant for production environments, with any optimizations needed for production? Or is it meant for testing/prototyping, so that we should generate WAR files and deploy them on app servers on the prod machines?
You can use the .jar for production too. Jars are easy to create, and a Spring Boot jar can easily be managed by the production server, i.e. restarted automatically, etc.
It also has an embedded Tomcat, whose configuration can be modified and optimized using the .properties file, or in Java code if you need that.
Generally, there is almost no reason to use a .war over a .jar, but it is up to you what you prefer.
It is perfectly suited for prod environments. You can check https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#server-properties to see how you can configure your server.
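For instance, a few common embedded-Tomcat settings in application.properties (illustrative values; exact property names vary slightly between Boot versions):

# application.properties -- illustrative values only
server.port=8080
# ceiling for the worker thread pool (named server.tomcat.max-threads before Boot 2.3)
server.tomcat.threads.max=200
# connection backlog once all request threads are busy
server.tomcat.accept-count=100
# gzip responses
server.compression.enabled=true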
All of the hot-swapping information I've read involves deploying an exploded war, which lets the run/debug configuration offer 'Update classes and resources'. However, I'm working with a legacy program that compiles and JDO-enhances with Ant, deploying a web folder that contains all of the JSPs, CSS, libs, etc.
Is there a way I can make it work with my current setup? If not, please suggest an alternative approach.
I wonder if this is a somewhat awkward way of thinking, but I couldn't really find any hint on the internet about my idea. Maybe I just did not phrase my question right, but anyhow, this is what I would like to do:
I have a complex application written in Java with Spring and Quartz and a whole load of dependencies. The application runs inside an Apache Tomcat servlet container. Now I know I can create a war file and deploy it to the production server machine (after our internal IT has installed and configured Tomcat on that machine), but I would like to do this a bit differently.
I would like Maven to create a pre-packaged Tomcat application server with all dependencies and configuration settings AND my application. In effect, all that would need to be done on the production system is: copy the package (or zip or tar.gz or whatever is needed) to the server, unpack it in a directory of my or their choice, and fire up this local, isolated Tomcat. It would run only my application (which poses enough load on the machine anyway), and I could even deploy a second variant, say for a different customer, in the directory next to the first one. Neither could interfere with the other, even if they use different versions with different dependencies.
Is it possible to do that? Is it a desirable approach or am I on the completely wrong track here?
What I think would be a benefit of this approach (besides avoiding incompatible dependencies or settings between two or more installations) is that I can hand the whole package over to our administration guys, and they can simply deploy it to a server without needing to configure anything in Tomcat after installing it.
Any hints?
Create a Maven project as the parent project (type pom). Include your webapp as a module project (type war). Create another module project, maybe "myapp-standalone" (type jar), and include the embeddable Tomcat as a dependency. Write a starter class to launch the internal Tomcat (see executable jar / überjar). When building the app, copy the created war file into the jar, into Tomcat's webapp directory.
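A sketch of the parent POM's module section (the artifact ids are invented; the embedded-Tomcat dependency, e.g. tomcat-embed-core, would go into the standalone module's own pom):

<!-- parent pom.xml, fragment -->
<groupId>com.example</groupId>
<artifactId>myapp-parent</artifactId>
<version>1.0.0</version>
<packaging>pom</packaging>
<modules>
    <module>myapp-webapp</module>       <!-- packaging: war -->
    <module>myapp-standalone</module>   <!-- packaging: jar, launches Tomcat -->
</modules>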
Your launcher class needs to make sure that the ports the embedded Tomcat wants are not already in use.
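A minimal launcher sketch against the embedded Tomcat API (the war name, port property, and base directory are assumptions; port probing and error handling are omitted):

// Launcher.java
import org.apache.catalina.startup.Tomcat;
import java.io.File;

public class Launcher {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getProperty("port", "8080")); // assumed property
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(port);
        tomcat.setBaseDir(new File("tomcat-work").getAbsolutePath());    // scratch dir
        tomcat.getConnector();               // force creation of the default connector
        tomcat.addWebapp("", new File("myapp.war").getAbsolutePath());   // root context
        tomcat.start();
        tomcat.getServer().await();          // block until shutdown
    }
}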
Currently, I have a web application: I export it with Eclipse as a war, copy it manually with scp to the server, and run a script that extracts the war, uses local configuration files to overwrite the ones in the war, and copies the extracted folder into tomcat/webapps. This sounds easy for a server or two, but not for 100.
How can I improve this, in order to have better control of the versions/configurations installed and to deploy it more easily?
You could really benefit from using CruiseControl or Hudson to do continuous builds for you. There you can have the per-server configurations built into the war, and you could build many flavors of these. Then deploying is just a matter of pushing the proper wars to their rightful place. No exploding or re-warring required.
See Deploy web application on multiple tomcat servers with Kwatee Agile Deployment. Once you have configured the deployment parameters with the web interface, you can trigger deployments from Ant using the kwatee task, or from a continuous integration tool with the Python CLI tools.
To help manage these multiple configurations, I have programmed a very lightweight library named xboxng here
It is flexible and pretty easy to use (no need for JNDI).
Store the specific configuration files of each server in a directory, put these under version control, and use something like Ant to take a "naked" war, unzip it, replace the files with the config files of the server you want to deploy to, and rebuild the war.
Then scp the war directly to the server; this can also be done using Ant.
configs/
    server1/
        file1.properties
        file2.xml
    server2/
        file1.properties
        file2.xml
ant -Dserver=server2 war deploy
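A sketch of what the two targets could look like (target and property names are invented; the scp task requires Ant's optional JSch-based tasks on the classpath):

<target name="war">
    <unzip src="dist/app-naked.war" dest="build/exploded"/>
    <copy todir="build/exploded/WEB-INF/classes" overwrite="true">
        <fileset dir="configs/${server}"/>
    </copy>
    <zip destfile="dist/app-${server}.war" basedir="build/exploded"/>
</target>

<target name="deploy" depends="war">
    <scp file="dist/app-${server}.war"
         todir="deployer@${server}:/opt/tomcat/webapps"
         keyfile="${user.home}/.ssh/id_rsa" trust="true"/>
</target>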
As this is not a single-answer question (it depends on the project and, why not, personal taste), I will post some of the steps I believe could help the whole management/deployment process.
This does not mean that the solutions offered by the other posters are wrong (some of them received my upvote), just that this is what I found works best for me.
1) In order to have only one version of the war with multiple configurations, I used JNDI. I set up an env entry for the path where the config files can be found. This was added to web.xml:
<env-entry>
    <description>path to configuration files</description>
    <env-entry-name>appName/pathToConfigFiles</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>/configFolder/appName/</env-entry-value>
</env-entry>
This has a default value (used when the config files are taken from the war), but it can be overridden in context.xml:
<Environment name="appName/pathToConfigFiles"
             type="java.lang.String" value="/etc/.."/>
This way, if someone needs to change, for example, the database connection parameters, I do not have to deploy a new war; the admin can change the file in the configuration folder.
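In the application code, the entry is then read with a standard JNDI lookup:

// standard JNDI lookup of the env-entry defined above
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class ConfigLocator {
    public static String pathToConfigFiles() throws NamingException {
        Context ctx = new InitialContext();
        return (String) ctx.lookup("java:comp/env/appName/pathToConfigFiles");
    }
}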
The db config file and log4j file are my only external files. The rest of the configuration is done through the database.
The main advantage is that the same artifact can be deployed both in testing and in production, on any of the 100 servers. I currently use this on Tomcat, but env entries should be available in other app servers as well.
2) Changed from an IDE build to a build tool. I chose Maven, but you can use Ant or whatever you prefer.
For this I found the following sources useful:
"Maven by Example" book;
M2Eclipse plugin
For this I will also need to install Nexus as a mirror repository.
3) Install a continuous integration tool like Jenkins/Hudson. It is a wonderful tool, but because of its complex nature it will take time to configure it and grow its functionality. I am currently reading Jenkins: The Definitive Guide, and I am trying to obtain the following functionality:
Automated build server
Automated junit test server
Adding metrics
Automated test env deployment and Acceptance Testing
Continuous Deployment
Until this is accomplished, the wars will be deployed through bash scripts. I just scp the war to the server (no exploding/re-warring).
Two main ways to deploy a J2EE/Java Web app (in a very simplistic sense):
Deploy assembled artifacts to production box
Here, we create the .war (or whatever) elsewhere, configure it for production (possibly creating numerous artifacts for numerous boxes) and place the resulting artifacts on the production servers.
Pros: No dev tools on production boxes, can re-use artifacts from testing directly, staff doing deployment doesn't need knowledge of build process
Cons: two processes for creating and deploying artifacts; potentially complex configuration of pre-built artifacts could make process hard to script/automate; have to version binary artifacts
Build the artifacts on the production box
Here, the same process used day-to-day to build and deploy locally on developer boxes is used to deploy to production.
Pros: One process to maintain, and it's heavily tested/validated by frequent use. Potentially easier to customize configuration at artifact-creation time than to customize a pre-built artifact afterward; no versioning of binary artifacts needed.
Cons: Potentially complex development tools needed on all production boxes; deployment staff needs to understand build process; you aren't deploying what you tested
I've mostly used the second process, admittedly out of necessity (no time/priority for another deployment process). Personally I don't buy arguments like "the production box has to be clean of all compilers, etc.", but I can see the logic in deploying what you've tested (as opposed to building another artifact).
However, Java Enterprise applications are so sensitive to configuration that it feels like asking for trouble to have two processes for configuring artifacts.
Thoughts?
Update
Here's a concrete example:
We use OSCache and enable the disk cache. The configuration file must be inside the .war file, and it references a file path. This path is different in every environment. The build process detects the user's configured location and ensures that the properties file placed in the war is correct for his environment.
If we were to use the build process for deployment, it would be a matter of creating the right configuration for the production environment (e.g. production.build.properties).
If we were to follow "deploy assembled artifacts to the production box", we would need an additional process to extract the (incorrect) OSCache properties file and replace it with one appropriate to the production environment.
This creates two processes to accomplish the same thing.
So, the questions are:
Is this avoidable without "compiling on production"?
If not, is it worth it? Is the value of "no compiling on production" greater than "Don't Repeat Yourself"?
I'm firmly against building on the production box, because it means you're using a different build than you tested with. It also means every deployment machine has a different JAR/WAR file. If nothing else, do a unified build just so that when bug tracking you won't have to worry about inconsistencies between servers.
Also, you don't need to put the builds into version control if you can easily map between a build and the source that created it.
Where I work, our deployment process is as follows. (This is on Linux, with Tomcat.)
Test changes and check into Subversion. (Not necessarily in that order; we don't require that committed code is tested. I'm the only full-time developer, so the SVN tree is essentially my development branch. Your mileage may vary.)
Copy the JAR/WAR files to a production server in a shared directory named after the Subversion revision number. The web servers only have read access.
The deployment directory contains relative symlinks to the files in the revision-named directories. That way, a directory listing always shows you which version of the source code produced the running version. When deploying, we update a log file which is little more than a directory listing; that makes rollbacks easy. (One gotcha, though: Tomcat checks for new WAR files by the modify date of the real file, not the symlink, so we have to touch the old file when rolling back; a sketch follows below.)
Our web servers unpack the WAR files onto a local directory. The approach is scalable, since the WAR files are on a single file server; we could have an unlimited number of web servers and only do a single deployment.
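To make the symlink step concrete, a sketch with invented paths and revision numbers:

# a directory listing shows which revision is live
/wars/4711/myapp.war
/tomcat/webapps/myapp.war -> ../../wars/4711/myapp.war

# roll back: repoint the symlink, then touch the real file
# so Tomcat sees a newer modify date and redeploys
ln -sfn ../../wars/4662/myapp.war /tomcat/webapps/myapp.war
touch /wars/4662/myapp.war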
Most of the places I've worked have used the first method with environment specific configuration information deployed separately (and updated much more rarely) outside of the war/ear.
I highly recommend "deploy assembled artifacts to the production box", such as a war file. This is why our developers use the same build script (Ant in our case) to construct the war in their development sandbox as is used to create the final artifact. This way the build is debugged along with the code itself, not to mention completely repeatable.
There exist configuration services, like the heavyweight ZooKeeper, and most containers let you use JNDI for some configuration. These separate the configuration from the build, but they can be overkill. However, they do exist; much depends on your needs.
I've also used a process whereby the artifacts are built with placeholders for config values. When the WAR is deployed, it is exploded and the placeholders replaced with the appropriate values.
I would champion the use of a continuous integration solution that supports distributed builds. Code checked into your SCM can trigger builds (for immediate testing) and you can schedule builds to create artifacts for QA. You can then promote these artifacts to production and have them deployed.
This is currently what I am working on setting up, using AnthillPro.
EDIT: We are now using Hudson. Highly recommend!
If you are asking this question relative to configuration management, then your answer needs to be based on what you consider to be a managed artifact. From a CM perspective, it is an unacceptable situation to have some collection of source files work in one environment and not in another. CM is sensitive to environment variables, optimization settings, compiler and runtime versions, etc. and you have to account for these things.
If you are asking this question relative to repeatable process creation, then the answer needs to be based on the location and quantity of pain you are willing to tolerate. Using a .war file may involve more up-front pain in order to save effort in test and deployment cycles. Using source files and build tools may save up-front cost, but you will have to endure additional pain in dealing with issues late in the deployment process.
Update for concrete example
Two things to consider relative to your example.
A .war file is just a .zip file with an alternate extension. You can replace the configuration file in place using standard zip utilities (see the example below).
Potentially reconsider the need to put the configuration file within the .war file at all. Would it be enough to have it on the classpath, or to have the properties specified on the command line at server startup?
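For example, with the stock zip utility (war name and entry path are invented; the path argument must match the entry's path inside the archive):

# replace one entry inside the war without rebuilding it
cd staging
zip ../myapp.war WEB-INF/classes/oscache.properties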
Generally, I attempt to keep deployment configuration requirements specific to the deployment location.
Using one packaged war file for deploys is a good practice.
We use Ant to replace the values that differ between environments. We check the file in with a ### token that gets replaced by our Ant script. The Ant script replaces the correct item in the file and then updates the war file before the deploy to each environment:
<replace file="${BUILDS.ROOT}/DefaultWebApp/WEB-INF/classes/log4j.xml"
         token="###" value="${LOG4J.WEBSPHERE.LOGS}"/>

<!-- Update the war file. We don't want the source files in the war file. -->
<war basedir="${BUILDS.ROOT}/DefaultWebApp" destfile="${BUILDS.ROOT}/myThomson.war"
     excludes="WEB-INF/src/**" update="true"/>
To summarize: Ant does it all, and we use Anthill to manage Ant.
Ant builds the war file, replaces the file paths, updates the war file, then deploys to the target environment. One process; in fact, one click of a button in Anthill.