Spring Boot runnable JAR file for production?

Is the Spring Boot runnable JAR file meant for production environments, with all the optimizations needed for production, or is it meant for testing/prototyping, so that we should generate WAR files and deploy them to app servers on the prod machines?

You can use the .jar for production too. JARs are easy to create, and a Spring Boot JAR can easily be managed by the production server, e.g. restarted automatically.
It also ships an embedded Tomcat, whose configuration can be modified and tuned through the .properties file, or programmatically in Java if you need that.
Generally, there is almost no reason to use a .war over a .jar, but it is up to your preference.

It is perfectly suited for prod environments. You can check https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#server-properties to see how you can configure your server.
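As a sketch, embedded-Tomcat tuning lives in application.properties; the property names below exist in recent Spring Boot releases (older 2.x versions used e.g. server.tomcat.max-threads instead of server.tomcat.threads.max), and the values are illustrative, not recommendations:

```properties
# Illustrative embedded-Tomcat settings for application.properties
server.port=8080
# Worker thread pool ceiling (server.tomcat.max-threads in older Boot 2.x)
server.tomcat.threads.max=200
# Queue length for connections waiting for a worker thread
server.tomcat.accept-count=100
# Gzip responses above the default size threshold
server.compression.enabled=true
```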

Modular jars/plugins within a Wildfly Web Application using ServiceLoaders

We extensively use Java ServiceLoaders as a plugin infrastructure for our application components. We define interfaces and then use the loader to load implementations at run time. Adding additional JARs with extensions and service files is fine for our use cases.
However, I'm struggling to understand how we can continue this approach while deploying the application within WildFly. The intent is, as stated above, the ability to add "extension" JARs to the web application class path without having to:
Stop the server
Unzip the war
Add additional jar
Zip war
Start the server
In Tomcat, we could deploy web application folders instead of a WAR, so stopping the server, dropping in a JAR, and starting the server worked fine. WildFly (latest), however, does not appear to accept the deployment of a folder instead of a WAR.
I've read about the modules approach, but have not been successful using this approach to get the deployed application to see the module from the service loader implementations.
I would like to know if there is an alternative solution, or whether we are doing something wrong.
Thanks
WildFly supports exploded deployments, either with the deployment scanner or using the explode command with jboss-cli. Using jboss-cli you can even update files remotely.
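A sketch of that workflow with jboss-cli (available in recent WildFly versions; the deployment name, paths, and the need to disable the deployment before exploding it are assumptions to verify against your version):

```
$ ./jboss-cli.sh --connect
[standalone@localhost:9990 /] undeploy myapp.war --keep-content
[standalone@localhost:9990 /] /deployment=myapp.war:explode
[standalone@localhost:9990 /] /deployment=myapp.war:add-content(content=[{input-stream-index=/path/to/extension.jar, target-path=WEB-INF/lib/extension.jar}])
[standalone@localhost:9990 /] deploy --name=myapp.war
```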

Manage EAR deployed and exploded on Wildfly on the fly

I would like to manage an EAR deployed exploded on the WildFly application server on the fly, meaning to change its content (mainly JAR files as submodules) without having to re-instantiate or redeploy the whole package (which takes time, and during that time the other modules are not available).
I was trying to do this through the WildFly CLI using the commands available for deployments, for example:
/deployment=myapp.ear:remove-content
/deployment=myapp.ear:add-content
These commands effectively remove or add content inside the exploded application on WildFly; however, the changes do not seem to take effect without redeploying the whole application.
Is there any way how to achieve it? Is it feasible?
I am assuming all of this is in the context of testing your application, not for production-like instances.
If so, you can use WildFly standalone mode with a deployment scanner, which can be configured to keep scanning directories for changes and deploy them. Thanks!
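For reference, a minimal sketch of that scanner configuration in standalone.xml (the subsystem schema version and paths vary by WildFly release, so treat the names here as assumptions):

```xml
<subsystem xmlns="urn:jboss:domain:deployment-scanner:2.0">
    <!-- scan the deployments directory every 5 seconds, accept exploded deployments -->
    <deployment-scanner path="deployments" relative-to="jboss.server.base.dir"
                        scan-interval="5000" auto-deploy-exploded="true"/>
</subsystem>
```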

JBoss WildFly - deployments are temporary files. What about properties?

I am working on a Java project and have a problem with Wildfly 10's deployment. I don't find the solution in its documentation and would appreciate some help.
When I deploy a .WAR, Wildfly creates a temporary folder to store the exploded files:
./standalone/tmp/vfs/temp/tempb75b67d7adb84a3d/web.war-47f6d3d54946006d/
and as soon as I stop Wildfly with /etc/init.d/wildfly stop, all these temporary files are instantly deleted from the disk.
Problem:
The WAR contains default .properties files which have to be modified/configured by the administrator. Since the files are deleted with every deployment, this is not currently possible.
Questions:
Is there a way to have WildFly deploy the .WAR to a permanent folder (similar to Apache Tomcat)?
Is it good Java EE practice to do so, considering the client wants to deploy this .WAR to a Debian cloud infrastructure, but also occasionally to Windows Server?
What alternatives should we consider to store the .properties values?
WildFly does support unzipped (exploded) deployments. Have a look at $JBOSS_HOME/standalone/deployments/README.txt for details. Basically, you can just unzip your WAR to a subdirectory and add a marker file to have it deployed.
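A minimal sketch of that procedure, using the hypothetical app name myapp.war and /tmp/wildfly as a stand-in for your real $JBOSS_HOME:

```shell
# Lay out an exploded deployment and signal the scanner with a marker file.
# /tmp/wildfly stands in for $JBOSS_HOME; myapp.war is a hypothetical app name.
DEPLOY_DIR=/tmp/wildfly/standalone/deployments
mkdir -p "$DEPLOY_DIR/myapp.war/WEB-INF"
# ...unzip the real WAR content into "$DEPLOY_DIR/myapp.war" here...
touch "$DEPLOY_DIR/myapp.war.dodeploy"   # marker file: asks the scanner to deploy
ls "$DEPLOY_DIR"
```

The scanner replaces the .dodeploy marker with .deployed on success or .failed on error, so the marker files double as a deployment status report.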
However, any configuration information that depends on the given host environment should not be placed in the WAR. A WAR is a compile-time artifact which should be regarded as immutable at run-time. (The fact that some web containers unzip the WAR and expose its internals is an implementation detail you should never rely on.)
Instead, you can define configuration data via system properties, environment variables, JNDI entries, whatever.
A very simple approach I often use with WildFly is the -P option:
cd $JBOSS_HOME/bin
./standalone.sh -P myconfig.properties
where myconfig.properties is a simple Java properties file. WildFly reads this file very early in its start-up phase and sets all properties as system properties.
Being system properties, these configuration items will be visible to all deployments, which shouldn't be a problem as long as you control what gets deployed to your server. To avoid conflicts between properties for different deployments, you can use deployment specific prefixes for your property keys, e.g.
app1.jdbc.url = jdbc:postgresql://localhost/app1
app2.jdbc.url = jdbc:postgresql://localhost/app2

Grails war command: what happens behind the scene

I know that in the Grails framework, you can build a WAR file using
grails war (builds a production WAR)
or you can build an environment-specific WAR file using
grails test war
Now, what I am having trouble understanding is: if I build a WAR using grails war but deploy it to a test environment (where -Dgrails.env=test), the WAR built with grails war runs happily by picking up the test environment settings (like pulling data from test URLs instead of prod URLs).
My question is: what is the point of building a WAR with an environment-specific command (i.e. why use grails test war when the WAR built with grails war works everywhere)?
Am I missing something obvious?
The reason for using an environment is that you may have code in your application that hooks into the build process and alters the resulting WAR based on the environment, such as reconfiguring some filters in web.xml. It's an extension point; you can use it if you need it.
Grails provides three automatic environments: dev, test, prod. There are defaults for the various scripts, e.g. run-app runs dev, test-app runs test, and war builds a WAR for prod. These are there for convenience and match the most common daily usage patterns for developers; e.g. in testing the default is an in-memory DB.
You can also add more environments as you see fit. E.g. a staging or integration environment is common, so by providing such an env (maybe with only some config or DB changes) you can easily build a WAR file for the server your QA team uses.
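For instance, a custom staging environment could be sketched in grails-app/conf/DataSource.groovy (the URL and credentials are placeholders):

```groovy
environments {
    staging {
        dataSource {
            // placeholder connection details for the QA server
            url = "jdbc:postgresql://staging-db/myapp"
            username = "myapp"
        }
    }
}
```

A WAR for it would then be built with grails -Dgrails.env=staging war.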
Another use case is to build a dev WAR, as there might be something odd with the WAR on the production server and you just need to run the WAR against that odd Tomcat 6.x real-life environment, but with the dev settings against your DB.
That said, there is still the config you can add via config files, but the environments give a rather sane setup for all involved, as they are usually under version control.
And as a final step you still have access to the environment in your own scripts/_Events.groovy hooks, where you might e.g. drop or add something that only makes sense for that exact environment (e.g. drop some JARs, as they are already on the server).
In the end, this feature gives you some freedom to do what you want. Be glad if you never have to use it; but once you need it, you'll be glad it's there.

Multiple configurations of the same war

Currently, I have a web application: I export it from Eclipse as a WAR, copy it manually with scp to the server, run a script that extracts the WAR and overwrites the configuration files inside it with local ones, and copy the extracted folder into tomcat/webapps. This sounds easy for a server or two, but not for 100.
How can I improve this, in order to have better control of the versions/configurations installed and to deploy it more easily?
You could really benefit from using CruiseControl or Hudson to do continuous builds for you. There you can have the server-local configurations built into the WAR, and you could build many flavors of it. Deployment is then just a matter of pushing the proper WARs to their rightful place: no exploding or re-warring required.
See Deploy web application on multiple tomcat servers with Kwatee Agile Deployment. Once you have configured the deployment parameters with the web interface you could trigger from Ant using the kwatee task or from a continuous integration tool with the python CLI tools.
To help manage these multiple configurations, I have written a very lightweight library named xboxng.
It is flexible and pretty easy to use (no need for JNDI).
Store the specific configuration files of each server in a directory, put these under version control, and use something like Ant to take a "naked" war, unzip it, replace the files with the config files of the server you want to deploy to, and rebuild the war.
Then scp the war to the server directly. This can also be done using ant.
configs/
  server1/
    file1.properties
    file2.xml
  server2/
    file1.properties
    file2.xml
ant -Dserver=server2 war deploy
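A sketch of such a build.xml, with hypothetical target and path names (the naked WAR location, the config layout, and the scp destination are assumptions):

```xml
<target name="war">
    <!-- unzip the naked WAR, overlay the per-server config, re-zip -->
    <unzip src="dist/myapp-naked.war" dest="build/exploded"/>
    <copy todir="build/exploded/WEB-INF/classes" overwrite="true">
        <fileset dir="configs/${server}"/>
    </copy>
    <zip destfile="dist/myapp-${server}.war" basedir="build/exploded"/>
</target>

<target name="deploy" depends="war">
    <exec executable="scp">
        <arg value="dist/myapp-${server}.war"/>
        <arg value="${server}:/opt/tomcat/webapps/myapp.war"/>
    </exec>
</target>
```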
As this is not a single-answer question, and depends on the project and, why not, personal taste, I will post some of the steps I believe could help the whole process of management/deployment.
This does not mean that the solutions offered by the other posters are wrong (some of them received my upvote), but that this is what I found will work better for me.
1) In order to have only one version of the WAR with multiple configurations, I used JNDI. I set up an env entry holding the path where the config files can be found. This was added to web.xml:
<env-entry>
    <description>path to configuration files</description>
    <env-entry-name>appName/pathToConfigFiles</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>/configFolder/appName/</env-entry-value>
</env-entry>
This has a default value, used when the config files are taken from the WAR, but it can be overridden in context.xml:
<Environment name="appName/pathToConfigFiles"
             type="java.lang.String" value="/etc/.."/>
This way, if someone needs to change the database connection parameters, for example, I will not have to deploy a new WAR; the admin can change the file in the configuration folder.
The db config file and log4j file are my only external files. The rest of the configuration is done through the database.
The main advantage is that the same artifact can be deployed both in testing and production, and on any of the 100 servers. I currently use it on Tomcat, but JNDI env entries should be available in other app servers too.
2) Changed from an IDE build to a build tool. I chose Maven, but you can use Ant or whatever.
For this I found useful the following sources:
"Maven by Example" book;
M2Eclipse plugin
For this I will also need to install Nexus as a mirror repository.
3) Install a continuous integration tool like Jenkins/Hudson. It is a wonderful tool, but because of its complex nature it will take time to configure and to grow its functionality. I am currently reading Jenkins: The Definitive Guide, and I am trying to obtain the following functionality:
Automated build server
Automated junit test server
Adding metrics
Automated test env deployment and Acceptance Testing
Continuous Deployment
Until this is accomplished, the WARs will be deployed through bash scripts; I just scp the WAR to the server (no exploding/re-warring).
