Spring Boot project with embedded Tomcat & multiple environments - java

I have a Spring Boot (with embedded Tomcat 8) project with multiple server components that get deployed in multiple environments (dev/test/prod). How do you make one jar that can be deployed into multiple environments in such a way that in each environment the jar picks up the appropriate parameters, such as the DB and other server URLs that each environment is supposed to use? The objective is not to touch the jar file, which would invalidate the QA process. With a traditional deployment, I typically change a flag in the properties file to indicate the environment, and the rest of the properties are read based on that parameter.

You package your jar (or war) with mvn package and then, to execute it, add a -Dspring.profiles.active parameter setting your environment, something like: mvn spring-boot:run -Dspring.profiles.active=dev
Check the Spring Boot documentation on profiles and on externalized configuration.
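The usual pattern for one untouched jar across environments is one profile-specific properties file per environment inside the jar, selected at launch. A minimal sketch (the datasource URLs and jar name are placeholders):

# application-dev.properties (packaged inside the jar)
spring.datasource.url=jdbc:mysql://dev-db:3306/app

# application-prod.properties (packaged inside the jar)
spring.datasource.url=jdbc:mysql://prod-db:3306/app

# same untouched jar, different launch command per environment:
java -jar -Dspring.profiles.active=dev app.jar
java -jar -Dspring.profiles.active=prod app.jar

Spring Boot loads application-{profile}.properties on top of application.properties, so the artifact itself never changes between environments.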

Related

Spring Boot profiles for externally deployed war

I have a Spring Boot application that is deployed to an external Tomcat server; everything works locally with a local DB. Now I have to promote the code to higher environments where the DB configurations are different. I have read a lot about profiles with -Dspring.profiles.active=dev etc., but how will the Spring project know which environment it is in when it runs on an external Tomcat and is not started with
java -jar -Dspring.profiles.active=dev demo-0.0.1-SNAPSHOT.jar
You can set the SPRING_PROFILES_ACTIVE=dev environment variable for the Tomcat process, or pass -Dspring.profiles.active=dev as a JVM argument via CATALINA_OPTS.
The Spring Boot documentation on externalized configuration covers all the ways to set properties and activate profiles.
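A minimal sketch of the CATALINA_OPTS approach (the profile name is illustrative): create bin/setenv.sh next to catalina.sh in the external Tomcat installation; catalina.sh sources it on startup:

# $CATALINA_BASE/bin/setenv.sh (use setenv.bat on Windows)
export CATALINA_OPTS="$CATALINA_OPTS -Dspring.profiles.active=dev"

# alternatively, export the equivalent environment variable:
export SPRING_PROFILES_ACTIVE=dev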
You can also link the Spring Boot profile to a Maven profile.
That way you can build your jar package specifying the wanted Spring Boot profile:
mvn clean install -P Prod
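A sketch of that Maven-to-Spring link, assuming the project inherits from spring-boot-starter-parent (which enables @...@ resource filtering on application.properties); the property name activatedProperties is illustrative:

<!-- pom.xml -->
<profiles>
    <profile>
        <id>Prod</id>
        <properties>
            <activatedProperties>prod</activatedProperties>
        </properties>
    </profile>
</profiles>

# application.properties
spring.profiles.active=@activatedProperties@

Note that this bakes the profile into the artifact at build time, which conflicts with the "never touch the jar" objective above; the runtime switches (the -D flag or the environment variable) keep one artifact for all environments.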

How to make a Spring Boot project run as a war package and also run as a jar

I want my project to be launched on my personal computer via the main function (either via java -jar or mvn spring-boot:run), and when development is complete, I want to deploy it directly to Tomcat.
How should I configure the project to do this?
You don't have to do anything special. Just follow the official documentation to build a deployable war. A war file created by the Spring Boot build process is executable as a regular jar file because it contains an embedded servlet container in a separate directory called lib-provided, which is added to the classpath only when the war is executed directly.
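For reference, the documented setup amounts to subclassing SpringBootServletInitializer (plus <packaging>war</packaging> and marking spring-boot-starter-tomcat as provided). A sketch with an illustrative class name; note that the initializer's package moved between Boot versions:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer; // package differs in Boot 1.x

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    // used when an external Tomcat bootstraps the war
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }

    // used for java -jar and mvn spring-boot:run
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}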
Bonus: if you want to get rid of unnecessary embedded-server dependencies when creating a deployable war, you can check out a blog post which shows how to do it step by step.

Grails war command: what happens behind the scenes

I know that in Grails framework, you can build a war file using
grails war (builds a production war)
or you can build an environment-specific war file using
grails test war
Now, what I am having trouble understanding is this: if I build a war file using grails war but deploy it to the test environment (where -Dgrails.env=test), the war built with plain grails war runs happily, picking up the test environment settings (like pulling data from test URLs instead of prod URLs).
My question is: what is the point of building a war with an environment-specific command (i.e. why use grails test war when the war built with grails war works everywhere)?
Am I missing something obvious?
The reason for using an environment is that you may have code in your application that hooks into the build process and alters the resulting WAR based on the environment, such as reconfiguring some filters in web.xml. It's an extension point; you can use it if you need it.
Grails ships three automatic environments: dev, test, prod. There are defaults for the various "scripts", e.g. run-app runs dev, test-app runs test, war builds a war for prod. These are there for convenience and make the most sense given developers' daily usage patterns, e.g. in testing the default is an in-memory DB.
You can also add more environments as you see fit. E.g. having a staging or integration environment is common, so by providing such an env (maybe with only some config or DB changes) you can easily build a war file for the server your QA team uses.
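For example, in Grails 2.x a custom environment is just an extra block in grails-app/conf/Config.groovy; a sketch with placeholder URLs:

// grails-app/conf/Config.groovy
environments {
    production {
        grails.serverURL = "http://www.example.com"
    }
    staging {
        // custom environment for the QA server
        grails.serverURL = "http://staging.example.com"
    }
}

Build a war against it with grails -Dgrails.env=staging war (custom environments have no shorthand like test war).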
Another use case is building a dev war: there might be something odd with the war on the production server, and you need to run the war against that odd Tomcat 6.x real-life environment, but with the dev settings against your own DB.
That said, there is still the config you can add via config files, but the environments give a rather sane setup for all involved, as they are usually kept under version control.
And as a final step, you still have access to the environment in your own scripts/_Events.groovy hooks, where you might e.g. drop or add something that only makes sense for that exact environment (e.g. drop some jars because they are already on the server).
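A minimal sketch of such a hook, assuming Grails 2.x build events (the jar pattern is hypothetical, and grailsEnv is the environment binding available to build scripts):

// scripts/_Events.groovy
eventCreateWarStart = { warName, stagingDir ->
    // drop jars the target server already provides, but only for this environment
    if (grailsEnv == "production") {
        ant.delete(verbose: true) {
            fileset(dir: "${stagingDir}/WEB-INF/lib", includes: "server-provided-*.jar")
        }
    }
}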
In the end, this feature gives you some freedom to do what you want. Be glad if you never have to use it; but once you need it, you'll be glad it's there.

How to write self-contained YARN applications that can be run with "hadoop jar"?

I have to run something inside a Hadoop cluster which cannot be expressed in terms of Map/Reduce. I thought of writing a YARN application for it. I discovered Spring Yarn for spring-boot and followed the Getting Started (see link). This works so far, but there are some flaws:
In the tutorial three JARs are produced (one for the client, one for the appmaster and one for the container), which have to be in a specific folder structure when submitting the app
I have to hard-code HDFS URI and Resource manager host/ports in an application.yml or supply them as command-line parameters
Since it is based on Spring Boot, and the application is started with java -jar, the JAR files created are very large with basically a whole Hadoop stack in them
The exact names of the JAR files have to be mentioned in application.yml
What I want:
Single JAR with the JARs for appmaster and container packaged in it
Runnable from the command line with hadoop jar
Using the configuration which is available when running with hadoop jar (for MR2 this is possible by launching a class that extends Configured and implements Tool via ToolRunner.run(), which makes a Configuration available in the Tool's run method; see the sketch after this list)
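For reference, here is the MR2 pattern that last bullet refers to; the class name is illustrative and the submission logic is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyYarnClient extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // ToolRunner has already injected the cluster Configuration
        // (fs.defaultFS, resource manager address, ...) picked up by "hadoop jar"
        Configuration conf = getConf();
        // ... submit the YARN application using conf ...
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyYarnClient(), args));
    }
}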
The approach I think of is:
Write the Container and AppMaster, set the YARN and Hadoop dependencies to provided in their POMs, have them packaged with the maven-shade-plugin as I do for MR jobs
Write the client, add the AppMaster and Container as dependencies, package it with maven-assembly-plugin to prevent the JARs from being extracted
I tried Twill, but to no avail. I get
java.lang.NoSuchMethodError: com.google.common.collect.Sets.newCopyOnWriteArraySet()Ljava/util/concurrent/CopyOnWriteArraySet;
because my Hadoop installation uses Guava 11 while Twill needs 13. Even though Guava 13 is shaded into the jar, it is simply ignored.
I found something I would call a "workaround" which works well enough for my use case:
I build my application with Spring YARN, resulting in separate JARs for Client, Container and AppMaster
I add them as modules to a Master POM which controls the version number (whenever I change anything in one of the former three projects, I increment the Master POM's version)
This Master POM is itself a module of my big, project-wide parent POM; however, the Master POM's parent is not that project-wide POM but spring-boot-starter-parent.
When built by Jenkins, this creates the three JARs mentioned above; I currently pack them manually into a folder with a start script beside them. This is just a temporary solution, as this application contains a long-running task which is later to be started by the user from a web application (also based on Spring). I still have to figure out how to submit the application from there.
My idea is the following; it is similar to how I currently do it for MR jobs:
Add the JARs as dependencies to the web application's pom.xml
Include a basic application.yml with no YARN and JAR information in the three JARs
Use the same technique as Job.setJarByClass() uses to locate the AppMaster and Container JARs
Call the client's main class with SpringApplication.run() passing in connection properties and the resolved locations of JAR files via the command line (args variable)
If anyone could give me a hint as to whether this is a feasible approach, please let me know.
I dug into this some more and found out that it is not easy to package and spawn Spring Boot apps from inside other Spring Boot apps. For my use case, calling Spring YARN applications from a Spring Boot application that does not itself use YARN, the following approach works:
Create the Spring YARN application in "single file mode" like in this tutorial
Package the resulting JAR into the application from where you want to deploy it
e.g. when using Maven, you can add it as a dependency
Make sure the deploying application excludes the YarnClientAutoConfiguration, like so: @EnableAutoConfiguration(exclude = YarnClientAutoConfiguration.class)
Make sure the packaging plugin packages the JARs as whole archives, like the maven-assembly-plugin or spring-boot-maven-plugin does (NOT the maven-shade-plugin)
Unpack the big JAR of your deploying application to a temporary directory
Use a ProcessBuilder to run java -jar on the Spring YARN application, passing the correct configuration options on the command line (a sketch follows this list)
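A hedged sketch of those last two steps; the jar path and the property keys are placeholders, not verified Spring YARN property names:

import java.io.IOException;

public class YarnClientLauncher {
    public static void main(String[] args) throws IOException, InterruptedException {
        // spawn the unpacked Spring YARN client jar as a separate JVM
        Process client = new ProcessBuilder(
                "java", "-jar", "/tmp/unpacked/yarn-client.jar",
                "--hdfs.uri=hdfs://namenode:8020",          // placeholder property
                "--resourcemanager.address=rmhost:8032")    // placeholder property
            .inheritIO() // forward the client's stdout/stderr to this process
            .start();
        System.exit(client.waitFor()); // propagate the client's exit code
    }
}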
This is all kind of hacky; Hadoop definitely needs something similar to Job for MR jobs that just runs things on YARN.

How to hot deploy sources with gradle into tomcat7?

Does anybody know a Gradle 'hot deployment' plugin (or middleware such as a shell script) which copies files from the source folder directly into the project folder in Tomcat's webapps directory (not an embedded server like Gretty or the Gradle Tomcat plugin; version 7, environment independent)?
In the end I want a smart dev workflow to (re-, un-)deploy a Java web application during code crafting. I'm searching for something like Grunt watch tasks.
Scenario: Java web application with a self-contained, executable jar file in the WEB-INF/lib folder.
register watcher tasks on top of the Gradle tasks
java source is changed
stop Tomcat
remove the old jar file from the WEB-INF/lib folder
deploy the new jar file
copy the jar into the WEB-INF/lib folder
(delete all log files)
start Tomcat
Restarting Tomcat is not needed if only static sources are changed (e.g. JSP, JS, etc.).
Solution
I thought about our working practices at the office. My colleagues and I program on Windows machines, and we use a keymap configuration in IDEA to start and stop our locally installed Tomcat.
The easiest way for me is to define a user-level CATALINA_HOME system environment variable which references the path to the Tomcat server.
CATALINA_HOME = C:\Program Files\apache-tomcat-7.0.56
I define a deploy task which copies the compiled war file into the webapps folder ((re)starting Tomcat manually via IDEA).
task deploy(type: Copy) {
    def WEBAPPS_HOME = System.getenv('CATALINA_HOME') + '/webapps'
    from 'build/libs/app.war'
    into WEBAPPS_HOME
    dependsOn war
}
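If your Gradle version supports it (2.5+), continuous build gets close to the Grunt-style watch asked for above; Gradle re-runs the task whenever its inputs change:

gradlew --continuous deploy

Combined with Tomcat's own autoDeploy watching the webapps folder, this rebuilds and re-copies the war on every source change.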
Nobody needs to change the Tomcat path inside the build.gradle file, and there is no additional user.config file which has to be ignored by Git.
But I don't like handling Tomcat manually, and it is unusual to work with environment variables on Macs.
So I decided to look for an embedded Tomcat server in the form of the Gradle Cargo plugin for local development. It is recommended by Benjamin Muschko (Gradleware engineer) in "How to use local tomcat?...", where he describes the differences between the Cargo and Tomcat plugins....
Setup of this plugin is quite easy, so I don't need to explain it. Nobody needs to install their own Tomcat, and everybody works with the same server version.
For our nightly build I use the power of the Gradle wrapper in the Jenkins task configuration.
I execute a Windows batch command:
cd "%WORKSPACE%\app"
gradlew.bat clean build
I use Jenkins to manage deployments for our applications.
There are a number of plugins which help with such tasks, along with the ability to write your own scripts.
Jenkins is highly configurable so you are able to adapt it to your own needs.
