How to run Spring server app in production mode? - java

I have followed the quickstart guide on the Spring website and successfully ran the hello-world Spring server app via ./mvnw spring-boot:run.
And it works.
I did some benchmarking (on my local machine), but it's at the Rails 7 requests-per-second level, and I was hoping for at least Express-like numbers - so something that is 15-20x or more faster than RoR.
Is there a command-line flag like production or --release to get a release-tuned app that is faster?
UPDATE:
Even if I run the Maven build and then java -jar on the jar in target, it is still at the same Ruby on Rails level. For example, an Express app of the same complexity is 40x faster and a Rust (Actix) one 400x faster, while it should be roughly the same as or faster than Express and perhaps 3-5x slower than Rust. I need to turn off debugging and other stuff and produce a release build of a Spring app somehow. Any idea how to do that?

You have a web application. Web applications on Spring use the Servlet specification.
Your application is packaged as a bundled .war file, which is ready for deployment on a servlet container.
In production, the state of the art is a servlet container such as:
Tomcat
JBoss / Wildfly
Jetty
They run performance-optimized on hardened operating systems like Linux or Solaris, starting a JRE that does not allow debugging by default.
They use the -server VM flag, which brings performance improvements of about 5-10%. But a 20x difference in execution speed does not seem to be a compiler-flag problem.
Do you have an antivirus program running? That can have such an impact.
Try installing Tomcat, packaging the application for production, and copying the .war file to Tomcat's webapps folder. This might give the performance improvement you are looking for.
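For reference, that packaging and deployment flow is only a few commands (a sketch, assuming Maven and a local Tomcat install; the war name is illustrative):
./mvnw clean package                          # produces target/myapp.war (or a fat .jar for a Boot app)
cp target/myapp.war $CATALINA_HOME/webapps/   # Tomcat auto-deploys wars dropped into webapps
$CATALINA_HOME/bin/startup.sh                 # start Tomcat and watch logs/catalina.out
If you stay with the executable jar instead, reducing log output in production (for example logging.level.root=warn in application.properties) also removes some per-request overhead.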

Related

Microservice deployment --- simple jars vs docker containers

I am about to deploy a set of JAVA based microservices.
I am confused as to whether:
Run them as simple jars via "java -jar [JAR_NAME]"
Run them in a JAVA based docker container.
Run them as a war.
Please offer me pros and cons of each implementation as this will save me a lot of headache if I use the suggested best approach :)
Thanks in advance.
Definitely Docker. Using containerization gives you max flexibility.
In your first approach, your jar is dependent on Java. Whenever you create a new VM, you need to install a fixed set of software to support your application.
Benefits of the second approach:
First, everything is going to be in a single container.
You can install all required software in the container, and that container can be used in any VM. You have the flexibility to use the Java version of your choice for each microservice. Just install Docker and everything will work.
Second, Dev Prod Parity
If you think a lot about microservice architecture and 12-factor apps, then Docker helps support many of those factors.
Your Java and other software will be identical across all your environments. That means you will never be surprised when something works in QA but not in prod due to a version mismatch in the runtime environment.
Third, Flexibility
If you go with a microservice architecture, then why only Java? You can also go with Go, Python, or other languages. In that case, rather than installing a runtime environment for each platform on each VM, it is very useful to have each microservice in a container.
Last, Ease of Deployment
You can use docker-compose or Docker Swarm to run hundreds of microservices with a single command.
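As an illustration, a minimal Dockerfile for one such jar-based microservice could look like this (a sketch; the base image tag and jar name are just examples):
FROM openjdk:8-jre-alpine
COPY target/my-service.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
Build it with docker build -t my-service . ; a docker-compose.yml that lists each such image then lets you start the whole set with docker-compose up -d.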

Spring Boot Application deployment on remote server

I have a Java Spring Boot Application, and I build it with Maven. With
spring-boot-maven-plugin,
I can create a fat, executable jar file.
Then I copy it to the remote server and run it. But sometimes I change only one line or even one word in my code, and I have to do the whole build/copy step again. I'm sure that I'm doing it wrong, but I couldn't find a more efficient way (like Capistrano in Rails).
At this point, I'm planning a clone-the-source-to-the-server, push from local, pull on the remote, build and run approach. What is the correct (or elegant) way of doing this deployment?
For an automatic build and deployment process (continuous integration), you can use Jenkins. Refer to this documentation for more details: https://jenkins.io/doc/
I would say it depends on where you are trying to do it.
The best and most agile way to do it for a controlled environment is surely a CI/CD (Continuous Integration and Continuous Deployment) pipeline, which compiles, builds, tests, and deploys your code on every commit made to the source code. BUT it may be too slow to use CI/CD for a development environment, where you would like a shorter feedback cycle and faster feedback to see how the code is progressing.
However, if you are talking about a development environment, I will strike another chord and ask why you deploy to an external server AT ALL while developing. When you use Spring Boot, which helps you develop a self-contained application, you get the Tomcat server embedded with it for free. That gives you the choice to run the code wherever you develop and test as you move forward.
A simple Maven goal - mvn spring-boot:run - can make the code run anywhere you'd like.
There is another magical library available in Spring Boot, known as Devtools, which is meant to support agile developers. Once on the app classpath, the library hot-swaps bytecode to automatically reload code into the running application (running locally with embedded Tomcat) as soon as there is a saved change. This is one of the coolest gadgets a developer can have.
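If you want to try Devtools, it is a single Maven dependency (a minimal sketch; the version is managed by the Spring Boot parent POM):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>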
The Spring-Loaded library (or JRebel for non-Spring-Boot apps) can also help a developer hot-swap bytecode to load changes into a running application as soon as they are saved.
I hope it helps.

Would Docker or Vagrant be help in creating test machine for our enterprise product

I am working on an enterprise product, and primarily there are three pieces to it: a Swing-based client, a DB, and a server (for now we can ignore the DB part). Being an enterprise product, the client and server come with their own installers (it is not like configuring Apache or JBoss and deploying wars on it).
We have CI configured to generate nightly OS-specific builds for the client and server, which can be installed.
So we have to test these builds regularly on specific OSes, which requires a lot of manual work installing and creating systems with version X of the client on OS Y, or version X of the server on OS Y. This is becoming very tedious, since we are all on Windows and clicking next -> next -> really sucks (I have created a script which installs our product via a shell, but there are still steps which I believe can be automated, I just don't know how). We also need isolation.
Now I am thinking about how we can automate the process of creating these test machines. I have just started exploring whether Vagrant/Docker can be helpful to me (and I am still getting my head around their concepts; I don't understand Puppet/Chef yet), and I am confused about which strategy I should adopt:
Create a VM via Vagrant and run my installation script on that box (this will require one VM per client or per server).
Create a VM via Vagrant and run my client Docker containers on it (this, I guess, will require one VM for multiple clients or servers, since they would be in containers).
Note: I have to create a VM since we are on Windows, either via Vagrant or via boot2docker.
So my questions are:
If these two strategies are valid and not wrong, which of the two should I adopt?
Is there a different strategy that I am missing, or am I approaching this the right way?
If strategy #2 is to be adopted, how can I create container/Docker images in which my client is installed?
how can I create container/docker images in which my client is installed
You must put in a Dockerfile everything you do in order to have your client installed, configured, and started.
To do so, you can either create a container, do all the steps by hand, and then docker commit, or (the better way) put all the required commands in a Dockerfile, so that when you make a slight modification you can easily build a new version with a basic docker build -t myclient_version_n .
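As a rough illustration (assuming the client installer can run unattended on Linux; every name below is hypothetical), such a Dockerfile might look like:
FROM ubuntu:14.04
COPY client-installer.sh /tmp/client-installer.sh                     # hypothetical silent installer produced by your CI
RUN /tmp/client-installer.sh --silent && rm /tmp/client-installer.sh  # run it unattended, then clean up
CMD ["/opt/myclient/bin/start-client"]                                # however the installed client is launched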
Check the docs
https://docs.docker.com/examples/mongodb/#creating-a-dockerfile-for-mongodb
and how to automate builds
http://docs.docker.com/docker-hub/builds/#automated-builds
how to create a Dockerfile
https://docs.docker.com/examples/nodejs_web_app/#creating-a-dockerfile
and have a look at the existing Dockerfiles of containerized applications on the Docker Hub
https://registry.hub.docker.com/
An alternative to Vagrant would be to use Docker Machine. You could leverage the cloud providers as #m1keil mentioned too. Machine can provision Docker hosts on a number of providers and they are ready to go.
Disclosure: I work at Docker and am the maintainer of Machine :)
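For reference, provisioning a ready-to-use Docker host with Machine takes only a couple of commands (a sketch using the VirtualBox driver; the host name is arbitrary):
docker-machine create --driver virtualbox test-host    # create a VirtualBox VM with Docker installed
eval "$(docker-machine env test-host)"                  # point the local docker CLI at that host
docker run hello-world                                   # sanity check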
Your strategies seem valid to me. The addition of containers (Docker) to your process might help you speed up and parallelize the testing process (if it's fully automated testing), since the initialization time and the general resource consumption of a container are lower. However, one cannot give you a definitive answer without inspecting your testing process first. And since you haven't provided any details about it, it would be hard to tell you whether you should use the first or the second strategy.
You can take advantage of the cloud and use services such as AWS, Azure, GCE, etc to initialize machines and run your tests. You can use Vagrant to do this, or skip Vagrant and create your own simple scripts by using the appropriate APIs of your chosen Cloud provider.
You can also take a look at services such as Travis CI, Circle CI, and others, which might help you create an automated testing pipeline without the need to spend too much time on the plumbing.
I really like Docker's ease of use via the Dockerfile. The Dockerfile lets you very easily update and control the software in the Docker image, and then you can provision it in your CI/testing environment. Docker now has native Windows support, so this shouldn't prevent you from being able to use it: https://docs.docker.com/docker-for-windows/ Furthermore, I like that you can set up very lightweight, minimal machines, with only the build and runtime dependencies needed for your project, and store them for free on hub.docker.com. Depending on how long it takes to build and install certain dependencies, this can speed up your testing because you can just download a Docker image with everything already installed and built, and then just build and test your actual project.
I use this for https://github.com/sourceryinstitute/opencoarrays, which is GCC's official implementation of Coarray Fortran. I have a little project, https://github.com/zbeekman/nightly-docker-rebuild, that lets you set up nightly Docker image builds on hub.docker.com in under two minutes. I use this to trigger builds of https://github.com/zbeekman/nightly-gcc-trunk-docker-image because I can't rebuild GCC from source on Travis-CI.org without the build timing out. This way, I delegate the nightly GCC build to hub.docker.com and then just docker pull zbeekman/nightly-gcc-trunk-docker-image into a Travis CI instance to test OpenCoarrays against the latest GCC trunk.

Starting a netty application on a linux server

I have written a little Netty server application, packaged in a jar file, that I want to deploy on a Linux server.
Since I have no professional experience with deploying java applications, I was wondering if it is enough to start the netty server by doing:
java -jar NettyServer.jar NettyServer &
Obviously a script could be created to ensure the correct user starts the process etc., but is this the way (stand-alone) Java services are deployed?
It seems almost too easy, considering every other question/answer seems to mention some big hunky container-bean-glassfish-tomcat-whatnot (which I might consider later on if/when issues arise)
Yes, that's the way - no container needed! I built a middleware (http://sourceforge.net/projects/serviceconnecto/) using Netty as the underlying framework. It's the way I start my server as well. Just verify the classpath is set correctly - meaning the libraries are in the correct place and the jar archive is correctly built.
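One practical note: started like that, the process is still tied to your shell session. A common variant (just a sketch; file names are illustrative) is:
nohup java -jar NettyServer.jar > nettyserver.log 2>&1 &   # keep running after logout, capture output
echo $! > nettyserver.pid                                   # remember the PID so it can be stopped later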
I personally prefer Upstart to start services on linux. http://upstart.ubuntu.com/
It is very easy to use, and can also restart your application on crash.
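A minimal Upstart job for a jar like this might look as follows (a sketch; the path, user, and the /etc/init/nettyserver.conf file name are all illustrative):
description "NettyServer"
start on runlevel [2345]        # start in the normal multi-user runlevels
stop on runlevel [016]          # stop on halt, single-user, reboot
respawn                         # restart the process if it crashes
setuid appuser
exec java -jar /opt/nettyserver/NettyServer.jar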
I hope it helps.

Java web development environment to minimize build-deploy-test cycle time?

What Java web development environment is the best for absolutely minimizing the build-deploy-test cycle time?
Web development environment: JBOSS, Tomcat, Jetty? Deploy WAR exploded? Copy WAR or use symbolic links? There are factors here I don't know about.
Build-deploy-test cycle? The amount of time it takes to test a change in the browser after making a change to the source code or other resources (including Java source, HTML, JSP, JS, images, etc.).
I am looking to speed up my development by reducing the amount of time I spend watching Ant builds and J2EE containers start. I want the Ruby on Rails experience --- or as close as I can get.
I'd prefer a solution that is web framework agnostic, however if a particular framework is particularly advantageous, then I'd like to hear about it.
Assume all the standard tools are in use: Hibernate, Spring, JMS, etc. If stubbing/mocking support infrastructure is required to make this work, I'm OK with that. In fact, I'm OK with having a development environment that is very different from our production environment if it saves me enough time.
You should probably take a look at Javarebel:
http://www.zeroturnaround.com/javarebel/
and this thread here:
How to improve productivity when developing Java EE based web applications
JBOSS uses Tomcat for its servlet/JSP engine, so that's a wash.
Tomcat does support hot deploy.
Jetty's pretty small and starts quickly, but it doesn't support hot deploy.
Eclipse is merely an IDE. It needs a servlet/JSP engine of some kind. If it's like IntelliJ, you can use any Java EE app server or servlet/JSP engine you'd like.
IntelliJ is pretty darned fast, and you don't have to stop and start the server every time you rebuild. It works off the exploded WAR, so things happen fast.
Building (it used to be compiling) is a sign of our times. We need quick validation of our thoughts and our actions. Whenever I find myself building too many times, it is usually a sign that I'm not focused, that I don't have a plan. For me this is the time to stop and think: make a list of things that need to be done (this is web-framework agnostic), do them all, and test them all after one build.
JBoss Seam together with JBoss Developer Studio is good for hot-deploying everything aside from EJBs (SLSBs, SFSBs, and entities need a redeploy).
Have you considered Grails?
Deployment is as fast as it can get with Google App-Engine + GWT (optional) + Eclipse Plugin.
Never seen anything faster.
Maven 2 and eclipse. mvn eclipse:eclipse <- pure awesomeness. Also, WTP within eclipse works great (and maven generates working WTP projects).
Small web containers will load faster than overloaded web containers with the kitchen sink built in (.. cough .. JBoss).
Some design decisions slow build times (e.g. aspect-weaving based toolkits add an aspect-weaving phase to compile times).
Avoid building components that can only be tested after long elaborate load cycles. Caches are a prime culprit here. If your system has deep dependencies on a global cache scattered everywhere you'll need to load the cache every time you need to test something.
Unit-testable components, so you can run pieces instead of the whole thing.
I find that reasonably built projects compile, deploy, and start up in a few seconds to 10 seconds, which is usually fine.
GWT in eclipse is probably the fastest I can think of. Using the hosted mode browser for your tests you can debug and change your code without restarting anything. Just need to click the refresh button in the browser and the changes are there (java, css, etc). One other thing is that GWT is adding this same support to normal browsers (Firefox, IE, Safari) so you can debug from within them the same way. These changes are coming in 2.0. See http://code.google.com/events/io/sessions/GwtPreviewGoogleWebToolkit2.html
Have you tried using Eclipse Java EE and telling it to deploy to a server managed by Eclipse? Tomcat and JBoss work pretty well this way. It also allows you to change code in a method, press Ctrl-S, and have the class updated inside the server.
MyEclipse also works pretty well like this.
JRuby on Rails. Develop on whatever platform you want, deploy to standard Java servers.
I think the best way to avoid long build-deploy-test cycles is to write unit tests for your code. This way you can find bugs without waiting for the build/deploy phases.
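For example, a plain JUnit test exercises the logic directly without starting any container (a sketch; PriceCalculator is a hypothetical class under test):
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {
    @Test
    public void appliesTenPercentDiscount() {
        // no server, no deployment - just instantiate and assert
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.discountedPrice(100.0, 0.10), 0.001);
    }
}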
For JSP you can edit the JSP files directly in the JBOSS work folder:
> cd $JBOSS_HOME/server/default/tmp
> find -name myJspFile.jsp
./tmp/vfs/automountd798af2a1b44fc64/Jee6Demo.war-bafecc49fc594b00/myJspFile.jsp
If you edit the file in the tmp folder, you can test your changes just by hitting the browser refresh button.
