Tomcat - Asynchronous HTTP Calls Super Slow vs. Jetty - java

We have a Java-based web application that makes a couple of bursts of asynchronous HTTP calls to web services and APIs. Using a default Jetty configuration, the application takes roughly 4 seconds to complete. The same operation on Tomcat is taking over a minute.
We have attempted a slew of configuration changes for Tomcat, but nothing seems to help. Any pointers?
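For context, the pattern in question looks roughly like this (a minimal sketch with hypothetical endpoints, not the asker's code):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class BurstClient {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoints standing in for the web services and APIs the app calls
        List<URI> endpoints = List.of(
                URI.create("https://api.example.com/a"),
                URI.create("https://api.example.com/b"));
        // Fire the whole burst at once, then wait for every response
        List<CompletableFuture<HttpResponse<String>>> futures = endpoints.stream()
                .map(uri -> client.sendAsync(HttpRequest.newBuilder(uri).build(),
                        HttpResponse.BodyHandlers.ofString()))
                .collect(Collectors.toList());
        futures.forEach(CompletableFuture::join);
    }
}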

Use a profiler to investigate where the time is spent. A good initial choice is jvisualvm in the JDK.
My initial guess would be a DNS issue.
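One quick way to test the DNS guess is to time name resolution for one of the hosts the application calls; a minimal sketch (the host name is a placeholder):
import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        String host = "api.example.com"; // placeholder: use a host your app actually calls
        long start = System.nanoTime();
        InetAddress address = InetAddress.getByName(host); // triggers a DNS lookup unless cached
        System.out.println(address + " resolved in " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}
If resolution takes seconds in the environment where Tomcat runs, the servlet container itself is not the problem.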

It doesn't make sense that Tomcat needs 60 seconds to process something Jetty handles in 4; they are both executing Java code.
Is there thread congestion on Tomcat? How many threads can the HTTP connectors of Tomcat and Jetty handle at the same time? What is your configuration?
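For reference, the Tomcat HTTP connector's thread pool is sized in conf/server.xml; a sketch with illustrative values (the attribute names are standard, the numbers are not a recommendation):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           redirectPort="8443" />
Comparing maxThreads here with Jetty's configured thread pool size would show whether requests are simply queueing for threads on the Tomcat side.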

One suggestion I have for getting to the bottom of your problem is to download the Tomcat source and step through the code. Although, as mentioned, profiling would save you a lot of time. Odds are that it's a DNS issue.

Related

Is it normal that a Spring Boot app on Docker uses more CPU than its "baseline" for 4 minutes after start?

We see that Spring apps we deploy to our OpenShift 3.11 cluster use more CPU than they normally do, without any HTTP requests, for around 4 minutes after start. See the screenshot below from Grafana showing the CPU usage from the start of the pod.
The app starts in a few seconds: Started MyApplication in 3.987 seconds (JVM running for 4.701).
The screenshot is from an app generated from https://start.spring.io/ where I have just added the following Dockerfile:
FROM docker.io/openjdk:15-jdk-alpine3.11
EXPOSE 8080
RUN apk -U upgrade
ADD target/myapp-api-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Is it strange that it uses more CPU than its "baseline" for 4 minutes? Is this a known issue for Spring apps on Docker?
The OpenShift project has a CPU request/limit of 300 millicores. When starting, the app uses a maximum of 0.023 cores, so that seems sufficient. I have turned off health checks for this project. A higher CPU request/limit does not make the app start faster.
The reason for digging into this is that we have had problems deploying some real-world Spring apps to OpenShift under high traffic, since the app uses so much CPU for the first few minutes that it has trouble serving requests.
As a temporary workaround we have increased the initial delay for the readiness probe, but that makes our deployments take longer, since OpenShift waits a few minutes for each instance before sending requests to it. I mention this just as background for my question about CPU usage.
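For reference, that workaround amounts to something like this in the pod spec (illustrative values; the health path and port are assumptions about the app):
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 120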
My guess would be that Spring Boot loads a huge number of Java resources during startup, which makes the GC work harder on the first run to release all the unused resources.
I would check the Spring Boot version, as a lot of progress has been made on optimization in the latest versions.
I would check that the JVM is tuned for the application: look at the -XX settings and see what GC options you have.
There is a lot to talk about on this subject; options such as GraalVM are also worth mentioning.
Anyway, I would point you toward reading more about JVM behavior on Kubernetes (and GC more specifically).
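As a concrete starting point for the -XX suggestion above, flags along these lines can shift work away from the post-startup window in a small container (a sketch with illustrative values, not a tuned recommendation; measure before adopting it):
ENTRYPOINT ["java","-XX:MaxRAMPercentage=75.0","-XX:TieredStopAtLevel=1","-jar","/app.jar"]
-XX:TieredStopAtLevel=1 keeps the JIT at the C1 compiler, which cuts startup and warm-up CPU at the cost of peak throughput, while -XX:MaxRAMPercentage sizes the heap relative to the container's memory limit.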

JSF performance problems with ajax requests

I am trying to find a bottleneck in a web-application running on JBoss.
I have a module that contains forms where, when moving from field to field, I do some server-side validation of the data using Ajax (those validations take under 1 ms). The modules are used in two separate web applications:
One running on Apache Tomcat, where each validation takes about 200-400 ms;
A second one running on JBoss 7.1.1, where each validation takes about 3-5 seconds. The problem here is that I have the exact same modules as those used on Tomcat, and the 5-second delay is really not an option.
I've measured the times everywhere I could, but I couldn't find any bottlenecks in the application running on JBoss.
So I used JProfiler and thread dumps to try to find the problem. Here's a screenshot of the result.
To me it looks like a problem in jsf/richfaces, but I am not sure for the exact reason and what can be done to fix this.
I'm using:
jboss 7.1.1, patched with jsf-impl-2.1.19-redhat-1
Richfaces 4.2.3.Final
jboss-jsf-api_2.1_spec-2.1.19.Final-redhat-1
What I've tried: using the latest RichFaces version, changing the JSF view state to server side, and enabling partial state saving.
Here's JProfiler screenshot:
From it, it seems to me that the performance issue is in javax.faces.view.facelets.ComponentHandler.applyNextHeader.
I am running out of ideas; any hints would be appreciated.
Check whether your JSF PROJECT_STAGE is set to Development. If so, try changing it to Production.
You can do this by removing the <context-param> named javax.faces.PROJECT_STAGE from your web.xml, or by setting its value to Production instead of Development.
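For example, in web.xml:
<context-param>
    <param-name>javax.faces.PROJECT_STAGE</param-name>
    <param-value>Production</param-value>
</context-param>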

seamless redeploy for java web application

I am working at a startup, and we are just about to roll out our first beta. Knowing that we will have a good number of users, we want seamless deployment when we add new features.
I have worked with Windows Azure before, and I know it supports seamless deployment, so I did some googling and CloudBees was the first result.
So the question is: with what we have now (Geronimo server, Rackspace hosting), is it possible to seamlessly redeploy a Java web application? If so, how?
Are there alternative solutions, such as using another hosting provider or a different web server? (Because it is a startup, it would be beneficial if the answer keeps scalability in mind.)
If by seamless redeploy you mean upgrading your application without any downtime or restarting your server, LiveRebel might be something to look at.
See http://zeroturnaround.com/liverebel
There are a lot of ways to do this in the Java world. If you don't use sessions (or share sessions between app servers), you can do a rolling stop/deploy/start of your app servers, taking one offline at a time and using a load balancer to ensure that traffic goes to the other servers.
I have heard GlassFish has such a feature; the reference probably meant this (GlassFish 3.x redeploy command): http://docs.oracle.com/cd/E19798-01/821-1758/6nmnj7q1h/index.html
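For reference, the GlassFish redeploy documented at that link is driven from asadmin and looks roughly like this (application name and WAR path are placeholders):
asadmin redeploy --name myapp myapp.war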

Suggestion GWT High Scale Application Server

Currently I am working on a large-scale application that uses GWT with Hibernate. We are facing some performance issues with the existing Jetty/Tomcat server, and we want another server that handles both Hibernate queries and GWT well.
The problem with Tomcat is that it sometimes stops responding to GWT requests, and the client hangs at certain points.
There are certain servers that come to mind, like:
GlassFish
Jboss
IBM WebSphere AS
etc.
Please suggest a high-scale server that handles GWT RPC requests well and runs well in a multi-client environment. We are expecting 100 concurrent users; hardware is not an issue.
I think your problem is not related to Tomcat or Hibernate; your application probably has a scalability problem. I suggest you investigate your application before investing in a fancy application server.
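A concrete first step for that investigation: when Tomcat stops responding to the GWT RPC requests, capture a few thread dumps a few seconds apart and look for blocked or pool-exhausted threads (the pid is a placeholder for the Tomcat process id):
jstack <tomcat-pid> > dump-1.txt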

Best practices in terms of replacing a web service?

So we have a busy legacy web service that needs to be replaced by a new one. The legacy web service was deployed as a WAR file on an Apache Tomcat server; that is, it was copied into the webapps folder under Tomcat and all went well. I have been delegated the task of replacing it and would like to do it while ensuring that:
I have a backup of the old service
the service gets replaced by another WAR file with no downtime
Again, I know I am being overly cautious; however, this is production, and I would like everything to go smoothly. Step-by-step instructions would help.
Make a test server
Read tutorials and play around with the test server until it goes smoothly
Replicate what you did on the test server on the prod server.
If this really is a "busy prod server" with "no downtime", then you will want some kind of test server on which you can get the configuration right.
... with no downtime
If you literally mean zero downtime, then you will need to replicate your web server and implement some kind of front end that can transparently switch request streams between servers. You will also need to deal with session migration.
If you mean with minimal downtime, then most web containers support hot redeployment of webapps. However, this typically entails an automatic shutdown and restart of the webapp, which may take seconds or minutes, depending on the webapp. Furthermore, there is a risk of significant memory leakage, e.g. of permgen space.
The fallback is a complete shutdown / restart of the web container.
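A minimal sketch of the back-up-then-replace step for the webapps-folder deployment described in the question (paths and file names are assumptions; with autoDeploy enabled, the default, the container will undeploy and redeploy the app when the WAR changes, with the brief restart mentioned above):
cp "$CATALINA_BASE/webapps/legacy-service.war" /backups/legacy-service-$(date +%Y%m%d).war
cp target/legacy-service-new.war "$CATALINA_BASE/webapps/legacy-service.war"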
And it goes without saying that you need:
A test server that replicates your production environment.
A rigorous procedure for checking that deployments to your test environment result in a fully functioning system.
A preplanned, tested and hopefully bomb-proof procedure for rolling back your production system in the event of a failed deployment.
All of this (especially rollback) gets a lot more complicated when your system includes other things apart from the webapp, e.g. databases.
