We are creating a test automation framework for a web application. For test scenarios involving scheduled jobs, we need to advance the time so that the jobs get triggered, but this is making the server (Tomcat) very slow. What could be the reason, and what is the solution?
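For reference, here is a minimal sketch of one way "advancing the time" is often done without touching the server clock: the jobs read the current time from an injectable java.time.Clock that the tests can move forward. This is an assumption about the setup, not something stated in the question, and the class and method names are illustrative.

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;

// Hypothetical sketch: a mutable Clock the tests can advance, so scheduled
// jobs that ask this clock for "now" see the jump without the JVM or OS
// time changing (assumes the jobs are written against an injected Clock).
public class MutableClock extends Clock {
    private volatile Instant instant;
    private final ZoneId zone;

    public MutableClock(Instant start, ZoneId zone) {
        this.instant = start;
        this.zone = zone;
    }

    // Test code calls this instead of changing the system time.
    public void advanceBy(Duration amount) {
        instant = instant.plus(amount);
    }

    @Override
    public Instant instant() {
        return instant;
    }

    @Override
    public ZoneId getZone() {
        return zone;
    }

    @Override
    public Clock withZone(ZoneId newZone) {
        return new MutableClock(instant, newZone);
    }
}
```

If the tests instead change the operating-system or JVM time, anything in the container that depends on wall-clock timing (timeouts, caches, schedulers) can behave unexpectedly, which is one possible source of the slowdown.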
My Spring Boot application becomes slow after 1-2 days when I deploy it on a production server. I'm using an AWS EC2 instance. At the start the speed is fine, but after a couple of days I have to restart my instance to get back the desired performance. Any hint what might be wrong here?
Have you checked for a memory leak in the application? It likely has nothing to do with the EC2 instance, since, as you mention, it works fine again after a restart.
It is not best practice to use an embedded server in production.
I would suggest using the AWS Elastic Beanstalk service for deploying a Spring Boot application; there is no additional charge for it.
Okay, so after some analysis (taking thread dumps of my Tomcat server in production) I found out that there were some processes (code smells) which were taking up all of my CPU time, and hence my instance was becoming slow, affecting the performance of my application overall.
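As a side note (not part of the original answer), the per-thread CPU check that this kind of thread-dump analysis boils down to can also be done from code with the standard ThreadMXBean API; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: print each live thread with its accumulated CPU time so
// the busiest ones stand out (same idea as inspecting a jstack thread dump).
public class CpuHogFinder {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            if (info == null) {
                continue; // thread terminated between the two calls
            }
            long cpuNanos = threads.getThreadCpuTime(id); // -1 if unsupported
            System.out.printf("%-40s state=%-13s cpu=%d ms%n",
                    info.getThreadName(), info.getThreadState(), cpuNanos / 1_000_000);
        }
    }
}
```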
We are testing the load on a web app using JMeter. With JMeter, the server just gives up after some time, at around 100 DB connections. The issue is that the standalone Java unit test runs for more than 2000 invocations without any slowdown or blocking, and I see that a single DB connection is used for it. Why is there such a huge difference in performance?
I guess the standalone unit tests don't run inside a transaction, whereas in the Tomcat webapp almost everything is a transaction, and hence the DB connections stay open for a longer time.
The tests that I ran were direct connections to the DB making single, simple DB calls, whereas in Tomcat the workflows are longer and more varied. With these points in mind, I have started modifying the Tomcat web app code to minimize these transactions and use read-only queries wherever needed.
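To make that concrete, here is a minimal sketch of what keeping transactions short and read-only can look like, assuming a Spring-managed service (the original post does not say which framework manages its transactions); the class, table, and column names are made up for illustration.

```java
import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Illustrative service (table and column names are hypothetical): the read
// path is marked read-only and kept short so the pooled connection is
// released quickly instead of being held across a long read-write workflow.
@Service
public class LogReportService {

    private final JdbcTemplate jdbc;

    public LogReportService(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Read-only: the data layer can skip write bookkeeping, and the
    // transaction (and its connection) ends as soon as the method returns.
    @Transactional(readOnly = true)
    public List<String> recentMessages() {
        return jdbc.queryForList(
                "SELECT message FROM app_log ORDER BY created_at DESC LIMIT 100",
                String.class);
    }

    // Writes stay in their own short transaction rather than being bundled
    // into one long workflow that pins a connection.
    @Transactional
    public void record(String message) {
        jdbc.update("INSERT INTO app_log (message, created_at) VALUES (?, now())", message);
    }
}
```

The point is that each method holds a pooled connection only for its own short transaction, rather than one long workflow pinning a connection for its whole duration.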
I searched around looking for my situation but found many threads about making multiple Quartz schedulers on different machines run a job only once. My situation is the opposite. We have multiple web servers behind a load balancer, all using Quartz and connecting to the same database. One of the jobs loads log files from a third-party app into the database. When the job is triggered, only one of the web servers picks it up. I am trying to find a way to have one scheduled job and, when it is triggered, have all of the attached web servers pick it up and process that third-party app's logs on their own machine.
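For context (this is my sketch, not from the post): the "only one server picks it up" behaviour is what Quartz's clustered JDBC job store is designed to give you, because the nodes coordinate through the shared database and only one of them acquires each firing trigger. A rough sketch of that kind of configuration, with illustrative values:

```java
import java.util.Properties;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

// Sketch of a clustered Quartz setup (property values are illustrative).
// With isClustered=true and a shared JDBC job store, the nodes coordinate
// through the database and exactly one node acquires each firing trigger,
// which is why only one web server runs the log-import job.
public class ClusteredSchedulerFactory {
    public static Scheduler create() throws Exception {
        Properties props = new Properties();
        props.put("org.quartz.scheduler.instanceName", "logImportScheduler");
        props.put("org.quartz.scheduler.instanceId", "AUTO");
        props.put("org.quartz.jobStore.class", "org.quartz.impl.jdbcjobstore.JobStoreTX");
        props.put("org.quartz.jobStore.driverDelegateClass", "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
        props.put("org.quartz.jobStore.dataSource", "sharedDb"); // data source defined elsewhere
        props.put("org.quartz.jobStore.isClustered", "true");
        props.put("org.quartz.threadPool.threadCount", "5");
        return new StdSchedulerFactory(props).getScheduler();
    }
}
```

Conversely, a scheduler backed by a per-node, in-memory RAMJobStore is not clustered, so each node fires its own copy of the trigger, which is closer to the behaviour the question is after.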
I have a Java application launched via Java Web Start.
I'm using Fiddler to debug some web service calls, but when the calls run through Fiddler, the application runs much slower.
There doesn't seem to be any CPU bottleneck, and I don't think the messages are huge.
Any idea what could be slowing it down?
thanks,
Mark
So we have a busy legacy web service that needs to be replaced by a new one. The legacy web service was deployed as a WAR file on an Apache Tomcat server; that is, it was copied into the webapps folder under Tomcat and all went well. I have been delegated the task of replacing it and would like to do it while ensuring
I have a backup of the old service
the service gets replaced by another WAR file with no downtime
Again, I know I am being overly cautious, but this is a production system and I would like everything to go smoothly. Step-by-step instructions would help.
Make a test server
Read tutorials and play around with the test server until it goes smoothly
Replicate what you did on the test server on the prod server.
If this really is a "busy prod server" with "no down time", then you will have some kind of test server that you can get the configuration right on.
... with no down time
If you literally mean zero downtime, then you will need to replicate your web server and implement some kind of front end that can transparently switch request streams between servers. You will also need to deal with session migration.
If you mean with minimal downtime, then most web containers support hot redeployment of webapps. However, this typically entails an automatic shutdown and restart of the webapp, which may take seconds or minutes depending on the webapp. Furthermore, there is a risk of significant memory leakage, e.g. of PermGen space.
The fallback is a complete shutdown / restart of the web container.
And it goes without saying that you need:
A test server that replicates your production environment.
A rigorous procedure for checking that deployments to your test environment result in a fully functioning system.
A preplanned, tested and hopefully bomb-proof procedure for rolling back your production system in the event of a failed deployment.
All of this (especially rollback) gets a lot more complicated when your system includes other stuff apart from the webapp, e.g. databases.