We are load testing a web app using JMeter. Under JMeter, the server simply gives up after some time, at around 100 DB connections. The issue is that a standalone Java unit test runs for more than 2000 invocations without any slowdown or blocking, and I can see that it uses a single DB connection. Why is there such a huge difference in performance?
I guess standalone unit tests don't run inside a transaction, whereas in the Tomcat webapp almost everything is a transaction, so the DB connections stay open for longer.
The tests I ran used direct connections to the DB and were essentially single DB calls, whereas in Tomcat the workflows are longer and more varied. With these points in mind, I have started modifying the Tomcat web app code to minimize these transactions and to use read-only queries wherever possible.
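As a rough illustration (a minimal sketch assuming Spring's declarative transactions and JdbcTemplate; the table and column names are invented), marking query-only paths as read-only and keeping transactions short means each request holds a pooled connection only for the actual DB work:

    import java.util.List;
    import java.util.Map;

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class ReportService {

        private final JdbcTemplate jdbcTemplate;

        public ReportService(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // Read-only, short-lived transaction: the connection is taken from the
        // pool when the method starts and returned as soon as it exits.
        @Transactional(readOnly = true)
        public List<Map<String, Object>> findReports(String customerId) {
            return jdbcTemplate.queryForList(
                    "SELECT id, title FROM reports WHERE customer_id = ?", customerId);
        }
    }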
I have a Java web application deployed on an Oracle WebLogic 11g server. The application makes calls to a SOAP service, also written in Java and deployed on the same WebLogic 11g server. The SOAP service has two methods which are called one after the other. The bodies of the two calls are very similar, the only difference being that the second one has two extra parameters, one of them being a base64-encoded signature image.
We have the same setup on our Production server and our Test server.
The application works 100% of the time on the test server. On the production server, the call to the first method of the SOAP service executes correctly all of the time, but the call to the second method only works sometimes. From what we can see so far, when the second method does not work, it is not being called at all.
Is there anything that might cause this instability that we have missed?
UPDATE
I was incorrect in saying "We have the same setup on our Production server and our Test server."
The production environment is actually distributed over two servers. If we hard code the calls to the SOAP service to only access one of the nodes then the application works perfectly.
It seems as though we have set up the load balancing or available hosts incorrectly.
It could be that an exception is preventing the application from reaching the second method call, or that some configuration properties are not set on the production server...
Try running your application with the production profile and investigate further.
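If it is an exception, a minimal way to make it visible is to log on both sides of the two calls; this is only a sketch, and the port interface, method names and parameters below are hypothetical stand-ins for the generated SOAP client:

    import java.util.Base64;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class SignatureSubmitter {

        // Hypothetical stand-in for the client generated from the service WSDL.
        public interface DocumentServicePort {
            void createDocument(String documentId);
            void signDocument(String documentId, String base64Signature);
        }

        private static final Logger LOG =
                Logger.getLogger(SignatureSubmitter.class.getName());

        private final DocumentServicePort port;

        public SignatureSubmitter(DocumentServicePort port) {
            this.port = port;
        }

        public void submit(String documentId, byte[] signatureImage) {
            LOG.info("Calling first SOAP method");
            port.createDocument(documentId);

            // If the first call throws, this line is never reached; logging on
            // both sides shows whether an exception is being swallowed upstream.
            LOG.info("Calling second SOAP method");
            try {
                port.signDocument(documentId,
                        Base64.getEncoder().encodeToString(signatureImage));
            } catch (RuntimeException e) {
                LOG.log(Level.SEVERE, "Second SOAP call failed", e);
                throw e;
            }
        }
    }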
I have been using WildFly in domain mode to serve some applications, each with its own datasource and configured pool; however, I'm having some issues that, while not urgent, I don't know how to solve.
When one of the applications is slow to finish a database operation or to take a new connection from its pool, the whole container slows down and gives the impression that everything has crashed. This is annoying behavior, and I would like to know whether WildFly can avoid it with some configuration, because normally I have to flush the affected pool to recover the whole system.
PS: I'm using WildFly 13
Edit:
I have three domain servers configured in WildFly, each one hosting several web applications. The problem described can happen in any of them, and only that domain server gets blocked. On the database side everything is fine: no blocking queries or processes.
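If the root cause turns out to be individual slow statements rather than the pool configuration, one mitigation is to put a hard upper bound on statement execution time so a hung query gives its connection back instead of starving the other deployments. A sketch, assuming a JNDI datasource name and an invented query:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class BoundedQuery {

        public int countOrders() throws NamingException, SQLException {
            // JNDI name of the pooled datasource configured in WildFly (assumed).
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:jboss/datasources/AppDS");

            // try-with-resources returns the connection to the pool even on failure.
            try (Connection con = ds.getConnection();
                 Statement st = con.createStatement()) {

                // Abort the statement after 10 seconds instead of letting one
                // slow query hold a pooled connection indefinitely.
                st.setQueryTimeout(10);

                try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
        }
    }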
My Spring Boot application becomes slow 1-2 days after I deploy it on the production server. I'm using an AWS EC2 instance. At the start the speed is fine, but after a couple of days I have to restart my instance to get back the desired performance. Any hint as to what might be wrong here?
Have you checked for a memory leak in the application? It likely has nothing to do with the EC2 instance itself, since, as you mention, it works fine again after a restart.
It is not best practice to use an embedded server in production.
I would suggest using the AWS Elastic Beanstalk service for deploying a Spring Boot application; there is no additional charge for it.
Okay, so after some analysis (thread dumps of my Tomcat server on production) I found that some processes (code smells) were taking up all of my CPU, which made my instance slow and affected the overall performance of my application.
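For anyone wanting to do the same analysis, the information in a jstack-style thread dump can also be obtained programmatically through the standard ThreadMXBean API; this small sketch is generic and not tied to the original application:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumpUtil {

        // Prints the stack traces of all live threads, similar to jstack output,
        // including held monitors and ownable synchronizers.
        public static void dumpThreads() {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
                System.out.print(info);
            }
        }

        public static void main(String[] args) {
            dumpThreads();
        }
    }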
When a Python script finishes executing, it is dumped out of memory (RAM), unlike JRE/Java (e.g. Tomcat), where the application resides in memory all the time. So for a Java webapp I can see how connection pooling helps, but how does it help for Python (or even PHP)?
So why does SQLAlchemy provide connection pooling?
Your assumption simply isn't true. Yes, if you run a standalone script, it loads and runs once and is then removed from memory - and that is just as true of a standalone Java app. But no method of deploying a Python web app works like that; rather, the server spins up one or more permanent processes which handle multiple web requests. The code stays resident.
So we have a busy legacy web service that needs to be replaced by a new one. The legacy web service was deployed as a WAR file on an Apache Tomcat server; that is, it was copied into the webapps folder under Tomcat and all went well. I have been delegated the task of replacing it and would like to do it while ensuring that:
I have a backup of the old service
the service gets replaced by another WAR file with no downtime
Again, I know I am being overly cautious; however, this is production-level and I would like everything to go smoothly. Step-by-step instructions would help.
Make a test server
Read tutorials and play around with the test server until it goes smoothly
Replicate what you did on the test server on the prod server.
If this really is a "busy prod server" with "no downtime", then you will have some kind of test server on which you can get the configuration right.
... with no downtime
If you literally mean zero downtime, then you will need to replicate your webserver and implement some kind of front-end that can transparently switch request streams to different servers. You will also need to deal with session migration.
If you mean with minimal downtime, then most web containers support hot redeployment of webapps. However, this typically entails an automatic shutdown and restart of the webapp, which may take seconds or minutes, depending on the webapp. Furthermore there is a risk of significant memory leakage; e.g. of permgen space.
The fallback is a complete shutdown / restart of the web container.
And it goes without saying that you need:
A test server that replicates your production environment.
A rigorous procedure for checking that deployments to your test environment result in a fully functioning system.
A preplanned, tested and hopefully bomb-proof procedure for rolling back your production system in the event of a failed deployment.
All of this (especially rollback) gets a lot more complicated when your system includes other stuff apart from the webapp; e.g. databases.