I have been using WildFly in domain mode to serve several applications; each application has its own datasource and configured connection pool. However, I'm running into an issue that, while not urgent, I don't know how to solve.
When one of the applications is slow to finish a database operation, or slow to obtain a new connection from its pool, the whole container slows down and gives the impression that everything has crashed. This is annoying behavior, and I would like to know whether WildFly can avoid it through configuration, because normally I have to flush the affected pool to recover the whole system.
PS: I'm using WildFly 13
Edit:
I have three domain servers configured in WildFly, and each one hosts several web applications. The problem described can happen in any of them, and only that domain server gets blocked. On the database side everything is fine: no blocking queries or processes.
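This won't fix the root cause, but one mitigation for this kind of stall is making sure no single statement can pin a pooled connection forever. Below is a minimal sketch using plain JDBC; the JNDI name java:jboss/datasources/AppDS and the table name are hypothetical placeholders, not taken from the question:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderDao {

    // Hypothetical JNDI name; use whatever your datasource configuration defines.
    private static final String DS_JNDI = "java:jboss/datasources/AppDS";

    public int countOrders() throws NamingException, SQLException {
        DataSource ds = (DataSource) new InitialContext().lookup(DS_JNDI);

        // try-with-resources returns the connection to the pool promptly,
        // even if the query fails.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders")) {

            // Cap how long this statement may run, so one slow query
            // cannot hold a pooled connection indefinitely.
            ps.setQueryTimeout(10); // seconds

            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}
```

Separately, the datasource pool itself has timeout settings (for example, how long a caller blocks waiting for a free connection) that are worth reviewing, so that requests fail fast instead of queueing behind an exhausted pool.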
Related
My Spring Boot application becomes slow after 1-2 days when I deploy it on the production server. I'm using an AWS EC2 instance. At the start the speed is fine, but after a couple of days I have to restart my instance to get back the desired performance. Any hint as to what might be wrong here?
Have you checked for a memory leak in the application? This likely has nothing to do with the EC2 instance, since, as you mention, it works fine after a restart.
It is not best practice to use the embedded server in production.
I would suggest using the AWS Elastic Beanstalk service for deploying a Spring Boot application; there is no additional charge for it.
Okay, so after some analysis (thread dumps of my Tomcat server in production) I found out that there were some processes (code smells) that were consuming all of my CPU, hence my instance was becoming slow, affecting the performance of my application overall.
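For anyone wanting to do the same kind of analysis from inside the JVM rather than with an external tool like jstack, here is a minimal sketch using the standard java.lang.management API; the class and method names are just illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpUtil {

    // Prints every live thread with its state and stack trace,
    // similar to what `jstack <pid>` produces.
    public static void dumpThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();

        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            System.out.printf("\"%s\" state=%s%n", info.getThreadName(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
            System.out.println();
        }

        // Per-thread CPU time helps spot the hot threads.
        if (mx.isThreadCpuTimeSupported()) {
            for (long id : mx.getAllThreadIds()) {
                long cpuNanos = mx.getThreadCpuTime(id);
                ThreadInfo info = mx.getThreadInfo(id);
                if (info != null && cpuNanos > 0) {
                    System.out.printf("%s: %.1f ms CPU%n",
                            info.getThreadName(), cpuNanos / 1_000_000.0);
                }
            }
        }
    }
}
```

Calling this from a diagnostic endpoint or a scheduled task makes it easier to catch the slow periods without shelling into the instance.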
We are load testing a web app using JMeter. With JMeter, the server just gives up after some time, at around 100 DB connections. The issue is that a standalone Java unit test runs for more than 2000 invocations without any slowdown or blocking, and I can see that a single DB connection is used for it. Why is there such a huge difference in performance?
I guess the standalone unit tests won't be inside a transaction, whereas in the Tomcat webapp almost everything is a transaction, and hence the DB connections are held open for longer.
The tests that I ran used direct connections to the DB and made single DB calls, whereas in Tomcat the workflows are longer and more varied. With these points in mind, I have started modifying the Tomcat web app code to minimize these transactions and use read-only queries wherever possible.
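To illustrate the difference: the sketch below borrows a pooled connection only for the query itself instead of holding it across the whole workflow. The class, table, and column names are hypothetical; under load, keeping the connection scope this narrow is what stops the pool from filling up with connections that are checked out but idle.

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerService {

    private final DataSource dataSource; // pooled datasource, e.g. looked up from JNDI

    public CustomerService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String loadCustomerName(long id) throws SQLException {
        // Borrow the connection only for the query; do any slow non-database
        // work (rendering, remote calls, etc.) outside this block so the
        // connection goes back to the pool immediately.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE id = ?")) {
            con.setReadOnly(true); // hint to the driver: read-only work, no write locks needed
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```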
Are there ways to update Java class files in Tomcat without using Tomcat Manager or the reloadable flag, while preserving uptime? Reloading the application from Tomcat Manager takes about 15-30 seconds and can leave the server locked up. How can I update a large application quickly on Tomcat?
Some possibilities
Set up a cluster, with a load balancer in front of multiple servers. For an update you remove one server from the cluster, upgrade it, and add it back to the cluster, then continue until you are done with all servers (a minimal health-check sketch for draining a node follows below).
Use a product like JRebel (development) or LiveRebel (production). This enables you to hot-replace your code in the running instance for many use cases. This is a commercial option (well, running a cluster of multiple machines comes with some cost as well).
Of course you can combine both options (and there are probably more that don't come to my mind right now)
It's all a question of your requirements for uptime, recovery time, etc. While you're at it: think of your database and other infrastructure as possible causes of downtime as well.
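To make the first option a bit more concrete, here is a minimal sketch of a health-check endpoint the load balancer could poll. The /health path, the static DRAINING flag (in practice you would flip it via JMX, a marker file, or similar), and the use of Servlet 3.0+ annotations are all assumptions for the sake of the example:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The load balancer polls this URL; before upgrading a node you flip
// DRAINING to true, wait for it to drop out of rotation, then deploy.
@WebServlet("/health")
public class HealthCheckServlet extends HttpServlet {

    static final AtomicBoolean DRAINING = new AtomicBoolean(false);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if (DRAINING.get()) {
            resp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE); // 503 -> LB stops sending traffic
            resp.getWriter().println("draining");
        } else {
            resp.setStatus(HttpServletResponse.SC_OK); // 200 -> node stays in rotation
            resp.getWriter().println("ok");
        }
    }
}
```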
I'm using (trying) GlassFish v2.1.1 + MySQL Connector 5.0.8 to teach myself J2EE. I'm trying to develop a simple web application with JPA persistence. Right after the server starts, deploys go smoothly and everything works, but after several deploys it starts acting weird, throwing all kinds of exceptions and failing during predeploy.
For example, on deploy it can throw a ClassNotFoundException for a class that isn't even there anymore (but was there several deploys ago)!
I would have assumed it was my fault (some misconfiguration, maybe) if it didn't deploy smoothly again after a server restart. I just get the exception, restart the server, and bam: "Command deploy executed successfully". :-\
But maybe there are some intricate dependencies left in the runtime, I don't know. Simply undeploying the module and deploying it again does not help.
This is subjective, but in my experience redeploys always become unstable at some point. Sometimes things don't get cleaned up as they should, sometimes some parts don't release memory as they should, sometimes you get an explicit PermGen error, etc., and at some point you have to restart the server (which is also why some people never use redeploy in production). I live with that.
That said, to strictly answer the title of your question, I consider GlassFish 2 and the MySQL Connector to be very stable and totally production ready. But as hinted, development and production do not stress a platform in the same way.
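One concrete example of "things not getting cleaned up": JDBC drivers registered by the webapp's classloader can keep that classloader from being garbage collected across redeploys. Below is a minimal sketch of a ServletContextListener that deregisters them on undeploy; this is a common mitigation for this general class of leak, not something specific to GlassFish, and it would need to be registered as a <listener> in web.xml on Servlet 2.5 containers:

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Deregisters JDBC drivers loaded by this webapp when it is undeployed,
// so the webapp classloader can actually be garbage collected.
// Register in web.xml: <listener><listener-class>JdbcDriverCleanupListener</listener-class></listener>
public class JdbcDriverCleanupListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only touch drivers that our own classloader loaded.
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    sce.getServletContext().log("Failed to deregister " + driver, e);
                }
            }
        }
    }
}
```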
So we have a busy legacy web service that needs to be replaced by a new one. The legacy web service was deployed as a WAR file on an Apache Tomcat server; that is, it was copied into the webapps folder under Tomcat and all went well. I have been delegated the task of replacing it and would like to do it ensuring that:
I have a backup of the old service
the service gets replaced by another WAR file with no downtime
Again, I know I am being overly cautious; however, this is production and I would like everything to go smoothly. Step-by-step instructions would help.
Make a test server
Read tutorials and play around with the test server until it goes smoothly
Replicate what you did on the test server on the prod server.
If this really is a "busy prod server" with "no downtime", then you will have some kind of test server that you can get the configuration right on.
... with no downtime
If you literally mean zero downtime, then you will need to replicate your webserver and implement some kind of front-end that can transparently switch request streams to different servers. You will also need to deal with session migration.
If you mean with minimal downtime, then most web containers support hot redeployment of webapps. However, this typically entails an automatic shutdown and restart of the webapp, which may take seconds or minutes, depending on the webapp. Furthermore, there is a risk of significant memory leakage, e.g. of PermGen space.
The fallback is a complete shutdown / restart of the web container.
And it goes without saying that you need:
A test server that replicates your production environment.
A rigorous procedure for checking that deployments to your test environment result in a fully functioning system (a minimal smoke-test sketch follows below).
A preplanned, tested and hopefully bomb-proof procedure for rolling back your production system in the event of a failed deployment.
All of this (especially rollback) gets a lot more complicated when your system includes other stuff apart from the webapp; e.g. databases.
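To make the "fully functioning system" check from the list above concrete, here is a small smoke test you could run against a freshly deployed node before sending traffic to it. The base URL, the /health path, and the "ok" body marker are placeholders, and java.net.http.HttpClient requires Java 11+:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SmokeTest {

    public static void main(String[] args) throws Exception {
        // Placeholder URL: point it at the node you have just redeployed to.
        String baseUrl = args.length > 0 ? args[0] : "http://localhost:8080/myapp";

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/health"))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Fail with a non-zero exit code so a deployment script can abort
        // and trigger the rollback procedure instead of switching traffic.
        if (response.statusCode() != 200 || !response.body().contains("ok")) {
            System.err.println("Smoke test FAILED: HTTP " + response.statusCode());
            System.exit(1);
        }
        System.out.println("Smoke test passed");
    }
}
```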