When a Python script finishes executing, it is dumped out of memory (RAM), unlike the JRE/Java (e.g. Tomcat) where the application resides in memory the whole time. So for a Java webapp I can see how connection pooling helps, but how does it help for Python (or even PHP)?
So why does SQLAlchemy provide connection pooling?
Your assumption simply isn't true. Yes, if you run a standalone script, it loads and runs once and is then removed from memory - and that is just as true of a standalone Java app. But no method of deploying a Python web app works like that; rather, the server spins up one or more permanent processes which handle multiple web requests. The code stays resident.
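To make the point concrete, here is a minimal sketch in Java terms (assuming HikariCP; the class names, credentials, and JDBC URL are placeholders). SQLAlchemy's Engine and its pool play the same role inside a long-lived Python worker process: the pool is created once per process and reused across requests.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolDemo {
    // Created once, when the server process starts - not once per request.
    private static final HikariDataSource POOL = createPool();

    private static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/app"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }

    // Called for every incoming web request by the resident server process.
    static int handleRequest() throws Exception {
        try (Connection conn = POOL.getConnection();      // borrowed from the pool, not newly opened
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            return rs.getInt(1);
        } // close() returns the connection to the pool instead of tearing it down
    }
}
```

Because the process stays resident, the expensive connection setup happens only when the pool first fills, not on every request; that is exactly the benefit pooling gives a long-running Python web worker as well.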
Related
I have been using WildFly in domain mode to serve some applications. Each application has its own datasource and configured pool; however, I'm having some issues that I don't know how to solve (nothing urgent).
When one of the applications is slow to finish a database operation or to obtain a new connection from the pool, the whole container slows down and it feels as if everything has just crashed. This is annoying behavior, and I would like to know whether WildFly can avoid it with some configuration, because normally I have to flush the affected pool to recover the whole system.
PS: I'm using WildFly 13
Edit:
I have three domain servers configured in WildFly, each one hosting several web applications. The problem described can happen in any of them, and only that domain server gets blocked. On the database side everything is fine: no blocking queries or processes.
We are testing the load on a web app using JMeter. With JMeter, the server just gives up after some time, at around 100 DB connections. The issue is that the standalone Java unit test runs for more than 2000 invocations without any slowdown or blocking, and I can see that a single DB connection is used for it. Why is there such a huge difference in performance?
I guess the standalone unit tests don't run inside a transaction, whereas in the Tomcat webapp almost everything is a transaction, and hence the DB connections are held open for longer.
The tests that I ran used direct connections to the DB and made mostly single DB calls, whereas in Tomcat the workflows are longer and more varied. With these points in mind, I have started modifying the Tomcat web app code to minimize these transactions and to use read-only queries wherever possible.
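To illustrate the kind of change described above, here is a minimal sketch assuming Spring's declarative transaction management is in use; ReportService, ReportDao, and Report are hypothetical names standing in for whatever the app actually uses.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    // Hypothetical DAO and entity; placeholders for the real data access layer.
    public interface ReportDao {
        Report findById(long id);
        void save(Report report);
    }

    public record Report(long id, String body) {}

    private final ReportDao reportDao;

    public ReportService(ReportDao reportDao) {
        this.reportDao = reportDao;
    }

    // Read-only transaction: no dirty checking or flushing, and the pooled
    // connection goes back to the pool as soon as the method returns.
    @Transactional(readOnly = true)
    public Report loadReport(long id) {
        return reportDao.findById(id);
    }

    // Keep writes short: do slow, non-DB work (HTTP calls, file I/O) outside
    // the transaction so a pooled connection is not held for the whole workflow.
    @Transactional
    public void saveReport(Report report) {
        reportDao.save(report);
    }
}
```

Shortening transactions like this keeps each connection checked out only briefly, which is what lets a pool of modest size serve many concurrent JMeter threads.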
I want to write a small admin tool that can start, stop, and monitor other Java non-GUI programs that either run continuously or are expected to complete. This tool would run on the same server as the backend programs. I would have a web front end for the administrator to use (probably with Jetty), and I would most likely want the backend programs to run as their own separate processes.
What if I wanted to communicate with those programs, for example to query some detailed status? The backend programs break up their computational work into ticks, and between ticks I could check for commands that come in.
JMX has been part of the JRE since Java 1.5; it can be used to monitor local or remote Java applications.
Many Java libraries and apps, such as Tomcat and Jetty, support it by registering their own JMX MBeans.
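As a rough sketch of what registering such a service looks like in your own backend program, here is a minimal standard MBean exposing the tick count and a shutdown command; WorkerStatus and the object name are hypothetical, and the interface and class would normally live in separate source files.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: a public interface named <Class>MBean.
interface WorkerStatusMBean {
    long getTicksCompleted();   // readable attribute
    void requestShutdown();     // operation the admin tool can invoke
}

public class WorkerStatus implements WorkerStatusMBean {
    private volatile long ticks;
    private volatile boolean shutdownRequested;

    @Override public long getTicksCompleted() { return ticks; }
    @Override public void requestShutdown() { shutdownRequested = true; }

    // Called by the backend program between ticks.
    public void tickDone() { ticks++; }
    public boolean isShutdownRequested() { return shutdownRequested; }

    // Register with the platform MBean server so JConsole/VisualVM/your tool can see it.
    public static WorkerStatus register() throws Exception {
        WorkerStatus status = new WorkerStatus();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(status, new ObjectName("backend:type=WorkerStatus"));
        return status;
    }
}
```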
If you want a web front end for the administrator to use, you can try Jolokia, which is remote JMX with JSON over HTTP. It is fast, simple, polyglot, and has some unique features.
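Once a Jolokia agent is deployed alongside the monitored program, the web front end can read MBean attributes with plain HTTP GETs. A minimal sketch, assuming the agent is reachable under the default /jolokia context; host, port, and the MBean queried are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JolokiaReadExample {
    public static void main(String[] args) throws Exception {
        // Jolokia's read operation: /jolokia/read/<mbean-name>/<attribute>
        // Here we read the JVM heap usage of the monitored backend program.
        URI uri = URI.create(
                "http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is JSON, which a web front end can render directly.
        System.out.println(response.body());
    }
}
```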
While trying to profile our web app with JVisualVM, I have the problem that a lot of the interesting stuff is hidden behind the methods of our application server.
I would love to have a tool that would allow me to profile the complete webapp inside of the server, but without profiling the server itself or any other webapps that might be running on the same server. Basically I think the server itself should be in a good position to provide something like that, but I never heard of such a feature in any server. Is anyone aware of such functionality?
I would like to add that I already do profile my web app with JVisualVM...
You can use VisualVM and connect to your application server. There you can profile your application. You can also connect to a remote application server via JMX.
Profiling a web application without profiling the server is not really feasible, since profilers always look at the entire JVM.
You could define entry points to automatically start and stop profiling, but that is not really necessary: Just set your method call recording filters to the package of your web application and you will only see method calls in the classes that you are interested in, without the surrounding stack frames of the container.
In JProfiler, this is done by opening the session settings and defining a single inclusive filter for your web application's package.
Disclaimer: My company develops JProfiler.
You can also connect VisualVM to the server's process to profile it. See Working with Remote Applications and Connecting to JMX Agents Explicitly for reference.
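For reference, "connecting to a JMX agent explicitly" boils down to pointing at a JMX service URL, and the same thing can be done programmatically. A minimal sketch, assuming remote JMX is enabled on the server; the host and port are placeholders.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectExample {
    public static void main(String[] args) throws Exception {
        // Same URL format you would paste into VisualVM's "Add JMX Connection" dialog.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appserver.example.com:9010/jmxrmi");

        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();

            // Read the remote JVM's heap usage through the platform MemoryMXBean.
            MemoryMXBean memory = JMX.newMXBeanProxy(
                    connection,
                    new ObjectName(ManagementFactory.MEMORY_MXBEAN_NAME),
                    MemoryMXBean.class);
            System.out.println("Remote heap: " + memory.getHeapMemoryUsage());
        }
    }
}
```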
So we have a busy legacy web service that needs to be replaced by a new one. The legacy web service was deployed as a WAR file on an Apache Tomcat server; that is, it was copied into the webapps folder under Tomcat and all went well. I have been delegated the task of replacing it and would like to do so while ensuring that:
I have a backup of the old service
the service gets replaced by another WAR file with no down time
Again, I know I am being overly cautious; however, it is a production service and I would like everything to go smoothly. Step-by-step instructions would help.
Make a test server
Read tutorials and play around with the test server until it goes smoothly
Replicate what you did on the test server on the prod server.
If this really is a "busy prod server" with "no down time", then you will have some kind of test server that you can get the configuration right on.
... with no down time
If you literally mean zero downtime, then you will need to replicate your webserver and implement some kind of front-end that can transparently switch request streams to different servers. You will also need to deal with session migration.
If you mean with minimal downtime, then most web containers support hot redeployment of webapps. However, this typically entails an automatic shutdown and restart of the webapp, which may take seconds or minutes, depending on the webapp. Furthermore there is a risk of significant memory leakage; e.g. of permgen space.
The fallback is a complete shutdown / restart of the web container.
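For the hot-redeployment route, Tomcat's manager text API can push a new WAR over the old context without restarting the container; the old version is undeployed and the new one started in place. A minimal sketch, assuming the manager webapp is enabled and a user with the manager-script role exists; host, credentials, context path, and file names are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class TomcatRedeploy {
    public static void main(String[] args) throws Exception {
        // PUT the new WAR to the manager text API; update=true undeploys the old
        // version of the context and deploys the uploaded one in its place.
        URI deployUri = URI.create(
                "http://localhost:8080/manager/text/deploy?path=/legacy-service&update=true");
        String auth = Base64.getEncoder()
                .encodeToString("deployer:changeit".getBytes()); // manager-script user

        HttpRequest request = HttpRequest.newBuilder(deployUri)
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("new-service.war")))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // expect an "OK - Deployed ..." line on success
    }
}
```

Note that this still stops and restarts the webapp itself, so it belongs to the "minimal downtime" case above rather than true zero downtime.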
And it goes without saying that you need:
A test server that replicates your production environment.
A rigorous procedure for checking that deployments to your test environment result in a fully functioning system.
A preplanned, tested and hopefully bomb-proof procedure for rolling back your production system in the event of a failed deployment.
All of this (especially rollback) gets a lot more complicated when your system includes other stuff apart from the webapp; e.g. databases.