From time to time the application throws a java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time, so I have enabled JDBC Connection Pool monitoring on the relevant server instance from the GlassFish admin web console.
I then remote-debugged the application, holding a breakpoint after getting a connection but before closing it. When I refreshed the web administration console and checked the value of the NumConnFree resource statistic, it still showed the initial value of 8. Since I am currently using a connection, it should be 7, right?
Has anyone faced this kind of situation? I am not sure whether it is a problem with the administration web console.
Also, what are other good ways to monitor connection leaks? My plan was to check the value of the NumPotentialConnLeak property and check the logs for anything specific to leaks, but given the problem above I am not sure the administration console shows correct data.
Not sure if you are looking at this thread anymore, but I found this very useful:
http://pe-kay.blogspot.ca/2011/10/using-glassfish-monitoring-and-finding.html?m=1
Related
We are using WebLogic version 10 for our Java application, and we use connection pooling in our code.
We are getting a few errors in the logs like:
Connection has been administratively disabled, which has happened multiple times. It occurs randomly, not on a regular basis.
In some cases, we get an error like:
Connection has been administratively destroyed, and whatever application process is running is stopped.
We analyzed the code repository with the PMD plugin to look for connections that are never closed, and found nothing related to the process that ran on the days we faced the issue.
I have also checked the WebLogic admin console for the maximum number of connections created. It was only around 20, whereas the connection pool has a minimum and a maximum of 100. So we got the "connection has been administratively disabled" error even though the admin console showed that at most 20 active connections were ever made.
It would be really helpful if anyone could suggest why we are receiving this error and how we can prevent it.
Also, why do I see a count of 20 in the admin console yet still get the error (my connection pool limit is 100)?
P.S.: We restart our application daily.
While running a Java application from NetBeans that connects to a remote database, will killing the application with NetBeans' Stop button cause a database connection leak?
If so, where should we set the properties to close all database connections before killing the running instance of the application?
There are two sides on which a connection can be leaked.
Within the Java software:
You can really ignore this, because the application is about to be killed anyway.
Within the database:
This could cause problems, BUT every network server application checks whether a connection has broken away and frees its resources.
So I don't think you will get problems: the database will mark the connections as invalid and free all resources.
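That said, for ordinary shutdowns a JVM shutdown hook is the usual place to close pooled resources on a best-effort basis. Note that a hard kill (kill -9, and possibly an IDE's Stop button) may skip it entirely. A minimal sketch, with the real pool-close call replaced by a println:

```java
public class ShutdownDemo {
    public static void main(String[] args) {
        // Runs on normal JVM termination (main returning, System.exit, Ctrl-C).
        // A forceful kill gives the JVM no chance to run shutdown hooks.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("shutdown hook: closing pooled connections here")));
        System.out.println("main done");
    }
}
```

In a real application the hook body would call your pool's close/shutdown method (e.g. a DataSource close) instead of printing.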
We have a Java Spring MVC 2.5 application using Tomcat 6 and MySQL 5.0. We have a bizarre scenario where, for whatever reason, the number of connections used in the c3p0 connection pool starts spiraling out of control and eventually brings Tomcat down.
We monitor the c3p0 connection pool through JMX, and most of the time connections are barely used. When this spiraling situation happens, our Tomcat connection pool maxes out and Apache starts queuing threads.
In the spiraling scenario, the database is under low load and is not reporting any errors or any other obviously bad condition.
I'm starting to run out of ideas for detecting this issue. I don't think a Tomcat stack dump would do me any good once the situation is already spiraling out of control, and I'm not sure how I could catch it before it does.
We also use Terracotta, which, judging by the logs, is not doing anything odd.
Any ideas would be greatly appreciated!
Cheers!
Somewhere you're leaking a connection. This can happen when you explicitly retrieve Hibernate sessions from the session factory rather than getting the connection associated with an in-process transaction (I can't remember the exact method names).
c3p0 will let you debug this situation with two configuration options (the following is copied from the docs, which are part of the download package):
unreturnedConnectionTimeout defines a limit (in seconds) to how long a Connection may remain checked out. If set to a nonzero value, unreturned, checked-out Connections that exceed this limit will be summarily destroyed and then replaced in the pool.
If you set debugUnreturnedConnectionStackTraces to true, a stack trace will be captured each time a Connection is checked out. Whenever an unreturned Connection times out, that stack trace is printed, revealing where a Connection was checked out that was not checked in promptly. debugUnreturnedConnectionStackTraces is intended to be used only for debugging, as capturing a stack trace can slow down Connection check-out.
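In a c3p0.properties file that combination would look roughly like this (the 30-second limit is just an illustrative value; pick one comfortably longer than your longest legitimate checkout so you don't destroy healthy connections):

```properties
# Destroy connections checked out for more than 30s and replace them in the pool
c3p0.unreturnedConnectionTimeout=30
# Capture a stack trace at every checkout so the leak site is printed on timeout
# (debugging only - this slows down checkout)
c3p0.debugUnreturnedConnectionStackTraces=true
```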
How can I know the average or exact number of users accessing the database simultaneously in my Java EE web application? I would like to see whether the connection pool settings I configured in the GlassFish application server are suitable for my web application. I need to set the maximum number of connections in the connection pool correctly: recently, my application ran out of connections and threw exceptions when clients' DB requests timed out.
There are multiple ways.
The first and easiest would be to get help from your DBAs: they can tell you exactly how many connections are active from your web server, or from the connection pool's user id, at a given time.
If you want some excitement, you will have to use the JMX management extensions provided by GlassFish. Listing 6 on this page gives an example of how to write a JMX-based snippet to monitor a connection pool.
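The JMX approach can be sketched in plain Java. Run against a local JVM, the snippet below just lists every registered MBean; against GlassFish you would instead connect remotely via JMXConnectorFactory and filter for the pool's ObjectName. The exact name pattern for pool MBeans varies by GlassFish version, so treat it as something to discover from the listing, not to assume:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanList {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // A null/null query matches every MBean; narrow it with an ObjectName
        // pattern once you know how your server names its connection pool MBeans.
        for (ObjectName name : server.queryNames(null, null)) {
            System.out.println(name);
        }
    }
}
```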
Finally, you must make sure that all connections are closed explicitly by a connection.close() call in your application. In some cases, you need to close the ResultSet as well.
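Since Java 7, try-with-resources is the safest way to guarantee those close() calls: Connection, Statement, and ResultSet all implement AutoCloseable. The sketch below fakes a Connection with a dynamic proxy (no real database assumed) just to show that close() runs even when the block exits early:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicBoolean;

public class TryWithResourcesDemo {
    // Stand-in for a pooled connection: a proxy that only records close().
    static Connection fakeConnection(AtomicBoolean closed) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("close")) {
                        closed.set(true);
                        return null;
                    }
                    return null; // every other call is a no-op in this sketch
                });
    }

    public static void main(String[] args) {
        AtomicBoolean closed = new AtomicBoolean(false);
        try (Connection conn = fakeConnection(closed)) {
            // use conn here; a real Statement and ResultSet would sit in the
            // same try header and be closed automatically in reverse order
        } catch (Exception e) {
            // close() has already been called by the time we get here
        }
        System.out.println("connection closed: " + closed.get());
    }
}
```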
Next, consider throttling your HTTP thread pool to avoid too much concurrent access if your DB connections are taking longer to close.
My Java application uses DB connection pooling. One piece of functionality started failing today with this error:
[BEA][SQLServer JDBC Driver]No more data available to read
This doesn't occur daily. Once I restart my application server, things look fine for some days, and then the error comes back.
Has anyone encountered this error? The reasons might vary, but I would like to know those various reasons so I can mitigate my issue.
Is it possible that the database or network connection has briefly had an outage? You would then expect any currently open result sets to become invalid, with resulting errors.
I've never seen this particular error, but then I don't work with BEA or SQL Server; a quick Google does show other folks suggesting such a cause.
When you're using a connection pool and such a glitch occurs, all connections in the pool become "stale" or invalid. My application server (WebSphere) has the option to discard the entire connection pool after particular errors are detected. The result is that one unlucky request sees the error, but subsequent requests get a new connection and recover. If you don't discard the whole pool, you get a failure each time a stale connection is used and discarded.
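Where the server can't purge the pool automatically, a common application-level workaround is to validate each connection on checkout with JDBC 4's Connection.isValid(int) and retry once on a stale one. The sketch below fakes the pool with a counter (no real WebLogic or SQL Server assumed: the first checkout deliberately hands back a stale connection) just to show the retry shape:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class StaleRetryDemo {
    // Fake "pool": the first checkout is stale, later ones are valid.
    static int checkouts = 0;

    static Connection checkout() {
        final boolean valid = ++checkouts > 1; // first connection is stale
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                (proxy, method, args) -> {
                    if (method.getName().equals("isValid")) return valid;
                    return null; // close() etc. are no-ops in this sketch
                });
    }

    // Validate on checkout, retrying once if the pooled connection is stale.
    static Connection getValidConnection() throws SQLException {
        Connection conn = checkout();
        if (!conn.isValid(2)) { // 2-second validation timeout
            conn.close();       // discard the stale connection
            conn = checkout();  // and take a fresh one
        }
        return conn;
    }

    public static void main(String[] args) throws Exception {
        Connection conn = getValidConnection();
        System.out.println("checkouts used: " + checkouts
                + ", valid: " + conn.isValid(2));
    }
}
```

Validation adds a small cost per checkout, so it's best reserved for pools that can't be configured to test or flush connections themselves.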
I suggest you investigate a) whether your app server has such a capability, and b) how your application responds if the database is bounced; if that replicates the error, maybe you've found the cause.