I am really stuck on why my GlassFish connection pool does not ping successfully. When I try, a long-running-process alert comes up, which I have to dismiss in order to access the options again in the admin console.
Things to note:
I can ping the connection with the same settings locally using the same driver (I am trying to connect on a Dev machine).
I have another SQL Server database which is pingable from the development GlassFish installation I am trying to create the new connection on (the new database is also SQL Server).
I cannot see any log output of any worth; the only thing it seems to print is "Interrupting idle Thread" messages.
Any suggestions on what to try next? Does anyone know whether increasing the logging detail would likely give me more information?
Thanks,
Matt.
Copy the JDBC driver jar files to the 'domains\domain1\lib\ext' folder, then restart your GlassFish server.
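If you want to double-check that the driver jar is actually visible to the JVM after copying it, a minimal sketch is to load the driver class by name. The class name below assumes the Microsoft SQL Server JDBC driver; substitute whatever driver class your jar provides:

    // Minimal check that the JDBC driver jar is on the classpath.
    // The class name is an assumption (Microsoft SQL Server driver);
    // use the driver class that matches the jar you copied.
    public class DriverCheck {
        public static void main(String[] args) {
            try {
                Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                System.out.println("Driver class found");
            } catch (ClassNotFoundException e) {
                System.out.println("Driver class missing: " + e.getMessage());
            }
        }
    }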
Related
I have a remote Linux server and I want to connect to an Oracle database which is on another server, using the ojdbc7 lib.
When I try to connect directly to the database from my Windows PC, using the same client and ojdbc7 lib, the connection time is reasonable.
Now, when I connect through my Linux server, I get extreme slowness, but only in the connection time. Once connected, execution is OK.
I have read about adding -Djava.security.egd=file:/dev/urandom as in this post, but nothing happened.
What could I do to fix this delay in setting up a connection from Linux?
Close, but no cigar: it's "file:///dev/urandom", or one of the variations; see e.g. https://anirban-m.blogspot.com/2014/03/jdbc-connection-reset-error-java.html
I noticed you are using version 12.1.0.1.
There was an Oracle bug where JDBC connections could take an excessive amount of time because the data being sent required the listener to perform a DNS lookup for each connection, and that could apparently be very slow for some reason.
The bug was fixed in 12.2 and there is a back-ported fix (patch) for 12.1.0.2.
In the meantime, try getting your Linux admin to go through the process of tuning DNS lookups on that server, e.g., tune /etc/resolv.conf or enable the name service cache daemon. I'm not really an expert in Linux administration, so I can't help you there. But based on the problem and the version you are using, that's where I'd look.
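If you want to confirm that the delay really is in connection establishment, a minimal timing sketch like the one below may help; the URL, SID, and credentials are placeholders for your environment. Note that the egd property is read when SecureRandom is first seeded, so it is safest to pass it on the command line rather than set it in code:

    import java.sql.Connection;
    import java.sql.DriverManager;

    // Sketch: measure how long DriverManager.getConnection takes.
    // Run with: java -Djava.security.egd=file:///dev/urandom ConnectTimer
    // The URL and credentials are placeholders, not real values.
    public class ConnectTimer {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL"; // hypothetical host/SID
            long start = System.nanoTime();
            Connection conn = DriverManager.getConnection(url, "user", "password");
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("Connected in " + elapsedMs + " ms");
            conn.close();
        }
    }

If the time drops dramatically when run locally but not from the Linux box, that points at the network/DNS side rather than the JVM.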
I am using a GlassFish 4.1 server with Java 8.
I have created a JDBC connection pool and attached this pool to JDBC Resources.
I have also put the jdbc14.jar file into the domainRoot/lib folder.
I am trying to monitor it, but the monitoring section comes up blank.
So my question is: how do I get the number of open/active or idle connections? Basically, I just want to know how to test whether the connection pool is working successfully or not.
Basically, I just want to know how to test whether the connection pool is working successfully or not.
To test this just go to your new Connection Pool in the Glassfish Admin UI and click on the Ping button. If it says "Ping Succeeded" then everything should work.
For monitoring details, you have to enable monitoring.
Go to server-config -> Monitoring and set the level for JDBC Connection Pool to HIGH.
To get some details you have to actually use the pool at least once; to do this, it should be sufficient to ping it again. Then go to server (Admin Server) -> Monitor -> Resources to see the details.
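If you'd rather exercise the pool from code than from the admin console, a minimal servlet sketch along these lines will do it; the JNDI name jdbc/myPool is an assumption, so substitute the name of your JDBC resource:

    import java.io.IOException;
    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.annotation.Resource;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Sketch: borrow and return one connection so the pool actually
    // gets used and shows up in the monitoring data.
    // "jdbc/myPool" is an assumed JNDI name; use your resource's name.
    @WebServlet("/pool-check")
    public class PoolCheckServlet extends HttpServlet {
        @Resource(lookup = "jdbc/myPool")
        private DataSource ds;

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            try (Connection conn = ds.getConnection()) {
                resp.getWriter().println("Got connection: " + !conn.isClosed());
            } catch (SQLException e) {
                resp.getWriter().println("Pool check failed: " + e.getMessage());
            }
        }
    }

Each request borrows a connection and returns it, so after hitting /pool-check you should see non-zero usage numbers under Monitor -> Resources.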
I am doing a load test for my project. When I try to use jconsole to monitor it, the connection gets lost whenever the server is restarted. Is there any solution for this?
Thanks
jconsole connects to a process; when the server is restarted it gets a new process id, and jconsole does not know about the new process that was created. There is no way around it: you have to reconnect yourself.
jconsole is connected to a process (a JVM). When your server is stopped, the process no longer exists, so the jconsole connection is lost. You then have to connect jconsole to the new process created when your server starts.
Is there any solution for this?
One way to ease the pain is to use a JMX URL instead of a process ID. The JMX URL never changes on restart, so while you have to reconnect anyway, at least the process is less painful.
URLs are of the format service:jmx:rmi:///jndi/rmi://hostName:portNum/jmxrmi. Not sure what your server is, but here's how to enable it on Tomcat.
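For what it's worth, a minimal sketch of connecting over such a JMX URL from code (host and port are placeholders; the default jmxrmi endpoint is assumed):

    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Sketch: connect to a JVM over JMX using a service URL, which stays
    // the same across server restarts. Host and port are placeholders.
    public class JmxReconnect {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                System.out.println("MBean count: " + mbsc.getMBeanCount());
            } finally {
                connector.close();
            }
        }
    }

A watcher like this can simply retry the connect call in a loop until the restarted server comes back up.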
I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection Pooling - doesn't help because the pool lives only during a single run of the application. When the application is terminated, the pool is gone, and when I re-run the application, I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server: mysql-proxy for a remote MySQL server
TCP/IP server - I thought of holding a local TCP/IP server that keeps a persistent open connection and hands it to a TCP/IP client on request. However, Connection objects cannot be serialized, so I have no way to pass the Connection object from the server to the client.
Any other option?
Generally, connecting to a DB is one of the most time-consuming operations. If the application is to be started and stopped, then there is little that you can do.
Using connection pooling in a web server, and having your app talk to that web server using JSON, might be an option.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to the connecting client. This is a typical application server pattern.
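A minimal sketch of that pattern, assuming a single shared MySQL connection and a line-based protocol invented purely for illustration (one SQL query in, tab-separated rows out); a real version would need query validation, reconnect handling, and concurrency:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Sketch of the "option 3" application-server pattern: this process
    // stays up and holds the slow-to-open DB connection; short-lived
    // clients send a query over a socket instead of opening their own
    // connection. Port, URL, and protocol are assumptions.
    public class QueryServer {
        public static void main(String[] args) throws Exception {
            Connection db = DriverManager.getConnection(
                    "jdbc:mysql://qa-srv:3308/mydb", "user", "password");
            ServerSocket server = new ServerSocket(5050); // arbitrary local port
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String sql = in.readLine(); // one query per client connection
                    try (Statement st = db.createStatement();
                         ResultSet rs = st.executeQuery(sql)) {
                        int cols = rs.getMetaData().getColumnCount();
                        while (rs.next()) {
                            StringBuilder row = new StringBuilder();
                            for (int i = 1; i <= cols; i++) {
                                if (i > 1) row.append('\t');
                                row.append(rs.getString(i));
                            }
                            out.println(row);
                        }
                    }
                }
            }
        }
    }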
Another thing to consider is the network lookup overhead (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version, write me).
It works like a charm - I run the exact same application as before, the only change is in the connection string: instead of connecting to "qa-srv:3308" (the remote server) I connect to "127.0.0.1:4040" (the proxy server).
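In JDBC terms the only change is the host and port in the URL; a minimal sketch (database name and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ProxyConnect {
        public static void main(String[] args) throws Exception {
            // Before: "jdbc:mysql://qa-srv:3308/mydb" (direct to the remote server).
            // After: the same application, now pointed at the local mysql-proxy.
            // Database name and credentials are placeholders.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://127.0.0.1:4040/mydb", "user", "password");
            System.out.println("Connected via proxy: " + !conn.isClosed());
            conn.close();
        }
    }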
First of all, we are running a Java web application on WAS 5.1. Behind that, we use an Oracle database. The problem we are facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers; we are currently trying to correct that, if possible. We need to find a way to do this because we have three pre-production environments for our application, and each has two servers associated with it: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the database server (and they usually don't tell us!) we have to reboot six servers. I was wondering if, in Java, there was a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9i, with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC test SQL statement such as select 1 from dual, and the container removes stale connections from the connection pool.
Here is a link on how to do it in WebSphere:
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
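If you can't change the container configuration, the same idea can be applied in application code: validate a connection with a cheap test query before using it, and fetch a fresh one if the test fails. A sketch, written in JDBC 3 style (no Connection.isValid) to match a Java 1.4 environment; select 1 from dual is the Oracle idiom, and it is assumed that asking the DataSource again goes back through the pool:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    // Sketch: test-before-use validation, the application-level analogue
    // of the container's "test connection on reserve" setting.
    public class ValidatingConnectionFetcher {
        private final DataSource dataSource;

        public ValidatingConnectionFetcher(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public Connection getValidConnection() throws SQLException {
            Connection conn = dataSource.getConnection();
            if (!isAlive(conn)) {
                conn.close();                      // discard the stale handle
                conn = dataSource.getConnection(); // the pool should hand out a fresh one
            }
            return conn;
        }

        private boolean isAlive(Connection conn) {
            Statement st = null;
            try {
                st = conn.createStatement();
                st.executeQuery("select 1 from dual"); // Oracle validation query
                return true;
            } catch (SQLException stale) {
                return false;
            } finally {
                if (st != null) {
                    try { st.close(); } catch (SQLException ignored) { }
                }
            }
        }
    }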
Based on what I read in your note, you should be receiving a StaleConnectionException, as WAS has stale handles in its pool because the DB has been restarted.
The data source can be configured to purge the entire pool once a stale connection is detected; the default policy is to purge only the individual connection.
Adopting this would save you from having to restart your WAS servers.
There are a number of resources in this space
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu