JDBC Connection pool monitoring GlassFish - java

I am trying to find a setting by which the connection pool monitoring information would be written to server.log whenever an error like "error allocating a connection" or "connection closed" occurs.
I found some blog entries that discuss this, but they approach it from the GUI perspective. What I want is a setting on the connection pool itself so that the connection pool monitoring information is periodically written to the logs.
Does anyone know of such a setting?
On Sun App Server 8.x it used to be perf-monitor.

I don't know if this will help you, but you can interrogate the connection pool monitoring information via JMX.
This code will print the max-pool-size and the number of connections in use for every connection pool in the app server (there is a lot more you can pull from these MBeans):
MBeanServerConnection conn = getMbeanServerConnection();

// Search the JMX registry for all JDBC connection pool MBeans
Set<ObjectInstance> connectorPoolSet = conn.queryMBeans(new ObjectName("*:type=jdbc-connection-pool,*"), null);

Map<String, ObjectName> configMap = new HashMap<String, ObjectName>();
Map<String, ObjectName> monitorMap = new HashMap<String, ObjectName>();

// Build a map of the config and monitor MBeans found by the query
for (ObjectInstance oi : connectorPoolSet) {
    String name = oi.getObjectName().getKeyProperty("name");
    // If the category of the MBean is "config", put it in the config map;
    // if it is "monitor", put it in the monitor map.
    String category = oi.getObjectName().getKeyProperty("category");
    if ("config".equalsIgnoreCase(category)) {
        configMap.put(name, oi.getObjectName());
    } else if ("monitor".equalsIgnoreCase(category)) {
        monitorMap.put(name, oi.getObjectName());
    }
}

// Iterate over the config/monitor MBean pairs that were found
for (String name : configMap.keySet()) {
    ObjectName configObjectName = configMap.get(name);
    ObjectName monitorObjectName = monitorMap.get(name);
    if (monitorObjectName == null) {
        // no monitor found - throw an exception or something
    }
    int maxPoolSizeVal = getAttributeValue(conn, configObjectName, "max-pool-size");
    int connectionsInUse = getAttributeValue(conn, monitorObjectName, "numconnused-current");
    System.out.println(name + " -> max-pool-size : " + maxPoolSizeVal);
    System.out.println(name + " -> connections in use : " + connectionsInUse);
}
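The two helper methods used above, getMbeanServerConnection() and getAttributeValue(), are not shown and are not part of any GlassFish API; a minimal sketch of what they might look like (the JMX service URL, port, and the assumption that the attribute is exposed as a Number are all placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch only: connects to the app server's JMX connector; host and port below are placeholders.
private static MBeanServerConnection getMbeanServerConnection() throws Exception {
    JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:8686/jmxrmi");
    JMXConnector connector = JMXConnectorFactory.connect(url, null);
    return connector.getMBeanServerConnection();
}

// Sketch only: assumes the requested attribute is exposed as a Number.
private static int getAttributeValue(MBeanServerConnection conn, ObjectName objectName, String attribute) throws Exception {
    Object value = conn.getAttribute(objectName, attribute);
    return ((Number) value).intValue();
}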

Related

DropwizardMetricServices doesn't submit the gauge metric to JMX the second time (after removing it the first time)

The DropwizardMetricServices#submit() method I'm using doesn't submit the gauge metric the second time.
My use case is to remove the gauge metric from JMX after reading it, and my application can then send the same metric again (with a different value).
The first time, the gauge metric is submitted successfully (my application then removes it once it has read the metric). But the same metric is not submitted the second time.
So I'm a bit confused about what would cause DropwizardMetricServices#submit() not to work the second time.
Below is the code:
Submit metric:
private void submitNonSparseMetric(final String metricName, final long value) {
    validateMetricName(metricName);
    metricService.submit(metricName, value); // metricService is the DropwizardMetricServices
    log(metricName, value);
    LOGGER.debug("Submitted the metric {} to JMX", metricName);
}
Code that reads and removes the metric:
protected void collectMetrics() {
    // Create the connection
    Long currTime = System.currentTimeMillis() / 1000; // Graphite needs the timestamp in seconds
    Socket connection = createConnection();
    if (connection == null) {
        return;
    }
    // Get the output stream
    DataOutputStream outputStream = getDataOutputStream(connection);
    if (outputStream == null) {
        closeConnection();
        return;
    }
    // Get metrics from JMX
    Map<String, Gauge> g = metricRegistry.getGauges(); // metricRegistry is com.codahale.metrics.MetricRegistry
    for (Entry<String, Gauge> e : g.entrySet()) {
        String key = e.getKey();
        if (p2cMetric(key)) {
            String metricName = convertToMetricStandard(key);
            String metricValue = String.valueOf(e.getValue().getValue());
            String metricToSend = String.format("%s %s %s\n", metricName, metricValue, currTime);
            try {
                writeToStream(outputStream, metricToSend);
                // Remove the metric from JMX after successfully sending it to Graphite
                removeMetricFromJMX(key);
            } catch (IOException e1) {
                LOGGER.error("Unable to send metric to Graphite - {}", e1.getMessage());
            }
        }
    }
    closeOutputStream();
    closeConnection();
}
I think I found the issue.
As per the DropwizardMetricServices doc - https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/actuate/metrics/dropwizard/DropwizardMetricServices.html#submit-java.lang.String-double- -
the submit() method "Set[s] the specified gauge value".
So it seems DropwizardMetricServices#submit() is intended only to set the value of an existing gauge metric in JMX, not to add an arbitrary new metric to JMX.
Once I replaced DropwizardMetricServices#submit() with MetricRegistry#register() (com.codahale.metrics.MetricRegistry) to submit all my metrics, it worked as expected and my metrics were re-added to JMX after my application had removed them.
But I'm still wondering what makes DropwizardMetricServices#submit() only add new metrics to JMX and not re-add a metric that has already been removed (from JMX). Does DropwizardMetricServices cache (in memory) all the metrics submitted to JMX, so that submit() does not resubmit a metric it thinks is already registered?
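For reference, a minimal sketch of the MetricRegistry#register() approach described above (the metric name and value here are placeholders, and the remove() call just guards against a name collision):

import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;

// Sketch only: re-registers a gauge after it has been removed from the registry.
void resubmitGauge(MetricRegistry metricRegistry, String metricName, long value) {
    // register() throws IllegalArgumentException if a metric with this name still exists,
    // so remove any leftover registration first.
    metricRegistry.remove(metricName);
    metricRegistry.register(metricName, (Gauge<Long>) () -> value);
}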

Tomcat DataSource - Max Active Connections

I set max active connections to 1 using the code below:
ConnectionPool initializePool(DataSource dataSource) {
    if (!(org.apache.tomcat.jdbc.pool.DataSource.class.isInstance(dataSource))) {
        return null;
    }
    org.apache.tomcat.jdbc.pool.DataSource tomcatDataSource = (org.apache.tomcat.jdbc.pool.DataSource) dataSource;
    final String poolName = tomcatDataSource.getName();
    try {
        ConnectionPool pool = tomcatDataSource.createPool();
        pool.getPoolProperties().setMaxActive(1);
        pool.getPoolProperties().setInitialSize(1);
        pool.getPoolProperties().setTestOnBorrow(true);
        return pool;
    } catch (SQLException e) {
        logger.info(String.format(" !--! creation of pool failed for %s", poolName), e);
    }
    return null;
}
Now, using threads, I have opened a number of concurrent connections to the DB. I also printed the current number of active connections using the code below:
System.out.println("Current Active Connections = " + ((org.apache.tomcat.jdbc.pool.DataSource) datasource).getActive());
System.out.println("Max Active Connections = " + ((org.apache.tomcat.jdbc.pool.DataSource) datasource).getMaxActive());
I see results similar to the output below. The number of active connections is displayed as more than 1, even though I want to restrict the maximum active connections to 1. Are there any other parameters that I need to set?
Current Active Connections = 9
Max Active Connections = 1
EDIT: However, when I try with 15 or 20 as the max active value, the pool is correctly limited to 15 or 20 connections respectively.
Try it with maxIdle and minIdle as well:
ConnectionPool pool = tomcatDataSource.createPool();
pool.getPoolProperties().setMaxActive(1);
pool.getPoolProperties().setInitialSize(1);
pool.getPoolProperties().setMaxIdle(1);
pool.getPoolProperties().setMinIdle(1);
pool.getPoolProperties().setTestOnBorrow(true);
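If mutating the properties after createPool() still does not cap the pool at 1, another variation worth trying (a sketch only, not a confirmed fix; the JDBC URL, driver, and sizes are placeholders) is to configure a PoolProperties object up front and build the DataSource from it, so the pool is created with the limits already in place:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

PoolProperties props = new PoolProperties();
props.setUrl("jdbc:h2:mem:test");          // placeholder JDBC URL
props.setDriverClassName("org.h2.Driver"); // placeholder driver
props.setMaxActive(1);
props.setInitialSize(1);
props.setMaxIdle(1);
props.setMinIdle(1);
props.setTestOnBorrow(true);

// The pool created from this DataSource starts out with maxActive = 1
DataSource limitedDataSource = new DataSource(props);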

Gradle P4Java java.net.SocketTimeoutException: Read timed out

I am using the P4Java library in my build.gradle file to sync a large zip file (>200 MB) residing in a remote Perforce repository, but I am encountering a "java.net.SocketTimeoutException: Read timed out" error either during the sync process or (mostly) while deleting the temporary client created for the sync operation. I am following http://razgulyaev.blogspot.in/2011/08/p4-java-api-how-to-work-with-temporary.html for working with temporary clients via the P4Java API.
I tried increasing the socket read timeout from the default 30 sec as suggested in http://answers.perforce.com/articles/KB/8044 and also introducing a sleep, but neither approach solved the problem. Probing the server to verify the connection using getServerInfo() right before performing the sync or delete operations results in a successful connection check. Can someone please point me to where I should look for answers?
Thank you.
Providing the code snippet:
void perforceSync(String srcPath, String destPath, String server) {
    // Generating the file(s) to sync up
    String[] pathUnderDepot = [
        srcPath + "*"
    ]
    // Increasing the socket timeout from the default 30 sec to 60 sec
    Properties defaultProps = new Properties()
    defaultProps.put(PropertyDefs.PROG_NAME_KEY, "CustomBuildApp")
    defaultProps.put(PropertyDefs.PROG_VERSION_KEY, "tv_1.0")
    defaultProps.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000")
    // Instantiating the server
    IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
    p4Server.connect()
    // Authorizing
    p4Server.setUserName("perforceUserName")
    p4Server.login("perforcePassword")
    // Just check whether we connected successfully
    IServerInfo serverInfo = p4Server.getServerInfo()
    println 'Server info: ' + serverInfo.getServerLicense()
    // Creating a new client
    IClient tempClient = new Client()
    // Setting up the name and the root folder
    tempClient.setName("tempClient" + UUID.randomUUID().toString().replace("-", ""))
    tempClient.setRoot(destPath)
    tempClient.setServer(p4Server)
    // Setting the client as the current one for the server
    p4Server.setCurrentClient(tempClient)
    // Creating a client view entry
    ClientViewMapping tempMappingEntry = new ClientViewMapping()
    // Setting up the mapping properties
    tempMappingEntry.setLeft(srcPath + "...")
    tempMappingEntry.setRight("//" + tempClient.getName() + "/...")
    tempMappingEntry.setType(EntryType.INCLUDE)
    // Creating the client view
    ClientView tempClientView = new ClientView()
    // Attaching the client view entry to the client view
    tempClientView.addEntry(tempMappingEntry)
    tempClient.setClientView(tempClientView)
    // Registering the new client on the server
    println p4Server.createClient(tempClient)
    // Forming the FileSpec collection to be synced up
    // (declared before the try block so the retry in the catch block can see it)
    List<IFileSpec> fileSpecsSet = FileSpecBuilder.makeFileSpecList(pathUnderDepot)
    // Surrounding the underlying block with try, as we want some action
    // (namely removing the client) to be performed in any case
    try {
        // Syncing up the client
        println "Syncing..."
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    catch (Exception e) {
        println "Sync failed. Trying again..."
        sleep(60 * 1000)
        tempClient.sync(FileSpecBuilder.getValidFileSpecs(fileSpecsSet), true, false, false, false)
    }
    finally {
        println "Done syncing."
        try {
            p4Server.connect()
            IServerInfo serverInfo2 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo2.getServerLicense()
            // Removing the temporary client from the server
            println p4Server.deleteClient(tempClient.getName(), false)
        }
        catch (Exception e) {
            println 'Ignoring exception caught while deleting tempClient!'
            /*sleep(60 * 1000)
            p4Server.connect()
            IServerInfo serverInfo3 = p4Server.getServerInfo()
            println '\nServer info: ' + serverInfo3.getServerLicense()
            sleep(60 * 1000)
            println p4Server.deleteClient(tempClient.getName(), false)*/
        }
    }
}
One unusual thing I observed while deleting tempClient was that the client was actually deleted, yet a "java.net.SocketTimeoutException: Read timed out" was still thrown, which is why I ended up commenting out the second delete attempt in the second catch block.
Which version of P4Java are you using? Have you tried this with the newest P4Java? There are notable fixes dealing with RPC sockets from the 2013.2 version onward, as can be seen in the release notes:
http://www.perforce.com/perforce/doc.current/user/p4javanotes.txt
Here are some variations you can try in the part of your code that increases the timeout and instantiates the server:
a] Have you tried passing props as its own argument? For example:
Properties prop = new Properties();
prop.setProperty(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "300000");
UsageOptions uop = new UsageOptions(prop);
server = ServerFactory.getOptionsServer(ServerFactory.DEFAULT_PROTOCOL_NAME + "://" + serverPort, prop, uop);
Or something like the following:
IOptionsServer p4Server = ServerFactory.getOptionsServer("p4java://" + server, defaultProps)
You can also set the timeout to "0" to give it no timeout.
b]
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
props.put(RpcPropertyDefs.RPC_SOCKET_POOL_SIZE_NICK, "5");
c]
Properties props = System.getProperties();
props.put(RpcPropertyDefs.RPC_SOCKET_SO_TIMEOUT_NICK, "60000");
IOptionsServer server =
ServerFactory.getOptionsServer("p4java://perforce:1666", props, null);
d] In case you have Eclipse users using our P4Eclipse plugin, the property can be set in the plugin preferences (Team->Perforce->Advanced) under the Custom P4Java Properties.
"sockSoTimeout" : "3000000"
REFERENCES
Class RpcPropertyDefs
http://perforce.com/perforce/doc.current/manuals/p4java-javadoc/com/perforce/p4java/impl/mapbased/rpc/RpcPropertyDefs.html
P4Eclipse or P4Java: SocketTimeoutException: Read timed out
http://answers.perforce.com/articles/KB/8044

Random occurrences of java.net.ConnectException

I'm experiencing java.net.ConnectException in random ways.
My servlet runs in Tomcat 6.0 (JDK 1.6).
The servlet periodically fetches data from 4-5 third-party web servers.
The servlet uses a ScheduledExecutorService to fetch the data.
Run locally, all is fine and dandy. Run on my prod server, I see semi-random failures to fetch data from one of the third parties (Canadian weather data).
These are the URLs that are failing (plain RSS feeds):
http://weather.gc.ca/rss/city/pe-1_e.xml
http://weather.gc.ca/rss/city/pe-2_e.xml
http://weather.gc.ca/rss/city/pe-3_e.xml
http://weather.gc.ca/rss/city/pe-4_e.xml
http://weather.gc.ca/rss/city/pe-5_e.xml
http://weather.gc.ca/rss/city/pe-6_e.xml
http://meteo.gc.ca/rss/city/pe-1_f.xml
http://meteo.gc.ca/rss/city/pe-2_f.xml
http://meteo.gc.ca/rss/city/pe-3_f.xml
http://meteo.gc.ca/rss/city/pe-4_f.xml
http://meteo.gc.ca/rss/city/pe-5_f.xml
http://meteo.gc.ca/rss/city/pe-6_f.xml
Strange: each cycle, when I periodically fetch this data, the success/fail is all over the map: some succeed, some fail, but it never seems to be the same twice. So, I'm not completely blocked, just randomly blocked.
I slowed down my fetches by introducing a 61-second pause between each one. That had no effect.
The guts of the code that does the actual fetch:
private static final int TIMEOUT = 60*1000; //msecs

public String fetch(String aURL, String aEncoding /*UTF-8*/) {
    String result = "";
    long start = System.currentTimeMillis();
    Scanner scanner = null;
    URLConnection connection = null;
    try {
        URL url = new URL(aURL);
        connection = url.openConnection(); //this doesn't talk to the network yet
        connection.setConnectTimeout(TIMEOUT);
        connection.setReadTimeout(TIMEOUT);
        connection.connect(); //actually connects; this shouldn't be needed here
        scanner = new Scanner(connection.getInputStream(), aEncoding);
        scanner.useDelimiter(END_OF_INPUT); // END_OF_INPUT is a delimiter constant defined elsewhere in the class
        result = scanner.next();
    }
    catch (IOException ex) {
        long end = System.currentTimeMillis();
        long time = end - start;
        fLogger.severe(
            "Problem connecting to " + aURL + " Encoding:" + aEncoding +
            ". Exception: " + ex.getMessage() + " " + ex.toString() + " Cause:" + ex.getCause() +
            " Connection Timeout: " + connection.getConnectTimeout() + "msecs. Read timeout:" +
            connection.getReadTimeout() + "msecs."
            + " Time taken to fail: " + time + " msecs."
        );
    }
    finally {
        if (scanner != null) scanner.close();
    }
    return result;
}
Example log entry showing a failure:
SEVERE: Problem connecting to http://weather.gc.ca/rss/city/pe-5_e.xml Encoding:UTF-8.
Exception: Connection timed out java.net.ConnectException: Connection timed out
Cause:null
Connection Timeout: 60000msecs.
Read timeout:60000msecs.
Time taken to fail: 15028 msecs.
Note that the time to fail is always 15 s plus a tiny amount.
Also note that it fails well before reaching the configured 60 s connection timeout.
The host-server admins (Environment Canada) state that they don't have any kind of a blacklist for the IP address of misbehaving clients.
Also important: the code had been running for several months without this happening.
Someone suggested that I instead use curl, a bash script, and cron. I implemented that, and it works fine.
I was not able to solve this problem using Java.

How to get the list of JMS queues from the Summary of Resources table in WebLogic for a certain cluster?

JMXServiceURL serviceURL = new JMXServiceURL("t3", hostname, port, "/jndi/" + DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
Hashtable h = new Hashtable();
h.put(Context.SECURITY_PRINCIPAL, username);
h.put(Context.SECURITY_CREDENTIALS, password);
h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
MBeanServerConnection bco = JMXConnectorFactory.connect(serviceURL, h).getMBeanServerConnection();
DomainRuntimeServiceMBean domainRuntimeServiceMBean = (DomainRuntimeServiceMBean) MBeanServerInvocationHandler.newProxyInstance(bco, new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME));
DomainMBean dem = domainRuntimeServiceMBean.getDomainConfiguration();
JMSSystemResourceMBean[] jmsSRs = dem.getJMSSystemResources();
JMSServerMBean[] jmsSvrs = dem.getJMSServers();
for (JMSServerMBean jmsSvr : jmsSvrs) {
    System.out.println("JMS Servername: " + jmsSvr.getName());
}
for (JMSSystemResourceMBean jmsSR : jmsSRs) {
    System.err.println(jmsSR.getName());
    QueueBean[] qbeans = jmsSR.getJMSResource().getQueues();
    for (QueueBean qbean : qbeans) {
        System.out.println("JNDI NAME: " + qbean.getJNDIName() + " queuename : " + qbean.getName());
    }
}
I use this code to get all the queues from WebLogic, and it works. But now I need to get the queues of a certain cluster. I have two clusters, and each of them has its own listening port, but putting that port into the code above does not work. How can I accomplish this?
For each JMS Server, you can check to see what it is targeted to and then only print out the queues for it. Something like:
JMSServerMBean[] jmsSvrs = dem.getJMSServers();
for (JMSServerMBean jmsSvr : jmsSvrs) {
    System.out.println("JMS Servername: " + jmsSvr.getName());
    TargetMBean[] targets = jmsSvr.getTargets();
    for (TargetMBean target : targets) {
        // Compare names with equals(), not ==
        if ("cluster you care about".equals(target.getName())) {
            JMSQueueMBean[] queues = jmsSvr.getJMSQueues();
            ...
        }
    }
}
You can look up all of the available API calls in the docs here, so you can explore a little before asking another question.
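If you would rather keep filtering on the system-resource side, the same target check can be applied to the loop from the question (a sketch under the same MBean API; "myCluster" is a placeholder for your cluster name):

// Sketch only: prints the queues of JMS system resources targeted at a given cluster.
for (JMSSystemResourceMBean jmsSR : dem.getJMSSystemResources()) {
    boolean targetedAtCluster = false;
    for (TargetMBean target : jmsSR.getTargets()) {
        if ("myCluster".equals(target.getName())) {   // placeholder cluster name
            targetedAtCluster = true;
            break;
        }
    }
    if (!targetedAtCluster) {
        continue;
    }
    System.err.println(jmsSR.getName());
    for (QueueBean qbean : jmsSR.getJMSResource().getQueues()) {
        System.out.println("JNDI NAME: " + qbean.getJNDIName() + " queuename : " + qbean.getName());
    }
}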
