Our Java application consists of a client and a server. In our production environment, establishing the connection takes a long time (~40 seconds).
We captured the network traffic using tcpdump and can see the following packets when the connection is established:
start:                 client -> server   SYN
2 milliseconds later:  server -> client   SYN,ACK
38 *seconds* later:    client -> server   ACK
In our other environments, all three packets occur nearly instantaneously.
Can anyone suggest what might cause the 38 second delay, or suggest steps to diagnose it? Note that, because this is a production environment, it's hard for us to make diagnostic code changes.
Here are some details about our environment:
The client uses SocketConnector from Apache Mina 1.0.1, which internally uses java.nio.channels.SocketChannel.connect(..).
The client is running inside IBM WebSphere 7.0.0.17
Java version = 1.6.0, Java Compiler = j9jit24, Java VM name = IBM J9 VM
OS is AIX, version 6.1
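One way to narrow this down without touching the production application is a tiny standalone test, run from the same client machine, that performs the same kind of SocketChannel connect that Mina does; if it also stalls for ~38 seconds, the problem is in the OS or network layer rather than in Mina or WebSphere. A minimal sketch (host and port below are placeholders):

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;

    public class ConnectTimer {
        public static void main(String[] args) throws Exception {
            long start = System.nanoTime();
            // blocking connect, the same call Mina 1.x makes internally;
            // replace host/port with the real production server
            SocketChannel channel = SocketChannel.open(
                    new InetSocketAddress("server.example.com", 12345));
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println("connect took " + elapsedMs + " ms");
            channel.close();
        }
    }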
Aren't you running out of file descriptors, and/or is your TCP accept queue full?
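For reference, if the listening side is also Java, the TCP accept queue mentioned above is bounded by the backlog argument given when the listening socket is created, and it fills up when the application accepts connections too slowly. A hedged sketch with made-up port and backlog values:

    import java.net.ServerSocket;
    import java.net.Socket;

    public class BacklogExample {
        public static void main(String[] args) throws Exception {
            // second argument is the listen backlog: how many completed
            // connections the kernel queues before the application accepts them
            ServerSocket listener = new ServerSocket(12345, 128);
            while (true) {
                Socket client = listener.accept(); // accepting too slowly fills the queue
                client.close();
            }
        }
    }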
I am connecting from server 1 to server 2 using IBM MQ Series (a single queue manager group) over a single NAT'd subnet IP address. Server 2's destination is a single IP address endpoint. The problem is: whenever I connect different custodian (client) data for upload and a server restart happens at one custodian point, the other custodian clients get reset and can no longer connect to server 2's IP address.
The solution I have attempted: instead of IBM MQ Series and regular C# code, I am using a combination of microservices with Java 8 parallel processing and Apache Kafka to run parallel threads connecting over the single NAT subnet IP, so that none of the custodian connections get reset. Is this solution correct? The destination server still provides only a single IP address endpoint for n custodians. Is there something I can do in the NAT gateway to send data to the destination IP address in parallel? Please help me with my solution; I will be happy to provide any additional info.
Your question is not very clear, and then you go off on a tangent about your solution, which again is not clear and appears to create an overly complex system.
So, let me guess what your setup is:
Server 1 has an IBM MQ client running a C# .NET application. Is this correct? If so, what version is the IBM MQ Client software, e.g. 9.1.0.9?
Server 2 is running an IBM MQ queue manager. Is this correct? If so, what version is the IBM MQ Server software, e.g. 9.2.0.4? Use the dspmqver command to get this information.
If the above is correct then it is pretty much a basic IBM MQ environment.
So, when your application has this issue, MQ would have thrown an MQException. What was the Completion Code and Reason Code? The reason code would probably have been one of the following:
2009 [MQRC_CONNECTION_BROKEN]
2059 [MQRC_Q_MGR_NOT_AVAILABLE]
2538 [MQRC_HOST_NOT_AVAILABLE]
What did your application do when it received the MQException? Crash, die, or loop forever?
Did you add reconnection logic to your code to reconnect to the queue manager after the failure?
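For reference, assuming a Java client (the question mentions C#, but the completion and reason codes are the same), the usual approach is to catch MQException, log the codes, and retry with a delay instead of letting the application die. A rough sketch with placeholder connection details:

    import com.ibm.mq.MQEnvironment;
    import com.ibm.mq.MQException;
    import com.ibm.mq.MQQueueManager;

    public class ReconnectExample {
        public static void main(String[] args) throws InterruptedException {
            MQEnvironment.hostname = "server2.example.com"; // placeholder
            MQEnvironment.channel = "APP.SVRCONN";          // placeholder
            MQEnvironment.port = 1414;

            MQQueueManager qmgr = null;
            while (qmgr == null) {
                try {
                    qmgr = new MQQueueManager("QM1");       // placeholder queue manager name
                } catch (MQException e) {
                    // e.g. 2009, 2059 or 2538, as listed above
                    System.err.println("connect failed, CC=" + e.completionCode
                            + " RC=" + e.reasonCode + "; retrying in 30s");
                    Thread.sleep(30000);
                }
            }
            System.out.println("connected");
        }
    }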
I have created a test program between two virtual machines on my computer: an RMI server running in one VMware virtual machine and an RMI client running in another.
I have set up SSL using SslRMIServerSocketFactory and SslRMIClientSocketFactory, and the client can call methods on the server and get the return value back. Both VMs are running Ubuntu.
What happens is that the client makes the call to the server and I have to wait about 17 seconds until the response reaches the client and the result is printed on the console. Update: the method call itself is fast; all of that time is spent in the Registry.lookup() call.
Isn't 17 seconds too much time?
I know that VMs are slow by nature, plus SSL adds overhead, but still, isn't 17 seconds too long for what I am doing? The remote method only adds two integers and returns the result.
Thank you.
Java does a reverse DNS lookup when it either connects or accepts a socket, for security purposes. You didn't have any DNS or reverse DNS information about the server available at the client, or about the client available at the server. Putting a server entry into the client's /etc/hosts file and a client entry into the server's /etc/hosts file fixed that. Otherwise Java tries a DNS server and times out waiting for a response before proceeding.
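As an illustration only (hostnames and addresses below are made up), the entries were along these lines:

    # on the client, in /etc/hosts
    192.168.56.102   rmi-server

    # on the server, in /etc/hosts
    192.168.56.101   rmi-client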
I had the same problem connecting my RMI client to the RMI server on the same machine.
In my case, even instantiating an object extending UnicastRemoteObject on the server caused a delay of more than 20 seconds. Configuring /etc/hosts with a localhost entry didn't fix the problem, and I had to add this line to my Java application before using RMI:
System.setProperty("java.rmi.server.hostname", "127.0.0.1");
Our WebSphere MQ GUI shows the Java client as connected.
On further investigation, we see that the Java application on the client machine is not even running; it hit an error and quit.
The question is: when the Java program terminates, shouldn't the MQ-Java connection automatically disconnect?
Expert advice needed.
How do you know the application is still connected? You say client machine, so is the SVRCONN channel used by the Java application still running? Did you refresh the WMQ Explorer GUI (as Shashi asked)?
MQ does clean up connections, but it also relies on the underlying operating system to clear the dead TCP connection and mark the port as unused (on Linux, see net.ipv4.tcp_keepalive_time).
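To make the "is the channel still running" check concrete: you can display the channel instance status on the queue manager with runmqsc (queue manager and channel names below are placeholders), and on Linux you can read the keepalive timer with sysctl:

    runmqsc QM1
    DIS CHSTATUS('APP.SVRCONN')

    sysctl net.ipv4.tcp_keepalive_time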
So I wrote a program to connect to a clustered WebLogic server behind a VIP, with 4 servers and 4 queues that are all connected (I think they call them distributed queues). When I run the program from my local machine and just get JMS connections, look for messages, and disconnect, it works great. And by that I mean it:
iteration #1
connects to server 1.
looks for a message
disconnects
iteration #2
connects to server 2.
looks for a message
disconnects
and so on.
When I run it on the server, though, the application picks one server and sticks to it. It never picks a new server, so the queues on the other servers never get worked, like a "sticky session" setup... My OS is Win7 and the server OS is Win2008r2; the JDK is identical on both machines. How is this configured client side? The server implementation uses "Apache Procrun" to run it as a service, but I haven't seen too many issues with that part...
Is there a session cookie getting written out somewhere?
Any ideas?
Thanks!
Try disabling 'Server Affinity' on the JMS connection factory. If you are using the Default Connection Factory, define your own and disable Server Affinity.
EDIT:
Server Affinity is a server-side setting, but it controls how messages are routed to consumers after a WebLogic JMS server receives the message. The other option is to use round-robin DNS and connect to a single hostname that resolves to a different IP (Managed Server) each time, so that each connection goes to a different server.
I'm pretty sure this is the setting you're looking for :)
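If you go the custom connection factory route, the only client-side change is looking up your factory's JNDI name instead of the default one. A rough sketch (the provider URL and JNDI name are placeholders):

    import java.util.Hashtable;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class LookupExample {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://vip.example.com:7001"); // the VIP in front of the cluster
            InitialContext ctx = new InitialContext(env);

            // custom factory with Server Affinity disabled, instead of the default factory
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/NoAffinityCF");
            Connection conn = cf.createConnection();
            System.out.println("connected: " + conn);
            conn.close();
            ctx.close();
        }
    }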
I have a client/server application that communicates through JNDI/RMI/IIOP using, on the client side, some Glassfish client code (NOT packaged as a Glassfish client) and, on the server side, a Glassfish instance.
I have some Glassfish multimode scripts that I use to make sure the domains I create on any machines are totally identical and correctly configured.
Using those scripts on the local network, I have already made sure I can access a remote Glassfish server instance from client code on my machine (that was quite a reasonable guess; however, I tend to test everything I'm not totally sure of).
The next step is to have that client/server application working over (I should instead say "through") the internet: with my client code in my company LAN (in other words, on my machine) and my server code on an Amazon VM running my Glassfish server. For some reason, the remote Glassfish is running on a Windows VM.
Obviously (as I'm asking this question), you can safely guess the through-internet test is NOT working. And you're right.
So, to gather more clues, I started SmartSniffer both on my machine and on the server.
On my machine, I can only see one TCP packet going to that server instance (and nothing coming back).
On the server instance, I can see one packet entering (the client query) and one packet exiting (the server answer). That server answer looks like this:
[4/4/2012 11:47:13 AM:917]
GIOP.......(................NameService....._is_a...................NEO................ª.......(IDL:omg.org/SendingContext/CodeBase:1.0............n........172.27.63.145.Ô2....¯«Ë........e...........................
...................
... ...........&...............(IDL:omg.org/CosNaming/NamingContext:1.0.
That 172.27.63.145 address is my IP in local network.
[4/4/2012 11:47:13 AM:917]
GIOP.......2............NEO................0.......(IDL:omg.org/SendingContext/CodeBase:1.0............ô........46.137.114.###.'5....¯«Ë........d...........................
...................
... ...........&...........!...|...............$...
...f............10.241.42.###.'6.#........g..............g........default...................g...............+IDL:omg.org/CosNaming/NamingContextExt:1.0.............¢........10.241.42.208.'5...M¯«Ë....
...d... S1AS-ORB............RootPOA....
TNameService............................... ...................
... ...........&......
That 46.137.114.### is the external IP of my Amazon VM, and 10.241.42.### is its internal IP inside Amazon's virtual network.
So it seems the server is answering, no?
But that answer never finds its way to my machine in my network.
So... how can I check where it gets lost? It seems the packet sniffer has done its job, but what can I do now?
NOTE This question is a clarification of "How to Connect a glassfish client to glassfish server over NATs?"
Perhaps a stupid question, but is your Amazon EC2 instance configured with all the ports required by your communication protocol open? You can see the configured open ports in the security group your instance is assigned to, in the AWS console under EC2 -> Security Groups.