I have made 3 different types of extremely simple servers on a remote machine:
Java TCP Server
Java HTTP Server
Node.js Server
I tested each separately, connecting them to the same port (call it port 9005). They used only the code linked above, modifying ports where necessary. When I connect to my remote server from my laptop (terminal and/or browser), 1 and 2 work, yet 3 does not.
Since 1 and 2 work, I assumed there were no firewall issues, but could there still be some? After all, 1 and 2 are Java, while 3 is Node.js (JavaScript).
Related
I have a web application that consists of a Java part and a PHP part. When a user makes a request, the PHP process opens a TCP/IP connection to the Java process. It keeps this connection open for the duration of the request, and the connection is used to send a lot of information back and forth. This application runs very well as long as it's hosted on either a dedicated server or on a VM that uses OpenVZ.
As soon as I try to host it on a KVM VM it becomes extremely slow. The reason for this is that within a single user request the PHP process can easily do one or two thousand TCP sends to the Java process. Since this is all done over the same connection it really should not be a problem, but on KVM VMs each send seems to incur about 20 milliseconds of delay, so a request that would normally take 0.1 seconds takes 20 seconds instead.
I'm not 100% sure KVM is to blame, but I have tested this on 3 different hosting providers using OpenVZ and another 3 different hosting providers using KVM. It runs perfectly fine on all the OpenVZ hosts, and the send-delay problem is present on all the KVM hosts.
Oh, and I have TCP_NODELAY set on both the Java and the PHP side.
Any idea what I could try to make this work on KVM?
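For reference, on the Java side TCP_NODELAY is set per socket via setTcpNoDelay(true) (in PHP it's the corresponding TCP_NODELAY socket option). A minimal self-contained sketch, using a loopback connection on an ephemeral port:

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class NoDelayDemo {
    public static void main(String[] args) throws Exception {
        // A throwaway server on an OS-chosen port, just so we have something to connect to
        try (ServerSocket server = new ServerSocket(0, 50, InetAddress.getLoopbackAddress());
             Socket socket = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
            // Disable Nagle's algorithm so each small write is sent immediately
            socket.setTcpNoDelay(true);
            System.out.println(socket.getTcpNoDelay()); // prints true
        }
    }
}
```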
So, to answer my own question: it seems you won't be able to avoid that send latency, since even though it's on localhost, the traffic still has to go from the virtualization layer down to the network layer and back up.
However, instead of creating TCP sockets on localhost, the solution was to use Unix sockets instead, since Unix sockets do not touch the network layer at all.
And as a bonus, using Unix sockets instead of TCP sockets gave my application a nice across-the-board performance boost, including on setups where it worked fine before.
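For illustration only: native Unix-domain-socket support arrived in Java 16 (java.nio.channels with StandardProtocolFamily.UNIX), so the original PHP-to-Java setup presumably used a library on each side. A minimal Java-to-Java sketch of the mechanism, with a throwaway socket file in the temp directory:

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnixSocketDemo {
    public static void main(String[] args) throws Exception {
        Path socketFile = Path.of(System.getProperty("java.io.tmpdir"), "demo.sock");
        Files.deleteIfExists(socketFile);
        UnixDomainSocketAddress address = UnixDomainSocketAddress.of(socketFile);
        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(address);
            // Client side: connect via the filesystem path, bypassing the network stack
            try (SocketChannel client = SocketChannel.open(address)) {
                client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    peer.read(buf);
                    buf.flip();
                    System.out.println(StandardCharsets.UTF_8.decode(buf)); // prints ping
                }
            }
        } finally {
            Files.deleteIfExists(socketFile);
        }
    }
}
```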
I have created a test program between two virtual machines on my computer: an RMI server running in one virtual machine on VMware, and an RMI client running in another virtual machine on VMware.
I have set up SSL using SslRMIServerSocketFactory and SslRMIClientSocketFactory, and the client can call methods on the server and get the return value back. So right now I have two Ubuntu machines running on VMware.
What happens is that the client makes the call to the server and I have to wait about 17 seconds until the response from the server reaches the client and the print is executed on the console. Updated: the method call itself is fast. All this time is taken by the Registry.lookup() call.
Aren't 17 seconds too much time?
I know that VMs are slow by nature, plus the fact that SSL is running but still, aren't 17 seconds too much for what I am doing? The remote method only adds two integers and returns the result.
Thank you.
Java does a reverse DNS lookup when it either connects or accepts a socket, for security purposes. You didn't have any DNS or reverse-DNS information about the server available at the client, or about the client available at the server. Putting a server entry into the client's /etc/hosts file and a client entry into the server's /etc/hosts file fixed that. Otherwise each lookup goes out to a DNS server and has to time out waiting for a response before proceeding.
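For example, entries like these in /etc/hosts on each side let the lookup succeed immediately (the hostnames and addresses below are placeholders; substitute your real ones):

```
# On the client machine, /etc/hosts:
192.168.56.10   rmi-server.example

# On the server machine, /etc/hosts:
192.168.56.20   rmi-client.example
```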
I had the same problem connecting my RMI client to the RMI server on the same machine.
In my case, even the instantiation of an object inheriting from UnicastRemoteObject in the server caused a delay of more than 20 seconds. Configuring /etc/hosts with localhost didn't fix the problem, and I had to add this line to my Java application before using RMI:
System.setProperty("java.rmi.server.hostname", "127.0.0.1");
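A minimal sketch of where that line belongs: the property must be set before the first remote object is exported, because its value is baked into the stubs handed out to clients.

```java
public class RmiHostnameFix {
    public static void main(String[] args) {
        // Set BEFORE any UnicastRemoteObject is constructed/exported
        System.setProperty("java.rmi.server.hostname", "127.0.0.1");
        System.out.println(System.getProperty("java.rmi.server.hostname")); // prints 127.0.0.1
        // ... then create the registry and export remote objects, e.g.:
        // Registry registry = LocateRegistry.createRegistry(1099);
    }
}
```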
Connecting to a serial port from Java on Windows 7 is not working:
Error setting serial port COM21 state.
The same port connects fine using PuTTY.
Other ports [COM7, COM8, COM9] can be opened from both Java and PuTTY.
From Java, ports numbered higher than 9 cannot be opened.
That's actually an operating-system limitation of sorts. I remember an MSDN article about kernel drivers which explains that COM ports above 9 require a different naming strategy, one that is already used transparently when you write native Microsoft C/C++ applications. This could very well limit accessibility from other languages.
You can try accessing the port via a device path like this:
\\\\.\\COM21
This should work in most cases; read http://support.microsoft.com/kb/115831 for more info.
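Note that in Java source, each backslash of that device path must itself be escaped, which is why the literal above has doubled backslashes. A tiny sketch (COM21 as in the question; the actual serial-library open call is omitted):

```java
public class ComPortName {
    public static void main(String[] args) {
        // The Win32 device path \\.\COM21 written as a Java string literal:
        String portName = "\\\\.\\COM21";
        System.out.println(portName); // prints \\.\COM21
        // Pass portName to your serial library's open/connect call.
    }
}
```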
I have a legacy server application, written in Java, which has been running fine for the last six years on a Windows 2003 machine with Java 6.
We recently migrated the application to a brand new Windows 2008 machine running the latest version of Java.
Although the application seems to work fine, there is one weird issue:
The code String remoteip = socket.getInetAddress().getHostAddress() seems to return the internet-facing IP of the server machine instead of the IP of the remote client.
This was working properly on both Linux and Windows 2003 machines over the last 6-7 years.
To double-check all settings, I set up a small PHP website on IIS and printed the value of the REMOTE_ADDR variable. It printed the correct IP address of the client.
Any clues on what could be confusing the java app?
The Javadoc for ServerSocket says this:
getInetAddress()
Returns the local address of this server socket.
Probably previously you were running the server and the client on the same machine.
To be more specific: you probably have a ServerSocket (sSocket) waiting for connections from clients.
If you call sSocket.getInetAddress(), you will get the IP address of the server.
On the other hand, the role of a ServerSocket is to bind to an IP address and port and wait for connections from clients. When such a connection is made, the sSocket.accept() method returns a Socket that represents the server's connection to that specific client (cSocket). Calling cSocket.getInetAddress() (or cSocket.getRemoteSocketAddress()) returns the IP of the client.
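The distinction can be sketched in a self-contained demo with both ends on the loopback interface (port chosen by the OS):

```java
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class RemoteIpDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(0, 50, InetAddress.getLoopbackAddress())) {
            // A client connecting from this same process, just for the demo
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(), serverSocket.getLocalPort());
                 Socket accepted = serverSocket.accept()) {
                // On the ServerSocket: the LOCAL address it is bound to
                System.out.println("server: " + serverSocket.getInetAddress().getHostAddress());
                // On the accepted Socket: the REMOTE (client) address
                System.out.println("client: " + accepted.getInetAddress().getHostAddress());
            }
        }
    }
}
```

Here both lines print 127.0.0.1 because everything is on loopback, but on a real server the first is the server's bound address and the second is the connecting client's address.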
I have a client/server application that communicates through JNDI/RMI/IIOP using, on the client side, some Glassfish client code (NOT packaged as a Glassfish client) and, on the server side, a Glassfish instance.
I have some Glassfish multimode scripts that I use to make sure the domains I create on any machines are totally identical and correctly configured.
Using that script on the local network, I have already made sure I can access a remote Glassfish server instance from client code on my machine (that was quite a reasonable guess; however, I tend to test all the things I'm not totally sure of).
The next step is to have that client/server application working over (I should instead say "through") the internet: with my client code on my company LAN (in other words, on my machine) and my server code on an Amazon VM running my Glassfish server. For various reasons, the remote Glassfish is running on a Windows VM.
Obviously (as I'm asking this question, you can safely guess), the through-internet test is NOT working. And you're right.
So, to gather more clues, I started SmartSniffer both on my machine and on the server.
On my machine, I can only see one TCP packet going to that server instance (and nothing coming back).
On the server instance, I can see one packet entering (the client query) and one packet exiting (the server answer). That server answer looks like this:
[4/4/2012 11:47:13 AM:917]
GIOP.......(................NameService....._is_a...................NEO................ª.......(IDL:omg.org/SendingContext/CodeBase:1.0............n........172.27.63.145.Ô2....¯«Ë........e...........................
...................
... ...........&...............(IDL:omg.org/CosNaming/NamingContext:1.0.
That 172.27.63.145 address is my IP in local network.
[4/4/2012 11:47:13 AM:917]
GIOP.......2............NEO................0.......(IDL:omg.org/SendingContext/CodeBase:1.0............ô........46.137.114.###.'5....¯«Ë........d...........................
...................
... ...........&...........!...|...............$...
...f............10.241.42.###.'6.#........g..............g........default...................g...............+IDL:omg.org/CosNaming/NamingContextExt:1.0.............¢........10.241.42.208.'5...M¯«Ë....
...d... S1AS-ORB............RootPOA....
TNameService............................... ...................
... ...........&......
That 46.137.114.### address is the external IP of my Amazon VM, and 10.241.42.### is its internal IP inside Amazon's virtual network.
So it seems the server is answering, no?
But that answer never finds its way to my machine in my network.
So... how can I check where it gets lost? It seems like the packet sniffer has done its job, but what can I do now?
NOTE This question is a clarification of "How to Connect a glassfish client to glassfish server over NATs?"
Perhaps a stupid question, but is your Amazon EC2 instance configured with all the required ports open for your communication protocol to work? You can see the configured open ports in the security group your instance is assigned to, in the AWS console under EC2 -> Security Groups.