I have an instance of a class A that implements java.rmi.Remote.
In order to check the health of the connection to the RMI Server, I invoke a custom-made, trivial member function of the instance of A and see if an Exception is thrown. That's not really elegant. Therefore my question:
Is there any native way to check if the connection is available for method invocation on the instance of A, i.e. without the need to actually try to call a member function?
A special case is: Should the RMI server be restarted during the lifetime of the instance of A on the client side, then the instance of A becomes invalid and defunct (although the server might be back up and healthy).
From the Java RMI FAQ:
F.1 At what point is there a "live" connection between the client and
the server and how are connections managed?
When a client does a "lookup" operation, a connection is made to the
rmiregistry on the specified host. In general, a new connection may or
may not be created for a remote call. Connections are cached by the
Java RMI transport for future use, so if a connection is free to the
right destination for a remote call, then it is used. A client cannot
explicitly close a connection to a server, since connections are
managed at the Java RMI transport level. Connections will time out if
they are unused for a period of time.
Your questions:
Is there any native way to check if the connection is available for
method invocation on the instance of A, i.e. without the need to
actually try to call a member function?
This question boils down to how to check programmatically whether a given server/system is up. That has already been answered several times, both here and on other forums. One such question is Checking if server is online from Java code.
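For what it's worth, a minimal sketch of such a probe in Java (host, port, and timeout are placeholders, and, as the other answer here stresses, a successful probe only tells you the server was reachable at that instant):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class ServerProbe {

    // Returns true if a TCP connection to host:port could be opened within timeoutMillis.
    // It proves nothing about what happens a moment later.
    public static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}

A call such as ServerProbe.isReachable("myhost", 1099, 2000) would probe the default RMI registry port with a two-second timeout.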
A special case is: Should the RMI server be restarted during the
lifetime of the instance of A on the client side, then the instance of
A becomes invalid and defunct (although the server might be back up
and healthy).
Then again, the answer is fairly simple. If the instance were in the middle of a remote method invocation, a connection-related exception would be thrown immediately.
Again, from the RMI FAQ, D.8 Why can't I get an immediate notification when a client crashes?:
If a TCP connection is held open between the client and the server
throughout their interaction, then the server can detect the client
reboot (I'm adding here: and vice versa) when a later attempt to write to the connection
fails (including the hourly TCP keepalive packet, if enabled).
However, Java RMI is designed not to require such permanent
connections forever between client and server (or peers), as it impairs
scalability and doesn't help very much.
Given that it is absolutely impossible to instantly determine when a
network peer crashes or becomes otherwise unavailable, you must decide
how your application should behave when a peer stops responding.
The lookup will keep working as long as the server is up and doesn't go down while the client is invoking a remote method. You must decide here how your application should behave if the peer restarts. Additionally, there is no concept of a session in RMI.
I hope this answers all of your questions.
Your question is founded on a fallacy.
Knowing the status in advance doesn't help you in the slightest. The status test is followed by a timing window which is followed by your use of the server. During the timing window, the status can change. The server could be up when you test and down when you use. Or it could be down when you test and up when you use.
The correct way to determine whether any resource is available is to try to use it. This applies to input files, RMI servers, Web systems, ...
Should the RMI server be restarted during the lifetime of the instance of A on the client side, then the instance of A becomes invalid and defunct (although the server might be back up and healthy).
In this case you will get either a java.rmi.ConnectException or a java.rmi.NoSuchObjectException depending on whether the remote object restarted on a different port or the same port.
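In code, that "try to use it" approach might look roughly like this; MyRemote and its trivial ping() method are hypothetical stand-ins for your interface A and its custom member function:

import java.rmi.ConnectException;
import java.rmi.NoSuchObjectException;
import java.rmi.Remote;
import java.rmi.RemoteException;

interface MyRemote extends Remote {
    void ping() throws RemoteException;   // hypothetical trivial member function
}

class StubHealth {
    // Distinguishes the failure modes described above by simply invoking the stub.
    static void checkByUsing(MyRemote stub) {
        try {
            stub.ping();
            // the call succeeded, so the stub is usable right now
        } catch (ConnectException e) {
            // server down, or restarted with the remote object exported on a different port
        } catch (NoSuchObjectException e) {
            // server restarted on the same port; this stub no longer refers to a live object,
            // so repeat the registry lookup to obtain a fresh stub
        } catch (RemoteException e) {
            // some other RMI failure
        }
    }
}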
Related
I am completely new to creating a network connection in Java, so I apologize if this is a stupid question.
I am trying to create a D&D companion in Java that will allow a player to create their character and then send it to the DM, so that they can view it, make changes, and send it back to the player. I want to be able to make it so that any time a field is changed on one computer it will also be changed on the other computer.
After a bunch of research online I have been able to create a socket connection between the DM (server) and the player (client) and pass a message between the two, but I am not sure how a socket connection works after this initial connection is made. My research has not been very clear on this. I have found many resources that have said that Java closes the socket after a message has been passed and many that say that the socket stays open.
If Java closes the socket then my problem is easy enough to solve, because then I will just have to open a new socket every time I need to pass data, making sure that I pass the IP address of the client to the server the first time I make a connection.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message? (If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
If the server does not shout then how do I specify which client I want the server to talk to?
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client or will java automatically do so after I run the method the first time?
I have found many resources that have said that Java closes the socket after a message has been passed
You found them where?
and many that say that the socket stays open.
All those are correct. Java never closes connections. The application closes connections.
If Java closes the socket then my problem is easy enough to solve, because then I will just have to open a new socket every time I need to pass data, making sure that I pass the IP address of the client to the server the first time I make a connection.
It doesn't.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message?
No. It will respond via the socket that is connected to the corresponding client.
(If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
Unnecessary.
If the server does not shout then how do I specify which client I want the server to talk to?
The server responds via the same socket it read the request from.
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client
No, you will have to add a thread per accepted socket that loops reading requests until end of stream (see the sketch after this answer).
or will java automatically do so after I run the method the first time?
No.
You seem to have been reading some truly appalling drivel. Take a look at the Custom Networking section of the Java Tutorial.
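For what it's worth, a minimal sketch of the thread-per-accepted-socket pattern described above; the port and the line-based echo protocol are made up purely for illustration:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(5000)) {       // port is arbitrary
            while (true) {
                Socket client = listener.accept();                   // one dedicated socket per client
                new Thread(() -> handle(client)).start();            // one thread per accepted socket
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {                 // loop until end of stream
                out.println("echo: " + line);                        // the reply goes back on the same socket
            }
        } catch (IOException e) {
            // the client went away; just drop this connection
        }
    }
}

The server never "shouts": each reply is written to the socket the request arrived on, so no username prefix is needed to route messages.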
Adding to EJP's wise answer, it might be worth clarifying:
Sounds like you (wisely) use TCP, so your Socket represents a connection between one server and one client. No "shouting". In examples such as this one, when the connection is established (namely, the client obtains a Socket by calling "new Socket" and the server obtains a Socket by calling "accept"), those Sockets are dedicated to those two specific endpoints. So if 10 clients connect to 1 server, the server will keep 10 Sockets and won't mix them up. A bit like a poor secretary who has 10 phones on his desk and answers them all - despite the mess, each earpiece is clearly connected to one customer.
The connection can stay open for a while and serve several messages. It will terminate when either one of the sides calls 'socket.close', or it can be terminated by underlying third parties (operating system, proxies, firewalls).
For your first version, or for simple business requirements, it's probably enough to converse over this one simple connection. However, for commercially critical data that requires 'assurance of delivery', you might need to invest some careful thought and possibly tools such as RabbitMQ.
Good luck:)
I am using Elasticsearch 1.5.1 and Tomcat 7. The web application creates a TCP client instance as a singleton during server startup through the Spring Framework.
Just noticed that I failed to close the client during server shutdown.
Through analysis with various tools like VisualVM, JConsole, and MAT in Eclipse, it is evident that threads created by the Elasticsearch client are still live even after server (Tomcat) shutdown.
Note: after introducing client.close() via Context Listener destroy methods, the threads are killed gracefully.
But my queries here are:
how do I check the memory occupied by these live threads?
what is the memory-leak impact of these threads?
We have had a few OutOfMemoryError: PermGen space errors in production. This might be one cause, but I would still like to measure it and provide stats for it.
Any suggestions/help please.
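For reference, the cleanup mentioned in the note above (closing the client from a context listener's destroy method) could look roughly like this; how the listener gets hold of the singleton client depends on your Spring wiring:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.elasticsearch.client.Client;

public class ElasticsearchShutdownListener implements ServletContextListener {

    // In practice this would be the singleton client created at startup,
    // e.g. obtained from the Spring application context.
    private Client client;

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // create the client here, or look it up from the Spring context
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        if (client != null) {
            client.close();   // releases the transport threads the client started
        }
    }
}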
Typically clients run in a different process than the services they communicate with. For example, I can open a web page in a web browser, and then shutdown the webserver, and the client will remain open.
This has to do with the underlying design choices of TCP/IP. Glossing over the details, in most cases a client only detects that its server is gone during the next request to the server. (Again, generally speaking) it does not continually poll the server to see if it is alive, nor does the server generally send a "please disconnect" message on shutting down.
The reason that clients don't generally poll servers is because it allows the server to handle more clients. With a polling approach, the server is limited by the number of clients running, but without a polling approach, it is limited by the number of clients actively communicating. This allows it to support more clients because many of the running clients aren't actively communicating.
The reason that servers typically don't send an "I'm shutting down" message is that the server often goes down uncontrollably (power outage, operating system crash, fire, short circuit, etc.). This means that a protocol which requires such a message will leave the clients in a corrupt state if the server goes down in an uncontrolled manner.
So losing a connection is really a function of a failed request to the server. The client will still typically be running until it makes the next attempt to do something.
Likewise, opening a connection to a server often proves very little by itself. To validate that you really have a working connection to a server, you must ask it for some data and get a reply. Most protocols do this automatically to simplify the logic; but if you ever write your own service and you don't ask for data from the server, then even if the API says you have a good "connection", you might not. The API can report a good "connection" when you merely have everything configured successfully on your own machine. To really know that it works 100% with the other machine, you need to ask for data (and get it).
Finally servers sometimes lose their clients, but because they don't waste bandwidth chattering with clients just to see if they are there, often the servers will put a "timeout" on the client connection. Basically if the server doesn't hear from the client in 10 minutes (or the configured value) then it closes the cached connection information for the client (recreating the connection information as necessary if the client comes back).
From your description it is not clear which of the scenarios you might be seeing, but hopefully this general knowledge will help you understand why after closing one side of a connection, the other side of a connection might still think it is open for a while.
There are ways to configure the network connection to report closures more immediately, but I would avoid using them, unless you are willing to lose a lot of your network bandwidth to keep-alive messages and don't want your servers to respond as quickly as they could.
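On the Java side, that kind of idle-client timeout is usually expressed as a read timeout on the accepted socket; a small sketch, with the port and the 10-minute figure taken purely as example values:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class IdleTimeoutServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(8080)) {       // port is arbitrary
            Socket client = listener.accept();
            // Reads on this socket now fail with SocketTimeoutException after
            // 10 idle minutes, at which point the server can drop the connection.
            client.setSoTimeout(10 * 60 * 1000);
            // ... read requests from client.getInputStream() as usual
        }
    }
}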
We currently have a server that creates a new thread for each request it gets; basically, the server receives data that it needs to save later.
Now we have been asked to implement RMI so that we can observe what kind of data is currently being saved.
What is the best way to handle this? Should I create an RMI server for each thread? Can I have multiple instances of the same service at the same address and let my observer register with all of them?
I'm using the Google example for the RMI access:
https://sites.google.com/site/jamespandavan/Home/java/sample-remote-observer-based-on-rmi#TOC-Running-the-server-client
You don't need a remote object per thread, because you won't even have visible threads. A remote object is already multi-threaded and already takes care of its own incoming connections. You will be throwing stuff away rather than adding.
You might need a remote object per client, if you want them to behave like sessions, but that's a different story.
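A minimal sketch of the first point: one remote object, exported once, serving every observer. The interface and names below are invented for illustration, not taken from the linked example:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface SaveMonitor extends Remote {
    String currentlySaving() throws RemoteException;   // what the observers want to see
}

class SaveMonitorImpl implements SaveMonitor {
    private volatile String current = "nothing";

    public String currentlySaving() {
        return current;   // the RMI runtime dispatches concurrent client calls on its own threads
    }

    void setCurrent(String description) {
        current = description;   // called by the server's existing worker threads
    }
}

class MonitorServer {
    private static SaveMonitorImpl monitor;   // strong reference keeps the exported object alive
    private static Registry registry;         // same for the in-process registry

    public static void main(String[] args) throws Exception {
        monitor = new SaveMonitorImpl();
        SaveMonitor stub = (SaveMonitor) UnicastRemoteObject.exportObject(monitor, 0);
        registry = LocateRegistry.createRegistry(1099);
        registry.rebind("SaveMonitor", stub);  // one exported object is enough for all observers
    }
}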
I have written a simple Java application that interacts with multiple instances of itself using sockets. The first instance automatically takes on the role of the server, listening on a specific port, and all subsequent instances connect to it.
The problem I'm faced with is that Windows Firewall pops up asking me if I want to unblock the program from "accepting incoming network connections". The thing is: it doesn't matter if you leave the application blocked, because the instances of the application are always on the same machine, so it will always work.
Can I inform Windows somehow that I don't even want incoming network connections to be accepted?
Use the three-parameter constructor of the ServerSocket class to also specify the IP address that the server should listen on. That way you can restrict the server to listen only on 127.0.0.1, unlike the default of 0.0.0.0. See this related Stack Overflow question for more details.
It is preferable to use InetAddress.getByName(null) to obtain the local address.
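Combining the two suggestions, a sketch of a server that only listens on the loopback interface (the port is arbitrary, and 50 is just the default backlog spelled out):

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class LocalOnlyServer {
    public static void main(String[] args) throws IOException {
        // getByName(null) resolves to the loopback address, so the socket is
        // unreachable from other machines; only local instances can connect.
        try (ServerSocket server = new ServerSocket(12345, 50, InetAddress.getByName(null))) {
            // ... accept() connections from the other local instances as before
        }
    }
}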
We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help to gather state information, it's often easier to use a remote debugger. Setting this up is easy -- one need only add this to his command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Redhat Enterprise 4). How can we enable secure debugging? Any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So, KarlP's suggestion of binding to loopback and just using a SSH tunnel to a local developer box should work given SSH is set up properly on the servers.
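For example, something along these lines on a JDK 5+ JVM (the exact option syntax and support for the host part depend on the JVM version, so treat this as a sketch rather than a guaranteed recipe):

-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:9001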
However, it seems that JDK1.4x does not allow an interface to be specified for the debug port. So, we can either block access to the debug port somewhere in the network or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.)?
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing - the throw itself is what wakes up the JDWP agent via onthrow
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebugger exception as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging of a production system will always result in a restart afterward.
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain ol' Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
Replace your "main" class with DebuggerMainWrapper in your start-up scripts
Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added
Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure while it's on. On the upside, I don't think the debugger allows multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file-notification method because it allowed us to piggyback on our existing Unix authentication/authorization by specifying the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieved the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
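For anyone who wants to reconstruct it, a rough sketch of the poller described above; the class names, the poll interval, and the daemon-thread choice are guesses, not the original code, and the exception's fully qualified name must match the onthrow= value on the command line:

import java.io.File;

class TurnOnDebuggerException extends Exception {
    // Deliberately never thrown anywhere else, exactly as described above.
}

public class DebuggerPoller extends Thread {

    private final File triggerFile;

    public DebuggerPoller(File triggerFile) {
        this.triggerFile = triggerFile;
        setDaemon(true);   // don't keep the JVM alive just for the poller
    }

    @Override
    public void run() {
        while (true) {
            if (triggerFile.exists()) {
                try {
                    // The throw itself is what matters: onthrow=... tells the JDWP
                    // agent to start listening the first time this exception is thrown.
                    throw new TurnOnDebuggerException();
                } catch (TurnOnDebuggerException expected) {
                    return;   // the debugger is now listening; the poller's job is done
                }
            }
            try {
                Thread.sleep(5000);   // poll every N seconds (5 seconds here)
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}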
If you use SSH, you can allow tunneling and tunnel a port to your local host. No development required; it's all done using sshd, ssh, and/or PuTTY.
The debug socket on your java server can be set up on the local interface 127.0.0.1.
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
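A minimal sketch of the JMX side: expose a piece of application state as a standard MBean on the platform MBean server (the attribute and names here are made up):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the interface is named <ImplementationClass>MBean.
interface AppStatusMBean {
    int getQueueDepth();
}

class AppStatus implements AppStatusMBean {
    public int getQueueDepth() {
        return 42;   // placeholder; expose whatever state you need to inspect
    }
}

class JmxSetup {
    static void register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new AppStatus(), new ObjectName("com.example:type=AppStatus"));
    }
}

Remote access is then enabled and secured separately, for example via the standard com.sun.management.jmxremote.* system properties with SSL and authentication turned on.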
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
Put the production machine behind a firewall that blocks access to the debugging port(s).
Run a proxy process on the host itself that connects to the port, and encrypts the input and output from the socket.
Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have this connect to the server proxy. Communication between them would be encrypted.
Connect your debugger to the proxy client.