I'm new to Java and RMI, but I'm trying to write my app in such a way that there are many clients connecting to a single server. So far, so good....
But when I close the server (simulating a crash or communication issue), my clients remain unaware until I make my next call to the server. It is a requirement that my clients continue to work without the server in an 'offline mode', and the sooner I know that I'm offline, the better the user experience will be.
Is there an active connection that remains open that the client can detect a problem with or something similar - or will I simply have to wait until the next call fails? I figured I could have a 'health-check' ping the server but it seemed like it might not be the best approach.
Thanks for any help
Actually, I'm just trying to learn more about RMI and CORBA myself, so I'm not as far along as you are. All I know is that those systems are also built to be inexpensive, and as far as I know an active connection is an expensive thing.
I would suggest you use a multicast address to which your server periodically sends some kind of "I'm still here" message, but without using TCP connections; UDP should be enough for that purpose and more efficient.
I looked into this a bit when I was writing an RMI app (a uni assignment), but I didn't come across any built-in functionality for testing whether a remote system is alive. I would just use a UDP heartbeat mechanism for this.
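A minimal sketch of such a heartbeat, with an illustrative multicast group, port, and timeout (none of this is provided by RMI itself):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.net.SocketTimeoutException;

    public class Heartbeat {
        // Illustrative group/port - pick whatever suits your network.
        static final String GROUP = "230.0.0.1";
        static final int PORT = 4446;

        // Server side: broadcast "I'm still here" once a second.
        static void serverLoop() throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                byte[] msg = "I'm still here".getBytes("UTF-8");
                InetAddress group = InetAddress.getByName(GROUP);
                while (true) {
                    socket.send(new DatagramPacket(msg, msg.length, group, PORT));
                    Thread.sleep(1000);
                }
            }
        }

        // Client side: if no heartbeat arrives within 3 seconds, assume we're offline.
        static void clientLoop() throws Exception {
            try (MulticastSocket socket = new MulticastSocket(PORT)) {
                socket.joinGroup(InetAddress.getByName(GROUP));
                socket.setSoTimeout(3000);
                byte[] buf = new byte[64];
                while (true) {
                    try {
                        socket.receive(new DatagramPacket(buf, buf.length));
                        // Heartbeat received: server is reachable.
                    } catch (SocketTimeoutException e) {
                        System.out.println("No heartbeat for 3s - switching to offline mode");
                    }
                }
            }
        }
    }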
(Untested.) Have a separate RMI call that repeatedly calls into the server and just does a "wait X seconds" before returning; that call should be told that the execution has failed as soon as the server is brought down.
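A rough sketch of that idea (the interface and method names are mine): the client keeps a long-running call outstanding, and the moment the server dies, the pending call fails with a RemoteException and the client can switch to offline mode.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Remote interface exposing the long-wait call (illustrative name).
    interface HealthCheck extends Remote {
        // The server implementation just sleeps for X seconds and returns.
        void waitForTrouble() throws RemoteException;
    }

    class OfflineDetector implements Runnable {
        private final HealthCheck server;

        OfflineDetector(HealthCheck server) {
            this.server = server;
        }

        @Override
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    server.waitForTrouble(); // blocks on the server for X seconds
                } catch (RemoteException e) {
                    // The in-flight call failed: the server is unreachable.
                    System.out.println("Server gone - switching to offline mode");
                    return;
                }
            }
        }
    }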
I've got a Java program that opens a TCP stream and connects to a listening port on a remote server. I send a request to the server and I receive a response. I then let the stream sit idle for 60 minutes. At that point if I write a new request it will not arrive at the server. In short order TCP/IP will let me know that the connection has gone away.
My client code is running on a Windows laptop which is connected to a corporate environment via a VPN router. The server is whirring away up in Canada, far away from me here in central Massachusetts USA. I'm likely being routed through multiple pieces of networking equipment. I have no idea which one is causing the stream to break. (I keep thinking of Ghostbusters and "Don't cross the streams!")
What is the best term to use when a piece of equipment specifically "forgets" about a TCP connection which has been idle, causing it to break? Is that half-open, half-closed, or just plain gone?
I want to be able to simulate this timeout scenario entirely within my home lab so that I can do easier testing -- for example, where I don't have to wait for 60 minutes! What's a good technique, and what is the appropriate equipment I should use to simulate this "disconnect"? I've got extra switches here at home, as well as one old (and feisty!) WRT router that could use some lovin'.
I do not want to enable keepalive to mask the problem. Keepalive won't prevent all possible stream disconnection scenarios, AFAIK. I want to do the best that I can at letting the problem occur and handling it quickly and cleanly when it does.
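One idea I've been toying with but have not tried (everything below -- ports, hostnames, the delay -- is made up): a tiny "forgetful" TCP proxy in Java that forwards traffic between client and server and then, after a configurable delay, silently stops forwarding without closing either socket, roughly what I imagine the middlebox is doing to me.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicBoolean;

    // Sketch of a "forgetful" TCP proxy: put it between client and server, and
    // after IDLE_SIMULATION_MS it silently stops forwarding bytes in either
    // direction (without closing anything), much like a middlebox that has
    // dropped its state for the connection.
    public class ForgetfulProxy {
        static final int LISTEN_PORT = 9000;           // client connects here
        static final String SERVER_HOST = "server";    // real server (placeholder)
        static final int SERVER_PORT = 9001;
        static final long IDLE_SIMULATION_MS = 30_000; // "forget" after 30s, not 60 min

        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(LISTEN_PORT)) {
                Socket client = listener.accept();
                Socket server = new Socket(SERVER_HOST, SERVER_PORT);

                AtomicBoolean forgotten = new AtomicBoolean(false);
                new Thread(() -> pump(client, server, forgotten)).start();
                new Thread(() -> pump(server, client, forgotten)).start();

                Thread.sleep(IDLE_SIMULATION_MS);
                forgotten.set(true); // from now on, bytes go into a black hole
                System.out.println("Proxy has 'forgotten' the connection");
            }
        }

        private static void pump(Socket from, Socket to, AtomicBoolean forgotten) {
            try {
                InputStream in = from.getInputStream();
                OutputStream out = to.getOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    if (!forgotten.get()) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                    // When "forgotten", silently discard the data.
                }
            } catch (Exception ignored) {
            }
        }
    }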
Thank you very much,
Bill S
I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend somewhat on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance were not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be rearranged into order; you also have to deal with missing frames at the codec level, etc. Much less network traffic, and friendly to bandwidth-constrained network links, but significantly more difficult to deal with when it comes to traversing firewalls, in client software, and so on.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
It does sound, though, like you have a conflict between the need to communicate between client and server and the need to process something else whilst waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy, I strongly suggest you take a look at ZeroMQ - it will make everything a whole lot simpler.
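As a rough sketch of that shape in Java, assuming the JeroMQ binding of ZeroMQ and a matching ZeroMQ socket on the server side (the endpoint and message contents are illustrative):

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class GameClient {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                // DEALER lets us fire off messages and poll for replies without
                // blocking the rest of the client on each exchange.
                ZMQ.Socket socket = ctx.createSocket(SocketType.DEALER);
                socket.connect("tcp://localhost:5555"); // illustrative endpoint

                socket.send("player-moved:12,34");

                ZMQ.Poller poller = ctx.createPoller(1);
                poller.register(socket, ZMQ.Poller.POLLIN);

                while (!Thread.currentThread().isInterrupted()) {
                    // Wait up to 50 ms for server traffic, then go do other work.
                    if (poller.poll(50) > 0 && poller.pollin(0)) {
                        String msg = socket.recvStr();
                        System.out.println("server said: " + msg);
                    }
                    // ... update game state, render, etc.
                }
            }
        }
    }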
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
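A sketch of the server side of that, with an arbitrary one-line-per-request protocol and an arbitrary 30-second read timeout:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class RequestLoopServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(5000)) {
                while (true) {
                    Socket client = listener.accept();
                    new Thread(() -> serve(client)).start();
                }
            }
        }

        // Handle many requests on one connection; close it only when the client
        // goes quiet for too long or closes its end.
        private static void serve(Socket client) {
            try (Socket s = client;
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {

                s.setSoTimeout(30_000); // read timeout: no request within 30s -> close

                String request;
                while ((request = in.readLine()) != null) {
                    out.println("echo: " + request); // placeholder request handling
                }
            } catch (SocketTimeoutException idle) {
                // Idle client: drop the connection, as described above.
            } catch (Exception e) {
                // Broken connection, etc.
            }
        }
    }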
I am using Elasticsearch 1.5.1 and Tomcat 7. The web application creates a TCP client instance as a singleton during server startup through the Spring Framework.
I just noticed that I failed to close the client during server shutdown.
Through analysis with various tools such as VisualVM, JConsole, and MAT in Eclipse, it is evident that the threads created by the Elasticsearch client are still live even after the server (Tomcat) is shut down.
Note: after introducing client.close() via a context listener's destroy method, the threads are killed gracefully.
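For context, the cleanup I added looks roughly like this (a simplified sketch; the listener class name and the way the client bean is looked up are mine):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import org.elasticsearch.client.Client;
    import org.springframework.web.context.support.WebApplicationContextUtils;

    // Closes the singleton Elasticsearch client when Tomcat shuts the webapp
    // down, so its threads don't linger.
    public class ElasticsearchShutdownListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // Nothing to do on startup; Spring creates the client.
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            Client client = WebApplicationContextUtils
                    .getWebApplicationContext(sce.getServletContext())
                    .getBean(Client.class);
            client.close();
        }
    }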
But my questions here are:
How do I check the memory occupied by these live threads?
What is the memory-leak impact of these threads?
We have had a few "OutOfMemoryError: PermGen space" errors in PROD. This might be a reason, but I would still like to measure it and provide stats.
Any suggestions/help, please.
Typically, clients run in a different process than the services they communicate with. For example, I can open a web page in a web browser, then shut down the web server, and the client will remain open.
This has to do with the underlying design choices of TCP/IP. Glossing over the details, in most cases a client only detects that its server is gone during the next request to the server. (Again, generally speaking) it does not continually poll the server to see if it is alive, nor does the server generally send a "please disconnect" message when shutting down.
The reason that clients don't generally poll servers is because it allows the server to handle more clients. With a polling approach, the server is limited by the number of clients running, but without a polling approach, it is limited by the number of clients actively communicating. This allows it to support more clients because many of the running clients aren't actively communicating.
The reason that servers typically don't send an "I'm shutting down" message is that many times the server goes down uncontrollably (power outage, operating system crash, fire, short circuit, etc.). This means that a protocol which requires such a message will leave the clients in a corrupt state if the server goes down in an uncontrolled manner.
So losing a connection is really a function of a failed request to the server. The client will still typically be running until it makes the next attempt to do something.
Likewise, opening a connection to a server often does very little by itself. To validate that you really have a working connection to a server, you must ask it for some data and get a reply. Most protocols do this automatically to simplify the logic; but if you ever write your own service and you don't ask for data from the server, then even if the API says you have a good "connection", you might not. The API can report a good "connection" merely because everything is configured successfully on your machine. To really know that it works 100% with the other machine, you need to ask for data (and get it).
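A bare-bones sketch of that "ask for data" check, with a made-up PING/PONG exchange and an arbitrary reply timeout:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class HealthProbe {
        // Returns true only if the server actually answers, not merely if the
        // connection "looks" open.
        public static boolean isServerAlive(String host, int port) {
            try (Socket socket = new Socket(host, port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {

                socket.setSoTimeout(2000);   // don't wait forever for the reply
                out.println("PING");         // made-up application-level request
                return "PONG".equals(in.readLine());
            } catch (Exception e) {
                return false;                // no data came back: treat as not working
            }
        }
    }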
Finally, servers sometimes lose their clients too; but because they don't waste bandwidth chattering with clients just to see whether they are there, servers will often put a "timeout" on the client connection. Basically, if the server doesn't hear from the client in 10 minutes (or whatever value is configured), it discards the cached connection information for the client (recreating it as necessary if the client comes back).
From your description it is not clear which of the scenarios you might be seeing, but hopefully this general knowledge will help you understand why after closing one side of a connection, the other side of a connection might still think it is open for a while.
There are ways to configure the network connection to report closures more promptly, but I would avoid using them unless you are willing to lose a fair amount of your network bandwidth to keep-alive messages and to accept that your servers may not respond as quickly as they otherwise could.
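For completeness, the knobs in question on a plain Java socket look like the sketch below (the values are arbitrary); as said above, use them with care:

    import java.net.Socket;

    public class SocketTuning {
        public static void tune(Socket socket) throws Exception {
            // Ask the OS to probe an idle connection periodically; the probe
            // interval is OS-controlled and often measured in hours by default.
            socket.setKeepAlive(true);

            // Fail a blocked read after 15 seconds instead of waiting indefinitely.
            socket.setSoTimeout(15_000);
        }
    }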
I'd like to implement a real-time messaging feature, such as the chat in Facebook, but several questions confuse me:
1. To reduce server overhead and make it truly 'real-time', should I use a full-duplex form of communication such as a socket instead of Ajax? Is that right?
2. If I use a socket, which protocol should I choose, TCP or UDP?
3. Assuming I am using TCP, will the server keep trying to resend lost packets, and would that add much overhead?
4. What if the network fails during communication between the server and a client? Will the socket close itself, or should I handle the various kinds of network failure myself?
Can anyone help?
You can use WebSockets. XMLHttpRequest is probably obsolete now for anything real-time (because it's not real-time), though you could fall back to it for people whose browsers don't support WebSockets.
Use UDP if the information you are sending is only valid at the moment it is sent; in games, for example, that would be the position of the players (you don't care to receive the position they were in 5 seconds ago). Besides, you can't use UDP with WebSockets.
For anything other than that, use TCP (unless you do hole punching to achieve P2P), because loss of data is probably bad for you, and TCP handles that.
You would have to check for and resend lost data manually with UDP anyway, unless failure in communication is acceptable to you.
You will get an IOException. If the connection was closed improperly, the exception will be thrown after a period of unresponsiveness whose timeout you can change according to your needs. This assumes you use TCP; otherwise you have to decide for yourself when you consider clients connected or disconnected, according to the responses/data you receive (or don't receive).
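If you go the WebSockets route from point 1, a minimal server endpoint might look like the sketch below, assuming a container that provides the javax.websocket API (the /chat path and the broadcast logic are purely illustrative):

    import java.io.IOException;
    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;
    import javax.websocket.OnClose;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Each connected browser keeps one full-duplex connection open; the server
    // can push chat messages to everyone without waiting to be polled.
    @ServerEndpoint("/chat")
    public class ChatEndpoint {
        private static final Set<Session> sessions = new CopyOnWriteArraySet<>();

        @OnOpen
        public void onOpen(Session session) {
            sessions.add(session);
        }

        @OnMessage
        public void onMessage(String message, Session sender) throws IOException {
            for (Session s : sessions) {
                if (s.isOpen()) {
                    s.getBasicRemote().sendText(message); // broadcast to everyone
                }
            }
        }

        @OnClose
        public void onClose(Session session) {
            sessions.remove(session);
        }
    }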
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fail for any reason, my app needs to continually try to reconnect until I specify otherwise. Any messages getting lost is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other bits of the app are high-load.
I'm looking for some sort of socket controller / manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.), I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However, I'm uncertain of the best strategies for ensuring that as few packets as possible are lost, and I assumed this would be a "solved" problem in one library or another.
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout-guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do...)
I have no control over the servers I connect to, so unless there's another way to adapt JMS to a generic TCP stream, I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS endpoint on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have the potential for message loss.
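To make the "the other end has to cooperate" point concrete, here is a sketch of the kind of acknowledgement scheme involved, with made-up framing and class names; it only helps if you control the protocol on both ends, which isn't the case here:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of an application-level acknowledgement scheme. Both ends must
    // speak this protocol, so it can't be bolted onto servers you don't control.
    class ReliableSender {
        private long nextSeq = 0;
        private final List<String> unacknowledged = new ArrayList<>();

        // Every message carries a sequence number.
        String frame(String payload) {
            String framed = (nextSeq++) + "|" + payload;
            unacknowledged.add(framed);
            return framed;
        }

        // On reconnect, the receiver tells us the last sequence it stored;
        // we replay everything after that.
        List<String> replayAfter(long lastSeqSeenByReceiver) {
            List<String> toResend = new ArrayList<>();
            for (String framed : unacknowledged) {
                long seq = Long.parseLong(framed.substring(0, framed.indexOf('|')));
                if (seq > lastSeqSeenByReceiver) {
                    toResend.add(framed);
                }
            }
            return toResend;
        }

        // The receiver periodically acknowledges; we can drop older messages.
        void acknowledge(long upToSeq) {
            unacknowledged.removeIf(framed ->
                Long.parseLong(framed.substring(0, framed.indexOf('|'))) <= upToSeq);
        }
    }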