Use of CyclicBarriers for ongoing communication between server and client(s) - java

I've set up sockets for communication between a server and client, and I have threads running on the server to handle multiple client connections. I'm now sending byte arrays between server and client for data; however, I'm thinking of implementing cyclic barriers to make the server wait for a specific number of clients to connect before a different message is sent to each client.
This communication and waiting will need to be ongoing: for example, once this threshold of client connections is reached and the message sent out, the server should wait again for a message to come back from each client, probably a different message. This should continue for at least a few iterations. If I implement cyclic barriers for this process, would that be the best solution?
Is this the intended use of cyclic barriers or would there be a better alternative to my idea?
To keep it simple, I intend to wait for 2 clients to connect. There will also be timeout conditions enforced to deal with possible failure.
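That is a reasonable fit for CyclicBarrier: a barrier whose party count equals the number of expected clients can be reused round after round, and the timed await() covers the timeout condition. A minimal sketch of the idea, assuming two clients, three rounds and an arbitrary port (the actual message I/O is elided):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

public class BarrierServer {
    // Two parties per round; the barrier action runs once each time both
    // client handlers have arrived, i.e. once per "round" of the exchange.
    private static final CyclicBarrier ROUND =
            new CyclicBarrier(2, () -> System.out.println("both clients ready, next round"));

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {     // port is arbitrary
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try {
            for (int round = 0; round < 3; round++) {            // "a few iterations"
                // ... read this client's message for the round ...

                // Wait until the other client's handler reaches the same point;
                // the timeout stops a dead client from blocking the round forever.
                ROUND.await(3, TimeUnit.SECONDS);

                // ... send this client its (possibly different) reply ...
            }
        } catch (Exception e) {
            // TimeoutException or BrokenBarrierException: the round failed for
            // both waiters; reset so a later round can start cleanly.
            ROUND.reset();
        } finally {
            try { client.close(); } catch (IOException ignored) { }
        }
    }
}
```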

Related

Is it a good idea to destroy sockets after a single use?

I've been looking into making a simple Sockets-based game in Java, and read in multiple places that client sockets are destroyed after a single exchange. Is this good practice for continued connections? The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response. I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client? I've seen so many conflicting websites about sockets in Java and how they should be implemented.
There are no hard and fast rules, but it does depend slightly on what data rates you want to achieve.
For example, YouTube is a streaming video service, but the video data is delivered by means of the client using HTTPS to fetch batches of video data. Inefficient, yes, but very easy to program for. There are lots of reasons to use HTTPS for an application like YouTube (firewalls, etc.), but ultimate power saving and network performance were not among them. The "proper" way would be to use a protocol like RTP, which uses UDP to deliver small packets of data that can then be rearranged into order; you also have to deal with missing frames at the codec level, etc. Much less network traffic and friendly to bandwidth-constrained network links, but significantly more difficult to deal with when traversing firewalls, in client software, and so on.
So if your game is sending modest amounts of data, the only thing wrong with setting up and tearing down a whole socket connection for every message is the nagging feeling you yourself will have that it is somehow not the most efficient solution.
It sounds, though, like you have a conflict between the need to communicate between client and server and a need to process something else whilst waiting for the communication to complete. Here you're getting into asynchronous I/O territory. To make that easy, I strongly suggest you take a look at ZeroMQ - it will make everything a whole lot simpler.
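For reference, a request/reply round trip in ZeroMQ looks roughly like the sketch below; it assumes the JeroMQ Java binding (org.zeromq:jeromq) and an arbitrary endpoint:

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ZmqEchoServer {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // REP socket: receive one request, send exactly one reply, repeat.
            ZMQ.Socket socket = ctx.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");             // placeholder endpoint
            while (!Thread.currentThread().isInterrupted()) {
                byte[] request = socket.recv(0);     // blocks until a request arrives
                // ... process request ...
                socket.send("ack");                  // ZeroMQ handles framing and reconnects
            }
        }
    }
}
```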
and read in multiple places that client sockets are destroyed after a single exchange.
Only in the places where that actually happens. There are numerous contexts where it doesn't, the outstanding example being HTTP, where every effort is made to reuse connections.
Is this good practice for continued connections?
The question is a contradiction in terms. A continued connection is a connection that isn't closed. A closed connection can't be continued.
The server needs to maintain a connection with a client (i.e. not using socket.accept() every time it wants to tell a client about something), but can't wait every time for the client's response.
The word you are groping for here is 'session'.
I already have the server/client running in separate threads, but won't destroying the socket after every exchange mean re-acquiring (or failing to re-acquire) a connection to that client?
Yes.
I've seen so many conflicting websites about sockets in Java and how they should be implemented.
You should use a connection pool at the client; a request loop at the server that looks for multiple requests per connection; a client-side facility that closes idle connections after some idle timeout; and a read timeout at the server that closes connections on which no request has been read within the timeout.
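As a sketch of the server side of that advice, here is a request loop that serves many requests per connection and drops the connection when the read timeout fires; the 30-second timeout and the line-based protocol are only placeholders:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.net.SocketTimeoutException;

class ConnectionHandler implements Runnable {
    private final Socket socket;

    ConnectionHandler(Socket socket) { this.socket = socket; }

    @Override
    public void run() {
        try (Socket s = socket;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            s.setSoTimeout(30_000);                        // read timeout: 30 s without a request
            String request;
            while ((request = in.readLine()) != null) {    // many requests per connection
                out.println("echo: " + request);           // placeholder processing
            }
            // readLine() returned null: the client closed the connection itself
        } catch (SocketTimeoutException idle) {
            // no request within the timeout: close the idle connection
        } catch (IOException e) {
            // client went away or a network error occurred
        }
    }
}
```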

Java Sockets - Messages between many clients

So the problem is I have fifteen clients which need to be able to communicate with each other. My question is: how should this be done? Clearly one way is to simply make the clients also act as servers, but that means 105 unique connections to fully connect the fifteen clients. I'd rather not do this as it seems messy.
Current solution:
Each new connection has the server spin off a separate thread for listening to it. Each client has a separate thread monitoring the channel for incoming information.
Server acts as a message router: Process 1 needs to send a message to Process 2 and sends a message to the server indicating intended recipient, sender, and message.
Upon receiving the message, the server passes it to Process 2. The listening thread detects it and passes it to the process.
So on for each message between the clients.
This seems clunky. Is there a better methodology/package to use for this?
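For what it's worth, the routing scheme described above condenses to something like the sketch below; the framing (recipient id, then payload) and the class names are assumptions, and error handling is minimal:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class MessageRouter {
    // client id -> output stream of that client's socket
    private final Map<Integer, DataOutputStream> clients = new ConcurrentHashMap<>();

    void register(int clientId, Socket socket) throws IOException {
        clients.put(clientId, new DataOutputStream(socket.getOutputStream()));
        new Thread(() -> listen(clientId, socket)).start();
    }

    private void listen(int senderId, Socket socket) {
        try (DataInputStream in = new DataInputStream(socket.getInputStream())) {
            while (true) {
                int recipient = in.readInt();             // intended recipient
                String payload = in.readUTF();            // the message itself
                DataOutputStream out = clients.get(recipient);
                if (out != null) {
                    synchronized (out) {                  // one writer at a time per target socket
                        out.writeInt(senderId);
                        out.writeUTF(payload);
                        out.flush();
                    }
                }
            }
        } catch (IOException e) {
            clients.remove(senderId);                     // client disconnected
        }
    }
}
```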
A UDP multicast system would work for this, but it will get complicated for you to do yourself (since you have to worry about synchronization and fault detection/correction yourself, as well as nodes dropping in and out of the group).
There are various middleware solutions including distributed caches that already address this problem pretty well. Look at Infinispan. If that's too high level and you just want a lower level solution, try JGroups. I only list those because I know they are quick and usable, but there are many others out there.
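If JGroups fits, basic usage looks roughly like this sketch (it assumes the JGroups 3.x/4.x-style API with ReceiverAdapter; the cluster name is arbitrary):

```java
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class GroupChat {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();                 // default protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println(msg.getSrc() + ": " + msg.getObject());
            }
        });
        channel.connect("game-cluster");                   // all fifteen clients join this group
        channel.send(new Message(null, "hello everyone")); // null destination = broadcast
        // new Message(someAddress, obj) targets a single member instead
        channel.close();
    }
}
```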

How to identify a broken socket connection in Java immediately?

I have a typical java client and a server. The client sends some request to the server and waits for the response. The client reads up to say 100 bytes of data from the contained input stream into an array of bytes. It waits for the complete response of 100 bytes to be read within a specified timeout period of say 3 secs. The problem here is to identify if the server went down or crashed while/before writing the response. Basically, we need to identify if the socket was broken or the peer disconnected for some reason. Is there a way to identify this?
How to identify a broken socket connection in Java immediately?
You can't detect it immediately, in Java or any other language. TCP/IP doesn't know, so Java can't know. The only sure way to detect a broken TCP connection is by writing to it and catching IOExceptions, and they won't happen immediately.
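In code, that boils down to a probe like this hedged sketch; note that the peer's protocol has to tolerate the probe byte, and a freshly broken link often only fails on the second write (the first one merely provokes the RST):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

final class ConnectionProbe {
    /** Returns false only once a write has already failed; true is not a guarantee. */
    static boolean stillConnected(Socket socket) {
        try {
            OutputStream out = socket.getOutputStream();
            out.write(0);   // probe byte: the peer's protocol must tolerate/ignore it
            out.flush();
            return true;    // no exception yet; a broken link may only show on a later write
        } catch (IOException broken) {
            return false;   // the write failed: the connection is definitely gone
        }
    }
}
```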
The best way to identify that the connection is down is to time out the connection, i.e. you expect a response within a given amount of time and flag the connection as down if that response does not come as expected.
When you have a graceful disconnection (e.g. the other end calls close()), the read on the connection will let you know once the buffer has been drained.
However, if there is some other type of failure, you might not be notified until the OS times out the connection (e.g. after 3 minutes), and indeed, you may want to keep the connection. For example, if you pull the network cable out for 10 seconds and put it back in, that doesn't need to be treated as a failure.
EDIT: I don't believe it's a good idea to be too aggressive in automatically handling connection/service "failures". This is usually better handled by a planned fix to the system, based on investigation of the true cause, e.g. increased bandwidth, redundant connectivity, faster servers, code fixes.
If the connection is broken abnormally, you will receive an IOException when reading; that normally happens quite fast, but there are no guarantees about the time - it all depends on the OS, network hardware, etc. If the remote end gracefully closes the socket, you'll read -1 as the next byte.
Assuming everything else works, if the remote peer - the TCP server - was killed then the TCP client will normally receive a TCP RST (reset) and you'll get an IOException in your client application.
However, there are lots of other things that can go wrong besides a process being killed. Basically anything on the network path between the two processes: a cable is yanked, a router dies, a firewall dies, etc. All of this will not immediately be detected.
For the above reasons the general rule is - as pointed out in the answer from EJP - that a broken connection can only be detected by writing to it. This is why it is always recommended that a TCP client and TCP server exchange some type of heartbeat messages at regular intervals. There are different ways to do this. I like best the method where the TCP client will - in the absence of data being received from the TCP server - send a heartbeat message to the server and expect a reply back within a certain time period. This way heartbeat messages will only be sent when really needed.
A sub-optimal approach - if you cannot implement true heartbeating - is to always read with a timeout. Set the timeout on the socket and then catch java.net.SocketTimeoutException. This will allow you to know that no data has been received on socket during x milliseconds.
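A rough sketch combining the two ideas: read with a timeout, and send a heartbeat only when the line has gone quiet. The 5-second timeout, the PING message, and the missed-beat threshold are all invented for illustration:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class HeartbeatClient {
    private static final int MAX_MISSED = 2;   // invented threshold

    void readLoop(Socket socket) throws IOException {
        socket.setSoTimeout(5_000);            // nothing received for 5 s -> send a heartbeat
        DataInputStream in = new DataInputStream(socket.getInputStream());
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        int missed = 0;
        while (true) {
            try {
                String msg = in.readUTF();     // application data or the server's heartbeat reply
                missed = 0;                    // anything received proves the link is alive
                // ... handle msg ...
            } catch (SocketTimeoutException quiet) {
                if (++missed > MAX_MISSED) {
                    throw new IOException("peer unresponsive, closing connection");
                }
                out.writeUTF("PING");          // heartbeat only when the line is idle
                out.flush();                   // a broken link also fails here with IOException
            }
        }
    }
}
```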
It should be mentioned that there's one scenario where you don't have to use heartbeating, nor a socket timeout: if the TCP client and the TCP server communicate over a loopback interface, then a broken connection will always be propagated to both the TCP client application and the TCP server application. This is because, in this case, there's really no network infrastructure between the two processes. So if you have an existing application which isn't well designed with respect to its TCP communication (i.e. it doesn't implement some form of heartbeating or at least reading with a timeout), then as a last resort you may 'fix' the problem by moving the two applications onto the same host and letting them communicate over the loopback interface.

How could I quickly know if a server is online?

I'm developing a Java client/server application in which there will be a great number of servers to which the clients will have to connect. The problem is that probably the vast majority of them will not be serving at the same time. The client needs to find at least one available server in the list, so it will iterate over it, looking for an available server (when it finds the first one it stops; one is enough).
The problem is that the list will probably be long, tens of thousands, maybe even hundreds of thousands... and it may happen that only 1% of them are connected (i.e. executing the server). That's why I need a clever and fast way to know if a server is connected, without waiting for time-outs or so. I accept all kinds of suggestions.
I have thought about ordering the server list statistically, so that the servers that are available more often are the first hosts attempted. But this is not enough.
Perhaps multicasting UDP datagrams? The connections between clients/servers are TCP, but perhaps to find a server it's better to do a UDP multicast first and wait for the answer, for example... what do you think?
:)
EDIT:
Both the server and client use thread pools.
The server pool handles 200 threads concurrently, and when the pool is full, queues the rest until the queue is 200 runnables long. Then it blocks and stops accepting connections until there is free room in the queue again.
The client has a cached thread pool; it can make all the requests to the server you want, concurrently (within reason, obviously...).
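For reference, a ThreadPoolExecutor configured roughly as described above might look like this; the sizes are the ones quoted, and using CallerRunsPolicy to make the accepting thread pause when the pool and queue are both full is only one possible reading of "then it blocks":

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class Pools {
    // 200 concurrent handlers, at most 200 queued connections; when both are full,
    // CallerRunsPolicy makes the accepting thread run the task itself, which in
    // effect pauses accept() until room frees up again.
    static final ExecutorService SERVER_POOL = new ThreadPoolExecutor(
            200, 200, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(200),
            new ThreadPoolExecutor.CallerRunsPolicy());

    // Client side: unbounded cached pool, threads created on demand and reused.
    static final ExecutorService CLIENT_POOL = Executors.newCachedThreadPool();
}
```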
This is just an initial thought and would add some overhead, but you could have the servers periodically ping some centralized server which the clients would connect through. Then, if a server doesn't ping for some set time, it gets removed from the list.
You might want to use a peer-to-peer network.
Have a look at JXTA/JXSE:
http://jxse.kenai.com/index.html
If it is your own code which is running on each of these servers, could you send an "I'm alive" message to a central server (which is controlled by you and is guaranteed to be up at all times)? The central server can then maintain an updated list of all servers which are active. The client just needs a copy of this list from the central server and can then start whatever communication it needs.
Sounds like a job for threads. You cannot speed up the connection; it takes time to contact the server.
IMHO, the best way is to get a few hundred threads to march through the list of servers. The first one to find a server alive wins. Then signal the other threads to die out.
Btw, did you really mean to order the server list "sadistically"? :)
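One way to sketch that idea: submit one connect attempt per host to a pool and take the first success via an ExecutorCompletionService (the pool size, connect timeout, and host list are placeholders):

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class ServerFinder {
    /** Returns a socket to the first reachable server, or null if none respond. */
    static Socket findFirstAlive(List<InetSocketAddress> servers) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(200);   // "a few hundred" threads
        CompletionService<Socket> results = new ExecutorCompletionService<>(pool);
        for (InetSocketAddress addr : servers) {
            results.submit(() -> {
                Socket s = new Socket();
                s.connect(addr, 500);        // short per-host connect timeout (placeholder)
                return s;
            });
        }
        try {
            for (int i = 0; i < servers.size(); i++) {
                try {
                    return results.take().get();     // first successful connect wins
                } catch (ExecutionException down) {
                    // this host was unreachable or timed out; keep waiting for the others
                }
            }
            return null;                             // nobody answered
        } finally {
            pool.shutdownNow();                      // signal the remaining attempts to die out
        }
    }
}
```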

Client Server communication in Java - which approach to use?

I have a typical client server communication - Client sends data to the server, server processes that, and returns data to the client. The problem is that the process operation can take quite some time - order of magnitude - minutes. There are a few approaches that could be used to solve this.
1. Establish a connection and keep it alive until the operation is finished and the client receives the response.
2. Establish a connection, send data, close the connection. Now the processing takes place, and once it is finished the server establishes a connection to the client to send the data.
3. Establish a connection, send data, close the connection. Processing takes place. The client asks the server every n minutes/seconds if the operation is finished. If the processing is finished, the client fetches the data.
I was wondering which approach would be the best way to use. Is there maybe some "de facto" standard for solving this problem? How "expensive" is opening a socket in Java? Solution 1. seems pretty nasty to me, but 2. and 3. could do. The problem with solution 2. is that the server needs to know on which port the client is listening, while solution 3. adds some network overhead.
1. is good enough.
2. will not work in many situations, for example when the client is behind a firewall, NAT, and so on. A server usually accepts incoming connections from everywhere; desktops usually do not.
3. is better than 1, if only because you won't have problems when the connection is lost.
A combination of 1 and 3: make a long-waiting connection, with a periodic sleep and reconnect afterwards. I mean: connect to the server, wait 30 sec for data; if no data is received, sleep for 10 sec, then loop.
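A rough sketch of that connect/wait/sleep loop (the 30 s read timeout and the 10 s pause are the values mentioned above; host, port, and wire format are placeholders):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class PollingClient {
    void run(String host, int port) throws InterruptedException {
        while (true) {
            try (Socket socket = new Socket(host, port)) {
                socket.setSoTimeout(30_000);               // wait up to 30 s for data
                DataInputStream in = new DataInputStream(socket.getInputStream());
                String msg = in.readUTF();                 // placeholder wire format
                // ... handle msg, then reconnect immediately for the next one ...
                continue;
            } catch (SocketTimeoutException noData) {
                // nothing arrived within 30 s
            } catch (IOException e) {
                // server unreachable or connection dropped
            }
            Thread.sleep(10_000);                          // sleep 10 s before the next attempt
        }
    }
}
```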
Opening sockets is sometimes expensive, but not nearly as expensive as your data processing.
I see an immediate problem with option 2. If the client is behind a firewall, he might very well be allowed to connect and make the request, but the server might be prevented from connecting back to the client.
As you say, option 1 looks a bit nasty (not too nasty though, could work well), so among the options listed, I would go for option 3. Perhaps the server could estimate the time that's left of the processing, and hint the client, in each poll, of when it's about time to check back.
