I am facing a little problem with sockets.
This method takes about 100 ms or even more, depending on the server:
socket.connect(dest);
Then I communicate through DataInput/Output streams with a fairly sophisticated piece of software, so there is a query phase, a handshake phase, a login request phase, and so on.
Is there any way I can "reset" the data stream after the handshake phase, so that the server forgets everything and the socket is back in the first phase, without calling socket.connect(dest); again?
Thanks.
This is entirely protocol dependent; it has nothing to do with sockets per se.
There is nothing stopping you from passing as many messages as you like back and forth through a socket, except perhaps your protocol (or the lack of a clearly defined one) if it doesn't indicate where a message starts and ends.
When using a DataInput/OutputStream you could simply define a Message class containing whatever data you need, and have both sides run in a loop: read a Message, process it, and possibly generate a response Message.
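A minimal sketch of that idea, assuming a simple length-prefixed framing (the field names and type constants are made up for illustration):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class Message {
    // Hypothetical message types; a protocol-level "reset" could simply be one more type.
    static final int QUERY = 1;
    static final int HANDSHAKE = 2;
    static final int LOGIN = 3;

    final int type;
    final byte[] payload;

    Message(int type, byte[] payload) {
        this.type = type;
        this.payload = payload;
    }

    // Write the type, a length prefix, and then the payload bytes.
    void writeTo(DataOutputStream out) throws IOException {
        out.writeInt(type);
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Read exactly one message framed the same way.
    static Message readFrom(DataInputStream in) throws IOException {
        int type = in.readInt();
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        return new Message(type, payload);
    }
}

Both sides then just loop: read a Message, act on its type, and possibly write one back. A "reset" becomes one more message type that the server handles by discarding its per-connection state.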
Related
How can I be sure that data is successfully delivered to the other end in socket programming?
outStream.write() doesn't guarantee that the bytes are received on the other end. I can force the server to send back some confirmation data, but how long should the client wait for it? If I wait too briefly, the data may be delivered to the server just as I throw a timeout exception in the client (which then shows an error dialog even though the server actually received the data). On the other hand, I don't want to wait too long.
Should the client wait some time and, if confirmation is received, send a third "commit" message to the server, which then passes the data on for further processing (so first the client writes, then the server replies, and then the client confirms)? But then again, if the commit message is not received by the server, the client thinks the data was sent successfully while the server will eventually ignore it, because it never received the commit message. And so on; the bouncing never ends...
How is this situation generally handled?
Every tutorial I have read just covers creating/closing sockets, sending data on the client side and receiving it on the server side.
If you have links to blogs (or even books) that explain this problem, that would be good too.
[EDIT]
I should clarify some things. I'm using Java for the client and server, and later I will create a C# client. Everything is working perfectly for now. Both client and server are on the same LAN and I have never had any real problems. The scenario explained above is purely theoretical, because I would like to cover as much as possible, including error handling.
I know TCP guarantees delivery, but in Java, out.write() doesn't block until the underlying TCP stack delivers the data or fails; it just continues execution, and I don't know whether sending failed or not. There is no callback. I'm just starting with socket programming, so maybe there is a very simple solution I don't know about. All I need is for the client to know that the server received the message (if that is even possible).
If you have this kind of extreme need for reliability, you need to build that into your application and protocol. One way I have done that in the past is as follows.
Say you have a stream of "objects" (objects here defined in whatever way makes sense to your application) that need to be communicated from client C to server S. Associate a unique identifier with each object on the client side. Then have C send each object along with its identifier to S. But have C keep its copy of the object for now (in memory, or on disk, or whatever makes sense).
For each object S receives, it stores the object together with its unique identifier in its own local data store, and sends back an acknowledgment to C that it received the object (using the identifier to communicate that). C can now delete that object from its data store (strictly speaking it can delete all the ones it sent prior to that object as well -- since TCP guarantees sequenced delivery -- but that slightly complicates things).
This process can continue indefinitely and C never needs to explicitly wait for a confirmation for any one object. It simply maintains a local copy of each object. As long as the connection stays up, S will continually acknowledge every object it has received.
If the connection is broken for any reason, C assumes that S has not received any object it sent since the most recently received acknowledgment. When the connection is re-established, C may therefore resend a few objects that S previously received but since S stored the unique identifier along with each object, it simply acknowledges again that it received the object.
If S hangs for some reason, then eventually buffers between client and server will fill up and C's send will block. The client may need to be prepared for this eventuality.
At the end of the stream of objects -- if there is an end -- C will need to wait for the last object to be acknowledged. There's simply no way around that, and so you will need to decide how long it's appropriate to wait before C gives up and declares an error.
(Of course, this is all essentially duplicating at the application layer what TCP is doing at the transport layer: acknowledging what was actually received with the ability for the sender to re-transmit anything that was lost.)
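A rough client-side sketch of the scheme above, assuming a simple id-plus-length framing (all of the names here are invented, not taken from the answer itself):

import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

class ReliableSender {
    private final Map<Long, byte[]> unacked = new ConcurrentHashMap<>();
    private final AtomicLong nextId = new AtomicLong();

    // Send one object, keeping a local copy until the server acknowledges its id.
    void send(DataOutputStream out, byte[] object) throws IOException {
        long id = nextId.getAndIncrement();
        unacked.put(id, object);
        writeObject(out, id, object);
    }

    // Called whenever the server acknowledges an id; the copy can now be dropped.
    void onAck(long id) {
        unacked.remove(id);
    }

    // After a reconnect, resend everything not yet acknowledged. The server
    // stores ids with the objects, so duplicates are detected and simply re-acked.
    void resendUnacked(DataOutputStream out) throws IOException {
        for (Map.Entry<Long, byte[]> entry : unacked.entrySet()) {
            writeObject(out, entry.getKey(), entry.getValue());
        }
    }

    private void writeObject(DataOutputStream out, long id, byte[] object) throws IOException {
        out.writeLong(id);
        out.writeInt(object.length);
        out.write(object);
        out.flush();
    }
}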
TCP:
TCP guarantees delivery at layer 4 of the OSI model: it is built on acknowledgments, so the receiving side must confirm that each segment arrived. If delivery is failing, there is either something wrong in your code or your network is malfunctioning. If the data is not making it to its destination, make sure you have properly bound the TCP server to the port and that the destination address is correct. While waiting for data to arrive, make sure you have a receive timeout in place so that your application doesn't hang on the receive.
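For example (the port number here is arbitrary), binding the server explicitly and setting a receive timeout on the accepted socket keeps a blocked read from hanging forever:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ReceiveTimeoutExample {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {   // bind the server to its port
            Socket client = server.accept();
            // A blocking read that sees no data for 5 seconds now throws
            // SocketTimeoutException instead of hanging the application.
            client.setSoTimeout(5000);
            // ... read from client.getInputStream() as usual ...
        }
    }
}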
I am learning sockets and the server/client model and having a hard time understanding the concept. If a client sends a request, can the server send more than one response, or do we have to put everything in one response?
For a memory game program: when a client clicks a card, the action sends a request to the server so that the card is turned in every player's program. If the second card does not match, the server tells the players to wait 2 seconds, turns the two cards back, and then assigns the turn to the next player. Can a server do this in multiple responses, or does it have to do it in a single response? Since no client requests some of those responses, I don't know whether it is achievable.
If you're talking about TCP connections: once the connection has been established, client and server are equivalent; both are free to send data for as long and as much as they like and/or shut down their end of the connection.
Edit: after several passes I think I have understood what the second paragraph of your question is aiming for.
There is, of course, nothing that would stop the server from sending whenever it wants. What your server seems to do most of the time is block on an InputStream.read() operation. If you want the server to keep operating even when no network input arrives, one solution is to use a read timeout, or to check the input stream for readability before actually reading.
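A rough sketch of the read-timeout variant for the memory-game scenario (the class, method and message names are invented): the timeout lets the per-player handler wake up regularly, so the server can push a message even though no client asked for one.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

class PlayerHandler implements Runnable {
    private final Socket socket;

    PlayerHandler(Socket socket) {
        this.socket = socket;
    }

    public void run() {
        try {
            socket.setSoTimeout(250);   // wake up a few times per second even with no input
            DataInputStream in = new DataInputStream(socket.getInputStream());
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            while (true) {
                try {
                    String request = in.readUTF();     // e.g. "this player clicked card 7"
                    // ... update the shared game state and notify the other handlers ...
                } catch (SocketTimeoutException idle) {
                    // No request right now, but the server may still owe the client
                    // an unsolicited message, e.g. the delayed "turn the cards back".
                    if (cardsShouldTurnBack()) {
                        out.writeUTF("TURN_BACK");
                        out.flush();
                    }
                }
            }
        } catch (IOException disconnected) {
            // the player went away; drop out of the loop
        }
    }

    // Placeholder for a check against the shared game state.
    private boolean cardsShouldTurnBack() {
        return false;
    }
}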
This is not a complete answer to your question.
For one request, you get one response back.
Please read this information from Wikipedia for the basics:
"Request-response, also known as request-reply, is a message exchange pattern in which a requestor sends a request message to a replier system, which receives and processes the request, ultimately returning a message in response. This is a simple but powerful messaging pattern which allows two applications to have a two-way conversation with one another over a channel. This pattern is especially common in client-server architectures.
For simplicity, this pattern is typically implemented in a purely synchronous fashion, as in web service calls over HTTP, which holds a connection open and waits until the response is delivered or the timeout period expires. However, request-response may also be implemented asynchronously, with a response being returned at some unknown later time. This is often referred to as "sync over async", or "sync/async", and is common in enterprise application integration (EAI) implementations where slow aggregations, time-intensive functions, or human workflow must be performed before a response can be constructed and delivered."
I've built a simple Java program that works as a server locally.
At the moment it does a few things, such as previewing directories, forwarding to index.html if the directory contains it, sending a Last-Modified header, and responding properly to a client's If-Modified-Since request.
What I need to do now is make my program accept persistent connections. It's threaded at the moment, so each connection has its own thread. I want to put my entire thread code within a loop that continues until either a Connection: close header arrives or a specified timeout expires.
Does anybody have any ideas where to start?
Edit: This is a university project, and has to be done without the use of Frameworks.
I have a main method which loops indefinitely; each time it loops it creates a Socket object, and then an HTTPThread object (a class of my own creation) is created that processes the single request.
I want to allow multiple requests to work within a single connection, making use of the Connection: keep-alive request header. I expect to use a loop in my HTTPThread class; I'm just not sure how to pass multiple requests through it.
Thanks in advance :)
I assume that you are implementing the HTTP protocol code yourself starting with the Socket APIs, and that you are implementing the persistent-connections part of the HTTP spec.
You can put the code in the loop as you propose, and use Socket.setSoTimeout to set the timeout on blocking operations, and hence your HTTP timeouts. You don't need to do anything to reuse the streams for your connection ... apart from not closing them.
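A rough sketch of that loop, assuming a handleOneRequest method that stands in for your existing per-request code and returns false once it sees Connection: close:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

abstract class PersistentHttpThread implements Runnable {
    private final Socket socket;

    PersistentHttpThread(Socket socket) {
        this.socket = socket;
    }

    // Your existing per-request code; return false when the request carried
    // "Connection: close" (or anything else that should end the connection).
    protected abstract boolean handleOneRequest(InputStream in, OutputStream out)
            throws IOException;

    public void run() {
        try {
            socket.setSoTimeout(5000);                 // keep-alive idle timeout
            boolean keepAlive = true;
            while (keepAlive) {
                try {
                    keepAlive = handleOneRequest(socket.getInputStream(),
                                                 socket.getOutputStream());
                } catch (SocketTimeoutException idle) {
                    keepAlive = false;                 // nothing arrived in time; give up
                }
            }
        } catch (IOException e) {
            // client vanished mid-request; just fall through and close
        } finally {
            try {
                socket.close();
            } catch (IOException ignored) {
            }
        }
    }
}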
I would point out that there are much easier ways to implement a web server. There are many existing Java web server frameworks and application servers, or you could repurpose the Apache HTTP protocol stacks.
If it should act like a web service: open two sockets from the client side, one for requests and one for responses. Keep the sockets and streams open.
You need to define a separator to notify the other side that a transfer is over: a special bit string for a binary protocol, or a special character (usually a newline) for a text-based protocol (like XML).
If you are really trying to implement your own HTTP server, you should rather make use of a library that already implements the HTTP 1.1 connection keep-alive standard.
Some ideas to get you started:
This wikipedia article describes HTTP 1.1 persistent connections:
http://en.wikipedia.org/wiki/HTTP_persistent_connection
You want to keep the socket open, but close it after some period of inactivity (Apache 2.2 uses 5 seconds).
You have two ways to implement this:
1. In your thread, do not close the socket and do not exit the thread; instead, put a read timeout on the socket (whatever period you want to support). The read will block, and if the timeout expires you close the socket; otherwise you read the next request. The downside is that each persistent connection holds both a thread and a socket for up to your maximum wait period, so the solution doesn't scale because you're holding threads for too long (but it may be fine for a school project).
2. You can get around the limitation of (1) by maintaining a list of {socket, timestamp} tuples, having a background thread monitor and close connections that time out, and using NIO to detect a new read on an existing open socket. After you finish reading the initial request you just exit the thread (returning it to the thread pool). This is obviously more complicated, but it has the benefit of freeing up request threads; a sketch of the timeout bookkeeping follows.
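This is only a sketch (the class name is invented, and the NIO read-detection part is left out): worker threads record the last activity per socket, and a background reaper closes the stale ones.

import java.io.IOException;
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class IdleConnectionReaper implements Runnable {
    private final Map<Socket, Long> lastActivity = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    IdleConnectionReaper(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Worker threads call this after finishing a request on a connection they keep open.
    void touch(Socket socket) {
        lastActivity.put(socket, System.currentTimeMillis());
    }

    // Background thread: close and forget any connection that has been idle too long.
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long now = System.currentTimeMillis();
            for (Map.Entry<Socket, Long> entry : lastActivity.entrySet()) {
                if (now - entry.getValue() > timeoutMillis) {
                    try {
                        entry.getKey().close();
                    } catch (IOException ignored) {
                    }
                    lastActivity.remove(entry.getKey());
                }
            }
            try {
                Thread.sleep(1000);     // check roughly once per second
            } catch (InterruptedException stop) {
                return;
            }
        }
    }
}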
We have a simple client-server architecture between our mobile device and our server, both written in Java: an extremely simple ServerSocket and Socket implementation. However, one problem is that when the client terminates abruptly (without closing the socket properly), the server does not know that it is disconnected. Furthermore, the server can continue to write to this socket without getting any exceptions. Why?
According to the documentation, Java sockets should throw exceptions if you try to write to a socket that is not reachable on the other end!
The connection will eventually be timed out by the retransmission timeout (RTO). However, the RTO is calculated using a complicated algorithm driven by the measured network round-trip time (RTT); see this RFC:
http://www.ietf.org/rfc/rfc2988.txt
So on a mobile network this can take minutes. Wait 10 minutes and see if you get a timeout.
The solution to this kind of problem is to add a heartbeat to your own application protocol and tear down the connection when you don't get an ACK for the heartbeat.
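A naive sketch of such a heartbeat (the PING/PONG codes are invented; a real protocol would interleave heartbeats with normal traffic rather than own the whole stream):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class Heartbeat {
    static final int PING = 1;
    static final int PONG = 2;

    // Send a PING and wait up to timeoutMillis for the PONG; anything else
    // (timeout, EOF, reset) means the connection should be torn down.
    static boolean isAlive(Socket socket, int timeoutMillis) {
        try {
            socket.setSoTimeout(timeoutMillis);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());
            out.writeInt(PING);
            out.flush();
            return in.readInt() == PONG;   // SocketTimeoutException if no reply in time
        } catch (IOException deadOrTimedOut) {
            return false;
        }
    }
}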
The key phrase here is "without closing the socket properly".
Sockets should always be acquired and disposed of in this way:
final Socket socket = ...; // connect code
try
{
    use( socket );      // use the socket
}
finally
{
    socket.close( );    // dispose of it
}
Even with these precautions you should specify application timeouts, specific to your protocol.
My experience has shown that, unfortunately, you cannot rely on the Socket timeout functionality alone (e.g. there is no timeout for write operations, and even read operations may sometimes hang forever).
That's why you need a watchdog thread that enforces your application timeouts and disposes of sockets that have been unresponsive for a while.
One convenient way of doing this is to initialize the Socket and ServerSocket through the corresponding channels in java.nio. The main advantage of such sockets is that they are interruptible; that way you can simply interrupt the thread that runs the socket protocol and be sure that the socket is properly disposed of.
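For example (host and port are placeholders), a blocking connection created through a java.nio channel; blocking I/O performed on the channel is interruptible, and interrupting the blocked thread closes the channel together with its socket:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.channels.SocketChannel;

public class InterruptibleSocketExample {
    public static void main(String[] args) throws IOException {
        // If a watchdog calls workerThread.interrupt() while the worker is blocked
        // in channel I/O, the operation throws ClosedByInterruptException and the
        // channel (with its socket) is closed, so nothing is leaked.
        SocketChannel channel =
                SocketChannel.open(new InetSocketAddress("example.com", 80));
        Socket socket = channel.socket();   // plain Socket view, backed by the channel
        // ... hand the channel/socket to the protocol thread here ...
        channel.close();
    }
}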
Note that you should enforce application timeouts on both sides, as it is only a matter of time and bad luck before you experience an unresponsive socket.
TCP/IP communications can be very strange. TCP will retry for quite a while at the bottom layers of the stack without ever letting the upper layers know that anything happened.
I would fully expect that after some period (30 seconds to a few minutes) you would see an error, but I haven't tested this; I'm just going off how TCP applications tend to work.
You might be able to tighten the TCP parameters (retries, timeouts, etc.), but again, I haven't messed with that much.
Also, it may be that I'm totally wrong and the Java implementation you are using is just flaky.
To answer the first part of the question (about not knowing that the client has disconnected abruptly), in TCP, you can't know whether a connection has ended until you try to use it.
The notion of guaranteed delivery in TCP is quite subtle: delivery isn't actually guaranteed to the application at the other end (it depends on what guaranteed means really). Section 2.6 of RFC 793 (TCP) gives more details on this topic. This thread on the Restlet-discuss list and this thread on the Linux kernel list might also be of interest.
For the second part (not detecting the failure when you write to this socket), this is probably a question of buffering and timeouts (as others have already suggested).
I am facing the same problem.
I think when you register the socket with a selector it doesn't throw any exception.
Are you using a selector with your socket?
I am designing a client-server chat application in Java. This is a secure application where the messages are exchanged using cryptographic algorithms. I have one server, and it can support many clients. My problem is that when one client logs on to the server it works fine, but when another user logs into the system, the server starts giving me bad padding exceptions for the encrypted text.
I am not able to figure out the problem. According to my logic, when a new connection request reaches the server, the server creates a thread for listening to that client. Is it possible that once the instance of the thread class is created, it does all the processing correctly for the first client but not for the second, because the variables in the server's listener thread class still hold some previous values, and thus the encrypted text is not decrypted properly?
Please advise how I can make this process more robust, so that the number of clients does not affect how well the server functions.
Hi, the code is like this.
When the server starts:
Socket in = serverSocket.accept();
Receive rlt = new Receive(in);         // Receive implements Runnable
Thread receiveReq = new Thread(rlt);
receiveReq.start();
Now the Receive thread waits for incoming messages and processes them according to the message type. When more than one client is connected the server works fine; the problem starts when one client terminates and then tries to reconnect. The server always gives the errors in the following pattern:
The first time, a "hash not matched" error for the second client.
The second time, a javax.crypto.BadPaddingException: Given final block not properly padded error.
When this happens, I need to restart the server and restart both clients; only then do both clients work. But again, if one client terminates the connection and tries to reconnect, the same two errors occur in the same order, and I have to restart the server again.
Any advice will be highly appreciated.
Thanks
Don't share mutable data between threads. Use a functional style with no object state. If you really need to share some data between the threads, use message passing.
Check that you close connections properly.
You could use a real server like Jetty, which is very easy to install.
I can only guess at the reasons without seeing the full source code. I assume you are using CipherInputStream/CipherOutputStream for your encryption. You should use a separate instance of Cipher for each thread (each input/output stream). Every time you create a new connection or reconnect, run the Cipher init method on both the client and the server side and create new CipherInputStream/CipherOutputStream instances.
Cryptographic objects are stateful, therefore they cannot be shared between threads; each thread and each connection should have its own set of stateful objects.
Check out the java.lang.ThreadLocal class.
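A rough sketch of giving every accepted connection its own freshly initialised Cipher pair; the algorithm string and the key/IV handling are placeholders for whatever your protocol already uses, and nothing here is shared between threads:

import java.io.IOException;
import java.net.Socket;
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

class SecureConnection {
    final CipherInputStream in;
    final CipherOutputStream out;

    // Called once per accepted connection (and again after every reconnect),
    // so each listener thread gets its own stateful Cipher objects.
    SecureConnection(Socket socket, SecretKey key, byte[] iv)
            throws IOException, GeneralSecurityException {
        Cipher decrypt = Cipher.getInstance("AES/CBC/PKCS5Padding");
        decrypt.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        Cipher encrypt = Cipher.getInstance("AES/CBC/PKCS5Padding");
        encrypt.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

        this.in = new CipherInputStream(socket.getInputStream(), decrypt);
        this.out = new CipherOutputStream(socket.getOutputStream(), encrypt);
    }
}

If you prefer the ThreadLocal route instead, the effect you are after is the same: each thread ends up with its own initialised Cipher instances rather than a shared one.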