Java on Android - Strange socket behavior with InputStream timeout errors

I have an app which uses an instance of the Socket class to communicate with a server.
I use the streams returned by socket.getInputStream() and socket.getOutputStream() to read and write data.
As long as my Android app is "active" (not minimized), there is no problem with the communication, no matter how long the connection lasts.
When I "pause" the application and re-open it quickly, everything still works fine.
However, when I pause the application for about 5 minutes and re-open it, the InputStream shows strange behavior: it stops reading anything. I get timeout errors instead of the data sent by the server.
The connection is still alive, the server is able to write and read. isInputShutdown() on the client-side returns false.
Using a network analysis tool, I can also see that the data sent by the server does in fact reach the client, but somehow it is never picked up by the InputStream.
However, writing data from the client to the server using the OutputStream works fine.
It may be worth mentioning that the socket object and the streams are declared static so that they are accessible from all the activities of the app. But since I don't have any problems with the OutputStream, I can't imagine that this is the reason.
The only workaround I have at this point is to close the whole socket and connect a new one to the server. But this is causing unnecessary network traffic because I have to handshake again. It would be better not to do it this way.
If anyone has had similar experience and found a solution, I would be really happy if you could share it with me.

You should create a service that runs in the background and maintains the socket connection with the server.
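A minimal sketch of that idea (the class name, host, and port are hypothetical): a started Service owns the socket and its read loop, so the connection no longer depends on any Activity's lifecycle.

    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;
    import java.io.IOException;
    import java.net.Socket;

    // Hypothetical sketch: the Service owns the socket, so pausing an
    // Activity no longer affects the connection.
    public class SocketService extends Service {
        private Socket socket;

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            new Thread(() -> {
                try {
                    socket = new Socket("example.com", 9000); // hypothetical host/port
                    // run the read loop here, independent of any Activity
                } catch (IOException e) {
                    stopSelf();
                }
            }).start();
            return START_STICKY;
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null; // started service, not bound
        }
    }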

As Kevin Krumwiede pointed out by referring to this post (Strange behavior of socket outputstream android): when data is sent every X minutes (e.g. every 3 or 4), everything keeps working as it should, even after 30 minutes of being 'paused'.
I had hoped that Socket.setKeepAlive(true) would be enough to keep the connection alive, so that I don't cause too much unnecessary network traffic, but in my particular case it does not help.
Sending 1 byte of 'garbage' every X minutes 'solves' the problem.
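For reference, a minimal sketch of that workaround. It assumes 'out' is the stream returned by socket.getOutputStream() and that the server's protocol tolerates (ignores) the extra byte; the 3-minute interval is just an example:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SocketHeartbeat {
        // Writes one 'garbage' byte every 3 minutes to keep intermediaries
        // from silently dropping the idle connection.
        public static ScheduledExecutorService start(OutputStream out) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    out.write(0); // a byte the server-side protocol must ignore
                    out.flush();
                } catch (IOException e) {
                    scheduler.shutdown(); // connection is dead; stop the heartbeat
                }
            }, 3, 3, TimeUnit.MINUTES);
            return scheduler;
        }
    }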

Related

OkHttp websocket server shutdown detection

I am developing an Android client app in Java for a NodeMCU websocket server. I successfully created the client and connected to the server through a websocket client service. I can detect a server failure/closed connection when sending data, but I can't detect it at the time of the failure: if the server is powered off, I don't know until some data is sent. How can I detect the server failure at the moment it happens? I am using the OkHttp 4.1.0 library. Can anyone help?
How can I detect the server failure at the moment it happens? I am using the OkHttp 4.1.0 library.
You can't. It's not possible, but there are workarounds; see below.
Why isn't it possible? Internally, the internet is packet switched, which means data is first gathered up into packets, and then these packets are sent.
Most of the stuff you do on the web feels like it is 'streams' instead (you send 1 character, and one character arrives on the other side). But that's all based on protocols that are built on top of the packet nature of the internet.
When you have an open connection between 2 computers via the internet, no data is actually being sent, at all. It's not like you have a line reserved. Old telephone networks did work like that: When you dialled somebody, you got a dedicated line, and once the line got interrupted, you'd hear beeps to indicate this.
That is not how the internet works. Those wires and everything in between have no idea that there is an open connection at all. That's just some bits in memory on your computer and on the server which let them identify certain packets as part of the longer conversation those 2 machines were having, is all.
Thus we arrive at why this isn't possible: Given that no packets are flowing whatsoever until one side actually sends data to the other, it is impossible to tell the difference between 'no data being sent right now' and 'somebody tripped over the power cable in the server park'. That's why you don't get that info until you send something (and the reason you get that is only because when you send something, the protocol dictates that the server sends you back a confirmation of receiving what you sent. If that takes too long, your computer will send it a few more times just in case the packet just got lost somewhere, and will eventually give up and conclude that the server can no longer be reached or crashed or lost power, and only then do you get the IOException).
Workarounds
A simple one is to upgrade your own protocol: Dictate that the server or client (doesn't matter who takes the responsibility to do this) sends a do-nothing message at least once a minute. You can then conclude after not receiving that for 100 seconds or so that the connection is probably dead. You can start a timer for 100 seconds, reset it every time you receive any data whatsoever. If the timer ever runs out? Connection is likely dead.
A version of this idea is built into the protocol that lets you make connections that feel like streams of data. That protocol is called TCP/IP, and the feature is called keepalive.
The problem is, you possibly don't get to dictate the TCP/IP settings for your websocket connection. If you can, you can turn on keepalive (for example, in Java you use Socket to make raw TCP/IP connections, and it has a .setKeepAlive(true) method). Check the API to see if you can get at the socket, or otherwise scan the docs for 'keepalive' and see if there's anything there.
I bet there won't be, which means you have to use the trick I mentioned above: Update your server code to use a timer to send a 'hello!' 60 seconds after any conversation, and update your client code to give up on the connection once 100 seconds have passed (give it 40 additional seconds; sometimes the internet gets a little backed up or servers get a little busy).
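As it happens, OkHttp does expose a websocket-level version of this heartbeat: pingInterval on OkHttpClient.Builder sends periodic pings and fails the connection when no pong comes back, so onFailure fires without you having to send any data. A sketch, with a hypothetical NodeMCU address:

    import java.util.concurrent.TimeUnit;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;
    import okhttp3.WebSocket;
    import okhttp3.WebSocketListener;

    public class PingingClient {
        public static void main(String[] args) {
            OkHttpClient client = new OkHttpClient.Builder()
                    .pingInterval(30, TimeUnit.SECONDS) // protocol-level pings
                    .build();
            Request request = new Request.Builder()
                    .url("ws://192.168.4.1:81/") // hypothetical NodeMCU address
                    .build();
            client.newWebSocket(request, new WebSocketListener() {
                @Override
                public void onFailure(WebSocket webSocket, Throwable t, Response response) {
                    // Fires when a ping goes unanswered, i.e. the server died silently.
                    System.out.println("Connection lost: " + t);
                }
            });
        }
    }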

Java - help understanding how socket connections work

I am completely new to creating a network connection in java so I apologize if this is a stupid question.
I am trying to create a D&D companion in java that will allow a player to create their character and then send it to the DM so that they can view it and make changes and send it back to the player. I want to be able to make it so that any time a field is changed on one computer it will also be changed on the other computer.
After a bunch of research online I have been able to create a socket connection between the DM (server) and the player (client) and pass a message between the two, but I am not sure how a socket connection works after this initial connection is made. My research has not been very clear on this. I have found many resources that have said that Java closes the socket after a message has been passed and many that say that the socket stays open.
If Java closes the socket, then my problem is easy enough to solve, because then I will just have to open a new socket every time I need to pass data, making sure that I pass the IP address of the client to the server the first time I make a connection.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message? (If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
If the server does not shout then how do I specify which client I want the server to talk to?
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client or will java automatically do so after I run the method the first time?
I have found many resources that have said that Java closes the socket after a message has been passed
You found them where?
and many that say that the socket stays open.
All those are correct. Java never closes connections. The application closes connections.
If Java closes the socket, then my problem is easy enough to solve, because then I will just have to open a new socket every time I need to pass data, making sure that I pass the IP address of the client to the server the first time I make a connection.
It doesn't.
My real questions come in when a socket stays open.
If the socket stays open and multiple clients connect to the server, will the server just shout over the network whenever it transmits a message so that all clients receive the message?
No. It will respond via the socket that is connected to the corresponding client.
(If this is the case then I know I can just attach a username to the front of the message so that the client can determine if the server is talking to it.)
Unnecessary.
If the server does not shout then how do I specify which client I want the server to talk to?
The server responds via the same socket it read the request from.
Will I have to add a loop to my receive methods so that the client/server is constantly listening for a transmission from the server/client
No, you will have to add a thread per accepted socket, that loops reading requests until end of stream.
or will java automatically do so after I run the method the first time?
No.
You seem to have been reading some truly appalling drivel. Take a look at the Custom Networking section of the Java Tutorial.
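A minimal sketch of that pattern (the port and the echo protocol are made up for illustration): the server starts one thread per accepted socket, each thread loops until end of stream, and every reply goes back on the same socket the request came from.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class DndServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(5000)) { // hypothetical port
                while (true) {
                    Socket client = server.accept(); // dedicated socket for this client
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) { // loop until end of stream
                    out.println("echo: " + line);        // reply on the same socket
                }
            } catch (IOException e) {
                // client disconnected
            }
        }
    }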
Adding to EJP's wise answer, it might be worth clarifying:
Sounds like you (wisely) use TCP, so your Socket represents a connection between 1 server and 1 client. No "shouting". In examples such as this one, when the connection is established (namely, the client obtains a Socket by calling "new Socket" and the server obtains a Socket by calling "accept"), those Sockets are dedicated to those 2 specific endpoints. So if 10 clients connect to 1 server, the server will keep 10 Sockets and won't mix them up. A bit like a poor secretary who has 10 phones on his desk and answers them all: despite the mess, each earpiece is clearly connected to 1 customer.
The connection can hold for a while & serve several messages. It will terminate when either one of the sides calls 'socket.close', or it can be terminated by underlying 3rd parties (operating system, proxies, firewalls).
For your first version, or for simple business requirements, it's probably enough to converse over this 1 simple connection. However, for commercial critical data that requires 'assurance of delivery', you might need to invest some careful thought & possibly tools such as RabbitMQ.
Good luck:)

Netty - how to count the bytes actually written to the socket and track the last write time?

I'm writing a heavily loaded client/server application. On some OSes there are cases when the connection is lost but Netty doesn't know about it (since the TCP/IP protocol doesn't do any pinging by itself). So I decided to implement connection pinging at my application level.
Then I faced the next problem: the ping from the server cannot reach the client and return within a reasonable time when the server is sending too many messages to the client over a slow network connection (the write buffer high-water mark is rather big, several MB). In this case the server breaks the connection even though it is alive and working.
So I decided to look at I/O progress while pinging as well. Then I could treat the following situation as normal: the ping has timed out, but bytes from the server are still being processed and written to the socket.
However, it looks like it is impossible in Netty to count the bytes actually written to the socket and to measure the time of the last socket write, because NioSocketChannel.doWrite(ChannelOutboundBuffer in) doesn't have any callbacks for that. And I don't want to hack the Netty code by somehow overriding the NioSocketChannel doWrite method.
I'm using Netty 4.0.42.
Any help is appreciated!
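One possible approach (a sketch, not from the original thread): in Netty 4, each write's ChannelPromise is completed once the message has actually been written to the socket, so an outbound handler near the head of the pipeline can listen on those promises and record the last successful write time without touching NioSocketChannel.

    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelPromise;

    // Hypothetical handler: records when writes actually complete by
    // listening on each write's promise.
    public class WriteActivityTracker extends ChannelDuplexHandler {
        private volatile long lastWriteCompletedNanos = System.nanoTime();

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
                throws Exception {
            promise.addListener(future -> {
                if (future.isSuccess()) {
                    lastWriteCompletedNanos = System.nanoTime(); // bytes reached the socket
                }
            });
            super.write(ctx, msg, promise);
        }

        public long nanosSinceLastCompletedWrite() {
            return System.nanoTime() - lastWriteCompletedNanos;
        }
    }

The ping-timeout check could then consult nanosSinceLastCompletedWrite() and hold off on killing the connection while writes are still making progress.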

Force Embedded Jetty to disconnect at once

I'm creating a small utility which receives a lot of HTTP requests. It is written in Java and uses embedded Jetty to handle requests via HTTPS.
I have a load-testing tool for it, but when it has been running for some time it starts to throw exceptions:
java.net.BindException: Address already in use: connect
(note: this is on the sender's side, not in my project)
As I understand it, this means no more free sockets were available in the system when another connect was attempted. Throughput is about 1000 requests per second, and failures start to appear somewhere after 20,000 to 50,000 requests.
However, when I use the same load-testing tool with another program (a kind of simple consumer written by a colleague in Scala using Netty; it simply receives all requests and returns an empty OK response), there is no such problem with sockets (though the typical speed is 1.5-2 times slower).
I wonder if this could be fixed by somehow telling Jetty to close connections immediately after the response is sent; each new request is sent via a new connection anyway. I tried to play with Connector#setIdleTimeout (it seems to be 30000 by default) but did not succeed.
What can I do to fix this, or at least to investigate the matter more deeply and find its cause (in case my assumptions are wrong)?
UPD: Thanks for the suggestions. I don't think I am allowed to post the source, but I get the idea that I should study the client's code (this will keep me busy for some time since it is written in Scala).
I found that there really was a problem with the client: it sends requests with Connection: Keep-Alive in the header, even though it creates a new HttpURLConnection for each request and calls the disconnect() method afterwards.
To solve this on the server side, it was sufficient to send Connection: close in the response header, since I am not allowed to change the testing utility.
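For illustration, a minimal embedded-Jetty sketch of that fix (plain HTTP and a made-up port, for brevity); setting Connection: close in the response makes Jetty drop each connection after the response is sent:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class CloseAfterResponse {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080); // hypothetical port
            server.setHandler(new AbstractHandler() {
                @Override
                public void handle(String target, Request baseRequest,
                                   HttpServletRequest request, HttpServletResponse response)
                        throws java.io.IOException {
                    response.setHeader("Connection", "close"); // no keep-alive
                    response.setStatus(HttpServletResponse.SC_OK);
                    response.getWriter().println("ok");
                    baseRequest.setHandled(true);
                }
            });
            server.start();
            server.join();
        }
    }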

Client and Server connection fails around 4/5 of the time

For a while now I have been playing around with networking between my computer and my Android phone, using Java for both the server and the client app.
When connected, it works as it should, with both ends sending and receiving correctly. However, I'm having trouble getting them to connect reliably, even on a LAN, and the client usually times out while connecting, particularly when getting the ObjectInputStream.
I have tried increasing and decreasing the timeouts on both the client and server sockets without any better result. Only worse.
I'm not exactly certain what information anyone would need in order to give me such tips, so if I'm missing something from this text, please tell me and I'll provide what I have got.
I would just like some tips as to what may be wrong with my code or program flow to cause this problem. Are there problems with the timing of the connection? In that case, why does it work when I get the ObjectOutputStream from the socket?
In case it is of help, I've provided what I imagine are the most important parts of the connection classes at the bottom of the question post.
If this question is asked in the wrong way or is just too specific to my own project, please tell me and I'll remove it and try again.
ClientConnection.java - The class handling all connection on the Android client.
Server.java - Handles connecting clients and requests.
Connection.java - Currently just a storage class but will later also be used to check if the server should send a connection check and make sure the client is still connected.
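Since the full code isn't shown, this is only a guess, but a classic cause of exactly this symptom is stream-construction order: new ObjectInputStream(...) blocks until the peer's serialization header arrives, so if both sides construct their ObjectInputStream before their ObjectOutputStream, each waits for the other until the timeout. Constructing the ObjectOutputStream first and flushing it avoids the deadlock:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.net.Socket;

    public class StreamOrder {
        public final ObjectOutputStream out;
        public final ObjectInputStream in;

        // Do this on BOTH client and server once the socket is connected.
        public StreamOrder(Socket socket) throws IOException {
            out = new ObjectOutputStream(socket.getOutputStream());
            out.flush(); // push the serialization header to the peer immediately
            in = new ObjectInputStream(socket.getInputStream()); // now safe to block
        }
    }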
