I'm creating a small utility which receives a lot of HTTP requests. It is written in Java and uses embedded Jetty to handle requests via HTTPS.
I have a load-testing tool for it, but after it has been running for some time it starts to throw exceptions:
java.net.BindException: Address already in use: connect
(Note: this occurs on the sender's side, not in my project.)
As I understand it, this means no more free sockets were available in the system when another connect was attempted (presumably ephemeral port exhaustion, with closed connections lingering in TIME_WAIT). Throughput is about 1000 requests per second, and failures start to appear somewhere after 20,000 to 50,000 requests.
However, when I use the same load-testing tool with another program (a kind of simple consumer, written in Scala using Netty by a colleague; it simply receives all requests and returns an empty OK response), there is no such problem with sockets (though its typical speed is 1.5-2 times slower).
I wonder if this could be fixed by somehow telling Jetty to close connections immediately after the response is sent; in any case, each new request is sent over a new connection. I tried playing with Connector#setIdleTimeout (it seems to default to 30000) but had no success.
What can I do to fix this, or at least to research the matter more deeply to find its cause (in case my assumptions are wrong)?
UPD: Thanks for the suggestions. I don't think I am allowed to post the source, but I get the idea that I should study the client's code (this will keep me busy for some time, since it is written in Scala).
I found that there really was a problem with the client: it sends requests with Connection: Keep-Alive in the header, even though it creates a new HttpURLConnection for each request and calls the disconnect() method afterwards.
To solve this on the server side, it was sufficient to send Connection: close in the response header, since I am not allowed to change the testing utility.
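For reference, this is roughly what the fix looks like in an embedded Jetty handler (simplified sketch; the handler name is illustrative, not my actual code):

    import java.io.IOException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Request;
    import org.eclipse.jetty.server.handler.AbstractHandler;

    public class CloseConnectionHandler extends AbstractHandler {
        @Override
        public void handle(String target, Request baseRequest,
                           HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            // Overrides the client's Connection: Keep-Alive; Jetty closes
            // the connection after this response is flushed.
            response.setHeader("Connection", "close");
            response.setStatus(HttpServletResponse.SC_OK);
            baseRequest.setHandled(true);
        }
    }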
I am developing a NodeMCU WebSocket server with an Android client app written in Java. I successfully created the client and connected to the server through a WebSocket client service. I can detect server failure/closure when sending data, but I can't detect it at the time of the failure: if the server is powered off, I can't know until some data is sent. How can I know about the server failure at the time of failure? I am using the OkHttp 4.1.0 library. Can anyone help?
"How can I know about the server failure at the time of failure, using the OkHttp 4.1.0 library? Can anyone help?"
You can't. It's not possible, but there are workarounds; see below.
Why isn't it possible? Internally, the internet is packet switched, which means data is first gathered up into packets, and then these packets are sent.
Most of the stuff you do on the web feels like a 'stream' instead (you send one character, and one character arrives on the other side). But that's all based on protocols built on top of the packet nature of the internet.
When you have an open connection between 2 computers via the internet, no data is actually being sent, at all. It's not like you have a line reserved. Old telephone networks did work like that: When you dialled somebody, you got a dedicated line, and once the line got interrupted, you'd hear beeps to indicate this.
That is not how the internet works. Those wires and everything in between have no idea that there is an open connection at all. That's just some bits in memory on your computer and on the server which lets them identify certain packets as part of the longer conversation those 2 machines were having, is all.
Thus we arrive at why this isn't possible: given that no packets flow at all until one side actually sends data to the other, it is impossible to tell the difference between 'no data being sent right now' and 'somebody tripped over the power cable in the server park'. That's why you don't get that info until you send something. Even then, you only get it because the protocol dictates that the server send back a confirmation of receiving what you sent. If that confirmation takes too long, your computer resends the data a few more times in case the packet just got lost somewhere, eventually gives up, concludes that the server can no longer be reached (or crashed, or lost power), and only then do you get the IOException.
Workarounds
A simple one is to upgrade your own protocol: dictate that the server or client (it doesn't matter who takes responsibility for this) sends a do-nothing message at least once a minute. You can then conclude, after not receiving one for 100 seconds or so, that the connection is probably dead: start a timer for 100 seconds, and reset it every time you receive any data whatsoever. If the timer ever runs out, the connection is likely dead.
There is a take on this idea built into the very protocol that lets you make connections that feel like streams of data. That protocol is called TCP/IP, and the feature is called keepalive.
The problem is, you probably don't get to dictate the TCP/IP settings for your websocket connection. If you can, turn keepalive on: in Java, for example, you use Socket to make raw TCP/IP connections, and it has a .setKeepAlive(true) method. Check whether your API lets you get at the underlying socket, or otherwise scan the docs for 'keepalive' and see if there's anything there.
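If your library does expose the underlying socket, it is a single call (sketch; host and port are placeholders):

    // Raw TCP keepalive on a java.net.Socket:
    Socket socket = new Socket("server.example.com", 8080);
    socket.setKeepAlive(true); // note: most OSes only start sending probes
                               // after a long idle period (often ~2 hours by default)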
I bet there won't be such an option, which means you have to use the trick I mentioned above: update your server code to send a 'hello!' 60 seconds after any conversation, and update your client code to give up on the connection once 100 seconds have passed without data (the 40 extra seconds are slack; sometimes the internet gets a little backed up or servers get a little busy).
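A rough sketch of that client-side give-up timer (plain Java, independent of the websocket library; onDead is a placeholder for your reconnect/cleanup logic):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;

    class ConnectionWatchdog {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private final Runnable onDead;   // called when we give up on the connection
        private ScheduledFuture<?> deadline;

        ConnectionWatchdog(Runnable onDead) {
            this.onDead = onDead;
            reset();
        }

        // Call from onOpen and onMessage for every frame received,
        // including the server's periodic 'hello!' messages.
        synchronized void reset() {
            if (deadline != null) {
                deadline.cancel(false);
            }
            deadline = scheduler.schedule(onDead, 100, TimeUnit.SECONDS);
        }
    }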
Well, I am new to this and I don't know how to do it, so senior fellows, please help!
There is a situation described below:
An HTTP client sends a request (the request can be of any type; I am not concerned with the request type here) that directly hits a load balancer. The load balancer then redirects the traffic, based on the traffic load, towards a "Gateway" (GW) system running on two V440 servers. The GW logic, written in Java, logically routes each request towards one of two other server nodes that actually process the request.
Now the scene is something like this: several parallel connections are established with this Gateway from several HTTP clients, one connection per client. It has been observed that, for some of the clients making connections to this GW, the CPU utilization goes up to 98-99%.
Each client creates one connection with the GW on a particular port; the GW opens a server socket:
    ServerSocket _ss = new ServerSocket(_port); // GW listens on the configured port
    Socket s = _ss.accept();                    // blocks until a client connects
and then the GW waits for input to come from the client.
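For context, the surrounding code is roughly this shape (simplified; handleClient stands for whatever the GW does with each connection):

    // Simplified shape of the GW's accept loop; handleClient is a
    // placeholder for the GW's actual per-connection logic.
    ServerSocket _ss = new ServerSocket(_port);
    while (true) {
        Socket s = _ss.accept();                    // blocks; no busy-waiting here
        new Thread(() -> handleClient(s)).start();  // one worker per client
    }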
Now my questions are:
1. Why is this kind of situation happening, when everything seems fine for the rest of the clients and their connections? Why do only a few of the clients creating connections with the GW cause this situation?
2. Is there any way we can track these clients' IP addresses, so that we can tell whether this is caused by the same clients every time?
3. Is there any resolution for this?
Since it is not happening for all of the clients, we are certainly not going to find an immediate answer. However, this is what my limited research on the question yields:
Firstly, Question 2
Configure your F5 to capture the client's IP. Since it is HTTP, there are multiple ways of tracking the requests. One is to sniff the X-FORWARDED-FOR header, which will give the client's IP address.
Or try adding this rule in your logging engine:

    when CLIENT_ACCEPTED {
        log local0. "clientIP:[IP::client_addr] accessed"
    }
If you also need other data, such as the resources accessed, you can use one of the other events, such as HTTP_REQUEST:

    when HTTP_REQUEST {
        log local0. "clientIP:[IP::client_addr] accessed [HTTP::host][HTTP::uri]"
    }
Refer to the linked documentation for the above.
Secondly, Question 1
For this you need to look at the traffic-statistics mechanisms available to you. I read this, this and this. Enable the statistics, monitor them live, test, mock requests, and analyze the output. I do not know of any other options right now.
Another option, if you can modify your Java program, is to include some sort of performance-logging mechanism for each request. But this means a lot of development, and I do not recommend it at all.
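If you did go down that path anyway, the idea would be no more than something like this around the GW's per-request handling (handleRequest is a placeholder for whatever the GW actually does):

    // Hypothetical timing wrapper; logging the client address together with
    // the elapsed time lets heavy clients stand out in the logs.
    long start = System.nanoTime();
    handleRequest(s);  // placeholder for the GW's per-request logic
    long millis = (System.nanoTime() - start) / 1_000_000;
    if (millis > 100) {
        System.out.println(s.getInetAddress() + " took " + millis + " ms");
    }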
Thirdly, Question 3
This is primarily opinion-based. As far as I can tell, once you figure out the problem, you can resolve it.
I need some assistance on a project I am working on. It is a library using Jersey 1.x (1.19.1), aimed at HTTP POSTing a JSON document and getting the corresponding JSON response from a server.
I am facing a problem when the response from the server is "big". The JSON document posted by my client application contains several jobs that must be executed by the server, and the JSON document sent back by the server is made of the outputs of these jobs. The jobs can be considered independent of each other. The server works in streaming mode: it starts to process the jobs before it has received the entire JSON document posted by the client, and it starts to send the outputs of the jobs as soon as they are finished. So the server starts to reply to my client application while the client is still posting the request. Here is my problem: when the request gets big, so does the response (more jobs to do), and my application freezes and at some point terminates.
I spent a lot of time trying to figure out what's happening, and here is what I found and what I inferred.
For handling HTTP communication, Jersey uses a class from the JDK (in rt.jar); I forgot the exact name and don't have access to my work right now, but let's call it HttpConnection. In this class there is a method checkError() that is invoked and throws an IOException with only a message saying it was impossible to write to the server. By debugging, I was able to understand that an attribute of this class named trouble was set to true because a write() method had caught an IOException earlier; checkError() throws an IOException based on that trouble boolean flag. It's not easy to see the causing IOException, because the JRE classes are compiled without debugging symbols, but I managed to see that this IOException was a "connection reset by peer" problem.
Then I tried to understand why the server resets the connection. I used an HTTP proxy that captures the HTTP traffic between my client application and the server, but this gave me no more clues; it even seems that the proxy is unable to handle the connection with the server properly either!
So I tried to use Wireshark to capture the traffic and see what's wrong. Here is what I found.
On the client side, packets corresponding to the POST of the request JSON document are sent, and the server starts to reply shortly after, as explained above. The server side sends more and more packets, and I noticed that the receive buffer of the client's TCP layer (called the TCP window in Wireshark) decreases in size as the server sends packets, until it becomes full (size: 0 bytes). The server's TCP layer then cannot send any more data to the client's TCP layer, so its own buffer fills up too. In the end, the conversation is only about retrying to send data, on both sides, failing again and again. Ultimately the server decides to send a reset packet. I believe this corresponds to the causing IOException I mentioned above.
My understanding is: as long as the server does not start to stream the response, everything is fine. When the server starts to send the response, the TCP buffer on the client side starts to fill. But as the client application is not reading the response yet, the content of this buffer is not consumed. Once the server has sent enough data to fill this buffer, it cannot send any more, and the buffer of its own TCP layer fills up too, because the server keeps pushing data. As a result, the client application cannot finish sending the request JSON document. The communication is blocked on both sides, and the server decides to reset the connection.
My conclusion is: the code, as currently written, does not support such full-duplex communication, because the response from the server is not consumed as it is received. Indeed, walking through the Jersey code executed by my library while debugging, it is clear that the pattern is:
    connection.getOutputStream().write(...)   // first: write the whole request
    response.getInputStream().read(...)       // and only then: read the response
In my opinion, the root cause of the problem is that the library I am working on uses Jersey in this synchronous manner, which does not fit the way the server works (streaming the response while the request is still being sent to it).
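To make concrete what I mean by "reading and writing at the same time", here is a rough sketch of the exchange I am after, done with a raw socket instead of Jersey (host, path and payload are placeholders):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class FullDuplexPost {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("server.example.com", 8080)) {
                OutputStream out = socket.getOutputStream();

                // Request head; chunked encoding lets us stream the body
                // without knowing its total length in advance.
                out.write(("POST /jobs HTTP/1.1\r\n"
                        + "Host: server.example.com\r\n"
                        + "Content-Type: application/json\r\n"
                        + "Transfer-Encoding: chunked\r\n"
                        + "\r\n").getBytes(StandardCharsets.US_ASCII));

                // Writer thread: streams the JSON document chunk by chunk
                // while the main thread is already reading the response.
                Thread writer = new Thread(() -> {
                    try {
                        byte[] chunk = "{\"job\":1}".getBytes(StandardCharsets.UTF_8);
                        out.write((Integer.toHexString(chunk.length) + "\r\n")
                                .getBytes(StandardCharsets.US_ASCII));
                        out.write(chunk);
                        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
                        // ... more chunks here ...
                        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
                        out.flush();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
                writer.start();

                // Main thread: consume the response as it arrives, so the
                // client-side TCP window never closes.
                BufferedReader reader = new BufferedReader(new InputStreamReader(
                        socket.getInputStream(), StandardCharsets.UTF_8));
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
                writer.join();
            }
        }
    }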
I searched the Internet a lot for a solution that keeps Jersey 1.19.1, so that I could improve the library with as few impacts as possible, but I failed. This is the reason why I am asking for help here now ;)
So basically my question is: is it possible to do what I need while keeping the Jersey client library 1.19.1, and if yes, how? If not, what HTTP client library should I use (to write a POST request and read the corresponding response at the same time)? If you could give me a basic example so I can get on track quickly, it would be much appreciated.
One last thing: curl works just fine. I can fully post the exact same JSON document and get the response using it, so there is no problem on the server side, contrary to what I suspected at the very beginning of my investigation. And it scales fine (I tried sending huge JSON documents). Of course, I made sure the HTTP headers of the POST are the same in my library's case and in the curl case.
Thanks a lot for reading me and for your answers.
Best regards,
Loïc
I have a server implemented using Apache HttpCore which can accept POSTs from an HttpClient implementation. I have enough of it working that I can send to the server, process the POST contents, and get the response back on the client. Everything seems to work; however, I notice that the server keeps the connection alive until it times out, even though the client request has completed successfully. I'm assuming that I need to close the connection on the client side after receiving the response, but I believe I am already doing that: I am using BasicResponseHandler(), which returns a String, so I can't figure out what, if anything, I actually need to close.
Any thoughts on this? I was going to try using a different response handler that returns an InputStream and see if closing that works, but I assumed that BasicResponseHandler was already doing that behind the scenes, since it returns a String.
If the server hasn't read an EOS, the client hasn't closed the connection. Having a read timeout on the connection to the client is the correct strategy.
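If the server is built on HttpCore 4.4's bootstrap API, that looks roughly like this (port, timeout value, and handler name are placeholders):

    import org.apache.http.config.SocketConfig;
    import org.apache.http.impl.bootstrap.HttpServer;
    import org.apache.http.impl.bootstrap.ServerBootstrap;

    // Give server-side connections a read timeout so idle keep-alive
    // connections are closed instead of lingering indefinitely.
    SocketConfig socketConfig = SocketConfig.custom()
            .setSoTimeout(15000)             // reap connections idle for 15 s
            .build();
    HttpServer server = ServerBootstrap.bootstrap()
            .setListenerPort(8080)           // placeholder port
            .setSocketConfig(socketConfig)
            .registerHandler("*", myHandler) // myHandler: your existing handler
            .create();
    server.start();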
I'm working on a WSDL-based web service using Apache Axis 2. I'm not an expert on web services, and the person I'm working with claims that in order for this particular web service to work, two calls have to be made on the same connection, i.e. using HTTP keep-alive (there is basically a "commit transaction" method that needs to be called after the "save" method). This seems like it would be a pretty common issue, but I haven't found anything on Google.
I'm wondering if there's a way to explicitly tell Axis to do this. Also, how can I verify whether two calls are indeed being made on the same connection? I imagine monitoring software like Wireshark might be able to tell me this, but I haven't installed it yet.
The person you are working with is wrong. Even though HTTP can be optimized by using keep-alive to process multiple requests over a single TCP connection, this optimization should be transparent to the caller and the callee: it should not matter whether a client makes two requests one after another on a keep-alive connection, or uses two separate connections.
Java libraries (HttpURLConnection on the client side and the servlet API on the server side) do not even offer access to this information, so the software using them cannot know how the HTTP requests are actually performed.
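To illustrate how opaque this is in code: the following two requests look identical whether or not the JVM reuses one TCP connection under the hood (the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Two back-to-back calls; whether they share one TCP connection
    // (keep-alive) is negotiated by the JVM and the server, invisibly.
    URL url = new URL("http://service.example.com/api");
    for (int i = 0; i < 2; i++) {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.getInputStream().close(); // read + release; may return conn to the pool
    }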
You can use nmap to see what is actually running on each port.
But if you make two calls at the same time, Axis2 will throw a "port is already bound" error; a port can't handle two requests at the same time (my opinion). Maybe you can queue them and perform them consecutively. But do confirm this with other sources as well.