CXF increase connection pool size without changing http.maxConnections - java

I have recently been asked to configure CXF to the same parameters as our older XFire service.
One of those parameters was Keep-Alive: timeout=60, max=20.
However, I did some research and it appears that CXF uses the JVM's HttpURLConnection object under the hood. From what I can see, there have been some attempts to provide configuration for it, but nothing has been committed so far.
I would prefer not to change the http.maxConnections system property, as that would affect the whole server instead of only the CXF web services.
I found this interesting forum thread about it, where Daniel Kulp says:
BTW: there is a way to control the connection pooling, but it's a
SERVER side thing. Basically, if the server sends back a header of:
Keep-Alive: timeout=60, max=5
then the Java client will respect those values. Right now in CXF,
you would probably need to write an interceptor to set those values.
I just made a commit to trunk that expands the http configuration to
include a setting to control this from the config file.
I could write an interceptor that modifies the headers to do so. However, my question is: how will the server react to this kind of change? Wouldn't it be a problem if the server expects at most 5 connections and the client opens more?
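To make the interceptor idea concrete, here is a minimal sketch of the header manipulation such an interceptor would perform. In real CXF code, a class extending AbstractPhaseInterceptor&lt;Message&gt; would obtain this map via message.get(Message.PROTOCOL_HEADERS) inside handleMessage(); the map is modeled directly here so the logic is self-contained, and the method name is hypothetical.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the Keep-Alive header manipulation a CXF out-interceptor
// would perform; the headers map stands in for Message.PROTOCOL_HEADERS.
public class KeepAliveHeaderSketch {

    static Map<String, List<String>> addKeepAlive(Map<String, List<String>> headers,
                                                  int timeoutSeconds, int maxRequests) {
        headers.put("Keep-Alive",
                Collections.singletonList("timeout=" + timeoutSeconds + ", max=" + maxRequests));
        return headers;
    }

    public static void main(String[] args) {
        Map<String, List<String>> headers = addKeepAlive(new HashMap<>(), 60, 20);
        System.out.println(headers.get("Keep-Alive").get(0)); // prints timeout=60, max=20
    }
}
```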

According to what I read here, the keep-alive parameters can be controlled either by system properties or directly in the HTTP headers:
The support for HTTP keep-Alive is done transparently. However, it can
be controlled by system properties http.keepAlive, and
http.maxConnections, as well as by HTTP/1.1 specified request and
response headers.
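For reference, the system-property side of the quoted documentation looks like this. Note that both properties are JVM-wide, which is exactly the drawback discussed above; the values shown are just the ones from the question.

```java
// Client-side keep-alive knobs read by the JDK's HttpURLConnection.
// Both are JVM-wide and must be set before any HTTP request is made.
public class KeepAliveProperties {
    public static void main(String[] args) {
        System.setProperty("http.keepAlive", "true");
        System.setProperty("http.maxConnections", "20"); // default is 5, per destination
        System.out.println(System.getProperty("http.maxConnections")); // prints 20
    }
}
```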

Related

How to track/monitor socket connection in spring boot?

We have a Spring Boot (with Zuul) app using the default embedded Tomcat (I think). It has many clients implemented with different technologies and languages, and we have a problem with too many ports in TIME_WAIT: too many socket connections are opened and closed relative to the expected request behavior, which should keep connections alive most of the time.
By retrieving the HttpRequest object in the deployed API, I can get information from the request headers. This way I can track the HTTP protocol used (HTTP/1.1) and header parameters such as Keep-Alive (which, if present, is redundant with HTTP/1.1).
=> I would like to track opened and closed socket connections, but I don't see how.
Intermediate information would be better than nothing.
Note: I found some tutorial on a similar topic when using spring-websocket, but we don't.
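One low-tech intermediate option, assuming a Unix-like host, is to periodically run `netstat -an` (or `ss -tan`) from the JVM and count connection states. A sketch of just the counting logic, with the output lines hard-coded so it is self-contained (in a real app they would come from Runtime.exec on a schedule):

```java
import java.util.Arrays;
import java.util.List;

// Counts TIME_WAIT entries in netstat/ss-style output lines.
public class TimeWaitCounter {

    static long countTimeWait(List<String> netstatLines) {
        return netstatLines.stream()
                .filter(line -> line.contains("TIME_WAIT"))
                .count();
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList(
                "tcp  0 0 10.0.0.1:8080 10.0.0.2:51000 ESTABLISHED",
                "tcp  0 0 10.0.0.1:8080 10.0.0.2:51001 TIME_WAIT",
                "tcp  0 0 10.0.0.1:8080 10.0.0.2:51002 TIME_WAIT");
        System.out.println(countTimeWait(sample)); // prints 2
    }
}
```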

Jetty - proxy server with dynamic registration

We have a number of Jetty http(s) servers, all behind different firewalls. The http servers are at customer sites (not under our control). Opening ports in the firewalls at these sites is not an option. Right now, these servers only serve JSON documents in response to REST requests.
We have web clients that need to interact with a given http server based on URL parameter or header value.
This seems like a straightforward proxy server situation - except for the firewall.
The approach that I'm currently trying is this:
Have a centralized proxy server (also Jetty based) that listens for inbound registration requests from the remote http servers. The registration request will take the form of a Websocket connection, which will be kept alive as long as the remote HTTP server is available. On registration, the Proxy Server will capture the websocket connection and map it to a resource identifier.
The web client will connect to the proxy server, and include the resource identifier in the URL or header.
The proxy server will determine the appropriate Websocket to use, then pass the request on to the HTTP server. So the request and response will travel over the Websocket. Once the response is received, it will be returned to the web client.
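The registration step above can be sketched as a concurrent map from resource identifier to the captured connection. `ResourceConnection` here is a placeholder for whatever session type the websocket API actually provides (e.g. a Jetty websocket Session); only the bookkeeping is shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Registry mapping a resource identifier to its live websocket connection.
public class ProxyRegistry<ResourceConnection> {

    private final Map<String, ResourceConnection> registry = new ConcurrentHashMap<>();

    // Called when a remote HTTP server opens its registration websocket.
    public void register(String resourceId, ResourceConnection conn) {
        registry.put(resourceId, conn);
    }

    // Called when the websocket closes, so stale entries don't linger.
    public void unregister(String resourceId) {
        registry.remove(resourceId);
    }

    // Called by the web-client-facing side to route an incoming request.
    public ResourceConnection lookup(String resourceId) {
        return registry.get(resourceId);
    }

    public static void main(String[] args) {
        ProxyRegistry<String> reg = new ProxyRegistry<>();
        reg.register("customer-42", "ws-session-A");
        System.out.println(reg.lookup("customer-42")); // prints ws-session-A
    }
}
```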
So this is all well and good in theory - what I'm trying to figure out is:
a) is there a better way to achieve this?
b) What's the best way to set up Jetty to do the proxying on the HTTP Server end of the pipe?
I suppose that I could use Jetty's HttpClient, but what I really want to do is just pull the HTTP bytes from the websocket and pipe them directly into the Jetty connector. It doesn't seem to make sense to parse everything out. I suppose that I could open a regular socket connection on localhost, grab the bytes from the websocket, and do it that way - but it seems silly to route through the OS like that (I'm already operating inside the HTTP Server's Jetty environment).
It sure seems like this is the sort of problem that may have already been solved... Maybe by using a custom jetty Connection that works on WebSockets instead of TCP/IP sockets?
Update: as I've been playing with this, it seems like another tricky problem is how to handle request/response behavior (and ideally support muxing over the websocket channel). One potential resource that I've found is the WAMP sub-protocol for websockets: http://wamp.ws/
In case anyone else is looking for an answer to this one - RESTEasy has a mocking framework that can be used to invoke the REST functionality without running through a full servlet container: http://docs.jboss.org/resteasy/docs/2.0.0.GA/userguide/html_single/index.html#RESTEasy_Server-side_Mock_Framework
This, combined with WAMP, appears to do what I'm looking for.

Setting setChunkedStreamingMode in HttpURLConnection fails to deliver data to server

My server version is as follows on my dev machine:
Apache/2.2.21 (Win32) mod_fcgid/2.3.6
I have been testing HttpURLConnection as my project requires easy streaming capabilities. I have read a great synopsis from BalusC on how to use the class:
Using java.net.URLConnection to fire and handle HTTP requests
The trouble I am currently having is when setting setChunkedStreamingMode. Regardless of what I set it to, my stream doesn't seem to make it to the server: the data stream is empty by the time my server API method is called. However, if I remove it, it works fine.
I have seen another person with a similar issue:
Java/Android HttpURLConnection setChunkedStreamingMode not working with all PHP servers
But with no real resolution. I am unable to use setFixedLengthStreamingMode, simply because the content (JSON) is variable in length.
That is not an option anyway: I will potentially be transferring very large quantities of data and hence cannot have the data buffered in memory.
My question is, how can I get setChunkedStreamingMode to play nice? Is it a server setup issue or can it be fixed in code?
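For reference, the request setup in question can be reproduced without touching the network, since openConnection() does not connect. The endpoint URL below is a placeholder.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Configures a chunked POST. connect() is never called, so no I/O happens;
// only the request setup under discussion is shown.
public class ChunkedUploadSketch {

    static HttpURLConnection configure(URL url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // 0 lets the JVM pick a default chunk size; a positive value fixes it.
        // Must be called before the connection is opened for output.
        conn.setChunkedStreamingMode(4096);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = configure(new URL("http://localhost/api/upload"));
        System.out.println(conn.getRequestMethod()); // prints POST
    }
}
```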
EDIT
I have now tested my code on my production server and it works without a problem. I would still like to know why the Apache server on my local machine fails, though. Any help is still much appreciated.
Try adding this HTTP header:
urlConnection.setRequestHeader("Transfer-Encoding","chunked");
I had a problem like this: although I had set chunked streaming mode (urlConnection.setChunkedStreamingMode(0)), it did not work, but adding the HTTP header above made it work fine.
I had a similar issue. In my case it was the client system that had a virus scanner installed. Those scanners sometimes have identity theft modules that intercept POSTs, scan the body and then pass it on.
In my case BitDefender cached about 5MB before passing it on.
If the whole payload was smaller than that, the POST was delivered as a non-chunked, fixed-length request.
I had a similar problem using HttpURLConnection. Just add:
conn.setRequestProperty("connection", "close"); // disables Keep Alive
to your connection or disable it for all connections:
System.setProperty("http.keepAlive", "false");
From the API about disconnect():
Releases this connection so that its resources may be either reused or closed.
Unlike other Java implementations, this will not necessarily close socket connections that can be reused. You can disable all connection reuse by setting the http.keepAlive system property to false before issuing any HTTP requests.
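Both knobs from this answer can be combined in one place. The URL is a placeholder and nothing connects; note that `http.keepAlive` must be set before the first HTTP request of the JVM's lifetime to take effect.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Disabling keep-alive: per-connection via the Connection header,
// or JVM-wide via the http.keepAlive system property.
public class DisableKeepAlive {
    public static void main(String[] args) throws Exception {
        System.setProperty("http.keepAlive", "false"); // all connections

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost/service").openConnection();
        conn.setRequestProperty("Connection", "close"); // this connection only

        System.out.println(System.getProperty("http.keepAlive")); // prints false
    }
}
```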

Making web service calls on the same connection

I'm working on a WSDL-based web service and using Apache Axis 2. I'm not an expert on web services, and the person I'm working with claims that in order for this particular web service to work two calls have to be made on the same connection, i.e. using http keep-alive (There's basically a "commit transaction" method that needs to be called after the "save" method). This seems like it would be a pretty common issue, but I haven't found anything on Google.
I'm wondering if there's a way to explicitly tell Axis to do this. Also, how could I verify whether or not two calls are indeed being made on the same connection? I imagine some HTTP monitoring software like Wireshark might be able to tell me this, but I haven't installed it yet.
The person you are working with is wrong. Even though HTTP can be optimized by using keep-alive to process multiple requests over a single TCP connection, this optimization should be transparent to the caller and the callee: it should not matter whether a client makes two requests one after another on a keep-alive connection or uses two separate connections.
Java libraries (HttpURLConnection on the client side and the Servlet API on the server side) do not even offer access to this information, so the calling software cannot know how the HTTP requests are actually performed.
You can use nmap to see what is actually running on each port.
But if you are making 2 calls at the same time, Axis2 will throw a "port is already bound" error; a port can't handle 2 requests at the same time (my opinion). Maybe you can queue them and do them consecutively. But do confirm with other sources as well.

How to close a HTTP connection from the HttpServlet

I'm running a servlet in Tomcat 6.0.26. The servlet accepts file uploads from the client via HTTP POST. I'd like to stop the file upload from the HttpServlet side. I tried the following methods with no luck:
close the request inputstream
send error code HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE and flush response
do 1 and 2 in a Filter
I googled but found no direct answers. Please advise solutions.
Thanks.
This is not possible using either the standard Servlet API or Commons FileUpload. Basically, to be able to abort the connection immediately, you would have to grab the underlying socket physically and close it. However, that socket is controlled by the webserver. See also this related question: How to explicitly terminate http connection from server with no response header.
Quick tests have confirmed, however, that Commons FileUpload doesn't buffer the entire file in memory when its size exceeds the limit. It reads the input stream, but simply ignores and throws away the bytes it reads (including those already read). So memory efficiency isn't necessarily the problem here.
To fix the real problem, you'd basically like to validate the file size in the client side rather than the server side. This is possible with a Java Applet or a Flash Application. For example, respectively JumpLoader and SWFUpload.
This is not possible using standard APIs, and you'd be violating protocol standards/RFCs if you did. So why would you do that?
Instead, send a "Connection: close" response header with no HTTP body.
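A sketch of that response, using a stand-in interface because javax.servlet isn't on a plain JDK classpath. In a real servlet these would be the corresponding calls on HttpServletResponse inside doPost; the class and method names here are illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Models the two HttpServletResponse calls a real doPost would make
// to reject an oversized upload without writing a body.
public class RejectUploadSketch {

    interface ResponseLike {
        void setStatus(int sc);
        void setHeader(String name, String value);
    }

    static void rejectUpload(ResponseLike response) {
        response.setStatus(413); // SC_REQUEST_ENTITY_TOO_LARGE
        response.setHeader("Connection", "close"); // ask the container not to reuse the socket
        // ...and write no body.
    }

    public static void main(String[] args) {
        Map<String, Object> recorded = new LinkedHashMap<>();
        rejectUpload(new ResponseLike() {
            public void setStatus(int sc) { recorded.put("status", sc); }
            public void setHeader(String n, String v) { recorded.put(n, v); }
        });
        System.out.println(recorded); // prints {status=413, Connection=close}
    }
}
```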
Here is a crazy workaround: you can write (or find) a standalone firewall application based on sockets that handles the HTTP request, parses the headers, and, if the request matches your custom conditions, forwards it to Tomcat; otherwise it returns an HTTP error response. Or you can try to tune Apache <-> Tomcat with some Apache rules.