Tomcat/IIS closes socket after HTTP response has completed - java

I am using JBoss 4.0.4 GA, which includes Tomcat Servlet Container 5.5.
I also have IIS 6.0 redirecting to this JBoss instance (via the Tomcat IIS connector, used as an ISAPI filter in IIS).
Everything is working OK; I configured the workers as described.
Here is a piece of the connector's workers.properties file:
#
# Defining a worker named ajp13 and of type ajp13
# Note that the name and the type do not have to match.
#
worker.jboss0_ajp13.port=8009
worker.jboss0_ajp13.type=ajp13
worker.jboss0_ajp13.host=localhost
worker.jboss0_ajp13.socket_keepalive=1
worker.jboss0_ajp13.socket_timeout=300
But when connecting to the application via IIS (port 80), the socket is closed after each completed HTTP request/response cycle (a FIN is sent at the TCP layer).
This causes severe slowdowns, since the application runs over a WAN: for each closed socket, a new one has to be established, which takes about 500 ms.
This does not happen when connecting to the JBoss web server directly, and it also does not happen when connecting to a different virtual directory on the same IIS web server instance (i.e. Keep-Alive is also configured in IIS).
This happens with the latest version of the Tomcat IIS connector.
Do you know if there is a bug in the connector, or is there a problem with my configuration?
Thanks in advance,
Henry.

I filed a bug in Bugzilla for the Tomcat IIS redirector, and this is the answer I got:
Up until 1.2.27 this was the behaviour of the IIS connector (IIS forces all ISAPI extensions to implement their own HTTP keep-alive, and the IIS connector didn't do this).
In 1.2.27 there's experimental, build-time support for HTTP/1.1 chunked encoding, which should permit persistent connections. (I've been using pretty much the same code in production systems for about 4 years, but it should be considered experimental in the JK codebase until it's been stable for a while.)
Grab the -chunked binary from one of the download mirrors and read about how to configure chunked encoding in the 1.2.27 release notes (you have to get the right build, and enable it in your config).
You can verify that the connector is using chunked encoding with debug logging on, and a TCP/Wireshark trace should show connections being reused.
If you're still getting closed connections, and the logs show that the connector is attempting (or should be attempting) chunked encoding, then it's probably best to discuss on the users list and then reopen with another Wireshark trace + connector debug log once you're sure there's an issue.
So, what I did:
Installed the isapi_redirect.dll build with chunking support.
Configured isapi_redirect.properties with the following:
enable_chunked_encoding=1
Restarted IIS.
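For context, a minimal isapi_redirect.properties along these lines might look like the sketch below. The paths are illustrative for a typical installation, and enable_chunked_encoding only has an effect with the 1.2.27 "-chunked" build of the DLL:

# Sketch of isapi_redirect.properties; paths are illustrative
extension_uri=/jakarta/isapi_redirect.dll
log_file=C:\connector\logs\isapi_redirect.log
log_level=debug
worker_file=C:\connector\conf\workers.properties
worker_mount_file=C:\connector\conf\uriworkermap.properties
# Requires the 1.2.27 build with chunking support
enable_chunked_encoding=1

Setting log_level=debug also lets you verify that chunked encoding is actually being used, as the Bugzilla answer suggests.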

The socket is probably being closed by IIS. The connection between JBoss and IIS should be irrelevant to whether the HTTP socket is kept persistent. Ensure that IIS is configured to support persistent HTTP/1.1 connections.
However, you point out that another virtual directory in IIS does not have the same problem. It could be an issue specific to the virtual directory that is misbehaving, but it could also be something in the IIS/Tomcat connector.
To investigate whether it is the IIS/Tomcat connector, try setting
worker.jboss0_ajp13.connection_pool_size=10
worker.jboss0_ajp13.connection_pool_timeout=600
to see if it makes any difference at all. See the Tomcat Workers Docs (including the section at the bottom, "A sample worker.properties") and check whether any of the parameters mentioned there help you out.
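For reference, a consolidated worker definition with those pooling parameters added might look like this (the values are the illustrative ones above, not recommendations):

worker.jboss0_ajp13.type=ajp13
worker.jboss0_ajp13.host=localhost
worker.jboss0_ajp13.port=8009
worker.jboss0_ajp13.socket_keepalive=1
worker.jboss0_ajp13.socket_timeout=300
worker.jboss0_ajp13.connection_pool_size=10
worker.jboss0_ajp13.connection_pool_timeout=600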

Related

What is the http-remoting protocol?

I have set up an EJB on WildFly and wrote a client to access it. With the protocol "http-remoting" it works fine.
Unfortunately, I am not sure about the functional details of http-remoting.
My guess is that it is an HTTP tunnel for the RMI protocol, but I can't find any suitable resources on the topic, so I am really unsure.
Does anybody know how http-remoting works as a protocol?
It's a protocol implemented in JBoss Remoting; there is a GitHub repo for it as well.
Also, depending on the version of WildFly you're using, you may need to use remote+http or remote+https. The http-remoting protocol will still work, but it is deprecated.
From the Red Hat solutions site (https://access.redhat.com/solutions/3259861):
remote
remote: is the JBoss EAP 6 remoting protocol. It is not HTTP and cannot be used through a load balancer, as it is designed to make a persistent connection to a host.
http-remoting
http-remoting: / https-remoting: is the JBoss EAP 7.0+ remoting protocol. It uses HTTP Upgrade: it connects via HTTP, then upgrades and switches to the remoting protocol. Though it starts out as HTTP, it cannot be load-balanced through a load balancer; it is designed to make a persistent connection to the remote host and remain connected until the JVM is shut down or the client closes the connection.
remote+http
Works the same as http-remoting.
http
In JBoss EAP 7.2 this can be used as a plain HTTP protocol through a load balancer, as it does not use HTTP Upgrade and can therefore be load-balanced by an HTTP load balancer.
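For illustration, a minimal remote EJB lookup over remote+http might look like the sketch below. It assumes the wildfly-naming-client library is on the classpath; the host, JNDI name, and view interface are hypothetical:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteEjbClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.wildfly.naming.client.WildFlyInitialContextFactory");
        // remote+http connects over plain HTTP, then uses HTTP Upgrade
        // to switch to the remoting protocol (see the description above)
        props.put(Context.PROVIDER_URL, "remote+http://localhost:8080");
        Context ctx = new InitialContext(props);
        // JNDI name format: ejb:<app>/<module>/<bean>!<fully-qualified-view>
        // (hypothetical application, module, and bean names)
        Object bean = ctx.lookup("ejb:myapp/myejbs/HelloBean!com.example.HelloRemote");
        System.out.println(bean);
    }
}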

AWS ELB servlet client disconnection detection

Disclaimer: there is a lot of information on similar topics. In our case, everything works as expected without the AWS ELB (Elastic Load Balancer), i.e. when the client drops, ServletOutputStream.flush() throws an IOException.
Setup: we have an instance running Tomcat 7 (Java 1.7) behind an ELB (HTTPS:443 -> HTTP:8080). The servlet streams data to the client over a long-lived HTTP connection.
Problem: when the client disconnects, the server keeps streaming data, i.e. ServletOutputStream.flush() or .write() does not throw an IOException. The ELB kind of "buffers" the connection (we can see it with the IPTraf monitor), so from the Tomcat side it appears the client is still there. Without the ELB, the IOException is thrown properly, so the servlet can stop streaming. We have disabled connection draining and reduced the connection timeout to 1 second; we also reduced all timeouts on Tomcat's HTTP connector, including keep-alive, to just a few seconds. Nothing helps.
Question: is there anything we can do in the ELB configuration, or on the Tomcat/Java side, to allow disconnection detection in this setup?
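For reference, the streaming-and-detect pattern described above follows the usual servlet shape, roughly like this sketch (the data-source method is a placeholder):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        ServletOutputStream out = resp.getOutputStream();
        byte[] chunk = new byte[8192];
        try {
            while (nextChunk(chunk)) {   // placeholder data source
                out.write(chunk);
                // flush() is where Tomcat notices a dropped client and throws
                // IOException - unless something in between (here, the ELB)
                // keeps accepting the bytes on the client's behalf
                out.flush();
            }
        } catch (IOException clientDisconnected) {
            // without a load balancer buffering the connection, we end up here
        }
    }

    private boolean nextChunk(byte[] buffer) {
        return false; // placeholder: fill buffer, return false when done
    }
}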
We had the same kind of problem (but in .NET with IIS).
We seem to have solved it by switching from the Classic ELB to the Application Load Balancer. Now writing to the output stream of a closed connection throws an exception, whereas before (on the Classic ELB) it didn't.
Hope this helps anyone running into the same problem.

Is the communication between httpd and tomcat secure?

We have an app running on Tomcat port 8080, and we front it with Apache httpd. SSL is installed and implemented; all requests are redirected to port 443 and proxied to Tomcat on 8080 by mod_proxy_http:
ProxyPass / http://localhost:8080/ retry=0 timeout=5
ProxyPassReverse / http://localhost:8080/
Everything works just fine; requests from the client to httpd are secured, but obviously the communication between httpd and Tomcat is not encrypted.
My question is:
1. Would it be an issue if the communication between httpd and Tomcat is not encrypted, provided httpd and Tomcat are on the same server?
2. We are transmitting sensitive data, such as customer credit card information during payment processing. Do we need to implement SSL on Tomcat as well (e.g. port 8443) and proxy requests from 443 to the secured Tomcat port 8443? Then it would be secured all the way from client to httpd to Tomcat; however, this could affect performance, since two rounds of encryption/decryption are needed.
We have been searching on this issue but found no clear answers. Any help is greatly appreciated.
If you stick to localhost (i.e. have Apache httpd on the same server as tomcat), this is most likely secure: if you don't trust the implementation of "localhost" networking, you couldn't trust the TLS implementation on the same server either.
You might gain performance by having Apache on a different server than tomcat. In that case you obviously depend on the network between the two: routers, cabling, etc. If you don't trust your network, this might be something to work on. However, note that you probably also have connections between tomcat and filesystems (temp files), databases, etc. - even with transport encryption nailed down, data can still leak out of the application if you can't trust your datacenter/network security.
We can't judge to what level you can trust your network; you'll have to do that yourself.
Note that in the case you mention in the question, tomcat will have no idea that the connection was made over https when you just forward it over http (e.g.: Browser -> Apache is https, Apache -> tomcat is http).
You can configure tomcat's connector to assume that the connection was secure (look up the secure attribute in the connector documentation), but this also means you absolutely must make sure never to allow a plain http connection to be forwarded to that connector. Check whether AJP (a different protocol) is for you - it forwards all of the HTTP(S) connection's properties properly to tomcat. Some like it (me among them), some don't.
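As a sketch, the connector setup alluded to above might look like this in tomcat's server.xml (host name and ports are illustrative):

<!-- HTTP connector that assumes httpd has already terminated TLS.
     Never let clients reach this port directly. -->
<Connector port="8080" protocol="HTTP/1.1"
           secure="true" scheme="https"
           proxyName="www.example.com" proxyPort="443" />

<!-- Alternative: an AJP connector, which forwards the original request's
     scheme, client address, etc. to tomcat without extra attributes -->
<Connector port="8009" protocol="AJP/1.3" />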

Sockets in CLOSE_WAIT from Jersey Client

I am using Jersey 1.4, the ApacheHttpClient, and the Apache MultiThreadedHttpConnectionManager class to manage connections. For the HttpConnectionManager, I set staleCheckingEnabled to true, maxConnectionsPerHost to 1000, and maxTotalConnections to 1000. Everything else is default. We are running in Tomcat and making connections out to multiple external hosts using the Jersey client.
I have noticed that after a short period of time I begin to see sockets in a CLOSE_WAIT state associated with the Tomcat process. Some monitoring with tcpdump shows that the external hosts appear to close the connection after some time, but it is not getting closed on our end. Usually there is some data in the socket read queue, often 24 bytes. The connections use HTTPS and the data seems to be encrypted, so I'm not sure what it is.
I have checked to be sure that the ClientRequest objects that get created are closed. The sockets in CLOSE_WAIT do seem to get recycled, and we're not running out of any resources, at least at this time. I'm not sure what's happening on the external servers.
My question is: is this normal, and should I be concerned?
Thanks,
John
This is likely to be a device such as a firewall, or the remote server, timing out the TCP session. You can analyze packet captures of HTTPS using Wireshark as described on their SSL page:
http://wiki.wireshark.org/SSL
The staleCheckingEnabled flag only issues the check when you actually go to use the connection, so you aren't holding network resources (TCP sessions) open when they aren't needed.
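For illustration, the pool setup described in the question, plus an explicit idle-connection sweep that tends to clear lingering CLOSE_WAIT sockets, might look like this against the Commons HttpClient 3.x API (the sweep threshold is arbitrary):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.params.HttpConnectionManagerParams;

public class PooledClientFactory {

    public static HttpClient create(MultiThreadedHttpConnectionManager manager) {
        HttpConnectionManagerParams params = manager.getParams();
        params.setStaleCheckingEnabled(true);         // validate pooled connections before reuse
        params.setDefaultMaxConnectionsPerHost(1000);
        params.setMaxTotalConnections(1000);
        return new HttpClient(manager);
    }

    // Call this periodically (e.g. from a scheduled task): it closes connections
    // the remote side has already half-closed, which otherwise sit in CLOSE_WAIT
    // until the stale check runs on the next reuse attempt.
    public static void sweepIdle(MultiThreadedHttpConnectionManager manager) {
        manager.closeIdleConnections(30000L); // close connections idle for 30+ seconds
    }
}

The resulting HttpClient can then be wrapped for use by Jersey's ApacheHttpClient.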

Jasperserver email scheduling and proxy server

I'm having a slight problem and I'd like you to share your opinions and experience on it. I've deployed JasperServer on Tomcat 6 in an environment where everything has to pass through a proxy server to reach the internet.
I'm wondering where to put the proxy parameters and credentials: at the Tomcat level or at the JasperServer level? I've seen that JavaMail supports retrieving and sending mail through a proxy server. I've also seen that all Java TCP sockets can be directed to the proxy server via Java runtime configuration, which can hurt performance.
What other options do I have?
Thanks for reading this!
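For example, JavaMail (1.4.5 and later) can take SOCKS proxy settings per mail session, which keeps the proxy configuration at the application level rather than JVM-wide. A minimal sketch, assuming a hypothetical proxy host and SMTP relay:

import java.util.Properties;
import javax.mail.Session;

public class MailProxySetup {
    public static Session createSession() {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");   // hypothetical SMTP relay
        props.put("mail.smtp.port", "25");
        // SOCKS proxy support is built into JavaMail since 1.4.5
        props.put("mail.smtp.socks.host", "proxy.example.com"); // hypothetical proxy
        props.put("mail.smtp.socks.port", "1080");
        return Session.getInstance(props);
    }
}

The JVM-wide alternative is passing -DsocksProxyHost=proxy.example.com -DsocksProxyPort=1080 to the Tomcat JVM, but as noted above, that routes every TCP socket through the proxy, which can hurt performance.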
