I have a socket.io client and a socket.io server in Java. The client can establish a connection from Firefox/Chrome/Opera on Windows/Linux/Android devices, but on iOS devices it can't establish a connection, and the same happens on Mac (Safari only). However, if I turn off the experimental "NSURLSession WebSocket" setting, it works on Mac but still not on iOS devices.
I am using httpd as a proxy for forwarding requests to the server. Here is my request forwarding in httpd.conf:
RewriteEngine On
RewriteCond %{QUERY_STRING} transport=polling [NC]
RewriteRule /(.*)$ http://localhost:8000/$1 [P,L]
SSLProxyEngine on
ProxyPass /socket.io/ ws://localhost:8000/socket.io/
ProxyPassReverse /socket.io/ ws://localhost:8000/socket.io/
When I check the Apache error logs, I see:
(70007)The timeout specified has expired
AH01102: error reading status line from remote server localhost:8000
AH00898: Error reading from remote server returned by /socket.io/
I tried a couple of options suggested in other answers, as follows, but none of them worked:
SSLProtocol all -SSLv3
SSLProxyProtocol all -SSLv3
KeepAlive On
SetEnv force-proxy-request-1.0 1
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1
Any suggestions are much appreciated.
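For context, the rewrite above forwards polling requests unconditionally. A variant sometimes suggested for socket.io behind httpd (a sketch, not a confirmed fix for the Safari/iOS issue) gates the ws:// proxying on the Upgrade header instead:

```apache
RewriteEngine On
# Proxy only genuine WebSocket upgrade requests to the ws:// backend
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule ^/socket.io/(.*) ws://localhost:8000/socket.io/$1 [P,L]
# Everything else (the long-polling fallback) goes over plain HTTP
RewriteRule ^/socket.io/(.*) http://localhost:8000/socket.io/$1 [P,L]
```

This requires mod_proxy_wstunnel in addition to mod_proxy_http and mod_rewrite.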
I am running a web server with the following configuration:
PHP Website running on Apache (port 80) (www.MyWebsite.com)
GWT Web Application running on Tomcat (port 8080) with a different URL (www.MyWebapp.com)
Web service also running on Tomcat (port 8080) with subdomain (service.MyWebapp.com)
I am struggling with some configuration issues. I am able to access the website as well as the web service with my current configuration, but for some reason my web application is throwing an RPC error when I access it remotely through the URL.
My vhosts.conf file is as follows:
<VirtualHost *:80>
ServerName MyWebapp.com
ServerAlias www.MyWebapp.com
ProxyRequests off
DefaultType text/html
ProxyPreserveHost On
ProxyPass / ajp://localhost:8009/webapp/
ProxyPassReverse / ajp://localhost:8009/webapp/
</VirtualHost>
<VirtualHost *:80>
ServerName service.mywebapp.com
DefaultType text/html
ProxyRequests off
ProxyPreserveHost On
ProxyPass / ajp://localhost:8009/webservice/
ProxyPassReverse / ajp://localhost:8009/webservice/
</VirtualHost>
<VirtualHost *:80>
ServerName www.mywebsite.com
ServerAlias *.mywebsite.com
DocumentRoot "c:/wamp64/www/website"
<Directory "c:/wamp64/www/website/">
Options +Indexes +Includes +FollowSymLinks +MultiViews
Require all granted
</Directory>
</VirtualHost>
If I try to access it remotely via www.mywebapp.com, I get the HTML landing page, but when I make any RPC calls I receive an RPC error:
Type 'com.mycom.client.utility.model.DataContainer' was not assignable to 'com.google.gwt.user.client.rpc.IsSerializable' and did not have a custom field serializer. For security purposes, this type will not be deserialized.
I can access and run my web application locally (localhost:8080/webapp), as well as remotely if I specify the port (www.MyWebapp.com:8080/webapp), and do not receive any RPC errors.
My 'DataContainer' class implements java.io.Serializable, not com.google.gwt.user.client.rpc.IsSerializable (I've never encountered an issue with this before). I am under the impression that this has more to do with proxy settings than serialization, but have tried everything I can think of without success.
Any help would be much appreciated! Thanks in advance.
When I try to connect to a Spring Boot WebSocket from an Android STOMP client, it fails to connect and the Catalina log shows
Handshake failed due to invalid Upgrade header: null
The Tomcat server runs behind Apache, and Apache runs on https. I haven't enabled https in Tomcat; all http requests are redirected to https. This is how I tried to connect to the WebSocket:
mStompClient = Stomp.over(Stomp.ConnectionProvider.JWS, "wss://chat.example.com/ws/chat/websocket", headers);
but it works when running on the local machine:
mStompClient = Stomp.over(Stomp.ConnectionProvider.JWS, "http://10.0.2.2:8080/chat/ws/chat/websocket", headers);
This is my STOMP endpoint setup:
registry.addEndpoint("/chat").setHandshakeHandler(new HandShakeHandler()).withSockJS();
I have enabled mod_proxy_wstunnel, and in the virtual host config I have added:
ProxyPass / http://localhost:8080/chat/
proxyPassReverse / http://localhost:8080/chat/
ProxyPass /wss/ ws://localhost:8080/chat/
How can I fix this?
I got the answer from this Server Fault link. I have to add
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]
RewriteRule /api/(.*) ws://newapp.example.com:8080/api/$1 [P]
and changed the last line to
RewriteRule /chat/(.*) ws://localhost:8080/chat/chat/$1 [P]
and now it is connected
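Putting the pieces together, the relevant virtual-host section ends up looking like this (a consolidated sketch of the lines above; the doubled /chat/chat/ path reflects the app's context path plus the STOMP endpoint):

```apache
RewriteEngine On
# Route only genuine WebSocket upgrade requests to the ws:// backend
RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]
RewriteCond %{HTTP:CONNECTION} Upgrade$ [NC]
RewriteRule /chat/(.*) ws://localhost:8080/chat/chat/$1 [P]

# Plain HTTP traffic still goes through mod_proxy
ProxyPass / http://localhost:8080/chat/
ProxyPassReverse / http://localhost:8080/chat/
```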
The problem may be in the order of your proxy commands:
ProxyPass / http://localhost:8080/chat/
proxyPassReverse / http://localhost:8080/chat/
ProxyPass /wss/ ws://localhost:8080/chat/
See the documentation:
Ordering ProxyPass Directives
The configured ProxyPass and ProxyPassMatch rules are checked in the order of configuration. The first rule that matches wins. So usually you should sort conflicting ProxyPass rules starting with the longest URLs first.
Since the first rule matches the /wss/ URLs, the later rule is never triggered. The correct order is:
ProxyPass /wss/ ws://localhost:8080/chat/
ProxyPass / http://localhost:8080/chat/
ProxyPassReverse / http://localhost:8080/chat/
(I'm not sure if you need a reverse rule or not.)
I've spent hours trying to make the redirect rules work on my system but apparently you don't need them at all.
I was looking over this guide to setup tomcat + apache with SSL: http://www.mulesoft.com/tcat/tomcat-ssl
Under the section "When To Use SSL With Tomcat", it says:
"...In other words, if you're fronting Tomcat with a web server and using it only as
an application server or Tomcat servlet container, in most cases you should let the web server function as a proxy for all SSL requests"
Since I already have a webserver set up with SSL, I decided to be lazy. I installed tomcat with default settings, and started it up. In my httpd.conf, I redirected all 80 traffic to 443, and then proxypass and proxypassreverse to ajp://hostname.com:8009. I restarted httpd and it "appears" to redirect to tomcat server over ssl. Is this completely broken or did I actually manage to do what I intended on first go? Any test suggestions are much appreciated.
<VirtualHost *:80>
ServerName hostname_DNS_alias.com
Redirect / https://hostname_DNS_alias.com
</VirtualHost>
<VirtualHost *:443>
SSLEngine On
SSLCertificateFile /etc/pki/tls/certs/thecrt.crt
SSLCertificateKeyFile /etc/pki/tls/private/thekey.key
SSLCertificateChainFile /etc/pki/tls/certs/CA.crt
ServerName hostname_DNS_alias.com
DocumentRoot /var/www/html
<Proxy *>
AddDefaultCharset off
Order deny,allow
Allow from all
</Proxy>
ProxyPass / ajp://hostname.com:8009/
ProxyPassReverse / ajp://hostname.com:8009/
</VirtualHost>
I think you've got it, but you can look at the access logs on HTTPD & Tomcat to confirm the request is being proxied. You should see an access log entry on both systems.
A couple quick notes...
As mentioned in the comment, you can remove the HTTP connector from Tomcat. It's not a must, though. Sometimes it's nice to keep it open for testing purposes (i.e., you can hit the server directly) or if you want to run the Manager app on it. If you do keep it around, especially if you use it to run the Manager app, you should probably restrict access to it. Two easy ways to do that are setting the address attribute on the HTTP connector to localhost or configuring a RemoteAddressFilter.
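For instance, binding the connector to the loopback interface in Tomcat's server.xml looks roughly like this (a sketch; the port and timeout values shown are Tomcat's defaults):

```xml
<!-- Only accept HTTP connections from the local machine -->
<Connector port="8080" protocol="HTTP/1.1"
           address="127.0.0.1"
           connectionTimeout="20000"
           redirectPort="8443" />
```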
Keep in mind that the AJP connection from your HTTPD server to Tomcat is not encrypted (SSL is terminated at HTTPD), so you want to make sure that traffic never goes over an insecure network (like the Internet).
Since you already have HTTPD in the mix, you can also use it to serve your static files. If you deploy them to your document root, you can then add a "ProxyPass !" directive to exclude that path from being proxied to Tomcat. This offers slightly lower latency on those requests, since HTTPD doesn't need to fetch the static files from Tomcat.
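A minimal sketch of that exclusion, assuming the static assets live under /static in the document root:

```apache
# Serve /static directly from HTTPD instead of proxying it to Tomcat.
# The exclusion must come before the general ProxyPass rule,
# because the first matching rule wins.
ProxyPass /static !
ProxyPass / ajp://hostname.com:8009/
ProxyPassReverse / ajp://hostname.com:8009/
```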
We're running a web app on Tomcat 6 and Apache mod_proxy 2.2.3. Seeing a lot of 502 errors like this:
Bad Gateway!
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /the/page.do.
Reason: Error reading from remote server
If you think this is a server error, please contact the webmaster.
Error 502
Tomcat has plenty of threads, so it's not thread-constrained. We're pushing 2400 users via JMeter against the app. All the boxes are sitting inside our firewall on a fast unloaded network, so there shouldn't be any network problems.
Anyone have any suggestions for things to look at or try? We're heading to tcpdump next.
UPDATE 10/21/08: Still haven't figured this out. Seeing only a very small number of these under load. The answers below haven't provided any magical answers...yet. :)
Just to add some specific settings, I had a similar setup (with Apache 2.0.63 reverse proxying onto Tomcat 5.0.27).
For certain URLs the Tomcat server could take perhaps 20 minutes to return a page.
I ended up modifying the following settings in the Apache configuration file to prevent it from timing out with its proxy operation (with a large over-spill factor in case Tomcat took longer to return a page):
Timeout 5400
ProxyTimeout 5400
Some background:
ProxyTimeout alone wasn't enough. Looking at the documentation for Timeout, I'm guessing (I'm not sure) that this is because, while Apache is waiting for a response from Tomcat, there is no traffic flowing between Apache and the browser (or whatever HTTP client), and so Apache closes down the connection to the browser.
I found that if I left the Timeout setting at its default (300 seconds), then if the proxied request to Tomcat took longer than 300 seconds to get a response the browser would display a "502 Proxy Error" page. I believe this message is generated by Apache, in the knowledge that it's acting as a reverse proxy, before it closes down the connection to the browser (this is my current understanding - it may be flawed).
The proxy error page says:
Proxy Error
The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET.
Reason: Error reading from remote server
...which suggests that it's the ProxyTimeout setting that's too short, while investigation shows that Apache's Timeout setting (the timeout between Apache and the client) also influences this.
So, answering my own question here. We ultimately determined that we were seeing 502 and 503 errors in the load balancer due to Tomcat threads timing out. In the short term we increased the timeout. In the longer term, we fixed the app problems that were causing the timeouts in the first place. Why Tomcat timeouts were being perceived as 502 and 503 errors at the load balancer is still a bit of a mystery.
You can use
proxy-initial-not-pooled
See http://httpd.apache.org/docs/2.2/mod/mod_proxy_http.html :
If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients.
We had this problem, too. We fixed it by adding
SetEnv proxy-nokeepalive 1
SetEnv proxy-initial-not-pooled 1
and turning KeepAlive off on all servers.
mod_proxy_http is fine in most scenarios, but we are running it under heavy load and we still see some timeout problems we don't understand.
But see if the above directive fits your needs.
I know this does not answer the question, but I came here because I had the same error with a Node.js server. I was stuck for a long time until I found the solution: simply add a trailing slash (/) to the end of the ProxyPass and ProxyPassReverse URLs in Apache.
My old configuration was:
ProxyPass / http://192.168.1.1:3001
ProxyPassReverse / http://192.168.1.1:3001
The correct configuration is:
ProxyPass / http://192.168.1.1:3001/
ProxyPassReverse / http://192.168.1.1:3001/
Sample from apache conf:
# Default value is 2 minutes
Timeout 600
ProxyRequests off
ProxyPass /app balancer://MyApp stickysession=JSESSIONID lbmethod=bytraffic nofailover=On
ProxyPassReverse /app balancer://MyApp
ProxyTimeout 600
<Proxy balancer://MyApp>
BalancerMember http://node1:8080/ route=node1 retry=1 max=25 timeout=600
.........
</Proxy>
I'm guessing you're using mod_proxy_http (or a proxy balancer).
Look in your Tomcat logs (localhost.log or catalina.log); I suspect you're seeing an exception in your web stack bubbling up and closing the socket that the Tomcat worker is connected to.
You can avoid global timeouts, or having to edit virtual hosts, by specifying the proxy timeouts in the ProxyPass directive, as follows:
ProxyPass /svc http://example.com/svc timeout=600
ProxyPassReverse /svc http://example.com/svc timeout=600
Note the timeout=600 (in seconds).
However, this does not always work when you have a load balancer. In that case you must add the timeouts in both places (tested on Apache 2.2.31).
Load Balancer example:
<Proxy "balancer://mycluster">
BalancerMember "http://member1:8080/svc" timeout=600
BalancerMember "http://member2:8080/svc" timeout=600
</Proxy>
ProxyPass /svc "balancer://mycluster" timeout=600
ProxyPassReverse /svc "balancer://mycluster" timeout=600
A side note: the timeout=600 on ProxyPass was not required when Chrome was the client (I don't know why), but without this timeout on ProxyPass, Internet Explorer 11 aborts, saying the connection was reset by the server.
My theory is that:
the ProxyPass timeout is used between the client (browser) and Apache;
the BalancerMember timeout is used between Apache and the backend.
If you use Tomcat or another backend, you may also want to pay attention to its HTTP Connector timeouts.
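On the Tomcat side, the knob in question is the connectionTimeout attribute on the HTTP connector in server.xml, specified in milliseconds (a sketch; the 600000 ms value is chosen to mirror the 600-second Apache timeouts above):

```xml
<!-- connectionTimeout is in milliseconds (600000 ms = 600 s) -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="600000"
           redirectPort="8443" />
```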
You should be able to get this problem resolved by setting the Timeout and ProxyTimeout parameters to 600 seconds. It worked for me after battling with it for a while.
Most likely you should increase the Timeout parameter in the Apache conf (default value: 120 seconds).
If you want to handle your web app's timeouts with an Apache load balancer, you first have to understand the different meanings of timeout.
I'll try to condense the discussion I found here: http://apache-http-server.18135.x6.nabble.com/mod-proxy-When-does-a-backend-be-considered-as-failed-td5031316.html :
It appears that mod_proxy considers a backend as failed only when the transport-layer connection to that backend fails. Unless failonstatus/failontimeout is used. ...
So, setting failontimeout is necessary for Apache to treat a timeout of the web app (e.g. served by Tomcat) as a failure (and consequently switch to the hot spare server). For a proper configuration, note the following misconfiguration:
ProxyPass / balancer://localbalance/ failontimeout=on timeout=10 failonstatus=50
This is a misconfiguration because:
You are defining a balancer here, so the timeout parameter relates to the balancer (like the two others).
However, for a balancer, the timeout parameter is not a connection timeout (like the one used with BalancerMember), but the maximum time to wait for a free worker/member (e.g. when all the workers are busy or in an error state, the default being not to wait).
So, a proper configuration looks like this.
Set the timeout at the BalancerMember level:
<Proxy balancer://mycluster>
BalancerMember http://member1:8080/svc timeout=6
... more BalancerMembers here
</Proxy>
Set failontimeout on the balancer:
ProxyPass /svc balancer://mycluster failontimeout=on
Restart Apache.
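Putting both pieces together, a hot-spare setup might look like this (a sketch; member1/member2 are hypothetical hostnames, and status=+H marks the standby member):

```apache
<Proxy balancer://mycluster>
    # Primary worker: a response taking longer than 6 s counts as a failure
    BalancerMember http://member1:8080/svc timeout=6
    # Hot spare, only used while the primary is in an error state
    BalancerMember http://member2:8080/svc timeout=6 status=+H
</Proxy>
# failontimeout=on makes a member timeout mark that member as failed
ProxyPass /svc balancer://mycluster failontimeout=on
ProxyPassReverse /svc balancer://mycluster
```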