I have a Java application running in JBoss EAP 6.4.5 on Linux.
Over a period of time, as multiple users log in to the application, it becomes inaccessible (connection failed error) with the following warning message:
JBWEB003008: Maximum number of threads (1024) created for connector with address * and port *
We have noticed that most of the connections are in CLOSE_WAIT state.
A server restart resolves the issue temporarily.
Not sure what's causing this.
You need to increase the maximum number of threads created for a connector in EAP 6. For non-APR connectors (native="false"), the maximum number of threads a connector can handle is defined by adding the max-connections attribute in the web subsystem:
<subsystem xmlns="urn:jboss:domain:web:1.5" default-virtual-server="default-host" native="false">
    <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http" max-connections="2048"/>
    ...
</subsystem>
For APR connectors (native="true"), the maximum thread pool size is set through the org.apache.tomcat.util.net.MAX_THREADS system property instead.
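A minimal sketch of setting that property, assuming it is added to standalone.xml via the system-properties element (the value 2048 is only an illustration):
<system-properties>
    <!-- only read by the native APR connector; adjust the value to your load -->
    <property name="org.apache.tomcat.util.net.MAX_THREADS" value="2048"/>
</system-properties>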
Related
I have a web application (JSP) which was running fine on Tomcat 8.0.46 for more than a year. A few weeks back we upgraded to Tomcat 9.0.10, and a couple of days after the upgrade Tomcat started responding with a delay of 8-16 seconds for some of the requests.
I saw more than 800 requests/sec in the localhost access log, so I increased maxThreads to 512 as below and the max heap memory to 4096 MB.
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="512" minSpareThreads="4"/>
<Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
But the issue was not resolved, so I compared all the configuration with the old Tomcat and found that Tomcat 9 is using the shared executor where the old one was not. Will the executor impact request handling time?
Old Tomcat configuration:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
FYI, the webapp consists only of JSPs; a few of them interact with the DB using DBCP and return an XML response.
I am not suspecting the DB connection pool because it was already in use before and has not changed.
CPU: Xeon
RAM: 8 GB
OS: Windows Server 2012
JDK: jdk1.8.0_144
I added the response time duration to the localhost access log and can see the delay in some requests, but the requests immediately before and after respond quickly, within 15 milliseconds (bold).
10.50.29.27 - - [17/Dec/2018:09:27:23 -0500] "GET /App1/sendevent.jsp?TNAME=Transfer1 HTTP/1.1" 200 90 270BA450469B7AA71D22252711CA288A **0.015** http-nio-8080-exec-3
10.50.29.26 - - [17/Dec/2018:09:27:23 -0500] "GET /App1/Start.jsp?ACTION=START&ID=3154583920&SID=$num$&SESSIONID=63AA673E-B6EF-447E-AAB9-3B5B7260EB03&ScriptID=$sid$&ScriptData=$scriptdata$ HTTP/1.1" 200 2948 D97741884AD1005359430A3307D5D44E **6.031** http-nio-8080-exec-5
10.50.29.27 - - [17/Dec/2018:09:27:23 -0500] "GET /App1/sendevent.jsp?TNAME=Transfer1&TRANSFER_RESULT=S&LAST_ACTION=1&TRANSFER_REASON=connection.disconnect.transfer&TRANSFER_NOTE=undefined HTTP/1.1" 200 90 270BA450469B7AA71D22252711CA288A **0.000** http-nio-8080-exec-9
acceptorThreadCount=2 solved performance problems for me in two cases:
Tomcat 8 under Debian on a small virtual machine (application is XWiki)
Tomcat 9 under Centos on a big production server (application is DSpace:jspui/xmlui/oai/solr).
A third case where I saw noticeably better performance is:
Tomcat 8 under Windows Server 2008 Standard SP2 on a very small old Dell server (application is DSpace:jspui/xmlui/oai/solr). It is the previous instance of case 2, kept until the end of the transition.
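A minimal sketch of where the attribute goes, assuming an HTTP/1.1 connector on port 8080 (the other values are placeholders copied from a default server.xml):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           acceptorThreadCount="2" />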
I am using Tomcat 8.5 to host a WAR which is used for Java REST services.
In my REST service, I create a connection, take a multipart form-data file from the user, scan it using a scan engine and return the result. At the start, Tomcat runs fine and gives a speed of almost 57-58 Mbps, but it degrades over time (to nearly half in 5-8 minutes).
My setenv.bat file looks like this.
"set "JAVA_OPTS=%JAVA_OPTS% -Xms1024m -Xmx5120m -XX:MaxMetaspaceSize=512m -Xincgc -server""
The JVM is using ParNewGC for garbage collection.
My server.xml file looks like this:
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
maxThreads="100" minSpareThreads="8" maxSpareThreads="10" acceptorThreadCount="16" acceptCount="500"/>
<!--acceptCount :The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.
A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
-->
<Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" socket.rxBufSize="10000000" socket.txBufSize="3000000" socket.directBuffer="true" />
<!-- A "Connector" using the shared thread pool-->
As my response is completely dynamic, I am not using any type of caching. Please help me with this issue.
It may be an issue due to a large number of open TCP/IP connections. Try connecting to the server once, send the data, and check the sockets when you see the performance degradation.
On Windows, you can use netstat -an to check the open sockets.
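For example, a rough check of how many sockets the service holds open (assuming it listens on port 8080; adjust the port to your connector):
netstat -an | find /c ":8080"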
I get the following error when I try to access some JasperReports Server pages:
Request Entity Too Large
The requested resource /jasperserver/olap/viewOlap.html does not allow request data with GET requests, or the amount of data provided in the request exceeds the capacity limit.
I checked the Apache log files and found the following error in mod_jk.log:
[Thu Nov 10 10:25:00 2016][8964:3876] [error]
ajp_marshal_into_msgb::jk_ajp_common.c (517): failed appending the
query string of length 7417
I already tried many different ways to solve it.
I added the maxHttpHeaderSize and max_packet_size attributes to the AJP connector in Tomcat (server.xml):
<Connector port="8010" protocol="AJP/1.3" connectionTimeout="20000" redirectPort="8443" maxHttpHeaderSize="65536" max_packet_size="65536" />
I also added the LimitRequestLine, LimitRequestBody, LimitRequestFieldSize and LimitRequestFields directives to the Apache httpd.conf file (at the end of the file, outside any VirtualHost):
LimitRequestLine 65536
LimitRequestBody 0
LimitRequestFieldSize 65536
LimitRequestFields 10000
I am still getting the error above.
I also found some suggestions to add max_packet_size to Apache's workers.properties. But if I add the attribute I get an HTTP 400 error and a white page. That's why I commented out the property in workers.properties:
#worker.jasper.max_packet_size=65536
I restarted all services after changing the configurations.
When I access the same pages via the HTTP connector of Tomcat (http://HOSTNAME:8081/jasperserver/..) it works fine. Only when I access them via the AJP connector through Apache (http://HOSTNAME/jasperserver/..) do I get the error. So I think there must be a problem with the AJP connector.
Apache: 2.4.12
JasperReports Server: 6.2.1
Apache Tomcat: 8.0.14
Does anyone have a suggestion what I have to do to solve the issue?
I figured out the problem.
The attribute in server.xml for Tomcat has to be packetSize and not max_packet_size (see also the AJP Connector documentation).
After renaming it, it works fine.
Here are my configurations:
Tomcat server.xml:
Connector port="8010" protocol="AJP/1.3" redirectPort="8443" packetSize="65536"
Apache workers.properties:
worker.jasper.max_packet_size=65536
If you afterwards get the error:
Request-URI Too Long
The requested URL's length exceeds the capacity limit for this server.
You have to set the following directives in the Apache httpd.conf file:
LimitRequestLine 65536
LimitRequestBody 0
LimitRequestFieldSize 65536
LimitRequestFields 10000
I hope this answer helps others too.
I have a web app hosted on two Tomcat servers, identical WARs and server.xml.
I don't have access to change any of the Apache settings. I know that load balancing works, as we have tested by shutting down one server and not the other, etc.; based on the Tomcat logs I can see both being used.
When using the default Tomcat HTTP port 8080 (in my case that was changed to 8083, a non-issue), sessions are retained fine.
However, I made a change to use SSL on port 443, and now sessions are invalidated anywhere from 30 seconds to 5 minutes after creation. I assume this has something to do with session replication, as I have not made any changes to server.xml or web.xml besides pointing to the keystore:
<Connector port="443" protocol="HTTP/1.1" SSLEnabled="true" secure="true" scheme="https" sslProtocol="TLS"
maxThreads="150"
clientAuth="false" keystoreFile="ssl/keystore.jks" keystorePass="123" />
I'm not exactly sure what else to copy here, as the only change to server.xml has literally been these few lines. I assume the added security is for some reason invalidating the session.
I currently have a test server serving the application on both HTTP and HTTPS. Even accessing the app at the same time, HTTP goes through fine and retains the session, while HTTPS refuses to hold on to the session for longer than 5 minutes.
For what it's worth, the SSL cert is installed on both Tomcat servers rather than at the load balancer; according to https://serverfault.com/questions/248139/apache-ssl-losing-session-over-load-balancer that may be the issue. I've only coded the application and changed some of the XML configurations, so I'm not sure if this is necessarily an issue I can fix.
I'm trying to resolve an issue connecting Apache and Tomcat with mod_proxy_ajp. From reading I found that the problem might be the number of workers in Apache and in Tomcat. So I tried to find the worker definition on the Tomcat side, but I couldn't find any. Can that be? Can Tomcat work without a workers.properties file? I checked the includes in the Tomcat conf just to make sure there isn't a different file name, but found none. How can I find out the worker configuration of my Tomcat setup? Is there a default?
The problem that I'm trying to solve is that in some cases Tomcat stops responding to Apache - in the Apache log I see many errors like:
1. "(70007)The timeout specified has expired: ajp_ilink_receive() can't receive header"
2. "ajp_read_header: ajp_ilink_receive failed"
3. "(120006)APR does not understand this error code: proxy: read response failed from 127.0.0.1:9005 (localhost)")
So I'm trying to find out whether Apache has more workers than Tomcat.
I'm using Apache 2.2.15 and Tomcat 7, connected with mod_proxy_ajp on a Red Hat machine.
Any ideas?
Thanks!
Baba
On the Tomcat side you have to configure the AJP connector in server.xml, for example:
<!-- Define an AJP 1.3 Connector on port 9009 -->
<Connector port="9009" protocol="AJP/1.3" redirectPort="8443"/>