Spring Boot app settings to perform a stress test - java

I want to ask what the proper settings are in the Spring Boot application properties to perform a stress test, because I'm constantly getting the error below from JMeter once it reaches about 16k samples against my Spring Boot REST server. I am running 100 threads/s in the JMeter thread group for 5 minutes. Thank you.
java.net.BindException: Address already in use: connect
at java.base/sun.nio.ch.Net.connect0(Native Method)
at java.base/sun.nio.ch.Net.connect(Net.java:579)
at java.base/sun.nio.ch.Net.connect(Net.java:568)
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:588)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
at java.base/java.net.Socket.connect(Socket.java:633)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
at org.apache.jmeter.protocol.http.sampler.HTTPHC4Impl$JMeterDefaultHttpClientConnectionOperator.connect(HTTPHC4Impl.java:404)

Most probably you've run out of outbound ports.
Check your operating system documentation on how to either increase the number of available ports or reduce the socket recycle (TIME_WAIT) time.
Example suggestions can be found, for example, in Solved "java.net.BindException: Address already in use: connect" issue on Windows or Handling "exhausting available ports" in Jmeter.
Alternatively, you can allocate another machine and consider switching to JMeter Distributed Testing.
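As an illustration only (the exact keys and safe values depend on your OS, so check its documentation), on Linux the ephemeral port range and TIME_WAIT reuse can be tuned via sysctl:
# widen the range of ports available for outbound connections
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# allow sockets stuck in TIME_WAIT to be reused for new outbound connections
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
On Windows the corresponding settings are typically the MaxUserPort and TcpTimedWaitDelay registry values described in the article linked above.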

Related

Getting 'java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection' while connecting to an Oracle database

I'm facing an Oracle database connectivity issue while running my automation script through a Jenkins pipeline, whereas it works fine when I run the script locally.
Error Log:
java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
Caused by: java.net.UnknownHostException: *********: Name or service not known
After googling, I understand the cause could be one of these: firewall blocking, a disabled port, or a proxy issue, but I'm not sure how to confirm it.
Please help me fix this issue.
Thanks,
Karunagara Pandi G
The "Caused by" gives you the answer: the configured database host is unknown!
Either because you have a typo in the configuration ("hoost" instead of "host"), the respective machine is (was) currently offline, or the (local) DNS has (had) lost the name of the machine. Another option is that someone renamed the database host (that would be similar to the typo, only that it was not really your fault).
Determine which option (maybe there are more …) fits your situation and fix it.
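As a quick sanity check, you can try resolving the configured host from the same machine the pipeline runs on (e.g. the Jenkins agent). A minimal sketch, where db.example.com is just a placeholder for the host name taken from your JDBC URL:
import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // placeholder - use the host name from your JDBC URL / configuration
        String host = "db.example.com";
        // throws UnknownHostException if the name cannot be resolved from this machine
        InetAddress address = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + address.getHostAddress());
    }
}
If this throws UnknownHostException on the Jenkins machine but not locally, the problem is name resolution (DNS or the hosts file) on that machine rather than a firewall or proxy.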

sporadic DNS resolution issues in Java http client

We've got a batch job that runs every day on an EC2 instance within AWS. The EC2 instance exists in a VPC. The batch job uses java to make a series of REST API calls on a public server. Most days the batch job runs without issue. However, some days, something breaks down in DNS resolution. The job will be happily running and then suddenly DNS resolution fails and the remaining API calls error out with an exception like the following:
java.net.UnknownHostException: some.publicserver.com: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_191]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_191]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_191]
at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_191]
at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_191]
at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_191]
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:44) ~[batchjob.jar:na]
at org.apache.http.impl.conn.HttpClientConnectionOperator.connect(HttpClientConnectionOperator.java:102) ~[batchjob.jar:na]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:319) ~[batchjob.jar:na]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:363) ~[batchjob.jar:na]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:219) ~[batchjob.jar:na]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195) ~[batchjob.jar:na]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86) ~[batchjob.jar:na]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108) ~[batchjob.jar:na]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184) ~[batchjob.jar:na]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[batchjob.jar:na]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106) ~[batchjob.jar:na]
...
Some days, every API call will fail with this error; some days there will be a string of successful calls and then everything will start failing. On the days when the job is failing, I can connect to the server at the same time and verify that DNS seems to be working. For example, if I use the following command
nslookup some.publicserver.com
It returns a successful response. At the same time, the batch job will be spewing a bunch of UnknownHostExceptions.
I am perplexed as to where to look for the source of the problem. Has anyone out there experienced anything similar to this?
I think this is not a Java-specific problem per se, but rather a DNS resolution issue on the EC2 instance. Java effectively performs DNS resolution first by checking the hosts file and then by calling the underlying OS's DNS-related functions.
With this in mind, and given that the EC2 instance is effectively running a Linux distro, these steps result in a call to the OS's gethostbyname2 function, which in turn performs all the under-the-hood magic to resolve the name in question.
Now, two things are very important in troubleshooting your problem. The first is whether the IP address of the server you're calling changes often. The second is that the nslookup program you're using queries the DNS server directly. This means there could very well be discrepancies between what Java does to resolve the domain name and what nslookup does. Furthermore, this may also mean that the OS has cached an IP address which no longer corresponds to the server's latest one. Thus, I would suggest checking the IP address of the hostname using some other utility (e.g. ping).
My best advice for troubleshooting this would be the following:
Add some kind of log trace when attempting to perform the hostname resolution and compare it with nslookup's resolved value (a small sketch of such a probe is shown below).
Check whether the EC2 instance has a proper DNS setup (which DNS server you're using, etc.).
Add an entry to the hosts file mapping the domain name to the IP address (provided that the latter does not change).
Hope the above helps.
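For reference, a minimal sketch of the log trace suggested above, resolving the host through the same path the HTTP client uses (InetAddress) so the result can be compared with nslookup; some.publicserver.com is just the placeholder from the question:
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsProbe {
    public static void main(String[] args) {
        // placeholder for the real API host
        String host = "some.publicserver.com";
        try {
            // same resolver path Apache HttpClient's SystemDefaultDnsResolver goes through
            InetAddress[] addresses = InetAddress.getAllByName(host);
            for (InetAddress a : addresses) {
                System.out.println(System.currentTimeMillis() + " " + host + " -> " + a.getHostAddress());
            }
        } catch (UnknownHostException e) {
            System.out.println(System.currentTimeMillis() + " resolution failed for " + host + ": " + e);
        }
    }
}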
For what it's worth, in my particular situation, here's what I have been able to figure out. Hopefully this will help someone else down the road.
The authoritative DNS servers for the target in my example (some.publicserver.com) are returning SERVFAIL for some requests. This seems likely to be a load issue as it appears to happen sporadically throughout the day. With my AWS setup, I am using the default DNS servers for my VPC, which are provided by AWS. Those servers apparently do not do any caching. I have learned that Java does some caching for DNS resolutions through InetAddress, but by default it is a short window (30 seconds in most implementations I believe).
So in the end, the real cause of the problem is that the authoritative DNS servers for some.publicserver.com are not completely reliable. Since I have no control over those servers, I think the best workaround is to use DNS caching. Option #1 is to use local DNS caching on my EC2 Ubuntu instance (something like dnsmasq). Option #2 is to increase the caching duration used by Java, by doing something like this:
java.security.Security.setProperty("networkaddress.cache.ttl" , "900");
I chose option #2 as it required less effort and minimizes the potential side effects. So far, it has resolved the issue for me.

Google Cloud SQL database connection error in Java Hibernate

I am using Google Cloud SQL with the db-f1-micro machine type for a project deployed on Google App Engine in the Standard Environment (Java). Sometimes I get the error below while connecting to the database. This scenario happens when opening the same page in multiple tabs at the same time (load/performance testing).
The source code used in the project is from https://github.com/GoogleCloudPlatform/appengine-cloudsql-native-mysql-hibernate-jpa-demo-java
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. at sun.reflect.NativeConstructorAccessorImpl.newInstance0 (Native Method)
Looking at the App Engine log metrics for the error and MySQL usage, you can easily see that MySQL active connection usage is below 100%.
Please suggest what I am doing wrong.
Looks like this thread is old but we have this problem in our test environment. It happens frequently and repeatedly after our GAE test system is not used for a while. The first time someone tries to access the app we get one or two of these.
I assume it has something to do with GAE ramping up a server instance, although I'm not sure why this happens with the db. I don't think we have any connection pooling (specifically because GAE can make our app go dormant).
And with the app just starting up, we can't be exceeding any connection limits.
From https://cloud.google.com/appengine/docs/standard/java/cloud-sql/pricing-access-limits
"Each App Engine instance cannot have more than 12 concurrent connections to a Google Cloud SQL instance."
How many requests are being sent to App Engine, and how many connections does the app instance open for each of those requests?

java.net.BindException When Creating ServerSocket on Tomcat 7 on OpenShift

I was trying to launch an application on OpenShift which listens on a port via ServerSocket:
ServerSocket serverSocket = new ServerSocket(8080);
But it failed with the following error message:
java.net.BindException: Permission denied
at java.net.PlainSocketImpl.socketBind(Native Method)
at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:376)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)...
I've tried to change the port from 8080 to 8000, and then to 15000. But none of them worked.
I did search intensively on the Internet. However, I still cannot find a solution. Does anyone have a clue?
Edit (2015-12-29):
Proposed Reason:
OpenShift allows gears to bind to port 8080, but Tomcat has already bound 8080, so my application is not allowed to bind to the same port.
Proposed Solution:
Use the DIY cartridge instead. But it seems that OpenShift only allows external clients to connect with the http://, https://, ws:// and wss:// protocols (OpenShift Developer Guide), so applications have to be modified to handle these protocols.
Ungarida confirmed the solution and provided documentation.
I think this is the only solution; take a look at this documentation.
I think using the DIY cartridge may be a solution.
OpenShift allows gears to bind to port 8080. I suspect that Tomcat has already bound 8080, so my application is not allowed to bind to the same port.
I've tried the DIY cartridge and I got no exception. But it seems that OpenShift only allows external clients to connect with the http://, https://, ws:// and wss:// protocols (OpenShift Developer Guide), so I have to modify my application to handle these protocols.
Does anyone know another solution?
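For what it's worth, a rough sketch of what binding looks like on the DIY cartridge, where the gear's assigned address is exposed through environment variables (the names OPENSHIFT_DIY_IP and OPENSHIFT_DIY_PORT are what I recall the DIY cartridge using, so verify them for your setup):
import java.net.InetAddress;
import java.net.ServerSocket;

public class DiyServer {
    public static void main(String[] args) throws Exception {
        // a gear may only bind to the IP/port it was assigned by OpenShift
        String ip = System.getenv("OPENSHIFT_DIY_IP");
        int port = Integer.parseInt(System.getenv("OPENSHIFT_DIY_PORT"));
        ServerSocket server = new ServerSocket(port, 50, InetAddress.getByName(ip));
        System.out.println("Listening on " + ip + ":" + port);
        // accept() loop would go here
    }
}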

Unreliable behaviour of Openfire server on EC2

We are using Openfire server 3.7.1 on an Amazon EC2 Linux instance for a chat application.
Currently we are in the initial development stage, where we are testing it with 4 or 5 concurrent users.
Now and then we get issues with the Openfire server:
1) Java heap space exceptions.
2) java.net.BindException: Address already in use
3) Both lead to port 5222 not listening, while the Openfire admin console on 9090 works fine.
Eventually, when I stop all Openfire processes and restart, it goes back to normal.
I want to know whether this is a bug in Openfire 3.7.1 or whether EC2 has some issue with opening port 5222. I am also really apprehensive about the performance of the Openfire server when thousands of users will be using it concurrently.
Solved by:
Disabling PEP.
Increasing the Openfire JVM parameters.
The Java heap space exception is common with Openfire; you can check your JVM arguments and increase the parameters. In my experience there were a couple of cases that caused those:
clients using Empathy;
some plugin that provided buddy lists / white/black lists etc. (it had something to do with the users' roster lists).
You need to make sure ports 5222 and 5223 are open (some clients may use the old SSL port) in the EC2 firewall settings.
If you plan to have thousands of users, I suggest you get a static IP address (you don't mention your current config). Also check out jabberd, which has proved to be more reliable than Openfire.
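As an example of increasing the JVM parameters, on a Linux install the heap options usually go into the file read by the Openfire startup script; the exact path and variable name vary between packages, so treat this as a sketch:
# /etc/sysconfig/openfire on RPM-based installs (Debian packages may use /etc/default/openfire)
OPENFIRE_OPTS="-Xms512m -Xmx1024m"
Restart Openfire afterwards and watch the heap usage from the admin console (port 9090).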
Thousands of concurrent users should not be a problem for Openfire at all. It has seen 250K in testing. It will always depend, though, on what the users are doing.
There is a known memory leak in Openfire that has been fixed but not yet released. It is related to PEP, which can be shut off to circumvent this issue if that is feasible for you.
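If disabling PEP is feasible, it can be done through the admin console's system properties; if I remember the property name correctly (it may differ between versions), it is something like:
# Admin Console -> System Properties
xmpp.pep.enabled = false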
