Spring Boot application: set -XX:ActiveProcessorCount explicitly

In a recent talk from VMware, the Spring team suggests that you "set -XX:ActiveProcessorCount explicitly":
I am having a hard time understanding the need for this setting and what it actually does. I have an app where the CPU request is set as:
spec:
  containers:
  - name: myapp
    image: mydocker.com/myapp
    imagePullPolicy: Always
    resources:
      requests:
        cpu: "2000m"
The entry point of the Dockerfile, i.e. how this app is run, is: java -XX:ActiveProcessorCount=<n> -jar myapp.jar
What I observe:
If I set -XX:ActiveProcessorCount higher than the requested CPU, I still get the requested CPU (2000m in this example).
If I set -XX:ActiveProcessorCount equal to the requested CPU, I still get the requested CPU.
If I set -XX:ActiveProcessorCount lower than the requested CPU, for example 1, the property does not seem to be taken into account: I still get the requested CPU, 2000m in my example.
Did I misuse the property?
If not, what is the point of setting it in cases where it is higher or equal, since everything falls back to the requested CPU anyway?
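An editorial aside: one way to separate what the flag does inside the JVM from what Kubernetes reports is to print the processor count the JVM itself sees. A minimal sketch, run inside the container (CpuCheck is a hypothetical class name, not part of the original app):

// CpuCheck.java - prints the CPU count the JVM believes it has.
// Compare the output of:
//   java CpuCheck
//   java -XX:ActiveProcessorCount=1 CpuCheck
public class CpuCheck {
    public static void main(String[] args) {
        System.out.println("availableProcessors = "
                + Runtime.getRuntime().availableProcessors());
    }
}

The value reported here is what sizes things like the common ForkJoinPool and GC threads; the CPU request shown by Kubernetes is tracked separately by the scheduler.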

Related

High latency in elasticsearch as request volume decreases

I have a Java application and Elasticsearch with indices on the order of 10 GB in size.
There is a strange scenario: during the night, as request volume decreases, response duration increases. I checked the server resource metrics but found nothing unusual.
Here is the request volume diagram:
And the request duration:
As you can see, the request duration maxima lie exactly in the intervals where request volume is at its minima.
But when we check server resources, nothing unusual can be seen:
I must mention that no special configuration has been set for Elasticsearch; everything has been left at its defaults.
I have also checked this link How does high memory pressure affect performance? but as you can see, server memory is far below 75% usage.
I must also mention that Elasticsearch is running on a Ceph file system.

Spring Boot app deployed in Google Cloud App Engine not starting up because of memory limit

I'm having a problem with a Spring Boot application deployed in Google Cloud App Engine. The app is an API that uses JPA and JWT and is connected to a MySQL database stored in Google Cloud SQL.
The problem is that the application gets stuck because of the memory limit. After every request, I get these messages in the log:
Exceeded soft memory limit of 256 MB with 298 MB after servicing 0 requests total. Consider setting a larger instance class in app.yaml.
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application or may be using an instance with insufficient memory. Consider setting a larger instance class in app.yaml.
I tried to modify the src/main/appengine/app.yaml file to set a different configuration (with more memory), but I don't see any difference after each change. It's as if the file were being ignored.
This is my current app.yaml:
runtime: java
env: flex
runtime_config:
  jdk: openjdk8
env_variables:
  SPRING_PROFILES_ACTIVE: "gcp,mysql"
  # JAVA_GC_OPTS: -XX:+UseSerialGC
  # JAVA_USER_OPTS: -XX:MaxRAM=200m
  # With -XX:+UseSerialGC this will perform garbage collection inline with the thread allocating the heap memory instead of a dedicated GC thread(s)
  # With -Xss512k this will limit each thread's stack memory to 512KB instead of the default 1MB
  # With -XX:MaxRAM=72m
handlers:
- url: /.*
  script: this field is required, but ignored
beta_settings:
  cloud_sql_instances: guitar-tab-manager-api:europe-west3:guitar-tab-manager-db
# manual_scaling:
#   instances: 1
# instance_class: F4
# manual_scaling:
#   instances: 1
# instance_class: F2
# basic_scaling:
#   max_instances: 5
#   idle_timeout: 10m
instance_class: F2
# automatic_scaling:
#   target_cpu_utilization: 0.65
#   min_instances: 5
#   max_instances: 100
#   min_pending_latency: 30ms # default value
#   max_pending_latency: automatic
#   max_concurrent_requests: 50
I tried to apply different configurations but nothing seems to work. Maybe someone can help. Thanks in advance.
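As an aside on the configuration above: in the flexible environment, per-instance memory is normally raised with a resources block rather than instance_class (which is documented for the standard environment, and may be part of why edits appear to be ignored here). A minimal sketch, with placeholder values rather than a recommendation:

resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10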
For anyone coming here:
I got it working using a different environment. Instead of flex, I changed the environment to standard with java8, following the configuration explained here. The weird thing is that the app now consumes more memory than before (about 300 MB), but it works without issues. Note that app.yaml is not used anymore and the application is now deployed as a WAR.
Thanks
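Since the configuration linked in the answer is not reproduced here: in the java8 standard environment the deployment descriptor is typically WEB-INF/appengine-web.xml rather than app.yaml. A minimal sketch, assuming the same profile variable and an illustrative instance class:

<appengine-web-app xmlns="http://appspot.com/ns/1.0">
  <runtime>java8</runtime>
  <threadsafe>true</threadsafe>
  <!-- instance class shown here is illustrative, not a recommendation -->
  <instance-class>F2</instance-class>
  <env-variables>
    <env-var name="SPRING_PROFILES_ACTIVE" value="gcp,mysql"/>
  </env-variables>
</appengine-web-app>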

JMeter : java.net.NoRouteToHostException: Cannot assign requested address (Address not available)

I have created a simple Spring Boot application with a HelloController.
GET API: http://localhost:8080/hello
Response: Hello World
Now I have created a JMeter script with 0.1 million concurrent users hitting the GET API above.
When I run the JMeter script, after about 30k requests I start getting the error:
java.net.NoRouteToHostException: Cannot assign requested address (Address not available)
What is the reason for this? How can I resolve this issue?
I'm using Ubuntu 18.04 with 8 GB RAM.
While performing the test, only JMeter and STS were open.
You can follow Lakshmi Narayan's answer to increase the number of available ports:
Resolution:
Increased the local port range using the command below:
echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range
This allows more local ports to be available.
Enable fast recycling of TIME_WAIT sockets as below:
$ sudo sysctl -w net.ipv4.tcp_tw_recycle=1
By default:
cat /proc/sys/net/ipv4/tcp_tw_recycle
Output: 0 (disabled by default)
Be cautious about enabling this in production environments. Since this is our internal environment and the machine is used only for JMeter load tests, we enabled recycling and it resolved the issue.
Enable reuse of sockets as below:
$ sudo sysctl -w net.ipv4.tcp_tw_reuse=1
By default,
cat /proc/sys/net/ipv4/tcp_tw_reuse
Output : 0 (disabled by default)
Note: The tcp_tw_reuse setting is particularly useful in environments
where numerous short connections are open and left in TIME_WAIT state,
such as web servers. Reusing the sockets can be very effective in
reducing server load.
After enabling fast recycling and reuse, the server could support a 5K-user load on a single Unix box.
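One caveat worth adding: changes made with sysctl -w are lost on reboot. A minimal sketch of making the port range and reuse settings persistent, assuming a stock /etc/sysctl.conf setup (tcp_tw_recycle is left out because it no longer exists on recent kernels, having been removed around Linux 4.12):

# /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/)
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_tw_reuse = 1

Reload without rebooting with sudo sysctl -p.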

Tomcat performance issue with many simultaneous connections and scalling

I am running a Tomcat 7.0.55 instance with a Spring REST service behind it, on an Ubuntu 14.04 LTS server. I am doing performance tests with Gatling. I have created a simulation using a front-end application that accesses the REST backend.
My config is:
Total RAM: 512MB, 1 CPU, JVM options: -Xms128m -Xmx312m -XX:PermSize=64m -XX:MaxPermSize=128m
The environment might not seem very powerful, but as long as I do not cross the limit of roughly 700 users (90k requests processed in 7 minutes), all requests are processed successfully and very quickly.
I start having issues when there are too many connections at the same time. The failing scenario is around 120k requests in 7 minutes. Problems begin when there are around 800 concurrent users in play. Up to 600-700 users everything goes fine, but beyond this limit I start getting exceptions:
java.util.concurrent.TimeoutException: Request timed out to /xxx.xxx.xxx.xxx:8080 of 60000 ms
at com.ning.http.client.providers.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43) [async-http-client-1.8.12.jar:na]
at com.ning.http.client.providers.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:43) [async-http-client-1.8.12.jar:na]
at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:556) [netty-3.9.2.Final.jar:na]
at org.jboss.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:632) [netty-3.9.2.Final.jar:na]
at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:369) [netty-3.9.2.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.9.2.Final.jar:na]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
12:00:50.809 [WARN ] c.e.e.g.h.a.GatlingAsyncHandlerActor - Request 'request_47'
failed : GatlingAsyncHandlerActor timed out
I thought this could be related to the small JVM. However, when I upgrade the environment to:
Total RAM: 2GB, 2 CPUs, JVM options: -Xms1024m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m
I still get very similar results; the difference in failed requests is insignificant.
I've been playing with the Tomcat connector settings, with no effect. The current Tomcat settings are:
<Connector enableLookups="false" maxThreads="400" maxSpareThreads="200" minSpareThreads="60" maxConnections="8092" port="8080" protocol="org.apache.coyote.http11.Http11Protocol" connectionTimeout="20000" keepAliveTimeout="10000" redirectPort="8443" />
Manipulating the numbers of threads, connections and keepAliveTimeout didn't help at all in getting the 800 concurrent users to work without timeouts. I was planning to scale the app to handle at least 2k concurrent users, but so far vertical scaling and upgrading the environment gives me no results. I also do not see any memory issues in jvisualvm. The OS shouldn't be a limit; the ulimits are set to either unlimited or high values. The DB is not a bottleneck, as the REST service uses internal caches.
It seems like Tomcat is not able to process more than 800 connected users in my case. Do you have any ideas how these issues could be addressed? I would like to be able to scale up to at least 2k users and keep the failure rate as low as possible. I will appreciate any thoughts and tips on how I can work it out. If you need more details, please leave a comment.
Cheers
Adam
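An editorial aside on the connector quoted in the question: one variant not tried in this thread is Tomcat 7's NIO protocol, which keeps idle keep-alive connections open without tying up a thread per connection. A sketch only, with placeholder numbers; whether it moves the 800-user ceiling here is not established in the thread:

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="400"
           minSpareThreads="60"
           maxConnections="8192"
           acceptCount="500"
           connectionTimeout="20000"
           keepAliveTimeout="10000"
           enableLookups="false"
           redirectPort="8443" />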
Did you increase the open file limit? Every connection consumes an open file descriptor.
You are probably hitting the limit on TCP connections given that you are creating so many in such a short time. By default Linux waits a while before cleaning up connections. After the test fails, run netstat -ant | grep WAIT | wc -l and see if you are close to 60,000. If so, that indicates you can do some tuning of the TCP stack. Try changing the following sysctl settings:
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_fin_timeout = 5
You can also try some other settings mentioned in this ServerFault question.
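For completeness, those values can be applied at runtime with sysctl -w (root required; add them to /etc/sysctl.conf if they should survive a reboot):

sudo sysctl -w net.ipv4.tcp_keepalive_intvl=15
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5
sudo sysctl -w net.ipv4.tcp_fin_timeout=5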

SGE : Parallel Environment for a multithreaded java code

I have written multi-threaded Java code which, when run, creates 8 threads, and the computation continues on these threads. I would like to submit this job to an SGE cluster, but I am not sure which parallel environment (PE) I should choose, or whether I should create one. I am new to SGE. The simple way would be to run it in serial mode, but that is inefficient.
Regarding creating a PE, where does it need to be created? Does the SGE daemon also need to have this PE? When I submitted a job with some random name as the PE, I got:
job rejected: the requested parallel environment "openmpi" does not exist
Threaded applications must get all their slots on a single node. That's why you need a parallel environment with allocation_rule set to $pe_slots. Parallel environments are configured by the SGE administrator using the qconf -ap PE_name command. As a user you can only list the available PEs with qconf -spl and query the configuration of a particular PE with qconf -sp PE_name. You can walk all PEs and see their allocation rules with the following (ba)sh script:
for pe_name in `qconf -spl`; do
    echo $pe_name
    qconf -sp $pe_name | grep allocation_rule
done
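Once you have found a PE whose allocation_rule is $pe_slots, the job is submitted requesting as many slots as the application has threads. A minimal sketch of a submission script ("smp" is a hypothetical PE name and mythreadedapp.jar a hypothetical jar; substitute whatever qconf -spl shows on your cluster):

#!/bin/bash
#$ -cwd                # run from the submission directory
#$ -pe smp 8           # request 8 slots on a single host via the PE
java -jar mythreadedapp.jar

Submit it with qsub run.sh (run.sh being whatever you name the script).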
But you should already be talking to your SGE admin instead of trying to justify your off-topic question here.
