I have a Vert.x web server which provides a WebSocket service. The Vert.x server sends some data to the client when the client registers on the server, and the client then sends an ACK back to the server to confirm that the data has been delivered reliably.
I found that the Vert.x server still holds a lot of memory after it has finished all the work.
Below are steps to reproduce the issue:
Configure the JVM parameters before starting the test:
Open /vert.x-2.02-final/bin/
Modify the value of JVM_OPTS from "" to "-Xms128M -Xmx128M"
Save and exit.
Modify serverIpAddress to your server's IP address in VertXSocketClient.
The client will register to the WebSocket channel on port 1180 of the Vert.x server.
You can get code here:
https://www.dropbox.com/s/ptenlx78iin8dmj/VertXSocketClient.java
Run the test server with the command vertx run TestServer.java
The memory usage of the Vert.x server will be printed to your console in the format:
total memory - free memory = used memory(MB)
System.gc();
Runtime runtime = Runtime.getRuntime();
int mb = 1024 * 1024;
long totalMemory = runtime.totalMemory() / mb;
long freeMemory = runtime.freeMemory() / mb;
I call System.gc() every 5 seconds to make sure memory actually gets freed. Yes, I know System.gc() shouldn't be called frequently and that it has a negative impact on performance, but the used-memory figure does not decrease without it.
You can download the code here:
https://www.dropbox.com/sh/6oxtfhgwffed72c/AAAX-BvYdGaTBgnRagxD9Bf-a/TestServer.java
Run VertXSocketClient with the command vertx run VertXSocketClient.java
The client will register to the WebSocket channel automatically, and the server will send data to the client after registration has finished.
Here is the sample code that sends the data to the client:
byte[] serverResponseData = serverResponse.getBytes();
Buffer buffer = new Buffer(serverResponseData);
ws.write(buffer);
With the above code, used memory climbs to about 62 MB after all the work is done, whereas it only reaches about 15 MB if I comment out ws.write(buffer).
My assumption is that the Vert.x server keeps this 62 MB set aside for its whole lifetime. Isn't it supposed to release the memory after the work is done?
You should definitely check the Hazelcast map configuration if you are using Vert.x. It may not be related to your problem or your Vert.x usage, but I had similar problems with memory usage and found that my configuration (in cluster.xml) was not appropriate for my use cases.
Some points you should check in Hazelcast config:
backup-count - By default, Hazelcast has one sync backup copy
time-to-live-seconds - Maximum time in seconds for each map entry to stay in the map is 0 (infinite) by default
max-idle-seconds - Maximum time in seconds for each entry to stay idle in the map is 0 (infinite) by default
eviction-policy - NONE eviction policy for Maps is used by default
See Hazelcast Map documentation for more details.
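For illustration, tightening those settings in cluster.xml could look roughly like this (only a sketch; the map name "my-map" and the values are placeholders, not taken from your setup):

<map name="my-map">
    <!-- 0 disables the synchronous backup copy (default is 1) -->
    <backup-count>0</backup-count>
    <!-- entries expire 300 seconds after they were written (default 0 = never) -->
    <time-to-live-seconds>300</time-to-live-seconds>
    <!-- entries expire 300 seconds after they were last accessed (default 0 = never) -->
    <max-idle-seconds>300</max-idle-seconds>
    <!-- evict least-recently-used entries instead of keeping everything (default NONE) -->
    <eviction-policy>LRU</eviction-policy>
</map>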
My application uses the Lettuce Redis client to connect to AWS ElastiCache. I am trying to follow this guide to increase my service's resiliency. One of the suggestions concerns the socket timeout:
Ensure that the socket timeout of the client is set to at least one second (vs. the typical “none” default in several clients). Setting the timeout too low can lead to numerous timeouts when the server load is high. Setting it too high can result in your application taking a long time to detect connection issues.
The pseudo code on how I am creating connections is:
RedisClusterClient redisClusterClient = RedisClusterClient.create(clientResources, redisUrl);
// Topology refresh and periodic refresh
ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
.enablePeriodicRefresh(true)
.enableAllAdaptiveRefreshTriggers()
.build();
// Update cluster topology periodically
redisClusterClient.setOptions(ClusterClientOptions.builder()
.topologyRefreshOptions(topologyRefreshOptions)
.build());
StatefulRedisClusterConnection<byte[], byte[]> connection = redisClusterClient.connect(new ByteArrayCodec());
I was going through the lettuce docs and saw there are two timeout options available for this:
Use connectTimeout field in SocketOptions
Use defaultTimeout field in RedisClusterClient
I would really appreciate it if someone could help me understand the difference between the two and which one works better for my use case.
EDIT: Here is what I have tried so far:
I tried using SocketOptions and defaultTimeout(), one at a time, and ran some tests.
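Roughly, the two variants looked like this (a simplified sketch, not the exact code):

// Variant 1: connect timeout via SocketOptions
SocketOptions socketOptions = SocketOptions.builder()
    .connectTimeout(Duration.ofSeconds(1)) // bounds how long establishing the TCP connection may take
    .build();
redisClusterClient.setOptions(ClusterClientOptions.builder()
    .socketOptions(socketOptions)
    .topologyRefreshOptions(topologyRefreshOptions)
    .build());

// Variant 2: default command timeout on the client
redisClusterClient.setDefaultTimeout(Duration.ofSeconds(1)); // bounds how long a synchronous command waits for a response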
Here is what I did:
Test Case 1
Set connectTimeout in SocketOptions to 1s and update the redisClusterClient object using the setOptions() method.
Use LitmusChaos to add latency of >1s to the calls made to AWS ElastiCache.
Use the ElastiCache failover API to bring down one of the nodes in the Redis cluster.
Test Case 2
Set defaultTimeout in redisClusterClient to 1s.
Use LitmusChaos to add latency of >1s to the calls made to AWS ElastiCache.
Use the ElastiCache failover API to bring down one of the nodes in the Redis cluster.
Observation (for both TCs):
The Lettuce logs indicated that it was not able to connect to the node that was brought down (this was expected, as AWS was still in the process of replacing it).
Once the Redis node was back up in AWS ElastiCache, the Lettuce logs showed that it was able to reconnect to that node successfully (this was unexpected, as I was still adding latency to the calls made to AWS ElastiCache).
Am I missing some config here?
I have configured an sftp:inbound-endpoint which polls files from an SFTP server. However, occasionally a lot of files (about 500) are polled at the same time, which causes big slowdowns in the processing application.
Here is my code:
<sftp:inbound-endpoint
    sizeCheckWaitTime="${sizeCheckWaitTime}"
    connector-ref="ImportInformationStatusSFTP"
    host="${sftp.host}"
    port="${sftp.port}"
    path="${sftp.path}"
    user="${sftp.user}"
    password="${sftp.password}"
    responseTimeout="${sftp.responseTimeout}"
    archiveDir="${mule.archiveDir}${sftp.archiveDir}"
    archiveTempReceivingDir="${sftpconnector.archiveTempReceivingDir}"
    archiveTempSendingDir="${sftpconnector.archiveTempSendingDir}"
    tempDir="${sftp.tempDir}"
    doc:name="SFTP"
    pollingFrequency="${sftp.poll.frequency}">
    <file:filename-wildcard-filter pattern="*.xml"/>
</sftp:inbound-endpoint>
Is there a way to set a limit on the number of files polled?
There is no built-in way to do this. An alternative for this particular use case could be to refactor the flow to send the files to a VM queue instead of processing them directly; the VM queue's consumption concurrency can be limited, unlike the SFTP connector's.
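A rough sketch of that idea in Mule 3 XML (the flow names, queue path and maxThreads value are illustrative, not taken from your project):

<!-- Cap the concurrency of the consuming flow -->
<queued-asynchronous-processing-strategy name="limitedThreads" maxThreads="4"/>

<flow name="sftpPollFlow">
    <!-- your existing sftp:inbound-endpoint goes here -->
    <vm:outbound-endpoint path="filesToProcess" exchange-pattern="one-way"/>
</flow>

<flow name="fileProcessingFlow" processingStrategy="limitedThreads">
    <vm:inbound-endpoint path="filesToProcess" exchange-pattern="one-way"/>
    <!-- move the current file-processing steps here -->
</flow>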
I am running a batch job in AWS which consumes messages from an SQS queue and writes them to a Kafka topic using Akka. I've created an SqsAsyncClient with the following parameters:
private static SqsAsyncClient getSqsAsyncClient(final Config configuration, final String awsRegion) {
    var asyncHttpClientBuilder = NettyNioAsyncHttpClient.builder()
            .maxConcurrency(100)
            .maxPendingConnectionAcquires(10_000)
            .connectionMaxIdleTime(Duration.ofSeconds(60))
            .connectionTimeout(Duration.ofSeconds(30))
            .connectionAcquisitionTimeout(Duration.ofSeconds(30))
            .readTimeout(Duration.ofSeconds(30));
    return SqsAsyncClient.builder()
            .region(Region.of(awsRegion))
            .httpClientBuilder(asyncHttpClientBuilder)
            .endpointOverride(URI.create("https://sqs.us-east-1.amazonaws.com/000000000000"))
            .build();
}
private static SqsSourceSettings getSqsSourceSettings(final Config configuration) {
    // SqsSourceSettings is immutable, so the result of each with* call must be kept
    SqsSourceSettings sqsSourceSettings = SqsSourceSettings.create().withCloseOnEmptyReceive(false);
    if (configuration.hasPath(ConfigPaths.SqsSource.MAX_BATCH_SIZE)) {
        sqsSourceSettings = sqsSourceSettings.withMaxBatchSize(10);
    }
    if (configuration.hasPath(ConfigPaths.SqsSource.MAX_BUFFER_SIZE)) {
        sqsSourceSettings = sqsSourceSettings.withMaxBufferSize(1000);
    }
    if (configuration.hasPath(ConfigPaths.SqsSource.WAIT_TIME_SECS)) {
        sqsSourceSettings = sqsSourceSettings.withWaitTime(Duration.of(20, SECONDS));
    }
    return sqsSourceSettings;
}
But, whilst running my batch job I get the following AWS SDK exception:
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
The exception still seems to occur even after I try tweaking the parameters mentioned here:
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate. Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout. If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests
Has anyone run into this issue before?
I encountered the same issue, and I ended up firing 100 async batch requests, then waiting for those 100 to complete before firing another 100, and so on.
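For anyone who wants to do the same, a minimal sketch of that pattern (the sendMessageBatch call, the requests list and the batch size of 100 are all illustrative; the point is simply to cap the number of in-flight async calls so the Netty connection pool is not exhausted):

List<CompletableFuture<SendMessageBatchResponse>> inFlight = new ArrayList<>();
for (SendMessageBatchRequest request : requests) {
    inFlight.add(sqsAsyncClient.sendMessageBatch(request));
    if (inFlight.size() == 100) {
        // Wait for this wave of 100 requests to finish before firing the next one
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        inFlight.clear();
    }
}
// Wait for whatever is left over
CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();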
I'm dealing with the Tomcat configuration in Spring Boot.
Let's suppose I have the following configuration:
server:
  tomcat:
    min-spare-threads: ${min-tomcat-threads:20}
    max-threads: ${max-tomcat-threads:20}
    accept-count: ${accept-concurrent-queue:1}
    max-connections: ${max-tomcat-connections:100}
I have a simple RestController with this code:
public String request(@Valid @RequestBody Info info) throws InterruptedException {
    log.info("Thread sleeping");
    Thread.sleep(8000);
    return "OK";
}
Then I run the following test:
I send 200 HTTP requests per second.
I check the log and, as expected, I see 100 simultaneous executions and, after 8 seconds, the last (queued) one.
The other requests are rejected.
The main problem I have with this is that if the client has a timeout on its call (for example, 5 seconds), the queued operation will still be processed on the server even though it has already timed out on the client.
I want to avoid this situation, so I tried:
server:
  tomcat:
    min-spare-threads: ${min-tomcat-threads:20}
    max-threads: ${max-tomcat-threads:20}
    accept-count: ${accept-concurrent-queue:0}
    max-connections: ${max-tomcat-connections:100}
But this "0" is totally ignored (i think in this case it means "infinite").
So, my question is:
¿Is it possible to configure Tomcat to don't queue operations if the max-connections limit is reached?
Or maybe
¿Is it possible to configure Tomcat to reject any operation queued?
Thank you very much in advance.
Best regards.
The value of the acceptCount parameter is passed directly to the operating system: e.g. on UNIX-like systems it is passed to listen(). Since an incoming connection is always put in the OS queue before the JVM accepts it, values lower than 1 make no sense, and Tomcat explicitly ignores such values and keeps its default of 100.
However, the real queue in Tomcat consists of the connections that were accepted from the OS queue but are not yet being processed due to a lack of processing threads (maxThreads). You can have at most maxConnections - maxThreads + 1 such connections; in your case that is 100 - 20 + 1 = 81 connections waiting to be processed.
I have a Java web service client running on Linux (using Axis 1.4) that invokes a series of web service operations against a Windows server. At times, some transactional operations fail with this exception:
java.net.SocketTimeoutException: Read timed out
However, the operation on the server completes (even though the client gets no useful response). Is this a bug in either the web service server or client? Or is it expected behavior for a TCP socket?
This is the expected behavior rather than a bug. The operation behind the web service doesn't know anything about your read timing out, so it continues processing the operation.
You could increase the timeout of the connection - if you are manually manipulating the socket itself, the socket.connect() method can take a timeout (in milliseconds). A zero should avoid your side timing out - see the API docs.
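For reference, on a raw java.net.Socket the two timeouts are set like this (a small sketch; the host, port and values are only examples):

Socket socket = new Socket();
// Connect timeout: how long to wait for the TCP connection to be established (0 = wait forever)
socket.connect(new InetSocketAddress("example.com", 443), 10000);
// Read timeout: how long a read may block before SocketTimeoutException is thrown (0 = wait forever)
socket.setSoTimeout(30000);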
If the operation is going to take a long time in each case, you may want to look at making this asynchronous - a first request submits the operations, then a second request to get back the results, possibly with some polling to see when the results are ready.
If you think the operation should be completing in this time, have you access to the server to see why it is taking so long?
I had a similar issue. We have a JAX-WS SOAP web service running on JBoss EAP 6 (or JBoss 7). The default HTTP socket timeout is 60 seconds unless it is overridden on the server or by the client. To fix this issue I changed our Java client to something like the following; I had to use three different combinations of properties here.
This combination seems to work as a standalone Java client or as a web service client running as part of another application on another web server.
// Build the service client
String edxWsUrl = "http://www.example.com/service?wsdl";
URL wsUrl = new URL(edxWsUrl);
EdxWebServiceImplService edxService = new EdxWebServiceImplService(wsUrl);
EdxWebServiceImpl edxServicePort = edxService.getEdxWebServiceImplPort();
// Set the timeouts on the client (milliseconds; e.g. 60000 for 60 seconds)
Integer connectionTimeoutInMilliSeconds = 60000;
BindingProvider edxWebserviceBindingProvider = (BindingProvider) edxServicePort;
edxWebserviceBindingProvider.getRequestContext().put("com.sun.xml.internal.ws.request.timeout", connectionTimeoutInMilliSeconds);
edxWebserviceBindingProvider.getRequestContext().put("com.sun.xml.internal.ws.connect.timeout", connectionTimeoutInMilliSeconds);
edxWebserviceBindingProvider.getRequestContext().put("com.sun.xml.ws.request.timeout", connectionTimeoutInMilliSeconds);
edxWebserviceBindingProvider.getRequestContext().put("com.sun.xml.ws.connect.timeout", connectionTimeoutInMilliSeconds);
edxWebserviceBindingProvider.getRequestContext().put("javax.xml.ws.client.receiveTimeout", connectionTimeoutInMilliSeconds);
edxWebserviceBindingProvider.getRequestContext().put("javax.xml.ws.client.connectionTimeout", connectionTimeoutInMilliSeconds);