Why does HttpServer handle only 4 requests simultaneously? - java

I am new to Java programming. HttpServer is only able to handle <= 4 requests simultaneously, even though newFixedThreadPool is set to 28.
I tried newCachedThreadPool, but even that handles only 4 requests at a time.
If I set newFixedThreadPool to 3, it works well.
server = HttpServer.create(new InetSocketAddress(port), 0);
server.createContext("/", new MyHandler());
server.setExecutor(Executors.newFixedThreadPool(28)); // use a fixed pool of 28 threads
ThreadPoolExecutor t = (ThreadPoolExecutor)server.getExecutor();
LOG.info("Pool size " + t.getPoolSize() + " Max Pool size " + t.getMaximumPoolSize() + " Largest pool size " + t.getLargestPoolSize() + " Core pool size " + t.getCorePoolSize() + " actively executing threads " + t.getActiveCount() + " Completed task count " + t.getCompletedTaskCount() + " task count " + t.getTaskCount());
server.start();
static class MyHandler implements HttpHandler {
    public void handle(HttpExchange t) throws IOException {
        try {
            long threadId = Thread.currentThread().getId();
            ThreadPoolExecutor t1 = (ThreadPoolExecutor) server.getExecutor();
            LOG.info("Thread #" + threadId + " before : Pool size " + t1.getPoolSize() + " Max Pool size " + t1.getMaximumPoolSize() + " Largest pool size " + t1.getLargestPoolSize() + " Core pool size " + t1.getCorePoolSize() + " actively executing threads " + t1.getActiveCount() + " Completed task count " + t1.getCompletedTaskCount() + " task count " + t1.getTaskCount());
            LOG.info("Received request on thread #" + threadId);
            String response = h.processRequest(t.getRequestURI().toString()); // `h` is a helper object defined elsewhere
            t.sendResponseHeaders(200, response.length());
            OutputStream os = t.getResponseBody();
            os.write(response.getBytes());
            os.close();
            LOG.info("Request processed on thread #" + threadId);
            LOG.info("Thread #" + threadId + " after : Pool size " + t1.getPoolSize() + " Max Pool size " + t1.getMaximumPoolSize() + " Largest pool size " + t1.getLargestPoolSize() + " Core pool size " + t1.getCorePoolSize() + " actively executing threads " + t1.getActiveCount() + " Completed task count " + t1.getCompletedTaskCount() + " task count " + t1.getTaskCount());
        } catch (Exception e) {
            LOG.warn("Exception. ", e);
        }
    }
}
public String processRequest(String req) throws InterruptedException {
    Thread.sleep(5000); // simulate 5 seconds of work
    return "response for " + req; // placeholder response
}
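A client along these lines can be used to fire concurrent requests at the server (a hypothetical sketch for reproduction only, not the original test client; the port is an assumption):
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadClient {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(28); // one client thread per server pool thread
        for (int i = 0; i < 28; i++) {
            pool.submit(() -> {
                // Open a connection and force the request; each request should hold one server thread for ~5 s.
                HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:8000/").openConnection();
                conn.getResponseCode();
                conn.getInputStream().close();
                return null;
            });
        }
        pool.shutdown();
    }
}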
Log Output
19/04/15 16:26:07 : Thread #99 before : Pool size 21 Max Pool size 28 Largest pool size 21 Core pool size 28 actively executing threads 1 Completed task count 20 task count 21
19/04/15 16:26:07 : Received request on thread #99
19/04/15 16:26:07 : Inside Handler Sr. No [1]
19/04/15 16:26:07 : Thread #100 before : Pool size 22 Max Pool size 28 Largest pool size 22 Core pool size 28 actively executing threads 2 Completed task count 20 task count 22
19/04/15 16:26:07 : Received request on thread #100
19/04/15 16:26:07 : Inside Handler Sr. No [2]
19/04/15 16:26:08 : Thread #101 before : Pool size 23 Max Pool size 28 Largest pool size 23 Core pool size 28 actively executing threads 3 Completed task count 20 task count 23
19/04/15 16:26:08 : Received request on thread #101
19/04/15 16:26:08 : Inside Handler Sr. No [3]
19/04/15 16:26:08 : Thread #102 before : Pool size 24 Max Pool size 28 Largest pool size 24 Core pool size 28 actively executing threads 4 Completed task count 20 task count 24
19/04/15 16:26:08 : Received request on thread #102
19/04/15 16:26:08 : Inside Handler Sr. No [4]
19/04/15 16:26:12 : Request processed on thread #99
19/04/15 16:26:12 : Thread #99 after : Pool size 24 Max Pool size 28 Largest pool size 24 Core pool size 28 actively executing threads 4 Completed task count 20 task count 24
19/04/15 16:26:12 : Thread #103 before : Pool size 25 Max Pool size 28 Largest pool size 25 Core pool size 28 actively executing threads 4 Completed task count 21 task count 25
19/04/15 16:26:12 : Received request on thread #103
19/04/15 16:26:12 : Inside Handler Sr. No [5]
19/04/15 16:26:12 : Request processed on thread #100
19/04/15 16:26:12 : Thread #100 after : Pool size 25 Max Pool size 28 Largest pool size 25 Core pool size 28 actively executing threads 4 Completed task count 21 task count 25
19/04/15 16:26:12 : Thread #104 before : Pool size 26 Max Pool size 28 Largest pool size 26 Core pool size 28 actively executing threads 4 Completed task count 22 task count 26
19/04/15 16:26:12 : Received request on thread #104
19/04/15 16:26:12 : Inside Handler Sr. No [6]
19/04/15 16:26:13 : Request processed on thread #101
19/04/15 16:26:13 : Thread #101 after : Pool size 26 Max Pool size 28 Largest pool size 26 Core pool size 28 actively executing threads 4 Completed task count 22 task count 26
19/04/15 16:26:13 : Thread #105 before : Pool size 27 Max Pool size 28 Largest pool size 27 Core pool size 28 actively executing threads 4 Completed task count 23 task count 27
19/04/15 16:26:13 : Received request on thread #105
19/04/15 16:26:13 : Inside Handler Sr. No [7]
19/04/15 16:26:13 : Request processed on thread #102
19/04/15 16:26:13 : Thread #102 after : Pool size 27 Max Pool size 28 Largest pool size 27 Core pool size 28 actively executing threads 4 Completed task count 23 task count 27
19/04/15 16:26:13 : Thread #106 before : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 24 task count 28
19/04/15 16:26:13 : Received request on thread #106
19/04/15 16:26:13 : Inside Handler Sr. No [8]
19/04/15 16:26:17 : Request processed on thread #103
19/04/15 16:26:17 : Thread #103 after : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 24 task count 28
19/04/15 16:26:17 : Thread #24 before : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 25 task count 29
19/04/15 16:26:17 : Received request on thread #24
19/04/15 16:26:17 : Inside Handler Sr. No [9]
19/04/15 16:26:17 : Request processed on thread #104
19/04/15 16:26:17 : Thread #104 after : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 25 task count 29
Why is it handling only 4 requests at a time? Am I doing something wrong?
Thanks

Related

Spring retry sending duplicate request

I can see in the logs that Spring Retry is sending 2 requests to the remote server, and both requests return successful responses.
I cannot figure out the reason for this.
Code:
class StatusClient {
    @CircuitBreaker(maxAttemptsExpression = "#{${remote.broadridge.circuitBreaker.maxAttempts}}",
            openTimeoutExpression = "#{${remote.broadridge.circuitBreaker.openTimeout}}",
            resetTimeoutExpression = "#{${remote.broadridge.circuitBreaker.resetTimeout}}")
    public Optional<JobStatusResponseDTO> getStatus(String account, String jobNumber) {
        JobStatusResponseDTO block = client.post()
                .uri(PATH)
                .body(BodyInserters.fromValue(request))
                .exchangeToMono(response -> {
                    if (response.statusCode() == HttpStatus.NO_CONTENT) {
                        return Mono.empty();
                    } else if (isClientOrServerError(response)) {
                        return Mono.error(new RemoteClientException(String.format("status is not received: %s", response.statusCode())));
                    }
                    stopWatch.stop();
                    log.info("time taken by the getStatus=[{}] for {}", stopWatch.getTotalTimeMillis(), request);
                    return response.bodyToMono(JobStatusResponseDTO.class);
                })
                .block();
        return Optional.ofNullable(block);
    }
}
class Status {
    @Retryable(maxAttemptsExpression = "#{${remote.retry.maxAttempts}}",
            backoff = @Backoff(delayExpression = "#{${remote.retry.delay}}"))
    public Optional<JobStatusResponseDTO> getStatus(String jobNumber, String accountNumber) {
        return statusClient.getStatus(accountNumber, jobNumber);
    }
}
Config in application.yml
circuitBreaker:
  maxAttempts: 3 # default 3
  openTimeout: 5000 # default 5000
  resetTimeout: 20000 # default 20000
retry:
  maxAttempts: 3 # default 3
  delay: 1000 # default 1000
Logs:
792 <14>1 2021-10-26T16:26:32.978917+00:00 - 2021-10-26 16:26:32.978 INFO [batch,ec40b8fe1f6a4cfb,06052e092b3f8e66] : time taken by the getStatus=[582] for JobStatusRequestDTO(account=12456, jobNumber=S123456)
792 <14>1 2021-10-26T16:26:18.263121+00:00 2021-10-26 16:26:18.262 INFO [batch,ec40b8fe1f6a4cfb,21202725a0002bde] : time taken by the getStatus=[592] for JobStatusRequestDTO(account=12456, jobNumber=S123456)
Both requests are a few seconds apart.
Edit 1:
I changed the circuit breaker's maxAttempts to 1. Now it retries 3 times, but there is still an issue: it seems to call the remote server only once and never again.
The remote call is wrapped in a circuit breaker.
1st Attempt log:
status is not received: 503 SERVICE_UNAVAILABLE
2nd Attempt log:
org.springframework.retry.ExhaustedRetryException: Retry exhausted after last attempt with no recovery path;
3rd Attempt log:
org.springframework.retry.ExhaustedRetryException: Retry exhausted after last attempt with no recovery path;
circuitBreaker:
  maxAttempts: 1
  openTimeout: 5000 # default 5000
  resetTimeout: 20000 # default 20000
retry:
  maxAttempts: 3 # default 3
  delay: 1000 # default 1000
This is because you have left retry.maxAttempts at its default of 3 with a delay of 1000 ms. Spring will automatically retry if there is no response within the mentioned delay time. So change retry.maxAttempts to 2 and it won't give multiple responses.
You can simply paste the lines below into application.properties:
retry.maxAttempts=2
retry.maxDelay=100
Also, I suggest you go through this.
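For illustration, a minimal @Retryable method wired to those properties might look like this (a sketch only; the service, method, and exception are hypothetical):
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

@Service
public class RemoteStatusService {
    // Reads retry.maxAttempts=2 and retry.maxDelay=100 from application.properties.
    @Retryable(maxAttemptsExpression = "#{${retry.maxAttempts}}",
            backoff = @Backoff(delayExpression = "#{${retry.maxDelay}}"))
    public String fetchStatus(String jobNumber) {
        // Any exception thrown here triggers a retry, up to maxAttempts total attempts.
        throw new IllegalStateException("remote unavailable for " + jobNumber); // placeholder for the real call
    }
}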

Trying to understand threads in java in this case

Hi, I'm trying to understand my code. I have used the synchronized keyword on a method so that only one thread can use it at a time.
The main class:
public class Principal {
    public static void main(String[] args) {
        Messenger messenger = new Messenger();
        Hilo t1 = new Hilo(messenger);
        Hilo t2 = new Hilo(messenger);
        t1.start();
        t2.start();
    }
}
The messenger class:
public class Messenger {
    private String msg = "hello";

    synchronized public void sendMessage() {
        System.out.println(msg + " from " + Thread.currentThread().getName());
    }
}
And the thread class:
public class Hilo extends Thread {
    private Messenger messenger;

    public Hilo(Messenger messenger) {
        this.messenger = messenger;
    }

    @Override
    public void run() {
        while (true) {
            messenger.sendMessage();
        }
    }
}
I had this output:
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
...
But I was expected this:
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
...
I've been thinking about it, but I can't understand the failure.
Please share your thoughts.
Thanks in advance.
Your resulting output got about 18 cycles done with about 3 context switches. Your "expected" output got 14 cycles done with 14 context switches. It seems the behavior you got is much better than you expected.
The question is why you would expect such inefficient behavior. Alternation is about the worst possible performance given that more context switches are needed, more caches are blown out, and so on. No sensible implementation would do that if it could find any way to avoid it.
Generally speaking, you want to keep a thread running as long as possible because context switches have cost. A good implementation balances performance with other priorities, sure, but it doesn't give up performance for no good reason.
Always use timestamps to verify order of happening
Actually, System.out.println makes a poor mechanism for testing concurrency. The calls do not actually get output in the order they were made. In other words, never rely on the order of appearance in System.out as representing the actual order of happening.
You can see this behavior by including calls to Instant.now() or System.nanoTime(). I suggest always adding such calls in almost any kind of testing/debugging where order matters. If you look carefully at the microseconds, you will see later items appearing on your console before earlier items.
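For example, an event can carry its own timestamp (a small sketch, using java.time.Instant):
// Record the moment of the event itself, not the moment of printing.
String line = Instant.now() + " " + Thread.currentThread().getName() + " hello";
System.out.println(line); // this line may appear out of order, but the embedded timestamp is authoritative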
Even better suggestion: Put your outputs into a thread-safe collection. Dump to console at end of your test.
Executor service
In modern Java, we rarely need to address the Thread class directly.
Instead, use an executor service. The service is backed by a pool of one or more threads. The service handles creating and recreating threads as needed, depending on its promised behavior.
Write your task as a Runnable, an object with a run method, or as a Callable with a call method.
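A minimal sketch of both task types submitted to an executor service:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class Tasks {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Runnable task = () -> System.out.println("run: no result");  // Runnable: a run method, no return value
        Callable<String> job = () -> "call: returns a result";       // Callable: a call method with a return value
        pool.submit(task);
        Future<String> future = pool.submit(job);
        System.out.println(future.get()); // blocks until the Callable completes
        pool.shutdown();
    }
}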
Example code
Here is my revised version of your code.
Our singleton Messenger class. One instance to be shared across two threads.
Tip: Call Thread#getId rather than Thread#getName to identify a thread. Virtual threads in the future Project Loom may lack names by default.
package work.basil.example.order;

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class Messenger
{
    final private String msg = "hello";
    final List < String > outputs = new ArrayList <>( 100 ); // Need not be thread-safe for this demo, as we only touch it from within our `synchronized` method `sendMessage`.

    synchronized public void sendMessage ( )
    {
        String output = this.msg + " from thread id # " + Thread.currentThread().getId() + " at " + Instant.now();
        this.outputs.add( output );
    }
}
Our Runnable task class. It keeps hold of a Messenger object to be used on each run execution.
Tip: Rather than running endlessly as seen in your code with while ( true ), write while ( ! Thread.interrupted() ) to run until the interrupted flag has been set for that thread. The ExecutorService#shutdownNow method will likely set that flag for us, enabling our threads to shut themselves down.
package work.basil.example.order;

import java.util.Objects;

public class Hilo implements Runnable
{
    // Member fields
    final private String id;
    final private Messenger messenger;

    // Constructors
    public Hilo ( final String id , final Messenger messenger )
    {
        this.id = id;
        this.messenger = Objects.requireNonNull( messenger );
    }

    @Override
    public void run ( )
    {
        while ( ! Thread.interrupted() )
        {
            this.messenger.sendMessage();
        }
    }
}
An app class to run our demo.
package work.basil.example.order;

import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class App
{
    public static void main ( String[] args )
    {
        App app = new App();
        app.demo();
    }

    private void demo ( )
    {
        Messenger messenger = new Messenger();
        ExecutorService executorService = Executors.newFixedThreadPool( 2 );
        System.out.println( "INFO - Submitting tasks. " + Instant.now() );
        executorService.submit( new Hilo( "Alice" , messenger ) );
        executorService.submit( new Hilo( "Bob" , messenger ) );
        executorService.shutdown();
        try
        {
            // Wait a while for existing tasks to terminate.
            if ( ! executorService.awaitTermination( 15 , TimeUnit.MILLISECONDS ) )
            {
                executorService.shutdownNow(); // Set "interrupted" flag on threads currently executing tasks.
                // Wait a while for tasks to respond to the interrupted flag being set.
                if ( ! executorService.awaitTermination( 1 , TimeUnit.SECONDS ) )
                    System.err.println( "WARN - executorService did not terminate." );
            }
        }
        catch ( InterruptedException e ) { e.printStackTrace(); }
        System.out.println( "INFO - Done with demo. Results array appears next. " + Instant.now() );
        int nthOutput = 0;
        for ( String output : messenger.outputs )
        {
            nthOutput++;
            System.out.println( "output " + nthOutput + " = " + output );
        }
    }
}
When run on my Intel Mac mini with six real cores and no Hyper-Threading, using early-access Java 17, I see dozens of outputs at a time per thread. Notice in this sample below how the first 3 outputs are from thread ID 14, followed by outputs 4-71 all from thread ID 15.
As the Answer by David Schwartz explains, letting a thread run a while is usually more efficient.
INFO - Submitting tasks. 2021-03-23T02:46:58.490916Z
INFO - Done with demo. Results array appears next. 2021-03-23T02:46:58.527018Z
output 1 = hello from thread id # 14 at 2021-03-23T02:46:58.509450Z
output 2 = hello from thread id # 14 at 2021-03-23T02:46:58.522884Z
output 3 = hello from thread id # 14 at 2021-03-23T02:46:58.522923Z
output 4 = hello from thread id # 15 at 2021-03-23T02:46:58.522956Z
output 5 = hello from thread id # 15 at 2021-03-23T02:46:58.523011Z
output 6 = hello from thread id # 15 at 2021-03-23T02:46:58.523041Z
output 7 = hello from thread id # 15 at 2021-03-23T02:46:58.523077Z
output 8 = hello from thread id # 15 at 2021-03-23T02:46:58.523106Z
output 9 = hello from thread id # 15 at 2021-03-23T02:46:58.523134Z
output 10 = hello from thread id # 15 at 2021-03-23T02:46:58.523165Z
output 11 = hello from thread id # 15 at 2021-03-23T02:46:58.523197Z
output 12 = hello from thread id # 15 at 2021-03-23T02:46:58.523227Z
output 13 = hello from thread id # 15 at 2021-03-23T02:46:58.523254Z
output 14 = hello from thread id # 15 at 2021-03-23T02:46:58.523282Z
output 15 = hello from thread id # 15 at 2021-03-23T02:46:58.523312Z
output 16 = hello from thread id # 15 at 2021-03-23T02:46:58.523343Z
output 17 = hello from thread id # 15 at 2021-03-23T02:46:58.523381Z
output 18 = hello from thread id # 15 at 2021-03-23T02:46:58.523410Z
output 19 = hello from thread id # 15 at 2021-03-23T02:46:58.523436Z
output 20 = hello from thread id # 15 at 2021-03-23T02:46:58.523466Z
output 21 = hello from thread id # 15 at 2021-03-23T02:46:58.523495Z
output 22 = hello from thread id # 15 at 2021-03-23T02:46:58.523522Z
output 23 = hello from thread id # 15 at 2021-03-23T02:46:58.523550Z
output 24 = hello from thread id # 15 at 2021-03-23T02:46:58.523583Z
output 25 = hello from thread id # 15 at 2021-03-23T02:46:58.523612Z
output 26 = hello from thread id # 15 at 2021-03-23T02:46:58.523640Z
output 27 = hello from thread id # 15 at 2021-03-23T02:46:58.523668Z
output 28 = hello from thread id # 15 at 2021-03-23T02:46:58.523696Z
output 29 = hello from thread id # 15 at 2021-03-23T02:46:58.523760Z
output 30 = hello from thread id # 15 at 2021-03-23T02:46:58.523798Z
output 31 = hello from thread id # 15 at 2021-03-23T02:46:58.523828Z
output 32 = hello from thread id # 15 at 2021-03-23T02:46:58.523858Z
output 33 = hello from thread id # 15 at 2021-03-23T02:46:58.523883Z
output 34 = hello from thread id # 15 at 2021-03-23T02:46:58.523915Z
output 35 = hello from thread id # 15 at 2021-03-23T02:46:58.523943Z
output 36 = hello from thread id # 15 at 2021-03-23T02:46:58.523971Z
output 37 = hello from thread id # 15 at 2021-03-23T02:46:58.523996Z
output 38 = hello from thread id # 15 at 2021-03-23T02:46:58.524020Z
output 39 = hello from thread id # 15 at 2021-03-23T02:46:58.524049Z
output 40 = hello from thread id # 15 at 2021-03-23T02:46:58.524077Z
output 41 = hello from thread id # 15 at 2021-03-23T02:46:58.524102Z
output 42 = hello from thread id # 15 at 2021-03-23T02:46:58.524128Z
output 43 = hello from thread id # 15 at 2021-03-23T02:46:58.524156Z
output 44 = hello from thread id # 15 at 2021-03-23T02:46:58.524181Z
output 45 = hello from thread id # 15 at 2021-03-23T02:46:58.524212Z
output 46 = hello from thread id # 15 at 2021-03-23T02:46:58.524239Z
output 47 = hello from thread id # 15 at 2021-03-23T02:46:58.524262Z
output 48 = hello from thread id # 15 at 2021-03-23T02:46:58.524284Z
output 49 = hello from thread id # 15 at 2021-03-23T02:46:58.524308Z
output 50 = hello from thread id # 15 at 2021-03-23T02:46:58.524336Z
output 51 = hello from thread id # 15 at 2021-03-23T02:46:58.524359Z
output 52 = hello from thread id # 15 at 2021-03-23T02:46:58.524381Z
output 53 = hello from thread id # 15 at 2021-03-23T02:46:58.524405Z
output 54 = hello from thread id # 15 at 2021-03-23T02:46:58.524428Z
output 55 = hello from thread id # 15 at 2021-03-23T02:46:58.524454Z
output 56 = hello from thread id # 15 at 2021-03-23T02:46:58.524477Z
output 57 = hello from thread id # 15 at 2021-03-23T02:46:58.524499Z
output 58 = hello from thread id # 15 at 2021-03-23T02:46:58.524521Z
output 59 = hello from thread id # 15 at 2021-03-23T02:46:58.524544Z
output 60 = hello from thread id # 15 at 2021-03-23T02:46:58.524570Z
output 61 = hello from thread id # 15 at 2021-03-23T02:46:58.524591Z
output 62 = hello from thread id # 15 at 2021-03-23T02:46:58.524613Z
output 63 = hello from thread id # 15 at 2021-03-23T02:46:58.524634Z
output 64 = hello from thread id # 15 at 2021-03-23T02:46:58.524659Z
output 65 = hello from thread id # 15 at 2021-03-23T02:46:58.524685Z
output 66 = hello from thread id # 15 at 2021-03-23T02:46:58.524710Z
output 67 = hello from thread id # 15 at 2021-03-23T02:46:58.524731Z
output 68 = hello from thread id # 15 at 2021-03-23T02:46:58.524752Z
output 69 = hello from thread id # 15 at 2021-03-23T02:46:58.524780Z
output 70 = hello from thread id # 15 at 2021-03-23T02:46:58.524801Z
output 71 = hello from thread id # 15 at 2021-03-23T02:46:58.524826Z
output 72 = hello from thread id # 14 at 2021-03-23T02:46:58.524852Z
output 73 = hello from thread id # 14 at 2021-03-23T02:46:58.524902Z
output 74 = hello from thread id # 14 at 2021-03-23T02:46:58.524929Z
output 75 = hello from thread id # 14 at 2021-03-23T02:46:58.524954Z
output 76 = hello from thread id # 14 at 2021-03-23T02:46:58.524975Z
output 77 = hello from thread id # 14 at 2021-03-23T02:46:58.524998Z
output 78 = hello from thread id # 14 at 2021-03-23T02:46:58.525021Z
output 79 = hello from thread id # 14 at 2021-03-23T02:46:58.525042Z
output 80 = hello from thread id # 14 at 2021-03-23T02:46:58.525075Z
output 81 = hello from thread id # 14 at 2021-03-23T02:46:58.525095Z
output 82 = hello from thread id # 14 at 2021-03-23T02:46:58.525115Z
output 83 = hello from thread id # 14 at 2021-03-23T02:46:58.525138Z
output 84 = hello from thread id # 14 at 2021-03-23T02:46:58.525159Z
output 85 = hello from thread id # 14 at 2021-03-23T02:46:58.525194Z
output 86 = hello from thread id # 14 at 2021-03-23T02:46:58.525215Z
output 87 = hello from thread id # 14 at 2021-03-23T02:46:58.525241Z
output 88 = hello from thread id # 14 at 2021-03-23T02:46:58.525277Z
output 89 = hello from thread id # 14 at 2021-03-23T02:46:58.525298Z
output 90 = hello from thread id # 14 at 2021-03-23T02:46:58.525319Z
output 91 = hello from thread id # 14 at 2021-03-23T02:46:58.525339Z
output 92 = hello from thread id # 14 at 2021-03-23T02:46:58.525359Z
output 93 = hello from thread id # 14 at 2021-03-23T02:46:58.525381Z
output 94 = hello from thread id # 14 at 2021-03-23T02:46:58.525401Z
output 95 = hello from thread id # 14 at 2021-03-23T02:46:58.525422Z
output 96 = hello from thread id # 14 at 2021-03-23T02:46:58.525452Z
output 97 = hello from thread id # 14 at 2021-03-23T02:46:58.525474Z
output 98 = hello from thread id # 14 at 2021-03-23T02:46:58.525496Z
output 99 = hello from thread id # 14 at 2021-03-23T02:46:58.525515Z
output 100 = hello from thread id # 14 at 2021-03-23T02:46:58.525533Z
output 101 = hello from thread id # 14 at 2021-03-23T02:46:58.525555Z
output 102 = hello from thread id # 14 at 2021-03-23T02:46:58.525581Z
output 103 = hello from thread id # 14 at 2021-03-23T02:46:58.525603Z
output 104 = hello from thread id # 14 at 2021-03-23T02:46:58.525625Z
output 105 = hello from thread id # 14 at 2021-03-23T02:46:58.525645Z
output 106 = hello from thread id # 14 at 2021-03-23T02:46:58.525664Z
output 107 = hello from thread id # 14 at 2021-03-23T02:46:58.525686Z
output 108 = hello from thread id # 14 at 2021-03-23T02:46:58.525705Z
output 109 = hello from thread id # 14 at 2021-03-23T02:46:58.525723Z
output 110 = hello from thread id # 14 at 2021-03-23T02:46:58.525741Z
output 111 = hello from thread id # 14 at 2021-03-23T02:46:58.525758Z
output 112 = hello from thread id # 14 at 2021-03-23T02:46:58.525783Z
output 113 = hello from thread id # 14 at 2021-03-23T02:46:58.525801Z
output 114 = hello from thread id # 14 at 2021-03-23T02:46:58.525818Z
output 115 = hello from thread id # 14 at 2021-03-23T02:46:58.525837Z
output 116 = hello from thread id # 14 at 2021-03-23T02:46:58.525855Z
output 117 = hello from thread id # 14 at 2021-03-23T02:46:58.525875Z
output 118 = hello from thread id # 14 at 2021-03-23T02:46:58.525897Z
output 119 = hello from thread id # 14 at 2021-03-23T02:46:58.525913Z
output 120 = hello from thread id # 15 at 2021-03-23T02:46:58.525931Z
output 121 = hello from thread id # 15 at 2021-03-23T02:46:58.525965Z
output 122 = hello from thread id # 15 at 2021-03-23T02:46:58.526002Z
output 123 = hello from thread id # 15 at 2021-03-23T02:46:58.526023Z
output 124 = hello from thread id # 15 at 2021-03-23T02:46:58.526050Z
output 125 = hello from thread id # 15 at 2021-03-23T02:46:58.526075Z
output 126 = hello from thread id # 15 at 2021-03-23T02:46:58.526095Z
output 127 = hello from thread id # 15 at 2021-03-23T02:46:58.526135Z
output 128 = hello from thread id # 15 at 2021-03-23T02:46:58.526169Z
output 129 = hello from thread id # 15 at 2021-03-23T02:46:58.526233Z
output 130 = hello from thread id # 15 at 2021-03-23T02:46:58.526260Z
output 131 = hello from thread id # 15 at 2021-03-23T02:46:58.526279Z
output 132 = hello from thread id # 15 at 2021-03-23T02:46:58.526297Z
output 133 = hello from thread id # 15 at 2021-03-23T02:46:58.526315Z
output 134 = hello from thread id # 15 at 2021-03-23T02:46:58.526335Z
output 135 = hello from thread id # 15 at 2021-03-23T02:46:58.526352Z
output 136 = hello from thread id # 15 at 2021-03-23T02:46:58.526370Z
output 137 = hello from thread id # 15 at 2021-03-23T02:46:58.526389Z
output 138 = hello from thread id # 15 at 2021-03-23T02:46:58.526405Z
output 139 = hello from thread id # 15 at 2021-03-23T02:46:58.526424Z
output 140 = hello from thread id # 15 at 2021-03-23T02:46:58.526441Z
output 141 = hello from thread id # 14 at 2021-03-23T02:46:58.526465Z
output 142 = hello from thread id # 14 at 2021-03-23T02:46:58.526500Z
output 143 = hello from thread id # 14 at 2021-03-23T02:46:58.526524Z
output 144 = hello from thread id # 14 at 2021-03-23T02:46:58.526552Z
output 145 = hello from thread id # 14 at 2021-03-23T02:46:58.526570Z
output 146 = hello from thread id # 14 at 2021-03-23T02:46:58.526588Z
output 147 = hello from thread id # 14 at 2021-03-23T02:46:58.526605Z
output 148 = hello from thread id # 14 at 2021-03-23T02:46:58.526621Z
output 149 = hello from thread id # 14 at 2021-03-23T02:46:58.526642Z
output 150 = hello from thread id # 14 at 2021-03-23T02:46:58.526658Z
output 151 = hello from thread id # 14 at 2021-03-23T02:46:58.526674Z
output 152 = hello from thread id # 14 at 2021-03-23T02:46:58.526696Z
output 153 = hello from thread id # 15 at 2021-03-23T02:46:58.526715Z
Your expectation seems to be that the thread scheduler will give equal, round-robin time slices of liveliness to each thread.
The Java language, however, offers no guarantees with respect to thread scheduling, leaving that responsibility instead to the underlying operating system. If the threads in your application aren't performing time-sensitive work, you can more or less file this away as an implementation detail and just assume the scheduler will try to give some fair distribution of running time to each thread (assuming, of course, the thread logic itself doesn't starve the other runnable threads).
Note also that you can assign a priority to threads, but again, this doesn't offer any guarantee of execution order and is only declarative of your intent, since the scheduling is ultimately left up to the underlying OS.
The Java thread scheduling algorithm decides which thread should be running at any given point in time, and it is only concerned with threads in the Runnable state.
Now comes the role of thread priority. Java threads can have an integer value from 1 to 10 as their priority, which you can set with threadName.setPriority(value).
If not specified, a thread takes the priority value 5 by default.
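For example (a small sketch):
Thread t = new Thread(() -> { /* some work */ });
System.out.println(t.getPriority()); // prints 5, i.e. Thread.NORM_PRIORITY, by default
t.setPriority(Thread.MAX_PRIORITY);  // request priority 10; still only a hint to the scheduler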
So, in your case, both threads have the same priority value of 5.
By the Java scheduling rule, the highest-priority thread is the one running at any given point in time. But this rule is not guaranteed, for a few reasons (e.g., the JVM tries to avoid starvation).
Since you have two threads at the same priority level, the Java thread scheduler picks one of them (say t1) and executes it. The other thread (t2) gets the chance to execute when one of the following happens:
t1 completes whatever is inside its run method and competes again to execute, but you can't guarantee the next chance will be t2's (this is what's happening in your case);
t1 calls Thread.yield(), giving up the processor to another thread of the same priority, i.e. t2 (but again, this can't be guaranteed);
t1 is preempted by a higher-priority thread;
the system uses time slicing and the time slice expires (again, no guarantee that t2 will be next).
If you really want the desired 0,1,0,1 "fair" output here, you can add logic to the synchronized method in the Messenger class. Something like this:
class Messenger {
    private String msg = "hello";
    private boolean state; // state variable: false = Thread-0's turn, true = Thread-1's turn

    synchronized public void sendMessage() {
        while (state == false) {
            if (!Thread.currentThread().getName().equals("Thread-0")) { // if not Thread-0, go to the wait queue
                try {
                    wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            } else { // if Thread-0, print and flip the state variable
                System.out.println(msg + " from " + Thread.currentThread().getName());
                state = true;
                notifyAll();
            }
        }
        if (!Thread.currentThread().getName().equals("Thread-1")) { // if not Thread-1, go to the wait queue
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } else { // if Thread-1, print and flip the state variable
            System.out.println(msg + " from " + Thread.currentThread().getName());
            state = false;
            notifyAll();
        }
    }
}

H2O anomaly per_feature = TRUE java.lang.OutOfMemoryError: Java heap space

I run H2O anomaly detection with per_feature = TRUE, which results in a Java heap space error. In some other posts about this error message, people suggest using h2o.remove(df) to release the used memory. However, in my case I don't have any loop, and it seems there is nothing I can remove to free up memory.
Here is my code:
library(h2o)
h2o.init(min_mem_size = "10G", max_mem_size = "15G")
data.hex <- as.h2o(data)
x <- names(data.hex)
random_seed <- 42

# Deep learning model
print("Deep learning model begins ...")
model.dl = h2o.deeplearning(x = x,
                            training_frame = data.hex,
                            autoencoder = TRUE,
                            activation = "Tanh",
                            hidden = c(5, 5, 5, 5, 5),
                            mini_batch_size = 64,
                            epochs = 100,
                            stopping_rounds = 15,
                            variable_importances = TRUE,
                            seed = random_seed)

# Calculating anomaly per feature
print('Calculating anomaly per feature ...')
errors_per_feature <- h2o.anomaly(model.dl, data.hex, per_feature = TRUE) # anomaly detection algorithm
print('Converting from H2O frame to dataframe ...')
errors1_per_feature <- as.data.frame(errors_per_feature) # convert back to a data frame
Here is the detailed error message:
[1] "Deep learning model begins ..."
|======================================================================| 100%
[1] "Calculating anomaly per feature ..."
ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://localhost:54321/3/Predictions/models/DeepLearning_model_R_1594826474037_2/frames/Accesses_sid_a71f_1)
water.util.DistributedException
[1] "DistributedException from localhost/127.0.0.1:54321: 'Java heap space', caused by java.lang.OutOfMemoryError: Java heap space"
[2] " water.MRTask.getResult(MRTask.java:494)"
[3] " water.MRTask.getResult(MRTask.java:502)"
[4] " water.MRTask.doAll(MRTask.java:397)"
[5] " water.MRTask.doAll(MRTask.java:403)"
[6] " hex.deeplearning.DeepLearningModel.scoreAutoEncoder(DeepLearningModel.java:761)"
[7] " water.api.ModelMetricsHandler.predict(ModelMetricsHandler.java:469)"
[8] " java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)"
[9] " java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"
[10] " java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)"
[11] " java.base/java.lang.reflect.Method.invoke(Method.java:567)"
[12] " water.api.Handler.handle(Handler.java:60)"
[13] " water.api.RequestServer.serve(RequestServer.java:470)"
[14] " water.api.RequestServer.doGeneric(RequestServer.java:301)"
[15] " water.api.RequestServer.doPost(RequestServer.java:227)"
[16] " javax.servlet.http.HttpServlet.service(HttpServlet.java:755)"
[17] " javax.servlet.http.HttpServlet.service(HttpServlet.java:848)"
[18] " org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)"
[19] " org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)"
[20] " org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)"
[21] " org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)"
[22] " org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)"
[23] " org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)"
[24] " org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)"
[25] " org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)"
[26] " water.webserver.jetty8.Jetty8ServerAdapter$LoginHandler.handle(Jetty8ServerAdapter.java:119)"
[27] " org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)"
[28] " org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)"
[29] " org.eclipse.jetty.server.Server.handle(Server.java:370)"
[30] " org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)"
[31] " org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)"
[32] " org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:984)"
[33] " org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1045)"
[34] " org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)"
[35] " org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:236)"
[36] " org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)"
[37] " org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)"
[38] " org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)"
[39] " org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)"
[40] " java.base/java.lang.Thread.run(Thread.java:830)"
[41] "Caused by:java.lang.OutOfMemoryError: Java heap space"
Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page, :
ERROR MESSAGE:
DistributedException from localhost/127.0.0.1:54321: 'Java heap space'
Calls: h2o.anomaly -> .h2o.__remoteSend -> .h2o.doSafeREST
Execution halted
R and H2O version:
H2O cluster version: 3.30.0.6
H2O cluster total nodes: 1
H2O cluster total memory: 13.43 GB
H2O cluster total cores: 16
H2O cluster allowed cores: 16
H2O cluster healthy: TRUE
R Version: R version 3.6.3 (2020-02-29)
I have 16 GB of memory on my macOS.
There are 6 variables (columns) in data: 5 categorical variables and 1 numeric variable. The number of unique values in the 5 categorical variables is 17, 49, 52, 85 and 5032, respectively. The number of rows is ~500k. The data file size is 44 MB (before encoding within H2O).
What can I do in my case to resolve the issue? Please let me know if there is any other information I can provide. Thanks for your help!
[cutting and pasting my response to the h2ostream mailing list here too...]
I suspect the large number of categorical levels is causing the memory to blow up.
Try removing that variable and seeing if it at least completes.
If it does, try re-binning it into a smaller number of levels somehow.

Reactor: Expand a ParallelFlux

I have a collection of items that needs to be expanded, so I chose Reactor for its reactive capabilities, since the expansion requires IO operations.
Here is a piece of working code:
public Flux<Item> expand(List<Item> unprocessedItems) {
    return Flux.fromIterable(unprocessedItems)
            .expandDeep(this::expandItem);
}
Note that this::expandItem is a blocking operation (multiple database queries, some computation, ...).
Now I would like this expansion to be parallel, but as far as I know .expand() and .expandDeep() are only members of the Flux class, not the ParallelFlux class. I tried adding .publishOn() and .subscribeOn() before the .expand() call, but without luck.
It's my first time using Reactor, but I don't see any technical issue preventing a parallel expansion. Is there any way to do it? Is the API missing, or am I missing something?
Yes, you are right: ParallelFlux has no .expand() or .expandDeep() methods.
But you can go another way: create an additional inner Publisher that has the expand method and pass it through your ParallelFlux, like this:
public static void main(String[] args) {
    Function<Node, Flux<Node>> expander =
            node -> Flux.fromIterable(node.children);

    List<Node> roots = createTestNodes();
    Flux.fromIterable(roots)
            .parallel(4)
            .runOn(Schedulers.parallel())
            .flatMap(node -> Flux.just(node).expandDeep(expander))
            .doOnNext(i -> System.out.println("Time: " + System.currentTimeMillis() + " thread: " + Thread.currentThread().getName() + " value: " + i))
            .sequential()
            .subscribe();
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("finished");
}
My test data:
static final class Node {
    final String name;
    final List<Node> children;

    Node(String name, Node... nodes) {
        this.name = name;
        this.children = new ArrayList<>();
        children.addAll(Arrays.asList(nodes));
    }

    @Override
    public String toString() {
        return name;
    }
}
static List<Node> createTestNodes() {
    return new Node("root",
            new Node("1",
                    new Node("11")),
            new Node("2",
                    new Node("21"),
                    new Node("22",
                            new Node("221"))),
            new Node("3",
                    new Node("31"),
                    new Node("32",
                            new Node("321")),
                    new Node("33",
                            new Node("331"),
                            new Node("332",
                                    new Node("3321")))),
            new Node("4",
                    new Node("41"),
                    new Node("42",
                            new Node("421")),
                    new Node("43",
                            new Node("431"),
                            new Node("432",
                                    new Node("4321"))),
                    new Node("44",
                            new Node("441"),
                            new Node("442",
                                    new Node("4421")),
                            new Node("443",
                                    new Node("4431"),
                                    new Node("4432"))))
    ).children;
}
And result:
Time: 1549296674522 thread: parallel-4 value: 4
Time: 1549296674523 thread: parallel-4 value: 41
Time: 1549296674523 thread: parallel-2 value: 2
Time: 1549296674523 thread: parallel-2 value: 21
Time: 1549296674523 thread: parallel-3 value: 3
Time: 1549296674523 thread: parallel-3 value: 31
Time: 1549296674523 thread: parallel-1 value: 1
Time: 1549296674523 thread: parallel-1 value: 11
Time: 1549296674525 thread: parallel-2 value: 22
Time: 1549296674525 thread: parallel-2 value: 221
Time: 1549296674526 thread: parallel-3 value: 32
Time: 1549296674526 thread: parallel-3 value: 321
Time: 1549296674526 thread: parallel-3 value: 33
Time: 1549296674526 thread: parallel-3 value: 331
Time: 1549296674526 thread: parallel-3 value: 332
Time: 1549296674526 thread: parallel-3 value: 3321
Time: 1549296674526 thread: parallel-4 value: 42
Time: 1549296674526 thread: parallel-4 value: 421
Time: 1549296674526 thread: parallel-4 value: 43
Time: 1549296674526 thread: parallel-4 value: 431
Time: 1549296674526 thread: parallel-4 value: 432
Time: 1549296674526 thread: parallel-4 value: 4321
Time: 1549296674527 thread: parallel-4 value: 44
Time: 1549296674527 thread: parallel-4 value: 441
Time: 1549296674527 thread: parallel-4 value: 442
Time: 1549296674527 thread: parallel-4 value: 4421
Time: 1549296674528 thread: parallel-4 value: 443
Time: 1549296674528 thread: parallel-4 value: 4431
Time: 1549296674528 thread: parallel-4 value: 4432
As you can see, the expander worked on parallel threads.
Here is another example, based on the one given by YauhenBalykin:
public static void main(String[] args) {
    Function<Node, Flux<Node>> expander =
            node -> Flux.fromIterable(node.children)
                    .subscribeOn(Schedulers.parallel());

    List<Node> roots = createTestNodes();
    Flux.fromIterable(roots)
            .expand(expander)
            .doOnNext(i -> System.out.println("Time: " + System.currentTimeMillis() + " thread: " + Thread.currentThread().getName() + " value: " + i))
            .subscribe();
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("finished");
}
Test data: the same Node class and createTestNodes() helper as in the previous answer.
Result:
Time: 1636182895717 thread: main value: 1
Time: 1636182895754 thread: main value: 2
Time: 1636182895754 thread: main value: 3
Time: 1636182895754 thread: main value: 4
Time: 1636182895761 thread: parallel-1 value: 11
Time: 1636182895761 thread: parallel-2 value: 21
Time: 1636182895761 thread: parallel-2 value: 22
Time: 1636182895762 thread: parallel-3 value: 31
Time: 1636182895762 thread: parallel-3 value: 32
Time: 1636182895762 thread: parallel-3 value: 33
Time: 1636182895762 thread: parallel-4 value: 41
Time: 1636182895762 thread: parallel-4 value: 42
Time: 1636182895762 thread: parallel-4 value: 43
Time: 1636182895762 thread: parallel-4 value: 44
Time: 1636182895764 thread: parallel-7 value: 221
Time: 1636182895764 thread: parallel-9 value: 321
Time: 1636182895764 thread: parallel-10 value: 331
Time: 1636182895765 thread: parallel-10 value: 332
Time: 1636182895765 thread: parallel-12 value: 421
Time: 1636182895765 thread: parallel-1 value: 431
Time: 1636182895765 thread: parallel-1 value: 432
Time: 1636182895766 thread: parallel-2 value: 441
Time: 1636182895766 thread: parallel-2 value: 442
Time: 1636182895766 thread: parallel-2 value: 443
Time: 1636182895766 thread: parallel-6 value: 3321
Time: 1636182895767 thread: parallel-9 value: 4321
Time: 1636182895767 thread: parallel-11 value: 4421
Time: 1636182895767 thread: parallel-12 value: 4431
Time: 1636182895767 thread: parallel-12 value: 4432
finished

Execution of a Postgres SQL SELECT statement slows down by a factor of 10 after 8 attempts

We have a database with 15,000,000 entries, and executing a SELECT like
SELECT technical_id, attribute_a, attribute_b, attribute_c FROM test WHERE ( attribute_a_fltr @> (?::character varying[]) OR attribute_a_fltr = '{n/a}') AND ( attribute_b_fltr @> (?::character varying[]) OR attribute_b_fltr = '{n/a}') AND ( attribute_c = ? OR attribute_c = 0)
slows down dramatically after 9 tries. Here are the results:
Iteration: 0 Entries: 593 time :6931
Iteration: 1 Entries: 593 time :7879
Iteration: 2 Entries: 593 time :8721
Iteration: 3 Entries: 593 time :9490
Iteration: 4 Entries: 593 time :10240
Iteration: 5 Entries: 593 time :11016
Iteration: 6 Entries: 593 time :11736
Iteration: 7 Entries: 593 time :12461
Iteration: 8 Entries: 593 time :13168
Iteration: 9 Entries: 593 time :152329
Iteration: 10 Entries: 593 time :290717
Iteration: 11 Entries: 593 time :435933
Iteration: 12 Entries: 593 time :567401
Iteration: 13 Entries: 593 time :695307
Iteration: 14 Entries: 593 time :835853
Here is the Java code:
Connection connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/test-db", props);
PreparedStatement prepStatement = connection.prepareStatement(FILTER_7);
prepStatement.setString(1, "{AAAAAAIAAAAAIAAAAAAAAgAAAAAgAAAAAAACAAAAACAAAAAAAAIAAgAAIAAgAAAAAgACAAAgACAAAAAAAAIA}");
prepStatement.setString(2, "{gAAAAAAQAAAAAAIAAABAAAAAAAgAAAAAAQAAACAAAAAABAAAAIAAAAAAEAAAAAACAAAAQAAAAgAIAEAAAAAA}");
prepStatement.setInt(3, 1979);

long t0 = System.currentTimeMillis(); // start time; the printed times below are cumulative since t0
long iter = 0;
while (true) {
    ResultSet resultSet = prepStatement.executeQuery();
    long count = 0;
    while (resultSet.next()) {
        ++count;
    }
    System.out.println("Iteration: " + iter + " Entries: " + count + " time :" + (System.currentTimeMillis() - t0));
    ++iter;
}
