Hi, I'm trying to understand my code. I have used the synchronized keyword on a method so that only one thread uses it at a time.
The main class:
public class Principal {
    public static void main(String[] args) {
        Messenger messenger = new Messenger();
        Hilo t1 = new Hilo(messenger);
        Hilo t2 = new Hilo(messenger);
        t1.start();
        t2.start();
    }
}
The messenger class:
public class Messenger {
    private String msg = "hello";
    synchronized public void sendMessage() {
        System.out.println(msg + " from " + Thread.currentThread().getName());
    }
}
And the thread class:
public class Hilo extends Thread {
    private Messenger messenger;
    public Hilo(Messenger messenger) {
        this.messenger = messenger;
    }
    @Override
    public void run() {
        while (true) {
            messenger.sendMessage();
        }
    }
}
I had this output:
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-1
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
hello from Thread-0
...
But I expected this:
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
hello from Thread-1
hello from Thread-0
...
I've been thinking about it, but I can't figure out where it fails.
Please share your thoughts.
Thanks in advance.
Your resulting output got about 18 cycles done with about 3 context switches. Your "expected" output got 14 cycles done with 14 context switches. It seems the behavior you got is much better than you expected.
The question is why you would expect such inefficient behavior. Alternation is about the worst possible performance given that more context switches are needed, more caches are blown out, and so on. No sensible implementation would do that if it could find any way to avoid it.
Generally speaking, you want to keep a thread running as long as possible because context switches have cost. A good implementation balances performance with other priorities, sure, but it doesn't give up performance for no good reason.
Always use timestamps to verify order of happening
Actually, System.out.println makes a poor mechanism for testing concurrency. The calls do not necessarily get output in the order they were made. In other words, never rely on the order of appearance in System.out as representing the actual order of events.
You can see this behavior by including calls to Instant.now() or System.nanoTime(). I suggest always adding such calls in almost any kind of testing/debugging where order matters. If you look carefully at the microseconds, you will see later items appearing on your console before earlier items.
Even better suggestion: Put your outputs into a thread-safe collection. Dump them to the console at the end of your test.
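For example, a minimal sketch using a concurrent queue, just one of several suitable thread-safe collections (imports from java.util and java.util.concurrent):

Queue < String > outputs = new ConcurrentLinkedQueue <>(); // Thread-safe; no external locking needed.
outputs.add( "hello from " + Thread.currentThread().getName() + " at " + Instant.now() );
// … at the end of the test, on a single thread:
outputs.forEach( System.out :: println );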
Executor service
In modern Java, we rarely need to address the Thread class directly.
Instead, use an executor service. The service is backed by a pool of one or more threads. The service handles creating and recreating threads as needed, depending on its promised behavior.
Write your task as a Runnable, an object with a run method, or a Callable, an object with a call method.
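For example, a minimal sketch of both flavors using java.util.concurrent (the variable names here are just illustrative):

ExecutorService pool = Executors.newSingleThreadExecutor();
pool.submit( ( ) -> System.out.println( "running a Runnable" ) ); // Runnable: no return value.
Future < String > result = pool.submit( ( ) -> "a Callable result" ); // Callable: its value comes back via a Future.
pool.shutdown();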
Example code
Here is my revised version of your code.
Our singleton Messenger class. One instance to be shared across two threads.
Tip: Call Thread#getId rather than Thread#getName to identify a thread. Virtual threads in the future Project Loom may lack names by default.
package work.basil.example.order;

import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Messenger
{
    final private String msg = "hello";
    final List < String > outputs = new ArrayList <>( 100 ); // Need not be thread-safe for this demo, as we only touch it from within our `synchronized` method `sendMessage`.

    synchronized public void sendMessage ( )
    {
        String output = this.msg + " from thread id # " + Thread.currentThread().getId() + " at " + Instant.now();
        this.outputs.add( output );
    }
}
Our Runnable task class. It keeps hold of a Messenger object to be used on each run execution.
Tip: Rather than running endlessly as seen in your code with while ( true ), write while ( ! Thread.interrupted() ) to run until the interrupted flag has been set for that thread. The ExecutorService#shutdownNow method will likely set that flag for us, enabling our threads to shut themselves down.
package work.basil.example.order;

import java.util.Objects;

public class Hilo implements Runnable
{
    // Member fields
    final private String id;
    final private Messenger messenger;

    // Constructors
    public Hilo ( final String id , final Messenger messenger )
    {
        this.id = id;
        this.messenger = Objects.requireNonNull( messenger );
    }

    @Override
    public void run ( )
    {
        while ( ! Thread.interrupted() )
        {
            this.messenger.sendMessage();
        }
    }
}
An app class to exercise our demo.
package work.basil.example.order;

import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class App
{
    public static void main ( String[] args )
    {
        App app = new App();
        app.demo();
    }

    private void demo ( )
    {
        Messenger messenger = new Messenger();
        ExecutorService executorService = Executors.newFixedThreadPool( 2 );
        System.out.println( "INFO - Submitting tasks. " + Instant.now() );
        executorService.submit( new Hilo( "Alice" , messenger ) );
        executorService.submit( new Hilo( "Bob" , messenger ) );
        executorService.shutdown();
        try
        {
            // Wait a while for existing tasks to terminate.
            if ( ! executorService.awaitTermination( 15 , TimeUnit.MILLISECONDS ) )
            {
                executorService.shutdownNow(); // Set "interrupted" flag on threads currently executing tasks.
                // Wait a while for tasks to respond to the interrupted flag being set.
                if ( ! executorService.awaitTermination( 1 , TimeUnit.SECONDS ) )
                    System.err.println( "WARN - executorService did not terminate." );
            }
        }
        catch ( InterruptedException e ) { e.printStackTrace(); }
        System.out.println( "INFO - Done with demo. Results array appears next. " + Instant.now() );
        int nthOutput = 0;
        for ( String output : messenger.outputs )
        {
            nthOutput++;
            System.out.println( "output " + nthOutput + " = " + output );
        }
    }
}
When run on my Mac mini Intel with six real cores and no Hyper-Threading, using early-access Java 17, I see dozens of outputs at a time per thread. Notice in this sample below how the first 3 outputs are from thread ID 14, followed by outputs 4-71 all being from thread ID 15.
As the Answer by David Schwartz explains, letting a thread run a while is usually more efficient.
INFO - Submitting tasks. 2021-03-23T02:46:58.490916Z
INFO - Done with demo. Results array appears next. 2021-03-23T02:46:58.527018Z
output 1 = hello from thread id # 14 at 2021-03-23T02:46:58.509450Z
output 2 = hello from thread id # 14 at 2021-03-23T02:46:58.522884Z
output 3 = hello from thread id # 14 at 2021-03-23T02:46:58.522923Z
output 4 = hello from thread id # 15 at 2021-03-23T02:46:58.522956Z
output 5 = hello from thread id # 15 at 2021-03-23T02:46:58.523011Z
output 6 = hello from thread id # 15 at 2021-03-23T02:46:58.523041Z
output 7 = hello from thread id # 15 at 2021-03-23T02:46:58.523077Z
output 8 = hello from thread id # 15 at 2021-03-23T02:46:58.523106Z
output 9 = hello from thread id # 15 at 2021-03-23T02:46:58.523134Z
output 10 = hello from thread id # 15 at 2021-03-23T02:46:58.523165Z
output 11 = hello from thread id # 15 at 2021-03-23T02:46:58.523197Z
output 12 = hello from thread id # 15 at 2021-03-23T02:46:58.523227Z
output 13 = hello from thread id # 15 at 2021-03-23T02:46:58.523254Z
output 14 = hello from thread id # 15 at 2021-03-23T02:46:58.523282Z
output 15 = hello from thread id # 15 at 2021-03-23T02:46:58.523312Z
output 16 = hello from thread id # 15 at 2021-03-23T02:46:58.523343Z
output 17 = hello from thread id # 15 at 2021-03-23T02:46:58.523381Z
output 18 = hello from thread id # 15 at 2021-03-23T02:46:58.523410Z
output 19 = hello from thread id # 15 at 2021-03-23T02:46:58.523436Z
output 20 = hello from thread id # 15 at 2021-03-23T02:46:58.523466Z
output 21 = hello from thread id # 15 at 2021-03-23T02:46:58.523495Z
output 22 = hello from thread id # 15 at 2021-03-23T02:46:58.523522Z
output 23 = hello from thread id # 15 at 2021-03-23T02:46:58.523550Z
output 24 = hello from thread id # 15 at 2021-03-23T02:46:58.523583Z
output 25 = hello from thread id # 15 at 2021-03-23T02:46:58.523612Z
output 26 = hello from thread id # 15 at 2021-03-23T02:46:58.523640Z
output 27 = hello from thread id # 15 at 2021-03-23T02:46:58.523668Z
output 28 = hello from thread id # 15 at 2021-03-23T02:46:58.523696Z
output 29 = hello from thread id # 15 at 2021-03-23T02:46:58.523760Z
output 30 = hello from thread id # 15 at 2021-03-23T02:46:58.523798Z
output 31 = hello from thread id # 15 at 2021-03-23T02:46:58.523828Z
output 32 = hello from thread id # 15 at 2021-03-23T02:46:58.523858Z
output 33 = hello from thread id # 15 at 2021-03-23T02:46:58.523883Z
output 34 = hello from thread id # 15 at 2021-03-23T02:46:58.523915Z
output 35 = hello from thread id # 15 at 2021-03-23T02:46:58.523943Z
output 36 = hello from thread id # 15 at 2021-03-23T02:46:58.523971Z
output 37 = hello from thread id # 15 at 2021-03-23T02:46:58.523996Z
output 38 = hello from thread id # 15 at 2021-03-23T02:46:58.524020Z
output 39 = hello from thread id # 15 at 2021-03-23T02:46:58.524049Z
output 40 = hello from thread id # 15 at 2021-03-23T02:46:58.524077Z
output 41 = hello from thread id # 15 at 2021-03-23T02:46:58.524102Z
output 42 = hello from thread id # 15 at 2021-03-23T02:46:58.524128Z
output 43 = hello from thread id # 15 at 2021-03-23T02:46:58.524156Z
output 44 = hello from thread id # 15 at 2021-03-23T02:46:58.524181Z
output 45 = hello from thread id # 15 at 2021-03-23T02:46:58.524212Z
output 46 = hello from thread id # 15 at 2021-03-23T02:46:58.524239Z
output 47 = hello from thread id # 15 at 2021-03-23T02:46:58.524262Z
output 48 = hello from thread id # 15 at 2021-03-23T02:46:58.524284Z
output 49 = hello from thread id # 15 at 2021-03-23T02:46:58.524308Z
output 50 = hello from thread id # 15 at 2021-03-23T02:46:58.524336Z
output 51 = hello from thread id # 15 at 2021-03-23T02:46:58.524359Z
output 52 = hello from thread id # 15 at 2021-03-23T02:46:58.524381Z
output 53 = hello from thread id # 15 at 2021-03-23T02:46:58.524405Z
output 54 = hello from thread id # 15 at 2021-03-23T02:46:58.524428Z
output 55 = hello from thread id # 15 at 2021-03-23T02:46:58.524454Z
output 56 = hello from thread id # 15 at 2021-03-23T02:46:58.524477Z
output 57 = hello from thread id # 15 at 2021-03-23T02:46:58.524499Z
output 58 = hello from thread id # 15 at 2021-03-23T02:46:58.524521Z
output 59 = hello from thread id # 15 at 2021-03-23T02:46:58.524544Z
output 60 = hello from thread id # 15 at 2021-03-23T02:46:58.524570Z
output 61 = hello from thread id # 15 at 2021-03-23T02:46:58.524591Z
output 62 = hello from thread id # 15 at 2021-03-23T02:46:58.524613Z
output 63 = hello from thread id # 15 at 2021-03-23T02:46:58.524634Z
output 64 = hello from thread id # 15 at 2021-03-23T02:46:58.524659Z
output 65 = hello from thread id # 15 at 2021-03-23T02:46:58.524685Z
output 66 = hello from thread id # 15 at 2021-03-23T02:46:58.524710Z
output 67 = hello from thread id # 15 at 2021-03-23T02:46:58.524731Z
output 68 = hello from thread id # 15 at 2021-03-23T02:46:58.524752Z
output 69 = hello from thread id # 15 at 2021-03-23T02:46:58.524780Z
output 70 = hello from thread id # 15 at 2021-03-23T02:46:58.524801Z
output 71 = hello from thread id # 15 at 2021-03-23T02:46:58.524826Z
output 72 = hello from thread id # 14 at 2021-03-23T02:46:58.524852Z
output 73 = hello from thread id # 14 at 2021-03-23T02:46:58.524902Z
output 74 = hello from thread id # 14 at 2021-03-23T02:46:58.524929Z
output 75 = hello from thread id # 14 at 2021-03-23T02:46:58.524954Z
output 76 = hello from thread id # 14 at 2021-03-23T02:46:58.524975Z
output 77 = hello from thread id # 14 at 2021-03-23T02:46:58.524998Z
output 78 = hello from thread id # 14 at 2021-03-23T02:46:58.525021Z
output 79 = hello from thread id # 14 at 2021-03-23T02:46:58.525042Z
output 80 = hello from thread id # 14 at 2021-03-23T02:46:58.525075Z
output 81 = hello from thread id # 14 at 2021-03-23T02:46:58.525095Z
output 82 = hello from thread id # 14 at 2021-03-23T02:46:58.525115Z
output 83 = hello from thread id # 14 at 2021-03-23T02:46:58.525138Z
output 84 = hello from thread id # 14 at 2021-03-23T02:46:58.525159Z
output 85 = hello from thread id # 14 at 2021-03-23T02:46:58.525194Z
output 86 = hello from thread id # 14 at 2021-03-23T02:46:58.525215Z
output 87 = hello from thread id # 14 at 2021-03-23T02:46:58.525241Z
output 88 = hello from thread id # 14 at 2021-03-23T02:46:58.525277Z
output 89 = hello from thread id # 14 at 2021-03-23T02:46:58.525298Z
output 90 = hello from thread id # 14 at 2021-03-23T02:46:58.525319Z
output 91 = hello from thread id # 14 at 2021-03-23T02:46:58.525339Z
output 92 = hello from thread id # 14 at 2021-03-23T02:46:58.525359Z
output 93 = hello from thread id # 14 at 2021-03-23T02:46:58.525381Z
output 94 = hello from thread id # 14 at 2021-03-23T02:46:58.525401Z
output 95 = hello from thread id # 14 at 2021-03-23T02:46:58.525422Z
output 96 = hello from thread id # 14 at 2021-03-23T02:46:58.525452Z
output 97 = hello from thread id # 14 at 2021-03-23T02:46:58.525474Z
output 98 = hello from thread id # 14 at 2021-03-23T02:46:58.525496Z
output 99 = hello from thread id # 14 at 2021-03-23T02:46:58.525515Z
output 100 = hello from thread id # 14 at 2021-03-23T02:46:58.525533Z
output 101 = hello from thread id # 14 at 2021-03-23T02:46:58.525555Z
output 102 = hello from thread id # 14 at 2021-03-23T02:46:58.525581Z
output 103 = hello from thread id # 14 at 2021-03-23T02:46:58.525603Z
output 104 = hello from thread id # 14 at 2021-03-23T02:46:58.525625Z
output 105 = hello from thread id # 14 at 2021-03-23T02:46:58.525645Z
output 106 = hello from thread id # 14 at 2021-03-23T02:46:58.525664Z
output 107 = hello from thread id # 14 at 2021-03-23T02:46:58.525686Z
output 108 = hello from thread id # 14 at 2021-03-23T02:46:58.525705Z
output 109 = hello from thread id # 14 at 2021-03-23T02:46:58.525723Z
output 110 = hello from thread id # 14 at 2021-03-23T02:46:58.525741Z
output 111 = hello from thread id # 14 at 2021-03-23T02:46:58.525758Z
output 112 = hello from thread id # 14 at 2021-03-23T02:46:58.525783Z
output 113 = hello from thread id # 14 at 2021-03-23T02:46:58.525801Z
output 114 = hello from thread id # 14 at 2021-03-23T02:46:58.525818Z
output 115 = hello from thread id # 14 at 2021-03-23T02:46:58.525837Z
output 116 = hello from thread id # 14 at 2021-03-23T02:46:58.525855Z
output 117 = hello from thread id # 14 at 2021-03-23T02:46:58.525875Z
output 118 = hello from thread id # 14 at 2021-03-23T02:46:58.525897Z
output 119 = hello from thread id # 14 at 2021-03-23T02:46:58.525913Z
output 120 = hello from thread id # 15 at 2021-03-23T02:46:58.525931Z
output 121 = hello from thread id # 15 at 2021-03-23T02:46:58.525965Z
output 122 = hello from thread id # 15 at 2021-03-23T02:46:58.526002Z
output 123 = hello from thread id # 15 at 2021-03-23T02:46:58.526023Z
output 124 = hello from thread id # 15 at 2021-03-23T02:46:58.526050Z
output 125 = hello from thread id # 15 at 2021-03-23T02:46:58.526075Z
output 126 = hello from thread id # 15 at 2021-03-23T02:46:58.526095Z
output 127 = hello from thread id # 15 at 2021-03-23T02:46:58.526135Z
output 128 = hello from thread id # 15 at 2021-03-23T02:46:58.526169Z
output 129 = hello from thread id # 15 at 2021-03-23T02:46:58.526233Z
output 130 = hello from thread id # 15 at 2021-03-23T02:46:58.526260Z
output 131 = hello from thread id # 15 at 2021-03-23T02:46:58.526279Z
output 132 = hello from thread id # 15 at 2021-03-23T02:46:58.526297Z
output 133 = hello from thread id # 15 at 2021-03-23T02:46:58.526315Z
output 134 = hello from thread id # 15 at 2021-03-23T02:46:58.526335Z
output 135 = hello from thread id # 15 at 2021-03-23T02:46:58.526352Z
output 136 = hello from thread id # 15 at 2021-03-23T02:46:58.526370Z
output 137 = hello from thread id # 15 at 2021-03-23T02:46:58.526389Z
output 138 = hello from thread id # 15 at 2021-03-23T02:46:58.526405Z
output 139 = hello from thread id # 15 at 2021-03-23T02:46:58.526424Z
output 140 = hello from thread id # 15 at 2021-03-23T02:46:58.526441Z
output 141 = hello from thread id # 14 at 2021-03-23T02:46:58.526465Z
output 142 = hello from thread id # 14 at 2021-03-23T02:46:58.526500Z
output 143 = hello from thread id # 14 at 2021-03-23T02:46:58.526524Z
output 144 = hello from thread id # 14 at 2021-03-23T02:46:58.526552Z
output 145 = hello from thread id # 14 at 2021-03-23T02:46:58.526570Z
output 146 = hello from thread id # 14 at 2021-03-23T02:46:58.526588Z
output 147 = hello from thread id # 14 at 2021-03-23T02:46:58.526605Z
output 148 = hello from thread id # 14 at 2021-03-23T02:46:58.526621Z
output 149 = hello from thread id # 14 at 2021-03-23T02:46:58.526642Z
output 150 = hello from thread id # 14 at 2021-03-23T02:46:58.526658Z
output 151 = hello from thread id # 14 at 2021-03-23T02:46:58.526674Z
output 152 = hello from thread id # 14 at 2021-03-23T02:46:58.526696Z
output 153 = hello from thread id # 15 at 2021-03-23T02:46:58.526715Z
Your expectation seems to be that the thread scheduler would give equal, round-robin time slices of liveliness to each thread.
The Java language, however, offers no guarantees with respect to thread scheduling, leaving that responsibility instead to the underlying operating system. If the threads in your application aren't performing time-sensitive work, you can sort of file this away as an implementation detail and just assume the scheduler is going to try to give some fair distribution of running time to each thread (assuming of course the thread logic itself doesn't starve the other runnable threads).
Note also that you can assign a priority to threads, but again, this doesn't offer any guarantee of execution order and is only declarative of your intent since the scheduling is ultimately left up to the underlying OS.
The Java thread scheduling algorithm decides which thread should be running at any given point in time, and it is only concerned with threads in the Runnable state.
Now comes the role of thread priority. A Java thread can have an integer value from 1 to 10 as its priority. You can set the priority of a thread with threadName.setPriority(value).
If not specified, a thread takes the priority value 5 by default.
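For example, a minimal sketch (the worker variable and its trivial task are just for illustration):

Thread worker = new Thread(() -> System.out.println("working"));
worker.setPriority(Thread.MAX_PRIORITY); // 10; the default is Thread.NORM_PRIORITY (5)
worker.start();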
So, in your case, both threads have the same priority value, 5.
As for the Java scheduling rule: at any given point in time, the highest-priority runnable thread is running. But even this rule is not guaranteed, for a few reasons (e.g., the JVM tries to avoid starvation).
Now, since you have 2 threads at the same priority level, the Java thread scheduler picks one thread (let's say t1) and executes it. The other thread (t2) will get the chance to execute when one of the following things happens:
t1 completes whatever is inside its run method and competes again for execution. But you can't guarantee the next chance will be t2's. (This is what's happening in your case.)
the running thread calls Thread.yield(), giving up the processor in favor of another thread of the same priority, i.e. t2 (but again, this can't always be guaranteed); see the sketch after this list.
t1 is preempted by a higher-priority thread.
The system uses time slicing and that specific time slice expires. (Again, no guarantee that t2 will be next.)
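As a minimal illustration of that yield hint inside your run() method (remember it is only a suggestion to the scheduler, not a guarantee of alternation):

while (true) {
    messenger.sendMessage();
    Thread.yield(); // suggest that another runnable thread of equal priority may run now
}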
If you really want the desired 0,1,0,1 "fair" output here, you can add logic to the synchronized method in the Messenger class. Something like this:
class Messenger {
    private String msg = "hello";
    private boolean state; // define a state variable

    synchronized public void sendMessage() {
        while (state == false) {
            if (!Thread.currentThread().getName().equals("Thread-0")) { // if not Thread-0, then go to the wait queue
                try {
                    wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            } else { // if Thread-0, print and change the state variable's value
                System.out.println(msg + " from " + Thread.currentThread().getName());
                state = true;
                notifyAll();
            }
        }
        if (!Thread.currentThread().getName().equals("Thread-1")) { // if not Thread-1, then go to the wait queue
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } else { // if Thread-1, print and change the state variable's value
            System.out.println(msg + " from " + Thread.currentThread().getName());
            state = false;
            notifyAll();
        }
    }
}
Related
I expect all invocations of the server to be processed in parallel, but that is not the case.
Here is a simple example.
RSocket version: 1.1.0
Server
public class ServerApp {
    private static final Logger log = LoggerFactory.getLogger(ServerApp.class);

    public static void main(String[] args) throws InterruptedException {
        RSocketServer.create(SocketAcceptor.forRequestResponse(payload ->
                Mono.fromCallable(() -> {
                    log.debug("Start of my business logic");
                    sleepSeconds(5);
                    return DefaultPayload.create("OK");
                })))
                .bind(WebsocketServerTransport.create(15000))
                .block();
        log.debug("Server started");
        TimeUnit.MINUTES.sleep(30);
    }

    private static void sleepSeconds(int sec) {
        try {
            TimeUnit.SECONDS.sleep(sec);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Client
public class ClientApp {
    private static final Logger log = LoggerFactory.getLogger(ClientApp.class);

    public static void main(String[] args) throws InterruptedException {
        RSocket client = RSocketConnector.create()
                .connect(WebsocketClientTransport.create(15000))
                .block();

        long start1 = System.currentTimeMillis();
        client.requestResponse(DefaultPayload.create("Request 1"))
                .doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start1))
                .subscribe();

        long start2 = System.currentTimeMillis();
        client.requestResponse(DefaultPayload.create("Request 2"))
                .doOnNext(r -> log.debug("finished within {}ms", System.currentTimeMillis() - start2))
                .subscribe();

        TimeUnit.SECONDS.sleep(20);
    }
}
In the client logs, we can see that both requests were sent at the same time, and both responses were received at the same time after 10 seconds (each request was processed in 5 seconds).
In the server logs, we can see that the requests executed sequentially, not in parallel.
Could you please help me to understand this behavior?
Why did we receive the first response after 10 seconds and not 5?
How do I create the server correctly if I want all requests to be processed in parallel?
If I replace Mono.fromCallable with Mono.fromFuture(CompletableFuture.supplyAsync(() -> myBusinessLogic(), executorService)), then it resolves 1.
If I replace Mono.fromCallable with Mono.delay(Duration.ZERO).map(ignore -> myBusinessLogic()), then it resolves 1. and 2.
If I replace Mono.fromCallable with Mono.create(sink -> sink.success(myBusinessLogic())), then it does not resolve my issues.
Client logs:
2021-07-16 10:39:46,880 DEBUG [reactor-tcp-nio-1] [/] - sending ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:46,952 DEBUG [main] [/] - sending ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:46,957 DEBUG [main] [/] - sending ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,043 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10120ms
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - receiving ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:57,046 DEBUG [reactor-tcp-nio-1] [/] - finished within 10094ms
Server Logs:
2021-07-16 10:39:46,965 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 0 Type: SETUP Flags: 0b0 Length: 56
Data:
2021-07-16 10:39:47,021 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 1 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 31 |Request 1 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:47,027 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:52,037 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 1 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - receiving ->
Frame => Stream ID: 3 Type: REQUEST_RESPONSE Flags: 0b0 Length: 15
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 52 65 71 75 65 73 74 20 32 |Request 2 |
+--------+-------------------------------------------------+----------------+
2021-07-16 10:39:52,038 DEBUG [reactor-http-nio-2] [/] - Start of my business logic
2021-07-16 10:39:57,039 DEBUG [reactor-http-nio-2] [/] - sending ->
Frame => Stream ID: 3 Type: NEXT_COMPLETE Flags: 0b1100000 Length: 8
Data:
+-------------------------------------------------+
| 0 1 2 3 4 5 6 7 8 9 a b c d e f |
+--------+-------------------------------------------------+----------------+
|00000000| 4f 4b |OK |
+--------+-------------------------------------------------+----------------+
You shouldn't mix asynchronous code like Reactive Mono operations with blocking code like
private static void sleepSeconds(int sec) {
    try {
        TimeUnit.SECONDS.sleep(sec);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
I suspect the central issue here is that a framework like rsocket-java doesn't want to run everything on new threads, at the cost of excessive context switching. So it generally relies on you to run long-running CPU or IO operations appropriately.
You should look at the various async delay operations instead: https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#delayElement-java.time.Duration-
If your delay is meant to simulate a long-running operation, then you should look at subscribing on a different scheduler, like https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html#boundedElastic--
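For instance, a minimal sketch of that second suggestion, keeping the fromCallable from the question but shifting the blocking work onto Reactor's bounded-elastic scheduler (reactor.core.scheduler.Schedulers):

Mono.fromCallable(() -> {
    log.debug("Start of my business logic");
    sleepSeconds(5); // the blocking call from the question
    return DefaultPayload.create("OK");
}).subscribeOn(Schedulers.boundedElastic()); // run the blocking work off the event-loop threads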
I'm trying to convert this CryptoJS code to Kotlin:
const hash = CryptoJS.HmacSHA256(message, key);
const signature = CryptoJS.enc.Hex.stringify(hash);
Here is the Kotlin code equivalent to the above snippet:
private fun generateSignature(key: String, payload: String): String {
    val algorithm = "HmacSHA256"
    return Mac.getInstance(algorithm)
        .apply { init(SecretKeySpec(key.toByteArray(), algorithm)) }
        .run { doFinal(payload.toByteArray()) }
        .let { HexUtils.toHexString(it) }
}
But it is not working at all. They generate different results: CryptoJS generates an array of bytes that has 8 positions, while the Java code generates an array of bytes that has 32 positions.
I don't know what I'm doing wrong. I need to make my Kotlin code work exactly like the JavaScript one.
Update: I can't change the JavaScript side. I have to do the exact same thing in Kotlin.
Update 2: Here is a test where the JS code and the Kotlin code generate different results.
Input:
key = 's21fk4vb-5415-46c7-aade-303dcf432bb4'
message = 'POST,/wallets/3323461f96-bdf3-4e03-bc93-7da1fb27aee7/withdraw/,1573148023809,{"amount":"1.0","bank":{"bank":"789","agency":"456","account":"12378","accountDigit":"6","name":"joao","taxId":"33206913098","holderType":"personal"}}'
Results with JS code:
Result of encrypt in bytes:
{sigBytes: 32, words: [8]}
sigBytes: 32
words: [8]
0: 2102759135
1: -196086391
2: -2099697915
3: -1620551271
4: 2463524
5: 1757965357
6: -1039993965
7: -1798822705
Bytes to Hex:
7d558edff44ff58982d927059f6859990025972468c86c2dc202f39394c824cf
Results with Kotlin code:
Result of encrypt in bytes:
{byte[32]#1126}
0 = 82
1 = -110
2 = -100
3 = -128
4 = -63
5 = 22
6 = -103
7 = -31
8 = 83
9 = -125
10 = -72
11 = 109
12 = -91
13 = -69
14 = 54
15 = -41
16 = 27
17 = -107
18 = -60
19 = -110
20 = -57
21 = -29
22 = -20
23 = -32
24 = -66
25 = 88
26 = 87
27 = -50
28 = -47
29 = -18
30 = -96
31 = 25
Bytes to Hex:
52929c80c11699e15383b86da5bb36d71b95c492c7e3ece0be5857ced1eea019
No SHA-256 hash can have only 8 byte positions. The output, as the name suggests, should be 256 bits, or 32 bytes. What I suspect is happening is that the input of stringify is presumed to be bytes, while CryptoJS functions return a WordArray of 32-bit words. As 8 * 32 = 256, this seems reasonable.
So I presume you can simply fix this by using a function on the WordArray instead, for instance hash.toString('hex').
I am new to Java programming. My HttpServer handles at most 4 requests simultaneously even though newFixedThreadPool is set to 28.
I tried to use newCachedThreadPool, but even that handles 4 requests at a time.
If I set newFixedThreadPool to 3, it works well.
server = HttpServer.create(new InetSocketAddress(port), 0);
server.createContext("/", new MyHandler());
server.setExecutor(Executors.newFixedThreadPool(28)); // pool of 28 worker threads
ThreadPoolExecutor t = (ThreadPoolExecutor)server.getExecutor();
LOG.info("Pool size " + t.getPoolSize() + " Max Pool size " + t.getMaximumPoolSize() + " Largest pool size " + t.getLargestPoolSize() + " Core pool size " + t.getCorePoolSize() + " actively executing threads " + t.getActiveCount() + " Completed task count " + t.getCompletedTaskCount() + " task count " + t.getTaskCount());
server.start();
static class MyHandler implements HttpHandler {
    public void handle(HttpExchange t) throws IOException {
        try {
            long threadId = Thread.currentThread().getId();
            ThreadPoolExecutor t1 = (ThreadPoolExecutor) server.getExecutor();
            LOG.info("Thread #" + threadId + " before : Pool size " + t1.getPoolSize() + " Max Pool size " + t1.getMaximumPoolSize() + " Largest pool size " + t1.getLargestPoolSize() + " Core pool size " + t1.getCorePoolSize() + " actively executing threads " + t1.getActiveCount() + " Completed task count " + t1.getCompletedTaskCount() + " task count " + t1.getTaskCount());
            LOG.info("Received request on thread #" + threadId);
            String response = h.processRequest(t.getRequestURI().toString());
            t.sendResponseHeaders(200, response.length());
            OutputStream os = t.getResponseBody();
            os.write(response.getBytes());
            os.close();
            LOG.info("Request processed on thread #" + threadId);
            LOG.info("Thread #" + threadId + " after : Pool size " + t1.getPoolSize() + " Max Pool size " + t1.getMaximumPoolSize() + " Largest pool size " + t1.getLargestPoolSize() + " Core pool size " + t1.getCorePoolSize() + " actively executing threads " + t1.getActiveCount() + " Completed task count " + t1.getCompletedTaskCount() + " task count " + t1.getTaskCount());
        } catch (Exception e) {
            LOG.warn("Exception. ", e);
        }
    }
}
public String processRequest(String req) throws InterruptedException {
    Thread.sleep(5000); // simulate 5 seconds of work
    return "response"; // placeholder; the real response body is elided here
}
Log Output
19/04/15 16:26:07 : Thread #99 before : Pool size 21 Max Pool size 28 Largest pool size 21 Core pool size 28 actively executing threads 1 Completed task count 20 task count 21
19/04/15 16:26:07 : Received request on thread #99
19/04/15 16:26:07 : Inside Handler Sr. No [1]
19/04/15 16:26:07 : Thread #100 before : Pool size 22 Max Pool size 28 Largest pool size 22 Core pool size 28 actively executing threads 2 Completed task count 20 task count 22
19/04/15 16:26:07 : Received request on thread #100
19/04/15 16:26:07 : Inside Handler Sr. No [2]
19/04/15 16:26:08 : Thread #101 before : Pool size 23 Max Pool size 28 Largest pool size 23 Core pool size 28 actively executing threads 3 Completed task count 20 task count 23
19/04/15 16:26:08 : Received request on thread #101
19/04/15 16:26:08 : Inside Handler Sr. No [3]
19/04/15 16:26:08 : Thread #102 before : Pool size 24 Max Pool size 28 Largest pool size 24 Core pool size 28 actively executing threads 4 Completed task count 20 task count 24
19/04/15 16:26:08 : Received request on thread #102
19/04/15 16:26:08 : Inside Handler Sr. No [4]
19/04/15 16:26:12 : Request processed on thread #99
19/04/15 16:26:12 : Thread #99 after : Pool size 24 Max Pool size 28 Largest pool size 24 Core pool size 28 actively executing threads 4 Completed task count 20 task count 24
19/04/15 16:26:12 : Thread #103 before : Pool size 25 Max Pool size 28 Largest pool size 25 Core pool size 28 actively executing threads 4 Completed task count 21 task count 25
19/04/15 16:26:12 : Received request on thread #103
19/04/15 16:26:12 : Inside Handler Sr. No [5]
19/04/15 16:26:12 : Request processed on thread #100
19/04/15 16:26:12 : Thread #100 after : Pool size 25 Max Pool size 28 Largest pool size 25 Core pool size 28 actively executing threads 4 Completed task count 21 task count 25
19/04/15 16:26:12 : Thread #104 before : Pool size 26 Max Pool size 28 Largest pool size 26 Core pool size 28 actively executing threads 4 Completed task count 22 task count 26
19/04/15 16:26:12 : Received request on thread #104
19/04/15 16:26:12 : Inside Handler Sr. No [6]
19/04/15 16:26:13 : Request processed on thread #101
19/04/15 16:26:13 : Thread #101 after : Pool size 26 Max Pool size 28 Largest pool size 26 Core pool size 28 actively executing threads 4 Completed task count 22 task count 26
19/04/15 16:26:13 : Thread #105 before : Pool size 27 Max Pool size 28 Largest pool size 27 Core pool size 28 actively executing threads 4 Completed task count 23 task count 27
19/04/15 16:26:13 : Received request on thread #105
19/04/15 16:26:13 : Inside Handler Sr. No [7]
19/04/15 16:26:13 : Request processed on thread #102
19/04/15 16:26:13 : Thread #102 after : Pool size 27 Max Pool size 28 Largest pool size 27 Core pool size 28 actively executing threads 4 Completed task count 23 task count 27
19/04/15 16:26:13 : Thread #106 before : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 24 task count 28
19/04/15 16:26:13 : Received request on thread #106
19/04/15 16:26:13 : Inside Handler Sr. No [8]
19/04/15 16:26:17 : Request processed on thread #103
19/04/15 16:26:17 : Thread #103 after : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 24 task count 28
19/04/15 16:26:17 : Thread #24 before : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 25 task count 29
19/04/15 16:26:17 : Received request on thread #24
19/04/15 16:26:17 : Inside Handler Sr. No [9]
19/04/15 16:26:17 : Request processed on thread #104
19/04/15 16:26:17 : Thread #104 after : Pool size 28 Max Pool size 28 Largest pool size 28 Core pool size 28 actively executing threads 4 Completed task count 25 task count 29
Why is it handling only 4 requests at a time? Am I doing something wrong?
Thanks
I am having trouble with my loop. If anyone could take a look and try to find where I'm going wrong, it would be awesome. I am reading from two different files and I want my code to loop through the entire files. So far it is only looping over the first 11 lines of the file.
package lab.pkg02;

import java.util.Scanner;
import java.io.*;

public class Lab02 {
    public static void main(String[] args) throws IOException {
        File usageFile;
        File historyFile;
        PrintWriter resultsFile;
        PrintWriter newHistoryFile;
        Scanner usageSC, historySC;
        String vin, make, model;
        int year, beginingOdo, endingOdo, currentGallons, currentGas,
            currentRepair, mpg, costPerMile, totalGas, totalRepair,
            currentMiles;

        //Display Report Heading to Report File
        resultsFile = new PrintWriter("reportfile.txt");
        resultsFile.printf("%-5s%10s%15s%12s%13s%16s%5s%16s%17s%20s\n", "VIN",
                "Vehicle Description", "Beginning Odo",
                "Ending Odo", "Current Gas", "Current Repair", "MPG",
                "Cost Per Mile", "Historical Gas", "Historical Repair");

        //Process Each Vehicle
        for (int cnt = 0; cnt < 15; cnt++) {
            //Get Vehicle Information from Usage File
            usageFile = new File("usage.txt");
            usageSC = new Scanner(usageFile);
            vin = usageSC.nextLine();
            year = usageSC.nextInt();
            usageSC.nextLine();
            make = usageSC.nextLine();
            model = usageSC.nextLine();
            beginingOdo = usageSC.nextInt();
            usageSC.nextLine();
            endingOdo = usageSC.nextInt();
            usageSC.nextLine();
            currentGallons = usageSC.nextInt();
            usageSC.nextLine();
            currentGas = usageSC.nextInt();
            usageSC.nextLine();
            currentRepair = usageSC.nextInt();
            usageSC.nextLine();
            mpg = usageSC.nextInt();
            usageSC.nextLine();
            costPerMile = usageSC.nextInt();
            usageSC.close();

            //Get Vehicle History from History File
            historyFile = new File("historyfile.txt");
            historySC = new Scanner(historyFile);
            vin = historySC.nextLine();
            totalGas = historySC.nextInt();
            historySC.nextLine();
            totalRepair = historySC.nextInt();
            historySC.nextLine();
            historySC.close();

            //Calculate Updated Vehicle Information
            currentMiles = endingOdo - beginingOdo;
            mpg = currentMiles / currentGallons;
            costPerMile = (currentGas + currentRepair) / currentMiles;
            totalGas = totalGas + currentGas;
            totalRepair = totalRepair + currentRepair;

            //Store Updated Vehicle Information to New History File
            newHistoryFile = new PrintWriter("newhistoryfile.txt");
            newHistoryFile.println(vin);
            newHistoryFile.println(totalGas);
            newHistoryFile.println(totalRepair);
            newHistoryFile.close();

            //Display Vehicle Summary Line to Report File
            resultsFile.printf("%-5s%10s%15s%12s%13s%16s%5s%16s%17s%20s\n", vin,
                    year, make, model, beginingOdo, endingOdo, currentGas, currentRepair, mpg,
                    costPerMile, totalGas, totalRepair);
            resultsFile.close();
        }
    }
}
Both files are posted below. I'm sure the issue with the loop is not because of the files but due to an error in the code.
****Usage File*****
1FTSW2BR8AEA51037
2017
Ford
Fiesta
12345
123456
200
2500
50
40
100
4S7AU2F966C091212
2016
Ford
Focus
2356
23567
80
150
10
30
101
1FTEX1EM9EFD29979
2015
Ford
Mustang
23
235
86
100
30
29
102
1XPVD09X5AD163651
2015
Ford
Escape
15000
235679
800
350
750
28
103
2G1WF5EK0B1163554
2014
Ford
Explorer
7854
12498
736
259
123
27
104
1GDP8C1Y7GV522436
2013
Audi
A6
5269
54697
456
2464
61431
26
104
1FMCU92709KC54353
2012
Audi
A8
123
3456
52
86
10
25
106
1GDHK44K59F125839
2011
Audi
TT
5689
46546
14
89
15
24
107
3GYFNBE38ES603704
2010
Audi
Q5
54875
646656
69
84
1000
23
108
SAJPX1148VC828077
2009
Audi
R8
1201
1209
213
1321
11000
25
109
JS2RF9A72C6152147
2008
Audi
A7
2589
36644
874
1511
110
41
111
JT2SK13E4S0334527
BMW
2007
i8
652
3664
856
151
11
26
110
1GTHC34K6KE580545
BMW
2006
X6
65
324
231
1636
11136
19
112
1FDNS24L0XHA16500
BMW
2005
X1
546
64654
2654
16354
112
21
113
2C3AA53G55H689466
BMW
2004
M4
1233
6464
264
1354
12
32
114
*****historyfile*******
1FTSW2BR8AEA51037
4500
150
4S7AU2F966C091212
2150
1000
1FTEX1EM9EFD29979
10000
15000
1XPVD09X5AD163651
3500
7500
2G1WF5EK0B1163554
2590
1230
1GDP8C1Y7GV522436
24640
614310
1FMCU92709KC54353
860
100
1GDHK44K59F125839
8909
150
3GYFNBE38ES603704
8408
10000
SAJPX1148VC828077
132107
110000
JS2RF9A72C6152147
151106
1100
JT2SK13E4S0334527
15105
110
1GTHC34K6KE580545
163604
111360
1FDNS24L0XHA16500
1635403
1120
2C3AA53G55H689466
135402
1201
From what I see, you are re-initializing
usageFile = new File("usage.txt");
usageSC = new Scanner(usageFile);
historyFile = new File ("historyfile.txt");
historySC = new Scanner(historyFile);
newHistoryFile = new PrintWriter("newhistoryfile.txt");
in every iteration of a loop that runs 15 times, and you close the scanners at the end of each iteration.
Move those outside the loop and it will work; also change nextLine() to next() to read the next strings from the usage file.
Your usage file also has empty lines after the 11th VIN.
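A minimal sketch of that restructuring, using the same file names as your code (the per-vehicle reading stays inside the loop):

Scanner usageSC = new Scanner(new File("usage.txt"));
Scanner historySC = new Scanner(new File("historyfile.txt"));
PrintWriter newHistoryFile = new PrintWriter("newhistoryfile.txt");
for (int cnt = 0; cnt < 15; cnt++) {
    // ...read one vehicle from usageSC and its history from historySC,
    // then write the updated history to newHistoryFile...
}
usageSC.close();
historySC.close();
newHistoryFile.close();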
Hi, I am new to Hadoop MapReduce programming. I have a requirement like the one below.
The larger file, i.e. the input file input.txt:
101 Vince 12000
102 James 33
103 Tony 32
104 John 25
105 Nataliya 19
106 Anna 20
107 Harold 29
And this is the smaller file, lookupfile.txt:
101 Vince 12000
102 James 10000
103 Tony 20000
104 John 25000
105 Nataliya 15000
Now what we want is to get those records which have a common ID number. So, in order to achieve this, use the smaller file as the lookup file and the larger file as the input file. The complete Java code and an explanation of each component are given below.
This is the result we should get after running the above code:
102 James 33 10000
103 Tony 32 20000
104 John 25 25000
105 Nataliya 19 15000
Code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Join extends Configured implements Tool
{
    public static class JoinMapper extends Mapper<LongWritable, Text, Text, Text>
    {
        Path[] cachefiles = new Path[0]; // To store the paths of the lookup files
        List<String> exEmployees = new ArrayList<String>(); // To store the data of the lookup files

        /********************Setup Method******************************************/
        @Override
        public void setup(Context context)
        {
            Configuration conf = context.getConfiguration();
            try
            {
                cachefiles = DistributedCache.getLocalCacheFiles(conf);
                BufferedReader reader = new BufferedReader(new FileReader(cachefiles[0].toString()));
                String line;
                while ((line = reader.readLine()) != null)
                {
                    exEmployees.add(line); // Data of the lookup file gets stored in the list object
                }
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
        /************************setup method ends***********************************************/

        /********************Map Method******************************************/
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException
        {
            String[] line = value.toString().split("\t");
            for (String e : exEmployees)
            {
                String[] listLine = e.toString().split("\t");
                if (line[0].equals(listLine[0]))
                {
                    context.write(new Text(line[0]), new Text(line[1] + "\t" + line[2] + "\t" + listLine[2]));
                }
            }
        } // map method ends
        /***********************************************************************/
    }

    /********************run Method******************************************/
    public int run(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "aggprog");
        job.setJarByClass(Join.class);
        DistributedCache.addCacheFile(new Path(args[0]).toUri(), job.getConfiguration());
        FileInputFormat.addInputPath(job, new Path(args[1]));
        FileOutputFormat.setOutputPath(job, new Path(args[2]));
        job.setMapperClass(JoinMapper.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        return (job.waitForCompletion(true) ? 0 : 1);
    }

    public static void main(String[] args) throws Exception
    {
        int ecode = ToolRunner.run(new Join(), args);
        System.exit(ecode);
    }
}
Execution Command :
case1:
hadoop jar '/home/cloudera/Desktop/DistributedCache.jar' Join My_Job/MultiInput_1/Input/input.txt My_Job/MultiInput_1/Input/smallerinput.txt My_Job/MultiInput_1/My_Output
case2:
hadoop jar '/home/cloudera/Desktop/DistributedCache.jar' Join My_Job/MultiInput_1/Input/input.txt My_Job/MultiInput_1/Input/smallerinput.txt My_Job/MultiInput_1/My_Output
I have tried the above two commands, but neither works. I don't know what the problem is or where it is; I am unable to execute the above code.
Finally, I tried the command below and it worked:
hadoop jar '/home/cloudera/Desktop/DistributedCache.jar' Join hdfs/Input/smallerfile.txt hdfs/Input/input.txt My_Job/MultiInput_1/MyOutput
I found my mistake. I was passing the large file as the lookup file and the small file as the input. When I reversed them it worked for me, but the output was not as expected.
Expected output is:
101 Vince 12000
102 James 33 10000
103 Tony 32 20000
104 John 25 25000
105 Nataliya 19 15000
106 Anna 20
107 Harold 29
But my output is:
101 Vince 12000
102 James 33 10000
103 Tony 32 20000
104 John 25 25000
105 Nataliya 19 15000
106 Anna 20
107 Harold 29
Can somebody help me?
Yes, user3475485, your files should be put in HDFS for this code to run. Or, because you are using GenericOptionsParser, you can use this format:
hadoop jar jarname.jar drivername -files file1,file2 which should work for you.
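For example, a rough sketch using your own jar and paths (adjust names as needed; note that with -files the lookup file is shipped to the distributed cache for you, so the driver would read it from the cache rather than from args[0]):

hadoop jar /home/cloudera/Desktop/DistributedCache.jar Join -files My_Job/MultiInput_1/Input/smallerinput.txt My_Job/MultiInput_1/Input/input.txt My_Job/MultiInput_1/My_Output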