The code below works: it connects to a given server (host, port) and reports the connection status.
What it does:
PollService implements the Callable interface, connects to a server (host, port), and returns the status.
Since this should happen periodically, it iterates over the HashMap entries in an infinite while(true) loop.
The problem: on the server side I see it takes 2 or 3 seconds for a connection to reach the handling thread, whereas a Runnable with a periodic implementation connects within 1 second. It looks like iterating the HashMap endlessly is the slow approach.
However, I cannot use Runnable, as it doesn't return the status of the connection, which I need to use later.
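For reference, the periodic Runnable variant I compared against looked roughly like this (a sketch only; since a Runnable returns nothing, the status has to be published into a shared map, and checkConnection() is a hypothetical stand-in for the actual socket check):
// Sketch: a periodic Runnable records the status in a shared map instead of returning it.
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
Map<ServiceType, Boolean> lastStatus = new ConcurrentHashMap<>();
for (ServiceType service : clientMonitorServicesMap.keySet()) {
    scheduler.scheduleAtFixedRate(
            () -> lastStatus.put(service, checkConnection(service)), // checkConnection(): hypothetical
            0, 1, TimeUnit.SECONDS);
}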
Below is the ServiceMonitor class (client) which connects to the server.
package org.example;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.stream.Collectors;
public class ServicesMonitor {
private ExecutorService scheduledExecutorService = null;
private static Logger logger = Logger.getLogger(ServicesMonitor.class.getName());
private final Map<ServiceType, List<ClientMonitorService>> clientMonitorServicesMap = new HashMap<>();
public void registerInterest(ClientMonitorService clientMonitorService) {
clientMonitorServicesMap.computeIfAbsent(clientMonitorService.getServiceToMonitor().getServiceType(), v -> new ArrayList<>()).add(clientMonitorService);
}
public Map<ServiceType, List<ClientMonitorService>> getClientMonitorServices() {
return clientMonitorServicesMap;
}
public void poll(){
//Observable.interval(1, TimeUnit.SECONDS).st
}
public void pollServices() {
scheduledExecutorService = Executors.newFixedThreadPool(clientMonitorServicesMap.size());
try {
while (true) {
clientMonitorServicesMap.forEach((k, v) -> {
Future<Boolean> val = scheduledExecutorService.submit(new PollService(k));
try {
boolean result = val.get();
System.out.println("service " + k.getHost() + ":" + k.getPort() + "status is " + result);
if (result) {
List<ClientMonitorService> list = v.stream().filter(a -> LocalDateTime.now().getSecond() % a.getServiceToMonitor().getFreqSec() == 0)
.collect(Collectors.toList());
list.stream().forEach(a -> System.out.println(a.getClientId()));
}
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
});
}
} catch (Exception e) {
logger.log(Level.SEVERE, e.getMessage());
} finally {
scheduledExecutorService.shutdown();
}
}
}
How can I improve the performance of this code by reducing the time it takes to connect to the server?
How can I improve this code?
After switching to get(1, TimeUnit.SECONDS) I started to see an improvement on the server side as well (requests reach the threads in less than 1 second), since we no longer wait more than 1 second on the client side.
while (true) {
clientMonitorServicesMap.forEach((k, v) -> {
Future<Boolean> val = scheduledExecutorService.submit(new PollService(k));
try {
boolean result = val.get(1, TimeUnit.SECONDS);
System.out.println("service " + k.getHost() + ":" + k.getPort() + "status is " + result);
if (result) {
List<ClientMonitorService> list = v.stream()
//.filter(a -> LocalDateTime.now().getSecond() % a.getServiceToMonitor().getFreqSec() == 0)
.collect(Collectors.toList());
list.stream().forEach(a -> System.out.println(a.getClientId()));
}
} catch (InterruptedException e) {
logger.log(Level.WARNING,"Interrupted -> " + k.getHost()+":"+k.getPort());
} catch (ExecutionException e) {
logger.log(Level.INFO,"ExecutionException exception -> "+ k.getHost()+":"+k.getPort());
} catch (TimeoutException e) {
logger.log(Level.INFO,"TimeoutException exception -> "+ k.getHost()+":"+k.getPort());
}
});
}
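A further improvement worth trying (a sketch, not tested against the original PollService): because each Future is awaited before the next task is submitted, the services are still polled one after another. Submitting every PollService first and only then collecting the results lets the pool open all connections concurrently:
while (true) {
    // Submit everything first so the connections proceed in parallel...
    Map<ServiceType, Future<Boolean>> pending = new HashMap<>();
    clientMonitorServicesMap.forEach((k, v) ->
            pending.put(k, scheduledExecutorService.submit(new PollService(k))));
    // ...then harvest the results, still bounding each wait.
    pending.forEach((k, future) -> {
        try {
            boolean result = future.get(1, TimeUnit.SECONDS);
            System.out.println("service " + k.getHost() + ":" + k.getPort() + " status is " + result);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException | TimeoutException e) {
            logger.log(Level.INFO, e.getClass().getSimpleName() + " -> " + k.getHost() + ":" + k.getPort());
        }
    });
}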
Is there a nice way to print the progress in a Kafka Streams app? I feel that my app is falling behind, and I want a nice way to show the progress of processing the events in my app.
Out of the box, not within the Streams API.
You're more than welcome to import methods that ConsumerGroupCommand.scala uses to get the group lag and calculate / print from there.
Or you can externally install a tool like Burrow or Remora, which have REST APIs for accessing lag information.
I wrote the following class to help me print the lag/progress easily:
package util;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListConsumerGroupOffsetsResult;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Function;
import java.util.stream.Collectors;
@Slf4j
public class LagLogger implements AutoCloseable {
private ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
private String topic;
private String consumerGroupName;
private int logDelayInMilliSeconds;
private Properties kafkaStreamsProperties;
private boolean closed;
private AdminClient adminClient;
public LagLogger(String topic, String consumerGroupName, Properties kafkaStreamProperties, int logDelayInMilliSeconds) {
this.topic = topic;
this.kafkaStreamsProperties = kafkaStreamProperties;
this.logDelayInMilliSeconds = logDelayInMilliSeconds;
this.consumerGroupName = consumerGroupName;
adminClient = AdminClient.create(LagLogger.this.kafkaStreamsProperties);
}
public class LagVisualizerTask implements AutoCloseable, Runnable {
public LagVisualizerTask() {
}
public void run() {
ListConsumerGroupOffsetsResult listConsumerGroupOffsetsResult = adminClient.listConsumerGroupOffsets(LagLogger.this.consumerGroupName);
// Current offsets.
Map<TopicPartition, OffsetAndMetadata> topicPartitionOffsetAndMetadataMap = null;
try {
topicPartitionOffsetAndMetadataMap = listConsumerGroupOffsetsResult.partitionsToOffsetAndMetadata().get();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
// all topic partitions.
Set<TopicPartition> topicPartitions = topicPartitionOffsetAndMetadataMap.keySet();
// list of end offsets for each partitions.
ListOffsetsResult listOffsetsResult = adminClient.listOffsets(topicPartitions.stream()
.collect(Collectors.toMap(Function.identity(), tp -> OffsetSpec.latest())));
StringBuilder stringBuilder = new StringBuilder();
stringBuilder.append(topic+": ");
for (var entry : topicPartitionOffsetAndMetadataMap.entrySet()) {
if (entry.getKey().topic().equals(LagLogger.this.topic)) {
long current_offset = entry.getValue().offset();
long end_offset = 0;
try {
end_offset = listOffsetsResult.partitionResult(entry.getKey()).get().offset();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
}
stringBuilder.append(current_offset);
stringBuilder.append(" --> ");
stringBuilder.append(end_offset);
stringBuilder.append(" ("+String.format("%.2f", ((double)current_offset/end_offset)*100) +"%)");
stringBuilder.append(" / ");
}
}
log.info(stringBuilder.toString());
}
public void close() {
closed = true;
}
}
public LagVisualizerTask startNewLagVisualizerTask() {
LagVisualizerTask lagVisualizerTask = new LagVisualizerTask();
scheduledExecutorService.scheduleWithFixedDelay(lagVisualizerTask,0, LagLogger.this.logDelayInMilliSeconds, TimeUnit.MILLISECONDS);
return lagVisualizerTask;
}
public void close() {
if (scheduledExecutorService != null) {
scheduledExecutorService.shutdownNow();
scheduledExecutorService = null;
}
}
}
Which can be used as follows:
LagLogger lagVisualizer = new LagLogger(INPUT_TOPIC_NAME,APPLICATION_ID,configuration.getKafkaStreamsProperties(),DELY_BETWEEN_LOGS);
lagVisualizer.startNewLagVisualizerTask();
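Since LagLogger implements AutoCloseable, a try-with-resources block guarantees the scheduler is shut down (a sketch; runTopology() is a hypothetical stand-in for whatever starts and blocks on the Streams app):
try (LagLogger lagVisualizer = new LagLogger(INPUT_TOPIC_NAME, APPLICATION_ID,
        configuration.getKafkaStreamsProperties(), DELY_BETWEEN_LOGS)) {
    lagVisualizer.startNewLagVisualizerTask();
    runTopology(); // hypothetical: run the Streams app until it finishes
} // close() shuts down the ScheduledExecutorService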
I'm writing a tool that will generate a high amount of HTTP calls against a webserver. At this moment I'm interested on how many requests can I make per second. I'm not interested now of the result of those requests.
I'm measuring the time spent to send 1k requests against google.com and I get 69 milliseconds:
but when I sniff the traffic with Wireshark I see that sending all the GET requests takes almost 4 seconds:
[screenshot: start of the calls]
[screenshot: end of the calls]
The tool was run from IntelliJ on Windows 10, i7 1.8 GHz, 32 GB of RAM.
My question is: why do I see this difference? Sending 1k HTTP GET requests should be quick, but it takes almost 4 seconds. What am I doing wrong here?
The code below is only for testing purposes and it's quite ugly, so bear with me. Also, I'm not very familiar with NIO.
import org.apache.commons.lang3.time.StopWatch;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.atomic.AtomicInteger;
public class UJPPHighTrafficClient {
public static final Logger logger = LoggerFactory.getLogger(UJPPHighTrafficClient.class);
public static final int iterations = 1000;
public static void main(String[] args) {
doStartClient();
}
private static void doStartClient() {
logger.info("starting the client");
UJPPHighTrafficExecutor executor = new UJPPHighTrafficExecutor();
StopWatch watch = new StopWatch();
watch.start();
for (int i = 0; i < iterations; i++) {
executor.run();
}
watch.stop();
logger.info("Run " + iterations + " executions in " + watch.getTime() + " milliseconds");
}
}
import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.ProtocolVersion;
import org.apache.http.concurrent.FutureCallback;
import org.apache.http.config.ConnectionConfig;
import org.apache.http.impl.nio.DefaultHttpClientIODispatch;
import org.apache.http.impl.nio.pool.BasicNIOConnPool;
import org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor;
import org.apache.http.impl.nio.reactor.IOReactorConfig;
import org.apache.http.message.BasicHttpEntityEnclosingRequest;
import org.apache.http.nio.protocol.*;
import org.apache.http.nio.reactor.ConnectingIOReactor;
import org.apache.http.nio.reactor.IOEventDispatch;
import org.apache.http.nio.reactor.IOReactorException;
import org.apache.http.protocol.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.atomic.AtomicInteger;
public class UJPPHighTrafficExecutor {
private final Logger logger = LoggerFactory.getLogger("debug");
public static ConnectingIOReactor requestsReactor = null;
private static BasicNIOConnPool clientConnectionPool = null;
public static HttpAsyncRequester clientRequester = null;
public static Thread runnerThread = null;
private static AtomicInteger counter = null;
public static final int cores = Runtime.getRuntime().availableProcessors() * 2;
public UJPPHighTrafficExecutor() {
counter = new AtomicInteger();
counter.set(0);
initializeConnectionManager();
}
public void initializeConnectionManager() {
try {
requestsReactor =
new DefaultConnectingIOReactor(IOReactorConfig.
custom().
setIoThreadCount(cores).
build());
clientConnectionPool = new BasicNIOConnPool(requestsReactor, ConnectionConfig.DEFAULT);
clientConnectionPool.setDefaultMaxPerRoute(cores);
clientConnectionPool.setMaxTotal(100);
clientRequester = initializeHttpClient(requestsReactor);
} catch (IOReactorException ex) {
logger.error(" initializeConnectionManager " + ex.getMessage());
}
}
private HttpAsyncRequester initializeHttpClient(final ConnectingIOReactor ioReactor) {
// Create HTTP protocol processing chain
HttpProcessor httpproc = HttpProcessorBuilder.create()
// Use standard client-side protocol interceptors
.add(new RequestContent(true)).
add(new RequestTargetHost()).
add(new RequestConnControl())
.add(new RequestExpectContinue(true)).
build();
// Create HTTP requester
HttpAsyncRequester requester = new HttpAsyncRequester(httpproc);
// Create client-side HTTP protocol handler
HttpAsyncRequestExecutor protocolHandler = new HttpAsyncRequestExecutor();
// Create client-side I/O event dispatch
final IOEventDispatch ioEventDispatch =
new DefaultHttpClientIODispatch(protocolHandler, ConnectionConfig.DEFAULT);
// Run the I/O reactor in a separate thread
runnerThread = new Thread("Client") {
@Override
public void run() {
try {
ioReactor.execute(ioEventDispatch);
} catch (InterruptedIOException ex) {
logger.error("Interrupted", ex);
} catch (IOException e) {
logger.error("I/O error", e);
} catch (Exception e) {
logger.error("Exception encountered in Client ", e.getMessage(), e);
}
logger.info("Client shutdown");
}
};
runnerThread.start();
return requester;
}
public void run() {
HttpHost httpHost = new HttpHost("google.com", 80, "http");
final HttpCoreContext coreContext = HttpCoreContext.create();
ProtocolVersion ver = new ProtocolVersion("HTTP", 1, 1);
BasicHttpEntityEnclosingRequest request = new BasicHttpEntityEnclosingRequest("GET", "/", ver);
clientRequester.execute(new BasicAsyncRequestProducer(httpHost, request), new BasicAsyncResponseConsumer(),
clientConnectionPool, coreContext,
// Handle HTTP response from a callback
new FutureCallback<HttpResponse>() {
@Override
public void completed(final HttpResponse response) {
logger.info("Completed " + response.toString());
checkCounter();
}
@Override
public void failed(final Exception ex) {
logger.info("Failed " + ex.getMessage());
checkCounter();
}
@Override
public void cancelled() {
logger.info("Cancelled ");
checkCounter();
}
});
}
private void checkCounter() {
counter.set(counter.get() + 1);
if (counter.get() == UJPPHighTrafficClient.iterations) {
try {
requestsReactor.shutdown();
} catch (Exception ex) {
}
}
}
}
Your code is timing how long it takes to set up 1000 iterations of the HTTP connection, not the time to complete those connections, many of which are still running 3-4 seconds later. To see a more accurate figure, put a local field t0 into UJPPHighTrafficExecutor:
public class UJPPHighTrafficExecutor {
long t0 = System.nanoTime();
...and then checkCounter() can print a time for completing all iterations:
private void checkCounter() {
counter.set(counter.get() + 1);
if (counter.get() == UJPPHighTrafficClient.iterations) {
try {
requestsReactor.shutdown();
} catch (Exception ex) {
}
long t1 = System.nanoTime();
System.out.println("ELAPSED MILLIS: ~"+TimeUnit.NANOSECONDS.toMillis(t1-t0));
}
}
This will print a much larger number for 1000 iterations:
ELAPSED MILLIS: ~xxxx
Note that counter.set(counter.get() + 1) is not a safe way to increment an AtomicInteger; remove that line and increment inside the if statement:
if (counter.incrementAndGet() == UJPPHighTrafficClient.iterations)
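If you also want the main method itself to report the figure, a CountDownLatch works (a sketch; it assumes the latch is made visible to the executor, e.g. passed into UJPPHighTrafficExecutor, and that each completed()/failed()/cancelled() callback calls done.countDown()):
// Sketch: block the main thread until every exchange has finished.
// (imports assumed: java.util.concurrent.CountDownLatch, java.util.concurrent.TimeUnit)
CountDownLatch done = new CountDownLatch(UJPPHighTrafficClient.iterations);
long start = System.nanoTime();
for (int i = 0; i < UJPPHighTrafficClient.iterations; i++) {
    executor.run(); // the UJPPHighTrafficExecutor from the question
}
done.await(); // blocks until every exchange has completed, failed, or been cancelled
System.out.println("ELAPSED MILLIS: ~"
        + TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));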
import java.util.Arrays;
import java.util.List;
import java.util.Random;
public class Main {
public static void main(String[] args) {
List<Integer> fullList = Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14);
List<Integer> toBeLast = Arrays.asList(9,10,11,12);
Random r = new Random();
fullList.parallelStream().filter(l -> !toBeLast.contains(l)).forEach(l -> {
System.out.println("L1 : " + l);
try {
Thread.sleep(Math.abs(r.nextLong() % 1000));
System.out.println(l);
}
catch(InterruptedException i) {
}
});
toBeLast.parallelStream().forEach(l -> {
System.out.println("L2 : " + l);
try {
Thread.sleep(Math.abs(r.nextLong() % 1000));
System.out.println(l);
}
catch(InterruptedException i) {
}
});
}
}
Expectation - complete 1-8 and 13-14, then start 9-12.
Each REST call triggers a shell script on the server which takes 15-90 seconds.
Actual - on the server, at one point I see the scripts for 2 & 11 running. I don't see the sysout for 2 yet, and there is no exception on the server or in the program.
I'm wondering how it was possible to trigger 11 before 2 completed?
Something is not right in the question. Here is the code that I wrote and in my example, the first stream always completes before the second stream.
import java.util.Arrays;
import java.util.List;
import java.util.Random;
public class Main {
public static void main(String[] args) {
List<Integer> l1 = Arrays.asList(1,2,3,4,5,6,7,8,9,10,11,12,13,14,1,2,3,4,5,6,7,8,9,10,11,12,13,14,1,2,3,4,5,6,7,8,9,10,11,12,13,14,1,2,3,4,5,6,7,8,9,10,11,12,13,14);
List<Integer> l2 = Arrays.asList(21,22,23,24,25,26,27,28,29,30,31,32,33,34,21,22,23,24,25,26,27,28,29,30,31,32,33,34,21,22,23,24,25,26,27,28,29,30,31,32,33,34,21,22,23,24,25,26,27,28,29,30,31,32,33,34);
Random r = new Random();
l1.parallelStream().forEach(e -> {
System.out.println("L1 : " + e);
try {
Thread.sleep(Math.abs(r.nextLong() % 1000));
}
catch(InterruptedException i) {
}
});
l2.parallelStream().forEach(e -> {
System.out.println("L2 : " + e);
try {
Thread.sleep(Math.abs(r.nextLong() % 1000));
}
catch(InterruptedException i) {
}
});
}
}
My guess is that you are using an HTTP client library that does the work in the background, and because of that the second stream starts before the first stream's calls have actually finished.
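If that is the case, the fix is to block until the first batch's calls have actually completed, not merely been dispatched. A sketch of the idea using Java 11's java.net.http client (the endpoint URL is hypothetical; substitute whatever client the real code uses):
// Imports assumed: java.net.URI, java.net.http.*, java.util.concurrent.CompletableFuture
HttpClient client = HttpClient.newHttpClient();
// Dispatch 1-8 and 13-14 and keep the futures.
List<CompletableFuture<?>> calls = fullList.stream()
        .filter(l -> !toBeLast.contains(l))
        .map(l -> client.sendAsync(
                HttpRequest.newBuilder(URI.create("http://server/run/" + l)).build(),
                HttpResponse.BodyHandlers.discarding()))
        .collect(Collectors.toList());
// Wait until those calls have really finished before triggering 9-12.
CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
toBeLast.forEach(l -> client.sendAsync(
        HttpRequest.newBuilder(URI.create("http://server/run/" + l)).build(),
        HttpResponse.BodyHandlers.discarding()));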
I have tried to reproduce the problem I am facing. My problem statement: a folder contains multiple files. I need to do a word count for each file and print the result. Each file should be processed in parallel (of course, there is a limit to the parallelism). I have written the following code to accomplish it, and it runs fine. The cluster has a MapR Spark installation, and spark.scheduler.mode = FIFO.
Q1 - is there a better way to accomplish the task mentioned above?
Q2 - I have observed that the application does not stop even when it has completed the word counting of the available files. I am unable to figure out how to deal with it.
package groupId.artifactId;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
public class Executor {
/**
* @param args
*/
public static void main(String[] args) {
final int threadPoolSize = 5;
SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker").set("spark.ui.port","0");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
List<Future> listOfFuture = new ArrayList<Future>();
for (int i = 0; i < 20; i++) {
if (listOfFuture.size() < threadPoolSize) {
FlexiWordCount flexiWordCount = new FlexiWordCount(jsc, i);
Future future = executor.submit(flexiWordCount);
listOfFuture.add(future);
} else {
boolean allFutureDone = false;
while (!allFutureDone) {
allFutureDone = checkForAllFuture(listOfFuture);
System.out.println("Threads not completed yet!");
try {
Thread.sleep(2000);//waiting for 2 sec, before next check
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
printFutureResult(listOfFuture);
System.out.println("printing of future done");
listOfFuture.clear();
System.out.println("future list got cleared");
}
}
try {
executor.awaitTermination(5, TimeUnit.MINUTES);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
private static void printFutureResult(List<Future> listOfFuture) {
Iterator<Future> iterateFuture = listOfFuture.iterator();
while (iterateFuture.hasNext()) {
Future tempFuture = iterateFuture.next();
try {
System.out.println("Future result " + tempFuture.get());
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (ExecutionException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
private static boolean checkForAllFuture(List<Future> listOfFuture) {
boolean status = true;
Iterator<Future> iterateFuture = listOfFuture.iterator();
while (iterateFuture.hasNext()) {
Future tempFuture = iterateFuture.next();
if (!tempFuture.isDone()) {
status = false;
break;
}
}
return status;
}
}
package groupId.artifactId;
import java.io.Serializable;
import java.util.Arrays;
import java.util.concurrent.Callable;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;
public class FlexiWordCount implements Callable<Object>,Serializable {
private static final long serialVersionUID = 1L;
private JavaSparkContext jsc;
private int fileId;
public FlexiWordCount(JavaSparkContext jsc, int fileId) {
super();
this.jsc = jsc;
this.fileId = fileId;
}
private static class Reduction implements Function2<Integer, Integer, Integer>{
@Override
public Integer call(Integer i1, Integer i2) {
return i1 + i2;
}
}
private static class KVPair implements PairFunction<String, String, Integer>{
@Override
public Tuple2<String, Integer> call(String paramT)
throws Exception {
return new Tuple2<String, Integer>(paramT, 1);
}
}
private static class Flatter implements FlatMapFunction<String, String>{
@Override
public Iterable<String> call(String s) {
return Arrays.asList(s.split(" "));
}
}
@Override
public Object call() throws Exception {
JavaRDD<String> jrd = jsc.textFile("/root/folder/experiment979/" + fileId +".txt");
System.out.println("inside call() for fileId = " + fileId);
JavaRDD<String> words = jrd.flatMap(new Flatter());
JavaPairRDD<String, Integer> ones = words.mapToPair(new KVPair());
JavaPairRDD<String, Integer> counts = ones.reduceByKey(new Reduction());
return counts.collect();
}
}
Why is the program not closing automatically?
Ans: you have not closed the SparkContext; try changing the main method to this:
public static void main(String[] args) {
final int threadPoolSize = 5;
SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker").set("spark.ui.port","0");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
List<Future> listOfFuture = new ArrayList<Future>();
for (int i = 0; i < 20; i++) {
if (listOfFuture.size() < threadPoolSize) {
FlexiWordCount flexiWordCount = new FlexiWordCount(jsc, i);
Future future = executor.submit(flexiWordCount);
listOfFuture.add(future);
} else {
boolean allFutureDone = false;
while (!allFutureDone) {
allFutureDone = checkForAllFuture(listOfFuture);
System.out.println("Threads not completed yet!");
try {
Thread.sleep(2000);//waiting for 2 sec, before next check
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
printFutureResult(listOfFuture);
System.out.println("printing of future done");
listOfFuture.clear();
System.out.println("future list got cleared");
}
}
try {
executor.awaitTermination(5, TimeUnit.MINUTES);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
jsc.stop();
}
Is there a better way?
Ans: Yes, you should pass the directory of the files to the SparkContext and use .textFile over the directory; in that case Spark will parallelize the reads from the directory across the executors. If you create threads yourself and then use the same SparkContext to re-submit a job for each file, you add the extra overhead of submitting an application to the YARN queue.
I think the fastest approach is to pass the entire directory, create an RDD out of it, and then let Spark launch parallel tasks to process all the files on different executors. You can experiment with the .repartition() method on the RDD, as it will launch that many tasks to run in parallel.
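A minimal sketch of that approach, reusing the question's Flatter/KVPair/Reduction classes (same Spark 1.x Java API; the directory path is the one from the question):
SparkConf sparkConf = new SparkConf().setMaster("yarn-client").setAppName("Tracker");
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
// One RDD over every file in the directory; Spark parallelizes the reads itself.
JavaRDD<String> lines = jsc.textFile("/root/folder/experiment979/");
List<Tuple2<String, Integer>> counts = lines
        .flatMap(new Flatter())
        .mapToPair(new KVPair())
        .reduceByKey(new Reduction())
        .collect();
for (Tuple2<String, Integer> t : counts) {
    System.out.println(t._1() + " : " + t._2());
}
jsc.stop(); // lets the application exit cleanly
If you need a separate count per file rather than one count across the whole directory, jsc.wholeTextFiles(...) pairs each file path with its content, so the path can be kept as part of the key.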
I'm attempting to send a message when an actor is killed.
This is based on Akka deathwatch documentation :
http://doc.akka.io/docs/akka/2.3.6/java/untyped-actors.html#deathwatch-java
In ServiceActor I'm awaiting a "kill" message, but I'm never actually sending this message. So to receive the message in ServiceActor I use:
else if (msg instanceof Terminated) {
final Terminated t = (Terminated) msg;
if (t.getActor() == child) {
lastSender.tell(Msg.TERMINATED, getSelf());
}
} else {
unhandled(msg);
}
I've set the duration to 10 milliseconds:
Duration.create(10, TimeUnit.MILLISECONDS)
But the message Msg.TERMINATED is never received in the onReceive method:
@Override
public void onReceive(Object msg) {
if (msg == ServiceActor.Msg.SUCCESS) {
System.out.println("Success");
getContext().stop(getSelf());
} else if (msg == ServiceActor.Msg.TERMINATED) {
System.out.println("Terminated");
} else
unhandled(msg);
}
How can I send a message to HelloWorld when ServiceActor fails?
Entire code :
package terminatetest;
import akka.Main;
public class Launcher {
public static void main(String args[]) {
String[] akkaArgsArray = new String[1];
akkaArgsArray[0] = "terminatetest.HelloWorld";
Main.main(akkaArgsArray);
}
}
package terminatetest;
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.PoisonPill;
import akka.actor.Props;
import akka.actor.UntypedActor;
public class HelloWorld extends UntypedActor {
@Override
public void preStart() {
int counter = 0;
akka.actor.ActorSystem system = getContext().system();
final ActorRef greeter = getContext().actorOf(
Props.create(ServiceActor.class), String.valueOf(counter));
system.scheduler().scheduleOnce(
Duration.create(10, TimeUnit.MILLISECONDS), new Runnable() {
public void run() {
greeter.tell(PoisonPill.getInstance(), getSelf());
}
}, system.dispatcher());
greeter.tell("http://www.google.com", getSelf());
counter = counter + 1;
}
@Override
public void onReceive(Object msg) {
if (msg == ServiceActor.Msg.SUCCESS) {
System.out.println("Success");
getContext().stop(getSelf());
} else if (msg == ServiceActor.Msg.TERMINATED) {
System.out.println("Terminated");
} else
unhandled(msg);
}
}
package terminatetest;
import static com.utils.PrintUtils.println;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.actor.UntypedActor;
public class ServiceActor extends UntypedActor {
final ActorRef child = this.getContext().actorOf(Props.empty(), "child");
{
this.getContext().watch(child);
}
ActorRef lastSender = getContext().system().deadLetters();
public static enum Msg {
SUCCESS, FAIL, TERMINATED;
}
@Override
public void onReceive(Object msg) {
if (msg instanceof String) {
String urlName = (String) msg;
try {
long startTime = System.currentTimeMillis();
URL url = new URL(urlName);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.connect();
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
StringBuilder out = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
out.append(line);
}
System.out.println("Connection successful to " + url);
System.out.println("Content is " + out);
long endTime = System.currentTimeMillis();
System.out.println("Total Time : " + (endTime - startTime) + " milliseconds");
} catch (MalformedURLException mue) {
println("URL Name " + urlName);
System.out.println("MalformedURLException");
System.out.println(mue.getMessage());
mue.printStackTrace();
getSender().tell(Msg.FAIL, getSelf());
} catch (IOException ioe) {
println("URL Name " + urlName);
System.out.println("IOException");
System.out.println(ioe.getMessage());
ioe.printStackTrace();
System.out.println("Now exiting");
getSender().tell(Msg.FAIL, getSelf());
}
}
else if (msg instanceof Terminated) {
final Terminated t = (Terminated) msg;
if (t.getActor() == child) {
lastSender.tell(Msg.TERMINATED, getSelf());
}
} else {
unhandled(msg);
}
}
}
Update :
I'm now initiating the poisonPill from the child actor itself using:
Update to ServiceActor :
if (urlName.equalsIgnoreCase("poisonPill")) {
this.getSelf().tell(PoisonPill.getInstance(), getSelf());
getSender().tell(Msg.TERMINATED, getSelf());
}
Update to HelloWorld :
system.scheduler().scheduleOnce(
Duration.create(10, TimeUnit.MILLISECONDS), new Runnable() {
public void run() {
greeter.tell("poisonPill", getSelf());
}
}, system.dispatcher());
This displays the following output:
startTime : 1412777375414
Connection successful to http://www.google.com
Content is ....... (I've removed the content for brevity)
Total Time : 1268 milliseconds
Terminated
The poisonPill message is sent after 10 milliseconds, and in this example the actor lives for 1268 milliseconds. So why is the actor not terminating when the poisonPill is sent? Is this because the timings are so short?
Updated code :
package terminatetest;
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.UntypedActor;
public class HelloWorld extends UntypedActor {
@Override
public void preStart() {
int counter = 0;
akka.actor.ActorSystem system = getContext().system();
final ActorRef greeter = getContext().actorOf(
Props.create(ServiceActor.class), String.valueOf(counter));
system.scheduler().scheduleOnce(
Duration.create(10, TimeUnit.MILLISECONDS), new Runnable() {
public void run() {
greeter.tell("poisonPill", getSelf());
}
}, system.dispatcher());
greeter.tell("http://www.google.com", getSelf());
counter = counter + 1;
}
@Override
public void onReceive(Object msg) {
if (msg == ServiceActor.Msg.SUCCESS) {
System.out.println("Success");
getContext().stop(getSelf());
} else if (msg == ServiceActor.Msg.TERMINATED) {
System.out.println("Terminated");
} else
unhandled(msg);
}
}
package terminatetest;
import static com.utils.PrintUtils.println;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import akka.actor.ActorRef;
import akka.actor.PoisonPill;
import akka.actor.UntypedActor;
public class ServiceActor extends UntypedActor {
ActorRef lastSender = getSender();
public static enum Msg {
SUCCESS, FAIL, TERMINATED;
}
@Override
public void onReceive(Object msg) {
if (msg instanceof String) {
String urlName = (String) msg;
if (urlName.equalsIgnoreCase("poisonPill")) {
this.getSelf().tell(PoisonPill.getInstance(), getSelf());
getSender().tell(Msg.TERMINATED, getSelf());
}
else {
try {
long startTime = System.currentTimeMillis();
System.out.println("startTime : "+startTime);
URL url = new URL(urlName);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.connect();
BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
StringBuilder out = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
out.append(line);
}
System.out.println("Connection successful to " + url);
System.out.println("Content is " + out);
long endTime = System.currentTimeMillis();
System.out.println("Total Time : " + (endTime - startTime) + " milliseconds");
} catch (MalformedURLException mue) {
println("URL Name " + urlName);
System.out.println("MalformedURLException");
System.out.println(mue.getMessage());
mue.printStackTrace();
getSender().tell(Msg.FAIL, getSelf());
} catch (IOException ioe) {
println("URL Name " + urlName);
System.out.println("IOException");
System.out.println(ioe.getMessage());
ioe.printStackTrace();
System.out.println("Now exiting");
getSender().tell(Msg.FAIL, getSelf());
}
}
}
}
}
I think your problem stems from the fact that you only set lastSender once, during construction of the ServiceActor, and you explicitly set it to deadLetters. If you want to send a message back to the actor that sent you the String message, then you need to set lastSender to that sender(). Failure to do so will result in your Msg.TERMINATED always going to deadLetters.
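Concretely, the only change needed in the question's first ServiceActor is to capture the sender before doing the work (a sketch):
if (msg instanceof String) {
    lastSender = getSender(); // remember who asked, so Msg.TERMINATED can go back to them
    String urlName = (String) msg;
    // ... the existing connection code stays unchanged ...
}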
EDIT
I see the real issue here now. In the HelloWorld actor, you are sending a PoisonPill to the ServiceActor. The ServiceActor will stop itself as a result, thus stopping the child ref too (as it's a child actor of ServiceActor). At this point you would expect the Terminated message to be delivered to ServiceActor, because it explicitly watches child (and it probably does get delivered), but you have already sent a PoisonPill to ServiceActor, so it will not process any messages received after that message (which includes the Terminated). That's why the block:
else if (msg instanceof Terminated) {
is never hit in ServiceActor.
EDIT2
Your actor receives the request to hit google first and the "poisonPill" message second (10 milliseconds later). As an actor processes its mailbox in order, it fully processes the request to hit google before it processes the message to stop itself. That's why the actor doesn't stop after 10 milliseconds. You can't stop an actor in the middle of what it's doing.
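To get the behaviour originally asked for (a message in HelloWorld when ServiceActor dies), the usual pattern is for HelloWorld to deathwatch the greeter itself, so a Terminated event arrives no matter how ServiceActor stopped. A sketch against the same Akka 2.3 untyped-actor API:
// In HelloWorld.preStart(), after creating the greeter:
getContext().watch(greeter); // HelloWorld now receives Terminated for greeter

// In HelloWorld.onReceive(Object msg):
if (msg instanceof Terminated) {
    Terminated t = (Terminated) msg;
    System.out.println("Watched actor stopped: " + t.getActor());
} else if (msg == ServiceActor.Msg.SUCCESS) {
    System.out.println("Success");
    getContext().stop(getSelf());
} else {
    unhandled(msg);
}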