I am using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of my code's functionality is to process and validate some XML files. To achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, along with other information to the Processor. The Processor then creates a digital signature of the file and sends the result to the Validator, which validates the status and sends an OK message back to the Controller. The Controller counts the number of validated files and compares it with the total number of files. Once the two counts are equal, I call terminate() on the ActorSystem.
The method to finish is as follows:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Interrupted while waiting for termination.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future never completes. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete callback). I wrote a class along these lines for use in JUnit setup and teardown, to avoid issues caused by the actor system from one test not fully terminating before the next test created a new one (which caused "port already in use" errors).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
            // keep waiting until the previous system has fully terminated
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
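If you only need to block until termination finishes (as in the question's endActors()), a minimal alternative sketch, assuming the question's actorSystem and log variables and Akka 2.5 with Scala's Await, could be:
Future<Terminated> terminated = actorSystem.terminate();
try {
    // Blocks the calling thread until the system has terminated or the timeout expires.
    // Avoid doing this on an actor's own dispatcher thread; call it from the main thread instead.
    Await.result(terminated, Duration.create(1, TimeUnit.MINUTES));
    log.info("Actors finished processing.");
} catch (Exception e) {
    log.error("Actor system did not terminate in time.", e);
}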
I have created an integration flow to read files from an SFTP server and process them. I realized that once there is an error with one of the files (an exception is thrown), the poll stops and the remaining files are not processed until the next poll. How can I avoid this, i.e. not mark the failed file as processed but still process the remaining files in that poll?
My configuration is quite simple. I am using a non-transactional poller that is triggered every minute with a maxMessagesPerPoll of 1000. The SftpStreamingInboundChannelAdapterSpec has a maxFetchSize of 10 and uses a composite file list filter with a SftpRegexPatternFileListFilter and a SftpPersistentAcceptOnceFileListFilter.
@Bean
public IntegrationFlow sftpInboundFlow(JdbcMetadataStore jdbcMetadataStore, DataSourceTransactionManager dataSourceTransactionManager) {
return IntegrationFlows.from(sftpStreamingInboundChannelAdapterSpec(jdbcMetadataStore), sourcePollingChannelAdapterSpec -> configureEndpoint(sourcePollingChannelAdapterSpec, dataSourceTransactionManager))
.transform(new StreamTransformer())
.channel("processingChannel")
.get();
}
private SftpStreamingInboundChannelAdapterSpec sftpStreamingInboundChannelAdapterSpec(JdbcMetadataStore jdbcMetadataStore) {
SftpStreamingInboundChannelAdapterSpec sftpStreamingInboundChannelAdapterSpec = Sftp.inboundStreamingAdapter(documentEnrollementSftpRemoteFileTemplate())
.filter(fileListFilter(jdbcMetadataStore))
.maxFetchSize(10)
.remoteDirectory("/the-directory");
SftpStreamingMessageSource sftpStreamingMessageSource = sftpStreamingInboundChannelAdapterSpec.get();
sftpStreamingMessageSource.setFileInfoJson(false);
return sftpStreamingInboundChannelAdapterSpec;
}
private void configureEndpoint(SourcePollingChannelAdapterSpec sourcePollingChannelAdapterSpec, DataSourceTransactionManager dataSourceTransactionManager) {
PollerSpec pollerSpec = Pollers.cron(sftpProperties.getPollCronExpression())
.maxMessagesPerPoll(1000);
sourcePollingChannelAdapterSpec.autoStartup(true)
.poller(pollerSpec);
}
@Bean
public CompositeFileListFilter<ChannelSftp.LsEntry> fileListFilter(JdbcMetadataStore jdbcMetadataStore) {
String fileNameRegex = // get regex
SftpRegexPatternFileListFilter sftpRegexPatternFileListFilter = new SftpRegexPatternFileListFilter(fileNameRegex);
SftpPersistentAcceptOnceFileListFilter sftpPersistentAcceptOnceFileListFilter = new SftpPersistentAcceptOnceFileListFilter(jdbcMetadataStore, "");
CompositeFileListFilter<ChannelSftp.LsEntry> compositeFileListFilter = new CompositeFileListFilter<>();
compositeFileListFilter.addFilter(sftpRegexPatternFileListFilter);
compositeFileListFilter.addFilter(sftpPersistentAcceptOnceFileListFilter);
return compositeFileListFilter;
}
After reading this answer, I tried using a transactional poller as follows:
PollerSpec pollerSpec = Pollers.cron(sftpProperties.getPollCronExpression())
.maxMessagesPerPoll(1000)
.transactional(dataSourceTransactionManager);
but the result is that after the processing of a file fails, the poll stops, all processed messages are rolled back, and remaining messages are not processed until the next poll. What I understood from that answer was that every message would be processed in a separate transaction.
The only way I found to achieve this so far was to surround the processing code in a try/catch block catching all exceptions to avoid interrupting the poll. In the catch block I manually remove the ChannelSftp.LsEntry from the composite file list filter. For this I needed to set the property fileInfoJson to false in the SftpStreamingMessageSource provided by the SftpStreamingInboundChannelAdapterSpec.
I find this approach rather convoluted, and it has the downside that files that fail and are removed from the filter are immediately reprocessed afterwards, not in the following poll. I was hoping there is a more straightforward solution to my problem.
The solution with the try...catch is the way to go. The problem is really that an exception thrown from your processing is bubbled up into the poller, which stops the current cycle around maxMessagesPerPoll:
private Runnable createPoller() {
return () ->
this.taskExecutor.execute(() -> {
int count = 0;
while (this.initialized && (this.maxMessagesPerPoll <= 0 || count < this.maxMessagesPerPoll)) {
if (pollForMessage() == null) {
break;
}
count++;
}
});
}
Where that pollForMessage() is like this:
private Message<?> pollForMessage() {
try {
return this.pollingTask.call();
}
catch (Exception e) {
if (e instanceof MessagingException) {
throw (MessagingException) e;
}
else {
Message<?> failedMessage = null;
if (this.transactionSynchronizationFactory != null) {
Object resource = TransactionSynchronizationManager.getResource(getResourceToBind());
if (resource instanceof IntegrationResourceHolder) {
failedMessage = ((IntegrationResourceHolder) resource).getMessage();
}
}
throw new MessagingException(failedMessage, e); // NOSONAR (null failedMessage)
}
}
finally {
if (this.transactionSynchronizationFactory != null) {
Object resource = getResourceToBind();
if (TransactionSynchronizationManager.hasResource(resource)) {
TransactionSynchronizationManager.unbindResource(resource);
}
}
}
}
Anyway, there is still a way to isolate one message from the others within a single polling cycle. For this purpose take a look at the Request Handler Advice Chain and investigate a solution with the ExpressionEvaluatingRequestHandlerAdvice: https://docs.spring.io/spring-integration/docs/current/reference/html/#message-handler-advice-chain
So you add this advice to your downstream handler endpoint, catch exceptions there, and do your specific error handling without re-throwing the exception to the poller.
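As a rough sketch of what that could look like in the Java DSL (the advice bean name, the failure channel name, and the process(...) handler are assumptions about your downstream flow, not part of the original configuration):
@Bean
public ExpressionEvaluatingRequestHandlerAdvice errorHandlingAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    // evaluated against the failed message; the result is sent to the failure channel
    advice.setOnFailureExpressionString("payload");
    advice.setFailureChannelName("processingErrorChannel");
    // swallow the exception so it does not reach the poller and abort the cycle
    advice.setTrapException(true);
    return advice;
}

@Bean
public IntegrationFlow processingFlow(ExpressionEvaluatingRequestHandlerAdvice errorHandlingAdvice) {
    return IntegrationFlows.from("processingChannel")
            .handle((payload, headers) -> process(payload),   // process(...) is a placeholder for your logic
                    e -> e.advice(errorHandlingAdvice))
            .get();
}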
We have a data processing application that runs on Karaf 2.4.3 with Camel 2.15.3.
In this application, we have a bunch of routes that import data. We have a management view that lists these routes and where each route can be started. Those routes do not directly import data, but call other routes (some of them in other bundles, called via direct-vm), sometimes directly and sometimes in a splitter.
Is there a way to completely stop a route and thereby stop the entire exchange from being processed further?
When simply using the stopRoute function like this:
route.getRouteContext().getCamelContext().stopRoute(route.getId());
I eventually get a success message, Graceful shutdown of 1 routes completed in 10 seconds, but the exchange is still being processed...
So I tried to mimic the behaviour of the StopProcessor by setting the stop property, but that also didn't help:
public void stopRoute(Route route) {
try {
Collection<InflightExchange> browse = route.getRouteContext().getCamelContext().getInflightRepository()
.browse();
for (InflightExchange inflightExchange : browse) {
String exchangeRouteId = inflightExchange.getRouteId();
if ((exchangeRouteId != null) && exchangeRouteId.equals(route.getId())) {
this.stopExchange(inflightExchange.getExchange());
}
}
} catch (Exception e) {
Notification.show("Error while trying to stop route", Type.ERROR_MESSAGE);
LOGGER.error(e, e);
}
}
public void stopExchange(Exchange exchange) throws Exception {
AsyncProcessorHelper.process(new AsyncProcessor() {
@Override
public void process(Exchange exchange) throws Exception {
AsyncProcessorHelper.process(this, exchange);
}
@Override
public boolean process(Exchange exchange, AsyncCallback callback) {
exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
callback.done(true);
return true;
}
}, exchange);
}
Is there any way to completely stop an exchange from being processed from outside the route?
Can you get hold of the exchange?
I use exchange.setProperty(Exchange.ROUTE_STOP, true);
The route then stops the flow and the exchange doesn't go to the next step.
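For example, a minimal sketch of setting that property from a processor inside the route (the endpoint URIs and the shouldStop(...) check are hypothetical):
from("direct:start")
    .process(exchange -> {
        // once ROUTE_STOP is set, Camel stops routing this exchange; the .to() below is skipped
        if (shouldStop(exchange)) {
            exchange.setProperty(Exchange.ROUTE_STOP, Boolean.TRUE);
        }
    })
    .to("direct:next");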
I'm now working with Apache Kafka and have the following task:
We have some CSV files in a directory; they are mini-batch files, each about 25-30 MB. All I need to do is parse a file and put it into Kafka.
As far as I can see, Kafka has an interesting feature called a Connector.
I can create a SourceConnector and a SourceTask, but there is one thing I don't understand:
once I have handled the file, how can I stop or delete my task?
For example, I have a dummy connector:
public class DummySourceConnector extends SourceConnector {
private static final Logger logger = LogManager.getLogger();
@Override
public String version() {
logger.info("version");
return "1";
}
@Override
public ConfigDef config() {
logger.info("config");
return null;
}
@Override
public Class<? extends Task> taskClass() {
return DummySourceTask.class;
}
@Override
public void start(Map<String, String> props) {
logger.info("start {}", props);
}
@Override
public void stop() {
logger.info("stop");
}
@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
logger.info("taskConfigs {}", maxTasks);
return ImmutableList.of(ImmutableMap.of("key", "value"));
}
}
And the Task:
public class DummySourceTask extends SourceTask {
private static final Logger logger = LogManager.getLogger();
private long offset = 0;
@Override
public String version() {
logger.info("version");
return "1";
}
@Override
public void start(Map<String, String> props) {
logger.info("start {}", props);
}
@Override
public List<SourceRecord> poll() throws InterruptedException {
Thread.sleep(3000);
final String value = "Offset " + offset++ + " Timestamp " + Instant.now().toString();
logger.info("poll value {}", value);
return ImmutableList.of(new SourceRecord(
ImmutableMap.of("partition", 0),
ImmutableMap.of("offset", offset),
"topic-dummy",
SchemaBuilder.STRING_SCHEMA,
value
));
}
public void stop() {
logger.info("stop");
}
}
But how can I close my task when it's all done?
Or maybe you can help me with another idea for this task.
Thanks for your help!
First, I encourage you to have a look at existing connectors here. I feel like the spooldir connector would be helpful to you. It may even be possible for you to just download and install it without having to write any code at all.
Second, if I'm understanding correctly, you want to stop a task. I believe this discussion is what you want.
A not-so-elegant solution for terminating a Task when an event happens is to check for the event in the task's source code and call System.exit(1).
Nevertheless, the most elegant solution I have found is this:
When the event occurs, the Connector Task makes a REST call to the Kafka Connect worker in order to stop the Connector that runs the Task.
To do this, the Task itself needs to know the name of the Connector that runs it, which you can find by following the steps of this discussion.
The name of the connector is available in the properties argument of the Task: there is a property with the key "name" whose value is the name of the Connector that executes the Task (which we want to stop when the event occurs).
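As a minimal sketch of capturing that value (the connectorName field is an assumption, not part of the original task):
private String connectorName;

@Override
public void start(Map<String, String> props) {
    // Kafka Connect passes the connector name to each task under the "name" key
    this.connectorName = props.get("name");
}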
Finally, we make the REST call and get a 204 response with no content if the connector stops.
The code of the call is this:
try {
URL url = new URL("url/" + connectorName);
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("DELETE");
conn.setRequestProperty("Accept", "application/json");
if (conn.getResponseCode() != 204) {
throw new RuntimeException("Failed : HTTP error code : "
+ conn.getResponseCode());
}
BufferedReader br = new BufferedReader(new InputStreamReader(
(conn.getInputStream())));
String output;
System.out.println("Task Stopped \n");
while ((output = br.readLine()) != null) {
System.out.println(output);
}
conn.disconnect();
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
Now all the Connector's Tasks stop.
(Of course, as mentioned previously, keep in mind that the logic of each SourceTask and each SinkTask is never-ending: they are not supposed to stop on their own when an event occurs, but to continuously search for new entries in the files you provide them. So you usually stop them with a REST call, and if you want them to stop when an event occurs, you put that REST call in their own code.)
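For completeness, a sketch of where such a call might live inside poll() (isWorkDone(), stopConnectorViaRest() and readNextBatch() are hypothetical helpers wrapping the check, the REST call shown above, and the normal record production):
@Override
public List<SourceRecord> poll() throws InterruptedException {
    if (isWorkDone()) {
        // nothing left to read: ask the Connect worker to stop this connector
        stopConnectorViaRest();
        // returning null tells the worker there are no records in this cycle
        return null;
    }
    // otherwise build and return the next batch of SourceRecords
    return readNextBatch();
}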
I am wondering if there is a way to know when a message has been delivered, since I want to shut down the ActorSystem right afterwards. The code below connects to a remote actor and then sends a message, but in some cases the local ActorSystem seems to be shut down too early.
ActorSystem system = ActorSystem.create("Test", config.getConfig("webbackend"));
ActorSelection communicator = system.actorSelection("akka.tcp://Midas#127.0.0.1:2555/user/Communicator");
communicator.tell(new TimerTransmissionCmd(channel.getId()), ActorRef.noSender());
//system.shutdown();
Try adding this code after sending the message:
Boolean wasProcessed = (Boolean) Await.result(Patterns.ask(communicator, new ResultClass(), 5000),
        Duration.create(5000, TimeUnit.MILLISECONDS));
if (wasProcessed) {
    system.shutdown();
}
You also have to add this in your Actor class:
private boolean wasProcessed = false;
@Override
public void onReceive(Object messageReceived) throws Exception {
if (messageReceived instanceof ResultClass) {
getSender().tell(wasProcessed, getSelf()); // reply to the ask with the current status
} else {
//Put your process code here
wasProcessed = true;
}
}
But I recommend you configure a sensible timeout and, once it expires, always shut down the system.
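Alternatively, a non-blocking sketch of the same idea using akka.dispatch.OnComplete instead of Await (system and communicator are the variables from the question):
Future<Object> reply = Patterns.ask(communicator, new ResultClass(), 5000);
reply.onComplete(new OnComplete<Object>() {
    @Override
    public void onComplete(Throwable failure, Object result) {
        // shut down only once the remote actor has confirmed that the message was processed
        if (failure == null && Boolean.TRUE.equals(result)) {
            system.shutdown();
        }
    }
}, system.dispatcher());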
I have an application with the following route:
from("netty:tcp://localhost:5150?sync=false&keepAlive=true")
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
This route receives a new message every 59 milliseconds. I want to stop the route when the connection to the database is lost, before a second message arrives. Above all, I never want to lose a message.
I proceeded this way:
I added an errorHandler:
errorHandler(deadLetterChannel("direct:backup")
.redeliveryDelay(5L)
.maximumRedeliveries(1)
.retryAttemptedLogLevel(LoggingLevel.WARN)
.logExhausted(false));
My errorHandler tries to redeliver the message and if it fails again, it redirects the message to a deadLetterChannel.
The following deadLetterChannel will stop the tcp.input route and try to redeliver the message to the database:
RoutePolicy policy = new StopRoutePolicy();
from("direct:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
)
.to("jdbc:mydb");
Here is the code of the routePolicy:
public class StopRoutePolicy extends RoutePolicySupport {
private static final Logger LOG = LoggerFactory.getLogger(String.class);
@Override
public void onExchangeDone(Route route, Exchange exchange) {
String stop = "tcp.input";
CamelContext context = exchange.getContext();
if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
try {
exchange.getContext().getInflightRepository().remove(exchange);
LOG.info("STOP ROUTE: {}", stop);
context.stopRoute(stop);
} catch (Exception e) {
getExceptionHandler().handleException(e);
}
}
}
}
My problems with this method are:
In my "direct:backup" route, if I set the maximumRedeliveries to -1 the route tcp.input will never stop
I'm loosing messages during the stop
This method for detecting the connection loss and for stopping the route is too long
Please, does anybody have an idea for make this faster or for make this differently in order to not lose message?
I have finally found a way to resolve my problems. In order to make the application faster, I added asynchronous processing and multithreading with seda.
from("netty:tcp://localhost:5150?sync=false&keepAlive=true").to("seda:break");
from("seda:break").threads(5)
.routeId("tcp.input")
.transform()
.simple("insert into tamponems (AVIS) values (\"${in.body}\");")
.to("jdbc:mydb");
I did the same with the backup route.
from("seda:backup")
.routePolicy(policy)
.errorHandler(
defaultErrorHandler()
.redeliveryDelay(1000L)
.maximumRedeliveries(-1)
.retryAttemptedLogLevel(LoggingLevel.ERROR)
).threads(2).to("jdbc:mydb");
And I modified the RoutePolicy like this:
public class StopRoutePolicy extends RoutePolicySupport {
private static final Logger LOG = LoggerFactory.getLogger(String.class);
@Override
public void onExchangeBegin(Route route, Exchange exchange) {
String stop = "tcp.input";
CamelContext context = exchange.getContext();
if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStarted()) {
try {
exchange.getContext().getInflightRepository().remove(exchange);
LOG.info("STOP ROUTE: {}", stop);
context.stopRoute(stop);
} catch (Exception e) {
getExceptionHandler().handleException(e);
}
}
}
@Override
public void onExchangeDone(Route route, Exchange exchange) {
String stop = "tcp.input";
CamelContext context = exchange.getContext();
if (context.getRouteStatus(stop) != null && context.getRouteStatus(stop).isStopped()) {
try {
LOG.info("RESTART ROUTE: {}", stop);
context.startRoute(stop);
} catch (Exception e) {
getExceptionHandler().handleException(e);
}
}
}
}
With these updates, the TCP route is stopped before the backup route is processed, and the route is restarted when the JDBC connection is back.
Now, thanks to Camel, the application is able to handle a database failure without losing messages and without manual intervention.
I hope this helps you.