In our code, we create a Sinks.one() whose value is emitted from the onSuccess/onError handlers of some Mono. The value being emitted is never null.
Contrary to our expectation, the Mono view of that sink sporadically turns out empty. It happens on both Windows 10 and Ubuntu 20 machines, so it is likely not something platform-specific.
I would appreciate any hints as to why this happens.
Below is the shortest unit test I could come up with that reflects what we're doing in our production code. When this test is run from the IDE with the "repeat until failure" setting, it fails the assertion about a non-null Mono result quite consistently within 10-30 seconds of starting.
I also added some diagnostic logging around tryEmitValue / tryEmitError, and it looks like it's never an error, never a null value being emitted, nor an emission failure.
The output from a failed run is:
23:48:12.635 [pool-1-thread-1] INFO sink-setter - | onSubscribe([Fuseable] Operators.MonoSubscriber)
23:48:12.635 [pool-1-thread-1] INFO sink-setter - | request(unbounded)
23:48:12.635 [boundedElastic-1] INFO sink-result-receiver - onSubscribe(SinkOneMulticast.NextInner)
23:48:12.635 [pool-1-thread-1] INFO sink-setter - | onNext(value)
23:48:12.635 [boundedElastic-1] INFO sink-result-receiver - request(unbounded)
23:48:12.635 [pool-1-thread-1] INFO sink-setter - | onComplete()
23:48:12.635 [boundedElastic-1] INFO sink-result-receiver - onComplete()
java.lang.AssertionError:
Expecting actual not to be null
The test code:
import java.util.concurrent.*;
import java.util.function.Consumer;
import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.Test;
import reactor.core.publisher.*;
import reactor.core.scheduler.Schedulers;
import reactor.util.Loggers;
import static org.assertj.core.api.AssertionsForClassTypes.assertThat;
@Slf4j
class MonoPlaygroundTest {
private static ExecutorService executor;
static {
executor = Executors.newCachedThreadPool();
Loggers.useSl4jLoggers();
}
@Test
void shouldConstructNonEmptyMono() {
var sink = Sinks.<String>one();
Mono
.fromSupplier(() -> "value")
.log(Loggers.getLogger("sink-setter"))
.subscribeOn(Schedulers.fromExecutorService(executor, "mono-executor"))
.doOnSuccess(tryEmitValueWithDebugLogging(sink))
.doOnError(tryEmitErrorWithDebugLogging(sink))
.subscribe();
var result = sink
.asMono()
.log(Loggers.getLogger("sink-result-receiver"))
// more map / flatMap here...
.subscribeOn(Schedulers.boundedElastic())
.block();
assertThat(result).isNotNull();
}
private static Consumer<Throwable> tryEmitErrorWithDebugLogging(Sinks.One<String> sink) {
return t -> {
sink.tryEmitError(t);
log.error("Was emitting an error: {}", t);
};
}
private static Consumer<String> tryEmitValueWithDebugLogging(Sinks.One<String> sink) {
return value -> {
var result = sink.tryEmitValue(value);
if (value == null) {
log.error("Tried to emit null value");
}
if (result.isFailure()) {
log.error("Failed to emit because of {}", result);
}
};
}
}
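As a side note, a minimal sketch (assuming JUnit 5's @RepeatedTest and an import of org.junit.jupiter.api.RepeatedTest, added inside the same test class) of repeating the test without relying on the IDE's "repeat until failure" run configuration:
@RepeatedTest(10_000)
void shouldConstructNonEmptyMonoRepeatedly() {
    // simply re-runs the existing test body many times in a row
    shouldConstructNonEmptyMono();
}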
Related
I need to transfer some items from a non-reactive repository to a reactive repository (Firestore).
The procedure is triggered from a REST endpoint exposed with Netty.
The code below is what I've written after some trial and error.
The query from the non-reactive repo is not long (~20 sec), but it returns a lot of records, and the overall execution time is usually ~60 min.
All records are always saved and all "Saving in progress... XXX" lines are printed, but about 50% of the time it does not print "Saved XXX records", and no errors are printed.
Things I've noticed:
more records -> higher probability of failure
it does not depend on the execution time (sometimes a longer run completes while a shorter one fails)
The app runs on a k8s pod with a 1500Mi RAM request and a 3000Mi limit; from the graphs it never approaches the limit.
What am I missing here?
@Slf4j
@RestController
@RequestMapping("/import")
public class ImportController {
@Autowired
private NotReactiveRepository notReactiveRepository;
@Autowired
private ReactiveRepository reactiveRepository;
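// signature: newBoundedElastic(threadCap, queuedTaskCap, name, ttlSeconds) -> 1 worker thread, up to 480 queued tasks, 864000 s idle-thread TTL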
private static final Scheduler queryScheduler = Schedulers.newBoundedElastic(1, 480, "query", 864000);// max 10 days processing time
@GetMapping("/start")
public Mono<String> start() {
log.info("Start");
return Mono.just("RECEIVED")
//fire and forget
.doOnNext(stringRouteResponse -> startProcess().subscribe());
}
private Mono<Long> startProcess() {
Mono<List<Items>> resultsBlockingMono = Mono
.fromCallable(() -> notReactiveRepository.findAll())
.subscribeOn(queryScheduler)
.retryWhen(Retry.backoff(5, Duration.of(2, ChronoUnit.SECONDS)));
return resultsBlockingMono
.doOnNext( records -> log.info("Records: {}", records.size()))
.flatMapMany(Flux::fromIterable)
.map(ItemConverter::convert)
// max 9000 save/sec
.delayElements(Duration.of(300, ChronoUnit.MICROS))
.flatMap(this::saveConvertedItem)
.zipWith(Flux.range(1, Integer.MAX_VALUE))
.doOnNext(savedAndIndex -> log.info("Saving in progress... {}", savedAndIndex.getT2()))
.count()
.doOnNext( numberOfSaved -> log.info("Saved {} records", numberOfSaved));
}
private Mono<ConvertedItem> saveConvertedItem(ConvertedItem convertedItem) {
return reactiveRepository.save(convertedItem)
.retryWhen(Retry.backoff(1000, Duration.of(2, ChronoUnit.MILLIS)))
.onErrorResume(throwable -> {
log.error("Resuming");
return Mono.empty();
})
.doOnError(throwable -> log.error("Error on save"));
}
}
Update:
As requested, this is the last output of the procedure, where "Saved 1131113 records" should appear, with a .log() added before .count() (the onNext output always prints after the process, on success too):
"Saving... 1131113"
"| onNext([ConvertedItem(...),1131113])"
"Shutting down ExecutorService 'pubsubPublisherThreadPool'"
"Shutting down ExecutorService 'pubSubAcknowledgementExecutor'"
"Shutting down ExecutorService 'pubsubSubscriberThreadPool'"
"Closing JPA EntityManagerFactory for persistence unit 'default'"
"HikariPool-1 - Shutdown initiated..."
"HikariPool-1 - Shutdown completed."
I have a Spring Batch job which uses a flow:
Flow productFlow = new FlowBuilder<Flow>("productFlow")
.start(stageProductStep) // the step shown as productFlow.stageProduct in the logs below
.next(new MyDecider()).on("YES").to(anotherFlow)
.build();
After I started to use a decider, which checks some value in the JobParameters from the job execution to decide whether to run the next flow or not, I am no longer getting COMPLETED as the overall job status in JobExecution. It comes out as FAILED.
However, every step in the STEP_EXECUTION table is COMPLETED and none is FAILED.
Have I missed a trick somewhere?
My decider looks like this:
public class AnotherFlowDecider implements JobExecutionDecider {
@Override
public FlowExecutionStatus decide(final JobExecution jobExecution, final StepExecution stepExecution) {
final JobParameters jobParameters = jobExecution.getJobParameters();
final String name = jobParameters.getString("name");
if (nonNull(name)) {
switch (name) {
case "A":
return new FlowExecutionStatus("YES");
case "B":
default:
return new FlowExecutionStatus("NO");
}
}
throw new MyCustomException(FAULT, "name is not provided as a JobParameter");
}
}
In debug mode I can see:
2020-12-11 11:10:58.145 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Completed state=productFlow.stageProduct with status=COMPLETED
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Handling state=productFlow.decision0
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Completed state=productFlow.decision0 with status=NO
2020-12-11 11:10:58.146 DEBUG [cTaskExecutor-4] o.s.b.c.j.f.s.SimpleFlow [eId=, rId=] -- Handling state=productFlow.FAILED
I am trying to create multiple consumers in a consumer group for parallel processing, since we have a heavy inflow of messages. I am using Spring Boot and KafkaTemplate. How can we create multiple consumers belonging to a single consumer group, in a single instance of a Spring Boot application?
Does having multiple methods annotated with @KafkaListener create multiple consumers?
You have to use ConcurrentMessageListenerContainer. It delegates to one or more KafkaMessageListenerContainer instances to provide multi-threaded consumption.
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(10);
factory.getContainerProperties().setPollTimeout(3000);
return factory;
}
factory.setConcurrency(10) creates 10 KafkaMessageListenerContainer instances. Each instance is assigned some of the partitions; how many depends on the number of partitions you configured when you created the topic.
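As a side note, all 10 containers only get work if the topic has at least 10 partitions. A minimal sketch (assuming a recent Spring Kafka with TopicBuilder and an auto-configured KafkaAdmin bean) of declaring the topic that way:
@Bean
public NewTopic testTopic() {
    // 10 partitions so that each of the 10 KafkaMessageListenerContainer instances can own one
    return TopicBuilder.name("test-topic")
            .partitions(10)
            .replicas(1)
            .build();
}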
Some preparation steps:
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;
private final static String BOOTSTRAP_ADDRESS = "localhost:9092";
private final static String CONSUMER_GROUP = "consumer-group-1";
private final static String TOPIC = "test-topic";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@KafkaListener(topics = TOPIC, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload String message) {
logger.info(message);
}
public void start() {
try {
Thread.sleep(5000L);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < 10; i++) {
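// send(topic, partition, key, value): message i is sent explicitly to partition i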
kafkaTemplate.send(TOPIC, i, String.valueOf(i), "Message " + i);
}
logger.info("All message are sent");
}
If you run the method above you can see that each KafkaMessageListenerContainer instance processes the messages being put into the partition which that instance serves.
Thread.sleep() is added to wait for the consumers to be initialized.
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-4-C-1] r.s.c.KafkaConsumersDemo : Message 5
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-6-C-1] r.s.c.KafkaConsumersDemo : Message 7
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-7-C-1] r.s.c.KafkaConsumersDemo : Message 8
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-9-C-1] r.s.c.KafkaConsumersDemo : Message 1
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-0-C-1] r.s.c.KafkaConsumersDemo : Message 0
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-8-C-1] r.s.c.KafkaConsumersDemo : Message 9
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-3-C-1] r.s.c.KafkaConsumersDemo : Message 4
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-2-C-1] r.s.c.KafkaConsumersDemo : Message 3
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-1-C-1] r.s.c.KafkaConsumersDemo : Message 2
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-5-C-1] r.s.c.KafkaConsumersDemo : Message 6
Yes, the @KafkaListener will create multiple consumers for you.
With that you can configure all of them to use the same topic and belong to the same group.
The Kafka coordinator will distribute partitions to your consumers.
However, if you have only one partition in the topic, the concurrency won't happen: a single partition is processed in a single thread.
Another option is indeed to configure concurrency, and again several consumers are going to be created according to the concurrency <-> partition ratio.
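For illustration, a minimal sketch (method names, topic and group id are made up) of two @KafkaListener methods on the same topic and in the same group; the coordinator splits the topic's partitions between the two resulting consumers:
@KafkaListener(topics = "test-topic", groupId = "consumer-group-1")
public void listenerA(String message) {
    // first consumer of the group
}

@KafkaListener(topics = "test-topic", groupId = "consumer-group-1")
public void listenerB(String message) {
    // second consumer of the same group; it is assigned a different subset of the partitions
}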
As @Salavat Yalalo suggested, I made my Kafka container factory a ConcurrentKafkaListenerContainerFactory. On the @KafkaListener method I added the option called concurrency, which accepts an integer as a string and indicates the number of consumers to be spawned, like below:
@KafkaListener(concurrency = "4", containerFactory = "concurrentKafkaListenerContainerFactory" /* bean name of the factory */, ...other optional values)
public void topicConsumer(Message<MyObject> myObject){
//.....
}
When run, I see 4 consumers being created in a single consumer group.
I am trying to understand how reactive programming really works. I prepared a simple demo for this purpose: a reactive WebClient from Spring Framework sends requests to a simple REST API, and the client prints the name of the thread in each operation.
The REST API:
@RestController
@SpringBootApplication
public class RestApiApplication {
public static void main(String[] args) {
SpringApplication.run(RestApiApplication.class, args);
}
#PostMapping("/resource")
public void consumeResource(#RequestBody Resource resource) {
System.out.println(String.format("consumed resource: %s", resource.toString()));
}
}
@Data
@AllArgsConstructor
class Resource {
private final Long id;
private final String name;
}
And the most important part - the reactive web client:
@SpringBootApplication
public class ReactorWebclientApplication {
public static void main(String[] args) {
SpringApplication.run(ReactorWebclientApplication.class, args);
}
private final TcpClient tcpClient = TcpClient.create();
private final WebClient webClient = WebClient.builder()
.clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
.baseUrl("http://localhost:8080")
.build();
@PostConstruct
void doRequests() {
var longs = LongStream.range(1L, 10_000L)
.boxed()
.toArray(Long[]::new);
var longsStream = Stream.of(longs);
Flux.fromStream(longsStream)
.map(l -> {
System.out.println(String.format("------- map [%s] --------", Thread.currentThread().getName()));
return new Resource(l, String.format("name %s", l));
})
.filter(res -> {
System.out.println(String.format("------- filter [%s] --------", Thread.currentThread().getName()));
return !res.getId().equals(11_000L);
})
.flatMap(res -> {
System.out.println(String.format("------- flatmap [%s] --------", Thread.currentThread().getName()));
return webClient.post()
.uri("/resource")
.syncBody(res)
.header("Content-Type", "application/json")
.header("Accept", "application/json")
.retrieve()
.bodyToMono(Resource.class)
.doOnSuccess(ignore -> System.out.println(String.format("------- onsuccess [%s] --------", Thread.currentThread().getName())))
.doOnError(ignore -> System.out.println(String.format("------- onerror [%s] --------", Thread.currentThread().getName())));
})
.blockLast();
}
}
@JsonIgnoreProperties(ignoreUnknown = true)
class Resource {
private final Long id;
private final String name;
@JsonCreator
Resource(@JsonProperty("id") Long id, @JsonProperty("name") String name) {
this.id = id;
this.name = name;
}
Long getId() {
return id;
}
String getName() {
return name;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("Resource{");
sb.append("id=").append(id);
sb.append(", name='").append(name).append('\'');
sb.append('}');
return sb.toString();
}
}
And the problem is that the behaviour is different from what I predicted.
I expected that each call of .map(), .filter() and .flatMap() would be executed on the main thread, and each call of .doOnSuccess() or .doOnError() would be executed on a thread from the NIO thread pool. So I expected logs that look like:
------- map [main] --------
------- filter [main] --------
------- flatmap [main] --------
(and so on...)
------- onsuccess [reactor-http-nio-2] --------
(and so on...)
But the logs I've got are:
------- map [main] --------
------- filter [main] --------
------- flatmap [main] --------
------- map [main] --------
------- filter [main] --------
------- flatmap [main] --------
------- onsuccess [reactor-http-nio-2] --------
------- onsuccess [reactor-http-nio-6] --------
------- onsuccess [reactor-http-nio-4] --------
------- onsuccess [reactor-http-nio-8] --------
------- map [reactor-http-nio-2] --------
------- filter [reactor-http-nio-2] --------
------- flatmap [reactor-http-nio-2] --------
------- map [reactor-http-nio-2] --------
and each subsequent log from .map(), .filter() and .flatMap() was produced on a reactor-http-nio thread.
Another puzzling fact is that the ratio between operations executed on the main thread and on reactor-http-nio threads is always different. Sometimes all .map(), .filter() and .flatMap() operations are performed on the main thread.
Reactor, like RxJava, can be considered to be concurrency-agnostic. That is, it does not enforce a concurrency model. Rather, it leaves you, the developer, in command. However, that does not prevent the library from helping you with concurrency.
Obtaining a Flux or a Mono does not necessarily mean that it runs in a dedicated Thread. Instead, most operators continue working in the Thread on which the previous operator executed. Unless specified, the topmost operator (the source) itself runs on the Thread in which the subscribe() call was made.
The relevant Project Reactor documentation can be found here.
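To make the quoted rule concrete, here is a minimal standalone sketch (not taken from the question's code): with no scheduler operators in the chain, both the source and the operators run on whatever thread calls subscribe():
Flux.range(1, 3)
    .map(i -> {
        // prints "main" when subscribe() is called from the main thread
        System.out.println("map on " + Thread.currentThread().getName());
        return i * 2;
    })
    .subscribe(i -> System.out.println("received " + i + " on " + Thread.currentThread().getName()));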
From your code, the following snippet:
webClient.post()
.uri("/resource")
.syncBody(res)
.header("Content-Type", "application/json")
.header("Accept", "application/json")
.retrieve()
.bodyToMono(Resource.class)
leads to a thread switch from the main thread to Netty's worker pool. Afterwards, all the following actions are performed by a Netty worker thread.
If you want to control this behavior, you should add a publishOn(...) statement to your code, for example:
webClient.post()
.uri("/resource")
.syncBody(res)
.header("Content-Type", "application/json")
.header("Accept", "application/json")
.retrieve()
.bodyToMono(Resource.class)
.publishOn(Schedulers.elastic())
In this way, any following action will be performed by the elastic scheduler thread pool.
Another example would be the usage of a dedicated scheduler for heavy tasks that follow the HTTP request execution.
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;
import com.github.tomakehurst.wiremock.WireMockServer;
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.web.reactive.function.client.ClientResponse;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import ru.lanwen.wiremock.ext.WiremockResolver;
import ru.lanwen.wiremock.ext.WiremockResolver.Wiremock;
import ru.lanwen.wiremock.ext.WiremockUriResolver;
import ru.lanwen.wiremock.ext.WiremockUriResolver.WiremockUri;
@ExtendWith({
WiremockResolver.class,
WiremockUriResolver.class
})
public class ReactiveThreadsControlTest {
private static int concurrency = 1;
private final WebClient webClient = WebClient.create();
@Test
public void slowServerResponsesTest(@Wiremock WireMockServer server, @WiremockUri String uri) {
String requestUri = "/slow-response";
server.stubFor(get(urlEqualTo(requestUri))
.willReturn(aResponse().withStatus(200)
.withFixedDelay((int) TimeUnit.SECONDS.toMillis(2)))
);
Flux
.generate(() -> Integer.valueOf(1), (i, sink) -> {
System.out.println(String.format("[%s] Emitting next value: %d", Thread.currentThread().getName(), i));
sink.next(i);
return i + 1;
})
.subscribeOn(Schedulers.single())
.flatMap(i ->
executeGet(uri + requestUri)
.publishOn(Schedulers.elastic())
.map(response -> {
heavyTask();
return true;
})
, concurrency)
.subscribe();
blockForever();
}
private void blockForever() {
Object monitor = new Object();
synchronized (monitor) {
try {
monitor.wait();
} catch (InterruptedException ex) {
}
}
}
private Mono<ClientResponse> executeGet(String path) {
System.out.println(String.format("[%s] About to execute an HTTP GET request: %s", Thread.currentThread().getName(), path));
return webClient
.get()
.uri(path)
.exchange();
}
private void heavyTask() {
try {
System.out.println(String.format("[%s] About to execute a heavy task", Thread.currentThread().getName()));
Thread.sleep(TimeUnit.SECONDS.toMillis(20));
} catch (InterruptedException ex) {
}
}
}
I have set up a file poller with a task executor:
ExecutorService executorService = Executors.newFixedThreadPool(10);
LOG.info("Setting up the poller for directory {} ", finalDirectory);
StandardIntegrationFlow standardIntegrationFlow = IntegrationFlows.from(new CustomFileReadingSource(finalDirectory),
c -> c.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS, 5)
.taskExecutor(executorService)
.maxMessagesPerPoll(10)
.advice(new LoggerSourceAdvisor(finalDirectory))
))
//move file to processing first processing
.transform(new FileMoveTransformer("C:/processing", true))
.channel("fileRouter")
.get();
As seen, I have set up a fixed thread pool of 10 and a maximum of 10 messages per poll. If I put in 10 files, it still processes them one by one. What could be wrong here?
* UPDATE *
It works perfectly fine after Gary's answer, though I have another issue now.
I have set up my poller like this:
setDirectory(new File(path));
DefaultDirectoryScanner scanner = new DefaultDirectoryScanner();
scanner.setFilter(new AcceptAllFileListFilter<>());
setScanner(scanner);
The reason for using AcceptAllFileListFilter is that the same file may come again; that's why I move the file first. But when I enable the task executor, the same file is being processed by multiple threads, I assume because of AcceptAllFileListFilter.
If I change to AcceptOnceFileListFilter it works, but then the same file that comes again will not be picked up! What can be done to avoid this issue?
Issue/Bug
In class AbstractPersistentAcceptOnceFileListFilter we have this code:
@Override
public boolean accept(F file) {
String key = buildKey(file);
synchronized (this.monitor) {
String newValue = value(file);
String oldValue = this.store.putIfAbsent(key, newValue);
if (oldValue == null) { // not in store
flushIfNeeded();
return true;
}
// same value in store
if (!isEqual(file, oldValue) && this.store.replace(key, oldValue, newValue)) {
flushIfNeeded();
return true;
}
return false;
}
}
Now, for example, if I have set up a max of 5 messages per poll and there are two files, then it's possible the same file will be picked up by two threads.
Let's say my code moves the file once I have read it.
But the other thread still gets to the accept method;
if the file is no longer there, it will see a lastModified time of 0, and accept will return true.
That causes the issue, because the file is NOT there.
If it's 0, accept should return false, as the file is not there anymore.
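A minimal sketch of the behaviour being described (paths are made up): java.io.File.lastModified() returns 0 for a file that no longer exists, so the filter computes a "new" value for an already-processed file and accepts it again:
File file = new File("/tmp/foo/test1.txt");
// thread A has already read the file and moved it to the processing directory
file.renameTo(new File("/tmp/processing/test1.txt"));

// thread B now evaluates the filter for the same path
long lastModified = file.lastModified(); // 0, because the file is gone
// value(file) becomes "0", which differs from the stored value,
// so accept() returns true for a file that no longer exists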
When you add a task executor to a poller, all that does is have the scheduler thread hand the poll task off to a thread in the thread pool; the maxMessagesPerPoll is part of the poll task. The poller itself only runs once every 5 seconds. To get what you want, you should add an executor channel to the flow...
@SpringBootApplication
public class So53521593Application {
private static final Logger logger = LoggerFactory.getLogger(So53521593Application.class);
public static void main(String[] args) {
SpringApplication.run(So53521593Application.class, args);
}
@Bean
public IntegrationFlow flow() {
ExecutorService exec = Executors.newFixedThreadPool(10);
return IntegrationFlows.from(() -> "foo", e -> e
.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)
.maxMessagesPerPoll(10)))
.channel(MessageChannels.executor(exec))
.<String>handle((p, h) -> {
try {
logger.info(p);
Thread.sleep(10_000);
}
catch (InterruptedException e1) {
Thread.currentThread().interrupt();
}
return null;
})
.get();
}
}
EDIT
It works fine for me...
@Bean
public IntegrationFlow flow() {
ExecutorService exec = Executors.newFixedThreadPool(10);
return IntegrationFlows.from(Files.inboundAdapter(new File("/tmp/foo")).filter(
new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "foo")),
e -> e.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)
.maxMessagesPerPoll(10)))
.channel(MessageChannels.executor(exec))
.handle((p, h) -> {
try {
logger.info(p.toString());
Thread.sleep(10_000);
}
catch (InterruptedException e1) {
Thread.currentThread().interrupt();
}
return null;
})
.get();
}
and
2018-11-28 11:46:05.196 INFO 57607 --- [pool-1-thread-1] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 11:46:05.197 INFO 57607 --- [pool-1-thread-2] com.example.So53521593Application : /tmp/foo/test2.txt
and with touch test1.txt
2018-11-28 11:48:00.284 INFO 57607 --- [pool-1-thread-3] com.example.So53521593Application : /tmp/foo/test1.txt
EDIT1
Agreed - reproduced with this...
@Bean
public IntegrationFlow flow() {
ExecutorService exec = Executors.newFixedThreadPool(10);
return IntegrationFlows.from(Files.inboundAdapter(new File("/tmp/foo")).filter(
new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "foo")),
e -> e.poller(Pollers.fixedDelay(5, TimeUnit.SECONDS)
.maxMessagesPerPoll(10)))
.channel(MessageChannels.executor(exec))
.<File>handle((p, h) -> {
try {
p.delete();
logger.info(p.toString());
Thread.sleep(10_000);
}
catch (InterruptedException e1) {
Thread.currentThread().interrupt();
}
return null;
})
.get();
}
and
2018-11-28 13:22:23.689 INFO 75681 --- [pool-1-thread-1] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-2] com.example.So53521593Application : /tmp/foo/test2.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-3] com.example.So53521593Application : /tmp/foo/test1.txt
2018-11-28 13:22:23.690 INFO 75681 --- [pool-1-thread-4] com.example.So53521593Application : /tmp/foo/test2.txt