I have a spring-cloud-stream project that uses the Kafka binder.
The application consumes messages in batch mode. I need to filter consumed records by a specific header, and for that I use a BatchInterceptor:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, String>> customizer(
BatchInterceptor<String, String> customInterceptor
) {
return (((container, destinationName, group) -> {
container.setBatchInterceptor(customInterceptor);
log.info("Container customized");
}));
}
@Bean
public BatchInterceptor<String, String> customInterceptor() {
return (consumerRecords, consumer) -> {
log.info("Origin records count: {}", consumerRecords.count());
final Set<TopicPartition> partitions = consumerRecords.partitions();
final Map<TopicPartition, List<ConsumerRecord<String, String>>> filteredByHeader
= Stream.of(partitions).flatMap(Collection::stream)
.collect(Collectors.toMap(
Function.identity(),
p -> Stream.ofNullable(consumerRecords.records(p))
.flatMap(Collection::stream)
.filter(r -> Objects.nonNull(r.headers().lastHeader("TEST")))
.collect(Collectors.toList())
));
var filteredRecords = new ConsumerRecords<>(filteredByHeader);
log.info("Filtered count: {}", filteredRecords.count());
return filteredRecords;
};
}
Example code is here: batch interceptor example.
In the logs I see that the records are filtered successfully, but the filtered-out ones still get into the consumer.
Why does the BatchInterceptor not filter the records?
How can I filter ConsumerRecords by a specific header in spring-cloud-stream with batch mode enabled? You can run the tests from the example to reproduce the behavior.
You are using very old code (Boot 2.5.0), which is out of OSS support (Cloud too):
https://spring.io/projects/spring-boot#support
I tested your interceptor with the current versions and it works fine.
Boot 2.7.5, Cloud 2021.0.4:
@SpringBootApplication
public class So74203611Application {
private static final Logger log = LoggerFactory.getLogger(So74203611Application.class);
public static void main(String[] args) {
SpringApplication.run(So74203611Application.class, args);
}
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, String>> customizer(
BatchInterceptor<String, String> customInterceptor) {
return (((container, destinationName, group) -> {
container.setBatchInterceptor(customInterceptor);
log.info("Container customized {}", destinationName);
}));
}
@Bean
public BatchInterceptor<String, String> customInterceptor() {
return (consumerRecords, consumer) -> {
log.info("Origin records count: {}", consumerRecords.count());
final Set<TopicPartition> partitions = consumerRecords.partitions();
final Map<TopicPartition, List<ConsumerRecord<String, String>>> filteredByHeader = Stream.of(partitions)
.flatMap(Collection::stream)
.collect(Collectors.toMap(Function.identity(),
p -> Stream.ofNullable(consumerRecords.records(p)).flatMap(Collection::stream)
.filter(r -> Objects.nonNull(r.headers().lastHeader("TEST")))
.collect(Collectors.toList())));
var filteredRecords = new ConsumerRecords<>(filteredByHeader);
log.info("Filtered count: {}", filteredRecords.count());
return filteredRecords;
};
}
@Bean
Consumer<List<String>> input() {
return str -> {
System.out.println(str);
};
}
@Bean
ApplicationRunner runner(KafkaTemplate<byte[], byte[]> template) {
return args -> {
Headers headers = new RecordHeaders();
headers.add("TEST", "foo".getBytes());
ProducerRecord<byte[], byte[]> rec = new ProducerRecord<>("input-in-0", 0, 0L, null, "bar".getBytes(),
headers);
template.send(rec);
headers = new RecordHeaders();
rec = new ProducerRecord<>("input-in-0", 0, 0L, null, "baz".getBytes(), headers);
template.send(rec);
template.send(rec);
};
}
}
spring.cloud.stream.bindings.input-in-0.group=foo
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
The output shows that only the record with the TEST header reached the consumer:
[bar]
I created a class called ConsumerConfig and a service class that contains a function that gets records from a topic.
Locally, consumer.poll works just fine, but when I add the remote brokers it stops working: I get empty records, and it takes a long time.
Here is the code of the ConsumerConfig class and the functions that get records from a specific topic:
@Component
public class ConsumerConfig {
private static Integer numberOfConsumer = 0;
private KafkaConsumer consumer;
private Map<String, Object> buildDefaultConfig() {
final Map<String, Object> defaultClientConfig = new HashMap<>();
defaultClientConfig.put("bootstrap.servers", (String) getbrokersConfigurationFromFile().get("spring.kafka.bootstrap-servers"));
defaultClientConfig.put("client.id", "test-consumer-id" + (++numberOfConsumer));
defaultClientConfig.put("request.timeout.ms",60000);
return defaultClientConfig;
}
@Bean
@RequestScope
public <K, V> KafkaConsumer<K, V> getKafkaConsumer() {
// Build config
final Map<String, Object> kafkaConsumerConfig = buildDefaultConfig();
kafkaConsumerConfig.put("key.deserializer", StringDeserializer.class);
kafkaConsumerConfig.put("value.deserializer", StringDeserializer.class);
kafkaConsumerConfig.put("auto.offset.reset", "earliest");
kafkaConsumerConfig.put("max.poll.records",5);
kafkaConsumerConfig.put("default.api.timeout.ms", 6000000);
kafkaConsumerConfig.put("max.block.ms ",60000000);
kafkaConsumerConfig.put("auto.offset.reset", "earliest");
kafkaConsumerConfig.put("enable.partition.eof", "false");
kafkaConsumerConfig.put("enable.auto.commit", "true");
kafkaConsumerConfig.put("auto.commit.interval.ms", "1000");
kafkaConsumerConfig.put("session.timeout.ms", "30000");
//fetch.max.byte The maximum amount of data the server should return for a fetch request.
// Create and return Consumer.
return consumer = new KafkaConsumer<K, V>(kafkaConsumerConfig);
}
The function that returns a list of records:
public <K, V> List<ConsumerRecord<K, V>> consumeAllRecordsFromTopic(final String topic,
final Collection<Integer> partitionIds) {
// Create topic Partitions
final List<TopicPartition> topicPartitions = partitionIds
.stream()
.map((partitionId) -> new TopicPartition(topic, partitionId))
.collect(Collectors.toList());
final List<ConsumerRecord<K, V>> allRecords = new ArrayList<>();
ConsumerRecords<K, V> records;
// Assign topic partitions
consumer.assign(topicPartitions);
consumer.seekToBeginning(topicPartitions);
// Pull records from kafka
records = consumer.poll(Duration.ofMillis(10000));
records.forEach(allRecords::add);
return allRecords;
}
public List<Record> recordsFromTopic(final String topic) {
// Find all partitions on topic.
final TopicDescription topicDescription=(TopicDescription) adminService.topicDescription(topic);
final Collection<Integer> partitions=topicDescription
.partitions()
.stream()
.map(TopicPartitionInfo::partition)
.collect(Collectors.toList());
var list = consumeAllRecordsFromTopic(topic, partitions);
var element = list.stream().filter(Objects::nonNull).map(x -> Record
.builder()
.values(x.value())
.offset(x.offset())
.partition(x.partition()).build())
.collect(Collectors.toList());
return element;
}
I don't understand the Spring Integration behavior with JobLaunchingGateway. I have this example config:
public SftpInboundChannelAdapterSpec sftpInboundChannelAdapterSpec() {
return Sftp.inboundAdapter(ftpFileSessionFactory())
.preserveTimestamp(true)
.deleteRemoteFiles(false)
.remoteDirectory(integrationProperties.getRemoteDirectory())
.filter(sftpFileListFilter())
.localDirectory(new File(integrationProperties.getLocalDirectory()));
}
public PollerSpec pollerSpec() {
PollerSpec cron = Pollers.cron(integrationProperties.getPollerCron());
cron.maxMessagesPerPoll(integrationProperties.getMessagePerPoll());
return cron;
}
@Bean
public IntegrationFlow sftpInboundFlow() {
return IntegrationFlows.from(sftpInboundChannelAdapterSpec(), pc -> pc.poller(pollerSpec()))
.transform(fileMessageToJobRequest())
.handle(jobLaunchingGateway())
.handle(message -> {
logger.info("Handle message: {}", message.getPayload());
})
.get();
}
@Bean
public JobLaunchingGateway jobLaunchingGateway() {
SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
simpleJobLauncher.setJobRepository(jobRepository);
simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
return jobLaunchingGateway;
}
private ChainFileListFilter<ChannelSftp.LsEntry> sftpFileListFilter() {
ChainFileListFilter<ChannelSftp.LsEntry> chainFileListFilter = new ChainFileListFilter<>();
chainFileListFilter.addFilter(new SftpSimplePatternFileListFilter("*.xlsx"));
chainFileListFilter.addFilter(new SftpPersistentAcceptOnceFileListFilter(metadataStore(), "INT"));
return chainFileListFilter;
}
If I set polling to every minute, the job is created every minute, and I don't see any new record in the MetadataStore.
When I comment out the line with .handle(jobLaunchingGateway()):
@Bean
public IntegrationFlow sftpInboundFlow() {
return IntegrationFlows.from(sftpInboundChannelAdapterSpec(), pc -> pc.poller(pollerSpec()))
.transform(fileMessageToJobRequest())
// .handle(jobLaunchingGateway())
.handle(message -> {
logger.info("Handle message: {}", message.getPayload());
})
.get();
}
Everything works as expected.
I expected the SFTP adapter to fetch new file(s) and then create a new job for each file. I don't understand why I don't see records in the MetadataStore when the JobLaunchingGateway is enabled.
Can you help me and explain this? I am new to Spring Integration and Batch.
I have one scheduler which produces one event, and my consumer consumes this event. The payload of this event is a JSON with the fields below:
private String topic;
private String partition;
private String filterKey;
private long CustId;
Now I need to trigger one more consumer, which will take all this information from the response of the first consumer.
@KafkaListener(topics = "<**topic-name-from-first-consumer-response**>", groupId = "group", containerFactory = "kafkaListenerFactory")
public void consumeJson(List<User> data, Acknowledgment acknowledgment,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
@Header(KafkaHeaders.OFFSET) List<Long> offsets) {
// consumer code goes here...}
I need to create some dynamic variable which I can pass in place of the topic name.
Similarly, I am using filtering in the configuration, and I need to pass the key dynamically in the configuration:
factory.setRecordFilterStrategy(new RecordFilterStrategy<String, Object>() {
@Override
public boolean filter(ConsumerRecord<String, Object> consumerRecord) {
if(consumerRecord.key().equals("**Key will go here**")) {
return false;
}
else {
return true;
}
}
});
How can we dynamically inject these values from the response of the first consumer and trigger the second consumer? Both consumers are in the same application.
You cannot do that with an annotated listener; the configuration is only used during initialization. You would need to create a listener container yourself (using the ConcurrentKafkaListenerContainerFactory) to dynamically create a listener.
EDIT
Here's an example.
@SpringBootApplication
public class So69134055Application {
public static void main(String[] args) {
SpringApplication.run(So69134055Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so69134055").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
}
spring.kafka.consumer.auto-offset-reset=earliest
Output, showing that the records with bar in the key were filtered out:
Others: [test0, test2, test4, test6, test8]
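To trigger the dynamic container creation, something just needs to publish the new topic's name to the bootstrap topic; for example, a sketch using the same KafkaTemplate ("someNewTopic" is only illustrative):
// the payload becomes the topicName argument of listen(String topicName)
template.send("so69134055", "someNewTopic");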
In my Spring Kafka application, I want to trigger the consumer at run time according to the input of some scheduler. The scheduler will tell the listener from which topic it can start consuming messages. It is a Spring Boot application with a custom ConcurrentKafkaListenerContainerFactory class. I need to perform three tasks:
Close the container after successfully reading all the messages available on the topic.
Store the current offset in a DB or the file system.
Next time the consumer comes up again, use the stored offset to process the records instead of the default offset managed by Kafka, so that in the future we can change the offset value in the DB and get the desired reports.
I know how to handle all of this with @KafkaListener, but I am not sure how to hook it up with the ConcurrentKafkaListenerContainerFactory. The current code is listed below:
@SpringBootApplication
public class KafkaApp{
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "myId", topics = "testTopic")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
}
EDIT
@SpringBootApplication
public class KafkaApp{
public static void main(String[] args) {
SpringApplication.run(KafkaApp.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "myId", topics = "testTopic")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.getContainerProperties().setIdleEventInterval(3000L);
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
@EventListener
public void eventHandler(ListenerContainerIdleEvent event) {
logger.info("No messages received for " + event.getIdleTime() + " milliseconds");
}
}
You can receive ListenerContainerIdleEvents when there are no messages left to process; you can use this event to stop the container; you should perform the stop() on a different thread (not the one that publishes the event).
See How to check if Kafka is empty using Spring Kafka?
You can get the partition/offset in several ways.
void otherListen(List<ConsumerRecord<..., ...>> records)
or
void otherListen(List<String> others,
@Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
@Header(KafkaHeaders.OFFSET) List<Long> offsets)
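For example, a minimal sketch of the second form that also persists where to resume for each record's partition (offsetStore is a hypothetical DB-backed helper, not a Spring Kafka API):
void otherListen(List<String> others,
        @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
        @Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    log.info("Others: {}", others);
    // remember where to resume for each record: last processed offset + 1
    for (int i = 0; i < others.size(); i++) {
        offsetStore.save(topics.get(i), partitions.get(i), offsets.get(i) + 1); // hypothetical store
    }
}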
You can specify the starting offset in the
new TopicPartitionOffset(topicName, 0, startOffset)
when creating the container.
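A corresponding sketch that reads the stored offset back when creating the container (again using the hypothetical offsetStore; a null offset simply leaves the normal offset behavior in place):
Long startOffset = offsetStore.find(topicName, 0); // may be null on the first run
ConcurrentMessageListenerContainer<String, String> container =
        this.factory.createContainer(new TopicPartitionOffset(topicName, 0, startOffset));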
EDIT
To stop the container when it is idle, set the idleEventInterval, add an @EventListener method, and stop the container there.
TaskExecutor exec = new SimpleAsyncTaskExecutor();
@EventListener
void idle(ListenerContainerIdleEvent event) {
log...
this.exec.execute(() -> event.getContainer(ConcurrentMessageListenerContainer.class).stop());
}
If you add concurrency to your containers, you would need to wait for each child container to go idle before stopping the parent container, as sketched below.
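A rough sketch of such an event listener for the concurrent case, reusing the exec task executor from above (the Set simply tracks which child containers have gone idle):
private final Set<String> idleChildren = ConcurrentHashMap.newKeySet();
@EventListener
void idleChild(ListenerContainerIdleEvent event) {
    ConcurrentMessageListenerContainer<?, ?> parent =
            event.getContainer(ConcurrentMessageListenerContainer.class);
    this.idleChildren.add(event.getListenerId()); // each child publishes its own idle event
    if (this.idleChildren.size() >= parent.getConcurrency()) {
        this.exec.execute(parent::stop); // stop on a different thread, not the consumer thread
    }
}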
EDIT2
I just added it to the code I wrote for the answer to your other question and it works exactly as expected.
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.getContainerProperties().setIdleEventInterval(3000L);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
TaskExecutor exec = new SimpleAsyncTaskExecutor();
@EventListener
public void idle(ListenerContainerIdleEvent event) {
log.info(event.toString());
this.exec.execute(() -> {
ConcurrentMessageListenerContainer container = event.getContainer(ConcurrentMessageListenerContainer.class);
log.info("stopping container: " + container.getBeanName());
container.stop();
});
}
[foo.container-0-C-1] Others: [test0, test2, test4, test6, test8]
[foo.container-0-C-1] ListenerContainerIdleEvent [idleTime=5.007s, listenerId=foo.container-0, container=KafkaMessageListenerContainer [id=foo.container-0, clientIndex=-0, topicPartitions=[foo-0]], paused=false, topicPartitions=[foo-0]]
[SimpleAsyncTaskExecutor-1] stopping container: foo.container
[foo.container-0-C-1] [Consumer clientId=consumer-group.for.foo-2, groupId=group.for.foo] Unsubscribed all topics or patterns and assigned partitions
[foo.container-0-C-1] Metrics scheduler closed
[foo.container-0-C-1] Closing reporter org.apache.kafka.common.metrics.JmxReporter
[foo.container-0-C-1] Metrics reporters closed
[foo.container-0-C-1] App info kafka.consumer for consumer-group.for.foo-2 unregistered
[foo.container-0-C-1] group.for.foo: Consumer stopped
I've been tinkering with wrapping an old-style listener interface using RxJava. What I've come up with seems to work, but the usage of Observable.using feels a bit awkward.
The requirements are:
Only one subscription per id to the underlying service.
The latest value for a given id should be cached and served to new subscribers.
We must unsubscribe from the underlying service if nothing is listening to an id.
Is there a better way? The following is what I've got.
static class MockServiceRXAdapterImpl1 implements MockServiceRXAdapter {
PublishSubject<MockResponse> mockResponseObservable = PublishSubject.create();
MockService mockService = new MockService(mockResponse -> mockResponseObservable.onNext(mockResponse));
final ConcurrentMap<String, Observable<String>> subscriptionMap = new ConcurrentHashMap<>();
public Observable<String> getObservable(String id) {
return Observable.using(() -> subscriptionMap.computeIfAbsent(
id,
key -> mockResponseObservable.filter(mockResponse -> mockResponse.id.equals(id))
.doOnSubscribe(disposable -> mockService.subscribe(id))
.doOnDispose(() -> {
mockService.unsubscribe(id);
subscriptionMap.remove(id);
})
.map(mockResponse -> mockResponse.value)
.replay(1)
.refCount()),
observable -> observable,
observable -> {
}
);
}
}
You may use Observable.create, so the code may look like this:
final Map<String, Observable<String>> subscriptionMap = new HashMap<>();
MockService mockService = new MockService();
public Observable<String> getObservable(String id) {
log.info("looking for root observable");
if (subscriptionMap.containsKey(id)) {
log.info("found root observable");
return subscriptionMap.get(id);
} else {
synchronized (subscriptionMap) {
if (!subscriptionMap.containsKey(id)) {
log.info("creating new root observable");
final Observable<String> responseObservable = Observable.create(emitter -> {
MockServiceListener listener = emitter::onNext;
mockService.addListener(listener);
emitter.setCancellable(() -> {
mockService.removeListener(listener);
mockService.unsubscribe(id);
synchronized (subscriptionMap) {
subscriptionMap.remove(id);
}
});
mockService.subscribe(id);
})
.filter(mockResponse -> mockResponse.id.equals(id))
.map(mockResponse -> mockResponse.value)
.replay(1)
.refCount();
subscriptionMap.put(id, responseObservable);
} else {
log.info("Another thread created the observable for us");
}
return subscriptionMap.get(id);
}
}
}
I think I've gotten it to work using .groupBy(...).
In my case Response.getValue() returns an int, but you get the idea:
class Adapter
{
Subject<Response> msgSubject;
ThirdPartyService service;
Map<String, Observable<Integer>> observables;
Observable<GroupedObservable<String, Response>> groupedObservables;
public Adapter()
{
msgSubject = PublishSubject.<Response>create().toSerialized();
service = new MockThirdPartyService( msgSubject::onNext );
groupedObservables = msgSubject.groupBy( Response::getId );
observables = Collections.synchronizedMap( new HashMap<>() );
}
public Observable<Integer> getObservable( String id )
{
return observables.computeIfAbsent( id, this::doCreateObservable );
}
private Observable<Integer> doCreateObservable( String id )
{
service.subscribe( id );
return groupedObservables
.filter( group -> group.getKey().equals( id ))
.doOnDispose( () -> {
synchronized ( observables )
{
service.unsubscribe( id );
observables.remove( id );
}
} )
.concatMap( Functions.identity() )
.map( Response::getValue )
.replay( 1 )
.refCount();
}
}
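A rough usage sketch under the same assumptions (the MockThirdPartyService/Response types from the snippet above): both subscribers share a single underlying subscription for the id, the second one immediately receives the cached value thanks to replay(1), and the last disposal triggers the unsubscribe.
Adapter adapter = new Adapter();
Disposable first = adapter.getObservable("id-1")
        .subscribe(v -> System.out.println("first: " + v));
// reuses the same underlying subscription and replays the latest cached value
Disposable second = adapter.getObservable("id-1")
        .subscribe(v -> System.out.println("second: " + v));
first.dispose();
second.dispose(); // refCount drops to zero -> service.unsubscribe("id-1") and map cleanup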