How to fetch a cached value using the Redisson client - Java

I want to fetch a cached (@Cacheable) value using the Redisson client, but it returns strange data whenever I use a codec (getBucket("fruits::1", StringCodec.INSTANCE)), and it throws an error unless I use a codec.
I used the following code for caching:
@Cacheable(value = "fruits", key = "#id")
public Fruit getFruitById(int id) {
    // get fruit by id
    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Fruit> query = builder.createQuery(Fruit.class);
    Root<Fruit> root = query.from(Fruit.class);
    query.select(root);
    query.where(builder.equal(root.get("id"), id));
    TypedQuery<Fruit> fruitQuery = em.createQuery(query);
    return fruitQuery.getSingleResult();
}
When I use a codec to read that cached data:
RBucket<String> bucket = client.getBucket("fruits::1", StringCodec.INSTANCE);
String fruit = bucket.get();
It returns the following strange data:
��srcom.home.redis.Fruit��.ܵo*rIidIpriceLnametLjava/lang/String;xp,tpomegrantite
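That output looks like the JDK-serialized form of the Fruit object rendered as text by StringCodec. As a sketch (assuming the cache value really was written with the default JDK serialization and Fruit is Serializable), reading the same key back with Redisson's SerializationCodec should yield the object itself:
// Hedged sketch: read the JDK-serialized cache entry back as an object.
// org.redisson.codec.SerializationCodec performs plain Java deserialization.
RBucket<Fruit> bucket = client.getBucket("fruits::1", new SerializationCodec());
Fruit fruit = bucket.get();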
RedisConfiguration
@Bean
public RedisCacheConfiguration cacheConfiguration() {
    RedisCacheConfiguration cacheConfig = RedisCacheConfiguration
            .defaultCacheConfig().entryTtl(Duration.ofSeconds(600))
            .disableCachingNullValues();
    return cacheConfig;
}

@Bean
public RedisCacheManager cacheManager() {
    RedisCacheManager rcm = RedisCacheManager
            .builder(this.getRedissonStoreFactory())
            .cacheDefaults(cacheConfiguration()).transactionAware().build();
    return rcm;
}

@Bean
@Primary
public RedisProperties redisProperties() {
    return new RedisProperties();
}

@Bean
public RedissonConnectionFactory getRedissonStoreFactory() {
    return new RedissonConnectionFactory(getConfig());
}

@Bean
public RedissonNode getNode() {
    RedissonNodeConfig nodeConfig = new RedissonNodeConfig(getConfig());
    nodeConfig.setExecutorServiceWorkers(Collections.singletonMap("ensimp", 1));
    RedissonNode node = RedissonNode.create(nodeConfig);
    node.start();
    return node;
}

@Bean
public Config getConfig() {
    Config config = new Config();
    RedisProperties properties = redisProperties();
    config.useSingleServer().setAddress(
            "redis://" + properties.getHost() + ":" + properties.getPort());
    return config;
}
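As a side note (an assumption, not part of the original post): the stored value format is whatever RedisCacheConfiguration serializes to, and with no serializer configured that is JDK serialization. If the goal is values that StringCodec can read back directly, the cache configuration above could serialize values as JSON instead, for example:
// Hedged sketch: store cache values as JSON so they are readable as plain strings.
@Bean
public RedisCacheConfiguration cacheConfiguration() {
    return RedisCacheConfiguration.defaultCacheConfig()
            .entryTtl(Duration.ofSeconds(600))
            .disableCachingNullValues()
            .serializeValuesWith(RedisSerializationContext.SerializationPair
                    .fromSerializer(new GenericJackson2JsonRedisSerializer()));
}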
redisson.json
{
"singleServerConfig":{
"idleConnectionTimeout":500,
"connectTimeout":1000,
"timeout":3000,
"retryAttempts":3,
"retryInterval":1500,
"password":null,
"subscriptionsPerConnection":5,
"clientName":null,
"address": "redis://127.0.0.1:6379",
"subscriptionConnectionMinimumIdleSize":0,
"subscriptionConnectionPoolSize":1,
"connectionMinimumIdleSize":0,
"connectionPoolSize":20,
"database":0,
"dnsMonitoringInterval":5000
},
"threads":16,
"nettyThreads":32,
"codec":{
"class":"org.redisson.codec.FstCodec"
},
"transportMode":"NIO"
}
I have tried the FST codec too but got the same strange data. I want correctly decoded data; it would be great if anyone could help me with the right code.

You need to use RMapCache to obtain the data, not RBucket:
client.getMapCache("fruits::1", StringCodec.INSTANCE);

Try this:
RMapCache<Object, Fruit> mycache = client.getMapCache("fruits::1");
Then, to retrieve the data, use readAllValues():
Collection<Fruit> values = mycache.readAllValues();
System.out.println(values);

Related

Spring Boot Batch DB to CSV: cannot sort the values by id or show column titles in the CSV file

I tried to implement an example of Spring Boot batch processing from DB to CSV.
The issue I cannot solve is that the values in the CSV file are not sorted by id, and the column titles are not shown.
Here is the output in the CSV file (the first value is the id):
3,9,2013-04-15,Japan,jheino3#mayoclinic.com,Jard,MALE,Heino,87cdda81-45d0-451a-a62f-f8450eae1b64
2,23,1999-01-25,Panama,nloynes2#woothemes.com,Natala,FEMALE,Loynes,24be24e6-525f-42de-855d-52d4fef21608
6,9,2013-02-16,China,rcossans4#harvard.edu,Roseline,FEMALE,Cossans,06f70b0d-2c98-4f46-b933-528499ab91b3
4,8,2014-05-09,Indonesia,jcarlaw1#t.co,Jilleen,FEMALE,Carlaw,c722e6d5-9024-49c5-80e0-c2555f1eb9cc
1,22,2000-08-15,China,gspearing0#flickr.com,Ginnie,FEMALE,Spearing,fa26fa96-97d3-4e8e-856a-fdf07499e13e
5,22,2000-03-18,Indonesia,rgillino6#china.com.cn,Rainer,MALE,Gillino,5302a199-f313-4a24-9550-d643001d9faf
I want all values sorted by id.
How can I do that?
Here is the batch configuration file:
@Configuration // Informs Spring that this class contains configurations
@EnableBatchProcessing // Enables batch processing for the application
@RequiredArgsConstructor
public class BatchConfiguration {

    private final JobBuilderFactory jobBuilderFactory;
    private final StepBuilderFactory stepBuilderFactory;
    private final UserRepository userRepository;

    Date now = new Date(); // java.util.Date, NOT java.sql.Date or java.sql.Timestamp!
    String format1 = new SimpleDateFormat("yyyy-MM-dd'-'HH-mm-ss-SSS", Locale.forLanguageTag("tr-TR")).format(now);

    private Resource outputResource = new FileSystemResource("output/customers_" + format1 + ".csv");

    @Bean
    public RepositoryItemReader<User> reader() {
        RepositoryItemReader<User> repositoryItemReader = new RepositoryItemReader<>();
        repositoryItemReader.setRepository(userRepository);
        repositoryItemReader.setMethodName("findAll");
        final HashMap<String, Sort.Direction> sorts = new HashMap<>();
        sorts.put("id", Sort.Direction.ASC);
        repositoryItemReader.setSort(sorts);
        return repositoryItemReader;
    }

    @Bean
    public FlatFileItemWriter<User> writer() {
        FlatFileItemWriter<User> writer = new FlatFileItemWriter<>();
        writer.setResource(outputResource);
        writer.setAppendAllowed(true);
        writer.setLineAggregator(new DelimitedLineAggregator<User>() {
            {
                setDelimiter(",");
                setFieldExtractor(new BeanWrapperFieldExtractor<User>() {
                    {
                        setNames(new String[]{"id", "age", "birthday", "country", "email", "firstName", "gender", "lastName", "personId"});
                    }
                });
            }
        });
        return writer;
    }

    @Bean
    public UserProcessor processor() {
        return new UserProcessor();
    }

    @Bean
    public UserJobExecutionNotificationListener stepExecutionListener() {
        return new UserJobExecutionNotificationListener(userRepository);
    }

    @Bean
    public UserStepCompleteNotificationListener jobExecutionListener() {
        return new UserStepCompleteNotificationListener();
    }

    @Bean
    public Step step1() {
        return stepBuilderFactory.get("csv-step").<User, User>chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(writer())
                .listener(stepExecutionListener())
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    public Job runJob() {
        return jobBuilderFactory.get("importuserjob")
                .listener(jobExecutionListener())
                .flow(step1()).end().build();
    }

    @Bean
    public TaskExecutor taskExecutor() {
        SimpleAsyncTaskExecutor asyncTaskExecutor = new SimpleAsyncTaskExecutor();
        asyncTaskExecutor.setConcurrencyLimit(10);
        return asyncTaskExecutor;
    }
}
Here is the link to the example: Link
After I added a FlatFileHeaderCallback to the FlatFileItemWriter, the issue was fixed.
Here is the code snippet:
writer.setHeaderCallback(new FlatFileHeaderCallback() {
    @Override
    public void writeHeader(Writer writer) throws IOException {
        for (int i = 0; i < headers.length; i++) {
            if (i != headers.length - 1)
                writer.append(headers[i] + ",");
            else
                writer.append(headers[i]);
        }
    }
});
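The headers array is not shown in the snippet; presumably it holds the same column names passed to the field extractor. A minimal sketch of that assumption, together with a terser callback written as a lambda:
// Hypothetical headers array, assumed to match the field extractor's names.
String[] headers = {"id", "age", "birthday", "country", "email", "firstName", "gender", "lastName", "personId"};

// Equivalent header callback: FlatFileHeaderCallback has a single method, so a lambda works.
writer.setHeaderCallback(w -> w.write(String.join(",", headers)));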

When Spring Boot uses @Cacheable with a Redis cache, the KeyGenerator key generation policy generates duplicate keys containing EnhancerBySpringCGLIB

1. The RedisConfig configuration class is as follows:
@Configuration
public class RedisConfig {

    /**
     * @param factory
     * @return
     */
    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
        GenericJackson2JsonRedisSerializer genericJackson2JsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // configure serialization
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
        RedisCacheConfiguration redisCacheConfiguration = config
                // key serialization: Redis string serialization
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(stringRedisSerializer))
                // value serialization: simple JSON serialization
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(genericJackson2JsonRedisSerializer))
                // do not cache null values
                .disableCachingNullValues()
                // entries expire after 3 days
                .entryTtl(Duration.ofDays(3));
        return RedisCacheManager.builder(factory).cacheDefaults(redisCacheConfiguration).build();
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        GenericJackson2JsonRedisSerializer jsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
        // value serialization uses the JSON serializer
        template.setValueSerializer(jsonRedisSerializer);
        template.setHashValueSerializer(jsonRedisSerializer);
        // key serialization uses StringRedisSerializer
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        return template;
    }

    /**
     * Overrides how cache keys are generated: className.methodName&[parameter list]
     * @return
     */
    @Bean
    public KeyGenerator keyGenerator() {
        return new KeyGenerator() {
            @Override
            public Object generate(Object target, Method method, Object... params) {
                StringBuilder sb = new StringBuilder();
                sb.append(target.getClass().getName()).append("."); // class name
                sb.append(method.getName()).append("&");            // method name
                sb.append(Arrays.toString(params));                 // parameters
                return sb.toString();
            }
        };
    }
}
2. The Redis cache is used via the @Cacheable annotation:
@Transactional(rollbackFor = Exception.class)
@Cacheable(cacheNames = "blog", keyGenerator = "keyGenerator")
@Override
public BlogVo getBlogById(int blogId) {
    Blog blog = blogDao.selectBlogById(blogId);
    if (blog == null || blog.getBlogStatus() == 0) {
        return null;
    }
    BlogVo blogVo = new BlogVo();
    BeanUtils.copyProperties(blog, blogVo);
    User user = userDao.selectUserById(blog.getBlogUserid());
    Type type = typeDao.selectTypeById(blog.getBlogTypeid());
    blogVo.setBlogUser(user);
    blogVo.setBlogType(type);
    return blogVo;
}
3. Duplicate key-value pairs are generated:
Two key-value pairs with the same value are cached in the Redis database. The only difference between the duplicate keys is that one of them contains EnhancerBySpringCGLIB; this has been reproduced many times, and other methods behave the same way:
(1) Correct key: blog::com.zju.sdust.weblog.service.impl.BlogServiceImpl.getBlogById&[1]
(2) Duplicate key: blog::com.zju.sdust.weblog.service.impl.BlogServiceImpl$$EnhancerBySpringCGLIB$$e965464f.getBlogById&[1]
I want to know why this happens and how to solve it. Help me!
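The duplicate key suggests that target.getClass() sometimes resolves to the CGLIB proxy class rather than the user class. One hedged sketch (not from the original post) is to normalize the name with Spring's ClassUtils.getUserClass, so proxied and unproxied invocations produce the same key:
// Sketch: resolve the original user class even when target is a CGLIB proxy.
@Bean
public KeyGenerator keyGenerator() {
    return (target, method, params) -> {
        Class<?> userClass = org.springframework.util.ClassUtils.getUserClass(target);
        return userClass.getName() + "." + method.getName() + "&" + Arrays.toString(params);
    };
}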

How To Stop Polling InboundChannelAdapter

I'm polling files from 2 different directories on 1 server using RotatingServerAdvice, and that's working fine. The problem is that I can't stop polling once I start it with inboundtest.start(). The main idea is to retrieve all the files in those directories and then send inboundtest.stop(). This is the code:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(false);
    factory.setHost(host);
    factory.setPort(port);
    factory.setUser(user);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    //factory.setTestSession(true);
    return factory;
}

@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
    fileSynchronizer.setDeleteRemoteFiles(true);
    fileSynchronizer.setRemoteDirectory(sftpRemoteDirectory);
    fileSynchronizer.setFilter(new SftpRegexPatternFileListFilter(".*?\\.(txt|TXT?)"));
    return fileSynchronizer;
}

@Bean(name = "sftpMessageSource")
@EndpointId("inboundtest")
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller("fileReadingMessageSourcePollerMetadata"), autoStartup = "false")
public MessageSource<File> sftpMessageSource() {
    SftpInboundFileSynchronizingMessageSource source =
            new SftpInboundFileSynchronizingMessageSource(sftpInboundFileSynchronizer());
    source.setLocalDirectory(new File(sftpLocalDirectoryDownloadUpload));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<File>());
    return source;
}

@Bean
public DelegatingSessionFactory<LsEntry> sessionFactory() {
    Map<Object, SessionFactory<LsEntry>> factories = new LinkedHashMap<>();
    factories.put("one", sftpSessionFactory());
    // use the first SF as the default
    return new DelegatingSessionFactory<LsEntry>(factories, factories.values().iterator().next());
}

@Bean
public RotatingServerAdvice advice() {
    List<RotationPolicy.KeyDirectory> keyDirectories = new ArrayList<>();
    keyDirectories.add(new RotationPolicy.KeyDirectory("one", sftpRemoteDirectory));
    keyDirectories.add(new RotationPolicy.KeyDirectory("one", sftpRemoteDirectoryNonUpload));
    return new RotatingServerAdvice(sessionFactory(), keyDirectories, false);
}

@Bean
MessageChannel controlChannel() {
    return new DirectChannel();
}

@Bean
@ServiceActivator(inputChannel = "controlChannel")
ExpressionControlBusFactoryBean controlBus() {
    return new ExpressionControlBusFactoryBean();
}

@Bean
public PollerMetadata fileReadingMessageSourcePollerMetadata() {
    PollerMetadata meta = new PollerMetadata();
    meta.setTrigger(new PeriodicTrigger(1000));
    meta.setAdviceChain(List.of(advice()));
    meta.setMaxMessagesPerPoll(1);
    meta.setErrorHandler(throwable -> new IOException());
    return meta;
}
It is always waiting for a new file in one of the 2 directories, but that's not the idea; the idea is to stop polling when all the files have been retrieved.
From another class I call inboundtest.start() through the control channel; here is the code:
@Autowired
private MessageChannel controlChannel;

public void startProcessingFiles() throws InterruptedException {
    controlChannel.send(new GenericMessage<>("@inboundtest.start()"));
}
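The corresponding stop command would mirror the start call (a sketch, assuming the same control channel and endpoint id):
public void stopProcessingFiles() {
    // Sends the control-bus command that stops the "inboundtest" adapter.
    controlChannel.send(new GenericMessage<>("@inboundtest.stop()"));
}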
I was trying to stop it with this class, but it doesn't work:
@Component
public class StopPollingAdvice implements ReceiveMessageAdvice {

    @Autowired
    private MessageChannel controlChannel;

    @Override
    public Message<?> afterReceive(Message<?> message, Object o) {
        System.out.println("There is no more files, stopping connection" + message.getPayload());
        if (message == null) {
            System.out.println("There is no more files, stopping connection" + message.getPayload());
            Message operation = MessageBuilder.withPayload("@inboundtest.stop()").build();
            controlChannel.send(operation);
        }
        return message;
    }
}
OK. Now I see your point. The RotatingServerAdvice moves to the other server only when the first one is exhausted (by default; see the fair option). So, when you stop it in the advice, it cannot go to the other directory to fetch any more. You need to think about some other stopping solution, something that is not tied to the advice and this afterReceive(), somewhere downstream in your flow...
Or you can provide a custom RotationPolicy (an extension of StandardRotationPolicy) and, in its overridden afterReceive(), check whether all the directories have been processed and then send the stop command.
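A rough sketch of that second suggestion, under the assumption of the Spring Integration 5.x StandardRotationPolicy API (afterReceive(boolean messageReceived, MessageSource<?> source)) and that the advice bean is rebuilt around this policy; the empty-poll counting is only illustrative:
// Hedged sketch: stop the adapter once every configured directory returns an empty poll.
public class StopOnExhaustionRotationPolicy extends StandardRotationPolicy {

    private final MessageChannel controlChannel;
    private int consecutiveEmptyPolls;

    public StopOnExhaustionRotationPolicy(DelegatingSessionFactory<?> factory,
            List<RotationPolicy.KeyDirectory> keyDirectories, MessageChannel controlChannel) {
        super(factory, keyDirectories, false);
        this.controlChannel = controlChannel;
    }

    @Override
    public void afterReceive(boolean messageReceived, MessageSource<?> source) {
        super.afterReceive(messageReceived, source);
        consecutiveEmptyPolls = messageReceived ? 0 : consecutiveEmptyPolls + 1;
        // Two key directories are configured, so two empty polls in a row means both are drained.
        if (consecutiveEmptyPolls >= 2) {
            controlChannel.send(new GenericMessage<>("@inboundtest.stop()"));
        }
    }
}
The advice() bean would then be built from this policy (for example via a RotatingServerAdvice constructor that accepts a RotationPolicy, if available in your version) instead of the three-argument constructor shown in the question.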

Jedis SpringBoot cannot serialize and deserialize Map<String, Object>

I got the following error after adding the @Cacheable annotation to one of my REST methods:
"status": 500,
"error": "Internal Server Error",
"message": "class java.util.ArrayList cannot be cast to class java.util.Map (java.util.ArrayList and java.util.Map are in module java.base of loader 'bootstrap')",
Method declaration is:
#Cacheable("loadDevicesFloors")
#GetMapping("/floors/all-devices")
public Map<String, DevicesFloorDTO> loadDevicesFloors() {...
and DevicesFloorDTO looks as follows:
public class DevicesFloorDTO implements Serializable {
    private final List<DeviceDTO> deviceDTOs;
    private final String floorName;
    private final Integer floorIndex;

    public DevicesFloorDTO(List<DeviceDTO> devicesDtos, String floorName, Integer floorIndex) {
        this.deviceDTOs = devicesDtos;
        this.floorName = floorName;
        this.floorIndex = floorIndex;
    }...
Additionally, here is my @Bean redisTemplate method implementation:
@Bean
JedisConnectionFactory jedisConnectionFactory() {
    JedisConnectionFactory jedisConFactory = new JedisConnectionFactory();
    jedisConFactory.setHostName(redisHost);
    jedisConFactory.setPort(redisPort);
    jedisConFactory.setPassword(redisPassword);
    return jedisConFactory;
}

@Bean
public RedisTemplate<?, ?> redisTemplate() {
    RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
    template.setConnectionFactory(jedisConnectionFactory());
    return template;
}
Does anyone know what is wrong with this implementation? Without @Cacheable it works as expected, but after adding @Cacheable the error occurs. I have searched a lot and still don't know what causes this error or how to fix it. Any comment may be helpful. Thanks a lot!
The generics you have specified for the map (Map<String, DevicesFloorDTO>) will not be available at runtime during serialization/deserialization. What format are you trying to save your objects in within Redis? Are they saved as JSON (string) or binary?
We have had success with GenericJackson2JsonRedisSerializer because it saves the class info inside the JSON string, so Redis knows exactly how to recreate the objects.
There are also some instances where a wrapper object is needed in order to correctly serialize/deserialize objects.
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory redisConnectionFactory,
        ResourceLoader resourceLoader) {
    RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager
            .builder(redisConnectionFactory)
            .cacheDefaults(determineConfiguration());
    List<String> cacheNames = this.cacheProperties.getCacheNames();
    if (!cacheNames.isEmpty()) {
        builder.initialCacheNames(new LinkedHashSet<>(cacheNames));
    }
    return builder.build();
}

private RedisCacheConfiguration determineConfiguration() {
    if (this.redisCacheConfiguration != null) {
        return this.redisCacheConfiguration;
    }
    CacheProperties.Redis redisProperties = this.cacheProperties.getRedis();
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
    ObjectMapper mapper = new Jackson2ObjectMapperBuilder()
            .modulesToInstall(new SimpleModule().addSerializer(new NullValueSerializer(null)))
            .failOnEmptyBeans(false)
            .build();
    mapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY);
    GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer(mapper);
    // get the mapper b/c they registered some internal modules
    config = config.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(serializer));
    if (redisProperties.getTimeToLive() != null) {
        config = config.entryTtl(redisProperties.getTimeToLive());
    }
    if (redisProperties.getKeyPrefix() != null) {
        config = config.prefixKeysWith(redisProperties.getKeyPrefix());
    }
    if (!redisProperties.isCacheNullValues()) {
        config = config.disableCachingNullValues();
    }
    if (!redisProperties.isUseKeyPrefix()) {
        config = config.disableKeyPrefix();
        config = config.computePrefixWith(cacheName -> cacheName + "::");
    }
    return config;
}
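If manual access through the RedisTemplate from the question should read the same entries, the same serializer can be applied there as well; a minimal sketch (an assumption, not part of the original answer):
// Sketch: align the RedisTemplate's serializers with the JSON-based cache values.
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    template.setKeySerializer(new StringRedisSerializer());
    template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
    return template;
}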

Spring Batch Integration Remote Chunking error - Message contained wrong job instance id [25] should have been [24]

I'm running into this bug (more info here) which appears to mean that for multi-threaded batches using remote chunking you can't use a common response channel. I'm not exactly sure how to proceed to get this working. Surely there's a way to get this working, because without it I can't see much benefit to remote chunking.
Here's my DSL config that creates a JobRequest:
@Bean
IntegrationFlow newPollingJobsAdapter(JobLaunchingGateway jobLaunchingGateway) {
    // Start by polling the DB for new PollingJobs according to the polling rate
    return IntegrationFlows.from(jdbcPollingChannelAdapter(),
            c -> c.poller(Pollers.fixedRate(10000)
                    // Do the polling on one of 10 threads.
                    .taskExecutor(Executors.newFixedThreadPool(10))
                    // pull out up to 100 new ids for each poll.
                    .maxMessagesPerPoll(100)))
            .log(LoggingHandler.Level.WARN)
            // The polling adapter above returns a list of ids. Split them out into
            // individual ids
            .split()
            // Now push each one onto a separate thread for batch processing.
            .channel(MessageChannels.executor(Executors.newFixedThreadPool(10)))
            .log(LoggingHandler.Level.WARN)
            // Transform each one into a JobLaunchRequest
            .<Long, JobLaunchRequest>transform(id -> {
                logger.warn("Creating job for ID {}", id);
                JobParametersBuilder builder = new JobParametersBuilder()
                        .addLong("polling-job-id", id, true);
                return new JobLaunchRequest(job, builder.toJobParameters());
            })
            .handle(jobLaunchingGateway)
            // TODO: Notify somebody? No idea yet
            .<JobExecution>handle(exec -> System.out.println("GOT EXECUTION: " + exec))
            .get();
}
Nothing in here is particularly special, no odd configs that I'm aware of.
The job itself is pretty straight-forward, too:
/**
 * This is the definition of the entire batch process that runs polling.
 * @return
 */
@Bean
Job pollingJobJob() {
    return jobBuilderFactory.get("pollingJobJob")
            .incrementer(new RunIdIncrementer())
            // Ship it down to the slaves for actual processing
            .start(remoteChunkingStep())
            // Now mark it as complete
            .next(markCompleteStep())
            .build();
}

/**
 * Sends the job to a remote slave via an ActiveMQ-backed JMS queue.
 */
@Bean
TaskletStep remoteChunkingStep() {
    return stepBuilderFactory.get("polling-job-step-remote-chunking")
            .<Long, String>chunk(20)
            .reader(runningPollingJobItemReader)
            .processor(toJsonProcessor())
            .writer(chunkWriter)
            .build();
}

/**
 * This step just marks the PollerJob as Complete.
 */
@Bean
Step markCompleteStep() {
    return stepBuilderFactory.get("polling-job-step-mark-complete")
            // We want each PollerJob instance to be a separate job in batch, and the
            // reader is using the id passed in via job params to grab the one we want,
            // so we don't need a large chunk size. One at a time is fine.
            .<Long, Long>chunk(1)
            .reader(runningPollingJobItemReader)
            .processor(new PassThroughItemProcessor<Long>())
            .writer(this.completeStatusWriter)
            .build();
}
Here's the chunk writer config:
/**
 * This is part of the bridge between the spring-batch and spring-integration. Nothing special or weird is going
 * on, so see the RemoteChunkHandlerFactoryBean for a description.
 */
@Bean
RemoteChunkHandlerFactoryBean<PollerJob> remoteChunkHandlerFactoryBean() {
    RemoteChunkHandlerFactoryBean<PollerJob> factory = new RemoteChunkHandlerFactoryBean<>();
    factory.setChunkWriter(chunkWriter);
    factory.setStep(remoteChunkingStep());
    return factory;
}

/**
 * This is the writer that will actually send the chunk to the slaves. Note that it also configures the
 * internal channel on which replies are expected.
 */
@Bean
@StepScope
ChunkMessageChannelItemWriter<String> chunkWriter() {
    ChunkMessageChannelItemWriter<String> writer = new ChunkMessageChannelItemWriter<>();
    writer.setMessagingOperations(batchMessagingTemplate());
    writer.setReplyChannel(batchResponseChannel());
    writer.setThrottleLimit(1000);
    return writer;
}
The problem seems to be that the last section sets up the ChunkMessageChannelItemWriter such that the replyChannel is the same one used by all of the writers, despite each writer being step-scoped. It would seem that I need to add a replyChannel header to one of the messages, but I'm not sure where in the chain to do that or how to process it (if I need to at all?).
Also, this is being sent to the slaves via JMS/ActiveMQ and I'd like to avoid having just a stupid number of nearly-identical queues on ActiveMQ just to support this.
What are my options?
Given that you are using a shared JMS infrastructure, you will need a router to get the responses back to the correct chunk writer.
If you use prototype scope on the batchResponseChannel() @Bean, you'll get a unique channel for each writer.
I don't have time to figure out how to set up a chunked batch job, so the following simulates your environment (a non-singleton bean that needs a unique reply channel for each instance). Hopefully it's self-explanatory...
@SpringBootApplication
public class So44806067Application {

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(So44806067Application.class, args);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker1 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker2 = context
                .getBean(SomeNonSingletonNeedingDistinctRequestAndReplyChannels.class);
        if (chunker1.equals(chunker2)) {
            throw new IllegalStateException("Expected different instances");
        }
        chunker1.sendSome();
        chunker2.sendSome();
        ChunkResponse results = chunker1.getResults();
        if (results == null) {
            throw new IllegalStateException("No results1");
        }
        if (results.getJobId() != 1L) {
            throw new IllegalStateException("Incorrect routing1");
        }
        results = chunker2.getResults();
        if (results == null) {
            throw new IllegalStateException("No results2");
        }
        if (results.getJobId() != 2L) {
            throw new IllegalStateException("Incorrect routing2");
        }
        context.close();
    }

    @Bean
    public Map<Long, PollableChannel> registry() {
        // TODO: should clean up entry for jobId when job completes.
        return new ConcurrentHashMap<>();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public SomeNonSingletonNeedingDistinctRequestAndReplyChannels chunker() {
        MessagingTemplate template = template();
        final PollableChannel replyChannel = replyChannel();
        SomeNonSingletonNeedingDistinctRequestAndReplyChannels bean =
                new SomeNonSingletonNeedingDistinctRequestAndReplyChannels(template, replyChannel);
        AbstractSubscribableChannel requestChannel = (AbstractSubscribableChannel) template.getDefaultDestination();
        requestChannel.addInterceptor(new ChannelInterceptorAdapter() {

            @Override
            public Message<?> preSend(Message<?> message, MessageChannel channel) {
                registry().putIfAbsent(((ChunkRequest<?>) message.getPayload()).getJobId(), replyChannel);
                return message;
            }

        });
        BridgeHandler bridge = bridge();
        requestChannel.subscribe(bridge);
        return bean;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public MessagingTemplate template() {
        MessagingTemplate messagingTemplate = new MessagingTemplate();
        messagingTemplate.setDefaultChannel(requestChannel());
        return messagingTemplate;
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public DirectChannel requestChannel() {
        return new DirectChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public PollableChannel replyChannel() {
        return new QueueChannel();
    }

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public BridgeHandler bridge() {
        BridgeHandler bridgeHandler = new BridgeHandler();
        bridgeHandler.setOutputChannel(outboundChannel());
        return bridgeHandler;
    }

    @Bean
    public DirectChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public DirectChannel masterReplyChannel() {
        return new DirectChannel();
    }

    @ServiceActivator(inputChannel = "outboundChannel")
    public void simulateJmsChannelAdapterPair(ChunkRequest<?> request) {
        masterReplyChannel()
                .send(new GenericMessage<>(new ChunkResponse(request.getSequence(), request.getJobId(), null)));
    }

    @Router(inputChannel = "masterReplyChannel")
    public MessageChannel route(ChunkResponse reply) {
        // TODO: error checking - missing reply channel for jobId
        return registry().get(reply.getJobId());
    }

    public static class SomeNonSingletonNeedingDistinctRequestAndReplyChannels {

        private final static AtomicLong jobIds = new AtomicLong();

        private final long jobId = jobIds.incrementAndGet();

        private final MessagingTemplate template;

        private final PollableChannel replyChannel;

        public SomeNonSingletonNeedingDistinctRequestAndReplyChannels(MessagingTemplate template,
                PollableChannel replyChannel) {
            this.template = template;
            this.replyChannel = replyChannel;
        }

        public void sendSome() {
            ChunkRequest<String> cr = new ChunkRequest<>(0, Collections.singleton("foo"), this.jobId, null);
            this.template.send(new GenericMessage<>(cr));
        }

        public ChunkResponse getResults() {
            @SuppressWarnings("unchecked")
            Message<ChunkResponse> received = (Message<ChunkResponse>) this.replyChannel.receive(10_000);
            if (received != null) {
                if (received.getPayload().getJobId().equals(this.jobId)) {
                    System.out.println("Got the right one");
                }
                else {
                    System.out.println(
                            "Got the wrong one " + received.getPayload().getJobId() + " instead of " + this.jobId);
                }
                return received.getPayload();
            }
            return null;
        }
    }
}
