Is there a way to configure the read timeout in JedisConnectionFactory, as we have for HttpRequestFactory? I've configured JedisConnectionFactory with the timeout property as below. Does it cover both the connection timeout and the read timeout?
final JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
redisConnectionFactory.setHostName(redisHost);
redisConnectionFactory.setTimeout(10000);
redisConnectionFactory.setPort(port);
redisConnectionFactory.setUsePool(true);
redisConnectionFactory.afterPropertiesSet();
@Bean(name = "redisCacheManager")
public CacheManager cacheManager(final RedisTemplate<String, Object> redisTemplate) {
    final RedisCacheManager manager = new RedisCacheManager(redisTemplate());
    manager.setDefaultExpiration(Long.parseLong(expiryInSecs));
    return manager;
}

@Bean(name = "redisTemplate")
public RedisTemplate<String, Object> redisTemplate() {
    final RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    redisTemplate.setKeySerializer(stringRedisSerializer());
    redisTemplate.setHashKeySerializer(stringRedisSerializer());
    redisTemplate.setValueSerializer(stringRedisSerializer());
    redisTemplate.setHashValueSerializer(stringRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate;
}
You can use JedisClientConfiguration. It has a builder, JedisClientConfigurationBuilder, which exposes the connection timeout and the read timeout as separate properties.
JedisClientConfiguration clientConfiguration = JedisClientConfiguration.builder()
        .readTimeout(readTimeout)
        .connectTimeout(connectTimeout)
        .build();
You can then use one of the JedisConnectionFactory constructors that accepts a JedisClientConfiguration.
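For example, a minimal sketch (Spring Data Redis 2.x; the host, port, and timeout values are placeholders, and both timeouts take a java.time.Duration):
RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("localhost", 6379);

JedisClientConfiguration clientConfig = JedisClientConfiguration.builder()
        .connectTimeout(Duration.ofSeconds(2)) // TCP connect timeout
        .readTimeout(Duration.ofSeconds(2))    // socket read timeout (SO_TIMEOUT)
        .usePooling()
        .build();

JedisConnectionFactory factory = new JedisConnectionFactory(serverConfig, clientConfig);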
Extend from JedisConnectionFactory and override the afterPropertiesSet method as shown below:
public class CustomJedisConnectionFactory extends JedisConnectionFactory {

    private int connectionTimeout;
    private int readTimeout;

    // override super class constructors if required.

    public void setConnectionTimeout(int connectionTimeout) {
        this.connectionTimeout = connectionTimeout;
    }

    public void setReadTimeout(int readTimeout) {
        this.readTimeout = readTimeout;
    }

    @Override
    public void afterPropertiesSet() {
        super.afterPropertiesSet();
        final JedisShardInfo jedisShardInfo = this.getShardInfo();
        if (Objects.nonNull(jedisShardInfo)) {
            jedisShardInfo.setConnectionTimeout(connectionTimeout);
            jedisShardInfo.setSoTimeout(readTimeout);
        }
    }
}
Create an instance of CustomJedisConnectionFactory and set the "connectionTimeout" and "readTimeout" fields.
CustomJedisConnectionFactory factory = new CustomJedisConnectionFactory();
factory.setHostName(host);
factory.setPort(port);
factory.setConnectionTimeout(connectionTimeout);
factory.setReadTimeout(readTimeout);
factory.afterPropertiesSet(); // applies the timeouts; Spring calls this automatically when the factory is declared as a bean
Related
I have two queues and they each have messages on them. Queue one has Bird objects and queue two has BirdSpotting objects. I'm using a DefaultClassMapper to convert the messages back into objects. Is there a way for me to add different configurations to each of my RabbitListeners?
My listeners:
@Qualifier("bird")
@RabbitListener(queues = "vogels")
public void receiveBird(Bird in) {
    BirdSpotting birdSpotting = new BirdSpotting();
    birdSpotting.setBird(in);
    rabbitTemplate.convertAndSend("vogelspottings", birdSpotting);
}

@Qualifier("birdspotting")
@RabbitListener(queues = "vogelspottingmetlocatie")
public void receiveBirdWithLocation(BirdSpotting birdSpotting) {
    service.saveBirdSpotting(birdSpotting);
}
My configuration class.
@Configuration
@EnableRabbit
public class RabbitConf2 implements RabbitListenerConfigurer {

    @Autowired
    DefaultClassMapper mapper;

    @Bean
    public MappingJackson2MessageConverter consumerJackson2MessageConverter() {
        return new MappingJackson2MessageConverter();
    }

    @Bean
    public DefaultMessageHandlerMethodFactory messageHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(consumerJackson2MessageConverter());
        return factory;
    }

    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory());
    }

    @Bean
    public RabbitTemplate rabbitTemplateService2(final ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(producerJackson2MessageConverterService2());
        return rabbitTemplate;
    }

    @Bean
    public Jackson2JsonMessageConverter producerJackson2MessageConverterService2() {
        final Jackson2JsonMessageConverter jackson2JsonMessageConverter = new Jackson2JsonMessageConverter();
        jackson2JsonMessageConverter.setClassMapper(mapper);
        return jackson2JsonMessageConverter;
    }
My two DefaultClassMapper beans for the two queues:
@Bean(value = "bird")
public DefaultClassMapper classMapperService2() {
    DefaultClassMapper classMapper = new DefaultClassMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<>();
    idClassMapping.put("be.kdg.birdgeneratorservice.Bird", Bird.class);
    classMapper.setIdClassMapping(idClassMapping);
    return classMapper;
}

@Bean(value = "birdspotting")
public DefaultClassMapper classMapperService3() {
    DefaultClassMapper classMapper = new DefaultClassMapper();
    Map<String, Class<?>> idClassMapping = new HashMap<>();
    idClassMapping.put("be.kdg.locationservice.BirdSpotting", BirdSpotting.class);
    classMapper.setIdClassMapping(idClassMapping);
    return classMapper;
}
You need to introduce one more RabbitListenerContainerFactory bean with the appropriate configuration and reference its name from the second @RabbitListener via the containerFactory attribute:
/**
 * The bean name of the {@link org.springframework.amqp.rabbit.listener.RabbitListenerContainerFactory}
 * to use to create the message listener container responsible to serve this endpoint.
 * <p>If not specified, the default container factory is used, if any.
 * @return the {@link org.springframework.amqp.rabbit.listener.RabbitListenerContainerFactory}
 * bean name.
 */
String containerFactory() default "";
This way you keep the default container factory provided by Spring Boot and have your own custom factory for the other use case.
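As a rough sketch (not the original poster's code; the bean name birdSpottingContainerFactory and the injected ConnectionFactory are assumptions), the second factory could carry its own Jackson2JsonMessageConverter wired to the "birdspotting" class mapper:
@Bean
public SimpleRabbitListenerContainerFactory birdSpottingContainerFactory(
        ConnectionFactory connectionFactory,
        @Qualifier("birdspotting") DefaultClassMapper birdSpottingMapper) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // converter applied to messages consumed through this factory
    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    converter.setClassMapper(birdSpottingMapper);
    factory.setMessageConverter(converter);
    return factory;
}
The second listener then points at that factory:
@RabbitListener(queues = "vogelspottingmetlocatie", containerFactory = "birdSpottingContainerFactory")
public void receiveBirdWithLocation(BirdSpotting birdSpotting) {
    service.saveBirdSpotting(birdSpotting);
}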
See more info in the Docs: https://docs.spring.io/spring-amqp/docs/2.1.4.RELEASE/reference/#async-annotation-driven
I want to programmatically control when to start/stop my Kafka listeners. Looking through some previous posts and discussions, it looks like I could use KafkaListenerEndpointRegistry.getListenerContainer(id).stop() to do that. However, I verified that no containers are registered with my KafkaListenerEndpointRegistry bean. How do I register my container with KafkaListenerEndpointRegistry?
@Autowired
KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

@Bean
public KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry() {
    kafkaListenerEndpointRegistry = new KafkaListenerEndpointRegistry();
    return kafkaListenerEndpointRegistry;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConsumerFactory<String, SpecificRecord> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, SpecificRecord> factory
            = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager());
    factory.getContainerProperties().setIdleEventInterval(60000L);
    factory.getContainerProperties().setAckOnError(false);
    factory.setRetryTemplate(getRetryTemplate());
    factory.setConcurrency(2);
    factory.getContainerProperties().setErrorHandler(rawLogsErrorHandler(KafkaTemplate));
    return factory;
}
@Bean
KafkaTransactionManager<String, SpecificRecord> kafkaTransactionManager() {
    return new KafkaTransactionManager<>(producerFactory());
}

@Bean
public RetryTemplate getRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(1000);
    RetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryTemplate.setBackOffPolicy(backOffPolicy);
    retryTemplate.setRetryPolicy(retryPolicy);
    retryTemplate.registerListener(retryListener());
    return retryTemplate;
}
@Bean
public LoggingErrorHandler rawLogsErrorHandler(KafkaTemplate<String, SpecificRecord> kafkaTemplate) {
    return new LoggingErrorHandler() {
        @SuppressWarnings({ "rawtypes", "unchecked" })
        @Override
        public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
            // record sent to a dead letter topic here
            // stop all listeners
            kafkaListenerEndpointRegistry.stop();
        }
    };
}

@Bean
public LogReceiver receiver() {
    return new LogReceiver();
}
// and in the LogReceiver class
public class LogReceiver {

    @KafkaListener(topics = RAWLLOGTOPIC, id = "rawLogConsumer", containerFactory = "kafkaListenerContainerFactory")
    public void onMessage(@Payload Log log,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
            @Header(KafkaHeaders.OFFSET) Long offset) throws Exception {
        // processing code
    }
}
See the documentation. Only containers created for @KafkaListener methods are registered in the registry.
Containers that you retrieve from the factory and declare as @Beans are registered with the application context instead.
If you create containers manually using the container factory, no registration with the endpoint registry is performed.
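As a minimal sketch (assuming the @KafkaListener shown above keeps its id of "rawLogConsumer"), you can stop and start that listener through the registry that Spring Kafka itself manages for annotated listeners:
@Autowired
private KafkaListenerEndpointRegistry registry;

public void pauseRawLogConsumer() {
    // the id given in @KafkaListener(id = "rawLogConsumer") is the lookup key
    MessageListenerContainer container = registry.getListenerContainer("rawLogConsumer");
    if (container != null) {
        container.stop();
    }
}

public void resumeRawLogConsumer() {
    registry.getListenerContainer("rawLogConsumer").start();
}
Note that the registry you inject should be the one created by @EnableKafka; declaring an extra KafkaListenerEndpointRegistry @Bean, as in the configuration above, likely gives you a second, empty registry that the annotation processing does not use.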
I am using Spring Data Redis and have an issue with JedisPoolConfig. I have configured a RedisTemplate as follows:
@Bean
JedisPoolConfig jedisPoolConfig() {
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    return jedisPoolConfig;
}

@Bean
public RedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory(jedisPoolConfig());
    jedisConnectionFactory.setHostName(redisSettings.getServer().getHost());
    jedisConnectionFactory.setPort(redisSettings.getServer().getPort());
    return jedisConnectionFactory;
}

@Bean
public RedisTemplate<String, Integer> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<String, Integer>();
    redisTemplate.setConnectionFactory(redisConnectionFactory);
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.setEnableTransactionSupport(true);
    return redisTemplate;
}
I have a service that is marked as @Transactional, which in turn calls a @Repository that increments a number of keys in Redis:
@Service
@Transactional
public class MyService {

    @Autowired
    MyRepository myRepository;

    public void recordStats() {
        myRepository.recordStats();
    }
}

@Repository
public class MyRepository {

    @Resource(name = "redisTemplate")
    ValueOperations<String, Integer> valueOperations;

    public void recordStats() {
        valueOperations.increment("KEY01", 1);
        valueOperations.increment("KEY02", 1);
        valueOperations.increment("KEY03", 1);
        valueOperations.increment("KEY04", 1);
        valueOperations.increment("KEY05", 1);
        valueOperations.increment("KEY06", 1);
        valueOperations.increment("KEY07", 1);
        valueOperations.increment("KEY08", 1);
        valueOperations.increment("KEY09", 1);
        valueOperations.increment("KEY10", 1);
        valueOperations.increment("KEY11", 1);
    }
}
When I call myService.recordStats() and step through the code in debug, it hangs when trying to increment KEY11 and ultimately fails with redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool. If I amend the JedisPoolConfig to increase maxTotal as follows:
@Bean
JedisPoolConfig jedisPoolConfig() {
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    jedisPoolConfig.setMaxTotal(128);
    return jedisPoolConfig;
}
Then the problem goes away, and I can increment all 11 keys in a transaction. It seems to be the case that every valueOperations.increment call is taking another connection from the pool. Is this correct, or do I have a configuration problem somewhere?
Hi, I need to do multiple insertions of the form
SADD key value
I have the key/value pairs and need to know how to perform a mass insertion using Java. I have written a file in the Redis protocol format. How do I proceed from here?
If you have inputs already written in the Redis protocol format, why not just use the pipe mode of redis-cli or nc? It's explained at http://redis.io/topics/mass-insert.
If you have a large number of (key, value) inputs, you can use Jedis to perform SADD with pipelining to get higher throughput.
The example below assumes that iter is an Iterator whose elements are each in key"\t"value form.
try (Jedis jedis = new Jedis(host, port)) {
    Pipeline pipeline = jedis.pipelined();
    while (iter.hasNext()) {
        String[] keyValue = iter.next().split("\t");
        pipeline.sadd(keyValue[0], keyValue[1]);
        // call pipeline.sync() and start a new pipeline here if one pipeline accumulates too many operations
    }
    pipeline.sync();
}
If you are doing the actual read/write operations through Spring's CacheManager with a RedisTemplate configured to use Redis as the cache, you can also use the executePipelined method of RedisTemplate, which takes a callback as an argument. The callback defines a doInRedis method that does the read/write work you want to batch.
The following code shows how to insert a List of objects wrapped in a CacheableObject interface (which exposes getKey() and getValue()) by calling redisTemplate.opsForHash().put().
@Component
public class RedisClient {

    @Autowired
    RedisTemplate redisTemplate;

    // batch-insert a list of objects, using a Redis pipeline, into the cache specified by cacheName
    public void put(String cacheName, List<CacheableObject> objects) {
        try {
            this.redisTemplate.executePipelined(new RedisCallback<Object>() {
                @Override
                public Object doInRedis(RedisConnection connection) throws DataAccessException {
                    for (CacheableObject object : objects) {
                        redisTemplate.opsForHash().put(cacheName, object.getKey(), object.getValue());
                    }
                    return null;
                }
            });
        } catch (Exception e) {
            // "log" is assumed to be an SLF4J logger declared on this class
            log.error("Error inserting objects into Redis cache: {}", e.getMessage());
        }
    }
}
RedisTemplate itself is configured using a configuration class such as the following:
@Configuration
@EnableCaching
public class RedisCacheConfig extends CachingConfigurerSupport implements CachingConfigurer {

    @Value("${redis.hostname}")
    private String redisHost;

    @Value("${redis.port}")
    private int redisPort;

    @Value("${redis.timeout.secs:1}")
    private int redisTimeoutInSecs;

    @Value("${redis.socket.timeout.secs:1}")
    private int redisSocketTimeoutInSecs;

    @Value("${redis.ttl.hours:1}")
    private int redisDataTTL;
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration(redisHost, redisPort);
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<Object, Object> redisTemplate() {
        RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<Object, Object>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }
    @Bean
    public RedisCacheManager redisCacheManager(JedisConnectionFactory jedisConnectionFactory) {
        RedisCacheConfiguration redisCacheConfiguration = RedisCacheConfiguration.defaultCacheConfig()
                .disableCachingNullValues()
                .entryTtl(Duration.ofHours(redisDataTTL))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(RedisSerializer.java()));
        redisCacheConfiguration.usePrefix();
        RedisCacheManager redisCacheManager = RedisCacheManager.RedisCacheManagerBuilder
                .fromConnectionFactory(jedisConnectionFactory)
                .cacheDefaults(redisCacheConfiguration)
                .build();
        redisCacheManager.setTransactionAware(true);
        return redisCacheManager;
    }
    @Bean
    public JedisPoolConfig poolConfig() {
        final JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setTestOnBorrow(true);
        jedisPoolConfig.setMaxTotal(100);
        jedisPoolConfig.setMaxIdle(100);
        jedisPoolConfig.setMinIdle(10);
        jedisPoolConfig.setTestOnReturn(true);
        jedisPoolConfig.setTestWhileIdle(true);
        return jedisPoolConfig;
    }

    @Override
    public CacheErrorHandler errorHandler() {
        return new RedisCacheErrorHandler();
    }
}
I was trying to implement Redis with Spring Boot, and I am randomly encountering the exception below on my localhost:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
I have already tried various combinations of JedisPoolConfig properties, but none of them help, and I'm not sure where I'm going wrong.
@Configuration
public class RedisConfigurationSetup {

    @Bean
    public RedisConnectionFactory jedisConnectionFactory() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(10000);
        poolConfig.setMinIdle(1000);
        poolConfig.setMaxIdle(-1);
        poolConfig.setMaxWaitMillis(500);
        poolConfig.setTestOnBorrow(true);
        poolConfig.setTestOnReturn(true);
        JedisConnectionFactory ob = new JedisConnectionFactory(poolConfig);
        ob.setHostName("127.0.0.1");
        ob.setPort(6379);
        ob.setUsePool(true);
        return ob;
    }

    @SuppressWarnings({ "rawtypes" })
    @Bean(name = "redisTemplate")
    public RedisTemplate stringRedisTemplate() {
        RedisTemplate rt = new RedisTemplate();
        rt.setConnectionFactory(jedisConnectionFactory());
        rt.setEnableTransactionSupport(true);
        return rt;
    }
}
public class GeoLocationCacheServiceImpl implements GeoLocationCacheService {

    @Autowired
    @Qualifier("redisTemplate")
    RedisTemplate geoObjectRedisTemplate;

    @Override
    public void saveUpdateGeoLoc(GeoObject geoObject) {
        if (geoObject != null) {
            // Some business logic
            geoObjectRedisTemplate.opsForValue().set(geoObject.getObjectID(), geoObject);
            // Some business logic
        }
    }
}
This happens because of rt.setEnableTransactionSupport(true);. From the source code we can see that the connection is only released back to the pool when transaction support is disabled:
if (!enableTransactionSupport) {
    RedisConnectionUtils.releaseConnection(conn, factory);
}
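A minimal sketch of the implied fix, under the assumption that you don't actually need Redis transactions here: leave transaction support disabled (the default) so each operation's connection is returned to the pool immediately.
@SuppressWarnings({ "rawtypes" })
@Bean(name = "redisTemplate")
public RedisTemplate stringRedisTemplate() {
    RedisTemplate rt = new RedisTemplate();
    rt.setConnectionFactory(jedisConnectionFactory());
    // no rt.setEnableTransactionSupport(true): without transaction support the
    // connection is released after every operation instead of staying bound to the thread
    return rt;
}
If you do need transaction support, the operations should run inside a Spring-managed transaction (for example a method annotated with @Transactional), so that the bound connection is released when the transaction completes.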