How to do mass insertion in Redis using Java?

Hi, I need to do multiple insertions of the form
SADD key value
I have the key-value pairs and need to know how to perform mass insertion using Java. I have already written a file in the Redis protocol format. How do I proceed further?

If you already have the input written in the Redis protocol format, why not just use the pipe mode of redis-cli or nc? It's explained at http://redis.io/topics/mass-insert.
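For reference, each command in such a protocol file is encoded as an array of bulk strings. A minimal sketch of the encoding in plain Java (class and method names are my own, for illustration):

```java
import java.nio.charset.StandardCharsets;

public class RespEncoder {

    // Encode one command (e.g. SADD key value) in the Redis protocol (RESP):
    // *<argc>\r\n, then $<byte-length>\r\n<arg>\r\n for each argument.
    public static String encode(String... args) {
        StringBuilder sb = new StringBuilder();
        sb.append('*').append(args.length).append("\r\n");
        for (String arg : args) {
            byte[] bytes = arg.getBytes(StandardCharsets.UTF_8);
            sb.append('$').append(bytes.length).append("\r\n")
              .append(arg).append("\r\n");
        }
        return sb.toString();
    }
}
```

Writing one encoded command per SADD into a file and feeding it to `redis-cli --pipe` is exactly the mass-insert approach the documentation describes.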
If you instead have the mass (key, value) inputs in memory, you can use Jedis to perform sadd with pipelining for higher performance.
The example below assumes that iter is an Iterator whose elements each have the form key"\t"value.
try (Jedis jedis = new Jedis(host, port)) {
    Pipeline pipeline = jedis.pipelined();
    while (iter.hasNext()) {
        String[] keyValue = iter.next().split("\t");
        pipeline.sadd(keyValue[0], keyValue[1]);
        // You can call pipeline.sync() and start a new pipeline here
        // if a single pipeline would accumulate too many operations.
    }
    pipeline.sync();
}
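The periodic-sync idea from the comment above can be factored out as plain batching logic. A rough sketch (helper name and batch size are my own assumptions), splitting the key/value lines into chunks where each chunk would be flushed with one sync():

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchSplitter {

    // Split an iterator of "key<TAB>value" lines into batches of at most
    // batchSize entries, so each batch can be sent in its own pipeline
    // and flushed with a single pipeline.sync() call.
    public static List<List<String[]>> split(Iterator<String> lines, int batchSize) {
        List<List<String[]>> batches = new ArrayList<>();
        List<String[]> current = new ArrayList<>();
        while (lines.hasNext()) {
            current.add(lines.next().split("\t", 2));
            if (current.size() == batchSize) {
                batches.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }
}
```

Bounding the batch size keeps the client from buffering an unbounded number of pending replies while the pipeline is open.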

If you are doing the actual read/write operations through Spring's CacheManager with a RedisTemplate configured to use Redis as the cache, you can also use the executePipelined method of RedisTemplate, which takes a callback as an argument. The callback defines a doInRedis method containing the read/write operations you want to perform as a batch.
The following code inserts a List of objects, each implementing a CacheableObject interface that has getKey() and getValue() methods, by calling redisTemplate.opsForHash().put().
@Component
public class RedisClient {

    private static final Logger log = LoggerFactory.getLogger(RedisClient.class);

    @Autowired
    RedisTemplate redisTemplate;

    // Batch-insert a list of objects into the cache specified by cacheName, using a Redis pipeline.
    public void put(String cacheName, List<CacheableObject> objects) {
        try {
            this.redisTemplate.executePipelined(new RedisCallback<Object>() {
                @Override
                public Object doInRedis(RedisConnection connection) throws DataAccessException {
                    for (CacheableObject object : objects) {
                        redisTemplate.opsForHash().put(cacheName, object.getKey(), object.getValue());
                    }
                    return null;
                }
            });
        }
        catch (Exception e) {
            log.error("Error inserting objects into Redis cache: {}", e.getMessage());
        }
    }
}
RedisTemplate itself is configured using a configuration class such as the following:
@Configuration
@EnableCaching
public class RedisCacheConfig extends CachingConfigurerSupport implements CachingConfigurer {

    @Value("${redis.hostname}")
    private String redisHost;

    @Value("${redis.port}")
    private int redisPort;

    @Value("${redis.timeout.secs:1}")
    private int redisTimeoutInSecs;

    @Value("${redis.socket.timeout.secs:1}")
    private int redisSocketTimeoutInSecs;

    @Value("${redis.ttl.hours:1}")
    private int redisDataTTL;

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration =
                new RedisStandaloneConfiguration(redisHost, redisPort);
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<Object, Object> redisTemplate() {
        RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<Object, Object>();
        redisTemplate.setConnectionFactory(jedisConnectionFactory());
        return redisTemplate;
    }

    @Bean
    public RedisCacheManager redisCacheManager(JedisConnectionFactory jedisConnectionFactory) {
        RedisCacheConfiguration redisCacheConfiguration =
                RedisCacheConfiguration.defaultCacheConfig()
                        .disableCachingNullValues()
                        .entryTtl(Duration.ofHours(redisDataTTL))
                        .serializeValuesWith(RedisSerializationContext.SerializationPair
                                .fromSerializer(RedisSerializer.java()));
        redisCacheConfiguration.usePrefix();
        RedisCacheManager redisCacheManager =
                RedisCacheManager.RedisCacheManagerBuilder
                        .fromConnectionFactory(jedisConnectionFactory)
                        .cacheDefaults(redisCacheConfiguration)
                        .build();
        redisCacheManager.setTransactionAware(true);
        return redisCacheManager;
    }

    @Bean
    public JedisPoolConfig poolConfig() {
        final JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setTestOnBorrow(true);
        jedisPoolConfig.setMaxTotal(100);
        jedisPoolConfig.setMaxIdle(100);
        jedisPoolConfig.setMinIdle(10);
        jedisPoolConfig.setTestOnReturn(true);
        jedisPoolConfig.setTestWhileIdle(true);
        return jedisPoolConfig;
    }

    @Override
    public CacheErrorHandler errorHandler() {
        return new RedisCacheErrorHandler();
    }
}

Related

How can I use a custom Partitioner with the KafkaSender.send() method?

I have made a custom Partitioner class that extends the default Partitioner.
Problem: I want to use this custom Partitioner in the KafkaSender.send() method.
KafkaSender.send() code:
sender.send(Flux.just(SenderRecord.create(new ProducerRecord<>(topic, partition, key, record, recordHeaders), 1)))
The partition argument here is an integer.
Custom Partitioner code:
public class CustomPartitioner extends DefaultPartitioner {

    private static final String CHAR_FORMAT = "UTF-8";

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        // my logic (computes iocKey)
        try {
            return super.partition(topic, key, iocKey.toString().getBytes(CHAR_FORMAT), value, valueBytes, cluster);
        } catch (UnsupportedEncodingException e) {
            // error message; fall back to default partitioning
            return super.partition(topic, key, keyBytes, value, valueBytes, cluster);
        }
    }
}
Note: I tried to hard-code it using the code below:
Properties properties = new Properties();
properties.put("partitioner.class", "CustomPartitioner");
How can we force the KafkaSender.send() method to use our custom partitioner?
You have to pass the property as part of the producer configuration that backs your KafkaTemplate bean.
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    Map<String, Object> configProps = new HashMap<>();
    // bootstrap servers, serializers, etc. go in this same map
    configProps.put("partitioner.class", "<packagename>.CustomPartitioner");
    return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(configProps));
}
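For context, a partitioner's only contract is to map each record to an index in [0, numPartitions). A toy sketch of such a mapping (using String.hashCode() rather than Kafka's actual murmur2 hash over the serialized key bytes, purely for illustration; names are my own):

```java
public class PartitionSketch {

    // Map a key to a partition index, in the spirit of Kafka's
    // DefaultPartitioner: hash the key, mask off the sign bit,
    // and take the result modulo the partition count.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

Because the mapping is a pure function of the key, every record with the same key lands on the same partition, which is what makes overriding the key bytes (as in the question's CustomPartitioner) change the routing.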

How to consume dynamically created rabbit queues?

I have an application that creates queues (if they don't exist) following a naming convention, for example:
test.{something}.demandas
Since {something} is passed at creation time, there end up being several queues with different values of {something}.
Now I need to read these queues on the consumer side, that is, get all the queues created by the producer. I have seen some examples using RabbitListenerEndpointRegistry, or even getting the queue names from Jenkins (using VM variables).
But is there any alternative?
This is the rabbit configuration class:
@Configuration
@EnableRabbit
public class RabbitConfig {

    public static final String S_S = "%s.%s";
    public static final String PREFIX = "test.laa.aaa";
    public static final String QUEUE_NAME = "demandas";
    public static final String APPLICATION_NAME = "name:test.laa.aaa";

    private final String exchange;
    private final String routingKey;
    private final Integer maxConsumers;

    public RabbitConfig(
            @Value("${crawler.exchange.name:demandas}") String exchange,
            @Value("${crawler.exchange.routing-key:default}") String routingKey,
            @Value("${crawler.max-consumers:1}") Integer maxConsumers) {
        this.routingKey = routingKey;
        this.maxConsumers = maxConsumers;
        this.exchange = exchange;
    }

    @Bean
    @Primary
    public String routingKey() {
        return routingKey;
    }

    @Bean
    @Primary
    public String prefixName() {
        return PREFIX;
    }

    @Bean
    @Primary
    public String queueName() {
        return QUEUE_NAME;
    }

    private String exchangeName() {
        return String.format(S_S, PREFIX, exchange);
    }

    @Bean
    @Primary
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        AbstractConnectionFactory abstractConnectionFactory = (AbstractConnectionFactory) connectionFactory;
        abstractConnectionFactory.setConnectionNameStrategy(con -> String.format("%s", APPLICATION_NAME));
        final RabbitAdmin rabbitAdmin = new RabbitAdmin(connectionFactory);
        rabbitAdmin.afterPropertiesSet();
        return rabbitAdmin;
    }

    @Bean(name = "queueConsumer")
    @Primary
    public Queue queue() {
        Map<String, Object> map = new HashMap<>();
        map.put("x-max-priority", 10);
        return new Queue(String.format(S_S, PREFIX, QUEUE_NAME), true, false, false, map);
    }

    @Bean
    @Primary
    public DirectExchange exchange() {
        return new DirectExchange(exchangeName());
    }

    @Bean
    @Primary
    public Binding binding(Queue queue, DirectExchange exchange) {
        if (Objects.nonNull(routingKey)) {
            return BindingBuilder.bind(queue).to(exchange).with(routingKey);
        }
        return BindingBuilder.bind(queue).to(exchange).with("*");
    }

    @Bean
    @Primary
    public MessageConverter messageConverter(ObjectMapper objectMapper) {
        return new Jackson2JsonMessageConverter(objectMapper);
    }

    @Bean
    @Primary
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory, MessageConverter messageConverter) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrentConsumers(maxConsumers);
        factory.setMaxConcurrentConsumers(maxConsumers);
        factory.setPrefetchCount(1); // default
        factory.setMessageConverter(messageConverter);
        factory.setAfterReceivePostProcessors(message -> {
            message.getMessageProperties().setContentType("application/json");
            message.getMessageProperties().setContentEncoding("UTF-8");
            return message;
        });
        return factory;
    }

    public String getPrefix() {
        return PREFIX;
    }

    public String getQueueName() {
        return QUEUE_NAME;
    }
}
Since queues are created by your producer, I assume you are publishing messages directly into queues (as described in the RabbitMQ documentation). If you want to keep this approach, you have no choice but to find a way to communicate the queue names to the consumers.
However, I recommend taking a look at a different approach based on the publish/subscribe pattern (you can find it in the official documentation too). Producers then push messages into an exchange with a specific routing key (for example: test.{something}.demandas).
Each consumer is then in charge of creating its own queue and binding it (for example, to receive messages matching test.*.demandas, making the value of {something} irrelevant to routing).
This way, you don't have to share queue names (though you do have to share the exchange name). It also reduces the coupling between producer and consumer.
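To see why a test.*.demandas binding makes the value of {something} irrelevant, here is a simplified sketch of AMQP topic-pattern matching (real brokers also handle edge cases such as '#' matching zero words, which this ignores; class and method names are my own):

```java
import java.util.regex.Pattern;

public class TopicMatch {

    // Convert an AMQP topic binding pattern to a regex and match it:
    // '*' matches exactly one dot-delimited word, '#' matches a run of words.
    public static boolean matches(String bindingPattern, String routingKey) {
        String regex = bindingPattern
                .replace(".", "\\.")   // treat dots as literal separators
                .replace("*", "[^.]+") // one word: no dot allowed inside
                .replace("#", ".*");   // any remaining sequence of words
        return Pattern.matches(regex, routingKey);
    }
}
```

Under this matching, test.foo.demandas and test.bar.demandas both reach the same consumer queue, while test.foo.bar.demandas does not, since '*' spans exactly one word.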

Timeout for JedisConnectionFactory

Is there a way to configure a read timeout in JedisConnectionFactory, as we have for HttpRequestFactory? I've configured JedisConnectionFactory with the timeout property as below. Does it cover both the connection timeout and the read timeout?
final JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
redisConnectionFactory.setHostName(redisHost);
redisConnectionFactory.setTimeout(10000);
redisConnectionFactory.setPort(port);
redisConnectionFactory.setUsePool(true);
redisConnectionFactory.afterPropertiesSet();
@Bean(name = "redisCacheManager")
public CacheManager cacheManager(final RedisTemplate<String, Object> redisTemplate) {
    final RedisCacheManager manager = new RedisCacheManager(redisTemplate());
    manager.setDefaultExpiration(Long.parseLong(expiryInSecs));
    return manager;
}
@Bean(name = "redisTemplate")
public RedisTemplate<String, Object> redisTemplate() {
    final RedisTemplate<String, Object> redisTemplate = new RedisTemplate<String, Object>();
    redisTemplate.setConnectionFactory(redisConnectionFactory());
    redisTemplate.setKeySerializer(stringRedisSerializer());
    redisTemplate.setHashKeySerializer(stringRedisSerializer());
    redisTemplate.setValueSerializer(stringRedisSerializer());
    redisTemplate.setHashValueSerializer(stringRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate;
}
You can use JedisClientConfiguration. Its builder, JedisClientConfigurationBuilder, has separate connection-timeout and read-timeout properties:
JedisClientConfiguration clientConfiguration = JedisClientConfiguration.builder()
        .readTimeout(readTimeout)
        .connectTimeout(connectTimeout)
        .build();
You can then use one of the JedisConnectionFactory constructors that accepts a JedisClientConfiguration.
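For intuition about why these are two distinct settings, they map onto different options of the underlying socket: the connect timeout bounds TCP connection establishment, while SO_TIMEOUT (the read timeout) bounds each later blocking read. A plain java.net.Socket sketch (helper and class names are my own):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SocketTimeouts {

    // Open a socket with the two timeouts applied separately:
    // connectMillis bounds how long establishing the connection may take;
    // readMillis (SO_TIMEOUT) bounds each subsequent blocking read.
    public static Socket open(String host, int port, int connectMillis, int readMillis) throws IOException {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress(host, port), connectMillis);
        socket.setSoTimeout(readMillis);
        return socket;
    }
}
```

A single "timeout" property, as in the question's setTimeout(10000), typically has to be applied to both settings at once, which is why the builder exposes them separately.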
Alternatively, extend JedisConnectionFactory and override the afterPropertiesSet method as shown below:
public class CustomJedisConnectionFactory extends JedisConnectionFactory {

    private int connectionTimeout;
    private int readTimeout;

    // Override the superclass constructors if required;
    // getters and setters for connectionTimeout and readTimeout omitted.

    @Override
    public void afterPropertiesSet() {
        super.afterPropertiesSet();
        final JedisShardInfo jedisShardInfo = this.getShardInfo();
        if (Objects.nonNull(jedisShardInfo)) {
            jedisShardInfo.setConnectionTimeout(getConnectionTimeout());
            jedisShardInfo.setSoTimeout(getReadTimeout());
        }
    }
}
Create an instance of CustomJedisConnectionFactory and set the connectionTimeout and readTimeout fields:
CustomJedisConnectionFactory factory = new CustomJedisConnectionFactory();
factory.setHostName(host);
factory.setPort(port);
factory.setConnectionTimeout(connectionTimeout);
factory.setReadTimeout(readTimeout);

Spring Data Redis issue with connection pooling

I am using Spring Data Redis and have an issue with JedisPoolConfig. I have configured a RedisTemplate as follows:
@Bean
JedisPoolConfig jedisPoolConfig() {
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    return jedisPoolConfig;
}
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory(jedisPoolConfig());
    jedisConnectionFactory.setHostName(redisSettings.getServer().getHost());
    jedisConnectionFactory.setPort(redisSettings.getServer().getPort());
    return jedisConnectionFactory;
}
@Bean
public RedisTemplate<String, Integer> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<String, Integer> redisTemplate = new RedisTemplate<String, Integer>();
    redisTemplate.setConnectionFactory(redisConnectionFactory);
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.setEnableTransactionSupport(true);
    return redisTemplate;
}
I have a service that is marked as @Transactional, which in turn calls a @Repository that increments a number of keys in Redis:
@Service
@Transactional
public class MyService {

    @Autowired
    MyRepository myRepository;

    public void recordStats() {
        myRepository.recordStats();
    }
}

@Repository
public class MyRepository {

    @Resource(name = "redisTemplate")
    ValueOperations<String, Integer> valueOperations;

    public void recordStats() {
        valueOperations.increment("KEY01", 1);
        valueOperations.increment("KEY02", 1);
        valueOperations.increment("KEY03", 1);
        valueOperations.increment("KEY04", 1);
        valueOperations.increment("KEY05", 1);
        valueOperations.increment("KEY06", 1);
        valueOperations.increment("KEY07", 1);
        valueOperations.increment("KEY08", 1);
        valueOperations.increment("KEY09", 1);
        valueOperations.increment("KEY10", 1);
        valueOperations.increment("KEY11", 1);
    }
}
When I call myService.recordStats() and step through the code in debug, it hangs upon trying to increment KEY11, and ultimately fails with redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool. If I amend the JedisPoolConfig to increase the MaxTotal as follows:
@Bean
JedisPoolConfig jedisPoolConfig() {
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    jedisPoolConfig.setMaxTotal(128);
    return jedisPoolConfig;
}
Then the problem goes away, and I can increment all 11 keys in a transaction. It seems to be the case that every valueOperations.increment call is taking another connection from the pool. Is this correct, or do I have a configuration problem somewhere?

redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool

I was trying to implement Redis with Spring Boot, and I randomly encounter the exception below on my localhost:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
I have already tried various combinations of JedisPoolConfig properties, but none of them helped, and I am not sure where I am going wrong.
@Configuration
public class RedisConfigurationSetup {

    @Bean
    public RedisConnectionFactory jedisConnectionFactory() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(10000);
        poolConfig.setMinIdle(1000);
        poolConfig.setMaxIdle(-1);
        poolConfig.setMaxWaitMillis(500);
        poolConfig.setTestOnBorrow(true);
        poolConfig.setTestOnReturn(true);
        JedisConnectionFactory ob = new JedisConnectionFactory(poolConfig);
        ob.setHostName("127.0.0.1");
        ob.setPort(6379);
        ob.setUsePool(true);
        return ob;
    }

    @SuppressWarnings({ "rawtypes" })
    @Bean(name = "redisTemplate")
    public RedisTemplate stringRedisTemplate() {
        RedisTemplate rt = new RedisTemplate();
        rt.setConnectionFactory(jedisConnectionFactory());
        rt.setEnableTransactionSupport(true);
        return rt;
    }
}
public class GeoLocationCacheServiceImpl implements GeoLocationCacheService {

    @Autowired
    @Qualifier("redisTemplate")
    RedisTemplate geoObjectRedisTemplate;

    @Override
    public void saveUpdateGeoLoc(GeoObject geoObject) {
        if (geoObject != null) {
            // some business logic
            geoObjectRedisTemplate.opsForValue().set(geoObject.getObjectID(), geoObject);
            // some business logic
        }
    }
}
The cause is rt.setEnableTransactionSupport(true). From the source code, we can see that the connection is only released when transaction support is disabled:
if (!enableTransactionSupport) {
    RedisConnectionUtils.releaseConnection(conn, factory);
}
With transaction support enabled, each operation binds a connection that is not returned to the pool until the surrounding transaction completes, so operations executed outside a transaction can gradually exhaust the pool.
