Jedis SpringBoot cannot serialize and deserialize Map<String, Object> - java

I got the following error after adding the @Cacheable annotation to one of my REST methods:
"status": 500,
"error": "Internal Server Error",
"message": "class java.util.ArrayList cannot be cast to class java.util.Map (java.util.ArrayList and java.util.Map are in module java.base of loader 'bootstrap')",
Method declaration is:
#Cacheable("loadDevicesFloors")
#GetMapping("/floors/all-devices")
public Map<String, DevicesFloorDTO> loadDevicesFloors() {...
and DevicesFloorDTO looks as follows:
public class DevicesFloorDTO implements Serializable {
private final List<DeviceDTO> deviceDTOs;
private final String floorName;
private final Integer floorIndex;
public DevicesFloorDTO(List<DeviceDTO> devicesDtos, String floorName, Integer floorIndex) {
this.deviceDTOs = devicesDtos;
this.floorName = floorName;
this.floorIndex = floorIndex;
}...
Additionally, my @Bean redisTemplate method implementation:
@Bean
JedisConnectionFactory jedisConnectionFactory() {
JedisConnectionFactory jedisConFactory
= new JedisConnectionFactory();
jedisConFactory.setHostName(redisHost);
jedisConFactory.setPort(redisPort);
jedisConFactory.setPassword(redisPassword);
return jedisConFactory;
}
@Bean
public RedisTemplate<?, ?> redisTemplate() {
RedisTemplate<byte[], byte[]> template = new RedisTemplate<>();
template.setConnectionFactory(jedisConnectionFactory());
return template;
}
Does anyone know what is wrong in this implementation? Without @Cacheable it works as expected, but after adding @Cacheable the error occurs. I have searched a lot and still don't know what causes this error or how to fix it. Any comment may be helpful. Thanks a lot!

The generics you have specified for the Map (Map<String, DevicesFloorDTO>) will not be available at runtime during serialization/deserialization. What format are you trying to save your objects in within Redis? Are they saved as JSON (string) or binary?
We have had success with the GenericJackson2JsonRedisSerializer because it saves the class info inside the JSON string, so Redis knows exactly how to recreate the objects.
There are also some instances where a wrapper object is needed in order to correctly serialize/deserialize objects.
@Bean
public RedisCacheManager cacheManager( RedisConnectionFactory redisConnectionFactory,
ResourceLoader resourceLoader ) {
RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager
.builder( redisConnectionFactory )
.cacheDefaults( determineConfiguration() );
List<String> cacheNames = this.cacheProperties.getCacheNames();
if ( !cacheNames.isEmpty() ) {
builder.initialCacheNames( new LinkedHashSet<>( cacheNames ) );
}
return builder.build();
}
private RedisCacheConfiguration determineConfiguration() {
if ( this.redisCacheConfiguration != null ) {
return this.redisCacheConfiguration;
}
CacheProperties.Redis redisProperties = this.cacheProperties.getRedis();
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
ObjectMapper mapper = new Jackson2ObjectMapperBuilder()
.modulesToInstall( new SimpleModule().addSerializer( new NullValueSerializer( null ) ) )
.failOnEmptyBeans( false )
.build();
mapper.enableDefaultTyping( ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY );
GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer( mapper );
//get the mapper b/c they registered some internal modules
config = config.serializeValuesWith( RedisSerializationContext.SerializationPair.fromSerializer( serializer ) );
if ( redisProperties.getTimeToLive() != null ) {
config = config.entryTtl( redisProperties.getTimeToLive() );
}
if ( redisProperties.getKeyPrefix() != null ) {
config = config.prefixKeysWith( redisProperties.getKeyPrefix() );
}
if ( !redisProperties.isCacheNullValues() ) {
config = config.disableCachingNullValues();
}
if ( !redisProperties.isUseKeyPrefix() ) {
config = config.disableKeyPrefix();
config = config.computePrefixWith( cacheName -> cacheName + "::" );
}
return config;
}
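Note: on newer Jackson versions (2.10+), enableDefaultTyping is deprecated in favor of activateDefaultTyping. A minimal sketch of the equivalent call, assuming you are on such a version (the broad Object.class allow-list below is my assumption and should be tightened for production):
// replaces mapper.enableDefaultTyping( ... ) on Jackson 2.10+; BasicPolymorphicTypeValidator is in com.fasterxml.jackson.databind.jsontype
mapper.activateDefaultTyping(
BasicPolymorphicTypeValidator.builder().allowIfBaseType( Object.class ).build(),
ObjectMapper.DefaultTyping.NON_FINAL,
JsonTypeInfo.As.PROPERTY );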

Related

When Spring Boot uses @Cacheable to integrate the Redis cache, the KeyGenerator key generation policy generates duplicate keys containing EnhancerBySpringCGLIB

1. The RedisConfig configuration class is as follows:
@Configuration
public class RedisConfig {
/**
* @param factory
* @return
*/
@Bean
public CacheManager cacheManager(RedisConnectionFactory factory) {
GenericJackson2JsonRedisSerializer genericJackson2JsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
// Configure serialization
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
RedisCacheConfiguration redisCacheConfiguration = config
// Key serialization: Redis string serializer
.serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(stringRedisSerializer))
// Value serialization: generic JSON serializer
.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(genericJackson2JsonRedisSerializer))
// Do not cache null values
.disableCachingNullValues()
// Cache entries expire after 3 days
.entryTtl(Duration.ofDays(3));
return RedisCacheManager.builder(factory).cacheDefaults(redisCacheConfiguration).build();
}
@Bean
public RedisTemplate<String,Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
RedisTemplate<String, Object> template = new RedisTemplate<>();
template.setConnectionFactory(redisConnectionFactory);
GenericJackson2JsonRedisSerializer jsonRedisSerializer = new GenericJackson2JsonRedisSerializer();
// Values are serialized with GenericJackson2JsonRedisSerializer
template.setValueSerializer(jsonRedisSerializer);
template.setHashValueSerializer(jsonRedisSerializer);
// Keys are serialized with StringRedisSerializer
template.setKeySerializer(new StringRedisSerializer());
template.setHashKeySerializer(new StringRedisSerializer());
return template;
}
/**
* Override cache key generation: className.methodName&[parameter list]
* @return
*/
@Bean
public KeyGenerator keyGenerator(){
return new KeyGenerator() {
@Override
public Object generate(Object target, Method method, Object... params) {
StringBuilder sb = new StringBuilder();
sb.append(target.getClass().getName()).append("."); // executing class name
sb.append(method.getName()).append("&"); // method name
sb.append(Arrays.toString(params)); // parameters
return sb.toString();
}
};
}
}
2. The @Cacheable annotation uses the Redis cache:
@Transactional(rollbackFor = Exception.class)
@Cacheable(cacheNames = "blog",keyGenerator = "keyGenerator")
@Override
public BlogVo getBlogById(int blogId) {
Blog blog = blogDao.selectBlogById(blogId);
if(blog == null || blog.getBlogStatus()==0){
return null;
}
BlogVo blogVo = new BlogVo();
BeanUtils.copyProperties(blog,blogVo);
User user = userDao.selectUserById(blog.getBlogUserid());
Type type = typeDao.selectTypeById(blog.getBlogTypeid());
blogVo.setBlogUser(user);
blogVo.setBlogType(type);
return blogVo;
}
3. Duplicate key-value pairs are generated:
Two key-value pairs with the same value are cached in the Redis database. The only difference between the duplicate keys is that one of them contains EnhancerBySpringCGLIB; this has been tested many times and everything else is identical:
(1) Correct key: blog::com.zju.sdust.weblog.service.impl.BlogServiceImpl.getBlogById&[1]
(2) Duplicate key: blog::com.zju.sdust.weblog.service.impl.BlogServiceImpl$$EnhancerBySpringCGLIB$$e965464f.getBlogById&[1]
I want to know why this happens and how to solve it. Help me!
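A likely explanation (my assumption, not confirmed in the thread): the service bean is proxied by CGLIB (for example because of @Transactional), so target.getClass() sometimes returns the proxy class rather than the original class. A minimal sketch of a keyGenerator that unwraps the proxy with Spring's AopProxyUtils (org.springframework.aop.framework.AopProxyUtils):
@Bean
public KeyGenerator keyGenerator() {
return (target, method, params) -> {
// Unwrap the CGLIB proxy so the key never contains $$EnhancerBySpringCGLIB$$
Class<?> targetClass = AopProxyUtils.ultimateTargetClass(target);
return targetClass.getName() + "." + method.getName() + "&" + Arrays.toString(params);
};
}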

How to fetch cached value using redisson client

I wanted to fetch a cached (@Cacheable) value using the Redisson client, but it returns strange data if I use any codec in the Redisson client (getBucket("fruit::1", StringCodec.INSTANCE)), and it throws an error unless I use a codec.
I have used the below code for caching
@Cacheable(value = "fruits", key = "#id")
public Fruit getFruitById(int id) {
// get fruit by id
CriteriaBuilder builder = em.getCriteriaBuilder();
CriteriaQuery<Fruit> query = builder.createQuery(Fruit.class);
Root<Fruit> root = query.from(Fruit.class);
query.select(root);
query.where(builder.equal(root.get("id"), id));
TypedQuery<Fruit> fruitQuery = em.createQuery(query);
return fruitQuery.getSingleResult();
}
When I use a codec to get that cached data
RBucket<String> bucket = client.getBucket("fruits::1",
StringCodec.INSTANCE);
String fruit = bucket.get();
it returns the following strange data:
��srcom.home.redis.Fruit��.ܵo*rIidIpriceLnametLjava/lang/String;xp,tpomegrantite
RedisConfiguration
@Bean
public RedisCacheConfiguration cacheConfiguration() {
RedisCacheConfiguration cacheConfig = RedisCacheConfiguration
.defaultCacheConfig().entryTtl(Duration.ofSeconds(600))
.disableCachingNullValues();
return cacheConfig;
}
@Bean
public RedisCacheManager cacheManager() {
RedisCacheManager rcm = RedisCacheManager
.builder(this.getRedissonStoreFactory())
.cacheDefaults(cacheConfiguration()).transactionAware().build();
return rcm;
}
@Bean
@Primary
public RedisProperties redisProperties() {
return new RedisProperties();
}
@Bean
public RedissonConnectionFactory getRedissonStoreFactory() {
return new RedissonConnectionFactory(getConfig());
}
@Bean
public RedissonNode getNode() {
RedissonNodeConfig nodeConfig = new RedissonNodeConfig(getConfig());
nodeConfig.setExecutorServiceWorkers(
Collections.singletonMap("ensimp", 1));
RedissonNode node = RedissonNode.create(nodeConfig);
node.start();
return node;
}
@Bean
public Config getConfig()
{
Config config = new Config();
RedisProperties properties = redisProperties();
config.useSingleServer().setAddress(
"redis://" + properties.getHost() + ":" + properties.getPort());
return config;
}
redisson.json
{
"singleServerConfig":{
"idleConnectionTimeout":500,
"connectTimeout":1000,
"timeout":3000,
"retryAttempts":3,
"retryInterval":1500,
"password":null,
"subscriptionsPerConnection":5,
"clientName":null,
"address": "redis://127.0.0.1:6379",
"subscriptionConnectionMinimumIdleSize":0,
"subscriptionConnectionPoolSize":1,
"connectionMinimumIdleSize":0,
"connectionPoolSize":20,
"database":0,
"dnsMonitoringInterval":5000
},
"threads":16,
"nettyThreads":32,
"codec":{
"class":"org.redisson.codec.FstCodec"
},
"transportMode":"NIO"
}
I've used the FST codec too but got the same strange data. I want correctly decoded data; it would be great if anyone could help me with the right code.
You need to use RMapCache to obtain the data, not RBucket.
client.getMapCache("fruits::1", StringCodec.INSTANCE);
try this:
RMapCache mycache;
mycache=client.getMapCache("fruits::1");
then to retrieve the data use readAllValues()
Collection<Fruit> map=mycache.readAllValues();
System.out.println(map);
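The garbled output in the question is simply the JDK-serialized form of the Fruit object, which is what the Spring cache writes by default. As an alternative sketch (my assumption, not part of the answer above), the cache can be configured to store JSON, so the entry under fruits::1 reads back as plain text with StringCodec:
@Bean
public RedisCacheConfiguration cacheConfiguration() {
return RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofSeconds(600))
.disableCachingNullValues()
// store values as JSON instead of JDK-serialized bytes
.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()));
}
Reading it back with Redisson then returns readable JSON text:
RBucket<String> bucket = client.getBucket("fruits::1", StringCodec.INSTANCE);
String json = bucket.get();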

mapstruct target object set multiple times instead of update

Below is my mapper interface. I am using mapstruct 1.3.0.Final.
@Mapper(componentModel = "spring")
public interface ApiMapper {
@Mappings({
@Mapping(source = "in.entityName.fn", target="name.fn"),
@Mapping(source = "in.entityName.ln", target="name.ln"),
@Mapping(source = "in.salute.sln", target="name.salutation"),
})
public MyOutput map(InputData in);
}
It looks super simple, but the implementation class sets the name object in the target twice, so I get only the last mapped object. Can someone help me understand what I am missing or doing wrong here?
@Component
public class ApiMapperImpl implements ApiMapper {
@Override
public MyOutput map(InputData in) {
if ( in == null ) {
return null;
}
MyOutput myOutput = new MyOutput();
myOutput.setName( entityNameToNameDetails( in.getEntityName() ) );
myOutput.setName( saluteServiceOutputToNameDetails( in.getSalute() ) );
return myOutput;
}
protected NameDetails entityNameToNameDetails(EntityName entityName) {
if ( entityName == null ) {
return null;
}
NameDetails nameDetails = new NameDetails();
nameDetails.setFn( entityName.getFn() );
nameDetails.setLn( entityName.getLn() );
return nameDetails;
}
protected NameDetails saluteServiceOutputToNameDetails(SaluteServiceOutput saluteServiceOutput) {
if ( saluteServiceOutput == null ) {
return null;
}
NameDetails nameDetails = new NameDetails();
nameDetails.setSalutation( saluteServiceOutput.getSln() );
return nameDetails;
}
}
I think you should help MapStruct in this context by adding a simple method, for example:
@Mappings({
@Mapping(source = "in.entityName.fn", target="fn"),
@Mapping(source = "in.entityName.ln", target="ln"),
@Mapping(source = "in.salute.sln", target="salutation"),
})
public NameDetails mapNameDetails(InputData in);
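A sketch of how that helper could be wired into the original method (my assumption about the intended usage; the type names follow the question): delegating the whole name property to the helper means it is set only once.
@Mapper(componentModel = "spring")
public interface ApiMapper {
@Mapping(target = "name", expression = "java(mapNameDetails(in))")
MyOutput map(InputData in);
@Mappings({
@Mapping(source = "entityName.fn", target = "fn"),
@Mapping(source = "entityName.ln", target = "ln"),
@Mapping(source = "salute.sln", target = "salutation")
})
NameDetails mapNameDetails(InputData in);
}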
I found the answers from here and here. I like the 2nd option of using @MappingTarget to update an existing bean. The only new thing is that I need to create a MyOutput object and pass it when calling the map method.
I have modified my mapper code to something like the following:
@Mappings({
@Mapping(source = "in.entityName.fn", target="out.name.fn"),
@Mapping(source = "in.entityName.ln", target="out.name.ln"),
@Mapping(source = "in.salute.sln", target="out.name.salutation"),
})
public void mapNameDetails(InputData in, @MappingTarget MyOutput out);
The JUnit test for the above mapper code:
@Autowired
private ApiMapper apiMapper;
@Test
public void testApiMapper() {
MyOutput output = new MyOutput();
InputData input = createInputData();
apiMapper.mapNameDetails(input, output);
assertNotNull(output);
assertNotNull(output.getName());
assertEquals("Sridhar", output.getName().getFn());
assertNull(output.getName().getLn());
assertEquals("Mr.", output.getName().getSalutation());
}
private InputData createInputData() {
InputData data = new InputData();
data.setEntityName(new EntityName());
data.setSalute(new SaluteServiceOutput());
data.getEntityName().setFn("Sridhar");
data.getSalute().setSln("Mr.");
return data;
}
This bug was fixed in version 1.3.1.Final
Link to bug

Kafka consumer unit test with Avro Schema registry failing

I'm writing a consumer which listens to a Kafka topic and consumes messages whenever they are available. I've tested the logic/code by running Kafka locally and it's working fine.
While writing the unit/component test cases, it fails with an Avro schema registry URL error. I've tried different options available on the internet but could not find anything that works. I am not sure if my approach is even correct. Please help.
Listener Class
@KafkaListener(topics = "positionmgmt.v1", containerFactory = "genericKafkaListenerFactory")
public void receive(ConsumerRecord<String, GenericRecord> consumerRecord) {
try {
GenericRecord generic = consumerRecord.value();
Object obj = generic.get("metadata");
ObjectMapper mapper = new ObjectMapper();
Header headerMetaData = mapper.readValue(obj.toString(), Header.class);
System.out.println("Received payload : " + consumerRecord.value());
//Call backend with details in GenericRecord
}catch (Exception e){
System.out.println("Exception while reading message from Kafka " + e );
}
}
Kafka config
@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericRecord> genericKafkaListenerFactory() {
ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(genericConsumerFactory());
return factory;
}
public ConsumerFactory<String, GenericRecord> genericConsumerFactory() {
Map<String, Object> config = new HashMap<>();
config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
config.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
config.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG,"http://localhost:8081");
return new DefaultKafkaConsumerFactory<>(config);
}
Avro Schema
{
"type":"record",
"name":"KafkaEvent",
"namespace":"com.ms.model.avro",
"fields":[
{
"name":"metadata",
"type":{
"name":"metadata",
"type":"record",
"fields":[
{
"name":"correlationid",
"type":"string",
"doc":"this is corrleation id for transaction"
},
{
"name":"subject",
"type":"string",
"doc":"this is subject for transaction"
},
{
"name":"version",
"type":"string",
"doc":"this is version for transaction"
}
]
}
},
{
"name":"name",
"type":"string"
},
{
"name":"dept",
"type":"string"
},
{
"name":"empnumber",
"type":"string"
}
]
}
This is my test code which I tried...
@ComponentTest
@RunWith(SpringRunner.class)
@EmbeddedKafka(partitions = 1, topics = { "positionmgmt.v1" })
@SpringBootTest(classes={Application.class})
@DirtiesContext
public class ConsumeKafkaMessageTest {
private static final String TEST_TOPIC = "positionmgmt.v1";
@Autowired(required=true)
EmbeddedKafkaBroker embeddedKafkaBroker;
private Schema schema;
private SchemaRegistryClient schemaRegistry;
private KafkaAvroSerializer avroSerializer;
private KafkaAvroDeserializer avroDeserializer;
private MockSchemaRegistryClient mockSchemaRegistryClient = new MockSchemaRegistryClient();
private String registryUrl = "unused";
private String avroSchema = string representation of avro schema
@BeforeEach
public void setUp() throws Exception {
Schema.Parser parser = new Schema.Parser();
schema = parser.parse(avroSchema);
mockSchemaRegistryClient.register("Vendors-value", schema);
}
@Test
public void consumeKafkaMessage_receive_sucess() {
Schema metadataSchema = schema.getField("metadata").schema();
GenericRecord metadata = new GenericData.Record(metadataSchema);
metadata.put("version", "1.0");
metadata.put("correlationid", "correlationid");
metadata.put("subject", "metadata");
GenericRecord record = new GenericData.Record(schema);
record.put("metadata", metadata);
record.put("name", "ABC");
record.put("dept", "XYZ");
Consumer<String, GenericRecord> consumer = configureConsumer();
Producer<String, GenericRecord> producer = configureProducer();
ProducerRecord<String, GenericRecord> prodRecord = new ProducerRecord<String, GenericRecord>(TEST_TOPIC, record);
producer.send(prodRecord);
ConsumerRecord<String, GenericRecord> singleRecord = KafkaTestUtils.getSingleRecord(consumer, TEST_TOPIC);
assertNotNull(singleRecord.value());
consumer.close();
producer.close();
}
private Consumer<String, GenericRecord> configureConsumer() {
Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("groupid", "true", embeddedKafkaBroker);
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
Consumer<String, GenericRecord> consumer = new DefaultKafkaConsumerFactory<String, GenericRecord>(consumerProps).createConsumer();
consumer.subscribe(Collections.singleton(TEST_TOPIC));
return consumer;
}
private Producer<String, GenericRecord> configureProducer() {
Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
producerProps.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, mockSchemaRegistryClient);
producerProps.put(KafkaAvroSerializerConfig.AUTO_REGISTER_SCHEMAS, "false");
return new DefaultKafkaProducerFactory<String, GenericRecord>(producerProps).createProducer();
}
}
Error
component.com.ms.listener.ConsumeKafkaMessageTest > consumeKafkaMessage_receive_sucess() FAILED
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:457)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:289)
at org.springframework.kafka.core.DefaultKafkaProducerFactory.createKafkaProducer(DefaultKafkaProducerFactory.java:318)
at org.springframework.kafka.core.DefaultKafkaProducerFactory.createProducer(DefaultKafkaProducerFactory.java:305)
at component.com.ms.listener.ConsumeKafkaMessageTest.configureProducer(ConsumeKafkaMessageTest.java:125)
at component.com.ms.listener.ConsumeKafkaMessageTest.consumeKafkaMessage_receive_sucess(ConsumeKafkaMessageTest.java:97)
Caused by:
io.confluent.common.config.ConfigException: Invalid value io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient@20751870 for configuration schema.registry.url: Expected a comma separated list.
at io.confluent.common.config.ConfigDef.parseType(ConfigDef.java:345)
at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:249)
at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:78)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:105)
at io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:32)
at io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:48)
at org.apache.kafka.common.serialization.ExtendedSerializer$Wrapper.configure(ExtendedSerializer.java:60)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:372)
... 5 more
I investigated it a bit and found out that the problem is in the CachedSchemaRegistryClient that is used by the KafkaAvroSerializer/Deserializer. It is used to fetch the schema definitions from the Confluent Schema Registry.
You already have your schema definition locally, so you don't need to go to the Schema Registry for it (at least in your tests).
I had a similar problem and I solved it by creating a custom KafkaAvroSerializer/KafkaAvroDeserializer.
This is a sample KafkaAvroSerializer. It is rather simple. You just need to extend the provided KafkaAvroSerializer and tell it to use MockSchemaRegistryClient.
public class CustomKafkaAvroSerializer extends KafkaAvroSerializer {
public CustomKafkaAvroSerializer() {
super();
super.schemaRegistry = new MockSchemaRegistryClient();
}
public CustomKafkaAvroSerializer(SchemaRegistryClient client) {
super(new MockSchemaRegistryClient());
}
public CustomKafkaAvroSerializer(SchemaRegistryClient client, Map<String, ?> props) {
super(new MockSchemaRegistryClient(), props);
}
}
This is a sample KafkaAvroDeserializer. When the deserialize method is called you need to tell it which schema to use.
public class CustomKafkaAvroDeserializer extends KafkaAvroDeserializer {
@Override
public Object deserialize(String topic, byte[] bytes) {
this.schemaRegistry = getMockClient(KafkaEvent.SCHEMA$);
return super.deserialize(topic, bytes);
}
private static SchemaRegistryClient getMockClient(final Schema schema$) {
return new MockSchemaRegistryClient() {
@Override
public synchronized Schema getById(int id) {
return schema$;
}
};
}
}
The last step is to tell Spring to use the created serializer/deserializer:
spring.kafka.producer.properties.schema.registry.url= not-used
spring.kafka.producer.value-serializer = CustomKafkaAvroSerializer
spring.kafka.producer.key-serializer = org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.group-id = showcase-producer-id
spring.kafka.consumer.properties.schema.registry.url= not-used
spring.kafka.consumer.value-deserializer = CustomKafkaAvroDeserializer
spring.kafka.consumer.key-deserializer = org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.group-id = showcase-consumer-id
spring.kafka.auto.offset.reset = earliest
spring.kafka.producer.auto.register.schemas= true
spring.kafka.properties.specific.avro.reader= true
I wrote a short blog post about that:
https://medium.com/@igorvlahek1/no-need-for-schema-registry-in-your-spring-kafka-tests-a5b81468a0e1?source=friends_link&sk=e55f73b86504e9f577e259181c8d0e23
Link to the working sample project: https://github.com/ivlahek/kafka-avro-without-registry
The answer from @ivlahek works, but if you look at this example 3 years later you might want to make a slight modification to CustomKafkaAvroDeserializer:
private static SchemaRegistryClient getMockClient(final Schema schema) {
return new MockSchemaRegistryClient() {
@Override
public ParsedSchema getSchemaBySubjectAndId(String subject, int id)
throws IOException, RestClientException {
return new AvroSchema(schema);
}
};
}
As the error says, you need to provide a string to the registry in the producer config, not an object.
Since you're using the Mock class, that string could be anything...
However, you'll need to construct the serializers given the registry instance
Serializer serializer = new KafkaAvroSerializer(mockSchemaRegistry);
// make config map with ("schema.registry.url", "unused")
Map<String, Object> config = Collections.singletonMap("schema.registry.url", "unused");
serializer.configure(config, false);
Otherwise, it will try to create a non-mocked client
And put that into the properties
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, serializer);
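Alternatively, as a sketch building on the same idea (my assumption, not part of the original answer): Spring Kafka's DefaultKafkaProducerFactory also accepts pre-built serializer instances in its constructor, which avoids putting an instance into the properties map at all.
KafkaAvroSerializer valueSerializer = new KafkaAvroSerializer(mockSchemaRegistryClient);
// "false" means this instance is configured as the value serializer, not the key serializer
valueSerializer.configure(Collections.singletonMap(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "unused"), false);
Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
Producer<String, GenericRecord> producer = new DefaultKafkaProducerFactory<>(producerProps, new StringSerializer(), valueSerializer).createProducer();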
If your @KafkaListener is in the test class, then you can read the record with a StringDeserializer and convert it to the desired class manually:
@Autowired
private MyKafkaAvroDeserializer myKafkaAvroDeserializer;
@KafkaListener( topics = "test")
public void inputData(ConsumerRecord<?, ?> consumerRecord) {
log.info("received payload='{}'", consumerRecord.toString(),consumerRecord.value());
GenericRecord genericRecord = (GenericRecord)myKafkaAvroDeserializer.deserialize("test",consumerRecord.value().toString().getBytes(StandardCharsets.UTF_8));
Myclass myclass = (Myclass) SpecificData.get().deepCopy(Myclass.SCHEMA$, genericRecord);
}
@Component
public class MyKafkaAvroDeserializer extends KafkaAvroDeserializer {
@Override
public Object deserialize(String topic, byte[] bytes) {
this.schemaRegistry = getMockClient(Myclass.SCHEMA$);
return super.deserialize(topic, bytes);
}
private static SchemaRegistryClient getMockClient(final Schema schema$) {
return new MockSchemaRegistryClient() {
@Override
public synchronized org.apache.avro.Schema getById(int id) {
return schema$;
}
};
}
}
Remember to add the schema registry and key/value serializers in application.yml, although they won't be used:
consumer:
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
properties:
schema.registry.url: http://localhost:8080

call different service on the basis of string parameter in spring

In my controller, I receive a string parameter on the basis of which I need to decide which service to call. How can I do this in my Spring Boot application using Spring annotations?
For example: we have different types of cars. On the basis of the parameter in the request, I should be able to decide which particular car service to call.
How can I have a factory using annotations in Spring Boot, with objects returned from that factory on the basis of the input?
I remember implementing support for this approach a few years ago, inspired by (and using) https://www.captechconsulting.com/blogs/combining-strategy-pattern-and-spring as the entry point to my utility library. Use the following code snippets at your convenience:
Strategy.java
package ...
@Documented
@Target({ ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
public @interface Strategy {
Class<?> type();
String[] profiles() default {};
}
StrategyFactory.java
package ...
public class StrategyFactory {
private static final Logger LOG = Logger.getLogger( StrategyFactory.class );
private Map<Class<?>, Strategy> strategiesCache = new HashMap<Class<?>, Strategy>();
private String[] packages;
@PostConstruct
public void init() {
if (this.packages != null) {
Set<Class<?>> annotatedClasses = new HashSet<Class<?>>();
for (String pack : this.packages) {
Reflections reflections = new Reflections( pack );
annotatedClasses.addAll( reflections.getTypesAnnotatedWith( Strategy.class ) );
}
this.sanityCheck( annotatedClasses );
}
}
public <T> T getStrategy(Class<T> strategyClass) {
return this.getStrategy( strategyClass, null );
}
@SuppressWarnings("unchecked")
public <T> T getStrategy(Class<T> strategyClass, String currentProfile) {
Class<T> clazz = (Class<T>) this.findStrategyMatchingProfile( strategyClass, currentProfile );
if (clazz == null) {
throw new StrategyNotFoundException( String.format( "No strategies found of type '%s', are the strategies marked with #Strategy?", strategyClass.getName() ) );
}
try {
return (T) clazz.newInstance();
} catch (Exception e) {
throw ExceptionUtils.rethrowAs( e, StrategyException.class );
}
}
/**
* Checks to make sure there is only one strategy of each type (interface) annotated for each profile. Will throw an exception on startup if multiple strategies are mapped to the same profile.
* @param annotatedClasses a list of classes
*/
private void sanityCheck(Set<Class<?>> annotatedClasses) {
Set<String> usedStrategies = new HashSet<String>();
for (Class<?> annotatedClass : annotatedClasses) {
Strategy strategyAnnotation = AnnotationUtils.findAnnotation( annotatedClass, Strategy.class );
if (!strategyAnnotation.type().isAssignableFrom( annotatedClass )) {
throw new StrategyProfileViolationException( String.format( "'%s' should be assignable from '%s'", strategyAnnotation.type(), annotatedClass ) );
}
this.strategiesCache.put( annotatedClass, strategyAnnotation );
if (this.isDefault( strategyAnnotation )) {
this.ifNotExistAdd( strategyAnnotation.type(), "default", usedStrategies );
} else {
for (String profile : strategyAnnotation.profiles()) {
this.ifNotExistAdd( strategyAnnotation.type(), profile, usedStrategies );
}
}
}
}
private void ifNotExistAdd(Class<?> type, String profile, Set<String> usedStrategies) {
String key = this.createKey( type, profile );
if (usedStrategies.contains( key )) {
throw new StrategyProfileViolationException( String.format( "There can only be a single strategy for each type, found multiple for type '%s' and profile '%s'", type, profile ) );
}
usedStrategies.add( key );
}
private String createKey(Class<?> type, String profile) {
return String.format( "%s_%s", type, profile ).toLowerCase();
}
private boolean isDefault(Strategy strategyAnnotation) {
return (strategyAnnotation.profiles().length == 0);
}
private Class<?> findStrategyMatchingProfile(Class<?> strategyClass, String currentProfile) {
for (Map.Entry<Class<?>, Strategy> strategyCacheEntry : this.strategiesCache.entrySet()) {
Strategy strategyCacheEntryValue = strategyCacheEntry.getValue();
if (strategyCacheEntryValue.type().equals( strategyClass )) {
if (currentProfile != null) {
for (String profile : strategyCacheEntryValue.profiles()) {
if (currentProfile.equals( profile )) {
Class<?> result = strategyCacheEntry.getKey();
if (LOG.isDebugEnabled()) {
LOG.debug( String.format( "Found strategy [strategy=%s, profile=%s, strategyImpl=%s]", strategyClass, currentProfile, result ) );
}
return result;
}
}
} else if (this.isDefault( strategyCacheEntryValue )) {
Class<?> defaultClass = strategyCacheEntry.getKey();
if (LOG.isDebugEnabled()) {
LOG.debug( String.format( "Found default strategy [strategy=%s, profile=%s, strategyImpl=%s]", strategyClass, currentProfile, defaultClass ) );
}
return defaultClass;
}
}
}
return null;
}
public void setPackages(String[] packages) {
this.packages = packages;
}
}
StrategyException.java
package ...
public class StrategyException extends RuntimeException {
...
}
StrategyNotFoundException.java
package ...
public class StrategyNotFoundException extends StrategyException {
...
}
StrategyProfileViolationException.java
package ...
public class StrategyProfileViolationException extends StrategyException {
...
}
Usage without Spring:
NavigationStrategy.java
package com.asimio.core.test.strategy.strategies.navigation;
public interface NavigationStrategy {
public String naviateTo();
}
FreeNavigationStrategy.java
package com.asimio.core.test.strategy.strategies.navigation;
@Strategy(type = NavigationStrategy.class)
public class FreeNavigationStrategy implements NavigationStrategy {
public String naviateTo() {
return "free";
}
}
LimitedPremiumNavigationStrategy.java
package com.asimio.core.test.strategy.strategies.navigation;
@Strategy(type = NavigationStrategy.class, profiles = { "limited", "premium" })
public class LimitedPremiumNavigationStrategy implements NavigationStrategy {
public String naviateTo() {
return "limited+premium";
}
}
Then
...
StrategyFactory factory = new StrategyFactory();
factory.setPackages( new String[] { "com.asimio.core.test.strategy.strategies.navigation" } );
factory.init();
NavigationStrategy ns = factory.getStrategy( NavigationStrategy.class );
String result = ns.naviateTo();
Assert.assertThat( "free", Matchers.is( result ) );
...
Or
...
String result = factory.getStrategy( NavigationStrategy.class, "limited" ).naviateTo();
Assert.assertThat( "limited+premium", Matchers.is( result ) );
...
Usage with Spring:
Spring context file:
<bean id="strategyFactory" class="com.asimio.core.strategy.StrategyFactory">
<property name="packages">
<list>
<value>com.asimio.jobs.feed.impl</value>
</list>
</property>
</bean>
IFeedProcessor.java
package ...
public interface IFeedProcessor {
void runBatch(String file);
}
CsvRentalsFeedProcessor.java
package ...
@Configurable(dependencyCheck = true)
@Strategy(type = IFeedProcessor.class, profiles = { "csv" })
public class CsvRentalsFeedProcessor implements IFeedProcessor, Serializable {
@Autowired
private CsvRentalsBatchReporter batchReporter;
...
}
Then
...
IFeedProcessor feedProcessor = this.strategyFactory.getStrategy( IFeedProcessor.class, feedFileExt );
feedProcessor.runBatch( unzippedFeedDir.getAbsolutePath() + File.separatorChar + feedFileName );
...
Notice that CsvRentalsBatchReporter is "injected" into the CsvRentalsFeedProcessor bean (a Strategy implementation), but StrategyFactory instantiates the strategy implementation using return (T) clazz.newInstance();, so what is needed to make this object Spring-aware?
First, CsvRentalsFeedProcessor needs to be annotated with @Configurable(dependencyCheck = true), and when running the Java application this argument is needed on the java command line: -javaagent:<path to spring-agent-${spring.version}.jar>
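For the original question (picking a car service by a request parameter), a simpler alternative in plain Spring Boot is worth sketching (the CarService names below are hypothetical, not from the question): Spring can inject all implementations of an interface as a Map keyed by bean name, so the controller can look the service up by the incoming string.
public interface CarService {
String drive();
}
@Service("suv")
public class SuvCarService implements CarService {
public String drive() { return "driving an SUV"; }
}
@Service("sedan")
public class SedanCarService implements CarService {
public String drive() { return "driving a sedan"; }
}
@RestController
public class CarController {
// Spring injects every CarService bean, keyed by its bean name ("suv", "sedan")
private final Map<String, CarService> carServices;
public CarController(Map<String, CarService> carServices) {
this.carServices = carServices;
}
@GetMapping("/cars/{type}")
public String drive(@PathVariable String type) {
CarService service = carServices.get(type);
if (service == null) {
throw new IllegalArgumentException("Unknown car type: " + type);
}
return service.drive();
}
}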
