Spring-Kafka @KafkaHandlers not consuming respective Java objects - java

I know this has been asked multiple times; however, I can't find a solution yet. I am able to consume a specific Java object with my @KafkaListener (at the class level) and it works perfectly fine. However, when I try to consume multiple different JSON objects from the same topic (I use @KafkaListener at the class level and @KafkaHandler at the method level, with each @KafkaHandler method expecting a different object), it always produces a LinkedHashMap. I could parse this and use a factory pattern to generate different instances based on a JSON field, but I do not want to do that when Spring can automatically detect the specific @KafkaHandler for routing the message.
How do I consume different JSON objects from a single topic with a single @KafkaListener?
I am using the configs below:
@Bean
public ConsumerFactory<String, Object> abcConsumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "group-1");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    config.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), new JsonDeserializer<>(Object.class, false));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> abcListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(abcConsumerFactory());
    return factory;
}
If I use the actual class (Foo or Bar) instead of Object in the above configs, it works fine for that particular object. However, when I try to generalize, the message does not go to the specific @KafkaHandler; instead it goes to the @KafkaHandler with payload type LinkedHashMap (I added that handler just to check whether the message gets delivered to a @KafkaHandler at all).
Class:
@KafkaListener(topics = {"abc_gh"}, containerFactory = "abcListenerContainerFactory")
@Service
public class MyListener {

    @KafkaHandler
    public void consumeMessage(@Payload Foo f) {
        // only comes here when I use Foo in my configs instead of Object
    }

    @KafkaHandler
    public void consumeMessage22(@Payload Bar b) {
        // only comes here when I use Bar in my configs instead of Object
    }

    @KafkaHandler
    public void consumeMessage77(@Payload LinkedHashMap bc) {
        // comes here when I use Object in the configs, even if I expect a Foo or a Bar object
    }
}
One thing I want to share: the producer is not using Spring-Kafka.
I don't know what I am missing; I have tried a lot of things but had no luck.

As the documentation says:
When using @KafkaHandler methods, the payload must have already been converted to the domain object (so the match can be performed). Use a custom deserializer, the JsonDeserializer, or the JsonMessageConverter with its TypePrecedence set to TYPE_ID. See Serialization, Deserialization, and Message Conversion for more information.
In order to route to the proper method, the ConsumerRecord must already have the correct type in it.
config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
You are not providing the deserializer with any information about what object to create.
When Spring is the producer, it adds type information in record headers which can be used by the deserializer to construct the right type.
If there is no type information, you'll get a Map.
The producer has to set the type headers for this to work.
When the @KafkaListener is at the method level, we can determine the type to create from the method parameters. At the class level, we have a catch-22: we can't choose a method unless the record has already been converted.
Your producer doesn't have to know the actual type, but it at least has to provide a header that can be used to look up the type we need to convert to.
See Mapping Types.
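For example, a sketch (the token names and packages are placeholders, not the poster's actual setup): if your producer sets the __TypeId__ header to a token such as foo or bar, the consumer-side JsonDeserializer can map those tokens to classes via JsonDeserializer.TYPE_MAPPINGS:

@Bean
public ConsumerFactory<String, Object> abcConsumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "group-1");
    config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    // token -> class mapping used to resolve the __TypeId__ header
    config.put(JsonDeserializer.TYPE_MAPPINGS, "foo:com.example.Foo, bar:com.example.Bar");
    config.put(JsonDeserializer.TRUSTED_PACKAGES, "com.example");
    JsonDeserializer<Object> valueDeserializer = new JsonDeserializer<>();
    valueDeserializer.configure(config, false); // false = this is the value deserializer
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), valueDeserializer);
}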
Alternatively, the producer's JSON serializer must be configured to add type information into the JSON itself.
Another option is a custom deserializer that "peeks" at the JSON to determine what class to instantiate.
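A rough sketch of that last option, a deserializer that looks at a field inside the JSON to pick the target class; the "type" discriminator field and the Foo/Bar routing are assumptions, not something the question specifies:

public class PeekingJsonDeserializer implements Deserializer<Object> {

    private final ObjectMapper mapper = new ObjectMapper();

    @Override
    public Object deserialize(String topic, byte[] data) {
        try {
            JsonNode tree = mapper.readTree(data);
            // Pick the target class from a discriminator field in the payload itself
            Class<?> target = "foo".equals(tree.path("type").asText()) ? Foo.class : Bar.class;
            return mapper.treeToValue(tree, target);
        }
        catch (IOException e) {
            throw new SerializationException("Could not deserialize record", e);
        }
    }
}

It would then be passed to the consumer factory in place of the JsonDeserializer: new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), new PeekingJsonDeserializer()).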

Related

How to Parse untyped, nested JSON with Polymorphism?

I am using Feign to hit external APIs, and then to process the externally generated JSON (aka, the response data cannot be modified in any way), and I am trying to bundle these together into an extensible super type. At this point, I am not even sure if what I am trying to do is possible with Jackson / Feign. If it would be much easier to abandon (or heavily restructure) the polymorphism, I think I am also ready to give up on it and just create a bunch of sub classes.
Here are my two main questions, with more context below.
Should I just separate the easily deduced types from the complex types, and have a little more duplicated boilerplate?
How can I create a custom deserializer for the list object I linked? Ideally I would like to have some way to populate the more boilerplate fields less manually -- as an example, it would be great if I could call default deserializers inside it, which would rely more on the standard annotations in other objects.
Ideally, I would like one class, like this:
public final class BillApiResponse {

    @Valid
    @JsonProperty("response_status")
    private boolean responseStatus;

    @Valid
    @JsonProperty("response_message")
    private String responseMessage;

    @JsonProperty("response_data")
    private BillApiResponseData responseData;

    // getters and setters, etc.
}
and then I would have Jackson automatically map the simpler objects in whatever way is easiest (LoginResponse, LoginError), while I would try to implement a custom handler for the more complex objects (UpdateObject, ListOfObjects).
So, something like this:
@JsonTypeInfo(use = Id.DEDUCTION)
@JsonSubTypes({
    @Type(value = BillLoginSuccess.class),
    @Type(value = BillErrorResponse.class),
    // @Type(value = BillResponseObject[].class) <--- This breaks things when added
})
// @JsonTypeResolver(value = BillResponseTypeResolver.class) <--- Open to using one of
// these if I can figure out how
// @JsonDeserialize(using = BillResponseDeserializer.class) <--- Also open to using a
// custom deserializer, but I would like to keep it only for certain parts
public interface BillApiResponseData {}
Here is a link to the API specification I am trying to hit:
Get a List of Objects
This returns an untyped array of untyped objects. Jackson does not seem to like that the array is untyped, and stops parsing everything there. Once inside, we would have to grab the type from a property.
{
    "response_status" : 0,
    "response_message" : "Success",
    "response_data" : [{
        "entity" : "SentPay",
        "id" : "stp01AUXGYKCBGFMaqlc"
        // More fields
    }]
    // More values
}
Login
This returns a totally new object. Generally not having issues handling this one (until I add support for the above list, and then all of the parsing breaks down as Jackson throws errors).
Update Object
This returns an untyped object. Once again, we would have to go inside and look at the property.
I have tried a number of things, but generally I was not successful (hence I am here!).
These include:
Trying to hook into the lifecycle and take over if I detect an array object. I believe this fails because Jackson throws an error when it sees the array does not have a type associated with it.
SimpleModule customDeserializerModule = new SimpleModule()
    .setDeserializerModifier(new BeanDeserializerModifier() {
        @Override
        public JsonDeserializer<?> modifyDeserializer(
                DeserializationConfig config,
                BeanDescription beanDesc,
                JsonDeserializer<?> defaultDeserializer) {
            if (beanDesc.getBeanClass().isArray()) {
                return new BillResponseDeserializer(defaultDeserializer);
            } else {
                return defaultDeserializer;
            }
        }
    });
Custom Deserializers. The issue I have is that it seems to want to route ALL of my deserialization calls into the custom one, and I don't want to have to handle the simpler items, which can be deduced.
TypeIdResolvers / TypeResolvers. Frankly these are confusing me a little bit, and I cannot find a good example online to try out.
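To make the direction I'm circling around concrete, for the list elements themselves (which do carry an "entity" property) I imagine something like a name-based discriminator instead of deduction; this is only a sketch, and SentPay would be one of the entity classes I'd have to create:

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY,
        property = "entity", visible = true)
@JsonSubTypes({
    @JsonSubTypes.Type(value = SentPay.class, name = "SentPay")
})
public interface BillResponseObject {}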

Allowed packages in custom header of Kafka-Message

In spring-kafka, how do I add classes from a package to be trusted as a custom header field?
The message is being sent like this:
@Autowired
private KafkaTemplate kafkaTemplate;

Message<BouquetMQDTO> m = MessageBuilder
    .withPayload(payload)
    .setHeader(KafkaHeaders.TOPIC, "topic")
    .setHeader("EVENT_TYPE", MessageType.UPSERT)
    .build();
kafkaTemplate.send(m);
The receiving end looks like this:
@Component
@KafkaListener(topics = "topic")
public class KafkaController {

    @KafkaHandler
    public void listen(
            @Payload Object objectDTO,
            @Header(value = "EVENT_TYPE") MessageType messageType
    ) {
        System.out.println(messageType);
    }
}
The exception I keep getting is this:
Caused by: org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [org.springframework.kafka.support.DefaultKafkaHeaderMapper$NonTrustedHeaderType] to type [@org.springframework.messaging.handler.annotation.Header my.package.MessageType]
MessageType is an enum, and I can get it working by sending the String representation and using valueOf() on the receiving side, but this solution does not quite feel right. There are also loads of tutorials that use something from java.util, which is trusted by default.
I found that you should be able to declare a bean to allow the enum to be deserialized:
@Bean
public KafkaHeaderMapper defaultKafkaHeaderMapper() {
    DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
    mapper.addTrustedPackages("my.package");
    return mapper;
}
Sadly, this doesn't work. The exception remains the same. I assume I have to declare some more beans and use the KafkaHeaderMapper bean in them, but I can't seem to find out which those are.
I also already have a ConsumerFactory bean where I allow packages to be used as payloads, but allowing the enum's package there doesn't do anything either.
props.put(JsonDeserializer.TRUSTED_PACKAGES, "my.other.package,my.package");
return new DefaultKafkaConsumerFactory<>(props);
JsonDeserializer.TRUSTED_PACKAGES is not related to headers at all.
It only deals with the key or value of the ConsumerRecord; header mapping happens in a different place.
Not sure if you use Spring Boot, but there is a MessagingMessageListenerAdapter which comes with a default MessagingMessageConverter and, therefore, a default DefaultKafkaHeaderMapper. To customize it with your own HeaderMapper, you need to create that MessagingMessageConverter, give it a reference to your HeaderMapper and inject that converter into an AbstractKafkaListenerContainerFactory bean.
If you use Spring Boot, it is enough to declare that MessagingMessageConverter as a bean and it will be auto-configured into the AbstractKafkaListenerContainerFactory created by the framework.
This way you get your trusted packages applied. However, I think it is still not going to work because enums are not so JSON-friendly by default: https://www.baeldung.com/jackson-serialize-enums
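A minimal sketch of that converter bean (assuming Spring Boot, which should pick it up and wire it into the listener container factory as described above):

@Bean
public MessagingMessageConverter messagingMessageConverter() {
    MessagingMessageConverter converter = new MessagingMessageConverter();
    DefaultKafkaHeaderMapper headerMapper = new DefaultKafkaHeaderMapper();
    headerMapper.addTrustedPackages("my.package");
    converter.setHeaderMapper(headerMapper);
    return converter;
}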

DefaultKafkaProducerFactory with multiple JsonSerializer mappings

I'm going through the Spring documentation and found that we can have multiple type mappings for a single producer factory (spring-docs):
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
senderProps.put(JsonSerializer.TYPE_MAPPINGS, "foo:com.myfoo.Foo, bar:com.mybar.bar");
But it is unclear to me how to create a ProducerFactory like the one below:
@Bean
public ProducerFactory<Foo, Bar> kafkaProducerFactory(KafkaProperties properties,
        JsonSerializer customSerializer) {
    return new DefaultKafkaProducerFactory<>(properties.buildProducerProperties(),
            customSerializer, customSerializer);
}
As far as I understand, Foo must be the key and Bar must be the value, right? And what is this customSerializer? I'm looking for a clear example with much more info.
My question is: I want a single ProducerFactory and KafkaTemplate that produce multiple types of message to Kafka, for example Foo, Bar and Car. Is that possible?
No; this
senderProps.put(JsonSerializer.TYPE_MAPPINGS, "foo:com.myfoo.Foo, bar:com.mybar.bar");
is only applied when the serializer/deserializer is defined via the properties alone.
When using the DefaultKafkaConsumerFactory and DefaultKafkaProducerFactory constructors that take fully built serializer/deserializer objects directly, you must configure the deserializer yourself.
DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
typeMapper.setIdClassMapping(myTypeMappingsMap);

JsonDeserializer<Object> deserializer = new JsonDeserializer<>();
deserializer.setTypeMapper(typeMapper);
(and similarly for the serializer).
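The producer side is analogous; a sketch reusing the same myTypeMappingsMap and the KafkaProperties from the question:

DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
typeMapper.setIdClassMapping(myTypeMappingsMap);

JsonSerializer<Object> serializer = new JsonSerializer<>();
serializer.setTypeMapper(typeMapper);

ProducerFactory<String, Object> producerFactory =
        new DefaultKafkaProducerFactory<>(properties.buildProducerProperties(),
                new StringSerializer(), serializer);

With that in place, a single KafkaTemplate<String, Object> built from this factory can send Foo, Bar or Car, and the type mappings drive the __TypeId__ header.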

ModelMapper - Converter/ AbstractConverter vs Provider

I'm using ModelMapper to convert some objects to complex DTOs and vice-versa.
Even though I've tried to understand the documentation, I've found it hard to understand when to use a Converter, a Provider or an AbstractConverter.
Now, for example, if I want to convert String properties to little DTOs inside of the destination DTO, I'm doing it manually inside an abstract converter.
For instance:
dest.setAddressDTO(new AddressDTO(source.getStreet(), source.getNumber()));
Is that, though, the right way to do it?
When should I use a provider?
And if I want to set properties with conditionals, can I use Conditional from within the converter, or is that only possible when using a PropertyMap?
Additionally, is it a good practice to use the same modelMapper instance to convert several different types of objects?
Thanks in advance
The right way to work with this is to use Converters.
For example, let's say I want to create a converter to convert a dto into a domain object.
So first you define a converter:
private Converter<CompanyDto, Company> companyDtoToCompany = new AbstractConverter<CompanyDto, Company>() {

    @Override
    protected Company convert(CompanyDto source) {
        Company dest = new Company();
        dest.setName(source.getName());
        dest.setAddress(source.getAddress());
        dest.setContactName(source.getContactName());
        dest.setContactEmail(source.getContactEmail());
        (...)
        dest.setStatus(source.getStatus());
        return dest;
    }
};
Then you add it to the mapper in the configureMappings() method:
modelMapper = new ModelMapper();
// Using STRICT mode to prevent strange entity mappings
modelMapper.getConfiguration()
        .setMatchingStrategy(MatchingStrategies.STRICT);
modelMapper.addConverter(companyDtoToCompany);
// modelMapper.addConverter(otherConverter);
And finally you just add the mapping methods for the types you want to use from your code:
public Company convertCompanyReqDtoToCompany(CompanyDto dto, Class<Company> destinationType) {
    return modelMapper.map(dto, destinationType);
}
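A call site would then look something like this (the DTO setup is illustrative, assuming the usual setters exist):

CompanyDto dto = new CompanyDto();
dto.setName("Acme");
Company company = convertCompanyReqDtoToCompany(dto, Company.class);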

What's the common way to deal with Jackson serialization

Currently I have a project that makes use of Spring-Hibernate and also Jackson to deal with JSON. The first time I tried to use Jackson I always got a LazyInitializationException, and sometimes an infinite loop for entities that reference each other. Then I found @JsonIgnore and @JsonIdentityInfo.
Now the problem is that sometimes I need to ignore properties, but sometimes I need those same properties to be serialized. Is there a way to decide at runtime which fields to ignore and which to serialize?
I found "Serialization and Deserialization with Jackson: how to programmatically ignore fields?"
But if I always have to use the mix-in annotation, it would be cumbersome for an object with dozens of properties to retrieve. E.g. on page1 I need propertyA, propertyB and propertyC; on page2 I need propertyA and propertyC; on page3 I only need propertyB. In those cases alone I would have to create one class for each page, resulting in 3 classes.
So in that case is there a way to define something like:
objectA.ignoreAllExcept('propertyA');
String[] properties = {'propertyA', 'propertyC'};
objectB.ignoreAllExcept(properties); // Retrieve propertyA and propertyC
objectC.ignore(properties);
What you might be looking for is a Module. The documentation says that Modules are
Simple interface for extensions that can be registered with ObjectMappers to provide a well-defined set of extensions to default functionality.
Following is an example of how you might use them to accomplish what you want. Note that there are other ways this can be achieved; this is just one of them.
A simple DTO that can be used for specifying the properties to filter:
public class PropertyFilter {

    public Class<?> classToFilter;
    public Set<String> propertiesToIgnore = Collections.emptySet();

    public PropertyFilter(Class<?> classToFilter, Set<String> propertiesToIgnore) {
        this.classToFilter = classToFilter;
        this.propertiesToIgnore = propertiesToIgnore;
    }
}
A custom module that filters out properties based on some attribute that you store in the current request.
public class MyModule extends Module {

    @Override
    public String getModuleName() {
        return "Test Module";
    }

    @Override
    public void setupModule(SetupContext context) {
        context.addBeanSerializerModifier(new MySerializerModifier());
    }

    @Override
    public Version version() {
        // Modify if you need to.
        return Version.unknownVersion();
    }

    public static class MySerializerModifier extends BeanSerializerModifier {

        @Override
        public BeanSerializerBuilder updateBuilder(SerializationConfig config,
                BeanDescription beanDesc,
                BeanSerializerBuilder builder) {
            List<PropertyFilter> filters = (List<PropertyFilter>) RequestContextHolder.getRequestAttributes()
                    .getAttribute("filters", RequestAttributes.SCOPE_REQUEST);
            PropertyFilter filter = getPropertyFilterForClass(filters, beanDesc.getBeanClass());
            if (filter == null) {
                return builder;
            }
            List<BeanPropertyWriter> propsToWrite = new ArrayList<BeanPropertyWriter>();
            for (BeanPropertyWriter writer : builder.getProperties()) {
                if (!filter.propertiesToIgnore.contains(writer.getName())) {
                    propsToWrite.add(writer);
                }
            }
            builder.setProperties(propsToWrite);
            return builder;
        }

        private PropertyFilter getPropertyFilterForClass(List<PropertyFilter> filters, Class<?> classToCheck) {
            for (PropertyFilter f : filters) {
                if (f.classToFilter.equals(classToCheck)) {
                    return f;
                }
            }
            return null;
        }
    }
}
Note: There is a changeProperties method in the BeanSerializerModifier class that is more appropriate for changing the property list (according to the documentation). So you can move the code written in the updateBuilder to changeProperties method with appropriate changes.
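A sketch of the same filtering logic moved into changeProperties:

@Override
public List<BeanPropertyWriter> changeProperties(SerializationConfig config,
        BeanDescription beanDesc, List<BeanPropertyWriter> beanProperties) {
    List<PropertyFilter> filters = (List<PropertyFilter>) RequestContextHolder.getRequestAttributes()
            .getAttribute("filters", RequestAttributes.SCOPE_REQUEST);
    PropertyFilter filter = getPropertyFilterForClass(filters, beanDesc.getBeanClass());
    if (filter == null) {
        return beanProperties;
    }
    List<BeanPropertyWriter> propsToWrite = new ArrayList<>();
    for (BeanPropertyWriter writer : beanProperties) {
        if (!filter.propertiesToIgnore.contains(writer.getName())) {
            propsToWrite.add(writer);
        }
    }
    return propsToWrite;
}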
Now, you need to register this custom module with your ObjectMapper. You can get the Jackson HTTP message converter from your application context, and get its object mapper. I am assuming you already know how to do that as you have been dealing with the lazy-initialization issue as well.
// Figure out a way to get the ObjectMapper.
MappingJackson2HttpMessageConverter converter = ... // get the jackson-mapper;
converter.getObjectMapper().registerModule(new MyModule());
And you are done. When you want to customize the serialization for a particular type of object, create a PropertyFilter for that, put it in a List and make it available as an attribute in the current request. This is just a simple example. You might need to tweak it a bit to suit your needs.
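For example, in a controller or an interceptor, before the response is rendered (the entity class and property names are only placeholders):

List<PropertyFilter> filters = Collections.singletonList(
        new PropertyFilter(MyEntity.class, new HashSet<>(Arrays.asList("propertyB"))));
RequestContextHolder.getRequestAttributes()
        .setAttribute("filters", filters, RequestAttributes.SCOPE_REQUEST);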
In your question, you seem to be looking for a way to specify the properties-to-filter-out on the serialized objects themselves. That, in my opinion, should be avoided, as the list of properties to filter out doesn't belong in your entities. However, if you do want to do that, create an interface that provides setters and getters for the list of properties. Suppose the name of the interface is CustomSerialized. Then you can modify the MyModule class to look for instances of this CustomSerialized interface and filter out the properties accordingly.
Note: You might need to adjust/tweak a few things based on the versions of the libraries you are using.
I think there is a more flexible way to do it. You can configure Jackson in such a way that it will silently ignore lazy-loaded properties instead of stopping the serialization process, so you can reuse the same class. Just load all necessary properties / relations and pass the object to Jackson. You can try this by declaring your own ObjectMapper and turning off the SerializationFeature.FAIL_ON_EMPTY_BEANS feature. Hope it helps.
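A minimal sketch of that configuration:

ObjectMapper mapper = new ObjectMapper();
// Do not fail when a bean (e.g. an uninitialized lazy proxy) has no serializable properties
mapper.disable(SerializationFeature.FAIL_ON_EMPTY_BEANS);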
You can filter out properties without modifying classes by creating a static interface for a mix-in annotation. Next, annotate that interface with the @JsonFilter annotation. Create a SimpleBeanPropertyFilter and a SimpleFilterProvider. Then create an ObjectWriter with your filter provider by invoking objectMapper.writer(filterProvider).
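A short sketch of that approach (the filter id "pageFilter" and the property names are arbitrary):

@JsonFilter("pageFilter")
interface PropertyFilterMixIn {}

// elsewhere:
ObjectMapper mapper = new ObjectMapper();
mapper.addMixIn(Object.class, PropertyFilterMixIn.class);
SimpleFilterProvider filterProvider = new SimpleFilterProvider()
        .addFilter("pageFilter", SimpleBeanPropertyFilter.filterOutAllExcept("propertyA", "propertyC"));
String json = mapper.writer(filterProvider).writeValueAsString(objectB);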
