I have created the below test class to produce an event using AvroSerializer.
@SpringBootTest
@EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
@TestPropertySource(locations = ("classpath:application-test.properties"))
@ContextConfiguration(classes = { TestAppConfig.class })
@DirtiesContext
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class EntitlementEventsConsumerServiceImplTest {

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Bean
    MockSchemaRegistryClient mockSchemaRegistryClient() {
        return new MockSchemaRegistryClient();
    }

    @Bean
    KafkaAvroSerializer kafkaAvroSerializer() {
        return new KafkaAvroSerializer(mockSchemaRegistryClient());
    }

    @Bean
    public DefaultKafkaProducerFactory producerFactory() {
        Map<String, Object> props = KafkaTestUtils.producerProps(embeddedKafkaBroker);
        props.put(KafkaAvroSerializerConfig.AUTO_REGISTER_SCHEMAS, false);
        return new DefaultKafkaProducerFactory(props, new StringSerializer(), kafkaAvroSerializer());
    }

    @Bean
    public KafkaTemplate<String, ApplicationEvent> kafkaTemplate() {
        KafkaTemplate<String, ApplicationEvent> kafkaTemplate = new KafkaTemplate(producerFactory());
        return kafkaTemplate;
    }
}
But when I send an event using kafkaTemplate().send(appEventsTopic, applicationEvent); I get the exception below.
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Schema Not Found; error code: 404001
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getIdFromRegistry(MockSchemaRegistryClient.java:79)
at io.confluent.kafka.schemaregistry.client.MockSchemaRegistryClient.getId(MockSchemaRegistryClient.java:273)
at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:82)
at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:902)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:781)
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:562)
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:363)
Since I am using MockSchemaRegistryClient, why is it trying to look up the schema?
schema.registry.url=mock://localhost.something
Basically, any URL with the mock:// prefix will do the job.
Refer to https://github.com/confluentinc/schema-registry/blob/master/avro-serializer/src/main/java/io/confluent/kafka/serializers/AbstractKafkaAvroSerDeConfig.java
Also set auto.register.schemas=true.
You are configuring the producer not to auto-register new schemas when producing a message, so it only tries to fetch the schema from the schema registry and, since nothing registered it, does not find it there.
I also did not see you set a schema registry URL, so I guess it is falling back to the default value.
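As a sketch, those two changes could be applied directly to the serializer bean above (the mock://test-scope value is arbitrary; any mock:// URL works):
@Bean
KafkaAvroSerializer kafkaAvroSerializer() {
    Map<String, Object> config = new HashMap<>();
    // Point the serializer at the in-memory mock registry; any URL with the mock:// prefix works.
    config.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "mock://test-scope");
    // Allow the serializer to register the schema on first use instead of only looking it up.
    config.put(KafkaAvroSerializerConfig.AUTO_REGISTER_SCHEMAS, true);
    return new KafkaAvroSerializer(mockSchemaRegistryClient(), config);
}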
To answer your question: the mock imitates the work of a real schema registry, but it has clear disadvantages:
/**
Mock implementation of SchemaRegistryClient that can be used for tests. This version is NOT
thread safe. Schema data is stored in memory and is not persistent or shared across instances.
*/
You may look at the source for more information:
https://github.com/confluentinc/schema-registry/blob/master/client/src/main/java/io/confluent/kafka/schemaregistry/client/MockSchemaRegistryClient.java#L47
I have a simple REST API with an H2 database, so my plan is that when I run multiple instances of the same app, they will each have their own in-memory database. Now I want to synchronize these databases between them. I thought Kafka would be a good solution, so for example when the instance on port 8080 gets a POST, it should also be applied to all other instances. Now my app acts as a producer and a consumer at the same time, and I do not know why only one instance receives the message.
The code:
@EnableKafka
@Configuration
public class KafkaProducerConfigForDepartment {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, MessageEventForDepartment> producerFactoryForDepartment() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, MessageEventForDepartment> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactoryForDepartment());
    }
}
@Configuration
public class KafkaTopicConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Bean
    public ConsumerFactory<String, MessageEventForDepartment> consumerFactoryForDepartments() {
        Map<String, Object> props = new HashMap<>();
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "groupId");
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(MessageEventForDepartment.class));
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("topic12")
                .partitions(10)
                .replicas(10)
                .build();
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MessageEventForDepartment> kafkaListenerContainerFactoryForDepartments() {
        ConcurrentKafkaListenerContainerFactory<String, MessageEventForDepartment> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactoryForDepartments());
        return factory;
    }
}
@Component
@Slf4j
public class DepartmentKafkaService {

    @Autowired
    private DepartmentService departmentService;

    @KafkaListener(topics = "topic12", groupId = "groupId", containerFactory = "kafkaListenerContainerFactoryForDepartments")
    public void listenGroupFoo(MessageEventForDepartment message) {
        log.info(message.toString());
    }
}
Why is this happening? Or maybe my approach is not very good; what are your thoughts, guys?
Have you considered Kafka Streams? In my opinion, what you want is already provided by the internal RocksDB state store and the GlobalKTable implementation in Kafka Streams.
RocksDB will behave exactly like the H2 database you've mentioned, and the GlobalKTable functionality allows you to broadcast the current state to all running Kafka Streams instances and read the data with ease.
Example:
Producer part:
@RestController
class MessageEventForDepartmentController {

    @Autowired
    KafkaTemplate<String, MessageEventForDepartment> kafkaTemplate;

    @PostMapping(path = "/departments", consumes = "application/json")
    @ResponseStatus(HttpStatus.ACCEPTED)
    void publish(@RequestBody MessageEventForDepartment event) {
        kafkaTemplate.send("topic-a", event.getId(), event);
    }
}
Consumer part - KafkaStreams GlobalKTable
@Component
public class StreamsBuilderMessageEventForDepartment {

    @Autowired
    void buildPipeline(StreamsBuilder streamsBuilder) {
        KeyValueBytesStoreSupplier storeSupplier = Stores.inMemoryKeyValueStore("MessageEventForDepartmentGlobalStateStore");
        Materialized<String, MessageEventForDepartment, KeyValueStore<Bytes, byte[]>> materialized = Materialized.<String, MessageEventForDepartment>as(storeSupplier)
                .withKeySerde(Serdes.String())
                .withValueSerde(new JsonSerde<>(MessageEventForDepartment.class));
        GlobalKTable<String, MessageEventForDepartment> departments = streamsBuilder.globalTable("topic-a", materialized);
    }
}
Read data from RocksDB
@RestController
class MessageEventForDepartmentReadModelController {

    @Autowired
    KafkaStreams kafkaStreams;

    @GetMapping(path = "/departments")
    MessageEventForDepartment getMessageEventForDepartment(String eventId) {
        ReadOnlyKeyValueStore<String, MessageEventForDepartment> store = kafkaStreams.store(StoreQueryParameters.fromNameAndType("MessageEventForDepartmentGlobalStateStore", QueryableStoreTypes.keyValueStore()));
        return store.get(eventId);
    }
}
The reason only one instance of the application receives each message is that all instances share the same ConsumerConfig.GROUP_ID_CONFIG. Kafka's consumer protocol delivers each message only once per consumer group (obviously there's a lot more nuance to it, but this is basically how it works).
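If the goal is simply for every instance to receive every message, one option is to give each instance its own consumer group. A sketch of that applied to your consumer factory (the random suffix is just an illustration):
@Bean
public ConsumerFactory<String, MessageEventForDepartment> consumerFactoryForDepartments() {
    Map<String, Object> props = new HashMap<>();
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    // A unique group id per instance means every instance gets its own copy of each message.
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "departments-" + UUID.randomUUID());
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(MessageEventForDepartment.class));
}
Keep in mind that the groupId attribute on your @KafkaListener overrides the factory setting, so it would need to be removed or made unique as well.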
Pawel's suggestion to use Kafka Streams is a good one; a GlobalKTable would provide what you want.
Luca Pette wrote a great primer on Kafka Streams here: https://lucapette.me/writing/getting-started-with-kafka-streams/
My understanding of your question is that you are running multiple instances of the same app, each with an in-memory database, and for eventual consistency you are going with Kafka Streams.
MY SOLUTIONS:
I have used RabbitMQ mirroring, which solves the same problem; Kafka also supports mirroring, see the doc: https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=27846330#content/view/27846330
Consider a Redis cluster or a master-slave setup for the in-memory DB.
I'm using the following dependency to send and receive messages from an Azure Service Bus topic:
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>spring-cloud-azure-starter-servicebus-jms</artifactId>
    <version>4.2.0</version>
</dependency>
I'd like to create the configuration in code through a Spring bean, because I need to configure more than one connection string, so after reading the documentation I decided to create this bean:
@Bean
@Primary
public AzureServiceBusJmsProperties priceListJmsProperties() {
    var properties = new AzureServiceBusJmsProperties();
    properties.setConnectionString(connectionString);
    properties.setPricingTier("standard");
    properties.setTopicClientId(priceListTopicName);
    return properties;
}
If I debug the object creation, I can see that this object is being created twice: the first time with the configuration I've provided, and the second time with null data. This is why I'm getting the following error, because there is a validation in this object that throws an exception if a certain field is not set in the properties file:
spring.jms.servicebus.connection-string' should be provided
I've tried creating a connection factory instead, but for the reason above I'm getting the same error.
Does anyone know how I can set this configuration as a bean instead of via the application.properties file? Thanks in advance.
Following @DeepDave-MT's answer, I couldn't disable the JMS autoconfiguration with the spring.jms.servicebus.enabled property, so I decided to exclude ServiceBusJmsAutoConfiguration with the spring.autoconfigure.exclude property; you have to pass the fully qualified class name to this property.
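For reference, the same exclusion can also be declared on the application class; the import below reflects my assumption about the package, so verify it against your spring-cloud-azure version:
// Assumed package; check where ServiceBusJmsAutoConfiguration lives in your dependency tree.
import com.azure.spring.cloud.autoconfigure.jms.ServiceBusJmsAutoConfiguration;

@SpringBootApplication(exclude = ServiceBusJmsAutoConfiguration.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}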
Then, in my config class, I just added the following beans:
@Bean
@Primary
public ConnectionFactory connectionFactory() {
    var connectionFactory = new ServiceBusJmsConnectionFactory(connectionString);
    var serviceBusConnectionString = new ServiceBusConnectionString(connectionString);
    var remoteUri = String.format(AMQP_URI_FORMAT, serviceBusConnectionString.getEndpointUri(), 100000);
    connectionFactory.setRemoteURI(remoteUri);
    connectionFactory.setClientID(topicName);
    connectionFactory.setUsername(serviceBusConnectionString.getSharedAccessKeyName());
    connectionFactory.setPassword(serviceBusConnectionString.getSharedAccessKey());
    return new CachingConnectionFactory(connectionFactory);
}

@Bean
@Primary
public JmsListenerContainerFactory<?> topicJmsListenerContainerFactory(@Qualifier("connectionFactory") ConnectionFactory connectionFactory) {
    var topicFactory = new DefaultJmsListenerContainerFactory();
    topicFactory.setConnectionFactory(connectionFactory);
    topicFactory.setSubscriptionDurable(Boolean.TRUE);
    topicFactory.setErrorHandler(priceListErrorHandler());
    return topicFactory;
}

@Bean
@Primary
public AzureServiceBusJmsProperties jmsProperties() {
    var properties = new AzureServiceBusJmsProperties();
    properties.setConnectionString(connectionString);
    properties.setPricingTier("standard");
    properties.setTopicClientId(topicName);
    return properties;
}
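A hypothetical listener wired to the factory above might then look like this (the destination and subscription names are placeholders, not taken from the question):
@JmsListener(destination = "price-list-topic", containerFactory = "topicJmsListenerContainerFactory",
        subscription = "price-list-subscription")
public void onPriceListMessage(String message) {
    // process the message received from the Service Bus topic subscription
}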
I'm trying to configure Spring CacheManager with Hazelcast. Also, I want to configure Hazelcast's Near Cache so I can retrieve the (already deserialized) instance of my cached object.
Here is my configuration
@Bean
public HazelcastInstance hazelcastConfig() {
    val config = new Config().setInstanceName("instance");
    val serializationConfig = config.getSerializationConfig();
    addCacheConfig(config, "USERS");
    serializationConfig.addSerializerConfig(new SerializerConfig()
            .setImplementation(getSerializer())
            .setTypeClass(User.class));
    return Hazelcast.newHazelcastInstance(config);
}

@Bean
public CacheManager cacheManager(HazelcastInstance hazelcastInstance) {
    return new HazelcastCacheManager(hazelcastInstance);
}

@Bean
public PlatformTransactionManager chainedTransactionManager(PlatformTransactionManager jpaTransactionManager, HazelcastInstance hazelcastInstance) {
    return new ChainedTransactionManager(
            jpaTransactionManager,
            new HazelcastTransactionManager(hazelcastInstance)
    );
}
// Configure Near Cache
private void addCacheConfig(Config config, String cacheName) {
    val nearCacheConfig = new NearCacheConfig()
            .setInMemoryFormat(OBJECT)
            .setCacheLocalEntries(true)
            .setInvalidateOnChange(false)
            .setTimeToLiveSeconds(hazelcastProperties.getTimeToLiveSeconds())
            .setEvictionConfig(new EvictionConfig()
                    .setMaxSizePolicy(ENTRY_COUNT)
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setSize(hazelcastProperties.getMaxEntriesSize()));
    config.getMapConfig(cacheName)
            .setInMemoryFormat(BINARY)
            .setNearCacheConfig(nearCacheConfig);
}
Saving to and retrieving from the cache works fine, but my object is deserialized on every cache hit. I want to avoid this deserialization cost by using a Near Cache, but it doesn't work. I also tried the BINARY in-memory format.
Is this possible with Hazelcast? Or is this deserialization always executed even if I have a Near Cache?
Thanks
So after a few changes, it is working now. Here is my conclusion:
In order to have the Near Cache working with Spring Cache, all your cached objects should be immutable, meaning final classes and final fields. Also, they should all implement the Serializable interface.
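For illustration only (the fields are made up), an immutable, serializable cached value could look like this:
public final class User implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String id;
    private final String name;

    public User(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}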
I'm building a Kafka Streams application with spring-kafka to group records by key and apply some business logic. I'm following the configuration described in the spring-kafka Streams documentation, but the problem is that when I want to retrieve a value from the local store I get the following error:
org.apache.kafka.streams.errors.InvalidStateStoreException: The state store, user-data-response-count, may have migrated to another instance.
at org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:60)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1053)
at com.umantis.management.service.UserDataManagementService.broadcastUserDataRequest(UserDataManagementService.java:121)
Here is my KafkaStreamsConfiguration:
@Configuration
@EnableConfigurationProperties(EventsKafkaProperties.class)
@EnableKafka
@EnableKafkaStreams
public class KafkaConfiguration {

    @Value("${app.kafka.streams.application-id}")
    private String applicationId;

    // This contains both the bootstrap servers and the schema registry url
    @Autowired
    private EventsKafkaProperties eventsKafkaProperties;

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public StreamsConfig streamsConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, this.eventsKafkaProperties.getBrokers());
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, this.eventsKafkaProperties.getSchemaRegistryUrl());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new StreamsConfig(props);
    }

    @Bean
    public KGroupedStream<String, UserDataResponse> responseKStream(StreamsBuilder streamsBuilder, TopicUtils topicUtils) {
        final Map<String, String> serdeConfig = Collections.singletonMap("schema.registry.url", this.eventsKafkaProperties.getSchemaRegistryUrl());
        final Serde<UserDataResponse> valueSpecificAvroSerde = new SpecificAvroSerde<>();
        valueSpecificAvroSerde.configure(serdeConfig, false);
        return streamsBuilder
                .stream("myTopic", Consumed.with(Serdes.String(), valueSpecificAvroSerde))
                .groupByKey();
    }
And here is my service code failing on getKafkaStreams().store:
@Slf4j
@Service
public class UserDataManagementService {

    private static final String RESPONSE_COUNT_STORE = "user-data-response-count";

    @Autowired
    private StreamsBuilderFactoryBean streamsBuilderFactory;

    public UserDataResponse broadcastUserDataRequest() {
        this.responseGroupStream.count(Materialized.as(RESPONSE_COUNT_STORE));
        if (!this.streamsBuilderFactory.isRunning()) {
            throw new KafkaStoreNotAvailableException();
        }
        // here we should have a single running kafka instance
        ReadOnlyKeyValueStore<String, Long> countStore =
                this.streamsBuilderFactory.getKafkaStreams().store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());
        ...
    }
Context: I'm running the app as a single instance in a Spring Boot test, and I'm making sure the Kafka instance is in a running state. I've searched the Apache documentation on this issue, but my case does not appear to match.
Can anyone point out what I'm doing wrong and suggest a possible solution?
I'm quite new to Kafka Streams, so any help would be highly appreciated.
OK, I just realized I was checking whether the streams factory was running, but not whether the Kafka Streams instance itself was actually running.
Polling streamsBuilderFactory.getKafkaStreams().state() until it reaches RUNNING solved the issue.
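A minimal sketch of that polling (the timeout and sleep interval are arbitrary, and the surrounding method is assumed to declare throws InterruptedException):
KafkaStreams kafkaStreams = this.streamsBuilderFactory.getKafkaStreams();
long deadline = System.currentTimeMillis() + 30_000;
// Wait until the instance has finished starting/rebalancing before querying the store.
while (kafkaStreams.state() != KafkaStreams.State.RUNNING) {
    if (System.currentTimeMillis() > deadline) {
        throw new KafkaStoreNotAvailableException();
    }
    Thread.sleep(100);
}
ReadOnlyKeyValueStore<String, Long> countStore =
        kafkaStreams.store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());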
I'm working on OS X with Java 7, Spring 3.2, Jedis 2.1.0, and spring-data-redis 1.1.1, trying to get the most bare-bones Redis setup working with the default Redis configuration, meaning I haven't put anything into the redis.conf file. When I run
redis-server
it says it has started and is ready to accept connections on port 6379.
Initially I tried this with annotated beans for the RedisTemplate and JedisConnectionFactory, but Spring complained it couldn't create or find those beans, so I did it this way. Maybe that was a sign of a more basic problem. So I went with the slightly longer version below, which at least appears to create the Redis and Jedis components.
Here is my test:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = AnnotationConfigContextLoader.class)
public class RedisTest {

    @Configuration
    static class ContextConfiguration {
    }

    RedisTemplate<String, String> template;

    private JedisConnectionFactory getJedisConnectionFactory() {
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName("localhost");
        factory.setPort(6379);
        factory.setUsePool(true);
        return factory;
    }

    private RedisTemplate<String, String> getRedisTemplate() {
        RedisTemplate<String, String> redisTemplate = new RedisTemplate<String, String>();
        redisTemplate.setConnectionFactory(getJedisConnectionFactory());
        return redisTemplate;
    }

    @Test
    public void testRedis() {
        System.out.println("testing redis ");
        template = getRedisTemplate();
        template.opsForValue().set("Key", "Value");
        String value = template.opsForValue().get("Key");
        System.out.println("got value : " + value);
    }
}
and the top of the error stack trace is
java.lang.NullPointerException
java.lang.NullPointerException
at org.springframework.data.redis.core.AbstractOperations.rawValue(AbstractOperations.java:110)
at org.springframework.data.redis.core.DefaultValueOperations.set(DefaultValueOperations.java:166)
at com.mycompany.storage.RedisTest.testRedis(RedisTest.java:46)
The problem is that both the redisTemplate and the jedisConnectionFactory need to have afterPropertiesSet() called on them. Usually this is done by the Spring container, but since that wasn't working for me, it has to be called explicitly.
Also, these lines
factory.setHostName("localhost");
factory.setPort(6379);
factory.setUsePool(true);
are unnecessary because they are the default values.
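Putting both points together, a sketch of the corrected helper methods from the test above might look like this:
private JedisConnectionFactory getJedisConnectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory();
    // localhost:6379 with pooling is already the default, so no explicit setters are needed.
    factory.afterPropertiesSet(); // normally invoked by the Spring container
    return factory;
}

private RedisTemplate<String, String> getRedisTemplate() {
    RedisTemplate<String, String> redisTemplate = new RedisTemplate<String, String>();
    redisTemplate.setConnectionFactory(getJedisConnectionFactory());
    redisTemplate.afterPropertiesSet(); // initializes the default serializers, etc.
    return redisTemplate;
}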