I am working on moving a Spring sink microservice from my local machine to the OpenShift platform. Inside of my microservice I create a KafkaTemplate like this:
@Autowired
KafkaTemplate<String, String> kafkaTemplate;
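For context, it is then used to send roughly like this (the topic name here is just an example):

// 'message' is the String payload being forwarded (placeholder topic name)
kafkaTemplate.send("output-topic", message);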
Sending Kafka messages this way works perfectly fine on my local machine; when I move to OpenShift, however, I get this error:
Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I find this a little confusing, because when my microservice comes up I can see the address of the Kafka servers in the logs, and because other Kafka functions seem to work fine (I am able to read from the Kafka topic that my processor microservice is writing into). The workaround I have found is to manually create a ProducerFactory and set ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to the bootstrap server address I found in the logs.
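For reference, a minimal sketch of that workaround (the broker address is a placeholder for the one copied out of the logs):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ManualKafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        // broker address copied by hand from the startup logs (placeholder value)
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}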
So I guess I have a couple of questions. One: is KafkaTemplate only wired to run locally, or is there a way to wire it to use the bootstrap server assigned after the service comes up? If not, is there a way to get the bootstrap server address that I am seeing in the logs? Obviously my service has access to it at some point, so is there some way I could dynamically set BOOTSTRAP_SERVERS_CONFIG to the value that is being printed in the logs?
Or is there a better way to write out to a Kafka topic from a Spring sink microservice?
Any help would be much appreciated!
I have a Kafka consumer service (Spring Boot). The problem is that the topic names are not statically available, i.e., they are not in a properties file. The consumer service has to request the topic names from another service. I am not sure how to use the @KafkaListener(topics = "") annotation with this requirement. Appreciate any help.
I have already looked at this: How to create separate Kafka listener for each topic dynamically in springboot?
As that is a six-year-old thread, I just want to check whether there is any newer approach available using the Spring Kafka listener.
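For concreteness, what I have today looks roughly like this, with the topic hard-coded (class, topic, and group names are just placeholders):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // the topic name is currently hard-coded; I want it to come from another service at runtime
    @KafkaListener(topics = "orders", groupId = "order-consumer")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}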
Kafka stores the list of topics inside (old) ZooKeeper or (new) KRaft.
Example for ZooKeeper: https://stackoverflow.com/a/29880334/149818
Sorry, I have no experience with the KRaft API yet, but taking into account that the command line tool works the same way, the reference approach is the same.
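As an alternative sketch (not tied to ZooKeeper or KRaft specifics), the standard Kafka AdminClient can list topic names directly; the broker address below is a placeholder:

import java.util.Properties;
import java.util.Set;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicLister {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // placeholder broker address
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // fetch the names of all non-internal topics in the cluster
            Set<String> topics = admin.listTopics().names().get();
            topics.forEach(System.out::println);
        }
    }
}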
We have an application that is already using Spring Cloud Stream with RabbitMQ; some endpoints of the application are sending messages to RabbitMQ. Now we want new endpoints to start sending messages to Kafka, while the existing endpoints continue using RabbitMQ with Spring Cloud Stream. I am not sure if this is even possible, because it means we have to include both the Kafka and Rabbit binder dependencies in pom.xml. What configuration changes do we need to make in the yml file so that the app understands which bindings are for Kafka and which are for Rabbit? Many thanks.
Yes, it is possible. This is what we call a multi-binder scenario, and it is one of the core features, designed specifically to support the use case you are describing.
Here is where you can find more information - https://docs.spring.io/spring-cloud-stream/docs/3.2.1/reference/html/spring-cloud-stream.html#multiple-binders
Also, here is an example that actually provides configuration using both Kafka and Rabbit. While the example is centered around CloudEvents, you can ignore that part and concentrate strictly on the configuration related to the Rabbit and Kafka binders - https://github.com/spring-cloud/spring-cloud-function/tree/main/spring-cloud-function-samples/function-sample-cloudevent-stream
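As a rough sketch of the idea (binding names and destinations below are made up, not taken from that sample), each binding can be pointed at its binder explicitly in application.yml, while the RabbitMQ connection itself keeps coming from the usual spring.rabbitmq.* properties:

spring:
  cloud:
    stream:
      bindings:
        newEvents-out-0:
          destination: new-kafka-topic
          binder: kafka        # this binding produces to Kafka
        legacyEvents-out-0:
          destination: existing-rabbit-exchange
          binder: rabbit       # this binding keeps using RabbitMQ
      kafka:
        binder:
          brokers: localhost:9092   # placeholder Kafka broker address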
Feel free to ask follow up questions once you get familiar with that.
I have been studying Apache Kafka for the past two weeks, and I have managed to understand how Kafka functions and how Kafka producers and consumers work. Now I want to design a small Java program that sends my Apache Tomcat 9 logs and metrics to Kafka, so it can be used for log aggregation. I have searched for how to do this and which methods or tools I need to learn, and I came across Log4j, through which I can produce custom logs in Apache Tomcat, but I don't know how to send those logs to Kafka. Please give some guidance on how to do this if anyone has done this kind of work before.
Thank you.
As commented, you would use a KafkaAppender on the application/server side, pointed at your Kafka brokers, to send data to them; Kafka doesn't pull data from your applications.
You can also write logs directly to disk and use any combination of log processors like Filebeat, Fluent Bit, or rsyslog, which all have Kafka integrations.
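For reference, a minimal sketch of a Log4j 2 configuration using the KafkaAppender (the topic name and broker address are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- sends each log event to the given Kafka topic -->
    <Kafka name="KafkaAppender" topic="tomcat-logs">
      <PatternLayout pattern="%d{ISO8601} %p %c{1} - %m%n"/>
      <!-- placeholder broker address -->
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
  </Appenders>
  <Loggers>
    <!-- keep the Kafka client's own logging quieter to avoid recursive logging -->
    <Logger name="org.apache.kafka" level="info"/>
    <Root level="info">
      <AppenderRef ref="KafkaAppender"/>
    </Root>
  </Loggers>
</Configuration>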
I am in charge of designing a new enterprise application that should handle tons of clients and should be completely fault-free.
In order to do that, I'm thinking about implementing different microservices that will be replicated, so a Eureka server/client solution is perfect for this.
Then, since the Eureka server could be a single point of failure, I found that it is possible to replicate it across multiple instances, which is perfect.
In order not to expose every service, I'm going to put Zuul in front as a gateway; it will use the Eureka server to find the right instance of the backend service to handle each request.
Since Zuul is now the single point of failure, I found that it is possible to replicate this component as well, so if one instance fails I still have the others.
At this point I need to find a way to put a load balancer between the client applications (Android and iOS apps) and the Zuul stack, but a server-side load balancer would itself be a single point of failure, so it is useless.
I would like to ask if there is a way to make our tons of clients connect to a healthy instance of the Zuul application without having any single point of failure. Maybe by implementing Ribbon in the mobile applications so that they choose a healthy instance of Zuul themselves?
Unfortunately, everything will be deployed on a "private" cluster, so I cannot use Amazon Elastic Load Balancer or any other proprietary solution.
Thanks
I am working on a small project using Spring Boot, Kafka, and Spark. So far I have been able to create a Kafka producer in one project and a Spark-Kafka direct stream as a consumer.
I am able to see messages pass through and things seem to be working as intended. However, I have a REST endpoint in the project that is running the consumer. Whenever I disable the direct stream, the endpoint works fine; when I have the stream running, though, Postman says there is no response, and I see nothing in the server logs indicating that a request was ever received.
The Spark consumer is started by a bean at project launch. Is this keeping the normal server on localhost:8080 from being started?
Initially I was kicking off the StreamingContext by annotating it as a @Bean. I instead made the application implement CommandLineRunner, and in the overridden run method I called the method that kicks off the StreamingContext. That allowed the embedded Tomcat to start and fixed the issue.
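A minimal sketch of that arrangement, assuming a helper method that builds and starts the Spark streaming context (class and method names are placeholders):

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class ConsumerApplication implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    @Override
    public void run(String... args) {
        // Runs only after the application context (including the embedded web
        // server) has finished starting, so blocking here no longer prevents
        // the REST endpoint from coming up.
        startDirectStream();
    }

    private void startDirectStream() {
        // placeholder: build the JavaStreamingContext, subscribe to the Kafka
        // topic, then call start() and awaitTermination() here
    }
}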