Trying to understand Apache Kafka as a microservice. Still confused - java

I am learning about Kafka, and I am curious how a Kafka client should fit into a microservices architecture. I want Kafka to keep a log of important information and to enable automatic, appropriate reactions to that information.
My question is: how should Kafka exist alongside the backend as a microservice?
1. Standalone client. The Kafka client (producer/consumer) exists on its own. It exposes an API for the frontend to send data via HTTP requests (POST, PUT). The data is then converted into events and produced to the Kafka cluster by the Kafka client. The consumer also lives here and will make reactive API calls to a separate backend when necessary.
2. A layer in the backend. When the frontend makes an HTTP request (POST, PUT) to the backend, the REST controller in the backend looks at what data was sent and, if necessary, produces it to the Kafka cluster. The consumer also lives here and reacts to new events internally with other services in the backend.
3. Producer in the frontend, consumer in the backend. The frontend (React.js, Vue.js, etc.) has a Kafka client that produces events for important information that requires logging. A consumer Kafka client also exists in the backend and reacts to events internally with other services. The backend also exposes an API for non-Kafka requests.
Maybe a combination of these? Or are they all wrong?
Confluent has been quite helpful so far, but I think I am missing something important to piece all of it together.
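For concreteness, here is a minimal sketch of option 2, assuming Spring Boot with spring-kafka on the classpath; the endpoint path, the topic name "user-events", and the plain String payload are hypothetical placeholders, not something from the question.
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Option 2: the REST controller in the backend is also the Kafka producer.
@RestController
public class UserEventController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public UserEventController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping("/users/{id}/events")
    public ResponseEntity<Void> publish(@PathVariable String id, @RequestBody String body) {
        // Convert the HTTP payload into an event and produce it to the cluster;
        // "user-events" is a hypothetical topic name.
        kafkaTemplate.send("user-events", id, body);
        return ResponseEntity.accepted().build();
    }
}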

Related

Implementing Kafka inside a Java Client Library

I have a back-end application that exposes APIs for my clients to consume. Until now, the only consumers were direct frontend dashboards. Recently, I got a client for my service who wants to consume these APIs from his own backend application.
I am planning to build a Java client library for this, which calls my APIs and has a built-in in-memory cache. Up to this point everything is clear, but I want my client to have Kafka as well. One option is for the backend application consuming this API to have a Kafka listener inside its own code; the other idea that came to mind is to build a Kafka listener inside my client library itself. Is that a good idea, assuming the Kafka config will be present inside the backend application that is going to use my client library?
Kafka is a backend service. If you are providing your own REST APIs for clients, then those are not consumed by a Spring @KafkaListener.
If you add your own @KafkaListener, then you could store that data in your own app and expose it via HTTP endpoints, sure.
But that still wouldn't solve how external services plan on using Kafka on their own. If both services are connected to the same Kafka cluster, then you don't need HTTP endpoints; instead you would use a KafkaTemplate to send data to Kafka, after which the external service would consume it via its own @KafkaListener.
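For illustration, a minimal sketch of the listener-plus-HTTP-endpoint idea above, assuming Spring Boot with spring-kafka; the topic name "orders", the group id, and the in-memory list are illustrative assumptions.
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderFeedController {

    // Records consumed from Kafka, kept in memory so they can be served over HTTP.
    private final List<String> received = new CopyOnWriteArrayList<>();

    @KafkaListener(topics = "orders", groupId = "order-feed")
    public void onMessage(String value) {
        received.add(value);
    }

    @GetMapping("/orders")
    public List<String> orders() {
        return received;
    }
}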

Subscribing a rest api to kafka events

I recently started working on Kafka for a project of mine.
I am trying to figure out how I can subscribe a REST API to Kafka events rather than running a consumer that keeps listening to the topic.
I came across Kafka Connect, but I am not able to figure out exactly how to achieve this.
Details: I am running a Spring Boot project as a producer, which uses the KafkaTemplate provided by Spring to publish messages. The consumer is also a Spring Boot project, which exposes REST APIs.
There is no way to get data out of Kafka other than using the Kafka consumer library. Kafka Connect would do the same internally.
A RESTful service is stateless. Having a consumer is more stateful (offset maintenance, for example).
If you want streaming events in general, maybe you could look into WebSockets or gRPC instead.
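If the WebSocket route is taken with Spring, a long-running consumer is still needed, but it can push each record out to subscribed clients instead of being polled over REST. A minimal sketch, assuming spring-kafka and Spring's STOMP WebSocket support are both configured; the topic "events" and the destination "/topic/events" are hypothetical.
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Component;

@Component
public class EventRelay {

    private final SimpMessagingTemplate messagingTemplate;

    public EventRelay(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    // The consumer still maintains offsets, but each record is pushed out
    // to subscribed WebSocket clients rather than being fetched over REST.
    @KafkaListener(topics = "events", groupId = "ws-relay")
    public void relay(String value) {
        messagingTemplate.convertAndSend("/topic/events", value);
    }
}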

spring broker channel access

I have a few questions on how to use Spring WebSockets and messaging. I have a program that interfaces with an external web service producer endpoint, which sends data payloads to my web service consumer endpoint, while on the other end of my program I route these data payloads to multiple WebSocket connections (STOMP and SockJS). The external web service producer provides a subscription ID in each data payload for every query request, so my approach is to send the payloads back to the broker with a SimpMessagingTemplate, each under its own unique destination (i.e. /user/{subscriptionId}/subscribe). That way I can subscribe each WebSocket client to an existing destination if a duplicate query was made, and only request a new subscription from the external web service producer otherwise.
How do I access my SimpMessagingTemplate from within a different component, such as my web service consumer, so that I can send the data payloads to my message broker? Do I just declare my SimpMessagingTemplate static and add a getter function in the controller where the template object is stored?
How do I get a list of all known destinations, as well as the number of STOMP client subscribers to each one? The external web service producer sets a termination time for each subscription, so I would like to implement auto-renewal requests if there are still subscribers to a destination. I suppose I could keep track of this myself with maps/caches and update them every time a WebSocket session is opened or closed, but I would prefer to do it with Spring if possible, as that minimizes my risk and is probably less error-prone; or perhaps a full-featured broker such as RabbitMQ or ActiveMQ is necessary for this.
Found the answers I needed:
All I need to do is use Spring's autowiring support, and the bean will be injected with the object already initialized:
@Autowired
private SimpMessagingTemplate messagingTemplate;
A full-featured broker is needed for this; however, for what I want to do, I decided it would be too much work and essentially not needed. I decided I will just implement my own subscription checking against the 3rd-party web service with Java maps/caches. I went to painstaking lengths, setting breakpoints in Eclipse inside the Java .class files (even with a Java decompiler plugin), and found out that all of this information lives in the DefaultSubscriptionRegistry class. Although I cannot access it through the API given by Spring, I can rest assured it is being properly handled by the application. When a client subscribes to or disconnects from my application, the information in the internal maps/caches of the registry is added and removed accordingly. Furthermore, I can maintain my own maps/caches by listening for the events Spring publishes, such as SessionSubscribeEvent or SessionDisconnectEvent, with an ApplicationListener; these are triggered whenever a client subscribes or disconnects.
public class SubscribeEvent implements ApplicationListener<SessionSubscribeEvent>
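A fleshed-out sketch of that listener, tracking subscriber counts per destination as described above; the map and the counting logic are illustrative assumptions, not Spring APIs.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.context.ApplicationListener;
import org.springframework.messaging.simp.stomp.StompHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionSubscribeEvent;

@Component
public class SubscribeEvent implements ApplicationListener<SessionSubscribeEvent> {

    // destination -> number of active STOMP subscribers
    private final Map<String, Integer> subscriberCounts = new ConcurrentHashMap<>();

    @Override
    public void onApplicationEvent(SessionSubscribeEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        String destination = accessor.getDestination();
        if (destination != null) {
            subscriberCounts.merge(destination, 1, Integer::sum);
        }
    }

    // Used to decide whether a subscription with the external producer
    // should be auto-renewed before its termination time.
    public int subscribersFor(String destination) {
        return subscriberCounts.getOrDefault(destination, 0);
    }
}
A matching ApplicationListener for SessionDisconnectEvent would decrement the counts, which is what the auto-renewal check needs.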

Integrate api service with message queue

Currently I'm doing the integration work for one project. In this project, we need to expose a RESTful API with the Java framework Wink. Since we have several other components to integrate, we put a message queue (ActiveMQ) between the API layer and the other service parts. But that means the API layer communicates with the lower level in an asynchronous way. In my understanding, a RESTful API should run in a synchronous way: for example, in the API layer, if one thread receives a request, the response gets returned on the same thread. So there is an internal mismatch between these two communication styles. My question is how we can integrate these two parts so that the API layer works without sacrificing message queue features like reliability and performance.
Any suggestions will be appreciated here.
Thanks
Asynchronous callbacks are possible in REST communication; see this Jersey framework example:
https://jersey.java.net/documentation/latest/async.html
But yes, the latency should be controlled, as your client will be waiting for the server to respond; it would be good if the client calls it in an AJAX way.
The simplest way would be to fork a new task through an executor service, which sends a message on one channel to the lower-level API and listens for the response on another channel (MQ communication). On completion it returns a response, which the higher-level API then pushes back to the client.
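For illustration, a minimal sketch of that suspend-and-resume approach using the standard JAX-RS 2.0 async API (the one the linked Jersey docs describe); the resource path and the queue round trip are stubbed, hypothetical placeholders.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;

@Path("/orders")
public class OrderResource {

    private static final ExecutorService executor = Executors.newCachedThreadPool();

    @GET
    public void getOrders(@Suspended final AsyncResponse asyncResponse) {
        // The request thread is released immediately; the worker completes
        // the response once the MQ round trip finishes.
        executor.submit(() -> asyncResponse.resume(requestOverQueue()));
    }

    private String requestOverQueue() {
        // Placeholder for the ActiveMQ request/reply call (send on one
        // channel, wait for the correlated response on another).
        return "orders";
    }
}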

Which websocket server implementation can be combined with rabbitmq?

Hi there, we are planning to integrate a WebSocket server implementation as a frontend to our RabbitMQ systems. Currently we are running some Java/Groovy/Grails-based apps that use the RabbitMQ server.
We would like a simple WebSocket server implementation that handles connections etc. and passes requests on to our RabbitMQ layer.
Clients (hardware devices) would connect to a WebSocket layer that hands each request to RabbitMQ. Some other process takes on the job of handling the request and places data back in the queue if needed, so that RabbitMQ can pass the data back to the client via WebSockets.
I am a bit lost in the land of WebSockets, so I am wondering what other people would advise using.
You can use RabbitMQ itself with the Web-STOMP plugin and SockJS for web frontends. You can expose this directly or via something like HAProxy.
http://www.rabbitmq.com/blog/2012/05/14/introducing-rabbitmq-web-stomp/
In version 3.x it is included by default; just enable the plugin.
For Java there are a couple of choices:
Atmosphere
Vert.x
Play 2.0
Netty directly
There are many ways to skin this cat. Atmosphere will probably get you the furthest if you are already using Grails. You will have to write a custom Broadcaster; IIRC there is not one for RabbitMQ, but you can just copy one of the existing ones.
Also, with RabbitMQ or any queue, you are going to have to decide whether to make a queue for each user (browser using a WebSocket) or to aggregate based on some hash and then dispatch internally (i.e. make a giant map of mailboxes; see the sketch below). Akka would be a good choice for mapping to mailboxes.
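A minimal, library-agnostic sketch of the "giant map of mailboxes" idea: one shared RabbitMQ consumer dispatches to per-user WebSocket sessions held in a map. The WebSocketSession interface here is a hypothetical stand-in for whatever the chosen server (Atmosphere, Vert.x, Netty, etc.) actually provides.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MailboxDispatcher {

    // userId -> connected WebSocket session (the "mailbox")
    private final Map<String, WebSocketSession> mailboxes = new ConcurrentHashMap<>();

    public void register(String userId, WebSocketSession session) {
        mailboxes.put(userId, session);
    }

    public void unregister(String userId) {
        mailboxes.remove(userId);
    }

    // Called by the single RabbitMQ consumer for each message; the routing
    // key is assumed to carry the target user's id.
    public void dispatch(String userId, String payload) {
        WebSocketSession session = mailboxes.get(userId);
        if (session != null) {
            session.send(payload);
        }
    }

    // Hypothetical abstraction over the concrete WebSocket library's session.
    public interface WebSocketSession {
        void send(String payload);
    }
}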
