I have a back-end application which exposes APIs for my clients to consume. Until now the requirement came only from direct frontend dashboards. Recently I got a client for my service who wants to consume these APIs from his own backend application.
I am planning to build a client library in Java for this, which calls my APIs and has a built-in in-memory cache. Up to this point everything is clear, but I want my client to have Kafka as well. One way is for the backend application that wants to consume this API to have a Kafka listener inside its own application; the other idea that came to mind is: what if I build a Kafka listener inside my client library itself? Is it a good idea to do that, assuming that the Kafka config will be present inside the backend application that is going to use my client library?
Kafka is a backend service. If you are providing your own REST APIs for clients, then that is not something a Spring @KafkaListener consumes.
If you add your own @KafkaListener, then you could store that data in your own app and expose it via HTTP endpoints, sure.
But that still wouldn't solve how external services plan on using Kafka on their own. If both services are connected to the same Kafka cluster, then you don't need HTTP endpoints; rather, you would use KafkaTemplate to send data to Kafka, after which the external service would consume it via its own @KafkaListener.
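To illustrate that last point, here is a minimal Spring Kafka sketch of both sides, assuming the two backends share a cluster; the topic, group id, and class names are placeholders, not something from the original setup:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class EventBridge {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public EventBridge(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Your backend: publish to a shared topic instead of exposing another HTTP endpoint.
    public void publish(String key, String payload) {
        kafkaTemplate.send("shared-events", key, payload);   // hypothetical topic
    }

    // The client's backend (or your client library, if you embed the listener there):
    // subscribe to the same topic with its own consumer group.
    @KafkaListener(topics = "shared-events", groupId = "client-backend")
    public void onEvent(String payload) {
        // e.g. refresh the library's in-memory cache with the new data
        System.out.println("Received: " + payload);
    }
}
```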
Related
I am learning about Kafka, and I am curious as to how a Kafka client should exist in a microservices architecture. I want Kafka to keep a log of important information and enable automatic, appropriate reactions to that information.
My question is: how should Kafka exist alongside the backend as a microservice?
1. Standalone client.
- The Kafka client (producer/consumer) exists alone. It exposes an API for the frontend to do HTTP requests (POST, PUT), sending data. The data is then converted into events and produced to the Kafka cluster by the Kafka client.
- The consumer also lives here, and will make reactive API calls to a separate backend when necessary.
2. A layer in the backend (see the sketch below).
- When the frontend makes an HTTP request (POST, PUT) to the backend, the 'REST controller' in the backend looks at what data is sent and, if necessary, produces it to the Kafka cluster.
- The consumer also lives here, and reacts to new events internally with other services in the backend.
3. Producer in the frontend, consumer in the backend.
- The frontend (React.js, Vue.js, etc.) has a Kafka client that produces events for important information that requires logging. A consumer Kafka client also exists in the backend and reacts to events internally with other services. The backend also exposes an API for non-Kafka requests.
Maybe a combination of these? Or are these all wrong?
Confluent has been quite helpful so far, but I think I am missing something important to piece all of it together.
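For a concrete reference point, here is a minimal sketch of what option 2 could look like with Spring Boot and spring-kafka; the endpoint path and topic name are invented for illustration:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AuditingController {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public AuditingController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // The frontend POSTs as usual; the controller decides what is important
    // and produces it to the Kafka cluster as an event.
    @PostMapping("/orders")                                   // hypothetical endpoint
    public String createOrder(@RequestBody String orderJson) {
        kafkaTemplate.send("important-events", orderJson);    // hypothetical topic
        return "accepted";
    }
}
```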
I recently started working on Kafka for a project of mine.
I am trying to figure out how I can subscribe a REST API to a Kafka event rather than running a consumer which keeps listening to the topic.
I came across Kafka Connect, but I am not able to figure out exactly how to achieve this.
Details: I am running a Spring Boot project as a producer which uses the KafkaTemplate provided by Spring to publish the message. Also, the consumer is a Spring Boot project which exposes REST APIs.
There is no way other than using the Kafka consumer library to get data out of Kafka. Kafka Connect would do the same thing internally.
A RESTful service is stateless. Having a consumer is more stateful (offset maintenance, for example).
If you want streaming events in general, maybe you could look into WebSockets or gRPC instead.
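For reference, this is roughly what the minimum looks like with the plain Kafka consumer library; a sketch only, and the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventPoller {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("group.id", "rest-facade");                // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));           // hypothetical topic
            while (true) {
                // The consumer has to keep polling; offsets are tracked per group,
                // which is exactly the stateful part a plain REST endpoint does not have.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // hand off to whatever the REST layer serves, e.g. update a cache or DB
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```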
This question is more of a design/architecture question. Let's say I have a server application that provides Spring-based web services and a client application. Currently I have a few Java classes on the client side where the endpoint of the service is hardcoded (e.g. http://myserver/some/webservice).
What is a good way to map the client side properly to the web service? Just off the top of my head: is there a library that helps evaluate URLs with parameters and maps them to the properties of a POJO using reflection?
As I understand your question, 2 options pop into my head:
1) Eureka - Service Discovery for Spring Cloud.
It can help you: you give your client the Eureka URL, and Eureka will supply the client with the desired service URL. So if the server is going down, Eureka can point the client to a backup server (this will be seamless to the client), or even to different URLs for different services on the same server.
2) Spring Cloud Config
A configuration service that stores the URLs in a DB; the client will pull those URLs from there and make its calls to configurable URIs.
Spring allows you to update those URLs in the DB and will use Spring Cloud Config to push the new URLs down to the clients without any downtime... (this might fit you better if you are not interested in load balancing and the other features provided by Eureka).
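For option 1, a rough sketch of what the client side could look like, assuming spring-cloud-starter-netflix-eureka-client is on the classpath and the server registers under a hypothetical service id my-server:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class ClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ClientApplication.class, args);
    }

    // Instead of hardcoding http://myserver/..., the client calls a logical
    // service id and lets the discovery registry resolve the actual host.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// elsewhere in the client:
// String body = restTemplate.getForObject("http://my-server/some/webservice", String.class);
```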
I made a web application in Java Spring MVC.
In this application, traffic will be around 10,000 file uploads per minute.
So in the controller we wrote code to send the file's byte data to a Kafka producer.
Where should I implement the consumer that can receive the file and do the processing in Java?
Do I need to make a service which runs separately, like a Windows service? Is this a good way to do it?
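If it helps, a minimal sketch of such a separately running consumer service (Spring Boot with spring-kafka; the topic name, group id, and the assumption that values are raw byte arrays are mine, not from the question):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;

// Runs on its own (e.g. installed as a system/Windows service), separate from the web app.
@SpringBootApplication
public class FileProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(FileProcessorApplication.class, args);
    }

    // Assumes the producer wrote the raw file bytes and the consumer is configured
    // with a ByteArrayDeserializer for values.
    @KafkaListener(topics = "file-uploads", groupId = "file-processor")
    public void process(byte[] fileBytes) {
        // do the actual file processing here and store or publish the result
        System.out.println("Processing file of " + fileBytes.length + " bytes");
    }
}
```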
Hi there, we are planning on integrating a websocket server implementation as a frontend to our RabbitMQ systems. Currently we are running some Java/Groovy/Grails-based apps which use the RabbitMQ server.
We would like to have a simple websocket server implementation that handles connections etc and that passes the request to our RabbitMQ layer.
Clients (hardware devices) would connect to a websocket layer that hands the request to RabbitMQ. Some other process takes on the job of handling the request and places data back in the queue if needed, so that RabbitMQ is able to pass the data via websockets back to the client.
I am a bit lost in the land of websockets, so I am wondering what other people would advise using.
You can use RabbitMQ itself with the Web-STOMP plugin and SockJS for web frontends. You can expose this directly or via something like HAProxy.
http://www.rabbitmq.com/blog/2012/05/14/introducing-rabbitmq-web-stomp/
In version 3.x it is included by default; you just have to enable the plugin.
For Java there are a couple of choices:
Atmosphere
Vert.x
Play 2.0
Netty directly
There are many ways to skin the cat. Atmosphere will probably get you the furthest if you are already using Grails. You will have to write a custom Broadcaster; IIRC there is not one for RabbitMQ, but you can just copy one of the existing ones.
Also, with RabbitMQ or any queue you're going to have to decide whether you're going to make queues for each user (browser using a websocket) or you're going to aggregate based on some hash and then dispatch internally (i.e. make a giant map of mailboxes). Akka would be a good choice for mapping to mailboxes.
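To make the second option concrete, a rough sketch with the RabbitMQ Java client: one shared outbound queue and an internal map from user id to websocket session. The queue name, the "userId" header, and the session interface are placeholders, not part of any existing setup:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class OutboundDispatcher {

    // Stand-in for whatever session type your websocket layer provides.
    interface WebSocketSession {
        void send(String message);
    }

    private final Map<String, WebSocketSession> sessions = new ConcurrentHashMap<>();

    public void register(String userId, WebSocketSession session) {
        sessions.put(userId, session);
    }

    public void start() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                                   // assumption: local broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("ws-outbound", true, false, false, null);  // hypothetical shared queue

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            // Assumes the producing side sets a "userId" header on each message.
            Map<String, Object> headers = delivery.getProperties().getHeaders();
            Object userId = headers == null ? null : headers.get("userId");
            WebSocketSession session = userId == null ? null : sessions.get(userId.toString());
            if (session != null) {
                session.send(new String(delivery.getBody(), StandardCharsets.UTF_8));
            }
        };
        channel.basicConsume("ws-outbound", true, onMessage, consumerTag -> { });
    }
}
```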