I have a Spring Boot application that uses Kafka to produce and consume messages to/from other applications.
I implemented a new producer whose messages should be sent to different clients located on different development servers. The Kafka configuration is specified in the project's application.yml. This was the previous configuration:
spring:
  kafka:
    bootstrap-servers: server.a:port
    producer:
      properties:
        client.rack: server.a
    consumer:
      clientId: a-client-id
      groupId: a-group-id
      properties:
        client.rack: server.a
    jaas:
      options:
        username: an-username
        password: a-password
Now, with the new producer, I need to produce messages to a second server, server.b, so:
spring:
  kafka:
    bootstrap-servers:
      - server.a:port
      - server.b:port
    producer:
      properties:
        client.rack:
          - server.a
          - server.b
    consumer:
      clientId: a-client-id
      groupId: a-group-id
      properties:
        client.rack: server.a
    jaas:
      options:
        username: an-username
        password: a-password
However, this seems to be sending the produced messages to server.b only.
I'm not sure whether my config is wrong. From what I've read this seems to be the proper way of doing it but, obviously, I did something wrong because it's not working. Bit lost here.
This isn't how Kafka works. Clients always send to the partition leader. Leaders can exist on any rack, and clients cannot control where they send data: they write to the leader first, and you then configure acks to allow replication to followers on other racks.
Also, client.rack is a string, not a list.
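If server.a and server.b are in fact two independent clusters (an assumption; a bootstrap-servers list is just multiple entry points into one and the same cluster), the usual approach is one producer configuration per cluster, e.g. two ProducerFactory/KafkaTemplate beans in Spring Kafka. A minimal sketch of the per-cluster properties, with placeholder addresses:

```java
import java.util.Properties;

// Sketch: build one producer configuration per target cluster. In Spring Kafka
// each Properties object would back its own ProducerFactory/KafkaTemplate bean.
// "server.a:9092" / "server.b:9092" and the rack names are placeholders.
class PerClusterProducerConfigs {
    static Properties forCluster(String bootstrapServers, String rack) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // client.rack is a single string identifying where THIS client runs,
        // not a list of destinations.
        props.put("client.rack", rack);
        return props;
    }
}
```

Each producer then sends only to its own cluster; which broker within that cluster receives the write is still decided by partition leadership, not by configuration.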
Good day,
At this moment I am working on a very simple gateway that (for now) only needs to redirect incoming HTTP POST and GET requests.
THE SETUP:
The Eureka Server: the location where my Spring Boot microservices are registered;
The Spring Gateway: maps all incoming HTTP POST and GET requests and routes them to the proper microservice;
The Spring Boot microservices: doing just some thingies as requested :)
Note: I'm kinda new to this gateway stuff, just you know :).
The microservice is registered fine with the Eureka server: its web-based GUI shows me that the instance "MY-MICRO-SERVICE" is registered. Other (Spring Boot) services can use that name ("MY-MICRO-SERVICE") without issues, so for them it works fine. Just this gateway can't handle the instance name; it seems to only accept IP addresses (which I want to avoid, as the microservice can move between servers and therefore change its IP address). And the Eureka server is not configured to only allow/use IP addresses.
THE ISSUE:
All runs smoothly when the gateway has a route that holds the IP address of the microservice. But what I want is to let the gateway resolve the service ID from the Eureka server. And if I do that, it throws a java.net.UnknownHostException: MY-MICRO-SERVICE: Temporary failure in name resolution.
THE QUESTION:
Now why can't I use the name of the Spring application "MY-MICRO-SERVICE" (being the registered Spring Boot microservice) in the Spring Gateway, while that construction works fine when used in other microservices? Can't a YAML config file handle such instance names, or only IP addresses?
THE DETAILS
The gateway is mostly configured via a yaml config file. There is only one simple Java class that kicks off the gateway application. The routing is all set in the yaml config file.
The Spring Gateway application class
@SpringBootApplication
@EnableEurekaClient
public class MyGatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyGatewayApplication.class, args);
    }
}
The Gateway Yaml configuration file (application.yml)
spring:
  application:
    name: my-gateway
  cloud:
    gateway:
      discovery:
        locator:
          lowerCaseServiceId: true
          enabled: true
      globalcors:
        corsConfigurations:
          '[/**]':
            allowedOrigins: "*"
            allowedMethods:
              - GET
              - POST
      routes:
        - id: my_route
          uri: http://MY-MICRO-SERVICE
          predicates:
            - Path=/test/**
server:
  port: 8999
info:
  app:
    properties: dev
The Error
java.net.UnknownHostException: MY-MICRO-SERVICE: Temporary failure in name resolution
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) ~[na:na]
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) ~[na:na]
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1515) ~[na:na]
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) ~[na:na]
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505) ~[na:na]
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364) ~[na:na]
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298) ~[na:na]
at java.base/java.net.InetAddress.getByName(InetAddress.java:1248) ~[na:na]
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146) ~[netty-common-4.1.36.Final.jar:4.1.36.Final]
...
Issue has been fixed.
I changed the "http" scheme to "lb" and that fixed my issue. To my understanding, "lb" stands for load balancing. I have no load balancer active on my local machine, but anyway: this works.
              - POST
      routes:
        - id: my_route
          uri: lb://MY-MICRO-SERVICE
          predicates:
            - Path=/test/**
I created a simple Micronaut app locally with the 'consul-config' feature. My code can connect to and get properties from the Consul key/value store. I have the below configuration in my bootstrap.yml:
micronaut:
  application:
    name: user-service
  config-client:
    enabled: true
consul:
  client:
    registration:
      enabled: true
    defaultZone: "${CONSUL_HOST:localhost}:${CONSUL_PORT:8500}"
Everything is fine, but I don't want to use Consul on my local computer, because the network activity it involves makes startup take some time. I want to avoid Consul locally, but I need it in the dev, test and prod environments.
I have the below code in my app. The @Value annotation will try to load 'db-schema' and, if it is not found, uses 'local' as the default value. So if Consul is disabled my app should use 'local'; otherwise it should load values from the Consul configuration.
@Value("${db-schema:local}")
private String dbSchema;
How can I do this without code changes, using only environment options?
I tried setting the VM option '-Dmicronaut.config-client.enabled=false', but it still loads bootstrap.yml and tries to connect to Consul.
There are a number of ways you can do it. One is to create a file like src/main/resources/application-local.yml which contains the following:
consul:
  client:
    registration:
      enabled: false
And in your local environment export MICRONAUT_ENVIRONMENTS=local.
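If the config client itself still contacts Consul during startup, the same environment-specific mechanism can switch it off too. Since config-client is read during the bootstrap phase, this would go in a bootstrap-local.yml rather than application-local.yml (assuming your Micronaut version supports environment-specific bootstrap files):

```yaml
# src/main/resources/bootstrap-local.yml (assumed layout)
micronaut:
  config-client:
    enabled: false
consul:
  client:
    registration:
      enabled: false
```

With MICRONAUT_ENVIRONMENTS=local exported, these values override the defaults from bootstrap.yml, and @Value("${db-schema:local}") falls back to 'local'.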
I have implemented an API gateway with Spring Cloud Gateway. I have added the Redis rate limiter with the configuration below:
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
      routes:
        - id: user-service
          uri: lb://user-service
          predicates:
            - Path=/user/**
          filters:
            - StripPrefix=1
            - name: RequestRateLimiter
              args:
                key-resolver: "#{@remoteAddrKeyResolver}"
                redis-rate-limiter.replenishRate: 1
                redis-rate-limiter.burstCapacity: 5
---
spring:
  redis:
    host: localhost
    port: 6379
    database: 0
I can successfully block user requests with the error code 429 Too Many Requests.
Now, I want the same event to be inserted into the Redis database so that I can analyze it.
What configuration do I need to make?
I have visited a blog where the author shows it, but I couldn't find the code related to it. Here is a link to that blog.
Also, can anyone explain the exact difference between replenishRate and burstCapacity, with some example? I am a bit confused here.
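For intuition on the last question: RequestRateLimiter is token-bucket based. replenishRate is how many tokens are added per second (the steady-state requests per second you allow), and burstCapacity is the maximum number of tokens the bucket holds (the largest momentary burst). The sketch below only illustrates the semantics; it is not Spring's actual Redis-backed implementation:

```java
// Simplified token bucket: replenishRate tokens/second, capped at burstCapacity.
// Illustration of the RequestRateLimiter semantics, not Spring's implementation.
class TokenBucket {
    private final double replenishRate;   // tokens added per second
    private final double burstCapacity;   // maximum tokens the bucket holds
    private double tokens;
    private long lastRefillMillis;

    TokenBucket(double replenishRate, double burstCapacity, long nowMillis) {
        this.replenishRate = replenishRate;
        this.burstCapacity = burstCapacity;
        this.tokens = burstCapacity;       // start full, allowing an initial burst
        this.lastRefillMillis = nowMillis;
    }

    boolean tryAcquire(long nowMillis) {
        double elapsedSeconds = (nowMillis - lastRefillMillis) / 1000.0;
        tokens = Math.min(burstCapacity, tokens + elapsedSeconds * replenishRate);
        lastRefillMillis = nowMillis;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;   // request allowed
        }
        return false;      // request rejected (would map to HTTP 429)
    }
}
```

With replenishRate: 1 and burstCapacity: 5 as in the config above, five requests can go through back to back, the sixth is rejected, and roughly one request per second is allowed from then on.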
I'm using Spring Cloud Stream (Spring Boot) to communicate with a RabbitMQ instance.
The project can connect to RabbitMQ through AMQP, but it does not work with STOMP. Does anyone know whether STOMP is supported, and if so, how to configure it? (My RabbitMQ instance has port 61613 open.)
The application.yml file is like this:
server:
  port: 8080
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: cloud-stream
  rabbitmq:
    addresses: amqp://192.168.231.130:5672 # this works
    # addresses: stomp://192.168.231.130:61613 # this does not work
    username: test
    password: test
STOMP is not currently a supported binder protocol.
I have an unsecured Kafka instance with 2 brokers. Everything was running fine until I decided to configure ACLs for topics; after the ACL configuration my consumers stopped polling data from Kafka, and I keep getting the warning "Error while fetching metadata with correlation id". My broker properties look like below:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
And my client configuration looks like below:
bootstrap.servers=localhost:9092
topic.name=topic-name
group.id=topic-group
I've used the below command to configure the ACL:
bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --operation Read --allow-host localhost --consumer --topic topic-name --group topic-group
After having all the above configuration in place, when I start the consumer it receives no messages. Can someone point out where I'm going wrong? Thanks in advance.
We are using ACLs successfully, but not with the PLAINTEXT protocol.
IMHO you should use the SSL protocol and, instead of localhost, use the real machine name.