Unable to establish connection with Elasticsearch 8.1 (Java)

I have elasticsearch 8.1 running in docker with this docker compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    container_name: es-node
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    volumes:
      - ./elastic-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOST=http://localhost:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
I'm trying to make a simple GET request to the es cluster using the org.elasticsearch.client.RestClient.
Request:
Request request = new Request("GET", "_cluster/health");
try {
    return restClient.performRequest(request).toString();
} catch (IOException e) {
    throw new RuntimeException(e);
}
Rest client initialisation:
var hosts = buildClusterHosts(transportAddresses);
restClient = RestClient.builder(hosts).build();
if (isElasticSniffEnabled) {
    sniffer = Sniffer.builder(restClient).build();
}
var esTransport = new RestClientTransport(restClient, new JacksonJsonpMapper());
elasticsearchClient = new ElasticsearchClient(esTransport);
Main method:
var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9200")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
The buildClusterHosts() method correctly builds an array of HttpHost (in this case only one) and provides it to the rest client builder.
In theory this should be enough, but I keep getting Caused by: java.net.ConnectException: Timeout connecting to [/172.20.0.2:9200] and I'm not sure why.

TL;DR: It seems you are confusing the transport port and the REST API port of Elasticsearch.
To fix it, you will first need to expose the port of the transport layer, which is 9300 by default:
services:
  elasticsearch:
    ports:
      - 9300:9300
Then update the main method to point at the transport port:

var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9300")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());

Figured out what the problem was. To use the Sniffer you need to add http.publish_host=localhost as an environment variable in the docker compose file.
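For reference, a minimal sketch of that change (the sniffer asks Elasticsearch for its nodes' published addresses, which default to the container-internal IP, such as the 172.20.0.2 in the error above; publishing localhost makes the sniffed addresses reachable from the host):

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
  environment:
    - xpack.security.enabled=false
    - discovery.type=single-node
    - http.publish_host=localhost  # addresses returned to the sniffer now point at the host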

Related

Microservices communication with a Spring gateway

I'm very new to Spring Boot and I'm trying to explore this world.
I created three microservices that communicate with each other. Everything seems to work except the Spring gateway that I just added.
The API call returns:
Error: Socket hang up
This is the configuration I made, but I'm sure it is not 100% correct. Can you help me find the bad config?
This is the docker-compose:
version: '3.4'

x-common-variables: &common-variables
  DATASOURCE_USER: ${DB_USER}
  DATASOURCE_PASSWORD: ${DB_PASSWORD}
  DATASOURCE_PORT: ${DB_PORT}

services:
  apigateway:
    build:
      context: .
      dockerfile: APIgateway/Dockerfile
    ports:
      - "4444:4444"
    restart: always
  paymysqldb:
    container_name: paymysqldb
    image: mysql
    ports:
      - "3313:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_PAY}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - paystorage:/var/lib/mysql
  usermysqldb:
    container_name: usermysqldb
    image: mysql
    ports:
      - "3311:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_USER}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - userstorage:/var/lib/mysql
  catalogmysqldb:
    container_name: catalogmysqldb
    image: mysql
    ports:
      - "3312:3306"
    environment:
      - MYSQL_DATABASE=${DB_DATABASE_CATALOG}
      - MYSQL_USER=${DB_USER}
      - MYSQL_PASSWORD=${DB_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD}
    volumes:
      - catalogstorage:/var/lib/mysql
  paymanager:
    container_name: paymanager
    image: arausa/payimage
    build:
      context: .
      dockerfile: MicroServices/PaymentManager/Dockerfile
    depends_on:
      - paymysqldb
    ports:
      - "3333:3333"
    restart: always
    environment:
      <<: *common-variables
      PM_DATASOURCE_HOST: ${DB_HOST_PAY}
      PM_DATASOURCE_NAME: ${DB_DATABASE_PAY}
  usermanager:
    container_name: usermanager
    image: arausa/userimage
    build:
      context: .
      dockerfile: MicroServices/UserManager/Dockerfile
    depends_on:
      - usermysqldb
    ports:
      - "1111:1111"
    restart: always
    environment:
      <<: *common-variables
      UM_DATASOURCE_HOST: ${DB_HOST_USER}
      UM_DATASOURCE_NAME: ${DB_DATABASE_USER}
    expose:
      - "1111"
  catalogmanager:
    container_name: catalogmanager
    image: arausa/catalogimage
    build:
      context: .
      dockerfile: MicroServices/CatalogManager/Dockerfile
    depends_on:
      - catalogmysqldb
    ports:
      - "2222:2222"
    restart: always
    environment:
      <<: *common-variables
      CM_DATASOURCE_HOST: ${DB_HOST_CATALOG}
      CM_DATASOURCE_NAME: ${DB_DATABASE_CATALOG}
  # kafka uses zookeeper, which keeps track of brokers, network topology, and synchronization info
  zookeeper:
    image: wurstmeister/zookeeper
  # identifies the kafka broker
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092" # default port for the kafka broker
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 # tells kafka where zookeeper is running

volumes:
  userstorage:
  catalogstorage:
  paystorage:
This is the API gateway class:

package com.example.apigateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableDiscoveryClient
public class APIgatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(APIgatewayApplication.class, args);
    }

    @Bean
    public RouteLocator myRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(p -> p
                        .path("/user/**")
                        .uri("http://usermanager:1111"))
                .route(p -> p
                        .path("/catalog/**")
                        .uri("http://catalogmanager:2222"))
                .route(p -> p
                        .path("/payment/**")
                        .uri("http://paymanager:3333"))
                .build();
    }
}
This is the application.properties:
spring.application.name=apigateway
server.port=4444
And finally this is the .env, even if I don't think it is useful for this problem:
DB_DATABASE_PAY=PayDB
DB_HOST_PAY=paymysqldb
DB_DATABASE_USER=UserDB
DB_HOST_USER=usermysqldb
DB_DATABASE_CATALOG=CatalogDB
DB_HOST_CATALOG=catalogmysqldb
DB_USER=db_user
DB_PASSWORD=ale2022
DB_ROOT_PASSWORD=user
DB_PORT=3306
My bad, I forgot to recreate the image of the API gateway. The error is now:
500 Server Error for HTTP POST "/user/addUser"
apigateway_1 |
apigateway_1 | io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: usermanager/172.18.0.10:1111

Healthcheck not working at all when using docker-compose (my service does not wait for Kafka to be started before launching)

I have three services on my docker-compose:
version: '3.4'
services:
  setup-topics:
    image: 'bitnami/kafka:2'
    hostname: setup-topics
    container_name: setup-topics
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
      ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic orders && \
      ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic redis'"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
    depends_on:
      - kafka
  kafka:
    container_name: kafka
    hostname: kafka
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/opt/kafka'
      - './Ping.jar:/Ping.jar'
    environment:
      - KAFKA_HEAP_OPTS=-Xms1g -Xmx1g
      - KAFKA_JVM_PERFORMANCE_OPTS=-Xms512m -Xmx512M
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka-server:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    healthcheck:
      test: ["CMD", "java", "-jar", "/Ping.jar", "localhost", "9092"]
      interval: 30s
      timeout: 10s
      retries: 4
  zookeeper:
    container_name: zookeeper
    hostname: zookeeper
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOOKEEPER_CLIENT_PORT=32181
      - ZOOKEEPER_TICK_TIME=2000
And here is the Ping.java file (found in this Stack Overflow answer: Docker-Compose: How to healthcheck OpenJDK:8 container?):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class Ping {
    public static void main(String[] args) {
        if (args.length != 2) {
            System.exit(-1);
        }
        String host = args[0];
        int port = 0;
        try {
            port = Integer.parseInt(args[1]);
        } catch (NumberFormatException e) {
            e.printStackTrace();
            System.exit(-2);
        }
        // Exit 0 if a TCP connection can be opened within 10 seconds, 1 otherwise
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 10 * 1000);
            System.exit(0);
        } catch (IOException e) {
            System.exit(1);
        }
    }
}
Even with depends_on making the setup-topics service depend on Kafka, it does not wait until Kafka has started before running and creating the new topics.
I can avoid this step by using:
KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
But for my development purposes I need it set to FALSE and to create the topics one by one.
On top of this, I already tested this command in the healthcheck, which doesn't require a third-party file:
healthcheck:
  test: ["CMD", "bash", "-c", "unset JMX_PORT; /opt/bitnami/kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --list"]
  interval: 30s
  timeout: 10s
  retries: 4
And finally, here is the error message I am getting for both tries:
Waiting for Kafka to be ready...
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2020-06-01 15:06:28,809] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
(kafka.admin.TopicCommand$)
I am aware that we could also do this with a SLEEP command, but that's not robust: if there are performance issues on the server and Kafka takes longer to start, the sleep will be too short and I'll receive the same error as above.
I have also heard about kafkacat (though I haven't yet found an example of integrating it with docker-compose for this purpose).
I want to keep things basic and use as few third-party tools as possible, which is why I chose a Java file, since the image already has Java installed.
Hope you understand my view; thank you in advance for your help.
Not clear why you need a JAR file. This should work just as well:
test: ["CMD", "nc", "-vz", "localhost", "9092"]
The problem is simply that you haven't waited long enough before running your commands; depends_on does not wait for the healthcheck, AFAIK.
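That said, newer Compose versions (the 2.x file format and the current Compose specification, but not the 3.x format used in the question) do let depends_on wait on a healthcheck. A minimal sketch, reusing the kafka healthcheck from the question:

setup-topics:
  image: 'bitnami/kafka:2'
  depends_on:
    kafka:
      condition: service_healthy  # run topic creation only once kafka's healthcheck passes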

Docker - MySQL and Java container connectivity error

I am trying to do the simple task of creating a microservice with Java and MySQL.
I am using docker-compose on Windows 10 with Docker Desktop.
Client: Docker Engine - Community
  Version:      19.03.5
  API version:  1.40
Server: Docker Engine - Community
  Engine:
    Version:      19.03.5
    API version:  1.40 (minimum version 1.12)
My docker-compose.yml is
version: '3.1'
services:
db:
#image: mysql:5.7.22
image: mysql:latest
ports: ["3306:3306"]
hostname: db
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=Users
container_name: mysqldatabase
web:
build: docker-mysql-connector
image: docker-mysql-connector
hostname: web
tty: true
depends_on:
- db
links:
- db:db
My Java code to check the connectivity is:
package com.prasad.docker.mysql;

import java.net.InetAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;

public class MySQLConnection {
    public static void main(String[] args) throws Exception {
        String ipAddr = InetAddress.getLocalHost().getHostName();
        System.out.println("Printing IP address of the host " + ipAddr);

        // Dump the environment for debugging
        Map<String, String> env = System.getenv();
        for (String envName : env.keySet()) {
            System.out.format("%s=%s%n", envName, env.get(envName));
        }

        Thread.sleep(10000);

        // Retry until the database accepts a connection
        boolean connected = false;
        while (!connected) {
            try {
                String url = "jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false";
                String user = "root";
                String password = "root";
                System.out.println("Connecting to URL " + url);
                Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
                Connection conn = DriverManager.getConnection(url, user, password);
                System.out.println("Connection was successful");
                connected = true;
            } catch (Exception e) {
                System.err.println("Error connecting to database");
                e.printStackTrace();
                Thread.sleep(5000);
            }
        }
    }
}
I get the following error when I check the log of the web container where my Java code is running:
Connecting to URL jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false
Error connecting to database
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at com.prasad.docker.mysql.MySQLConnection.main(MySQLConnection.java:34)
Caused by: com.mysql.cj.core.exceptions.CJCommunicationsException: Communications link failure
I am able to test the connection successfully from MySQL Workbench and from inside the MySQL container; I get the connectivity error only from the Java code. I tried both the latest version and v5.7.22 of MySQL, with the same error in both cases. Any help appreciated.
Have you mentioned that the instance is for localhost?

version: '3.1'
services:
  db:
    #image: mysql:5.7.22
    image: mysql:latest
    ports: ["3306:3306"]
    hostname: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    container_name: mysqldatabase
  web:
    build: docker-mysql-connector
    image: docker-mysql-connector
    hostname: web
    tty: true
    depends_on:
      - db
    links:
      - db:db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://docker-mysql:3306/database?autoReconnect=true&useSSL=false
Adding network_mode: "host" may also help.
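Another pattern worth sketching (my addition, not part of the answer above, and it assumes your Compose version supports condition: service_healthy): give the db service a healthcheck and have web wait for it, which rules out startup-ordering problems:

db:
  image: mysql:latest
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-proot"]  # succeeds once mysqld accepts connections
    interval: 10s
    timeout: 5s
    retries: 5
web:
  depends_on:
    db:
      condition: service_healthy  # start web only after the db healthcheck passes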

Elasticsearch client can't connect to Elasticsearch in a Docker container

I'm trying to use Elasticsearch inside a container from a Java app which is also inside a container. Without Docker containers, my app connects to the local Elasticsearch correctly. My docker-compose file:
version: "3.7"
volumes:
postgis:
services:
database:
container_name: database
build:
postgis/
ports:
- 5432:5432
volumes:
- ./postgis:/var/lib/postgresql:rw
restart: on-failure
networks:
- net
application:
depends_on:
- database
- es
container_name: application
build:
application/
ports:
- $LORRYAPP_DEBUG_PORT:8080
volumes:
- ./application:/app:rw
environment:
LORRYAPP_OPTS: $LORRYAPP_OPTS
restart: on-failure
networks:
- net
es:
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
ports:
- "9200:9200"
- "9300:9300"
environment:
- discovery.type=single-node
networks:
- net
networks:
net:
driver: bridge
Initialising the ES client in the constructor:

public ElasticSearchDao(ObjectMapper mapper) {
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://localhost:9200")));
    this.mapper = mapper;
}
Stacktrace:
java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:788) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:218) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:836) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
ES is available through the browser; http://0.0.0.0:9200/, http://127.0.0.1:9200/, and http://localhost:9200/ all give this response:
{
  "name" : "254bdb7bcc2a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "vm537RNGSiG3dW8ag2MDTw",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
    "build_date" : "2020-02-29T00:15:25.529771Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I think this could be related to a question I asked a long while ago:
Accessing Elasticsearch Docker instance using NEST
It's targeted at C#, but this is really an issue with containers. What happened in my case was that the client would connect to Docker, obtain the internal IP (which is used for inter-container communication only), and then try to use that IP to keep the connection open; that obviously doesn't work.
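The answer above doesn't spell out the fix for this setup, but the usual one follows from it: since the application container and the es container share the compose network, the client should address Elasticsearch by its compose service name rather than localhost (which, inside the application container, refers to the application container itself). A minimal sketch of the constructor from the question:

public ElasticSearchDao(ObjectMapper mapper) {
    // "es" is the docker-compose service name, resolvable container-to-container on the shared "net" network
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://es:9200")));
    this.mapper = mapper;
}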

Create Kafka Producer to send each message from the list

I have Kafka and ZooKeeper running in the docker-machine.
I need to send messages to Kafka using Spring Boot.
List of messages:
[[{"id":"0x804f","timestamp":1551684977690}],
[{"id":"1234","timestamp":155168497800}],
[{"id":"39339e82-6bd6-4ab6-9672-21d0df4d34eb","timestamp":1551684977690}],
[{"id":"a3173ca5-4cc4-408b-a058-879a298d6081","timestamp":155168497800}]]
This is what I tried as a sample:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Producer {

    private Properties properties = new Properties();
    String topicName = "tslistsbc";

    public Producer() {
        String bootstrapServer = "docker-machineIP:9092";
        String keySerializer = StringSerializer.class.getName();
        String valueSerializer = StringSerializer.class.getName();
        String producerId = "simpleProducer";
        int retries = 2;

        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, keySerializer);
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, valueSerializer);
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, producerId);
        properties.put(ProducerConfig.RETRIES_CONFIG, retries);

        KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
        String value = "sample list";
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, "1", value);
        kafkaProducer.send(producerRecord);
        kafkaProducer.close();
    }
}
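For the goal in the title, sending each message from the list, a minimal sketch is shown below. It assumes each list element has already been serialised to a JSON string; ListSender and sendAll are illustrative names, not from the original code:

import java.util.List;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ListSender {
    // Sends every JSON string in the list as its own record on the topic
    static void sendAll(KafkaProducer<String, String> producer, String topic, List<String> messages) {
        for (String message : messages) {
            producer.send(new ProducerRecord<>(topic, message));
        }
        producer.flush(); // make sure everything is actually transmitted before returning
    }
}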
Docker images:
These containers are running in the docker-machine:
zookeeper:
  build: ../components/zookeeper
  image: xxxx:${ZOOKEEPER}
  container_name: zookeeper
  ports:
    - 2181:2181
  restart: unless-stopped
kafka:
  build: ../components/kafka
  image: xxx:${EMD_KAFKA}
  container_name: image-kafka
  environment:
    KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
    KAFKA_CREATE_TOPICS: "tslist:1:1,topic:1:1"
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_MESSAGE_MAX_BYTES: "15728640"
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
  restart: unless-stopped
Error Message
SLF4J: Failed toString() invocation on an object of type [org.apache.kafka.clients.NodeApiVersions]
Reported exception:
java.lang.NullPointerException
at org.apache.kafka.clients.NodeApiVersions.apiVersionToText(NodeApiVersions.java:167)
It's not working; the message is not being sent.
Since you are trying to access one of the docker-compose containers from outside the services that docker-compose started (for instance by running your service in your IDE), you need to add the Docker container name to your system's hosts file.
On Linux/Mac the hosts file is at /etc/hosts, and on Windows it is at c:\windows\system32\drivers\etc\hosts. According to the error you are getting, your hosts file should have an entry like the following:
127.0.0.1 image-kafka
Regarding the exception
SLF4J: Failed toString() invocation on an object of type
[org.apache.kafka.clients.NodeApiVersions]
Reported exception:
java.lang.NullPointerException
at org.apache.kafka.clients.NodeApiVersions.apiVersionToText(NodeApiVersions.java:167)
it is due to a mismatch between the Kafka server version and the Kafka client version in use (check the answer here).
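As an illustration of that fix, and assuming a Maven build (the build tool is not shown in the question), pin the client library to a version line matching the broker; the version below is a placeholder to align with whatever the xxx:${EMD_KAFKA} image actually ships:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.3.1</version> <!-- placeholder: match the broker version in the image -->
</dependency>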
