MongoDb init-mongo.sh Authentication failed - java

I'm developing a Spring Boot application that consists of a group of microservices running as Docker containers, with MongoDB as the database. I create a root user and a regular user when the Mongo container is created, using the init-mongo.sh and stage_mongo.env files, and then connect to the database from the other microservices using the stage_mongo_auth.env file. When I connect as the root user everything works fine, but when I connect as the regular user I get an authentication error.
Error:
com.mongodb.MongoCommandException: Command failed with error 18 (AuthenticationFailed): 'Authentication failed.' on server mongodb:27017. The full response is {"ok": 0.0, "errmsg": "Authentication failed.", "code": 18, "codeName": "AuthenticationFailed"}
    at com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:198) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:337) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.sendAndReceive(CommandHelper.java:101) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.CommandHelper.executeCommand(CommandHelper.java:45) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.sendSaslStart(SaslAuthenticator.java:230) ~[mongodb-driver-core-4.6.0.jar!/:na]
    at com.mongodb.internal.connection.SaslAuthenticator.getNextSaslResponse(SaslAuthenticator.java:137) ~[mongodb-driver-core-4.6.0.jar!/:na]
docker-compose.yaml
version: '3.3'
services:
  mongodb:
    image: mongo:6.0.2
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo.env
    volumes:
      - ../mongodb/db:/data/db
      - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh
    ports:
      - 30430:27017
    deploy:
      resources:
        limits:
          cpus: '4.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "mongodb"
        max-size: 256m
  api:
    image: amazoncorretto:17.0.3-alpine
    depends_on:
      - mongodb
    restart: unless-stopped
    env_file:
      - ../config/stage_mongo_auth.env
    volumes:
      - ./java/api-0.0.1-SNAPSHOT.jar:/gjava/java.jar
      - ../files:/files
    environment:
      spring_data_mongodb_host: mongodb
    command: /bin/sh -c "cd /gjava && chmod +x /gjava/*.jar && java -Xmx2g -Dspring.profiles.active=dev -jar /gjava/java.jar"
    ports:
      - 30429:30329
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2GB
    logging:
      driver: "json-file"
      options:
        tag: "api"
        max-size: 256m
init-mongo.sh
mongo -- "$MONGO_INITDB_DATABASE" <<EOF
var rootUser = '$MONGO_INITDB_ROOT_USERNAME';
var rootPassword = '$MONGO_INITDB_ROOT_PASSWORD';
var admin = db.getSiblingDB('admin');
admin.auth(rootUser, rootPassword);
var user = '$MONGO_INITDB_USERNAME';
var passwd = '$MONGO_INITDB_PASSWORD';
db.createUser({user: user, pwd: passwd, roles: ["readWrite"]});
EOF
stage_mongo.env
MONGO_INITDB_ROOT_USERNAME=someRootName
MONGO_INITDB_ROOT_PASSWORD=someRootPassword
MONGO_INITDB_USERNAME=someName
MONGO_INITDB_PASSWORD=somePassword
MONGO_INITDB_DATABASE=someDatabaseName
stage_mongo_auth.env
spring_data_mongodb_username=someName
spring_data_mongodb_password=somePassword
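For the connection side, it's worth noting which database the driver authenticates against. If I read Spring Boot's MongoProperties behavior correctly, it uses spring.data.mongodb.authentication-database when set and otherwise falls back to the configured database; since the non-root user above is created in the init database rather than in admin, the env file may need two more entries. A sketch, with hypothetical values matching stage_mongo.env:

```shell
spring_data_mongodb_username=someName
spring_data_mongodb_password=somePassword
# The user was created in someDatabaseName, so authenticate there,
# not against the driver's default database:
spring_data_mongodb_database=someDatabaseName
spring_data_mongodb_authentication_database=someDatabaseName
```

If the user had instead been created in admin, authentication_database would need to be admin.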
I've looked through my code several times but can't find the cause of this error; searching the internet hasn't turned up an answer either.
I'd be grateful for any help.
Update 1
I found the reason why some credentials work and others don't: the commands from init-mongo.sh never run. I removed the script entirely and authentication behaved in exactly the same way.
I've tried different ways of issuing the commands, like this:
mongo <<EOF
var rootUser = "${MONGO_INITDB_ROOT_USERNAME}";
var rootPassword = "${MONGO_INITDB_ROOT_PASSWORD}";
db.getSiblingDB('admin').auth(rootUser, rootPassword);
use ${MONGO_INITDB_DATABASE}
db.createCollection("someCollectionName")
use admin
db.createUser(
  {
    user: "${MONGO_INITDB_USERNAME}",
    pwd: "${MONGO_INITDB_PASSWORD}",
    roles: [ { role: "readWrite", db: "${MONGO_INITDB_DATABASE}" } ]
  }
)
EOF
I've tried adding the :ro suffix in docker-compose:
volumes:
- ../mongodb/db:/data/db
- ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
but it still doesn't work.
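Two things are worth ruling out here, both hedged guesses from the setup above: the mongo:6.0.x images no longer ship the legacy `mongo` shell (it was replaced by `mongosh` in 6.0), so a script that pipes into `mongo` fails during init; and the /docker-entrypoint-initdb.d scripts run only when /data/db is empty, so the persistent ../mongodb/db bind mount will skip them on every restart. Below is a sketch of the script rewritten for mongosh; it writes the generated JS to a file first so the variable expansion can be inspected without a running mongod (all values hypothetical):

```shell
# Hypothetical values matching stage_mongo.env:
export MONGO_INITDB_ROOT_USERNAME=someRootName
export MONGO_INITDB_ROOT_PASSWORD=someRootPassword
export MONGO_INITDB_USERNAME=someName
export MONGO_INITDB_PASSWORD=somePassword
export MONGO_INITDB_DATABASE=someDatabaseName

# Generate the script mongosh would run; an unquoted EOF delimiter
# means the $VARIABLES are expanded by the shell before mongosh sees them.
cat > /tmp/init-check.js <<EOF
db = db.getSiblingDB('admin');
db.auth('$MONGO_INITDB_ROOT_USERNAME', '$MONGO_INITDB_ROOT_PASSWORD');
db = db.getSiblingDB('$MONGO_INITDB_DATABASE');
db.createUser({
  user: '$MONGO_INITDB_USERNAME',
  pwd: '$MONGO_INITDB_PASSWORD',
  roles: [{ role: 'readWrite', db: '$MONGO_INITDB_DATABASE' }]
});
EOF

cat /tmp/init-check.js
# In the real init-mongo.sh this would be piped straight in:
#   mongosh < /tmp/init-check.js
```

If the script still appears not to run, wipe the ../mongodb/db directory (or switch to a fresh named volume) so the entrypoint's init phase fires again.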

Related

Unable to establish connection with elasticsearch 8.1 (java)

I have Elasticsearch 8.1 running in Docker with this docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    container_name: es-node
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    volumes:
      - ./elastic-data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  kibana:
    image: docker.elastic.co/kibana/kibana:8.1.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOST=http://localhost:9200
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
I'm trying to make a simple GET request to the es cluster using the org.elasticsearch.client.RestClient.
Request:
Request request = new Request("GET", "_cluster/health");
try {
    return restClient.performRequest(request).toString();
} catch (IOException e) {
    throw new RuntimeException(e);
}
Rest client initialisation:
var hosts = buildClusterHosts(transportAddresses);
restClient = RestClient.builder(hosts).build();
if (isElasticSniffEnabled) {
    sniffer = Sniffer.builder(restClient).build();
}
var esTransport = new RestClientTransport(restClient, new JacksonJsonpMapper());
elasticsearchClient = new ElasticsearchClient(esTransport);
Main method:
var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9200")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
The buildClusterHosts() method correctly builds an array of HttpHost (in this case only one) and provides it to the RestClient builder.
In theory this should be enough, but I keep getting Caused by: java.net.ConnectException: Timeout connecting to [/172.20.0.2:9200], and I'm not sure why.
Tldr;
It seems you are confusing the transport port and the REST API port of Elasticsearch.
To Fix
You will first need to expose the transport-layer port, which is 9300 by default:
services:
  elasticsearch:
    ports:
      - 9300:9300
Then update the main method
var es = ElasticEightClient.builder()
        .transportAddresses("localhost:9200")
        .isElasticSniffEnabled(true)
        .build();
System.out.println("Started elasticsearch with health: " + es.getHealth());
Figured out what the problem was: to use the Sniffer, you need to add http.publish_host=localhost as an environment variable in the docker-compose file.
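Spelled out as a compose fragment, that final fix would look roughly like this (hedged: the rest of the service definition is assumed unchanged from the question):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.1.0
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
      # Advertise an address reachable from the host, so the Sniffer
      # does not rediscover the node under its internal container IP:
      - http.publish_host=localhost
    ports:
      - 9200:9200
      - 9300:9300
```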

Insert Data Into MongoDB Container at the Spring Boot Application initialization

I have to insert some sample data into a collection in MongoDB when the application starts. I followed the steps below, but it didn't work.
1. Created entrypoint.js under the init_script folder
entrypoint.js
use admin;
db.createUser(
  {
    user: "patient_db",
    pwd: "14292",
    roles: [ { role: "readWrite", db: "patient_db" } ]
  }
);
db.grantRolesToUser( "patient_db", [{ role: "readWrite", db: "patient_db"}]);
2. Created a data.js file in the resources path
src/main/resources/data.js
use patient_db;
db.createCollection("holiday");
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
3. Configured the docker-compose.yml
docker-compose.yml
version: "3"
services:
  patient-service:
    image: patient-service:1.0
    container_name: patient-service
    ports:
      - 9090:9090
    restart: on-failure
    networks:
      - patient-mongo
    depends_on:
      - mongo-db
    links:
      - mysql-db
  mongo-db:
    image: mongo:latest
    container_name: mongo-db
    ports:
      - 27017:27017
    networks:
      - patient-mongo
    volumes:
      - 'mongodata:/data/db'
      - './init_scripts:/docker-entrypoint-initdb.d'
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=14292
    restart: unless-stopped
networks:
  patient-mongo:
volumes:
  mongodata:
4. Finally, the connection to MongoDB
properties-dev.yml
spring:
  data:
    mongodb:
      host: mongo-db
      port: 27017
      database: patient_db
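Since the mongo-db container enables authentication via the MONGO_INITDB_ROOT_* variables, the application will also need credentials in this block; a sketch assuming the patient_db user created by the init script (names taken from the question, layout hypothetical):

```yaml
spring:
  data:
    mongodb:
      host: mongo-db
      port: 27017
      database: patient_db
      username: patient_db
      password: "14292"
      # Authenticate against the db where the user was created:
      authentication-database: patient_db
```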
This is how I inject the init code into the mongodb container:
1. Create a .sh file (example.sh)
2. In it, create the mongo users and the data you want to insert.
example.sh
#!/usr/bin/env bash
echo "Creating mongo users..."
mongo admin --host localhost -u root -p mypass --eval "
db = db.getSiblingDB('patient_db');
db.createUser(
  {
    user: 'patient_db',
    pwd: '14292',
    roles: [ { role: 'readWrite', db: 'patient_db' } ]
  }
);
db.createCollection('holiday');
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',
created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
"
echo "Mongo users and data created."
echo "Mongo users and data created."
In docker-compose, mount the script:
volumes:
  - 'mongodata:/data/db'
  - './example.sh:/docker-entrypoint-initdb.d/example.sh'
Maybe it's not the cleanest option, but it works perfectly.
I did it this way because I couldn't get it to work with .js files.
Thanks, @Schwarz54, for your answer. It works with a .js file:
init_scripts/mongo_init.js
var db = connect("mongodb://admin:14292@127.0.0.1:27017/admin");
db = db.getSiblingDB('patient_db'); /* the 'use' statement is not supported here to switch db */
db.createUser(
  {
    user: "patient_db",
    pwd: "14292",
    roles: [ { role: "readWrite", db: "patient_db" } ]
  }
);
db.createCollection("holiday");
db.holiday.insert({holiday_date:'25-12-2021',holiday_name:'Christmas',created_by:'John Wick',modified_by:'John_Wick',created_date_time:'2021-04-25 04:23:55',modified_date_time:'2021-04-25 04:23:55'});
docker-compose.yml
volumes:
  - 'mongodata:/data/db'
  - './init_scripts:/docker-entrypoint-initdb.d'

Healthcheck not working at all when using docker-compose (my service does not wait for Kafka to be started before launching)

I have three services in my docker-compose:
version: '3.4'
services:
  setup-topics:
    image: 'bitnami/kafka:2'
    hostname: setup-topics
    container_name: setup-topics
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
              ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic orders && \
              ./opt/bitnami/kafka/bin/kafka-topics.sh --create --if-not-exists --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic redis'"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
    depends_on:
      - kafka
  kafka:
    container_name: kafka
    hostname: kafka
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/opt/kafka'
      - './Ping.jar:/Ping.jar'
    environment:
      - KAFKA_HEAP_OPTS=-Xms1g -Xmx1g
      - KAFKA_JVM_PERFORMANCE_OPTS=-Xms512m -Xmx512M
      - KAFKA_BROKER_ID:1
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka-server:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    healthcheck:
      test: ["CMD", "java", "-jar", "/Ping.jar", "localhost", "9092"]
      interval: 30s
      timeout: 10s
      retries: 4
  zookeeper:
    container_name: zookeeper
    hostname: zookeeper
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOOKEEPER_CLIENT_PORT=32181
      - ZOOKEEPER_TICK_TIME=2000
And here is the Ping.java file (found in this Stack Overflow answer: Docker-Compose: How to healthcheck OpenJDK:8 container?):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class Ping {
    public static void main(String[] args) {
        if (args.length != 2) {
            System.exit(-1);
        }
        String host = args[0];
        int port = 0;
        try {
            port = Integer.parseInt(args[1]);
        } catch (NumberFormatException e) {
            e.printStackTrace();
            System.exit(-2);
        }
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 10 * 1000);
            System.exit(0);
        } catch (IOException e) {
            System.exit(1);
        }
    }
}
Even though the setup-topics service declares depends_on on Kafka, it does not wait until Kafka has actually started before running and creating the topics.
I can avoid this step by using:
KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
But for my development purposes, I need it set to FALSE and to create the topics one by one.
On top of this, I have already tested this command in the healthcheck, which does not require a third-party file:
healthcheck:
  test: ["CMD", "bash", "-c", "unset" , "JMX_PORT" ,";" ,"/opt/bitnami/kafka/bin/kafka-topics.sh","--zookeeper","zookeeper:2181","--list"]
  interval: 30s
  timeout: 10s
  retries: 4
And finally, here is the error message I get for both attempts:
Waiting for Kafka to be ready...
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2020-06-01 15:06:28,809] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
(kafka.admin.TopicCommand$)
I am aware that we could also use a SLEEP command, but that is not robust: if there are performance issues on the server and Kafka takes longer to start, the sleep will be too short and the same error as above will occur again.
I have also heard about kafkacat (though I haven't yet found an example of integrating it with docker-compose for this purpose).
I want to stay basic and use as few third-party tools as possible to achieve this goal, which is why I chose a Java file, since the image already has Java installed.
Hope you understand my point; thank you in advance for your help.
It's not clear why you need a JAR file. This should work just as well:
test: ["CMD", "nc", "-vz", "localhost", "9092"]
The problem is simply that you have not waited long enough for Kafka before running your commands; depends_on does not wait for the healthcheck, AFAIK.
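That wait can also be wired into compose itself: outside swarm mode, docker-compose understands the long form of depends_on with a condition, so setup-topics can be gated on the broker's healthcheck instead of a sleep (hedged: this form was dropped in compose file v3 and restored by the Compose Specification, so it depends on your docker-compose version). A sketch reusing the nc-based check:

```yaml
services:
  kafka:
    healthcheck:
      test: ["CMD", "nc", "-vz", "localhost", "9092"]
      interval: 10s
      timeout: 5s
      retries: 10
  setup-topics:
    depends_on:
      kafka:
        condition: service_healthy
```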

Docker - MySQL and Java container connectivity error

I am trying to do the simple task of creating a microservice with Java and MySQL.
I am using docker-compose on Windows 10 with Docker Desktop.
Client: Docker Engine - Community
 Version:      19.03.5
 API version:  1.40
Server: Docker Engine - Community
 Engine:
  Version:      19.03.5
  API version:  1.40 (minimum version 1.12)
My docker-compose.yml is
version: '3.1'
services:
  db:
    #image: mysql:5.7.22
    image: mysql:latest
    ports: ["3306:3306"]
    hostname: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    container_name: mysqldatabase
  web:
    build: docker-mysql-connector
    image: docker-mysql-connector
    hostname: web
    tty: true
    depends_on:
      - db
    links:
      - db:db
My Java code to check the connectivity is:
package com.prasad.docker.mysql;

import java.net.InetAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;

public class MySQLConnection {
    public static void main(String[] args) throws Exception {
        String ipAddr = InetAddress.getLocalHost().getHostName();
        System.out.println("Printing IP address of the host " + ipAddr);
        Map<String, String> env = System.getenv();
        for (String envName : env.keySet()) {
            System.out.format("%s=%s%n", envName, env.get(envName));
        }
        Thread.sleep(10000);
        boolean connected = false;
        while (!connected) {
            try {
                String url = "jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false";
                String user = "root";
                String password = "root";
                System.out.println("Connecting to URL " + url);
                Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
                Connection conn = DriverManager.getConnection(url, user, password);
                System.out.println("Connection was successful");
                connected = true;
            } catch (Exception e) {
                System.err.println("Error connecting to database");
                e.printStackTrace();
                Thread.sleep(5000);
            }
        }
    }
}
I get the following error in the log of the web container where my Java code is running:
Connecting to URL jdbc:mysql://db:3306/Users?autoReconnect=false&useSSL=false
Error connecting to database
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:590)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:57)
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:1606)
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:633)
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:347)
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at com.prasad.docker.mysql.MySQLConnection.main(MySQLConnection.java:34)
Caused by: com.mysql.cj.core.exceptions.CJCommunicationsException: Communications link failure
I am able to connect successfully from MySQL Workbench and from inside the MySQL container itself; I get the connectivity error only from the Java code. I tried both the latest version and v5.7.22 of MySQL, with the same error in both cases. Any help appreciated.
Have you specified that the instance is for localhost?
version: '3.1'
services:
  db:
    #image: mysql:5.7.22
    image: mysql:latest
    ports: ["3306:3306"]
    hostname: db
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=Users
    container_name: mysqldatabase
  web:
    build: docker-mysql-connector
    image: docker-mysql-connector
    hostname: web
    tty: true
    depends_on:
      - db
    links:
      - db:db
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://docker-mysql:3306/database?autoReconnect=true&useSSL=false
Also, adding network_mode: "host" may help.
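A plain depends_on only waits for the db container to start, not for mysqld to accept connections, which is why the first attempts of the retry loop in the question fail. One way to close that gap, sketched under the assumption that mysqladmin is available in the image (it is in recent official mysql tags) and using the root password from the compose file:

```yaml
services:
  db:
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-proot"]
      interval: 10s
      timeout: 5s
      retries: 10
  web:
    depends_on:
      db:
        condition: service_healthy
```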

Elasticsearch client can't connect elasticsearch in docker container

I'm trying to use Elasticsearch in a container from a Java app that is also in a container. Without Docker containers, my app connects to a local Elasticsearch correctly. My docker-compose file:
version: "3.7"
volumes:
  postgis:
services:
  database:
    container_name: database
    build: postgis/
    ports:
      - 5432:5432
    volumes:
      - ./postgis:/var/lib/postgresql:rw
    restart: on-failure
    networks:
      - net
  application:
    depends_on:
      - database
      - es
    container_name: application
    build: application/
    ports:
      - $LORRYAPP_DEBUG_PORT:8080
    volumes:
      - ./application:/app:rw
    environment:
      LORRYAPP_OPTS: $LORRYAPP_OPTS
    restart: on-failure
    networks:
      - net
  es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - discovery.type=single-node
    networks:
      - net
networks:
  net:
    driver: bridge
Initialising the ES client in the constructor:
public ElasticSearchDao(ObjectMapper mapper) {
    this.esClient = new RestHighLevelClient(RestClient.builder(HttpHost.create("http://localhost:9200")));
    this.mapper = mapper;
}
Stacktrace:
java.net.ConnectException: Connection refused
at org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:788) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:218) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:205) ~[elasticsearch-rest-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1454) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1424) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1394) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:836) ~[elasticsearch-rest-high-level-client-7.4.0.jar!/:7.4.0]
ES is available through the browser: http://0.0.0.0:9200/, http://127.0.0.1:9200/ and http://localhost:9200/ all give this response:
{
  "name" : "254bdb7bcc2a",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "vm537RNGSiG3dW8ag2MDTw",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "aa751e09be0a5072e8570670309b1f12348f023b",
    "build_date" : "2020-02-29T00:15:25.529771Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I think this could be related to a question I asked a long while ago:
Accessing Elasticsearch Docker instance using NEST
It's targeted at C#, but this is really a container issue. What happened in my case was that the client would connect to Docker, obtain the internal IP (which is used for inter-container communication only), and then try to use this IP to keep the connection open, which obviously doesn't work.
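Applied to the compose file in this question, that means the client should not target localhost: inside the application container, localhost is the container itself, and the Elasticsearch node is reachable under its compose service name instead (i.e. HttpHost.create("http://es:9200") rather than http://localhost:9200). A quick reachability check from inside the running container, assuming the service names from the compose file above:

```shell
# localhost:9200 fails inside the application container, but the
# compose service name resolves on the shared "net" bridge network:
docker compose exec application curl -s http://es:9200
```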
