Not able to connect to MySQL Docker container from Spring Boot Docker container - java

I am getting the following error:
2020-12-26 23:17:30.499 INFO 1 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.MySQL57Dialect
licensingservice_1 | Hibernate: drop table if exists licenses
licensingservice_1 | 2020-12-26 23:17:31.006 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
licensingservice_1 | 2020-12-26 23:17:32.010 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
licensingservice_1 |
licensingservice_1 | com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
licensingservice_1 |
licensingservice_1 | The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
licensingservice_1 | at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.22.jar:8.0.22]
licensingservice_1 | at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.22.jar:8.0.22]
My docker-compose.yml:
version: '3'
services:
  licensingservice:
    image: licensing/licensing-service-ms:0.0.1-SNAPSHOT
    ports:
      - "8080:8080"
    networks:
      - my-network
    volumes:
      - .:/vol/development
    depends_on:
      - mysqldbserver
  mysqldbserver:
    image: mysql:5.7
    ports:
      - "3307:3306"
    networks:
      - my-network
    environment:
      MYSQL_DATABASE: license
      MYSQL_ROOT_PASSWORD: Spartans#123
    container_name: mysqldb
networks:
  my-network:
    driver: bridge
And my application.properties:
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://mysqldb:3307/license
spring.datasource.username=root
spring.datasource.password=Spartans#123
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL57Dialect
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.show-sql=true

Try connecting to port 3306 instead. You're exposing port 3306 on the database container to the host machine on port 3307, but that doesn't change anything for communication between services inside the same network.
This is explained in the Docker-Compose docs.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Additionally, you can choose to expose these ports to the outside world by defining a mapping between the host port and container port. However, this has no effect on communication between services inside the same network:
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. [...] Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
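Applied to the Compose file in the question, that means the JDBC URL should use the service's hostname with the container port, not the published host port. A sketch of the corrected line (the hostname mysqldb comes from the container_name in the Compose file):

```properties
# service-to-service traffic uses the CONTAINER port (3306), not the host port (3307)
spring.datasource.url=jdbc:mysql://mysqldb:3306/license
```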

Related

Kafka Broker Application Disconnecting with EOFException

I'm running an application that connects to a Kafka broker started from the confluentinc/cp-server:7.2.1 Docker image.
When I try to run a larger application, both applications just disconnect and I need to restart the broker.
The application logs this:
WARN 16756 --- [RMADE-738-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
INFO 16756 --- [RMFDE-603-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer01] Node 1 disconnected.
And the broker (Docker image) logs this:
broker | INFO Skipping goal violation detection due to previous new broker change (com.linkedin.kafka.cruisecontrol.detector.GoalViolationDetector)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:51368-43) disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:797)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:700)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:783)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:635)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
[2023] DEBUG [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /172.18.0.1 (channelId=172.18.0.5:9092-172.18.0.1:50998-29) disconnected (org.apache.kafka.common.network.Selector)
java.io.IOException: Broken pipe
    at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
    at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:50)
    at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:462)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:143)
    at org.apache.kafka.common.network.PlaintextTransportLayer.write(PlaintextTransportLayer.java:159)
    at org.apache.kafka.common.network.ByteBufferSend.writeTo(ByteBufferSend.java:62)
    at org.apache.kafka.common.network.NetworkSend.writeTo(NetworkSend.java:41)
    at org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:728)
    at org.apache.kafka.common.network.Selector.write(Selector.java:753)
    at org.apache.kafka.common.network.Selector.attemptWrite(Selector.java:743)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:652)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:519)
    at kafka.network.Processor.poll(SocketServer.scala:1463)
    at kafka.network.Processor.run(SocketServer.scala:1307)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:64)
My docker-compose.yml file:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.1
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:7.2.1
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
      KAFKA_HEAP_OPTS: -Xms6G -Xmx6G
      JAVA_OPTS: -Xms6G -Xmx6G
      JVM_OPTS: Xmx6g -Xms6g -XX:MaxPermSize=1024m
      KAFKA_LOG4J_ROOT_LOGLEVEL: DEBUG
      KAFKA_TOOLS_LOG4J_LOGLEVEL: DEBUG
  schema-registry:
    image: confluentinc/cp-schema-registry:7.2.1
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
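No answer is recorded for this question here, but note how the listener configuration above splits traffic: the client's "Connection to node 1 (localhost/127.0.0.1:9092)" warning means it is using the PLAINTEXT_HOST listener, which only works from the host machine. A client running in another container on the same Compose network would instead need (a sketch, assuming Spring Kafka's standard property name):

```properties
# from inside the Compose network, use the PLAINTEXT listener advertised as broker:29092
spring.kafka.bootstrap-servers=broker:29092
# from the host machine, localhost:9092 (PLAINTEXT_HOST) is the reachable one
```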

how to docker-compose spring-boot with kafka? [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 10 months ago.
Execution works fine, but I think I am missing something, because when I use the REST API it shows:
(/127.0.0.1:9092) could not be established. Broker may not be available
You can see this below. By the way, I am new to Docker and Kafka.
I can't send or get data via GET/POST/PUT/DELETE since I moved to Docker.
My reference when I created this setup: https://github.com/codegard/kafka-docker/blob/master/docker-compose.yml
//docker-compose.yaml
version: '3'
services:
  #----------------------------------------------------------------
  productmicroservice:
    image: productmicroservice:latest
    container_name: productmicroservice
    depends_on:
      - product-mysqldb
      - kafka
    restart: always
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "9001:8091"
    environment:
      - MYSQL_HOST=product-mysqldb
      - MYSQL_USER=oot
      - MYSQL_PASSWORD=root
      - MYSQL_PORT=3306
      - "SPRING_PROFILES_ACTIVE=${ACTIVE_PROFILE}"
  #----------------------------------------------------------------
  product-mysqldb:
    image: mysql:8.0.28
    restart: unless-stopped
    container_name: product-mysqldb
    ports:
      - "3307:3306"
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_DATABASE: dbpoc
      MYSQL_ROOT_PASSWORD: root
  #----------------------------------------------------------------
  zookeeper:
    image: elevy/zookeeper:latest
    container_name: zookeeper
    ports:
      - "2181:2181"
  #----------------------------------------------------------------
  kafka:
    image: wurstmeister/kafka:2.11-2.0.0
    container_name: kafka
    restart: on-failure
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper
//application.yaml
spring.kafka.bootstrap-servers=127.0.0.1:9092
product.kafkaServer= ${spring.kafka.bootstrap-servers}
spring.kafka.properties.security.protocol=PLAINTEXT
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
topic.name=producttopic
spring.jpa.properties.hibernate.check_nullability=true
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/dbpoc
    username: root
    password: root
  jpa:
    hibernate:
      naming:
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
    hibernate.ddl-auto: update
    generate-ddl: false
    show-sql: false
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  mvc:
    throw-exception-if-no-handler-found: true
  web:
    resources:
      add-mappings: false
  sql:
    init:
      mode: always
      continue-on-error: true
server:
  port: 8091
// .env
ACTIVE_PROFILE=dev
//Dockerfile
FROM openjdk:8-alpine
ADD target/*.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
//topic that I created
./kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic producttopic
//trying to send data manually in kafka
//working
//producer
bash-4.4# ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic producttopic
>hellow
>
//consumer
bash-4.4# ./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server 127.0.0.1:9092
hellow
//sending data from my rest api
//this fails but I don't know the reason
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
productmicroservice | 2022-05-04 13:08:28.283 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker 127.0.0.1:9092 (id: -1 rack: null) disconnected
productmicroservice | 2022-05-04 13:08:29.343 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (/127.0.0.1:9092) could not be established. Broker may not be available.
//fix, based on the answer below
//changes made
//application.yaml
spring.kafka.topic.name=producttopic
topic.name=${spring.kafka.topic.name}
spring.kafka.bootstrap-servers=kafka:9092
product.kafkaServer= ${spring.kafka.bootstrap-servers}
//docker-compose.yaml
//added to kafka environment
KAFKA_CREATE_TOPICS: "producttopic:1:1"
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
KAFKA_LISTENERS: INSIDE://kafka:9092,OUTSIDE://0.0.0.0:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://localhost:9093
//manually reading data from the topic with a console consumer
./kafka-console-consumer.sh --topic producttopic --from-beginning --bootstrap-server localhost:9093
//code changes in spring
//for the topic
@Value("${topic.name}")
//for the bootstrap server
@Value("${product.kafkaServer}")
//must note:
I use wurstmeister/kafka:2.11-2.0.0 since it's compatible with JDK 8; the latest image gave me an error and my project requires JDK 8.
You are using 127.0.0.1:9092 as the Kafka container endpoint from the Java container. From a container, localhost targets the container itself, so this won't work.
Docker Compose will set a default network in which your services are reachable by their name.
Therefore I think you should change your application.yaml to:
spring.kafka.bootstrap-servers=kafka:9092
# ...

Java application doesn't apply environment variable inside docker container

I am trying to deploy my application in Docker (on Windows 10), in a Compose setup with a Postgres container. When I execute docker-compose up, I see the following log:
Starting postgres ... done
Recreating application ... done
Attaching to postgres, application
postgres |
postgres | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres |
postgres | 2021-08-20 14:51:49.721 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres | 2021-08-20 14:51:49.721 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres | 2021-08-20 14:51:49.741 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres | 2021-08-20 14:51:49.858 UTC [21] LOG: database system was interrupted; last known up at 2021-08-20 14:50:34 UTC
postgres | 2021-08-20 14:51:51.363 UTC [21] LOG: database system was not properly shut down; automatic recovery in progress
postgres | 2021-08-20 14:51:51.377 UTC [21] LOG: redo starts at 0/1661A88
postgres | 2021-08-20 14:51:51.377 UTC [21] LOG: invalid record length at 0/1661AC0: wanted 24, got 0
postgres | 2021-08-20 14:51:51.377 UTC [21] LOG: redo done at 0/1661A88
postgres | 2021-08-20 14:51:51.471 UTC [1] LOG: database system is ready to accept connections
Then my application's container tries to start, and after the Spring Boot banner I get an error:
application | 2021-08-20 14:52:23.440 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
application |
application | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
application | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:303) ~[postgresql-42.2.20.jar!/:42.2.20]
Here is my docker-compose.yml:
version: "3"
services:
  db:
    image: postgres:11.13-alpine
    container_name: postgres
    ports:
      - 5432:5432
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=my_db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - PGDATA=/var/lib/postgresql/data/mnt
    restart: always
  app:
    build: .
    container_name: application
    ports:
      - 8085:8085
    environment:
      - POSTGRES_HOST=db
    restart: always
    links:
      - db
My Dockerfile:
FROM openjdk:11
ADD target/my-app.jar my-app.jar
EXPOSE 8085
ENTRYPOINT ["java" , "-jar", "my-app.jar"]
application.properties:
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:5432/my_db
spring.datasource.username=postgres
spring.datasource.password=root
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=none
spring.liquibase.change-log=classpath:liquibase/changelog.xml
logging.level.org.springframework.jdbc.core = TRACE
What is the problem? Why does my application look for Postgres on localhost and not apply the environment variable? Inside a Docker container the host for Postgres should be different, shouldn't it? I have even tried to hardcode the Postgres host in application.properties as jdbc:postgresql://db:5432/my_db, but it continues to use localhost. How can I fix it?
Try it this way:
environment:
  POSTGRES_HOST: db
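For reference, both Compose environment syntaxes below are accepted, so if the variable still is not picked up it is worth rebuilding the image (docker-compose up --build) so the jar does not carry a stale application.properties. A sketch of the two equivalent forms:

```yaml
environment:
  POSTGRES_HOST: db        # map syntax
# or equivalently:
# environment:
#   - POSTGRES_HOST=db     # list syntax
```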

Cannot produce kafka message on kubernetes

I'm getting error on kafka: [2020-05-04 12:46:59,477] ERROR [KafkaApi-1001] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
And error when I'm trying to produce message: 2020-05-04 12:47:45.221 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Error while fetching metadata with correlation id 10 : {activate-user=LEADER_NOT_AVAILABLE}
Using docker-compose everything works fine, but I'm also trying to move it to k8s. I started that process with the kompose convert tool and modified the output.
Here is a fragment of the docker-compose:
zookeeper:
  container_name: zookeeper
  image: wurstmeister/zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
  ports:
    - "2181:2181"
mail-sender-kafka:
  container_name: mail-sender-kafka
  image: wurstmeister/kafka:2.12-2.2.1
  environment:
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_HOST_NAME: mail-sender-kafka
    KAFKA_CREATE_TOPICS: "activate-user:1:1"
  ports:
    - 9092:9092
  depends_on:
    - zookeeper
account-service:
  image: szastarek/food-delivery-account-service:${TAG}
  container_name: account-service
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
    - account-service-db
mail-sender:
  image: szastarek/food-delivery-mail-sender:${TAG}
  container_name: mail-sender
  environment:
    - KAFKA_URI=mail-sender-kafka:9092
  depends_on:
    - config-server
After converting it to k8s I've got zookeeper-deployment, zookeeper-service, mail-sender-deployment, mail-sender-kafka-deployment, mail-sender-kafka-service.
I've also tried to add some env variables and for now, it looks like that:
spec:
  containers:
    - env:
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: mail-sender-kafka
        - name: KAFKA_ADVERTISED_PORT
          value: '9092'
        - name: ADVERTISED_LISTENERS
          value: PLAINTEXT://mail-sender-kafka:9092
        - name: KAFKA_CREATE_TOPICS
          value: activate-user:1:1
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper:2181
I've found one thing that is probably connected to the problem.
When I run ping mail-sender-kafka in Docker, the container can reach itself. But when I connect to the Kubernetes mail-sender-kafka pod, it cannot ping itself.
After updating the hosts file it works. There was an entry like:
172.18.0.24 mail-sender-kafka-xxxxxxx
and I changed it to:
172.18.0.24 mail-sender-kafka
Any tips on how I should fix this?
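No answer is recorded for this question here, but the symptom (the pod cannot resolve the hostname it advertises) usually points at cluster DNS rather than the hosts file. A common approach is to make sure a Service whose name exactly matches KAFKA_ADVERTISED_HOST_NAME exists and selects the broker pod, so the name resolves cluster-wide, including from inside the broker pod itself. A sketch (the selector label is an assumption about what kompose generated):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mail-sender-kafka    # must match KAFKA_ADVERTISED_HOST_NAME
spec:
  selector:
    app: mail-sender-kafka   # assumed pod label from kompose convert
  ports:
    - port: 9092
      targetPort: 9092
```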

Can't access spring-boot rest-endpoint inside docker container when using docker-compose

I'm setting up Spring Boot applications inside Docker with docker-compose and need to access a REST endpoint of one application on port 8080 from my localhost. The following endpoint works fine when the application is started locally: http://localhost:8080/central/products.
I'm on Ubuntu 19.10 running Docker 18.09.5. When I set up a simple Spring Boot application for Docker as explained at https://spring.io/guides/gs/spring-boot-docker/, everything works as expected and I can reach the endpoint at http://localhost:8080/. However, when I start more services with docker-compose, I'm not able to reach this endpoint from my localhost.
Dockerfile for building the spring-boot application:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","sample.EfridgeCentralApplication"]
docker-compose.yml file that seems to cause the problem:
version: '3.7'
services:
  central-london:
    image: demo/efridge-central:latest
    container_name: central-london
    ports:
      - 8080:8080
    environment:
      - SERVER_PORT=8080
      - SPRING_PROFILES_ACTIVE=dev
      - SPRING_DATA_MONGODB_HOST=mongo-central
      - SPRING_DATA_MONGODB_PORT=27017
      - APP_RABBIT_HOSTNAME=rabbit-efridge
  factory-usa:
    image: demo/efridge-factory:latest
    container_name: factory-usa
    ports:
      - 8081:8081
    environment:
      - SERVER_PORT=8081
      - SPRING_PROFILES_ACTIVE=usa
      - SPRING_DATA_MONGODB_HOST=mongo-usa
      - SPRING_DATA_MONGODB_PORT=27017
      - APP_RABBIT_HOSTNAME=rabbit-efridge
  factory-china:
    image: demo/efridge-factory:latest
    container_name: factory-china
    ports:
      - 8082:8082
    environment:
      - SERVER_PORT=8082
      - SPRING_PROFILES_ACTIVE=china
      - SPRING_DATA_MONGODB_HOST=mongo-china
      - SPRING_DATA_MONGODB_PORT=27017
      - APP_RABBIT_HOSTNAME=rabbit-efridge
  mongo-central:
    image: mongo:latest
    container_name: mongo-central
    hostname: mongo-central
    ports:
      - 27017:27017
  mongo-usa:
    image: mongo:latest
    container_name: mongo-usa
    hostname: mongo-usa
    ports:
      - 27018:27017
  mongo-china:
    image: mongo:latest
    container_name: mongo-china
    hostname: mongo-china
    ports:
      - 27019:27017
  rabbit-efridge:
    image: rabbitmq:3-management
    container_name: rabbit-efridge
    hostname: rabbit-efridge
    ports:
      - 15672:15672
      - 5672:5672
Output from docker inspect:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "b91760f810a656e382d702dd408afe3c5ffcdf4c0cd15ea8550150867ac038cc",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8080"
}
]
}
Logs from Spring Boot:
2019-07-03 11:54:57.654 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-07-03 11:54:57.803 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-07-03 11:54:57.804 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.21]
2019-07-03 11:54:58.149 INFO 1 --- [ main] o.a.c.c.C.[.[localhost].[/central] : Initializing Spring embedded WebApplicationContext
2019-07-03 11:54:58.150 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 6950 ms
2019-07-03 11:54:59.810 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[mongo-central:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2019-07-03 11:54:59.810 INFO 1 --- [ main] org.mongodb.driver.cluster : Adding discovered server mongo-central:27017 to client view of cluster
2019-07-03 11:55:00.256 INFO 1 --- [o-central:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:11}] to mongo-central:27017
The output from docker inspect for the working Spring Boot container and the non-working one looks almost the same. I can also access the RabbitMQ web interface and MongoDB via the mongo client. The only thing that does not work is access to the REST endpoint at http://localhost:8080/central/products.
Your Dockerfile is lacking an EXPOSE statement, therefore no ports are exposed to the outside world.
Once you add EXPOSE 8080 to the bottom of your Dockerfile, your app will be reachable from outside the container.
The problem was caused by the application.properties file, where I still had server.address=localhost specified.
Removing this line solved the problem!
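In other words, the app was binding only to the loopback interface inside the container, so the published port had nothing to forward to. A sketch of the relevant application.properties line and its fix:

```properties
# server.address=localhost   # binds only to loopback *inside* the container
server.address=0.0.0.0       # bind to all interfaces (or simply omit the property)
```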
