I have two simple images:
#Angular image
FROM node:12.2.0
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install
RUN npm install -g @angular/cli@7.3.9
COPY . /app
CMD ng serve
# Java spring (REST) image
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY ./target/api-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar", "app.jar"]
And my docker-compose:
version: '3'
services:
  web_app_speech:
    image: web_app_speech
    restart: always
    ports:
      - "4300:4200"
    depends_on:
      - api_speech_docker
  api_speech_docker:
    image: api_speech_docker
    ports:
      - "8080:8080"
    restart: always
$> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15c719d31861 web_app_speech "/bin/sh -c 'ng serv…" 9 minutes ago Up 9 minutes 0.0.0.0:4300->4200/tcp azure_web_app_speech_1
044fd15f07e4 api_speech_docker "java -Djava.securit…" 10 minutes ago Up 9 minutes 0.0.0.0:8080->8080/tcp azure_api_speech_docker_1
I can access my REST API from localhost:8080 and my web app from localhost:4300 without a problem, but when I try to perform a call from my web app to my REST API, I get the following error:
OPTIONS http://api_speech_docker:8080/speech net::ERR_NAME_NOT_RESOLVED
I have no idea how to fix this; if you need more logs, tell me!
Thanks for your help 🙏
To my understanding, your REST API is being called from your browser, which cannot resolve the Docker service names. For different applications, I have used the following solutions:
Use localhost: http://localhost:8080/speech
Works great for locally hosted projects
If hosting on the cloud, may cause other errors
or
Hit the endpoint through a public IP: http://13.192.123.12:8080/speech
Only works if the IP address does not change
If hosting on the cloud, inbound traffic through port 8080 must be allowed as well
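To see why the name only fails in the browser, here is a quick check (a sketch; it assumes curl is available in the node image and that both services are on the default compose network):
# From the host, where the browser runs: only published ports are reachable, and the host has no DNS entry for compose service names
curl http://localhost:8080/speech
# From inside the web container, the service name does resolve
docker-compose exec web_app_speech curl http://api_speech_docker:8080/speech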
Related
I was given a multi-step task and I'm stuck!!
I'm trying to connect my Java container to my MySQL container, but I'm getting a 503 error:
HTTP ERROR 503
Problem accessing /. Reason:
Service Unavailable
docker-compose file :
version: "3.3"
services:
lavagna:
build: .
ports:
- "8080:8080"
networks:
- back_net
depends_on:
- my_db
environment:
spring.datasource.url: "jdbc:mysql://my-db:3306/lavagna"
my_db:
image: mysql:5.7
ports:
- "3306:3306"
networks:
- back_net
volumes:
- $PWD/mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 123
MYSQL_USER: eyal
MYSQL_PASSWORD: 123
networks:
back_net:
driver: bridge
I got the Java source files; I just used Maven locally to build them and used the target directory for the Java Dockerfile.
Java app Dockerfile:
FROM openjdk:8-jre-alpine
EXPOSE 8080
COPY ./target/. .
COPY ./entrypoint.sh .
ENV DB_DIALECT MYSQL
ENV DB_URL jdbc:mysql://localhost:3306/lavagna
ENV DB_USER "root"
ENV DB_PASS "123"
ENV SPRING_PROFILE dev
RUN apk update \
&& apk add ca-certificates \
&& update-ca-certificates && apk add openssl
RUN chmod 774 entrypoint.sh
ENTRYPOINT [ "./entrypoint.sh" ]
I think you need a combination of comments and answers given already. Your containers are on the same network, so it appears to boil down to configuration.
In your Dockerfile, update your DB_URL to:
ENV DB_URL jdbc:mysql://my_db:3306/lavagna
If you use localhost, your container will loop back to itself and never hit the network.
In your docker-compose.yml file, you have a typo in the URL; try updating it to:
spring.datasource.url: "jdbc:mysql://my_db:3306/lavagna"
As an aside, using depends_on does not wait for the service to be ready. It simply dictates start order as the documentation states:
There are several things to be aware of when using depends_on:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready...
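If you do need to wait, a simple retry loop in the entrypoint is one option. Here is a sketch of an entrypoint.sh (it assumes BusyBox nc, which openjdk:8-jre-alpine provides; the final java line is a placeholder for however entrypoint.sh actually starts the app):
#!/bin/sh
# Wait until MySQL accepts TCP connections before starting the application
until nc -z my_db 3306; do
  echo "waiting for my_db:3306..."
  sleep 2
done
exec java -jar your-app.jar   # placeholder: replace with the real start command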
Even though I'm setting, in the application properties,
spring.data.mongodb.host=api-database4
as the hostname, which is the container name and hostname of MongoDB in the docker-compose file, the Spring app still can't connect to the MongoDB instance. I can, however, connect from MongoDB Compass to localhost:27030, but not to mongodb://api-database4:27030/messagingServiceDb.
My docker-compose file:
version: '3'
services:
  messaging-api6:
    container_name: 'messaging-api6'
    build: ./messaging-api
    restart: always
    ports:
      - 8085:8080
    depends_on:
      - api-database4
    networks:
      - shared-net
  api-database4:
    image: mongo
    container_name: api-database4
    hostname: api-database4
    restart: always
    ports:
      - 27030:27017
    networks:
      - shared-net
    command: mongod --bind_ip_all
networks:
  shared-net:
    driver: bridge
and my Dockerfile for the Spring app is:
FROM openjdk:12-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my application.properties is:
#Local MongoDB config
spring.data.mongodb.database=messagingServiceDb
spring.data.mongodb.port=27030
spring.data.mongodb.host=api-database4
Entire code can be seen here.
How can I make my Spring app in a Docker container connect to the MongoDB instance that is in another Docker container?
I have tried the solutions from similar questions and replicated them; it still gives the same error.
Edit and Solution:
I solved the issue by commenting out the database line, as shown in the configuration below:
#Local MongoDB config
#spring.data.mongodb.database=messagingServiceDb
spring.data.mongodb.host=api-database4
spring.data.mongodb.port=27030
The remaining question is: why? That was the correct port that I was trying to connect to. Could it be related to the configuration order?
The ports directive in docker-compose publishes container ports to the host machine. Containers communicate with each other on the container (exposed) ports, not on the published host ports. You can test whether one container can reach another with netcat:
docker exec -it messaging-api6 sh   # the image is Alpine-based, so use sh; BusyBox already provides nc
> nc -z -v api-database4 27030   # published host port: fails from inside the network
> nc -z -v api-database4 27017   # container port: succeeds over the shared network
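In other words, container-to-container traffic should target the container port (27017), not the published host port (27030). A sketch of application.properties that would match the compose file above, assuming the default compose DNS:
spring.data.mongodb.host=api-database4
spring.data.mongodb.port=27017
spring.data.mongodb.database=messagingServiceDb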
This question already has answers here:
Docker port forwarding not working
(11 answers)
Closed 2 years ago.
I have a problem with dockerizing a Spring Boot app. My docker-compose project consists of 4 parts:
back - it's just a Spring Boot application with Tomcat on 8080. It holds the controllers for the front app.
front - Nginx + Angular
core - mainly a TCP server, implemented with plain Java sockets, that receives data and writes it to the database in the database app.
database - Postgres, which I just pull from Docker Hub and configure to create the database needed by the back application.
My goal is to use my front app, which is open in the browser on the host machine, to manipulate data in the database app's database through the controllers of my back app.
So, I don't have any problems with building and running. Port mapping for the core, database, and front apps works perfectly, but not for back: I have no access to the back container from localhost:8080 on the host (curl requests from the host to the container return an empty response, but curl inside the container works fine). In the back app I use Spring Security, so CORS is configured to allow all requests and CSRF is disabled, if that matters.
I sincerely apologize for my broken English!
Back Dockerfile
FROM maven:3.5-jdk-8 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM gcr.io/distroless/java
ARG JAR_FILE=target/*.jar
COPY --from=build /usr/src/app/${JAR_FILE} /usr/app/back.jar
ENTRYPOINT ["java","-jar","/usr/app/back.jar"]
Core Dockerfile
FROM maven:3.5-jdk-8 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package
FROM gcr.io/distroless/java
ARG JAR_FILE=target/*.jar
COPY --from=build /usr/src/app/${JAR_FILE} /usr/app/core.jar
ENTRYPOINT ["java","-jar","/usr/app/core.jar"]
Front Dockerfile
FROM node:12 as builder
COPY package.json package-lock.json ./
RUN npm install && mkdir /app && mv ./node_modules ./app
WORKDIR /app
COPY . .
RUN npm run ng build -- --deploy-url=/ --prod
FROM nginx
COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist/snsr-front-app /usr/share/nginx/html
ENTRYPOINT ["nginx", "-g", "daemon off;"]
UPDATE 1:
Dockerfile(s) are still the same.
docker-compose.yml
version: '3'
services:
  snsr-front-app:
    build: ./snsr-front-app
    ports:
      - 4200:80
    depends_on:
      - snsr-back-app
    image: mxmtrms/snsr-front-app
    networks:
      - front-net
  snsr-back-app:
    build: ./snsr-back-app
    depends_on:
      - database
    image: mxmtrms/snsr-back-app
    networks:
      - back-net
      - front-net
    expose:
      - 8080
    environment:
      DB_URL: database
      DB_PORT: 5432
  snsr-core-app:
    build: ./snsr-core-app
    ports:
      - 3000:3000
    depends_on:
      - database
    image: mxmtrms/snsr-core-app
    networks:
      - back-net
  database:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: masterkey
      POSTGRES_DB: snsr
    ports:
      - 5432:5432
    networks:
      - back-net
networks:
  back-net:
  front-net:
nginx.conf
worker_processes 4;
events { worker_connections 1024; }
http {
    upstream frontend {
        server 0.0.0.0:80;
    }
    upstream backend {
        server snsr-back-app:8080;
    }
    server {
        listen 80;
        root /usr/share/nginx/html;
        include /etc/nginx/mime.types;
        location / {
            proxy_pass http://frontend;
            try_files $uri /index.html;
        }
        location /api {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://backend;
        }
    }
}
UPDATE 2
Backend logs: https://gist.github.com/mxmtrms/ff12e2481d0ccc2781f15a961de6eab9
docker ps:
https://gist.github.com/mxmtrms/2baaadc0e4873fc8bb28453d5c6d04f4
The frontend serves static content; when you open your site in the browser it works fine, because the frontend port is published via nginx. But when you perform any action, it calls the backend URL, and if the backend URL is not reachable (or proxied), the request cannot be made.
So you need to define your backend upstream in nginx so the frontend can reach it.
I hope it may help you.
Also, my English is not that good, so sorry for that :)
You can define the nginx configuration in your default.conf like this:
server {
    server_name hostname.com;  # basically the host URL where your application is running
    location / {
        proxy_pass http://frontend;  # this upstream is defined in a separate file, upstream.conf
    }
    location /api {  # assuming all backend REST endpoints start with /api
        proxy_pass http://backend;  # this upstream is defined in a separate file, upstream.conf
    }
}
Then you need to create a new file in the same directory, upstream.conf, and put in the given configuration:
upstream frontend {
    server 0.0.0.0:8091;
}
upstream backend {
    server 0.0.0.0:8081;
}
Finally, in your Angular environment file, set apiUrl to http://hostname.com/api.
I hope it will help.
The solution to the problem was to change server.host in the back container from localhost to 0.0.0.0, since when running docker ps you can see that this is the address Docker forwards the published port to.
This is also said here: https://stackoverflow.com/a/57427805/12305316
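For reference, a minimal sketch of that change done through docker-compose (assuming 'server.host' refers to Spring Boot's server.address property, which relaxed binding lets you set via an environment variable):
snsr-back-app:
  environment:
    SERVER_ADDRESS: 0.0.0.0   # Spring Boot's server.address; binds the embedded server to all interfaces
  ports:
    - 8080:8080               # only needed if the port should also be reachable directly from the host; expose alone does not publish it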
I want two Docker containers to be able to communicate with each other on a Windows machine running Docker Toolbox. I am able to link the containers using the --link option; however, if I try to run the containers on a custom bridge network that I created, the containers are unable to communicate with each other.
Here are the steps I followed :
docker network create web-application-mysql-network
docker run --detach --env MYSQL_ROOT_PASSWORD=somepassword --env MYSQL_USER=some-user --env MYSQL_PASSWORD=pass --env MYSQL_DATABASE=mydb --name mysql --publish 3306:3306 --network=web-application-mysql-network mysql:5.7
docker run -p 8080:8080 -d --network=web-application-mysql-network myrepo/mywebapp:0.0.1-SNAPSHOT
The image in the last command above uses the Tomcat web server Docker image as the base image and contains a WAR (web archive) file that is hosted in Tomcat. When I check the logs for the container started by the last command, I can see the following errors:
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I am able to link the two containers without any issues if I use the --link option instead of running them on my custom bridge network.
Additional info: I am using localhost in my web app code for the MySQL URL. This seemed to work fine when using --link.
What configuration/command parameters am I missing to make this work?
When you're using a user-defined network, you should use the name of the container you want to connect to in the URL. In other words, you have to use mysql (not localhost) in mywebapp to reach the DB.
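For instance, a quick sketch of verifying this (the container name placeholder stands for whatever the second docker run created, and it assumes getent is present in the Tomcat base image):
docker exec -it <mywebapp-container> getent hosts mysql   # the container name resolves on the custom bridge network
# The JDBC URL in the web app would then be jdbc:mysql://mysql:3306/mydb instead of pointing at localhost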
I'd suggest you take a look at docker-compose, since it allows you to avoid creating the network manually.
Here's an example:
version: "3"
services:
mysql:
image: mysql:5.7
env_file:
- db.env
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER:-user}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: "mydb"
volumes:
- dbdata:/var/lib/mysql
mywebapp:
image: myrepo/mywebapp:${TAG_VERSION:-0.0.1-SNAPSHOT}
build:
context: ./mywebapp_location
dockerfile: Dockerfile
ports:
- "8080:8080"
volumes:
dbdata:
db.env:
MYSQL_ROOT_PASSWORD=mysql_root_password
MYSQL_USER=the_user
MYSQL_PASSWORD=the_user_password
To build you can simply execute:
docker-compose build
and to start simply:
docker-compose up
For the rest, you can use the normal docker commands.
I am new to Docker and have a simple DW (Dropwizard) application that connects to Elasticsearch, which is already running in Docker using the docker-compose.yml below.
docker-compose.yml for Elasticsearch
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ports:
      - 8200:9200
      - 8300:9300
volumes:
  elasticsearch-data:
    driver: local
Note: I am publishing 8200 and 8300 as the ES ports on my host (a local Mac).
Everything works fine when I simply run my DW application, which connects to ES on localhost:8200, but now I am trying to dockerize my DW application and am facing a few issues.
Below is my Dockerfile for DW application
COPY target/my.jar my.jar
COPY config.yml config.yml
ENTRYPOINT ["java" , "-jar" , "my.jar", "server", "config.yml"]
When I run my DW Docker image above, it stops immediately; using docker logs <my-container-id>, I see the exception below:
java.io.IOException: elasticsearch: Name does not resolve
org.elasticsearch.client.IndicesClient.exists(IndicesClient.java:827)
Caused by: java.net.UnknownHostException: elasticsearch: Name does not resolve
Things I have tried
The error message clearly shows that my DW app's Docker instance is not able to resolve elasticsearch, even though I verified that Elasticsearch itself is running fine.
I also checked the network of the Elasticsearch container: it has the network alias elasticsearch, as shown below, and the network is docker-files_default.
"Aliases": [
"elasticsearch",
"de78c684ae60"
],
I checked the network of my DW app container: it uses the default bridge network and doesn't have any network alias.
Now, how can I make both my app container and the Elasticsearch container use the same network so that they can connect to each other? I guess this would solve the issue.
There are two ways to solve this. The first is to check which network docker-compose created for your Elasticsearch setup (docker network ls) and then run your DW app with:
docker run --network=<name of network> ...
The second way is to create a network (docker network create elastic) and use it as an external network in your docker-compose file as well as in your docker run command for the DW app.
The docker-compose file could then look like:
...
services:
  elasticsearch:
    networks:
      elastic:
    ...
networks:
  elastic:
    external: true
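The DW container is then started on that same external network; here is a sketch (the image name is a placeholder, and it assumes config.yml points at the container port 9200, since 8200 is only the host-published port):
docker network create elastic                 # run once, for the external-network approach
docker run --network=elastic my-dw-image      # placeholder image name
# In config.yml, the Elasticsearch host would then be elasticsearch:9200 instead of localhost:8200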