How to use docker-compose from another person?

I'm pretty new to the world of Docker, so I have the following scenario:
a Spring Boot application, which depends on
PostgreSQL,
and a frontend requesting data from them.
The Dockerfile in the Spring Boot app is:
EXPOSE 8080
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
And the content of the docker-compose.yaml is:
version: '3'
services:
  app:
    image: <user>/<repo>
    build: .
    ports:
      - "8080:8080"
    container_name: app_test
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/test
      - SPRING_DATASOURCE_USERNAME=test
      - SPRING_DATASOURCE_PASSWORD=test
  db:
    image: 'postgres:13.1-alpine'
    restart: always
    expose:
      - 5432
    ports:
      - "5433:5432"
    container_name: db_test
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
    volumes:
      - db:/var/lib/postgresql/data
      - ./create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql
      - ./fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
volumes:
  db:
    driver: local
As far as I understand, all that's required to run the whole thing is to type docker-compose up and voila, it works. It pulls the image for the app from the Docker Hub repo, and does the same for the database image.
Here comes the thing. I'm working with another guy (front end) whose goal is to make requests to this API. Is it enough for him to just copy-paste this docker-compose.yaml file and run docker-compose up, or is there something else to be done?
How should docker-compose be used in teamwork?
Thanks in advance! If I need to make anything clearer, leave a comment.

Your colleague will need:
The docker-compose.yml file itself
Any local files or directories named on the left-hand side of volumes: bind mounts
Any directories named in build: (or build: { context: }) lines, if the images aren't pushed to a registry
Any data content contained in a named volume that isn't automatically recreated
If they have the docker-compose.yml file they can docker-compose pull the images named there, and Docker won't try to rebuild them.
Named volumes are difficult to transfer between systems; see the Docker documentation on saving and restoring volumes. Bind-mounted host directories are easier to transfer, but are much slower on non-Linux hosts. Avoid using volumes for parts of your application code, including the Node library directory or static assets.
For this setup in particular, the one change I might consider is using the postgres image's environment variables to create the database, and then using your application's database migration system to create tables and seed data. That would avoid needing the two .sql files. Beyond that, the only thing they need is the docker-compose.yml file.
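To see concretely which host files fall into the bind-mount category above, the mount sources can be pulled straight out of the compose file. A small sketch, reproducing the db service's volumes list from the question and assuming mounts are written in the ./path:container-path form:

```shell
#!/bin/sh
# Recreate the volumes list from the question's db service, then print the
# relative host paths a teammate must receive alongside the YAML file.
cat > compose-snippet.yaml <<'EOF'
volumes:
  - db:/var/lib/postgresql/data
  - ./create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql
  - ./fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
EOF
# Named volumes (like "db") need no file copy; "./" entries do.
grep -o '\./[^:]*' compose-snippet.yaml
```

Here the grep prints ./create-tables.sql and ./fill_tables.sql, which is exactly the list of extra files the colleague needs.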

Because of the build: . key in the app service of your docker-compose file, running docker-compose up will look for the backend Dockerfile and build the image, so your teammate needs all the files you wrote.
Another solution, which in my view is better, would be to build the image yourself and push it to Docker Hub, so your teammate can simply pull it from there and run it on their system.
In case you're not familiar with Docker Hub, read its quick-start guide.
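If you take the registry route, the round trip might look like this (a sketch only; <user>/<repo> is the placeholder repository name from the compose file above, and both machines need a docker login for a private repository):

```shell
# You (backend): build the image from the Dockerfile and publish it.
docker build -t <user>/<repo>:latest .
docker push <user>/<repo>:latest

# Teammate: with only docker-compose.yaml on disk, fetch and run.
docker-compose pull
docker-compose up -d
```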

Related

Development version of docker-compose.yml in Spring Boot?

In my Spring Boot app, I have a docker-compose.yml file that is used for Dockerizing my app together with a Dockerfile. On the other hand, for local development, other developers would also need a docker-compose.yml file for creating MySQL in a local Docker environment.
For this kind of general situation, what is the proper way to provide the configuration for local development besides Dockerizing? I looked, but there seems to be no docker-compose-dev.yml usage as far as I can see. So what should I do? Where should I keep my Compose config? I think I can use whatever file name I like, but in that case should it live in a different location than the default one?
Name the developer-oriented file docker-compose.override.yml. If this file is present, Compose will use its settings to augment and override the settings in the main docker-compose.yml. See Multiple Compose files in the Docker documentation for more details.
Typically if you have this file, you'll do pre-production testing and deployment using only the main docker-compose.yml file but not the override file. This works better if the settings in the override file are purely additive.
You might have a production-oriented Compose file like, for example,
# docker-compose.yml
version: '3.8'
services:
  app:
    image: registry.example.com/app:${APP_TAG:-latest}
    environment: [MYSQL_HOST, MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD]
where most of the interesting settings are injected from host environment variables and the container is expected to talk to an external database. In development, though, you might want to build this image from source and run a database as part of the setup:
# docker-compose.override.yml
version: '3.8'
services:
  app:
    build: .
    environment:
      MYSQL_HOST: mysql
      ET: cetera
  mysql:
    image: mysql
    volumes: ['mysql:/var/lib/mysql']
    environment: {...}
volumes:
  mysql:
The settings from the two files are combined, so if you run
APP_TAG=20221109 docker-compose build
you'll wind up with an image named registry.example.com/app:20221109, for example.
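Sketched as commands, the split described above looks like this (file names as in this answer; docker-compose picks up the override file automatically only when no -f flags are given):

```shell
# Development: Compose layers docker-compose.override.yml on top of
# docker-compose.yml automatically.
docker-compose up --build

# Pre-production / deployment: name the main file explicitly so the
# developer-oriented override file is ignored.
docker-compose -f docker-compose.yml up -d
```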

Spring/Hibernate app in docker doesn't remove columns

I am new to Docker, and it is easy to get confused about some things. Here is my question. I was working on my Spring Boot app and creating entities for the DB. I found out that when I remove a column from an entity, the column isn't removed after rebuilding the container (docker-compose up --build). But when I add a new column, it is created after rebuilding.
After that, I tried to remove all unused images and containers by running docker system prune. And surprisingly, after once again running docker-compose up --build, the column was removed from the db.
Is this expected, or can it be changed somehow?
I'm gonna add my docker files. Maybe the problem is somewhere there.
Dockerfile:
FROM maven:3.5-jdk-11 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package -DskipTests
FROM adoptopenjdk:11-jre-hotspot
ARG JAR_FILE=*.jar
COPY --from=build /usr/src/app/target/restaurant-microservices.jar /usr/app/restaurant-microservices.jar
ENTRYPOINT ["java","-jar","/usr/app/restaurant-microservices.jar"]
docker-compose.yml:
version: '3'
services:
  db:
    image: 'postgres:13'
    container_name: db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=restaurant
    ports:
      - "5432:5432"
  app:
    build: .
    container_name: app
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/restaurant
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
Upd: I tried the same thing with controllers, and it creates and removes controllers without any issues.
I found the problem. It is all about this line in docker-compose.yml (and application.properties):
SPRING_JPA_HIBERNATE_DDL_AUTO=update
The update operation will attempt to add new columns, constraints, etc., but will never remove a column or constraint that existed previously but is no longer part of the object model (found here: How does spring.jpa.hibernate.ddl-auto property exactly work in Spring?). So when I changed update to create-drop (not sure if it is the optimal option), it started working as expected.
Oh, I also didn't know that. Thank you for solving this.
It is a bit weird, because 'update' sounds like it would apply any change.
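For reference, a hedged summary of the ddl-auto values (behaviour as commonly documented for Hibernate; for production schemas a migration tool such as Flyway or Liquibase together with validate is usually a safer choice than create-drop, which wipes data):

```properties
# application.properties (or the SPRING_JPA_HIBERNATE_DDL_AUTO env variable)
# none        - make no changes to the schema
# validate    - only verify that tables/columns match the entities
# update      - add missing tables/columns, never drop anything
# create      - drop and recreate the schema on startup
# create-drop - like create, and also drop the schema on shutdown
spring.jpa.hibernate.ddl-auto=validate
```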

How to pass gitlab-ci secrets via docker-compose to spring-boot application?

I have a CI/CD pipeline in GitLab that uses GitLab secrets to pass values to a Spring Boot application as follows:
docker-compose.yml:
services:
  my-app:
    image: my-app-image:latest
    environment:
      - SPRING_PROFILES_ACTIVE=test
      - app.welcome.secret=$APP_WELCOME_SECRET
Of course, I defined $APP_WELCOME_SECRET as a variable in GitLab's CI/CD variables configuration page.
Problem: running the GitLab CI pipeline results in:
The APP_WELCOME_SECRET variable is not set. Defaulting to a blank string.
But why?
I could solve it by writing the secret value into a .env file as follows:
gitlab-ci.yml:
deploy:
  stage: deploy
  script:
    - touch .env
    - echo "APP_WELCOME_SECRET=$APP_WELCOME_SECRET" >> .env
    - scp -i $SSH_KEY docker-compose.yml .env user@$MY_SERVER:~/deployment/app
While that works, I would still be interested in a better approach.
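The workaround works because docker-compose reads variable substitutions from a .env file sitting next to docker-compose.yml. A minimal sketch of the file being produced (the secret value is a placeholder here; in the real pipeline it comes from the GitLab CI/CD variable):

```shell
#!/bin/sh
# Simulate the deploy job's script step: write the CI variable into .env so
# docker-compose on the server can substitute $APP_WELCOME_SECRET.
APP_WELCOME_SECRET='placeholder-value'   # injected by GitLab CI in practice
echo "APP_WELCOME_SECRET=$APP_WELCOME_SECRET" > .env
cat .env   # prints: APP_WELCOME_SECRET=placeholder-value
```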

DOCKER error: Pull access denied for coffeeteareadb, repository does not exist or may require 'docker login'

I have a Spring MVC application that I want to add to Docker. I created the image and configured Docker, but the application in Docker does not want to start. In the application I use Spring Boot and a PostgreSQL database.
Dockerfile:
FROM openjdk:11
ADD build/libs/Coffeetearea-0.0.1-SNAPSHOT.jar Coffeetearea-0.0.1-SNAPSHOT.jar
#EXPOSE 8080:8080
ENTRYPOINT ["java", "-jar", "Coffeetearea-0.0.1-SNAPSHOT.jar"]
docker-compose.yml:
version: '3.1'
services:
  app:
    container_name: coffeetearea
    image: coffeeteareaimage
    build: ./
    ports:
      - "8080:8080"
    depends_on:
      - coffeeteareadb
  coffeeteareadb:
    image: coffeeteareadb
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=pass123
      - POSTGRES_USER=postgres
      - POSTGRES_DB=coffeetearea
application.properties:
# Database
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://coffeeteareadb:5432/coffeetearea
spring.datasource.username=postgres
spring.datasource.password=pass123
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=validate
MY STEPS in TERMINAL:
C:\Users\vartanyan\IdeaProjects\Coffeetearea>docker-compose up
Creating network "coffeetearea_default" with the default driver
Pulling coffeeteareadb (coffeeteareadb:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN] y
Pulling coffeeteareadb (coffeeteareadb:)...
ERROR: pull access denied for coffeeteareadb, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
The error means that Docker cannot find an image named coffeeteareadb locally or on https://hub.docker.com/ . If your image is in a private repository (meaning that someone in your party has already created it), you have to log Docker into that repository first. Note that for a private registry, the image name should look like a URL: registry.example.com/image-name:tag.
If you want coffeeteareadb to be a regular PostgreSQL database, you probably want to change the image here:
coffeeteareadb:
  image: postgres:13 # https://hub.docker.com/_/postgres
If you are new to Docker: an image is like an executable or binary file, while a container is something like a running process of that executable. An image consists of several incremental layers of data: an application, dependencies, some basic config, etc. When you run an image you create a container; a container is an instance of an image. It differs from the image in that, apart from the application itself, it holds information on how to run it (which ports to map, which volumes to mount, etc.). There can be many containers using the same image. So when you are asked to specify an image, you basically need to tell Docker what application you want to use. Docker will look for it locally and on the Hub, download it, and create a container from it. If you want to create your own image, you need a Dockerfile (see the reference at https://docs.docker.com/engine/reference/builder/ ).
It seems like your image (coffeeteareadb) doesn't exist on Docker Hub, so Docker can't pull it.
Build it locally, like coffeeteareaimage (build: buildpath).
Or use a PostgreSQL image that is on Docker Hub (image: postgres:latest).
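Putting the suggestions together, the database service might look like this (a sketch reusing the credentials from the question, with the official postgres image swapped in):

```yaml
coffeeteareadb:
  image: postgres:13   # official image from Docker Hub
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=pass123
    - POSTGRES_USER=postgres
    - POSTGRES_DB=coffeetearea
```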

Spring Boot + docker-compose + MySQL: Connection refused

I'm trying to set up a Spring Boot application that depends on a MySQL database called teste in docker-compose. After issuing docker-compose up, I'm getting:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
I'm running on Linux Mint, my docker-compose version is 1.23.2, my Docker version is 18.09.0.
application.properties
# JPA PROPS
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy
spring.datasource.url=jdbc:mysql://db:3306/teste?useSSL=false&serverTimezone=UTC
spring.datasource.username=rafael
spring.datasource.password=password
spring.database.driverClassName=com.mysql.cj.jdbc.Driver
docker-compose.yml
version: '3.5'
services:
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=rootpass
      - MYSQL_DATABASE=teste
      - MYSQL_USER=rafael
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
  web:
    image: spring-mysql
    depends_on:
      - db
    links:
      - db
    ports:
      - 8080:8080
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=rafael
      - DATABASE_NAME=teste
      - DATABASE_PORT=3306
and the Dockerfile
FROM openjdk:8
ADD target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
Docker Compose always starts and stops containers in dependency order, or in the order they appear in the file if no dependencies are given. But docker-compose does not guarantee that it will wait until a dependency container is actually ready; see the Compose documentation on controlling startup order for further details. So the problem here is that your database is not ready when your spring-mysql container tries to access it. The recommended solution is to use wait-for-it.sh or a similar script to wrap your spring-mysql app's starting ENTRYPOINT.
As example if you use wait-for-it.sh your ENTRYPOINT in your Dockerfile should change to following after copying above script to your project root:
ENTRYPOINT ["./wait-for-it.sh", "db:3306", "--", "java", "-jar", "app.jar"]
And two other important things to consider here:
Do not use links; they are deprecated, and you should use a user-defined network instead. All services in a docker-compose file are placed on a single user-defined network if you don't explicitly define any, so you just have to remove links from the compose file.
You don't need to publish a container's port if you only use it inside the user-defined network.
I was facing the same issue and in case you do not want to use any custom scripts, this can easily be resolved using health checks along with depends on. A sample using these is as follows:
services:
  mysql-db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=vikas1234
      - MYSQL_USER=vikas
    ports:
      - 3306:3306
    restart: always
    healthcheck:
      test: [ "CMD", "mysqladmin", "ping", "-h", "localhost" ]
      timeout: 20s
      retries: 10
  app:
    image: shop-keeper
    container_name: shop-keeper-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    depends_on:
      mysql-db:
        condition: service_healthy
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-db:3306/shopkeeper?createDatabaseIfNotExist=true
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: vikas1234
Your config looks nice; I would just recommend:
Remove links: db. It has no value in user-defined bridge networking.
Remove the port mapping for db unless you want to connect from outside docker-compose; inside the user-defined bridge network, containers can reach each other on any port automatically.
I think the problem is that the database container takes more time to start than web. depends_on only controls start order; it does not guarantee database readiness. If possible, configure several connection attempts or add a socket-wait procedure in your web container.
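One low-tech way to get several connection attempts without touching the application code is a restart policy, so Docker simply re-runs the web container until the database accepts connections (a sketch; the healthcheck-based depends_on shown in the earlier answer is the more precise fix):

```yaml
web:
  image: spring-mysql
  restart: on-failure   # re-run the app until the database is reachable
  depends_on:
    - db
```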
