I am new to Docker, and it is easy to get confused about some things. Here is my question: I am working on my Spring Boot app and was creating entities for the DB. I found out that when I remove a column from an entity, after rebuilding the container (docker-compose up --build) the column isn't removed from the database. But when I add a new column, after rebuilding the container the new column is created.
After that, I tried to remove all unused images and containers by running docker system prune, and surprisingly, after running docker-compose up --build once again, the column was removed from the DB.
Is this expected, or can it be changed somehow?
I'm going to add my Docker files. Maybe the problem is somewhere in there.
Dockerfile:
FROM maven:3.5-jdk-11 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
RUN mvn -f /usr/src/app/pom.xml clean package -DskipTests
FROM adoptopenjdk:11-jre-hotspot
ARG JAR_FILE=*.jar
COPY --from=build /usr/src/app/target/restaurant-microservices.jar /usr/app/restaurant-microservices.jar
ENTRYPOINT ["java","-jar","/usr/app/restaurant-microservices.jar"]
docker-compose.yml:
version: '3'
services:
  db:
    image: 'postgres:13'
    container_name: db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=restaurant
    ports:
      - "5432:5432"
  app:
    build: .
    container_name: app
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/restaurant
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=password
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
Upd: I tried the same thing with controllers, and it creates and removes controllers without any issues.
I found the problem. It is all about this line in docker-compose.yml (and application.properties):
SPRING_JPA_HIBERNATE_DDL_AUTO=update
The update operation, for example, "will attempt to add new columns, constraints, etc. but will never remove a column or constraint that may have existed previously but no longer does as part of the object model from a prior run" (found here: How does spring.jpa.hibernate.ddl-auto property exactly work in Spring?). So when I changed update to create-drop (not sure if it is the optimal option), it started working as expected.
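For reference, these are the standard ddl-auto values (behavior as documented for Hibernate/Spring Boot; this is a sketch, not specific to this project):

```properties
# validate    - verify the schema matches the entities; fail on mismatch
# update      - add missing tables/columns/constraints; never drop anything
# create      - drop and recreate the schema on startup
# create-drop - like create, and also drop the schema on shutdown
# none        - do nothing with the schema
spring.jpa.hibernate.ddl-auto=create-drop
```

Note that create and create-drop wipe existing data on every restart; outside local experiments, a migration tool such as Flyway or Liquibase is the usual way to evolve a schema.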
Oh, I didn't know that either. Thank you for solving this.
It is a bit weird, because 'update' implies that changes get applied.
Related
I have a CI/CD pipeline in GitLab that uses GitLab secrets and publishes them to a Spring Boot application as follows:
docker-compose.yml:
services:
  my-app:
    image: my-app-image:latest
    environment:
      - SPRING_PROFILES_ACTIVE=test
      - app.welcome.secret=$APP_WELCOME_SECRET
Of course, I defined $APP_WELCOME_SECRET as a variable on GitLab's CI/CD variables configuration page.
Problem: running the GitLab CI pipeline results in:
The APP_WELCOME_SECRET variable is not set. Defaulting to a blank string.
But why?
I could solve it by writing the secret value into a .env file as follows:
gitlab-ci.yml:
deploy:
  stage: deploy
  script:
    - touch .env
    - echo "APP_WELCOME_SECRET=$APP_WELCOME_SECRET" >> .env
    - scp -i $SSH_KEY docker-compose.yml .env user@$MY_SERVER:~/deployment/app
While that works, I would still be interested in a better approach.
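One slightly tidier variant (a sketch; the env_file path is an assumption) is to point the service at the generated .env file explicitly, relying on Spring Boot's relaxed binding to map APP_WELCOME_SECRET onto app.welcome.secret:

```yaml
services:
  my-app:
    image: my-app-image:latest
    env_file:
      - .env                      # contains APP_WELCOME_SECRET=...
    environment:
      - SPRING_PROFILES_ACTIVE=test
```

The underlying cause of the warning is that $-substitution happens on the machine where docker-compose runs; the GitLab variable exists only inside the CI job, so on the target server it is blank unless something (like the copied .env file) carries it over.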
I'm pretty new to the world of Docker, so I have the following scenario:
a Spring Boot application, which depends on
PostgreSQL,
and a frontend requesting data from them.
The Dockerfile in the Spring Boot app is:
EXPOSE 8080
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
And the content of the docker-compose.yaml is:
version: '3'
services:
  app:
    image: <user>/<repo>
    build: .
    ports:
      - "8080:8080"
    container_name: app_test
    depends_on:
      - db
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/test
      - SPRING_DATASOURCE_USERNAME=test
      - SPRING_DATASOURCE_PASSWORD=test
  db:
    image: 'postgres:13.1-alpine'
    restart: always
    expose:
      - 5432
    ports:
      - "5433:5432"
    container_name: db_test
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=test
    volumes:
      - db:/var/lib/postgresql/data
      - ./create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql
      - ./fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
volumes:
  db:
    driver: local
As far as I understand, in order to run the whole thing you just need to type docker-compose up and voila, it works. It pulls the image for the app from the Docker Hub repo, and the same goes for the database image.
Here comes the thing. I'm working with another guy (front end) whose goal is to make requests to this API. So is it enough for him to just copy-paste this docker-compose.yaml file and run docker-compose up, or is there something else to be done?
How should docker-compose be used in teamwork?
Thanks in advance, if I have to make it more clear leave a comment!
Your colleague will need:
- The docker-compose.yml file itself
- Any local files or directories named on the left-hand side of volumes: bind mounts
- Any directories named in build: (or build: { context: }) lines, if the images aren't pushed to a registry
- Any data content contained in a named volume that isn't automatically recreated
If they have the docker-compose.yml file they can docker-compose pull the images named there, and Docker won't try to rebuild them.
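Concretely, that flow could look like this (assuming the images have been pushed to a registry):

```
docker-compose pull    # fetch the prebuilt images named in docker-compose.yml
docker-compose up -d   # start the stack without rebuilding
```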
Named volumes are difficult to transfer between systems; see the Docker documentation on saving and restoring volumes. Bind-mounted host directories are easier to transfer, but are much slower on non-Linux hosts. Avoid using volumes for parts of your application code, including the Node library directory or static assets.
For this setup in particular, the one change I might consider making is using the postgres image's environment variables to create the database, and then use your application's database migration system to create tables and seed data. This would avoid needing the two .sql files. Beyond that, the only thing they need is the docker-compose.yml file.
Because of the build: . line in the app service of your docker-compose file, running docker-compose up will look for the backend Dockerfile and build the image. So your teammate needs all the files you wrote.
Another solution, which in my view is better, would be building the image yourself and pushing it to Docker Hub, so your teammate can just pull the image from there and run it on their system.
In case you're not familiar with Docker Hub, read the quick start guide.
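The push-based workflow could look roughly like this (<user>/<repo> is a placeholder image name):

```
# on your machine
docker build -t <user>/<repo>:latest .
docker push <user>/<repo>:latest

# on your teammate's machine
docker pull <user>/<repo>:latest
docker-compose up
```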
I have a Spring MVC application that I want to add to Docker. I created the image and configured Docker, but the application in Docker does not want to start. In the application I use Spring Boot and a PostgreSQL database.
Dockerfile:
FROM openjdk:11
ADD build/libs/Coffeetearea-0.0.1-SNAPSHOT.jar Coffeetearea-0.0.1-SNAPSHOT.jar
#EXPOSE 8080:8080
ENTRYPOINT ["java", "-jar", "Coffeetearea-0.0.1-SNAPSHOT.jar"]
docker-compose.yml:
version: '3.1'
services:
  app:
    container_name: coffeetearea
    image: coffeeteareaimage
    build: ./
    ports:
      - "8080:8080"
    depends_on:
      - coffeeteareadb
  coffeeteareadb:
    image: coffeeteareadb
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=pass123
      - POSTGRES_USER=postgres
      - POSTGRES_DB=coffeetearea
application.properties:
#Database
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://coffeeteareadb:5432/coffeetearea
spring.datasource.username=postgres
spring.datasource.password=pass123
spring.jpa.generate-ddl=false
spring.jpa.hibernate.ddl-auto=validate
MY STEPS in TERMINAL:
C:\Users\vartanyan\IdeaProjects\Coffeetearea>docker-compose up
Creating network "coffeetearea_default" with the default driver
Pulling coffeeteareadb (coffeeteareadb:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN] y
Pulling coffeeteareadb (coffeeteareadb:)...
ERROR: pull access denied for coffeeteareadb, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
The error means that Docker cannot find an image named coffeeteareadb locally or on https://hub.docker.com/ . If your image is in a private repository (meaning that someone on your team has already created it), you have to log Docker into that repository first. Also, for a private repository your image name should look like a URL: registry.example.com/image-name:tag.
If you want coffeeteareadb to be a regular PostgreSQL database, you probably want to change the image here:
coffeeteareadb:
  image: postgres:13 # https://hub.docker.com/_/postgres
If you are new to Docker: an image is like an executable or binary file, while a container is something like a running process of that executable. An image consists of several incremental layers of data: an application, its dependencies, some basic config, etc. When you run an image you create a container; a container is an instance of an image. It differs from the image in that, apart from the application itself, it holds information on how to run it (which ports to map, which volumes to mount, etc.). There can be many containers using the same image. So when you are asked to select an image, you basically need to tell Docker what application you want to use. Docker will look for it locally and on the Hub, download it, and create a container from it. If you want to create your own image, you need a Dockerfile (see the reference: https://docs.docker.com/engine/reference/builder/ ).
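A quick way to see the image/container distinction in practice (commands assume Docker is installed and the daemon is running):

```
docker pull postgres:13                                       # downloads one image
docker run -d --name pg1 -e POSTGRES_PASSWORD=pw postgres:13
docker run -d --name pg2 -e POSTGRES_PASSWORD=pw postgres:13  # two containers, same image
docker ps        # lists running containers
docker images    # lists downloaded images
```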
It seems like your image (coffeeteareadb) doesn't exist on Docker Hub, so Docker can't pull it.
Build it locally, like coffeeteareaimage (build: buildpath).
Or use a PostgreSQL image that is on Docker Hub (image: postgres:latest).
When I run the command docker-compose -f docker-compose.yml up, my containers start normally.
In IntelliJ, a run button appears when the docker-compose.yml file is open. When I try to start the containers directly through the *.yml file, I get the error below:
Failed to deploy 'Compose: docker-compose': Sorry but parent: com.intellij.execution.impl.ConsoleViewImpl[,0,0,1188x368,invalid,layout=java.awt.BorderLayout,alignmentX=0.0,alignmentY=0.0,border=,flags=9,maximumSize=,minimumSize=,preferredSize=] has already been disposed (see the cause for stacktrace) so the child: com.intellij.util.Alarm#7566093f will never be disposed.
My docker-compose.yml file:
version: 3.4
services:
  api.logistics-service:
    container_name: logistics-service
    build: ./docker
    ports:
      - "8080:8080"
I had the same problem. A wrong version in docker-compose.yaml caused the error on first startup.
After fixing it, I was not able to start any docker-compose services anymore.
Looks like an IntelliJ bug.
In this situation, just restart IntelliJ.
I've been stuck on this for a good while now and can't find the solution anywhere. I'm writing a Java REST service using the Jersey framework, with Maven as the build tool, hosted on Apache Tomcat.
The project works perfectly fine locally. I want to dockerize the application and am really struggling. I have the Tomcat container up and running, and when I go to the root of my application I can see the simple hello text I have. So when I go to http://xxx:8888/npmanager/ I'm seeing what I expect.
Now when I try to hit any of my endpoints, e.g. https://xxx:8888/npmanager/api/XXX, I get a 500 error:
warnings have been detected with resource and/or provider classes:
SEVERE: Missing dependency for field: private org.glassfish.jersey.server.wadl.WadlApplicationContext org.glassfish.jersey.server.wadl.internal.WadlResource.wadlContext
Dockerfile:
FROM tomcat:8.5.38
ADD ./target/npmanager.war /usr/local/tomcat/webapps/
RUN chmod +x /usr/local/tomcat/bin/catalina.sh
CMD ["catalina.sh", "run"]
docker-compose.yml
version: '3'
services:
tomcat-dev:
build: .
environment:
TOMCAT_USERNAME: root
TOMCAT_PASSWORD: root
ports:
- "8888:8080"
mysql-dev:
image: mysql:8.0.2
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: npmanager
volumes:
- /mysql-data:/var/lib/mysql
ports:
- "3308:3306"