I'm learning Docker. I build my image from a Dockerfile in a counter app, and I use MySQL as the database. The Compose file runs one database and two containers from the same app image. The MySQL database has two different schemas. My goal is to run separate app services on different ports (e.g. 9000 and 9001), each using its own schema: when I call localhost:9000/index it should show the first counter, and when I call localhost:9001/index it should show the second counter.
The problem is that both of them use the first schema, so the result is the same counter. How can I isolate the schemas?
Compose-file ->
version: '3.1'
services:
  mysql:
    image: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello-counter
    ports:
      - "9000:9000"
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello_counter2
    ports:
      - "9001:9000"
volumes:
  mysql_data:
application.yaml ->
spring:
  datasource:
    url: &connectionUrl jdbc:mysql://${DB_CONNECTION_IP:localhost}:${DB_CONNECTION_PORT:3306}/${DB_SCHEMA_NAME}?allowPublicKeyRetrieval=true&createDatabaseIfNotExist=true&useSSL=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: password
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate.ddl-auto: validate
    generate-ddl: true
    show-sql: true
    properties.hibernate.format_sql: true
server:
  port: 9000
It is better to have a separate Docker Compose file for each application and its database.
If you want a single Docker Compose file for both applications, you can define two separate MySQL services with different exposed schemas and ports, and point each application at its own database.
This is the same pattern as your application, for which you have already defined two services.
In addition, here:
When I call localhost:9000/index it shows first counter and when I
call localhost:9000/index it shows second counter.
you referred to the same URL twice. It seems you mean:
localhost:9000/index
and
localhost:9001/index
The easiest way to accomplish this is to run a separate database per service. You can put these in a single Compose file if you'd like. (Or, as @EskandarAbedini suggests in their answer, you can run a separate Compose file per service, though this can get unwieldy if you have a significant stack based on largely the same code base.)
version: '3.8'
services:
  mysql1:
    image: mysql
    volumes:
      - mysql_data1:/var/lib/mysql
  hello-docker1:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql1
    ports:
      - "9000:9000"
  mysql2:
    image: mysql
    volumes:
      - mysql_data2:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql2
    ports:
      - "9001:9000"
volumes:
  mysql_data1:
  mysql_data2:
Note that both pairs of containers run the same images, but the MySQL containers have separated storage, and the application containers publish different host ports for the same container port. You'd have to separately run tasks like database migration for each container.
In principle nothing stops you from running multiple databases or schemata in a single MySQL container, provided you execute the correct SQL CREATE DATABASE call. You'd need a custom SQL init script to set this up; you couldn't do it with just environment-variable configuration.
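For example (a sketch, not from the original thread; the script filename is illustrative, the schema names come from the question's Compose file): the official mysql image executes any *.sql files found in /docker-entrypoint-initdb.d when it initializes an empty data directory, so a mounted script could create both schemas up front:

```sql
-- init-schemas.sql, mounted into the mysql service at /docker-entrypoint-initdb.d/
-- Runs only on first initialization of an empty data directory.
CREATE DATABASE IF NOT EXISTS `hello-counter`;
CREATE DATABASE IF NOT EXISTS hello_counter2;
```

In the Compose file this would be mounted with a bind volume such as ./init-schemas.sql:/docker-entrypoint-initdb.d/init-schemas.sql on the mysql service.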
Related
I have the following code in my docker-compose.yml file to start a database for my test container (the exposed port is dynamic).
postgres:
  image: my-proxy.jfrog.io/postgres:12
  ports:
    - "5432"
  environment:
    POSTGRES_DB: myTestingDB
    POSTGRES_USER: username
    POSTGRES_PASSWORD: password
Now, to do my functional testing, I need to insert some records from the JUnit test code before triggering the call. How should I do that?
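As a hedged sketch (not from the original thread): one approach is plain JDBC in the test setup, building the URL from the container's dynamically mapped port. The helper below is illustrative; the table name and the way you obtain the mapped port depend on your test tooling.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SeedData {

    // Build the JDBC URL from the host and the dynamically mapped port.
    static String jdbcUrl(String host, int mappedPort, String db) {
        return "jdbc:postgresql://" + host + ":" + mappedPort + "/" + db;
    }

    // Call this from an @Before/@BeforeEach method, passing the host port
    // your tooling reports for the container's exposed 5432.
    static void seed(String host, int mappedPort) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 jdbcUrl(host, mappedPort, "myTestingDB"), "username", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO my_table (id, name) VALUES (?, ?)")) { // illustrative table
            ps.setInt(1, 1);
            ps.setString(2, "test-row");
            ps.executeUpdate();
        }
    }
}
```

The credentials and database name match the POSTGRES_* variables from the Compose snippet above; only the port varies between runs.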
I am trying to dockerize two Spring Boot applications with Docker Compose. user_service needs to send a RestTemplate request to product_service to get all product information; the request URL is http://localhost:8080/product. Without Docker the two applications communicate with no problem, but after I made them Docker containers, sending the request from user_service to product_service fails with a connection refused error, even though I added them to the same network. Here is my docker compose file:
version: "3.7"
services:
  product_service:
    build: /productservice/
    restart: always
    ports:
      - "8080:8080"
    networks:
      - bridge
  user_service:
    build: /userservice/
    restart: always
    ports:
      - "7074:7074"
    networks:
      - bridge
networks:
  bridge:
    driver: bridge
After a lot of trouble I found the solution. If you have two Spring Boot services that communicate with each other through RestTemplate or an HTTP client, you first need to change localhost to whatever the service is named in the docker-compose file; in my case that was http://product_service. Another problem I faced was an error in my URL: I found out I shouldn't have an underscore in the URL, so I changed my application's service name to product in the Compose file. The URL I finally set for my RestTemplate is http://product.
One more thing I should add: if you want your microservices to communicate over HTTPS, you need to set up an SSL certificate; if you use plain HTTP you can skip that.
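Concretely, the final Compose services might look like this (a sketch reconstructing the fix described above; the environment variable name is illustrative):

```yaml
services:
  product:              # renamed from product_service: no underscore, so it is valid in a URL
    build: /productservice/
    ports:
      - "8080:8080"
  user_service:
    build: /userservice/
    environment:
      # the base URL user_service uses for its RestTemplate calls
      PRODUCT_SERVICE_URL: http://product:8080/product
```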
I wish to create a network via docker-compose (via DockerComposeContainer) and have another container (created via ImageFromDockerfile) join that same network. Is this possible?
Asked another way, can ImageFromDockerfile join an existing network?
(For me the order is crucial, because when I start my image it needs to connect to all the services running through Compose.)
The moving parts I have tried include:
The docker compose file
version: '3.6'
services:
  vault:
    image: docker.x.com/x/vault:123-ytr
    ports:
      - 8200:8200
    networks:
      - gateway
    environment:
      - APP_NAME=requestlogidentityconsumer
networks:
  gateway:
    name: damo
Executing above compose file via DockerComposeContainer (incl. create damo network)
Attempt to build and run rlic-container and have it join damo n/w
Network network =
    Network.builder().createNetworkCmdModifier(cmd -> cmd.withName("damo")).build();

new GenericContainer(
        new ImageFromDockerfile("rlic-container", true)
            .withFileFromFile("Dockerfile", DOCKER_FILE_PATH.toFile()))
    .withNetwork(network)
    .start();
When I run step 3 I get:
Caused by: com.github.dockerjava.api.exception.ConflictException: {"message":"network with name damo already exists"}
Which makes sense (insofar as the network does already exist from step 2), and ties back to my question: can I write step 3 such that it joins an existing network?
Thanks
I am working on a Spring Boot server using Postgres as the database. I would like to have a log file of the queries coming from the server. I have found my SQL logs, and I can see the queries I make to the database via psql, but queries coming from my server via JDBC are not logged. How can I get these queries to show up in my log file? Or, if they are just being logged somewhere else, could someone point me in the right direction?
You can set log_min_duration_statement to 0 to log all statements. Be careful, though, as this can greatly increase the size of your logs.
It's also helpful to set the application name in the jdbc connection url so it's easier to pick out the appropriate log messages:
jdbc:postgresql://host:5432/DB?ApplicationName=YourApp
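If editing postgresql.conf directly is inconvenient, a superuser can also change the setting from SQL (a sketch; the change persists in postgresql.auto.conf and takes effect after a configuration reload):

```sql
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();
```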
In order to get a log of every query, I use the option log_statement=all, configured as a command-line argument for the postgres process. I use the following Docker Compose file to run the PostgreSQL database in a development environment.
version: "3.8"
services:
  db-ct:
    container_name: 'my-container-name'
    image: 'postgres:13'
    environment:
      - POSTGRES_USER=database-username-created-on-container-creation
      - POSTGRES_PASSWORD=database-password-created-on-container-creation
      - POSTGRES_DB=database-name-created-on-container-creation
    ports:
      # The port is available to SQL clients, like SQL Developer
      - '5000:5432'
    volumes:
      - db:/var/lib/postgresql/data/
    restart: 'no'
    # Run postgres and log all queries
    command: [ "postgres", "-c", "log_statement=all" ]
    networks:
      internal:
volumes:
  # Database files
  # They are stored in a volume, so they are not reset after rebuilding the images
  db:
    name: your-volume-name
networks:
  internal:
The command: entry supplies the command-line argument.
Five placeholders need to be replaced: my-container-name, database-username-created-on-container-creation, database-password-created-on-container-creation, database-name-created-on-container-creation, and your-volume-name.
The database will listen on port 5000.
If log_statement=all comes from your postgresql.conf file but doesn't always seem to take effect, it is possible that the setting has been overridden at the database or user level. Make sure you are connected to the same database, and as the same user, as your app server before you issue the 'SHOW ALL'.
They could have been overridden by something like one of these:
alter user <foo> set log_statement='none';
alter database <foo> set log_statement='none';
However, to ensure users can't hide their activity from the superuser, only a superuser would have been allowed to issue one of the above. And if your app logs in as a superuser (not a good idea), then it could have turned off logging at the session level, overriding the conf file.
If it is one of the above commands which changed the setting, you can see it with \drds in psql, or in SQL with:
SELECT rolname AS "Role", datname AS "Database",
pg_catalog.array_to_string(setconfig, E'\n') AS "Settings"
FROM pg_catalog.pg_db_role_setting s
LEFT JOIN pg_catalog.pg_database d ON d.oid = setdatabase
LEFT JOIN pg_catalog.pg_roles r ON r.oid = setrole
ORDER BY 1, 2;
I am deploying a simple web application. I divided it into 3 pods: front end, back end, and Postgres DB. I successfully deployed my front end and back end to Google Kubernetes Engine and they work as expected. But for my PostgreSQL DB server I used the following YAMLs. The postgres image was built by me on top of the standard postgres image from Docker Hub: I created some tables, inserted some data, and pushed it to Docker Hub. My backend is not able to make a connection to my DB. I think I might need to change my Java connection code; I'm not sure about using localhost. It works without any issue locally in Eclipse JEE and Tomcat.
# my pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-app-pod
  labels:
    name: postgres-app-pod
    app: demo-geo-app
spec:
  containers:
    - name: postgres
      image: myrepo/example:v1
      ports:
        - containerPort: 5432

# my service.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: db-service
    app: demo-geo-app
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-pod
    app: demo-geo-app
//from my java backend, I access my db server this way.
String dbURL = "jdbc:postgresql://localhost:5432/Location?user=postgres&password=mysecretpassword";
There are two issues to be fixed:
the service selector key/value should match the pod's labels
replace localhost with the PostgreSQL service DNS name
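Concretely, the two fixes might look like this (a sketch; the selector values are taken from the pod metadata shown in the question):

```yaml
# service.yaml: the selector must match the pod's labels exactly
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-app-pod   # was postgres-pod, which matches no pod
    app: demo-geo-app
```

and the JDBC URL uses the service name instead of localhost, e.g. jdbc:postgresql://db:5432/Location?user=postgres&password=mysecretpassword.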
As per the comments, you have an error in your connection string: localhost refers to the same pod where your Java code is running. You need to change it to db, the name you gave the service in the service YAML.
I recommend using a Deployment instead of a bare Pod; but since you are deploying a database, you should use a StatefulSet. Please review the documentation:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Also, I recommend checking https://helm.sh: there are many charts ready to use instead of having to code a service like a database from scratch.
https://github.com/helm/charts/tree/master/stable/postgresql
That chart includes all the necessary YAML, including the PVC provisioning.