Docker Container interacting with 2 applications - java

I currently have a Java application (.jar) in one container, and I am using docker-compose.yml to create an instance of a MySQL database in a second container.
I want to know whether it is possible for container 1 to interact with container 2, and to read from and write to the MySQL database.
If it is possible, how would I go about this?

Each container is like a virtual machine running inside your actual machine, and a virtual network connects all of them. They can communicate with each other just like real machines on a real network.
When you specify links in your yaml, e.g. from the example from the documentation:
web:
  links:
    - db
The result will be that inside the web container, the hostname db will resolve to the virtual IP of the db container. You can actually do ping db from within the web container and you should see the db container answer.
For mysql, assuming you named the mysql container db like in the example and linked your application to it like above, you'll simply have to write code that assumes this hostname. E.g. you'd connect to jdbc:mysql://db:3306/databasename. The port depends on what the image you use exposes.
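As a concrete sketch of that JDBC side (assuming the compose service is named db and the database is databasename, as in the example above; the user, password, and DB_HOST/DB_PORT environment variables are placeholders for your own configuration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DbClient {

    // Build the JDBC URL from the environment, falling back to the compose
    // service name "db" and MySQL's default port 3306.
    static String jdbcUrl() {
        String host = System.getenv().getOrDefault("DB_HOST", "db");
        String port = System.getenv().getOrDefault("DB_PORT", "3306");
        return "jdbc:mysql://" + host + ":" + port + "/databasename";
    }

    public static void main(String[] args) throws SQLException {
        // Requires the MySQL Connector/J driver on the classpath.
        try (Connection conn = DriverManager.getConnection(jdbcUrl(), "user", "password")) {
            System.out.println("Connected to " + conn.getCatalog());
        }
    }
}
```

Reading the host from the environment keeps the image reusable: the same jar works in and out of compose, with only configuration changing.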
It gets tricky once you want the containers running on actually different machines, because you need a way to reach the virtual container network inside those machines. There are ways, such as proxies, forwarded ports, and overlay networks, but that's beyond the capabilities of Compose.

From the code point of view, the interaction is the same.
Your MySQL in Docker exposes the service on a particular hostname and port. The program using it uses that hostname and port. Docker gives you the ability to configure this outside MySQL, but the Java code is the same.
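To illustrate, a minimal sketch (the property names db.host and db.port are made up for this example): the Java code stays identical, and only the configured hostname changes between environments.

```java
public class DbConfig {

    // Assemble a MySQL JDBC URL; the code is the same whether the host is
    // "localhost", a compose service name like "db", or a remote machine.
    static String url(String host, int port, String database) {
        return "jdbc:mysql://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        // Outside Docker you might pass -Ddb.host=localhost; inside a
        // compose network, -Ddb.host=db. Nothing else changes.
        String host = System.getProperty("db.host", "localhost");
        int port = Integer.getInteger("db.port", 3306);
        System.out.println(url(host, port, "databasename"));
    }
}
```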

Yes, linking containers over a network is standard functionality.
As Peter Lawrey mentioned, you just configure the database connection in Java as normal, using the name of the service or Docker container you want to connect to.
version: "2"
services:
  web:
    image: myapp
    networks:
      - myapp
  db:
    image: mysql
    networks:
      - myapp
networks:
  myapp:
    driver: bridge
Then you have a network to connect over
$ docker-compose start
Starting composenetworks_web_1
Starting composenetworks_db_1
$ docker exec composenetworks_web_1 ping db
PING mysql (172.22.0.3): 56 data bytes
64 bytes from 172.22.0.3: seq=0 ttl=64 time=0.093 ms

How to connect to the db when running a Dockerfile?

I have a Spring Boot app that connects fine to my PostgreSQL server running locally in Docker Desktop.
Then I wrote a simple Dockerfile to run my app in a container. The container starts but can't connect to my db server, with the error message:
Connection to localhost:5432 refused.
Why, and how do I fix this?
To access the host from inside a Docker container you can use the IP of your computer. localhost or 127.0.0.1 doesn't work, because inside the container it refers to the container itself.
Use docker compose to connect the two docker containers.
https://docs.docker.com/compose/
On this page you can see an example on how to connect to containers using docker compose:
https://dev.to/mozartted/docker-networking--how-to-connect-multiple-containers-7fl
If they are on separate Docker networks, use:
host.docker.internal
This resolves to the Docker host machine (it is available out of the box on Docker Desktop). So if your PostgreSQL instance is exposed on 5432, host.docker.internal:5432 will route to that instance through the host from other containers.
Alternatively, put them on the same network, either with Docker Compose or by creating a network and attaching both containers to it. They can then communicate using container names.
https://docs.docker.com/engine/reference/commandline/network_create/
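For the second approach, a rough sketch of the manual commands (the network name appnet and the container names myapp and mydb are placeholders):

```shell
# Create a user-defined bridge network and attach both containers to it.
docker network create appnet
docker network connect appnet myapp
docker network connect appnet mydb
# "myapp" can now reach the database at the hostname "mydb", e.g.
# jdbc:postgresql://mydb:5432/mydatabase
```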

How to access non containerised DB on remote IP from a docker container?

I am trying to package Java web services in Docker. I have a Postgres DB hosted on a VM (non-containerised), and the code in the Docker container is unable to connect to the database. How do I do that?
Theoretically spring.datasource.url=jdbc:postgresql://yourIpAddress/nameOfDB should work.
But databases don't run on 8080, so you need to bind the database's port (5432 for Postgres, 3306 for MySQL), and I wouldn't bind it to 80, where everything else is listening.
If you are running Postgres in a container, it will be listening on 5432, so you can do docker run -p 8082:5432 postgres.
Then you should be able to connect to your host computer via jdbc:postgresql://yourIpAddress:8082/nameOfDB.
This all assumes nothing else gets in the way, like firewalls or whatnot. I also don't know how you configured your virtual machine. In general, you should practice connecting them on the same machine first to get the idea.
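Putting it together, the Spring Boot datasource configuration might look like this sketch (the IP address, port, and credentials are placeholders for your own setup):

```properties
spring.datasource.url=jdbc:postgresql://192.0.2.10:8082/nameOfDB
spring.datasource.username=dbuser
spring.datasource.password=changeme
```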

Expose random port to docker-compose.yml

I need multiple instances of the same application, and for that I am using
server.port=0 to run the application on a random port.
My question is how I can map the randomly generated port in docker-compose.yml to create multiple instances.
I am using Spring Boot at the back-end. I am unable to find any solution.
Any help is much appreciated.
Each Docker container runs a single process in an isolated network namespace, so this isn't necessary. Pick a fixed port. For HTTP services, common port numbers include 80, 3000, 8000, and 8080, depending on permissions and the language runtime (80 requires elevated privileges, 3000 is Node's default, and so on). The exact port number doesn't matter.
You access the port from outside Docker space using a published port. If you're running multiple containers, there is the potential for conflict if multiple services use the same host port, which is probably what you're trying to avoid. In the docker run -p option or the Docker Compose ports: setting, it's possible to list only the port running inside the container, and Docker will choose a host port for you.
version: "3"
services:
  web:
    image: ...
    ports:
      - "8000"                        # no explicit host port
    command: ... -Dserver.port=8000   # fixed container port
docker-compose port web 8000 will tell you what the host (public) port number is. For communication between containers in the same docker-compose.yml file, you can use the service name and the (fixed, known) internal port, http://web:8000.
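The plain docker equivalent of the compose snippet above, as a sketch (the image name myapp is a placeholder):

```shell
# Publish container port 8000 on a host port chosen by Docker.
docker run -d --name web -p 8000 myapp
# Ask Docker which host port was assigned.
docker port web 8000
```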

Error Can not connect to Ryuk in CircleCi

There is a config for CircleCI.
On the local machine, when I run the CircleCI build, everything passes. On the server, however, there are a lot of errors, one of them being
java.lang.IllegalStateException: Can not connect to Ryuk
Later there are also errors connecting to the test containers launched earlier by Testcontainers; I think this is caused by the failure to connect to Ryuk. What confuses me is that on the local machine everything works, while on the server everything fails.
The reason for the problem is here: https://gist.github.com/OlegGorj/52ca84624503a5e85624c6eb38df4590
where it says:
Separation of Environments The job and remote docker run in separate environments. Therefore, Docker containers cannot directly communicate with the containers running in remote docker.
Accessing Services It’s impossible to start a service in remote docker and ping it directly from a primary container (and vice versa).
There appear to be three options:
Do your entire build in another remote docker container.
Use a dedicated VM for the build (https://www.testcontainers.org/supported_docker_environment/continuous_integration/circle_ci/)
If you can get away with creating the test container at the start, then do that and don't use Testcontainers within CircleCI (https://circleci.com/docs/2.0/executor-types/#using-multiple-docker-images). Just remember that each test case will be interacting with the same instance of the service.
More details on option 3
Basically, don't use Testcontainers when building on CircleCI.
In your .circleci/config.yml, do something like this:
jobs:
  build:
    docker:
      - image: circleci/openjdk:14.0.1-jdk-buster
      - image: rabbitmq:3.8-alpine
        environment:
So CircleCI runs the RabbitMQ container on the same host as your image.
You can then communicate with it on localhost on whatever ports it opens, and CircleCI will shut these secondary containers down when your build (which always runs in the first container) finishes.
There are a few downsides to this:
Testcontainers lets you start and stop containers; this approach doesn't, so you fundamentally cannot test the restart of a container.
all of your tests will run against the same instance, so, in the RabbitMQ case, each test should use a unique exchange and queue.
if, like me, you need to build on CircleCI and on the desktop (and in Jenkins), then you need conditional CircleCI logic in your tests (just check for System.getenv("CIRCLECI")) to determine which approach to take.
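The environment check from the last point can be as small as this sketch (the CIRCLECI variable is set by CircleCI itself; the fallback host is whatever Testcontainers reports for the started container):

```java
import java.util.Map;

public class BrokerConfig {

    // On CircleCI the secondary rabbitmq container listens on localhost;
    // elsewhere we fall back to the host managed by Testcontainers.
    static String brokerHost(Map<String, String> env, String testcontainersHost) {
        return env.containsKey("CIRCLECI") ? "localhost" : testcontainersHost;
    }
}
```

In real tests you would pass System.getenv() as the first argument.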
I had the same error, fixed it by turning off Experimental Features in Docker.
You can find them in Preferences.

Fail to start H2o cluster in docker containers because it can not bind external or host ip

Now we are trying to use H2O to construct a training cluster. It is easy to use by running java -jar ./h2o.jar, and we can set up the cluster with a simple flatfile.txt which contains multiple IPs and ports.
But we found that it is impossible to set up the H2O cluster within Docker containers. Although we can start multiple containers to run java -jar ./h2o.jar with the prepared flatfile.txt, the H2O process will try to bind the local (the container's eth0) IP, which is different from the one in flatfile.txt. We can run java -jar ./h2o.jar -ip $ip to set the IP that is in flatfile.txt, but the H2O instance is not able to run without this "external" IP.
If you use docker run --network=host ... it will work.
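As a sketch, host networking might be used like this (the image name my-h2o-image and the mount paths are placeholders; -flatfile is H2O's flag for the node list):

```shell
# With --network=host the container shares the host's network stack, so the
# H2O node can bind the host IP listed in flatfile.txt.
docker run --network=host -v "$PWD/flatfile.txt:/flatfile.txt" my-h2o-image \
  java -jar h2o.jar -flatfile /flatfile.txt
```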
See my response to a similar issue here, where I describe how it is possible to start an H2O cluster using a flatfile and Docker Swarm. Basically, you have to run a script in each service before starting H2O to identify the correct IP addresses for the cluster, because Docker assigns two IPs to each service. The flatfile needs to use the $HOSTNAME IP for each cluster member, which is difficult to determine in advance.
