Spring Boot dockerized container throws Connection refused for RestTemplate request - java

I am trying to dockerize two Spring Boot applications with Docker Compose. user_service needs to send a RestTemplate request to product_service in order to get all product information, and the request URL is http://localhost:8080/product. Without Docker the two applications communicate without any problem, but after I turned them into containers, requests from user_service to product_service fail with a connection refused error, even though I added both to the same network. Here is my docker-compose file:
version: "3.7"
services:
product_service:
build: /productservice/
restart: always
ports:
- "8080:8080"
networks:
- bridge
user_service:
build: /userservice/
restart: always
ports:
- "7074:7074"
networks:
- bridge
networks:
bridge:
driver: bridge

After a lot of trouble I found the solution. If you have two Spring Boot services that communicate with each other through RestTemplate or an HTTP client, you first need to change localhost to whatever your service is named in the docker-compose file; in my case that was http://product_service. Another problem I faced was that there was still an error with my URL: I later found out I shouldn't have an underscore (_) in the hostname, so I changed my service name to product in the compose file. The URL I finally set up for my RestTemplate is therefore http://product.
One more thing I should add here: if you want to use HTTPS for the communication between your microservices, you need to set up an SSL certificate, but if you use HTTP you can skip that.
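For reference, here is a minimal sketch of what the RestTemplate call ends up looking like with the service-name URL (the /product path and port 8080 come from the setup above; the class name and the String response type are just for illustration):

import org.springframework.web.client.RestTemplate;

public class ProductClient {
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        // Inside the compose network, containers reach each other by service name,
        // not by localhost; "product" is the renamed compose service, 8080 its container port.
        String products = restTemplate.getForObject("http://product:8080/product", String.class);
        System.out.println(products);
    }
}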

Related

How to share database schemas in a docker-compose file?

I'm learning Docker. I build my image from a Dockerfile for a counter app, and I am using MySQL as the database. The docker-compose file uses one database and two containers from the same app image. The MySQL database has two different schemas. My goal is to run separate app services on different ports (e.g. 9000 and 9001), each with its own schema. When I call localhost:9000/index it shows the first counter, and when I call localhost:9000/index it shows the second counter.
But the problem is that both of them use the first schema, so the result is the same counter. How can I isolate the schemas?
Compose-file ->
version: '3.1'
services:
  mysql:
    image: mysql
    restart: always
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello-counter
    ports:
      - "9000:9000"
    volumes:
      - mysql_data:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    restart: always
    environment:
      DB_CONNECTION_IP: mysql
      DB_SCHEMA_NAME: hello_counter2
    ports:
      - "9001:9000"
volumes:
  mysql_data:
application.yaml ->
spring:
  datasource:
    url: &connectionUrl jdbc:mysql://${DB_CONNECTION_IP:localhost}:${DB_CONNECTION_PORT:3306}/${DB_SCHEMA_NAME}?allowPublicKeyRetrieval=true&createDatabaseIfNotExist=true&useSSL=false&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&zeroDateTimeBehavior=convertToNull&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: password
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate.ddl-auto: validate
    generate-ddl: true
    show-sql: true
    properties.hibernate.format_sql: true
server:
  port: 9000
It is better to have a separate docker-compose file for each application and its database.
And if you want only one docker-compose file for both applications, you can define two separate MySQL services with different schemas and exposed ports and refer to each of them from your applications.
This would be the same as what you have already done by defining two services for your application.
In addition, here:
When I call localhost:9000/index it shows first counter and when I call localhost:9000/index it shows second counter.
you referred to the same application twice; it seems you mean:
localhost:9000/index
and
localhost:9001/index
The easiest way to accomplish this is to run a separate database per service. You can put these in a single Compose file if you'd like. (Or, as @EskandarAbedini suggests in their answer, you can run a separate Compose file per service, though this can get unwieldy if you have a significant stack based on largely the same code base.)
version: '3.8'
services:
  mysql1:
    image: mysql
    volumes:
      - mysql_data1:/var/lib/mysql
  hello-docker1:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql1
    ports:
      - "9000:9000"
  mysql2:
    image: mysql
    volumes:
      - mysql_data2:/var/lib/mysql
  hello-docker2:
    image: hello-docker:0.0.2
    environment:
      DB_CONNECTION_IP: mysql2
    ports:
      - "9001:9000"
volumes:
  mysql_data1:
  mysql_data2:
Note that both pairs of containers run the same images, but the MySQL containers have separated storage, and the application containers publish different host ports for the same container port. You'd have to separately run tasks like database migration for each container.
In principle nothing stops you from running multiple databases or schemata in a single MySQL container, provided you execute the correct SQL CREATE DATABASE call. You'd need a custom SQL init script to set this up; you couldn't do it with just environment-variable configuration.
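For illustration only, here is a sketch of the SQL such an init script would boil down to, issued here over JDBC rather than from the script itself (the host port, root credentials and schema names are taken from the question's compose file and application.yaml; a real init script would normally be a .sql file mounted into the MySQL image's /docker-entrypoint-initdb.d directory):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSchemas {
    public static void main(String[] args) throws Exception {
        // Connect to the shared MySQL service (published on host port 3306 above)
        // using the root credentials from the compose file.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/?allowPublicKeyRetrieval=true&useSSL=false",
                "root", "password");
             Statement st = conn.createStatement()) {
            // One schema per application instance, matching the two DB_SCHEMA_NAME values.
            st.executeUpdate("CREATE DATABASE IF NOT EXISTS `hello-counter`");
            st.executeUpdate("CREATE DATABASE IF NOT EXISTS `hello_counter2`");
        }
    }
}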

Make docker nginx forward an Angular app's requests to consume APIs through a Unix domain socket

I'm trying to interconnect the processes of two Docker containers using a Unix domain socket, by creating a socket path on my host and sharing that volume with each container, something like this:
In my docker-compose file:
services:
  my-ngixn-container:
    image: nginx:1.20.2-alpine
    container_name: my-ngixn-container
    ...
    volumes:
      - ${PWD}/env/mysocket.socket:/env/mysocket.socket
  my-api-container:
    image: openjdk:8-jdk-alpine
    container_name: my-api-container
    ...
    volumes:
      - ${PWD}/env/mysocket.socket:/env/mysocket.socket
I want to make my Angular app consume the API through the Unix domain socket; I suppose I could configure Nginx to forward all the requests through the Unix socket.

Can Testcontainers join an existing network?

I wish to create a network via docker-compose (via DockerComposeContainer) and have another container (created via ImageFromDockerfile) join that same network. Is this possible?
Asked another way, can ImageFromDockerfile join an existing network?
(For me the order is crucial, because when I start my image it needs to connect to all the services running through compose.)
The moving parts I have tried include:
1. The docker-compose file:
version: '3.6'
services:
  vault:
    image: docker.x.com/x/vault:123-ytr
    ports:
      - 8200:8200
    networks:
      - gateway
    environment:
      - APP_NAME=requestlogidentityconsumer
networks:
  gateway:
    name: damo
2. Executing the above compose file via DockerComposeContainer (including creating the damo network)
3. Attempting to build and run rlic-container and have it join the damo network:
Network network =
    Network.builder().createNetworkCmdModifier(cmd -> cmd.withName("damo")).build();

new GenericContainer(
        new ImageFromDockerfile("rlic-container", true)
            .withFileFromFile("Dockerfile", DOCKER_FILE_PATH.toFile()))
    .withNetwork(network)
    .start();
When I run step 3 I get:
Caused by: com.github.dockerjava.api.exception.ConflictException: {"message":"network with name damo already exists"}
Which makes sense (insofar as the network does already exist from step 2), and ties back to my question: can I write step 3 such that it joins an existing network?
Thanks
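One approach that is sometimes used here (a sketch under the assumption that the compose-created network really is named damo, which is not confirmed in this thread): skip the second Network.builder() call entirely and attach the container to the existing network by name with withNetworkMode, which passes the name through to Docker instead of trying to create the network again.

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.ImageFromDockerfile;

// DOCKER_FILE_PATH is the same constant used in the question's snippet.
GenericContainer<?> rlic =
    new GenericContainer<>(
            new ImageFromDockerfile("rlic-container", true)
                .withFileFromFile("Dockerfile", DOCKER_FILE_PATH.toFile()))
        .withNetworkMode("damo");
rlic.start();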

Run/curl a simple Java application deployed and exposed to a Kubernetes cluster hosted on AWS

I am a newbie to Kubernetes and spent a long time configuring my application to be hosted on a Kubernetes cluster on AWS EKS.
Status quo: I am pretty sure that the service of type LoadBalancer is up and running. It has its pod and everything else running. The application is a simple Java application that reads input. You can try it by pulling the image from Docker Hub via:
docker run -i ardulat/mckinsey
Question: how can I run the Java application (not Spring, not REST) that is hosted on the Kubernetes cluster?
Already tried:
curl -v <EXTERNAL-IP>:<PORT>, which outputs:
* Trying 3.134.148.191...
* TCP_NODELAY set
* Connected to a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com (3.134.148.191) port 8080 (#0)
> GET / HTTP/1.1
> Host: a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com:8080
> User-Agent: curl/7.63.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com left intact
curl: (52) Empty reply from server
nc -v <EXTERNAL-IP> <PORT>, which outputs:
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif en0
src 172.20.22.42 port 63865
dst 3.13.128.24 port 8080
rank info not available
TCP aux info available
Connection to a8154210d09da11ea9c3806983848f2f-1085657314.us-east-2.elb.amazonaws.com port 8080 [tcp/http-alt] succeeded!
Therefore, I assume the connection works and the service is up and running, except that I am trying to connect to the Java (.jar) application in the wrong way. Do you have any suggestions?
You should change your Dockerfile and change CMD to ENTRYPOINT, which is nicely explained here.
I would also recommend reading Define a Command and Arguments for a Container.
CMD sets default command and/or parameters, which can be overwritten from command line when docker container runs.
ENTRYPOINT configures a container that will run as an executable.
Your Dockerfile might look like this:
FROM java:8
WORKDIR /
ADD Anuar.jar Anuar.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","Anuar.jar"]
Your service might look like this:
apiVersion: v1
kind: Service
metadata:
  name: javaservice
  labels:
    app: javaservice
spec:
  type: LoadBalancer
  selector:
    app: javaservice
  ports:
    - protocol: TCP
      port: 8080
      name: http
It's also important which load balancer you want to use, as on AWS there is the Classic Load Balancer, which is the default, and the Network Load Balancer. You can read more about it under Internal load balancer and check the AWS documentation for Load Balancing.
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer through the Kubernetes service of type LoadBalancer. The configuration of your load balancer is controlled by annotations that are added to the manifest for your service.
By default, Classic Load Balancers are used for LoadBalancer type services. To use the Network Load Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
For more information about using Network Load Balancer with Kubernetes, see Network Load Balancer support on AWS in the Kubernetes documentation.
By default, services of type LoadBalancer create public-facing load balancers. To use an internal load balancer, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
For internal load balancers, your Amazon EKS cluster must be configured to use at least one private subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are public or private. Public subnets have a route directly to the internet using an internet gateway, but private subnets do not.

Elastic APM Stack on Docker

I'm trying to install Elastic APM with Elasticsearch, Kibana, and the APM server as three services with docker-compose. Now I'm getting confused about how to set the IPs in the apm-server.yml file based on the APM Server Configuration documentation. The file should look like this:
apm-server:
  host: localhost:8200
output:
  elasticsearch:
    hosts: ElasticsearchAddress:9200
I tried setting ElasticsearchAddress to localhost or 127.0.0.1, but I always get errors like
Failed to connect: Get http://127.0.0.1:9200: dial tcp 127.0.0.1:9200: getsockopt: connection refused or Failed to connect: Get http://localhost:9200: dial tcp [::1]:9200: connect: cannot assign requested address. I also tried several other IPs.
Does anyone know how to configure the APM server correctly, or are there any docker-compose files that handle the installation correctly?
Thanks for your help
If you are starting all the services with a single docker-compose file, the apm-server.yml should have a value like this:
output:
  elasticsearch:
    hosts: elasticsearch:9200
The "hosts: elasticsearch:9200" should be service name of the elasticsearch you mentioned in the docker-compose. Like in the followiing
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
When you bring up containers using Compose, each container has its own networking stack (so they can each talk to themselves on localhost, but they need an IP address or DNS name to talk to a different container!).
Compose by default connects each of the containers to a default network and gives each one a DNS name matching the name of its service.
If your compose file looks like:
services:
  apm:
    image: apm_image
  elasticsearch:
    image: elasticsearch:latest
then a process in the apm container could access Elasticsearch at http://elasticsearch:9200.
