I am running a Spring Boot application with a REST API inside a Docker container. Everything works fine when I run it from Eclipse or as a jar, but when I dockerize and run it, I face the issues below.
First issue
I am not able to access a REST endpoint within the container.
http://localhost:9000/ --> works, but
http://localhost:9000/api/v1/test --> is not recognized.
However, I can call it from Swagger.
Second issue
org.postgresql.util.PSQLException: ERROR: permission denied for schema <schema_name>
However, I have granted all permissions on the schema, like so:
GRANT ALL ON SCHEMA <schema_name> TO <username>;
GRANT USAGE ON SCHEMA <schema_name> TO <username>;
These issues occur only when I run from a container.
Command used for Docker:
docker run -p 9000:9000 <image_name>
I am using Spring Boot 2.1.9.
Dockerfile
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD run-app.sh run-app.sh
RUN chmod +x run-app.sh
EXPOSE 9000
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} dummy.jar
ENTRYPOINT ./run-app.sh
run-app.sh
java $JAVA_OPTS -jar /dummy.jar
My PostgreSQL DB is running in AWS.
My Spring Boot application starts fine; I hit the exception only when my API queries the database.
Can you share your Dockerfile content if possible? I would like to see what commands you have given for the endpoints. And for the PostgreSQL you are using, is it embedded in Docker with the Spring Boot app, or on another server?
I have already set up Spring Boot with nginx and PostgreSQL dockerized, but in separate servers/containers, and it works pretty smoothly in production.
For the first issue I will need more details.
For the second issue, the problem is that the two containers are not on the same network, so the service container can't communicate with the PostgreSQL container. You can create a docker-compose.yml file to run them on the same network, or create a network and join the containers to it, as sketched below.
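For illustration, a minimal docker-compose.yml sketch, assuming the database runs as a local container; the service names (app, db), image names, and credentials are placeholders, not taken from the question:

version: "3"
services:
  app:
    image: my-spring-boot-app          # placeholder image name
    ports:
      - "9000:9000"
    environment:
      # Spring Boot's relaxed binding maps this to spring.datasource.url;
      # "db" resolves via Compose's built-in DNS on the shared default network
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword

Without Compose, the equivalent is docker network create mynet and then starting both containers with --network mynet.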
Related
I have a Spring Boot application with Hazelcast (I use 4.2 and have also tried 4.0.3 of the hazelcast-all dependency). Hazelcast is configured for the embedded topology, and I use the TCP-IP join mechanism. Hazelcast properties are set in a Hazelcast.yaml file referenced by the Spring Boot property spring.hazelcast.config (Spring uses this name for the Hazelcast config by default, so the property is redundant).
In the member-list property I list the IP addresses of two machines on one subnet (e.g. 192.0.0.1 and 192.0.0.2).
I build the application into a Docker image based on Alpine with OpenJDK. The image starts the application with a java -jar command as its ENTRYPOINT.
PROBLEM PREMISE:
I run two Docker containers on the two machines described earlier, forwarding only port 5701 (using -p) on both. The containers don't see each other, and the Spring Boot logs show that the container's internal network is being used.
PS:
Everything works if I run Docker with --net host.
Everything also works if I package the Spring Boot application with the public-address property set in Hazelcast.yaml: one container packaged with the value 192.0.0.1, the other with 192.0.0.2. The Hazelcast instances then see each other over the machines' network (192.0.0.1 and 192.0.0.2).
PROBLEM:
I have tried to override the public-address property in Hazelcast.yaml using:
docker run -e HZ_NETWORK_PUBLICADDRESS=192.0.0.1
export HZ_NETWORK_PUBLICADDRESS=192.0.0.1 && docker run
JAVA_OPTS="-Dhz.network.public-address=192.0.0.1"
JAVA_OPTS="-Dhazelcast.local.publicAddress=192.0.0.1"
JAVA_OPTS="-Dhazelcast.config=/mnt/overrided_hazelcast.yaml"
ENV HZ_NETWORK_PUBLICADDRESS=192.0.0.1 (in the Dockerfile)
ENTRYPOINT java -jar -Dhz.network.public-address=192.0.0.1 my-app.jar
Nothing works. Does anyone know why it is not possible to override the public-address property in Hazelcast.yaml at startup?
Or maybe someone knows how I can run two Spring Boot applications with embedded Hazelcast in separate Docker containers on separate machines?
The way you set the Hazelcast public address is correct, at least starting from Hazelcast 4.1, where the Config Override feature was added.
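As a quick illustration of the override naming (a sketch of the Config Override convention; my-app is a placeholder image name):

# HZ_-prefixed environment variables map onto hazelcast.yaml paths,
# so HZ_NETWORK_PUBLICADDRESS overrides:
#
#   hazelcast:
#     network:
#       public-address: ...
#
docker run -e HZ_NETWORK_PUBLICADDRESS=192.0.0.1 -p 5701:5701 my-app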
To check a working version, have a look at the Hazelcast guide Embedded Hazelcast on Kubernetes. Instead of the Hazelcast Kubernetes configuration, you can use TCP-IP. The following Hazelcast configuration worked for me (my host IP is 172.22.41.210).
hazelcast:
  cluster-name: hazelcast-cluster
  network:
    join:
      tcp-ip:
        enabled: true
        member-list:
          - 172.22.41.210:5701
          - 172.22.41.210:5702
Then, after building and starting the two applications, they should form a cluster.
$ mvn package && docker build -t hazelcast-embedded .
$ docker run --rm -e HZ_NETWORK_PUBLICADDRESS=172.22.41.210 -p 5701:5701 hazelcast-embedded
$ docker run --rm -e HZ_NETWORK_PUBLICADDRESS=172.22.41.210:5702 -p 5702:5701 hazelcast-embedded
In the application logs, you should see that the cluster has formed.
Members {size:2, ver:2} [
    Member [172.22.41.210]:5701 - 21af9e1a-7e98-4305-905c-451ee23486c3 this
    Member [172.22.41.210]:5702 - 0507d970-1f31-4df3-9ea5-8c3981eb7c98
]
I am working with Docker. I was provided a Docker image which compiles and runs fine.
The application uses the Amazon client to interact with services like S3, SNS, and SQS.
The moment the application tries to load the client, it fails with this error:
Caused by: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), com.amazonaws.auth.profile.ProfileCredentialsProvider@17bf085: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@24117d53: Unable to load credentials from service endpoint
I have tested on the CLI that the local IAM configuration is correct.
Get caller identity on the console:
aws sts get-caller-identity
Output:
{
    "Account": "xxxxxxxxxxxx",
    "UserId": "XXXXXXXXXXXXXXXXXXXXX:xxxxxxxx-session-1562266255",
    "Arn": "arn:aws:sts::342484191705:assumed-role/abc-abc-abc-abc/xxxxxxxx-session-1562266255"
}
So the IAM role is assumed correctly on the local machine; running unit tests and integration tests on the local machine also assumes the IAM role perfectly.
I am running the Docker image with this command:
docker run -it --rm -e "JPDA_ADDRESS=*:8000" -e "JPDA_TRANSPORT=dt_socket" -p 5033:8000 -p 6060:6033 --memory 1300M --log-driver json-file --log-opt "max-size=1g" docker-image-arn dev
The image runs, but every operation where it has to assume the IAM role and interact with AWS services fails.
What is missing?
How do I make the application within the container use the IAM role?
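One way to probe where the chain is looking: a minimal sketch, assuming working credentials exist on the host, that forwards them into the container so the first providers in the chain can find them (the variables and mount path are assumptions, not from the original setup):

# Pass the host's AWS environment variables through unchanged
# (for EnvironmentVariableCredentialsProvider), and/or mount ~/.aws
# read-only (for ProfileCredentialsProvider).
docker run -it --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  -v "$HOME/.aws:/root/.aws:ro" \
  docker-image-arn dev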
I am using a docker-compose command to create and start my containers.
My Docker Version
docker --version
Docker version 17.09.0-ce, build afdb6d4
My Docker-Compose version
docker-compose --version
docker-compose version 1.16.1, build 6d1ac21
The .yml file that I'm using looks something like this (note that I've just shortened it to take sensitive things out):
---
services:
  zookeeper:
    image: "zookeeper"
  server-1:
    cap_add:
      - "NET_ADMIN"
  server-0:
    cap_add:
      - "NET_ADMIN"
    dns:
      - 8.8.8.8
      - 9.9.9.9
    environment:
      SERVER_ID: 0
      NETEM_HOSTS: ""
      LOSS_VALUES: ""
      MAX_RATE_VALUES: ""
      DELAY_VALUES: ""
    image: "cloud.mycompany.com:5000/server-0:latest"
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
version: "3.3"
Then I start using:
docker-compose --file compose.yml up -d
My questions are these:
1) After the containers come up and I go into a container (e.g. in this case server-0), I don't see the /etc/resolv.conf file updated to use these nameservers. Instead it uses Docker's embedded DNS, which is 127.0.0.11.
2) How do I make sure that it uses what I specify in the file used by docker-compose?
3) I tried to do this with the following command and it seems to work, but I need to do it from the compose file:
docker run -p 4000:53 --dns=8.8.8.8 cloud.mycompany.com:5000/server-0:latest
4) Ideally, I want it to have the IP address of the 'fakedns' container so that it uses that one instead of the embedded one at 127.0.0.11.
You won't see custom DNS servers in /etc/resolv.conf, but Docker's resolver will forward DNS requests to them.
User Defined Networks and DNS
Docker Compose definitions that are v2+ create a user-defined network by default.
Docker with a user-defined network uses an embedded DNS server so that Docker can respond to local container requests (service discovery).
For any DNS hosts Docker cannot resolve, the request is forwarded on to a DNS server. This is either the system default server, the server configured in dockerd, or the DNS server configured for the container at run time.
Docker DNS
Be careful when using internal DNS servers. Things in the Docker daemon will break if you point the system's DNS at a container: you create a chicken-and-egg problem, where Docker needs DNS to start but can't start the container that provides the DNS.
As your example config only sets the DNS for one app container it should be OK, but make sure the DNS container is up and healthy before your application starts; a sketch follows.
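A minimal sketch of that arrangement in Compose terms, assuming a static address for the DNS container (the network name, subnet, and address are illustrative, not from the question):

version: "3.3"
services:
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
    networks:
      app_net:
        ipv4_address: 172.28.0.53    # assumed static IP for the DNS service
  server-0:
    image: "cloud.mycompany.com:5000/server-0:latest"
    dns:
      - 172.28.0.53                  # non-local lookups get forwarded to fakedns
    networks:
      - app_net
    depends_on:
      - fakedns
networks:
  app_net:
    ipam:
      config:
        - subnet: 172.28.0.0/16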
I have my Spring Boot application and MySQL database running in separate Docker containers. I am able to access the database from my host.
My application.properties for Spring boot application looks like below:
spring.datasource.url=jdbc:mysql://benefitsmysql:3308/benefitsmysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
# ====================================================================================
# = SSL Configuration
# ====================================================================================
#security.basic.enabled=false
server.port=8443
server.ssl.key-store=keystore.jks
server.ssl.key-store-password=*******
server.ssl.keyStoreType=jks
server.ssl.keyAlias=tomcatselfsigned
I am building the Docker image using the Maven plugin for Docker. My Dockerfile looks like this:
FROM java:8
VOLUME /tmp
ADD Benefits.jar Benefits.jar
EXPOSE 8443
RUN bash -c 'touch /Benefits.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/Benefits.jar"]
I am starting the Docker container for the Spring Boot application like this:
docker run -p 8443:8443 --name benefits --link benefitsmysql:mysql -d c794a4d0c634
and if I do docker ps -a, I get the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8070c575b6dd c794a4d0c634 "java -Djava.secur..." 2 minutes ago Up 2 minutes 0.0.0.0:8443->8443/tcp benefits
aa417df08b94 mysql:5.6 "docker-entrypoint..." 2 days ago Up 2 days 0.0.0.0:3308->3306/tcp benefitsmysql
f55a2a7ac487 hello-world "/hello" 2 days ago Exited (0) 2 days ago gifted_lalande
Now when I access my Spring Boot application running inside the Docker container from my Windows machine, e.g. https://192.168.99.103:8443/home, I get the connection refused error ERR_CONNECTION_REFUSED.
What am I missing in this configuration?
yogsma,
I read your blog and applied your fix, but the docker-machine IP didn't solve my problem.
Then I realized Docker containers can't communicate over 127.0.0.1, so I used the container IP instead:
docker inspect <container_id>
and then found the IPAddress field.
That IP address solved my problem; I didn't need to use the docker-machine IP.
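For reference, docker inspect's Go templating can print just that field (a one-liner sketch; <container_id> stays as your container's ID):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>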
I am taking these as a sample microservice consumer & provider:
https://github.com/anha1/microservices-pact-maven
Pact Broker Docker:
https://github.com/DiUS/pact_broker-docker
How do I deploy and run the pact_broker with Postgres in Kubernetes?
I have a pact_broker image without Postgres in Docker.
How do I configure Postgres for the pact_broker while deploying it in Kubernetes?
We can deploy the Pact Broker in Kubernetes:
We need the Docker postgres image, deployed in Kubernetes.
In the Kubernetes svc YAML file for postgres, set "type: ClusterIP" and "targetPort: 5432" in the spec.
We need the Docker Pact Broker image, deployed in Kubernetes.
In the Kubernetes svc YAML file for the Pact Broker, set "type: NodePort" and "targetPort: 80" in the spec. A sketch of both Services follows.
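A minimal sketch of the two Service manifests described above; the names, selector labels, and nodePort value are illustrative assumptions:

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP                  # reachable only inside the cluster
  selector:
    app: postgres                  # assumed pod label
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: pact-broker
spec:
  type: NodePort                   # exposed on each node's IP
  selector:
    app: pact-broker               # assumed pod label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080              # assumed port in the NodePort range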
[Sample image: Pact Broker app running in Kubernetes]