Dockerized Spring Boot microservice freezes for no apparent reason - java

I have a microservices-based project. Each microservice is a Spring Boot (v.2.0.0-RC2) app. I also have discovery, config, and gateway microservices based on Spring Cloud (Finchley). The whole system is deployed on a test machine using Docker Compose.
I realized that one of the microservices freezes after receiving several successive requests from the frontend app in a short period of time. After this, it becomes unresponsive to further requests, and I receive a read timeout from my gateway. The same occurs when calling this microservice directly, bypassing the gateway.
I have a Spring Boot Admin instance, and I noticed the microservice goes offline and comes back online every 5 minutes. Despite that, nothing interesting shows up in the logs. No memory issues observed.
Next remark: this problem occurs only when I start the whole system from docker-compose at the same time. When I restart this single microservice, I can't reproduce it anymore.
And the last one: the whole container of the microservice seems to be frozen. When I do 'docker stop' on it, the terminal hangs, but after checking the container status in another terminal, the container appears as 'exited'. A very strange thing occurred when I did 'docker attach' on the container: the terminal also hung, and when I exited from it, my problematic microservice started to work properly and accepted incoming requests with success.
Can anyone help me with this strange problem? I really have no more ideas about what I can try to resolve it.
Thanks in advance for any clue.
EDIT
docker-compose.yml
version: '3.4'
services:
  config-service:
    image: im/config-service
    container_name: config-service
    environment:
      - SPRING_PROFILES_ACTIVE=native
    volumes:
      - ~/production-logs:/logs
  discovery-service:
    image: im/discovery-service
    container_name: discovery-service
    environment:
      - SPRING_PROFILES_ACTIVE=production
    volumes:
      - ~/production-logs:/logs
  gateway-service:
    image: im/gateway-service
    container_name: gateway-service
    ports:
      - "8080:8080"
    depends_on:
      - config-service
      - discovery-service
    environment:
      - SPRING_PROFILES_ACTIVE=production
    volumes:
      - ~/production-logs:/logs
  car-service_db:
    image: postgres:9.5
    container_name: car-service_db
    environment:
      - POSTGRES_DB=car
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
  car-service:
    image: im/car-service
    container_name: car-service
    depends_on:
      - config-service
      - discovery-service
      - car-service_db
    environment:
      - SPRING_PROFILES_ACTIVE=production
      - CAR_SERVICE_DB_URL=jdbc:postgresql://car-service_db:5432/car
      - CAR_SERVICE_DB_USER=user
      - CAR_SERVICE_DB_PASSWORD=pass
    volumes:
      - ~/production-logs:/logs
Dockerfile of car-service
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 9005
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENV JAVA_OPTS="-agentlib:jdwp=transport=dt_socket,address=8001,server=y,suspend=n"
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar"]
Command used to start up
docker-compose up
Test machine:
Ubuntu Server 16.04 LTS

RESOLVED
The cause was a logging aspect. In a thread dump I found a lot of threads waiting on:
sun.misc.Unsafe.park(Unsafe.java:-2) native
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
ch.qos.logback.core.OutputStreamAppender.writeBytes(OutputStreamAppender.java:197)
ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:231)
ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:414)
ch.qos.logback.classic.Logger.debug(Logger.java:490)
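For anyone hitting the same contention: a common mitigation (a sketch only, assuming a standard logback.xml with a blocking file appender; the file name and pattern here are made up to match the mounted /logs volume) is to wrap the file appender in an AsyncAppender, so request threads only enqueue events instead of competing for the appender lock:
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>/logs/car-service.log</file>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- request threads only enqueue here; one worker thread does the actual (locked) writing -->
  <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE" />
    <queueSize>512</queueSize>
    <!-- drop events instead of blocking callers when the queue is full -->
    <neverBlock>true</neverBlock>
  </appender>
  <root level="INFO">
    <appender-ref ref="ASYNC" />
  </root>
</configuration>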

Related

Cannot run or connect to Postgresql container on Docker

On my Windows 10 machine I have a Java app, and I create Postgresql containers on Docker using the following configuration:
docker-compose.yml:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
And I use the following commands to bring the containers up:
cd postgresql
docker-compose up -d
Although the pgadmin container is working on Docker, the postgres container is generally in a restarting state and only sometimes appears to be running for a second. When I look at that container's log, I encounter the following errors:
2021-03-16 09:00:18.526 UTC [82] FATAL: data directory "/data/postgres" has wrong ownership
2021-03-16 09:00:18.526 UTC [82] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/data/postgres"
running bootstrap script ... The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
I have tried to apply several workaround suggestions, e.g. PostgreSQL with docker ownership issue, but none of them works. So, how can I fix this problem?
Update: Here is the latest state of my docker-compose.yml file:
version: '2.0'
services:
  postgresql:
    image: postgres:11
    container_name: "my-pg"
    ports:
      - "5432:5432"
    expose:
      - "5432"
    environment:
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=******
      - POSTGRES_DB=demo_test
    volumes:
      - psql:/var/lib/postgresql/data
volumes:
  psql:
As I already stated in my comment, I'd suggest using a named volume.
Here's my docker-compose.yml for Postgres 12:
version: "3"
services:
postgres:
image: "postgres:12"
container_name: "my-pg"
ports:
- 5432:5432
environment:
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres"
POSTGRES_DB: "mydb"
volumes:
- psql:/var/lib/postgresql/data
volumes:
psql:
Then I created the psql volume via docker volume create psql (so just a volume without any actual path mapping).
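For completeness, that is (the inspect step is just a sanity check, not required):
docker volume create psql
docker volume inspect psql   # the Mountpoint shown is created and owned by Docker itself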

Getting "Exception opening socket" on Mongodb connection from Spring App (docker-compose)

Even though I'm setting, in the application properties,
spring.data.mongodb.host=api-database4
as the hostname, which is the container name and hostname of the MongoDB in the docker-compose file, the Spring app still can't connect to the MongoDB instance. I can, however, connect from MongoDB Compass to localhost:27030, but not to mongodb://api-database4:27030/messagingServiceDb.
My docker-compose file:
version: '3'
services:
  messaging-api6:
    container_name: 'messaging-api6'
    build: ./messaging-api
    restart: always
    ports:
      - 8085:8080
    depends_on:
      - api-database4
    networks:
      - shared-net
  api-database4:
    image: mongo
    container_name: api-database4
    hostname: api-database4
    restart: always
    ports:
      - 27030:27017
    networks:
      - shared-net
    command: mongod --bind_ip_all
networks:
  shared-net:
    driver: bridge
and my Dockerfile for the Spring app is:
FROM openjdk:12-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my application.properties is:
#Local MongoDB config
spring.data.mongodb.database=messagingServiceDb
spring.data.mongodb.port=27030
spring.data.mongodb.host=api-database4
The entire code can be seen here.
How can I make my spring app on a docker container create a connection to the MongoDB instance which is on another docker container?
I have tried and replicated the solutions from similar questions, but it still gives the same error.
Edit and Solution:
I solved the issue by commenting out the configuration below:
#Local MongoDB config
#spring.data.mongodb.database=messagingServiceDb
spring.data.mongodb.host=api-database4
spring.data.mongodb.port=27030
The remaining question is: why? That was the correct port I was trying to connect to. Could it be related to the configuration order?
The ports directive in docker-compose publishes container ports to the host machine. Between themselves, containers on the same network communicate using the container ports, not the published ones. You can test whether one container can reach another with netcat:
docker exec -it messaging-api6 sh
> apk add --no-cache netcat-openbsd
> nc -z -v api-database4 27017   # container port: should succeed
> nc -z -v api-database4 27030   # published port: only reachable on the host, so this fails
(The image is openjdk:12-jdk-alpine, so it ships sh and apk rather than bash and apt-get.)
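Following that logic, a sketch of the corrected application.properties (my assumption based on the test above, not verified against the asker's repo) would point at the container port:
#Local MongoDB config
spring.data.mongodb.database=messagingServiceDb
spring.data.mongodb.host=api-database4
spring.data.mongodb.port=27017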

503 error code for Springboot container connecting to mongo container using docker-compose

I am trying to connect my spring-boot application (REST endpoints) running in a Tomcat container with a mongo container. I am using docker-compose to link the two containers. The application was working perfectly fine; it just stopped working suddenly.
Following is my code:
Dockerfile:
FROM tomcat:9.0.13
WORKDIR /usr/local/tomcat/webapps
#COPY pom.xml .
#RUN ["mvn", "clean", "install"]
COPY /target/TestProfileManager.war .
docker-compose.yml:
version: '3'
services:
  app:
    container_name: VF-BACKEND
    restart: always
    build: .
    ports:
      - "8083:8080" #VF Webservice
    depends_on:
      - mongo
    links:
      - mongo
  mongo:
    container_name: VF-MONGO
    image: mongo:4.0.2
    ports:
      - "27018:27017"
    volumes:
      - /data/vfdb:/data/db
application.properties
spring.data.mongodb.uri=mongodb://mongo:27018/tsp
If I run the application from the IDE as a standalone application, the endpoints do return responses. Only during container-to-container communication am I getting 503. I could not find any post that answers my question.
Thanks for the help. Since the code was working before, I'm not pasting the classes. Let me know if I should share them as well.
It should be mongodb://mongo:27017; in service-to-service communication you do not use the published port.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. The HOST_PORT is 27018 and the container port is 27017. Networked service-to-service communication uses the CONTAINER_PORT.
compose-networking
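Applied to the asker's application.properties, the connection string becomes:
spring.data.mongodb.uri=mongodb://mongo:27017/tsp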

pio train fails with IOException: Connection reset by peer

I've set up PredictionIO v0.13 on my Linux machine in Docker (running in swarm mode). This setup includes:
one container for pio v0.13
one container for elasticsearch v5.6.4
one container for mysql v8.0.16
one container for spark-master v2.3.2
one container for spark-worker v2.3.2
The template I am using is ecomm-recommender-java, modified for my data. I don't know whether I made an error in the template or in the Docker setup, but something is really wrong:
pio build succeeds
pio train fails - with
Exception in thread "main" java.io.IOException: Connection reset by peer
Because of this, I put a lot of logging into my template at various points, and this is what I found:
The train fails after the model is computed. I am using a custom Model class, for holding the logistic-regression model and the various user and product indices.
The model is a PersistentModel. In the save method I put logging after every step. Those are logged, and I can find the saved results in the mounted docker volume, so it seems like the save also succeeds, but after that I get the following exception:
[INFO] [Model] saving user index
[INFO] [Model] saving product index
[INFO] [Model] save done
[INFO] [AbstractConnector] Stopped Spark#20229b7d{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
Exception in thread "main" java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.SessionInputBufferImpl.fill(SessionInputBufferImpl.java:204)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.codecs.AbstractMessageParser.fillBuffer(AbstractMessageParser.java:136)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:241)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.predictionio.shaded.org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
at java.lang.Thread.run(Thread.java:748)
I couldn't find anything more relevant in any of the logs, but it's possible I overlooked something.
I tried to play with the train parameters like so:
pio-docker train -- --master local[3] --driver-memory 4g --executor-memory 10g --verbose --num-executors 3
playing with the Spark modes (i.e. --master local[1-3], or omitting it to use the instances in the Docker containers)
playing with --driver-memory (from 4g to 10g)
playing with --executor-memory (also from 4g to 10g)
playing with the --num-executors number (from 1 to 3)
as most of the Google search results suggested these.
My main problem here is that I don't know where this exception is coming from or how to track it down.
Here is the save method, which could be relevant:
public boolean save(String id, AlgorithmParams algorithmParams, SparkContext sparkContext) {
    try {
        logger.info("saving logistic regression model");
        logisticRegressionModel.save("/templates/" + id + "/lrm");
        logger.info("creating java spark context");
        JavaSparkContext jsc = JavaSparkContext.fromSparkContext(sparkContext);
        logger.info("saving user index");
        userIdIndex.saveAsObjectFile("/templates/" + id + "/indices/user");
        logger.info("saving product index");
        productIdIndex.saveAsObjectFile("/templates/" + id + "/indices/product");
        logger.info("save done");
    } catch (IOException e) {
        e.printStackTrace();
    }
    return true;
}
The hardcoded /templates/ is the docker-mounted volume for pio and for spark also.
Expected result is: train completes without error.
I am happy to share more details if necessary, please ask for them, as I am not sure what could be helpful here.
EDIT1: Including docker-compose.yml
version: '3'
networks:
  mynet:
    driver: overlay
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.4
    environment:
      - xpack.graph.enabled=false
      - xpack.ml.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.security.enabled=false
      - xpack.watcher.enabled=false
      - cluster.name=predictionio
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - pio-elasticsearch-data:/usr/share/elasticsearch/data
    deploy:
      replicas: 1
    networks:
      - mynet
  mysql:
    image: mysql:8
    command: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_ROOT_PASSWORD: somepass
      MYSQL_USER: someuser
      MYSQL_PASSWORD: someotherpass
      MYSQL_DATABASE: pio
    volumes:
      - pio-mysql-data:/var/lib/mysql
    deploy:
      replicas: 1
    networks:
      - mynet
  spark-master:
    image: bde2020/spark-master:2.3.2-hadoop2.7
    ports:
      - "8080:8080"
      - "7077:7077"
    volumes:
      - ./templates:/templates
    environment:
      - INIT_DAEMON_STEP=setup_spark
    deploy:
      replicas: 1
    networks:
      - mynet
  spark-worker:
    image: bde2020/spark-worker:2.3.2-hadoop2.7
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    volumes:
      - ./templates:/templates
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
    deploy:
      replicas: 1
    networks:
      - mynet
  pio:
    image: tamassoltesz/pio0.13-spark.230:1
    ports:
      - 7070:7070
      - 8000:8000
    volumes:
      - ./templates:/templates
    dns: 8.8.8.8
    depends_on:
      - mysql
      - elasticsearch
      - spark-master
    environment:
      PIO_STORAGE_SOURCES_MYSQL_TYPE: jdbc
      PIO_STORAGE_SOURCES_MYSQL_URL: "jdbc:mysql://mysql/pio"
      PIO_STORAGE_SOURCES_MYSQL_USERNAME: someuser
      PIO_STORAGE_SOURCES_MYSQL_PASSWORD: someuser
      PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME: pio_event
      PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE: MYSQL
      PIO_STORAGE_REPOSITORIES_MODELDATA_NAME: pio_model
      PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE: MYSQL
      PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE: elasticsearch
      PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS: predictionio_elasticsearch
      PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS: 9200
      PIO_STORAGE_SOURCES_ELASTICSEARCH_SCHEMES: http
      PIO_STORAGE_REPOSITORIES_METADATA_NAME: pio_meta
      PIO_STORAGE_REPOSITORIES_METADATA_SOURCE: ELASTICSEARCH
      MASTER: spark://spark-master:7077 #spark master
    deploy:
      replicas: 1
    networks:
      - mynet
volumes:
  pio-elasticsearch-data:
  pio-mysql-data:
I found out what the issue is: somehow the connection to Elasticsearch is lost during the long-running train. This is a Docker issue, not a PredictionIO issue. For now, I "solved" this by not using Elasticsearch at all.
Another thing I was not aware of: it matters where you put --verbose in the command. Providing it the way I originally did (pio train -- --driver-memory 4g --verbose) has little or no effect on logging verbosity. The right way is pio train --verbose -- --driver-memory 4g, i.e. before the --. This way I got much more log output, which made the origin of the issue clear.

Spring Boot + docker-compose + MySQL: Connection refused

I'm trying to set up a Spring Boot application that depends on a MySQL database called teste in docker-compose. After issuing docker-compose up, I'm getting:
Caused by: java.net.ConnectException: Connection refused (Connection refused)
I'm running on Linux Mint, my docker-compose version is 1.23.2, my Docker version is 18.09.0.
application.properties
# JPA PROPS
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy
spring.datasource.url=jdbc:mysql://db:3306/teste?useSSL=false&serverTimezone=UTC
spring.datasource.username=rafael
spring.datasource.password=password
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver
docker-compose.yml
version: '3.5'
services:
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=rootpass
      - MYSQL_DATABASE=teste
      - MYSQL_USER=rafael
      - MYSQL_PASSWORD=password
    ports:
      - 3306:3306
  web:
    image: spring-mysql
    depends_on:
      - db
    links:
      - db
    ports:
      - 8080:8080
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=rafael
      - DATABASE_NAME=teste
      - DATABASE_PORT=3306
and the Dockerfile
FROM openjdk:8
ADD target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
Docker Compose always starts and stops containers in dependency order, or in file order if no dependencies are given. But docker-compose only waits until a dependency container is running, not until the service inside it is ready. You can refer here for further details. So the problem here is that your database is not ready to accept connections when your spring-mysql container tries to reach it. The recommended solution is to wrap your app's starting ENTRYPOINT with wait-for-it.sh or a similar script.
For example, if you use wait-for-it.sh, the ENTRYPOINT in your Dockerfile should change to the following, after copying the script to your project root:
ENTRYPOINT ["./wait-for-it.sh", "db:3306", "--", "java", "-jar", "app.jar"]
And two other important things to consider here:
Do not use links; they are deprecated, and you should use user-defined networks instead. All services in a docker-compose file end up on a single user-defined network if you don't explicitly define any, so you just have to remove the links from the compose file.
You don't need to publish a container's port if it is only used inside the user-defined network; a sketch of the compose file with both changes applied follows.
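A sketch of the question's compose file with only those two changes (links and the db port mapping removed; everything else unchanged):
version: '3.5'
services:
  db:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=rootpass
      - MYSQL_DATABASE=teste
      - MYSQL_USER=rafael
      - MYSQL_PASSWORD=password
  web:
    image: spring-mysql
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - DATABASE_HOST=db
      - DATABASE_USER=rafael
      - DATABASE_NAME=teste
      - DATABASE_PORT=3306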
I was facing the same issue, and in case you do not want to use any custom scripts, this can easily be resolved using health checks along with depends_on. A sample using these is as follows:
services:
  mysql-db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=vikas1234
      - MYSQL_USER=vikas
    ports:
      - 3306:3306
    restart: always
    healthcheck:
      test: [ "CMD", "mysqladmin", "ping", "-h", "localhost" ]
      timeout: 20s
      retries: 10
  app:
    image: shop-keeper
    container_name: shop-keeper-app
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    depends_on:
      mysql-db:
        condition: service_healthy
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-db:3306/shopkeeper?createDatabaseIfNotExist=true
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: vikas1234
Your config looks nice; I would just recommend:
Remove links: db. It has no value in user-defined bridge networking.
Remove the port mapping for db unless you want to connect from outside docker-compose; inside the user-defined bridge network, all container ports are reachable automatically.
I think the problem is that the database container takes more time to start than web. depends_on only controls the startup order; it does not guarantee database readiness. If possible, configure several connection attempts or a socket-wait procedure in your web container; one container-level variant is sketched below.
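One crude container-level variant of "several connection attempts" (a sketch, not from the original setup): let the app crash on Connection refused and have Docker re-run it until MySQL is ready:
web:
  image: spring-mysql
  depends_on:
    - db
  restart: on-failure   # re-run the app until the database accepts connections
  ports:
    - 8080:8080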
