How to install Java using Docker Compose?

I am building a Node/Mongo project and using Docker Compose for it.
Here is my Dockerfile:
FROM node:carbon
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
Here is my docker-compose.yml:
version: "2"
services:
app:
container_name: app
restart: always
build: .
ports:
- "3000:3000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db
ports:
- "27017:27017"
I also want to install Java in this project, since I will need it for Elasticsearch and other purposes. Can anyone help with how to install Java using docker-compose here?

docker-compose is a tool used to launch multiple containers from a single YAML file; the Java installation itself belongs in the image, i.e. in your Dockerfile.
Add this RUN instruction to your Dockerfile to install Java:
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean
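Note that Java only needs to be in your app image if the app itself runs Java. If the goal is Elasticsearch, its official image ships with a bundled JVM, so it is usually easier to add it as another service in docker-compose.yml. A minimal sketch (the service name and version tag are assumptions, not from the question):

  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node # single-node mode, suitable for local development
    ports:
      - "9200:9200"

This fragment goes under services:, next to app and mongo.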

Related

GitLab CI: Docker inside a Java image

To use an integration test library (https://www.testcontainers.org/) I need an image with Java and Docker installed at the same time.
I'm trying to use this stage:
test:
  stage: test
  image: gradle:jdk16
  services:
    - docker:latest
  script:
    - docker --version
    - chmod +x ./gradlew
    - export GRADLE_USER_HOME=`pwd`/.gradle
    - ./gradlew test --stacktrace
  rules:
    - !reference [.rules_merge_request, rules]
But it does not work:
$ docker --version
/scripts-33119345-2089057982/step_script: line 154: docker: command not found
Any help?
The image gradle:jdk16 does not include the Docker client, so you'll have to install it in your job. Additionally, you need to use the docker:dind service in your services: configuration (not docker:latest):
test:
  stage: test
  image: gradle:jdk16
  services:
    - docker:dind # use the docker-in-docker image
  before_script: # install docker
    - apt update && apt install --no-install-recommends -y docker.io
  script:
    - docker --version
Running this on gitlab.com runners, the docker --version step should now succeed and print the installed client version.
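Since the goal is Testcontainers: with the docker:dind service the daemon is reachable over TCP rather than a local socket, so the Docker client and Testcontainers usually need to be pointed at it explicitly. A hedged sketch of the variables commonly added to the job for a non-TLS dind setup (values are assumptions; adjust to your runner configuration):

test:
  variables:
    DOCKER_HOST: "tcp://docker:2375" # address of the docker:dind service
    DOCKER_TLS_CERTDIR: ""           # empty value disables TLS inside dind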

Cannot detach from Docker console with Java process on it

I'm not able to detach from a Docker container after I attached to it.
Dockerfile to define the container:
FROM adoptopenjdk/openjdk16:debian
WORKDIR /app
STOPSIGNAL SIGTERM
RUN apt-get update && apt-get install -y curl && apt clean
CMD curl -so server.jar https://ci.tivy.ca/job/Airplane-1.17/lastSuccessfulBuild/artifact/launcher-airplane.jar && java -jar server.jar
Build the container with: docker build -t minecraft .
docker-compose file:
version: "3.7"
services:
mc:
container_name: mc
ports:
- 25565:25565
image: minecraft
volumes:
- type: bind
source: /root/mc
target: /app
Start the container with: docker-compose up mc
I tried to attach to the console with docker attach mc, and that works. But I'm unable to detach again: Ctrl+C does not work, Ctrl+P followed by Ctrl+Q does not work, and typing stop (a command that should stop the Java process) does not work either.
I also tried docker attach --detach-keys="ctrl-d" mc, but that does not work.
The Java process never ends.
Found the fix myself: you need to set stdin_open and tty in the docker-compose file. New docker-compose.yml:
version: "3.7"
services:
mc:
container_name: mc
ports:
- 25565:25565
image: minecraft
volumes:
- type: bind
source: /root/mc
target: /app
stdin_open: true
tty: true
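For context: the default Ctrl+P, Ctrl+Q detach sequence only works when the container was started with a TTY and open stdin, which is exactly what these two options provide. A minimal usage sketch after recreating the container:

docker-compose up -d mc # recreate the container with the new settings
docker attach mc        # attach to the server console
# press Ctrl+P then Ctrl+Q to detach; the Java process keeps running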
I will mark this answer as the solution in two days because of Stack Overflow's rules.

Use Java with Airflow and Docker

I have a use case where I want to run a jar file via Airflow, all of which has to live in a Docker Container on Mac.
I have tried installing Java separately, and I have also tried mounting my host's JAVA_HOME into the container.
This is my docker-compose.yaml:
airflow:
  image: 'puckel/docker-airflow:1.10.9'
  hostname: airflow
  container_name: airflow
  volumes:
    - ${PWD}/airflow/dags:/usr/local/airflow/dags
    - ${JAVA_HOME}:/usr/local/bin/java # forward-mounting host JAVA_HOME
This way I get a java directory inside /usr/local/bin/ with the data, but java -version returns Permission denied.
Changing it to ${JAVA_HOME}/bin/java:/usr/local/bin/java returns exec format error instead.
What is the correct way to handle this use case?
I think you're getting Permission denied because you are running the container as the airflow user.
Can you try running it as root? (This is risky! Don't use it in production; it is just a temporary workaround. Avoid the root user in general.)
airflow:
  image: 'puckel/docker-airflow:1.10.9'
  hostname: airflow
  container_name: airflow
  user: root
  volumes:
    - ${PWD}/airflow/dags:/usr/local/airflow/dags
    - ${JAVA_HOME}:/usr/local/bin/java
EDIT:
Instead of mounting your local Java, consider installing a separate one. (Mounting Java from a macOS host cannot work in a Linux container anyway: the binaries are a different executable format, which is why you see exec format error.)
airflow:
  build:
    context: .
    dockerfile: Dockerfile
  hostname: airflow
  container_name: airflow
  volumes:
    - ${PWD}/airflow/dags:/usr/local/airflow/dags
and add the Dockerfile in the same directory:
FROM puckel/docker-airflow:1.10.9
USER root
# the JDK package fails to install on slim Debian images unless this man-page directory exists
RUN mkdir -p /usr/share/man/man1
RUN apt-get update && apt-get install -y default-jdk && apt-get clean
USER airflow
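To confirm the build works, a quick check (assuming the service is still named airflow as above):

docker-compose build airflow                  # rebuild the image with the JDK layer
docker-compose run --rm airflow java -version # should print the default-jdk version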

GitLab + Liquibase

The question is very straightforward: what is the best way to execute Liquibase migrations in GitLab pipelines?
Here is what I have so far. But it seems GitLab services immediately execute docker run, and that docker run already requires the DB migration parameters.
image: docker:19.03.1

stages:
  - build
  - db-migration
  - deploy

services:
  - docker:19.03.1-dind
  - liquibase/liquibase:latest

variables:
  DOCKER_TLS_CERTDIR: "/certs"
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_IMAGE_PATH: $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

before_script:
  - apk add --update python python-dev py-pip
  - pip install awscli --upgrade
  - $(aws ecr get-login --no-include-email --region $AWS_REGION | tr -d '\r')

build:
  stage: build
  script:
    - docker build --tag $AWS_IMAGE_PATH/$CI_PROJECT_NAME:$CI_COMMIT_SHA --tag $AWS_IMAGE_PATH/$CI_PROJECT_NAME:latest .
    - docker push $AWS_IMAGE_PATH/$CI_PROJECT_NAME:$CI_COMMIT_SHA
    - docker push $AWS_IMAGE_PATH/$CI_PROJECT_NAME:latest

db-migration:
  stage: db-migration
  script:
    - liquibase --changeLogFile=/src/main/resources/db/changelog/psql/changelog.yaml
      --url="jdbc:postgresql://host:5432/db"
      --username username --password $DB_PASSWORD update

deploy:
  stage: deploy
  script:
    - echo "Deployed"
One working approach is to install Liquibase directly in the job instead of running it as a service:
db-migration:
  image: openjdk:8-jre-alpine
  stage: db-migration
  script:
    - INIT_PATH=`pwd`
    - apk add bash
    - mkdir /liquibase
    - mkdir /Downloads
    - cd /Downloads
    - wget "https://github.com/liquibase/liquibase/releases/download/liquibase-parent-3.7.0/liquibase-3.7.0-bin.zip"
    - wget "https://repo1.maven.org/maven2/org/postgresql/postgresql/42.2.8/postgresql-42.2.8.jar"
    - unzip liquibase-3.7.0-bin.zip -d /liquibase -q
    - mv postgresql-42.2.8.jar /liquibase/lib/
    - export PATH=$PATH:/liquibase
    - liquibase --changeLogFile=$INIT_PATH/src/main/resources/db/changelog/psql/changelog.yaml
      --url="jdbc:postgresql://host:port/db"
      --username username --password $DB_PASSWORD update
  only:
    - master
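A hedged alternative sketch: services: is meant for long-running daemons, which is why the liquibase/liquibase service "immediately executes docker run". The same image can instead be used directly as the job image; overriding the entrypoint is the usual GitLab pattern for images with custom entrypoints (URL and paths below are the question's placeholders):

db-migration:
  stage: db-migration
  image:
    name: liquibase/liquibase:latest
    entrypoint: [""] # bypass the image entrypoint so GitLab can run the script shell
  script:
    - liquibase --changeLogFile=src/main/resources/db/changelog/psql/changelog.yaml
      --url="jdbc:postgresql://host:5432/db"
      --username username --password $DB_PASSWORD update
  only:
    - master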

How to split a monolithic Dockerfile deployment

In a single Dockerfile I run Node and Java for the following:
run test script
run java executable (to bundle screenshots)
run my application
My understanding from the documentation is that this is bad practice, in the sense that this is a 'monolithic' deployment. I am supposed to split this into separate images based on the individual tasks. So, presumably, I would have 3 Dockerfiles.
Dockerfile 1: test script
FROM node:8
RUN node --version
RUN apt-get update && apt-get install -yq libgconf-2-4
# Note: this installs the necessary libs to make the bundled version of Chromium work
RUN apt-get update && apt-get install -y wget --no-install-recommends \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' \
    && apt-get update \
    && apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst ttf-freefont \
    --no-install-recommends \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get purge --auto-remove -y curl \
    && rm -rf /src/*.deb
# project repo
RUN git clone ...
WORKDIR /myApp/
RUN npm install
RUN npm i puppeteer
CMD ["node", "test.ts"]
Dockerfile 2: Java executable (bundle images)
FROM openjdk
RUN git clone -b ...
WORKDIR /myApp/
CMD ["java -jar", "ImageTester.jar"]
Dockerfile 3: Run app
FROM node:8
RUN node --version
RUN git clone -b ...
WORKDIR /myApp/
RUN npm install
EXPOSE 9999
CMD ["npm", "start"]
The question is: how exactly does one do this? How would a non-monolithic deployment be implemented in my case? How does one run 3 Docker images inside one project?
Use docker-compose. In your docker-compose.yml file you'll have 3 services, each pointing to one of the 3 Dockerfiles.
For example:
version: '3'
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile.test
  images:
    build:
      context: .
      dockerfile: Dockerfile.images
  app:
    build:
      context: .
      dockerfile: Dockerfile.app
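The whole stack, or a single task, can then be run from the project root (a usage sketch; service names taken from the example above):

docker-compose build           # build all three images
docker-compose up app          # start the application service
docker-compose run --rm test   # run the test script as a one-off task
docker-compose run --rm images # bundle the screenshots on demand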
