I want to build an application. For testing it uses testcontainers. The build will run on CI and on the developers' machines. The Dockerfile is more or less:
FROM amazoncorretto:17-alpine AS builder
ADD . .
RUN ./gradlew build

FROM amazoncorretto:17-alpine
COPY --from=builder build/libs/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
And I run the build using docker build .
Part of the ./gradlew build runs tests with Testcontainers and uses
val sftpDocker = GenericContainer(DockerImageName.parse("atmoz/sftp:alpine"))
And it returns
java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
I know that:
Testcontainers has its own Docker API client and doesn't require Docker to be installed inside the Alpine container
Someone made it work using the "docker:20.10.14-dind" image, but I don't know how that fits my problem
I can mount /var/run/docker.sock during docker run ..., but I'm running the build with a RUN command inside the Dockerfile and docker build ... instead
I can set DOCKER_HOST and Testcontainers should use the default gateway's IP address, but that's far less secure than using the socket
So is there a way to use a socket in this setup? If not, how should I run my host Docker to expose TCP instead of a socket?
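For reference, the TCP variant mentioned above would look roughly like this. This is a sketch, not a recommendation: the port and addresses are assumptions, and an unencrypted daemon socket should only ever be exposed on a trusted interface.

```shell
# On the host: start the Docker daemon listening on TCP as well as the
# local socket (insecure without TLS; bind to loopback or a trusted NIC).
sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 &

# Pass the daemon address into the build so Testcontainers can pick it up.
docker build --network=host --build-arg DOCKER_HOST=tcp://127.0.0.1:2375 .
```

For the build arg to reach the tests, the builder stage would also need a matching `ARG DOCKER_HOST` (and `ENV DOCKER_HOST=$DOCKER_HOST`) before the `RUN ./gradlew build` step; Testcontainers reads the DOCKER_HOST environment variable.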
Related
I'm trying to install and run Jenkins manually, without pulling the Jenkins image from Docker Hub.
For this exercise I used the ubuntu image as the container and did the following:
Install jdk-11 on the container
Set up the JAVA_HOME env variable
Install jenkins with apt-get
Run jenkins with the command service jenkins start
Then the status output is the following:
root@42024442b87b:/# service jenkins status
Correct java version found
Jenkins Automation Server is running with the pid 89
Now I don't know how to access the Jenkins server running in the container from my host.
thanks in advance
Docker containers are not reachable over the network from the host system by default. You need to publish a container's port, meaning that the port will be opened on the host machine and all traffic forwarded to the container.
Running docker run with -p 8080:8080 publishes port 8080. Take a look at the syntax here.
You can also specify which port on the host machine is supposed to be mapped to a container's port with something like -p 1234:8080.
You can also use the EXPOSE keyword in your Dockerfile, but note that it only documents the port; it still has to be published with -p (or -P) at run time.
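Applied to the Jenkins case above, a sketch of the run command (Jenkins listens on 8080 by default; the base image and install steps are as described in the question):

```shell
# Re-create the container with the port published.
docker run -it -p 8080:8080 ubuntu /bin/bash
# ...inside the container: install jdk-11 and jenkins, then
#    `service jenkins start`, exactly as before...

# Then from the host, the Jenkins UI should answer on:
curl http://localhost:8080
```

Ports can only be published when the container is created, so an already-running container has to be re-created with the -p flag.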
I am following the Istio tutorial for Service Mesh, and I am having trouble reproducing one of the steps they perform.
https://docs.huihoo.com/microservices/introducing-istio-service-mesh-for-microservices.pdf
They have a java application called Recommendation, which they deploy twice.
docker build -t example/recommendation:v1 .
docker build -t example/recommendation:v2 .
I have a Java application, TEST, deployed in OpenShift which I want to copy and change the version, so I have TEST-V1 and TEST-V2. How can I do it? Do I need to deploy the application twice with different Deployment.yaml?
Thanks in advance.
How can I do it? Do I need to deploy the application twice with different Deployment.yaml?
Basically - yes. In the end, what you need are two service endpoints pointing to different pods. You may place the service endpoints into the same deployment file, but for the sake of robustness I'd use completely separate deployments.
First of all, your commands:
docker build -t example/recommendation:v1 .
docker build -t example/recommendation:v2 .
are not deployments.
What they do is build Docker images.
This is the line to deploy your service for the first example:
oc apply -f <(istioctl kube-inject -f \ src/main/kubernetes/Deployment.yml) -n tutorial
and for the second:
Finally, inject the Istio sidecar proxy and deploy this into Kubernetes:
oc apply -f <(istioctl kube-inject -f \ src/main/kubernetes/Deployment-v2.yml) -n tutorial
You ask if you need to deploy twice when you change the version. First of all, keep in mind that you are operating on container images. The docker build command creates an image that you will use later. If you create a new version of the application, you build a new image. The two may be similar, but from the OpenShift / Kubernetes point of view they are completely different Docker images. Every time the container image changes, you need to redeploy to Kubernetes / OpenShift, once per image change.
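Putting the pieces together, the full cycle per version looks something like this (the image tags, file paths, and project name are taken from the tutorial commands above):

```shell
# v1: build the image, then inject the Istio sidecar and deploy
docker build -t example/recommendation:v1 .
oc apply -f <(istioctl kube-inject -f src/main/kubernetes/Deployment.yml) -n tutorial

# v2: same steps against the second Deployment file
docker build -t example/recommendation:v2 .
oc apply -f <(istioctl kube-inject -f src/main/kubernetes/Deployment-v2.yml) -n tutorial
```

Each Deployment file references its own image tag and carries a version label, which is what lets Istio route traffic between v1 and v2.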
My Cassandra instance is running on google cloud platform and I am deploying my application which connects to cassandra in a container. The application works fine when I run it without dockerizing it. Once I deploy it in the container I am getting the below error,
NoHostAvailableException: All host(s) tried for query failed
I tried pinging the IP of the Cassandra instance from inside the container; it is not timing out and the ping looks good.
As for the container, I am using the maven:latest image to create a container and run my application using webapp-runner inside the container.
This is my dockerfile
FROM maven:latest
COPY . /tmp
WORKDIR /tmp
RUN mvn clean package
EXPOSE 9042 80
CMD java -jar target/dependency/webapp-runner.jar target/testproject.war
This sounds like a firewall issue. Are you sure the required ports are open?
http://cassandra.apache.org/doc/latest/faq/index.html?highlight=port#what-ports
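A quick way to check from inside the application container whether the CQL port is actually reachable (9042 is the default native-transport port; the host IP here is a placeholder):

```shell
# nc -z only probes whether the port accepts connections; a successful ping
# just proves ICMP works, not that the CQL port is open through the firewall.
nc -zv -w 5 10.0.0.5 9042
```

If ping succeeds but this times out, a firewall rule on the Cassandra host or in the cloud network is the likely culprit.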
I have no problems running Docker from the command line:
docker run -p 5432:5432 -it --rm postgres:9.5.2
but when I do it from Gradle, using the dcompose plugin, I get
Could not evaluate onlyIf predicate for task ':pullDatabaseImage'.
> Docker command failed: Certificate path (DOCKER_CERT_PATH) '/home/xxx/.docker/certs' doesn't exist.
my config:
plugins {
    id "com.chrisgahlert.gradle-dcompose-plugin" version "0.3.2"
}

dcompose {
    database {
        image = 'postgres:9.5.2' // Required
    }
}

test {
    dependsOn startDatabaseContainer
    finalizedBy removeDatabaseContainer
}
What's wrong? How can I run Docker from Gradle?
I found that Docker uses a Unix socket for unsecured local communication but requires custom certificates for network communication over IP sockets. The com.chrisgahlert.gradle-dcompose-plugin uses network communication, so there is no way to make it work out of the box (every developer who wants to run it locally would have to configure their Docker daemon). So I stopped using that plugin and switched to manually executing the system command (docker run ...) from Java. This way no additional security configuration is needed.
Sorry for the late response.
Try using the latest plugin version (currently 0.8.0). This uses the latest docker-java library which is responsible for communicating with the Docker host. In this version it should be possible to connect to a local unix socket.
If that doesn't help: try unsetting the environment variable with unset DOCKER_TLS_VERIFY.
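In other words, something like the following before running the build (these are the standard Docker CLI environment variables; clearing DOCKER_HOST as well is an extra assumption, in case it points at a TCP address):

```shell
# Remove the TLS settings so the client falls back to the local unix socket.
unset DOCKER_TLS_VERIFY
unset DOCKER_CERT_PATH
unset DOCKER_HOST

./gradlew startDatabaseContainer test
```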
I am new to docker. I want to run my java application on tomcat server using docker images/containers. Can anyone suggest best method to do that?
First, find a Docker image with the version of Tomcat you want. You can search Docker images using docker search, so try
docker search tomcat
next pull it locally
docker pull <your/image>
then run commands on it to install your software
docker run <your/image> <your command and args>
then find your container ID by running
docker ps -a
and commit your changes
docker commit <container_id> <some_name>
I'd recommend the docker tutorial to get started.
P.S. this answer will show you how to transfer files to docker.
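As an alternative to committing changes by hand, the same result can usually be had without modifying the image at all; a minimal sketch, assuming an official tomcat image and a WAR file named myapp.war:

```shell
# Pull an official Tomcat image and run it with the WAR mounted into the
# webapps directory (the tag and file names are assumptions).
docker pull tomcat:9.0
docker run -d -p 8080:8080 \
  -v "$PWD/myapp.war:/usr/local/tomcat/webapps/myapp.war" \
  tomcat:9.0
# The app should then be reachable at http://localhost:8080/myapp
```

Baking the WAR into your own image with a small Dockerfile (COPY instead of the volume mount) is the more reproducible option once this works.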