I want to do REAL remote JMX management of a docker container running a Spring Boot application:
architecture sketch
I've read a lot of documentation and my understanding is that this should be the server-side configuration:
java \
-Djava.rmi.server.hostname=10.0.2.15 \
-Dcom.sun.management.jmxremote.port=8600 \
-Dcom.sun.management.jmxremote.rmi.port=8601 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.local.only=false \
-jar my-spring-boot-app.jar
The URL to use in JVisualVM should be service:jmx:rmi://10.0.2.15:8601/jndi/rmi://10.0.2.15:8600/jmxrmi.
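To take VisualVM out of the equation, the same URL can be tested from machine 1 with jconsole, which accepts a JMX service URL as its argument (a quick check, assuming no SSL/auth, matching the flags above):

jconsole service:jmx:rmi://10.0.2.15:8601/jndi/rmi://10.0.2.15:8600/jmxrmi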
BUT THIS FAILS (Failed to retrieve RMIServer stub) within JVisualVM (started on machine 1) - this is the log output:
Caused: java.io.IOException: Failed to retrieve RMIServer stub
    at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369)
    at com.sun.tools.visualvm.jmx.impl.JmxModelImpl$ProxyClient.tryConnect(JmxModelImpl.java:549)
[catch] at com.sun.tools.visualvm.jmx.impl.JmxModelImpl$ProxyClient.connect(JmxModelImpl.java:486)
    at com.sun.tools.visualvm.jmx.impl.JmxModelImpl.connect(JmxModelImpl.java:214)
IT WORKS if I change the server application configuration to -Djava.rmi.server.hostname=172.19.0.6 (I use a BRIDGE docker network, so routing to 172.19.0.6 is possible). With this configuration I am able to do JMX monitoring if JVisualVM is started on the Docker host (machine 2). But this is NO REAL REMOTE management, because routing to 172.19.0.6 is usually impossible from outside the Docker host.
Some additional information:
Port 8600, 8601 are exposed and are shown as LISTEN:
pfh#workbench ~/temp/ % netstat -taupen | grep 860
tcp6 0 0 :::8600 :::* LISTEN 0 254349 -
tcp6 0 0 :::8601 :::* LISTEN 0 254334 -
and telnet 10.0.2.15 8600 from machine 1 is possible.
I get the same wrong behavior with Java 1.8.0_111 and 1.7.0_80 on docker containers and docker host (running JVisualVM).
BTW: this configuration works if the Spring Boot application is running on machine 2 directly (without Docker).
I know that JMX usually negotiates random ports, which is why I make them explicit in my configuration. There is also one additional property, -Dcom.sun.aas.jconsole.server.cbport=8602, that can be set, but it did not solve the problem.
Where is my mistake?
In my problem description I omitted that the docker container was started via docker-compose with this configuration:
my-spring-boot-service:
...
ports:
- "8610:8610"
- "8611:8611"
... and this results in open ports which seem to be bound to all interfaces as you can see via docker inspect my-spring-boot-app:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "ac1a27e2696fd4ac2fcddf6e0935716304e348203ddbe1a0f8e31114cc6e289b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8610/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8610"
}
],
"8611/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8611"
}
],
I cannot see a problem here ... but this seems to be the cause, because if I start the container via docker itself (as suggested by @zapl)
docker run -p 8610:8610 -p 8611:8611 my-spring-boot-app-image
IT WORKS - BUT NOT THE WAY I WANT - I want to use docker-compose.
There is a difference between the two deployments, visible via docker network inspect <foo>.
The working docker network looks like this:
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
while the non-working docker-compose network looks like this:
"Options": {},
Neither container configuration defines an explicit network; both use the default one.
QUESTIONS: Is there a configuration missing? Should I define a network explicitly in docker-compose?
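One thing worth trying (a sketch, not verified against this exact setup, and assuming a v2+ compose file): declare the default network explicitly so it gets the same bridge options the working docker0 network shows above:

networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "0.0.0.0"
      com.docker.network.bridge.enable_ip_masquerade: "true"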
docker-elk is a docker-compose based deployment. I applied the JMX configuration described above to it and was able to manage that machine via remote JMX.
My JMX configuration is exactly the same - MINE IS NOT WORKING :-(
OS/Arch: linux/amd64
docker version: 1.12.2
docker-compose version: 1.8.0, build f3628c7 and 1.9.0, build 2585387
maybe I should switch to JMXMP instead of JMXRMI - https://github.com/oracle/docker-images/tree/master/OracleCoherence/docs/5.monitoring
Related
I have a problem with a Spring Boot application running inside a docker container behind an nginx reverse proxy.
To check that docker and nginx are working correctly I've tried the following, which worked as expected:
docker run -d -p 9001:80 nginx:alpine
My nginx config looks like
server {
server_name xyz.com;
listen 80;
location / {
proxy_pass http://localhost:9001;
}
}
Now I'm able to reach the nginx welcome page from the internet via xyz.com and, assuming the server's IP address is 123.123.1.1, via http://123.123.1.1:9001 as expected.
Also on the server itself, I get the welcome page via
curl http://localhost:9001
I think that shows, that docker and nginx are working as expected.
Let's go further down towards my problem.
Now I have built a simple Spring Boot application, which binds to address 0.0.0.0 and port 9001. (Of course I stopped the nginx container before going on...)
After the maven build, I have started the application with
java -jar app.jar
After it started up, I can reach it on the server with
curl http://localhost:9001
and from the internet via xyz.com and http://123.123.1.1:9001 - everything fine as expected.
Now, I've stopped the application and built a Dockerfile:
FROM amazoncorretto:11-alpine-jdk
COPY target/app.jar app.jar
ENTRYPOINT ["java", "-jar","/app.jar"]
First I build the image with docker build -t app:latest ., and afterwards I run the container with docker run -p 9001:9001 -d app:latest. So far so good - the container starts up fine.
And here is the problem:
I'm not able to reach the application via xyz.com, nor on the server with curl http://localhost:9001! But via http://123.123.1.1:9001 it works!
I've checked it inside the container via docker exec -it [container] sh; there, wget localhost:9001 worked fine. netstat -tulpn inside the container shows the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9001 0.0.0.0:* LISTEN 1/java
Also on the server $ netstat -tulpn shows the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9001 0.0.0.0:* LISTEN 607173/docker-proxy
tcp6 0 0 :::9001 :::* LISTEN 607180/docker-proxy
Maybe something with iptables, but not sure what.
Hopefully somebody out there has an idea to help me.
Thanks in advance!
Update
When I used port 80 inside the container (the Spring Boot application now listens on port 80 instead of 9001) - magically it works! I also tried port 8080 inside the container - that didn't work!
Looks like a port problem, maybe someone can explain where the problem is?!
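In the meantime, comparing Docker's NAT rules for the working port (80) and the failing one (9001) seems like the next thing to check (a debugging sketch; requires root and the iptables backend):

# list Docker's DNAT rules; the mappings for the working and failing ports should look identical
sudo iptables -t nat -L DOCKER -n --line-numbers
# trace where the localhost request actually ends up
curl -v http://127.0.0.1:9001/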
I have a Spring Boot application with Hazelcast (I use 4.2 and have also tried 4.0.3 of the hazelcast-all dependency). Hazelcast is configured for the embedded topology. I use the TCP-IP network join. Properties for Hazelcast are set via the file hazelcast.yaml and the Spring Boot property spring.hazelcast.config (Spring uses this name for the Hazelcast config by default, so the property is redundant).
In the member-list property I list the IP addresses of two machines in one subnet (e.g. 192.0.0.1 and 192.0.0.2).
I build the application into a Docker image based on Alpine with OpenJDK. The image runs java -jar as its ENTRYPOINT.
PROBLEM PREMISE:
I run two docker containers on the two machines described earlier. I forward only port 5701 (using -p) on both containers. The containers don't see each other. The Spring Boot logs show that the container network is being used.
PS:
Everything works if I run docker with --net host.
Everything also works if I package the Spring Boot application with the public-address property set in hazelcast.yaml for the two containers - one packaged with the value 192.0.0.1, the other with 192.0.0.2. The Spring Boot Hazelcast instances then see each other using the machines' network (192.0.0.1 and 192.0.0.2).
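For clarity, the packaged configuration that works looks roughly like this (a sketch with the example IPs from above; this is the instance packaged for 192.0.0.1):

hazelcast:
  network:
    public-address: 192.0.0.1
    join:
      tcp-ip:
        enabled: true
        member-list:
          - 192.0.0.1
          - 192.0.0.2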
PROBLEM:
I try to override the public-address property in hazelcast.yaml using:
docker run -e HZ_NETWORK_PUBLICADDRESS=192.0.0.1
export HZ_NETWORK_PUBLICADDRESS=192.0.0.1 && docker run
JAVA_OPTS="-Dhz.network.public-address=192.0.0.1"
JAVA_OPTS="-Dhazelcast.local.publicAddress=192.0.0.1"
JAVA_OPTS="-Dhazelcast.config=/mnt/overrided_hazelcast.yaml"
ENV HZ_NETWORK_PUBLICADDRESS=192.0.0.1 - in Dockerfile
ENTRYPOINT java -jar -Dhz.network.public-address=192.0.0.1 my-app.jar
Nothing works. Does anyone know why it is not possible to override the public-address property in hazelcast.yaml at startup?
Or maybe someone knows how to run two Spring Boot applications with embedded Hazelcast in separate Docker containers on separate machines.
The way you set up the Hazelcast public address is correct, at least starting from Hazelcast 4.1, where the Config Override feature was added.
To check a working version, you can have a look at the Hazelcast guide Embedded Hazelcast on Kubernetes. Instead of the Hazelcast Kubernetes configuration, you can use TCP-IP. The following Hazelcast configuration worked for me (my host IP is 172.22.41.210).
hazelcast:
cluster-name: hazelcast-cluster
network:
join:
tcp-ip:
enabled: true
member-list:
- 172.22.41.210:5701
- 172.22.41.210:5702
Then, building and starting two applications should form a cluster.
$ mvn package && docker build -t hazelcast-embedded .
$ docker run --rm -e HZ_NETWORK_PUBLICADDRESS=172.22.41.210 -p 5701:5701 hazelcast-embedded
$ docker run --rm -e HZ_NETWORK_PUBLICADDRESS=172.22.41.210:5702 -p 5702:5701 hazelcast-embedded
You should see that the cluster was formed in the application logs.
Members {size:2, ver:2} [
Member [172.22.41.210]:5701 - 21af9e1a-7e98-4305-905c-451ee23486c3 this
Member [172.22.41.210]:5702 - 0507d970-1f31-4df3-9ea5-8c3981eb7c98
]
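For reference, the HZ_NETWORK_PUBLICADDRESS variable is picked up by the Config Override feature and maps onto the same YAML path you tried to override, i.e. it is equivalent to:

hazelcast:
  network:
    public-address: 172.22.41.210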
I have a gRPC server written in Java and I'm currently trying to create a web client, with React. However, I can't seem to manage the connection between the envoy proxy to which the client is connecting and the actual server.
I would expect to receive the same message as with the Java client, but instead I get the error "Http response at 400 or 500 level" and an empty response in the web client, while the Java server doesn't even receive the request.
The server runs on port 8080, and the envoy proxy is configured on port 9090, which is the one used by the web client.
Dockerfile:
FROM envoyproxy/envoy-dev:latest
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l trace --log-path /tmp/envoy_info.log
envoy.yaml:
admin:
access_log_path: /tmp/admin_access.log
address:
socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
listeners:
- name: listener_0
address:
socket_address: { address: 0.0.0.0, port_value: 9090 }
filter_chains:
- filters:
- name: envoy.http_connection_manager
config:
codec_type: auto
stat_prefix: ingress_http
route_config:
name: local_route
virtual_hosts:
- name: local_service
domains: ["*"]
routes:
- match: { prefix: "/" }
route:
cluster: m_service
cors:
allow_origin:
- "*"
allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
expose_headers: grpc-status,grpc-message
enabled: true
http_filters:
- name: envoy.grpc_web
- name: envoy.cors
- name: envoy.router
clusters:
- name: m_service
connect_timeout: 0.25s
type: logical_dns
http2_protocol_options: {}
lb_policy: round_robin
hosts:
socket_address:
address: localhost
port_value: 8080
The commands I use for building and running the docker container are docker build -t m-server . and docker run -p 9090:9090 -td m-server /bin/bash, and the proto classes for the front-end are loaded statically.
If there's any more code that'd be useful to post, please let me know. Any advice is appreciated, thank you!
For me the solution was to change the command used to run the container: docker run -p 9090:9090 -td m-server /bin/bash became docker run -d -p 9090:9090 -p 9901:9901 m-server. The main difference was putting -d instead of -td and dropping /bin/bash; the second port mapping is for Envoy's admin interface.
I am just learning Docker and, from what I understood of the documentation, the explanation is that I was running the container in detached mode but with a pseudo-TTY allocated, which is meant for foreground use. I had seen it here, but the purpose was slightly different, and at the time I misunderstood it as only keeping the container running, which was not what I needed. On top of that, passing /bin/bash overrode the image's CMD, so Envoy itself never started.
Changing 'localhost' to '0.0.0.0', as suggested in this answer is also important.
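Two quick checks that would have exposed this (a sketch; substitute the container name from docker ps):

# confirm Envoy, not a shell, is the container's main process
docker top <container>
# query Envoy's admin interface, mapped to 9901 in the corrected run command
curl -s http://localhost:9901/clusters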
Looks like Envoy is not forwarding the request to your Java server. Envoy has an admin interface: https://www.envoyproxy.io/docs/envoy/latest/operations/admin. That and the Envoy log files should help troubleshoot this.
socket_address:
  address: localhost

This is the problem. Your Envoy tries to forward to itself when running as a dockerized image, because localhost is not your docker host machine (where the gRPC server is running) but the localhost of the running container itself. Use docker compose, port mapping, or an external network. Good luck.
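A sketch of the corrected cluster entry (host.docker.internal resolves to the host on Docker Desktop; on Linux, use the docker host's IP, or put Envoy and the server on a shared network and use the service name):

clusters:
  - name: m_service
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts:
      socket_address:
        address: host.docker.internal
        port_value: 8080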
I am using a docker-compose command to create and start my containers.
My Docker Version
docker --version
Docker version 17.09.0-ce, build afdb6d4
My Docker-Compose version
docker-compose --version
docker-compose version 1.16.1, build 6d1ac21
The .yml file that I'm using looks something like this:
(Note that I've just shortened it to take sensitive things out)
---
services:
zookeeper:
image: "zookeeper"
server-1:
cap_add:
- "NET_ADMIN"
server-0:
cap_add:
- "NET_ADMIN"
dns:
- 8.8.8.8
- 9.9.9.9
environment:
SERVER_ID: 0
NETEM_HOSTS: ""
LOSS_VALUES: ""
MAX_RATE_VALUES: ""
DELAY_VALUES: ""
image: "cloud.mycompany.com:5000/server-0:latest"
fakedns:
image: "cloud.mycompany.com:5000/fakedns:latest"
version: "3.3"
Then I start using:
docker-compose --file compose.yml up -d
My Question is this:
1) After the containers come up, when I go into a container (e.g. server-0 in this case), I don't see the /etc/resolv.conf file updated to use these nameservers. Instead it uses Docker's embedded DNS, which is 127.0.0.11
2) How do I make sure that it uses what I specify in the file used by docker-compose?
3) I tried to do this with the command below and it seems to work, but I need to do it from the compose file:
docker run -p 4000:53 --dns=8.8.8.8 cloud.mycompany.com:5000/server-0:latest
4) Ideally, I want it to use the IP address of the 'fakedns' container, so that it is used instead of the embedded one at 127.0.0.11
You won't see custom DNS servers in /etc/resolv.conf but Docker's resolver will forward DNS requests to them.
User Defined Networks and DNS
Docker compose definitions that are v2+ create a user defined network by default.
Docker with a user defined network uses an embedded DNS server so that Docker can respond for local container requests (service discovery).
For any DNS hosts Docker can not resolve, the request will be forwarded onto a DNS server. This is either the system default server, the server configured in dockerd or the DNS server configured for the container at run time.
Docker DNS
Be careful when using internal DNS servers. Things in the Docker daemon will break if you point the system's DNS at a container, as you create a chicken-and-egg problem: Docker needs DNS to start, but can't start the container that provides DNS.
As your example config only sets the DNS for one app container it should be OK, but make sure the DNS container is up and healthy before your application starts.
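To give the fakedns container a fixed address that the app's dns: entry can point at, one option is a user-defined network with a static IP plus a startup-ordering hint (a sketch; the subnet and address are illustrative, not from the original compose file):

networks:
  default:
    ipam:
      config:
        - subnet: "172.28.0.0/16"

services:
  fakedns:
    image: "cloud.mycompany.com:5000/fakedns:latest"
    networks:
      default:
        ipv4_address: 172.28.0.53
  server-0:
    depends_on:
      - fakedns
    dns:
      - 172.28.0.53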
I'm new to docker and DC/OS. I have deployed a DC/OS cluster in Microsoft Azure. I need to set up JMX access to my Java applications, but I can't.
Let's take the example of deploying a standard tomcat image.
I have docker installed on my local machine. To run a tomcat container with jmx access I use this command:
docker run -e JAVA_OPTS="-Dcom.sun.management.jmxremote \
  -Djava.rmi.server.hostname=127.0.0.1 \
  -Dcom.sun.management.jmxremote.port=8081 \
  -Dcom.sun.management.jmxremote.rmi.port=8081 \
  -Dcom.sun.management.jmxremote.local.only=false \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false" \
  -p 8080:8080 -p 8081:8081 tomcat:8.0
And I can connect to Tomcat via port 8081.
I try to do the same in the DC/OS cluster. I use the JSON configuration below to deploy:
{
"id": "/tomcat",
"instances": 1,
"cpus": 1,
"mem": 512,
"container": {
"type": "DOCKER",
"docker": {
"image": "tomcat:8.0",
"network": "BRIDGE",
"portMappings": [
{ "protocol": "tcp", "hostPort": 8080, "containerPort": 8080 },
{ "protocol": "tcp", "hostPort": 8081, "containerPort": 8081 }
]
}
},
"requirePorts": true,
"acceptedResourceRoles": [
"slave_public"
],
"env": {
"JAVA_OPTS": "-Dcom.sun.management.jmxremote -Djava.rmi.server.hostname=10.0.0.4 -Dcom.sun.management.jmxremote.port=8081 -Dcom.sun.management.jmxremote.rmi.port=8081 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
},
"healthChecks": [
{
"gracePeriodSeconds": 120,
"intervalSeconds": 30,
"maxConsecutiveFailures": 3,
"path": "/",
"portIndex": 0,
"protocol": "HTTP",
"timeoutSeconds": 5
}
]
}
After that I have access to the tomcat web console, but I can't connect via JMX.
I tried various values for -Djava.rmi.server.hostname: 127.0.0.1, 10.0.0.4 (agent IP), agents.westeurope.cloudapp.azure.com.
Please help me understand what I'm doing wrong.
UPDATE: Thanks to Walter - MSFT, who pointed out which ports are open by default on Azure. I had really forgotten about that. But the issue with connecting via JMX is still open for me. I opened a new discussion where I give more details: DC/OS JMX Access
You could read the official Azure article:
Any DC/OS container in the ACS public agent pool is automatically
exposed to the internet. By default, ports 80, 443, 8080 are opened,
and any (public) container listening on those ports are accessible.
According to your description, it seems that port 8081 is not open. You could open port 8081 on Azure Portal.
For more information, please refer to this link: Enable public access to an Azure Container Service application.
Update:
I tested in my lab with your json file and it works for me; you don't need to change it. You should open the port on the Azure NSG and Load Balancer.
NSG:
Load Balancer:
I tested in my lab and could open the 8080 Web UI. When I tested port 8081, I noticed that the port is listening and I could access it with the public IP.
azureuser#dcos-master-01234567-0:~$ netcat -z -v 13.84.176.235 8081
Connection to 13.84.176.235 8081 port [tcp/tproxy] succeeded!
You could also use curl to test; I get the following result.
azureuser#dcos-master-01234567-0:~$ curl 13.84.176.235:8081
curl: (52) Empty reply from server
If you cannot access the 8081 Web UI, I suggest you check the docker container.