I wanted to use a Vault server to store secrets and deploy it on OpenShift.
I wrote this Dockerfile, built the image, pushed it to the OpenShift registry, and created a deployment from the image stream:
FROM vault:1.5.0
ADD *.hcl /etc/config.hcl
ENTRYPOINT ["vault", "server", "-config=/etc/config.hcl"]
Here is the config:
storage "file" {
  path = "/vault/data"
}
listener "tcp" {
  address = "127.0.0.1:8200"
  tls_disable = 1
}
disable_mlock = true
api_addr = "http://127.0.0.1:8200"
I created a route to port 8200. When I use the Vault CLI from inside the vault-server pod it works fine: I can log in, configure, etc. When I use the OpenShift CLI on my local computer to forward port 8200 to my local port 8200, I can also access the API.
The problem is that I cannot access the API from anywhere outside the pod. The route gives me a 503 response, and when trying via http://vault-server.namespace.svc:8200 I get connection refused (using Spring RestTemplate).
How can I configure Vault to also accept external traffic?
Your listener block means you are only listening for connections from localhost. Change the address field to 0.0.0.0:8200 to listen on all interfaces:
listener "tcp" {
  address = "0.0.0.0:8200"
}
And please don't forget to enable TLS as soon as you've got connectivity working.
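For reference, the full corrected config would then read (everything except the listener address is carried over unchanged from the question):
storage "file" {
  path = "/vault/data"
}
listener "tcp" {
  # 0.0.0.0 listens on all interfaces, so the service/route can reach the pod
  address = "0.0.0.0:8200"
  tls_disable = 1
}
disable_mlock = true
api_addr = "http://127.0.0.1:8200"
After redeploying, you can verify from another pod with, e.g., curl http://vault-server.namespace.svc:8200/v1/sys/health (service hostname as in the question).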
Related
We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. As this is the basic image, we don't have any configuration such as SSL or TLS. For the setup we used this template as an example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After the deployment of the image on OpenShift we have the following services:
broker-amq-amqp (5672/TCP 5672) No route
broker-amq-jolokia (8161/TCP 8161) https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt (1883/TCP 1883) No route
broker-amq-stomp (61613/TCP 61613) No route
broker-amq-tcp (61616/TCP 61616) No route
From another OpenShift service, we try in Java to connect to the broker, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
user = "example";
password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password)) {
    connection.start();
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI Properties file
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up.
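For example (a sketch, assuming both applications run in the same namespace and the broker service is the broker-amq-tcp one listed in the question), the provider URL in the jndi.properties would become:
java.naming.provider.url=failover:(tcp://broker-amq-tcp:61616?keepAlive=true)?randomize=false
i.e. the route hostname is replaced by the Kubernetes service name and service port.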
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
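As a rough sketch of the route part on the oc command line (the service name and the 61617 SSL port here are assumptions; your deployment may use different ones):
# expose the broker's SSL port as a passthrough route, so the router
# uses SNI to route the connection instead of terminating TLS itself
oc create route passthrough broker-amq-tcp-ssl --service=broker-amq-tcp-ssl --port=61617
The client then connects to the route hostname on port 443 with an SSL connection URI, with the broker's certificate in its trust store.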
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
In order for the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
start the broker inside the container on address 0.0.0.0. This is critical: localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker;
create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a service port, e.g. 62626 (or any other), to the container's 61616;
connect from other pods using tcp://broker-amq-tcp-service:62626 (see the sketch below).
The 0.0.0.0 part cost me a few days of debugging :)
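A sketch of the service step with oc (assuming broker-amq-tcp is the pod name, as described above):
# create a service mapping service port 62626 to the broker container's 61616
oc expose pod broker-amq-tcp --name=broker-amq-tcp-service --port=62626 --target-port=61616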
The goal is to publish/send messages to ActiveMQ through Java code from inside a secured company network.
I have configured ActiveMQ on an AWS EC2 machine (console access: IPAddress:8161). I can also publish messages using the AWS IP address and port 61616 (IPAddress:61616) through Java code.
But now I need to publish messages from inside a company network. It is secured and can't reach the AWS IP address directly.
So we created reverse proxies:
IPAddress:8161 to activemq-ui.testdemo.com
IPAddress:61616 to activemq-api.testdemo.com
Now I can access the ActiveMQ console from our company network using activemq-ui.testdemo.com, but I couldn't access activemq-api.testdemo.com through Java code.
Getting the below error:
SEVERE: Error Message: javax.jms.JMSException: Could not connect to broker URL: tcp://activemq-api.demo.com. Reason:
java.lang.IllegalArgumentException: port out of range:-1
The error looks like it expects a port number in the URL, but I'm not sure what to pass here.
Can anyone help me with how to access the ActiveMQ API from inside a corporate network?
You need to provide, on the connection URI, the port that the client should attempt to connect to, as the error is telling you; something like:
tcp://activemq-api.demo.com:80
The client does not attempt to guess or deduce which port you want it to use, so that field is mandatory.
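As a minimal Java sketch (the host and port follow this thread; note that the tcp:// OpenWire transport is raw TCP, so the activemq-api entry on the reverse proxy must forward TCP rather than HTTP for this to work):

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ProxyConnectSketch {
    public static void main(String[] args) throws JMSException {
        // The port is mandatory in the broker URL; the client will not guess it.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://activemq-api.testdemo.com:80");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // ... create a session and producer, send messages ...
        } finally {
            connection.close();
        }
    }
}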
I am running a Spring Boot application in Docker Toolbox. The application runs on port 8380, as set in the application properties. However, when I run its image in a container, I publish it with ports 8380:8082. When I access it from IP 192.168.99.100 (my docker-machine IP) and port 8380, it gives me an ERR_CONNECTION_REFUSED error: 192.168.99.100 refused to connect.
Any ideas what might be wrong?
I have tried using localhost instead of the docker-machine IP. I also checked the access URL from Kitematic, i.e. 192.168.99.100:8380; using this, it does not work either.
Here is my Dockerfile:
FROM java:8
EXPOSE 8082
ADD /build/libs/tsi-csrportal-gui-2.0-SNAPSHOT.jar dockerDemoCsrportal.jar
ENTRYPOINT ["java", "-DTSI_APP_NAME=csrportal", "-DTSI_ENV=test", "-Dtsi.log.console", "-jar", "dockerDemoCsrportal.jar"]
I expect the service to give a JSON response when I access it with the proper endpoint, just as when I run the Spring Boot application without Docker Toolbox. (The only change is that I now use the docker-machine IP instead of localhost.)
When you expose a port in Docker, it means you can access that container using [container_ip]:[exposed_port].
But when you map the exposed port to another port, you can access the container using [host_ip]:[mapped_port].
So with ports 8380:8082 you would reach the container at 192.168.99.100:8380, which forwards to container port 8082. The catch is that your Spring Boot application listens on 8380 inside the container, so nothing is listening on 8082. Either publish 8380:8380 or change server.port to 8082 so the published container port matches the application's port.
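A sketch of the matching run command (the image tag csrportal:latest is a stand-in for whatever you built):
docker run -p 8380:8380 csrportal:latest
after which http://192.168.99.100:8380 should respond. Note that the EXPOSE 8082 line in the Dockerfile is documentation only; it does not publish anything by itself.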
I want to connect remotely, from Windows, to my Redis server which is running on an Ubuntu machine, but I am not able to connect and get a Connection Refused exception. The application is built with Spring Boot. Please suggest how I can do this.
Below is my sample code:
@Override
public void expireDevices() {
    // "IP address" is a placeholder for the remote Redis server's address
    JedisPool pool = new JedisPool(new JedisPoolConfig(), "IP address", 6379, Protocol.DEFAULT_TIMEOUT);
    try (Jedis jedis = pool.getResource()) {
        // Doing Something
    }
    expireWithBackgroundTask();
}
I second what Bhushan said: make sure that Redis is listening on a public IP. By default, when you install it, it listens only on localhost.
If your Redis server is installed on Ubuntu, open the /etc/redis/redis.conf file and find the line that looks like bind 127.0.0.1. Find the public IP of your Redis server, replace 127.0.0.1 with it, and then restart Redis.
P.S. If you open Redis on a public IP, read up on Redis security risks first.
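A sketch of the edit (203.0.113.10 is a placeholder for your server's actual IP):
# /etc/redis/redis.conf
# bind 127.0.0.1   <- old value, loopback only
bind 203.0.113.10
then restart the service:
sudo systemctl restart redis-server
Depending on your Redis version, you may also need to set requirepass or adjust protected-mode before remote clients are accepted.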
I have a Tomcat 7.0 webapp running inside a Docker container on AWS Elastic Beanstalk (EB) (I followed the tutorial here).
When I browse to my EB URL myapplication.elasticbeanstalk.com, I get a 502 Bad Gateway served by nginx, so it's immediately clear that my port 80 is not forwarding to my container. When I browse to myapplication.elasticbeanstalk.com:8888 (another port I exposed in my Dockerfile), the connection is refused (ERR_CONNECTION_REFUSED). So I SSH'ed into the AWS instance and checked the Docker logs, which show that my Tomcat server started successfully yet obviously hasn't processed any requests.
Does anyone have any idea why my port 8888 appears not to be forwarding to my container?
Executing the command (on the AWS instance):
sudo docker ps -a
gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c353e236da7a aws_beanstalk/current-app:latest "catalina.sh run" 28 minutes ago Up 13 minutes 80/tcp, 8080/tcp, 8888/tcp sharp_leakey
which shows port 80, 8080, and 8888 as being open on the docker container.
My Dockerfile is fairly simple:
FROM tomcat:7.0
EXPOSE 8080
EXPOSE 8888
EXPOSE 80
and my Dockerrun.aws.json file is:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8888"
    }
  ]
}
Does anyone see where I could be going wrong?
I'm not even sure where to look at this point.
Also, my AWS security group for the instance is open on port 80, 8080, and 8888.
Any advice would be greatly appreciated! I'm at a loss here.
Update 1:
Minor update, although I am still having trouble.
After SSH'ing into my AWS EB instance, I inspected the Docker container to grab the IP of the container:
sudo docker inspect c353e236da7a
which gave me the IP as 172.17.0.6.
Then, again from the AWS instance, I ran a curl command:
curl 172.17.0.6:8080/homepage
which works and returns the HTML of the homepage! However, curl 172.17.0.6:8888/homepage does not work, so I'm not sure what "ContainerPort": "8888" means in the Dockerrun.aws.json file.
I still have the question, though: why aren't my :8080 requests being forwarded to the container's Tomcat webserver? As above, myapplication.elasticbeanstalk.com:8080/homepage still receives a connection refused (ERR_CONNECTION_REFUSED).
myapplication.elasticbeanstalk.com is a load balancer, not your instance. Elastic Beanstalk launches a load balancer to autoscale your instances, so when you connect to myapplication.elasticbeanstalk.com:8888 you are actually connecting to the load balancer, which has only port 80 open. The load balancer then forwards traffic to an instance listening on port 8080.
You should be able to access your web application by just using the URL without a port: myapplication.elasticbeanstalk.com
The reason this doesn't work is that you told your Docker container to use port 8080 but told Beanstalk to forward to port 8888. Sure, all your ports are open, but Tomcat is only running on port 8080.
The Ports section in the Dockerrun.aws.json doesn't tell your app which port to run on; it tells the load balancer which port to forward to.
Ports – (required when you specify the Image key) Lists the ports to expose on the Docker container. AWS Elastic Beanstalk uses ContainerPort value to connect the Docker container to the reverse proxy running on the host.
You can specify multiple container ports, but AWS Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
as seen here.
Or, in other words, the 8888 that you told Beanstalk to forward to is working correctly, but your app is actually running on port 8080. You should change the Dockerrun.aws.json to use port 8080 instead.
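Concretely, only the ContainerPort in the question's file needs to change:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}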
I did this by fixing nginx's listen port.
Add a .ebextensions directory to the root of your app and put your config file there (in my example it's 00-bypass-nginx-proxy.config):
files:
  "/tmp/change_nginx_port.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # change listen port from 80 to 8761
      sed -i '7s/.*/ listen 8761;/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf
      # restart nginx
      service nginx restart
container_commands:
  00setup-nginx:
    command: "/tmp/change_nginx_port.sh"
Your service will now be available on port 8761. Pay attention to the sed part of the script: there is a hardcoded line number which could differ in your environment.