I'm trying to work with Akka service discovery in a Kubernetes cluster. Both pods are labeled and Akka can get the right address to connect to the pods. But when trying to connect, the following error occurs:
...http://ip.namespace.pod.cluster.local:8558)] Connection attempt failed. Backing off new connection attempts for at least 100 milliseconds.
This error was resolved once I enabled port forwarding to port 8558 at the service level. When logged in to a pod, I was also able to set up a TCP connection to the other pod at the given address. Why does it only work once I add the ports to the service, and why would Akka even use the service to connect to the other pods?
akka.management {
  http {
    hostname = "127.0.0.1"
    hostname = ${?HOSTNAME}
    bind-hostname = "0.0.0.0"
    port = 8558
    bind-port = 8558
  }
}
Are the bind port and the port configured correctly in your conf file? That specific configuration is the one that binds the host to 0.0.0.0:8558 'internally', and it is what makes the mapping between the bind address and the advertised address work.
I have a GraphDB instance exposed at an IP address and port over HTTP. I want to make it more secure, so I decided to expose it through HTTPS and a domain (with the port included in it). The problem is that when I call the instance from Java code with the rdf4j library, the code connects perfectly and adds the statements if I set the IP address and port as the repository endpoint; however, if I set the domain as the endpoint it returns a timeout. This is the code I am using:
HTTPRepository repository = new HTTPRepository("https://example.com", "repository_name");
RepositoryConnection dataSource = repository.getConnection();
dataSource.add(statements);
This is the thrown exception:
org.apache.http.conn.HttpHostConnectException: Connect to example.com:80 [example.com/"here there is the resolved ip address"] failed: Connection timed out: connect
rdf4j uses the Apache HTTP client library, which supports HTTPS connections. However, the exception shows port 80, which makes me think it is hitting that port without taking the https scheme at the beginning of the address into account.
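One check I can try (a sketch on my side; the explicit :443 port and the /rdf4j-server context path are assumptions about the deployment, not verified values) is to spell the scheme, port, and server path out in the endpoint URL so nothing is left to defaults:
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

// Fully explicit endpoint: scheme, port, and rdf4j server context path.
HTTPRepository repository =
        new HTTPRepository("https://example.com:443/rdf4j-server", "repository_name");
try (RepositoryConnection dataSource = repository.getConnection()) {
    dataSource.add(statements);
}
If the exception still reports port 80 with this URL, the port is being dropped somewhere in the client; if it connects, the original URL was simply missing the port.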
If more information is required I can add it.
I wanted to use Vault server to store secrets and deploy it on OpenShift.
I wrote this Dockerfile, built the image, pushed it to the OpenShift registry, and created a deployment from the image stream:
FROM vault:1.5.0
ADD *.hcl /etc/config.hcl
ENTRYPOINT ["vault", "server", "-config=/etc/config.hcl"]
Here is the config:
storage "file" {
path = "/vault/data"
}
listener "tcp" {
address="127.0.0.1:8200"
tls_disable=1
}
disable_mlock = true
api_addr = "http://127.0.0.1:8200"
I created a route to port 8200. When I use the vault CLI from inside the vault-server pod, it works fine; I can log in, configure, etc. When I use the OpenShift CLI on my local computer to forward port 8200 to my local port 8200, I can also access the API.
The problem is that I cannot access the API from anywhere outside the pod. The route gives me a 503 response, and when trying via http://vault-server.namespace.svc:8200 I get connection refused (using Spring RestTemplate).
How can I configure Vault to also accept external traffic?
Your listener block means you are only listening for connections from localhost. Change the address field to 0.0.0.0:8200 to listen on all interfaces:
listener "tcp" {
address="0.0.0.0:8200"
}
And please don't forget to enable TLS as soon as you've got connectivity working.
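Once the listener binds to all interfaces, a quick in-cluster smoke test helps separate "unreachable" from "sealed". A minimal sketch, assuming the service is named vault-server in namespace "namespace": note that Vault answers /v1/sys/health with non-2xx codes while sealed or uninitialized, which RestTemplate surfaces as an exception, but even that proves the listener is reachable, unlike "connection refused".
import org.springframework.web.client.HttpStatusCodeException;
import org.springframework.web.client.RestTemplate;

public class VaultHealthCheck {
    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        try {
            // 200 means initialized, unsealed, and active.
            String body = rest.getForObject(
                    "http://vault-server.namespace.svc:8200/v1/sys/health", String.class);
            System.out.println(body);
        } catch (HttpStatusCodeException e) {
            // Vault returns e.g. 503 when sealed; an HTTP status from Vault
            // still means the listener is reachable.
            System.out.println(e.getStatusCode() + ": " + e.getResponseBodyAsString());
        }
    }
}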
We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. As this is the basic image, we don't have any configuration such as SSL or TLS. For the setup we used this example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After the deployment of the image on Openshift we have the following:
broker-amq-amqp (5672/TCP 5672)      No route
broker-amq-jolokia (8161/TCP 8161)   https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt (1883/TCP 1883)      No route
broker-amq-stomp (61613/TCP 61613)   No route
broker-amq-tcp (61616/TCP 61616)     No route
From another OpenShift service we try, in Java, to connect to the broker, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
user = "example";
password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password);) {
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI Properties file
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up.
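For example, a sketch only: it assumes the broker-amq-tcp service in the same namespace and the default OpenWire port 61616, and replaces the JNDI lookup with a directly constructed factory.
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// In-cluster connection straight to the Service DNS name -- no router involved.
ActiveMQConnectionFactory cf =
        new ActiveMQConnectionFactory("tcp://broker-amq-tcp:61616");
try (Connection connection = cf.createConnection("example", "example")) {
    connection.start();
    // create sessions, producers, and consumers as before
}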
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
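A sketch of the client end of that setup (the route hostname, trust store file, and password below are placeholders, not values from your cluster):
import javax.jms.Connection;
import org.apache.activemq.ActiveMQSslConnectionFactory;

// Connect through the router on port 443; the router reads the SNI hostname
// from the TLS handshake to pick the matching route.
ActiveMQSslConnectionFactory cf = new ActiveMQSslConnectionFactory(
        "ssl://broker-amq-tcp-ssl-project1.192.168.99.105.nip.io:443");
cf.setTrustStore("client.ts");        // holds the certificates exported from the broker
cf.setTrustStorePassword("password"); // placeholder
try (Connection connection = cf.createConnection("example", "example")) {
    connection.start();
}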
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
In order for the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
1. Start the broker inside the container on address 0.0.0.0. This is critical: localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker.
2. Create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a service port, e.g. 62626 (or any other), to the container's 61616.
3. Connect from other pods using tcp://broker-amq-tcp-service:62626.
The 0.0.0.0 part cost me a few days of debugging :)
I am working on a demo project which has 5 microservices: discovery server, api-gateway, user-order-detail, order, and user service.
I will expose the order and user service internally on GKE
I will expose the user-order-detail service externally which will call the other two services using a rest endpoint
Services that are up on Google Kubernetes Engine:

user-order-detail   LoadBalancer
kubernetes          ClusterIP
order-management    LoadBalancer
user-management     LoadBalancer
user-order-detail hits an endpoint to retrieve all users. I am getting this error: No matches for the virtual host name: user-management
Code:
String url = "user-management/user";
InstanceInfo instance = eurekaClient.getNextServerFromEureka("user-management", false);
Object response = restTemplate.getForObject(instance.getHomePageUrl() + url +"/" + userId, Object.class);
I am having a problem with inter-service communication. Please help.
UPDATE:
I was able to redirect my service, but now I am getting a connection timeout error. How should I solve this?
I/O error on GET request for "http://user-management/user-management/user/1": Operation timed out (Connection timed out); nested exception is java.net.ConnectException: Operation timed out (Connection timed out)
Check the port and targetPort of your Service. The port is what other services call (it can be 80), while the targetPort must match the container port your application actually listens on.
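If the ports line up, plain Kubernetes DNS may be enough from user-order-detail. A sketch, assuming the user-management Service exposes port 80 with a targetPort matching the container port, and bypassing the Eureka home-page URL:
import org.springframework.web.client.RestTemplate;

// Call the user-management Service by its DNS name; the Service forwards
// port 80 to the targetPort on a ready pod.
RestTemplate restTemplate = new RestTemplate();
Object response = restTemplate.getForObject(
        "http://user-management/user/" + userId, Object.class);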
If I connect using the new admin user (test) it works, but if the same program is run from a remote machine it connects as guest.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("192.168.1.6");
factory.setUsername("test");
factory.setPassword("test");
//factory.setPort(5267);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
and I am able to fetch messages from my queue. My variables are set.
My conf file is
[
  {rabbit, [{loopback_users, []}]}
].
If I run the same program on a remote machine, it shows it connecting as guest.
What is my mistake? When connecting remotely I am not able to fetch messages from the queue as the guest user.
My AMQP listening ports are shown below. Do I need to change anything here?
Listening ports

Protocol  Bound to  Port
amqp      0.0.0.0   5672
amqp      ::        5672
Your client library (probably the RabbitMQ-provided client?) uses guest/guest as the default username and password. Check com.rabbitmq.client.ConnectionFactory's source code, especially DEFAULT_USER and DEFAULT_PASSWORD. You might need to change it to use the new id and password if you don't want to connect as guest/guest.
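To rule the defaults out, set every credential explicitly before opening the connection. A minimal sketch, assuming the amqp-client 5.x API and the default virtual host:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("192.168.1.6");
factory.setPort(5672);           // matches the amqp listener shown above
factory.setUsername("test");
factory.setPassword("test");
factory.setVirtualHost("/");     // assumption: the default vhost
try (Connection connection = factory.newConnection();
     Channel channel = connection.createChannel()) {
    // consume from the queue as usual
}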