Kubernetes: Exposed service to deployment unreachable - java

I deployed a container on Google Container Engine and it runs fine. Now, I want to expose it.
This application is a service that listens on 2 ports. Using kubectl expose deployment, I created 2 load balancers, one for each port.
I made 2 load balancers because the kubectl expose command doesn't seem to allow more than one port. While I defined them as type=LoadBalancer in kubectl, once created on GKE they showed up as forwarding rules associated with 2 target pools that were also created by kubectl. kubectl also automatically created firewall rules for each balancer.
The first one I made exposes the application as it should. I am able to communicate with the application and get a response.
The 2nd one does not connect at all; I keep getting either connection refused or connection timeout. To troubleshoot this, I stripped my firewall rules down to be as permissive as possible. Since ICMP is allowed by default, pinging this balancer's IP results in replies.
Does kubernetes only allow one load balancer to work, even if more than one can be configured? If it matters any, the working balancer's external ip is in the pattern 35.xxx.xxx.xxx and the ip of the balancer that's not working is 107.xxx.xxx.xxx.
As a side question, is there a way to expose more than one port using kubectl expose --port, without defining a range i.e. I just need 2 ports?
Lastly, I tried using the Google console, but I couldn't get the load balancer or forwarding rules to work with what's on Kubernetes the way doing it through kubectl does.
Here is the command I used, modifying the port and service name on the 2nd use:
kubectl expose deployment myapp --name=my-app-balancer --type=LoadBalancer --port 62697 --selector="app=my-app"
My firewall rule is basically set to allow all incoming TCP connections over 0.0.0.0/0.
Edit:
External IP had nothing to do with it. I kept deleting & recreating the balancers until I was given an IP of xxx.xxx.xxx.xxx for the working balancer, and the balancer still worked fine.
I've also tried deleting the working balancer and re-creating the one that wasn't working, to see if it's a conflict between balancers. The 2nd balancer still didn't work, even if it was the only one running.
I'm currently investigating the code for the 2nd service of my app, though it's practically the same as the 1st service, a simple ServerSocket implementation that listens on a defined port.
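For reference, the listener in question is essentially a minimal ServerSocket loop like the following (a sketch; the class name and the echo behavior are illustrative, not the actual service). Note that the no-address constructor binds to all interfaces (0.0.0.0), while passing an InetAddress would restrict which interface accepts traffic:

```java
import java.io.*;
import java.net.*;

// Minimal sketch of a single-port listener (illustrative names).
public class PortListener {
    private final ServerSocket server;

    public PortListener(int port) throws IOException {
        // new ServerSocket(port) binds to 0.0.0.0:port, i.e. all interfaces.
        // new ServerSocket(port, 50, InetAddress.getByName("10.0.0.5"))
        // would bind to one specific address instead.
        this.server = new ServerSocket(port);
    }

    public int localPort() {
        return server.getLocalPort();
    }

    // Accept one connection and echo a single line back to the client.
    public void acceptOnce() throws IOException {
        try (Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }

    public void close() throws IOException {
        server.close();
    }
}
```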

After more thorough investigation (opening a console in the running pod, installing tcpdump, iptables etc.), I found that the service (i.e. the load balancer) was, in fact, reachable. What happened was that although traffic reached the container's virtual network interface (eth0), the data wasn't routed to the listening services, even though their addresses were IP aliases of that interface (eth0:1, eth0:2).
The last step to getting this to work was to create the required routes through
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport <listener-port> -j DNAT --to-destination <listener-ip>:<listener-port>
Note, there are other ways to accomplish this, but this was the one I chose. I wish the Docker/Kubernetes documentation mentioned this.
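As for the side question about exposing two ports without two balancers: kubectl expose only accepts a single --port, but a Service manifest can declare both ports on one LoadBalancer. A sketch (the first port and selector are taken from the question; the second port number is a placeholder, since it isn't given, and each port needs a name when there is more than one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-balancer
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: first
      port: 62697
      targetPort: 62697
    - name: second        # placeholder for the second listener's port
      port: 62698
      targetPort: 62698
```

Applying this with kubectl apply -f service.yaml creates a single external IP that forwards both ports.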

Related

Why do we use a port number with localhost:8080? Why don't we use a port number when using www.example.com?

When I run a Spring Boot app locally, it uses localhost:8080. When it is pushed to Pivotal Cloud Foundry, it gets a route like https://my-app.xyz-domain.com and we can access the URL without a port. What is happening behind the scenes?
Please help me understand.
There is a default port number for each protocol which is used by the browser if none is specified. For https it is 443, for http 80 and for telnet 23.
On Unix-like systems such as Linux, those privileged ports are often not available to a developer, so other ports are used, but then they have to be specified. 8080 is often available and resembles 80.
On CloudFoundry, your application is actually still running on localhost:8080. The reason that you can access your application through https://my-app.xyz-domain.com is that the platform handles routing the traffic from that URL to your application.
The way this works is as follows:
You deploy your application. It's run by the foundation in a container. The container is assigned a port, which it provides to the application through the $PORT env variable (this can technically change, but it's been 8080 for a long time). Your application then listens on localhost:$PORT or effectively localhost:8080.
The platform also runs Envoy in your container. It's configured to listen for incoming HTTP and HTTPS requests, and it will proxy that traffic to your application on localhost:$PORT.
Using the cf cli, you map a route to your application. This is a logical rule that tells the platform what external traffic should go to your application. A route can consist of a hostname, domain, and/or path. For example, my-cool-app.example.com or my-cool-app.example.com/foo. For a route to work, the domain must have its DNS directed to the platform.
When an end-user accesses the route that you mapped, the DNS resolves to the platform and the traffic is directed to the external load balancers (sometimes TCP/layer4, sometimes HTTPS/layer7) that sit in front of the platform. These proxies do not have knowledge of CF, they just proxy incoming traffic.
Traffic from the external load balancers is spread across the set of the platform Gorouters. The Gorouter is a second layer of proxies, but these proxies have knowledge of the platform, specifically, all of the routes that have been mapped on the platform and where those applications actually live.
When a request comes to Gorouter, it will recognize the route like my-cool-app.example.com and look up the location of the container where that app is running. Traffic from Gorouter is then proxied to the Envoy which is running in the app container. This ties into step two as the Envoy will route that traffic to your application.
All in total, incoming requests are routed like this:
Client/Browser -> External LBs -> Gorouters -> Envoy -> Application
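On the application side, step one of the walkthrough just means binding to the platform-assigned port. A minimal sketch using the JDK's built-in HttpServer (Spring Boot does the equivalent internally; the class name and response body here are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class PortEnvServer {
    // The platform provides the listen port via $PORT; fall back to 8080
    // for local runs, which matches the long-standing CF default.
    static int resolvePort() {
        return Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
    }

    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(resolvePort());
    }
}
```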
First, you should change the port to 80 or 443, because HTTP defaults to port 80 and HTTPS to 443. Then, point the domain name's DNS at the host so that the application can be reached through the domain name. If you only want a local domain name, modify the hosts file instead.
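For the local-domain-name case, a single hosts file entry (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows) is enough; the name below is made up:

```
127.0.0.1   myapp.local
```

Any browser on that machine will then resolve myapp.local to the local host.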

How to recover the port on balanced servers

My webapp runs on load-balanced servers with Tomcat.
These Tomcat instances are on different servers, as usual.
In the last few days we added a new Tomcat service on the same machine, on a different port, for our own tests.
So we have this configuration:
Serv1:8845
Serv2:8845
Serv2:8945
balanced behind testmyapp:443 because it uses HTTPS.
So when I type https://testmyapp:443/myapp, I can recover the IP of Serv2 through NetworkInterface, but I can't manage to recover the port.
I need to know which port of the server the request arrived on (8845 or 8945), because one of these services was created for another purpose.
How can I recover this type of information?
Thank you.
You can access the listening port through the HttpServletRequest object; see its getLocalPort method.
By the way, there is a similar method, getLocalAddr, returning the listening IP. NetworkInterface looks somewhat out of context in your case.
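In a servlet you would simply call request.getLocalPort() inside doGet/doPost. As a self-contained illustration of the same idea (using the JDK's built-in HttpServer instead of a servlet container, so it can run standalone), the server reports the local port the request actually arrived on, which distinguishes the 8845 connector from the 8945 one regardless of the port (443) the client used on the balancer:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

public class WhichPort {
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/which-port", exchange -> {
            // Equivalent to HttpServletRequest.getLocalPort() in a servlet:
            // the server-side port this particular request came in on.
            int localPort = exchange.getLocalAddress().getPort();
            byte[] body = String.valueOf(localPort).getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```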

How to configure subdomains with Payara and virtual servers?

I'm struggling here with something that may be easy to do, but I haven't found a correct solution, so I hope you can help me please.
Background
We are developing an application that consists in 4 different Java Web projects.
AppA
AppB
AppC
WebService
All of these applications have to be accessed from 4 different subdomains of mydomain.com:
a.mydomain.com
b.mydomain.com
c.mydomain.com
api.mydomain.com
Technology
Application server: Payara Server 4 (which is almost the same as GlassFish 4).
Payara server is running inside a Docker container which in turn is running inside an Amazon EC2 instance.
I've used Amazon Route 53 in the following scenario:
What I have already done successfully
This was done for another project where there was only 1 app, accessed from a subdomain of otherdomainiown.com.
This works perfectly, because the DNS records of the domain provider (iPage) just point to my Amazon Route 53 records for the hosted zone I configured. This hosted zone has an A record that points to the fixed IP of my Amazon EC2 instance. Then, Docker exposes Payara server through port 80, which is mapped to port 8080, the port Payara uses by default to serve its applications.
Problem
Now, I'm facing a similar scenario. The difference is that I have 4 different apps that need to be accessed via 4 different subdomains.
I've tried Virtual Servers (virtual hosts) with no luck. I'm not familiar with them, but I think they could be a possible solution.
I considered using Amazon S3 buckets to redirect but I don't think that's what I need.
In an image, this should be the final scenario, although I only drew 2 subdomains for simplicity:
Should I use Docker mappings to resolve this?
Should I use Virtual Servers?
Should I buy 4 different machines? (this will solve all this in a few seconds, but buying more instances is not an option)
Should I use a Docker container for each application?
As you can see, I'm a little lost, so it would be great if you could point me in the right direction.
Thanks in advance.
What are you using Route 53 for? What benefit do you get from it in this scenario?
There is a blog post on the Payara website which gives an overview of using Virtual Servers in Payara Server, but it's a bit in-depth to quote for an answer here.
The key point is that you still need to configure incoming traffic to arrive at different subdomains. If all your traffic is coming in on the same IP address as it looks like Route53 is doing, then it will be very tricky to differentiate what traffic should go to which endpoint.
The usual way to do this would be to have a load balancer or proxy where you have Route53 in your diagram. An Amazon ELB would be able to perform the redirects you need. A cheaper option (though it would involve more management) would be to use something like Apache httpd or Nginx to forward requests to the Payara Server.
You just need to create a virtual server for each subdomain and set the subdomain in the "Hosts" field. Then you need to deploy all 4 applications and select the proper virtual server in the "Virtual Servers" field. The blog linked by @Mike will guide you: https://blog.payara.fish/virtual-servers-in-payara-server
All of the virtual servers will be listening on the same IP address but Payara Server will read the domain from incoming HTTP requests and will route the request to the correct virtual server.
However, this is recommended only for very small applications. Bigger applications should be deployed separately on different Payara Server instances running on different ports or different machines. If you use Docker, you can run 4 instances in Docker and map them to different ports. You would then need a proxy server (Apache httpd, Nginx, etc.) to route requests to the correct Payara instance (port) according to the domain name in the request.
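If you go the proxy route, the Nginx side is a handful of server blocks keyed on server_name; a sketch, where the backend ports are placeholders for wherever each Payara instance (or Docker mapping) listens:

```nginx
server {
    listen 80;
    server_name a.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:8081;   # Payara instance serving AppA
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name b.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:8082;   # Payara instance serving AppB
        proxy_set_header Host $host;
    }
}
# ...repeat for c.mydomain.com and api.mydomain.com
```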

RabbitMQ connecting VM to Host

I'm new-ish to networking, and I'm swimming (drowning) in semantics.
I have a VM which runs a Java application. Ideally, it would be fed inputs from the host through a RabbitMQ queue. The Java application would then place the results on another RabbitMQ queue on a different port where it will be used by the host application. After researching it for a bit, it seems like RabbitMQ only exists in the localhost space with listeners on different ports, am I correct in this?
Do I need 2 RabbitMQ servers running in tandem, then, (one on the VM and other on Host) each listening to the same port? Or do I just need one RabbitMQ server running while both applications are pointed to the same IP Address/Port?
Also, I have read that you cannot connect as 'guest/guest' unless it is on localhost, which I understand, but then how is RabbitMQ supposed to be configured so that it is reachable from anything besides localhost?
I've been researching for several hours, but the documentation does not point to a direct answer/how-to guide. Perhaps it is my lack of network experience. If anyone could elaborate on these questions or point me to some articles/helpful guides, I would be much obliged.
P.S. -- I don't even know what code to display to give context. Let me know and I'll edit the code into the post.
RabbitMQ listens to TCP port 5672 on all network interfaces out-of-the-box. This includes the "loopback" interface (to allow fast connections to self) and interfaces visible to other remote hosts (including VMs).
For your use case, you probably need a single RabbitMQ instance for both directions. The application on the host will publish messages to one queue and the Java application in the VM will consume messages from that queue and push the result to a second queue. This second queue can be consumed by the application on the host.
For the user, you need to create a new user with the appropriate rights. This is documented in the access control article. To create the user, you can do it from the management web UI (after you enabled the management plugin) or using the rabbitmqctl command line tool.
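The rabbitmqctl side of that user setup looks roughly like this (username, password, and vhost below are placeholders, not required values):

```shell
# Create a non-guest user that is allowed to connect from remote hosts
rabbitmqctl add_user myuser mypassword
# Grant configure/write/read permissions on the default vhost "/"
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"
```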
The last part is networking between the host and the VM. It really depends on the technology you use. It may work out-of-the-box or you may have to configure how VMs are connected to the network. Refer to the documentation of your hypervisor.

Opening port 80 with Java application on Ubuntu

What I need to do is run a Java application that is a server-side RESTful service written with Restlet. This service will be called by another app running on Google App Engine.
Because of the restrictions of GAE, every HTTP call made with the HttpURLConnection class is limited to ports 80 and 443 (HTTP and HTTPS). As a result, I have to deploy my server-side application on port 80 or 443.
However, because the app runs on Ubuntu, where ports under 1024 cannot be bound by a non-root user, an Access Denied exception is thrown when I run my app.
The solutions that have come into my mind includes:
Changing the security policy of the JRE (the lib/security/java.policy file) to grant java.net.SocketPermission "*:80" "listen, connect, accept, resolve". However, whether I point the command line at this file or override the contents of the JRE's java.policy, the same exception keeps coming up.
Logging in as the root user; however, due to my unfamiliarity with Unix, I don't know how to do that.
Another solution I haven't tried is to map all calls to port 80 to a higher port like 1234; then I can deploy my app on 1234 without problems while GAE keeps sending requests to port 80. But how to bridge that gap is still a problem.
Currently I am using a hack, which is to package the application into a jar file and run the jar with sudo, i.e. with root privileges. It works for now, but it is definitely not appropriate for a real deployment environment.
So if anyone have any idea about the solution, thanks very much!
You can use iptables to redirect using something like this:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport http -j REDIRECT --to-ports 8080
To make the changes permanent (so they persist after a reboot), save the rules to a file that is restored at boot, for example via the iptables-persistent package:
iptables-save > /etc/iptables/rules.v4
Solution 1: It won't change anything; this is not a Java limitation, it's the OS that is preventing you from using privileged port numbers (ports lower than 1024).
Solution 2: Not a good idea IMO, there are good reasons to not run a process as root.
Solution 3: Use setcap or iptables. See this previous question.
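The setcap option grants just the bind-to-privileged-ports capability to the JVM binary instead of running the whole process as root. A sketch (the java path is an example; the capability is attached to that exact binary, so it must be re-applied after a JVM upgrade, and some JVMs then need their library path configured to start):

```shell
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/bin/java
```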
A much easier solution is to set up a reverse proxy in Apache httpd, which Ubuntu will run for you on port 80 from /etc/init.d.
There are also ways of getting there with iptables, but I don't have recent personal experience with them. I've got such a proxy running right now.
