Whenever I deploy a Spring Boot app, it has an embedded Tomcat container and relies on that container being available. Does that mean these apps are not 12-factor compliant, since they depend on runtime injection of a web server?
What does TCP routing mean for non-HTTP services?
Port Binding
Export services via port binding. The 12-factor app is
completely self-contained and does not rely on runtime injection of a
web server into the execution environment to create a web-facing service.
For Pivotal Cloud Foundry, non-HTTP services require TCP routing in
order to be replatformed.
When you run a Spring Boot app locally, it runs with the default profile, so Spring uses the port and other settings you provide at runtime.
When you push to the cloud, a Spring Boot app runs with the cloud profile. Under the cloud profile, port settings are dictated by the platform and any settings you provide are ignored.
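On Cloud Foundry the Java buildpack normally wires this up for you, so no code is required; the minimal sketch below (the class name is just an example) only makes the mechanism explicit: prefer the platform-assigned PORT over anything hard-coded.

```java
// Minimal sketch: honor the platform-assigned PORT if present.
// Locally PORT is usually unset, so the app falls back to 8080; on PCF the
// container injects PORT and the cloud settings win over your own.
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        String port = System.getenv().getOrDefault("PORT", "8080");
        SpringApplication app = new SpringApplication(DemoApplication.class);
        // Default properties lose to environment/cloud-provided settings,
        // which is exactly the behavior described above.
        app.setDefaultProperties(Map.of("server.port", port));
        app.run(args);
    }
}
```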
In PCF, Diego cells host the app instances. Each Diego cell has its own CIDR block for the apps it hosts, so your app instance gets an IP from that range, and you cannot reach the app by that IP directly.
The Diego cell VM, however, has an IP from the CIDR range of the network it runs in. The cell uses NAT to map your app's IP to a port on the Diego cell VM, and that is how traffic is routed to your app.
As you can see, the Diego cell in PCF cannot rely on the port you provided. Instead, it runs the app wherever it can and NATs to an available port.
Take a look at the Diego reference architecture.
As to your second question: Gorouters in Cloud Foundry route requests to app instances. By default only HTTP/HTTPS traffic is enabled on the Gorouters, but you can enable TCP routing as well. This was added, I believe, in PCF 1.9.
Here's the documentation.
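To make "non-HTTP service" concrete, here is a hedged sketch of the kind of workload that needs a TCP route: a raw TCP echo server. There is no Host header in such a protocol for the Gorouter to route on, so the platform has to hand the app a dedicated port on the TCP routing tier instead of the normal HTTP/HTTPS path.

```java
// A plain TCP echo server: a stand-in for any non-HTTP protocol.
// Nothing here resembles an HTTP request, so hostname-based routing
// cannot apply; a TCP route (dedicated external port) is needed instead.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpEcho {
    public static void main(String[] args) throws IOException {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "9000"));
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println(line); // echo each line back; no HTTP involved
                    }
                }
            }
        }
    }
}
```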
When I run a Spring Boot app locally it uses localhost:8080. When it is pushed to Pivotal Cloud Foundry, it gets a route such as https://my-app.xyz-domain.com and we can access the URL without a port. What is happening behind the scenes?
Please help me understand.
There is a default port number for each protocol, which the browser uses if none is specified: for HTTPS it is 443, for HTTP 80, and for Telnet 23.
On Unix and similar systems such as Linux, those low ports are often not available to a developer, so other ports are used, but then they have to be specified explicitly. 8080 is usually available and is the conventional stand-in for 80.
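You can see the "default port per protocol" rule directly from java.net.URL, which keeps an unspecified port and the protocol default separate (a small illustration, not part of the answer's setup):

```java
// Illustration: an unspecified port is reported as -1, and the protocol
// default (443 for https, 80 for http) is exposed separately.
import java.net.URL;

public class DefaultPorts {
    public static void main(String[] args) throws Exception {
        URL noPort   = new URL("https://my-app.xyz-domain.com/");
        URL withPort = new URL("http://localhost:8080/");
        System.out.println(noPort.getPort() + " / " + noPort.getDefaultPort());     // -1 / 443
        System.out.println(withPort.getPort() + " / " + withPort.getDefaultPort()); // 8080 / 80
    }
}
```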
On Cloud Foundry, your application is actually still running on localhost:8080. The reason you can access it through https://my-app.xyz-domain.com is that the platform handles routing the traffic from that URL to your application.
The way this works is as follows:
You deploy your application. It's run by the foundation in a container. The container is assigned a port, which it provides to the application through the $PORT env variable (this can technically change, but it's been 8080 for a long time). Your application then listens on localhost:$PORT or effectively localhost:8080.
The platform also runs Envoy in your container. It's configured to listen for incoming HTTP and HTTPS requests, and it will proxy that traffic to your application on localhost:$PORT.
Using the cf cli, you map a route to your application. This is a logical rule that tells the platform what external traffic should go to your application. A route can consist of a hostname, domain, and/or path. For example, my-cool-app.example.com or my-cool-app.example.com/foo. For a route to work, the domain must have its DNS directed to the platform.
When an end-user accesses the route that you mapped, DNS resolves to the platform and the traffic is directed to the external load balancers (sometimes TCP/layer 4, sometimes HTTPS/layer 7) that sit in front of the platform. These proxies have no knowledge of CF; they just proxy incoming traffic.
Traffic from the external load balancers is spread across the platform's Gorouters. The Gorouters are a second layer of proxies, but these do have knowledge of the platform, specifically all of the routes that have been mapped and where the corresponding applications actually live.
When a request comes to Gorouter, it will recognize the route like my-cool-app.example.com and look up the location of the container where that app is running. Traffic from Gorouter is then proxied to the Envoy which is running in the app container. This ties into step two as the Envoy will route that traffic to your application.
All in total, incoming requests are routed like this:
Client/Browser -> External LBs -> Gorouters -> Envoy -> Application
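As a hedged illustration of the first two steps (the controller and path here are made up), the app itself only ever sees localhost:$PORT; the external hostname and scheme arrive as headers that the load balancers, Gorouter and Envoy forward with each request:

```java
// Inside the container the app listens on localhost:$PORT; the route name
// shows up in the Host header and the original scheme in X-Forwarded-Proto,
// both forwarded along the proxy chain described above.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class WhereAmI {
    @GetMapping("/where")
    public String where(@RequestHeader("Host") String host,
                        @RequestHeader(value = "X-Forwarded-Proto", defaultValue = "http") String proto) {
        // host  -> e.g. my-app.xyz-domain.com (the mapped route)
        // proto -> e.g. https (TLS terminated before the app)
        return "bound to $PORT inside the container, reached via " + proto + "://" + host;
    }

    public static void main(String[] args) {
        SpringApplication.run(WhereAmI.class, args);
    }
}
```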
First, you should change the port to 80 or 443, because HTTP corresponds to 80 and HTTPS to 443. Then point the domain name's DNS at the host, so the application can be reached through the domain name. In addition, if you only want a local domain name, modify the hosts file.
How do I connect to a RabbitMQ server through a web proxy? I have the RabbitMQ credentials in application.yml: spring.rabbitmq.host, spring.rabbitmq.port, spring.rabbitmq.username, spring.rabbitmq.password.
I am not quite sure what you need, but please read the basic info below.
RabbitMQ is divided into two parts:
The Queue Service itself
The Management UI
You can install 1 without 2, or both.
The service listens on port 5672 (and it is not HTTP), whereas the Management UI listens on port 15672 (and it is HTTP).
Be aware that to connect to the Management UI (via the browser) you need to have the management plugin installed, or use a Docker image with the "management" suffix.
To sum up:
Your Spring Boot application connects directly to the service over port 5672.
If you have performed the steps mentioned above, you should be able to reach the Management UI at http://localhost:15672.
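Spring Boot builds the AMQP connection for you from the spring.rabbitmq.* values in application.yml; the sketch below (host, credentials and queue name are placeholders) just spells out roughly what that auto-configuration does, to underline that the app speaks AMQP to port 5672 rather than HTTP to 15672:

```java
// Minimal sketch using Spring AMQP directly, assuming the
// spring-boot-starter-amqp dependency is on the classpath. In a real app
// you would let Boot create these beans from spring.rabbitmq.* instead.
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class RabbitSketch {
    public static void main(String[] args) {
        // spring.rabbitmq.host / spring.rabbitmq.port
        CachingConnectionFactory factory = new CachingConnectionFactory("localhost", 5672);
        factory.setUsername("guest"); // spring.rabbitmq.username
        factory.setPassword("guest"); // spring.rabbitmq.password

        RabbitTemplate template = new RabbitTemplate(factory);
        template.convertAndSend("demo-queue", "hello"); // hypothetical queue name
    }
}
```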
My goal is to have web servers that work on the default port so users don't have to type in a port number. That is easy to do with a LAMP stack, where the A is Apache and no other web server exists. However, say I purchase general-purpose hosting with CentOS and I want to run:
1) Gunicorn/NGINX for Python/Django -> accessed from example.com from outside (no port required to be entered in the web browser).
2) Spring Framework in a Java EE container - Java EE defaults to port 8080 and other ports in that range, but people just enter a domain name and expect it to work -> so reachable from example2.com.
3) Node.js - Reachable from example3.com
4) PHP apps such as WordPress, Drupal on LAMP - example3.com
Recommendations are appreciated.
The closest thing I have seen that does this is AWS with a load balancer allowing access from the public web, with the app servers accessible only from the load balancer.
Thanks,
Bruce
You can use nearly any HTTP server in front to do this kind of job.
Bind everything (Tomcat, Node.js, Gunicorn, uWSGI, etc.) to local HTTP or Unix file sockets and use the proxy feature of your favorite server to bundle them all on this host. In nginx terms: use different locations on one server and/or different server blocks with proper server names set to build your custom host.
A few servers:
nginx with proxy feature
apache2 supports setups like that too, if you use mod_proxy.
haproxy is another alternative
In the end it depends on your specific needs (and experience) which setup to pick.
Edit: I glossed over Docker a little, but the same approach works for containers, except that you do not use file sockets; instead you wire everything up with (HTTP) sockets on private or public networks.
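For the Spring entry in the list above, a hedged sketch of the "bind it locally" half of this setup (the address and port are examples; server.address/server.port in application.properties achieve the same thing), leaving nginx, apache2 or haproxy to listen on 80/443 and proxy to it:

```java
// Bind the embedded server to loopback on a non-privileged port so only the
// front proxy on this host can reach it; the proxy then exposes it on 80/443
// under the appropriate domain name.
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class BackendApp {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(BackendApp.class);
        app.setDefaultProperties(Map.of(
                "server.address", "127.0.0.1", // not reachable from outside this host
                "server.port", "8081"          // the proxy's upstream target
        ));
        app.run(args);
    }
}
```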
I have deployed our Play Framework 1.2.7 web application to an AWS EC2 Ubuntu instance. I started the application on port 8081, since it complains about not being able to bind to ports 80 or 8080. How can I configure the Ubuntu instance, either through the AWS security group or on Ubuntu itself, so that I don't have to add port 8081 to the end of the public URL or the public IP provided by AWS?
i.e. I don't want to do this:
example.com:8081 / ip4:8081
But I just want to use:
example.com / ip4
to access the application.
I need help with this, please.
The problem is that on Ubuntu, ports below 1024 are privileged, which means that normal users cannot bind to them. To start Play on port 80 you could simply run it as the root user, but it is not best practice to run a web server as root because of the security implications.
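You can see the restriction from plain Java: as a non-root user the bind to 80 typically fails with "Permission denied", while 8081 succeeds unless something (for example your Play app) is already listening there.

```java
// Quick check of which ports the current user can bind to.
import java.net.ServerSocket;

public class BindCheck {
    public static void main(String[] args) {
        for (int port : new int[] {80, 8081}) {
            try (ServerSocket socket = new ServerSocket(port)) {
                System.out.println("port " + port + ": bind OK");
            } catch (Exception e) {
                System.out.println("port " + port + ": " + e.getMessage());
            }
        }
    }
}
```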
I'd suggest starting it on whatever non-privileged port you want, as a normal user, and using an Elastic Load Balancer (ELB) to forward all inbound traffic on port 80 (or 443, for instance) to your Play port. You can set this up through the AWS web interface when creating the ELB.
Users will then reach your Play instance by calling the ELB on port 80, using the Amazon auto-assigned DNS name.
Example flow:
User browser --> http://your-elb-dns-name.com --> your_play_server_ip:8081
Just make sure that the security group associated with your Play server instance accepts inbound traffic on 8081 from your ELB (you can identify the ELB using the Amazon ID assigned during its creation).
Another great advantage of the ELB approach is that you can use it as a reverse proxy to hide your EC2 instance(s) IP(s) from the internet. In fact, if you use an ELB you can also avoid assigning a public IP to your EC2 instance at creation time; the ELB does not need a public IP because it has access to the Virtual Private Cloud (VPC) in which your EC2 instance was started.
Another possible approach, if you don't want to use an ELB, is to install NGINX or Apache on your EC2 instance to act as a reverse proxy, but I think you are better off using the Amazon services for this. An internal NGINX or Apache reverse proxy is mainly useful if you need to hide a particular resource of your Play server from the internet.
https://aws.amazon.com/it/elasticloadbalancing/
My iOS app uses a Java-based server and communicates with it using Google Cloud Endpoints. Normally the server listens on https://myservice.appspot.com/_ah/api/rpc.
How can I debug my server code? After I run it with Debug As | Web Application inside Eclipse and change its URL to https://localhost:8888/_ah/api/rpc the client cannot connect. I don't think it's a firewall issue because URLs with localhost:8888 work for other client-server pairs.
So does one need to take any special steps to debug code in Google Web Application projects with Google Cloud Endpoints in Eclipse, and is there a better way to set the required URL on the client than hardcoding it (as I currently do)?
The first cause was that I tried to connect to localhost over SSL.
The second cause was that my real device of course needs to contact the dev server not as localhost, but using its remote IP address (currently 10.0.0.2 on my WLAN).
The third cause was that the firewall on my OS X 10.9.3 Mac prevented my real device from connecting to its port 8888. I had to disable Block all incoming connections and allow incoming connections for the applications Eclipse and java under System Preferences | Security & Privacy | Firewall Options. (OS X will prompt for permission the first time a connection is attempted.)
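On the client side, one common pattern with generated Endpoints clients is to make the root URL injectable rather than hardcoded. The sketch below uses the Java client library purely for illustration (MyApi is a placeholder for whatever class the Endpoints tooling generated for you; the iOS client has an analogous root/RPC URL setting), pointing at the dev server over plain HTTP and the machine's LAN IP, in line with the first two causes above:

```java
// Hedged sketch: "MyApi" is not a real library class, only a stand-in for
// the generated Endpoints client. The Builder/setRootUrl pattern comes from
// the Google API Java client that generated clients build on.
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;

public class DevApiFactory {
    /** Client for the local dev server: plain HTTP, LAN IP instead of localhost. */
    public static MyApi forLocalDebug() {
        return new MyApi.Builder(new NetHttpTransport(), new JacksonFactory(), null)
                .setRootUrl("http://10.0.0.2:8888/_ah/api/")
                .build();
    }

    /** Client for the deployed backend. */
    public static MyApi forProduction() {
        return new MyApi.Builder(new NetHttpTransport(), new JacksonFactory(), null)
                .setRootUrl("https://myservice.appspot.com/_ah/api/")
                .build();
    }
}
```

Selecting between the two factory methods via a build flag or configuration keeps the dev URL out of release builds.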