We use simple Akka Remote calls between our EC2 instances, which sit behind load balancers. Each instance type (A and B) is behind its own load balancer and can have multiple instances. We use tell to send requests and getSender() to know who to respond to, when needed.
I've tried setting remote.netty.tcp.hostname to the following configurations, listed with their outcomes:
hostname:ServerA-loadbalancer-dns:
request from ServerA --> ServerB-load-balancer: ServerB responds back to ServerA-loadbalancer-dns, allowing the response to be routed to a different instance, one that didn't make the request, causing a timeout.
So the request works, but the response fails.
hostname:ServerA-instance-dns:
request from ServerA --> ServerB-load-balancer: ServerB responds back to the ServerA-instance-dns which works because it knows exactly who to respond to.
request from ServerB --> ServerA-load-balancer: fails because the hostname is set to ServerA-instance-dns, which doesn't match the load balancer used in the request. Error message:
akka.remote.EndpointWriter - dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://application#ServerA-loadbalancer-dns:8000/]] arriving at [akka.tcp://application#ServerA-loadbalancer-dns:8000] inbound addresses are [akka.tcp://application#ServerA-instance-dns:8000]
So because I can't allow two acceptable hostnames for incoming remote calls, I will always have a failure case with this setup. Is there a way to configure this?
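One thing worth knowing: in Akka 2.4 and later, the address a node advertises and the address it actually binds to can be configured separately. This does not by itself make two advertised hostnames acceptable at once, but it is the relevant knob for setups where the reachable name differs from the local interface. A sketch, with placeholder hostnames:

```hocon
akka {
  remote {
    netty.tcp {
      # Address other nodes should use to reach this node (placeholder)
      hostname = "ServerA-instance-dns"
      port     = 8000

      # Address/port to actually bind on this instance
      bind-hostname = "0.0.0.0"
      bind-port     = 8000
    }
  }
}
```

With this, responses addressed via getSender() go to the instance DNS rather than the load-balancer DNS, which matches the second scenario above.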
I'm trying to configure the WSO2 API Manager (version 4.0.0).
When I try to create a REST API and point to the endpoints, I'm getting a connection error message for the given endpoints. I have hosted the API Manager and the backend services on the same server (the backend services are running on Tomcat on the same server, on port 8080).
API Manager Log produces the following message :
ERROR {org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl} - Error occurred while sending the HEAD request to the given endpoint url: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 4000 ms
I would really like to know what has caused the issue.
P.S: I can access the backend services directly without any connection issues using a REST client.
It's difficult to answer the question without knowing the exact details of your deployment and the backend, but let me try. Here is what I think is happening. As you can see, the error is a connection timeout: "The host did not accept the connection within timeout of 4000 ms".
Let me explain what happens when you click the Check Endpoint Status button. The browser does not directly send a request to the backend to validate it. The backend URL is passed to the APIM server, and the server performs the validation by sending an HTTP HEAD request to the BE service.
So there can be two causes. The first is that your backend may not know how to handle a HEAD request, which would prevent it from accepting the request. But given that the error indicates a network issue, I doubt the request even reached the BE.
The second is that your backend is not accessible from the machine where API Manager is running. Say you are running API Manager on Server A and accessing it via a browser from Server B (your local machine). Even though you can reach the BE from Server B, it may not be reachable from Server A. When I say the BE is not accessible from the API Manager server, I mean it is not accessible using the same URL that was configured in API Manager. It doesn't matter that everything runs on the same server if you are using a DNS name other than localhost. So log into the server where API Manager is running and send a request using the exact URL that was configured in API Manager, and see whether it is reachable from there.
First try doing a curl request after logging into the server where APIM is running (not from your local machine). Due to firewall rules within the server, the hostname given in the URL may not be reachable. Also try sending a HEAD request. You might get some idea of why this is happening.
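Concretely, something like the following from the APIM host (the URL below is a placeholder; use exactly the endpoint URL configured in API Manager):

```shell
# Run from the server where API Manager runs, not your local machine.
# Replace the URL with the endpoint URL as entered in API Manager.
curl -v http://localhost:8080/myservice/resource

# Repeat as a HEAD request, which is what the endpoint validation sends:
curl -v -I http://localhost:8080/myservice/resource
```

If the first command times out from the APIM host, it is a network/firewall issue; if only the HEAD variant fails, the backend is rejecting HEAD requests.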
There are many services (such as microleaves.com) that let you change your external IP address by connecting to a master proxy. You connect to the master proxy, and it routes your traffic through an external proxy. The external IP can change at any time (for example, if one of the external proxies stops working, the master proxy automatically replaces it with a different one).
I'm using this type of service to navigate to different websites via the Selenium framework. Is it possible to prevent a request from executing if the external proxy changes?
For example, suppose at the present moment my external IP is 1.1.1.1 and suddenly it changes to 2.2.2.2. Is there a way to programmatically instruct the program to execute a request, or navigate to a site, only if my external IP is still 1.1.1.1?
The purpose of this is that I would like to retrieve certain characteristics of the new proxy (such as the city, state or ISP) before actually executing a request with the proxy.
Someone might suggest simply making a request to an external server to get my external IP before executing each request; however, it is possible that the external IP will change immediately after I do this. How can I guarantee consistency of the external IP?
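Strictly speaking, there is no way to make the check and the request atomic from the client side. What can be done is to narrow the race window: check the external IP both before and after the request, and discard the result if it changed. A minimal sketch of that pattern (the class and method names are illustrative; in practice ipLookup would call a what-is-my-IP service through the proxy):

```java
import java.util.function.Supplier;

// Sketch: the exit IP cannot be pinned atomically, but checking it both
// before and after the action detects any rotation that happened mid-flight.
class StableIpGuard {

    /**
     * Runs 'action' only if the external IP reported by 'ipLookup' matches
     * 'expectedIp' beforehand. Returns true only if the IP still matches
     * afterwards, i.e. the whole action ran under a stable exit IP.
     */
    static boolean runIfIpStable(Supplier<String> ipLookup,
                                 String expectedIp,
                                 Runnable action) {
        if (!expectedIp.equals(ipLookup.get())) {
            return false; // exit IP already rotated; skip the request
        }
        action.run(); // e.g. drive Selenium to the target page here
        // Re-check: if the exit IP rotated mid-request, flag the result.
        return expectedIp.equals(ipLookup.get());
    }
}
```

A caller would retry (or re-profile the new proxy) whenever this returns false; that is detection after the fact, not a guarantee, but it is the best a client-side check can do.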
I am running a test script in JMeter. The system I'm testing is multi-profile, meaning that when I log in through an HTTP server, I am redirected to either Server1 or Server2 (randomly). In the test script I recorded, I was redirected to Server2. So whenever I run this pre-recorded script again (with 100 users/threads), only the requests redirected to Server2 are processed successfully, and the requests redirected to Server1 return a 'User session Not Found' error. How do I fix this?
I have an HTTP Cache and HTTP Cookie Manager in my test plan before the HTTP samplers.
It seems like a misconfiguration of the two servers, as they are not sharing session data. Normally, servers share session-related information such as cookies so that the client can get a response from either server.
I am not sure whether you can really control which server you hit (it hit the second server during recording because the load balancer chose that server for you at that point in time); it is entirely the load balancer's decision, based on the algorithm used (least response time, client-IP based, etc.).
I suggest checking the server configurations to see whether they share cookie-level data. Also check which algorithm the load balancer uses to distribute load across the two servers.
If those are not causing the issue, then look at how the cookies are sent by the server and how the (JMeter) client resends them (via the HTTP Cookie Manager), i.e., check whether the cookies JMeter sends are what the server(s) expect. Sometimes only partial cookies are sent.
Please answer the following questions:
Is the request always sent to Server2 when one thread is used?
Does the request succeed when it hits Server1 with a single thread?
Are you hitting the load balancer URL (which in turn decides which server to hit), or have you hardcoded one of the server addresses?
I googled load balancing, but all I can find is the working theory, which at the moment is the "easy" part for me, and zero examples of how to implement one.
I have several questions pertaining load balancing:
I have a domain (example.com), and behind it I have a load-balancing server (let's call it A) which, according to the theory, will ask the client to close the connection with A and connect to B, a sub-server, to carry on the request. Will the client's browser stop showing "example.com/page.html" in the address bar and start showing "B_ip_address/page.html" instead?
How can I implement a simple load balancer from scratch? My doubt concerns the HTTP part. Is there some specific message or set of messages I need to send the client that will make it disconnect from me and connect to a sub-server?
What about protocols lower-level than HTTP, such as TCP/IP? Are there any standard packets to tell the client it has just connected to a load-balancer server and now needs to connect to xxx.xxx.xxx.xxx to carry on the request?
What method is the most used? (1) client connects to load balancing server, and it asks the client to directly connect to one of the sub-servers, or (2) the load balancing server starts bridging all traffic from the client to the sub-server and vice-versa in a transparent fashion?
So questions 2, 3, and 4 concern the load-balancing protocol, and the first one concerns how a domain name can be connected to a load balancer and what the underlying consequences are.
Your approach is a kind of static load balancing: redirect the calls to another server. All subsequent calls may then use this other server, or are sent to the load balancer again for another redirect.
An implementation depends on your system. A load balancer works best for independent requests with no session state. Otherwise you need to sync session state between the "end" servers, or use a shared session store to provide the session state to all servers.
There is a simple and transparent solution for HTTP server load balancing: the load-balancing module of the nginx server (http://nginx.org/en/docs/http/load_balancing.html). It can be used for HTTP and HTTPS requests, and it can be extended with extra servers dynamically if the load increases: you edit the nginx configuration and reload the server, which is transparent to existing connections. And nginx does not cause problems with changing domain or host names.
Other protocols need some support from the client and the server. Load balancing can be transparent if a specialized device sits between the client and server; otherwise the communication protocol needs to support connection redirects.
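For illustration, a minimal round-robin setup in the style of that nginx documentation page (server names are placeholders):

```nginx
http {
    upstream myapp1 {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;
        location / {
            # Requests are distributed round-robin across the upstream group.
            proxy_pass http://myapp1;
        }
    }
}
```

Because nginx proxies the traffic rather than redirecting the client, the address bar keeps showing the original domain, which answers question 1.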
Edit:
Load balancing can also be implemented by DNS round robin. Each DNS lookup returns the IP addresses for the same domain name in a rotated order. The client chooses an IP and connects to that server; another client may use the next IP. The name in the address bar stays the same the whole time.
Example (repeated nslookup of www.google.com):
Non-authoritative answer:
Name: www.google.com
Addresses: 2a00:1450:4001:80f::1010
173.194.116.209
173.194.116.210
173.194.116.212
173.194.116.211
173.194.116.208
Non-authoritative answer:
Name: www.google.com
Addresses: 2a00:1450:4001:80f::1010
173.194.116.210
173.194.116.212
173.194.116.211
173.194.116.208
173.194.116.209
Non-authoritative answer:
Name: www.google.com
Addresses: 2a00:1450:4001:80f::1010
173.194.116.212
173.194.116.211
173.194.116.208
173.194.116.209
173.194.116.210
The IP address order rotates. Most HTTP load balancers work as transparent load balancers, like nginx or other reverse-proxy implementations. A redirecting load balancer is more of a low-tech implementation, I think.
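The core of such a redirecting balancer can be sketched in a few lines (the class and names below are illustrative, not from any framework): pick backends in round-robin order, and have the HTTP front end answer each request with a 302 redirect to the chosen backend.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the selection logic of a redirecting load balancer.
// An HTTP front end would answer each incoming request with
// "302 Found" and a Location header pointing at the chosen backend.
class RoundRobin {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobin(List<String> backends) {
        this.backends = backends;
    }

    /** Returns the next backend base URL, cycling through the list. */
    String nextBackend() {
        // floorMod keeps the index valid even after the counter wraps around.
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }
}
```

In a servlet front end this would look roughly like `response.setStatus(302); response.setHeader("Location", rr.nextBackend() + request.getRequestURI());`. Note the drawback discussed above: the client then sees and keeps using the backend's address.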
TCP/IP is not an application protocol. It is the transport layer used to transfer the data of a specific communication protocol; TCP/IP itself is a protocol suite for the network components, not for the applications. You may check https://en.wikipedia.org/wiki/OSI_model .
I have a problem where I have several servers sending HttpRequests (using round robin to decide which server to send to) to several servers that process the requests and return the response.
I would like to have a broker in the middle that examines each request and decides which server to forward it to. But the responses can be very big, so I would like the response to be sent only to the original requester and not passed back through the broker. This is kind of like a proxy, but the way I understand a proxy, all data is sent back through it. Is this possible?
I'm working with legacy code and would rather not change the way the requests and responses are processed but only put something in the middle that can do some smarter routing of the requests.
All this is currently done using HttpServletRequest/Response and Servlets running on embedded Jetty web servers.
Thank you!
What you're after is for the broker component to use the client's IP address when connecting to the target server. That is called IP spoofing.
Are you sure that you want to implement this yourself? Intricacies of network implementation of such a solution are quite daunting. Consider using software that has this option builtin, such as HAProxy. See these blog posts.
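For reference, HAProxy's transparent-proxy mode is configured roughly as below (addresses and names are placeholders; it additionally requires HAProxy built with TPROXY support plus matching iptables/routing rules on the host, which is exactly the intricate part mentioned above):

```haproxy
# Connect to backends using the client's source IP instead of our own.
frontend fe_http
    bind :8080
    default_backend be_workers

backend be_workers
    # 'usesrc clientip' makes outgoing connections spoof the client's IP.
    source 0.0.0.0 usesrc clientip
    server worker1 10.0.0.11:8080
    server worker2 10.0.0.12:8080
```

With this in place the workers see the original client address, so their (large) responses are addressed to the client rather than to the broker.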