I have a proxy list, like this. It contains all kinds of proxies: HTTP, HTTPS, SOCKS, etc. I want to calculate the heartbeat (health) of each proxy every X minutes.
I've found a nice example of how people ping IP addresses through Java sockets:
Socket s = new Socket(hostname, port);
s.getOutputStream().write((byte) '\n');
int ch = s.getInputStream().read();
s.close();
if (ch == '\n') // it's all good.
Question
Which protocol (or protocols) do I need to use to ping HTTP, HTTPS, and SOCKS servers?
That code does not use ICMP; it opens a TCP connection to a port.
Opening a TCP connection (or, for that matter, using ICMP) only verifies that the host's network stack is capable of responding; it does not verify the health of the proxy itself. To do that you'd have to actually make a connection using the proxy protocol and verify proxying to an outside resource.
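If you want a health check rather than just a reachability check, one option in Java is to route a real request through each proxy with java.net.Proxy and treat a completed response as the heartbeat. Below is a minimal sketch; the target URL, the timeouts and the "status below 400" criterion are my own assumptions, and a proxy that itself must be reached over TLS would need extra handling:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

public class ProxyHealthCheck {

    // Consider the proxy healthy if a request routed through it to a known
    // endpoint completes without an error status.
    static boolean isHealthy(String proxyHost, int proxyPort, Proxy.Type type) {
        Proxy proxy = new Proxy(type, new InetSocketAddress(proxyHost, proxyPort));
        try {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://example.com/").openConnection(proxy);
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            int status = conn.getResponseCode();
            conn.disconnect();
            return status < 400;
        } catch (IOException e) {
            return false; // could not connect through the proxy at all
        }
    }

    public static void main(String[] args) {
        // Proxy.Type.HTTP covers HTTP proxies, Proxy.Type.SOCKS covers SOCKS.
        System.out.println(isHealthy("127.0.0.1", 8080, Proxy.Type.HTTP));
    }
}

You would then run this check for each entry in the list every X minutes, e.g. from a ScheduledExecutorService.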
Related
A Spring Boot app is hosted on the default port (server.port=8080), and when I connect to the server, the JS client's code calls the same port: new SockJS('http://localhost:8080/api/streams'); But which port is used for the WebSocket after the connection is established?
I suppose the data exchange for WebSocket/STOMP works on a different port. But what is its number?
WebSocket uses the HTTP connection, so it can use that same port 8080, or the more usual port 443 (secure) or port 80 (insecure).
A WebSocket connection is established by making an HTTP connection, then asking to upgrade the connection to a WebSocket connection.
As Wikipedia puts it:
WebSocket is distinct from HTTP. Both protocols are located at layer 7 in the OSI model and depend on TCP at layer 4. Although they are different, RFC 6455 states that WebSocket "is designed to work over HTTP ports 443 and 80 as well as to support HTTP proxies and intermediaries," thus making it compatible with HTTP. To achieve compatibility, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol.
By default there are acceptor elements configured to accept STOMP connections on ports 61616 and 61613.
https://activemq.apache.org/components/artemis/documentation/latest/stomp.html
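To see the port reuse in practice, a Java 11+ client can open a WebSocket over the same port 8080 that serves the HTTP traffic. This is only a sketch: it assumes the server also exposes a raw WebSocket endpoint at the given path (with SockJS that is usually the /websocket sub-path), and the URL is taken from the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class SamePortWebSocketDemo {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();

        // The handshake is an ordinary HTTP request to port 8080 that is then
        // upgraded in place, so no separate port ever comes into play.
        WebSocket ws = client.newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080/api/streams/websocket"),
                        new WebSocket.Listener() {
                            @Override
                            public CompletionStage<?> onText(WebSocket webSocket,
                                                             CharSequence data,
                                                             boolean last) {
                                System.out.println("received: " + data);
                                return WebSocket.Listener.super.onText(webSocket, data, last);
                            }
                        })
                .join();

        ws.sendText("hello", true);
    }
}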
I am testing with java.net.ServerSocket.
What I want is the following.
When connecting to aaa.com, I want to get aaa.com, and when connecting to bbb.com, I want to get bbb.com.
My /etc/hosts file configuration is as follows.
127.0.0.1 aaa.com
127.0.0.1 bbb.com
I used the following java source.
ServerSocket server = new ServerSocket(port);
Socket request = server.accept();
request.getInetAddress().getHostName();
And when connecting to aaa.com, aaa.com is returned.
When connecting to bbb.com, aaa.com is returned.
How can I get bbb.com when connected to bbb.com?
This code is not connecting to anything. It is accepting connections from ... something.
So ... I presume that you have some client code (not shown) that is connecting to port port using the hostnames "aaa.com" and "bbb.com" respectively, and you want this server side to know which hostname the client side used.
It is not possible.
The client resolves the hostnames to an IP address and then makes the connection using the IP address (and only the IP address). Since the IP address is the same in both cases, the server side cannot distinguish the two cases.
It follows that if the application level of the server needs to know the hostname that the client used to make the connection, then the application protocol must pass this information from the client to the server. (That is what protocols like HTTP, FTP and so on do.)
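As a sketch of what "the application protocol must pass this information" looks like, the client can simply send the hostname it dialled as the first line, which is essentially what HTTP's Host header does. The port number and the line-based framing below are arbitrary choices for illustration:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class HostAwareEcho {

    // Client side: connect by name and tell the server which name was used.
    static String ask(String hostname, int port) throws Exception {
        try (Socket s = new Socket(hostname, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println(hostname);   // the "Host header" of this toy protocol
            return in.readLine();    // server echoes it back
        }
    }

    // Server side: read the hostname from the protocol, not from the socket.
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9999)) {
            while (true) {
                try (Socket request = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(request.getInputStream()));
                     PrintWriter out = new PrintWriter(request.getOutputStream(), true)) {
                    out.println(in.readLine());
                }
            }
        }
    }
}

With the /etc/hosts entries above, ask("aaa.com", 9999) returns "aaa.com" and ask("bbb.com", 9999) returns "bbb.com", because the name travels inside the protocol rather than being inferred from the connection.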
I've looked online, and everything I find shows how to make a separate server to connect to the main server if it's behind a nat or firewall.
But in my case the client is behind the NAT, and the server is on the local network.
So it's set up kinda like below:
Client Actual:10.0.0.1 -> Client NAT:100.0.0.2:1111 <--> Server 10.0.0.0:1099
The Java code I use to connect to the server is as below:
String serviceUrl = "service:jmx:rmi:///jndi/rmi://10.0.0.0:1099/jmxrmi";
String[] credentials = new String[] {"username", "password"};
String objectName = "org.apache.activemq:type=Broker,brokerName=test";
JMXServiceURL url = new JMXServiceURL(serviceUrl);
Map<String, String[]> env = new HashMap<String, String[]>();
env.put(JMXConnector.CREDENTIALS, credentials);
JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
conn = jmxc.getMBeanServerConnection();
broker = MBeanServerInvocationHandler.newProxyInstance(conn, new ObjectName(objectName), BrokerViewMBean.class, true);
And the error it throws is:
java.rmi.ConnectException: Connection refused to host: 10.0.0.0; nested exception is:
java.net.ConnectException: Connection timed out: connect
So my question is, how do I make this client behind NAT connection work?
First of all: there is nothing special about the network configuration required for ActiveMQ to work. ActiveMQ's protocol uses a single port and can be routed easily, just like most other TCP/IP protocols.
Therefore, given that the server is properly listening on its TCP port and that a client can successfully connect to it locally, this problem can be analyzed as if it were any other network-related problem.
Can the client machine ping the server machine? It is difficult to properly understand your network from the IP address scheme that you present, but as presented right now, the client machine will simply assume that the server is on the local network and therefore send an ARP request asking for the MAC address of "10.0.0.0" (which will time out because there is no such machine to answer the request) rather than forward the request to its NAT gateway. If that is indeed the problem you have, then there are three possible solutions:
a) modify the network layout (have the client use a different IP scheme),
b) set up a static route for the server's IP on the client machine to force its routing through the gateway, or
c) add a port redirect on the gateway and have the client connect to the IP address of the gateway instead.
Solution a is not very practical unless your setup is merely a lab configuration. Solution b is a possibility, but a really bad one. Solution c, that is, setting up port redirection on the gateway, is the most common solution to this kind of problem.
Use hostnames on both sides, by setting the same -Djava.rmi.server.hostname=XXX. Be sure that hostname is resolvable on both sides. You can have a look at http://docs.oracle.com/javase/8/docs/technotes/guides/rmi/faq.html#nethostname
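Assuming solution c (a port redirect on the NAT gateway) plus -Djava.rmi.server.hostname set on the broker side, the only change the client needs is to aim the service URL at an address it can actually reach. A sketch with placeholder names; keep in mind that the RMI stub handed out by the registry on port 1099 embeds a second host/port pair, so that port has to be redirected on the gateway as well, and java.rmi.server.hostname must resolve to the gateway from the client's point of view:

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxViaGateway {
    public static void main(String[] args) throws Exception {
        // "nat-gateway" is a placeholder for whatever address the gateway
        // has from the client's side; it redirects port 1099 (the registry)
        // and the RMI server port to the broker machine.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://nat-gateway:1099/jmxrmi");

        Map<String, String[]> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"username", "password"});

        JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
        MBeanServerConnection conn = jmxc.getMBeanServerConnection();
        System.out.println(conn.getMBeanCount());
    }
}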
The title says it.
If I try to bind a ServerSocket and a SSLServerSocket to the same port I get an error. If a client tries to connect to an SSLServerSocket without SSL, the accept() method throws an error. If a client tries to connect to a ServerSocket via SSL I have no idea how I would go about establishing a secure connection.
Is it even possible?
You can accept a normal socket connection and upgrade it to SSL/TLS at a later stage, using SSLSocketFactory.createSocket(Socket s, String host, int port, boolean autoClose) (and SSLSocket.setUseClientMode(false) on the server side).
You'll need to define a command in your plaintext protocol so that both sides can agree about an upgrade taking place (similarly to STARTTLS commands in SMTP or LDAP, for example).
Alternatively, you could use port unification (as it can be done with Grizzly), whereby you try to detect whether the client initiates the connection with an SSL/TLS Client Hello message. It can be trickier to do, since you'd have to read ahead to detect the packet type (so you'd probably need to keep that buffer and pass its content into an SSLEngine, instead of being able to use the SSLSocket directly).
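A minimal sketch of the server side of such an upgrade, using the createSocket overload mentioned above; it assumes a server key store has already been configured (e.g. via the javax.net.ssl.keyStore system properties) and leaves out the plaintext command exchange:

import java.net.ServerSocket;
import java.net.Socket;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class StartTlsServerSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(12345)) {
            Socket plain = server.accept();

            // ... talk plaintext here until both sides agree to upgrade ...

            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            SSLSocket tls = (SSLSocket) factory.createSocket(
                    plain,
                    plain.getInetAddress().getHostAddress(),
                    plain.getPort(),
                    true /* autoClose: closing tls also closes plain */);
            tls.setUseClientMode(false); // we are the server side of the handshake
            tls.startHandshake();

            // From here on, use tls.getInputStream()/getOutputStream()
            // exactly as you would have used the plain socket.
        }
    }
}

The client does the mirror image with the same createSocket overload, but leaves the socket in client mode (the default).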
I'm working on tunnelling the cajo RMI traffic through an SSH tunnel.
For that I have a server running an SSH daemon (Apache Mina) and a client running an SSH client (Trilead SSH).
The SSH connection between these machines can be established, and by applying local and remote port forwarding I can tunnel RMI traffic; however, this works only in the outgoing (to server) direction.
The setup:
Active SSH connection (port 22)
client: forwarding local port 4000 to remote host port 1198 (this traffic actually goes through the tunnel)
server: forwarding server port 4000 to client port 1198 (this part of the tunnel is not being used by cajo)
The server exports an object using:
Remote.config(null, 1198, null, 0);
ItemServer.bind(new SomeObject(), "someobject");
The client does an object lookup using:
ObjectType someObject = (ObjectType)TransparentItemProxy.getItem(
"//localhost:4000/someobject",
new Class[] { ObjectType.class });
logger.info(someObject.getName());
Port forwarding is invoked using the trilead ssh library on the client side:
conn.createLocalPortForwarder(4000, "Server-IP", 1198);
conn.requestRemotePortForwarding("localhost", 4000, "Client-IP", 1198);
When analysing the IP traffic between the two machines with Wireshark, I see that the lookup is being redirected through the tunnel, but the response is not.
The response is sent directly to port 1198 of the client.
How can I force the server to send the response of a remote invocation to a local port, in order to get it tunneled back to the client?
UPDATE: The problem here was that the ports for RMI objects are different from the registry port and therefore also need to be forwarded.
In short, client 10.0.0.1 makes lookup to //10.0.0.1:4000 which is forwarded to the RMI port on the server (through the tunnel).
Subsequently the server responds to 10.0.0.1:1198, whereas I would like the server to send its traffic to its local port 4000 instead, in order to use the tunnel.
I have tried to fiddle with the cajo Remote.config(ServerAddress, ServerPort, ClientAddress, ClientPort) settings, but when I set the client address to 10.0.0.1 or 127.0.0.1 for this method, I'm unable to get a response back and I don't see any responding traffic at all.
I did find a solution to this problem, in which I omitted the cajo framework from the setup and used pure Java RMI. This makes things more transparent.
On both client and server I placed a security policy file: C:\server.policy
grant {
permission java.security.AllPermission;
};
Then on the server, set the security permissions and start the registry on the desired port:
System.setProperty("java.rmi.server.hostname", "127.0.0.1");
System.setProperty("java.security.policy","C:\\server.policy");
System.setSecurityManager(new RMISecurityManager());
new SocketPermission("*:1024-", "accept,connect,listen");
createRMIRegistry(Property.getProperty("rmi.registry.port"));
Notice the hostname 127.0.0.1; this makes sure we are always pointing to localhost.
This tricks the client into thinking the object obtained from the remote registry is local, so it then connects to its local forwarded ports.
On the client I give the same permissions as above; I don't start the registry, but I bind an extra socket factory to use for the registry lookup.
RMISocketFactory.setSocketFactory(new LocalHostSocketFactory());
This socket factory creates an SSH client socket to the localhost SSH port (i.e. to the remote registry).
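LocalHostSocketFactory is my own class; stripped of the SSH details, it boils down to pointing every outgoing registry connection at the local end of the tunnel. The port constant below is a placeholder for whichever local port the registry is forwarded to:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

public class LocalHostSocketFactory extends RMISocketFactory {

    // Placeholder: the local port that is forwarded to the remote registry.
    private static final int FORWARDED_REGISTRY_PORT = 4000;

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        // Ignore the host the stub asks for and go through the tunnel instead.
        return new Socket("127.0.0.1", FORWARDED_REGISTRY_PORT);
    }

    @Override
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port);
    }
}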
The remote objects are exported with a custom ClientSocketFactory, which is therefore implemented on the client side. (On the server side it needs to be disabled, otherwise you will SSH to your own machine :$)
It then creates an SSH socket and port forwarder on the fly.
public class SSHClientSocketFactory implements RMIClientSocketFactory, Serializable {

    public Socket createSocket(String host, int port) throws IOException {
        try {
            // hostname, username, password and serverAddress are configuration
            // fields of this class (omitted here for brevity).
            Connection conn = new Connection(hostname, 22);
            conn.connect();
            boolean isAuthenticated = conn.authenticateWithPassword(username, password);
            // Forward the object's port through the SSH tunnel on the fly.
            LocalPortForwarder lpf1 = conn.createLocalPortForwarder(port, serverAddress, port);
            return new Socket(host, port);
        } catch (Exception e) {
            System.out.println("Unable to connect");
            throw new IOException(e);
        }
    }
}
This automatic port forwarding ensures that whatever port is used to bind an RMI object, the connection goes through the SSH tunnel, because everything points to localhost.
Remote port forwarding is not needed for this setup.
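For completeness, this is roughly how the factory gets attached on the server side with plain RMI. SomeObject here stands for a class implementing a Remote interface, the port numbers are placeholders, and passing null keeps the default server socket factory; remember that the SSH logic inside SSHClientSocketFactory must be a no-op when it runs on the server itself:

import java.rmi.Remote;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class ServerExportSketch {
    public static void main(String[] args) throws Exception {
        SomeObject impl = new SomeObject();

        // Export on a fixed port so the client can forward it; the client
        // socket factory travels to the client inside the stub.
        Remote stub = UnicastRemoteObject.exportObject(
                impl, 1198, new SSHClientSocketFactory(), null);

        Registry registry = LocateRegistry.getRegistry(1099);
        registry.rebind("someobject", stub);
    }
}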