I am trying to debug an HTTP client request. I need to get the local port number that was used when the connection was made. I can use the HttpClientContext to get the connection and retrieve the local port when the connection is successful. However, in cases where an IOException is thrown, I get a ConnectionShutdownException when retrieving the local port number. Any clue on how I can obtain the local port number for all HTTP requests, even in case of error?
This is for HttpClient 4.0.1 (the last version I have with me).
I did not find any simple one-liner...
The part of HttpClient that actually binds sockets to the local host/port and connects to the remote host/port is, unsurprisingly, the SocketFactory. In HttpClient, the socket factory is associated with a Scheme in the SchemeRegistry, which in turn belongs to the connection manager. Note that HttpClient's SocketFactory is NOT a javax.net.SocketFactory, but a wrapper around such an object. You define a scheme like so:
SchemeRegistry schRgstr = new SchemeRegistry();
Scheme httpScheme = new Scheme("http", PlainSocketFactory.getSocketFactory(), 80);
schRgstr.register(httpScheme);
Of course, org.apache.http.conn.scheme.SocketFactory is an interface, so you can decorate it to do anything you want; in particular, this will come in handy.
The part of HttpClient that calls the socket factory is called the ClientConnectionOperator (which is an interface too). This object is actually also tied to the connection manager, and not the client per se. So if you want to customize the connection operator, you must override the connection manager too, for example like so (anonymous class):
ThreadSafeClientConnManager connMngr = new ThreadSafeClientConnManager(httpParams, schRgstr) {
    @Override
    protected ClientConnectionOperator createConnectionOperator(SchemeRegistry schreg) {
        return new YourConnectionOperator(schreg);
    }
};
The lifecycle model of the socket goes like so:
1. When the connection manager needs a new connection, it calls createConnection on the operator (which basically does nothing but create an internal object that will eventually hold the actual socket).
2. Further along the way, it calls openConnection.
3. openConnection goes to the SocketFactory and asks for a new Socket, then tries to connect it like so (here, sf is the "HttpClient socket factory"):
sock = sf.connectSocket(sock, target.getHostName(),
        schm.resolvePort(target.getPort()),
        local, 0, params);
4. If this call fails, an exception is thrown and more information will not be accessible. We'll get back to that.
5. If connectSocket works, though, the prepareSocket method is called on the connection operator. So you can override that method and put the port information into the context (or anything else you fancy):
@Override
protected void prepareSocket(Socket sock, HttpContext context, HttpParams params) throws IOException {
    super.prepareSocket(sock, context, params);
    context.setAttribute("LOCAL PORT INTERCEPTOR", sock.getLocalPort());
}
The HttpContext instance that is used is the one you pass when you invoke HttpClient, so you can access it even if the call fails later because of some other exception. When you place your call, do it like so:
HttpContext ctx = new BasicHttpContext();
HttpGet get = new HttpGet(targetUrl.getUrl());
HttpResponse resp = client.execute(get, ctx);
In this code, if the client could get as far as step 5, you have your port info accessible, even if an exception occurs later on (connection dropped, timeout, invalid HTTP, ...).
Going further
There is still a dark zone in all this: the case where the call fails at step 4 (the actual opening of the socket, e.g. a DNS error while resolving the destination host name)... I'm not sure this case is actually interesting to you (I cannot see why it would be). Seeing as dealing with it starts to get messy, you should really consider whether or not you need this.
For that we need to start overriding really deep, and that involves lots of work - some of it I would not consider very good design.
The complexity arises because the authors of HttpClient did not provide the necessary methods one could override to get to the information you need. Inside the socket factory, the interesting point is:
sock = createSocket();
InetSocketAddress isa = new InetSocketAddress(localAddress, localPort);
sock.bind(isa);
// go on and connect to the server
// like socket.connect...
There is no overridable method that separates the local side from the server side of the socket opening, so if a socket exception is thrown on the server side, your access to the socket instance is lost, and the local port info is gone with it.
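The underlying behavior is easy to demonstrate with a plain java.net.Socket (a standalone sketch, independent of HttpClient): once bind() has succeeded, the local port is known and stays known, even if the subsequent connect() fails.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class LocalPortDemo {
    public static void main(String[] args) throws IOException {
        Socket sock = new Socket();
        sock.bind(new InetSocketAddress(0)); // bind to an ephemeral local port
        int localPort = sock.getLocalPort(); // the local port is known from here on
        try {
            // 192.0.2.1 is a TEST-NET address; this connect is expected to fail
            sock.connect(new InetSocketAddress("192.0.2.1", 80), 250);
        } catch (IOException expected) {
            // even though connect() failed, localPort is still valid
            System.out.println("local port was " + localPort);
        } finally {
            sock.close();
        }
    }
}
```

The trouble is that inside HttpClient's socket factory, the bind and connect happen in one method, so a failure in the connect half loses the socket reference, and with it the port that bind() had already assigned.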
But all is not lost, because we do have one entry point we can play with: the createSocket method! The default implementation is:
public Socket createSocket() {
    return new Socket();
}
But as Socket is not a final class, you can... play with it !
public Socket createSocket() {
    return new Socket() {
        @Override
        public void bind(SocketAddress bindpoint) throws IOException {
            super.bind(bindpoint);
            // get the local port and give the info back to whomever you like
        }
    };
}
Problem is: this works with plain sockets (because we can create an anonymous subclass), but it does not work with HTTPS, because you cannot simply instantiate a Socket in that case; you have to do:
return (SSLSocket) this.javaxNetSSLSocketFactory.createSocket();
And you cannot create an anonymous subclass in that case. And as Socket is not an interface either, you cannot even proxy it to decorate it. You could (ugliest code ever) create a Socket subclass that wraps and delegates to the SSLSocket, but that would be desperate.
So, recap time.
If we only care about sockets that were at some point connected to the server, the solution is fairly simple.
The scheme registry we build is used in a custom ConnectionManager that overrides the ConnectionOperator. The operator's overridden method is prepareSocket, which allows us to simply update the HttpContext of every request we send with the port information. This is an easy way to find the information when everything goes well.
If we want to care about local ports attributed to a socket that never, ever got connected (which I would argue is of limited use), one needs to go deeper.
Our own SchemeRegistry should be designed with custom SocketFactories. These should probably be decorators of the default ones... The overridden createSocket method allows us to "intercept" the binding on the local port (and store it in, say, a ConcurrentMap<Socket, Integer> or a ThreadLocal), and by overriding connectSocket we trap any exception that may happen and rethrow it, but not before wrapping it in our own exception type that can hold the local port information. That way, if the exception passes through when you call the client, you will find your port data by checking the cause chain. If no exception occurs, we clean our map instance / thread local.
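The wrapping exception mentioned above could look something like this (a sketch; the class name is mine, not part of HttpClient):

```java
import java.io.IOException;

// Hypothetical exception type carrying the local port of the socket that
// failed to connect; an overridden connectSocket would throw this, wrapping
// the original IOException as the cause.
public class LocalPortIOException extends IOException {
    private final int localPort;

    public LocalPortIOException(int localPort, IOException cause) {
        super("I/O error on socket bound to local port " + localPort, cause);
        this.localPort = localPort;
    }

    public int getLocalPort() {
        return localPort;
    }
}
```

At the call site, walk the Throwable.getCause() chain until you hit a LocalPortIOException and read the port from it.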
Edit: Removed startHandshake(); as it's irrelevant to the question and rarely needed (for example, not in my case)
I have a rather specific and rare client-server protocol (over TCP).
I've implemented it using SSLSocket.
Now, I foresee that I might need to use the same protocol over an un-encrypted connection.
My problem is that the class implementing the protocol has a field: public SSLSocket currentSocket;
(and then the methods in my client class do all sorts of .read(), .write(), flush()...)
I thought about changing the field type, like so: public Socket currentSocket;
However, then, the problem is that my connection procedure is incompatible:
public static void connect () {
currentSocket = SslUtils.getSSLsocket(host, port, keystoreFile, keystorePass, pkPass);
...
java.net.Socket's default constructor obviously doesn't accept keystore stuff.
I don't want to re-implement my whole client just for this difference...
One thought I have is, when I need a plaintext Socket, to create an SSLSocket with no encryption.
I don't know if that's a professional way of doing it or if it will even work (the server will expect a plaintext client socket in the new use case)
My other idea is to define two fields, one for plaintext socket, one for SSL socket and then use logic to link the in/out streams to the correct field, as needed. However, that will result in a "hanging" field. If you use SSL, there will be a redundant field Socket plaintextSocket and vice-versa...
Is there a way to make my currentSocket field more abstract, so that I can define it in the same client, then instruct a slightly different client code path depending on a known variable (something like needSSLsocket=true) for the instantiation and connection?
SSLSocket extends Socket, so you can assign an SSLSocket object to a Socket variable. You are right to change your currentSocket field to a Socket. Simply use another variable to handle the SSLSocket when needed, eg:
public static void connect() {
    if (needSSLsocket) {
        SSLSocket ssl = SslUtils.getSSLsocket(host, port, keystoreFile, keystorePass, pkPass);
        ssl.startHandshake();
        ...
        currentSocket = ssl;
        /* or:
        currentSocket = SslUtils.getSSLsocket(host, port, keystoreFile, keystorePass, pkPass);
        ((SSLSocket) currentSocket).startHandshake();
        ...
        */
    } else {
        currentSocket = new Socket(host, port);
    }
    ...
}
I've got a problem with my RMI test...
Server:
LocateRegistry.createRegistry(non-default-port);
obj = new HelloImpl();
Naming.rebind("//ip-of-server/HelloServer", obj);
Client:
RMISocketFactory.setSocketFactory(new sun.rmi.transport.proxy.RMIHttpToCGISocketFactory());
obj = (Hello) LocateRegistry.getRegistry("ip of server", non-default-port).lookup( "HelloServer");
obj.sayHello("test");
All objects are static fields in the class.
But I get a "java.rmi.NoSuchObjectException: no such object in table" exception all the time.
This only happens if I use HTTP tunneling via
RMISocketFactory.setSocketFactory(new sun.rmi.transport.proxy.RMIHttpToCGISocketFactory());
If I try it without HTTP tunneling (from another PC over the normal web), it works fine!
What could be the problem?
You will be getting this from the sayHello() rather than the lookup(). The meaning of the exception is that the stub is 'stale', i.e. the remote object has been unexported, which probably means it has been DGC'd as well. You should try the following, in this order, one at a time:
Keep a static reference to the value returned by createRegistry().
This should be sufficient by itself, but if it isn't:
Keep a static reference to the remote object itself, and no, I do not mean its stub: in this case, obj.
I can't explain why it happens via HTTP tunnelling only, but you should do (1) in all cases anyway, so really it is a bug waiting to happen via any means.
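Applied to the code in the question, points (1) and (2) amount to something like this (a sketch; the names follow the question, the port is arbitrary):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class HelloServer {
    public interface Hello extends Remote {
        String sayHello(String msg) throws RemoteException;
    }

    public static class HelloImpl extends UnicastRemoteObject implements Hello {
        protected HelloImpl() throws RemoteException { super(); }
        @Override
        public String sayHello(String msg) { return "hello " + msg; }
    }

    // Static references keep both objects strongly reachable. Without them
    // the registry and the remote object can be garbage-collected and
    // unexported, which later surfaces as NoSuchObjectException.
    private static Registry registry;
    private static HelloImpl obj;

    public static void start(int nonDefaultPort) throws Exception {
        registry = LocateRegistry.createRegistry(nonDefaultPort);
        obj = new HelloImpl();
        registry.rebind("HelloServer", obj);
    }
}
```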
In my Java Sockets program, I have implemented a client-server Observer pattern. That is, the server subject publishes its server events to client observers that have subscribed to the server. That's the theory.
In practice, I am unable to send the client observer through the socket to subscribe to the server. The error that comes up is: "java.io.NotSerializableException: java.net.Socket." I think this means Java is complaining that the client observer contains a Socket which, as we all know, is not Serializable.
However, the Socket is the means of communication between the client and the server!
How can I implement a client-server Observer pattern when the client appears to contain a non-Serializable roadblock?
Here is a code overview to give you an understanding of what is happening:
Server
public class Server implements ServerSocketPublisher {
    // traditional Observer publisher methods implemented here, such as
    // register, deregister, notifySubscribers
    // ServerSocket implemented here, waiting on accept()
}
Client
public class Client implements ClientSocketSubscriber, Serializable {
    // traditional Observer subscriber methods implemented here, i.e. updateClient
    Socket connectingSocket = null; // I SUSPECT THIS VARIABLE IS THE PROBLEM

    public void subscribe() {
        try {
            connectingSocket = new Socket();
            // set SocketAddress and timeout
            connectingSocket.connect(sockAddr, timeout);
            if (connectingSocket.isConnected()) {
                ObjectOutputStream oos =
                        new ObjectOutputStream(connectingSocket.getOutputStream());
                oos.writeObject(this); // THIS LINE THROWS THE ERROR IN STACK TRACES
                oos.flush();
                oos.close();
            }
        } catch (/* various exceptions */ Exception e) {
        }
        // close connectingSocket
    }
}
You have a couple of ways to get this fixed:
Mark your socket as transient
transient Socket connectingSocket = null;
Instead of implementing Serializable, implement Externalizable, and then in your readExternal and writeExternal implementations ignore the Socket.
Along with this you should also read
About transient:
Post on SO
About Externalizable :
Javabeat
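A minimal sketch of the Externalizable approach (class and field names are illustrative, not taken from your code): the Socket is simply never written, so only the serializable state crosses the wire.

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.net.Socket;

public class Client implements Externalizable {
    // Runtime-only state: deliberately NOT written in writeExternal.
    private Socket connectingSocket;

    // Serializable state that we do want to send across.
    private String subscriberId = "";

    public Client() { } // Externalizable requires a public no-arg constructor

    public Client(String subscriberId) { this.subscriberId = subscriberId; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(subscriberId); // the Socket is skipped on purpose
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        subscriberId = in.readUTF(); // connectingSocket stays null on this side
    }

    public String getSubscriberId() { return subscriberId; }
    public Socket getConnectingSocket() { return connectingSocket; }
}
```

After deserialization on the server, connectingSocket is null; the server uses the socket it got from accept() instead.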
you cannot write the Client to the output stream since it contains a Socket. If you serialize the Client, you serialize all non-transient fields in it, and that's when you get the exception.
However, the server already has the socket on its side, so you don't need to send it and the client across. If all clients are observers once the connection has occurred, you can pretty much start waiting for data from the socket on the client side at that point. The server will need to keep a list of sockets it's ready to broadcast to, and when it gets an event to send, loop over all sockets and send the register, deregister, notifySubscriber messages.
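That broadcast loop might be sketched like this (names are illustrative, not from the question):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative sketch: the server keeps the sockets it accepted; notifying
// observers is just looping over those sockets and writing the event.
public class SocketPublisher {
    private final List<Socket> subscribers = new CopyOnWriteArrayList<>();

    public void register(Socket s) {
        subscribers.add(s);
    }

    public void deregister(Socket s) {
        subscribers.remove(s);
    }

    public void notifySubscribers(String event) {
        for (Socket s : subscribers) {
            try {
                // autoflush so the line is pushed out immediately
                new PrintWriter(s.getOutputStream(), true).println(event);
            } catch (IOException e) {
                deregister(s); // drop broken connections
            }
        }
    }
}
```

Each connected client then just reads lines from its socket's input stream to receive events.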
Alternatively if you wish to treat the client as an object on the server side and call methods on it (which it looks like you might be trying to do), maybe you need to look into RMI - where the server holds stubs of the client and invoking the stub sends messages to the client.
I have 3 JSPs. The first one asks the user for their username. Once the form is submitted, the user is taken to a 2nd JSP where a unique passcode is created for them. How would I go about taking this passcode and passing it to a 3rd JSP using a socket?
You can use java.net.URL and java.net.URLConnection to fire and handle HTTP requests programmatically. They make use of sockets under the covers and this way you don't need to fiddle with low level details about the HTTP protocol. You can pass parameters as query string in the URL.
String url = "http://localhost:8080/context/3rd.jsp?passcode=" + URLEncoder.encode(passcode, "UTF-8");
InputStream input = new URL(url).openStream();
// ... (read it, it contains the response)
This way the passcode request parameter is available in the 3rd JSP by ${param.passcode} or request.getParameter("passcode") the usual way.
It is however better to just include that 3rd JSP in your 2nd JSP:
request.setAttribute("passcode", passcode);
request.getRequestDispatcher("3rd.jsp").include(request, response);
This way the passcode is available as request attribute in the 3rd JSP by ${passcode} or request.getAttribute("passcode") the usual way.
See also:
Using java.net.URLConnection to fire and handle HTTP requests
Unrelated to the concrete question: this is a terribly nasty hack, and its purpose is beyond me. There's a serious design flaw somewhere in your application. Most likely those JSPs are tightly coupled with business logic which actually belongs in normal and reusable Java classes like servlets and/or EJBs and/or JAX-WS/RS, which you just import and call in your Java class the usual Java way. JSPs are meant to generate and send HTML, not to act as business services, let alone web services. See also How to avoid Java code in JSP files?
So, you want the username to be submitted from the first JSP to the second, by submitting a form to the second, right?
But, for interaction between the second and third, you want to avoid using the communication mechanisms behind the JSP files and use your own, right?
Well, how you might implement doing this depends on where you're sending your communication from and to. For instance, are they on the same machine, or on different machines?
Generally speaking, you'll need a client-server type of relationship to be set up here. I imagine that you would want your third JSP to act as the server.
What the third JSP will do is sit and wait for a client to try to communicate with it. But, before you can do that, you'll first need to bind a port to your application. Ports are allocated by the operating system and given to requesting processes.
When trying to implement this in Java, you might want to try something like the following:
int port_number = 1080;
ServerSocket server = new ServerSocket(port_number);
In the above example, the ServerSocket is already bound to the specified port 1080. It doesn't have to be 1080 - 1080 is just an example.
Next, you will want to listen and wait for a request to come in. You can implement this step in the following:
Socket request = server.accept(); // blocks until a client connects
The accept() call blocks until a request finally comes in. When a request arrives, it returns a new Socket object to handle that request. So you could call accept() again later on and continue to wait for and accept requests, while a child thread handles communication using the newly created request Socket.
But, for your project, I would guess that you don't need to communicate with more than one client at a time, so it's okay if we just simply stop listening once we receive a request, I suppose.
So, now onto the client application. Here, it's a little different from what we had with the server. First off, instead of listening on the port and waiting for a request, the client's socket will actively try to connect to a remote host on its port. So, if there is no server listening on that port, the connection will fail.
So, two things will need to be known:
What's the IP Address of the server?
What port is the server listening in on?
There are short-cuts to getting the connection using the Java Socket class, but I assume that you're going to test this out on the same machine, right? If so, then you will need two separate ports for your client and server. That's because the OS won't allow two separate sockets to bind to the same port: once a process binds a port, no other process may use it until it is released back to the OS. (Strictly speaking, the client doesn't have to bind an explicit local port at all; if you skip bind(), the OS picks an ephemeral port for you. The example below binds one explicitly anyway.)
So, to make the two separate JSP's communicate on the same physical machine, you'll need both a local port for your client, and you'll need the server's port number that it's listening in on.
So, let's try the following for the client application:
int local_port = 1079;
int remote_port = 1080;
InetSocketAddress localhost = new InetSocketAddress(local_port);
Socket client = new Socket(); //The client socket is not yet bound to any ports.
client.bind(localhost); //The client socket has just requested the specified port number from the OS and should be bound to it.
String remoteHostsName = "[put something here]";
InetSocketAddress remotehost = new InetSocketAddress(InetAddress.getByName(remoteHostsName), remote_port); // Performs a DNS lookup of the specified remote host and returns an address with the specified port number
client.connect(remotehost); //Connection to the remote server is being made.
That should help you along your way.
A final note should be made here. You can't actually run these two applications using the same JVM. You'll need two separate processes for client and server applications to run.
In our application, we are using RMI for client-server communication in very different ways:
Pushing data from the server to the client to be displayed.
Sending control information from the client to the server.
Callbacks from those control-message code paths that reach back from the server to the client (sidebar note: this is a side-effect of some legacy code and is not our long-term intent).
What we would like to do is ensure that all of our RMI-related code will use only a known specified inventory of ports. This includes the registry port (commonly expected to be 1099), the server port and any ports resulting from the callbacks.
Here is what we already know:
LocateRegistry.getRegistry(1099) or Locate.createRegistry(1099) will ensure that the registry is listening in on 1099.
Using the UnicastRemoteObject constructor / exportObject static method with a port argument will specify the server port.
These points are also covered in this Sun forum post.
What we don't know is: how do we ensure that the client connections back to the server resulting from the callbacks will only connect on a specified port rather than defaulting to an anonymous port?
EDIT: Added a longish answer summarizing my findings and how we solved the problem. Hopefully, this will help anyone else with similar issues.
SECOND EDIT: It turns out that in my application, there seems to be a race condition in my creation and modification of socket factories. I had wanted to allow the user to override my default settings in a Beanshell script. Sadly, it appears that my script is being run significantly after the first socket is created by the factory. As a result, I'm getting a mix of ports from the set of defaults and the user settings. More work will be required that's out of the scope of this question but I thought I would point it out as a point of interest for others who might have to tread these waters at some point....
You can do this with a custom RMI Socket Factory.
The socket factories create the sockets for RMI to use at both the client and server end, so if you write your own you've got full control over the ports used. The client factories are created on the server, serialized, and then sent down to the client, which is pretty neat.
Here's a guide at Sun telling you how to do it.
You don't need socket factories for this, or even multiple ports. If you're starting the Registry from your server JVM you can use port 1099 for everything, and indeed that is what will happen by default. If you're not starting the registry at all, as in a client callback object, you can provide port 1099 when exporting it.
The part of your question about 'the client connections back to the server resulting from callbacks' doesn't make sense. They are no different from the original client connections to the server, and they will use the same server port(s).
Summary of the long answer below: to solve the problem that I had (restricting server and callback ports at either end of the RMI connection), I needed to create two pairs of client and server socket factories.
Longer answer ensues:
Our solution to the callback problem had essentially three parts. The first was the object wrapping which needed the ability to specify that it was being used for a client to server connection vs. being used for a server to client callback. Using an extension of UnicastRemoteObject gave us the ability to specify the client and server socket factories that we wanted to use. However, the best place to lock down the socket factories is in the constructor of the remote object.
public class RemoteObjectWrapped extends UnicastRemoteObject {
    // ....
    private RemoteObjectWrapped(final boolean callback) throws RemoteException {
        super((callback ? RemoteConnectionParameters.getCallbackPort() : RemoteConnectionParameters.getServerSidePort()),
              (callback ? CALLBACK_CLIENT_SOCKET_FACTORY : CLIENT_SOCKET_FACTORY),
              (callback ? CALLBACK_SERVER_SOCKET_FACTORY : SERVER_SOCKET_FACTORY));
    }
    // ....
}
So, the first argument specifies the port on which the object expects requests, whereas the second and third specify the socket factories that will be used at either end of the connection driving this remote object.
Since we wanted to restrict the ports used by the connection, we needed to extend the RMI socket factories and lock down the ports. Here are some sketches of our server and client factories:
public class SpecifiedServerSocketFactory implements RMIServerSocketFactory {
    /** Always use this port when specified. */
    private int serverPort;

    /**
     * @param ignoredPort This port is ignored.
     * @return a {@link ServerSocket} if we managed to create one on the correct port.
     * @throws java.io.IOException
     */
    @Override
    public ServerSocket createServerSocket(final int ignoredPort) throws IOException {
        try {
            final ServerSocket serverSocket = new ServerSocket(this.serverPort);
            return serverSocket;
        } catch (IOException ioe) {
            throw new IOException("Failed to open server socket on port " + serverPort, ioe);
        }
    }
    // ....
}
Note that the server socket factory above ensures that only the port that you previously specified will ever be used by this factory. The client socket factory has to be paired with the appropriate socket factory (or you'll never connect).
public class SpecifiedClientSocketFactory implements RMIClientSocketFactory, Serializable {
    /** Serialization hint */
    public static final long serialVersionUID = 1L;

    /** This is the remote port to which we will always connect. */
    private int remotePort;

    /** Storing the host just for reference. */
    private String remoteHost = "HOST NOT YET SET";

    // ....

    /**
     * @param host The host to which we are trying to connect
     * @param ignoredPort This port is ignored.
     * @return A new Socket if we managed to create one to the host.
     * @throws java.io.IOException
     */
    @Override
    public Socket createSocket(final String host, final int ignoredPort) throws IOException {
        try {
            final Socket socket = new Socket(host, remotePort);
            this.remoteHost = host;
            return socket;
        } catch (IOException ioe) {
            throw new IOException("Failed to open a socket back to host " + host + " on port " + remotePort, ioe);
        }
    }
    // ....
}
So, the only thing remaining to force your two-way connection to stay on the same set of ports is some logic to recognize that you are calling back to the client side. In that situation, just make sure that your factory method for the remote object calls the RemoteObjectWrapped constructor above with the callback parameter set to true.
I've been having various problems implementing an RMI Server/Client architecture, with Client Callbacks. My scenario is that both Server and Client are behind Firewall/NAT. In the end I got a fully working implementation. Here are the main things that I did:
Server Side, Local IP: 192.168.1.10. Public (Internet) IP: 80.80.80.10
On the Firewall/Router/Local Server PC open port 6620.
On the Firewall/Router/Local Server PC open port 1099.
On the Router/NAT redirect incoming connections on port 6620 to 192.168.1.10:6620
On the Router/NAT redirect incoming connections on port 1099 to 192.168.1.10:1099
In the actual program:
System.getProperties().put("java.rmi.server.hostname", "80.80.80.10");
MyService rmiserver = new MyService();
MyService stub = (MyService) UnicastRemoteObject.exportObject(rmiserver, 6620);
LocateRegistry.createRegistry(1099);
Registry registry = LocateRegistry.getRegistry();
registry.rebind("FAManagerService", stub);
Client Side, Local IP: 10.0.1.123. Public (Internet) IP: 70.70.70.20
On the Firewall/Router/Local Server PC open port 1999.
On the Router/NAT redirect incoming connections on port 1999 to 10.0.1.123:1999
In the actual program:
System.getProperties().put("java.rmi.server.hostname", "70.70.70.20");
UnicastRemoteObject.exportObject(this, 1999);
MyService server = (MyService) Naming.lookup("rmi://" + serverIP + "/MyService");
Hope this helps.
Iraklis