I'm getting the reference to a remote EJB instance without any problem, but sometimes, when I invoke one of its methods, a "java.net.SocketTimeoutException: Read timed out" is thrown on the client side. There seem to be no problems on the server side.
Is there a way to set EJB client timeout on a per-invocation basis?
I'm using a pretty old JBoss version (4.2.1 GA)
You can configure the InvokerLocator attribute of the Connector MBean.
<attribute name="InvokerLocator">socket://{jboss.bind.address}:3873/?socketTimeout=60000</attribute>
You can also provide finer-grained settings via the config element under the Configuration attribute. By default the socket timeout is one minute:
<attribute name="socketTimeout">60000</attribute>
You can also provide timeout parameters in the JNDI properties file:
jnp.timeout : The connection timeout in milliseconds. The default
value is 0 which means the connection will block until the VM TCP/IP
layer times out.
jnp.sotimeout : The connected socket read timeout in milliseconds. The
default value is 0 which means reads will block. This is the value
passed to the Socket.setSoTimeout on the newly connected socket.
To manually configure the timeout for individual invocations, you have to create the initial context with the appropriate property values.
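For example, a rough sketch using the standard JBoss 4.x naming factory (the host, port and JNDI name are placeholders, and the timeout values are just examples):
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
props.put(Context.PROVIDER_URL, "jnp://myserver:1099"); // placeholder host/port
props.put("jnp.timeout", "5000");     // connection timeout in ms
props.put("jnp.sotimeout", "10000");  // socket read timeout in ms
Context ctx = new InitialContext(props);
Object ref = ctx.lookup("MyRemoteBean"); // placeholder JNDI name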
I have a Spring Boot application with an embedded Tomcat server. To limit the impact of DoS attacks I've set the property server.tomcat.connection-timeout to 3 seconds. The connectionTimeout is the limit of time after which the server will automatically close the connection with the client.
So if, in my case, the client takes more than 3 seconds to finish the request, the connection will automatically time out. However, it's not yet clear to me what exactly happens when instead it is a process on the server side that is causing a delay.
To give an example: my web application uses a HikariCP connection pool that manages connections to the database. It can have a maximum of 10 database connections. If all 10 are in use, any incoming request will have to wait for one of the database connections to become available. If this wait takes more than 3 seconds, will the Tomcat connection time out? Or will the connection remain available, since the delay isn't caused by the client?
According to the Tomcat 9.0 documentation, the connection-timeout is:
The number of milliseconds this Connector will wait, after accepting a connection, for the request URI line to be presented. [...] Unless disableUploadTimeout is set to false, this timeout will also be used when reading the request body (if any).
That is the time allowed for the client to send the request. It is unrelated to the time the server takes to respond to the request.
So ...
If this wait takes more than 3 seconds, will the Tomcat connection time out?
No, it won't.¹ In fact, it appears that Tomcat doesn't have any limit on how long a (synchronous) request may take to complete.
Of course, the client could time out a request if the server takes too long. The server is unlikely to notice this, though, so it cannot abandon the in-progress request.
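To make the two timeouts concrete, here is a rough application.properties sketch. The Hikari values are assumptions for illustration (they are not from the question); the wait for a pooled connection is governed by HikariCP's connectionTimeout (30 seconds by default), not by Tomcat's connection-timeout:
# Tomcat: time allowed for the client to send the request
server.tomcat.connection-timeout=3s
# HikariCP: how long a request waits for a free pooled connection (in ms)
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.connection-timeout=30000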
¹ Assuming that the documentation is accurate. However, that config option has been present for a number of Tomcat versions, with the same description. If the documentation were wrong, this would surely have been noticed, reported and fixed.
So HTTP/1.1 and above support persistent connections.
Now, we are creating a REST application which should be stateless, and we are putting a limit on the number of connections at a time.
But if I go through the HTTP/1.1 doc, this approach seems problematic.
It says the server will keep the connection open unless the client asks to close it.
So, my question is: what if the client does not close? It could give me a denial-of-service situation if a connection is always active.
What is the default timeout with jetty and how can I configure it? I am not able to find appropriate documentation.
The HttpConfiguration has a configuration setting setIdleTimeout(long ms)
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/server/HttpConfiguration.html#setIdleTimeout(long)
That controls the idle timeout between requests.
The HttpConfiguration object is part of the ServerConnector that controls the listening port and accepts connections.
The idle timeout default is -1 in code (infinite).
But it's 30,000ms in jetty-home (and the older jetty-distribution).
If you are running standalone Jetty, use the jetty-start property jetty.http.idleTimeout to configure it for your specific jetty-base instance/configuration.
Note: if you use the Servlet Async features, then the idle timeouts configured at the container level are overridden by the Servlet Async timeout configuration. (If you use Servlet Async, ALWAYS specify a valid timeout; never disable it, set it to infinite, or set it to a massively huge value.)
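If you are configuring embedded Jetty in code rather than standalone, a minimal sketch (assuming Jetty 9.4 APIs; the port and timeout values are placeholders) looks like this:
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

Server server = new Server();
HttpConfiguration httpConfig = new HttpConfiguration();
ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
connector.setPort(8080);            // placeholder port
connector.setIdleTimeout(30_000);   // idle timeout between requests, in milliseconds
server.addConnector(connector);
server.start();
server.join();
For standalone Jetty, the same value is set with the jetty.http.idleTimeout start property mentioned above.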
We have a Solace broker running in a Docker container. When we create a JNDI Connection Factory, there are default properties such as:
Reconnect Retry Attempts
Connect Retry Attempts
Connect Retry Attempts per Host
and so on
When we establish a producer using JMS, we give properties like so:
env.put(SupportedProperty.SOLACE_JMS_JNDI_CLIENT_ID, config.getJndiClientID());
env.put(SupportedProperty.SOLACE_JMS_PROP_SENDER_ID, config.getSenderID());
env.put(SupportedProperty.SOLACE_JMS_VPN, config.getVpn());
env.put(SupportedProperty.SOLACE_JMS_JNDI_CONNECT_RETRIES, 0);
env.put(SupportedProperty.SOLACE_JMS_JNDI_RECONNECT_RETRIES, 0);
env.put(SupportedProperty.SOLACE_JMS_JNDI_CONNECT_RETRIES_PER_HOST, 0);
However, at application run time, when the connection is being established, these client-side properties seem to have no effect. Specifically, I was able to test that by stopping the Solace Docker container and observing that the client tries to reconnect 3 times, which happens to be the default on the broker side.
Therefore, the question: how do I force the override of these properties on the client side, if that is possible at all? Under what circumstances does setting these properties on the client side take effect?
Loading a JMS ConnectionFactory over JNDI is, by definition, a two-step process: first the API connects to the JNDI store, then it loads whatever JMS ConnectionFactory object has been defined there.
The property SOLACE_JMS_JNDI_CONNECT_RETRIES (note the JNDI) is actually the parameter for the first step! It defines the number of retries for contacting the JNDI store. If you want to change the definition of the loaded JMS ConnectionFactory itself, you need to do this on the Solace broker as an administrator, for example in the admin GUI.
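A minimal sketch of that two-step process (the provider URL, VPN and JNDI name are placeholders; the initial context factory class is the standard Solace one):
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;
import com.solacesystems.jms.SupportedProperty;

Hashtable<String, Object> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.solacesystems.jndi.SolJNDIInitialContextFactory");
env.put(Context.PROVIDER_URL, "tcp://solace-host:55555");  // placeholder host/port
env.put(SupportedProperty.SOLACE_JMS_VPN, "default");      // placeholder VPN
// Step 1: the JNDI_* retry properties only govern this lookup against the JNDI store.
env.put(SupportedProperty.SOLACE_JMS_JNDI_CONNECT_RETRIES, 0);
InitialContext ctx = new InitialContext(env);
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("/jms/cf"); // placeholder JNDI name
// Step 2: the data connection uses the retry settings from the broker-side
// connection-factory definition, not the JNDI_* properties above.
Connection connection = cf.createConnection();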
When you use env.put(), you are trying to set the JMS property via the initial context environment.
But these properties can also be set through a JNDI properties file, as well as on the command line.
If you turn on API debugging, you should be able to see which value is taken from where.
Once you have connected and retrieved the connection factory from the broker's JNDI store, the (re)connect values are taken from the broker-side definition.
I have an instance of a class A that implements java.rmi.Remote.
In order to check the health of the connection to the RMI Server, I invoke a custom-made, trivial member function of the instance of A and see if an Exception is thrown. That's not really elegant. Therefore my question:
Is there any native way to check if the connection is available for method invocation on the instance of A, i.e. without the need to actually try to call a member function?
A special case is: Should the RMI server be restarted during the lifetime of the instance of A on the client side, then the instance of A becomes invalid and defunct (although the server might be back up and healthy).
From the Java RMI FAQ:
F.1 At what point is there a "live" connection between the client and
the server and how are connections managed?
When a client does a "lookup" operation, a connection is made to the
rmiregistry on the specified host. In general, a new connection may or
may not be created for a remote call. Connections are cached by the
Java RMI transport for future use, so if a connection is free to the
right destination for a remote call, then it is used. A client cannot
explicitly close a connection to a server, since connections are
managed at the Java RMI transport level. Connections will time out if
they are unused for a period of time.
Your questions:
Is there any native way to check if the connection is available for
method invocation on the instance of A, i.e. without the need to
actually try to call a member function?
This boils down to how to check programmatically whether a given server/system is up. That has already been answered several times here and on several other forums; one such question is Checking if server is online from Java code.
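For illustration, the kind of reachability probe those answers describe (host, port and timeout are placeholders; note this only tells you the port accepts connections, not that the remote object is actually exported):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

boolean reachable;
try (Socket socket = new Socket()) {
    socket.connect(new InetSocketAddress("rmi-host", 1099), 2000); // 2-second connect timeout
    reachable = true;
} catch (IOException e) {
    reachable = false;
}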
A special case is: Should the RMI server be restarted during the
lifetime of the instance of A on the client side, then the instance of
A becomes invalid and defunct (although the server might be back up
and healthy).
Then again, the answer is pretty easy. If the instance of the class was busy performing a step within the remote method invocation, a connection-related exception would be thrown instantly.
Again, from the RMI FAQ, D.8 "Why can't I get an immediate notification when a client crashes?":
If a TCP connection is held open between the client and the server
throughout their interaction, then the server can detect the client
reboot (I'm adding here: and vice versa) when a later attempt to write to the connection
fails (including the hourly TCP keepalive packet, if enabled).
However, Java RMI is designed not to require such permanent
connections forever between client and server (or peers), as it impairs
scalability and doesn't help very much.
Given that it is absolutely impossible to instantly determine when a
network peer crashes or becomes otherwise unavailable, you must decide
how your application should behave when a peer stops responding.
The lookup will keep working as long as the server is up and doesn't go down while the client is invoking the remote method. You must decide how your application should behave if the peer restarts. Additionally, there is no such concept as a session in RMI.
I hope this answers all of your questions.
Your question is founded on a fallacy.
Knowing the status in advance doesn't help you in the slightest. The status test is followed by a timing window which is followed by your use of the server. During the timing window, the status can change. The server could be up when you test and down when you use. Or it could be down when you test and up when you use.
The correct way to determine whether any resource is available is to try to use it. This applies to input files, RMI servers, Web systems, ...
Should the RMI server be restarted during the lifetime of the instance of A on the client side, then the instance of A becomes invalid and defunct (although the server might be back up and healthy).
In this case you will get either a java.rmi.ConnectException or a java.rmi.NoSuchObjectException depending on whether the remote object restarted on a different port or the same port.
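A hedged sketch of how the client could react, assuming a hypothetical remote interface MyService (standing in for class A) bound in the registry under the name "myService"; the real call itself serves as the health check, and a stale stub is simply replaced:
import java.rmi.ConnectException;
import java.rmi.NoSuchObjectException;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Hypothetical remote interface standing in for class A from the question.
interface MyService extends Remote {
    void ping() throws RemoteException;
}

MyService callWithRefresh(MyService stub) throws Exception {
    try {
        stub.ping();                 // the real invocation doubles as the health check
        return stub;
    } catch (ConnectException | NoSuchObjectException e) {
        // Server restarted: the old stub is stale, so look the object up again.
        Registry registry = LocateRegistry.getRegistry("rmi-host", 1099); // placeholder host/port
        MyService fresh = (MyService) registry.lookup("myService");       // placeholder binding name
        fresh.ping();
        return fresh;
    }
}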
I have a Java application which does some JMS send & receive work, but I found an interesting problem. For example, I set the following for java.naming.provider.url:
tcp://hostnameA.foo.bar:7222
But I got the error below. It contains only the hostname, not the fully qualified domain name:
javax.jms.JMSException: Failed to connect to the server at tcp://hostnameA:7222
Unless I add hostnameA to my hosts file manually, it won't connect to the TIBCO server.
How can I fix it?
The EMS server has its own built-in JNDI server. What you're actually doing when you connect is 1) querying the EMS's JNDI server for a connection factory definition and then 2) creating a connection based on the returned factory. This is implied by the fact that you're using java.naming.provider.url.
Change the connection factory definition (factories.conf) on the EMS server for the connection factory you're using. The default definition for the default factories (e.g. QueueConnectionFactory) on a fresh install is "tcp://7222" which will be replaced by "tcp://hostname:7222" by the server when retrieved. You can change this definition to e.g. "tcp://hostname.myfqdn.com:7222" and things should work.
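For illustration, the relevant entry in factories.conf might look roughly like this (the factory name and FQDN are placeholders; check your own file for the exact layout):
[QueueConnectionFactory]
  type = queue
  url  = tcp://hostname.myfqdn.com:7222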
You could also bypass the JNDI server completely by creating a connection directly, but I wouldn't recommend this since the connection factory definition returned by the server may contain information about load balanced and fault tolerant pairs, SSL settings, or point to a completely different server pair etc. It also allows the EMS administrators to change the definition of connection factories without the clients having to change their code or even their configuration.
I guess this has nothing to do with the programming layer.
Your DNS query for that name is unresolvable; that's why it works when you edit your hosts file.
Either check your system's DNS settings (make sure the DNS server in your system's configuration can resolve the name), or use the IP address instead.