So HTTP/1.1 and above support persistent connections.
Now, we are creating a REST application which should be stateless, and we are limiting the number of connections open at a time.
But if I go through the HTTP/1.1 documentation, this approach seems problematic.
It says the server will keep the connection open unless the client asks to close it.
So my question is: what if the client never closes it? If a connection stays active forever, that could effectively become a denial of service.
What is the default timeout with Jetty, and how can I configure it? I am not able to find appropriate documentation.
The HttpConfiguration has a configuration setting setIdleTimeout(long ms)
https://www.eclipse.org/jetty/javadoc/jetty-9/org/eclipse/jetty/server/HttpConfiguration.html#setIdleTimeout(long)
That controls the idle timeout between requests.
The HttpConfiguration object is part of the ServerConnector that controls the listening port and accepts connections.
The idle timeout default is -1 in code (infinite).
But it's 30,000ms in jetty-home (and the older jetty-distribution).
Use the jetty-start property jetty.http.idleTimeout to configure it for your specific jetty-base instance/configuration if you are using standalone Jetty.
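If you are running embedded Jetty instead, a minimal sketch (assuming a recent Jetty 9.4.x embedded API; the port and timeout values here are only examples) might look like this:

import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class IdleTimeoutServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        HttpConfiguration httpConfig = new HttpConfiguration();
        httpConfig.setIdleTimeout(30_000); // idle time allowed between requests on a persistent connection

        ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(httpConfig));
        connector.setPort(8080);
        connector.setIdleTimeout(30_000); // connector-level idle timeout for accepted sockets

        server.addConnector(connector);
        server.start();
        server.join();
    }
}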
Note: if you use the Servlet Async features, the idle timeouts configured at the container level are overridden by the Servlet Async timeout configuration. (If you use Servlet Async, ALWAYS specify a valid timeout; never disable it, set it to infinite, or set it to a massively huge value.)
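For example, with the Servlet Async API (a hedged sketch; the servlet name and URL mapping are made up):

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/slow", asyncSupported = true) // hypothetical mapping
public class SlowServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext async = req.startAsync();
        async.setTimeout(30_000); // always set an explicit, finite async timeout
        async.start(() -> {
            // ... produce the response here ...
            async.complete();
        });
    }
}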
Related
We are running a setup in production where gRPC clients talk to servers via a proxy in between (image attached).
The client is written in Java and the server is written in Go. We are using the round_robin load balancing policy in the client. Despite this, we have observed some bizarre behaviour. When our proxy servers scale in, i.e. reduce from, say, 4 to 3, the resolver kicks in and the request load from our clients gets distributed equally across all of our proxies. But when the proxy servers scale out, i.e. increase from 4 to 8, the new proxy servers don't get any requests from the clients, which leads to a skewed distribution of request load across our proxy servers. Is there any configuration we can apply to avoid this?
We tried setting the networkaddress.cache.ttl property to 60 seconds in the JVM args, but even this didn't help.
You need to cycle the sticky gRPC connections using the keepalive and keepalive timeout configuration in the gRPC client.
Please have a look at this - gRPC connection cycling
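A hedged sketch of those client-side settings, assuming grpc-java's ManagedChannelBuilder (the target URI and all timeout values are placeholders to be tuned for your setup):

import java.util.concurrent.TimeUnit;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ChannelFactory {
    public static ManagedChannel create() {
        return ManagedChannelBuilder
                .forTarget("dns:///proxy.example.internal:443") // placeholder target
                .defaultLoadBalancingPolicy("round_robin")
                .keepAliveTime(30, TimeUnit.SECONDS)    // ping idle connections to detect dead ones
                .keepAliveTimeout(10, TimeUnit.SECONDS) // drop the connection if the ping isn't acked
                .idleTimeout(60, TimeUnit.SECONDS)      // let an unused channel go idle and re-resolve on next use
                .build();
    }
}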
Both round_robin and pick_first perform name resolution only once. They are intended for thin, user-facing clients (Android, desktop) that have a relatively short lifetime, so sticking to a particular (set of) backend connection(s) is not a problem there.
If your client is a server app, then you should rather be using grpclb or the newer xDS: they automatically re-resolve available backends when needed. To enable them you need to add a runtime dependency on grpc-grpclb or grpc-xds, respectively, to your client.
grpclb does not need any additional configuration or setup, but has limited functionality. Each client process will have its own load-balancer+resolver instance. Backends are obtained via repeated DNS resolution by default.
xDS requires an external envoy instance/service from which it obtains available backends.
I have a bean dedicated to exporting a CSV, and due to the large amount of data it takes around 15 minutes or more, so I get a timeout.
In order to solve this I thought about adding some kind of configuration only for this bean, as I mustn't increase the Glassfish timeouts. So on my local machine I changed the Glassfish timeout configuration (Request Timeout on the Network Listeners and Idle Timeout on the Thread Pools) to run some tests without waiting those 15 minutes, and just by adding a @Transactional annotation it worked; but when I used a preproduction machine it didn't.
Both are Glassfish 4.1.1, but in preproduction I also use Nginx. However, trying through port 8181 (HTTPS, which my Nginx doesn't handle) it didn't work either.
So I've been trying different solutions:
@Transactional (because the transaction timeout is 0 in Glassfish)
Using an EJB
@StatefulTimeout
@TransactionManagement(TransactionManagementType.CONTAINER) and @TransactionAttribute(TransactionAttributeType.REQUIRED)
None of this worked in preproduction. Can someone guide me a bit?
Edit:
This is my Nginx configuration.
server {
    listen 80;
    client_max_body_size 900M;
    server_name *my config*;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_buffering off;
        proxy_ignore_client_abort off;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:8080;
    }
}
The thing is, even with the Glassfish timeouts set to 1 minute as in my local tests, in preproduction I can't avoid that timeout and the connection closes after that minute.
Long transactions are in general not a good idea for an online application. Taking that into consideration, it probably makes sense to limit the connection timeout to 1 minute.
What you can do is perform the actual CSV export asynchronously in the background. By annotating a business method of a stateless session bean with @Asynchronous, the method is executed in a separate thread, while the original request completes immediately. The transaction timeout of the @Asynchronous-annotated method needs to be increased, of course, but you may have already done that. Since the initial request ends immediately, the timeout isn't an issue anymore, and the export is performed in the background on the server, without a connection to the client.
Note: business methods annotated with @Asynchronous are by default started in a new transaction.
The remaining problem is how to properly report the result of the CSV export back to the user, if that is needed. After the initial request, the user can only be told that the CSV export job has been triggered, not whether it finished successfully. One possible solution is that the long-running export method returns a Future<ExportJobResult>, which is registered in a @Singleton keyed by a job id. The generated job id (e.g. a UUID) is returned to the client with the initial response. The client can then poll, say every 10 seconds, for the status of the Future object (checking isDone()). A more elaborate solution would be a complete job management framework that persists job status.
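A rough sketch of that pattern (all names are hypothetical, the two beans would live in separate files, and error handling plus cleanup of finished jobs are omitted):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Singleton;
import javax.ejb.Stateless;

@Stateless
public class CsvExportService {

    // Runs in a container-managed thread; the caller gets a Future immediately.
    @Asynchronous
    public Future<String> exportCsv() {
        String csvPath = runLongExport(); // the ~15 minute work, now off the request thread
        return new AsyncResult<>(csvPath);
    }

    private String runLongExport() {
        // placeholder for the actual CSV generation
        return "/tmp/export.csv";
    }
}

@Singleton
public class ExportJobRegistry {

    @EJB
    private CsvExportService exportService;

    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the web tier; returns a job id the client can poll with.
    public String startExport() {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, exportService.exportCsv());
        return jobId;
    }

    public boolean isDone(String jobId) {
        Future<String> job = jobs.get(jobId);
        return job != null && job.isDone();
    }
}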
I have a question about the scenario of connecting from a Java application, using the Microsoft JDBC Driver 4.0, to a SQL Server 2014 instance with AlwaysOn Availability Groups set up for high availability.
With this setup, we will be connecting to an availability group listener (specified in the DB connection string instead of any particular instance), so that DB failover etc. is handled gracefully by the listener, which tries to connect to the next available instance behind the scenes if the current primary in the AG cluster goes down.
The questions I have are:
In the data source configured on the J2EE application server side (we use WebSphere), what happens to the connections already pooled by the data source?
When a database goes down, the AG listener will try to reconnect to the next available DB on the database side. But will the AG listener, through the JDBC driver, also send an event or something to the data source created on the app server, so that the connections already pooled by the data source are discarded and new ones created, and transactions on the application side won't fail (though they might for a while, until new connections are created and the failover is successful)? Or does the Java application only find out after requesting a connection from the data source?
WebSphere Application Server is able to cope with bad connections and removes them from the pool. Exactly when this happens depends on some configurable options and on how fully the Microsoft JDBC driver takes advantage of the javax.sql.ConnectionEventListener API to send notifications to the application server. In the ideal case, where a JDBC driver sends the connectionErrorOccurred event immediately for all connections, WebSphere Application Server responds by removing all of these connections from the pool and by marking any connection that is currently in use as bad, so that it does not get returned to the pool once the application closes the handle. Lacking this, WebSphere Application Server will discover the first bad connection upon next use by the application. It is discovered either by a connectionErrorOccurred event sent by the JDBC driver at that time, or, lacking that, by inspecting the SQLState/error code of an exception for known indicators of bad connections. WebSphere Application Server then purges bad connections from the pool according to the configured Purge Policy. There are 3 options:
Purge Policy of Entire Pool - all connections are removed from the pool and in-use connections are marked as bad so that they are not pooled.
Purge Policy of Failing Connection Only - only the specific connection upon which the error actually occurred is removed from the pool, or marked as bad and not returned to the pool.
Purge Policy of Validate All Connections - all connections are tested for validity (Connection.isValid API) and connections found to be bad are removed from the pool, or marked as bad and not returned to the pool. Connections found to be valid remain in the pool and continue to be used.
I'm not sure from your description whether you are using WebSphere Application Server traditional or Liberty. If traditional, there is an additional option to pre-test connections as they are handed out of the pool, but be aware that turning this on can have performance implications.
That said, the one thing to be aware of is that, regardless of any of the above, your application will always need to be capable of handling errors due to bad connections (even if the connection pool is cleared, connections can go bad while in use) and respond by requesting a new connection and retrying the operation in a new transaction.
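Purely as an illustration of that last point, a simplified retry sketch (the JNDI name is made up; with container-managed transactions the retry would normally run in a new transaction, e.g. via a REQUIRES_NEW method, and you would check that the failure really indicates a stale connection before retrying):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class RetryOnStaleConnection {

    public void updateWithRetry(String sql, String value) throws SQLException, NamingException {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource"); // hypothetical JNDI name
        try {
            execute(ds, sql, value);
        } catch (SQLException firstAttemptFailed) {
            // By now the bad connection(s) should have been purged from the pool,
            // so the second attempt gets a freshly created connection.
            execute(ds, sql, value);
        }
    }

    private void execute(DataSource ds, String sql, String value) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, value);
            ps.executeUpdate();
        }
    }
}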
Version 4 of that SQL Server JDBC driver is old and doesn't know anything about the Always On feature.
Any data source connection pool can be configured to check the status of a connection before handing it out to the client. If the connection cannot be used, the pool will create a new one. That's true of all vendors and versions. I believe that's the best you can do.
After spending a few hours reading the HttpClient documentation and source code, I have decided that I should definitely ask for help here.
I have a load balancer using a round-robin algorithm, somewhat like this:
+---> RESTServer1
client --> load balancer +---> RESTServer2
+---> RESTServer3
Our client is using HttpClient to direct requests to our load balancer server, which in turn round-robins the requests to the corresponding RESTServer.
Now, Apache HttpClient creates, by default, a pool of connections (2 per route by default). These connections are persistent connections by default, since I am using HTTP/1.1 and my servers are emitting Connection: Keep-Alive headers.
So the problem is that since HttpClient creates these persistent connections, those connections are no longer subject to the round-robin algorithm at the balancer level. They always hit the same server every time.
This creates two problems:
I can see that sometimes one or more of the balanced servers are overloaded with traffic, whereas one or more of the other servers are idle; and
even if I take one of my REST servers out of the balancer, it still receives requests while the persistent connections are alive.
This is definitely not the intended behavior.
I suppose I could force a Connection: close header in my responses, or I could run HttpClient without a connection pool or with a NoConnectionReuseStrategy. But the HttpClient documentation states that the idea behind using a pool is to improve performance by avoiding having to open a socket and do the TCP handshake and related work for every request. So I have to conclude that using a connection pool is beneficial to the performance of my applications.
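For reference, disabling reuse would look roughly like this (a sketch, assuming HttpClient 4.x):

import org.apache.http.impl.NoConnectionReuseStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoReuseClient {
    public static CloseableHttpClient create() {
        // Every request opens a fresh socket, so the balancer sees each one as a new connection.
        return HttpClients.custom()
                .setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE)
                .build();
    }
}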
So my question is: is there a way to use persistent connections with a load balancer in this scenario, or am I forced to use non-persistent connections?
I want the performance that comes with reusing connections, but I want them properly load-balanced. Any thoughts on how I can configure this scenario with Apache HttpClient, if it is possible at all?
Your question is perhaps more related to your load balancer configuration and the style of load balancing. There are several ways:
HTTP Redirection
LB acts as a reverse proxy
Pure packet forwarding
In scenarios 1 and 3 you do not have a chance with persistent connections. If your load balancer acts as a reverse proxy, there might be a way to achieve persistent connections with balancing. "Dumb" balancers, such as those for SMTP or LDAP, select the target per TCP connection, not on a per-request basis.
For example the Apache HTTPd server with the balancer module (see http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html) can dispatch every request (even on persistent connections) to a different server.
Also check that you are not receiving a balancer cookie that is session-persistent, so that the cause is not the persistent connection but a balancer cookie.
HTH, Mark
+1 to @mp911de's answer
One can also make scenarios 1 and 3 work reasonably well by limiting the total time to live of persistent connections to some short period of time, say 15 seconds. This way connections live long enough to be re-used during periods of activity and short enough to go away during periods of relative inactivity.
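A sketch of that approach, assuming Apache HttpClient 4.4+ (the 15-second value is just an example):

import java.util.concurrent.TimeUnit;

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class ShortLivedPoolClient {
    public static CloseableHttpClient create() {
        // Pooled connections are still reused, but never for longer than 15 seconds,
        // so over time requests get spread across the balanced servers again.
        return HttpClients.custom()
                .setConnectionTimeToLive(15, TimeUnit.SECONDS)
                .build();
    }
}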
I'm getting the reference to a remote EJB instance without any kind of problem but, sometimes, when I invoke one of its methods, a "java.net.SocketTimeoutException: Read timed out" is thrown on the client side. There seem to be no problems on the server side.
Is there a way to set EJB client timeout on a per-invocation basis?
I'm using a pretty old JBoss version (4.2.1 GA)
Regards
You can configure the InvokerLocator attribute of the Connector MBean:
<attribute name="InvokerLocator">socket://${jboss.bind.address}:3873/?socketTimeout=60000</attribute>
You can provide finer-grained details in the config elements under the Configuration attribute. By default it's one minute:
<attribute name="socketTimeout">60000</attribute>
You can also provide timeout parameters in the JNDI properties file:
jnp.timeout: The connection timeout in milliseconds. The default value is 0, which means the connection will block until the VM TCP/IP layer times out.
jnp.sotimeout: The connected socket read timeout in milliseconds. The default value is 0, which means reads will block. This is the value passed to Socket.setSoTimeout on the newly connected socket.
To manually configure the timeout for individual invocations, you have to create the InitialContext with the appropriate property values.
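For example, something along these lines (host, port and timeout values are placeholders; the JNP properties are the ones listed above):

import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class TimedLookup {
    public static Context createContext() throws NamingException {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");
        props.put(Context.PROVIDER_URL, "jnp://localhost:1099"); // placeholder host/port
        props.put("jnp.timeout", "5000");    // connection timeout in ms
        props.put("jnp.sotimeout", "60000"); // socket read timeout in ms
        return new InitialContext(props);
        // Look up the remote EJB from this context, e.g.:
        // MyRemote bean = (MyRemote) ctx.lookup("MyBean/remote"); // hypothetical JNDI name
    }
}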