I have a web app deployed on a gateway server, which is essentially a reverse proxy for an upstream server. The gateway receives HTTP requests from clients and also from the upstream server, which keeps sending requests periodically. Client requests are serialized and written into the response to the upstream server. The gateway is Apache Tomcat, which has 200 threads by default, and an HTTP session is created at the gateway for the upstream server. If clients send 200 requests, all of the gateway's threads are occupied and the upstream server's requests get rejected. The session then times out and the client receives an error response: HTTP 503.
I do not wish to change the default server thread pool size.
I can either reject client requests once they reach some specified maximum count, or I can use two different thread pools by creating two different ports (connectors, in Tomcat's case). Is there anything else I can do?
Thanks in advance.
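For reference, the two-connector approach mentioned in the question can be sketched in Tomcat's server.xml: each connector is bound to its own Executor, so client traffic cannot starve the upstream port. Port numbers and pool sizes below are illustrative, not recommendations:

```xml
<!-- server.xml sketch: one thread pool per connector (values illustrative) -->
<Executor name="clientPool" namePrefix="client-" maxThreads="150" minSpareThreads="10"/>
<Executor name="upstreamPool" namePrefix="upstream-" maxThreads="50" minSpareThreads="5"/>

<!-- Clients connect here -->
<Connector port="8080" protocol="HTTP/1.1" executor="clientPool" connectionTimeout="20000"/>

<!-- The upstream server connects here -->
<Connector port="8081" protocol="HTTP/1.1" executor="upstreamPool" connectionTimeout="20000"/>
```

This keeps the default pool size untouched overall while guaranteeing the upstream server a reserved slice of threads.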
There is a system that allows us to establish at most 100 HTTP connections.
We have a Spring Boot app that connects to that system using a connection pool with a limited size; as requests come in, if there are no connections free in the pool, the request waits in a queue for an available connection.
The app is deployed to Kubernetes, which scales it and round-robins requests across the pods.
When load comes in, some pods have available connections and others don't. I was wondering: is there a way to configure a custom load balancer on Kubernetes to forward requests to the pods with the highest available connection count, rather than round-robin?
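The pool-plus-queue behavior described above can be sketched with a plain `java.util.concurrent.Semaphore` standing in for the real connection pool. The class name, the limit, and the timings are illustrative:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedConnections {
    private final Semaphore permits;                      // free connections
    private final AtomicInteger inFlight = new AtomicInteger();
    private final AtomicInteger maxInFlight = new AtomicInteger();

    public BoundedConnections(int limit) {
        permits = new Semaphore(limit, true);             // fair = FIFO queue of waiters
    }

    // Blocks (queues) until a connection is free, then runs the request.
    public void withConnection(Runnable request) throws InterruptedException {
        permits.acquire();                                // waits here when pool is exhausted
        try {
            maxInFlight.accumulateAndGet(inFlight.incrementAndGet(), Math::max);
            request.run();
        } finally {
            inFlight.decrementAndGet();
            permits.release();
        }
    }

    // Fires `requests` concurrent requests against a pool of size `limit`
    // and reports the highest number that ever ran at once.
    public static int demo(int limit, int requests) {
        BoundedConnections pool = new BoundedConnections(limit);
        Thread[] workers = new Thread[requests];
        for (int i = 0; i < requests; i++) {
            workers[i] = new Thread(() -> {
                try {
                    pool.withConnection(() -> {
                        try { Thread.sleep(20); } catch (InterruptedException ignored) {}
                    });
                } catch (InterruptedException ignored) {}
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException ignored) {}
        }
        return pool.maxInFlight.get();
    }
}
```

However many requests arrive, the in-flight count never exceeds the limit; the excess simply waits, which is exactly the queueing the question describes.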
The HTTP protocol has a feature called HTTP keep-alive, or HTTP connection reuse, which uses a single TCP connection to send and receive multiple HTTP requests and responses. See the doc on load balancing long-lived connections in Kubernetes.
Here are a few examples of how to implement keep-alive in different languages:
Keep-alive in Spring Boot
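As a minimal illustration of connection reuse on the JVM: a single `java.net.http.HttpClient` (Java 11+) pools connections, so consecutive requests to the same host ride the same TCP connection by default (HTTP/1.1 keep-alive). The loopback server below exists only to make the sketch self-contained; it is not part of the original question:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveDemo {
    // Sends two requests through ONE HttpClient instance. The client pools
    // connections, so the second request reuses the TCP connection opened
    // for the first instead of paying the connection-setup cost again.
    public static String demo() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
            server.createContext("/", exchange -> {
                byte[] body = "pong".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            try {
                HttpClient client = HttpClient.newHttpClient(); // one shared client = one pool
                HttpRequest req = HttpRequest.newBuilder(URI.create(
                        "http://127.0.0.1:" + server.getAddress().getPort() + "/")).build();
                String first = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                String second = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                return first + second;
            } finally {
                server.stop(0);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

In a Spring Boot app the same effect comes from sharing one underlying HTTP client (e.g. one `RestTemplate`/`WebClient` instance) rather than creating a new one per request.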
Good afternoon everyone,
I'm having some issues with my HTTP server. I've made my own HTTP server (a lightweight one, due to some specific circumstances and needs) that I want to integrate into some software I have. This HTTP API also supports HTTPS, but my main issue is actually with HTTP.
One issue I'm facing is receiving HTTPS connections on the HTTP server. With HTTPS on the server and a plain-HTTP client, the connection gets denied, as the handshake fails and throws an exception on the server. The problem with using the HTTP server with an HTTPS client is that the connection keeps running, but the message is encrypted. As it's encrypted, I can't read the information and get details like the Content-Length, so the server waits for an end that will never come, because it can't read the data correctly.
I was wondering if there's a way in Java to detect that the client is sending encrypted data, so I can deny these connections instead of trying to read them. The main issue with these sockets is that they aren't detected as SSLSockets; they are normal sockets that can't decrypt the information in the InputStream.
Thank you in advance.
Are you aware that HTTP and HTTPS are usually served on different port numbers? 80 is for HTTP and 443 for HTTPS; for non-privileged ports, 8000 and 8443 are often used. A client that connects using TLS on an HTTP-only port is faulty, and your HTTP server should easily detect non-HTTP traffic:
If the first word received isn't one of the HTTP verbs supported by your server (GET, HEAD, POST, PUT, OPTIONS, etc.), your server should send a 400 or 408 response (408 is Request Timeout; your server should only wait a reasonable amount of time for the request header) and then close the connection.
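Building on that: the server can classify the connection from its first few bytes before parsing anything. A TLS ClientHello begins with record type 0x16 (handshake) followed by a 0x03 version byte, while an HTTP request begins with an ASCII method token. A sketch, where the class name and method list are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Set;

public class TrafficSniffer {
    // A TLS ClientHello starts with record type 0x16 (handshake) and a
    // 0x03 major-version byte (SSL3/TLS 1.x all use 0x03 here).
    public static boolean looksLikeTls(byte[] firstBytes) {
        return firstBytes.length >= 3
                && (firstBytes[0] & 0xFF) == 0x16
                && (firstBytes[1] & 0xFF) == 0x03;
    }

    private static final Set<String> METHODS =
            Set.of("GET", "HEAD", "POST", "PUT", "DELETE", "OPTIONS", "PATCH");

    // An HTTP request line starts with a known method followed by a space.
    public static boolean looksLikeHttp(byte[] firstBytes) {
        String head = new String(firstBytes, StandardCharsets.US_ASCII);
        int sp = head.indexOf(' ');
        return sp > 0 && METHODS.contains(head.substring(0, sp));
    }
}
```

In practice you would read these bytes through a `java.io.PushbackInputStream` so that a legitimate HTTP request can be pushed back and parsed normally afterwards; a connection that looks like TLS can be closed immediately instead of waiting for a Content-Length that never arrives.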
I've configured my Jetty server with a bounded thread pool and a bounded queue size. What is the behavior when this limit is actually hit? I.e., a user submits an HTTP request to the server, and Jetty is unable to get a thread or queue slot to fulfill the request.
I would have expected the server to respond to the client with a 500 error of some form. But from my local testing, it appears that the client doesn't get any response at all.
What does Jetty do in this case? Is there any way for me to configure Jetty to send a 500 response to the user when this occurs?
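For context, a bounded pool and queue of the kind the question describes would be wired up roughly like this in Jetty's XML configuration. Treat this as a sketch only: argument matching and class names vary across Jetty versions, and the sizes are illustrative:

```xml
<!-- jetty.xml sketch: bounded QueuedThreadPool with a bounded job queue
     (illustrative values; verify against your Jetty version's docs) -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <Arg name="threadpool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Arg name="maxThreads" type="int">200</Arg>
      <Arg name="minThreads" type="int">8</Arg>
      <Arg name="idleTimeout" type="int">60000</Arg>
      <Arg name="queue">
        <!-- bounded queue: jobs beyond this capacity are rejected -->
        <New class="org.eclipse.jetty.util.BlockingArrayQueue">
          <Arg type="int">128</Arg>
        </New>
      </Arg>
    </New>
  </Arg>
</Configure>
```

Note that a rejected job happens below the HTTP layer, which is consistent with the observation that the client sees no response at all rather than a 500.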
I am doing a project on a restaurant ordering system. Orders are placed by different clients and processed, and each order can be sent to a different system, e.g.:
kitchen1
kitchen2, and
kitchen3
First I create the different clients using RMI and threading, and all the clients send their data to my server. I then want to send that data on to different clients.
How can I do that? I have created the different clients and they send their data to the server; please suggest how I can do the rest.
RMI is a synchronous (request/response) protocol - the client sends a request to the server, which it can respond to.
The server cannot arbitrarily send more data to the client.
The simplest way to do asynchronous communication in Java is via JMS, using a message broker like ActiveMQ.
The process would go something like this:
the server starts and connects to its incoming request queue.
client 1 creates a temporary queue and registers with the server via the request queue passing the name of its temporary queue.
the server stores the client and the name of its temporary queue.
client 2 does the same, and the server stores that client and the name of its temporary queue.
client 1 sends a message to the server, causing the server to send a message to client 2, which it does via the temporary queue that client 2 registered with the server.
client 2 responds to the server, causing the server to send a message to client 1, which it does via the temporary queue that client 1 registered with the server.
This can go on until one or both clients shut down, at which point their temporary queues are closed and the server can no longer send messages to those clients (though it's best if a client de-registers itself).
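The message flow above can be sketched without a broker by letting `java.util.concurrent` queues stand in for the JMS queues. The class and method names here are purely illustrative; a real implementation would use `javax.jms` with `Session.createTemporaryQueue()` against ActiveMQ:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class BrokerSketch {
    // Stand-ins for JMS destinations: the shared request queue plus one
    // "temporary queue" per registered client.
    private final BlockingQueue<String[]> requestQueue = new ArrayBlockingQueue<>(16);
    private final Map<String, BlockingQueue<String>> tempQueues = new ConcurrentHashMap<>();

    // A client registers, passing the name of its temporary queue.
    public void register(String clientId) {
        tempQueues.put(clientId, new ArrayBlockingQueue<>(16));
    }

    // A client sends a message addressed to another client via the server.
    public void send(String from, String to, String body) {
        requestQueue.offer(new String[]{from, to, body});
    }

    // One server loop step: take a request off the shared queue and forward
    // it to the destination client's temporary queue.
    public void serverDispatchOne() {
        String[] msg = requestQueue.poll();
        if (msg != null) {
            tempQueues.get(msg[1]).offer(msg[0] + ": " + msg[2]);
        }
    }

    // A client reads from its own temporary queue (null if nothing waiting).
    public String receive(String clientId) {
        return tempQueues.get(clientId).poll();
    }
}
```

The key design point carries over directly: clients never talk to each other, only to the server's queue, and the server relays messages through each client's own temporary queue, so "kitchen1" can receive data even though RMI alone could not push it.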
We have a Java web service with document style over the HTTP protocol. Locally, this service works smoothly and fast (~6 ms), but calling the service methods remotely takes over 200 ms.
One main reason for this delay is that:
1. the server first sends the response HTTP header,
2. the client sends an ACK in return, and
3. only then does the server send the response HTTP body.
The second step, where the client sends the ACK, costs the most time: almost the whole 200 ms. I would like to avoid this step and save that time.
So that's my question: is it possible to send the whole response in one packet? And how and where do I configure that?
Thanks for any advice.
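What can be controlled from Java is whether the header and body leave the application in separate write() calls. Buffering both and flushing once hands the OS a single block, and `setTcpNoDelay(true)` disables Nagle's algorithm, so small writes are not held back waiting for ACKs. Whether the bytes then actually travel in one TCP segment is up to the TCP stack. A self-contained sketch over a loopback socket (the header and body content are illustrative):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SingleWriteResponse {
    public static String demo() {
        try (ServerSocket listener = new ServerSocket(0)) {
            Thread server = new Thread(() -> {
                try (Socket s = listener.accept()) {
                    s.setTcpNoDelay(true);                       // disable Nagle's algorithm
                    OutputStream out = new BufferedOutputStream(s.getOutputStream());
                    String body = "hello";
                    String header = "HTTP/1.1 200 OK\r\nContent-Length: "
                            + body.length() + "\r\n\r\n";
                    out.write(header.getBytes(StandardCharsets.US_ASCII));
                    out.write(body.getBytes(StandardCharsets.US_ASCII));
                    out.flush();                                 // header + body leave together
                } catch (Exception ignored) {}
            });
            server.start();
            try (Socket client = new Socket("127.0.0.1", listener.getLocalPort())) {
                ByteArrayOutputStream received = new ByteArrayOutputStream();
                InputStream in = client.getInputStream();
                byte[] buf = new byte[512];
                int n;
                while ((n = in.read(buf)) != -1) {
                    received.write(buf, 0, n);
                }
                server.join();
                return received.toString(StandardCharsets.US_ASCII);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Whether your SOAP stack exposes this depends on the container; the equivalent knobs are usually the response buffer size and the connector's TCP options.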
I'm not fully understanding the question.
Why is the server sending the first message? Shouldn't the client be requesting for a web service via HTTP initially?
From what I understand, SOAP requests are wrapped within an HTTP message. HTTP messages assume a TCP connection and require a response. This suggests that a client must respond when the server sends an HTTP header.
Basically, whatever one end sends to the other, the other end must acknowledge. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time between local and remote requests is simply the routing that happens in a real network versus on your local machine, not the number of steps taken in your SOAP request and response.