Jetty Server behavior when ThreadPool can't accept new requests

I've configured my Jetty server with a bounded ThreadPool and a bounded queue size. What is the behavior when this limit is actually hit? I.e., a user submits an HTTP request to the server, and Jetty is unable to get a thread or queue slot to fulfill the request.
I would have expected the server to respond to the client with a 500 error of some form. But from my local testing, it appears that the client doesn't get any response at all.
What does Jetty do in this case? Is there any way for me to configure Jetty to send a 500 response to the user when this occurs?
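For reference, here is a minimal sketch of the kind of setup I mean, assuming Jetty 9.x with a QueuedThreadPool over a bounded queue (the sizes are just for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class BoundedJettyServer {
    public static void main(String[] args) throws Exception {
        // Bounded job queue: once it is full, the pool cannot accept more work.
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(50);

        // max 20 threads, min 4, 60 s idle timeout, bounded queue
        QueuedThreadPool threadPool = new QueuedThreadPool(20, 4, 60_000, queue);

        Server server = new Server(threadPool);
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        server.start();
        server.join();
    }
}
```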

Related

WSO2 API Manager Connection Error, "Error occurred while sending the HEAD request to the given endpoint url"

I'm trying to configure the WSO2 API Manager (version 4.0.0).
When I try to create a REST API and point it to the endpoints, I'm getting a connection error message for the given endpoints. I have hosted the API Manager and the backend services on the same server (the backend services are running in a Tomcat instance on the same server, on port 8080).
The API Manager log produces the following message:
ERROR {org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl} - Error occurred while sending the HEAD request to the given endpoint url: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 4000 ms
I would really like to know what has caused the issue.
P.S.: I can access the backend services directly without any connection issues using a REST client.
It's difficult to answer the question without knowing the exact details of your deployment and the backend, but let me try. Here is what I think is happening. As you can see, the error is a connection timeout: "The host did not accept the connection within timeout of 4000 ms".
Let me explain what happens when you click the Check Endpoint Status button. The browser does not send a request directly to the backend to validate it; the backend URL is passed to the APIM server, and the server performs the validation by sending an HTTP HEAD request to the backend (BE) service.
So there can be two causes. The first is that your backend may not know how to handle a HEAD request, which prevents it from accepting the request. But given that the error indicates a network issue, I doubt the request even reached the backend.
The second is that your backend is not accessible from the place where API Manager is running. Suppose you run API Manager on Server A and access it via browser from Server B (your local machine): even though you can reach the backend from Server B, it may not be reachable from Server A. When I say the backend is not accessible from the API Manager server, I mean it's not accessible with the same URL that was used in API Manager. It doesn't really matter that everything runs on the same server if you are using a DNS name other than localhost to access it. So go to the server API Manager is running on, send a request using the same URL that was used in API Manager, and see whether it's accessible from there.
First, try doing a curl request after logging into the server where APIM is running (not from your local machine); due to firewall rules within the server, the hostname given in the URL may not be accessible. Try sending a HEAD request as well. You might get some idea of why this is happening.
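For example, run `curl -I <the exact URL configured in APIM>` from that host. The equivalent check in plain Java, mirroring the HEAD request and the 4000 ms connect timeout from the error (the URL below is a placeholder), would look like this:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeadCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder: use the exact endpoint URL that was entered in API Manager.
        URL url = new URL("http://localhost:8080/myservice/resource");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD");  // same method APIM uses for validation
        conn.setConnectTimeout(4000);   // mirror APIM's 4000 ms connect timeout
        conn.setReadTimeout(4000);

        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```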

Sending many asynchronous requests to an HTTP server, but it only serves a few requests at a time

I'm facing this issue:
I have an embedded Grizzly HTTP server running. When I send 200 asynchronous requests to the server (using an ExecutorService in Java), I thought it would serve all these requests at once, but I realized the server only serves 8 requests at a time, and no error is thrown. Please give me an explanation for this. Am I misunderstanding anything?
Are you sure that all requests have arrived at the server? Do you release resources after each request is processed? If you send multiple requests at the same time and exceed the program's limit, the excess requests will wait. Have you checked all of these?
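One common cause worth ruling out first, sketched here under the assumption that your client uses a fixed-size pool: the concurrency may be capped on the client side rather than by Grizzly. A pool like the one below never has more than 8 requests in flight, no matter how many tasks you submit (the endpoint URL is a placeholder):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadClient {
    public static void main(String[] args) {
        // A fixed pool of 8 caps in-flight requests at 8; the remaining
        // submitted tasks simply wait in the pool's internal queue.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 200; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL("http://localhost:8080/test").openConnection();
                    conn.getResponseCode(); // block until the server answers
                    conn.disconnect();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}
```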

Manage distribution of container threads to avoid session timeout

I have a web app deployed on a gateway server, which is actually a reverse proxy for an upstream server. The gateway receives HTTP requests from clients and also from the upstream server; the upstream server keeps sending requests periodically. The client's requests are serialized and written into the response to the upstream server. The gateway server is Apache Tomcat and has 200 threads by default. An HTTP session is created for the upstream server at the gateway. If the client sends 200 requests, all threads of the gateway are occupied and the upstream server's requests get rejected. The session times out and the client receives an error response - HTTP 503.
I do not wish to change the default server thread pool size.
I can either reject client requests after they reach some specified max count, or I can use two different thread pools by creating two different ports (connectors, in the case of Tomcat). Is there anything else I can do?
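For illustration, the two-connector option I'm considering would look roughly like this in server.xml (a sketch only; names, ports, and pool sizes are placeholders):

```xml
<!-- Hypothetical server.xml fragment: a separate executor per connector,
     so upstream traffic is never starved by client traffic. -->
<Executor name="clientPool"   namePrefix="client-"   maxThreads="150" minSpareThreads="10"/>
<Executor name="upstreamPool" namePrefix="upstream-" maxThreads="50"  minSpareThreads="5"/>

<Connector port="8080" protocol="HTTP/1.1" executor="clientPool"   connectionTimeout="20000"/>
<Connector port="8081" protocol="HTTP/1.1" executor="upstreamPool" connectionTimeout="20000"/>
```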
Thanks in advance.

Sending SOAP http response in one package

We have a Java web service with document style over HTTP. Locally this service works smoothly and fast (~6 ms), but calling the service methods remotely takes over 200 ms.
One main reason for this delay is that:
1. the server first sends the HTTP response header,
2. the client sends an ACK in return, and
3. then the server sends the HTTP response body.
This second step, where the client sends the ACK, costs the most time, almost the whole 200 ms. I would like to avoid this step and save that time.
Hence my question: is it possible to send the whole response in one packet? And how and where do I configure that?
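For illustration, this is roughly what I imagine at the servlet level (a hypothetical sketch; our real service is generated by a SOAP stack, so the actual knob may live elsewhere, and how the bytes are split into packets is ultimately up to the TCP stack):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that buffers the whole SOAP response and
// writes it in a single flush, so header and body go out together.
public class BufferedSoapResponseServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] body = buildSoapResponse();      // assume this builds the SOAP envelope
        resp.setContentType("text/xml; charset=utf-8");
        resp.setContentLength(body.length);     // known length avoids chunked encoding
        resp.setBufferSize(body.length);        // keep header + body in one container buffer
        resp.getOutputStream().write(body);     // single write; container flushes once
    }

    private byte[] buildSoapResponse() {
        return "<soap:Envelope/>".getBytes();   // placeholder payload
    }
}
```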
Thanks for any advice.
I'm not fully understanding the question.
Why is the server sending the first message? Shouldn't the client be requesting the web service via HTTP initially?
From what I understand, SOAP requests are wrapped within an HTTP message. HTTP messages assume a TCP connection and require a response. This suggests that a client must respond when the server sends an HTTP header.
Basically, whatever one end sends to the other, the other end must reply. The ACK from your step 2 will always be present.
EDIT:
I think the reason for the difference in time between local and remote requests is simply the routing that happens in the real network versus on your local machine. It's not the number of steps taken in your SOAP request and response.

spring java web service, asynchronous

Can an asynchronous web service be achieved with the Java spring-ws framework, like how it's explained here?
Basically, when a client sends a request to the server for the first time, the web service on the server will call back the client whenever it has information based on the request. That means the server may reply more than once to the initial request from the client.
Suggested approach as per my experience:
Asynchronous web services are generally implemented in the following model:
CLIENT SUBMITS REQUEST -> SERVER RETURNS 202 ACCEPTED RESPONSE (polling/job URL in header) -> CLIENT KEEPS POLLING THE JOB URL -> SERVER RETURNS 200 OK FOR THE JOB URL ALONG WITH THE JOB RESPONSE IN THE BODY.
You may need to define a response body for a job that is in progress: when the client polls the server and the server is still processing the request, the body should contain an IN PROGRESS message in a predefined form. If the server has finished processing, then the desired response should be available in the body.
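A rough sketch of that model in plain Spring MVC (spring-ws specifics would differ; all names here are illustrative):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Hypothetical polling-style controller:
// POST submits the job, GET polls until the result is ready.
@RestController
public class JobController {

    private final Map<String, CompletableFuture<String>> jobs = new ConcurrentHashMap<>();

    @PostMapping("/jobs")
    public ResponseEntity<Void> submit(@RequestBody String request) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, CompletableFuture.supplyAsync(() -> process(request)));
        // 202 Accepted with the job URL in the Location header
        return ResponseEntity.accepted()
                .header("Location", "/jobs/" + id)
                .build();
    }

    @GetMapping("/jobs/{id}")
    public ResponseEntity<String> poll(@PathVariable String id) {
        CompletableFuture<String> job = jobs.get(id);
        if (job == null) {
            return ResponseEntity.notFound().build();
        }
        if (!job.isDone()) {
            return ResponseEntity.ok("IN PROGRESS"); // predefined in-progress body
        }
        return ResponseEntity.ok(job.join());        // final job response
    }

    private String process(String request) {
        return "result for: " + request;             // placeholder work
    }
}
```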
Hope it helps!
