Below is my method. It serves as a web service method:
public Response getData(ServiceRequest request)
{
    try
    {
        final boolean toProceedorNot = validate(legdata);
        if (!toProceedorNot) {
            status.setErrorText(errorText);
            return response;
        }
        else {
            // Some processing is done here
            response.setMessage(result);
        }
    }
    catch (Exception e)
    {
        errorText = e.getMessage();
        status.setErrorText(errorText);
        response.setStatus(status);
    }
    return response;
}
If the method takes too long to execute, a SocketTimeoutException is thrown by the Apache CXF framework:
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:695)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:640)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponseInternal(HTTPConduit.java:2034)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleResponse(HTTPConduit.java:2013)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.close(HTTPConduit.java:1938)
at org.apache.cxf.transport.AbstractConduit.close(AbstractConduit.java:66)
at org.apache.cxf.transport.http.HTTPConduit.close(HTTPConduit.java:626)
at org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor.handleMessage(MessageSenderInterceptor.java:62)
My question is: even though a SocketTimeoutException is thrown, it never reaches the catch block.
I am not sure who should handle this exception: the client, or the web service implementation method.
As a web service provider, how should I deal with this exception?
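For reference, the stack trace above is raised inside the CXF HTTP conduit on the consumer side while it waits for the provider's response, so the provider's catch block can never see it; the timeout has to be handled (or lengthened) on the consumer. If the consumer is also a CXF client, its timeouts can be adjusted roughly as follows (a sketch only; the proxy variable port and the timeout values are assumptions, not from the original post):
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

// "port" is the consumer's JAX-WS proxy (name assumed for this sketch)
Client client = ClientProxy.getClient(port);
HTTPConduit conduit = (HTTPConduit) client.getConduit();

HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(30000);  // time allowed to establish the connection, in ms
policy.setReceiveTimeout(120000);    // time allowed to wait for the response, in ms (0 means no timeout)
conduit.setClient(policy);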
Related
In Grails, can the "socket read failed" IOException be caught? The following error is triggered even though I have wrapped the failing code in a try/catch block.
ERROR 2021-03-28 08:34:10,170 [ajp-bio-8109-exec-39783] errors.GrailsExceptionResolver: IOException occurred when processing request: [POST] /race/results/
Socket read failed. Stacktrace follows:
java.io.IOException: Socket read failed
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.PushbackInputStream.read(PushbackInputStream.java:186)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1485)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1461)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1436)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:585)
at grails.converters.JSON.parse(JSON.java:312)
at grails.converters.JSON.parse(JSON.java:347)
At the point where the error is thrown, I have wrapped the code in a try/catch block as shown below.
def results() {
    def results
    try {
        results = request.JSON
    } catch (IOException e1) {
        log.error "ERROR WHILE request.json in /results******************************************************************"
        render contentType: "text/json", text: '{"status":"fail"}'
        return
    }
    // ... rest of the action omitted
}
The error is thrown at this point
results = request.JSON
I appreciate any insights. I am using Grails 2.2.
Thanks!
UPDATE:
Here is the full stacktrace. Thanks!
ERROR 2021-03-28 08:34:10,170 [ajp-bio-8109-exec-39783] errors.GrailsExceptionResolver: IOException occurred when processing request: [POST] /race/results/
Socket read failed. Stacktrace follows:
java.io.IOException: Socket read failed
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.PushbackInputStream.read(PushbackInputStream.java:186)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1485)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1461)
at org.apache.commons.io.IOUtils.copy(IOUtils.java:1436)
at org.apache.commons.io.IOUtils.toString(IOUtils.java:585)
at grails.converters.JSON.parse(JSON.java:312)
at grails.converters.JSON.parse(JSON.java:347)
at race.results(VirtualRaceController.groovy:2011)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
You'll probably find it is throwing ConverterException, not IOException. Here is the code:
class ConvertersExtension {
    static getJSON(HttpServletRequest request) {
        JSON.parse(request)
    }
}

public class JSON {
    public static Object parse(HttpServletRequest request) throws ConverterException {
        // blah blah
        try {
            // blah blah
        }
        catch (IOException e) {
            throw new ConverterException("Error parsing JSON", e);
        }
    }
}
If you're using an IDE, right-click on the API in question and select "Go To Declaration" (or whatever the equivalent is in your IDE) and trace your way up to see what throws what.
I have been unable to catch the timeout exception that happens in my Vert.x HttpClientRequest. I have enclosed my connection and request-creation code in a try/catch block, and I have also added an exceptionHandler and an endHandler, but none of them gets fired when the timeout happens. All I receive is the error message below, which gets printed on the console. Please give me an idea how to catch this exception, so that I can call the caller back with the relevant info.
io.vertx.core.http.impl.HttpClientRequestImpl
SEVERE: io.netty.channel.ConnectTimeoutException: connection timed out:
The code below is what I use to make the request to the server. As you can see, I have used try/catch and added an exceptionHandler as well.
try {
    HttpClient httpClient = vertx.createHttpClient(
            new HttpClientOptions().setSsl(true).setTrustAll(true).setVerifyHost(false));
    HttpClientRequest request = httpClient.get(port, host, uri.getRawPath(), event -> {
        event.exceptionHandler(e -> {
            log.error(" Error:: " + e);
        });
        event.handler(handler -> {
            //code
        });
    });
    request.putHeader(HttpHeaders.Names.AUTHORIZATION, "Basic " + authEnc);
    request.end();
} catch (Exception e) {
    log.error(" Exception :: " + e);
}
Due to the asynchronous programming model you won't be able to use try/catch, since your method has long since returned by the time the timeout event arrives. In order to catch it you need to set up an exception handler, like:
request.exceptionHandler(t -> {
    // where t is a throwable
    // do something with it...
});
If you're interested in catching response exceptions, the same concept applies.
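Applied to the request code above, a rough sketch could look like this (the host, port, path, and timeout values are placeholders, not from the original post): the connect timeout goes on HttpClientOptions, while request failures, including timeouts set via setTimeout(), are delivered to the request's exceptionHandler.
import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;
import io.vertx.core.http.HttpClientRequest;

Vertx vertx = Vertx.vertx();
HttpClient httpClient = vertx.createHttpClient(new HttpClientOptions()
        .setSsl(true)
        .setTrustAll(true)
        .setVerifyHost(false)
        .setConnectTimeout(10000));   // connect timeout in ms (value assumed)

HttpClientRequest request = httpClient.get(443, "example.com", "/path", response -> {
    response.exceptionHandler(e -> System.err.println("Response error: " + e));
    response.handler(buffer -> {
        // consume the streamed body here
    });
});

// This handler fires for ConnectTimeoutException and other request failures.
request.exceptionHandler(e -> {
    System.err.println("Request error: " + e);
    // notify the caller here instead of relying on try/catch
});
request.setTimeout(30000);            // overall request timeout in ms (value assumed)
request.end();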
I encountered an issue with the Mule ESB FTP transport: when polling, the thread running the client would hang indefinitely without throwing an error, which stops FTP polling completely. Mule uses the Apache Commons Net FTPClient.
Looking further into the code, I think it is caused by the socket timeout of the FTPClient never being set, which can lead to an infinite hang when reading lines from the FTPClient's socket.
The problem is clearly visible in these stacks, retrieved with jstack while the problem was occurring. The __getReply() function seems to be the most direct link to the problem.
This one is hanging on the connect() call when creating a new FTPClient:
"receiver.172" prio=10 tid=0x00007f23e43c8800 nid=0x2d5 runnable [0x00007f24c32f1000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
- locked <0x00000007817a9578> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
- locked <0x00000007817a9578> (a java.io.InputStreamReader)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:294)
at org.apache.commons.net.ftp.FTP._connectAction_(FTP.java:364)
at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:540)
at org.apache.commons.net.SocketClient.connect(SocketClient.java:178)
at org.mule.transport.ftp.FtpConnectionFactory.makeObject(FtpConnectionFactory.java:33)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1188)
at org.mule.transport.ftp.FtpConnector.getFtp(FtpConnector.java:172)
at org.mule.transport.ftp.FtpConnector.createFtpClient(FtpConnector.java:637)
at org.mule.transport.ftp.FtpMessageReceiver.listFiles(FtpMessageReceiver.java:134)
at org.mule.transport.ftp.FtpMessageReceiver.poll(FtpMessageReceiver.java:94)
at org.mule.transport.AbstractPollingMessageReceiver.performPoll(AbstractPollingMessageReceiver.java:216)
at org.mule.transport.PollingReceiverWorker.poll(PollingReceiverWorker.java:80)
at org.mule.transport.PollingReceiverWorker.run(PollingReceiverWorker.java:49)
at org.mule.transport.TrackingWorkManager$TrackeableWork.run(TrackingWorkManager.java:267)
at org.mule.work.WorkerContext.run(WorkerContext.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- <0x00000007817a3540> (a java.util.concurrent.ThreadPoolExecutor$Worker)
And this one is hanging on the pasv() call when using listFiles():
"receiver.137" prio=10 tid=0x00007f23e433b000 nid=0x7c06 runnable [0x00007f24c2fee000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
- locked <0x0000000788847ed0> (a java.io.InputStreamReader)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
- locked <0x0000000788847ed0> (a java.io.InputStreamReader)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:294)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:490)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:534)
at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:583)
at org.apache.commons.net.ftp.FTP.pasv(FTP.java:882)
at org.apache.commons.net.ftp.FTPClient._openDataConnection_(FTPClient.java:497)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2296)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2269)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2189)
at org.apache.commons.net.ftp.FTPClient.initiateListParsing(FTPClient.java:2132)
at org.mule.transport.ftp.FtpMessageReceiver.listFiles(FtpMessageReceiver.java:135)
at org.mule.transport.ftp.FtpMessageReceiver.poll(FtpMessageReceiver.java:94)
at org.mule.transport.AbstractPollingMessageReceiver.performPoll(AbstractPollingMessageReceiver.java:216)
at org.mule.transport.PollingReceiverWorker.poll(PollingReceiverWorker.java:80)
at org.mule.transport.PollingReceiverWorker.run(PollingReceiverWorker.java:49)
at org.mule.transport.TrackingWorkManager$TrackeableWork.run(TrackingWorkManager.java:267)
at org.mule.work.WorkerContext.run(WorkerContext.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Locked ownable synchronizers:
- <0x0000000788832180> (a java.util.concurrent.ThreadPoolExecutor$Worker)
I think the problem is caused by the use of the default FTPClient constructor (extending SocketClient) in Mule's default FtpConnectionFactory.
Note that the setConnectTimeout() value seems to be used only when calling socket.connect(), and is ignored for other operations on the same socket:
protected FTPClient createFtpClient()
{
    FTPClient ftpClient = new FTPClient();
    ftpClient.setConnectTimeout(connectionTimeout);
    return ftpClient;
}
It uses the FTPClient() constructor, which itself goes through SocketClient with a timeout of 0, set when the socket is created.
public SocketClient()
{
...
_timeout_ = 0;
...
}
We then call connect(), which calls _connectAction_().
In SocketClient:
protected void _connectAction_() throws IOException
{
...
_socket_.setSoTimeout(_timeout_);
...
}
In FTP, a new reader is instantiated around our everlasting socket:
protected void _connectAction_() {
    ...
    _controlInput_ =
        new BufferedReader(new InputStreamReader(_socket_.getInputStream(),
                                                 getControlEncoding()));
    ...
}
Then, when __getReply() is called, this reader with its everlasting socket is used:
private void __getReply() throws IOException
{
...
String line = _controlInput_.readLine();
...
}
Sorry for the long post, but I think this needed a proper explanation. A solution may be to call setSoTimeout() just after connect() to define a socket timeout.
Having a default timeout does not seem like an acceptable solution, as each user may have different needs and no single default suits every case. https://issues.apache.org/jira/browse/NET-35
Finally, this raises two questions:
It seems like a bug to me, as it completely stops FTP polling without reporting an error. What do you think?
What could be an easy way to avoid this situation? Calling setSoTimeout() with a custom FtpConnectionFactory? Am I missing a configuration or parameter somewhere?
Thanks in advance.
EDIT: I am using Mule CE Standalone 3.5.0, which seems to use Apache Commons Net 2.0. Looking at the code, Mule CE Standalone 3.7 with Commons Net 2.2 does not seem any different. Here are the sources involved:
https://github.com/mulesoft/mule/blob/mule-3.5.x/transports/ftp/src/main/java/org/mule/transport/ftp/FtpConnectionFactory.java
http://grepcode.com/file/repo1.maven.org/maven2/commons-net/commons-net/2.0/org/apache/commons/net/SocketClient.java
http://grepcode.com/file/repo1.maven.org/maven2/commons-net/commons-net/2.0/org/apache/commons/net/ftp/FTP.java
http://grepcode.com/file/repo1.maven.org/maven2/commons-net/commons-net/2.0/org/apache/commons/net/ftp/FTPClient.java
In an ideal world the timeout should not be necessary, but it looks like in your case it is.
Your description is very comprehensive; have you considered raising a bug?
As a workaround, I would suggest first using "Response Timeout" in the advanced tab. If that doesn't work, I would use a service override; from there you should be able to override the receiver.
I reproduced the error in both of my previous cases using MockFtpServer, and I was able to use a custom FtpConnectionFactory which seems to solve the issue.
public class SafeFtpConnectionFactory extends FtpConnectionFactory {

    // define a default timeout
    public static int defaultTimeout = 60000;

    public static synchronized int getDefaultTimeout() {
        return defaultTimeout;
    }

    public static synchronized void setDefaultTimeout(int defaultTimeout) {
        SafeFtpConnectionFactory.defaultTimeout = defaultTimeout;
    }

    public SafeFtpConnectionFactory(EndpointURI uri) {
        super(uri);
    }

    @Override
    protected FTPClient createFtpClient() {
        FTPClient client = super.createFtpClient();
        // Define the default timeout here, which will be used by the socket by default,
        // instead of the 0 timeout hanging indefinitely
        client.setDefaultTimeout(getDefaultTimeout());
        return client;
    }
}
And then attaching it to my connector:
<ftp:connector name="archivingFtpConnector" doc:name="FTP"
pollingFrequency="${frequency}"
validateConnections="true"
connectionFactoryClass="my.comp.SafeFtpConnectionFactory">
<reconnect frequency="${reconnection.frequency}" count="${reconnection.attempt}"/>
</ftp:connector>
Using this configuration, a java.net.SocketTimeoutException will be thrown after the specified timeout, such as:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:154)
at java.io.BufferedReader.readLine(BufferedReader.java:317)
at java.io.BufferedReader.readLine(BufferedReader.java:382)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:294)
at org.apache.commons.net.ftp.FTP._connectAction_(FTP.java:364)
at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:540)
at org.apache.commons.net.SocketClient.connect(SocketClient.java:178)
at org.mule.transport.ftp.FtpConnectionFactory.makeObject(FtpConnectionFactory.java:33)
at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1188)
at org.mule.transport.ftp.FtpConnector.getFtp(FtpConnector.java:172)
at org.mule.transport.ftp.FtpConnector.createFtpClient(FtpConnector.java:637)
...
Otherwise, an attempt at connect() or pasv() would hang indefinitely with no server response. I reproduced this exact behavior using the mock FTP server.
Note: I used setDefaultTimeout() as it seems to set the variable used by connect() and _connectAction_() (from the SocketClient source):
public abstract class SocketClient
{
...
protected void _connectAction_() throws IOException
{
...
_socket_.setSoTimeout(_timeout_);
...
}
...
public void setDefaultTimeout(int timeout)
{
_timeout_ = timeout;
}
...
}
EDIT: For those who are interested, here is the mock FTP test code used to reproduce the never-answering server. The infinite loop is far from good practice, though; it should be replaced with something like a sleep, with an enclosing test class that expects a SocketTimeoutException and fails after a given timeout.
private static final int CONTROL_PORT = 2121;

public void startStubFtpServer() {
    FakeFtpServer fakeFtpServer = new FakeFtpServer();

    // define the command which should never be answered
    fakeFtpServer.setCommandHandler(CommandNames.PASV, new EverlastingCommandHandler());
    //fakeFtpServer.setCommandHandler(CommandNames.CONNECT, new EverlastingConnectCommandHandler());
    //or any other command...

    //server config
    ...

    //start server
    fakeFtpServer.setServerControlPort(CONTROL_PORT);
    fakeFtpServer.start();
    ...
}

//will cause any command received to never have an answer
public class EverlastingConnectCommandHandler extends org.mockftpserver.core.command.AbstractStaticReplyCommandHandler {
    @Override
    protected void handleCommand(Command cmd, Session session, InvocationRecord rec) throws Exception {
        while (true) {
            try {
                Thread.sleep(60000);
            } catch (InterruptedException e) {
                //TODO
            }
        }
    }
}

public class EverlastingCommandHandler extends AbstractFakeCommandHandler {
    @Override
    protected void handle(Command cmd, Session session) {
        while (true) {
            try {
                Thread.sleep(60000);
            } catch (InterruptedException e) {
                //TODO
            }
        }
    }
}
I'm using the newly added HTTP Streaming feature with ResponseBodyEmitter in Spring 4.2.0.BUILD-SNAPSHOT.
I would like to implement a long-running, persistent TCP connection carrying an unending stream of data between a (possibly Java) client and the server, until the client breaks the connection. I would like to avoid using the WebSocket protocol.
If a client breaks the connection while streaming, a runtime IllegalStateException is thrown. I would like to handle this gracefully and clean up the emitter. Short of catching a runtime exception, is there any way to handle this gracefully?
I have to specify an artificially high timeout value on the emitter to get a "persistent" connection. Can I set no timeout at all?
The webapp is deployed on apache-tomcat-7.0.62.
Relevant code as follows:
@RequestMapping(value = "stream", method = RequestMethod.GET)
public ResponseBodyEmitter handleStreaming() {
    ResponseBodyEmitter emitter = new ResponseBodyEmitter(timeout);
    emitters.add(emitter);
    emitter.onCompletion(new Runnable() {
        @Override
        public void run() {
            emitters.remove(emitter);
        }
    });
    emitter.onTimeout(new Runnable() {
        @Override
        public void run() {
            emitters.remove(emitter);
        }
    });
    return emitter;
}
The sending loop:
while (true) {
    for (Iterator<ResponseBodyEmitter> iterator = emitters.iterator(); iterator.hasNext();) {
        ResponseBodyEmitter emitter = iterator.next();
        try {
            emitter.send("data...", MediaType.TEXT_PLAIN);
        } catch (IOException | IllegalStateException e) {
            LOGGER.error(e);
            iterator.remove();
        }
    }
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        LOGGER.error(e);
    }
}
Logs:
INFO: An error occurred in processing while on a non-container thread. The connection will be closed immediately
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:119)
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:801)
at org.apache.coyote.Response.action(Response.java:172)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:363)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:331)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:101)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at org.springframework.util.StreamUtils.copy(StreamUtils.java:106)
at org.springframework.http.converter.StringHttpMessageConverter.writeInternal(StringHttpMessageConverter.java:109)
at org.springframework.http.converter.StringHttpMessageConverter.writeInternal(StringHttpMessageConverter.java:40)
at org.springframework.http.converter.AbstractHttpMessageConverter.write(AbstractHttpMessageConverter.java:193)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitterReturnValueHandler$HttpMessageConvertingHandler.sendInternal(ResponseBodyEmitterReturnValueHandler.java:157)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitterReturnValueHandler$HttpMessageConvertingHandler.send(ResponseBodyEmitterReturnValueHandler.java:150)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitter.sendInternal(ResponseBodyEmitter.java:180)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitter.send(ResponseBodyEmitter.java:164)
....
[ERROR] [02/07/15 18:11 PM] [Controller$TestResponseBodyEmitter:74] - java.lang.IllegalStateException: The request associated with the AsyncContext has already completed processing.
Command:
curl http://localhost:8080/myapp/stream -v -N
data...data...
Ctrl-C
According to the Javadoc of the ResponseBodyEmitter constructor (found here):
Create a ResponseBodyEmitter with a custom timeout value. By default
not set in which case the default configured in the MVC Java Config or
the MVC namespace is used, or if that's not set, then the timeout
depends on the default of the underlying server.
Therefore, do give a timeout when you create the ResponseBodyEmitter instance.
PS: In my environment, ResponseBodyEmitter#getTimeout() returned null; this does not mean that there is an infinite timeout. On the contrary, after 5-10 seconds the connection timed out.
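To make that concrete, here is a minimal sketch of the two places a timeout can be supplied (the one-hour value is only an example, not a recommendation, and the config class name is made up): an explicit value passed to the emitter, and the MVC-wide async default that applies when the emitter is created without one.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class AsyncConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        // Default async request timeout in ms, used when the emitter is created
        // without an explicit value (one hour here, an assumed value).
        configurer.setDefaultTimeout(60 * 60 * 1000L);
    }
}

// Per-emitter timeout, overriding the MVC default (assumed value):
// ResponseBodyEmitter emitter = new ResponseBodyEmitter(60 * 60 * 1000L);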
I have been trying to put together some code that will, among other things, upload files to a SharePoint site that uses NTLM authentication. Earlier versions of the code were single-threaded and worked perfectly: they uploaded the file exactly as expected without the slightest issue. However, I eventually tried to multi-thread the application so that it could upload many files at once while still going about the rest of its business.
When I multithread the code, it fails every single time, throwing an IndexOutOfBoundsException. This is singularly unhelpful in diagnosing the actual cause of the problem.
In case you are wondering, if I swap the cached thread pool executor for a SingleThreadExecutor, forcing the code back to a single-threaded state, it once again works fine.
Creating the executor and connection manager, and constructing threads:
class OrderProcessor implements Runnable {

    // Other variables for the object

    private final ExecutorService executorService = Executors
            .newCachedThreadPool();
            // .newSingleThreadExecutor();

    private HttpClientConnectionManager conManager;

    private void setup() {
        // always called before execution of anything else in the object
        conManager = new PoolingHttpClientConnectionManager();
    }

    // lots of other code
}
The actual code for submitting the threads is complicated, so this version is somewhat simplified, but gets the point across.
for (Request request : requests) {
    // Do other stuff
    simpleSubmitFile(request);
    // Do other stuff
}
Here is the simplified file submission method
public Future<Boolean> simpleSubmitFile(Request request) {
    transferer = new SharePointTransferer(extractionRequest, conManager);
    Future<Boolean> future = executorService.submit(transferer);
    return future;
}
SharePointTransferer code
// actual values scrubbed
private final String USERNAME = "";
private final String PASSWORD = "";
private final String DOMAIN = "";

private final File sourceFile;
private final String destinationAddress;
private final CloseableHttpClient client;

public SharePointTransferer(final Request extractionRequest, HttpClientConnectionManager conManager) {
    super(extractionRequest);
    this.sourceFile = this.extractionRequest.getFile();
    this.destinationAddress = this.extractionRequest.getDestinationAddress();
    this.client = HttpClients.custom()
            .setConnectionManager(conManager).build();
}

public Boolean call() throws Exception {
    String httpAddress = correctSharePointAddress(destinationAddress);
    HttpPut put = new HttpPut(httpAddress + sourceFile.getName());

    // construct basic request
    put.setEntity(new FileEntity(sourceFile));
    HttpClientContext context = HttpClientContext.create();

    // set credentials for the SharePoint login
    CredentialsProvider credProvider = new BasicCredentialsProvider();
    credProvider.setCredentials(AuthScope.ANY, new NTCredentials(USERNAME,
            PASSWORD, "", DOMAIN));
    context.setCredentialsProvider(credProvider);

    // execute request
    try {
        HttpResponse response = client.execute(put, context);
        logger.info("response code was: "
                + response.getStatusLine().getStatusCode());
        if (response.getStatusLine().getStatusCode() != 201) {
            throw new FileTransferException(
                    "Could not upload file. Http response code 201 expected."
                    + "\nActual status code: "
                    + response.getStatusLine().getStatusCode());
        }
    } catch (ClientProtocolException e) {
        throw new FileTransferException(
                "Exception Occurred while Transferring file "
                + sourceFile.getName(), e);
    } catch (IOException e) {
        throw new FileTransferException(
                "Exception Occurred while Transferring file "
                + sourceFile.getName(), e);
    } finally {
        logger.info("deleting source file: " + sourceFile.getName());
        sourceFile.delete();
        client.close();
    }
    logger.info("successfully transfered file: " + sourceFile.getName());
    return true;
}
If I submit multiple files, it throws essentially the same exception for every file. The trace is below.
Exception Stack Trace
2015-04-16 11:49:26 ERROR OrderProcessor:224 - error processing file: FILE_NAME_SCRUBBED
PACKAGE_SCRUBBED.FileProcessingException: Could not process file: FILE_NAME_SCRUBBED
at PACKAGE_SCRUBBED.OrderProcessor.finishProcessingOrder(OrderProcessor.java:223)
at PACKAGE_SCRUBBED.OrderProcessor.run(OrderProcessor.java:124)
at PACKAGE_SCRUBBED.FileTransferDaemon.process(FileTransferDaemon.java:48)
at PACKAGE_SCRUBBED.FileTransferDaemon.start(FileTransferDaemon.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Caused by: java.util.concurrent.ExecutionException: java.lang.ArrayIndexOutOfBoundsException: 41
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at PACKAGE_SCRUBBED.OrderProcessor.finishProcessingOrder(OrderProcessor.java:208)
... 8 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 41
at org.apache.http.impl.auth.NTLMEngineImpl$NTLMMessage.addByte(NTLMEngineImpl.java:924)
at org.apache.http.impl.auth.NTLMEngineImpl$NTLMMessage.addUShort(NTLMEngineImpl.java:946)
at org.apache.http.impl.auth.NTLMEngineImpl$Type1Message.getResponse(NTLMEngineImpl.java:1052)
at org.apache.http.impl.auth.NTLMEngineImpl.getType1Message(NTLMEngineImpl.java:148)
at org.apache.http.impl.auth.NTLMEngineImpl.generateType1Msg(NTLMEngineImpl.java:1641)
at org.apache.http.impl.auth.NTLMScheme.authenticate(NTLMScheme.java:139)
at org.apache.http.impl.auth.AuthSchemeBase.authenticate(AuthSchemeBase.java:138)
at org.apache.http.impl.auth.HttpAuthenticator.doAuth(HttpAuthenticator.java:239)
at org.apache.http.impl.auth.HttpAuthenticator.generateAuthResponse(HttpAuthenticator.java:202)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:262)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at PACKAGE_SCRUBBED.SharePointTransferer.call(SharePointTransferer.java:74)
at PACKAGE_SCRUBBED.SharePointTransferer.call(SharePointTransferer.java:1)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
If anyone can figure out what is causing this problem, I would greatly appreciate it.
EDIT: I managed to find a workaround that fixes the issue for me, but I would still appreciate an explanation of exactly what is going on.
This is a bug, fixed in HttpClient version 4.5.2:
http://www.apache.org/dist/httpcomponents/httpclient/RELEASE_NOTES-4.5.x.txt
Release 4.5.2
Changelog:
[HTTPCLIENT-1715] NTLMEngineImpl#Type1Message not thread safe but declared as a constant. Contributed by Olivier Lafontaine , Gary Gregory
You can't reuse either HttpClientContext or NTLMScheme in a concurrent environment, because they are both marked as @NotThreadSafe (see the Javadoc).
In my environment I got the same error, and solved it with something like:
synchronized (context) {
    HttpResponse response = client.execute(put, context);
}
The authenticated context is reused, but by one thread at a time.
I eventually managed to solve this problem by setting the number of connections per route to 1, as below.
conManager.setDefaultMaxPerRoute(1);
I'm still not exactly sure why the problem occurred, or what the proper way to fix it is, but this solution worked for me.
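For context, this is roughly where that call would sit in the setup() method shown in the question (a sketch only; note the field has to be declared, or cast, as PoolingHttpClientConnectionManager for setDefaultMaxPerRoute to be accessible):
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

private PoolingHttpClientConnectionManager conManager;

private void setup() {
    conManager = new PoolingHttpClientConnectionManager();
    // Allow only one connection per route, which serializes the NTLM handshakes;
    // in this case that avoided the concurrent use of the shared Type1Message
    // state later fixed by HTTPCLIENT-1715.
    conManager.setDefaultMaxPerRoute(1);
}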