Attempting to upload files onto a server using its REST API from a Camel endpoint.
Below is the relevant part of the Camel endpoint:
<camel:setBody>
<camel:simple>${header.objectdata.getData}</camel:simple><!-- 2mb file as String -->
</camel:setBody>
<camel:setHeader headerName="CamelHttpMethod">
<camel:constant>PUT</camel:constant>
</camel:setHeader>
<camel:recipientList id="ml-rest">
<camel:simple>{URL_HERE}</camel:simple>
</camel:recipientList>
The above endpoint works fine with smaller files, but for the ~2 MB file it throws:
org.apache.commons.httpclient.NoHttpResponseException: The server localhost failed to respond
at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1976)
at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)...
I tried uploading the same file onto the server using Postman (not via code/Camel) and it works fine.
I tried setting the SO_TIMEOUT option, but strangely it appears capped at 30 seconds: values less than 30 seconds take effect, but values greater than 30 seconds are simply ignored. I noticed this based on the time difference between the occurrences of the following log statements.
...
[t1] Request body sent
[t2] Closing the connection
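For reference, with camel-http (backed by Commons HttpClient, as the stack trace shows) the SO_TIMEOUT setting maps to the httpClient.soTimeout URI option, so the attempt looks roughly like this (the URL and the 120000 ms value are placeholders):
<camel:recipientList id="ml-rest">
<camel:simple>http://localhost:8080/upload?httpClient.soTimeout=120000</camel:simple>
</camel:recipientList>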
I am using Tomcat, on which I have deployed a Jersey application.
A certain REST URL returns a fixed PNG image.
Prior to requesting the image, I have to initialize the application by providing the base path of the image's location on the file system. This is done by performing a POST to a different URL, after which the location is stored in an object in the context.
Using Firefox's network monitor I can see a difference in how long the browser waits for the response the first time versus the second time.
First request waiting network time = 9 ms
Second request waiting network time = 4 ms
I have executed this experiment several times and the first time always seems to take several milliseconds longer than the second or third time.
What is causing this difference?
FYI:
header Cache-control = "no-cache"
If you have deployed your Jersey container as a servlet, then for each resource a separate servlet is created to which the request is delegated. So the first time you request the resource URL, the servlet has to be created; the second time, the already existing servlet is used. That is why the first response is delayed.
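If that first-request cost matters, the servlet can be created eagerly at deployment time instead. A minimal web.xml sketch, assuming the common Jersey 1.x servlet class (adjust the servlet name and class to your deployment):
<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>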
I am trying to access Yahoo mail over IMAP using the JavaMail API. I can connect to the Yahoo mail server successfully and am able to fetch the messages using the folder.getMessages() call, where folder is an object of the javax.mail.Folder class.
I need to iterate over all the messages returned by this call, fetching the received date of each message in the iteration. The iteration works well for a small number of messages, as it does not take long; however, if the number of returned messages is large (say around 10000) and the iteration takes more than 30 minutes, then the following exception occurs after 30 minutes:
javax.mail.FolderClosedException: * BYE IMAP4rev1 Server logging out
at com.sun.mail.imap.IMAPMessage.loadEnvelope(IMAPMessage.java:1234)
at com.sun.mail.imap.IMAPMessage.getReceivedDate(IMAPMessage.java:378)
at mypack.ImapUtils.getReceivedDate(ImapUtils.java:193)
...
Please note that I do not use the Folder object again during this iteration.
Could anyone please tell me:
if there is a way to keep the folder open on the Yahoo mail server until it is explicitly closed?
if there is some property or setting which can be used to increase this "30 minutes" interval after which the folder is closed by Yahoo's IMAP server?
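For reference, the iteration is essentially the following. I understand that batch-fetching the envelopes up front with a FetchProfile (included in this sketch, though I have not verified it against Yahoo) should cut the per-message round-trips; the sketch assumes folder is already open:
Message[] messages = folder.getMessages();
// Batch-fetch the envelope data (which includes the received date),
// so each getReceivedDate() call below avoids its own IMAP round-trip.
FetchProfile fp = new FetchProfile();
fp.add(FetchProfile.Item.ENVELOPE);
folder.fetch(messages, fp);
for (Message message : messages) {
    Date receivedDate = message.getReceivedDate();
    // ... use the date ...
}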
Thanks.
I have a URL in my Play! app that routes to either HTML or XLSX depending on the extension passed in the URL, with a routes line like:
# Calls
GET /calls.{format} Call.index
so calls.html renders the page and calls.xlsx downloads an Excel file (using the Play Excel module). All works fine from the browser, a cURL request, etc.
I now want to be able to create an email and have the Excel file attached to it, but I cannot pull the attachment. Here's the basic version of what I tried first:
public static void sendReport(List<Object[]> invoicelines, String emailaddress) throws MalformedURLException, URISyntaxException
{
    setFrom("Telco Analysis <test@test.com>");
    addRecipient(emailaddress);
    setSubject("Telco Analysis report");
    EmailAttachment emailAttachment = new EmailAttachment();
    URL url = new URL("http://localhost:9001/calls.xlsx");
    emailAttachment.setURL(url);
    emailAttachment.setName(url.getFile());
    emailAttachment.setDescription("Test file");
    addAttachment(emailAttachment);
    send(invoicelines);
}
but it just doesn't pull the URL content: it sits there without any error messages, Chrome's page spinner keeps going, and it ties up the web server (to the point that requests from another browser/machine don't appear to get serviced). If I send the email without the attachment, all is fine, so it's just the pulling of the file that appears to be the problem.
So far I've tried the above method, I've tried Play's WS webservice library, I've tried manually-crafted HttpRequests, etc. If I specify another URL (such as http://www.google.com) it works just fine.
Anyone able to assist?
I am making an assumption that you are running in Dev mode.
In Dev mode you will likely have a single-request execution pool, but in your controller that sends the email you are sending off a second request, which will block until your previous request has completed (which it won't, because it is waiting for the second request to respond)... so... deadlock!
The reason external requests work fine is that they do not cause the deadlock on your Play request pool.
The simple answer to your problem is to increase the value of play.pool in application.conf. Make sure that it is uncommented, and choose a value greater than 1!
# Execution pool
# ~~~~~
# Default to 1 thread in DEV mode or (nb processors + 1) threads in PROD mode.
# Try to keep as low as possible. 1 thread will serialize all requests (very useful for debugging purposes)
play.pool=3
Salesforce can send up to 100 requests inside one SOAP message. When sending this type of bulk outbound message request, my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring), I see all the messages in a pending state with the delivery failure reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error?
I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:
set_time_limit(0); // in the script
max_execution_time = 360 ; Maximum execution time of each script, in seconds
max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume
I used the high settings just for testing.
Any thoughts as to why this is failing the ACK delivery back to Salesforce?
Here is some of the code:
This is how I accept the incoming SOAP request and send back the ACK:
$data = 'php://input';
$content = file_get_contents($data);
if ($content) {
    respond('true');
} else {
    respond('false');
}
The respond function
function respond($tf) {
$ACK = <<<ACK
<?xml version = "1.0" encoding = "utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body>
<notifications xmlns="http://soap.sforce.com/2005/09/outbound">
<Ack>$tf</Ack>
</notifications>
</soapenv:Body>
</soapenv:Envelope>
ACK;
print trim($ACK);
}
These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in one SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds.
Could it be Apache? PHP?
I have also tested just accepting the 100-request SOAP response and sending the ACK without any further processing, and the queue clears out, so I know it's on my side of things.
The Apache log shows no errors; the script runs fine.
I did find some info on the Salesforce site but still no luck. Here is the link.
Also I'm using the PHP Toolkit 11 (From Salesforce).
Other forum with good SF help
Thanks for any insight into this,
--Phill
UPDATE:
If I receive the incoming message and print the response, should the response be sent first regardless of whether I do anything else afterwards? Or does it wait for my processing to finish and then print the response?
UPDATE #2:
Okay, I think I have found the problem:
PHP uses a single-threaded processing approach and will not send back the ACK file until the thread has completed its processing. Is there a way to make this a multi-threaded process?
Thread #1 - accept the incoming SOAP request and send back the ACK
Thread #2 - Process the SOAP request
I know I could break it up using something like a DB table or a flat file, but is there a way to accomplish this without doing that?
I'm going to try to close the socket after the ACK submission and continue the processing; fingers crossed it will work.
Sounds like the outbound message is hitting the timeout. Other users have reported timeouts as low as 10 seconds (see forum link below). The sandbox instance that I use (cs1) is timing out after about 1 minute, from my testing. It's possible that the timeout is an organization or instance level setting that Salesforce controls.
Two things you could try:
1. Open a support ticket with Salesforce to see if they can increase the timeout value for outbound messages. From my experience, there are a lot of settings that they can modify on the organization level; this might be one of them.
2. Offload the processing of your data, so that the ACK is sent back to Salesforce immediately. The actual processing of your data will then take place asynchronously, i.e. a message queue, a separate thread, etc.
Some other resources that might be helpful:
related Salesforce forum discussion
Outbound messaging documentation
I think they time out while waiting for your script to end.
There is a way you could try to fix this:
Output the envelope with the ACK message at the beginning, and then flush the output so that their server gets it before you finish processing. No threading, just plain rethinking of priorities :)
Read this for the best info on flushing content.
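A minimal sketch of that approach, reusing the respond() function from the question (the Content-Length/Connection headers and ignore_user_abort() are my additions; whether the flush actually reaches Salesforce before processing ends also depends on your Apache/PHP output buffering):
ignore_user_abort(true);            // keep running even if the client disconnects
ob_start();
respond('true');                    // emit the ACK envelope into the buffer
header('Content-Length: ' . ob_get_length());
header('Connection: close');        // signal that the response is complete
ob_end_flush();                     // send the buffered ACK
flush();                            // push it out before the heavy work starts
// ... long-running processing continues here ...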
Are you 100% sure that Salesforce will wait the amount of time your scripts need to run? 80 seconds seems like a long time to me.
If all requests failed I would guess that Salesforce expects you to set the Content-Type header appropriately, but this does not seem to be the case.
I don't know about Salesforce, but if you want to do some multithreading with PHP you should take a look at this code example, and more precisely at pcntl_fork().
N.B.: pcntl is not enabled by default and won't work on Windows platforms.
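A bare-bones sketch of the fork pattern (processNotifications() is a hypothetical stand-in for the heavy work; the pcntl caveat above applies):
$pid = pcntl_fork();
if ($pid == -1) {
    respond('false');                  // fork failed; reject so Salesforce retries
} elseif ($pid) {
    respond('true');                   // parent: send the ACK back immediately
} else {
    processNotifications($content);    // child: do the heavy processing
    exit(0);                           // and make sure the child stops here
}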
So what I've done is:
Accept all incoming OBMs and parse them into a DB.
When this is done, kick off a process that runs in the background (actually, I send it to the background so the script can end).
Send the ACK file back.
Just accepting the raw data, parsing it into fields, and inserting it into a DB is fairly quick. Then I issue a Linux command-line call (sketched below) that sends the processing script to run in the background. Then I send the ACK file to SF and the script ends within the allotted time. It is cumbersome to split the script into two separate stages, but it works.
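The kick-off amounts to something like this (process_obm.php is a hypothetical name; redirecting all output and the trailing & are what let the shell return immediately instead of waiting):
// Launch the processing script detached from this request, so the
// ACK can be sent without waiting for the heavy work to finish.
exec('php /path/to/process_obm.php > /dev/null 2>&1 &');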
We are converting a large PDF file using the Adobe LiveCycle ConvertPDF service.
This works fine for smaller PDF files, but fails when we attempt to convert a large PDF file (around 150 MB, don't ask).
It looks like Adobe sets a transaction timeout of around 14(?) minutes. As the processing time for our huge PDF exceeds this limit, the operation is aborted.
We tried multiple PDFs, so this is not likely to be caused by a corrupted input file.
Here's the output that exception produces:
com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2WithSMT(ConvertPdfServiceImpl.java:117)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2(ConvertPdfServiceImpl.java:93)
[...]
Caused by: ALC-DSC-000-000: com.adobe.idp.dsc.DSCException: Internal error.
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl$1.doInTransaction(ConvertPdfServiceImpl.java:110)
at com.adobe.idp.dsc.transaction.impl.ejb.adapter.EjbTransactionBMTAdapterBean.doRequiresNew(EjbTransactionBMTAdapterBean.java:218)
[...]
Caused by: com.adobe.livecycle.convertpdfservice.exception.ConvertPdfException: Cannot convert PDF file to PostScript.
Exception: "Transaction timed out: Couldn't connect to Datamanager Service"
at com.adobe.convertpdf.ConvertPdfBmcWrapper.convertPdftoPs(ConvertPdfBmcWrapper.java:207)
at com.adobe.convertpdf.ConvertPdfServer.convertPdftoPs(ConvertPdfServer.java:121)
at com.adobe.convertpdf.docservice.ConvertPdfServiceImpl.toPS2InTxn(ConvertPdfServiceImpl.java:129)
[...]
So far - seems logical.
However, I can't find where the transaction length is configured. I guess if we increased the timeout to something like 30 minutes, our problem would go away.
(Also, the problem would go away if we had a way of invoking this operation without any transactions...)
Let's say we are simply running it like this:
ServiceClientFactory factory = com.adobe.idp.dsc.clientsdk.ServiceClientFactory.createInstance(connectionProps);
ConvertPdfServiceClient convertPDFClient = new com.adobe.livecycle.convertpdfservice.client.ConvertPdfServiceClient(factory);
// ... set-up details skipped ...
com.adobe.idp.Document result_postscript = convertPDFClient.toPS2(inPdf,options);
result_postscript.copyToFile(new File("c:/Adobe/output.ps"));
However, either we are not setting up the ServiceClientFactory correctly, or maybe we are not reading the JBoss config properly; we can't find a way to make the transaction live longer. (Is the transaction time-to-live really the issue?)
In the LiveCycle Administration Console, simply go to
Home > Services > Applications and Services > Service Management > ConvertPdfService
The service timeout can be changed there.
When testing with converting a PDF (generated by iText) that contains 39k pages (13 initial pages, each cloned 3000 times; size ~15 MB), the final output PostScript file was ~1.25 GB. The whole job took about 2 hours, but it worked, no problems.
(I guess this answer makes the question not-programming related, but hey.)