Env: Tomcat 7.
Would like to log HTTP requests and their headers. Actually, I could do without the headers as long as I can log the IP address of the caller, the resource he's requesting (the URL) and the type of request (GET, POST, etc.).
This may seem like a trivial question, but it really isn't.
The standard way would be to use the AccessLogValve, but as far as I understand, that one is actually not request logging, it is request/response logging, meaning that it will not log anything until the end of the response cycle. It will only log those requests where a response has successfully been delivered to the HTTP client. If something goes wrong before that, AccessLogValve will not log the request.
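For reference, this is what the standard valve looks like in conf/server.xml (%h logs the remote host/IP, %r the request line with method and URL, %s the status code):

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log." suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />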
Question 1: Is this correctly understood?
Question 2: Are there other options?
UPDATE 1:
I've done a test with Tomcat 7 using a dummy servlet that blocks for x seconds based on a URL parameter. My finding is that the request does indeed get logged by the AccessLogValve, although, as expected, this does not happen until the end of the response, i.e. after the x seconds. There will be a log entry regardless of whether the client has aborted before the request finishes and regardless of whether the servlet throws an exception during processing.
Therefore the answer to question 1 is : "No".
Conclusion
AccessLogValve will eventually produce a log entry. At least I haven't been able to produce a scenario where this is not the case.
All the access logs that I have seen are written after the request/response has been processed because it is useful to log info like the size of the response or the total processing time.
"If something goes wrong before that AccessLogValve will not log the request.
Question 1: Is this correctly understood?"
No, not based on my experience. The request/response is always logged, even if there is an error processing it. In that case the HTTP status code field (%s in the log pattern) will contain an error code, like 500.
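If you do need an entry written before the response completes, one alternative is a plain servlet Filter that logs on the way in. A minimal sketch, assuming a Servlet 3.0 container such as Tomcat 7 (RequestLogFilter is an illustrative name, not an existing class):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

// Logs IP, method, and URL on the way IN, so an entry exists even if
// processing later fails, hangs, or the client aborts.
@WebFilter("/*")
public class RequestLogFilter implements Filter {
    public void init(FilterConfig config) throws ServletException { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        System.out.println(http.getRemoteAddr() + " " + http.getMethod()
                + " " + http.getRequestURI());
        chain.doFilter(req, res); // hand off to the rest of the pipeline
    }

    public void destroy() { }
}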
So I am trying out a simple full-stack project of my own that involves a Java backend implementation of a REST API, for which I am using the org.restlet framework/package and Jetty as the server.
Whilst I was testing my API using Postman I noticed something weird: every time I started the server, only the first POST/PUT/DELETE HTTP request would get an answer; the subsequent ones would not receive one, and this error message would appear on the console:
/* Timestamp-not-important */ org.restlet.engine.adapter.ServerAdapter commit
INFO: The connection was broken. It was probably closed by the client.
Reason: Closed
The GET HTTP requests, however, do not share that problem.
I said "Fair enough, probably it's postman's fault".. after all the request made it to the server and their effects were applied. However, now that I am building the front-end this problem blocks the server's response: instead of a JSON object I get an undefined (edit: actually I get 204 No Content) on the front-end and the same "INFO" on the back-end for every POST/PUT/DELETE after the first one.
I have no idea what it is or what I am doing wrong. It has to be the backend's problem, right? But what should I look for?
Never mind, it was the stupidest thing ever. I tried to be "smart" about returning the same Representation object (with only a 'success' JSON field) on multiple occasions by keeping one instance in a static final field of a class. It turns out a new instance must be returned each time.
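In code, the difference looks roughly like this. This is a sketch assuming Restlet's standard StringRepresentation; the Responses class is illustrative:

import org.restlet.data.MediaType;
import org.restlet.representation.Representation;
import org.restlet.representation.StringRepresentation;

public final class Responses {
    // BROKEN: a Representation is consumed when written to the wire, so a
    // shared static instance only works for the first response.
    // static final Representation SUCCESS =
    //         new StringRepresentation("{\"success\": true}", MediaType.APPLICATION_JSON);

    // FIXED: hand out a fresh instance per response.
    public static Representation success() {
        return new StringRepresentation("{\"success\": true}", MediaType.APPLICATION_JSON);
    }
}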
This might be a simple problem, but I can't seem to find a good solution right now.
I've got:
OldApp - a Java application started from the command line (no web front here)
NewApp - a Java application with a REST api behind Apache
I want OldApp to call NewApp through its REST api and when NewApp is done, OldApp should continue.
My problem is that NewApp is doing a lot of stuff that might take a lot of time which in some cases causes a timeout in Apache, and then sends a 502 error to OldApp. The computations continue in NewApp, but OldApp does not know when NewApp is done.
One solution I thought of is to fork a thread in NewApp, store some kind of ID for the API request, and return it to OldApp. Then OldApp could poll NewApp to see if the thread is done, and if so, continue. Otherwise, keep polling.
Are there any good design patterns for something like this? Am I complicating things? Any tips on how to think?
If NewApp is taking a long time, it should immediately return a 202 Accepted. The response should contain a Location header indicating where the user can go to look up the result when it's done, and an estimate of when the request will be done.
OldApp should wait until the estimate time is reached, then submit a new GET call to the location. The response from that GET will either be the expected data, or an entity with a new estimated time. OldApp can then try again at the later time, repeating until the expected data is available.
So the conversation might look like:
POST /widgets
response:
202 Accepted
Location: "http://server/v1/widgets/12345"
{
"estimatedAvailableAt": "<whenever>"
}
GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
"estimatedAvailableAt": "<wheneverElse>"
}
GET /widgets/12345
response:
200 OK
Location: "http://server/v1/widgets/12345"
{
"myProperty": "myValue",
...
}
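A hypothetical OldApp-side loop driving this conversation might look like the following (plain HttpURLConnection; the WidgetPoller class and the crude estimatedAvailableAt check are illustrative only):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class WidgetPoller {
    // Polls the Location returned by the 202 until the real resource appears.
    static String pollUntilReady(String location, long waitMillis) throws Exception {
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) new URL(location).openConnection();
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                for (String line; (line = in.readLine()) != null; ) {
                    body.append(line);
                }
            }
            // Still pending: the server returned another estimate instead of the data.
            if (body.indexOf("estimatedAvailableAt") >= 0) {
                Thread.sleep(waitMillis);
                continue;
            }
            return body.toString(); // the expected data has arrived
        }
    }
}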
Yes, that's exactly what people are doing with REST now. Because there is no way to connect from the server to the client, the client just polls very often. There is also an improved method called "long polling", where the connection between client and server has a big timeout and the server sends information back to the connected client as soon as it becomes available.
The question is about Java and servlets, so I would suggest looking at Servlet 3.0 asynchronous support.
Talking from a design perspective, you would need to return a 202 Accepted with an ID and a URL to the job. OldApp then needs to check for the result of the operation using that URL.
The thread that you fork on the server needs to implement the Callable interface. I would also recommend using a thread pool for this. The GET URL for the forked job can then check the status of the Future object and return it to the user.
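A minimal sketch of that design (the JobRegistry class and its method names are illustrative): work is submitted to a pool, the Future is keyed by the job ID returned in the 202, and the GET handler inspects it.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class JobRegistry {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // The POST handler calls this, then returns 202 with Location: /jobs/{id}.
    public String submit(Callable<String> work) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(work));
        return id;
    }

    // The GET /jobs/{id} handler calls this: null means "still running, retry later".
    public String resultIfDone(String id) throws Exception {
        Future<String> f = jobs.get(id);
        return (f != null && f.isDone()) ? f.get() : null;
    }
}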
I'm having trouble establishing AsyncContexts for users and using them to push notifications to them. On page load I have some jQuery code to send the request:
$.post("TestServlet",{
action: "registerAsynchronousContext"
},function(data, textStatus, jqXHR){
alert("Server received async request"); //Placed here for debugging
}, "json");
And in "TestServlet" I have this code in the doPost method:
HttpSession userSession = request.getSession();
String userIDString = userSession.getAttribute("id").toString();
String paramAction = request.getParameter("action");

if (paramAction.equals("registerAsynchronousContext")) {
    AsyncContext userAsyncContext = request.startAsync();
    HashMap<String, AsyncContext> userAsynchronousContextHashMap =
            (HashMap<String, AsyncContext>) getServletContext().getAttribute("userAsynchronousContextHashMap");
    userAsynchronousContextHashMap.put(userIDString, userAsyncContext);
    getServletContext().setAttribute("userAsynchronousContextHashMap", userAsynchronousContextHashMap);
    System.out.println("Put asynchronous request in global map");
}
// userAsynchronousContextHashMap is created by a ContextListener at web-app startup
However, according to Opera Dragonfly (a debugging tool like Firebug), the server sends an HTTP 500 response about 30,000 ms after the request is sent.
Any response created with userAsyncContext.getResponse().getWriter().print(SOME_JSON) and sent before the HTTP 500 response is not received by the browser, and I don't know why. A response sent using the regular response object (response.getWriter().print(SOME_JSON)) is received by the browser ONLY if all the code in the "if" statement dealing with AsyncContext is absent.
Can someone help me out? I have a feeling this is due to my misunderstanding of how the asynchronous API works. I thought that I would be able to store these AsyncContexts in a global map, then retrieve them and use their response objects to push things to the clients. However, it doesn't seem as if the AsyncContexts can write back to the clients.
Any help would be appreciated.
I solved the issue. It seems as though there were several problems wrong with my approach:
In Glassfish, AsyncContext objects all have a default timeout period of 30,000 milliseconds (30 seconds). Once this period expires, the entire response is committed back to the client, meaning you won't be able to use it again.
If you're implementing long polling this might not be much of an issue (since you'll end up sending another request after the response anyway), but if you wish to implement streaming (sending data back to the client without committing the response) you'll want to either increase the timeout or get rid of it altogether.
This can be accomplished with an AsyncContext's .setTimeout() method. Do note that while the spec states: "A timeout value of zero or less indicates no timeout.", Glassfish (at this time) seems to interpret 0 as being "immediate response required", and any negative number as "no timeout".
If you're implementing streaming, you must use the PrintWriter's .flush() method to push the data to the client after you're done using its .print(), .println(), or .write() methods to write the data.
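Putting those two points together, here is a minimal sketch of pushing data through a stored AsyncContext (the Pusher class is illustrative; setTimeout, getResponse, and complete come from the standard Servlet 3.0 API):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import javax.servlet.AsyncContext;

public class Pusher {
    // Pushes one JSON payload through a stored AsyncContext without committing
    // the response, so more data can follow later.
    static void push(Map<String, AsyncContext> contexts, String userId, String json)
            throws IOException {
        AsyncContext ctx = contexts.get(userId);
        // Note: ctx.setTimeout(-1) should ideally be called right after
        // startAsync(), when the context is first registered.
        PrintWriter out = ctx.getResponse().getWriter();
        out.print(json);
        out.flush();          // stream to the client; the response stays open
        // ctx.complete();    // call only when you want to end the exchange
    }
}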
On the client side, streamed data will trigger a readyState of 3 ("interactive", which means the browser is in the process of receiving a response). jQuery has no easy way to handle a readyState of 3, so you're going to have to revert to plain JavaScript to both send the request and handle the response if you're implementing streaming.
I have noticed that in Glassfish, even if you call .setTimeout() with a negative number, the connection is broken anyway. To fix this I had to use the Glassfish admin configuration (asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http...) and set the timeout there to -1. All this to stop Glassfish from closing the connections after 30 seconds.
Salesforce can send up to 100 requests inside one SOAP message. While sending this type of bulk Outbound Message request, my PHP script finishes executing but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the Outbound Message log (monitoring), I see all the messages in a pending state with the delivery failure reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error?
I have tried these methods to increase the execution time on my server, as I have no access to the Salesforce side:
set_time_limit(0); // in the script
max_execution_time = 360 ; Maximum execution time of each script, in seconds
max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume
I used the high settings just for testing.
Any thoughts as to why this is failing the ACK delivery back to Salesforce?
Here is some of the code:
This is how I accept and send the ACK file for the incoming SOAP request:
$data = 'php://input';
$content = file_get_contents($data);

if ($content) {
    respond('true');
} else {
    respond('false');
}
The respond function
function respond($tf) {
$ACK = <<<ACK
<?xml version = "1.0" encoding = "utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body>
<notifications xmlns="http://soap.sforce.com/2005/09/outbound">
<Ack>$tf</Ack>
</notifications>
</soapenv:Body>
</soapenv:Envelope>
ACK;
print trim($ACK);
}
These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in one SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds.
Could it be Apache? PHP?
I have also tested just accepting the 100-request SOAP response and sending back the ACK, and the queue clears out, so I know it's on my side of things.
I see no errors in the Apache log; the script runs fine.
I did find some info on the Salesforce site but still no luck. Here is the link.
Also I'm using the PHP Toolkit 11 (From Salesforce).
Other forum with good SF help
Thanks for any insight into this,
--Phill
UPDATE:
If I receive the incoming message and print the response, should this happen first regardless of whether I do anything else after? Or does it wait for my process to finish and then print the response?
UPDATE #2:
Okay, I think I have found the problem:
PHP uses a single-threaded processing approach and will not send back the ACK file until the thread has completed its processing. Is there a way to make this a multi-threaded process?
Thread #1 - accept the incoming SOAP request and send back the ACK
Thread #2 - Process the SOAP request
I know I could break it up using something like a DB table or flat file, but is there a way to accomplish this without doing that?
I'm going to try to close the socket after the ACK submission and continue the processing; fingers crossed it will work.
Sounds like the outbound message is hitting the timeout. Other users have reported timeouts as low as 10 seconds (see forum link below). The sandbox instance that I use (cs1) is timing out after about 1 minute, from my testing. It's possible that the timeout is an organization or instance level setting that Salesforce controls.
Two things you could try:
1. Open a support ticket with Salesforce to see if they can increase the timeout value for outbound messages. From my experience, there are a lot of settings that they can modify at the organization level; this might be one of them.
2. Offload the processing of your data, so that the ACK is sent back to Salesforce immediately. The actual processing of your data then takes place asynchronously (e.g. a message queue, a separate thread, etc.).
Some other resources that might be helpful:
related Salesforce forum discussion
Outbound messaging documentation
I think they time out while waiting for your script to end.
There is a way you could try to fix this:
Output the envelope with the ACK message at the beginning and then flush it, so that their server gets it before you end processing. No threading, just a plain rethinking of priorities :)
Read this for the best info on flushing content.
Are you 100% sure that Salesforce will wait the amount of time your scripts need to run? 80 seconds seems like a long time to me.
If all requests failed, I would guess that Salesforce expects you to set the Content-Type header appropriately, but this does not seem to be the case.
I don't know about Salesforce, but if you want to do some multithreading with PHP you should take a look at this code example, and more precisely at pcntl_fork().
N.B.: pcntl is not enabled by default and won't work on Windows platforms.
So what I've done is:
Accept all incoming OBMs and parse them into a DB
When this is done, kick off a process that runs in the background (actually, I send it to the background so the script can end)
Send the ACK file back
Just accepting the raw data, parsing it into fields, and inserting it into a DB is fairly quick. Then I issue a Linux command-line command that sends the processing script to run in the background. Then I send the ACK file to SF, and the script ends within the allotted time. It is cumbersome to split the script process into two separate stages, but it works.
I am building a small API around the JMS API for a project of mine. Essentially, we are building code that will handle the connection logic and simplify publishing messages by providing a method like Client.send(String message).
One of the ideas being discussed right now is that we provide a means for the users to attach interceptors to this client. We will apply the interceptors after preparing the JMS message and before publishing it.
For example, if we wanted to timestamp a message and had written an interceptor for that, this is how we would apply it:
// ...some code...
Message message = session.createMessage();
// ...do all the current processing on the message and set the body...
for (Interceptor interceptor : listOfInterceptors) {
    interceptor.apply(message);
}
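For completeness, a hypothetical interceptor contract that this loop assumes (the Interceptor interface is ours, not part of JMS):

import javax.jms.JMSException;
import javax.jms.Message;

// Hypothetical contract applied to each prepared message before publishing.
public interface Interceptor {
    void apply(Message message) throws JMSException;
}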
One of the interceptors we thought of would compress the message body. But when we try to read the body of the message in the interceptor, we get a MessageNotReadableException. In the past I normally compressed the content before setting it as the body of the message, so I never had to worry about this exception.
Is there any way of getting around this exception?
It looks like your JMS client attempts to read a write-only message. Your interceptor cannot work this way; please elaborate on how you were compressing messages earlier.
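For context, assuming the body is a BytesMessage: a freshly created one is write-only until reset() is called, which flips it to read-only mode, and that is the usual source of MessageNotReadableException. A minimal sketch of the distinction, using the standard javax.jms API:

import javax.jms.BytesMessage;
import javax.jms.JMSException;
import javax.jms.Session;

public class BytesMessageModes {
    // Demonstrates the write-only/read-only modes of a BytesMessage.
    static byte[] roundTrip(Session session, byte[] payload) throws JMSException {
        BytesMessage msg = session.createBytesMessage();
        msg.writeBytes(payload);            // legal: message is in write-only mode

        // msg.readBytes(new byte[1]);      // would throw MessageNotReadableException here

        msg.reset();                        // switch to read-only mode, cursor at start
        byte[] out = new byte[(int) msg.getBodyLength()];
        msg.readBytes(out);                 // now legal
        return out;
    }
}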