Java email tracking pixel: keep track of reading time - java

I am working on an email tracking pixel project, and I want to make sure that users spend at least 30 seconds reading an email before considering it "read".
I am using Java Spring Boot for the backend server and HTML to create the email templates.
In my template I put this pixel for tracking:
<img id="tr_pixel" src="https://localhost:8080/api/v1/images/1"/>
Once the image is loaded, it sends a request to my server:
@GetMapping("/images/{emailID}")
public ResponseEntity<byte[]> getImagePixel(@PathVariable String emailID) {
    try {
        // wait 30 seconds before saving the event
        Thread.sleep(30000);
        // if the connection with the client is lost, throw an exception and skip the next lines
        // save tracking only if the user spent > 30s
        service.saveTracking(emailID);
        return ok().contentType(IMAGE_PNG).body(pixel);
    } catch (ConnectionLostException e) {
        // ConnectionLostException is a placeholder: catch a "connection lost" error if the client
        // closes the email before 30s or never receives the response
        return null;
    }
}
Is there any way to check whether the connection with the client is lost, or whether the client actually received the HTTP response?

Perhaps a 302 redirect will help you.
It is enough to define not one but two endpoints.
The first endpoint, @GetMapping("/wait30second/{emailID}"), receives the request, sleeps for 30 seconds and then sends a 302 redirect to the second endpoint, ...your-site.../images/{emailID} (see for example https://www.baeldung.com/spring-redirect-and-forward).
The second endpoint, @GetMapping("/images/{emailID}"), simply records the follow-up access to the pixel via service.saveTracking(emailID), without any 30-second pause. The 302 redirect is performed by the client, so if the second request never arrives, the email was not kept open for 30 seconds and was not read.
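A minimal sketch of that two-endpoint idea in Spring (the TrackingService, the pixel bytes and the URL paths are illustrative assumptions, not a definitive implementation):

import java.net.URI;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/v1")
public class PixelController {

    private final TrackingService service;    // assumed: the service from the question
    private final byte[] pixel = new byte[0]; // placeholder: a real 1x1 transparent PNG in practice

    public PixelController(TrackingService service) {
        this.service = service;
    }

    // First endpoint: this is what the <img> tag should point at.
    // It holds the request for 30 seconds, then redirects the client.
    @GetMapping("/wait30second/{emailID}")
    public ResponseEntity<Void> waitThenRedirect(@PathVariable String emailID) throws InterruptedException {
        Thread.sleep(30_000); // note: blocks one request thread per open email
        return ResponseEntity.status(HttpStatus.FOUND)
                .location(URI.create("/api/v1/images/" + emailID))
                .build();
    }

    // Second endpoint: only reached if the client followed the redirect,
    // i.e. the email was still open 30 seconds later.
    @GetMapping("/images/{emailID}")
    public ResponseEntity<byte[]> getImagePixel(@PathVariable String emailID) {
        service.saveTracking(emailID);
        return ResponseEntity.ok().contentType(MediaType.IMAGE_PNG).body(pixel);
    }
}

With this layout the template's pixel would reference /api/v1/wait30second/{emailID} instead of /images/{emailID}. Keep in mind that holding the request open with Thread.sleep() ties up a server thread for every email currently being read.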
However, keep in mind that with heavy use of this kind of tracking there is a high risk of your emails being flagged as spam.

Related

How to send an email when internet connection return back in java?

I'm working on a project that downloads some files. If a problem happens with the internet connection, an exception is caught. When this happens I should send an email to some people.
I need to send the email, but there is no internet connection, so I have two ideas:
1. Try to send the email and, because there is no connection, save it until the connection comes back, then send it again.
2. Make a thread that checks whether the connection is stable, and send the email once it is.
I have another idea: an infinite loop that checks the internet connection, sends the email and ends the loop when the connection is back.
Can anyone help with that?
There isn't really a need to add a check for a stable internet connection. Just keep trying to send the email until it succeeds. The logic for your email notification thread seems like it would be very simple:
long RETRY_DELAY = 60 * 1000; // one minute between attempts
boolean emailSent = false;
while (!emailSent) {
    emailSent = sendEmail();
    if (!emailSent) {
        Thread.sleep(RETRY_DELAY); // Thread.sleep() throws InterruptedException, so handle or declare it
    }
}
The sendEmail() method should return false if there was any issue sending the email. E.g. exceptions related to networking.
You can set the eventual status in some other object and add a maximum number of retries, or maximum length of time to continue retrying until you give up etc. Those would just add more conditions to terminate the loop. You can also interrupt that emailer thread if there is a manual abort.
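For instance, a minimal sketch of that thread with a retry cap and a manual-abort path (sendEmail(), the delay and the cap are all illustrative):

// Illustrative sketch: retry until success, until the cap is hit, or until interrupted.
public class EmailRetryTask implements Runnable {

    private static final long RETRY_DELAY_MS = 60_000; // one minute between attempts
    private static final int MAX_RETRIES = 60;         // give up after roughly an hour

    @Override
    public void run() {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            if (sendEmail()) {
                return; // success, stop retrying
            }
            try {
                Thread.sleep(RETRY_DELAY_MS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // manual abort via thread interruption
                return;
            }
        }
        // all retries exhausted; record the failure status somewhere if needed
    }

    private boolean sendEmail() {
        // placeholder: real sending logic that returns false on any networking exception
        return false;
    }
}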

Handle client-side request timeout in java

A client sends a request and catches a timeout exception. However, the server is still processing the request and saving it to the database. Before that happens, the client has already sent a second request, which duplicates the record in the database. How do I prevent that from happening? I'm using Java servlets and JavaScript.
A few suggestions:
1) Increase the client timeout.
2) Make the server more efficient so it can respond faster.
3) Get the server to respond with an intermediate "I'm working on it" response before returning the main response.
4) Does the server need to do all the work before it responds to the client, or can some of it be offloaded to a separate process to run later?
A client sends a request and catches a timeout exception. However the server is still processing the request
Make the servlet generate some output (can be just blank spaces) and flush the stream every so often (every 15 seconds for example).
If the connection has been closed on the client side, the write will fail with a socket exception.
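A rough sketch of that approach (the work methods are placeholders; since PrintWriter swallows IOExceptions, checkError() is used here to detect that the client-side connection is gone):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: periodically emit filler output so a closed client connection is detected early.
public class LongTaskServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        PrintWriter out = resp.getWriter();
        while (!workIsDone()) {     // placeholder for the real processing loop
            doSomeWork();           // placeholder: one slice of the long-running job
            out.print(' ');         // harmless blank the client will ignore
            out.flush();
            if (out.checkError()) { // true once a write has failed, e.g. the client closed the connection
                abortWork();        // placeholder: stop (and roll back) the unfinished work
                return;
            }
        }
        out.print(buildResult());   // placeholder: the real response payload
    }

    private boolean workIsDone() { return true; }
    private void doSomeWork() { }
    private void abortWork() { }
    private String buildResult() { return "done"; }
}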
Before that happening, the client already sent a second request which doubles the record on the database
Use the atomicity of the database, for example, a unique key. Start the process by creating a unique record (maybe in some "unfinished" status), it will fail if the record already exists.
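As a sketch, with JDBC and a primary key on the request identifier (the table, column and class names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

// Sketch: claim the request atomically before doing any work.
// Assumes a table like: CREATE TABLE requests (request_id VARCHAR(64) PRIMARY KEY, status VARCHAR(16))
public class RequestGuard {

    public boolean tryClaim(Connection conn, String requestId) throws SQLException {
        String sql = "INSERT INTO requests (request_id, status) VALUES (?, 'UNFINISHED')";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, requestId);
            ps.executeUpdate();
            return true;  // this caller owns the request and may process it
        } catch (SQLIntegrityConstraintViolationException e) {
            return false; // duplicate key: the first request is already being handled
        }
    }
}

The second, retried request then fails the insert and can simply wait for (or poll) the result of the first one.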

Browser itself re-sending a request after a failure, when the internet reconnects

I'm using GWT (Java to JavaScript) as the front end, and the RPC mechanism (AJAX) to make server requests (servlets are the keys).
Everything is going smoothly as of now.
Now a test case has come up:
1) Make a request to the server.
2) In between, disconnect the internet on the client (user) side.
3) We handle the resulting InvocationException by showing a message.
@Override
public void onFailure(Throwable caught) {
    NTMaskAlert.unMask();
    if (caught instanceof InvocationException) {
        NTFailureMessage.showFailureException(caught, "Network disconnected");
    }
    onNTFailure(caught);
}
4) Now the client reconnects and the user makes a request.
Here is the interesting point.
As soon as the internet is reconnected, the browser starts processing the previous request; I observed this in Firebug. If I disconnect twice and reconnect twice, the request automatically goes out twice and the data is duplicated.
The reason for this is simply that this behaviour is typically what users want.
That is, if they are temporarily off the network, for example because the wireless router is down, then most of the time they expect that the browser, mail client, etc. will attempt to reconnect when the network is back; they don't expect to have to go to every window and "refresh" to get it working again.

Servlet 3.0: Can't send an asynchronous response?

I'm having trouble establishing AsyncContexts for users and using them to push notifications to them. On page load I have some jQuery code to send the request:
$.post("TestServlet",{
action: "registerAsynchronousContext"
},function(data, textStatus, jqXHR){
alert("Server received async request"); //Placed here for debugging
}, "json");
And in "TestServlet" I have this code in the doPost method:
HttpSession userSession = request.getSession();
String userIDString = userSession.getAttribute("id").toString();
String paramAction = request.getParameter("action");

if (paramAction.equals("registerAsynchronousContext")) {
    AsyncContext userAsyncContext = request.startAsync();
    HashMap<String, AsyncContext> userAsynchronousContextHashMap =
            (HashMap<String, AsyncContext>) getServletContext().getAttribute("userAsynchronousContextHashMap");
    userAsynchronousContextHashMap.put(userIDString, userAsyncContext);
    getServletContext().setAttribute("userAsynchronousContextHashMap", userAsynchronousContextHashMap);
    System.out.println("Put asynchronous request in global map");
}
//userAsynchronousContextHashMap is created by a ContextListener on the start of the web-app
However, according to Opera Dragonfly (a debugging tool like Firebug), it appears that the server sends an HTTP 500 response about 30000ms after the request is sent.
Any response created with userAsyncContext.getResponse().getWriter().print(SOME_JSON) and sent before the HTTP 500 response is not received by the browser, and I don't know why. A response sent with the regular response object (response.print(SOME_JSON)) is received by the browser ONLY if all the code in the "if" statement dealing with AsyncContext is removed.
Can someone help me out? I have a feeling this is due to my misunderstanding of how the asynchronous API works. I thought that I would be able to store these AsyncContexts in a global map, then retrieve them and use their response objects to push things to the clients. However, it doesn't seem as if the AsyncContexts can write back to the clients.
Any help would be appreciated.
I solved the issue. It seems as though there were several problems with my approach:
In Glassfish, AsyncContext objects all have a default timeout period of 30,000 milliseconds (0.5 minutes). Once this period expires, the entire response is committed back to the client, meaning you won't be able to use it again.
If you're implementing long-polling this might not be much of an issue (since you'll end up sending another request after the response anyway), but if you wish to implement streaming (sending data back to the client without committing the response) you'll want to either increase the timeout or get rid of it altogether.
This can be accomplished with an AsyncContext's .setTimeout() method. Do note that while the spec states "A timeout value of zero or less indicates no timeout.", Glassfish (at this time) seems to interpret 0 as "immediate response required" and any negative number as "no timeout".
If you're implementing streaming, you must use the PrintWriter's .flush() method to push the data to the client after you're done using its .print(), .println() or .write() methods to write the data.
On the client side, if you've streamed the data, it will trigger a readyState of 3 ("interactive", which means the browser is in the process of receiving a response). If you are using jQuery, there is no easy way to handle a readyState of 3, so you're going to have to fall back to plain JavaScript to both send the request and handle the response if you're implementing streaming.
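A rough server-side sketch of the first three points, registering the context and later streaming to it (the registry class, map and JSON payload here are illustrative assumptions):

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServletRequest;

// Sketch of the two halves: registering a user's AsyncContext, then streaming to it later.
public class AsyncPushRegistry {

    // a ConcurrentHashMap avoids having to re-store the map in the ServletContext on every change
    private final Map<String, AsyncContext> contexts = new ConcurrentHashMap<>();

    public void register(String userId, HttpServletRequest request) {
        AsyncContext ctx = request.startAsync();
        ctx.setTimeout(-1); // negative: no timeout (Glassfish treats 0 as "respond immediately")
        contexts.put(userId, ctx);
    }

    public void push(String userId, String json) throws IOException {
        AsyncContext ctx = contexts.get(userId);
        if (ctx == null) {
            return; // no open connection for this user
        }
        PrintWriter writer = ctx.getResponse().getWriter();
        writer.print(json); // e.g. {"event":"something-happened"} -- illustrative payload
        writer.flush();     // without flush() nothing reaches the client before the timeout
        // call ctx.complete() only when you are completely done with this client
    }
}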
I have noticed that in Glassfish, if you use AsyncContext and call .setTimeout() with a negative number, the connection is broken anyway. To fix this I had to use the Glassfish admin configuration: asadmin set
configs.config.server-config.network-config.protocols.protocol.http-listener-1.http. and set the timeout to -1. All this to stop Glassfish from closing the connections after 30 seconds.

Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue - See update #2

Salesforce can send up to 100 requests inside one SOAP message. While sending this type of bulk outbound message request, my PHP script finishes executing but Salesforce fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring), I see all the messages in a pending state with the delivery failure reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error?
I have tried these methods to increase the execution time on my server as I have no access on the Salesforce side:
set_time_limit(0); // in the script
max_execution_time = 360 ; Maximum execution time of each script, in seconds
max_input_time = 360 ; Maximum amount of time each script may spend parsing request data
memory_limit = 32M ; Maximum amount of memory a script may consume
I used the high settings just for testing.
Any thoughts as to why this is failing the ACK delivery back to Salesforce?
Here is some of the code:
This is how I accept and send the ACK file for the incoming SOAP request:
$data = 'php://input';
$content = file_get_contents($data);

if ($content) {
    respond('true');
} else {
    respond('false');
}
The respond function
function respond($tf) {
    $ACK = <<<ACK
<?xml version = "1.0" encoding = "utf-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <soapenv:Body>
        <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
            <Ack>$tf</Ack>
        </notifications>
    </soapenv:Body>
</soapenv:Envelope>
ACK;
    print trim($ACK);
}
These are in a generic script that I include in the script that uses the data for a specific workflow. I can process about 25 requests (that are in one SOAP response), but once I go over that I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds.
Could it be Apache? PHP?
I have also tested just accepting the 100-request SOAP message and immediately sending the ACK, and the queue clears out, so I know it's on my side of things.
I see no errors in the Apache log; the script runs fine.
I did find some info on the Salesforce site but still no luck. Here is the link.
Also, I'm using the PHP Toolkit 11 (from Salesforce).
Other forum with good SF help
Thanks for any insight into this,
--Phill
UPDATE:
If I receive the incoming message and print the response, should this happen first, regardless of whether I do anything else afterwards? Or does it wait for my processing to finish and then print the response?
UPDATE #2:
Okay, I think I have found the problem:
PHP uses a single-threaded processing approach and will not send back the ACK file until the thread has completed its processing. Is there a way to make this a multi-threaded process?
Thread #1 - accept the incoming SOAP request and send back the ACK
Thread #2 - process the SOAP request
I know I could break it up into something like a DB table or flat file, but is there a way to accomplish this without doing that?
I'm going to try to close the socket after the ACK submission and continue the processing, cross my fingers it will work.
Sounds like the outbound message is hitting the timeout. Other users have reported timeouts as low as 10 seconds (see the forum link below). The sandbox instance that I use (cs1) times out after about 1 minute, from my testing. It's possible that the timeout is an organization- or instance-level setting that Salesforce controls.
Two things you could try:
1) Open a support ticket with Salesforce to see if they can increase the timeout value for outbound messages. From my experience, there are a lot of settings that they can modify at the organization level - this might be one of them.
2) Offload the processing of your data so that the ACK is sent back to Salesforce immediately, and the actual processing of your data takes place asynchronously, i.e. a message queue, separate thread, etc.
Some other resources that might be helpful:
related Salesforce forum discussion
Outbound messaging documentation
I think they time out while waiting for your script to end.
There is a way you could try to fix this.
Output the envelope with the ACK message at the beginning and then flush it, so that their server gets it before you finish processing. No threading, just rethinking the priorities :)
Read this for the best info on flushing content.
Are you 100% sure that Salesforce will wait the amount of time your scripts need to run? 80 seconds seems like a long time to me.
If all requests failed, I would guess that Salesforce expects you to set the Content-Type header appropriately, but this does not seem to be the case.
I don't know about Salesforce, but if you want to do some multithreading with PHP you should take a look at this code example and, more precisely, at pcntl_fork().
N.B.: pcntl is not enabled by default and won't work on Windows platforms.
So what I've done is:
1) Accept all incoming OBMs and parse them into a DB.
2) When this is done, kick off a process that runs in the background (actually I send it to the background so the script can end).
3) Send the ACK file back.
Just accepting the raw data, parsing it into fields and inserting it into a DB is fairly quick. Then I issue a Linux command-line call that sends the processing script to run in the background. Finally I send the ACK file to Salesforce and the script ends within the allotted time. It is cumbersome to split the script into two separate stages, but it works.
