The scenario is sending a password reset mail to the user from a web request (and possibly other mail-related tasks in the future).
The arguments I bring to the table for queuing:
I believe web requests should be handled as fast as possible
Decoupling the send action from the request makes it easier to externalize the mail system later (if required in the future)
The arguments I recognize against queuing:
The user does not get feedback if something goes wrong during the sending of the message
What are more arguments in this discussion? And to those in favor of queuing, how would you implement the queue? Scheduled action? Infinite dequeuing task (with interval, of course)?
Thanks!
I would suggest decoupling the actual sending of mail from your app's business logic.
Do this asynchronously: use a queue, or at least a separate thread, for sending such notifications.
Sending an email can be a time-consuming operation,
even if you use your own internal mail server that is close to your app.
An SMTP conversation consists of several requests and responses.
Do not treat sending a mail as a transactional action.
When the target SMTP server replies with 250 OK in response to the DATA command, it merely takes responsibility for the mail, nothing more.
Delivery can still fail later if the next server in the chain is unable to deliver the mail (read about DSN, aka bounce).
Last but not least, think about failure modes.
What if your business-critical functionality is slowed down or blocked by an auxiliary one (email notification)? Not good, I guess.
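To answer the "how would you implement the queue" part of the question: one minimal in-process option is a BlockingQueue drained by a single worker thread. This is only a sketch; the class and the sendPasswordResetMail helper are made up for illustration, and a real system would need persistence and retry handling.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: the web request only enqueues; a worker thread drains the queue.
public class MailQueue {

    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    public MailQueue() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String recipient = pending.take();   // blocks until a mail job arrives
                    sendPasswordResetMail(recipient);    // hypothetical helper wrapping JavaMail
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } catch (Exception e) {
                    // log and decide whether to requeue; the user gets no feedback here
                    e.printStackTrace();
                }
            }
        }, "mail-worker");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the web request; returns immediately.
    public void enqueue(String recipient) {
        pending.add(recipient);
    }

    private void sendPasswordResetMail(String recipient) {
        // actual JavaMail / SMTP code would go here
    }
}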
You definitely don't want to do the send synchronously since the mail server may be slow.
Send a JMS message and use an MDB to send the email.
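A rough sketch of that idea, assuming a Java EE 7 container; the JNDI queue name and the mail-sending body are placeholders, not a definitive implementation.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// MDB that consumes mail requests from a queue and sends the email outside the web request.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/MailQueue") // placeholder JNDI name
})
public class MailSenderMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String recipient = ((TextMessage) message).getText();
            // actual JavaMail code to build and send the email goes here
        } catch (Exception e) {
            // throwing here lets the container redeliver the message
            throw new RuntimeException(e);
        }
    }
}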
In a Java EE 6+ scenario you can use the @Asynchronous annotation on an EJB method. It returns a Future<V>, so you can continue with your processing and ask later whether the task has finished, while it executes in another thread.
This way you can accept many requests quickly, you decouple the send action from the request, and you can still get feedback.
http://docs.oracle.com/javaee/6/tutorial/doc/gkkqg.html
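A minimal sketch of such a bean; the class name and the Boolean result convention are my own choices, and the actual JavaMail code is omitted.

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class AsyncMailerBean {

    // Runs in a container-managed thread; the caller gets a Future immediately.
    @Asynchronous
    public Future<Boolean> sendPasswordResetMail(String recipient) {
        try {
            // JavaMail code to send the message goes here
            return new AsyncResult<>(Boolean.TRUE);
        } catch (Exception e) {
            return new AsyncResult<>(Boolean.FALSE);
        }
    }
}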
You may think that requests should be serviced as fast as possible, but what about the user? What does he think?
The user needs his password reset. He doesn't care how long that takes. If he can't complete that request he can't do anything at all.
Don't queue.
I think you should go with a queue, because it helps performance and also lets you check whether the password reset request arrived from the correct source.
You could use a Map for the queue implementation: use the email id as the key and a unique request reference as the value, and delete each entry after a fixed time period.
For fast email sending you can also create a simple thread class that sends the emails, and start a new thread with the relevant data passed in as arguments; scheduling of these threads will be managed automatically by the web container.
Related
My server requires sending emails pretty frequently. The emails are heavy; they have attachments as well as inline images in them.
My present code blocks until an email is sent (losing 5 to 6 seconds for every email).
What is the best approach for handling the emails with out blocking the main code flow?
If you are suggesting threads, please elaborate on how this could be handled efficiently.
There are multiple ways to achieve this functionality.
Synchronous Call
This is the one you are already using. The code (synchronously) invokes the Java Mail API and waits for the API to complete execution. The process may take time depending on the complexity of building the email message (fetching records from the database, reading images/documents (attachments), communication with the mail server, etc.).
Trade-offs
For individual requests (web/desktop), response latency will increase based on the time it takes to construct and send the email.
An exception while sending the email may require a redo of the entire process (if retried).
Transactional data (e.g. in the DB) may be rolled back due to an exception while sending the email. This may not be the desired behavior.
Overall application latency will increase if similar functionality is invoked by multiple users concurrently.
Email retry functionality may not be possible if the email sending code is tightly coupled with other functional code.
Multithreading Approach
Create a separate thread to asynchronously send an email. The calling code does not have to wait for the email send functionality to complete and can execute the rest of the code. Ideally, make use of a thread pool instead of blindly creating new threads (see the sketch after the trade-offs below).
Trade-offs
Though request latency will go down, this is still not reliable. Any exception that occurs while constructing/sending the email may result in no email being sent to the user.
Email sending functionality can't be distributed across multiple machines.
Retry functionality is possible, since the email code is separated into its own class. This class can be called independently, without needing to redo other things.
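As a sketch of the thread-pool variant mentioned above (the pool size, class names, and the EmailService wrapper are assumptions for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Bounded pool so a burst of requests cannot spawn an unbounded number of threads.
public class AsyncEmailDispatcher {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Returns immediately; the heavy work (attachments, inline images, SMTP) runs in the pool.
    public void dispatch(EmailService emailService, String recipient) {
        pool.submit(() -> {
            try {
                emailService.send(recipient);   // hypothetical wrapper around the Java Mail calls
            } catch (Exception e) {
                e.printStackTrace();            // or log and push to a retry store
            }
        });
    }

    public void shutdown() {
        pool.shutdown();
    }

    // Hypothetical interface standing in for the real mail-sending code.
    public interface EmailService {
        void send(String recipient) throws Exception;
    }
}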
Asynchronous Processing
Create a class which accepts an email request and stores it in either a database or messaging infrastructure (e.g. JMS). Message listeners will process each task as it arrives and update the status of each task (a rough sketch follows after the trade-offs).
Trade-offs
Email requests can be processed in distributed mode.
Email retry is possible without side effects.
More complex implementation, as multiple components are involved in persisting and processing email requests.
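A rough sketch of the JMS flavor of this approach; the JNDI lookup names are placeholders and the persistence/status-update part is only hinted at in comments.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class EmailRequestQueue {

    // Producer side: the application code just drops a request on the queue and returns.
    public void submit(String recipient) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/EmailQueue");                             // placeholder
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(recipient));
        } finally {
            connection.close();
        }
    }

    // Consumer side: runs out-of-band, possibly on another machine.
    public static class EmailRequestListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                String recipient = ((TextMessage) message).getText();
                // build and send the email here, then update the task status in the DB
            } catch (Exception e) {
                e.printStackTrace(); // or rethrow so the broker redelivers the message
            }
        }
    }
}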
You can efficiently do this if you spawn a thread for every email you have to send.
One way to do it is as follows:
You would need a class that is just a pure extension of Thread:
public class MailSenderThread extends Thread {
    @Override
    public void run() {
        try {
            // Code to send email
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
And when you want to send email, you can do:
new MailSenderThread().start();
That is the shortest/easiest way I could think of.
You can refer to an example in my public repository as well. It's off-topic, but it gets the concept across.
I am learning sockets and the server/client model and having a hard time understanding the concept. If a client sends a request, can the server send more than one response? Or do we have to put everything into one response?
For a memory game program: when a client clicks a card, the action sends a request to the server in order to turn the card over in every player's program. If the second card does not match, the server tells the players to wait 2 seconds, turns the 2 cards back, and then assigns the turn to the next player. Can a server do this in multiple responses, or does it have to do it in a single response? Since no client requests some of these responses, I don't know if it is achievable or not.
If you're talking about TCP connections, then after the connection has been established the client and server are equivalent; both are free to send data as long and as much as they like and/or to shut down their end of the connection.
Edit: After several passes I think I have understood what the second paragraph of your question is aiming for.
There is, of course, nothing which would stop the server from doing anything. What your server seems to do, most of the time, is block on an InputStream.read() operation. If you want the server to operate even when no network input happens, one solution might be to use a read timeout, or to check the input stream for readability before actually reading.
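A minimal sketch of the read-timeout idea with a plain java.net.Socket; the handler class, buffer handling, and the game-specific logic are assumptions for illustration.

import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class GameClientHandler implements Runnable {

    private final Socket socket;

    public GameClientHandler(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try {
            socket.setSoTimeout(200); // read() now throws SocketTimeoutException after 200 ms
            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[1024];
            while (!socket.isClosed()) {
                try {
                    int n = in.read(buffer);
                    if (n == -1) break;          // client closed the connection
                    // handle the client's request here (e.g. "card clicked")
                } catch (SocketTimeoutException timeout) {
                    // no input right now: the server is free to push unsolicited
                    // messages, e.g. "turn the two cards back" after 2 seconds
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}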
This is not your complete answer.
For one request, you get one response back.
Please read this information on Wikipedia for the basics:
"Request-response, also known as request-reply, is a message exchange pattern in which a requestor sends a request message to a replier system which receives and processes the request, ultimately returning a message in response. This is a simple, but powerful messaging pattern which allows two applications to have a two-way conversation with one another over a channel. This pattern is especially common in client-server architectures.
For simplicity, this pattern is typically implemented in a purely synchronous fashion, as in web service calls over HTTP, which holds a connection open and waits until the response is delivered or the timeout period expires. However, request-response may also be implemented asynchronously, with a response being returned at some unknown later time. This is often referred to as "sync over async", or "sync/async", and is common in enterprise application integration (EAI) implementations where slow aggregations, time-intensive functions, or human workflow must be performed before a response can be constructed and delivered."
Let me try explaining the situation:
There is a messaging system that we are going to incorporate which could either be a Queue or Topic (JMS terms).
1) Producer/Publisher: There is a service A. A produces messages and writes them to a Queue/Topic.
2) Consumer/Subscriber: There is a service B. B asynchronously reads messages from the Queue/Topic. B then calls a web service and passes the message to it. The web service takes a significant amount of time to process the message. (This action need not be processed in real time.)
The message broker is TIBCO.
My intention is: not to miss processing any message from A, and to re-process a message at a later point in time in case the processing failed the first time (perhaps as a batch).
Question:
I was thinking of writing the message to a DB before making the web service call. If the call succeeds, I would mark the message as processed; otherwise as failed. Later, in a cron job, I would process all the requests that had initially failed.
Is writing to a DB a typical way of doing this?
Since you have a fail callback, you can just requeue your Message and have your Consumer/Subscriber pick it up and try again. If it failed because of some problem in the web service and you want to wait X time before trying again, then you can either schedule the web service to be called at a later date for that specific Message (look into ScheduledExecutorService) or do as you described and use a cron job with some database entries.
If you only want it to try again once per message, then keep an internal counter either with the Message or within a Map<Message, Integer> as a counter for each Message.
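A rough sketch of the ScheduledExecutorService idea combined with a per-message retry counter; the retry limit, delay, and callWebService method are placeholders.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WebServiceRetrier {

    private static final int MAX_ATTEMPTS = 3;                 // placeholder limit
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    private final Map<String, Integer> attempts = new ConcurrentHashMap<>();

    public void process(String messageId, String payload) {
        try {
            callWebService(payload);                           // hypothetical slow call
            attempts.remove(messageId);                        // success: forget the counter
        } catch (Exception e) {
            int tried = attempts.merge(messageId, 1, Integer::sum);
            if (tried < MAX_ATTEMPTS) {
                // retry the same message after a delay instead of failing immediately
                scheduler.schedule(() -> process(messageId, payload), 30, TimeUnit.SECONDS);
            } else {
                // give up: persist to the DB for the cron job / dead letter handling
            }
        }
    }

    private void callWebService(String payload) throws Exception {
        // real web service client code goes here
    }
}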
Crudely put, that is the technique, although there could be out-of-the-box solutions available which you can use. Typical ESB solutions support reliable messaging. Have a look at Mule ESB or Apache ActiveMQ as well.
It might be interesting to take advantage of the EMS platform you already have (example 1) instead of building a custom solution (example 2).
But it all depends on the implementation language:
Example 1 - EMS is the "keeper": If I were to solve such a problem with TIBCO BusinessWorks, I would use the "JMS transaction" feature of BW. By encompassing the EMS read and the WS call within the same "group", you ask for them both to be applied, or not at all. If the call fails for some reason, the message is returned to EMS.
Two problems with this solution: you might not have BW, and the first failed operation would block all the rest of the batch process (that may be the desired behavior).
FYI, I understand it is possible to use such a feature in "pure Java", but I have never tried it: http://www.javaworld.com/javaworld/jw-02-2002/jw-0315-jms.html
Example 2 - A DB is the "keeper": If you go with your "DB" method, your queue/topic consumer continuously inserts data into a DB, and every record represents a task to be executed. This feels an awful lot like the simple "mapping engine" problem every integration middleware aims to make easier. You could solve this with anything from custom Java code and multiple threads (DB inserter, WS job handlers, etc.) to an EAI middleware (like BW) or even a BPM engine (TIBCO has many solutions for that).
Of course, there are also other vendors... EMS is a JMS standard implementation, as you know.
I would recommend using the built-in EMS (and JMS) features, as "guaranteed delivery" is what it's built for ;) - no DB needed at all...
You need to be aware of the first decisions you will have to make:
do you need to deliver in order? (then only 1 JMS Session and client acknowledge mode should be used)
how often and at what recurring intervals do you want to retry? (so you don't create an infinite loop for a message that couldn't be processed by that web service)
This is independent of whatever kind of client you use (TIBCO BW or, e.g., Java onMessage() in an MDB).
For "in order" delivery: make sure only 1 JMS Session processes the messages and that it uses client acknowledge mode. After you process the message successfully, you need to acknowledge it, either by calling the JMS API acknowledge() method or, in TIBCO BW, by executing the "commit" activity.
In case of an error you don't acknowledge the message, so it will be put back in the queue for redelivery (you can see how many times it was redelivered in the JMS headers).
EMS's Explicit Client Acknowledge mode also enables you to do the same if order is not important and you need a few client threads to process the messages.
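A minimal sketch of the client-acknowledge pattern in plain JMS; the lookup names are placeholders, and for EMS's explicit client acknowledge mode you would pass the TIBCO-specific session constant instead of the standard one shown here.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderedConsumer {

    public void consume() throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("QueueConnectionFactory"); // placeholder
        Queue queue = (Queue) ctx.lookup("sample.queue");                                // placeholder
        Connection connection = cf.createConnection();
        // One session, CLIENT_ACKNOWLEDGE: nothing leaves the queue until we say so.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();

        while (true) {
            TextMessage message = (TextMessage) consumer.receive(); // assumes text messages
            try {
                callWebService(message.getText()); // hypothetical slow call
                message.acknowledge();             // success: now the broker may discard it
            } catch (Exception e) {
                // no acknowledge: the broker redelivers, honoring the queue's
                // max redelivery / redelivery delay settings
            }
        }
    }

    private void callWebService(String body) throws Exception {
        // real web service client code goes here
    }
}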
For controlling how often the message gets processed, use:
the max redelivery property of the EMS queue (e.g. you could put the message in the dead letter queue after x redeliveries so it does not hold up other messages)
the redelivery delay, to put a "pause" in between redeliveries. This is useful in case the web service needs to recover after a crash and should not be stormed by the same message again and again at a high rate through redelivery.
Hope that helps
Cheers
Seb
I would like to write a method which handles the flow of communication on XMPP. The sequence of things I'd like to do is:
Send message.
Wait for response.
Process the response.
Since we could be waiting longer than 30s for the response (step 2) I'll be teeing up a task to take care of this. This task will need to send the message and then wait for a response on the XMPP servlet handling the incoming message. My question is: How do I wait in the task servlet thread for the response to arrive in the XMPP Servlet?
I'd normally use a listener pattern where the listener would store the message in a field in the Task object and then trigger a Semaphore to signal that a message has arrived. Like this:
Install listener in XMPP servlet in a static field.
Send message.
Wait for the semaphore... Meanwhile, in the XMPP servlet thread, a response will arrive and it will call the listener's callback method, which stores the message and releases the semaphore.
Get message from field and process.
I tried this and it worked fine on the development server. However, when I uploaded to the cloud I found that I'd install the listener on the XMPP servlet (step 1), but then a new instance of the servlet would be instantiated when the message came in and there would no longer be a reference to the listener to call, even though the listener is a static field. My conclusion is that the XMPP servlet is run in a completely different VM, meaning the static field is not shared between that servlet and the task one. Is this correct?
In general, what is the best practice for communication between these servlets? How do I share data (normally I would've stored it in an object's field), and how do I signal from one to the other when events occur (normally I would've used a semaphore)?
Sorry about the long winded question. Tell me if it's not clear and I'll refine it a bit.
Reposting my answer to the same question you asked on the mailing list:
You can't [wait for a response in the sending process]. Instead, you should use an asynchronous pattern: send the message, and register a handler for incoming XMPP messages. That handler should match up the response to the corresponding request (stored in the datastore if necessary) and perform appropriate processing on it.
An App Engine app can be run on any number of machines; synchronization primitives designed for communication between threads will not work.
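A rough sketch of that pattern using the App Engine XMPP and Datastore APIs; the entity kind, property names, and the request-id-in-the-body correlation scheme are made up for illustration.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.xmpp.JID;
import com.google.appengine.api.xmpp.Message;
import com.google.appengine.api.xmpp.MessageBuilder;
import com.google.appengine.api.xmpp.XMPPService;
import com.google.appengine.api.xmpp.XMPPServiceFactory;

public class XmppRequestHelper {

    // Task side: send the request and persist what we need to recognize the reply later.
    public void sendRequest(String requestId, String recipientJid, String body) {
        Entity pending = new Entity("PendingXmppRequest", requestId); // made-up entity kind
        pending.setProperty("recipient", recipientJid);
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        datastore.put(pending);

        XMPPService xmpp = XMPPServiceFactory.getXMPPService();
        Message message = new MessageBuilder()
                .withRecipientJids(new JID(recipientJid))
                .withBody(requestId + ":" + body)   // crude correlation: request id in the body
                .build();
        xmpp.sendMessage(message);
        // Return immediately: the reply is handled by the XMPP servlet, possibly on another instance.
    }
}

The XMPP servlet that receives the reply would then parse the request id out of the incoming message, load the matching PendingXmppRequest entity, and continue processing there, rather than signaling a waiting thread.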
I have reached a point where I will have to send email notifications to my users for any event they have subscribed to. My service is not large, but nothing stops it from becoming one, thus I would like to be prepared.
Currently, I am handling those emails using Spring's mail sender in a fairly synchronous manner (grabbing a bunch of subscribed email addresses from a collection and sending them a mail). However, one can see how unusable this approach may soon become. Thus I am striving for a little bit more parallelism.
Multiple threads may help the situation unless there are too many of them at the same time. I guess I will need something like an in-memory queue which could send batches of emails at certain intervals, opening a new thread. Threads which are finished will be collected in a thread pool and reused.
Suggestions? Perhaps my approach is too complex. Perhaps Spring already offers a way to alleviate the blocking and synchronism. I'd be glad to know.
Rather than send one email to each user, just send a single email to all of the users at once. In other words, make one mail and add every user to the destination list. Then your SMTP server will worry about duplicating it and sending copies to each person.
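A minimal sketch of that approach with Spring's mail support, assuming an already configured JavaMailSender bean; putting the subscribers on BCC is my own addition so recipients don't see each other's addresses.

import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;

public class SubscriptionNotifier {

    private final JavaMailSender mailSender;

    public SubscriptionNotifier(JavaMailSender mailSender) {
        this.mailSender = mailSender;
    }

    // One message, one SMTP conversation; the mail server fans it out to every subscriber.
    public void notifySubscribers(String[] subscriberAddresses, String subject, String body) {
        SimpleMailMessage message = new SimpleMailMessage();
        message.setFrom("noreply@example.com");      // placeholder sender
        message.setBcc(subscriberAddresses);         // all subscribers on one message
        message.setSubject(subject);
        message.setText(body);
        mailSender.send(message);
    }
}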