Long Running Task in Java EE WebApp + icefaces - java

I don't have much knowledge of Java EE, but I am currently learning it.
I've come up with a project which involves a long running task (up to several minutes) invoked by the user. The task consists of several steps. Of course I would like to show the progress to the user.
The project uses Java EE with JPA, JSF and Icefaces. It runs on Glassfish.
An experienced colleague advised me to use the following pattern:
Create a stateless, asynchronous EJB which creates a response object and processes the request
Persist the response object after each step
In the backing bean, query and display the response object
This works well. My only remaining problem is updating the status page to reflect the progress. Currently I do this with a simple JavaScript page reload every x seconds.
Do you know a way/pattern to propagate the current step from the stateless EJB to the JSF backing bean?
Or, and I would prefer this, do you know a way to query the value of a backing bean every x seconds?
Edit:
I am aware of the ICEfaces push mechanism, but I want the status page to be decoupled from the calculating EJB for the following reasons:
The backing bean might already be destroyed because the user left the page and returns later to fetch the result
Multiple sessions and therefore multiple beans may exist for one user
Having a clean design

There are several options for passing this information back. If the EJB lives in the same JVM,
you may as well use a singleton Map and store the progress under a well-known key (e.g. the session ID); a minimal sketch of this is below.
If that is not the case, you will need some shared state or communication channel. There are several options:
store it in a database accessible from both tiers (SQL, JNDI, LDAP; a better solution would be a key-value store like Redis, if you have one)
use some messaging to deposit the state of processing on the web tier side
keep the state in a map on the EJB tier side, and provide another SLSB method to retrieve this state
Your choice is not easy - all of these solutions suck in different ways.
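A minimal sketch of the singleton-map option, assuming the web tier and the EJB container share a JVM; the ProgressRegistry name and the integer percentage are illustrations, not part of any framework:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.ejb.Singleton;

// EJB 3.1 singleton shared by the asynchronous EJB (writer) and the JSF backing bean (reader).
@Singleton
public class ProgressRegistry {

    private final ConcurrentMap<String, Integer> progressByKey = new ConcurrentHashMap<String, Integer>();

    public void update(String key, int percentComplete) { // call this after each step of the long task
        progressByKey.put(key, percentComplete);
    }

    public Integer get(String key) { // poll this from the backing bean
        return progressByKey.get(key);
    }
}

The asynchronous EJB injects the registry and calls update() after every step; the backing bean calls get() with the same key on each poll.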

I accomplished this using a threaded polling model in conjunction with a ProgressBar component.
public void init()
{
    // This method is called by the constructor.
    // It doesn't matter where you define the PortableRenderer, as long as it's before it's used.
    PushRenderer.addCurrentSession("fullFormGroup");
    portableRenderer = PushRenderer.getPortableRenderer();
}

public void someBeanMethod(ActionEvent evt)
{
    // This is a backing bean method called by some UI event (e.g. clicking a button).
    // Since it is part of a JSF/HTTP request, you cannot call portableRenderer.render here.
    copyExecuting = true;
    // Create a status thread and start it
    Thread statusThread = new Thread(new Runnable() {
        public void run() {
            try {
                // message and progress are both linked to components, which change on a portableRenderer.render("fullFormGroup") call
                message = "Copying...";
                // Initiates a render. Note that this cannot be called from a thread which is already part of an HTTP request.
                portableRenderer.render("fullFormGroup");
                do {
                    progress = getProgress();
                    portableRenderer.render("fullFormGroup"); // render the updated progress
                    Thread.sleep(5000); // sleep for a while until it's time to poll again
                } while (copyExecuting);
                progress = getProgress();
                message = "Finished!";
                portableRenderer.render("fullFormGroup"); // push a render one last time
            } catch (InterruptedException e) {
                System.out.println("Child interrupted.");
            }
        }
    });
    statusThread.start();
    // Create a thread which initiates the copy script and triggers the termination of statusThread
    Thread copyThread = new Thread(new Runnable() {
        public void run() {
            File someBigFile = new File("/tmp/foobar/large_file.tar.gz");
            scriptResult = copyFile(someBigFile); // this will take a long time, which is why we spawn a new thread
            copyExecuting = false; // this will cause the statusThread's do..while loop to terminate
        }
    });
    copyThread.start();
}

As you are using ICEfaces, you could use the ICEpush mechanism to render your updates.

Related

Java monitor multi threads outside the class

Since I don't have the code here I'll try to be as clear as I can...
I'm developing a REST service in Java that will receive some parameters (number of threads, amount of messages), create the threads (via a loop), and send that number of messages via MQ (I pass the number of messages when creating each thread).
So for example, if someone requests 50 threads and 5,000 messages each, it will send 250,000 messages...
Now my question is how I could create another REST service to monitor all those threads and give me a completion percentage for the messages sent.
I'm considering calling this service every 2 seconds via AJAX to update a progress bar.
A simplified approach is to create a class to keep track of the statistics the status bar will need to display. For example:
public class MessageCreatorProgress {
    private final int totalMessagesToBeCreated;
    private final AtomicInteger successCount;
    private final AtomicInteger failureCount;
    // constructor to initialize values
    // increment methods
    // get methods
}
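Filled in, that skeleton might look roughly like this (a sketch; only the method names used further down are fixed):

import java.util.concurrent.atomic.AtomicInteger;

public class MessageCreatorProgress {
    private final int totalMessagesToBeCreated;
    private final AtomicInteger successCount = new AtomicInteger();
    private final AtomicInteger failureCount = new AtomicInteger();

    public MessageCreatorProgress(int totalMessagesToBeCreated) {
        this.totalMessagesToBeCreated = totalMessagesToBeCreated;
    }

    public void incrementSuccess() { successCount.incrementAndGet(); }
    public void incrementFailure() { failureCount.incrementAndGet(); }

    public int getTotalMessagesToBeCreated() { return totalMessagesToBeCreated; }
    public int getSuccessCount() { return successCount.get(); }
    public int getFailureCount() { return failureCount.get(); }
}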
In the initial request which starts the threads, construct the threads with a shared instance of a MessageCreatorProgress. For example:
// endpoint method to create a bunch of messages
public String startCreatingMessages(CreateMessagesRequest request) {
    MessageCreatorProgress progress =
            new MessageCreatorProgress(request.getThreadCount() * request.getMessageCountPerThread());
    for (...) {
        new MyMessageCreator(progress, request.getSomeParameter(), ....).start();
    }
    String messageProgressId = ...; // some unique value, e.g. a UUID
    // Store the MessageCreatorProgress in the session or some other shared memory,
    // so it can be accessed by subsequent calls.
    session.setAttribute(messageProgressId, progress);
    return messageProgressId;
}
Each MyMessageCreator instance would for example call progress.incrementSuccess() as a last step, or progress.incrementFailure() for an exception.
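For completeness, a hedged sketch of what such a MyMessageCreator thread could look like (the class name comes from this answer; the constructor parameters are trimmed and sendMessage is a placeholder for the actual MQ call):

// Hypothetical worker thread; the point here is only the progress bookkeeping.
public class MyMessageCreator extends Thread {
    private final MessageCreatorProgress progress;
    private final int messageCountPerThread;

    public MyMessageCreator(MessageCreatorProgress progress, int messageCountPerThread) {
        this.progress = progress;
        this.messageCountPerThread = messageCountPerThread;
    }

    @Override
    public void run() {
        for (int i = 0; i < messageCountPerThread; i++) {
            try {
                sendMessage(i); // placeholder for the real MQ send
                progress.incrementSuccess(); // count each delivered message
            } catch (Exception e) {
                progress.incrementFailure(); // count each failed message
            }
        }
    }

    private void sendMessage(int index) {
        // the actual MQ call would go here
    }
}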
The AJAX call passes the messageProgressId to the status endpoint which knows how to access the MessageCreatorProgress:
// endpoint method to get the message creation progress
// transform to JSON or whatever
public MessageCreatorProgress getMessageCreationProgress(String messageProgressId) {
    return (MessageCreatorProgress) session.getAttribute(messageProgressId);
}
A more complex approach is to use a database - for example when the AJAX call will not hit the same server running the threads which are creating the messages. When a thread is successful or has an exception it can update a record associated with messageProgressId, and the AJAX endpoint checks the database and constructs a MessageCreatorProgress to return to the client.
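If you take the database route, each worker can record its outcome with a single atomic statement so that concurrent threads don't overwrite each other's counts. The DataSource, table, and column names below are made up:

// Hypothetical per-message update from a worker thread (JDBC).
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement(
         "UPDATE message_progress SET success_count = success_count + 1 WHERE progress_id = ?")) {
    ps.setString(1, messageProgressId);
    ps.executeUpdate();
}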

Is it possible to do server push in a desktop-scope event queue in ZK?

I'm quite new to ZK and the concept of event queues. What I'm trying to do is run a long operation on the server and update the UI with the progress in real time, instead of blocking the UI while the long operation runs. So for example, if there are 3 tasks (this number is not fixed) to do in the long operation, it should update the UI by updating a "log trace" textbox and a progress bar that same number of times.
My code structure looks like:
if (EventQueues.exists("longop")) {
    print("It is busy. Please wait");
    return; //busy
}
EventQueue eq = EventQueues.lookup("longop"); //create a queue
String result;
//subscribe async listener to handle long operation
eq.subscribe(new EventListener() {
    public void onEvent(Event evt) {
        if ("doLongOp".equals(evt.getName())) {
            //simulate a long operation
            doTask1();
            eq.publish(new Event("printStatus", null, "Task1 completed."));
            doTask2();
            eq.publish(new Event("printStatus", null, "Task2 completed."));
            doTask3();
            eq.publish(new Event("printStatus", null, "Task3 completed."));
            result = "success";
            eq.publish(new Event("endLongOp")); //notify it is done
        }
    }
}, true); //asynchronous
//subscribe a normal listener to show the result to the browser
eq.subscribe(new EventListener() {
    public void onEvent(Event evt) {
        if ("printStatus".equals(evt.getName())) {
            printToTextbox((String) evt.getData()); //appends value to the log textbox
        }
        if ("endLongOp".equals(evt.getName())) {
            print(result); //show the result to the browser
            EventQueues.remove("longop");
        }
    }
}); //synchronous
eq.publish(new Event("doLongOp")); //kick off the long operation
This didn't work. All the printStatus events happen AFTER the long operation is finished. The only thing this fixed is that the UI is no longer blocked while the long operation runs. I was assuming that since the long-operation thread is asynchronous, it would still send the events to the queue and the synchronous UI thread would be able to handle them as soon as they happen. So, after several hours of trial and error, and after noticing that server push is NOT used in a desktop-scoped queue, I changed the scope to application and explicitly enabled server push:
EventQueue<Event> eq = EventQueues.lookup("longop", EventQueues.APPLICATION, true);
desktop.enableServerPush(true);
It just worked. I know that ZK CE only has client polling, which is fine for my use case. But why is server push not used in desktop scope? How can we accomplish such a task if we don't want the queue to be shared application-wide? I want each desktop to have its own event queue.
It might also be worth mentioning that I have the event thread enabled. I also tried disabling it, but the result was the same, so it doesn't seem to affect my problem.
Any help is greatly appreciated.
PS: I am using ZK CE 7.0.3
There are many possible solutions for your situation.
Please take a look at this section of the ZK documentation.
You can use piggybacking, but when the user doesn't do anything, you also get no updates on the screen.
So I suggest going for echo events.
You do task 1, update the screen, and echo onTask2.
In onTask2 you do your work, update the screen, and echo onTask3.
And in onTask3 you do task 3 and update the screen. A sketch of this chain is below.
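A hedged sketch of the echo chain (win, printToTextbox and the doTaskN calls are placeholders for your own component and methods; Events.echoEvent is the standard ZK utility, and Java 8 lambdas are assumed for brevity):

// Each step runs in its own execution, so the UI updates of the previous
// step reach the browser before the next task starts.
win.addEventListener("onTask1", event -> {
    doTask1();
    printToTextbox("Task1 completed.");
    Events.echoEvent("onTask2", win, null); // echoed back by the client, triggering the next step
});
win.addEventListener("onTask2", event -> {
    doTask2();
    printToTextbox("Task2 completed.");
    Events.echoEvent("onTask3", win, null);
});
win.addEventListener("onTask3", event -> {
    doTask3();
    printToTextbox("Task3 completed.");
});
Events.echoEvent("onTask1", win, null); // kick off the chain once the current response has rendered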
Edit:
The scope doesn't have to be application scope. The application-scoped event queue already has server push built in (and I believe the session scope does too). For the desktop scope you have to enable it manually (or use another approach). (Your desktop.enableServerPush isn't needed for application scope.)
If you want a simple way to work with the event queue, look here.
Use EventQueue.subscribe(EventListener, EventListener), which takes the async and the sync EventListener (see the sketch below).
The only thing is, in the sync listener you need to start task 2 with another sync listener for refreshing the GUI, and start task 3 in the same way.
The other way is to pass the desktop to the async listener so you can enable (and disable) server push there (an async listener never has a reference to the desktop; it is a completely new thread).
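A hedged sketch of that two-listener subscribe on a desktop-scoped queue (doTask1 and printToTextbox stand in for your own methods; the rest is the standard ZK API):

EventQueue<Event> eq = EventQueues.lookup("longop", EventQueues.DESKTOP, true);
eq.subscribe(
    new EventListener<Event>() { // asynchronous listener: runs the work off the UI thread
        public void onEvent(Event evt) throws Exception {
            doTask1(); // long-running work; no UI access is allowed here
        }
    },
    new EventListener<Event>() { // synchronous callback: runs in the UI thread when the async listener finishes
        public void onEvent(Event evt) throws Exception {
            printToTextbox("Task1 completed.");
            // start task 2 the same way from here, as described above
        }
    });
eq.publish(new Event("doLongOp"));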

Is Play Framework suitable for asynchronous background processing?

I'm going to build a web application that's going to host urban games.
A user visits my website, clicks "Start game", starts receiving SMS messages when they get to certain locations, and has to answer them to get points.
Is Play suitable for this kind of application? After clicking the "start game" button, some logic has to run on its own course. How would I handle checking the geolocation of the players in parallel (I have an API for that)? I would like to ping each player every ~5 seconds and do some logic. The user of course has to be able to use the web application at the same time as it's processing their location, assigning points, sending and receiving messages, etc.
So to sum up: I want an application written in Play that starts a separate thread for a game after clicking "start game", while other users are able to view their data (statistics etc.) and the threads work their way through the game logic.
I found something like jobs, but they are documented for version 1.2. After some reading it turned out that Akka is the recommended approach now, but it uses an actor model.
Is Play + Akka a good choice for my project?
Absolutely. It is very easy to set up computations in a separate thread pool (also referred to as an ExecutionContext) with the Play Framework. You may want to read up on the documentation here, but in a nutshell you'll want to do something like this in your Application.scala controller file (note this example uses Scala):
// Async Action that's triggered when a user clicks "Start Game".
// Runs the logic in the separate gameLogicContext thread pool and asynchronously
// returns a response without blocking Play's default thread pool.
def startGame = Action.async { implicit request =>
  Future {
    // ... your game logic here. This will be run in gameLogicContext
    Ok("Game started in separate thread pool") // http response
  }(Contexts.gameLogicContext) // the thread pool the future should run in
}
And then you will set up a separate gameLogicContext thread pool within your application.conf file:
play {
  akka {
    actor {
      game-logic-context = {
        fork-join-executor {
          parallelism-min = 300
          parallelism-max = 300 // thread pool with 300 threads
        }
      }
    }
  }
}

How to synchronize concurring Web Service calls in Java

I'm currently developing some web services in Java (& JPA with MySQL connection) that are being triggered by an SAP System.
To simplify my problem I'm referring the two crucial entities as BlogEntry and Comment. A BlogEntry can have multiple Comments. A Comment always belongs to exactly one BlogEntry.
So I have three services (which I can't and don't want to redefine, since they're defined by the WSDL I exported from SAP and use in parallel to communicate with other systems): CreateBlogEntry, CreateComment, CreateCommentForUpcomingBlogEntry.
They are being triggered properly, and there's absolutely no problem with CreateBlogEntry or CreateComment when they're called separately.
But: the service CreateCommentForUpcomingBlogEntry sends the Comment and a "foreign key" to identify the "upcoming" BlogEntry. Internally it also calls CreateBlogEntry to create the actual BlogEntry. These two services are - due to their asynchronous nature - racing against each other.
So I have two options:
create a dummy BlogEntry and connect the Comment to it & update the BlogEntry, once CreateBlogEntry "arrives"
wait for CreateBlogEntry and connect the Comment afterwards to the new BlogEntry
Currently I'm trying the former, but once both services have fully executed, I end up with two BlogEntries. One of them only has the ID delivered by CreateCommentForUpcomingBlogEntry, but it is properly connected to the Comment (or rather the other way round). The other BlogEntry has all the other information (such as postDate or body), but the Comment isn't connected to it.
Here's the code snippet of the service implementation CreateCommentForUpcomingBlogEntry:
@EJB
private BlogEntryFacade blogEntryFacade;
@EJB
private CommentFacade commentFacade;
...
List<BlogEntry> blogEntries = blogEntryFacade.findById(request.getComment().getBlogEntryId().getValue());
BlogEntry persistBlogEntry;
if (blogEntries.isEmpty()) {
    persistBlogEntry = new BlogEntry();
    persistBlogEntry.setId(request.getComment().getBlogEntryId().getValue());
    blogEntryFacade.create(persistBlogEntry);
} else {
    persistBlogEntry = blogEntries.get(0);
}
Comment persistComment = new Comment();
persistComment.setId(request.getComment().getID().getValue());
persistComment.setBody(request.getComment().getBody().getValue());
/*
   set other properties
*/
persistComment.setBlogEntry(persistBlogEntry);
commentFacade.create(persistComment);
...
And here's the code snippet of the implementation CreateBlogEntry:
@EJB
private BlogEntryFacade blogEntryFacade;
...
List<BlogEntry> blogEntries = blogEntryFacade.findById(request.getBlogEntry().getId().getValue());
BlogEntry persistBlogEntry;
Boolean update = false;
if (blogEntries.isEmpty()) {
    persistBlogEntry = new BlogEntry();
} else {
    persistBlogEntry = blogEntries.get(0);
    update = true;
}
persistBlogEntry.setId(request.getBlogEntry().getId().getValue());
persistBlogEntry.setBody(request.getBlogEntry().getBody().getValue());
/*
   set other properties
*/
if (update) {
    blogEntryFacade.edit(persistBlogEntry);
} else {
    blogEntryFacade.create(persistBlogEntry);
}
...
As you can see, this is some fiddling that fails to make things happen as intended.
Sadly I haven't found a way to synchronize these simultaneous service calls. I could let CreateCommentForUpcomingBlogEntry sleep for a few seconds, but I don't think that's the proper way to do it.
Can I force each instance of my facades and their respective EntityManagers to reload their datasets? Can I put my requests in some sort of queue that is emptied based on certain conditions?
So: what's the best practice to make the service wait for the BlogEntry to exist?
Thanks in advance,
David
Info:
GlassFish Server 3.1.2
EclipseLink, version: Eclipse Persistence Services - 2.3.2.v20111125-r10461
If you are sure you will always get a CreateBlogEntry call, queue the CreateCommentForUpcomingBlogEntry calls, then dequeue and process them once you receive the matching CreateBlogEntry call.
Since you are on an application server, you can probably use JMS queues that auto-flush to storage, or the DB cache engine (Ehcache?), for the queuing, in case you receive a lot of calls or want a recovery mechanism across restarts. A hedged sketch of the JMS variant follows.
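A hedged sketch of the JMS variant on a Java EE 6 server like GlassFish 3 (the JNDI names and the request type are placeholders): instead of persisting a dummy BlogEntry, park the incoming comment on a queue and process it once the matching CreateBlogEntry has been handled.

// Uses the JMS 1.1 API (javax.jms.*) available on GlassFish 3.1.2.
@Stateless
public class PendingCommentQueue {

    @Resource(mappedName = "jms/ConnectionFactory") // placeholder JNDI names
    private ConnectionFactory connectionFactory;
    @Resource(mappedName = "jms/PendingComments")
    private Queue pendingComments;

    public void defer(Serializable commentRequest) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(pendingComments)
                   .send(session.createObjectMessage(commentRequest));
        } finally {
            connection.close();
        }
    }
}

A message-driven bean (or the CreateBlogEntry implementation itself) can then drain this queue once the BlogEntry exists.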

CORBA unclear stuff

I've recently begun working on my first CORBA project. I think I've got the basic stuff, however there are some things that still elude me. One of these is how CORBA handles several calls on the same object.
Suppose I have a client that registers itself with the server and then can receive work. The server sends work at random times.
Are all these calls handled on the same thread? That would mean that while the client is working, it cannot receive anything. In that case, how could I give it multithreaded behaviour?
Or, on the other hand, is a thread spawned for every call received? In that case, do I need to protect the common data that can be accessed by each call? What would be a good practice for doing so?
Another thing I'd like to do is create several workers and have them receive work, but in my implementation only one worker is active.
My code is below:
public static void main(String[] args)
{
    try
    {
        connectWithServer(args);
        createWorkers();
        // wait for invocations from clients
        orb.run();
    }
    catch (Exception e)
    {
        System.out.println("ERROR : " + e);
        e.printStackTrace(System.out);
    }
}

static public void connectWithServer(String[] args) throws Exception
{
    orb = ORB.init(args, null);
    // get reference to rootpoa & activate the POAManager
    rootpoa = POAHelper.narrow(orb.resolve_initial_references("RootPOA"));
    rootpoa.the_POAManager().activate();
    // get the root naming context
    org.omg.CORBA.Object objRef = orb.resolve_initial_references("NameService");
    // Use NamingContextExt instead of NamingContext. This is
    // part of the Interoperable Naming Service.
    NamingContextExt ncRef = NamingContextExtHelper.narrow(objRef);
    // resolve the Object Reference in Naming
    taskBagImpl = TaskBagHelper.narrow(ncRef.resolve_str(SERVER_NAME));
    System.out.println(TAG + " Obtained a handle on server object: " + taskBagImpl);
}

public static void createWorkers() throws Exception
{
    for (int i = 0; i < nrOfWorkers; i++)
    {
        WorkerImpl w = new WorkerImpl();
        rootpoa.activate_object((Servant) w);
        Worker ref = WorkerHelper.narrow(rootpoa.servant_to_reference(w));
        w.setRef(ref);
        taskBagImpl.registerWorker(w.getId(), ref);
    }
}
Threading options are not specified by the CORBA standard. The only configuration possible with respect to threading is the POA policy ThreadingPolicy. Possible values are either ORB_CTRL_MODEL or SINGLE_THREAD_MODEL. The former specifies nothing about threading, and the ORB implementation decides which threading model to use. The latter guarantees that every request an object receives (within the same POA) is serialized, so no re-entrancy or multi-threading capabilities have to be implemented in the servant; a sketch of configuring this policy appears after the list below.
CORBA implementors, however, took notice of this limitation and implemented some de facto standard policies, which have to be configured by other means (for example, program options passed to ORB.init() or configuration files). Usually, you can find three different policies (once you select ORB_CTRL_MODEL):
Thread per request: Spawns a new thread each request.
Thread per client: Spawns a new thread for each different client.
Thread pool: The ORB pre-allocates some pool of threads and uses them to serve all requests.
Others are possible, but those tend to be the common ground. Of course, any of them will force you to use some kind of locking strategy to support concurrent clients.
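If you want the serialized dispatch described above for your workers, a minimal sketch of creating a child POA with the SINGLE_THREAD_MODEL policy might look like this (it reuses rootpoa from the question; the child POA name is arbitrary, and the imports come from org.omg.CORBA and org.omg.PortableServer):

// Inside a method that declares "throws Exception", like the ones in the question.
Policy[] policies = new Policy[] {
    rootpoa.create_thread_policy(ThreadPolicyValue.SINGLE_THREAD_MODEL)
};
POA singleThreadPoa = rootpoa.create_POA("SingleThreadPOA", rootpoa.the_POAManager(), policies);
WorkerImpl w = new WorkerImpl();
singleThreadPoa.activate_object((Servant) w);
Worker ref = WorkerHelper.narrow(singleThreadPoa.servant_to_reference((Servant) w));

Requests dispatched through singleThreadPoa are then serialized by the ORB, so the servant itself needs no locking.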
See this Java IDL FAQ :
What is the thread model supported by the CORBA implementation in this release?
