I want to create a servlet method like below. In this method I want to perform a data download. So if a request for a data download comes in, I just do the download. If a download is already in progress, I want the second request to somehow wait until the first thread is done with the download. Once the first thread is done with the download, the second thread can start automatically.
DoTheDownloadAction(){
}
How can I achieve the above requirement?
Assuming you have a DownloadHelper class and your servlet holds a single instance of it, you can do something like this:
DoTheDownloadAction() {
    synchronized (downloadHelper) {
        // Downloading something
    }
}
Let's imagine you have a button labeled "download" with id="download" in your JSP, and you have this code in your JavaScript:
var globalDownloadStatus = false;
jQuery(document).ready(function() {
    jQuery('#download').click(function() {
        if (globalDownloadStatus == true) {
            alert('Download already in progress, please wait');
            return;
        }
        globalDownloadStatus = true;
        jQuery.get('yourservletpath', function(data) {
            globalDownloadStatus = false;
            alert('Download Complete');
        });
    });
});
Sounds like the perfect candidate for a semaphore, or (depending on the complexity and the downstream effects) the simpler way to achieve the same thing would be to synchronize the download code on a key relevant to your application.
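For example, here is a minimal sketch of the semaphore approach using java.util.concurrent.Semaphore (the class and method names here are illustrative, not taken from the question):

import java.util.concurrent.Semaphore;

public class DownloadGate {

    // One permit means only one download at a time; "true" gives fair (FIFO) ordering.
    private final Semaphore permit = new Semaphore(1, true);

    public void doTheDownload() throws InterruptedException {
        permit.acquire();          // a second request waits here until the first finishes
        try {
            // ... perform the download ...
        } finally {
            permit.release();      // always release, even if the download fails
        }
    }
}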
Take into account that web servers are often distributed across multiple machines for scalability; in that case the appropriate solution is usually to synchronize via database locks (a sketch of that follows below). For your case, however, it may be enough to use the Java synchronized keyword on the object you want to wait on.
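A rough JDBC sketch of the database-lock idea for the distributed case, assuming a single-row download_lock table exists (both the table and the SQL are illustrative, not from the question):

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class DatabaseDownloadLock {

    // Assumes a download_lock table with a single row (id = 1); table and SQL are made up here.
    public void downloadExclusively(DataSource dataSource) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id FROM download_lock WHERE id = 1 FOR UPDATE")) {
                ps.executeQuery();   // blocks until another node's transaction commits
                // ... perform the download ...
                con.commit();        // committing releases the row lock
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}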
Also, what you are asking for is a pessimistic lock, which is usually a sign of questionable architecture.
I tried to find an answer online for this, but I couldn't find one that is specific to Firebase implementations.
I can choose between OnCompleteListener and OnSuccessListener for a lot of operations in Firebase, and I'd like to know how to choose between them.
I have read the documentation for OnComplete and OnSuccess, but as far as I can see from the Firebase documentation (this one, for example), for one specific operation (like the get operation in the example) they sometimes use OnSuccessListener and sometimes OnCompleteListener.
How can I know which one is better in every situation?
Does it matter? Consider that I'd like to know, for every operation, whether it was successful or not.
As the name suggests, onSuccess() will fire when a task is completed successfully.
onComplete() will fire when the task is completed, even if it failed.
In the method, you can call Task.isSuccessful() and Task.getException().
In onSuccess() you can be certain that isSuccessful() will return true, and getException() will return null (so there's not much point calling them).
In onComplete() isSuccessful() may be false, and you have the opportunity to deal with the failure, perhaps using getException() to obtain more detail.
If you need to handle failed tasks (and you should!), you have two choices:
Use an OnCompleteListener, and if(task.isSuccessful()) { ... } else {...} -- this puts the success and failure code close together, and may be useful if those routines share state.
Use separate OnSuccessListener and OnFailureListener -- this allows you to write listeners with a bit more cohesion, in that each handler specialises in one thing. Of course, one class may implement both interfaces, giving you another way to have both see the same state.
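A hedged sketch of both options, using a Firestore document read as the example task (it assumes db is a FirebaseFirestore instance; the "users" collection, userId and TAG are made-up placeholders):

// Option 1: one OnCompleteListener; success and failure are handled side by side.
db.collection("users").document(userId).get()
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                DocumentSnapshot snapshot = task.getResult();
                // ... use snapshot ...
            } else {
                Log.w(TAG, "read failed", task.getException());
            }
        });

// Option 2: separate listeners, each specialising in one outcome.
db.collection("users").document(userId).get()
        .addOnSuccessListener(snapshot -> { /* ... use snapshot ... */ })
        .addOnFailureListener(e -> Log.w(TAG, "read failed", e));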
To add to what slim answered above, from my own use of Firebase: I found that these two listeners (OnCompleteListener and OnSuccessListener) have different callback timing when writing data to the servers.
The general rule of thumb:
If your logic relies on the writes reaching the servers in a sequential order, use OnCompleteListener.
If your logic does not depend on the order of the writes (i.e. the tasks are independent, async), OnSuccessListener is enough.
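For example, a rough Realtime Database sketch of the sequential case (the paths and the order/orderId values are made up): chaining the second write off onComplete() guarantees it only starts after the first write has finished, one way or the other.

DatabaseReference root = FirebaseDatabase.getInstance().getReference();

root.child("orders").child(orderId).setValue(order)
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                // runs only once the first write has completed
                root.child("orderIndex").child(orderId).setValue(true);
            } else {
                Log.w(TAG, "order write failed", task.getException());
            }
        });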
Sometimes you may find that you need the value of the result, for example when getting the device token: onSuccess() hands you the InstanceIdResult directly, whereas onComplete() gives you the Task, so here onSuccess() is the more convenient choice.
// Get the device token from the Firebase Instance ID
FirebaseInstanceId.getInstance().getInstanceId().addOnSuccessListener(new OnSuccessListener<InstanceIdResult>() {
    @Override
    public void onSuccess(InstanceIdResult instanceIdResult) {
        String deviceToken = instanceIdResult.getToken();
    }
});
I'm not quite sure exactly how to go about this... so it may take me a few tries to get this question right. I have an annotation for caching the results of a method. My code is a private fork for now, but the part I'm working on starts here:
https://code.google.com/p/cache4guice/source/browse/trunk/src/org/cache4guice/aop/CacheInterceptor.java#46
I have annotated a method that I want cached; it runs a VERY slow query that sometimes takes a few minutes to complete. The problem is that my async web app keeps getting new users coming and asking for the same data, while the getSlowData() method hasn't completed yet.
So something like this:
@Cached
public getSlowData() {
    ...
}
Inside the interceptor, we check the cache and find that it's not cached, which passes us down to:
return getResultAndCache(methodInvocation, cacheKey);
I've never gotten comfortable with the whole concept of concurrency. I think what I need is to mark that the getResultAndCache() method, for the given getSlowData(), has already been kicked off and subsequent requests should wait for the result.
Thanks for any thoughts or advice!
Most cache implementations synchronize calls to 'get' and 'set', but that's only half of the equation. What you really need is to make sure that only one thread enters the 'check if loaded and load if not there' section. For most situations, the cost of serializing thread access may not be worth it if there is (1) no risk and (2) little cost to loading the data multiple times through parallel threads (comment here if you need more clarification on this). Since this annotation is used universally, I would suggest creating a second annotation, something like '@ThreadSafeCached', whose invoke method would look like this:
Object cacheElement = cache.get(cacheKey);
if (cacheElement != null) {
    LOG.debug("Returning element in cache: {}", cacheElement);
} else {
    synchronized (<something>) {
        // double-checked locking, works in Java SE 5 and newer
        if ((cacheElement = cache.get(cacheKey)) == null) {
            // a second check to make sure a previous thread didn't load it
            cacheElement = getResultAndCache(methodInvocation, cacheKey);
        } else {
            LOG.debug("Returning element in cache: {}", cacheElement);
        }
    }
}
return cacheElement;
Now, I left out the part about what you synchronize on. It would be most optimal to lock on the item being cached, since then no threads wait unless they are loading that exact cache item. If that's not possible, a cruder approach is to lock on the annotation class itself. This is obviously less efficient, but if you have no control over the cache-loading logic (it seems like you do), it's an easy way out!
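One way to lock per cache key rather than globally is to keep a small map of lock objects; a sketch, not tied to cache4guice's API (computeIfAbsent needs Java 8; on older JVMs putIfAbsent achieves the same):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PerKeyLocks {

    // One lock object per cache key; computeIfAbsent creates it atomically on first use.
    private final ConcurrentMap<String, Object> locks = new ConcurrentHashMap<String, Object>();

    public Object lockFor(String cacheKey) {
        return locks.computeIfAbsent(cacheKey, k -> new Object());
    }
}

The interceptor would then use synchronized (locks.lockFor(cacheKey)) { ... } around the double-checked block above, so only threads loading the same key wait on each other.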
I just started using Wicket (and really am not too familiar with a lot of web development) and have a question about a download link. I have a web app that simply lets users upload particular files, processes some of the information in them, and offers downloads of the processed information in different formats. However, this is really supposed to be a lite version of some software I am working on, so I don't want to do much processing. I am wondering if there is a way to set something like a timeout for the download link, so that if the user clicks the link and the processing takes longer than 20 seconds or so, it simply quits the processing and sends them an error instead. Thanks!
I agree with Xavi that the processing (and possible termination of the processing) should be done with a thread.
However, especially if it takes more than just a few seconds, it is much better to not just wait with the open connection, but rather to check at regular intervals to see whether the thread is done.
I'd do something like this:
Start the thread doing the actual work
Show a Panel that says "Processing your download" or something like that.
Attach an AbstractAjaxTimerBehavior to the panel with a timer duration of, say, 10 seconds or so.
In the timer behavior's onTimer method, check the state of the processing:
If it's still working, do nothing.
If it's canceled because it took too long, show a message like "Canceled" to the user, e.g. by replacing the panel or setting a warning label to visible.
If it's done, show a message like "Your download is starting" and start the download. See this document for how to do an AJAX response and at the same time initiate a download
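A rough sketch of steps 3 and 4, assuming the processing result is held in a Future<File> (processingResult) started elsewhere and that statusLabel is a Label on the panel; the API shown is Wicket's pre-9 AbstractAjaxTimerBehavior and may differ per version:

add(new AbstractAjaxTimerBehavior(Duration.seconds(10)) {
    @Override
    protected void onTimer(AjaxRequestTarget target) {
        if (!processingResult.isDone()) {
            return;                        // still working, poll again on the next tick
        }
        stop(target);                      // finished or cancelled, no need to keep polling
        if (processingResult.isCancelled()) {
            statusLabel.setDefaultModelObject("Processing was cancelled because it took too long.");
        } else {
            statusLabel.setDefaultModelObject("Your download is starting...");
            // trigger the actual download here, e.g. via the AJAX-download technique linked above
        }
        target.add(statusLabel);
    }
});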
To be able to cancel processing if it takes more than a given amount of time, it would be appropriate to perform it in a separate thread. This matter is addressed in the following question: How to timeout a thread.
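That pattern, adapted to the 20-second limit from the question, could look roughly like this (processFile() is a placeholder for the real processing):

import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedProcessing {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    // Runs processFile(), giving up after 20 seconds; null signals "took too long".
    public File processWithTimeout(final File upload) throws Exception {
        Future<File> future = executor.submit(() -> processFile(upload));
        try {
            return future.get(20, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupts the worker; it should check for interruption
            return null;
        }
    }

    private File processFile(File upload) {
        // ... the real processing would go here ...
        return upload;
    }
}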
Now for the Wicket part of it: if I understood what you're trying to achieve, you could for instance roll your own Link that performs the processing and responds with the result in case it doesn't time out. In case the processing takes too much time, you can simply throw an error (remember to have a FeedbackPanel so that it can be shown).
The processing, or generation of the file to download, could be implemented in a LoadableDetachableModel for efficiency. See this question for more details: How to use Wicket's DownloadLink with a file generated on the fly?
For instance:
IModel<File> processedFileModel = new LoadableDetachableModel<File>() {
    @Override
    protected File load() {
        // Implement processing in a separate thread.
        // If it times out it could return null, for instance.
        return null;
    }
};

Link<File> downloadLink = new Link<File>("yourID", processedFileModel) {
    @Override
    public void onClick() {
        File processedFile = getModelObject();
        if (processedFile != null) {
            IResourceStream rs = new FileResourceStream(processedFile);
            getRequestCycle().setRequestTarget(new ResourceStreamRequestTarget(rs));
        } else {
            error("Processing took too long");
        }
    }
};
I have a webapp that sometimes needs to download some bytes from a URL, package them up, and send them back to the requester. The downloaded bytes are stored for a little while so they can be reused if the same URL needs to be downloaded again. I am trying to figure out how best to prevent threads from downloading the same URL at the same time, if the requests come in simultaneously. I was thinking of creating a class like the one below that would prevent the same URL from being downloaded concurrently. If a URL cannot be locked, the thread waits until it is unlocked, and then downloads it itself only if the data still doesn't exist after the unlock.
public class URLDownloader
{
    HashMap<String, String> activeThreads = new HashMap<String, String>();

    public synchronized void lockURL(String url, String threadID) throws UnableToLockURLException
    {
        if (!activeThreads.containsKey(url))
            activeThreads.put(url, threadID);
        else
            throw new UnableToLockURLException();
    }

    public synchronized void unlockURL(String url, String threadID)
    {
        // need to check to make sure it's locked, and by the passed-in thread
        activeThreads.remove(url);
    }

    public synchronized boolean isURLStillLocked(String url)
    {
        return activeThreads.containsKey(url);
    }
}
Does anyone have a better solution for this? Does my solution seem valid? Are there any open source components out there that already do this very well that I can leverage?
Thanks
I would suggest keeping a ConcurrentHashSet<String> to track your unique URLs, visible to all your threads. This construct might not exist directly in the Java library, but can easily be constructed from a ConcurrentHashMap like so: Collections.newSetFromMap(new ConcurrentHashMap<String,Boolean>())
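For example (a sketch; the key point is that Set.add() returns false if the URL was already present, which makes claiming a URL atomic):

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class InProgressUrls {

    private final Set<String> inProgress =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

    // Returns true if this thread claimed the URL and should perform the download.
    public boolean tryClaim(String url) {
        return inProgress.add(url);
    }

    public void release(String url) {
        inProgress.remove(url);
    }
}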
It sounds like you don't need a lock, since if there are multiple requests to download the same URL, the point is to download it only once.
Also, I think it would make more sense in terms of encapsulation to put the check for a stored URL / routine to store new URLs in the URLDownloader class, rather than in the calling classes. Your threads can simply call e.g. fetchURL(), and let URLDownloader handle the specifics.
So, you can implement this in two ways. If you don't have a constant stream of download requests, the simpler way is to have only one URLDownloader thread running, and to make its fetchURL method synchronized, so that you only download one URL at a time. Otherwise, keep the pending download requests in a central LinkedHashSet<String>, which preserves order and ignores repeats.
I really want to abuse @Asynchronous to speed up my web application, and therefore I want to understand it a bit better in order to avoid using this annotation incorrectly.
I know the business logic inside an annotated method will be handled in a separate thread, so the user won't have to wait. I have two methods that persist data:
public void persist(Object object) {
    em.persist(object);
}

@Asynchronous
public void asynPersist(Object object) {
    em.persist(object);
}
I have a couple of scenarios, and I want to ask which of them is not OK to do:
1. B does not depend on A
A a = new A();
asynPersist(a); // Is it risky to asynPersist(a) here?
B b = new B();
persist(b);
// Can't asynPersist(b) here because I need `b` to immediately
// reflect in the view, or should I asynPersist(b) as well?
2. Same as the first scenario, but B now depends on A. Should I asynPersist(a)?
3. A and B are not related
A a = new A();
persist(a); // Since I need the content of `a` to reflect in the view
B b = new B();
asynPersist(b); // If I don't need the content of `b` to immediately reflect in the view, can I use async here?
EDIT: Hi @arjan, thank you so much for your post. Here is another scenario I want to ask your expertise on. Please let me know if my case does not make any sense to you.
4. Assume User has an attribute called `count` of type `int`

User user = null;

public void incVote() {
    user = userDAO.getUserById(userId);
    user.setCount(user.getCount() + 1);
    userDAO.merge(user);
}

public User getUser() { // accessor method for user
    return user;
}
If I understand you correctly: if my method getUserById uses @Asynchronous, then the line user.setCount(user.getCount() + 1); will block until the result for user is returned, is that correct? So in this case, the use of @Asynchronous is useless, correct?
If the method merge (which merges all changes of user back to the database) uses @Asynchronous, and in my JSF page I have something like this:
<p:commandButton value="Increment" actionListener="#{myBean.incVote}" update="cnt"/>
<h:outputText id="cnt" value="#{myBean.user.count}" />
So the button will invoke the method incVote(), then send an Ajax request telling the outputText to update itself. Will this create a race condition (remember we made merge asynchronous)? When the button tells the outputText to update itself, it invokes the accessor method getUser(); will the line return user; block to wait for the asynchronous userDAO.merge(user), or might there be a race condition here (so that count might not display the correct result), making this not recommended?
There are quite a few places where you can take advantage of @Asynchronous. With this annotation, you can write your application as intended by the Java EE specification: don't use explicit multi-threading, but let the work be done by managed thread pools.
In the first place you can use this for "fire and forget" actions. E.g. sending an email to a user could be done in an #Asynchronous annotated method. The user does not need to wait for your code to contact the mail-server, negotiate the protocol, etc. It's a waste of everyone's time to let the main request processing thread wait for this.
Likewise, maybe you do some audit logging when a user logs in to your application and logs off again. Both of these persist actions are perfect candidates to put in asynchronous methods. It's senseless to let the user wait for such backend administration.
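A minimal sketch of such a fire-and-forget method (the bean name, method names and mail details are placeholders):

import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
public class NotificationService {

    @Asynchronous
    public void sendWelcomeMail(String emailAddress) {
        // contact the mail server, negotiate the protocol, etc.
        // the caller returns immediately and never waits for this
    }

    @Asynchronous
    public void auditLogin(String userId) {
        // persist the audit record; again, nothing for the user to wait on
    }
}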
Then there is a class of situations where you need to fetch multiple data items that can't be (easily) fetched using a single query. For instance, I often see apps that do:
User user = userDAO.getByID(userID);
Invoice invoice = invoiceDAO.getByUserID(userID);
PaymentHistory paymentHistory = paymentDAO.getHistoryByuserID(userID);
List<Order> orders = orderDAO.getOpenOrdersByUserID(userID);
If you execute this as-is, your code will first go to the DB and wait for the user to be fetched, sitting idle in between. Then it will fetch the invoice and wait again, and so on.
This can be sped up by doing these individual calls asynchronously:
Future<User> futureUser = userDAO.getByID(userID);
Future<Invoice> futureInvoice = invoiceDAO.getByUserID(userID);
Future<PaymentHistory> futurePaymentHistory = paymentDAO.getHistoryByuserID(userID);
Future<List<Order>> futureOrders = orderDAO.getOpenOrdersByUserID(userID);
As soon as you actually need one of those objects, the code will automatically block if the result isn't there yet. This allows you to overlap fetching of individual items and even overlap other processing with fetching. For example, your JSF life cycle might already go through a few phases until you really need any of those objects.
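On the DAO side, such a method returns its result wrapped in an AsyncResult, which the container exposes to the caller as a Future. A sketch (the DAO and entity names mirror the example above but are otherwise made up):

import java.util.concurrent.Future;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class UserDAO {

    @PersistenceContext
    private EntityManager em;

    @Asynchronous
    public Future<User> getByID(long userID) {
        User user = em.find(User.class, userID);
        return new AsyncResult<User>(user);   // the caller sees this as a Future<User>
    }
}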
The usual advice that multi threaded programming is hard to debug doesn't really apply here. You're not doing any explicit communication between threads and you're not even creating any threads yourself (which are the two main issues this historical advice is based upon).
For the following case, using asynchronous execution would be useless:
Future<User> futureUser = userDAO.getUserById(userId);
User user = futureUser.get(); // block happens here
user.setCount(user.getCount() + 1);
If you do something asynchronously and right thereafter wait for the result, the net effect is a sequential call.
will the line return user; block to wait for the asynchronous userDAO.merge(user)
I'm afraid you're not totally getting it yet. The return statement has no knowledge about any operation going on for the instance being processed in another context. This is not how Java works.
In my previous example, the getUserByID method returned a Future. The code automatically blocks on the get() operation.
So if you have something like:
public class SomeBean {

    Future<User> futureUser;

    public String doStuff() {
        futureUser = dao.getByID(someID);
        return "";
    }

    public User getUser() {
        try {
            return futureUser.get(); // blocks in case the result is not there yet
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }
}
Then in case of the button triggering an AJAX request and the outputText rendering itself with a binding to someBean.user, then there is no race condition. If the dao already did its thing, futureUser will immediately return an instance of type User. Otherwise it will automatically block until the User instance is available.
Regarding doing the merge() operation asynchronously in your example: this might run into race conditions. If your bean is in view scope and the user quickly presses the button again (perhaps having double clicked the first time) before the original merge is done, an increment might happen on the same instance that the first merge invocation is still persisting.
In this case you have to clone the User object first before sending it to the asynchronous merge operation.
The simple examples I started this answer with are pretty safe, as they are about doing an isolated action or about doing reads with an immutable type (the userID; assume it is an int or a String) as input.
As soon as you start passing mutable data into asynchronous methods, you'll have to be absolutely certain that no mutation is done to that data afterwards; otherwise, stick to the simple rule of only passing in immutable data.
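A sketch of the cloning idea for the merge case (the copy constructor and the mergeAsync() method are assumptions, not existing API):

public void incVote() {
    user.setCount(user.getCount() + 1);
    User snapshot = new User(user);   // assumed copy constructor: the async merge gets its own copy
    userDAO.mergeAsync(snapshot);     // hypothetical @Asynchronous variant of merge()
}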
You should not use async this way if any process that follows the async piece depends on the outcome. If you persist data that a later thread needs, you'll have a race condition, which is a bad idea.
I think you should take a step back before you go this route. Write your app as recommended by Java EE: single threaded, with threads handled by the container. Profile your app and find out where the time is being spent. Make a change, reprofile, and see if it had the desired effect.
Multi-threaded apps are hard to write and debug. Don't do this unless you have a good reason and solid data to support your changes.