I have a web application in which, as part of one of the workflows, users can download dynamically generated files. The input is a form with the parameters needed to generate the file.
My current solution is to let them submit this form; on the servlet side I set the Content-Disposition response header to attachment and provide an appropriate MIME type.
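For reference, the current approach would look roughly like this on the servlet side (a minimal sketch; the PDF MIME type, file name, and generateFile helper are assumptions for illustration, not from the original):

    import java.io.IOException;
    import java.io.OutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class DownloadServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            byte[] file = generateFile(req); // the potentially slow generation step
            resp.setContentType("application/pdf"); // assumed MIME type
            resp.setHeader("Content-Disposition", "attachment; filename=\"report.pdf\"");
            resp.setContentLength(file.length);
            try (OutputStream out = resp.getOutputStream()) {
                out.write(file);
            }
        }

        private byte[] generateFile(HttpServletRequest req) {
            return new byte[0]; // placeholder for generation from the form parameters
        }
    }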
But I find this approach inadequate, because file generation can take a very long time, and after a certain timeout I simply get 500 or 503 errors in the browser. I guess this is to be expected with the current approach.
I want my workflow to be flexible enough to tell users, as soon as they submit the form, that it might take time for the file to generate and that we will display a link to the file as soon as it is ready. I guess I could also email the file or this message to them, but that is not ideal.
Can you suggest an approach for this problem? Should I provide more specific information? Any help appreciated.
If you want to do this synchronously (i.e. make the user wait for the document to be ready rather than have them go off and do other things while waiting), a traditional approach is to bring them to a "report loading" page.
This would be a page that (see the servlet sketch after this list):
1) informs them that the report is loading.
2) refreshes itself (using either the meta refresh tag or JavaScript)
3) upon refresh, checks to see if the report is ready and either:
a) goes back to step 1 if it isn't ready
b) gives them the document if it is ready.
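A rough servlet sketch of that loop (the jobId parameter and isReady check are hypothetical placeholders for however you track the background generation):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ReportStatusServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String jobId = req.getParameter("jobId"); // hypothetical job identifier
            if (isReady(jobId)) {
                // Step 3b: the report is ready; hand off to the real download URL.
                resp.sendRedirect("download?jobId=" + jobId);
            } else {
                // Steps 1-3a: show a "loading" page that refreshes itself in 5 seconds.
                resp.setContentType("text/html");
                resp.getWriter().print(
                    "<html><head><meta http-equiv=\"refresh\" content=\"5\"></head>"
                    + "<body>Your report is being generated, please wait...</body></html>");
            }
        }

        private boolean isReady(String jobId) {
            return false; // placeholder: check your background job's status here
        }
    }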
Synchronous is kind of old-school, but your question sounded like that was the approach you wanted.
Asynchronous approaches would include:
Use Ajax to make a link to the document appear on the page once it is ready.
Have a separate page that shows previously generated documents. The user can go to this page at their leisure and, meanwhile, browse the rest of the site. This requires keeping a history of generated documents.
As you suggested, send it via e-mail.
You can make an asynchronous Ajax call to the server with the form data instead of submitting the form directly.
On the server you create a temp file and return a link with the download URL to the client.
After sending the request via JavaScript you can show the user a hint that the download link will appear in a minute. Don't forget to clean up the temp file!
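On the server, that could look roughly like this (a sketch; the /downloads/ URL mapping that serves the temp files is a hypothetical assumption):

    import java.io.File;
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GenerateServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Generate the file into a temp location from the submitted form data.
            File tmp = File.createTempFile("report-", ".pdf");
            // ... write the generated content to tmp here ...

            // Return the download URL as plain text for the Ajax callback to show.
            // Remember to delete tmp once it has been downloaded.
            resp.setContentType("text/plain");
            resp.getWriter().print("/downloads/" + tmp.getName());
        }
    }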
For submitting the Ajax call I would suggest using a JavaScript framework. Have a look at jQuery:
http://api.jquery.com/category/ajax/
The scenario is that I would like to launch a browser pointing to some web pages from an Eclipse plugin, monitor content changes in the HTTP/S messages, and perform some corresponding operations.
The corresponding operations might look like fetching the raw fields of the HTTP/S message from the browser.
For example, when a user performs some operation that triggers an AJAX call and the title or other fields of the HTML page change, I would like to know this is happening and fetch the "raw" content from the body of the HTTP/S message.
I found some ways to launch the browser. The first is to use the SWT browser (org.eclipse.swt.browser.Browser). However, I do not see it expose any listener API to monitor such changes, let alone fetch the raw content of HTTP/S messages.
The second is org.eclipse.ui.browser.IWebBrowser, which likewise does not expose any such API.
Does anyone know how to achieve this? Thanks for your help.
There are no APIs for this.
The Browser class does have listeners for changes to the location (addLocationListener) and the page title (addTitleListener), but these are fairly limited.
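For completeness, here is how those two listeners are attached (a standalone SWT sketch; note it reports navigation and title changes only, not the raw HTTP/S content):

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.browser.Browser;
    import org.eclipse.swt.browser.LocationAdapter;
    import org.eclipse.swt.browser.LocationEvent;
    import org.eclipse.swt.browser.TitleEvent;
    import org.eclipse.swt.browser.TitleListener;
    import org.eclipse.swt.layout.FillLayout;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Shell;

    public class BrowserListenerDemo {
        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = new Shell(display);
            shell.setLayout(new FillLayout());
            Browser browser = new Browser(shell, SWT.NONE);

            // Notified when the browser navigates to a new URL.
            browser.addLocationListener(new LocationAdapter() {
                @Override
                public void changed(LocationEvent event) {
                    System.out.println("Location changed: " + event.location);
                }
            });

            // Notified when the page title changes (including via JavaScript).
            browser.addTitleListener(new TitleListener() {
                @Override
                public void changed(TitleEvent event) {
                    System.out.println("Title changed: " + event.title);
                }
            });

            browser.setUrl("http://example.com");
            shell.open();
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) display.sleep();
            }
            display.dispose();
        }
    }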
I am interested in extracting a particular element from the source code of a website. I am able to do this using JSoup, by getting the entire source code with
    Document doc = Jsoup.connect("http://example.com").get();
    Element div = doc.getElementById("importantDiv");
However, the problem is that I need to do this about 20,000 times a day to capture all the changes happening in the div. Downloading the whole document every time would use a lot of network bandwidth, which I would like to avoid. Is there a way to extract the required element without retrieving the entire document on the client side?
NOTE : The code snippet is an example and not the actual URL or ID which I need to extract.
I don't believe you can request specific portions of a web page. JSoup is basically a web client, and the client has no control over what the server sends it: the server dictates what is sent, so you can't request a segment of a web page without requesting the entire page.
Do you have access to this webpage, or is it an external website?
If you don't have control of the server side, you cannot do it; you will need to download the complete HTML. But note that it's just the HTML, not the rest of the resources like stylesheets, images, JavaScript files, etc.
To save bandwidth you would need to install some code in the server, so that it serves just the bits of information required.
Take a look at the URLConnection class. You can use it to open a connection to a URL, get the connection's input stream, and read only as many bytes as you need. This will work and you won't have to download the entire document, but unfortunately you won't be able to start downloading from an offset; you always have to download the document from its beginning.
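A rough sketch of that partial read (the 8 KB cut-off is an arbitrary assumption; this only saves bandwidth if the element you need appears near the top of the document):

    import java.io.InputStream;
    import java.net.URL;
    import java.net.URLConnection;

    public class PartialRead {
        public static void main(String[] args) throws Exception {
            URLConnection conn = new URL("http://example.com").openConnection();
            try (InputStream in = conn.getInputStream()) {
                byte[] buffer = new byte[8192]; // assumed cut-off: first 8 KB only
                int total = 0;
                int read;
                while (total < buffer.length
                        && (read = in.read(buffer, total, buffer.length - total)) != -1) {
                    total += read;
                }
                // Closing the stream here abandons the rest of the document.
                System.out.println(new String(buffer, 0, total, "UTF-8"));
            }
        }
    }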
I am rewriting some code that used to use GET, replacing it with POST.
The download URL used to be a GET request to
https://myurl/getfile?fileid=1234&filetype=pdf
Now, I changed that to
https://myurl/getfile
and put the fileid=1234&filetype=pdf in the POST body.
I did this using jQuery's post method as:

    function postCall(url, param) {
        $.post(url, param);
    }
The server side is written in Java, and I tried to reuse the old GET code, which writes the file's binary content to the servlet's output stream.
However, my browser does not prompt the user for a download, as it used to for GET.
Previous posts on Stack Overflow did suggest that AJAX should not be used for file downloads. But what is the alternative way for me to do this? The request is not generated by a form, though.
Many thanks.
I would suggest creating a form on the page (or creating one dynamically using jQuery) and then having that form do the POST submission (using jQuery's "submit" function or "trigger('submit')" on the form). This way the request won't be done asynchronously in the background. If the "getfile" script responds with a file with Content-Disposition: attachment, it should download.
That said, I'm not sure the browser will "prompt" the user in this scenario; whether a dialog appears to save the download or the file downloads automatically without a prompt is browser-dependent.
I am trying to crawl some website content using a combination of jsoup and Java, saving the relevant details to my database and repeating the same activity daily.
But here is the deal: when I open the website in a browser I get the rendered HTML (with all the element tags in place). When I test the JavaScript part, it works just fine (the part I'm supposed to use to extract the correct data).
But when I do a parse/get with jsoup (from a Java class), only the initial page is downloaded for parsing. That is, some parts of the website are dynamic: since they are rendered asynchronously after the initial GET, I am unable to capture that data with jsoup.
Does anybody know a way around this? Am I using the right toolset? More experienced people, I'd welcome your advice.
You first need to check whether the website you're crawling requires any of the following to show all of its content:
Authentication with Login/Password
Some sort of session validation in HTTP headers
Cookies
Some sort of time delay before all the content loads (sites heavy on JavaScript libraries, CSS, and asynchronous data may need this).
A specific User-Agent header
A proxy password if, for example, you're behind a corporate network security configuration.
If anything on this list is needed, you can manage that data by providing the parameters in your Jsoup.connect() call; see the sketch after the link. Please refer to the official documentation:
http://jsoup.org/cookbook/input/load-document-from-url
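A minimal sketch of supplying such parameters on the connect call (the User-Agent, cookie, and timeout values below are illustrative placeholders, not values the target site necessarily requires):

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    public class CrawlWithParams {
        public static void main(String[] args) throws Exception {
            Document doc = Jsoup.connect("http://example.com")
                    .userAgent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)") // browser-like UA
                    .header("Accept-Language", "en-US")
                    .cookie("JSESSIONID", "your-session-id") // hypothetical session cookie
                    .timeout(30000) // 30 seconds, for slow-loading sites
                    .get();
            System.out.println(doc.title());
        }
    }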
This problem relates to the Restlet framework and Java.
When a client wants to discover the resources available on a server, it must send an HTTP request with OPTIONS as the request method. This is fine, I guess, for non-human clients, i.e. code rather than a browser.
The problem I see here is that browsers (driven by humans) use GET, so they will NOT be able to quickly discover the resources available to them and find some extra help documentation etc., because they do not use OPTIONS as a request method.
Is there a way to make a browser send an OPTIONS request (or an equivalent GET) so the server can fire back formatted XML to the client (as this is what happens in Restlet, i.e. the server's response is to send all information back as XML) and display it in the browser?
Or have I got my thinking all wrong, i.e. the point of OPTIONS is that it is meant to be used inside a client's code and not meant to be read via a browser?
Use the TunnelService (which by default is already enabled) and simply add the method=OPTIONS query parameter to your URL.
(The Restlet FAQ Q19 is a similar question.)
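For example, a minimal Restlet 2.x sketch (the resource class and path are placeholders); with the method tunnel on, a browser GET to .../myresource?method=OPTIONS is handled by the server as an OPTIONS request:

    import org.restlet.Application;
    import org.restlet.Restlet;
    import org.restlet.resource.ServerResource;
    import org.restlet.routing.Router;

    public class MyApplication extends Application {
        // Hypothetical resource; any ServerResource subclass would do here.
        public static class MyResource extends ServerResource {
        }

        @Override
        public Restlet createInboundRoot() {
            // The method tunnel is already on by default; set explicitly for clarity.
            getTunnelService().setMethodTunnel(true);
            Router router = new Router(getContext());
            router.attach("/myresource", MyResource.class);
            return router;
        }
    }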
I think OPTIONS is not designed to be 'user-visible'.
How would you dispatch an OPTIONS request from the browser? (Note that the form element only allows GET and POST.)
You could send it using XMLHttpRequest and then get back the XML in your JavaScript callback and render it appropriately. But I'm not convinced this is something your user should really know about!