Vaadin: How to check if browser has pdf plug-in? - java

Is there a possibility in Vaadin 7, or in Java generally, to check whether a browser has an embedded PDF reader or not?
I need to know this because how I open the PDF depends on it.

There is unfortunately no way to consistently check whether the browser supports viewing PDF files or not. I would recommend using something like PDF.js (https://github.com/mozilla/pdf.js) or FlexPaper (http://flexpaper.devaldi.com/products.jsp) on your web site to display your documents, to make sure your visitors can see them.
Both of those options are available as open source.

In a web application, Java (and therefore Vaadin) runs on the server side, so you cannot know which technology is installed on the client, in this case the browser. Just fire your file download with the application/pdf MIME header and let the client do its work. If you want to fire it as a generic file download, use the application/octet-stream MIME header instead.
Here's a more generic Q&A on this topic: How to determine if the user's browser can view PDF files
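For illustration, a minimal Vaadin 7 sketch of the two options; the component names and the way the PDF bytes are obtained are assumptions for the example, not part of the answer:

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import com.vaadin.server.BrowserWindowOpener;
import com.vaadin.server.FileDownloader;
import com.vaadin.server.StreamResource;
import com.vaadin.ui.Button;

public class PdfButtons {

    // Wraps the PDF bytes in a resource carrying the given MIME type.
    private StreamResource createPdfResource(final byte[] pdfBytes, String mimeType) {
        StreamResource resource = new StreamResource(new StreamResource.StreamSource() {
            @Override
            public InputStream getStream() {
                return new ByteArrayInputStream(pdfBytes);
            }
        }, "document.pdf");
        resource.setMIMEType(mimeType);
        return resource;
    }

    // Opens the PDF in a new browser tab; a browser with a PDF viewer renders it inline.
    public Button createViewButton(byte[] pdfBytes) {
        Button view = new Button("View PDF");
        new BrowserWindowOpener(createPdfResource(pdfBytes, "application/pdf")).extend(view);
        return view;
    }

    // Forces a plain file download regardless of what the browser can render.
    public Button createDownloadButton(byte[] pdfBytes) {
        Button download = new Button("Download PDF");
        new FileDownloader(createPdfResource(pdfBytes, "application/octet-stream")).extend(download);
        return download;
    }
}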

Related

How to read and save a server-side PDF with Adobe in a client-side web browser? (plugin, applet / Java servlet, ...)

My need is:
read a PDF stored on a server and display it in a web browser
save the user's modifications to the PDF back on the server
Currently I'm using a standalone applet that uses a servlet to open a connection between the client and the server. This lets me read and save the PDF as a byte stream (a minimal sketch of that round trip follows below).
I need to do it with Adobe.
Do you know how I can do this? (Plugin, ...)
Thanks
Environment: Tomcat, Java, JSP, Internet Explorer
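For reference, a minimal sketch of the applet-to-servlet byte-stream round trip mentioned above; the URLs and class name are made up, and the Adobe-specific part is not covered here:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PdfTransfer {

    // Downloads the PDF from the servlet as a byte array.
    public static byte[] fetchPdf(URL servletUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) servletUrl.openConnection();
        InputStream in = conn.getInputStream();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        in.close();
        return out.toByteArray();
    }

    // Sends the modified PDF back to the servlet in the POST body.
    public static void savePdf(URL servletUrl, byte[] pdfBytes) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) servletUrl.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/pdf");
        OutputStream out = conn.getOutputStream();
        out.write(pdfBytes);
        out.close();
        conn.getInputStream().close(); // force the request to complete
    }
}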

Import html to lotus notes richtext using java

I would like to create a Notes mail from some HTML source (possibly with inline images and attachments) using Java through DIIOP. I tried to use a MIME item to do that, but sign and encrypt would then need Internet certs. So rich text seems to be the only choice, but I could not find any Java API to import HTML into rich text. In the Notes client GUI, one can import from text/html, and I also noticed that MIME mail exported from the inbox is "Itemized by DIIOP Server". Is there any way I can programmatically import HTML into a Lotus Notes message so that sign and encrypt can be used with the Lotus Notes internal certs?
Thanks and Regards,
Shing
You should be able to encrypt using Java via DIIOP, but you can't sign that way.
You need a private key in order to sign a message or document. The low-level Notes APIs expect the private key to be located in the current ID file for the session. When you are using DIIOP, your Java code is running locally and it does not have access to your user ID file. The low-level Notes APIs don't run on the same machine that the Java code is running on; there usually isn't even a Notes or Domino installation on the machine where the Java code runs, so the code for the low-level APIs isn't even available to the JVM.
In a DIIOP configuration, the low-level Notes API code is running on the Domino server. The only ID file it has access to is the server ID file, and it will not allow you to sign using the server's private key.
Eventually found a solution, albeit rather hacky: create a document using MIME, save it to the database, then close the session. Then open a new session and get the saved document back; it has been converted to rich text by the Domino server, but some traces of MIME remain. Export it to DXL using DXLExporter and, in the exported DXL, remove the items "MIME_Version" and "$MIMETrack". Inline images of types other than JPG and GIF (e.g. PNG) are not handled properly, so you have to play around with the XML DOM a bit to fix them. Then import the fixed DXL using DXLImporter, and there you have a converted rich text document, rather like what you get from importing an HTML file in the Notes client GUI. Better than nothing.
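A hedged sketch of that round trip with the lotus.domino classes (DxlExporter/DxlImporter); the lookup by UNID and the regex-based item stripping are simplifications for illustration, and a DOM-based cleanup is where the PNG inline-image fix-up would go:

import lotus.domino.Database;
import lotus.domino.Document;
import lotus.domino.DxlExporter;
import lotus.domino.DxlImporter;
import lotus.domino.NotesException;
import lotus.domino.Session;

public class MimeToRichTextHack {

    public static void convert(Session session, Database db, String unid)
            throws NotesException {
        // Re-fetch the document saved in the previous session; by now the
        // Domino server has converted the MIME body to rich text.
        Document doc = db.getDocumentByUNID(unid);

        // Export the document to DXL.
        DxlExporter exporter = session.createDxlExporter();
        String dxl = exporter.exportDxl(doc);

        // Strip the leftover MIME items from the exported DXL.
        dxl = dxl.replaceAll("(?s)<item name='MIME_Version'.*?</item>", "");
        dxl = dxl.replaceAll("(?s)<item name='\\$MIMETrack'.*?</item>", "");

        // Import the cleaned DXL back, yielding a rich text document.
        DxlImporter importer = session.createDxlImporter();
        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
        importer.importDxl(dxl, db);
    }
}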

HTTP header for inline PDF filename from Java webserver

I need to send the client a byte[] with PDF data from my Tomcat server.
I'm using this:
response.setContentType("application/pdf");
response.setHeader("Content-Disposition", "inline; filename=test.pdf");
But (at least) with Firefox I get a file download instead of an inline display.
The only way to show the PDF data inline is to remove the Content-Disposition header; however, if I do so I cannot set the filename, and the PDF name is taken from the last segment of the URL.
You seem to be setting the right headers, but rendering of PDF and other such formats also depends on the browser's capabilities: the browser needs to have a PDF plugin installed in order to render a PDF when it sees that Content-Type header. So make sure you install a PDF plugin for your Firefox and test again after that. You can get a Firefox PDF add-on from here:
https://addons.mozilla.org/en-US/firefox/addon/pdf-download/
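For completeness, a sketch of the URL-path workaround mentioned in the question: map the servlet to a wildcard path and request it as /pdf/test.pdf, so the browser derives the filename from the last URL segment even when no Content-Disposition header is sent. The mapping and the loadPdf() helper are made up for the example.

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Request the document as /pdf/test.pdf; the last path segment supplies the name.
@WebServlet("/pdf/*")
public class InlinePdfServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        byte[] pdfBytes = loadPdf(); // assumed helper producing the PDF bytes

        // No Content-Disposition header: a browser with a PDF plugin shows the
        // file inline and names it after the last part of the URL.
        response.setContentType("application/pdf");
        response.setContentLength(pdfBytes.length);
        response.getOutputStream().write(pdfBytes);
    }

    private byte[] loadPdf() {
        return new byte[0]; // placeholder for real PDF content
    }
}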

Advice with crawling web site content

I'm trying to crawl some website content using a combination of jsoup and Java, save the relevant details to my database, and repeat the same activity daily.
But here is the deal: when I open the website in a browser I get the rendered HTML (with all element tags in place). The JavaScript part, when I test it, works just fine (the one I'm supposed to use to extract the correct data).
But when I do a parse/get with jsoup (from a Java class), only the initial page is downloaded for parsing. In other words, there are dynamic parts of the website whose data I want, but since they are rendered after the GET, asynchronously in the browser, I'm unable to capture them with jsoup.
Does anybody know a way around this? Am I using the right toolset? More experienced people, I would appreciate your advice.
You need to check first whether the website you're crawling demands any of the following to show all of its contents:
Authentication with login/password
Some sort of session validation in the HTTP headers
Cookies
Some sort of time delay to load all the contents (sites heavy on JavaScript libraries, CSS and asynchronous data may need this)
A specific User-Agent browser
A proxy password if, for example, you're inside a corporate network security configuration.
If anything on this list is needed, you can supply that data through the parameters of your jsoup.connect() call, as sketched below. Please refer to the official doc:
http://jsoup.org/cookbook/input/load-document-from-url
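A minimal sketch of passing such parameters through jsoup's Connection API; the URL, form field names and header values are made-up placeholders:

import java.io.IOException;
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class CrawlerSketch {

    public static void main(String[] args) throws IOException {
        // Log in first (if the site requires it) and capture the session cookies.
        Connection.Response login = Jsoup.connect("http://example.com/login")
                .data("username", "user")
                .data("password", "secret")
                .method(Connection.Method.POST)
                .execute();

        // Reuse the cookies, set a browser-like User-Agent, extra headers
        // and a generous timeout for the actual crawl.
        Document page = Jsoup.connect("http://example.com/data")
                .cookies(login.cookies())
                .userAgent("Mozilla/5.0 (Windows NT 6.1; rv:25.0) Gecko/20100101 Firefox/25.0")
                .header("Accept-Language", "en-US")
                .timeout(30000)
                .get();

        System.out.println(page.title());
    }
}

Note that jsoup only parses the HTML it downloads and does not execute JavaScript, so content injected asynchronously by scripts will still be missing even with all of these parameters set.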

Need to get a path location via a web page

In Firefox 2 I was able to get the path via the Browse button; I use the path in our project to write files out to that location. Now that Browse no longer returns the path, does anyone know a way for a user to go to a directory and have the path returned via a web page, so I can pass it along to the server for processing?
execCommand does not work in Firefox and has limited save-type capability, and entering the path by hand is not a usable option. Thanks.
The ability to see a complete client file path is now considered a security risk, and all modern browsers prevent you from seeing it (both via JavaScript and via the information sent back to the server on the form POST).
This is not possible with HTML/JavaScript. In HTML you can at most use <input type="file"> to select a file, but not a folder. In JS you can't do anything with the local disk file system, let alone with an <input type="file"> element in the DOM tree. You're blocked by security restrictions (you as an end user would of course not like it if websites were able to poke around your local disk file system unasked).
You can only do that with a small application which runs directly on the client machine, for example a (signed!) applet, which is basically just a piece of Java code served by a webpage that runs on the client machine. You can communicate between the applet and a servlet using java.net.URL and friends. Then, in the applet, use Swing's JFileChooser to show a folder or file selection dialogue (see the sketch below).
Update: by the way, MSIE and some other ancient browsers send the full client-side disk file system path along with the <input type="file"> value to the server side. This is technically wrong (only the filename plus extension should be sent) and completely superfluous. That information is worthless on the server side, because the server cannot access the file using the normal java.io.File stuff (unless the server and the client run on physically the same machine, which of course wouldn't occur in the real world). The normal way to get the uploaded file is to parse the multipart/form-data request body (for which one would normally use Apache Commons FileUpload or the Servlet 3.0 provided HttpServletRequest#getParts()).
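Below is a minimal sketch of the applet-side folder picker described above: Swing's JFileChooser in DIRECTORIES_ONLY mode. The class name is made up, and sending the chosen path to the servlet is left out; remember the applet must be signed for the path to be usable.

import java.io.File;
import javax.swing.JApplet;
import javax.swing.JFileChooser;

public class DirectoryPickerApplet extends JApplet {

    // Returns the absolute path of the directory the user picked, or null if cancelled.
    public String pickDirectory() {
        JFileChooser chooser = new JFileChooser();
        chooser.setDialogTitle("Select a target directory");
        chooser.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY);

        int result = chooser.showOpenDialog(this);
        if (result == JFileChooser.APPROVE_OPTION) {
            File dir = chooser.getSelectedFile();
            return dir.getAbsolutePath();
        }
        return null;
    }
}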
