I have just implemented some Java code to get the number of pages of a document, but it only works for PDF files. I need to count the number of pages of other file types (DOCX, HTML, etc.). Any ideas?
My code is:
public int numberOfPages(@RequestBody MultipartFile inputFile) throws Exception {
    int numberOfPages = 0;
    InputStream inputStream = inputFile.getInputStream();
    PDDocument document = PDDocument.load(inputStream);
    if (document != null) {
        numberOfPages = document.getNumberOfPages();
        document.close();
    }
    return numberOfPages;
}
I think this is not as easy a task as it seems, because the page count depends on the paper size, the printer type, image sizes, and so on.
One possible solution is to convert the input document to PDF; then you can count the pages easily. You can store the PDF content alongside the original documents, or you can call a toPdf(FileInputStream document) style method on the fly each time you need the page count. Which approach is better depends on the number of files and your performance requirements.
Such a conversion can handle HTML, Office documents, plain text, and images.
You can use Apache Tika to detect the type of the uploaded file, and based on that information you can execute the proper method to convert the uploaded content to PDF.
Check the file type:
public static MediaType getMediaType(final byte[] content) throws IOException {
    try (InputStream stream = new ByteArrayInputStream(content)) {
        TikaConfig tika = TikaConfig.getDefaultConfig();
        Metadata metadata = new Metadata();
        return tika.getDetector().detect(stream, metadata);
    }
}
Then:
MediaType mediaType = ContentTypeDetector.getMediaType(content);
String uploadedContent = mediaType.toString();
if (uploadedContent.equals("image/jpeg")) {
    PDF pdf = SomeClass.jpgToPdf(...);
} else if (uploadedContent.equals(...)) {
    PDF pdf = SomeClass....(...);
}
iText is a nice Java library for creating PDF files from the uploaded content based on your settings.
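As a hedged illustration of what such a jpgToPdf helper might look like (assuming iText 7; the method name is just the placeholder from the pseudocode above, not a real library call):

import com.itextpdf.io.image.ImageDataFactory;
import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfWriter;
import com.itextpdf.layout.Document;
import com.itextpdf.layout.element.Image;

public static void jpgToPdf(byte[] jpegBytes, String outputPath) throws Exception {
    // Wrap the uploaded JPEG in a one-page PDF; closing the Document
    // also closes the underlying PdfDocument.
    PdfDocument pdf = new PdfDocument(new PdfWriter(outputPath));
    try (Document document = new Document(pdf)) {
        document.add(new Image(ImageDataFactory.create(jpegBytes)));
    }
}

Once every upload has been converted to PDF, the page count comes from the same PDDocument.getNumberOfPages() call as in the question.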
I am working on PDF-to-Excel conversion using docparser.
But docparser is unable to process scanned PDFs properly, so I need to separate the scanned PDFs from the normal ones and only process normal PDFs through docparser (i.e., the API call).
Is there some way to identify the PDF type (scanned or normal) programmatically so that I can take it from there?
Please help if anyone knows how to tackle this problem.
Finally, I found a solution to my question, though not a standard one (I think). Thanks to the people who commented and provided some help.
Using the PDFBox library we can walk the pages of a PDF and check whether each page's resources contain an image XObject (PDImageXObject); each one found is counted as an image. If the image count equals the number of pages in the PDF, we say it is a scanned PDF.
Here is the code:
public static String testPdf(String filename) throws IOException {
    int imageCount = 0;
    PDDocument doc = PDDocument.load(new File(filename));
    int pageCount = doc.getNumberOfPages();
    for (PDPage page : doc.getPages()) {
        PDResources resources = page.getResources();
        for (COSName xObjectName : resources.getXObjectNames()) {
            PDXObject xObject = resources.getXObject(xObjectName);
            if (xObject instanceof PDImageXObject) {
                imageCount++;
            }
        }
    }
    doc.close();
    if (imageCount == pageCount) { // heuristic: as many images as pages
        return "Scanned pdf";
    } else {
        return "Searchable pdf";
    }
}
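A hedged usage sketch for the routing described in the question, placed in the same class as testPdf (the directory path and filter are hypothetical; only searchable PDFs would go on to the docparser API call):

public static void routePdfs(File dir) throws IOException {
    // listFiles returns null for an invalid directory; omitted here for brevity
    for (File f : dir.listFiles((d, name) -> name.endsWith(".pdf"))) {
        if ("Searchable pdf".equals(testPdf(f.getAbsolutePath()))) {
            // submit f to the docparser API; scanned PDFs are skipped
        }
    }
}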
I am testing my method with this form https://help.adobe.com/en_US/Acrobat/9.0/Samples/interactiveform_enabled.pdf
It is being called like so:
Pdf.editForm("./src/main/resources/pdfs/interactiveform_enabled.pdf", "./src/main/resources/pdfs/FILLEDOUT.pdf");
where Pdf is just a worker class and editForm is a static method.
The editForm method looks like this:
public static int editForm(String inputPath, String outputPath) {
    try {
        PdfDocument pdf = new PdfDocument(new PdfReader(inputPath), new PdfWriter(outputPath));
        PdfAcroForm form = PdfAcroForm.getAcroForm(pdf, true);
        Map<String, PdfFormField> m = form.getFormFields();
        for (String s : m.keySet()) {
            if (s.equals("Name_First")) {
                m.get(s).setValue("Tristan");
            }
            if (s.equals("BACHELORS DEGREE")) {
                m.get(s).setValue("Off"); // On or Off
            }
            if (s.equals("Sex")) {
                m.get(s).setValue("FEMALE");
            }
            System.out.println(s);
        }
        pdf.close();
        logger.info("Completed");
    } catch (IOException e) {
        logger.error("Unable to fill form " + outputPath + "\n\t" + e);
        return 1;
    }
    return 0;
}
Unfortunately the FILLEDOUT.pdf file is no longer a form after calling this method. Am I doing something wrong?
I was using this resource for guidance. Notice that I am not calling form.flattenFields(). If I do call that method, however, I get a java.lang.IllegalArgumentException.
Thank you for your time.
Your form is Reader-enabled, i.e. it contains a usage rights digital signature by a key and certificate issued by Adobe to indicate to a regular Adobe Reader that it shall activate a number of additional features when operating on that very PDF.
If you stamp the file as in your original code, the existing PDF objects will get re-arranged and slightly changed. This breaks the usage rights signature, and Adobe Reader, recognizing that, disclaims "The document has been changed since it was created and use of extended features is no longer available."
If you stamp the file in append mode, though, the changes are appended to the PDF as an incremental update. Thus, the signature still correctly signs its original byte range and Adobe Reader does not complain.
To activate append mode, use StampingProperties when you create your PdfDocument:
PdfDocument pdf = new PdfDocument(new PdfReader(inputPath), new PdfWriter(outputPath), new StampingProperties().useAppendMode());
(Tested with iText 7.1.1-SNAPSHOT and Adobe Acrobat Reader DC version 2018.009.20050)
By the way, Adobe Reader does not merely check the signature; it also tries to verify that the changes in the incremental update do not go beyond the scope of the additional features activated by the usage rights signature.
Otherwise you could simply take a small Reader-enabled PDF and, in append mode, replace all existing pages with your own content of choice. That, of course, would not be in Adobe's interest...
The filled-in PDF is still an AcroForm; otherwise the example below, which fills the output of the first pass a second time, would simply produce the same PDF twice.
public class Main {
    public static final String SRC = "src/main/resources/interactiveform_enabled.pdf";
    public static final String DEST = "results/filled_form.pdf";
    public static final String DEST2 = "results/filled_form_second_time.pdf";

    public static void main(String[] args) throws Exception {
        File file = new File(DEST);
        file.getParentFile().mkdirs();
        Main main = new Main();
        Map<String, String> data1 = new HashMap<>();
        data1.put("Name_First", "Tristan");
        data1.put("BACHELORS DEGREE", "Off");
        main.fillPdf(SRC, DEST, data1, false);
        Map<String, String> data2 = new HashMap<>();
        data2.put("Sex", "FEMALE");
        main.fillPdf(DEST, DEST2, data2, false);
    }

    private void fillPdf(String src, String dest, Map<String, String> data, boolean flatten) {
        try {
            PdfDocument pdf = new PdfDocument(new PdfReader(src), new PdfWriter(dest));
            PdfAcroForm form = PdfAcroForm.getAcroForm(pdf, true);
            // Delete the print field from the AcroForm because it is defined
            // in the content stream, not in the form fields
            form.removeField("Print");
            Map<String, PdfFormField> m = form.getFormFields();
            for (String d : data.keySet()) {
                for (String s : m.keySet()) {
                    if (s.equals(d)) {
                        m.get(s).setValue(data.get(d));
                    }
                }
            }
            if (flatten) {
                form.flattenFields();
            }
            pdf.close();
            System.out.println("Completed");
        } catch (IOException e) {
            System.out.println("Unable to fill form " + dest + "\n\t" + e);
        }
    }
}
The issue you are facing has to do with 'Reader-enabled forms'.
What it boils down to is that the PDF file initially fed to your program is Reader-enabled. Hence you can open the PDF in Adobe Reader and fill in the form. This allows Acrobat users to extend the behaviour of Adobe Reader.
Once the PDF is filled in and closed using iText, it is saved as 'not Reader-extended'.
This means the AcroForm can still be filled using iText, but when you open the PDF in Adobe Reader the extended functionality of the original PDF is gone. It does not mean the form is flattened, though.
iText cannot make a form Reader-enabled; as a matter of fact, the only way to create a Reader-enabled form is with Acrobat Professional. This is how Acrobat and Adobe Reader interact, and it is not something iText can imitate or work around. You can find some more info and a possible solution at this link.
The IllegalArgumentException you get when you call form.flattenFields() is caused by the way the PDF document was constructed.
The "Print form" button should have been defined in the AcroForm, yet it is defined in the content stream of the PDF, meaning the button in the AcroForm has an empty text value, and this is what causes the exception.
You can fix this by removing the print field from the AcroForm before you flatten.
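A minimal sketch of that fix, reusing the pdf and form objects and the "Print" field name from the example above:

PdfAcroForm form = PdfAcroForm.getAcroForm(pdf, true);
form.removeField("Print"); // drop the button that only lives in the content stream
form.flattenFields();      // flattening no longer hits the IllegalArgumentException
pdf.close();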
The IllegalArgumentException issue itself has been fixed in iText 7.1.5.
I would like to save all attached files from an Excel (XLS/HSSF) workbook without an extension.
I've been trying for a long time now, and I really don't know if this is even possible. I also tried Apache Tika, but I don't want to use Tika for this because I need POI for other tasks anyway.
I tried the sample code from the Busy Developers' Guide, but it does not extract files in the old Office formats (DOC, PPT, XLS), and it throws an error when trying to create new SlideShow(new HSLFSlideShow(dn, fs)): "Remove argument to match HSLFSlideShow(dn)".
My actual code is:
public static void saveEmbeddedXLS(InputStream fis_param, String embDIR) throws IOException, InvalidFormatException {
    // HSSF - XLS
    int i = 0;
    System.out.println("Starting embedded search in xls...");
    POIFSFileSystem fs = new POIFSFileSystem(fis_param); // create the file system from the input stream
    HSSFWorkbook workbook = new HSSFWorkbook(fs);
    for (HSSFObjectData obj : workbook.getAllEmbeddedObjects()) {
        System.out.println("Objects: " + obj.getOLE2ClassName()); // the OLE2 class name of the object
        String oleName = obj.getOLE2ClassName(); // document type
        DirectoryNode dn = (DirectoryNode) obj.getDirectory(); // get the directory node
        // Trying to create an input stream for the embedded document. The argument of
        // createDocumentInputStream should be a String. Where/how can I get the correct
        // parameter for this call?
        InputStream is = dn.createDocumentInputStream(dn); // This line is incorrect! How can I do it correctly?
        FileOutputStream fos = new FileOutputStream(embDIR + i); // output file path + number
        IOUtils.copy(is, fos); // save the file without an extension
        i++;
    }
}
So my simple question is:
Is it possible to save ALL attachments from an XLS file without any extension (as simply as possible)? And can anyone provide me a solution? Many thanks!
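One hedged sketch of how that incorrect line might be replaced, iterating the entries of the embedded object's DirectoryNode instead of passing the node itself (an untested assumption about the POI API; i and embDIR are the variables from the loop above):

for (org.apache.poi.poifs.filesystem.Entry entry : dn) {
    if (entry instanceof DocumentEntry) {
        // open each embedded document stream by name and dump it verbatim,
        // i.e. without guessing any file extension
        try (InputStream is = dn.createDocumentInputStream(entry.getName());
             FileOutputStream fos = new FileOutputStream(embDIR + i + "_" + entry.getName())) {
            IOUtils.copy(is, fos);
        }
    }
}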
Please tell me how to append data to a DOCX file using Java and docx4j.
What I'm doing is using a template in DOCX format in which some fields are filled in by Java at run time.
My problem is that for every group of data it creates a new file, and I just want to append each new file to one file; simply concatenating the bytes with Java streams does not do this.
String outputfilepath = "e:\\Practice/DOC/output/generatedLatterOUTPUT.docx";
String outputfilepath1 = "e:\\Practice/DOC/output/generatedLatterOUTPUT1.docx";
WordprocessingMLPackage wordMLPackage;

public void templetsubtitution(String name, String age, String gender, Document document)
        throws Exception {
    // input file name
    String inputfilepath = "e:\\Practice/DOC/profile.docx";
    // ids of the custom XML parts
    String itemId1 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    String itemId2 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    String itemId3 = "{A5D3A327-5613-4B97-98A9-FF42A2BA0F74}".toLowerCase();
    // load the package
    if (inputfilepath.endsWith(".xml")) {
        JAXBContext jc = Context.jcXmlPackage;
        Unmarshaller u = jc.createUnmarshaller();
        u.setEventHandler(new org.docx4j.jaxb.JaxbValidationEventHandler());
        org.docx4j.xmlPackage.Package wmlPackageEl = (org.docx4j.xmlPackage.Package) ((JAXBElement) u
                .unmarshal(new javax.xml.transform.stream.StreamSource(
                        new FileInputStream(inputfilepath)))).getValue();
        org.docx4j.convert.in.FlatOpcXmlImporter xmlPackage = new org.docx4j.convert.in.FlatOpcXmlImporter(
                wmlPackageEl);
        wordMLPackage = (WordprocessingMLPackage) xmlPackage.get();
    } else {
        wordMLPackage = WordprocessingMLPackage.load(new File(inputfilepath));
    }
    CustomXmlDataStoragePart customXmlDataStoragePart = wordMLPackage
            .getCustomXmlDataStorageParts().get(itemId1);
    // get the contents and change the name node
    CustomXmlDataStorage customXmlDataStorage = customXmlDataStoragePart.getData();
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:name[1]", name,
            "xmlns:ns0='EasyForm'");
    // same for the age node
    customXmlDataStoragePart = wordMLPackage.getCustomXmlDataStorageParts().get(itemId2);
    customXmlDataStorage = customXmlDataStoragePart.getData();
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:age[1]", age,
            "xmlns:ns0='EasyForm'");
    // same for the gender node
    customXmlDataStoragePart = wordMLPackage.getCustomXmlDataStorageParts().get(itemId3);
    customXmlDataStorage = customXmlDataStoragePart.getData();
    ((CustomXmlDataStorageImpl) customXmlDataStorage).setNodeValueAtXPath(
            "/ns0:orderForm[1]/ns0:record[1]/ns0:gender[1]", gender,
            "xmlns:ns0='EasyForm'");
    // apply the bindings
    BindingHandler.applyBindings(wordMLPackage.getMainDocumentPart());
    File f = new File(outputfilepath);
    wordMLPackage.save(f);
    // read the generated file back into memory ...
    FileInputStream fis = new FileInputStream(f);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    byte[] buf = new byte[1024];
    try {
        for (int readNum; (readNum = fis.read(buf)) != -1;) {
            bos.write(buf, 0, readNum);
        }
    } catch (IOException ex) {
        ex.printStackTrace(); // was silently swallowed
    }
    byte[] bytes = bos.toByteArray();
    // ... and append its raw bytes to the second file; note that concatenating
    // two docx (zip) files this way does not produce a valid document
    FileOutputStream file = new FileOutputStream(outputfilepath1, true);
    DataOutputStream out = new DataOutputStream(file);
    out.write(bytes);
    out.flush();
    out.close();
    System.out.println("..done");
}
public static void main(String[] args) {
    utility u = new utility();
    // the Document parameter is never used in templetsubtitution, so null is passed here
    u.templetsubtitution("aditya", "24", "mohan", null);
}
Thanks in advance.
If I understand you correctly, you're essentially talking about merging documents. There are two very simple approaches you can use, and their effectiveness really depends on the structure and onward use of your data:
1. PhilippeAuriach describes one approach in his answer, which entails appending all components within a MainDocumentPart instance to another. In terms of the final docx file, this means the content that appears in document.xml; it won't take into account headers and footers (for example), but that may be fine for you.
2. You can insert multiple documents into a single docx file by inserting them as AltChunk elements (see the docx4j documentation, and the sketch after this list). This will bring everything from one Word file into another, headers and all. The downside is that your final document won't be a proper flowing Word file until you open it and save it in MS Word itself (the imported components remain as standalone files within the docx bundle). This will cause you issues if you want to generate 'merged' files and then do something with them, like render PDFs; the merged content will simply be ignored.
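Here is the promised sketch of the AltChunk route, assuming docx4j 3.x; the file names are hypothetical:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.docx4j.openpackaging.parts.WordprocessingML.AltChunkType;

public static void mergeViaAltChunk() throws Exception {
    WordprocessingMLPackage target = WordprocessingMLPackage.load(new File("main.docx"));
    byte[] chunk = Files.readAllBytes(Paths.get("insert.docx"));
    // the inserted docx remains a standalone part until Word re-saves the file
    target.getMainDocumentPart().addAltChunk(AltChunkType.WordprocessingML, chunk);
    target.save(new File("merged.docx"));
}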
The more complete (and complex) approach is to perform a "deep merge". This updates and maintains all references held within a document. Imported content becomes part of the main "flow" of the document (i.e. it is not stored as separate references), so the end result is a properly-merged file which can be rendered to PDF or whatever.
The downside to this is you need a good knowledge of docx structure and the API, and you will be writing a fair amount of code (I would recommend buying a license to Plutext's MergeDocx instead).
I had to deal with similar requirements, and here is what I did (probably not the most efficient, but working):
- create a finalDoc by loading the template and emptying it (so you keep the styles in this doc)
- for each data row, create a new doc loading the template, then replace your fields with your values
- use the function below to append each doc filled with data to the finalDoc:
public static void append(WordprocessingMLPackage docDest, WordprocessingMLPackage docSource) {
    List<Object> objects = docSource.getMainDocumentPart().getContent();
    for (Object o : objects) {
        docDest.getMainDocumentPart().getContent().add(o);
    }
}
Hope this helps.
I know this has probably been asked 10,000 times; however, I can't seem to find a straight answer to the question.
I have a LOB stored in my DB that represents an image; I am getting that image from the DB and I would like to show it on a web page via the HTML img tag. This isn't my preferred solution, but it's a stop-gap implementation until I can find a better one.
I'm trying to convert the byte[] to Base64 using the Apache Commons Codec in the following way:
String base64String = Base64.encodeBase64String({my byte[]});
Then, I am trying to show my image on my page like this:
<img src="data:image/jpg;base64,{base64String from above}"/>
It's displaying the browser's default "I cannot find this image" image.
Does anyone have any ideas?
Thanks.
I used this and it worked fine (contrary to the accepted answer, which uses a format not recommended for this scenario):
StringBuilder sb = new StringBuilder();
sb.append("data:image/png;base64,");
sb.append(StringUtils.newStringUtf8(Base64.encodeBase64(imageByteArray, false)));
contourChart = sb.toString();
According to the official documentation, Base64.encodeBase64URLSafeString(byte[] binaryData) should be what you're looking for.
Also, the MIME type for JPG is image/jpeg.
That's the correct syntax. It might be that your web browser does not support the data URI scheme. See Which browsers support data URIs and since which version?
Also, the JPEG MIME type is image/jpeg.
You may also want to consider streaming the images out to the browser rather than encoding them on the page itself.
Here's an example of streaming an image contained in a file out to the browser via a servlet, which could easily be adapted to stream the contents of your BLOB rather than a file:
public void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    ServletOutputStream sos = resp.getOutputStream();
    try {
        final String someImageName = req.getParameter(someKey);
        // resolve the image path and stream the file to the response
        File imgFile = new File(someImageName);
        writeResponse(resp, sos, imgFile);
    } catch (URISyntaxException e) {
        throw new ServletException(e);
    } finally {
        sos.close();
    }
}

private void writeResponse(HttpServletResponse resp, OutputStream out, File file)
        throws URISyntaxException, FileNotFoundException, IOException {
    // get the MIME type of the file
    String mimeType = getServletContext().getMimeType(file.getAbsolutePath());
    if (mimeType == null) {
        log.warn("Could not get MIME type of file: " + file.getAbsolutePath());
        resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        return;
    }
    resp.setContentType(mimeType);
    resp.setContentLength((int) file.length());
    writeToFile(out, file);
}

private void writeToFile(OutputStream out, File file)
        throws FileNotFoundException, IOException {
    final int BUF_SIZE = 8192;
    // write the contents of the file to the output stream
    FileInputStream in = new FileInputStream(file);
    try {
        byte[] buf = new byte[BUF_SIZE];
        for (int count; (count = in.read(buf)) >= 0;) {
            out.write(buf, 0, count);
        }
    } finally {
        in.close();
    }
}
If you don't want to stream from a servlet, then save the file to a directory in the web root and create the src pointing to that location. That way the web server does the work of serving the file. If you are feeling particularly clever, you can check for an existing file by timestamp/inode/crc32 and only write it out if it has changed in the DB, which can give you a performance boost. This file-based method will also automatically support the ETag and If-Modified-Since headers so that the browser can cache the file properly.
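A hedged sketch of that conditional write (the web-root layout, imageId, DB timestamp, and blob bytes are all hypothetical names, not part of any API):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

static void cacheImage(File webRoot, long imageId, long dbLastModifiedMillis, byte[] blobBytes)
        throws IOException {
    // Re-export the blob only when the DB row is newer than the cached file;
    // the web server then serves img/{imageId}.jpg itself, with ETag and
    // If-Modified-Since handled for free.
    File cached = new File(webRoot, "img/" + imageId + ".jpg");
    if (!cached.exists() || cached.lastModified() < dbLastModifiedMillis) {
        try (OutputStream out = new FileOutputStream(cached)) {
            out.write(blobBytes);
        }
        cached.setLastModified(dbLastModifiedMillis);
    }
}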