Generate PDF dynamically in Java [closed]

Which of the following is the best way of generating a PDF in Java using iText?
1. Generate the PDF from scratch each time.
2. Have a predefined PDF, and each time push the data values into the predefined PDF and save it as a new PDF.
3. Generate an XML file each time from the data to be pushed, and generate a new PDF from it each time.
Appreciate your response.

When your PDF-generating code is generic, i.e. it handles appending data and applying styling and transformations to the dynamic content, it is advisable to pass your data to it and generate the PDF from scratch each time.
If you are adding images, styling, and transformations to static content, it is better to make a predefined PDF with data-placeholder IDs (hotspots) so that you can replace those IDs with your dynamic content.
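As a rough sketch of the predefined-PDF approach, assuming iText 5 and a hypothetical template.pdf that contains an AcroForm field named customerName (file and field names are placeholders), the template could be filled like this:
import java.io.FileOutputStream;
import com.itextpdf.text.pdf.AcroFields;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;

public class FillTemplate {
    public static void main(String[] args) throws Exception {
        // "template.pdf" and the field name are placeholders for illustration
        PdfReader reader = new PdfReader("template.pdf");
        PdfStamper stamper = new PdfStamper(reader, new FileOutputStream("filled.pdf"));
        // Push the dynamic values into the predefined form fields
        AcroFields form = stamper.getAcroFields();
        form.setField("customerName", "Jane Doe");
        // Flatten so the filled-in values become part of the page content
        stamper.setFormFlattening(true);
        stamper.close();
        reader.close();
    }
}
Whether this beats generating from scratch mostly depends on how much of the layout is static: the more fixed the layout, the more a template saves you.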

Related

Need to write a result set in the form of Map<String, List<String>> to excel sheet [closed]

I have a class that runs a SQL query and returns the result set in Map<String, List<String>> format. Now, I need to store this data in an Excel sheet. Can anyone help me write the logic to store the result set I received to an Excel sheet?
Well, you can use Apache POI; using its API you can write your data into an Excel document. I can't really help you with the actual code, but the logic would roughly be to loop through your map and then use the Apache POI API to write that data into an Excel file. More info on Apache POI here: https://poi.apache.org/spreadsheet/quick-guide.html#NewWorkbook
Let me know if that worked out for you. Thanks!
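A minimal sketch of that loop, assuming a recent Apache POI and its XSSF (.xlsx) classes, writing each map key into the first cell of a row followed by its list values (sheet and file names are placeholders):
import java.io.FileOutputStream;
import java.util.List;
import java.util.Map;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class MapToExcel {
    public static void write(Map<String, List<String>> results) throws Exception {
        try (Workbook wb = new XSSFWorkbook()) {
            Sheet sheet = wb.createSheet("Results");
            int rowIdx = 0;
            for (Map.Entry<String, List<String>> entry : results.entrySet()) {
                Row row = sheet.createRow(rowIdx++);
                // Key in the first column, list values in the following columns
                row.createCell(0).setCellValue(entry.getKey());
                int col = 1;
                for (String value : entry.getValue()) {
                    row.createCell(col++).setCellValue(value);
                }
            }
            try (FileOutputStream out = new FileOutputStream("results.xlsx")) {
                wb.write(out);
            }
        }
    }
}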

Convert from XML in format A to XML in format B [closed]

The schema for the XML file is changing, and I need to create a utility that will take the XML file in format A and convert it to format B. How can I do it?
I am not able to figure out the starting point for it.
You will probably want to look into XSLT. You can write one stylesheet for each iteration of changes, assuming you, or whoever is changing the XML, versions each change. If that is the case, you will easily be able to transform each version into the next.
If versions are not available to you for the XML, then you will probably have to do much stricter matching in your XSLTs.
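As a minimal sketch of applying such a stylesheet from Java with the standard javax.xml.transform API (the file names here are placeholders):
import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class AToB {
    public static void main(String[] args) throws Exception {
        // a-to-b.xsl describes how elements of format A map onto format B
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(new StreamSource(new File("a-to-b.xsl")));
        transformer.transform(new StreamSource(new File("input-a.xml")),
                              new StreamResult(new File("output-b.xml")));
    }
}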

Extract data from microsoft word to a database table [closed]

We receive Word documents that are practically a form. Users fill in the answers to the questions we have in the documents, so essentially it is just a key-value pair of question and answer.
Now, I would like to extract the answers and store them in a database table, mapping each answer to the appropriate column (question). What is the best way to do this? Is there a library that can help me achieve this?
Thanks
I would consider unzipping the .docx file and extracting the information from the embedded .xml file. You can find out more about the Word 2010 format here:
http://blogs.msdn.com/b/chrisrae/archive/2010/10/06/where-is-the-documentation-for-office-s-docx-xlsx-pptx-formats-part-2-office-2010.aspx
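Since a .docx file is just a ZIP archive, a rough sketch of that approach (assuming a hypothetical form.docx) is to read word/document.xml with the standard java.util.zip classes and parse it as XML:
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DocxReader {
    public static void main(String[] args) throws Exception {
        try (ZipFile zip = new ZipFile("form.docx")) {
            // The main body of a .docx lives in word/document.xml
            ZipEntry entry = zip.getEntry("word/document.xml");
            try (InputStream in = zip.getInputStream(entry)) {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse(in);
                // Walk the <w:p>/<w:t> elements here to pull out the
                // question/answer pairs before inserting them into the table
                System.out.println(doc.getDocumentElement().getNodeName());
            }
        }
    }
}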

Extracting data from sites [closed]

I want to extract data from web sites. I already got information from sites using the article extractor, but now I want to get information about the events of a particular place: when I give a location as input, I want to get the events in that place. For example, I want to extract information from this site, "http://www.indianevents.org/events-Rajasthan-14.htm", so that I can extract all the events, festivals, etc.
// Fetch the page and extract the article text with Boilerpipe
URL url = new URL(str);
InputSource is = HTMLFetcher.fetch(url).toInputSource();
BoilerpipeSAXInput in = new BoilerpipeSAXInput(is);
TextDocument doc = in.getTextDocument();
news = ArticleExtractor.INSTANCE.getText(doc);
Consider Apache Tika to download the text content. You can use the Stanford POS tagger to parse the text into meaningful sentences, and NLP techniques can help identify the event information. Although this might sound simple, trust me, it is difficult.
Good luck. :)
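A minimal sketch of the Tika step, using the org.apache.tika.Tika facade to fetch a page and strip the markup (the URL is just the example from the question):
import java.net.URL;
import org.apache.tika.Tika;

public class FetchText {
    public static void main(String[] args) throws Exception {
        // Tika detects the content type and returns plain text,
        // which can then be fed to the POS tagger / NLP step
        Tika tika = new Tika();
        String text = tika.parseToString(new URL("http://www.indianevents.org/events-Rajasthan-14.htm"));
        System.out.println(text);
    }
}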

PDF to Lucene document conversion using PDFBox [closed]

PDFBox provides classes to convert a PDF to a Lucene document. Does it preserve the formatting of the document? By formatting I mean: does it store details about the location, font type/size, and other options?
By default, it will remove all formatting and extract only textual content and make it searchable. This content can be searched, and the original PDF can be maintained external to the index and returned with search results when a hit has been found. Rebuilding a PDF from the Lucene index may not be the best approach, if that is your intent.
PDFBox is quite capable of extracting metadata, though, and it can certainly be used to index formatting / font / etc data, if you wish to be able to search on that sort of thing.
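As a rough sketch of what that usually looks like (assuming PDFBox 2.x and a recent Lucene; file, directory, and field names are placeholders), the text is extracted with PDFTextStripper and indexed alongside a stored path so the original PDF can be returned with a search hit:
import java.io.File;
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PdfIndexer {
    public static void main(String[] args) throws Exception {
        // Extract plain text only; layout and font information are lost here
        try (PDDocument pdf = PDDocument.load(new File("sample.pdf"))) {
            String text = new PDFTextStripper().getText(pdf);
            IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
            try (IndexWriter writer = new IndexWriter(FSDirectory.open(Paths.get("index")), config)) {
                Document doc = new Document();
                // Store the path so search results can link back to the original PDF
                doc.add(new StringField("path", "sample.pdf", Field.Store.YES));
                doc.add(new TextField("contents", text, Field.Store.NO));
                writer.addDocument(doc);
            }
        }
    }
}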
