The code breaks when I use it in this combination: save a value, delete a value, save a value. The moment I save a value a second time, it even adds the deleted values back.
So if I save the configurations 1, 2, 3, 4, delete 3, and then save 5, this saves 3 as well, making the final values 1, 2, 3, 4, 5 even though I deleted 3.
Here is the code that I think is important:
/**
 * This is the save class.
 * This class contains the action for when the user clicks on the save button.
 */
package controller;

/**
 * These are the imports for the class.
 */
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;
import model.ModelFacade;

public class Save {

    /**
     * These are the fields for the class.
     */
    ModelFacade mf = new ModelFacade();
    Properties prop = new Properties();

    /**
     * This method sets the entered value for the matching property.
     *
     * @param property
     * @param value
     */
    public void setPropertyValue(String property, String value) {
        try {
            try {
                //set the properties value
                FileInputStream fis = new FileInputStream("config.properties");
                prop.load(fis);
                fis.close();
                prop.setProperty(property, value);
            }
            catch (IOException ex) {
            }
            //save properties to project root folder
            FileOutputStream fos = new FileOutputStream("config.properties");
            prop.store(fos, null);
            fos.close();
            System.out.println("saved " + property + " as " + value);
        }
        catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
/**
 * This is the delete class.
 * This class contains the action for when the user clicks on the delete button.
 */
package controller;

/**
 * These are the imports of the class.
 */
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Enumeration;
import java.util.Properties;
import model.ModelFacade;

public class Delete {

    /**
     * These are the fields of the class.
     */
    ModelFacade mf = new ModelFacade();
    Properties prop = new Properties();

    /**
     * This method deletes the values of the selected config.
     *
     * @param config
     */
    public void deleteAction(String config) {
        if (mf.checkProperty() == true) {
            try {
                FileInputStream fis = new FileInputStream("config.properties");
                prop.load(fis);
                fis.close();
                @SuppressWarnings("rawtypes")
                Enumeration em = prop.keys();
                int i = 0;
                while (em.hasMoreElements()) {
                    Object obj = em.nextElement();
                    String str = (String) obj;
                    //If the property contains the configuration name in its name it will be deleted.
                    if (str.endsWith(config)) {
                        i++;
                        System.out.println("Deleted: " + str);
                        prop.remove(str);
                        FileOutputStream fos = new FileOutputStream("config.properties");
                        prop.store(fos, null);
                        fos.close();
                    }
                }
                //The system prints a message of the missing configuration.
                if (i < 1) {
                    System.out.println("The configuration could not be found.");
                }
            }
            catch (IOException ex) {
                ex.printStackTrace();
            }
        }
        //The system prints a message that the property file could not be found.
        else {
            System.out.println("The property file could not be found.");
        }
    }
}
These are the two methods that are being used. I hope this is enough information for you to help. Sorry for being so short with my words; I'm kind of tired and short on time right now. If needed, I can explain this in more detail, but I'm hoping the problem is in this code.
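For what it's worth, here is a minimal, runnable sketch of the failure mode described above, assuming the cause is the field-level Properties in the Save class: Properties.load() only adds and overwrites entries, it never removes entries the object already holds, so a key deleted from the file by a different Properties instance gets written back by the next store(). The class below is a demo, not part of the original project.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class StalePropertiesDemo {

    // A field-level Properties that survives across calls, as in the Save class above.
    static Properties prop = new Properties();

    static void save(String key, String value) throws IOException {
        try (FileInputStream fis = new FileInputStream("config.properties")) {
            prop.load(fis); // merges into existing entries; never removes old ones
        }
        prop.setProperty(key, value);
        try (FileOutputStream fos = new FileOutputStream("config.properties")) {
            prop.store(fos, null); // writes back every entry the field has ever held
        }
    }

    public static void main(String[] args) throws IOException {
        try (FileOutputStream fos = new FileOutputStream("config.properties")) {
            new Properties().store(fos, null); // start from an empty file
        }
        save("3", "c"); // the field now holds key 3

        // Delete key 3 through a separate Properties instance, as the Delete class does.
        Properties other = new Properties();
        try (FileInputStream fis = new FileInputStream("config.properties")) {
            other.load(fis);
        }
        other.remove("3");
        try (FileOutputStream fos = new FileOutputStream("config.properties")) {
            other.store(fos, null);
        }

        save("5", "e"); // key 3 reappears in the file, because 'prop' still holds it
        // Using a local Properties in save(), or calling prop.clear() before load(),
        // avoids resurrecting the deleted key.
    }
}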
I'm curious why the file that java.io.PrintWriter creates always ends up in the root directory of my project. I want the generated file to be in my current Java module. Do I need to specify the absolute path to the file location, or is this related to the settings of my IDEA?
Here is the code:
package com.study.inputandoutput.textFile;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.time.LocalDate;
import java.util.Scanner;

public class TextFileTest {

    public static void main(String[] args) throws IOException {
        Employee[] staff = new Employee[3];
        staff[0] = new Employee("Marry", 75000, 1993, 8, 22);
        staff[1] = new Employee("Jerry", 50000, 1997, 1, 22);
        staff[2] = new Employee("Harry", 40000, 1996, 7, 17);
        //save all employee records to the file employee.dat
        try (PrintWriter out = new PrintWriter("employee.dat", StandardCharsets.UTF_8)) {
            writeData(staff, out);
        }
    }

    /**
     * Write all employees in an array to a print writer.
     * @param employees an array of employees
     * @param out a print writer
     */
    private static void writeData(Employee[] employees, PrintWriter out) {
        //write number of employees
        out.println(employees.length);
        //write every employee
        for (Employee e : employees) {
            writeEmployee(out, e);
        }
    }

    /**
     * Write employee data to a print writer
     * @param out the print writer
     * @param e an employee of the employees array
     */
    private static void writeEmployee(PrintWriter out, Employee e) {
        out.println(e.getName() + "|" + e.getSalary() + "|" + e.getHireDay());
    }
}
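For context on the behavior being asked about: new PrintWriter("employee.dat", ...) resolves the relative name against the JVM's working directory, which IDEA typically sets to the project root. A minimal sketch of writing into an explicit directory instead; "my-module" here is a hypothetical folder name, not from the code above.

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkingDirDemo {
    public static void main(String[] args) throws IOException {
        // Relative file names resolve against the JVM's working directory:
        System.out.println(System.getProperty("user.dir"));

        // To target a specific module folder, build the path explicitly.
        Path dir = Paths.get("my-module", "out");
        Files.createDirectories(dir);
        try (PrintWriter out = new PrintWriter(
                Files.newBufferedWriter(dir.resolve("employee.dat"), StandardCharsets.UTF_8))) {
            out.println("written relative to an explicit directory, not the working directory");
        }
    }
}

Alternatively, changing the working directory in the IDEA run configuration moves where all relative paths land without touching the code.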
Is it possible in Apache Flink to write to multiple text files depending on a key? For instance, I have some data like this.
key1, foo, bar
key2, baz, foo
key3, etc, etc
The value of the key isn't known at compile time; new keys would come in, and I'd like to write the results for each key to a separate file from those of the other keys.
I'd expect to see 3 files, named 'key1.txt', 'key2.txt' and 'key3.txt'.
Is this something Flink can do out of the box?
You can try the following sink implementation, which can be used with a KeyedStream:
KeyedStream<Tuple2<String, String>, Tuple> keyedDataStream = dataStream.keyBy(0);
StreamKeyPartitionerSink<Tuple2<String, String>> sinkFunction = new StreamKeyPartitionerSink<Tuple2<String, String>>(
        "../data/key_grouping", "f0"); // f0 is the key field name
keyedDataStream.addSink(sinkFunction);
For more info about state management in Flink, see https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/stream/state.html#keyed-state ; I used keyed state for managing state per key.
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.lang.reflect.Field;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

/**
 * Flink sink that writes tuples to files partitioned by their keys, writing the records in
 * batches.
 *
 * @param <IN> Input tuple type
 *
 * @author ehabqadah
 */
public class StreamKeyPartitionerSink<IN> extends RichSinkFunction<IN> {

    private transient ValueState<String> outputFilePath;
    private transient ValueState<List<IN>> inputTupleList;
    /**
     * Number of records to hold before writing.
     */
    private int writeBatchSize;
    /**
     * The output directory path.
     */
    private String outputDirPath;
    /**
     * The name of the input tuple key field.
     */
    private String keyFieldName;

    public StreamKeyPartitionerSink(String outputDirPath, String keyFieldName) {
        this(outputDirPath, keyFieldName, 1);
    }

    /**
     * @param outputDirPath the output directory
     * @param keyFieldName the name of the key field
     * @param writeBatchSize the number of records to hold before writing
     */
    public StreamKeyPartitionerSink(String outputDirPath, String keyFieldName, int writeBatchSize) {
        this.writeBatchSize = writeBatchSize;
        this.outputDirPath = outputDirPath;
        this.keyFieldName = keyFieldName;
    }

    @Override
    public void open(Configuration config) {
        // initialize the keyed state holders
        ValueStateDescriptor<String> outputFilePathDesc =
                new ValueStateDescriptor<String>("outputFilePathDesc",
                        TypeInformation.of(new TypeHint<String>() {}));
        ValueStateDescriptor<List<IN>> inputTupleListDesc =
                new ValueStateDescriptor<List<IN>>("inputTupleListDesc",
                        TypeInformation.of(new TypeHint<List<IN>>() {}));
        outputFilePath = getRuntimeContext().getState(outputFilePathDesc);
        inputTupleList = getRuntimeContext().getState(inputTupleListDesc);
    }

    @Override
    public void invoke(IN value) throws Exception {
        List<IN> inputTuples =
                inputTupleList.value() == null ? new ArrayList<IN>() : inputTupleList.value();
        inputTuples.add(value);
        if (inputTuples.size() == writeBatchSize) {
            writeInputList(inputTuples);
            inputTuples = new ArrayList<IN>();
        }
        // update the state
        inputTupleList.update(inputTuples);
    }

    /**
     * Write the tuple list, each record on a separate line.
     *
     * @param tupleList
     */
    public void writeInputList(List<IN> tupleList) {
        String path = getOrInitFilePath(tupleList);
        try (PrintWriter outStream = new PrintWriter(new BufferedWriter(new FileWriter(path, true)))) {
            for (IN tupleToWrite : tupleList) {
                outStream.println(tupleToWrite);
            }
        } catch (IOException e) {
            throw new RuntimeException("Exception occurred while writing file " + path, e);
        }
    }

    private String getOrInitFilePath(List<IN> tupleList) {
        IN firstInstance = tupleList.get(0);
        String path = null;
        try {
            path = outputFilePath.value();
            if (path == null) {
                Field keyField = firstInstance.getClass().getField(keyFieldName);
                String keyValue = keyField.get(firstInstance).toString();
                path = Paths.get(outputDirPath, keyValue + ".txt").toString();
                setUpOutputFilePathPath(outputDirPath, path);
                // save the computed path for this key
                outputFilePath.update(path);
            }
        } catch (IOException | NoSuchFieldException | SecurityException | IllegalArgumentException
                | IllegalAccessException e) {
            throw new RuntimeException(
                    "Exception occurred while fetching the value of key field " + path,
                    e);
        }
        return path;
    }

    private void setUpOutputFilePathPath(String outputDirPath, String path) throws IOException {
        if (!Files.exists(Paths.get(outputDirPath))) {
            Files.createDirectories(Paths.get(outputDirPath));
        }
        // create the file if it does not exist and delete its content
        Files.write(Paths.get(path), "".getBytes(), StandardOpenOption.CREATE,
                StandardOpenOption.TRUNCATE_EXISTING);
    }
}
That is not possible out-of-the-box. However, you can implement your own output format and use it via result.output(...) (for the Batch API); see https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/batch/index.html#data-sinks
For the Streaming API, it would be stream.addSink(...); see https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/streaming/index.html#data-sinks
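To make that concrete, here is a rough sketch of what such a custom output format could look like for the Batch API, assuming Tuple2<String, String> records where f0 is the key; the class name and directory layout are invented, and error handling is kept minimal.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.io.OutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

// Hypothetical sketch: one output file per distinct key, e.g. key1 -> key1.txt
public class KeySplittingOutputFormat implements OutputFormat<Tuple2<String, String>> {

    private final String outputDir;
    private transient Map<String, PrintWriter> writers;

    public KeySplittingOutputFormat(String outputDir) {
        this.outputDir = outputDir;
    }

    @Override
    public void configure(Configuration parameters) {}

    @Override
    public void open(int taskNumber, int numTasks) {
        writers = new HashMap<>();
    }

    @Override
    public void writeRecord(Tuple2<String, String> record) throws IOException {
        // Lazily open one writer per distinct key.
        PrintWriter w = writers.get(record.f0);
        if (w == null) {
            w = new PrintWriter(new FileWriter(outputDir + "/" + record.f0 + ".txt", true));
            writers.put(record.f0, w);
        }
        w.println(record.f1);
    }

    @Override
    public void close() {
        writers.values().forEach(PrintWriter::close);
    }
}

Usage would then be something like result.output(new KeySplittingOutputFormat("/data/out")).setParallelism(1); with higher parallelism, subtasks seeing the same key would append to the same file concurrently, so that case needs extra care.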
Hello, I am trying to make a program that will allow me to load in a file and add its name to a list. Once I select a file name in the list, it should go through that file and put each line in the specified JTextField. But when I try to load a second file and select it, it tells me ArrayIndexOutOfBounds. Can someone please explain what I'm doing wrong? I am using NetBeans.
/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package prog24178.assignment4;

import java.awt.event.KeyEvent;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Scanner;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.swing.JFileChooser;

public class CustomerView extends javax.swing.JFrame {

    /**
     * Creates new form CustomerView
     */
    private Application ass4App = new Application();
    public ArrayList<Customer> customer = new ArrayList<Customer>();
    public ArrayList<String> names = new ArrayList<String>();
    public String fileName;
    public Customer customers = new Customer();
    public int i;

    public void setApplication(Application customerApp) {
        this.ass4App = ass4App;
    }

    public CustomerView() {
        initComponents();
    }

    /**
     * This method is called from within the constructor to initialize the form.
     * WARNING: Do NOT modify this code. The content of this method is always
     * regenerated by the Form Editor.
     */
    private void jExitItemActionPerformed(java.awt.event.ActionEvent evt) {
        // TODO add your handling code here:
        System.exit(0);
    }

    private void jOpenCusItemActionPerformed(java.awt.event.ActionEvent evt) {
        // TODO add your handling code here:
        String currentPath = System.getProperty("user.dir");
        JFileChooser fc = new JFileChooser();
        fc.setMultiSelectionEnabled(true);
        fc.setFileSelectionMode(JFileChooser.FILES_ONLY);
        if (fc.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
            File[] file = fc.getSelectedFiles();
            for (int i = 0; i < file.length; i++) {
                try {
                    customers.constructCustomer(file[i]);
                } catch (FileNotFoundException ex) {
                    Logger.getLogger(CustomerView.class.getName()).log(Level.SEVERE, null, ex);
                }
                customer.add(customers);
                names.add(customer.get(i).getName());
            }
            jCustomerList.setListData(names.toArray());
        }
    }

    private void jCustomerListValueChanged(javax.swing.event.ListSelectionEvent evt) {
        // TODO add your handling code here:
        jCusNameField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getName());
        jAddressField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getAddress());
        jCityField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getCity());
        jProvinceField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getProvince());
        jPostalCodeField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getPostalCode());
        jEmailAddressField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getEmailAddress());
        jPhoneNumberField.setText((String) customer.get(jCustomerList.getSelectedIndex()).getPhoneNumber());
    }
I fixed the problem. I realized that I was just adding the variable customers to customer without giving it a proper value.
customer.add(customers.constructCustomer(file[i]));
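Expanded a little, and assuming constructCustomer builds and returns a Customer from the file (its code isn't shown here), the corrected loop would look something like this; the name now comes from the freshly built customer rather than from customer.get(i), whose index restarts at 0 on every dialog:

for (int i = 0; i < file.length; i++) {
    try {
        Customer c = customers.constructCustomer(file[i]); // build a customer from the file
        customer.add(c);                                   // grow the list backing the JList
        names.add(c.getName());                            // no index lookup needed
    } catch (FileNotFoundException ex) {
        Logger.getLogger(CustomerView.class.getName()).log(Level.SEVERE, null, ex);
    }
}
jCustomerList.setListData(names.toArray());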
I don't know what customers.constructCustomer(file[i]) or customer.add(customers) do, exactly -- we don't have enough code to know -- but you are using i both to iterate through the array of File objects and to obtain a customer (customer.get(i)). That's the second place I'd look.
The FIRST place I'd look is at the error message; it tells you the line on which the array index was out of bounds, the value of the index, and the upper bound on the array.
Okay, so I have been looking at the sample code below from the Ephesoft Developer's Guide...
//import java.io.File;
//
//import javax.xml.transform.Result;
//import javax.xml.transform.Source;
//import javax.xml.transform.Transformer;
//import javax.xml.transform.TransformerConfigurationException;
//import javax.xml.transform.TransformerException;
//import javax.xml.transform.TransformerFactory;
//import javax.xml.transform.TransformerFactoryConfigurationError;
//import javax.xml.transform.dom.DOMSource;
//import javax.xml.transform.stream.StreamResult;
//
//import org.w3c.dom.Document;
//import org.w3c.dom.Element;
//import org.w3c.dom.Node;
//import org.w3c.dom.NodeList;
import com.ephesoft.dcma.script.IScripts;
//--------------------------------------
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.OutputStream;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.output.XMLOutputter;
//import com.ephesoft.dcma.script.IJDomScript;

/**
 * The <code>ScriptDocumentAssembler</code> class represents the script execute structure. The writer of the scripts plug-in should implement this IScript
 * interface to execute it from the scripting plug-in. By implementing this interface the writer can change the java file at run time.
 * Before the actual call, the java Scripting plug-in will compile the java and run the new class file.
 *
 * @author Ephesoft
 * @version 1.0
 */
public class ScriptDocumentAssembler {

    private static final String BATCH_LOCAL_PATH = "BatchLocalPath";
    private static final String BATCH_INSTANCE_ID = "BatchInstanceIdentifier";
    private static final String EXT_BATCH_XML_FILE = "_batch.xml";
    private static final String DOCUMENTS = "Documents";
    private static final String DOCUMENT = "Document";
    private static final String PAGES = "Pages";
    private static final String PAGE = "Page";
    private static String ZIP_FILE_EXT = ".zip";

    /**
     * The <code>execute</code> method will execute the script written by the writer at run time with a new compilation of the java file. It
     * will execute the java file dynamically after the new compilation.
     *
     * @param document {@link Document}
     */
    public void execute(Document document, String fieldName, String docIdentifier) {
        System.out.println("************* Inside ScriptDocumentAssembler scripts.");
        System.out.println("************* Start execution of the ScriptDocumentAssembler scripts.");
        System.out.println("Custom ScriptDocumentAssembler, removing Document separator sheets...");
        removeFirstPageOfDoc(document);
        boolean isWrite = true;
        //boolean isWrite = false;
        // write the document object to the XML file.
        if (isWrite) {
            writeToXML(document);
            System.out.println("************* Successfully wrote the xml file for the ScriptDocumentAssembler scripts.");
        } else {
            System.out.println("************** No changes performed by ScriptDocumentAssembler scripts.");
        }
        System.out.println("************* End execution of the ScriptDocumentAssembler scripts.");
    }

    private void removeFirstPageOfDoc(Document documentFile) {
        Element documentsList = (Element) documentFile.getChildren(DOCUMENTS).get(0);
        List<?> documentList = documentsList.getChildren(DOCUMENT);
        for (int documentIndex = 0; documentIndex < documentList.size(); documentIndex++) {
            Element document = (Element) documentList.get(documentIndex);
            System.out.println("Processing Document - " + document.getChildren("Identifier").get(0).getText());
            Element pages = (Element) document.getChildren(PAGES).get(0);
            List<?> pageList = pages.getChildren(PAGE);
            Element page = (Element) pageList.get(0);
            System.out.println(document.getChildren("Identifier").get(0).getText() + " Page Count = " + pageList.size());
            System.out.println("Removing page node " + page.getChildren("Identifier").get(0).getText() + " from " +
                    document.getChildren("Identifier").get(0).getText());
            pages.remove(page);
            System.out.println(document.getChildren("Identifier").get(0).getText() + " Page Count = " + pageList.size());
        }
    }

    private void writeToXML(Document document) {
        String batchLocalPath = null;
        List<?> batchLocalPathList = document.getRootElement().getChildren(BATCH_LOCAL_PATH);
        if (null != batchLocalPathList) {
            batchLocalPath = ((Element) batchLocalPathList.get(0)).getText();
        }
        if (null == batchLocalPath) {
            System.err.println("Unable to find the local folder path in batch xml file.");
            return;
        }
        String batchInstanceID = null;
        List<?> batchInstanceIDList = document.getRootElement().getChildren(BATCH_INSTANCE_ID);
        if (null != batchInstanceIDList) {
            batchInstanceID = ((Element) batchInstanceIDList.get(0)).getText();
        }
        if (null == batchInstanceID) {
            System.err.println("Unable to find the batch instance ID in batch xml file.");
            return;
        }
        String batchXMLPath = batchLocalPath.trim() + File.separator + batchInstanceID + File.separator + batchInstanceID
                + EXT_BATCH_XML_FILE;
        String batchXMLZipPath = batchXMLPath + ZIP_FILE_EXT;
        System.out.println("batchXMLZipPath************" + batchXMLZipPath);
        OutputStream outputStream = null;
        File zipFile = new File(batchXMLZipPath);
        FileWriter writer = null;
        XMLOutputter out = new XMLOutputter();
        try {
            if (zipFile.exists()) {
                System.out.println("Found the batch xml zip file.");
                outputStream = getOutputStreamFromZip(batchXMLPath, batchInstanceID + EXT_BATCH_XML_FILE);
                out.output(document, outputStream);
            } else {
                writer = new java.io.FileWriter(batchXMLPath);
                out.output(document, writer);
                writer.flush();
                writer.close();
            }
        } catch (Exception e) {
            System.err.println(e.getMessage());
        } finally {
            if (outputStream != null) {
                try {
                    outputStream.close();
                } catch (IOException e) {
                }
            }
        }
    }

    public static OutputStream getOutputStreamFromZip(final String zipName, final String fileName) throws FileNotFoundException, IOException {
        ZipOutputStream stream = null;
        stream = new ZipOutputStream(new FileOutputStream(new File(zipName + ZIP_FILE_EXT)));
        ZipEntry zipEntry = new ZipEntry(fileName);
        stream.putNextEntry(zipEntry);
        return stream;
    }
}
Note that I have not changed anything from the original code, but I added the jdom and ephesoft jars to my build path. However, within the removeFirstPageOfDoc method, I am still getting a bunch of errors related to the casting. For example, the line Element documentsList = (Element) documentFile.getChildren(DOCUMENTS).get(0); should allow documentFile to gain access to the methods of Element, right? However, it still seems to only have access to the methods of type Document. I was just wondering what the issue might be here and how I might go about resolving it?
For example, the line Element documentsList = (Element) documentFile.getChildren(DOCUMENTS).get(0); should allow documentFile to gain access to the methods of Element, right?
No, because casting has lower precedence than the dot operator. To cast documentFile to type Element, you would write this:
Element documentsList = ((Element) documentFile).getChildren(DOCUMENTS).get(0);
with parentheses around (Element) documentFile.
Edited to add (incorporating information from the comments below):
However, according to the Javadoc for org.jdom.Document and that for org.jdom.Element, they're both actual classes — neither one is an interface — and neither is a subtype of the other. This means that you can't actually cast from one to the other. (In Java, a cast doesn't let you convert an instance of one type into another type; in order for ((Type) reference) to work, reference has to refer to an object that really does belong to type Type. Since an object can never be an instance of both Element and Document, the compiler won't even allow this sort of cast here.)
Instead, the person who wrote this sample-code probably should have written this:
Element documentsList =
documentFile.getRootElement().getChildren(DOCUMENTS).get(0);
which uses the getRootElement() method (which returns the document's root element) rather than casting to Element (which would try to convince the compiler that the document simply is an element).
ruakh is right, but you need to do the next level as well.
Element documentsList = (Element) (((Document) documentFile).getChildren(DOCUMENTS).get(0));
Of course, in JDOM 2.x (with correct generic typing) this is all easier...
Element documentsList = documentFile.getChildren(DOCUMENTS).get(0);
rolfl
I need to open a .doc/.dot/.docx/.dotx (I'm not picky, I just want it to work) document,
parse it for placeholders (or something similar),
put in my own data,
and then return a generated .doc/.docx/.dotx/.pdf document.
And on top of all that, I need the tools to accomplish that to be free.
I've searched around for something that would suit my needs, but I can't find anything.
Tools like Docmosis, Javadocx, Aspose etc. are commercial.
From what I've read, Apache POI is nowhere near successfully implementing this (they currently don't have any official developer working on the Word part of the framework).
The only thing that looks that could do the trick is OpenOffice UNO API.
But that is a pretty big bite for someone who has never used this API (like me).
So if I am going to jump into this, I need to make sure that I am on the right path.
Can someone give me some advice on this?
I know it's been a long time since I've posted this question, and I said that I would post my solution when I'm finished.
So here it is.
I hope that it will help someone someday.
This is a fully working class; all you have to do is put it in your application and place the TEMPLATE_DIRECTORY_ROOT directory with the .docx templates in your root directory.
Usage is very simple.
You put placeholders (keys) in your .docx file, and then pass the file name and a Map containing the corresponding key-value pairs for that file.
Enjoy!
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.Closeable;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.URI;
import java.util.Deque;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Map;
import java.util.UUID;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletResponse;

public class DocxManipulator {

    private static final String MAIN_DOCUMENT_PATH = "word/document.xml";
    private static final String TEMPLATE_DIRECTORY_ROOT = "TEMPLATES_DIRECTORY/";

    /* PUBLIC METHODS */

    /**
     * Generates a .docx document from the given template and the substitution data
     *
     * @param templateName
     *            Template data
     * @param substitutionData
     *            Hash map with the set of key-value pairs that represent
     *            substitution data
     * @return
     */
    public static Boolean generateAndSendDocx(String templateName, Map<String,String> substitutionData) {
        String templateLocation = TEMPLATE_DIRECTORY_ROOT + templateName;
        String userTempDir = UUID.randomUUID().toString();
        userTempDir = TEMPLATE_DIRECTORY_ROOT + userTempDir + "/";
        try {
            // Unzip .docx file
            unzip(new File(templateLocation), new File(userTempDir));
            // Change data
            changeData(new File(userTempDir + MAIN_DOCUMENT_PATH), substitutionData);
            // Rezip .docx file
            zip(new File(userTempDir), new File(userTempDir + templateName));
            // Send HTTP response
            sendDOCXResponse(new File(userTempDir + templateName), templateName);
            // Clean temp data
            deleteTempData(new File(userTempDir));
        }
        catch (IOException ioe) {
            System.out.println(ioe.getMessage());
            return false;
        }
        return true;
    }

    /* PRIVATE METHODS */

    /**
     * Unzips the specified ZIP file to the specified directory
     *
     * @param zipfile
     *            Source ZIP file
     * @param directory
     *            Destination directory
     * @throws IOException
     */
    private static void unzip(File zipfile, File directory) throws IOException {
        ZipFile zfile = new ZipFile(zipfile);
        Enumeration<? extends ZipEntry> entries = zfile.entries();
        while (entries.hasMoreElements()) {
            ZipEntry entry = entries.nextElement();
            File file = new File(directory, entry.getName());
            if (entry.isDirectory()) {
                file.mkdirs();
            }
            else {
                file.getParentFile().mkdirs();
                InputStream in = zfile.getInputStream(entry);
                try {
                    copy(in, file);
                }
                finally {
                    in.close();
                }
            }
        }
    }

    /**
     * Substitutes keys found in the target file with the corresponding data
     *
     * @param targetFile
     *            Target file
     * @param substitutionData
     *            Map of key-value pairs of data
     * @throws IOException
     */
    @SuppressWarnings({ "unchecked", "rawtypes" })
    private static void changeData(File targetFile, Map<String,String> substitutionData) throws IOException {
        BufferedReader br = null;
        String docxTemplate = "";
        try {
            br = new BufferedReader(new InputStreamReader(new FileInputStream(targetFile), "UTF-8"));
            String temp;
            while ((temp = br.readLine()) != null)
                docxTemplate = docxTemplate + temp;
            br.close();
            targetFile.delete();
        }
        catch (IOException e) {
            br.close();
            throw e;
        }
        Iterator substitutionDataIterator = substitutionData.entrySet().iterator();
        while (substitutionDataIterator.hasNext()) {
            Map.Entry<String,String> pair = (Map.Entry<String,String>) substitutionDataIterator.next();
            if (docxTemplate.contains(pair.getKey())) {
                if (pair.getValue() != null)
                    docxTemplate = docxTemplate.replace(pair.getKey(), pair.getValue());
                else
                    docxTemplate = docxTemplate.replace(pair.getKey(), "NEDOSTAJE"); // "NEDOSTAJE" is Croatian for "MISSING"
            }
        }
        FileOutputStream fos = null;
        try {
            fos = new FileOutputStream(targetFile);
            fos.write(docxTemplate.getBytes("UTF-8"));
            fos.close();
        }
        catch (IOException e) {
            fos.close();
            throw e;
        }
    }

    /**
     * Zips the specified directory and all its subdirectories
     *
     * @param directory
     *            Specified directory
     * @param zipfile
     *            Output ZIP file name
     * @throws IOException
     */
    private static void zip(File directory, File zipfile) throws IOException {
        URI base = directory.toURI();
        Deque<File> queue = new LinkedList<File>();
        queue.push(directory);
        OutputStream out = new FileOutputStream(zipfile);
        Closeable res = out;
        try {
            ZipOutputStream zout = new ZipOutputStream(out);
            res = zout;
            while (!queue.isEmpty()) {
                directory = queue.pop();
                for (File kid : directory.listFiles()) {
                    String name = base.relativize(kid.toURI()).getPath();
                    if (kid.isDirectory()) {
                        queue.push(kid);
                        name = name.endsWith("/") ? name : name + "/";
                        zout.putNextEntry(new ZipEntry(name));
                    }
                    else {
                        if (kid.getName().contains(".docx"))
                            continue;
                        zout.putNextEntry(new ZipEntry(name));
                        copy(kid, zout);
                        zout.closeEntry();
                    }
                }
            }
        }
        finally {
            res.close();
        }
    }

    /**
     * Sends an HTTP response containing the .docx file to the client
     *
     * @param generatedFile
     *            Path to the generated .docx file
     * @param fileName
     *            File name of the generated file that is presented to the user
     * @throws IOException
     */
    private static void sendDOCXResponse(File generatedFile, String fileName) throws IOException {
        FacesContext facesContext = FacesContext.getCurrentInstance();
        ExternalContext externalContext = facesContext.getExternalContext();
        HttpServletResponse response = (HttpServletResponse) externalContext
                .getResponse();
        BufferedInputStream input = null;
        BufferedOutputStream output = null;
        response.reset();
        response.setHeader("Content-Type", "application/msword");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
        response.setHeader("Content-Length", String.valueOf(generatedFile.length()));
        input = new BufferedInputStream(new FileInputStream(generatedFile), 10240);
        output = new BufferedOutputStream(response.getOutputStream(), 10240);
        byte[] buffer = new byte[10240];
        for (int length; (length = input.read(buffer)) > 0;) {
            output.write(buffer, 0, length);
        }
        output.flush();
        input.close();
        output.close();
        // Inform JSF not to proceed with the rest of the life cycle
        facesContext.responseComplete();
    }

    /**
     * Deletes a directory and all its subdirectories
     *
     * @param file
     *            Specified directory
     * @throws IOException
     */
    public static void deleteTempData(File file) throws IOException {
        if (file.isDirectory()) {
            // if the directory is empty, then delete it
            if (file.list().length == 0)
                file.delete();
            else {
                // list all the directory contents
                String files[] = file.list();
                for (String temp : files) {
                    // construct the file structure
                    File fileDelete = new File(file, temp);
                    // recursive delete
                    deleteTempData(fileDelete);
                }
                // check the directory again, if empty then delete it
                if (file.list().length == 0)
                    file.delete();
            }
        } else {
            // if it is a file, then delete it
            file.delete();
        }
    }

    private static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[1024];
        while (true) {
            int readCount = in.read(buffer);
            if (readCount < 0) {
                break;
            }
            out.write(buffer, 0, readCount);
        }
    }

    private static void copy(File file, OutputStream out) throws IOException {
        InputStream in = new FileInputStream(file);
        try {
            copy(in, out);
        } finally {
            in.close();
        }
    }

    private static void copy(InputStream in, File file) throws IOException {
        OutputStream out = new FileOutputStream(file);
        try {
            copy(in, out);
        } finally {
            out.close();
        }
    }
}
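A call site might look something like this; the template name and placeholder keys below are made-up examples, not part of the class above:

import java.util.HashMap;
import java.util.Map;

public class DocxManipulatorUsage {
    public void downloadContract() {
        Map<String, String> data = new HashMap<String, String>();
        data.put("${firstName}", "John"); // hypothetical placeholder text used in the template
        data.put("${lastName}", "Doe");
        // contract.docx is a made-up template name under TEMPLATES_DIRECTORY/
        DocxManipulator.generateAndSendDocx("contract.docx", data);
    }
}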
Since a docx file is merely a zip archive of XML files (plus any binary files for embedded objects such as images), we met that requirement by unpacking the zip file, feeding the document.xml to a template engine (we used FreeMarker) that does the merging for us, and then zipping the output document to get the new docx file.
The template document then is simply an ordinary docx with embedded FreeMarker expressions / directives, and can be edited in Word.
Since (un)zipping can be done with the JDK, and FreeMarker is open source, you don't incur any licence fees, not even for Word itself.
The limitation is that this approach can only emit docx or rtf files, and the output document will have the same file type as the template. If you need to convert the document to another format (such as pdf) you'll have to solve that problem separately.
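A compressed sketch of that unzip / merge / rezip pipeline, assuming FreeMarker 2.3.x on the classpath and Java 9+; the class and method names here are illustrative, not from the original project:

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.zip.*;
import freemarker.template.Configuration;
import freemarker.template.Template;

public class DocxTemplateMerger {

    // Merge 'model' into the template docx and write the result docx.
    public static void merge(File templateDocx, Map<String, Object> model, File outDocx)
            throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        try (ZipFile zip = new ZipFile(templateDocx);
             ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(outDocx))) {
            java.util.Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                zos.putNextEntry(new ZipEntry(entry.getName()));
                if ("word/document.xml".equals(entry.getName())) {
                    // Treat document.xml as a FreeMarker template and merge the model into it.
                    String xml = new String(zip.getInputStream(entry).readAllBytes(),
                            StandardCharsets.UTF_8);
                    Template t = new Template("doc", new StringReader(xml), cfg);
                    Writer w = new OutputStreamWriter(zos, StandardCharsets.UTF_8);
                    t.process(model, w);
                    w.flush(); // don't close: that would close the zip stream too
                } else {
                    zip.getInputStream(entry).transferTo(zos); // copy every other part verbatim
                }
                zos.closeEntry();
            }
        }
    }
}

Only word/document.xml is transformed; every other archive entry is copied through untouched, which is what keeps images and styles intact.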
I ended up relying on Apache POI 3.12 and processing paragraphs (separately extracting paragraphs from tables, headers/footers, and footnotes as well, since such paragraphs aren't returned by XWPFDocument.getParagraphs()).
The processing code (~100 lines) and unit tests are here on github.
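The core of such a POI-based approach, roughly: a minimal placeholder replacement over body paragraphs only, assuming poi-ooxml is on the classpath. In real documents a placeholder can be split across runs, which is part of what makes the full version longer.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Map;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFParagraph;
import org.apache.poi.xwpf.usermodel.XWPFRun;

public class PoiPlaceholderReplacer {

    public static void replace(String inDocx, String outDocx, Map<String, String> values)
            throws IOException {
        try (XWPFDocument doc = new XWPFDocument(new FileInputStream(inDocx))) {
            for (XWPFParagraph p : doc.getParagraphs()) {      // body paragraphs only
                for (XWPFRun run : p.getRuns()) {
                    String text = run.getText(0);
                    if (text == null) continue;
                    for (Map.Entry<String, String> e : values.entrySet()) {
                        text = text.replace(e.getKey(), e.getValue());
                    }
                    run.setText(text, 0); // overwrite the run's text in place
                }
            }
            try (FileOutputStream out = new FileOutputStream(outDocx)) {
                doc.write(out);
            }
        }
    }
}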
I've been in more or less the same situation as you: I had to modify a whole bunch of MS Word merge templates at once. After googling a lot to try to find a Java solution, I finally installed Visual Studio 2010 Express, which is free, and did the job in C#.
I have recently dealt with a similar problem:
"A tool which accepts a template '.docx' file, processes the file by evaluating a passed parameter context, and outputs a '.docx' file as the result of the process."
Finally, god brought us scriptlet4dox :).
The key features of this product are:
1. groovy code injection as scripts in the template file (parameter injection, etc.)
2. looping over collection items in a table
and so many other features.
But as I checked, the last commit on the project was about a year ago, so there is a chance that the project is no longer maintained with new features and bug fixes. It's your choice whether to use it or not.