POI - Excel file remains held by some process and cannot be edited - Java

I am using POI to read, edit and write Excel files.
My process flow is: write an Excel file, read it, and write it again.
If I then try to edit the file using the normal desktop Excel application while my Java app is still running, the file cannot be saved; Excel reports that some process is holding it.
I am properly closing all file handles.
Please help and tell me how to fix this issue.
SXSSFWorkbook wb = new SXSSFWorkbook(WINDOW_SIZE);
Sheet sheet = getCurrentSheet(); // stores the current sheet in an instance variable
for (int rowNum = 0; rowNum < data.size(); rowNum++) {
    if (rowNum > RECORDS_PER_SHEET) {
        if (rowNum % (RECORDS_PER_SHEET * numberOfSheets) == 1) {
            numberOfSheets++;
            setCurrentSheet(wb.getXSSFWorkbook().createSheet());
            sheet = getCurrentSheet();
        }
    }
    final Row row = sheet.createRow(effectiveRowCnt);
    for (int columnCount = 0; columnCount < data.get(rowNum).size(); columnCount++) {
        final Object value = data.get(rowNum).get(columnCount);
        final Cell cell = row.createCell(columnCount);
        // This method creates the row and cell at the given location and adds the value
        createContent(value, cell, effectiveRowCnt, columnCount, false, false);
    }
}
public void closeFile(boolean toOpen) {
    FileOutputStream out = null;
    try {
        out = new FileOutputStream(getFileName());
        wb.write(out);
    } catch (IOException e) {
        // Log the failed write - it is easy to miss otherwise
        e.printStackTrace();
    } finally {
        try {
            if (out != null) {
                out.close();
                out = null;
                if (toOpen) {
                    // Open the file for the user with the default program
                    final Desktop dt = Desktop.getDesktop();
                    dt.open(new File(getFileName()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The code looks correct. After out.close(), there shouldn't be any locks left.
Things that could still happen:
You have another Java process (for example, one hanging in a debugger). Your new process tries to write the file, fails (because of process 1), and in the finally block it tries to open Excel, which sees the same problem. Make sure you log all exceptions that happen in wb.write(out).
Note: The code above looks correct in this respect, since it only starts Excel when out != null, and that should only be the case when Java could open the file.
Maybe the file wasn't written completely (i.e. there was an exception during write()). Excel then tries to open the corrupt file and gives you the wrong error message.
Use a tool like Process Explorer to find out which process keeps a lock on the file.
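If you want to check from Java itself whether something still holds the file before launching Excel, a small probe with java.nio can help. This is a sketch of my own (not part of the original answer); note that file locks are advisory on most Unix systems, so the result is mainly meaningful on Windows:
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockProbe {
    /** Returns true if the file appears to be held by another process. */
    public static boolean isLocked(File f) {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileLock lock = raf.getChannel().tryLock()) {
            return lock == null; // null means another process holds a lock
        } catch (Exception e) {
            return true; // the file could not even be opened for writing
        }
    }
}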

I tried all the options. After looking thoroughly, it seems the problem is the event user model.
I am using the code below for reading the data:
final OPCPackage pkg = OPCPackage.open(getFileName());
final XSSFReader r = new XSSFReader(pkg);
final SharedStringsTable sst = r.getSharedStringsTable();
final XMLReader parser = fetchSheetParser(sst);
final Iterator<InputStream> sheets = r.getSheetsData();
while (sheets.hasNext()) {
    final InputStream sheet = sheets.next();
    final InputSource sheetSource = new InputSource(sheet);
    parser.parse(sheetSource);
    sheet.close();
}
I assume the Excel file is for some reason held by this process for some time. If I remove this code and use the code below instead:
final File file = new File(getFileName());
final FileInputStream fis = new FileInputStream(file);
final XSSFWorkbook xWb = new XSSFWorkbook(fis);
the process works fine and the Excel file does not remain locked.

I figured it out, actually.
A very simple line was required, but for some reason it was not explained in the New Halloween Document (http://poi.apache.org/spreadsheet/how-to.html#sxssf).
I checked the Busy Developer's Guide and got the solution.
I needed to add
pkg.close(); // close the OPCPackage
With this added, my code works fine with any number of reads and writes on the same Excel file.
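Putting it together, the corrected read path looks like this (a sketch of the code from above with the fix applied; fetchSheetParser is the same helper as before):
final OPCPackage pkg = OPCPackage.open(getFileName());
try {
    final XSSFReader r = new XSSFReader(pkg);
    final SharedStringsTable sst = r.getSharedStringsTable();
    final XMLReader parser = fetchSheetParser(sst);
    final Iterator<InputStream> sheets = r.getSheetsData();
    while (sheets.hasNext()) {
        final InputStream sheet = sheets.next();
        try {
            parser.parse(new InputSource(sheet));
        } finally {
            sheet.close();
        }
    }
} finally {
    pkg.close(); // releases the file handle so other processes can save the file
}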

Related

How to read large Excel file using POI in Java? [duplicate]

I have a large .xlsx file (141 MB, containing 293,413 lines with 62 columns each) I need to perform some operations within.
I am having problems with loading this file (OutOfMemoryError), as POI has a large memory footprint on XSSF (.xlsx) workbooks.
This SO question is similar, and the solution presented is to increase the VM's allocated/maximum memory.
It seems to work for that kind of file size (9 MB), but for me it simply doesn't work even if I allocate all available system memory. (That is no surprise, considering the file is over 15 times larger.)
I'd like to know if there is any way to load the workbook so that it won't consume all the memory, yet without doing the processing on XSSF's underlying XML directly (in other words, keeping a pure POI solution).
If there isn't, though, you are welcome to say so ("There isn't.") and point me toward an "XML" solution.
I was in a similar situation with a webserver environment. The typical size of the uploads were ~150k rows and it wouldn't have been good to consume a ton of memory from a single request. The Apache POI Streaming API works well for this, but it requires a total redesign of your read logic. I already had a bunch of read logic using the standard API that I didn't want to have to redo, so I wrote this instead: https://github.com/monitorjbl/excel-streaming-reader
It's not entirely a drop-in replacement for the standard XSSFWorkbook class, but if you're just iterating through rows it behaves similarly:
import com.monitorjbl.xlsx.StreamingReader;

InputStream is = new FileInputStream(new File("/path/to/workbook.xlsx"));
StreamingReader reader = StreamingReader.builder()
        .rowCacheSize(100)  // number of rows to keep in memory (defaults to 10)
        .bufferSize(4096)   // buffer size to use when reading InputStream to file (defaults to 1024)
        .sheetIndex(0)      // index of sheet to use (defaults to 0)
        .read(is);          // InputStream or File for XLSX file (required)

for (Row r : reader) {
    for (Cell c : r) {
        System.out.println(c.getStringCellValue());
    }
}
There are some caveats to using it; due to the way XLSX sheets are structured, not all data is available in the current window of the stream. However, if you're just trying to read simple data out from the cells, it works pretty well for that.
An improvement in memory usage can be achieved by using a File instead of a Stream.
(It is better to use a streaming API, but the streaming APIs have limitations; see http://poi.apache.org/spreadsheet/index.html.)
So instead of
Workbook workbook = WorkbookFactory.create(inputStream);
do
Workbook workbook = WorkbookFactory.create(new File("yourfile.xlsx"));
This is according to: http://poi.apache.org/spreadsheet/quick-guide.html#FileInputStream
Files vs InputStreams
"When opening a workbook, either a .xls HSSFWorkbook, or a .xlsx XSSFWorkbook, the Workbook can be loaded from either a File or an InputStream. Using a File object allows for lower memory consumption, while an InputStream requires more memory as it has to buffer the whole file."
The Excel support in Apache POI, HSSF and XSSF, supports 3 different modes.
One is a full, DOM-Like in-memory "UserModel", which supports both reading and writing. Using the common SS (SpreadSheet) interfaces, you can code for both HSSF (.xls) and XSSF (.xlsx) basically transparently. However, it needs lots of memory.
POI also supports a streaming read-only way to process the files, the EventModel. This is much more low-level than the UserModel, and gets you very close to the file format. For HSSF (.xls) you get a stream of records, and optionally some help with handling them (missing cells, format tracking etc). For XSSF (.xlsx) you get streams of SAX events from the different parts of the file, with help to get the right part of the file and also easy processing of common but small bits of the file.
For XSSF (.xlsx) only, POI also supports a write-only streaming write, suitable for low level but low memory writing. It largely just supports new files though (certain kinds of append are possible). There is no HSSF equivalent, and due to back-and-forth byte offsets and index offsets in many records it would be pretty hard to do...
For your specific case, as described in your clarifying comments, I think you'll want to use the XSSF EventModel code. See the POI documentation to get started, then try looking at these three classes in POI and Tika which use it for more details.
POI now includes an API for these cases: SXSSF (http://poi.apache.org/spreadsheet/index.html).
It does not load everything into memory, so it should allow you to handle such a file.
Note: I have read that SXSSF works as a writing API. Loading should be done using XSSF without inputstream'ing the file (to avoid a full load of it in memory).
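To illustrate that note, a minimal SXSSF write could look like the sketch below (my own example; the window size and file name are arbitrary). Only the last 100 rows are kept in memory; earlier rows are flushed to a temporary file behind the scenes:
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;

SXSSFWorkbook wb = new SXSSFWorkbook(100); // keep a window of 100 rows in memory
Sheet sheet = wb.createSheet("data");
for (int r = 0; r < 1000000; r++) {
    Row row = sheet.createRow(r);
    row.createCell(0).setCellValue("value " + r);
}
try (FileOutputStream out = new FileOutputStream("big.xlsx")) {
    wb.write(out);
}
wb.dispose(); // delete the temporary backing files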
Check this post, where I show how to use a SAX parser to process an XLSX file:
https://stackoverflow.com/a/44969009/4587961
In short, I extended org.xml.sax.helpers.DefaultHandler, which processes the XML structure of XLSX files. It is an event parser - SAX.
class SheetHandler extends DefaultHandler {

    private static final String ROW_EVENT = "row";
    private static final String CELL_EVENT = "c";

    private SharedStringsTable sst;
    private String lastContents;
    private boolean nextIsString;

    private List<String> cellCache = new LinkedList<>();
    private List<String[]> rowCache = new LinkedList<>();

    private SheetHandler(SharedStringsTable sst) {
        this.sst = sst;
    }

    public void startElement(String uri, String localName, String name,
            Attributes attributes) throws SAXException {
        // c => cell
        if (CELL_EVENT.equals(name)) {
            String cellType = attributes.getValue("t");
            if (cellType != null && cellType.equals("s")) {
                nextIsString = true;
            } else {
                nextIsString = false;
            }
        } else if (ROW_EVENT.equals(name)) {
            if (!cellCache.isEmpty()) {
                rowCache.add(cellCache.toArray(new String[cellCache.size()]));
            }
            cellCache.clear();
        }
        // Clear contents cache
        lastContents = "";
    }

    public void endElement(String uri, String localName, String name)
            throws SAXException {
        // Process the last contents as required.
        // Do now, as characters() may be called more than once
        if (nextIsString) {
            int idx = Integer.parseInt(lastContents);
            lastContents = new XSSFRichTextString(sst.getEntryAt(idx)).toString();
            nextIsString = false;
        }
        // v => contents of a cell
        // Output after we've seen the string contents
        if (name.equals("v")) {
            cellCache.add(lastContents);
        }
    }

    public void characters(char[] ch, int start, int length)
            throws SAXException {
        lastContents += new String(ch, start, length);
    }

    public List<String[]> getRowCache() {
        return rowCache;
    }
}
And then I parse the XML representing the XLSX file:
private List<String[]> processFirstSheet(String filename) throws Exception {
    OPCPackage pkg = OPCPackage.open(filename, PackageAccess.READ);
    XSSFReader r = new XSSFReader(pkg);
    SharedStringsTable sst = r.getSharedStringsTable();

    SheetHandler handler = new SheetHandler(sst);
    XMLReader parser = fetchSheetParser(handler);
    Iterator<InputStream> sheetIterator = r.getSheetsData();
    if (!sheetIterator.hasNext()) {
        return Collections.emptyList();
    }

    InputStream sheetInputStream = sheetIterator.next();
    BufferedInputStream bisSheet = new BufferedInputStream(sheetInputStream);
    InputSource sheetSource = new InputSource(bisSheet);
    parser.parse(sheetSource);
    List<String[]> res = handler.getRowCache();
    bisSheet.close();
    return res;
}

public XMLReader fetchSheetParser(ContentHandler handler) throws SAXException {
    XMLReader parser = new SAXParser(); // org.apache.xerces.parsers.SAXParser
    parser.setContentHandler(handler);
    return parser;
}
Based on monitorjbl's answer and the test suite explored from POI, the following worked for me on a multi-sheet .xlsx file with 200K records (size > 50 MB):
import com.monitorjbl.xlsx.StreamingReader;
. . .
try (
        InputStream is = new FileInputStream(new File("sample.xlsx"));
        Workbook workbook = StreamingReader.builder().open(is);
) {
    DataFormatter dataFormatter = new DataFormatter();
    for (Sheet sheet : workbook) {
        System.out.println("Processing sheet: " + sheet.getSheetName());
        for (Row row : sheet) {
            for (Cell cell : row) {
                String value = dataFormatter.formatCellValue(cell);
            }
        }
    }
}
For the latest code, use this:
InputStream file = new FileInputStream(
        new File("uploads/" + request.getSession().getAttribute("username") + "/" + userFile));
Workbook workbook = StreamingReader.builder()
        .rowCacheSize(100)  // number of rows to keep in memory
        .bufferSize(4096)   // buffer size to use when reading the InputStream to a file
        .open(file);        // InputStream or File for the XLSX file (required)
DataFormatter dataFormatter = new DataFormatter();
Iterator<Row> rowIterator = workbook.getSheetAt(0).rowIterator();
while (rowIterator.hasNext()) {
    Row row = rowIterator.next();
    Iterator<Cell> cellIterator = row.cellIterator();
    while (cellIterator.hasNext()) {
        Cell cell = cellIterator.next();
        String cellValue = dataFormatter.formatCellValue(cell);
    }
}
You can use SXSSF instead of XSSF. I could generate an Excel file with 200,000 rows.

Java out-of-memory error when reading from and writing to an xlsx

I need to read several .xlsx files looking for data specific to an employee, and simultaneously create another .xlsx file (if I find data in any of the files) whose name is the employee ID appended to the name of the file I found the data in. E.g. there is an employee with emp ID 1, and there are several .xlsx files such as A, B, C and so on; I need to look for data relating to emp ID 1 in each file, and for every file where I get a hit I need to create a file named 1_A.xlsx.
Now although I have built the logic and am using Apache POI APIs for reading and writing, my code throws an Out Of Memory error after creating just the first file with the data, and is unable to read the rest of the files.
I have tried using SXSSF instead of XSSF, but the same OOM happens.
Increasing the heap space is not an option for me.
Please help here... Thanks in advance.
Here is a piece of the code:
//Reader:
Row row = null;
List<Row> listOfRecords = new ArrayList<Row>();
try {
    FileInputStream fis = new FileInputStream(metaDataFile);
    new InputStreamReader(fis, "ISO-8859-1"); // note: this reader is never used
    XSSFWorkbook wb = new XSSFWorkbook(fis);
    XSSFSheet sheet = wb.getSheetAt(0);
    Iterator<Row> rowIterator = sheet.iterator();
    while (rowIterator.hasNext()) {
        row = rowIterator.next();
        if (!isEmptyRow(row)) {
            listOfRecords.add(row); // keeps every Row of the XSSF workbook alive in memory
        }
    }
    wb.close();
    fis.close();

    //Writer
    LOGGER.info("in createWorkbook ");
    Workbook empWorkbook = new SXSSFWorkbook(200);
    Sheet empSheet = empWorkbook.createSheet("Itype Sheet For Emp_"
            + personnelNumber);
    int rowNum = listOfRecords.size();
    System.out.println("Creating excel");
    Cell c = null;
    for (int i = 0; i < rowNum; i++) {
        Row record = listOfRecords.get(i);
        Row empRow = empSheet.createRow(i++); // note: i is incremented a second time here
        if (!isEmptyRow(record)) {
            int colNum = record.getLastCellNum() + 1;
            for (int j = 0; j < colNum; j++) {
                Cell newCell = empRow.createCell(j);
                System.out.println("cellVal:"
                        + String.valueOf(record.getCell(j)));
                newCell.setCellValue(String.valueOf(record.getCell(j)));
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
The writer method is called from within the reader.
Reading multiple .xlsx files is indeed tricky business, but I finally solved it.
I had to break down my code several times to realise that the OOM error was due to the fact that after reading 3 files, no more memory was left to process the rest of the files.
.xlsx files are compressed XML files. So when we try to read them using the XSSF or SXSSF APIs, the entire DOM is loaded into memory, thereby choking it.
I found an excellent solution here:
https://github.com/monitorjbl/excel-streaming-reader
Hope this will help others who come here facing the same issue.

What is causing excel to crash after updating file thru java?

I have some Java code that opens an Excel sheet, adds an auto filter to a group of columns, then saves and closes. The problem is that when a user opens the file and tries to sort smallest to largest or largest to smallest, Excel will freeze and then crash. But if you filter first, then you can sort without issue and it does not freeze and crash.
private static void AddFilter()
{
    // Adds filter to the rows in Column 2
    try
    {
        FileInputStream fileIn = new FileInputStream("C:\\Users\\gria\\Desktop\\Fleet Manager Summary.xls");
        HSSFWorkbook report = new HSSFWorkbook(fileIn);
        Sheet sheet = report.getSheetAt(0);
        sheet.setAutoFilter(CellRangeAddress.valueOf("A5:P5"));
        FileOutputStream fileOut = new FileOutputStream("C:\\Users\\gria\\Desktop\\Fleet Manager SummaryT.xls");
        report.write(fileOut);
        fileOut.close();
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
}
Thank you,
UPDATE:
After some experimenting I have updated the info and code since the original question was asked to better explain the problem.
Try changing
XSSFFormulaEvaluator.evaluateAllFormulaCells(report);
to
HSSFFormulaEvaluator.evaluateAllFormulaCells(report);
XSSFFormulaEvaluator.evaluateAllFormulaCells() works for an XSSFWorkbook, but not for an HSSFWorkbook.
Documentation
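For context, here is a minimal sketch (mine, with path and outPath standing in for the real file names) of where that call sits in the flow of the question's code:
FileInputStream fileIn = new FileInputStream(path);
HSSFWorkbook report = new HSSFWorkbook(fileIn);
// ... modify the sheet here ...
HSSFFormulaEvaluator.evaluateAllFormulaCells(report); // HSSF evaluator for an HSSF workbook
try (FileOutputStream fileOut = new FileOutputStream(outPath)) {
    report.write(fileOut);
}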
Also,
you can edit your commented code to:
for (int i = 5; i < sheet.getLastRowNum(); i++)
{
    Cell cell = sheet.getRow(i).getCell(6);
    // I don't know if you have data in all of the cells,
    // so I suggest you check for null
    if (cell != null && !cell.getStringCellValue().isEmpty())
    {
        cell.setCellValue(Double.valueOf(cell.getStringCellValue()).doubleValue());
    }
}

Edit an excel file on the server side in a client server architecture

I have an Excel workbook which is shared across all the business users. As part of the process, the workbook needs to be updated on the server side. All users have mapped a network drive which contains the workbook. Earlier we used to update the file on the client side using ActiveX controls. Now the problem is that the path to the workbook is not accessible on the server machine, and hence it throws a file-not-found exception. Is there any way to handle this apart from mapping the network drive on the server machine as well?
Following is the code:
public static String updateExcelFile(String fileLocation, String dataVal, String txnId) throws IOException {
    String status = "";
    //fileLocation = "Z:\\check.xlsx";
    //FileInputStream file = new FileInputStream(new File("C:\\JXL\\Checklist OnBoarding1.1.xlsx"));
    FileInputStream file = new FileInputStream(new File(fileLocation));
    XSSFWorkbook wb = new XSSFWorkbook(file);
    XSSFSheet sheet = wb.getSheetAt(0);
    Row r = null;
    Cell c = null;
    String data[] = dataVal.split(",");
    int rowCount = sheet.getPhysicalNumberOfRows();
    int rowNum = rowCount;
    Iterator<Row> rowIterator = sheet.iterator();
    while (rowIterator.hasNext()) {
        r = rowIterator.next();
        Iterator<Cell> cellIterator = r.cellIterator();
        while (cellIterator.hasNext()) {
            c = cellIterator.next();
            switch (c.getCellType()) {
            case Cell.CELL_TYPE_NUMERIC:
                System.out.print(c.getNumericCellValue() + "\t\t");
                if (c.getNumericCellValue() == Integer.parseInt(txnId)) {
                    int modRow = r.getRowNum();
                    System.out.println("ModRow" + modRow);
                    rowNum = modRow;
                }
                break;
            case Cell.CELL_TYPE_STRING:
                System.out.print(c.getStringCellValue() + "\t\t");
                if (c.getStringCellValue().equals(txnId)) {
                    int modRow = r.getRowNum();
                    System.out.println("ModRow" + modRow);
                    rowNum = modRow;
                }
                break;
            }
        }
    }
    if (rowNum == rowCount) {
        r = sheet.createRow(rowNum + 1);
    } else {
        r = sheet.getRow(rowNum);
    }
    for (int cellNum = 0; cellNum < data.length; cellNum++) {
        c = r.createCell(cellNum);
        c.setCellValue(data[cellNum]);
    }
    file.close();
    //FileOutputStream outFile = new FileOutputStream(new File("C:\\JXL\\Checklist OnBoarding1.1.xlsx"));
    FileOutputStream outFile = new FileOutputStream(new File(fileLocation));
    wb.write(outFile);
    outFile.close();
    status = "success";
    return status;
}
Send the Excel file to the server when the user clicks the button:
use AJAX to upload the file to a servlet,
have the servlet process the file and then respond to the client with a link to the updated file (which sits in a temp folder on the server), which the AJAX code can then present to the user.
The user can now download the Excel file and put it onto the network drive, where he and other users can edit it directly.
I guess this approach would require the user to at least accept the save operation.
Also, this approach does not protect you from concurrent updates to the Excel file: several users can have the server edit a copy of the file, and several other users could directly edit it on the network drive.
And having the server edit the file might be a problem if the Excel file in question is large.
Well, I think I answered the question :-).
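For what it's worth, the servlet side of this could look roughly like the sketch below. This is only an assumption of how it might be wired up - the URL, part name and download-folder scheme are all made up:
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import javax.servlet.ServletException;
import javax.servlet.annotation.MultipartConfig;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.Part;

@WebServlet("/excel/update")
@MultipartConfig
public class ExcelUpdateServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Part part = req.getPart("file"); // the workbook uploaded via AJAX
        Path tmp = Files.createTempFile("excel-", ".xlsx");
        try (InputStream in = part.getInputStream()) {
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
        }
        // updateExcelFile(tmp.toString(), dataVal, txnId) from the question
        // would be called here to apply the changes.
        resp.setContentType("text/plain");
        resp.getWriter().print("/downloads/" + tmp.getFileName()); // link back to the client
    }
}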

Apache POI much quicker using HSSF than XSSF - what next?

I've been having some issues with parsing .xlsx files with Apache POI - I am getting java.lang.OutOfMemoryError: Java heap space in my deployed app. I'm only processing files under 5 MB and around 70,000 rows, so my suspicion from reading a number of other questions is that something is amiss.
As suggested in this comment I decided to run SSPerformanceTest.java with the suggested variables to see if there is anything wrong with my code or setup. The results show a significant difference between HSSF (.xls) and XSSF (.xlsx):
1) HSSF 50000 50 1: Elapsed 1 seconds
2) SXSSF 50000 50 1: Elapsed 5 seconds
3) XSSF 50000 50 1: Elapsed 15 seconds
The FAQ specifically says:
If you can't run that with 50,000 rows and 50 columns in all of HSSF, XSSF and SXSSF in under 3 seconds (ideally a lot less!), the problem is with your environment.
Next, it says to run XLS2CSV.java, which I have done. Feeding in the XSSF file generated above (with 50,000 rows and 50 columns) takes around 15 seconds - the same amount of time it took to write the file.
Is something wrong with my environment, and if so, how do I investigate further?
Stats from VisualVM show the heap usage shooting up to 1.2 GB during processing. Surely this is way too high, considering that's an extra gig on top of the heap compared to before processing began?
Note: The heap space exception mentioned above only happens in production (on Google App Engine) and only for .xlsx files, however the tests mentioned in this question have all been run on my development machine with -Xmx2g. I'm hoping that if I can fix the problem on my development setup it will use less memory when I deploy.
Stack trace from app engine:
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.xmlbeans.impl.store.Cur.createElementXobj(Cur.java:260)
at org.apache.xmlbeans.impl.store.Cur$CurLoadContext.startElement(Cur.java:2997)
at org.apache.xmlbeans.impl.store.Locale$SaxHandler.startElement(Locale.java:3211)
at org.apache.xmlbeans.impl.piccolo.xml.Piccolo.reportStartTag(Piccolo.java:1082)
at org.apache.xmlbeans.impl.piccolo.xml.PiccoloLexer.parseAttributesNS(PiccoloLexer.java:1802)
at org.apache.xmlbeans.impl.piccolo.xml.PiccoloLexer.parseOpenTagNS(PiccoloLexer.java:1521)
I was facing the same kind of issue reading a bulky .xlsx file with Apache POI when I came across
excel-streaming-reader (https://github.com/monitorjbl/excel-streaming-reader).
This library serves as a wrapper around the streaming API while preserving the syntax of the standard POI API.
It can help you read large files.
The average XLSX workbook I work with is about 18-22 sheets of 750,000 rows with 13-20 columns. This is spinning inside a Spring web application with lots of other functionality. I gave the whole application not that much memory: -Xms1024m -Xmx4096m - and it works great!
First of all, the dumping code: it is wrong to load each and every data row in memory and then start to dump it. In my case (reporting from the PostgreSQL database) I reworked the data dump procedure to use a RowCallbackHandler to write to my XLSX; once I reach "my limit" of 750,000 rows, I create a new sheet. And the workbook is created with a visibility window of 50 rows. In this way I am able to dump huge volumes: the resulting XLSX file is about 1230 MB.
Some code to write sheets:
jdbcTemplate.query(
    new PreparedStatementCreator() {
        @Override
        public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
            PreparedStatement statement = connection.prepareStatement(finalQuery,
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            statement.setFetchSize(100);
            statement.setFetchDirection(ResultSet.FETCH_FORWARD);
            return statement;
        }
    }, new RowCallbackHandler() {
        Sheet sheet = null;
        int i = 750000;
        int tableId = 0;

        @Override
        public void processRow(ResultSet resultSet) throws SQLException {
            if (i == 750000) {
                tableId++;
                i = 0;
                sheet = wb.createSheet(sheetName.concat(String.format("%02d%n", tableId)));
                Row r = sheet.createRow(0);
                Cell c = r.createCell(0);
                c.setCellValue("id");
                c = r.createCell(1);
                c.setCellValue("Дата");
                c = r.createCell(2);
                c.setCellValue("Комментарий");
                c = r.createCell(3);
                c.setCellValue("Сумма операции");
                c = r.createCell(4);
                c.setCellValue("Дебет");
                c = r.createCell(5);
                c.setCellValue("Страхователь");
                c = r.createCell(6);
                c.setCellValue("Серия договора");
                c = r.createCell(7);
                c.setCellValue("Номер договора");
                c = r.createCell(8);
                c.setCellValue("Основной агент");
                c = r.createCell(9);
                c.setCellValue("Кредит");
                c = r.createCell(10);
                c.setCellValue("Программа");
                c = r.createCell(11);
                c.setCellValue("Дата начала покрытия");
                c = r.createCell(12);
                c.setCellValue("Дата планового окончания покрытия");
                c = r.createCell(13);
                c.setCellValue("Периодичность уплаты взносов");
            }
            i++;
            PremiumEntity e = PremiumEntity.builder()
                    .Id(resultSet.getString("id"))
                    .OperationDate(resultSet.getDate("operation_date"))
                    .Comments(resultSet.getString("comments"))
                    .SumOperation(resultSet.getBigDecimal("sum_operation").doubleValue())
                    .DebetAccount(resultSet.getString("debet_account"))
                    .Strahovatelname(resultSet.getString("strahovatelname"))
                    .Seria(resultSet.getString("seria"))
                    .NomPolica(resultSet.getLong("nom_polica"))
                    .Agentname(resultSet.getString("agentname"))
                    .CreditAccount(resultSet.getString("credit_account"))
                    .Program(resultSet.getString("program"))
                    .PoliciStartDate(resultSet.getDate("polici_start_date"))
                    .PoliciPlanEndDate(resultSet.getDate("polici_plan_end_date"))
                    .Periodichn(resultSet.getString("id_periodichn"))
                    .build();

            Row r = sheet.createRow(i);
            Cell c = r.createCell(0);
            c.setCellValue(e.getId());
            if (e.getOperationDate() != null) {
                c = r.createCell(1);
                c.setCellStyle(dateStyle);
                c.setCellValue(e.getOperationDate());
            }
            c = r.createCell(2);
            c.setCellValue(e.getComments());
            c = r.createCell(3);
            c.setCellValue(e.getSumOperation());
            c = r.createCell(4);
            c.setCellValue(e.getDebetAccount());
            c = r.createCell(5);
            c.setCellValue(e.getStrahovatelname());
            c = r.createCell(6);
            c.setCellValue(e.getSeria());
            c = r.createCell(7);
            c.setCellValue(e.getNomPolica());
            c = r.createCell(8);
            c.setCellValue(e.getAgentname());
            c = r.createCell(9);
            c.setCellValue(e.getCreditAccount());
            c = r.createCell(10);
            c.setCellValue(e.getProgram());
            if (e.getPoliciStartDate() != null) {
                c = r.createCell(11);
                c.setCellStyle(dateStyle);
                c.setCellValue(e.getPoliciStartDate());
            }
            if (e.getPoliciPlanEndDate() != null) {
                c = r.createCell(12);
                c.setCellStyle(dateStyle);
                c.setCellValue(e.getPoliciPlanEndDate());
            }
            c = r.createCell(13);
            c.setCellValue(e.getPeriodichn());
        }
    });
After reworking my code to dump the data to XLSX, I ran into the problem that 64-bit Office is required to open the files. So I needed to split my workbook with lots of sheets into separate XLSX files with single sheets to make them readable on an average machine. Again I used small visibility windows and streamed processing, and the whole application kept working well without any signs of OutOfMemory.
Some code to read and split the sheets:
OPCPackage opcPackage = OPCPackage.open(originalFile, PackageAccess.READ);
ReadOnlySharedStringsTable strings = new ReadOnlySharedStringsTable(opcPackage);
XSSFReader xssfReader = new XSSFReader(opcPackage);
StylesTable styles = xssfReader.getStylesTable();
XSSFReader.SheetIterator iter = (XSSFReader.SheetIterator) xssfReader.getSheetsData();
int index = 0;
while (iter.hasNext()) {
    InputStream stream = iter.next();
    String sheetName = iter.getSheetName();

    DataFormatter formatter = new DataFormatter();
    InputSource sheetSource = new InputSource(stream);

    SheetToWorkbookSaver saver = new SheetToWorkbookSaver(sheetName);
    try {
        XMLReader sheetParser = SAXHelper.newXMLReader();
        ContentHandler handler = new XSSFSheetXMLHandler(
                styles, null, strings, saver, formatter, false);
        sheetParser.setContentHandler(handler);
        sheetParser.parse(sheetSource);
    } catch (ParserConfigurationException e) {
        throw new RuntimeException("SAX parser appears to be broken - " + e.getMessage());
    }
    stream.close();

    // this creates new File descriptors inside storage
    FileDto partFile = new FileDto("report_".concat(StringUtils.trimToEmpty(sheetName)).concat(".xlsx"));
    File cloneFile = fileStorage.read(partFile);
    FileOutputStream cloneFos = new FileOutputStream(cloneFile);
    saver.getWb().write(cloneFos);
    cloneFos.close();
}
and
public class SheetToWorkbookSaver implements XSSFSheetXMLHandler.SheetContentsHandler {

    private SXSSFWorkbook wb;
    private Sheet sheet;
    private CellStyle dateStyle;
    private Row currentRow;

    public SheetToWorkbookSaver(String workbookName) {
        this.wb = new SXSSFWorkbook(50);
        this.dateStyle = this.wb.createCellStyle();
        this.dateStyle.setDataFormat(this.wb.getCreationHelper().createDataFormat().getFormat("dd.mm.yyyy"));
        this.sheet = this.wb.createSheet(workbookName);
    }

    @Override
    public void startRow(int rowNum) {
        this.currentRow = this.sheet.createRow(rowNum);
    }

    @Override
    public void endRow(int rowNum) {
    }

    @Override
    public void cell(String cellReference, String formattedValue, XSSFComment comment) {
        int thisCol = (new CellReference(cellReference)).getCol();
        Cell c = this.currentRow.createCell(thisCol);
        c.setCellValue(formattedValue);
        c.setCellComment(comment);
    }

    @Override
    public void headerFooter(String text, boolean isHeader, String tagName) {
    }

    public SXSSFWorkbook getWb() {
        return wb;
    }
}
So it reads and writes data. I guess in your case you should rework your code to the same pattern: keep only a small footprint of data in memory. So for reading I would suggest creating a custom SheetContentsHandler which pushes the data to some database, where it can be easily processed, aggregated, etc.; a rough sketch follows below.
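As an illustration of that suggestion, a SheetContentsHandler that pushes rows into a database instead of keeping them in memory might look like this (a sketch only - the two-column INSERT and the batching threshold are assumptions, not part of the original answer):
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import org.apache.poi.xssf.eventusermodel.XSSFSheetXMLHandler;
import org.apache.poi.xssf.usermodel.XSSFComment;

public class DbSheetHandler implements XSSFSheetXMLHandler.SheetContentsHandler {
    private final PreparedStatement insert; // e.g. "INSERT INTO rows(col0, col1) VALUES (?, ?)"
    private final List<String> current = new ArrayList<>();

    public DbSheetHandler(PreparedStatement insert) {
        this.insert = insert;
    }

    @Override
    public void startRow(int rowNum) {
        current.clear();
    }

    @Override
    public void cell(String cellReference, String formattedValue, XSSFComment comment) {
        current.add(formattedValue);
    }

    @Override
    public void endRow(int rowNum) {
        try {
            for (int i = 0; i < 2 && i < current.size(); i++) {
                insert.setString(i + 1, current.get(i));
            }
            insert.addBatch();
            if (rowNum % 1000 == 0) {
                insert.executeBatch(); // flush periodically so nothing piles up in memory
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void headerFooter(String text, boolean isHeader, String tagName) {
    }
}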
