BeanIO - How to write to a stream object - Java

I am trying to create fixed-length file output using BeanIO. I don't want to write a physical file; instead I want to write the content to an OutputStream.
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
StreamFactory factory = StreamFactory.newInstance();
StreamBuilder builder = new StreamBuilder("sb")
.format("fixedlength")
.parser(new FixedLengthParserBuilder())
.addRecord(Team.class);
factory.define(builder);
OutputStreamWriter writer = new OutputStreamWriter(outputStream, StandardCharsets.UTF_8);
BeanWriter bw = factory.createWriter("sb", writer);
bw.write(teamVO); // teamVO has some value.
try(OutputStream os = new FileOutputStream("file.txt")){
outputStream.writeTo(os); //Doing this only to check outputStream has some value
}
Here the created file, file.txt, has no content; it is 0 KB in size.
I am able to write the file with the following method, but since I don't want to write the file to a physical location, I thought of writing the contents to an OutputStream so that later, in a different method, it can be converted into a file.
//This is working and creates file successfully
StreamFactory factory = StreamFactory.newInstance();
StreamBuilder builder = new StreamBuilder("sb")
.format("fixedlength")
.parser(new FixedLengthParserBuilder())
.addRecord(Team.class);
factory.define(builder);
BeanWriter bw = factory.createWriter("sb", new File("file.txt"));
bw.write(teamVO);
Why, in the first approach, is the file created with size 0 KB?

It looks like you haven't written enough data to the OutputStreamWriter for it to flush any data to the underlying ByteArrayOutputStream. You have two options here:
1. Flush the writer manually and then close it. This will write the data to the outputStream. I would not recommend this approach, because the writer may not be flushed or closed should any exception occur before you could manually flush and close it.
2. Use a try-with-resources block for the writer, which takes care of closing the writer and ultimately flushing the data to the outputStream.
This should do the trick:
final ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
try (final OutputStreamWriter writer = new OutputStreamWriter(outputStream) ) {
final BeanWriter bw = factory.createWriter("sb", writer);
final Team teamVO = new Team();
teamVO.setName("TESTING");
bw.write(teamVO); // teamVO has some value.
}
try (OutputStream os = new FileOutputStream("file.txt") ) {
outputStream.writeTo(os); // Doing this only to check outputStream has some value
}

Related

Strings in download file contain weird symbols

I've got a String array that contains the content for a downloadable file. I am converting it to a stream for the download, but there are some random values in the downloaded file. I don't know if it is due to the encoding, and if so, how can I change it?
var downloadButton = new DownloadLink(btn, "test.csv", () -> {
try {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
ObjectOutputStream objectOutputStream = new ObjectOutputStream(byteArrayOutputStream);
for (int i = 0; i < downloadContent.size(); i++) {
objectOutputStream.writeUTF(downloadContent.get(i));
}
objectOutputStream.flush();
objectOutputStream.close();
byte[] byteArray = byteArrayOutputStream.toByteArray();
ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(byteArray);
ObjectInputStream objectInputStream = new ObjectInputStream(byteArrayInputStream);
objectInputStream.close();
return new ByteArrayInputStream(byteArray);
} catch (IOException e) {
throw new UncheckedIOException(e);
}
});
This is the DownloadLink class.
public class DownloadLink extends Anchor {
public DownloadLink(Button button, String fileName, InputStreamFactory fileProvider) {
super(new StreamResource(fileName, fileProvider), "");
getElement().setAttribute("download", fileName);
add(button);
getStyle().set("display", "contents");
}
}
This is the output file:
ObjectOutputStream is part of the Java serialization system. In addition to the data itself, it also includes metadata about the original Java types and such. It's only intended for writing data that will later be read back using ObjectInputStream.
To create a file for others to download, you could instead use a PrintWriter that wraps the original output stream. On the other hand, you're using the output stream to create a byte[] which means that a more straightforward, but slightly less efficient, way would be to create a concatenated string from all the array elements and then use getBytes(StandardCharsets.UTF_8) on it to directly get a byte array.
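For illustration, a minimal sketch of the second suggestion, assuming the entries should simply be joined line by line (the class and method names here are illustrative, not from the question):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class DownloadContent {
    // Join the lines and encode them as UTF-8, so the download contains
    // plain text instead of Java serialization metadata.
    static ByteArrayInputStream toDownloadStream(List<String> lines) {
        String joined = String.join("\n", lines);
        return new ByteArrayInputStream(joined.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        ByteArrayInputStream in = toDownloadStream(List.of("a;1", "b;2"));
        System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```

The returned ByteArrayInputStream can be handed straight to the InputStreamFactory used by the DownloadLink.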

How to write CSV content into an output stream using OpenCSV?

#Value("classpath:tpls/Non-PnL_Template_Export_Order_Cross_Checking.csv")
private org.springframework.core.io.Resource exportFileOrderTpl;
I create an output stream of a CSVWriter
InputStream inputStream = exportFileOrderTpl.getInputStream();
String outputFileName = "file.csv";
var fileOutputStream = new FileOutputStream(outputFileName);
IOUtils.copy(inputStream, fileOutputStream);
var outputStreamWriter = new OutputStreamWriter(fileOutputStream, StandardCharsets.UTF_8);
var writer = new CSVWriter(outputStreamWriter);
String[] rows = {"1","2","3"};
writer.writeNext(rows);
var outputStream = new ByteArrayOutputStream();
outputStream.writeTo(fileOutputStream);
But my outputStream is empty. How can I fix it? Thank you.
Your issues
Not closing streams
The data, like your rows, is written to a buffer (inside outputStreamWriter). Only once you flush or close the stream writer does it get written through to the underlying stream.
Too many streams/sinks
You just created an empty outputStream and then (a) wrote it to another stream and (b) checked whether it is empty. Does that make sense?
var outputStream = new ByteArrayOutputStream();
outputStream.writeTo(fileOutputStream);
At this point you could check your only actually used final sink, fileOutputStream, for bytes that have been flushed.
Combining two inputs
From your code, you want to combine two inputs into one output (stream or file). So you are connecting the CSVWriter to an already partly filled outputStream:
// create final output file and connect a stream as sink: fileOutputStream
String outputFileName = "file.csv";
var fileOutputStream = new FileOutputStream(outputFileName);
// copy first input from given CSV file to sink: fileOutputStream
InputStream inputStream = exportFileOrderTpl.getInputStream();
IOUtils.copy(inputStream, fileOutputStream);
// connect additional CSV input to sink: fileOutputStream
var outputStreamWriter = new OutputStreamWriter(fileOutputStream, StandardCharsets.UTF_8);
var writer = new CSVWriter(outputStreamWriter);
// write rows via CSV to sink: fileOutputStream
String[] rows = {"1","2","3"};
writer.writeNext(rows);
// close writer including connected streams: fileOutputStream
writer.close();
Writing a CSV file using OpenCSV
As in Baeldung's introduction to OpenCSV's CSVWriter, you could take the method from the given example and slightly modify it to write your single row, or many rows, to a specified file path:
public void writeRows(List<String[]> stringArray, Path path) throws Exception {
CSVWriter writer = new CSVWriter(new FileWriter(path.toString()));
for (String[] array : stringArray) {
writer.writeNext(array);
}
writer.close();
}
// define your rows consisting of many columns
String[] rowA = {"1","2","3"};
String[] rowB = {"3","4","5"};
List<String[]> rows = List.of(rowA, rowB);
// define your output file
String outputFileName = "file.csv";
Path outputPath = Paths.get(outputFileName);
// write the rows to file by calling the method
writeRows(rows, outputPath);
Note: Only the writer is used as the output sink. Everything IO-related is handled by OpenCSV; no separate output streams are needed.

How to convert TarArchiveOutputStream to byte array without saving into file system

I have a byte array representation of a tar.gz file. I want to get the byte array representation of a new tar.gz file after adding a new config file. I want to do this entirely in code, without writing any files to the local disk.
Below is my code in java
InputStream fIn = new ByteArrayInputStream(inputBytes);
BufferedInputStream in = new BufferedInputStream(fIn);
GzipCompressorInputStream gzIn = new GzipCompressorInputStream(in);
TarArchiveInputStream tarInputStream = new TarArchiveInputStream(gzIn);
ByteArrayOutputStream fOut = new ByteArrayOutputStream();
BufferedOutputStream buffOut = new BufferedOutputStream(fOut);
GzipCompressorOutputStream gzOut = new GzipCompressorOutputStream(buffOut);
TarArchiveOutputStream tarOutputStream = new TarArchiveOutputStream(gzOut);
ArchiveEntry nextEntry;
while ((nextEntry = tarInputStream.getNextEntry()) != null) {
tarOutputStream.putArchiveEntry(nextEntry);
IOUtils.copy(tarInputStream, tarOutputStream);
tarOutputStream.closeArchiveEntry();
}
tarInputStream.close();
createTarArchiveEntry("config.json", configData, tarOutputStream);
tarOutputStream.finish();
// Convert tarOutputStream to byte array and return
private static void createTarArchiveEntry(String fileName, byte[] configData, TarArchiveOutputStream tOut)
throws IOException {
ByteArrayInputStream baOut1 = new ByteArrayInputStream(configData);
TarArchiveEntry tarEntry = new TarArchiveEntry(fileName);
tarEntry.setSize(configData.length);
tOut.putArchiveEntry(tarEntry);
byte[] buffer = new byte[1024];
int len;
while ((len = baOut1.read(buffer)) > 0) {
tOut.write(buffer, 0, len);
}
tOut.closeArchiveEntry();
}
How can I convert tarOutputStream to a byte array?
You have opened several OutputStream instances, but you haven't closed them yet. Or, more precisely, you haven't "flushed" the content, especially in the BufferedOutputStream instance.
BufferedOutputStream uses an internal buffer to hold the data written towards the target OutputStream. It keeps the data until there is a reason to write it out. One of these "reasons" is a call to the BufferedOutputStream.flush() method:
public void flush() throws IOException
Flushes this buffered output stream. This forces any buffered output bytes to be written out to the
underlying output stream.
Another "reason" is closing the stream, which writes the remaining bytes before the stream is closed.
In your case the bytes being written are still stored in the internal buffer. Depending on your code structure, you can simply close all the OutputStream instances you have, so the bytes finally get written to the ByteArrayOutputStream:
tarInputStream.close();
createTarArchiveEntry("config.json", configData, tarOutputStream);
tarOutputStream.finish();
// Convert tarOutputStream to byte array and return
tarOutputStream.close();
gzOut.close();
buffOut.close();
fOut.close();
byte[] content = fOut.toByteArray();
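Alternatively, a try-with-resources block on the outermost stream closes, and thereby flushes, the whole chain. A stdlib-only sketch of that flushing behaviour, with GZIPOutputStream standing in for the Commons Compress chain (the payload string is made up for the demo):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class FlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream fOut = new ByteArrayOutputStream();
        // Closing gzOut also flushes and closes the wrapped BufferedOutputStream,
        // so the compressed bytes actually reach the ByteArrayOutputStream.
        try (GZIPOutputStream gzOut =
                 new GZIPOutputStream(new BufferedOutputStream(fOut))) {
            gzOut.write("config-data".getBytes("UTF-8"));
        }
        byte[] compressed = fOut.toByteArray();

        // Round-trip to confirm the content was fully written out.
        try (GZIPInputStream gzIn =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            System.out.println(new String(gzIn.readAllBytes(), "UTF-8"));
        }
    }
}
```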

How to open a PdfADocument from an existing PdfDocument in itext7?

In order to check uploaded PDF files for basic PDF/A conformance, I need to read them in as PdfADocuments.
Starting with version 7.1.6 this no longer works; it throws a PdfException(PdfException.PdfReaderHasBeenAlreadyUtilized).
class Controller
...
// get uploaded data into PdfDocument, which is passed
// on to different services.
InputStream filecontent = fileupload.getInputStream();
int read = 0;
byte[] bytes = new byte[1024];
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
while ((read = filecontent.read(bytes,0,bytes.length)) != -1) {
filesize += read;
buffer.write(bytes, 0, read);
}
ByteArrayInputStream input = new ByteArrayInputStream(buffer.toByteArray());
PdfReader reader = new PdfReader(input);
PdfWriter writer = new PdfWriter(new ByteArrayOutputStream());
PdfDocument pdf = new PdfDocument(reader, writer);
AnalyzerService analyzer = new AnalyzerService();
if(analyzer.analyze(pdf)) {
otherService.doSomethingWith(pdf);
}
...
class AnalyzerService
...
public boolean analyze(PdfDocument pdf) {
PdfADocument pdfa = new PdfADocument(
pdf.getReader(), pdf.getWriter() <-- PdfException here
);
...
}
Up to and including iText 7.1.5 this worked.
With 7.1.6 I get "com.itextpdf.kernel.PdfException: Given PdfReader instance has already been utilized. The PdfReader cannot be reused, please create a new instance."
It seems that I need to get the Bytes from the PdfDocument as a byte[], then create a new PdfReader from it. I have tried getting them from the pdf.getReader().getOutputStream().toByteArray(), but that doesn't work.
I'm quite lost at the moment on how to create that PdfADocument from the given PdfDocument.
Your approach uses the same PdfReader and (even worse) the same PdfWriter for both a PdfDocument and a PdfADocument instance. As both documents can manipulate the PdfReader and write to the PdfWriter, that situation is likely to produce garbage in the writer, so you should not do this.
Simply always consider a document with both a reader and a writer as work-in-progress, something one cannot treat as a finished document file, e.g. extract for intermediary checks.
As you want to check uploaded PDF files, why don't you simply forward the byte[] from buffer.toByteArray() to the analyze method to create a separate reader (and, if need be, a document) from? This indeed exactly would check the uploaded file...
Furthermore, if your input document may be PDF/A conform and is treated specially in that case, shouldn't you also manipulate it as a PdfADocument if it is? I.e. shouldn't you first check in your analyzer for conformance and in the positive case use a PdfADocument for it also in your controller class?
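The underlying pattern of the suggestion above can be sketched with plain stdlib streams (the iText-specific PdfReader/PdfADocument construction is omitted, and the variable names are illustrative): the immutable uploaded byte[] can back as many independent streams, and hence readers, as needed, one per consumer.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class FreshReaders {
    public static void main(String[] args) throws IOException {
        // The immutable uploaded bytes are the single source of truth.
        byte[] uploaded = "fake-pdf-bytes".getBytes("UTF-8");

        // Each consumer builds its own stream (and, with iText, its own
        // PdfReader) over the same bytes, so no reader instance is reused.
        try (InputStream forAnalyzer = new ByteArrayInputStream(uploaded);
             InputStream forOtherService = new ByteArrayInputStream(uploaded)) {
            System.out.println(forAnalyzer.available() == forOtherService.available());
        }
    }
}
```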
PdfDocument SourcePDF=null;
PdfADocument DisPDF =null;
try
{
PdfReader Reader = new PdfReader(input-Path);
PdfWriter writer = new PdfWriter(output-Path, new WriterProperties().SetPdfVersion(PdfVersion.PDF_2_0));
writer.SetSmartMode(true);
SourcePDF = new PdfDocument(Reader);
DisPDF = new PdfADocument(writer, PdfAConformanceLevel.PDF_A_3A,
new PdfOutputIntent("Custom", "", "https://www.color.org", "sRGB", new MemoryStream(Properties.Resources.sRGB_CS_profiles)));
DisPDF.InitializeOutlines();
//Setting some required parameters
DisPDF.SetTagged();
DisPDF.GetCatalog().SetLang(new PdfString("en-EN"));
DisPDF.GetCatalog().SetViewerPreferences(new PdfViewerPreferences().SetDisplayDocTitle(true));
PdfMerger merger = new PdfMerger(DisPDF, true, true);
merger.Merge(SourcePDF, 1, SourcePDF.GetNumberOfPages());
SourcePDF.Close();
DisPDF.Close();
}
catch (Exception ex)
{
throw;
}

Write object to zip file in json format

My goal is to write an object to zip file in json format. The simplest way of doing it is:
ZipOutputStream zip = new ZipOutputStream(new BufferedOutputStream(new FileOutputStream(zipFile)));
String json = gson.toJson(object);
zip.write(json.getBytes());
But I want to avoid to load the whole object to a single string. So I wrapped a zip stream into a writer object:
Writer writer = new OutputStreamWriter(zip);
And after that I write the entry in the following way:
zip.putNextEntry(entry);
gson.toJson(content, writer);
writer.flush();
zip.closeEntry();
zip.flush();
It works fine, but it seems very messy using writer and zip objects at the same time. Is there any better solution for this problem?
You can make it a bit simpler with Jackson, which has methods to write directly to an OutputStream:
ObjectMapper mapper = new ObjectMapper();
try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFile))){
out.putNextEntry(new ZipEntry("object.json"));
mapper.writeValue(out, object);
}
You may declare one or more resources in a try-with-resources statement. For example:
try (
    ZipOutputStream zip = new ZipOutputStream(
        new BufferedOutputStream(new FileOutputStream(zipFile)));
    Writer writer = new OutputStreamWriter(zip);
) {
zip.putNextEntry(entry);
gson.toJson(content, writer);
}
The close methods are called automatically, and in the opposite order of the resources' creation: writer first, then zip. As for flushing, close flushes the stream first:
public abstract void close() throws IOException
Closes the stream, flushing it first. Once the stream has been closed, further write() or flush() invocations will cause an IOException to be thrown. Closing a previously closed stream has no effect.
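The reverse close order can be observed with a small stdlib sketch (the Named record here is a stand-in for the zip and writer resources, not part of the original code):

```java
import java.io.Closeable;

public class CloseOrder {
    // A trivial resource that reports when it is closed.
    record Named(String name) implements Closeable {
        @Override
        public void close() {
            System.out.println("closing " + name);
        }
    }

    public static void main(String[] args) throws Exception {
        // Resources are closed in the opposite order of declaration:
        // writer first, then zip.
        try (Named zip = new Named("zip");
             Named writer = new Named("writer")) {
            System.out.println("writing");
        }
    }
}
```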
