I have a database where an image is stored in a LONG column, and I want to retrieve it and write it to an image file.
I tried using the getBytes method and writing the file in a loop, and the result is a corrupt image.
I also tried getBinaryStream and writing with a FileOutputStream, and I get the same corruption.
Code:
InputStream is = RS.getBinaryStream(1);
FileOutputStream fos = new FileOutputStream("image.bmp");
int c;
while ((c = is.read()) != -1) {
    fos.write(c);
}
fos.close();
To store binary data you should use LONG RAW or a BLOB. The LONG type is for character-based (text) data.
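Once the column is a BLOB (or LONG RAW), read it through getBinaryStream and copy with a buffer rather than one byte per read() call. A minimal sketch of the copy loop (the JDBC wiring around it is assumed):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Copies the stream to the output in 8 KiB chunks instead of one
    // byte per read() call, and returns the number of bytes copied.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }
}
```

With a ResultSet rs this would be called as copy(rs.getBinaryStream(1), fos), ideally with both streams inside try-with-resources so they are closed even on failure.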
I am working on a POC where I need to convert any text or Excel file into an encoded string and send it as the string body of a REST API request.
Converting a plain text file to a string and then reconstructing the file works without any problem.
But I am unable to reconstruct an Excel file from its encoded string:
I get a corrupt file when converting it back to an Excel file.
byte[] decoded = Base64.decodeBase64(encodedExcelString);
try (BufferedOutputStream w = new BufferedOutputStream(new FileOutputStream("path"))) {
    w.write(decoded);
}
I had the same scenario. I am not very familiar with Excel file creation or its character format, but in the normal case this works:
byte[] bytes = java.util.Base64.getDecoder().decode(encodeData);
try (FileOutputStream fos = new FileOutputStream(filePath)) {
    fos.write(bytes);
}
Also, please avoid encoded strings for binary files. For a normal text file it is fine to wrap the content in an encoded string, but for a large binary file the encoding and decoding take much more processing time. Use an array of bytes instead of a string.
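Note that the decoder already returns a byte[]; write it directly rather than calling getBytes() on it. With the JDK's own java.util.Base64 (available since Java 8, no sun.misc classes needed) the round trip preserves every byte:

```java
import java.util.Base64;

public class Base64RoundTrip {
    // Encodes arbitrary bytes to a Base64 string and decodes them back;
    // the decoded array is byte-identical to the input.
    public static byte[] roundTrip(byte[] original) {
        String encoded = Base64.getEncoder().encodeToString(original);
        return Base64.getDecoder().decode(encoded);
    }
}
```

The corruption in the question comes from treating the decoded bytes as text again, not from Base64 itself.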
We have a requirement to store the uploaded spreadsheet in an Oracle database blob column.
User will upload the spreadsheet using the ADF UI and the same spreadsheet will be persisted to the DB and retrieved later.
We are using POI to process the spreadsheet.
The uploaded file is converted to byte[] and sent to the appserver. Appserver persists the same to the blob column.
But when I try to retrieve it later, I see the message "Excel found unreadable content in *****.xlsx. Do you want to recover the contents of this workbook?"
I could resolve this issue by
converting the byte[] to an XSSFWorkbook and converting it back to a byte[] before persisting it.
But according to my requirements I may get a very large spreadsheet, and initializing an XSSFWorkbook might cause OutOfMemoryError issues.
The code to get the byte[] from the uploaded spreadsheet is as below
if (uploadedFile != null) {
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    InputStream inputStream = uploadedFile.getInputStream();
    int c;
    while ((c = inputStream.read()) != -1) {
        byteArrayOutputStream.write(c);
    }
    bytes = byteArrayOutputStream.toByteArray();
}
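As an aside, a hand-rolled byte-at-a-time loop is both slow and easy to get wrong (writing the terminating -1, or casting through char, appends or alters bytes and corrupts the file). On Java 9 and later the whole copy collapses to InputStream.readAllBytes(). A minimal sketch:

```java
import java.io.IOException;
import java.io.InputStream;

public class UploadBytes {
    // Drains the entire upload into a byte[] without a manual loop (Java 9+).
    public static byte[] toBytes(InputStream in) throws IOException {
        return in.readAllBytes();
    }
}
```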
and the same byte[] is persisted into the blob column as below.
1. Assign this byte[] to the BlobColumn.
2. Update the SQL update statement with the blobColumn.
3. Execute the statement.
Once the above is done, retrieve the spreadsheet as follows.
1. Read the BlobColumn.
2. Get the byte[] from the BlobColumn.
3. Set the content-type of the response to support the spreadsheet.
4. Send the byte[].
But when I open the downloaded spreadsheet I get the corrupted-spreadsheet error.
If I introduce an additional step as below after receiving the byte[] from the UI, the issue is solved.
InputStream is = new ByteArrayInputStream(uploadedSpreadSheetBytes);
XSSFWorkbook uploadedWorkbook = new XSSFWorkbook(is);
and then, derive the byte[] again from the XSSFWorkbook as below
byteArrayOutputStream = new ByteArrayOutputStream();
workbook.write(byteArrayOutputStream);
byte[] spreadSheetBytes = byteArrayOutputStream.toByteArray();
I feel converting the byte[] to XSSFWorkbook and then converting the XSSFWorkbook back to byte[] is redundant.
Any help would be appreciated.
Thanks.
The memory issues can be avoided by using event-based parsing (SAX) instead of initializing the entire XSSFWorkbook. That way you only process parts of the file at a time, which consumes far less memory. See the POI spreadsheet how-to page for more info on SAX parsing.
You could also increase the JVM memory, but of course that's no guarantee.
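POI's event API ultimately drives an ordinary SAX handler over the sheet XML, reacting to one element at a time instead of materializing the whole workbook. The sketch below shows that pattern using only the JDK's SAX parser over a sheet-like XML fragment (the element names mirror xlsx sheet XML, but this is an illustration of the technique, not POI's actual API):

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SheetSaxDemo {
    // Collects cell values one SAX event at a time; the document is
    // streamed, never fully loaded into an object model.
    public static List<String> cellValues(byte[] sheetXml) throws Exception {
        List<String> values = new ArrayList<>();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(sheetXml), new DefaultHandler() {
            private boolean inValue;
            private final StringBuilder text = new StringBuilder();

            @Override
            public void startElement(String uri, String local, String qName, Attributes a) {
                if (qName.equals("v")) { inValue = true; text.setLength(0); }
            }

            @Override
            public void characters(char[] ch, int start, int len) {
                if (inValue) text.append(ch, start, len);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if (qName.equals("v")) { inValue = false; values.add(text.toString()); }
            }
        });
        return values;
    }
}
```

For real xlsx files, POI's XSSFReader hands you exactly such per-sheet XML streams to parse this way.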
I have an InputStream which I would like to convert to a PDF, and save that PDF in a directory. Currently, my code is able to convert the InputStream to a PDF and the PDF does show up in the correct directory. However, when I try to open it, the file is damaged.
Here is the current code:
InputStream pAdESStream = signingServiceConnector.getDirectClient().getPAdES(this.statusReader.getStatusResponse().getpAdESUrl());
File targetFile = new File(System.getProperty("user.dir"), "targetFile2.pdf");
try (OutputStream outStream = new FileOutputStream(targetFile)) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = pAdESStream.read(buffer)) != -1) {
        outStream.write(buffer, 0, n);
    }
}
Originally, the InputStream was a pAdES-file (https://en.wikipedia.org/wiki/PAdES). However, it should be able to be read as just a regular PDF.
Does anyone know how to convert the InputStream to a PDF, without getting a damaged PDF as a result?
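One likely culprit: available() only returns an estimate of the bytes readable without blocking, and a single read() call need not fill the buffer, so sizing and filling a buffer that way can silently truncate the file. java.nio.file.Files.copy drains the stream completely. A minimal sketch (the target path is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class SaveStream {
    // Drains the whole stream to the target path, overwriting any
    // existing file, and returns the number of bytes written.
    public static long save(InputStream in, Path target) throws IOException {
        return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

The output stream should also be closed (or flushed) before the file is opened elsewhere; Files.copy takes care of that internally.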
It might be a bit late, but you can use the PDFBox API (or iText):
https://www.tutorialkart.com/pdfbox/create-write-text-pdf-file-using-pdfbox/
Here is a tutorial covering the process. Good luck.
I have a question that looked very hard to me at first glance but may have an easy solution I can't figure out yet. I need to read the binary data of an Excel file stored in an Oracle database CLOB column.
Reading the CLOB as a String in Java works fine; I receive the Excel file's binary content in a String parameter.
String respXLS = othRaw.getOperationData(); // here I get excel file
InputStream bais = new ByteArrayInputStream(respXLS.getBytes());
POIFSFileSystem fs = new POIFSFileSystem(bais);
HSSFWorkbook wb = new HSSFWorkbook(fs);
Then I try to feed the stream into POIFSFileSystem, but I get this exception:
java.io.IOException: Invalid header signature; read 0x00003F1A3F113F3F, expected 0xE11AB1A1E011CFD0
I googled some Excel problems, and they mention read access. So I downloaded the same Excel file to my hard disk, changed nothing about it (I did not even open it), and used a FileInputStream with the file path. That worked flawlessly. So what is the reason?
Any advice or alternative way to read from CLOB will be appreciated.
Thanks in advance,
My Regards.
CLOB means Character Large OBject; you want a BLOB, a Binary Large OBject. Change your database schema.
What happens is that a CLOB uses a character set to convert your String to/from the database's internal format, whatever that is; this causes file corruption for non-text content.
Repeat after me: a String is not a byte[], and a character is not a byte.
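The corruption is easy to reproduce without any database: round-trip arbitrary bytes through a String and compare. A small sketch (UTF-8 is chosen for illustration; it rejects many byte sequences, replacing them with U+FFFD):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetCorruption {
    // Returns true only if the bytes survive a bytes -> String -> bytes
    // round trip through the given fixed charset (UTF-8 here).
    public static boolean survivesRoundTrip(byte[] original) {
        String asText = new String(original, StandardCharsets.UTF_8);
        byte[] back = asText.getBytes(StandardCharsets.UTF_8);
        return Arrays.equals(back, original);
    }
}
```

Plain ASCII text survives, but the OLE2 header bytes D0 CF 11 E0 that POIFSFileSystem expects do not, which is exactly the "Invalid header signature" failure above.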
I have to retrieve binary data from a LONGBLOB field in the db. This field stores all sorts of file formats, such as txt, doc, xdoc, pdf, etc. I basically need to be able to convert the binary data back into the actual file formats in order to let my users download these files.
Does anyone have any idea how to do this?
As others have said, it would be best to have another field to store the format. You can fill it by copying the extension (i.e., everything after the last "." in the file name). The best way would probably be to get the file's MIME type: see this for example.
You can then store the MIME type in a field in the database. This will almost always work, whereas a file's extension can be misleading (or vague).
Add a file_format field indicating the format of the file stored in the LONGBLOB; then you can convert the binary data according to the associated file format.
Alternatively, reserve the first several bytes of the blob for the file format, and store the actual file content after them.
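Closely related to reserving leading bytes: most binary formats already announce themselves in their first few bytes, the same "magic number" idea behind the 0xE11AB1A1E011CFD0 header signature mentioned earlier on this page. A sketch of sniffing a few common signatures (the labels and the signature list are illustrative, not exhaustive):

```java
public class MagicSniffer {
    // Guesses a file format from its leading "magic number" bytes.
    // %PDF -> PDF, PK\x03\x04 -> ZIP container (xlsx/docx),
    // D0 CF 11 E0 -> legacy OLE2 container (.xls/.doc).
    public static String sniff(byte[] head) {
        if (startsWith(head, new byte[]{0x25, 0x50, 0x44, 0x46})) return "pdf";
        if (startsWith(head, new byte[]{0x50, 0x4B, 0x03, 0x04})) return "zip";
        if (startsWith(head, new byte[]{(byte) 0xD0, (byte) 0xCF, 0x11, (byte) 0xE0})) return "ole2";
        return "unknown";
    }

    private static boolean startsWith(byte[] data, byte[] prefix) {
        if (data.length < prefix.length) return false;
        for (int i = 0; i < prefix.length; i++) {
            if (data[i] != prefix[i]) return false;
        }
        return true;
    }
}
```

This can serve as a fallback or sanity check when the stored file_format column is missing or wrong.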
I think you should have another field to save the document type, telling you which type to convert to. Use I/O streams (InputStream/OutputStream) to read and write the file.
What I recommend is uploading the client files somewhere and saving the paths linking to these files in the db. That should be faster.
As others have suggested, create another column (FILETYPE_COL_NAME) in the database that tells you which type of file is stored in the BLOB (BLOB_COL_NAME) field, and then extend the code below inside a try/catch block:
ResultSet rs = statement.executeQuery("Select * from tablename");
rs.next(); // move the cursor to the first row before reading columns
String fileType = rs.getString("FILETYPE_COL_NAME");
DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream("filepath." + fileType)));
BufferedInputStream bis = new BufferedInputStream(rs.getBinaryStream("BLOB_COL_NAME"));
byte[] buffer = new byte[1024];
int byteread;
while ((byteread = bis.read(buffer)) != -1) {
    dos.write(buffer, 0, byteread);
}
dos.flush();
dos.close();
bis.close();