I am trying to save a .pfx file in an SQLite DB and install it after getting its info from the DB itself, not directly from the file. I am storing its byte array in a BLOB column and the other info in columns matching their types, but the certificate fails to install when I retrieve it through an SQL query.
Getting Byte Array:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BufferedInputStream reader = new BufferedInputStream(new FileInputStream(certificateFile));
try {
    IOUtils.copyStream(reader, baos);
} finally {
    reader.close();
    baos.close();
}
return baos.toByteArray();
Please suggest a solution for this. I searched and found that I have to extract the .pem and .cer files from the .pfx file and store them separately, but I was unable to find out how. How could I save that information programmatically?
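In the meantime, a quick way to narrow the problem down is to check whether the bytes survive the round trip at all, by loading them straight back into a PKCS12 KeyStore after reading them from the BLOB. Below is only a sketch, assuming a plain JDBC setup with the Xerial SQLite driver (on Android the query would go through SQLiteDatabase instead, but the KeyStore check is the same); the table, column, and password are hypothetical:

import java.io.ByteArrayInputStream;
import java.security.KeyStore;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical schema: certs(id INTEGER PRIMARY KEY, data BLOB)
static void verifyRoundTrip(int id, char[] password) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:sqlite:certs.db");
         PreparedStatement ps = conn.prepareStatement("SELECT data FROM certs WHERE id = ?")) {
        ps.setInt(1, id);
        try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                byte[] pfxBytes = rs.getBytes(1); // the exact bytes stored in the BLOB
                KeyStore ks = KeyStore.getInstance("PKCS12"); // .pfx is PKCS#12
                // If load() throws here, the bytes were altered on the way
                // into or out of the database, not by the install step.
                ks.load(new ByteArrayInputStream(pfxBytes), password);
                System.out.println("Round trip OK: " + ks.size() + " entries");
            }
        }
    }
}

If load() succeeds, the stored bytes are intact and the problem lies in the install step instead.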
Related
I have a database where a LONG column has an image stored in it.
I want to retrieve it and write it to an image file.
I tried using the getBytes method and writing the file with a loop, and it returns a corrupt image.
I also tried using getBinaryStream and writing to an image file with a FileOutputStream, and I get the same corruption.
Code:
InputStream is = rs.getBinaryStream(1);
FileOutputStream fos = new FileOutputStream("image.bmp");
int c;
while ((c = is.read()) != -1) {
    fos.write(c);
}
fos.close();
To store binary data you should use LONG RAW or a BLOB. The LONG type is for character-based (text) data.
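As a sketch of the retrieval side, reading the image back from a BLOB with JDBC and a buffered copy would look roughly like this (the connection, table, and column names are assumptions):

import java.io.FileOutputStream;
import java.io.InputStream;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// conn is an open java.sql.Connection; "pictures"/"image" are made-up names
try (PreparedStatement ps = conn.prepareStatement("SELECT image FROM pictures WHERE id = ?")) {
    ps.setInt(1, 1);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            try (InputStream in = rs.getBinaryStream(1);
                 FileOutputStream out = new FileOutputStream("image.bmp")) {
                byte[] buf = new byte[8192]; // copy in 8 KB chunks, not byte by byte
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }
}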
We have a requirement to store the uploaded spreadsheet in an Oracle database blob column.
User will upload the spreadsheet using the ADF UI and the same spreadsheet will be persisted to the DB and retrieved later.
We are using POI to process the spreadsheet.
The uploaded file is converted to byte[] and sent to the appserver. Appserver persists the same to the blob column.
But when I am trying to retrieve the same later, I am seeing the message "Excel found unreadable content in *****.xlsx. Do you want to recover the contents of this workbook?"
I could resolve this issue by converting the byte[] to an XSSFWorkbook and then converting it back to a byte[] before persisting it.
But according to my requirements I may get very large spreadsheets, and initializing an XSSFWorkbook might result in out-of-memory issues.
The code to get the byte[] from the uploaded spreadsheet is as below:
if (uploadedFile != null) {
    InputStream inputStream;
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    inputStream = uploadedFile.getInputStream();
    int c = 0;
    while (c != -1) {
        c = inputStream.read();
        byteArrayOutputStream.write((char) c);
    }
    bytes = byteArrayOutputStream.toByteArray();
}
and the same byte[] is being persisted into the blob column as below.
1. Assign this byte[] to the BlobColumn.
2. Update the SQL UPDATE statement with the blobColumn.
3. Execute the statement.
Once the above step is done, retrieve the spreadsheet as follows (a condensed sketch of both steps is shown after this list).
1. Read the BlobColumn.
2. Get the byte[] from the BlobColumn.
3. Set the content type of the response to support the spreadsheet.
4. Send the byte[].
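For reference, the store/retrieve flow described above boils down to something like the following; conn, uploads, spreadsheet, uploadId, and response are all assumed names:

// Store: bind the raw byte[] directly to the BLOB column.
try (PreparedStatement ps = conn.prepareStatement(
        "UPDATE uploads SET spreadsheet = ? WHERE id = ?")) {
    ps.setBytes(1, spreadsheetBytes);
    ps.setLong(2, uploadId);
    ps.executeUpdate();
}

// Retrieve: read the same bytes back and hand them to the response.
try (PreparedStatement ps = conn.prepareStatement(
        "SELECT spreadsheet FROM uploads WHERE id = ?")) {
    ps.setLong(1, uploadId);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            byte[] payload = rs.getBytes(1);
            response.setContentType(
                    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
            response.setHeader("Content-Disposition", "attachment; filename=\"report.xlsx\"");
            response.getOutputStream().write(payload);
        }
    }
}

As long as the byte[] passes through untouched on both sides, the file Excel receives is bit-for-bit what was uploaded.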
But when I open the downloaded spreadsheet, I get the spreadsheet-corrupted error.
If I introduce an additional step as below after receiving the byte[] from the UI, the issue is solved.
InputStream is = new ByteArrayInputStream(uploadedSpreadSheetBytes);
XSSFWorkbook uploadedWorkbook = new XSSFWorkbook(is);
and then derive the byte[] again from the XSSFWorkbook as below:
byteArrayOutputStream = new ByteArrayOutputStream();
uploadedWorkbook.write(byteArrayOutputStream);
byte[] spreadSheetBytes = byteArrayOutputStream.toByteArray();
I feel converting the byte[] to XSSFWorkbook and then converting the XSSFWorkbook back to byte[] is redundant.
Any help would be appreciated.
Thanks.
The memory issues can be avoided by using event-based (SAX) parsing instead of initializing the entire XSSFWorkbook. That way you only process parts of the file at a time, which consumes far less memory. See the POI spreadsheet how-to page for more info on SAX parsing.
You could also increase the JVM memory, but that's no guarantee of course.
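For reference, here is a trimmed-down sketch of the event-based approach, modeled on the example from the POI how-to page (API names as of the POI 4.x line). It prints every cell value without ever building the whole workbook in memory; the file name is made up:

import java.io.InputStream;
import java.util.Iterator;

import javax.xml.parsers.SAXParserFactory;

import org.apache.poi.openxml4j.opc.OPCPackage;
import org.apache.poi.xssf.eventusermodel.XSSFReader;
import org.apache.poi.xssf.model.SharedStringsTable;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.DefaultHandler;

public class SheetSaxDump {
    public static void main(String[] args) throws Exception {
        try (OPCPackage pkg = OPCPackage.open("large.xlsx")) {
            XSSFReader reader = new XSSFReader(pkg);
            SharedStringsTable sst = reader.getSharedStringsTable();

            XMLReader parser = SAXParserFactory.newInstance().newSAXParser().getXMLReader();
            parser.setContentHandler(new SheetHandler(sst));

            // Each sheet is streamed as XML, one element at a time.
            Iterator<InputStream> sheets = reader.getSheetsData();
            while (sheets.hasNext()) {
                try (InputStream sheet = sheets.next()) {
                    parser.parse(new InputSource(sheet));
                }
            }
        }
    }

    // Minimal handler: collects each cell's value, resolving shared strings.
    private static class SheetHandler extends DefaultHandler {
        private final SharedStringsTable sst;
        private final StringBuilder contents = new StringBuilder();
        private boolean isSharedString;

        SheetHandler(SharedStringsTable sst) {
            this.sst = sst;
        }

        @Override
        public void startElement(String uri, String local, String name, Attributes attrs) {
            if ("c".equals(name)) { // "c" is a cell; t="s" marks a shared string
                isSharedString = "s".equals(attrs.getValue("t"));
            }
            contents.setLength(0);
        }

        @Override
        public void characters(char[] ch, int start, int length) {
            contents.append(ch, start, length);
        }

        @Override
        public void endElement(String uri, String local, String name) {
            if ("v".equals(name)) { // "v" holds the cell value
                String value = isSharedString
                        ? sst.getItemAt(Integer.parseInt(contents.toString())).getString()
                        : contents.toString();
                System.out.println(value);
            }
        }
    }
}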
My project has a requirement that I have to receive a file via a REST service (using Jersey) and store it in the database.
The file size will be around 2-4MB.
The received file can be either zip or pdf format.
Before storing in database I would like to compress it.
I googled and found that there are many available classes like GZip, Zip, Deflater... I thought of using Deflater, as it looked very simple. I have written the following code for compressing:
Deflater deflater = new Deflater();
deflater.setInput(data);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream(data.length);
deflater.finish();
byte[] buffer = new byte[1024];
while (!deflater.finished()) {
    int count = deflater.deflate(buffer);
    outputStream.write(buffer, 0, count);
}
outputStream.close();
byte[] output = outputStream.toByteArray();
Could anyone please suggest whether the above code is fine for my use case, or whether I should use other classes to do the same?
Thanks,
Kitty
ByteArrayOutputStream caches the compressed output in memory. For big files you should wrap a FileOutputStream instead, so the compressed bytes go to disk as they are produced and you avoid any OOM issue while writing.
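A minimal sketch of that, assuming the payload arrives as an InputStream (the method and path names are made up):

import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;

// Streams the payload through the Deflater straight to disk, so only
// the 8 KB buffer is ever held in memory.
static void compressToFile(InputStream in, String path) throws IOException {
    try (OutputStream out = new DeflaterOutputStream(new FileOutputStream(path))) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    } // close() finishes the deflate stream for you
}

The matching InflaterInputStream reads the data back when you need to decompress.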
I'm trying to write code in Java that will encrypt a file. I used the example from this site:
http://www.avajava.com/tutorials/lessons/how-do-i-encrypt-and-decrypt-files-using-des.html
Everything works fine, but I need code that will overwrite the original file with the encrypted one. I changed only this:
FileInputStream fis = new FileInputStream("original.txt");
FileOutputStream fos = new FileOutputStream("original.txt");
encrypt(key, fis, fos);
FileInputStream fis2 = new FileInputStream("original.txt");
FileOutputStream fos2 = new FileOutputStream("original.txt");
Encryption works, but after decryption the decrypted file is empty.
Can someone explain what the problem is and how to solve it?
Thanks!
You shouldn't read and overwrite the same file simultaneously with FileInputStream and FileOutputStream. Often, you'll get lucky, but the behavior is going to vary based on the underlying system, and that's not good. Instead, write to a temporary file, then move the temporary file to the location of the original file.
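A minimal sketch of that approach, reusing the key and encrypt() from the linked tutorial:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

Path original = Paths.get("original.txt");
Path temp = Files.createTempFile("encrypting", ".tmp");

try (FileInputStream fis = new FileInputStream(original.toFile());
     FileOutputStream fos = new FileOutputStream(temp.toFile())) {
    encrypt(key, fis, fos); // same encrypt() as in the tutorial
}
// Replace the original only after the encrypted copy is complete.
Files.move(temp, original, StandardCopyOption.REPLACE_EXISTING);

In the snippet in the question, the likely reason the decrypted file comes out empty is that new FileOutputStream("original.txt") truncates the file to zero bytes before the FileInputStream on the same path gets to read anything.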
I am developing a page to upload files. I am using Spring Framework 3's MultipartFile. I only want to save the uploaded file if it has been changed from its original version on the server. Is there a way I can do an MD5 check without saving the uploaded file to a temporary location?
Thanks,
Vasanta
You can use MultipartFile's getBytes() method to read the contents as a byte array, and then:
import java.math.BigInteger;
import java.security.MessageDigest;

byte[] uploadBytes = upload.getBytes();
MessageDigest md5 = MessageDigest.getInstance("MD5");
byte[] digest = md5.digest(uploadBytes);
// hex-encode the 16-byte digest
String hashString = new BigInteger(1, digest).toString(16);
System.out.println("File hash: " + hashString);
However, according to the documentation, the file can potentially still be saved to a temporary folder (but Spring will clean it up afterwards):
The file contents are either stored in memory or temporarily on disk.
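To act on the hash, you would compare it against one computed from the server-side original; a short sketch, where the lookup is an assumed helper:

// storedDigest is the MD5 of the original file, loaded however you keep it
byte[] storedDigest = loadStoredDigestFor(filename); // hypothetical lookup
boolean unchanged = MessageDigest.isEqual(digest, storedDigest);
if (!unchanged) {
    // only now persist the uploaded file
}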