In Oracle 11g I have a table that has a CLOB column containing image data for each row.
I need to convert the CLOB field back to an image. I searched a bit on Google but could not find a working example.
Can anyone help please?
Thank you
I found the solution. This is what I used:
public void convertFromClob(Clob c, File f2) {
    try {
        // The CLOB holds the image as Base64 text, so read it as an ASCII stream
        InputStream inStream = c.getAsciiStream();
        StringWriter sw = new StringWriter();
        IOUtils.copy(inStream, sw);                       // Apache Commons IO
        // Decode the Base64 text back into the raw image bytes
        byte[] data = Base64.decodeBase64(sw.toString()); // Apache Commons Codec
        BufferedImage image = ImageIO.read(new ByteArrayInputStream(data));
        ImageIO.write(image, "png", f2);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
You can call the above method directly, passing in your Clob variable and the File in which to store the image. Tung
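For reference, a minimal usage sketch, assuming the CLOB comes from a plain JDBC query against Oracle (the IMAGES table, IMG_DATA column and connection variables are only placeholders, not from the original post):
// Hypothetical JDBC lookup; adjust the SQL, table and column names to your schema.
try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
     PreparedStatement ps = con.prepareStatement("SELECT IMG_DATA FROM IMAGES WHERE ID = ?")) {
    ps.setLong(1, 42L);
    try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
            Clob clob = rs.getClob("IMG_DATA");
            convertFromClob(clob, new File("image.png")); // method from the answer above
        }
    }
}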
I'm trying to convert a binary file that contains multiple images into a PDF document using Java. Using itextpdf was the only solution that gave me the converted file in the correct format, but the output contains only one image (the first one); the other images inside the binary file are lost.
I've already tried using itextpdf to add the images to a document, as well as some other solutions like this one:
https://www.mkyong.com/java/how-to-convert-array-of-bytes-into-file/
or
create pdf from binary data in java
As I understand it, the issue in my case is that I read my binary file, store its contents in a byte[], and then pass the file content to a Vector.
I wrote a function that takes the Vector as an argument and creates a PDF with the images inside, but it only inserts the first image into the PDF, because it cannot tell, inside the Vector, where the first image ends and the next one starts, as in this case (JPEG image files begin with FF D8 and end with FF D9):
How to identify contents of a byte[] is a jpeg?
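(For context, a naive scan of the byte array for those markers would look roughly like the sketch below; it only works for plain baseline JPEGs, because an embedded EXIF thumbnail can also contain FF D8/FF D9 and would break the split. The method name is just for illustration.)
// Requires java.util imports (List, ArrayList, Arrays).
static List<byte[]> splitJpegs(byte[] data) {
    List<byte[]> images = new ArrayList<byte[]>();
    int start = -1;
    for (int i = 0; i < data.length - 1; i++) {
        int b0 = data[i] & 0xFF, b1 = data[i + 1] & 0xFF;
        if (start < 0 && b0 == 0xFF && b1 == 0xD8) {
            start = i; // SOI marker: start of an image
        } else if (start >= 0 && b0 == 0xFF && b1 == 0xD9) {
            images.add(Arrays.copyOfRange(data, start, i + 2)); // include the EOI marker
            start = -1;
        }
    }
    return images;
}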
File imgFront = new File("C:/Users/binaryFile");
byte[] fileContent;
Vector<byte[]> records = new Vector<byte[]>();
try {
    fileContent = Files.readAllBytes(imgFront.toPath());
    records.add(fileContent); // add the file content to the Vector<byte[]>
} catch (IOException e1) {
    System.out.println(e1);
}
...
public static String ImageToPDF(Vector<byte[]> imageVector, String pathFile) {
    String fileOutputName = pathFile + ".pdf";
    Document document = new Document();
    try {
        FileOutputStream fos = new FileOutputStream(fileOutputName);
        PdfWriter writer = PdfWriter.getInstance(document, fos);
        writer.open();
        document.open();
        // Loop over the imageVector in order to get the images one by one,
        // but I get only the first one
        for (byte[] img : imageVector) {
            Image image = Image.getInstance(img);
            image.scaleToFit(500, 500); // size
            document.add(image);
        }
        document.close();
        writer.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return fileOutputName;
}
I expect the PDF to contain all the images, not only one.
I've made a workaround using the itextpdf library.
First I convert the binary file to bytes, then cast the bytes to integers in order to determine the image type from the byte array: http://www.sparkhound.com/blog/detect-image-file-types-through-byte-arrays
I found out that my type was TIFF from the matching signature: var tiff2 = new byte[] { 77, 77, 42 }; // TIFF
I changed the parameter from Vector<byte[]> imageVector to byte[] bytes, and now I pass the array of bytes directly:
byte[] fileContent;
fileContent = Files.readAllBytes(imgFront.toPath());
ImageToPDF(fileContent, "C:/Users/Desktop/pdfWithImages");
Now I get the number of pages in the binary file using:
int numberOfPages = TiffImage.getNumberOfPages(ra); // From itextpdf
public static String ImageToPDF(byte[] bytes, String pathFile) {
    String fileName = pathFile + ".pdf";
    Document document = new Document();
    try {
        FileOutputStream fos = new FileOutputStream(fileName);
        PdfWriter writer = PdfWriter.getInstance(document, fos);
        writer.open();
        document.open();
        // Array of bytes we have read from the binary file
        RandomAccessFileOrArray ra = new RandomAccessFileOrArray(bytes);
        // Get the number of pages the binary file has inside
        int numberOfPages = TiffImage.getNumberOfPages(ra);
        // Loop through numberOfPages and add them to the document one by one
        for (int page = 1; page <= numberOfPages; page++) {
            Image image = TiffImage.getTiffImage(new RandomAccessFileOrArray(bytes), page);
            image.scaleAbsolute(500, 500);
            document.add(image);
        }
        document.close();
        writer.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return fileName;
}
This works for my case because, as far as I've checked, all the binary files I'm using as a source are TIFF images. To handle every image type you would of course need more conditions; this use case covers one particular image type.
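If you ever need to support more than TIFF, a small sketch of checking the leading bytes against the standard signatures (only JPEG, PNG and TIFF are shown; anything else needs its own case):
// Minimal signature check on the first bytes of the file content.
static String detectImageType(byte[] b) {
    if (b.length >= 2 && (b[0] & 0xFF) == 0xFF && (b[1] & 0xFF) == 0xD8) return "JPEG";        // FF D8
    if (b.length >= 4 && (b[0] & 0xFF) == 0x89 && b[1] == 'P' && b[2] == 'N' && b[3] == 'G') return "PNG"; // 89 50 4E 47
    if (b.length >= 4 && b[0] == 'I' && b[1] == 'I' && b[2] == 0x2A && b[3] == 0x00) return "TIFF"; // little-endian
    if (b.length >= 4 && b[0] == 'M' && b[1] == 'M' && b[2] == 0x00 && b[3] == 0x2A) return "TIFF"; // big-endian
    return "UNKNOWN";
}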
I am writing metadata inside a PNG image using the code below:
public synchronized static byte[] writeMetaDataInPNGImage(BufferedImage buffImg,
        String key, String value)
{
    byte[][] image = null;
    try
    {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("png").next();
        ImageWriteParam writeParam = writer.getDefaultWriteParam();
        ImageTypeSpecifier typeSpecifier = ImageTypeSpecifier
                .createFromBufferedImageType(BufferedImage.TYPE_INT_RGB);
        // adding metadata
        IIOMetadata metadata = writer.getDefaultImageMetadata(typeSpecifier, writeParam);
        IIOMetadataNode textEntry = new IIOMetadataNode("tEXtEntry");
        textEntry.setAttribute("keyword", key);
        textEntry.setAttribute("value", value);
        IIOMetadataNode text = new IIOMetadataNode("tEXt");
        text.appendChild(textEntry);
        IIOMetadataNode root = new IIOMetadataNode("javax_imageio_png_1.0");
        root.appendChild(text);
        int width = buffImg.getWidth();
        int height = buffImg.getHeight();
        metadata.mergeTree("javax_imageio_png_1.0", root);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageOutputStream stream = ImageIO.createImageOutputStream(baos);
        writer.setOutput(stream);
        writer.write(metadata, new IIOImage(buffImg, null, metadata), writeParam);
        stream.close();
        return baos.toByteArray();
        // return ImageIO.read(new ByteArrayInputStream(baos.toByteArray()));
    }
    catch (Exception e)
    {
        System.out.println("Exception while writing \t " + e.getMessage() + " :: "
                + e.getStackTrace());
        e.printStackTrace();
    }
    return null;
}
After writing the metadata I return a byte array containing both the metadata and the image data.
byte[] pngjdata = writeMetaDataInPNGImage(img.getAsBufferedImage(),"key", "hello");
If I save the image from pngjdata[] I can see the metadata inside the image, but if I create a BufferedImage from this byte array and save that, I do not see the metadata I wrote.
InputStream in1 = new ByteArrayInputStream(pngjdata);
BufferedImage brImage = ImageIO.read(in1);
Why does brImage not have the metadata I wrote?
Why does brImage not have the metadata I wrote?
A BufferedImage does not contain the metadata you are looking for. The BufferedImage instance just represents pixel values, the color model, the sample model etc., in other words the data necessary to display the image. There is also no API to set/get metadata (though you may be confused by getProperty(name) and the related methods inherited from the legacy AWT Image API).
Metadata in the ImageIO API is represented as instances of IIOMetadata and its various DOM-like forms, which you can obtain via the getAsTree(..) method (like "javax_imageio_png_1.0" for PNG, or the "standard" "javax_imageio_1.0" format).
To keep both pixel data and meta data organized together in your application, you can use the IIOImage class.
You can read both pixel data and metadata together using the ImageReader.readAll(index, param) method. You can write both at the same time using ImageWriter.write(null, iioImage, param), as you already do above (just note that the first parameter of the write method is the stream metadata, which is different from the image metadata; for PNG just pass null).
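To illustrate, a rough sketch of such a round trip that keeps the metadata, reusing the pngjdata array from the question (local variable names are just for this example):
// Read pixels + metadata as one IIOImage, then write both back out together.
try (ImageInputStream iis = ImageIO.createImageInputStream(new ByteArrayInputStream(pngjdata))) {
    ImageReader reader = ImageIO.getImageReaders(iis).next();
    reader.setInput(iis);
    IIOImage iioImage = reader.readAll(0, reader.getDefaultReadParam()); // pixels + image metadata

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    try (ImageOutputStream ios = ImageIO.createImageOutputStream(baos)) {
        ImageWriter writer = ImageIO.getImageWriter(reader); // a writer matching the reader's format
        writer.setOutput(ios);
        writer.write(null, iioImage, writer.getDefaultWriteParam()); // null stream metadata is fine for PNG
        writer.dispose();
    }
    reader.dispose();
    byte[] roundTripped = baos.toByteArray(); // still contains the tEXt entry
}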
I have a Java Spring MVC application in which there is an option to upload an image and save it to the server. I have the following method:
@RequestMapping(value = "/uploaddocimagecontentsubmit", method = RequestMethod.POST)
public String createUpdateFileImageContentSubmit(@RequestParam("file") MultipartFile file, ModelMap model)
{
    // methods to handle file upload
}
I am now trying to reduce the size of the image, referring to the following:
increasing-resolution-and-reducing-size-of-an-image-in-java and decrease-image-resolution-in-java
The problem I am facing is that in the above examples we are dealing with java.io.File objects which are saved to a specified location. I don't want to save the image. Is there any way I can do something similar to compress my multipart image file and continue with the upload?
Why don't you resize it on the client before upload? That will save some bandwidth.
BlueImp jQuery Upload can do this.
It was my first time taking a deep dive into the ImageIO package. I came across MemoryCacheImageOutputStream, which allows you to write an image output stream to a regular output stream, e.g. a ByteArrayOutputStream. From there, the data can be retrieved after compression using toByteArray() or toString(). I used toByteArray(), as I am storing images in PostgreSQL and it stores the images as a byte array. I know this is late, but I hope it helps someone.
private byte[] compressImage(MultipartFile mpFile) {
    float quality = 0.3f;
    String imageName = mpFile.getOriginalFilename();
    String imageExtension = imageName.substring(imageName.lastIndexOf(".") + 1);

    // Returns an Iterator containing all currently registered ImageWriters that claim
    // to be able to encode the named format. You don't have to register one yourself; some are provided.
    ImageWriter imageWriter = ImageIO.getImageWritersByFormatName(imageExtension).next();
    ImageWriteParam imageWriteParam = imageWriter.getDefaultWriteParam();
    imageWriteParam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT); // Check the API value that suits your needs.
    // A compression quality setting of 0.0 is most generically interpreted as "high compression is important,"
    // while a setting of 1.0 is most generically interpreted as "high image quality is important."
    imageWriteParam.setCompressionQuality(quality);

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    // MemoryCacheImageOutputStream: an implementation of ImageOutputStream that writes its output
    // to a regular OutputStream, i.e. the ByteArrayOutputStream.
    ImageOutputStream imageOutputStream = new MemoryCacheImageOutputStream(baos);
    // Sets the destination to the given ImageOutputStream or other Object.
    imageWriter.setOutput(imageOutputStream);

    BufferedImage originalImage = null;
    try (InputStream inputStream = mpFile.getInputStream()) {
        originalImage = ImageIO.read(inputStream);
    } catch (IOException e) {
        String info = String.format("compressImage - bufferedImage (file %s) - IOException - message: %s", imageName, e.getMessage());
        logger.error(info);
        return baos.toByteArray();
    }

    IIOImage image = new IIOImage(originalImage, null, null);
    try {
        imageWriter.write(null, image, imageWriteParam);
        imageOutputStream.flush(); // flush the memory cache so the compressed bytes reach the ByteArrayOutputStream
    } catch (IOException e) {
        String info = String.format("compressImage - imageWriter (file %s) - IOException - message: %s", imageName, e.getMessage());
        logger.error(info);
    } finally {
        imageWriter.dispose();
    }
    return baos.toByteArray();
}
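A possible way to call this from the controller in the original question, assuming you then persist the compressed bytes instead of the raw upload (the imageRepository call is hypothetical, just to show where the bytes would go):
@RequestMapping(value = "/uploaddocimagecontentsubmit", method = RequestMethod.POST)
public String createUpdateFileImageContentSubmit(@RequestParam("file") MultipartFile file, ModelMap model) {
    byte[] compressed = compressImage(file); // compress in memory, no temporary java.io.File needed
    imageRepository.save(file.getOriginalFilename(), compressed); // hypothetical persistence call
    return "redirect:/success";
}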
It seems to me there are two ways to store an attachment in a NotesDocument.
Either as a RichTextField or as a "MIME Part".
If they are stored as RichText you can do stuff like:
document.getAttachment(fileName)
That does not seem to work for an attachment stored as a MIME Part. See screenshot
I have thousands of documents like this in the backend. This is NOT a UI issue where I need to use the file Download control of XPages.
Each document has only 1 attachment: an image, a JPG file. I have 3 databases for different sizes: Original, Large, and Small. Originally I created everything from documents that had the attachment stored as RichText, but my code saved them as a MIME Part. That's just what it did; not really my intent.
What happened is I lost some of my "Small" pictures, so I need to rebuild them from the Original pictures that are now stored as a MIME Part. So my ultimate goal is to get the attachment from the NotesDocument into a Java BufferedImage.
I think I have the code to do what I want, but I just "simply" can't figure out how to get the attachment off the document and then into a Java BufferedImage.
Below is some rough code I'm working with. My goal is to pass in the document with the original picture. I already have the fileName because I stored that in metadata, but I don't know how to get it from the document itself. And I'm passing in "Small" to create the Small image.
I think I just don't know how to work with attachments stored in this manner.
Any ideas/advice would be appreciated! Thanks!!!
public Document processImage(Document inputDoc, String fileName, String size) throws IOException {
    // fileName is the name of the attachment on the document
    // The goal is to return a NEW BLANK document with the image on it
    // The calling code can then deal with keys and meta data.
    // size is "Original", "Large" or "Small"
    System.out.println("Processing Image, Size = " + size);
    //System.out.println("Filename = " + fileName);
    boolean result = false;
    Session session = Factory.getSession();
    Database db = session.getCurrentDatabase();
    session.setConvertMime(true);

    BufferedImage img;
    BufferedImage convertedImage = null; // the output image
    EmbeddedObject image = null;
    InputStream imageStream = null;
    int currentSize = 0;
    int newWidth = 0;
    String currentName = "";

    try {
        // Get the embedded object
        image = inputDoc.getAttachment(fileName);
        System.out.println("Input Form : " + inputDoc.getItemValueString("form"));
        if (null == image) {
            System.out.println("ALERT - IMAGE IS NULL");
        }
        currentSize = image.getFileSize();
        currentName = image.getName();

        // Get a stream of the image
        imageStream = image.getInputStream();
        img = ImageIO.read(imageStream); // this is the buffered image we'll work with
        imageStream.close();

        Document newDoc = db.createDocument();
        // Remember this is a BLANK document. The calling code needs to set the form.
        if ("original".equalsIgnoreCase(size)) {
            this.attachImage(newDoc, img, fileName, "JPG");
            return newDoc;
        }
        if ("Large".equalsIgnoreCase(size)) {
            // Now we need to convert the LARGE image
            // We're assuming a FIXED HEIGHT of 600px
            newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 600);
            convertedImage = this.getScaledInstance(img, newWidth, 600, false);
            this.attachImage(newDoc, convertedImage, fileName, "JPG"); // attach the scaled image, not the original
            return newDoc;
        }
        if ("Small".equalsIgnoreCase(size)) {
            System.out.println("converting Small");
            newWidth = this.getNewWidth(img.getHeight(), img.getWidth(), 240);
            convertedImage = this.getScaledInstance(img, newWidth, 240, false);
            this.attachImage(newDoc, convertedImage, fileName, "JPG"); // attach the scaled image, not the original
            System.out.println("End Converting Small");
            return newDoc;
        }
        return newDoc;
    } catch (Exception e) {
        // HANDLE EXCEPTION HERE
        // SAMPLE WRITE TO LOG.NSF
        System.out.println("****************");
        System.out.println("EXCEPTION IN processImage()");
        System.out.println("****************");
        System.out.println("picName: " + fileName);
        e.printStackTrace();
        return null;
    } finally {
        if (null != imageStream) {
            imageStream.close();
        }
        if (null != image) {
            LibraryUtils.incinerate(image);
        }
    }
}
I believe it will be some variation of the following code snippet. You might have to change which MIMEEntity has the content; it might be in the parent or another child entity, depending on the document.
Stream stream = session.createStream();
doc.getMIMEEntity().getFirstChildEntity().getContentAsBytes(stream);
ByteArrayInputStream bais = new ByteArrayInputStream(stream.read());
return ImageIO.read(bais);
EDIT:
session.setConvertMime(false);
Stream stream = session.createStream();
Item itm = doc.getFirstItem("ParentEntity");
MIMEEntity me = itm.getMIMEEntity();
MIMEEntity childEntity = me.getFirstChildEntity();
childEntity.getContentAsBytes(stream);
ByteArrayOutputStream bo = new ByteArrayOutputStream();
stream.getContents(bo);
byte[] mybytearray = bo.toByteArray();
ByteArrayInputStream bais = new ByteArrayInputStream(mybytearray);
return ImageIO.read(bais);
David, have a look at DominoDocument: http://public.dhe.ibm.com/software/dw/lotus/Domino-Designer/JavaDocs/XPagesExtAPI/8.5.2/com/ibm/xsp/model/domino/wrapped/DominoDocument.html
There you can wrap every Notes document.
In DominoDocument there is DominoDocument.AttachmentValueHolder, through which you can access the attachments.
I explained it at Engage. It is very powerful:
http://www.slideshare.net/flinden68/engage-use-notes-objects-in-memory-and-other-useful-java-tips-for-x-pages-development
I have a BufferedImage object of a jpeg which needs to be streamed as servlet response.
The existing code streams the JPEG using JPEGImageEncoder and looks like this:
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(resp.getOutputStream());
resp.reset();
resp.setContentType("image/jpg");
resp.setHeader("Content-disposition", "inline;filename=xyz.jpg");
JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(image);
param.setQuality(jpegQuality, false);
encoder.setJPEGEncodeParam(param);
encoder.encode(image);
I have noticed that this results in the file size of the streamed JPEG being tripled, and I can't figure out why. So I tried using ImageIO to stream the JPEG instead:
ImageIO.write(image, "jpg", out);
This works just fine. I can't see why my predecessor chose JPEGImageEncoder, and I am wondering what issues would arise if I switched to ImageIO. I have compared both JPEGs and couldn't really spot differences. Any thoughts?
To be clear: you already have a concrete JPEG image somewhere on disk or in a database, and you just need to send it unmodified to the client? Then there is indeed absolutely no reason to use JPEGImageEncoder (and ImageIO).
Just stream it unmodified to the response body.
E.g.
File file = new File("/path/to/image.jpg");
response.setContentType("image/jpeg");
response.setHeader("Content-Length", String.valueOf(file.length()));

InputStream input = new FileInputStream(file);
OutputStream output = response.getOutputStream();
byte[] buffer = new byte[8192];

try {
    for (int length = 0; (length = input.read(buffer)) > 0;) {
        output.write(buffer, 0, length);
    }
} finally {
    try { input.close(); } catch (IOException ignore) {}
    try { output.close(); } catch (IOException ignore) {}
}
You often see this mistake of unnecessarily using JPEGImageEncoder (and ImageIO) to stream image files in code written by beginners who don't yet understand the nature of bits and bytes. Those tools are only useful if you want to convert between JPEG and a different image format, or want to manipulate the image (crop, skew, rotate, resize, etc.).
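For contrast, if you genuinely needed to manipulate the image before streaming it (say, scale it down), decoding and re-encoding would be justified; a minimal sketch with ImageIO and Graphics2D (the target size is arbitrary):
BufferedImage original = ImageIO.read(new File("/path/to/image.jpg"));
int targetWidth = 200, targetHeight = 150; // arbitrary example size
BufferedImage scaled = new BufferedImage(targetWidth, targetHeight, BufferedImage.TYPE_INT_RGB);
Graphics2D g = scaled.createGraphics();
g.drawImage(original, 0, 0, targetWidth, targetHeight, null); // simple (low quality) scaling
g.dispose();

response.setContentType("image/jpeg");
ImageIO.write(scaled, "jpg", response.getOutputStream());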