Transferring and saving MultipartFile instance - java

I have the following method, whose aim is simply to store the contents of a given MultipartFile instance under a specified directory:
private void saveOnDisk(final String clientProductId, final MultipartFile image, final String parentDirectoryPath, final String fileSeparator) throws IOException
{
    final File imageFile = new File(parentDirectoryPath + fileSeparator + clientProductId + image.getOriginalFilename());
    image.transferTo(imageFile);
    OutputStream out = new FileOutputStream(imageFile);
    out. //... ? How do we proceed? OutputStream::write() requires a byte array or int as parameter
}
For what it might be worth, the MultipartFile instance will contain an image file that I receive on a REST API I'm building.
I've checked some SO posts such as this and this, but they don't quite touch this problem: I'm effectively looking to create an entirely new image file and store it at a specified location on disk. The write() method of OutputStream, given that it requires byte[] or int parameters, doesn't seem to fit my use case. Any ideas?
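A minimal sketch of one way this could go (an assumption based on Spring's MultipartFile API, not part of the original question): transferTo(File) already copies the uploaded content to the target file, and getBytes() returns the byte[] that OutputStream::write() expects.

private void saveOnDisk(final String clientProductId, final MultipartFile image,
                        final String parentDirectoryPath, final String fileSeparator) throws IOException
{
    final File imageFile = new File(parentDirectoryPath + fileSeparator
            + clientProductId + image.getOriginalFilename());
    // transferTo() writes the uploaded bytes to imageFile; no manual OutputStream is needed.
    image.transferTo(imageFile);

    // Alternatively, write the bytes yourself:
    // try (OutputStream out = new FileOutputStream(imageFile)) {
    //     out.write(image.getBytes()); // getBytes() supplies the byte[] that write() expects
    // }
}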

Related

How to create an endpoint which takes a path, loads the image and serves it to the client

I want to serve an image to a client by converting it to a byte array, but for some reason byteArrayOutputStream.toByteArray() is empty. I get a response status of 200, which means it is served. I looked at various documentation on reading an image file from a directory using BufferedImage and then converting the BufferedImage to a byte array, from Oracle: https://docs.oracle.com/javase/tutorial/2d/images/loadimage.html and https://docs.oracle.com/javase/tutorial/2d/images/saveimage.html, but for some reason the byte array is still empty.
This is the controller:
@GetMapping(path = "/get/image/{name}")
public ResponseEntity<byte[]> displayImage(String name) throws IOException {
    String photoPathFromDatabase = productRepository.findPhotoByName(name);
    Path path = Paths.get(photoPathFromDatabase);
    ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
    BufferedImage image = ImageIO.read(path.toFile()); // Reading the image from path or file
    String fileType = Files.probeContentType(path.toFile().toPath()); // Getting the file type
    ImageIO.write(image, fileType, byteArrayOutputStream); // convert from BufferedImage to byte array
    byte[] bytes = byteArrayOutputStream.toByteArray();
    return ResponseEntity
            .ok()
            .contentType(MediaType.valueOf(fileType))
            .body(bytes);
}
After I debugged the method, byteArrayOutputStream.toByteArray() was still empty.
You should read the bytes of the file directly rather than using an excessive number of methods from different classes. This can be done with the java.nio.file.Files class:
byte[] contentBytes = Files.readAllBytes(path); //Throws IOException
Probably the file extension is not being set properly.
You can create a new method to get the file extension, or you can use FilenameUtils.getExtension from Apache Commons IO.
public static Optional<String> getExtensionByStringHandling(String filename) {
    return Optional.ofNullable(filename)
            .filter(f -> f.contains("."))
            .map(f -> f.substring(f.lastIndexOf(".") + 1));
}
Then change your ImageIO call to use this file extension:
String fileExtension = getExtensionByStringHandling(file.getName())
        .orElseThrow(() -> new RuntimeException("File extension not found"));
ImageIO.write(image, fileExtension, byteArrayOutputStream);
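Putting the two suggestions together, a hedged sketch of how the controller might look when reading the raw bytes directly (the @PathVariable annotation and the null fallback are my assumptions, not from the original code):

@GetMapping(path = "/get/image/{name}")
public ResponseEntity<byte[]> displayImage(@PathVariable String name) throws IOException {
    String photoPathFromDatabase = productRepository.findPhotoByName(name);
    Path path = Paths.get(photoPathFromDatabase);
    // Read the file bytes directly; no BufferedImage/ImageIO round-trip needed.
    byte[] bytes = Files.readAllBytes(path);
    // probeContentType() returns a MIME type such as "image/png" (may be null).
    String contentType = Files.probeContentType(path);
    return ResponseEntity
            .ok()
            .contentType(MediaType.valueOf(contentType != null ? contentType : "application/octet-stream"))
            .body(bytes);
}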

Merging two pdf documents with PDFMergerUtility throws: IOException("Page tree root must be a dictionary")

I have a Spring Boot application and I am trying to merge two PDF files: one that I get as a byte array from another service, and one that I have locally in my resources folder: /static/documents/my-file.pdf. This is the code for how I am getting the byte array from the resource file:
public static byte[] getMyPdfContentForLocale(final Locale locale) {
    byte[] result = new byte[0];
    try {
        final File myFile = new ClassPathResource(TEMPLATES.get(locale)).getFile();
        final Path filePath = Paths.get(myFile.getPath());
        result = Files.readAllBytes(filePath);
    } catch (IOException e) {
        LOGGER.error(format("Failed to get document for locale %s", locale), e);
    }
    return result;
}
I am getting the file and reading its bytes. Later I try to merge the two files with the following code:
PDFMergerUtility pdfMergerUtility = new PDFMergerUtility();
pdfMergerUtility.addSource(new ByteArrayInputStream(offerDocument));
pdfMergerUtility.addSource(new ByteArrayInputStream(merkblattDocument));
ByteArrayOutputStream os = new ByteArrayOutputStream();
pdfMergerUtility.setDestinationStream(os);
pdfMergerUtility.mergeDocuments(null);
os.toByteArray();
But unfortunately it throws an error:
throw new IOException("Page tree root must be a dictionary");
I have checked, and it performs this validation before throwing:
if (!(root.getDictionaryObject(COSName.PAGES) instanceof COSDictionary))
{
    throw new IOException("Page tree root must be a dictionary");
}
And I really have no idea what this means or how to fix it.
The strangest thing is that I created a totally new project and tried the same code to merge the same two documents, and it works!
Additionally, here is what I have tried:
- Changing the Spring Boot version
- Calling pdfMergerUtility.mergeDocuments(setupMainMemoryOnly())
- Calling pdfMergerUtility.mergeDocuments(setupTempFileOnly())
- Getting the bytes with a different method, not using Files from java.nio
- Executing the merge in a different thread
- Merging only files stored locally (in resources)
- Merging only the file that I get from the other service - this works, by the way, which is why I am sure that file is fine
Can anyone help with this?
The issue, as Tilman Hausherr said, is the resource filtering you can find in your pom file (Maven's resource filtering corrupts binary resources such as PDFs). If you have a case where you are not allowed to modify this, then this approach will help you:
final String path = new ClassPathResource(TEMPLATES.get(locale)).getFile().getAbsolutePath();
final File file = new File(path);
final Path filePath = Paths.get(file.getPath());
result = Files.readAllBytes(filePath);
and then just pass the bytes to the pdfMergerUtility object (or even the whole file instead of the byte array).
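As an aside, a minimal sketch of an alternative (my assumption, not part of the accepted fix): reading the resource through getInputStream() avoids getFile() entirely, which also keeps working when the application runs from a packed jar:

public static byte[] getMyPdfContentForLocale(final Locale locale) {
    try (InputStream in = new ClassPathResource(TEMPLATES.get(locale)).getInputStream()) {
        return in.readAllBytes(); // Java 9+; buffers the whole PDF in memory
    } catch (IOException e) {
        LOGGER.error(format("Failed to get document for locale %s", locale), e);
        return new byte[0];
    }
}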

GSON can't serialize BufferedImages

There seems to be a problem with serializing BufferedImages to JSON using GSON. I am using Derby to store images. When I query the database I build a JavaBean that has some text fields and one BufferedImage field. I then use GSON to convert the JavaBean into JSON, and this is where the exception occurs.
The Exception message is below:
java.lang.IllegalArgumentException: class sun.awt.image.ByteInterleavedRaster declares multiple JSON fields named maxX
I did find similar problems here: GSON java.lang.IllegalArgumentException: class 'xx' declares multiple JSON fields named 'XX' AND StackOverflowError, and here: class A declares multiple JSON fields.
But the problem is with the AWT library included with Java. I could follow the answers provided in those other Stack Overflow posts if I could access the AWT source code, but how do I do that?
You have to know that not every class is designed to be (de)serialized, especially if (de)serialization is based on the target class's binary structure. Your approach has at least the following weak points:
- the sun.awt.image.ByteInterleavedRaster class fields are not necessarily the same on another JVM/JRE, so you can become vendor-locked;
- persisting binary data in JSON is probably not the best choice (likely huge memory consumption during (de)serialization, plus storage and performance costs) -- maybe a generic blob storage is better for binary data?
- reading an image with Java AWT and writing it back does not guarantee the same binary output: for example, my test image, 1.2K, was deserialized as an image of another size, 0.9K;
- you must choose the target image format for persistence, or detect the most efficient one (how?).
Consider the following simple class:
final class ImageHolder {

    final RenderedImage image;

    ImageHolder(final RenderedImage image) {
        this.image = image;
    }
}
Now you have to create a type adapter to tell Gson how a particular type instance can be stored and restored:
final class RenderedImageTypeAdapter
        extends TypeAdapter<RenderedImage> {

    private static final TypeAdapter<RenderedImage> renderedImageTypeAdapter = new RenderedImageTypeAdapter().nullSafe();

    private RenderedImageTypeAdapter() {
    }

    static TypeAdapter<RenderedImage> getRenderedImageTypeAdapter() {
        return renderedImageTypeAdapter;
    }

    @Override
    @SuppressWarnings("resource")
    public void write(final JsonWriter out, final RenderedImage image)
            throws IOException {
        // Intermediate buffer
        final ByteArrayOutputStream output = new ByteArrayOutputStream();
        // By the way, how to pick the target image format? BMP takes more space, PNG takes more time, JPEG is lossy...
        ImageIO.write(image, "PNG", output);
        // Not sure about this, but converting to base64 is more JSON-friendly
        final Base64.Encoder encoder = Base64.getEncoder();
        // toByteArray() returns a copy, not the original array (x2 more memory)
        // + creating a string requires more memory for the String internal buffer (x3 more memory)
        final String imageBase64 = encoder.encodeToString(output.toByteArray());
        out.value(imageBase64);
    }

    @Override
    public RenderedImage read(final JsonReader in)
            throws IOException {
        // The same in reverse order
        final String imageBase64 = in.nextString();
        final Base64.Decoder decoder = Base64.getDecoder();
        final byte[] input = decoder.decode(imageBase64);
        return ImageIO.read(new ByteArrayInputStream(input));
    }
}
Note that Gson is currently not very well designed to support byte transformations, although this might improve in the future if fixed.
Example use:
private static final Gson gson = new GsonBuilder()
        .registerTypeHierarchyAdapter(RenderedImage.class, getRenderedImageTypeAdapter())
        .create();

public static void main(final String... args)
        throws IOException {
    try ( final InputStream inputStream = getPackageResourceInputStream(Q43301580.class, "sample.png") ) {
        final RenderedImage image = ImageIO.read(inputStream);
        final ImageHolder before = new ImageHolder(image);
        final String json = gson.toJson(before);
        System.out.println(json);
        final ImageHolder after = gson.fromJson(json, ImageHolder.class);
        ...
    }
}
Example output (with a really tiny (32x32) PNG file inside):
{"image":"iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAADgklEQVR42t2XXUiTYRTHpxj4kSKShhgYGSihZGIXXYhU5J2BhBIhCH5cCF6oiWhG0k1BpHghgRgoJHiloBKEqFQ3frDNuemaOqdu0+n8mFM3Nzf37z1n+JZUEPlOoQdetvd5L87vOed/Ph4ZznnJzsqQz+uFz+M5HwBrezuUFy9CERoKY3U1jtzuwAFY29pgGxgQ350aDVSXLmFfLud9eVAQTHV1gQNYKi+HMiwM9uFhft/o6MBcTg6fWp+XB93duzhyOOA7POSwyAIR64UnTxhi9+tXfhQhIdBlZ2P2wQM2Tmv11StY3rwJjAYIQl9QAGVUFPZGRzF7/z7kwcGw9ffzt80PHzAZE4ODuTnpAQ50OjgmJ3HkcmE+N5chdr98wfzDh5DLZPyo4uOx+/mz9Bqg+B8b0d6+zSecFeJPInSo1XAbjXAKvxR/yUW4Pz7uV/vEBJ9OffUqNNev49BiYeGp4uLg0usDUwdIUNNpaTDV1op7rqUljvNKYyMLb7G4GIdWa2AAbH19LDIy8vNaefmSBRiQUkynMtXUYLGkBO7lZWx2dTEEnVjURFnZL1CSASyWlmL6xg1okpIwdeUK3CYTNjo7WYCGoiLOeU1yMtxmc2AA1NeuscA829uYTk1lEIJYf/eOIcgzP6tdEgAyRicjtatiY8V9EhdDpKTw/7XmZoYgGEkBzEITIQDzs2dsYPX1a/EbuZq8YG5o8GeG8E2dmIgjp/P0AJxGgku1GRnYVyh479jVdFrRE+vrXGqPl3dvTxoPeO12aDMz2aBDqRT315qa/trV/wTgsdmw1d3NJVSMs+BmOqlYhARXL1dUSA/gWljg9FKGh/u72tgYQ1BqEcjvqtqpAHY+fcLOx4/+durzcTOxvH3LXY1qOUFQ/CnVyAszN2+eGK1OBWCur4cyIgIrL174Xb+1hdl79xiERioqOFRSKf3sQ0MclvXWVmk8sN3b6+9UBsMvQwWtb3fuwD4ywpkwlZDAojNWVUk3lhsrK7Hw+PHJ+AudzKnVwrOzwwYP5ud50JhJT5cs9iLAxvv3UFy4wLVdn58P1eXLP4YKIfWor09GR0MZGYm1lhbpLyYUZ/Pz55i5dQu6rCwYnz4FhYXmNjJKKbYmiHG7p+fsb0aGwkIsC2PWuVzNaJ5j1Q8Oni0AVTkKCbmffs/8cuoVlK9/9IjHrP/qdvyn9R0SEM4flWsmCwAAAABJRU5ErkJggg\u003d\u003d"}
I think there are too many flaws here, so I would strongly recommend you redesign your binary storage if possible and store binary content as-is.

Read PDVInputStream dicomObject information on onCStoreRQ association request

I am trying to read (and then store to a 3rd-party local db) certain DICOM object tags during an incoming association request.
For accepting association requests and storing my DICOM files locally, I have used a modified version of the dcmrcv() tool. More specifically, I have overridden the onCStoreRQ method like this:
@Override
protected void onCStoreRQ(Association association, int pcid, DicomObject dcmReqObj,
                          PDVInputStream dataStream, String transferSyntaxUID,
                          DicomObject dcmRspObj)
        throws DicomServiceException, IOException {
    final String classUID = dcmReqObj.getString(Tag.AffectedSOPClassUID);
    final String instanceUID = dcmReqObj.getString(Tag.AffectedSOPInstanceUID);
    config = new GlobalConfig();
    final File associationDir = config.getAssocDirFile();
    final String prefixedFileName = instanceUID;
    final String dicomFileBaseName = prefixedFileName + DICOM_FILE_EXTENSION;
    File dicomFile = new File(associationDir, dicomFileBaseName);
    assert !dicomFile.exists();
    final BasicDicomObject fileMetaDcmObj = new BasicDicomObject();
    fileMetaDcmObj.initFileMetaInformation(classUID, instanceUID, transferSyntaxUID);
    final DicomOutputStream outStream = new DicomOutputStream(new BufferedOutputStream(new FileOutputStream(dicomFile), 600000));
    // I would like to extract some tags from the incoming DICOM object somewhere here.
    // When I try to do it using dataStream, my DICOM files get corrupted!
    //System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
    try {
        outStream.writeFileMetaInformation(fileMetaDcmObj);
        dataStream.copyTo(outStream);
    } finally {
        outStream.close();
    }
    dicomFile.renameTo(new File(associationDir, dicomFileBaseName));
    System.out.println("DICOM file name: " + dicomFile.getName());
}

@Override
public void associationAccepted(final AssociationAcceptEvent associationAcceptEvent) {
    ....
}

@Override
public void associationClosed(final AssociationCloseEvent associationCloseEvent) {
    ...
}
I would like, somewhere in this code, to intercept a method which reads dataStream, parses specific tags, and stores them to a local database.
However, wherever I try to put a piece of code that manipulates (just reads, for a start) dataStream, my DICOM files get corrupted!
PDVInputStream implements java.io.InputStream ....
Even if I just put a:
System.out.println("StudyInstanceUID: " + dataStream.readDataset().getString(Tag.StudyInstanceUID));
before copying dataStream to outStream, my DICOM files get corrupted (1KB in size) ...
How am I supposed to use dataStream in a C-STORE-RQ association request to extract some information?
I hope my question is clear.
The PDVInputStream is probably a PDUDecoder class. You'll have to reset the position when using the input stream multiple times.
A better solution might be to read the DICOM object into memory and use it for both purposes. Something akin to:
DicomObject dcmobj = dataStream.readDataset();
String whatYouWant = dcmobj.getString(Tag.whatever); // Tag.whatever stands in for the tag you need
dcmobj.initFileMetaInformation(transferSyntaxUID);
outStream.writeDicomFile(dcmobj);
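A hedged sketch of how the try block in onCStoreRQ might be rearranged along these lines (assuming the dcm4che2 API from the question; the database call is a hypothetical placeholder):

// Read the whole dataset into memory once, instead of streaming it twice.
DicomObject dcmobj = dataStream.readDataset();
// Extract whatever tags you need for the local database.
String studyInstanceUID = dcmobj.getString(Tag.StudyInstanceUID);
// storeToLocalDb(studyInstanceUID); // hypothetical persistence call
try {
    // Write the same in-memory object to disk; no second read of dataStream.
    dcmobj.initFileMetaInformation(transferSyntaxUID);
    outStream.writeDicomFile(dcmobj);
} finally {
    outStream.close();
}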

How to extract a .gz file dynamically in Java?

In http://www.newegg.com/Siteindex_USA.xml lots of urls of .gz-files are given, like this:
<loc>
http://www.newegg.com//Sitemap/USA/newegg_sitemap_product01.xml.gz
</loc>
I want to extract these dynamically. I don't want to store them locally; I just want to extract them and store the contained data in a database.
Edit:
I am getting an exception in the following method:
private void processGzip(URL url, byte[] response) throws MalformedURLException,
        IOException, UnknownFormatException {
    if (DEBUG) System.out.println("Processing gzip");
    InputStream is = new ByteArrayInputStream(response);
    // Remove .gz ending
    String xmlUrl = url.toString().replaceFirst("\\.gz$", "");
    if (DEBUG) System.out.println("XML url = " + xmlUrl);
    InputStream decompressed = new GZIPInputStream(is);
    InputSource in = new InputSource(decompressed);
    in.setSystemId(xmlUrl);
    processXml(url, in);
    decompressed.close();
}
Simply wrap the input stream in a GZIPInputStream, and it'll decompress the data as you read it.
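A minimal sketch of that advice, streaming one of the sitemap URLs from the question straight through GZIPInputStream without storing anything locally (the parsing step is left as a placeholder):

import java.io.InputStream;
import java.net.URL;
import java.util.zip.GZIPInputStream;

public class GzipSitemapReader {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.newegg.com//Sitemap/USA/newegg_sitemap_product01.xml.gz");
        // GZIPInputStream decompresses on the fly as the bytes arrive over HTTP.
        try (InputStream decompressed = new GZIPInputStream(url.openStream())) {
            // Feed the decompressed XML to a parser here, e.g.
            // processXml(url, new InputSource(decompressed));
            // and write the extracted records to the database.
        }
    }
}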
