Is ImageIO.read(file) vulnerable to attack? - java

A user is allowed to upload only image files to a servlet. After I temporarily store the file on the server, I read it to make sure it is an image, and maybe do some extra manipulation, e.g.:
try {
    InputStream input = uploadedFile.getInputStream();
    if (ImageIO.read(input) != null) {
        doStuff();
    } else {
        // Not an image
    }
} catch (Exception e) {
}
Is this vulnerable to any attacks? Or is this considered safe?

Is there a way to upload/download a file from/to Dropbox while keeping the modification date of the file?

I'm trying to build a sync system for my writing application, so that I can synchronize my text files with a Dropbox folder and edit them from my computer.
Thing is, when a file is uploaded, its modification date corresponds to that of the upload time, not the last time the file's content was modified, and it looks as if the Dropbox file was modified more recently than the local file. Same thing for download, as the local version gets a more recent modification date than the Dropbox one.
This makes things complicated when I want to compare dates to determine which version is the most recent one between the local and the network versions, and if I need to upload the local version or download the network one to be up-to-date.
Is there a way to keep the modification date of the original file? Currently I'm using these functions, but maybe I should use a completely different method.
public void uploadFile(String local_path, String db_path) {
    try (InputStream in = new FileInputStream(local_path)) {
        client.files().uploadBuilder(db_path)
                .withMode(WriteMode.OVERWRITE)
                .uploadAndFinish(in);
    }
    catch (FileNotFoundException fne) { fne.printStackTrace(); }
    catch (IOException ioe) { ioe.printStackTrace(); }
    catch (DbxException dbxe) { dbxe.printStackTrace(); }
}
public void downloadFile(String db_path, String local_path) {
    try {
        File dest = new File(local_path);
        try (OutputStream outputStream = new FileOutputStream(dest)) {
            client.files().download(db_path).download(outputStream);
        }
    }
    catch (DbxException e) { e.printStackTrace(); }
    catch (IOException e) { e.printStackTrace(); }
}
You can set the clientModified date using UploadBuilder.withClientModified. It's not possible to override the serverModified date though.
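On the download side, one workaround is to stamp the local copy with the remote clientModified date after downloading, so local and Dropbox timestamps stay comparable. A minimal standard-library sketch; it assumes you already obtained the timestamp (e.g. from the SDK's FileMetadata.getClientModified(), converted to millis with Date.getTime()), and applyClientModified is a hypothetical helper, not an SDK call:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TouchDemo {
    // Stamp the local copy with a known modification instant, e.g. the
    // clientModified value read from the downloaded file's metadata.
    static boolean applyClientModified(File localFile, long clientModifiedMillis) {
        return localFile.setLastModified(clientModifiedMillis);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sync", ".txt");
        long stamp = 1_500_000_000_000L; // a hypothetical clientModified instant
        boolean ok = applyClientModified(tmp.toFile(), stamp);
        System.out.println(ok && tmp.toFile().lastModified() == stamp);
        Files.delete(tmp);
    }
}
```

With both sides stamped this way, comparing lastModified against clientModified becomes meaningful for deciding the sync direction.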

KML Layer is not shown in Google Maps

I have a problem displaying certain KMLs in Google Maps: after calling addLayerToMap, the layer is not rendered on the map.
Oddly, when I import the same file into Google MyMaps it works normally, and if I export it from there and load that export in my application's Google Maps, it also displays normally.
I noticed that MyMaps significantly restructures the KML, and the result is even smaller (in number of lines and, consequently, in file size).
KML file (original): https://drive.google.com/file/d/1Z4AZMP1xNMgVNNXjK11-kD0gwlPLmJmR/view?usp=sharing
PS: I manually corrected the invalid image paths, but that made no difference.
KML file (parsed by Google MyMaps): https://drive.google.com/file/d/1WPT3ZogzjTNa9ITeZze1cYf3ly4JFpUZ/view?usp=sharing
Method that I'm using to read KML (works with most of the KMLs I tried, including Google's own example):
private void retrieveFileFromResource() {
    try {
        KmlLayer kmlLayer = new KmlLayer(mMap, R.raw.teste3, getActivity());
        kmlLayer.addLayerToMap();
        moveCameraToKml(kmlLayer);
    } catch (IOException e) {
        e.printStackTrace();
    } catch (XmlPullParserException e) {
        e.printStackTrace();
    }
}
I also tried adding the components to the map manually (polylines, polygons, markers, etc.) but did not succeed.
Try using the code below to check whether the KML file is present under res/raw and, if it is, show it on Google Maps.
private void retrieveFileFromResources() {
    try {
        // Resource name must be a string, and the resource type for raw files is "raw"
        int check = getResources().getIdentifier("teste3", "raw", getPackageName());
        if (check != 0) {
            InputStream inputStream = getResources().openRawResource(check);
            KmlLayer kmlLayer = new KmlLayer(mMap, inputStream, getApplicationContext());
            kmlLayer.addLayerToMap();
        } else {
            Toast.makeText(this, "Requested KML layer not available", Toast.LENGTH_LONG).show();
        }
    } catch (IOException e) {
        e.printStackTrace();
    } catch (XmlPullParserException e) {
        e.printStackTrace();
    }
}

Delete an image on java server

I am doing a project that consists of a Java server and a web page. When an object is deleted from the web page, I'd like the server to delete the image associated with that object. The images are stored in the images folder inside the web folder. But when I try to delete an image on the server, it says the file is in use by another process, because the web server thread (I use Grizzly) is holding it open, so I can't delete it.
// save the image
private void saveImage(Eetakemon e) {
    String base64Image = e.getImage().split(",")[1];
    byte[] imageBytes = javax.xml.bind.DatatypeConverter.parseBase64Binary(base64Image);
    File imageFile = new File("WEB\\images\\" + e.getName() + ".png");
    try {
        BufferedImage bufferedImage = ImageIO.read(new ByteArrayInputStream(imageBytes));
        ImageIO.write(bufferedImage, "png", imageFile);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
// delete the image
private void deleteImage(Eetakemon e) {
    try {
        Files.deleteIfExists(Paths.get("WEB\\images\\" + e.getName() + ".png"));
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
The functions are called inside the create and delete methods, respectively.
Thank you
You should use a separate folder in your filesystem with read/write access and keep your web server responsible for serving only static content: static images, HTML, CSS, and JS files.
To handle dynamic image files that can eventually be deleted during runtime keep the business logic in a separate service such as a REST API or a simple Servlet.
You can temporarily move the deleted images to a separate folder to be marked for deletion by a later scheduled batch job.
example of a service to delete an image:
public void removeFiles(List<String> fileNames) {
    try {
        String trashFolderLocation = ConfigurationManager.getInstance().getConfig().getImgFileTrashPath();
        String uploadedFileLocation = ConfigurationManager.getInstance().getConfig().getFilePath();
        FileUtil.moveFilesToFolder(uploadedFileLocation, trashFolderLocation, fileNames);
    } catch (FileException e) {
        logException(e);
    }
}
In FileUtil:
public static boolean moveFilesToFolder(String locationFrom, String locationTo, List<String> fileNames) throws FileException {
    try {
        for (String fileName : fileNames) {
            File afile = new File(locationFrom + fileName);
            if (!afile.renameTo(new File(locationTo + fileName))) {
                return false;
            }
        }
    } catch (Exception e) {
        throw new FileException(e);
    }
    return true;
}
You are running into a Windows behavior: files that are in use are locked. You need to locate the process that is holding the file open and close that resource. This is your only viable option unless you can run on a system that doesn't lock files in use. Linux/Unix systems don't have this behavior and will let you delete a file even while it is in use.
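To illustrate the point: once the handle that wrote the file is closed (try-with-resources guarantees this), the lock is released and the delete succeeds. A minimal self-contained sketch:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteAfterClose {
    public static void main(String[] args) throws IOException {
        Path img = Files.createTempFile("pic", ".png");
        // try-with-resources releases the handle before we reach the delete
        try (OutputStream out = Files.newOutputStream(img)) {
            out.write(new byte[] {1, 2, 3});
        }
        boolean deleted = Files.deleteIfExists(img); // no open handle, so this succeeds
        System.out.println(deleted); // prints true
    }
}
```

In the question's code the streams are never explicitly closed, so on Windows the saved image can stay locked until the garbage collector happens to finalize the stream.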

java: read and write on a file at the same time, concurrency on file [duplicate]

I have read many posts about reading and writing a file in Java ME, but NOT simultaneously. I have a special use case where my log file (either the full file or just a portion of it) is uploaded to the server on a regular basis. This must continue without hampering the ongoing logging of the application into the same file.
The code sample is as under:
boolean writing = true;
boolean reading = true;

void main() {
    new Thread("THREAD-FILE-READ") {
        public void run() {
            InputStream instream = getFileInStream();
            if (null != instream) {
                while (reading) {
                    try {
                        try {
                            synchronized (READ_LOCK) {
                                READ_LOCK.wait();
                            }
                        } catch (InterruptedException ex) {
                            ex.printStackTrace();
                        }
                        if (writtenCharsLen > 0) {
                            byte[] bytes = new byte[writtenCharsLen];
                            instream.read(bytes, 0, writtenCharsLen);
                            System.out.println("Read=" + new String(bytes));
                            bytes = null;
                            writtenCharsLen = 0;
                        }
                    } catch (IOException ioe) {
                        ioe.printStackTrace();
                    }
                }
            }
            closeStream(instream);
        }
    }.start();

    new Thread("THREAD-FILE-WRITE") {
        public void run() {
            OutputStream outstream = getFileOutStream();
            if (null != outstream) {
                while (writing) {
                    try {
                        byte[] str = randomString();
                        if (null != str) {
                            writtenCharsLen = str.length;
                            System.out.println("Write=" + new String(str));
                            outstream.write(str);
                            str = null;
                        }
                    } catch (IOException ex) {
                        ex.printStackTrace();
                    } finally {
                        notifyReadStream();
                    }
                    try {
                        synchronized (WRITE_LOCK) {
                            WRITE_LOCK.wait();
                        }
                    } catch (InterruptedException ex) {
                        ex.printStackTrace();
                    }
                }
            }
            closeStream(outstream);
        }
    }.start();
}

void notifyReadStream() {
    try {
        synchronized (READ_LOCK) {
            READ_LOCK.notify();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

void notifyWriteStream() {
    try {
        synchronized (WRITE_LOCK) {
            WRITE_LOCK.notify();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
In the above code I will replace the System.out.println calls for read and write with proper network I/O calls.
PS: Since this code will run over multiple files and on a multitude of devices, I need the modification to be as compact as possible to keep my runtime heap low. Also, this code runs for the application's whole life cycle, so closing and reopening the file in between is out of consideration.
Present Undesired Result:
The read and write threads are both running and printing their read/write output. The read thread is reading from the position the writing thread has just written. I am not facing any exception in this code, but the result is undesired. I have also tried synchronizing the read and write streams, but that throws IllegalMonitorStateException.
Expected Result:
Reading of the stream must be triggered only after writing into the stream is completed, and the read thread must be able to read from any position in the file.
Any help or pointers would be appreciated.
EDIT: I was able to synchronize the read and write streams using different monitors, but I still feel I could have done better with a single monitor. I will try that sometime later.
I will attack this part of the problem:
Present Undesired Result: The read and write threads are both running and printing their read/write output. The read thread is reading from the position the writing thread has just written. I am not facing any exception in this code, but the result is undesired. I have also tried synchronizing the read and write streams, but that throws IllegalMonitorStateException.
If you have synchronized the access using monitors, i.e. the reader calls someObject.wait() and the writer calls someObject.notify(), remember that you have to wrap these calls in a synchronized block on someObject:
synchronized (someObject) {
    someObject.wait();
}

synchronized (someObject) {
    someObject.notify();
}
This is the cause for IllegalMonitorStateException.
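A minimal runnable sketch of the correct pattern, using a guard flag so the hand-off is not lost if notify() happens to run before wait(), and so spurious wakeups are tolerated (all names here are illustrative):

```java
public class HandoffDemo {
    // One shared monitor guards the flag; wait() and notify() are only legal
    // while the calling thread holds that monitor's lock.
    static String handoff() throws InterruptedException {
        final Object lock = new Object();
        final boolean[] dataReady = {false};
        final StringBuilder out = new StringBuilder();

        Thread reader = new Thread(() -> {
            synchronized (lock) {           // hold the monitor before wait()
                while (!dataReady[0]) {     // loop guards against spurious wakeups
                    try {
                        lock.wait();
                    } catch (InterruptedException ignored) {
                    }
                }
                out.append("read");
            }
        });
        reader.start();

        synchronized (lock) {               // hold the monitor before notify()
            dataReady[0] = true;
            lock.notify();
        }
        reader.join();
        return out.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handoff());      // prints read
    }
}
```

Calling wait() or notify() outside the synchronized block on the same object is exactly what produces the IllegalMonitorStateException described in the question.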
Your first problem is that you are setting writtenCharsLen before you write the data. If your read thread sees it being non-zero before the write thread actually writes it, you have a problem. Move writtenCharsLen = str.length after the write.
Another problem I see in your sample is that the threads never yield control. They will be CPU hogs.
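As an aside: on Java SE (not Java ME, which lacks java.util.concurrent) a BlockingQueue would replace the hand-rolled wait/notify pairing entirely, since the consumer blocks in take() instead of polling a shared length field. A hedged sketch of that alternative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    // take() blocks until an element is available, so the consumer never
    // busy-spins and never sees a chunk before it has been fully handed over.
    static String transferOnce() throws InterruptedException {
        BlockingQueue<byte[]> chunks = new ArrayBlockingQueue<>(16);
        Thread writer = new Thread(() -> {
            try {
                chunks.put("hello".getBytes()); // blocks only if the queue is full
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();
        byte[] chunk = chunks.take();           // blocks until the writer delivers
        writer.join();
        return new String(chunk);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transferOnce());     // prints hello
    }
}
```

The queue also removes the ordering bug around writtenCharsLen, because each chunk is published atomically as one element.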

Use FTP4J to resume upload progress and also get how many percent uploaded

Does anyone have an example of how to upload with ftp4j that supports resume, and of how to show a progress bar?
I have just implemented something along the lines of the following code.
I discovered that if you use compressed streams, you cannot rely on the transferred byte count reported by the listener, because the server may wait for further data in order to decode the previously received blocks.
However, even if the streams are plain, in some cases of lost connection you still cannot rely on the total transferred bytes as reported by the listener. So I finally realized that the best way is to ask the server how many bytes it has received.
In my template, the retry logic is more general and also covers the control connection with the FTP server. You could limit the retry loop to the data connection, i.e. the upload.
FTPClient ftpClient = null;
long writtenBytes;
boolean isCompletedStartingDelete = false; // Our policy is overwrite at first
for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    try {
        ftpClient = getFTPClient();
        configureFtpClient(ftpClient);
        doLogin(ftpClient);
        ftpClient.changeDirectory(remoteDirectory);
        if (!isCompletedStartingDelete) { // Our policy is overwrite at first
            try {
                ftpClient.deleteFile(file);
                isCompletedStartingDelete = true;
            } catch (FTPException e) {
                // Maybe you should check whether this exception is really thrown
                // because the file does not exist.
                isCompletedStartingDelete = true;
            }
        }
        try {
            // Ask the server how many bytes it already holds
            writtenBytes = ftpClient.fileSize(fileName);
        } catch (Exception e) {
            writtenBytes = 0;
        }
        if (ftpClient.isResumeSupported()) {
            // With this template you could also use APPEND
            ftpClient.upload(file, writtenBytes, listener);
        } else {
            ftpClient.upload(file, listener);
        }
        break; // Success: leave the retry loop
    } catch (FTPAbortedException e) {
        // User aborted the operation
        break;
    } catch (Exception e) {
        if (attempt == MAX_ATTEMPTS - 1) { // The last attempt
            throw e;
        } else {
            // Mask the failure and retry
            // LOG
        }
    } finally {
        if (ftpClient != null && ftpClient.isConnected()) {
            try { ftpClient.disconnect(true); } catch (Throwable t) { /* LOG */ }
        }
    }
}
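As for the progress bar: the ftp4j transfer listener only reports the byte count of each chunk, so the percentage has to be derived from the file's total size, and when resuming, the counter should start at the byte count already on the server. A small stdlib-only sketch of that bookkeeping (ProgressTracker is a hypothetical helper, not part of ftp4j):

```java
public class ProgressTracker {
    private final long totalBytes;
    private long transferred;

    // When resuming, pass the byte count the server already holds as alreadyTransferred.
    ProgressTracker(long totalBytes, long alreadyTransferred) {
        this.totalBytes = totalBytes;
        this.transferred = alreadyTransferred;
    }

    // Call this from the transfer listener with each chunk's length;
    // returns the overall percentage completed.
    int update(int chunkLength) {
        transferred += chunkLength;
        return (int) (transferred * 100 / totalBytes);
    }

    public static void main(String[] args) {
        ProgressTracker t = new ProgressTracker(2048, 0);
        System.out.println(t.update(512));   // prints 25
        System.out.println(t.update(1536));  // prints 100
    }
}
```

Feed update() from the listener's per-chunk callback and push the returned value into your progress bar widget.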
