I'm trying something new.
There is an application that sends data to a memory-mapped file located at Local\MemFileName,
and I would like to read it in Java.
I tried some tutorials like https://www.baeldung.com/java-mapped-byte-buffer and https://howtodoinjava.com/java7/nio/memory-mapped-files-mappedbytebuffer/,
but they all seem to read a regular file from within the JVM, or I did not understand them...
How can I read the content of the file located at Local\MemFileName in the Windows system?
Thanks!
Following is example code of what I tried:
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import org.apache.commons.text.StringEscapeUtils; // Apache Commons Text

public class Main {
    private static final String IRSDKMEM_MAP_FILE_NAME = StringEscapeUtils.unescapeJava("Local\\IRSDKMemMapFileName");
    private static final String IRSDKDATA_VALID_EVENT = StringEscapeUtils.unescapeJava("Local\\IRSDKDataValidEvent");
    public static final CharSequence charSequence = "Local\\IRSDKMemMapFileName";

    public static void main(String[] args) throws IOException, InterruptedException {
        System.out.println(charSequence);
        try (RandomAccessFile file = new RandomAccessFile(new File(IRSDKMEM_MAP_FILE_NAME), "r")) {
            // Get file channel in read-only mode
            FileChannel fileChannel = file.getChannel();
            // Get direct byte buffer access using channel.map() operation
            MappedByteBuffer buffer = fileChannel.map(FileChannel.MapMode.READ_ONLY, 0, fileChannel.size());
            // The buffer now reads the file as if it were loaded in memory.
            System.out.println("Loaded " + buffer.isLoaded()); // prints false
            System.out.println("capacity " + buffer.capacity()); // size based on the content size of the file
            // You can read the file from this buffer the way you like.
            for (int i = 0; i < buffer.limit(); i++) {
                System.out.println((char) buffer.get()); // print the content of the file
            }
        }
    }
}
To read a memory mapped file:
Open a FileChannel on the file using FileChannel.open.
Invoke the map method on the FileChannel to create a MappedByteBuffer covering the area of the file you want to read.
Read the data from the MappedByteBuffer.
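A minimal sketch of those three steps for an ordinary file on disk (the path here is a placeholder):

try (FileChannel channel = FileChannel.open(Paths.get("C:/data/somefile.bin"), StandardOpenOption.READ)) {
    MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    while (buffer.hasRemaining()) {
        System.out.print((char) buffer.get()); // read the mapped bytes one by one
    }
}

Note that this still requires a path the JVM can open as a regular file; a Windows shared-memory section such as Local\MemFileName is not one, which is what the workaround below addresses.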
The solution for me was to use a WindowsService class implementing methods from the JNA library, as you can see here:
My Library
With this I could open a file mapping in the Windows system.
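A minimal sketch of that JNA approach, assuming JNA's platform Kernel32 bindings; the class name, mapping name, and fixed size below are assumptions for illustration:

import com.sun.jna.Pointer;
import com.sun.jna.platform.win32.Kernel32;
import com.sun.jna.platform.win32.WinNT;

public class SharedMemoryReader {
    private static final int FILE_MAP_READ = 0x0004; // WinAPI access flag

    // Open the named shared-memory section, copy 'size' bytes out, and release the handles.
    public static byte[] read(String name, int size) {
        WinNT.HANDLE handle = Kernel32.INSTANCE.OpenFileMapping(FILE_MAP_READ, false, name);
        if (handle == null) {
            throw new IllegalStateException("OpenFileMapping failed: " + Kernel32.INSTANCE.GetLastError());
        }
        try {
            Pointer view = Kernel32.INSTANCE.MapViewOfFile(handle, FILE_MAP_READ, 0, 0, size);
            try {
                return view.getByteArray(0, size); // copy the shared memory into a Java array
            } finally {
                Kernel32.INSTANCE.UnmapViewOfFile(view);
            }
        } finally {
            Kernel32.INSTANCE.CloseHandle(handle);
        }
    }
}

Usage might then look like byte[] data = SharedMemoryReader.read("Local\\IRSDKMemMapFileName", 1024); in practice the size would come from the SDK's own header, which this sketch does not parse.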
All the previous answers were correct for a file accessible from the JVM, but for memory shared from outside the JVM it was impossible.
Thanks!
I have a web app where I need to be able to serve the user an archive of multiple files. I've set up a generic ArchiveExporter, and made a ZipArchiveExporter. Works beautifully! I can stream my data to my server, and archive the data and stream it to the user all without using much memory, and without needing a filesystem (I'm on Google App Engine).
Then I remembered about the whole zip64 thing with 4gb zip files. My archives can get potentially very large (high res images), so I'd like to have an option to avoid zip files for my larger input.
I checked out org.apache.commons.compress.archivers.tar.TarArchiveOutputStream and thought I had found what I needed! Sadly, when I checked the docs and ran into some errors, I quickly found out you MUST set the size of each entry before you stream it. This is a problem because the data is being streamed to me with no way of knowing the size beforehand.
I tried counting and returning the written bytes from export(), but TarArchiveOutputStream expects a size in TarArchiveEntry before writing to it, so that obviously doesn't work.
I can use a ByteArrayOutputStream and read each entry entirely before writing its content, so I know its size, but my entries can potentially get very large; and this is not very polite to the other processes running on the instance.
I could use some form of persistence, upload the entry, and query the data size. However, that would be a waste of my google storage api calls, bandwidth, storage, and runtime.
I am aware of this SO question asking almost the same thing, but he settled for using zip files and there is no more relevant information.
What is the ideal solution to creating a tar archive with entries of unknown size?
public abstract class ArchiveExporter<T extends OutputStream> extends Exporter { // base class
    public abstract void export(OutputStream out) throws IOException; // from Exporter interface
    protected abstract void archiveItems(T t) throws IOException;
}

public class ZipArchiveExporter extends ArchiveExporter<ZipOutputStream> { // zip class, works as intended
    @Override
    public void export(OutputStream out) throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(out, Charsets.UTF_8)) {
            zos.setLevel(0);
            archiveItems(zos);
        }
    }

    @Override
    protected void archiveItems(ZipOutputStream zos) throws IOException {
        zos.putNextEntry(new ZipEntry(exporter.getFileName())); // 'exporter' presumably comes from the Exporter base
        exporter.export(zos);
        // chained call to export from another exporter, like a JSON exporter for instance
        zos.closeEntry();
    }
}

public class TarArchiveExporter extends ArchiveExporter<TarArchiveOutputStream> {
    @Override
    public void export(OutputStream out) throws IOException {
        try (TarArchiveOutputStream taos = new TarArchiveOutputStream(out, "UTF-8")) {
            archiveItems(taos);
        }
    }

    @Override
    protected void archiveItems(TarArchiveOutputStream taos) throws IOException {
        TarArchiveEntry entry = new TarArchiveEntry(exporter.getFileName());
        //entry.setSize(?);
        taos.putArchiveEntry(entry);
        exporter.export(taos);
        taos.closeArchiveEntry();
    }
}
EDIT this is what I was thinking with the ByteArrayOutputStream. It works, but I cannot guarantee I will always have enough memory to store the whole entry at once, hence my streaming efforts. There has to be a more elegant way of streaming a tarball! Maybe this is a question more suited for Code Review?
protected void byteArrayOutputStreamApproach(TarArchiveOutputStream taos) throws IOException {
    TarArchiveEntry entry = new TarArchiveEntry(exporter.getFileName());
    try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
        exporter.export(baos);
        byte[] data = baos.toByteArray();
        // holding the ENTIRE entry in memory. What if it's huge? What if it has more than Integer.MAX_VALUE bytes? :[
        int len = data.length;
        entry.setSize(len);
        taos.putArchiveEntry(entry);
        taos.write(data);
        taos.closeArchiveEntry();
    }
}
EDIT This is what I meant by uploading the entry to a medium (Google Cloud Storage in this case) to accurately query the whole size. Seems like major overkill for what seems like a simple problem, but this doesn't suffer from the same ram problems as the solution above. Just at the cost of bandwidth and time. I hope someone smarter than me comes by and makes me feel stupid soon :D
protected void googleCloudStorageTempFileApproach(TarArchiveOutputStream taos) throws IOException {
    TarArchiveEntry entry = new TarArchiveEntry(exporter.getFileName());
    String name = NameHelper.getRandomName(); // get a random name for temp storage
    BlobInfo blobInfo = BlobInfo.newBuilder(StorageHelper.OUTPUT_BUCKET, name).build(); // prepare upload of temp file
    WritableByteChannel wbc = ApiContainer.storage.writer(blobInfo); // get WriteChannel for temp file
    try (OutputStream out = Channels.newOutputStream(wbc)) {
        exporter.export(out); // stream items to the remote temp file
    } finally {
        wbc.close();
    }
    Blob blob = ApiContainer.storage.get(blobInfo.getBlobId());
    long size = blob.getSize(); // accurately query the size after upload
    entry.setSize(size);
    taos.putArchiveEntry(entry);
    ReadableByteChannel rbc = blob.reader(); // get ReadChannel for temp file
    try (InputStream in = Channels.newInputStream(rbc)) {
        IOUtils.copy(in, taos); // stream back to the local tar stream from the remote temp file
    } finally {
        rbc.close();
    }
    blob.delete(); // delete the remote temp file
    taos.closeArchiveEntry();
}
I've been looking at a similar issue, and this is a constraint of the tar file format, as far as I can tell.
Tar files are written as a stream, and the metadata (filenames, permissions, etc.) is written between the file data (i.e. metadata 1, filedata 1, metadata 2, filedata 2, etc.). The program that extracts the data reads metadata 1, then starts extracting filedata 1, but it has to have a way of knowing when it's done. This could be done in a number of ways; tar does it by having the length in the metadata.
Depending on your needs, and on what the recipient expects, there are a few options that I can see (not all apply to your situation):
As you mentioned, load an entire file, work out the length, then send it.
Divide the file into blocks of a predefined length (which fits into memory), then tar them up as file1-part1, file1-part2, etc.; the last block would be short (see the sketch after this list).
Divide the file into blocks of a predefined length (which don't need to fit into memory), then pad the last block to that size with something appropriate.
Work out the maximum possible size of the file, and pad to that size.
Use a different archive format.
Make your own archive format, which does not have this limitation.
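For the second option, a minimal sketch of writing one stream as size-limited tar entries. This assumes the data can be pulled from an InputStream (with the push-style export() above, a pipe would be needed in between); the chunk size and part-naming scheme are assumptions:

import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;

public class ChunkedTarWriter {
    private static final int CHUNK_SIZE = 64 * 1024 * 1024; // 64 MB per part; an assumption

    public static void writeChunked(TarArchiveOutputStream taos, String baseName, InputStream source)
            throws IOException {
        byte[] buffer = new byte[CHUNK_SIZE];
        int part = 1;
        int filled;
        while ((filled = readFully(source, buffer)) > 0) {
            TarArchiveEntry entry = new TarArchiveEntry(baseName + "-part" + part++);
            entry.setSize(filled); // the size is known: it's just this chunk
            taos.putArchiveEntry(entry);
            taos.write(buffer, 0, filled);
            taos.closeArchiveEntry();
        }
    }

    // Fill the buffer as far as possible; returns bytes read (0 at end of stream).
    private static int readFully(InputStream in, byte[] buffer) throws IOException {
        int total = 0;
        int n;
        while (total < buffer.length && (n = in.read(buffer, total, buffer.length - total)) != -1) {
            total += n;
        }
        return total;
    }
}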
Interestingly, gzip does not have predefined limits, and multiple gzips can be concatenated together, each with its own "original filename". Unfortunately, standard gunzip extracts all the resulting data into one file, using (I believe) the first filename.
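A small sketch of that gzip behavior with java.util.zip (note that GZIPOutputStream does not expose the "original filename" header field, and GZIPInputStream reads concatenated members back as one continuous stream):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ConcatGzip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // write two independent gzip members back to back into the same stream
        try (GZIPOutputStream first = new GZIPOutputStream(baos)) {
            first.write("first part, ".getBytes("UTF-8"));
        }
        try (GZIPOutputStream second = new GZIPOutputStream(baos)) {
            second.write("second part".getBytes("UTF-8"));
        }
        // GZIPInputStream decompresses both members as one stream
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(baos.toByteArray()))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            System.out.println(out.toString("UTF-8")); // prints "first part, second part"
        }
    }
}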
I have a small Java application running inside IBM Integration Bus, which is installed on an AIX server with the character encoding set to ISO-8859-1.
My application is creating a ZIP file with the filenames received as a parameter. I have a file called "Websërvícès Guide.pdf" in the filesystem which I want to zip, but I'm unable to.
This is my code:
String zipFilePath = "/tmp/EventAttachments_2018.01.25.11.39.34.zip";
// Streams buffer
int BUFFER = 2048;
// Open I/O buffered streams
BufferedInputStream origin = null;
FileOutputStream dest = new FileOutputStream(zipFilePath);
ZipOutputStream out = new ZipOutputStream(new BufferedOutputStream(dest));
byte[] data = new byte[BUFFER];
// Open a file stream to my file
Path currentFilePath = Paths.get("/tmp/Websërvícès Guide.pdf");
InputStream fi = Files.newInputStream(currentFilePath, StandardOpenOption.READ);
origin = new BufferedInputStream(fi, BUFFER);
ZipEntry entry = new ZipEntry("Websërvícès Guide.pdf");
out.putNextEntry(entry);
int count;
while ((count = origin.read(data, 0, BUFFER)) != -1) {
    out.write(data, 0, count);
}
origin.close();
out.close();
This throws a "File Not Found" exception at the Files.newInputStream line.
I have read that Java does not behave properly when checking whether files with special characters in their names exist, and so on. I'm not able to change the JVM parameters, as the code is executed inside an IBM JVM.
Any idea on how to solve this issue and pack the file properly into the ZIP?
Thank you
Can you try passing the following flag when running your Java program:
-Dsun.jnu.encoding=UTF-8
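For example (a hypothetical launch command; the jar name is a placeholder):

java -Dsun.jnu.encoding=UTF-8 -jar yourapp.jar

sun.jnu.encoding is the property the JVM uses when converting file names and other OS-level strings, which is why it can matter for files with special characters in their names.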
First: In your code, you are not taking care of any exceptions that could be thrown. I would suggest handling the exceptions inside the method, or making the method throw them and handling them at a higher level. But somewhere you need to handle the exception.
Maybe that's already the problem (see https://stackoverflow.com/a/155655/8896833).
Second: According to ISO-8859-1, all the characters used in your filename should be covered. Are you really sure about the path your program is working in at the moment it tries to access the file?
Try using the URLDecoder class method decode(String s, String enc);.
For example:
String path = URLDecoder.decode("Websërvícès Guide.pdf", "UTF-8");
I have a piece of code which uses the deflate algorithm to compress a file:
public static File compressOld(File rawFile) throws IOException
{
    File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
            + "_compressed." + rawFile.getName().split("\\.")[1]);
    InputStream inputStream = new FileInputStream(rawFile);
    OutputStream compressedWriter = new DeflaterOutputStream(new FileOutputStream(compressed));
    byte[] buffer = new byte[1000];
    int length;
    while ((length = inputStream.read(buffer)) > 0)
    {
        compressedWriter.write(buffer, 0, length);
    }
    inputStream.close();
    compressedWriter.close();
    return compressed;
}
However, I'm not happy with the OutputStream copying loop since it's the "outdated" way of writing to streams. Instead, I want to use a Java 7 API method such as Files.copy:
public static File compressNew(File rawFile) throws IOException
{
    File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
            + "_compressed." + rawFile.getName().split("\\.")[1]);
    OutputStream compressedWriter = new DeflaterOutputStream(new FileOutputStream(compressed));
    Files.copy(compressed.toPath(), compressedWriter);
    compressedWriter.close();
    return compressed;
}
The latter method, however, does not work correctly; the compressed file is messed up and only a few bytes are copied. How come?
I see mainly two problems.
You copy from the target instead of the source. I think the copying has to be changed to Files.copy(rawFile.toPath(), compressedWriter);.
The Javadoc of copy says: "Note that if the given output stream is Flushable then its flush method may need to invoked after this method completes so as to flush any buffered output." So, you have to call the flush-method of the OutputStream after copy.
Additionally there is one more point. The Javadoc of copy says:
It is strongly recommended that the output stream be promptly closed if an I/O error occurs.
You can close the OutputStream in a finally block to make sure it happens in case of an error. Another possibility is to use try-with-resources, which was introduced in Java 7.
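Putting those points together, a corrected version of compressNew might look like this (a sketch keeping the asker's naming scheme):

public static File compressNew(File rawFile) throws IOException
{
    File compressed = new File(rawFile.getCanonicalPath().split("\\.")[0]
            + "_compressed." + rawFile.getName().split("\\.")[1]);
    // try-with-resources closes the stream promptly, even if an I/O error occurs
    try (OutputStream compressedWriter = new DeflaterOutputStream(new FileOutputStream(compressed)))
    {
        Files.copy(rawFile.toPath(), compressedWriter); // copy from the source, not the target
        compressedWriter.flush(); // flush any buffered output after the copy
    }
    return compressed;
}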
My code currently uses RandomAccessFile to read a ZIP file. The code is taken from an open source project.
I need to perform the random-access file operations in memory, without creating a physical file on disk, so I need to replace the RandomAccessFile functionality with a FileOutputStream.
This is how the RandomAccessFile object is created:
protected RandomAccessFile file;

public ExtRandomAccessFile(File zipFile) throws IOException {
    this.file = new RandomAccessFile(zipFile, "r");
}
And this is how different positions mapped into the RandomAccessFile are accessed:
int censig = raFile.readInt( fileOffset );
short fileNameLength = raFile.readShort( fileOffset + 28 );
short extraFieldLength = raFile.readShort( fileOffset + 30 );
long fileOffsetPos = fileOffset + 28 + 14;
long fileDataOffset = raFile.readInt( fileOffsetPos );
int locsig = raFile.readInt( fileDataOffset );
Please advise me on how I can replace my code with a FileOutputStream. What mechanism should I use to look up the values?
Thanks
You can use a DataInputStream to read the different values. But be careful, because Java always uses big-endian format, while the zip format stores its values little-endian. If you aren't doing this for educational reasons, I would recommend ZipOutputStream to create zip files.
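As an illustration, a minimal sketch of random-access reads over an in-memory byte array using ByteBuffer instead (the class name is hypothetical); the explicit switch to little-endian matches the zip format's byte order:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InMemoryZipReader {
    private final ByteBuffer buf;

    public InMemoryZipReader(byte[] zipBytes) {
        // wrap the in-memory zip data; zip stores multi-byte values little-endian
        this.buf = ByteBuffer.wrap(zipBytes).order(ByteOrder.LITTLE_ENDIAN);
    }

    public int readInt(long offset) {
        return buf.getInt((int) offset); // absolute read, no position state to manage
    }

    public short readShort(long offset) {
        return buf.getShort((int) offset);
    }
}

Usage then mirrors the original, e.g. int censig = reader.readInt(fileOffset); and so on.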
I would like to create a simple program (in Java) which edits text files, particularly one which inserts arbitrary pieces of text at random positions in a text file. This feature is part of a larger program I am currently writing.
Reading the description of java.io.RandomAccessFile, it appears that any write operations performed in the middle of a file would actually overwrite the existing content. This is a side effect which I would like to avoid (if possible).
Is there a simple way to achieve this?
Thanks in advance.
Okay, this question is pretty old, but FileChannels have existed since Java 1.4, and I don't know why they aren't mentioned anywhere when dealing with the problem of replacing or inserting content in files. FileChannels are fast; use them.
Here's an example (ignoring exceptions and some other stuff):
public void insert(String filename, long offset, byte[] content) throws IOException {
    RandomAccessFile r = new RandomAccessFile(new File(filename), "rw");
    RandomAccessFile rtemp = new RandomAccessFile(new File(filename + "~"), "rw");
    long fileSize = r.length();
    FileChannel sourceChannel = r.getChannel();
    FileChannel targetChannel = rtemp.getChannel();
    sourceChannel.transferTo(offset, (fileSize - offset), targetChannel); // save the tail to the temp file
    sourceChannel.truncate(offset); // cut the original at the insert point
    r.seek(offset);
    r.write(content); // write the new content at the insert point
    long newOffset = r.getFilePointer();
    targetChannel.position(0L);
    sourceChannel.transferFrom(targetChannel, newOffset, (fileSize - offset)); // append the saved tail
    sourceChannel.close();
    targetChannel.close();
}
Well, no, I don't believe there is a way to avoid overwriting existing content with a single, standard Java IO API call.
If the files are not too large, just read the entire file into an ArrayList (an entry per line) and either rewrite entries or insert new entries for new lines.
Then overwrite the existing file with new content, or move the existing file to a backup and write a new file.
Depending on how sophisticated the edits need to be, your data structure may need to change.
Another method would be to read characters from the existing file while writing to the edited file and edit the stream as it is read.
If Java has a way to memory-map files (it does, via FileChannel.map), then what you can do is extend the file to its new length, map the file, memmove all the bytes down to the end to make a hole, and write the new data into the hole.
This works in C. Never tried it in Java.
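A minimal sketch of that idea in Java, assuming the file is small enough for a single mapping (MappedByteBuffer uses int indices, so this is limited to files under 2 GB):

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapInsert {
    public static void insert(File file, long offset, byte[] content) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel channel = raf.getChannel()) {
            long oldSize = raf.length();
            raf.setLength(oldSize + content.length); // extend the file to its new length
            MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_WRITE, 0, oldSize + content.length);
            // memmove: shift everything after the insert point toward the end, back to front
            for (long i = oldSize - 1; i >= offset; i--) {
                map.put((int) (i + content.length), map.get((int) i));
            }
            // write the new data into the hole
            for (int i = 0; i < content.length; i++) {
                map.put((int) (offset + i), content[i]);
            }
            map.force(); // flush the mapped changes to disk
        }
    }
}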
Another way I just thought of to do the same, but with random file access (sketched after the steps below):
Seek to the end - 1 MB
Read 1 MB
Write that to original position + gap size.
Repeat for each previous 1 MB working toward the beginning of the file.
Stop when you reach the desired gap position.
Use a larger buffer size for faster performance.
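A minimal sketch of that buffered shifting with a hypothetical makeGap helper; after it returns, the caller can seek to the gap position and write the new data:

import java.io.IOException;
import java.io.RandomAccessFile;

public class GapMaker {
    // Shift everything from gapPosition to the end of the file down by gapSize bytes,
    // working backwards in buffer-sized chunks so nothing is overwritten before it is read.
    public static void makeGap(RandomAccessFile raf, long gapPosition, long gapSize) throws IOException {
        byte[] buffer = new byte[1024 * 1024]; // 1 MB chunks; a larger buffer is faster
        long pos = raf.length();
        raf.setLength(pos + gapSize); // extend the file first
        while (pos > gapPosition) {
            int chunk = (int) Math.min(buffer.length, pos - gapPosition);
            pos -= chunk;
            raf.seek(pos);
            raf.readFully(buffer, 0, chunk);
            raf.seek(pos + gapSize);
            raf.write(buffer, 0, chunk); // original position + gap size
        }
    }
}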
You can use the following code:
BufferedReader reader = null;
BufferedWriter writer = null;
ArrayList<String> list = new ArrayList<>();

try {
    reader = new BufferedReader(new FileReader(fileName));
    String tmp;
    while ((tmp = reader.readLine()) != null)
        list.add(tmp);
    OUtil.closeReader(reader);
    list.add(0, "Start Text");
    list.add("End Text");
    writer = new BufferedWriter(new FileWriter(fileName));
    for (int i = 0; i < list.size(); i++)
        writer.write(list.get(i) + "\r\n");
} catch (Exception e) {
    e.printStackTrace();
} finally {
    OUtil.closeReader(reader); // OUtil is presumably a local helper for quietly closing streams
    OUtil.closeWriter(writer);
}
I don't know of a handier way to do it than to:
read the beginning of the file and write it to target
write your new text to target
read the rest of the file and write it to target.
About the target: you can construct the new contents of the file in memory and then overwrite the old content of the file, if the files handled aren't too big. Or you can write the result to a temporary file.
The thing would probably be easiest to do with streams; RandomAccessFile doesn't seem to be meant for inserting in the middle (AFAIK). Check the tutorial if you need. A sketch of the stream-based approach follows.
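A minimal sketch of those three steps with plain streams (the class name and the separate target file are assumptions; the target can replace the original with a rename afterwards):

import java.io.*;

public class StreamInsert {
    public static void insert(File source, File target, long offset, byte[] newText) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(source));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
            byte[] buffer = new byte[8192];
            // 1. copy the beginning of the file up to the insert position
            long remaining = offset;
            while (remaining > 0) {
                int read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
                if (read == -1) break; // offset was past the end of the file
                out.write(buffer, 0, read);
                remaining -= read;
            }
            // 2. write the new text
            out.write(newText);
            // 3. copy the rest of the file
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}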
I believe the only way to insert text into an existing text file is to read the original file and write the content in a temporary file with the new text inserted. Then erase the original file and rename the temporary file to the original name.
This example is focused on inserting a single line into an existing file, but it may still be of use to you.
If it is a text file, read the existing file into a StringBuffer, append the new content to the same StringBuffer, and then write the StringBuffer back to the file. The file will then contain both the existing and the new text.
As #xor_eq answer's edit queue is full, here in a new answer a more documented and slightly improved version of his:
public static void insert(String filename, long offset, byte[] content) throws IOException {
    File temp = Files.createTempFile("insertTempFile", ".temp").toFile(); // create a temporary file to save content to
    try (RandomAccessFile r = new RandomAccessFile(new File(filename), "rw"); // open file for read & write
         RandomAccessFile rtemp = new RandomAccessFile(temp, "rw"); // open temporary file for read & write
         FileChannel sourceChannel = r.getChannel(); // channel of file
         FileChannel targetChannel = rtemp.getChannel()) { // channel of temporary file
        long fileSize = r.length();
        sourceChannel.transferTo(offset, (fileSize - offset), targetChannel); // copy content after insert index to temporary file
        sourceChannel.truncate(offset); // remove content past insert index from file
        r.seek(offset); // go to back of file (now insert index)
        r.write(content); // write new content
        long newOffset = r.getFilePointer(); // the current offset
        targetChannel.position(0L); // go to start of temporary file
        sourceChannel.transferFrom(targetChannel, newOffset, (fileSize - offset)); // copy all content of temporary file to end of file
    }
    Files.delete(temp.toPath()); // delete the temporary file as it is not needed anymore
}