Is there any way to split a text file in Java without reading it?
I want to process a large text file (several GB) in Java, so I want to split the file into small parts, run a thread over each part, and combine the results.
Since I will be reading it in small parts anyway, splitting the file by reading it first makes no sense: I would have to read the same file twice, which would degrade my performance.
Your threading attempt is ill-formed. If you have to do significant processing on your file data, consider the following threading structure:
1 Reader Thread (reads the file and feeds the workers)
Queue with read chunks
1..n Worker Threads (n depends on your CPU cores; they process the data chunks from the reader thread)
Queue or dictionary with processed chunks
1 Writer Thread (writes the results to some file)
Maybe you could combine the reader and writer threads into one thread, because it doesn't make much sense to parallelize I/O on the same physical hard disk.
Clearly you need some synchronization between the threads. For the queues in particular, think about semaphores or blocking queues.
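A minimal sketch of that pipeline, assuming line-oriented input (the file names and the processing step are illustrative):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PipelineSketch {
    private static final String POISON = new String("EOF"); // sentinel, compared by identity

    public static void main(String[] args) throws Exception {
        final int workers = Runtime.getRuntime().availableProcessors();
        final BlockingQueue<String> lines = new ArrayBlockingQueue<>(1024);
        final BlockingQueue<String> results = new ArrayBlockingQueue<>(1024);

        // 1..n workers: take a chunk, process it, hand the result to the writer
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    for (String line; (line = lines.take()) != POISON; ) {
                        results.put(line.toUpperCase()); // stand-in for real work
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // 1 writer: the only thread touching the output file
        Thread writer = new Thread(() -> {
            try (BufferedWriter out = Files.newBufferedWriter(Paths.get("out.txt"))) {
                for (String r; (r = results.take()) != POISON; ) {
                    out.write(r);
                    out.newLine();
                }
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();

        // 1 reader: the only thread touching the input file
        try (BufferedReader in = Files.newBufferedReader(Paths.get("big.txt"))) {
            for (String line; (line = in.readLine()) != null; ) {
                lines.put(line);
            }
        }
        for (int i = 0; i < workers; i++) lines.put(POISON); // stop the workers
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        results.put(POISON);                                 // stop the writer
        writer.join();
    }
}

The bounded queues double as the synchronization: put() and take() block, so no explicit semaphores are needed.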
You can't do that without reading the content of the file. It is simply not possible.
I don't think this is possible for the following reasons:
How do you write a file without "reading" it?
You'll need to read the text to know where a character boundary is (the encoding is not necessarily one byte per character). This means that you cannot treat the file as binary.
Is it really not possible to read line by line and process it like that? That also saves the additional space that the split files would take up alongside the original. For your reference, reading a text file is simply:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public static void loadFileFromInputStream(InputStream in) throws IOException {
    BufferedReader inputStream = new BufferedReader(new InputStreamReader(in));
    String record = inputStream.readLine();
    while (record != null) {
        // do something with the record
        // ...
        record = inputStream.readLine();
    }
}
You're only reading one line at a time, so the size of the file has no impact on memory usage. You can also stop at any time. If you're adventurous, you can hand the lines off to separate threads to speed up processing. That way, I/O can continue churning along while you process your data.
Good luck! If, for some reason, you do find a solution, please post it here. Thanks!
Technically speaking, it can't be done without reading the file. But you also don't need to keep the entire file contents in memory to do the splitting. Just open a stream to the file and write out to other files, redirecting the output to the next file after a certain number of bytes have been written to the current one. This way you are not required to keep more than one byte of file data in memory at any given time. But using a larger buffer, around 8 or 16 KB, will dramatically increase performance.
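A sketch of that approach (the part size and buffer size are illustrative; note that this splits at byte counts, so lines and multi-byte characters can straddle part boundaries):

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class Splitter {
    // Splits 'source' into consecutive part files of roughly 'partSize' bytes.
    public static void split(String source, long partSize) throws IOException {
        byte[] buffer = new byte[16 * 1024];          // 16 KB buffer, as suggested above
        try (InputStream in = new BufferedInputStream(new FileInputStream(source))) {
            int part = 0;
            int read = in.read(buffer);
            while (read != -1) {
                long written = 0;
                try (OutputStream out = new BufferedOutputStream(
                        new FileOutputStream(source + ".part" + part++))) {
                    // Redirect output to the next part file once enough bytes are written.
                    while (read != -1 && written < partSize) {
                        out.write(buffer, 0, read);
                        written += read;
                        read = in.read(buffer);
                    }
                }
            }
        }
    }
}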
Something has to read your file in order to split it (and you probably want to split it at line boundaries, not at some multiple of kilobytes).
If you are running on a Linux machine, you could delegate the splitting to an external command like csplit. Your Java program would then simply run a csplit yourbigfile.txt command.
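A minimal sketch of shelling out to csplit (Linux/Unix only; the line count and the GNU-specific '{*}' repeat pattern are illustrative):

import java.io.IOException;

public class ExternalSplit {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Split yourbigfile.txt before line 1000000, then repeatedly every
        // 1000000 lines until the input is exhausted ('{*}' is a GNU extension).
        Process p = new ProcessBuilder("csplit", "yourbigfile.txt", "1000000", "{*}")
                .inheritIO()
                .start();
        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("csplit failed with exit code " + exit);
        }
    }
}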
In the literal sense no. To literally split a file into smaller files, you have to read the large one and write the smaller ones.
However, I think you really want to know if you can have different threads sequentially reading different "parts" of a file at the same time. And the answer is that you can do that. Just have each thread create its own RandomAccessFile object for the file, seek to the relevant place, and start reading.
(A FileInputStream would probably work too, though I don't think the Java API spec guarantees that skip is implemented using an OS-level "seek" operation on the file.)
There are a couple of possible complications:
If the file is text, you presumably want each thread to start processing at the start of some line in the file. So each thread has to start by finding the end of a line, and make sure that it reads to the end of the last line in its "part".
If the file uses a variable width character encoding (e.g. UTF-8), then you need to deal with the case where your partition boundaries fall in the middle of a character.
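A sketch of the per-thread approach, handling the first complication by aligning to line boundaries (the class name is illustrative; note that RandomAccessFile.readLine() does not decode multi-byte encodings and reads byte by byte, so it is slow and assumes ASCII-compatible text):

import java.io.IOException;
import java.io.RandomAccessFile;

public class PartitionReader implements Runnable {
    private final String path;
    private final long start, end; // this thread's byte range [start, end)

    PartitionReader(String path, long start, long end) {
        this.path = path;
        this.start = start;
        this.end = end;
    }

    @Override
    public void run() {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            raf.seek(start);
            // Unless we are at the very start of the file, skip the partial
            // first line; the thread owning the previous partition reads it.
            if (start > 0) {
                raf.readLine();
            }
            String line;
            // A line that *begins* before 'end' belongs to this partition,
            // even if it extends past 'end'.
            while (raf.getFilePointer() < end && (line = raf.readLine()) != null) {
                process(line);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private void process(String line) { /* real work goes here */ }
}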
I'm trying to use a MappedByteBuffer to allow concurrent reads on a file by multiple threads with the following constraints:
File is too large to load into memory
Threads must be able to read asynchronously (it's a web app)
The file is never written to by any thread
Every thread will always know the exact offset and length of the bytes it needs to read (i.e., no "seeking" by the app itself).
According to the docs (https://docs.oracle.com/javase/8/docs/api/java/nio/Buffer.html) Buffers are not thread-safe since they keep internal state (position, etc). Is there a way to have concurrent random access to the file without loading it all into memory?
Although FileChannel is technically thread-safe, from the docs:
Where the file channel is obtained from an existing stream or random access file then the state of the file channel is intimately connected to that of the object whose getChannel method returned the channel. Changing the channel's position, whether explicitly or by reading or writing bytes, will change the file position of the originating object, and vice versa.
So it would seem that it's simply synchronized. If I were to call new RandomAccessFile().getChannel().map() in each thread [edit: on every read], doesn't that incur the I/O overhead on each read that MappedByteBuffers are supposed to avoid?
Rather than using multiple threads for concurrent reads, I'd go with this approach (based on an example with a huge CSV file whose lines have to be sent concurrently via HTTP):
Reading a single file at multiple positions concurrently wouldn't let you go any faster (but it could slow you down considerably).
Instead of reading the file from multiple threads, read the file from a single thread, and parallelize the processing of these lines. A single thread should read your CSV line-by-line, and put each line in a queue. Multiple working threads should then take the next line from the queue, parse it, convert to a request, and process the request concurrently as needed. The splitting of the work would then be done by a single thread, ensuring that there are no missing lines or overlaps.
If you can read the file line by line, LineIterator from Commons IO is a memory-efficient possibility. If you have to work with chunks, your MappedByteBuffer seems to be a reasonable approach. For the queue, I'd use a blocking queue with a fixed capacity—such as ArrayBlockingQueue—to better control the memory usage (lines/chunks in queue + lines/chunks among workers = lines/chunks in memory).
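For completeness, a sketch of the single reader feeding a bounded queue with Commons IO's LineIterator (the file name and queue capacity are illustrative):

import java.io.File;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.LineIterator;

public class BoundedProducer {
    public static void main(String[] args) throws IOException, InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000); // caps memory use
        LineIterator it = FileUtils.lineIterator(new File("huge.csv"), "UTF-8");
        try {
            while (it.hasNext()) {
                queue.put(it.nextLine()); // blocks when the workers fall behind
            }
        } finally {
            LineIterator.closeQuietly(it);
        }
        // Worker threads would take() from 'queue', parse each line,
        // convert it to a request, and send it concurrently.
    }
}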
FileChannel supports a read operation without synchronization. It natively uses pread on Linux:
public abstract int read(ByteBuffer dst, long position) throws IOException
Here is what the FileChannel documentation says:
...Other operations, in particular those that take an explicit position, may proceed concurrently; whether they in fact do so is dependent upon the underlying implementation and is therefore unspecified.
It is pretty primitive in that it only returns how many bytes were read (see details here). But I think you can still make use of it, given your assumption that "every thread will always know the exact offset and length of bytes it needs to read".
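A sketch of a fully positional read that loops until the requested range is filled (the file name, offset, and length are illustrative):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class PositionalRead {
    // Reads exactly 'length' bytes starting at 'offset'. Safe to call from
    // multiple threads on the same channel: no shared position is touched.
    static byte[] readAt(FileChannel channel, long offset, int length) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(length);
        while (buf.hasRemaining()) {
            int n = channel.read(buf, offset + buf.position());
            if (n < 0) {
                throw new IOException("Unexpected end of file at " + (offset + buf.position()));
            }
        }
        return buf.array();
    }

    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get("data.bin"), StandardOpenOption.READ)) {
            byte[] chunk = readAt(ch, 4096, 1024); // each thread passes its own offset/length
            System.out.println(chunk.length + " bytes read");
        }
    }
}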
I am trying to write a single huge file in Java using multiple threads.
I have tried both the FileWriter and BufferedWriter classes in Java.
The content being written is actually an entire table (Postgres) being read using CopyManager and written. Each line in the file is a single tuple from the table and I am writing 100s of lines at a time.
Approach to write:
The single to-be-written file is opened by multiple threads in append mode.
Each thread thereafter tries writing to the file.
Following are the issues I face:
Once in a while, the contents of the file get overwritten, i.e., one line remains incomplete and the next line starts right there. My assumption is that the writer's buffer fills up, forcing it to immediately flush its data to the file. The flushed data may not be a complete line, and before the writer can flush the remainder, another thread writes its own content to the file.
While using FileWriter, once in a while I see a single blank line in the file.
Any suggestions, how to avoid this data integrity issue?
Shared Resource == Contention
Writing to a normal file is by definition a serialized operation. You gain no performance by trying to write to it from multiple threads; I/O is a finite, bounded resource with orders of magnitude less bandwidth than even the slowest or most overloaded CPU.
Concurrent access to a shared resource can be complicated (and slow)
If you have multiple threads doing expensive calculations, then you have options; if you are just using multiple threads because you think it will speed something up, you will achieve the opposite. Contention for I/O always slows access to the resource; it never speeds it up, because of the lock waits and other overhead.
You have to have a critical section that is protected and allows only a single writer at a time. Just look up the source code for any logging writer that supports concurrency and you will see that there is only a single thread that writes to the file.
If your application is primarily:
CPU Bound: You can use some locking mechanism or data structure to let only one thread out of many write to the file at a time. As a naive solution this gains nothing from concurrency, but if the threads are CPU-bound with little I/O it might work.
I/O Bound: This is the most common case. You must use a message-passing system with a queue of some sort: have all the threads post to the queue/buffer and have a single thread pull from it and write to the file. This will be the most scalable and easiest-to-implement solution.
Journaling - Async Writes
If you need to create a single very large file where the order of writes is unimportant and the program is CPU-bound, you can use a journaling technique.
Have each process write to a separate file and then concatenate the files into a single large one at the end. This is a very old-school, low-tech solution that works well and has for decades.
Obviously, the more storage I/O bandwidth you have, the better the final concatenation will perform.
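A sketch of the final concatenation step using channel-to-channel transfer, which can avoid copying through user-space buffers (the file names are illustrative):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.List;

public class Concat {
    static void concat(List<Path> parts, Path target) throws IOException {
        try (FileChannel out = FileChannel.open(target,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            for (Path part : parts) {
                try (FileChannel in = FileChannel.open(part, StandardOpenOption.READ)) {
                    long pos = 0;
                    long size = in.size();
                    // transferTo may move fewer bytes than requested, so loop.
                    while (pos < size) {
                        pos += in.transferTo(pos, size - pos, out);
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        concat(Arrays.asList(Paths.get("part0"), Paths.get("part1")), Paths.get("merged.txt"));
    }
}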
I am trying to write a single huge file in Java using multiple threads.
I would recommend that you have X threads reading from the database and a single thread writing to your output file. This is going to be much easier to implement as opposed to doing file locking and the like.
You could use a shared BlockingQueue (maybe ArrayBlockingQueue) so the database readers would add(...) to the queue and your writer would be in a take() loop on the queue. When the readers finish, they could add some special IM_DONE string constant and as soon as the writing thread sees X of these constants (i.e. one for each reader), it would close the output file and exit.
So then you can use a single BufferedWriter without any locks and the like. Chances are that you will be blocked by the database calls instead of the local IO. Certainly the extra thread isn't going to slow you down at all.
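A sketch of that layout with an IM_DONE sentinel (the reader bodies and counts are illustrative; real readers would pull tuples from Postgres via CopyManager):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SingleWriter {
    private static final String IM_DONE = new String("IM_DONE"); // compared by identity

    public static void main(String[] args) throws Exception {
        final int readers = 4;
        final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);

        for (int i = 0; i < readers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    // Database reads would happen here; each tuple becomes a line.
                    for (int row = 0; row < 1_000; row++) {
                        queue.put("reader-" + id + " row-" + row);
                    }
                    queue.put(IM_DONE); // one sentinel per reader
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        // Single writer: close the file once every reader has signaled IM_DONE.
        int done = 0;
        try (BufferedWriter out = new BufferedWriter(new FileWriter("table.out"))) {
            while (done < readers) {
                String line = queue.take();
                if (line == IM_DONE) {
                    done++;
                } else {
                    out.write(line);
                    out.newLine();
                }
            }
        }
    }
}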
The single to-be-written file is opened by multiple threads in append mode. Each thread thereafter tries writing to the file.
If you are adamant that your reading threads should also do the writing, then you should add a synchronized block around the access to a single shared BufferedWriter; you could synchronize on the BufferedWriter object itself. Knowing when to close the writer is a bit of an issue, since each thread would have to know whether the others have exited. Each thread could increment a shared AtomicInteger when it starts and decrement it when it is done; the thread that sees the run-count hit 0 would be the one to close the writer.
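A sketch of that locking scheme (the thread count and line contents are illustrative; the run-count is pre-initialized here, which avoids the race of incrementing at start):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

public class SharedWriterDemo {
    public static void main(String[] args) throws IOException {
        final BufferedWriter writer = new BufferedWriter(new FileWriter("shared.out"));
        final int threads = 4;
        final AtomicInteger running = new AtomicInteger(threads);

        for (int i = 0; i < threads; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int row = 0; row < 1_000; row++) {
                        String line = "thread-" + id + " row-" + row;
                        // Lock on the writer itself so each line lands intact.
                        synchronized (writer) {
                            writer.write(line);
                            writer.newLine();
                        }
                    }
                    // The last thread to finish closes the writer.
                    if (running.decrementAndGet() == 0) {
                        writer.close();
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }).start();
        }
    }
}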
Instead of having synchronized methods, the better solution would be a thread pool with a single thread backed by a blocking queue. The message the application wants to write is pushed onto the blocking queue; the log-writer thread keeps taking from the queue (blocking while it is empty) and writes each message to the single file.
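A sketch of that idea; a single-thread ExecutorService already contains the blocking queue, so submitting a task is the "push" (the class name is illustrative):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncLogWriter implements AutoCloseable {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final BufferedWriter out;

    public AsyncLogWriter(String path) throws IOException {
        out = new BufferedWriter(new FileWriter(path));
    }

    // Callers return immediately; the executor's internal queue serializes writes.
    public void log(String message) {
        executor.submit(() -> {
            try {
                out.write(message);
                out.newLine();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
    }

    @Override
    public void close() throws IOException {
        executor.shutdown(); // lets queued tasks drain, then stops the thread
        try {
            executor.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        out.close();
    }
}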
I want to append a line of text to a file; however, I also want to get the position of the string in the file, so that I can later access the string directly using a RandomAccessFile and file.seek() (or similar).
The issue is that a lot of file I/O operations are asynchronous, and the write operations can happen within very short time intervals, which suggests asynchronous writes, since everything else is inefficient. How do I make sure the file pointer is calculated correctly? I am a newcomer to Java and don't yet understand the details of the different methods of file I/O, so excuse my question if using a BufferedWriter is exactly what I am looking for; but then, how do you get the current length of that?
EDIT: Reading the entire file is NOT an option. The file is large, and as I said, the write operations happen often, several hundred per second at peak times.
Refer to the FileChannel class: http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileChannel.html
Relevant snippets from the link:
The file itself contains a variable-length sequence of bytes that can be read and written and whose current size can be queried. The size of the file increases when bytes are written beyond its current size;
...
File channels are safe for use by multiple concurrent threads. The close method may be invoked at any time, as specified by the Channel interface. Only one operation that involves the channel's position or can change its file's size may be in progress at any given time; attempts to initiate a second such operation while the first is still in progress will block until the first operation completes. Other operations, in particular those that take an explicit position, may proceed concurrently; whether they in fact do so is dependent upon the underlying implementation and is therefore unspecified.
Is it possible to read and write the same text file from both a C++ application and a Java application at the same time, without writing conflicting lines/characters to it? I have tested with two Java applications for now, and it seems to be possible to write to the file from one process even while the other process has opened a stream to it but not closed it. Is there any way to lock the file so that the other process has to wait?
I think yes; for example, boost::interprocess (http://www.boost.org/doc/libs/1_50_0/doc/html/interprocess.html) provides file locks (http://www.boost.org/doc/libs/1_50_0/doc/html/interprocess/synchronization_mechanisms.html#interprocess.synchronization_mechanisms.file_lock).
For two processes that are writing to the same file, as long as you flush your output buffers on line boundaries (i.e., flush after you write a newline character sequence), the data written to the file should be interleaved nicely.
If one process is writing while another is reading from the same file, you just have to ensure that the reads don't get ahead of the writes. If a read gets an end-of-file condition (or worse, a partial data line), then you know that the reading process must wait until the writing process has finished writing another chunk of data to the file.
If you need more complicated read/write control, you should consider some kind of locking mechanism.
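On the Java side, a sketch using java.nio file locks (these are advisory on most platforms, so the C++ process must also acquire a lock, e.g. via boost::interprocess::file_lock, for the waiting to work):

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class LockedAppend {
    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Paths.get("shared.txt"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            // lock() blocks until any other process releases its lock.
            try (FileLock lock = ch.lock()) {
                ch.write(ByteBuffer.wrap(
                        "one complete line\n".getBytes(StandardCharsets.UTF_8)));
            } // releasing the lock lets the waiting process proceed
        }
    }
}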
I have a 250 MB file to be read, and the application is multithreaded. If I allow all threads to read the file, memory starvation occurs and I get an OutOfMemoryError.
To avoid it, I want to have only one copy of the String (which is read from the stream) in memory, and I want all the threads to use it.
while (true) {
    String str;
    synchronized (buffer) {
        int num = is.read(buffer);
        if (num < 0) {
            break; // end of stream
        }
        str = new String(buffer, 0, num);
    }
    sendToPC(str);
}
Basically, I want to have only one copy of the string in memory; when all threads have completed sending it, I want to read the next string, and so on.
Why multiple threads? You only have one disk and it can only go so fast. Multithreading almost certainly won't help, and any software design that relies on having an entire file in memory is seriously flawed in the first place.
Perhaps you should start by defining your actual problem?
I realize this is kind of late, but I think what you want here is the map method in the FileChannel class. Once you map a region of the file into memory, all of your threads can read or write to that block of memory, and the OS will synchronize that memory region with the file periodically (or when you call MappedByteBuffer.force()). If you want each thread to work with a different part of the file, you can create several maps, each covering a specific region of the file, and use one map per thread.
See the Javadoc for FileChannel, RandomAccessFile, and MappedByteBuffer.
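A sketch of the one-map-per-thread idea (the file name, thread count, and checksum work are illustrative):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedRegions {
    public static void main(String[] args) throws IOException, InterruptedException {
        try (FileChannel ch = FileChannel.open(Paths.get("big.dat"), StandardOpenOption.READ)) {
            long size = ch.size();
            int threads = 4;
            long region = size / threads;
            Thread[] pool = new Thread[threads];
            for (int i = 0; i < threads; i++) {
                long start = i * region;
                long len = (i == threads - 1) ? size - start : region;
                // Each thread gets its own buffer, so no position state is
                // shared (each region must stay under 2 GB).
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, start, len);
                pool[i] = new Thread(() -> {
                    long sum = 0;
                    while (buf.hasRemaining()) {
                        sum += buf.get() & 0xFF; // stand-in for real processing
                    }
                    System.out.println("region checksum: " + sum);
                });
                pool[i].start();
            }
            for (Thread t : pool) t.join();
        }
    }
}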
Could you use streams directly instead of reading the whole file into memory?
You can register all threads as callbacks in the file-reading class. So, have something like an array or list of classes implementing an interface StringReaderThread, which has the method processString(String input). After reading each line from the file, iterate over this array/list and call processString() on each of them. Would this solve your problem?
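A sketch of that callback registry, with the StringReaderThread interface named as in the answer above (the rest of the names are illustrative):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// The listener interface described in the answer above.
interface StringReaderThread {
    void processString(String input);
}

public class CallbackReader {
    private final List<StringReaderThread> listeners = new ArrayList<>();

    public void register(StringReaderThread listener) {
        listeners.add(listener);
    }

    // Reads the file once; every registered callback sees each line in turn,
    // so only one copy of the String exists at any given time.
    public void read(String path) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            for (String line; (line = in.readLine()) != null; ) {
                for (StringReaderThread listener : listeners) {
                    listener.processString(line);
                }
            }
        }
    }
}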