Read from a large file and write to multiple files with Java

I have a file A.txt with 100,000,000 records, the numbers 1 to 100,000,000, one record per line. I have to read file A and write to files B and C: records with even values go to file B and records with odd values go to file C.
The total read and write time must be less than 40 seconds.
Below is the code I already have, but its runtime is more than 50 seconds.
Does anyone have another solution to reduce the runtime?
Threading.java
import java.io.*;
import java.util.concurrent.LinkedBlockingQueue;
public class Threading implements Runnable {
LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
String file;
volatile boolean stop = false;
public Threading(String file) {
this.file = file;
}
public void addQueue(String row) {
queue.add(row);
}
public void Stop() {
stop = true;
}
public void run() {
try {
BufferedWriter bw = new BufferedWriter(new FileWriter(file));
while(!stop) {
try {
String row = queue.take();
bw.write(row + "\n");
} catch (Exception e) {
e.printStackTrace();
}
}
bw.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
ThreadCreate.java
// I used 2 threads to write to 2 files B and C
import java.io.*;
import java.util.List;
public class ThreadCreate {
public void startThread(File file) {
Threading t1 = new Threading("B.txt");
Threading t2 = new Threading("C.txt");
Thread td1 = new Thread(t1);
Thread td2 = new Thread(t2);
td1.start();
td2.start();
try {
BufferedReader br = new BufferedReader(new FileReader(file));
String line;
long start = System.currentTimeMillis();
while ((line = br.readLine()) != null) {
if (Integer.parseInt(line) % 2 == 0) {
t1.addQueue(line);
} else {
t2.addQueue(line);
}
}
t1.Stop();
t2.Stop();
br.close();
long end = System.currentTimeMillis();
System.out.println("Time to read file A and write file B, C: " + ((end - start)/1000) + "s");
} catch (Exception e) {
e.printStackTrace();
}
}
}
Main.java
import java.io.*;
public class Main {
public static void main(String[] args) throws IOException {
File file = new File("A.txt");
//Write file B, C
ThreadCreate t = new ThreadCreate();
t.startThread(file);
}
}

Why are you making threads? That just slows things down. Threads are useful if the bottleneck is either the calculation itself or the blocking nature of the operation, and they only hurt if it is not. Here, it isn't: The CPU is just idling (the bottleneck will be the disk), and the nature of what it is blocking on means that multithreading does not help either: Telling a single SSD to write 2 boatloads of bytes in parallel is probably no faster (only slower, as it needs to bounce back and forth). If the target disk is a spinning disk, it is way slower - the write head cannot make clones of itself to go any faster, and by making it multithreaded, you are wasting a ton of time by asking the write head to bounce back and forth between the different write locations.
There's nothing that immediately strikes me as ripe for significant speedups.
Sometimes, writing a ton of data to a disk just takes 50 seconds. If that's not acceptable, buy a faster disk.
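For reference, the straightforward single-threaded version of the whole job, which is what this answer amounts to, could look something like the sketch below. File names follow the question; the 1 MB buffer sizes are just a guess, not a tuned value.
import java.io.*;
public class SplitFile {
    public static void main(String[] args) throws IOException {
        long start = System.currentTimeMillis();
        // One reader, two writers, all buffered, no threads involved.
        try (BufferedReader br = new BufferedReader(new FileReader("A.txt"), 1 << 20);
             BufferedWriter even = new BufferedWriter(new FileWriter("B.txt"), 1 << 20);
             BufferedWriter odd = new BufferedWriter(new FileWriter("C.txt"), 1 << 20)) {
            String line;
            while ((line = br.readLine()) != null) {
                // The records are the numbers 1..100,000,000, so the parity of the value
                // decides the target file, exactly as in the original code.
                if (Integer.parseInt(line) % 2 == 0) {
                    even.write(line);
                    even.newLine();
                } else {
                    odd.write(line);
                    odd.newLine();
                }
            }
        }
        System.out.println("Time: " + ((System.currentTimeMillis() - start) / 1000) + "s");
    }
}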

Try memory-mapped files:
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
byte[] buffer = "foo bar foo bar text\n".getBytes();
int number_of_lines = 100000000;
FileChannel file = new RandomAccessFile("writeFIle.txt", "rw").getChannel();
// Map the whole target region once; note that a single mapping is limited to 2 GB.
ByteBuffer wrBuf = file.map(FileChannel.MapMode.READ_WRITE, 0, (long) buffer.length * number_of_lines);
for (int i = 0; i < number_of_lines; i++) {
wrBuf.put(buffer);
}
file.close();
On my computer (Dell, i7 processor, SSD, 32 GB RAM) it took a little over half a minute to run this code.

Related

Process large text file concurrently

So I have a large text file, in this case roughly 4.5 GB, and I need to process the entire file as fast as possible. Right now I have multi-threaded this using 3 threads (not including the main thread): an input thread for reading the input file, a processing thread to process the data, and an output thread to write the processed data to a file.
Currently, the bottleneck is the processing section. Therefore, I'd like to add more processing threads into the mix. However, this creates a situation where I've got multiple threads accessing the same BlockingQueue, and their results therefore do not maintain the order of the input file.
An example of the functionality I'm looking for would be something like this:
Input file: 1, 2, 3, 4, 5
Output file: ^ the same. Not 2, 1, 4, 3, 5 or any other combination.
I've written a dummy program that is identical in functionality to the actual program minus the processing part (I can't give you the actual program because the processing class contains confidential info). I should also mention that all of the classes (Input, Processing, and Output) are inner classes contained within a Main class that holds the initialise() method and the class-level variables mentioned in the main thread code listed below.
Main thread:
static volatile boolean readerFinished = false; // class level variables
static volatile boolean writerFinished = false;
private void initialise() throws IOException {
BlockingQueue<String> inputQueue = new LinkedBlockingQueue<>(1_000_000);
BlockingQueue<String> outputQueue = new LinkedBlockingQueue<>(1_000_000); // capacity 1 million.
String inputFileName = "test.txt";
String outputFileName = "outputTest.txt";
BufferedReader reader = new BufferedReader(new FileReader(inputFileName));
BufferedWriter writer = new BufferedWriter(new FileWriter(outputFileName));
Thread T1 = new Thread(new Input(reader, inputQueue));
Thread T2 = new Thread(new Processing(inputQueue, outputQueue));
Thread T3 = new Thread(new Output(writer, outputQueue));
T1.start();
T2.start();
T3.start();
while (!writerFinished) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
reader.close();
writer.close();
System.out.println("Exited.");
}
Input thread: (Please forgive the commented debug code; I was using it to ensure the reader thread was actually executing properly.)
class Input implements Runnable {
BufferedReader reader;
BlockingQueue<String> inputQueue;
Input(BufferedReader reader, BlockingQueue<String> inputQueue) {
this.reader = reader;
this.inputQueue = inputQueue;
}
@Override
public void run() {
String poisonPill = "ChH92PU2KYkZUBR";
String line;
//int linesRead = 0;
try {
while ((line = reader.readLine()) != null) {
inputQueue.put(line);
//linesRead++;
/*
if (linesRead == 500_000) {
//batchesRead += 1;
//System.out.println("Batch read");
linesRead = 0;
}
*/
}
inputQueue.put(poisonPill);
} catch (IOException | InterruptedException e) {
e.printStackTrace();
}
readerFinished = true;
}
}
Processing thread: (Normally this would actually be doing something to the line, but for purposes of the mockup I've just made it immediately push to the output thread). If necessary we can simulate it doing some work by making the thread sleep for a small amount of time for each line.
class Processing implements Runnable {
BlockingQueue<String> inputQueue;
BlockingQueue<String> outputQueue;
Processing(BlockingQueue<String> inputQueue, BlockingQueue<String> outputQueue) {
this.inputQueue = inputQueue;
this.outputQueue = outputQueue;
}
@Override
public void run() {
while (true) {
try {
if (inputQueue.isEmpty() && readerFinished) {
break;
}
String line = inputQueue.take();
outputQueue.put(line);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
Output thread:
class Output implements Runnable {
BufferedWriter writer;
BlockingQueue<String> outputQueue;
Output(BufferedWriter writer, BlockingQueue<String> outputQueue) {
this.writer = writer;
this.outputQueue = outputQueue;
}
@Override
public void run() {
String line;
ArrayList<String> outputList = new ArrayList<>();
while (true) {
try {
line = outputQueue.take();
if (line.equals("ChH92PU2KYkZUBR")) {
for (String outputLine : outputList) {
writer.write(outputLine);
}
System.out.println("Writer finished - executing termination");
writerFinished = true;
break;
}
line += "\n";
outputList.add(line);
if (outputList.size() == 500_000) {
for (String outputLine : outputList) {
writer.write(outputLine);
}
System.out.println("Writer wrote batch");
outputList = new ArrayList<>();
}
} catch (IOException | InterruptedException e) {
e.printStackTrace();
}
}
}
}
So right now the general data flow is very linear, looking something like this:
Input > Processing > Output.
But what I'd like to have is several processing threads running in parallel between the input and output threads (Input > Processing x N > Output).
But the catch is, when the data gets to output, it either needs to be sorted into the correct order, or it needs to already be in the correct order.
Recommendations or examples on how to go about this would be greatly appreciated.
In the past I have used the Future and Callable interfaces to solve a task involving parallel data flows like this, but unfortunately that code was not reading from a single queue, and so is of minimal help here.
I should also add, for those of you that will notice this, batchSize and poisonPill are normally defined in the main thread and then passed around via variables, they are not usually hard coded as they are in the code for Input thread, and the output checks for the writer thread. I was just a wee bit lazy when writing the mockup for experimentation at ~1am.
Edit: I should also mention, this is required to use Java 8 at most. Java 9 features and above cannot be used due to these versions not being installed in the environments in which this program will be run.
What you could do:
Take X threads for processing, where X is the number of cores available for processing
Give each thread its own input queue.
The reader thread gives records to each thread's input queue round-robin in a predictable fashion.
Since the output files are too big for memory, you write X output files, one for each thread, and each file name has the index of the thread in it, so that you can reconstitute the original order from the file names.
After the process is complete, you merge the X output files. One line from the file for thread 1, one from the files for thread 2, etc. in a round-robin fashion again. This reconstitutes the original order.
As an added bonus, since you have an input queue per thread, there is no lock contention on the queue between the processing threads (only between the reader and each processing thread). You could even optimize this further by putting things into the input queues in batches larger than 1.
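A rough sketch of that per-thread-queue layout follows (hypothetical class and file names; the processing step is just a placeholder, and the final round-robin merge is only described in a comment):
import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class RoundRobinPipeline {
    public static void main(String[] args) throws Exception {
        final String poisonPill = "ChH92PU2KYkZUBR";   // same sentinel idea as in the question
        int workers = Runtime.getRuntime().availableProcessors();
        List<BlockingQueue<String>> queues = new ArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            final BlockingQueue<String> queue = new LinkedBlockingQueue<>(100_000);
            queues.add(queue);
            final int index = i;
            pool.submit(() -> {
                // Each worker has its own queue and its own output file; the index in the
                // file name is what lets a later merge step restore the original order.
                try (BufferedWriter out = new BufferedWriter(new FileWriter("outputTest-" + index + ".txt"))) {
                    String s;
                    while (!(s = queue.take()).equals(poisonPill)) {
                        out.write(s.toUpperCase());    // placeholder for the real processing
                        out.newLine();
                    }
                } catch (IOException | InterruptedException e) {
                    e.printStackTrace();
                }
            });
        }

        // The reader (here: the main thread) hands out lines round-robin,
        // so queue i receives lines i, i+workers, i+2*workers, ...
        try (BufferedReader reader = new BufferedReader(new FileReader("test.txt"))) {
            String line;
            long n = 0;
            while ((line = reader.readLine()) != null) {
                queues.get((int) (n++ % workers)).put(line);
            }
        }
        for (BlockingQueue<String> q : queues) {
            q.put(poisonPill);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        // Merge step (omitted): read one line from outputTest-0, one from outputTest-1, ...
        // round-robin until all files are exhausted, writing to the final output file.
    }
}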
As was also proposed by Alexei, you can create OrderedTask:
class OrderedTask implements Comparable<OrderedTask> {
private final Integer index;
private final String line;
public OrderedTask(Integer index, String line) {
this.index = index;
this.line = line;
}
@Override
public int compareTo(OrderedTask o) {
return index.compareTo(o.getIndex());
}
public Integer getIndex() {
return index;
}
public String getLine() {
return line;
}
}
As an output queue you can use your own backed by priority queue:
class OrderedTaskQueue {
private final ReentrantLock lock;
private final Condition waitForOrderedItem;
private final int maxQueuesize;
private final PriorityQueue<OrderedTask> backedQueue;
private int expectedIndex;
public OrderedTaskQueue(int maxQueueSize, int startIndex) {
this.maxQueuesize = maxQueueSize;
this.expectedIndex = startIndex;
this.backedQueue = new PriorityQueue<>(2 * this.maxQueuesize);
this.lock = new ReentrantLock();
this.waitForOrderedItem = this.lock.newCondition();
}
public boolean put(OrderedTask item) {
ReentrantLock lock = this.lock;
lock.lock();
try {
while (this.backedQueue.size() >= maxQueuesize && item.getIndex() != expectedIndex) {
this.waitForOrderedItem.await();
}
boolean result = this.backedQueue.add(item);
this.waitForOrderedItem.signalAll();
return result;
} catch (InterruptedException e) {
throw new RuntimeException();
} finally {
lock.unlock();
}
}
public OrderedTask take() {
ReentrantLock lock = this.lock;
lock.lock();
try {
while (this.backedQueue.peek() == null || this.backedQueue.peek().getIndex() != expectedIndex) {
this.waitForOrderedItem.await();
}
OrderedTask result = this.backedQueue.poll();
expectedIndex++;
this.waitForOrderedItem.signalAll();
return result;
} catch (InterruptedException e) {
throw new RuntimeException();
} finally {
lock.unlock();
}
}
}
startIndex is the index of the first ordered task, and
maxQueueSize is used to stop the processing of further tasks (so that memory does not fill up) while we wait for some earlier task to finish. It should be double or triple the number of processing threads, so that processing is not stopped immediately and can still scale.
Then you should create your task :
int indexOrder =0;
while ((line = reader.readLine()) != null) {
inputQueue.put(new OrderedTask(indexOrder++, line));
}
Putting one line per task is only done to match your example; you should change OrderedTask to carry a batch of lines, as sketched below.
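A minimal sketch of such a batched variant, assuming the same reader as above and an inputQueue typed as BlockingQueue<OrderedBatch> (the batch size of 500 is arbitrary):
// Hypothetical batched task: the index identifies a batch rather than a single line.
class OrderedBatch implements Comparable<OrderedBatch> {
    private final int index;
    private final List<String> lines;
    OrderedBatch(int index, List<String> lines) {
        this.index = index;
        this.lines = lines;
    }
    int getIndex() { return index; }
    List<String> getLines() { return lines; }
    @Override
    public int compareTo(OrderedBatch o) {
        return Integer.compare(index, o.index);
    }
}

// Reader side: fill batches of 500 lines each before handing them out.
int batchSize = 500;
int batchIndex = 0;
List<String> batch = new ArrayList<>(batchSize);
String line;
while ((line = reader.readLine()) != null) {
    batch.add(line);
    if (batch.size() == batchSize) {
        inputQueue.put(new OrderedBatch(batchIndex++, batch));
        batch = new ArrayList<>(batchSize);
    }
}
if (!batch.isEmpty()) {
    inputQueue.put(new OrderedBatch(batchIndex++, batch));
}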
Why not reverse the flow?
The output side asks for X batches;
it generates X promises/tasks (promise pattern), each of which calls one of the processing cores (keeping a batch number to pass through to the input side), and collects the call handlers into an ordered list;
each processing core in turn asks the input side for a batch.
Enjoy?

Issue with listing files recursively in Java

I've decided to write a recursive program that writes all the files on my C drive into a .txt file, but it is very slow.
I've read online that recursion is slow, but I can't think of any other way. Is there any way I can optimize this?
EDIT: changed the deepInspect method to use a Stack instead of recursion, which slightly improved performance.
Here is the code
public class FileCount {
static long fCount = 0;
public static void main(String[] args) {
System.out.println("Start....");
long start = System.currentTimeMillis();
File cDir = new File("C:\\");
inspect(cDir);
System.out.println("Operation took : " + (System.currentTimeMillis() - start) + " ms");
}
private static void inspect(File cDir) {
for (File f : cDir.listFiles()) {
deepInspect(f);
}
}
private static void deepInspect(File f) {
Stack<File> stack = new Stack<File>();
stack.push(f);
while (!stack.isEmpty()) {
File current = stack.pop();
if (current.listFiles() != null) {
for (File file : current.listFiles()) {
stack.push(file);
}
}
writeData(current.getAbsolutePath());
}
}
static FileWriter writer = null;
private static void writeData(String absolutePath) {
if (writer == null)
try {
writer = new FileWriter("C:\\Collected\\data.txt");
} catch (IOException e) {}
try {
writer.write(absolutePath);
writer.write("\r\n"); // newline
writer.write("Files : " + fCount);
writer.write("\r\n");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
Java 8 provides a stream to process all files.
Files.walk(Paths.get("/"))
.filter(Files::isRegularFile)
.forEach(System.out::println);
You could add "parallel" processing for improved performance
Files.walk(Paths.get("/"))
.parallel()
.filter(Files::isRegularFile)
.forEach(System.out::println);
I tried this under Linux, so you would need to replace "/" with "C:\\" and try it. Besides, in my case it stops when it tries to read something I don't have access to, so you would need to handle that too if you are not running as admin.
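One way to handle the access problem, as a sketch rather than part of the original answer, is Files.walkFileTree with a visitor that simply skips entries it cannot read:
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class ListFiles {
    public static void main(String[] args) throws IOException {
        Files.walkFileTree(Paths.get("C:\\"), new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(file);
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFileFailed(Path file, IOException exc) {
                // Skip anything we are not allowed to read instead of aborting the walk.
                return FileVisitResult.CONTINUE;
            }
        });
    }
}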
I don't think the recursion is an issue here. The main issue in your code is the file I/O you are doing at every level. Disk access is extremely costly compared to memory access, and if you profile your code you should see a huge spike in disk I/O.
So, essentially, you want to reduce the disk I/O. To do so you could have an in-memory, finite-size buffer that you write the output to, and when the buffer is full you flush the data to the file.
This is, however, considerably more work.
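Note that BufferedWriter already implements exactly this buffer-and-flush behaviour; a sketch of the question's writeData method adapted to use it (the 1 MB buffer size is just an example) could be:
static BufferedWriter writer = null;

private static void writeData(String absolutePath) {
    try {
        if (writer == null) {
            // 1 MB in-memory buffer; output only hits the disk when the buffer fills up.
            writer = new BufferedWriter(new FileWriter("C:\\Collected\\data.txt"), 1 << 20);
        }
        writer.write(absolutePath);
        writer.write("\r\n");
    } catch (IOException e) {
        e.printStackTrace();
    }
}
// Remember to flush()/close() the writer once the traversal is finished,
// otherwise the last buffer-full of paths never reaches the file.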

Increasing Disk Read Throughput By Concurrency

I am trying to read a log file and parse it; the parsing consumes only CPU. I have a server that reads a huge text file at 230 MB/second when it just reads the file without parsing. When I try to parse the text file using a single thread, I can parse around 50-70 MB/second.
I want to increase my throughput by doing that job concurrently. With this code I reached 130 MB/second, and at the peak I saw 190 MB/second. I tried BlockingQueue, Semaphore, ExecutorService, etc. Is there any advice you can give me to reach 200 MB/second throughput?
public static void fileReaderTestUsingSemaphore(String[] args) throws Exception {
CustomFileReader reader = new CustomFileReader(args[0]);
final int concurrency = Integer.parseInt(args[1]);
ExecutorService executorService = Executors.newFixedThreadPool(concurrency);
Semaphore semaphore = new Semaphore(concurrency,true);
System.out.println("Conccurrency in Semaphore: " + concurrency);
String line;
while ((line = reader.getLine()) != null)
{
semaphore.acquire();
try
{
final String p = line;
executorService.execute(new Runnable() {
@Override
public void run() {
reader.splitNginxLinewithIntern(p); // that is the method which parser string and convert to class.
semaphore.release();
}
});
}
catch (Exception ex)
{
ex.printStackTrace();
}
finally {
semaphore.release();
}
}
executorService.shutdown();
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.MINUTES);
System.out.println("ReadByteCount: " + reader.getReadByteCount());
}
You might benefit from the Files.lines() method and the Stream paradigm introduced in Java 8. It will use the system's common fork/join pool. Try this pattern:
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
public class LineCounter
{
public static void main(String[] args) throws IOException
{
Files.lines(Paths.get("/your/file/here"))
.parallel()
.forEach(LineCounter::processLine);
}
private static void processLine(String line) {
// do the processing
}
}
Assuming that you don't care about order of lines:
final String MARKER = new String("");
BlockingQueue<String> q = new LinkedBlockingDeque<>(1024);
for (int i = 0; i < concurrency; i++)
executorService.execute(() -> {
for (;;) {
try {
String s = q.take();
if(s == MARKER) {
q.put(s);
return;
}
reader.splitNginxLinewithIntern(s);
} catch (InterruptedException e) {
return;
}
}
});
String line;
while ((line = reader.readLine()) != null) {
q.put(line);
}
q.put(MARKER);
executorService.shutdown();
executorService.awaitTermination(10, TimeUnit.MINUTES);
This starts a number of threads that each runs a specific task; that task is to read from the queue and run the split method. The reader just feeds the queue, notifies when it's complete and waits for termination.
If you were to use RxJava2 and rxjava2-extras that would simply be
Strings.from(reader)
.flatMap(str -> Flowable
.just(str)
.observeOn(Schedulers.computation())
.doOnNext(reader::splitNginxLinewithIntern)
)
.blockingSubscribe();
You need to go multi-thread, and you need to have the reader thread delegate the parsing to worker threads, that's clear. The point is how to do this delegating with as little overhead as possible.
@Tassos provided code that looks like a solid improvement.
One more thing you can try is to change the delegation granularity, not delegating every single line individually, but building chunks of e.g. 100 lines, thus reducing the delegating/synchronizing overhead by a factor of 100 (but then needing a String[] array or similar, which shouldn't hurt too much).
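A minimal sketch of that chunked delegation, assuming the same CustomFileReader and ExecutorService as in the question and using a List<String> per chunk instead of a String[]:
final int chunkSize = 100;
List<String> chunk = new ArrayList<>(chunkSize);
String line;
while ((line = reader.getLine()) != null) {
    chunk.add(line);
    if (chunk.size() == chunkSize) {
        final List<String> batch = chunk;
        executorService.execute(() -> {
            // One task now parses 100 lines, so queueing overhead is paid once per chunk.
            for (String s : batch) {
                reader.splitNginxLinewithIntern(s);
            }
        });
        chunk = new ArrayList<>(chunkSize);
    }
}
if (!chunk.isEmpty()) {
    final List<String> rest = chunk;
    executorService.execute(() -> {
        for (String s : rest) {
            reader.splitNginxLinewithIntern(s);
        }
    });
}
executorService.shutdown();
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.MINUTES);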

How to append Published DDS content in existing file on Subscriber side?

I have created normal publishers and subscribers implemented in Java. The publisher reads a 5 MB file in 1 MB chunks and publishes each 1 MB chunk to the subscriber. The data is getting published successfully. Now I'm facing an issue with appending the content to the existing file on the subscriber side: in the end I can only find the last 1 MB of data in the file. Please let me know how to solve this issue; I have attached the source code for the publisher and subscriber.
Publisher:
public class MessageDataPublisher {
static StringBuffer fileContent;
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
MessageDataPublisher msgObj=new MessageDataPublisher();
String fileToWrite="test.txt";
msgObj.towriteDDS(fileToWrite);
}
public void towriteDDS(String fileName) throws IOException{
DDSEntityManager mgr=new DDSEntityManager();
String partitionName="PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport binary=new BinaryFileTypeSupport();
mgr.registerType(binary);
// create Topic
mgr.createTopic("Serials");
// create Publisher
mgr.createPublisher();
// create DataWriter
mgr.createWriter();
// Publish Events
DataWriter dwriter = mgr.getWriter();
BinaryFileDataWriter binaryWriter=BinaryFileDataWriterHelper.narrow(dwriter);
int bufferSize=1024*1024;
File readfile=new File(fileName);
FileInputStream is = new FileInputStream(readfile);
byte[] totalbytes = new byte[is.available()];
is.read(totalbytes);
byte[] readbyte = new byte[bufferSize];
BinaryFile binaryInstance;
int k=0;
for(int i=0;i<totalbytes.length;i++){
readbyte[k]=totalbytes[i];
k++;
if(k>(bufferSize-1)){
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=readbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
k=0;
}
}
if(k < (bufferSize-1)){
byte[] remaingbyte = new byte[k];
for(int j=0;j<(k-1);j++){
remaingbyte[j]=readbyte[j];
}
binaryInstance=new BinaryFile();
binaryInstance.name="sendpublisher.txt";
binaryInstance.contents=remaingbyte;
int status = binaryWriter.write(binaryInstance, HANDLE_NIL.value);
ErrorHandler.checkStatus(status, "MsgDataWriter.write");
}
is.close();
try {
Thread.sleep(4000);
} catch (InterruptedException e) {
e.printStackTrace();
}
// clean up
mgr.getPublisher().delete_datawriter(binaryWriter);
mgr.deletePublisher();
mgr.deleteTopic();
mgr.deleteParticipant();
}
}
Subscriber:
public class MessageDataSubscriber {
static RandomAccessFile randomAccessFile ;
public static void main(String[] args) throws IOException {
DDSEntityManager mgr = new DDSEntityManager();
String partitionName = "PARTICIPANT";
// create Domain Participant
mgr.createParticipant(partitionName);
// create Type
BinaryFileTypeSupport msgTS = new BinaryFileTypeSupport();
mgr.registerType(msgTS);
// create Topic
mgr.createTopic("Serials");
// create Subscriber
mgr.createSubscriber();
// create DataReader
mgr.createReader();
// Read Events
DataReader dreader = mgr.getReader();
BinaryFileDataReader binaryReader=BinaryFileDataReaderHelper.narrow(dreader);
BinaryFileSeqHolder binaryseq=new BinaryFileSeqHolder();
SampleInfoSeqHolder infoSeq = new SampleInfoSeqHolder();
boolean terminate = false;
int count = 0;
while (!terminate && count < 1500) {
// To run indefinitely
binaryReader.take(binaryseq, infoSeq, 10,
ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,ANY_INSTANCE_STATE.value);
for (int i = 0; i < binaryseq.value.length; i++) {
toWrtieXML(binaryseq.value[i].contents);
terminate = true;
}
try
{
Thread.sleep(200);
}
catch(InterruptedException ie)
{
}
++count;
}
binaryReader.return_loan(binaryseq,infoSeq);
// clean up
mgr.getSubscriber().delete_datareader(binaryReader);
mgr.deleteSubscriber();
mgr.deleteTopic();
mgr.deleteParticipant();
}
private static void toWrtieXML(byte[] bytes) throws IOException {
// TODO Auto-generated method stub
File Writefile=new File("samplesubscriber.txt");
if(!Writefile.exists()){
randomAccessFile = new RandomAccessFile(Writefile, "rw");
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
else{
randomAccessFile = new RandomAccessFile(Writefile, "rw");
long i=Writefile.length();
randomAccessFile.seek(i);
randomAccessFile.write(bytes, 0, bytes.length);
randomAccessFile.close();
}
}
}
Thanks in advance
It is hard to give a conclusive answer to your question, because your issue could be the result of several different causes. Also, once the cause of the problem has been identified, you will probably have multiple options to mitigate it.
The first place to look is at the reader side. The code does a take() in a loop with a 200 millisecond pause between each take. Depending on your QoS settings on the DataReader, you might be facing a situation where your samples get overwritten in the DataReader while your application is sleeping for 200 milliseconds. If you are doing this over a gigabit ethernet, then a typical DDS product would be able to do those 5 chunks of 1 megabyte within that sleep period, meaning that your default, one-place buffer will get overwritten 4 times during your sleep.
This scenario would be likely if you used the default history QoS settings for your BinaryFileDataReader, which means history.kind = KEEP_LAST and history.depth = 1. Increasing the latter to a larger value, for example to 20, would result in a queue capable of holding 20 chunks of your file while you are sleeping. That should be sufficient for now.
If this does not resolve your issue, other possible causes can be explored.
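For illustration only, and as an assumption about your environment rather than a confirmed fix: with the classic DCPS Java binding (the kind of API the DDSEntityManager helper usually wraps), raising the history depth on the DataReader QoS looks roughly like the sketch below. Adjust the names to your vendor's API and to wherever DDSEntityManager actually creates its reader.
// Hypothetical sketch for the classic DCPS Java API; not taken from the question's code.
DataReaderQosHolder qosHolder = new DataReaderQosHolder();
subscriber.get_default_datareader_qos(qosHolder);
qosHolder.value.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
qosHolder.value.history.depth = 20; // keep up to 20 samples per instance instead of 1
DataReader reader = subscriber.create_datareader(
        topic, qosHolder.value, null, STATUS_MASK_NONE.value);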

Threading in Java for files

I am looking to read the contents of a file in Java. I have about 8000 files whose contents I need to read and put into a HashMap like (path, contents). I think using threads would be an option for speeding this up.
From what I know, having all 8000 files read their contents in different threads is not possible (we may want to limit the number of threads); any comments on that? Also, I am new to threading in Java, can anyone help me get started on this one?
So far I have thought of this pseudocode:
public class ThreadingTest extends Thread {
public HashMap<String, String > contents = new HashMap<String, String>();
public ThreadingTest(ArrayList<String> paths)
{
for(String s : paths)
{
// paths is paths to files.
// Have threading here for each path going to get contents from a
// file
//Not sure how to limit and start threads here
readFile(s);
Thread t = new Thread();
t.start();
}
}
public String readFile(String path) throws IOException
{
FileReader reader = new FileReader(path);
StringBuilder sb = new StringBuilder();
BufferedReader br = new BufferedReader(reader);
String line;
while ( (line=br.readLine()) != null) {
sb.append(line);
}
br.close();
return sb.toString();
}
}
Any help in completing the threading process would be appreciated. Thanks.
Short answer: Read the files sequentially. Disk I/O doesn't parallelize well.
Long Answer: Threading might improve the read performance if the disks are good at random access (SSD disks are) or if the files are placed on several different disks, but if they're not you're just likely to end up with a lot of cache misses and waiting for the disks to seek the right read position. (You may still end up there even if your disks are good at random access.)
If you want to measure instead of guess, use Executors.newFixedThreadPool to create an ExecutorService which can read your files in parallell. Experiment with different thread counts, but don't be surprised if one reader thread per physical disk gives you the best performance.
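A minimal sketch of that measurement setup, assuming the 8000 paths are already collected in a list (the class name is made up); run it with different values of threads and compare timings:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

public class ParallelFileReader {
    public static Map<String, String> readAll(List<String> paths, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Map<String, String> contents = new ConcurrentHashMap<>();
        for (final String path : paths) {
            pool.submit(() -> {
                try {
                    // Read the whole file and store it under its path.
                    byte[] bytes = Files.readAllBytes(Paths.get(path));
                    contents.put(path, new String(bytes, StandardCharsets.UTF_8));
                } catch (IOException e) {
                    System.err.println("failed on " + path + ": " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return contents;
    }
}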
This is a typical task for thread pool. See the tutorial here: http://download.oracle.com/javase/tutorial/essential/concurrency/pools.html
import java.io.*;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.*;
public class PooledFileProcessing {
private Map<String, String> contents = Collections.synchronizedMap(new HashMap<String, String>());
// Integer.MAX_VALUE items max
private LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();
private ExecutorService executor = new ThreadPoolExecutor(
5, // five workers by default
20, // up to twenty workers
1, TimeUnit.MINUTES, // idle thread dies in one minute
workQueue
);
public void process(final String basePath) {
visit(new File(basePath));
System.out.println(workQueue.size() + " jobs still in queue");
executor.shutdown();
try {
executor.awaitTermination(5, TimeUnit.MINUTES);
} catch (InterruptedException e) {
System.out.println("interrupted while awaiting termination");
}
System.out.println(contents.size() + " files indexed");
}
public void visit(final File file) {
if (!file.exists()) {
return;
}
if (file.isFile()) { // skip the dirs
executor.submit(new RunnablePullFile(file));
}
// traverse children
if (file.isDirectory()) {
final File[] children = file.listFiles();
if (children != null && children.length > 0) {
for (File child : children) {
visit(child);
}
}
}
}
public static void main(String[] args) {
new PooledFileProcessing().process(args.length == 1 ? args[0] : System.getProperty("user.home"));
}
protected class RunnablePullFile implements Runnable {
private final File file;
public RunnablePullFile(File file) {
this.file = file;
}
public void run() {
BufferedReader reader = null;
try {
reader = new BufferedReader(new FileReader(file));
StringBuilder sb = new StringBuilder();
String line;
while (
(line=reader.readLine()) != null &&
sb.length() < 8192 /* remove this check for a nice OOME or swap thrashing */
) {
sb.append(line);
}
contents.put(file.getPath(), sb.toString());
} catch (IOException e) {
System.err.println("failed on file: '" + file.getPath() + "': " + e.getMessage());
} finally {
// close the reader whether reading succeeded or failed
if (reader != null) {
try {
reader.close();
} catch (IOException e1) {
// ignore that one
}
}
}
}
}
}
From my experience, threading helps - use a thread pool and play with values around 1..2 threads per core.
Just take care with the hash map - consider putting data into the map via a synchronized method only. I remember I once had some ugly issues in a similar project, and they were related to concurrent modifications of a central hash map.
Just some quick tips.
First of all, to get started on threads, you should just look at the Runnable interface or the Thread class. To make a thread you either have to implement this interface with a class or extend this class with another class. You can also make anonymous threads, but I dislike the readability of those unless it's something SUPER simple.
Next, some notes on processing text with multiple threads, because it just so happens I have experience in exactly this. Keep in mind that if the files are large and a single file takes a noticeably long time to process, you will want to monitor your CPU. In my experience I was doing lots of calculations and lookups while processing, which added hugely to my load, so in the end I found that I could only make as many threads as I had processors, because each thread was so labor intensive. So keep that in mind: you want to monitor the effect each thread has on the processor.
I'm not sure having threads for this would really speed up the process if all the files are on the same physical disk. It could even slow things down because the disk would have to constantly switch from one location to the other.
