Suppose I have a CSV file with hundreds of lines, each containing two keywords as cells. For each line I'd like to run a Google search on the pair and print the first result on the page to the console (or store it in some array). Reading one line at a time, I imagine I would do this with something like the following:
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));
String[] nextLine;
while ((nextLine = reader.readNext()) != null) {
    driver.get("http://google.com/");
    driver.findElement(By.name("q")).click();
    driver.findElement(By.name("q")).clear();
    driver.findElement(By.name("q")).sendKeys(nextLine[0] + " " + nextLine[1]);
    System.out.println(driver.findElement(By.xpath(XPATH_TO_1ST)).getText());
}
How would I go about having 5 (or however many) ChromeDriver instances through Selenium process the CSV file as fast as possible? I've been able to get 5 lines done at a time by implementing Runnable on a class that does this and starting 5 threads, but I would like to know if there is a solution where, as soon as one thread completes, it processes the next unprocessed line, as opposed to waiting for all 5 searches to finish before going on to the next 5 lines. Would appreciate any suggested reading or tips on cracking this!
This is a pure Java answer, rather than a Selenium-specific one.
You want to partition the data. A crude but effective partitioner can be made by reading a row from the CSV file and putting it in a Queue. Afterwards, run as many threads as you can profitably use to simply pull the next entry off of the queue and process it.
If you want to do 5 (or more) threads at the same time, you would need to start 5 instances of WebDriver as it is not thread safe. As for updating the CSV, you would need to synchronize writes to that for each thread to prevent corruption to the file itself, or you could batch up updates at some threshold and write several lines at once.
See this related question: Can Selenium use multi threading in one browser?
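To illustrate the synchronized-write idea, a minimal sketch might look like the following (the ResultWriter name, the comma-separated layout, and the append mode are assumptions for illustration; real CSV output also needs proper quoting):

// Hypothetical shared writer: every thread funnels its finished row through
// one synchronized method, so partial lines can never interleave in the file.
class ResultWriter {
    private final java.io.PrintWriter out;

    ResultWriter(String path) throws java.io.IOException {
        this.out = new java.io.PrintWriter(new java.io.FileWriter(path, true)); // append mode
    }

    synchronized void writeRow(String keyword1, String keyword2, String result) {
        out.println(keyword1 + "," + keyword2 + "," + result);
        out.flush(); // flush per row so a crash loses at most the current line
    }
}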
Update:
How about this? It ensures the web driver is not re-used between threads.
CSVReader reader = new CSVReader(new FileReader(FILE_PATH));

// number to do at same time
int concurrencyCount = 5;
ExecutorService executorService = Executors.newFixedThreadPool(concurrencyCount);
CompletionService<Boolean> completionService = new ExecutorCompletionService<Boolean>(executorService);

String[] nextLine;

// ensure we use a distinct WebDriver instance per thread
final LinkedBlockingQueue<WebDriver> webDrivers = new LinkedBlockingQueue<WebDriver>();
for (int i = 0; i < concurrencyCount; i++) {
    webDrivers.offer(new ChromeDriver());
}

int count = 0;
while ((nextLine = reader.readNext()) != null) {
    final String[] line = nextLine;
    completionService.submit(new Callable<Boolean>() {
        public Boolean call() {
            try {
                // take a webdriver from the queue to use
                final WebDriver driver = webDrivers.take();
                driver.get("http://google.com/");
                driver.findElement(By.name("q")).click();
                driver.findElement(By.name("q")).clear();
                driver.findElement(By.name("q")).sendKeys(line[0] + " " + line[1]);
                System.out.println(line[1]);
                // note: this assumes each row has a third column to hold the result
                line[2] = driver.findElement(By.xpath(XPATH_TO_1ST)).getText();
                // put webdriver back on the queue
                // (in production code, do this in a finally block so the driver is not lost on error)
                webDrivers.offer(driver);
                return true;
            } catch (InterruptedException e) {
                e.printStackTrace();
                return false;
            }
        }
    });
    count++;
}

boolean errors = false;
while (count-- > 0) {
    Future<Boolean> resultFuture = completionService.take();
    try {
        Boolean result = resultFuture.get();
    } catch (Exception e) {
        e.printStackTrace();
        errors = true;
    }
}
System.out.println("done, errors=" + errors);

// quit() shuts down the browser and the chromedriver process; close() only closes the window
for (WebDriver webDriver : webDrivers) {
    webDriver.quit();
}
executorService.shutdown();
You can create a Callable for each row and give it to the ExecutorService. It takes care of executing the tasks and manages the worker threads for you. Choose the thread pool size carefully for optimal execution time.
More information about thread pool size can be found here
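As a rough starting point (an assumption, not a rule from the linked article), you can derive the pool size from the core count:

// CPU-bound tasks: roughly one thread per core.
// I/O-bound tasks (e.g. waiting on a browser or the network) can profitably use more.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService executorService = Executors.newFixedThreadPool(cores);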
I am developing an API request and I'm using multi-threading. In the output I'm getting the same request generated twice, by two threads. As I debugged, two threads are calling the same method. I need help resolving this issue.
This is my pseudo code
public void run() {
    logger.debug("Thread " + currentThread().getName() + " Running");
    String message = "";
    Connection connection = null;
    InputStream fileinput = null;
    Properties properties = new Properties();
    try {
        File file = new File("/home/sridhar.anirudh/eclipse-workspace/API/Change.properties");
        fileinput = new FileInputStream(file);
        properties.load(fileinput);
        soapEndpointUrl = properties.getProperty("endpoint_url");
        soapAction = properties.getProperty("soap_action");
    } catch (Exception e) {
        e.printStackTrace();
    }
    try {
        connection = Database.getInstance().getConnection();
    } catch (SQLException e1) {
        logger.error("Failed To Get Connection " + e1.getMessage());
        return;
    }
    if (CATEGORY.equalsIgnoreCase("fraudrestriction")) {
        String soapResponse = callSoapWebServiceFraudRestriction(soapEndpointUrl, soapAction);
        String response_status = "";
        if (soapResponse.contains("<tns:Description>SUCCESS</tns:Description>") &&
                soapResponse.contains("<tns:Code>ERR_000</tns:Code>")) {
            response_status = "SUCCESS";
If you kick off two copies of the thread, they will both run, creating the effect you see.
You can create multiple worker threads, but you need to allocate the work between those workers such that each performs a subset of the total workload.
Since you're (seemingly) parsing and processing a file, and making a network service request in response to that file's contents, it's not clear how you intend to divide up the work. That's the key: to use multiple threads to improve throughput, you the programmer must devise a means of partitioning the work between those threads.
As an analogy, if you have one (human) worker working on a job, simply hiring a second worker won't get the job completed any faster unless the work is divided between those workers. That division is your problem. There's nothing magical about threads that can do this for you.
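One concrete way to divide the work is a shared queue: one producer reads the file and enqueues rows, while several workers repeatedly take the next row and make the service calls. A minimal sketch, where process and the sample rows are placeholders for your parsing and web service logic:

import java.util.concurrent.*;

public class WorkQueueExample {
    // Placeholder for the two web service calls made per row.
    static void process(String row) { /* call the services here */ }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> rows = new LinkedBlockingQueue<>();
        rows.put("id1|data1"); // in practice a reader thread fills this from the file
        rows.put("id2|data2");

        Runnable worker = () -> {
            try {
                String row;
                // poll with a timeout so workers exit once the queue stays empty
                while ((row = rows.poll(1, TimeUnit.SECONDS)) != null) {
                    process(row);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < 4; i++) {
            new Thread(worker, "worker-" + i).start();
        }
    }
}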
I am trying to read a file and add each line to a list.
[Simple drawing explaining the goal]
Main class -
public class SimpleTreadPoolMain {
    public static void main(String[] args) {
        ReadFile reader = new ReadFile();
        File file = new File("C:\\myFile.csv");
        try {
            reader.readFile(file);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Reader class -
public class ReadFile {
    ExecutorService executor = Executors.newFixedThreadPool(5); // creating a pool of 5 threads
    List<String> list = new ArrayList<>();

    void readFile(File file) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = br.readLine()) != null) {
                Runnable saver = new SaveToList(line, list);
                executor.execute(saver); // calling execute method of ExecutorService
            }
        }
        executor.shutdown();
        while (!executor.isTerminated()) { }
    }
}
Saver class -
public class SaveToList<E> implements Runnable {
    List<E> myList;
    E line;

    public SaveToList(E line, List<E> list) {
        this.line = line;
        this.myList = list;
    }

    public void run() {
        // modify the line
        myList.add(line);
    }
}
I tried to have many saver threads adding to the same list, instead of one saver adding to the list one by one. I want to use threads because I need to modify the data before adding it to the list, and I assume that modification takes some time, so parallelizing that part should reduce the time consumption, right?
But this doesn't work. I am unable to return a global list that includes all the values from the file. I want only one global list of values from the file, so the code definitely has to change. If someone can guide me, it would be greatly appreciated.
Even though adding one by one in a single thread would work, wouldn't using a thread pool make it faster?
Using multiple threads won't speed anything up here.
You are:
Reading a line from a file, serially.
Creating a runnable and submitting it into a thread pool
The runnable then adds things into a list
Given that you're using an ArrayList, you need to synchronize access to it, because you're mutating it from multiple threads. So, you are adding things into the list serially.
But even without the synchronization, the time taken for the IO will far exceed the time taken to add the string into the list. And adding in multithreading is just going to slow it down more, because it's doing work to construct the runnable, submit it to the thread pool, schedule it, etc.
It's simpler just to skip the whole middle step:
Read a line from a file, serially.
Add the line to the list, serially.
So:
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    String line;
    while ((line = br.readLine()) != null) {
        list.add(line);
    }
}
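For completeness: if you did keep the multi-threaded version despite the above, the minimal fix for the shared list would be a synchronized wrapper (a sketch, not a recommendation here):

// Each add() is now thread-safe; iteration still needs external synchronization.
List<String> list = Collections.synchronizedList(new ArrayList<String>());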
You should in fact test whether multi-threading is worth it in your application: compare how much time it takes to read the whole file without doing any processing on rows against the time it takes to process the whole file serially.
If your per-row processing is not too complex, my guess is it is not worth using multi-threading.
If you find that processing takes much longer, then you can think about using one or more threads to do the computations.
If so, you could use Futures to process batches of input strings, or you could use a thread-safe Queue to send strings to another process.
private static final int BATCH_SIZE = 1000;

public static void main(String[] args) throws IOException {
    BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream("big_file.csv"), "utf-8"));
    ExecutorService pool = Executors.newFixedThreadPool(8);
    String line;
    List<String> batch = new ArrayList<>(BATCH_SIZE);
    List<Future<List>> results = new LinkedList<>();
    while ((line = reader.readLine()) != null) {
        batch.add(line);
        if (batch.size() >= BATCH_SIZE) {
            Future<List> f = noWaitExec(batch, pool);
            results.add(f);
            batch = new ArrayList<>(BATCH_SIZE);
        }
    }
    Future<List> f = noWaitExec(batch, pool);
    results.add(f);
    for (Future<List> future : results) {
        try {
            Object object = future.get();
            // Use your results here
        } catch (Exception e) {
            // Manage this....
        }
    }
    pool.shutdown(); // let the JVM exit once all tasks finish
}

private static Future<List> noWaitExec(final List<String> batch, ExecutorService pool) {
    return pool.submit(new Callable<List>() {
        public List call() throws Exception {
            List result = new ArrayList<>(batch.size());
            for (String string : batch) {
                result.add(process(string));
            }
            return result;
        }
    });
}

private static Object process(String string) {
    // Your process ....
    return null;
}
There are many other possible solutions (Observables, parallel streams, pipes, CompletableFutures... you name it). Still, I think that most of the time is spent reading the file; just using a BufferedInputStream with a big enough buffer could cut your times more than parallel computing does.
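For instance, a parallel-stream sketch of the same pipeline (assuming a process method like the one above) could be:

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.*;

static List<Object> processAll(String file) throws IOException {
    // Files.lines streams the file lazily; parallel() spreads the per-line
    // process() calls across the common fork-join pool.
    try (Stream<String> lines = Files.lines(Paths.get(file))) {
        return lines.parallel()
                    .map(line -> process(line))
                    .collect(Collectors.toList());
    }
}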
I'm trying to create a java program that downloads certain asset files from an FTP server to a local file. Because my (free) FTP server doesn't support file sizes over a few megabytes, I decided to split up the files when they are uploaded and recombine them when the program downloads them. This works, but it is rather slow, because for each file, it has to get the InputStream, which takes some time.
The FTP server I use has a way to download the files without actually logging into the server, so I'm using this code to get the InputStream:
private static final InputStream getInputStream(String file) throws IOException {
    return new URL("http://site.website.com/path/" + file).openStream();
}
To get the InputStream of a part of the asset file I'm using this code:
public static InputStream getAssetInputStream(String asset, int num) throws IOException, FTPException {
    try {
        return getInputStream("assets/" + asset + "_" + num + ".raf");
    } catch (Exception e) {
        // error handling (the method must return a value or rethrow for this to compile)
        throw e;
    }
}
Because the getAssetInputStream(String, int) method takes some time to run (especially if the file size is more than a megabyte), I decided to make the code that actually downloads the file multi-threaded. Here is where my problem lies.
final Map<Integer, Boolean> done = new HashMap<Integer, Boolean>();
final Map<Integer, byte[]> parts = new HashMap<Integer, byte[]>();
for (int i = 0; i < numParts; i++) {
    final int part = i;
    done.put(part, false);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                InputStream is = FTP.getAssetInputStream(asset, part);
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
                int len = 0;
                while ((len = is.read(buf)) > 0) {
                    baos.write(buf, 0, len);
                    curDownload.addAndGet(len);
                    totAssets.addAndGet(len);
                }
                parts.put(part, baos.toByteArray());
                done.put(part, true);
            } catch (IOException e) {
                // error handling
            } catch (FTPException e) {
                // error handling
            }
        }
    }, "Download-" + asset + "-" + i).start();
}
while (done.values().contains(false)) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
File assetFile = new File(dir, "assets/" + asset + ".raf");
assetFile.createNewFile();
FileOutputStream fos = new FileOutputStream(assetFile);
for (int i = 0; i < numParts; i++) {
    fos.write(parts.get(i));
}
fos.close();
This code works, but not always. When I run it on my desktop computer, it works almost always. Not 100% of the time, but often it works just fine. On my laptop, which has a far worse internet connection, it almost never works. The result is a file that is incomplete. Sometimes, it downloads 50% of the file. Sometimes, it downloads 90% of the file, it differs every time.
Now, if I replace the .start() by .run(), the code works just fine, 100% of the time, even on my laptop. It is, however, incredibly slow, so I'd rather not use .run().
Is there a way I could change my code so it does work multi-threaded? Any help will be appreciated.
Firstly, get your FTP server replaced; there are plenty of free FTP servers that support serving arbitrary file sizes, with additional features. But I digress...
Your code seems to have many unrelated problems that could potentially all cause the behavior you are seeing, addressed below:
You have race conditions from accessing the done and parts maps from unprotected/unsynchronized access from multiple threads. This could cause data corruption and loss of synchronization for these variables between threads, potentially causing done.values().contains(false) to return true even when it's really not.
You are calling done.values().contains() repeatedly at a high frequency. Whilst the javadoc doesn't explicitly state it, a HashMap likely traverses every value in O(n) fashion to check whether a given map contains a value. Coupled with the fact that other threads are modifying the map, you'll get undefined behavior. According to the values() javadoc:
If the map is modified while an iteration over the collection is in progress (except through the iterator's own remove operation), the results of the iteration are undefined.
You are somehow calling new URL("http://site.website.com/path/" + file).openStream(); but stating you are using FTP. The http:// in the link defines the protocol openStream() tries to open with, and http:// is not ftp://. Not sure if this is a typo, or whether you meant HTTP (or have an HTTP server serving identical files).
Any thread raising any type of Exception will cause the code to fail, given that not all parts will have "completed" (based on your busy-wait loop design). Granted, you may have redacted some other logic to guard against this, but otherwise this is a potential problem with the code.
You aren't closing any streams that you've opened. This could mean that the underlying socket itself is also left open. Not only does this constitute resource leakage, if the server itself has some sort of maximum number of simultaneous connection limit, you are only causing new connections to fail because the old, completed transfers are not closed.
Based on the issues above, I propose moving the download logic into a Callable task and running them through an ExecutorService as follows:
LinkedList<Callable<byte[]>> tasksToExecute = new LinkedList<>();

// Populate tasks to run
for (int i = 0; i < numParts; i++) {
    final int part = i;
    // Lambda task that downloads one part and returns its bytes
    tasksToExecute.add(() -> {
        InputStream is = null;
        try {
            is = FTP.getAssetInputStream(asset, part);
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            byte[] buf = new byte[DOWNLOAD_BUFFER_SIZE];
            int len = 0;
            while ((len = is.read(buf)) > 0) {
                baos.write(buf, 0, len);
                curDownload.addAndGet(len);
                totAssets.addAndGet(len);
            }
            return baos.toByteArray();
        } catch (IOException e) {
            // handle exception
        } catch (FTPException e) {
            // handle exception
        } finally {
            if (is != null) {
                try {
                    is.close();
                } catch (IOException ignored) {}
            }
        }
        return null;
    });
}

// Retrieve an ExecutorService instance; note the work stealing pool is Java 8 only.
// This can be substituted with newFixedThreadPool(nThreads) for Java < 8, which also gives tight control over the number of simultaneous links.
ExecutorService executor = Executors.newWorkStealingPool(4);

// Tells the executor to execute all the tasks and give us the results.
// invokeAll blocks until all tasks complete; declare or handle InterruptedException in the enclosing method.
List<Future<byte[]>> resultFutures = executor.invokeAll(tasksToExecute);

// Populates the file
File assetFile = new File(dir, "assets/" + asset + ".raf");
assetFile.createNewFile();
try (FileOutputStream fos = new FileOutputStream(assetFile)) {
    // Iterate through the futures, writing them to file in order
    for (Future<byte[]> result : resultFutures) {
        byte[] partData = result.get();
        if (partData == null) {
            // exception occurred while downloading this part, handle appropriately
        } else {
            fos.write(partData);
        }
    }
} catch (IOException | InterruptedException | ExecutionException ex) {
    // handle exception
}
Using the executor service, you further optimize your multi-threading scenario, since the output file starts being written as soon as the pieces are available (in order), and the threads themselves are reused to save on thread-creation costs.
As mentioned, there could be the case where too many simultaneous links causes the server to reject connections (or, even more dangerously, to write an EOF that makes you think the part was downloaded). In this case, the number of worker threads can be tweaked via newFixedThreadPool(nThreads) to ensure that at any given time only nThreads downloads can happen concurrently.
I have the following code in my application which does two things:
Parse the file, which has 'n' records.
For each record in the file, make two web service calls.
public static List<String> parseFile(String fileName) {
    List<String> idList = new ArrayList<String>();
    try {
        BufferedReader cfgFile = new BufferedReader(new FileReader(new File(fileName)));
        String line = null;
        cfgFile.readLine(); // skip the header line
        while ((line = cfgFile.readLine()) != null) {
            if (!line.trim().equals("")) {
                String[] fields = line.split("\\|");
                idList.add(fields[0]);
            }
        }
        cfgFile.close();
    } catch (IOException e) {
        System.out.println(e + " Unexpected File IO Error.");
    }
    return idList;
}
When I try to parse a file having 1 million lines of records, the Java process fails after processing a certain amount of data with a java.lang.OutOfMemoryError: Java heap space error. I can partly figure out that the Java process stops because of this huge amount of data. Kindly suggest how to proceed with this huge data.
EDIT: Will this part of the code, new BufferedReader(new FileReader(new File(fileName))); read the whole file, and is it affected by the size of the file?
The problem you have is that you are accumulating all the data in the list. The best way to approach this is to do it in a streaming fashion. This means do not accumulate all the ids in the list, but call your web service on each row, or accumulate a smaller buffer and then make the call.
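A sketch of the smaller-buffer variant, building on the loop from the question (BATCH_SIZE and callWebService are hypothetical names):

List<String> buffer = new ArrayList<String>(BATCH_SIZE);
while ((line = cfgFile.readLine()) != null) {
    if (!line.trim().equals("")) {
        buffer.add(line.split("\\|")[0]);
        if (buffer.size() >= BATCH_SIZE) {
            callWebService(buffer); // hypothetical batched call
            buffer.clear();         // memory stays bounded by BATCH_SIZE
        }
    }
}
if (!buffer.isEmpty()) {
    callWebService(buffer); // flush the final partial batch
}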
Opening the file and creating the BufferedReader will have no impact on memory consumption, as the bytes from the file will be read (more or less) line by line. The problem is at this point in the code idList.add(fields[0]);, the list will grow as large as the file as you keep accumulating all of the file data into it.
Your code should do something like this:
while ((line = cfgFile.readLine()) != null) {
    if (!line.trim().equals("")) {
        String[] fields = line.split("\\|");
        callToRemoteWebService(fields[0]);
    }
}
Increase your Java heap size using the -Xms and -Xmx options. If not set explicitly, the JVM sets the heap size to ergonomic defaults, which in your case is not enough. Read this paper to find out more about tuning memory in the JVM: http://www.oracle.com/technetwork/java/javase/tech/memorymanagement-whitepaper-1-150020.pdf
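For example (the values here are illustrative; tune them to your data and machine):

java -Xms512m -Xmx4g com.example.FileParser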
EDIT: An alternative is to do this in a producer-consumer way to exploit parallel processing. The general idea is to create a producer thread that reads the file and queues tasks for processing, and n consumer threads that consume them. A very general idea (for illustrative purposes) is the following:
// blocking queue holding the tasks to be executed
final SynchronousQueue<Callable<Void>> queue = // ...

// reads the file and submits tasks for processing
final Runnable producer = new Runnable() {
    public void run() {
        BufferedReader in = null;
        try {
            in = new BufferedReader(new FileReader(new File(fileName)));
            String line = null;
            while ((line = in.readLine()) != null) {
                if (!line.trim().equals("")) {
                    final String[] fields = line.split("\\|");
                    // this will block if there are no consumer threads available to process it...
                    queue.put(new Callable<Void>() {
                        public Void call() {
                            process(fields);
                            return null;
                        }
                    });
                }
            }
        } catch (IOException e) {
            // handle the read failure here...
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            // close the buffered reader here...
        }
    }
};

// Consumes the tasks submitted by the producer. Consumers can be pooled
// for parallel processing.
final Runnable consumer = new Runnable() {
    public void run() {
        try {
            while (true) {
                // this method blocks if there are no items left for processing in the queue...
                Callable<Void> task = queue.take();
                task.call();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (Exception e) {
            // the task failed; handle as appropriate...
        }
    }
};
Of course you have to write code that manages the lifecycle of the consumer and producer threads. The right way to do this would be by implementing it using an Executor.
When you want to work with big data, you have 2 choices:
use a big enough heap to fit all the data. this will "work" for a while, but if your data size is unbounded, it will eventually fail.
work with the data incrementally. only keep part of the data (of a bounded size) in memory at any one time. this is the ideal solution as it will scale to any amount of data.
I am looking to read the contents of a file in Java. I have about 8000 files to read and want their contents in a HashMap like (path, contents). I think using threads would be an option to speed up the process.
From what I know, reading all 8000 files in different threads is not possible (we may want to limit the threads). Any comments on that? Also, I am new to threading in Java; can anyone help me get started on this one?
So far I have thought of this pseudo code:
public class ThreadingTest extends Thread {
    public HashMap<String, String> contents = new HashMap<String, String>();

    public ThreadingTest(ArrayList<String> paths) {
        for (String s : paths) {
            // paths is paths to files.
            // Have threading here for each path going to get contents from a
            // file
            // Not sure how to limit and start threads here
            try {
                readFile(s);
            } catch (IOException e) {
                e.printStackTrace();
            }
            Thread t = new Thread();
            t.start();
        }
    }

    public String readFile(String path) throws IOException {
        FileReader reader = new FileReader(path);
        StringBuilder sb = new StringBuilder();
        BufferedReader br = new BufferedReader(reader);
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line);
        }
        return sb.toString();
    }
}
Any help in completing the threading process. Thanks
Short answer: Read the files sequentially. Disk I/O doesn't parallelize well.
Long Answer: Threading might improve the read performance if the disks are good at random access (SSD disks are) or if the files are placed on several different disks, but if they're not you're just likely to end up with a lot of cache misses and waiting for the disks to seek the right read position. (You may still end up there even if your disks are good at random access.)
If you want to measure instead of guess, use Executors.newFixedThreadPool to create an ExecutorService which can read your files in parallel. Experiment with different thread counts, but don't be surprised if one reader thread per physical disk gives you the best performance.
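A minimal measurement harness might look like this sketch (the timeReads helper and whole-file reads are assumptions for illustration):

static long timeReads(List<String> paths, int threadCount) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threadCount);
    long start = System.nanoTime();
    List<Future<String>> futures = new ArrayList<>();
    for (String path : paths) {
        // each task reads one whole file and returns its contents
        futures.add(pool.submit(() -> new String(Files.readAllBytes(Paths.get(path)))));
    }
    for (Future<String> f : futures) {
        f.get(); // wait for every read to finish
    }
    pool.shutdown();
    return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
}

Call it with, say, 1, 2, 4, and 8 threads over the same file set and compare the timings on your actual hardware.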
This is a typical task for a thread pool. See the tutorial here: http://download.oracle.com/javase/tutorial/essential/concurrency/pools.html
import java.io.*;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.*;

public class PooledFileProcessing {
    private Map<String, String> contents = Collections.synchronizedMap(new HashMap<String, String>());

    // Integer.MAX_VALUE items max
    private LinkedBlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();

    private ExecutorService executor = new ThreadPoolExecutor(
            5, // five workers by default
            20, // up to twenty workers
            1, TimeUnit.MINUTES, // idle thread dies in one minute
            workQueue
    );

    public void process(final String basePath) {
        visit(new File(basePath));
        System.out.println(workQueue.size() + " jobs still in queue");
        executor.shutdown();
        try {
            executor.awaitTermination(5, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            System.out.println("interrupted while awaiting termination");
        }
        System.out.println(contents.size() + " files indexed");
    }

    public void visit(final File file) {
        if (!file.exists()) {
            return;
        }
        if (file.isFile()) { // skip the dirs
            executor.submit(new RunnablePullFile(file));
        }
        // traverse children
        if (file.isDirectory()) {
            final File[] children = file.listFiles();
            if (children != null && children.length > 0) {
                for (File child : children) {
                    visit(child);
                }
            }
        }
    }

    public static void main(String[] args) {
        new PooledFileProcessing().process(args.length == 1 ? args[0] : System.getProperty("user.home"));
    }

    protected class RunnablePullFile implements Runnable {
        private final File file;

        public RunnablePullFile(File file) {
            this.file = file;
        }

        public void run() {
            BufferedReader reader = null;
            try {
                reader = new BufferedReader(new FileReader(file));
                StringBuilder sb = new StringBuilder();
                String line;
                while (
                        (line = reader.readLine()) != null &&
                        sb.length() < 8192 /* remove this check for a nice OOME or swap thrashing */
                ) {
                    sb.append(line);
                }
                contents.put(file.getPath(), sb.toString());
            } catch (IOException e) {
                System.err.println("failed on file: '" + file.getPath() + "': " + e.getMessage());
            } finally {
                // close in finally so the reader is released on success as well as on failure
                if (reader != null) {
                    try {
                        reader.close();
                    } catch (IOException e1) {
                        // ignore that one
                    }
                }
            }
        }
    }
}
From my experience, threading helps; use a thread pool and play with values around 1-2 threads per core.
Just take care with the hash map: consider putting data into the map via a synchronized method only. I remember I once had some ugly issues in a similar project, and they were related to concurrent modifications of a central hash map.
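On current JVMs the simplest way to get that safety is a concurrent map rather than a hand-rolled synchronized method (a sketch):

// Individual put/get calls are thread-safe without external locking.
Map<String, String> contents = new ConcurrentHashMap<String, String>();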
Just some quick tips.
First of all, to get you started on threads, you should look at the Runnable interface or the Thread class. To make a thread, you either implement this interface with a class or extend this class with another class. You can also make anonymous threads, but I dislike the readability of those unless it's something SUPER simple.
Next, some notes on processing text with multiple threads, because it just so happens I have experience in exactly this! Keep in mind that if the files are large and take a noticeably long time to process, you will want to monitor your CPU. In my experience, I was doing lots of calculations and lookups while processing, which added hugely to my load, so in the end I found that I could only make as many threads as I had processors, because each thread was so labor-intensive. So keep that in mind: you want to monitor the effect each thread has on the processor.
I'm not sure having threads for this would really speed up the process if all the files are on the same physical disk. It could even slow things down because the disk would have to constantly switch from one location to the other.