Text Search based algorithm not behaving as intended - java

Update
I've updated the question with newer code suggested by fellow SO users and will be clarifying any ambiguous text that was previously there.
Update #2
I only have access to the log files generated by the application in question, so I'm constrained to work within the content of the log files; solutions outside that scope aren't possible. I have modified the sample data a little bit. I would like to point out the following key variables.
Thread ID - Ranges from 0..19 - A thread is used multiple times. Thus ScriptExecThread(2) could show up multiple times within the logs.
Script - Every thread will run a script on a particular file. Once again, the same script may run on the same thread but won't run on the same thread AND file.
File - Every Thread ID runs a Script on a File. If Thread(10) is running myscript.script on myfile.file, then that EXACT line won't be executed again. A successful run, using the example above, would look like this:
------START------
Thread(10) starting myscript.script on myfile.file
Thread(10) finished myscript.script on myfile.file
------END-------
An unsuccessful run, using the same example, would be:
------START------
Thread(10) starting myscript.script on myfile.file
------END------
Before addressing my query I'll give a rundown of the code used and the desired behavior.
Summary
I'm currently parsing large log files (on average 100k - 600k lines) and am attempting to retrieve certain information in a certain order. I've worked out the boolean algebra behind my request, which seemed to work on paper but not so much in code (I must've missed something blatantly obvious). I'd like to note in advance that the code is not optimized in any shape or form; right now I simply want to get it to work.
In this log file you can see that certain threads hang if they started but never finished. The number of possible thread IDs varies. Here is some pseudocode:
REGEX = "ScriptExecThread(\\([0-9]+\\)).*?(finished|starting)" // in Java
Set started, finished
for (int i = log.size() - 1; i >= 0; i--) {
    if (group(2).contains("starting"))
        started.add(log.get(i))
    else if (group(2).contains("finished"))
        finished.add(log.get(i))
}
started.removeAll(finished)
Search Hung Threads
Set<String> started = new HashSet<String>(), finished = new HashSet<String>();
for (int i = JAnalyzer.csvlog.size() - 1; i >= 0; i--) {
    if (JAnalyzer.csvlog.get(i).contains("ScriptExecThread"))
        JUtility.hasThreadHung(JAnalyzer.csvlog.get(i), started, finished);
}
started.removeAll(finished);
commonTextArea.append("Number of threads hung: " + noThreadsHung + "\n");
for (String s : started) {
    JLogger.appendLineToConsole(s);
    commonTextArea.append(s + "\n");
}
Has Thread Hung
public static boolean hasThreadHung(final String str, Set<String> started, Set<String> finished) {
    Pattern r = Pattern.compile("ScriptExecThread(\\([0-9]+\\)).*?(finished|starting)");
    Matcher m = r.matcher(str);
    boolean hasHung = m.find();
    if (!hasHung)
        return false; // without this guard, m.group() below throws IllegalStateException
    if (m.group(2).contains("starting"))
        started.add(str);
    else if (m.group(2).contains("finished"))
        finished.add(str);
    System.out.println("Started size: " + started.size());
    System.out.println("Finished size: " + finished.size());
    return hasHung;
}
Example Data
ScriptExecThread(1) started on afile.xyz
ScriptExecThread(2) started on bfile.abc
ScriptExecThread(3) started on cfile.zyx
ScriptExecThread(4) started on dfile.zxy
ScriptExecThread(5) started on efile.yzx
ScriptExecThread(1) finished on afile.xyz
ScriptExecThread(2) finished on bfile.abc
ScriptExecThread(3) finished on cfile.zyx
ScriptExecThread(4) finished on dfile.zxy
ScriptExecThread(5) finished on efile.yzy
ScriptExecThread(1) started on bfile.abc
ScriptExecThread(2) started on dfile.zxy
ScriptExecThread(3) started on afile.xyz
ScriptExecThread(1) finished on bfile.abc
END OF LOG
If you examine this, you'll notice that threads 2 & 3 started but failed to finish (the reason is not necessary; I simply need to get the line).
Sample Data
09.08 15:06.53, ScriptExecThread(7),Info,########### starting
09.08 15:06.54, ScriptExecThread(18),Info,###################### starting
09.08 15:06.54, ScriptExecThread(13),Info,######## finished in #########
09.08 15:06.54, ScriptExecThread(13),Info,########## starting
09.08 15:06.55, ScriptExecThread(9),Info,##### finished in ########
09.08 15:06.55, ScriptExecThread(0),Info,####finished in ###########
09.08 15:06.55, ScriptExecThread(19),Info,#### finished in ########
09.08 15:06.55, ScriptExecThread(8),Info,###### finished in 2777 #########
09.08 15:06.55, ScriptExecThread(19),Info,########## starting
09.08 15:06.55, ScriptExecThread(8),Info,####### starting
09.08 15:06.55, ScriptExecThread(0),Info,##########starting
09.08 15:06.55, ScriptExecThread(19),Info,Post ###### finished in #####
09.08 15:06.55, ScriptExecThread(0),Info,###### finished in #########
09.08 15:06.55, ScriptExecThread(19),Info,########## starting
09.08 15:06.55, ScriptExecThread(0),Info,########### starting
09.08 15:06.55, ScriptExecThread(9),Info,########## starting
09.08 15:06.56, ScriptExecThread(1),Info,####### finished in ########
09.08 15:06.56, ScriptExecThread(17),Info,###### finished in #######
09.08 15:06.56, ScriptExecThread(17),Info,###################### starting
09.08 15:06.56, ScriptExecThread(1),Info,########## starting
Currently the code simply displays every log line that contains "starting", which does somewhat make sense when I review the code.
I have removed any redundant information that I don't wish to display. If there is anything that I might have left out feel free to let me know and I'll add it.

If I understand correctly, you have large files and are trying to find patterns of the form "X started (but no mention of X finished)" for all numerical values of X.
If I were to do this, I would use this pseudocode:
Pattern p = Pattern.compile(
    "ScriptExecThread\\(([0-9]+).*?(finished|started)");
Set<Integer> started, finished;
Search for p; for each match m,
    int n = Integer.parseInt(m.group(1));
    if (m.group(2).equals("started")) started.add(n);
    else finished.add(n);
started.removeAll(finished); // found 'em: contains started-but-not-finished
This requires a single regex pass over each file and an O(size-of-finished) set subtraction; it should be 20x faster than your current approach. The regex uses alternation (|) to look for both alternatives at once, reducing the number of passes.
Edit: made regex explicit. Compiling the regex once instead of once-per-line should shave off some extra run-time.
Edit 2: implemented pseudocode, works for me
Edit 3: replaced implementation to show file & line. Reduces memory requirements (does not load whole file into memory); but printing the line does require all "start" lines to be stored.
public class T {
    public static Collection<String> findHung(Iterable<String> data) {
        Pattern p = Pattern.compile(
            "ScriptExecThread\\(([0-9]+).*?(finished|starting)");
        HashMap<Integer, String> started = new HashMap<Integer, String>();
        Set<Integer> finished = new HashSet<Integer>();
        for (String d : data) {
            Matcher m = p.matcher(d);
            if (m.find()) {
                int n = Integer.parseInt(m.group(1));
                if (m.group(2).equals("starting")) started.put(n, d);
                else finished.add(n);
            }
        }
        for (int f : finished) {
            started.remove(f);
        }
        return started.values();
    }

    static Iterable<String> readFile(String path, String encoding) throws IOException {
        final Scanner sc = new Scanner(new File(path), encoding).useDelimiter("\n");
        return new Iterable<String>() {
            public Iterator<String> iterator() { return sc; }
        };
    }

    public static void main(String[] args) throws Exception {
        for (String fileName : args) {
            for (String s : findHung(readFile(fileName, "UTF-8"))) {
                System.out.println(fileName + ": '" + s + "' unfinished");
            }
        }
    }
}
Input: sample data above, as the first argument, called "data.txt". The same data in another file called "data2.txt" as the second argument (javac T.java && java T data.txt data2.txt). Output:
data.txt: ' 09.08 15:06.54, ScriptExecThread(18),Info,###################### starting' unfinished
data.txt: ' 09.08 15:06.53, ScriptExecThread(7),Info,########### starting' unfinished
data2.txt: ' 09.08 15:06.54, ScriptExecThread(18),Info,###################### starting' unfinished
data2.txt: ' 09.08 15:06.53, ScriptExecThread(7),Info,########### starting' unfinished

Keeping two separate sets of started and finished threads (as described by @tucuxi) can't work. If a thread with ID 5 starts, runs, and finishes, then 5 will appear in the finished set, forever. If another thread with ID 5 starts, and hangs, it won't be reported.
Suppose, though, for a moment, that thread IDs aren't reused. Every thread ever created receives a new ID. By keeping separate started and finished sets, you'll have hundreds of thousands of elements in each by the time you're done. Performance on those data structures is proportional to what they've got in them at the time of the operation. It's unlikely that performance will matter for your use case, but if you were performing more expensive operations, or running on data 100 times larger, it might.
Preamble out of the way, here is a working version of @tucuxi's code:
import java.util.*;
import java.io.*;
import java.util.regex.*;
public class T {
    public static Collection<String> findHung(Iterable<String> data) {
        Pattern p = Pattern.compile(
            "ScriptExecThread\\(([0-9]+).*?(finished|starting)");
        HashMap<Integer, String> running = new HashMap<Integer, String>();
        for (String d : data) {
            Matcher m = p.matcher(d);
            if (m.find()) {
                int n = Integer.parseInt(m.group(1));
                if (m.group(2).equals("starting"))
                    running.put(n, d);
                else
                    running.remove(n);
            }
        }
        return running.values();
    }

    static Iterable<String> readFile(String path, String encoding) throws IOException {
        final Scanner sc = new Scanner(new File(path), encoding).useDelimiter("\n");
        return new Iterable<String>() {
            public Iterator<String> iterator() { return sc; }
        };
    }

    public static void main(String[] args) throws Exception {
        for (String fileName : args) {
            for (String s : findHung(readFile(fileName, "UTF-8"))) {
                System.out.println(fileName + ": '" + s + "' unfinished");
            }
        }
    }
}
Note that I've dropped the finished set, and the HashMap is now called running. When new threads start, they go in, and when a thread finishes, it is pulled out. This means that the HashMap will always be the size of the number of currently running threads, which will always be less than or equal to the total number of threads ever run, so operations on it will be faster. (As a pleasant side effect, you can now keep track of how many threads are running on a line-by-line basis, which might be useful in determining why the threads are hanging.)
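That side effect can be sketched as follows (class and method names are mine; the regex is the one from the code above). The same single pass records how many scripts are in flight after each log line:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RunningCount {
    // For each log line, report how many scripts are in flight after that line.
    public static List<Integer> countRunning(Iterable<String> lines) {
        Pattern p = Pattern.compile("ScriptExecThread\\(([0-9]+).*?(finished|starting)");
        HashMap<Integer, String> running = new HashMap<>();
        List<Integer> counts = new ArrayList<>();
        for (String line : lines) {
            Matcher m = p.matcher(line);
            if (m.find()) {
                int id = Integer.parseInt(m.group(1));
                if (m.group(2).equals("starting"))
                    running.put(id, line);   // script started on this thread
                else
                    running.remove(id);      // script finished on this thread
            }
            counts.add(running.size());      // concurrency level after this line
        }
        return counts;
    }
}
```

A spike in the counts that never comes back down points at the region of the log where threads started hanging.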
Here's a Python program I used to generate huge test cases:
#!/usr/bin/python
from random import random, choice
from datetime import datetime
import tempfile

all_threads = set([])
running = []
hung = []
filenames = { }
target_thread_count = 16
hang_chance = 0.001

def log(id, msg):
    now = datetime.now().strftime("%m:%d %H:%M:%S")
    print "%s, ScriptExecThread(%i),Info,%s" % (now, id, msg)

def new_thread():
    if len(all_threads) > 0:
        for t in range(0, 2 + max(all_threads)):
            if t not in all_threads:
                all_threads.add(t)
                return t
    else:
        all_threads.add(0)
        return 0

for i in range(0, 100000):
    if len(running) > target_thread_count:
        new_thread_chance = 0.25
    else:
        new_thread_chance = 0.75
    if random() < new_thread_chance:
        t = new_thread()
        name = next(tempfile._get_candidate_names()) + ".txt"
        filenames[t] = name
        log(t, "%s starting" % (name,))
        if random() < hang_chance:
            hung.append(t)
        else:
            running.append(t)
    elif len(running) > 0:
        victim = choice(running)
        all_threads.remove(victim)
        running.remove(victim)
        log(victim, "%s finished" % (filenames[victim],))  # log the victim, not the last-started thread

The removeAll will never work: hasThreadHung is storing the entire line, so the values in started will never be matched by the values in finished.
You want to do something like:
class ARecord {
    // Proper encapsulation of the members omitted for brevity
    String thread;
    String line;

    public ARecord(String thread, String line) {
        this.thread = thread;
        this.line = line;
    }

    @Override
    public int hashCode() {   // must be hashCode, not hashcode, to override Object's
        return thread.hashCode();
    }

    @Override
    public boolean equals(Object o) {   // must take Object to override Object.equals
        return o instanceof ARecord && thread.equals(((ARecord) o).thread);
    }
}
Then in hasHungThread, create an ARecord and add that to the Sets.
Ex:
started.add(new ARecord(m.group(1), str)); // group(1) is the thread id in your regex
In searchHungThreads you will retrieve the ARecord from started and output it as:
for (ARecord rec : started) {
    JLogger.appendLineToConsole(rec.line);
    commonTextArea.append(rec.line + "\n");
}

Why not solve the problem another way? If all you want is hung threads, you can take a thread stack dump programmatically. You could also use an external tool, but doing it inside your own JVM is presumably easiest. Then expose that as an API, or regularly save a date-time-stamped file with the thread dump. Another program just needs to analyze the thread dumps: if the same thread is at the same spot (same stack trace, or across the same 3-5 functions) over several dumps, there's a good chance it's hung.
There are tools that help you analyze https://www.google.com/search?q=java+thread+dump+tool+open+source
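If you go the in-JVM route, the standard java.lang.management API can take the dump; a minimal sketch (the class name is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DumpThreads {
    // Returns one formatted stack trace per live thread in this JVM.
    // Save successive results with timestamps and diff them to spot
    // threads stuck at the same stack frame across dumps.
    public static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            sb.append('"').append(info.getThreadName()).append('"')
              .append(" state=").append(info.getThreadState()).append('\n');
            for (StackTraceElement e : info.getStackTrace()) {
                sb.append("    at ").append(e).append('\n');
            }
        }
        return sb.toString();
    }
}
```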

Related

Thread.sleep(time) is not working the way I need it to. I need something better

I'm a student in Denmark trying to make a school project. What I'm working on at this moment is a reader class that takes in a string, then prints it out word by word and/or letter by letter.
I did some research and found that Thread.sleep(time) did exactly what I needed it to do. But after I used it, I found out it does not work properly! I tried to research some more and found something called a ThreadPoolExecutor, but I can't figure out how it works in my case.
My reader:
public class TextReader {
    // Print method to print word by word from a string
    public void wordByWord(String text) throws InterruptedException {
        String[] words = text.split(" ");
        for (int i = 0; i < words.length; i++) {
            System.out.print(words[i] + " ");
            Thread.sleep(250);
        }
    }

    // Print method to print letter by letter from a string
    public void letterByLetter(String text) throws InterruptedException {
        String[] words = text.split(" ");
        for (int i = 0; i < words.length; i++) {
            String word = words[i] + " ";
            char[] letters = (word.toCharArray());
            for (int j = 0; j < letters.length; j++) {
                System.out.print(letters[j]);
                Thread.sleep(250); // so it does not print all the letters at once
            }
        }
    }
}
The reason Thread.sleep(time) does not work in my case is that I need to print to the console, and with Thread.sleep(time) it does not print like a waterfall. It prints either the whole string I'm trying to break down (time lower than 250 ms), or a few letters at once (250 ms), or is just so slow I can't watch it (over 250 ms). I need it to run fast and smooth, so it looks like someone is writing it.
I think I successfully recreated your problem. Every delay lower than about 205 ms seems to cause updating problems. Sometimes the words/letters don't appear, but then at the next interval multiple words/letters appear at the same time.
This seems to be a limitation of the Console I/O performance (See this answer). There isn't really anything you can do about this. If you want to output text with a short, minimal delay like this, you need to program your own GUI (for example JavaFX). This will probably solve the performance issues.
Outputs at different delays (screenshots at 205 ms and 190 ms omitted).
Thread's sleep method stops execution of the current thread for the specified number of milliseconds. If output is too slow you can pass fewer ms, and if it's too fast you can increase the timing, so you can tweak it according to your need.
The executor framework is a different thing: it's a way to submit your Runnable tasks to threads managed by the framework.
What you are doing is putting a thread to sleep for that time. The thread will become unblocked after that time; however, you aren't accounting for the overhead of context switching from another thread. What you want is something more like this:
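One way to account for that overhead is to sleep toward absolute target times, so any delay in one iteration is subtracted from the next wait instead of accumulating. A sketch of the idea (class and method names are mine):

```java
import java.io.PrintStream;

public class PacedPrinter {
    // Prints each token at a fixed rate. Instead of sleeping a fixed amount,
    // sleep until an absolute target time, so scheduling/printing overhead
    // in one iteration shortens the next sleep rather than adding drift.
    public static void printPaced(String[] tokens, long delayMs, PrintStream out)
            throws InterruptedException {
        long next = System.currentTimeMillis();
        for (String token : tokens) {
            out.print(token);
            out.flush();
            next += delayMs;
            long wait = next - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait); // skip sleeping if we're already late
        }
    }
}
```

Usage would be `PacedPrinter.printPaced(text.split(" "), 250, System.out)`.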
Tried out the ScheduledExecutorService approach and it seems to work fine. There's some optimization to be done, and some hoops to jump through to wait for the scheduled printing to finish, but it doesn't seem to display the lag (in the two consoles I tried - Eclipse output and Windows Bash).
public class Output {
    public static void main(String[] args) {
        String toPrint = "Hello, my name is Voldemort, but few call me that.";
        StringPrinter printer = new StringPrinter();
        printer.print(toPrint, Output::byCharacter, 30);
        System.out.println();
        printer.print(toPrint, Output::byWord, 150);
    }

    private static List<String> byWord(String toSplit) {
        Iterable<String> it = () -> new Scanner(toSplit);
        return StreamSupport.stream(it.spliterator(), false).map(s -> s + " ").collect(Collectors.toList());
    }

    private static List<String> byCharacter(String toSplit) {
        return toSplit.chars().mapToObj(i -> "" + (char) i).collect(Collectors.toList());
    }
}

class StringPrinter implements Runnable {
    // using an array to be most efficient
    private String[] output;
    private int currentIndex;
    // the service providing the milliseconds delay
    private ScheduledExecutorService printExecutor;

    public void print(String toOutput, Function<String, List<String>> split, int delay) {
        if (printExecutor != null) {
            throw new IllegalStateException();
        }
        printExecutor = Executors.newSingleThreadScheduledExecutor();
        List<String> list = split.apply(toOutput);
        output = list.toArray(new String[list.size()]);
        currentIndex = 0;
        printExecutor.scheduleWithFixedDelay(this, 0, delay, TimeUnit.MILLISECONDS);
        // wait until output has finished
        synchronized (this) {
            while (printExecutor != null)
                try {
                    wait(); // wait for printing to be finished
                } catch (InterruptedException e) {}
        }
    }

    @Override
    public void run() {
        if (currentIndex < output.length) {
            System.out.print(output[currentIndex++]);
        } else {
            // mark this print run as finished
            printExecutor.shutdown();
            printExecutor = null;
            synchronized (this) { notifyAll(); }
        }
    }
}

System.out.print causing latency? [duplicate]

This question already has answers here:
Do not use System.out.println in server side code
(9 answers)
Closed 6 years ago.
I've got a simple program that I got from my Java programming book, just added a bit to it.
package personal;

public class SpeedTest {
    public static void main(String[] args) {
        double DELAY = 5000;
        long startTime = System.currentTimeMillis();
        long endTime = (long) (startTime + DELAY);
        long index = 0;
        while (true) {
            double x = Math.sqrt(index);
            long now = System.currentTimeMillis();
            if (now >= endTime) {
                break;
            }
            index++;
        }
        System.out.println(index + " loops in " + (DELAY / 1000) + " seconds.");
    }
}
This returns 128478180 loops in 5.0 seconds.
If I add System.out.println(x); before the if statement, then my number of loops in 5 seconds goes down to the 400,000s, is that due to latency in the System.out.println()? Or is it just that x was not being calculated when I wasn't printing it out?
Anytime you "do output" within a very-busy loop, in any programming language whatsoever, you introduce two possibly-very-significant delays:
The data must be converted to printable characters, then be written to whatever display/device it might be going to ... and ...
"The act of outputting anything" obliges the process to synchronize itself with any-and-every-other process that might also be generating output.
One alternative strategy that is often used for this purpose is a "trace table." This is an in-memory array, of some fixed size, which contains strings. Entries are added to this table in a "round-robin" fashion: the oldest entry is continuously being replaced by the newest one. This strategy provides a history without requiring output. (The only requirement that remains is that anyone who is adding an entry to the table, or reading from it, must synchronize their activities, e.g. using a mutex.)
Processes which wish to display the contents of the trace-table should grab the mutex, make an in-memory copy of the content-of-interest, then release the mutex before preparing their output. In this way, the various processes which are contributing entries to the trace-table will not be delayed by I/O-associated sources of delay.
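A minimal sketch of such a trace table, assuming a fixed-size ring buffer guarded by the object's own monitor as the mutex (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class TraceTable {
    private final String[] entries;
    private int next = 0;   // index the next entry will overwrite
    private int count = 0;  // how many slots are currently filled

    public TraceTable(int capacity) {
        entries = new String[capacity];
    }

    // Writers add entries round-robin; the oldest entry is overwritten first.
    public synchronized void add(String entry) {
        entries[next] = entry;
        next = (next + 1) % entries.length;
        if (count < entries.length) count++;
    }

    // Readers copy the contents under the lock (oldest first), then format
    // and print the copy outside it, so writers are never delayed by I/O.
    public synchronized List<String> snapshot() {
        List<String> copy = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            copy.add(entries[(next - count + i + entries.length) % entries.length]);
        }
        return copy;
    }
}
```

The busy loop calls `add` (cheap, in-memory); a display thread calls `snapshot` and does the expensive output on its own time.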

Best method for parallel log aggregation

My program needs to analyze a bunch of log files daily, which are generated on an hourly basis from each application server.
So if I have 2 app servers, I will be processing 48 files (24 files * 2 app servers).
File sizes range from 100-300 MB. Each line in every file is a log entry of the format
[identifier]-[number of pieces]-[piece]-[part of log]
for example
xxx-3-1-ABC
xxx-3-2-ABC
xxx-3-3-ABC
These can be distributed over the 48 files I mentioned; I need to merge these logs like so:
xxx-PAIR-ABCABCABC
My implementation uses a thread pool to read through the files in parallel and then aggregates them using a ConcurrentHashMap.
I define a class LogEvent.scala:
class LogEvent(val id: String, val total: Int, var piece: Int, val json: String) {
    var additions: Long = 0
    val pieces = new Array[String](total)
    addPiece(json)

    private def addPiece(json: String): Unit = {
        pieces(piece) = json
        additions += 1
    }

    def isDone: Boolean = {
        additions == total
    }

    def add(slot: Int, json: String): Unit = {
        piece = slot
        addPiece(json)
    }
}
The main processing happens over multiple threads, and the code is something along the lines of:
// For each file
val logEventMap = new ConcurrentHashMap[String, LogEvent]().asScala
Future {
    Source.fromInputStream(gis(file)).getLines().foreach { line =>
        // Extract the id part of the line
        val idPart: String = IDPartExtractor(line)
        // Split line on '-'
        val split: Array[String] = idPart.split("-")
        val id: String = split(0) + "-" + split(1)
        val logpart: String = JsonPartExtractor(line)
        val total = split(2).toInt
        val piece = split(3).toInt

        def slot: Int = {
            piece match {
                case x if x - 1 < 0 => 0
                case _ => piece - 1
            }
        }

        def writeLogEvent(logEvent: LogEvent): Unit = {
            if (logEvent.isDone) {
                // write to buffer
                val toWrite = id + "-PAIR-" + logEvent.pieces.mkString("")
                logEventMap.remove(logEvent.id)
                writer.writeLine(toWrite)
            }
        }

        // The LOCK
        appendLock {
            if (!logEventMap.contains(id)) {
                val logEvent = new LogEvent(id, total, slot, jsonPart)
                logEventMap.put(id, logEvent)
                // writeLogEventToFile()
            } else {
                val logEvent = logEventMap.get(id).get
                logEvent.add(slot, jsonPart)
                writeLogEvent(logEvent)
            }
        }
    }
}
The main thread blocks till all the futures complete
Using this approach I have been able to cut the processing time from an hour+ to around 7-8 minutes.
My questions are as follows -
Can this be done in a better way? I am reading multiple files using different threads, and I need to lock the block where the aggregation happens; are there better ways of doing this?
The map grows very fast in memory; any suggestions for off-heap storage for such a use case?
Any other feedback.
Thanks
A common way to do this is to sort each file and then merge the sorted files. The result is a single file that has the individual items in the order that you want them. Your program then just needs to do a single pass through the file, combining adjacent matching items.
This has some very attractive benefits:
The sort/merge is done by standard tools that you don't have to write
Your aggregator program is very simple. Or, there might even be a standard tool that will do it.
Memory requirements are lessened. The sort/merge programs know how to manage memory, and your aggregation program's memory requirements are minimal.
There are, of course, some drawbacks. You'll use more disk space, and the process will be somewhat slower due to the I/O cost.
When I'm faced with something like this, I almost always go with using the standard tools and a simple aggregator program. The increased performance I get from a custom program just doesn't justify the time it takes to develop the thing.
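The aggregator pass itself can be sketched like this, assuming the lines have already been sorted by the standard tools so that all pieces of an identifier are adjacent and in piece order (the class name and the exact "[id]-[total]-[piece]-[payload]" field layout are my reading of the format in the question):

```java
import java.util.ArrayList;
import java.util.List;

public class SortedAggregator {
    // One pass over sorted "[id]-[total]-[piece]-[payload]" lines,
    // concatenating adjacent pieces that share an identifier.
    public static List<String> aggregate(List<String> sortedLines) {
        List<String> out = new ArrayList<>();
        String currentId = null;
        StringBuilder payload = new StringBuilder();
        for (String line : sortedLines) {
            String[] parts = line.split("-", 4); // id, total, piece, payload
            if (!parts[0].equals(currentId)) {
                if (currentId != null)
                    out.add(currentId + "-PAIR-" + payload); // flush previous group
                currentId = parts[0];
                payload.setLength(0);
            }
            payload.append(parts[3]);
        }
        if (currentId != null)
            out.add(currentId + "-PAIR-" + payload); // flush the last group
        return out;
    }
}
```

Only one group's pieces are ever held in memory at a time, which is where the reduced memory requirement comes from.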
For this sort of thing, if you can, use Splunk; if not, copy what it does, which is to index the log files for on-demand aggregation at a later point.
For off-heap storage, look at distributed caches - Hazelcast or Coherence. Both provide java.util.Map implementations that are distributed over multiple JVMs.

What could cause a java process to get gradually decreasing share of CPU?

I have a very simple Java program that prints out 1 million random numbers. On Linux, I observed the %CPU that this program takes during its lifespan; it starts off at 98% then gradually decreases to 2%, thus causing the program to be very slow. What are some of the factors that might cause the program to gradually get less CPU time?
I've tried running it with nice -20 but I still see the same results.
EDIT: running the program with /usr/bin/time -v I'm seeing an unusual amount of involuntary context switches (588 voluntary vs 16478 involuntary), which suggests that the OS is letting some other higher priority process run.
It boils down to two things:
I/O is expensive, and
Depending on how you're storing the numbers as you go along, that can have an adverse effect on performance as well.
If you're mainly doing System.out.println(randInt) in a loop a million times, then that can get expensive. I/O isn't one of those things that comes for free; writing to any output stream costs resources.
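One common mitigation is to buffer the writes so the underlying stream is touched in large chunks rather than once per println. A sketch (class and method names are mine, and the arithmetic is a stand-in for the random numbers):

```java
import java.io.BufferedWriter;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;

public class BufferedOutput {
    // Writes n numbers through a 64 KB buffer; only a full buffer or the
    // final flush() actually touches the underlying stream.
    public static void writeNumbers(OutputStream target, int n) {
        PrintWriter out = new PrintWriter(
                new BufferedWriter(new OutputStreamWriter(target), 1 << 16));
        for (int i = 0; i < n; i++) {
            out.println(i * 31 % 100); // stand-in for the random number
        }
        out.flush(); // one flush at the end instead of per-line I/O
    }
}
```

Called as `BufferedOutput.writeNumbers(System.out, 1_000_000)`, this removes most of the per-line I/O cost, though the console itself can still be the bottleneck.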
I would start by profiling via JConsole or VisualVM to see what it's actually doing when it has low CPU %. As mentioned in comments there's a high chance it's blocking, e.g. waiting for IO (user input, SQL query taking a long time, etc.)
If your application is I/O bound - for example waiting for responses from network calls, or for disk reads/writes - it will spend most of its time blocked rather than using the CPU, which would look exactly like this.
If you want to try and balance everything, you should create a queue to hold numbers to print, then have one thread generate them (the producer) and the other read and print them (the consumer). This can easily be done with a LinkedBlockingQueue.
public class PrintQueueExample {
    // static so it is accessible from the static main method below
    private static BlockingQueue<Integer> printQueue = new LinkedBlockingQueue<Integer>();

    public static void main(String[] args) throws InterruptedException {
        PrinterThread thread = new PrinterThread();
        thread.start();
        for (int i = 0; i < 1000000; i++) {
            int toPrint = ...(i);
            printQueue.put(Integer.valueOf(toPrint));
        }
        thread.interrupt();
        thread.join();
        System.out.println("Complete");
    }

    private static class PrinterThread extends Thread {
        @Override
        public void run() {
            try {
                while (true) {
                    Integer toPrint = printQueue.take();
                    System.out.println(toPrint);
                }
            } catch (InterruptedException e) {
                // Interruption comes from main, means processing numbers has stopped
                // Finish remaining numbers and stop thread
                List<Integer> remainingNumbers = new ArrayList<Integer>();
                printQueue.drainTo(remainingNumbers);
                for (Integer toPrint : remainingNumbers)
                    System.out.println(toPrint);
            }
        }
    }
}
There may be a few problems with this code, but this is the gist of it.

Can Scheduler override join functionality?

I wrote a simple code that uses multiple threads to calculate number of primes from 1 to N.
public static void main(String[] args) throws InterruptedException {
    Date start;
    start = new Date();
    long startms = start.getTime();
    int number_primes = 0, number_threads = 0;
    number_primes = Integer.parseInt(args[0]);
    number_threads = Integer.parseInt(args[1]);
    MakeThread[] mt = new MakeThread[number_threads];
    for (int i = 1; i <= number_threads; i++) {
        mt[i - 1] = new MakeThread(i, (i - 1) * (number_primes / number_threads), i * (number_primes / number_threads));
        mt[i - 1].start();
    }
    for (int i = 1; i < number_threads; i++) {
        mt[i - 1].join();
    }
    Date end = new Date();
    long endms = end.getTime();
    System.out.println("Time taken = " + (endms - startms));
}
As shown above, I want the total time taken to be displayed (just to measure performance for different inputs). However, I noticed that when I enter a really big value of N and assign only 1 or 2 threads, the scheduler seems to override the join functionality (i.e., the last print statement is displayed before the other threads end). Is the kernel allowed to do this? Or do I have some bug in my code?
P.S: I have only shown a part of my code. I have a similar System.out.println at the end of the function that the newly forked threads call.
Your loop is the problem.
for (int i = 1; i < number_threads; i++) {
    mt[i - 1].join();
}
Either you change the condition to <= or you make a less cryptic loop like this:
for (int i = 0; i < number_threads; i++) {
    mt[i].join();
}
Or a for each loop:
for (MakeThread thread : mt)
    thread.join();
Provided you correct your loop to call join on all threads, as shown below,
for (int i = 0; i < number_threads; i++) {
    mt[i].join();
}
there is no way the last print line can be invoked before all the joined threads finish running and rejoin the main thread. The scheduler cannot override this semantics. As pointed out by Thomas, the bug is in your code, which does not call join on the last thread (and that thread therefore need not complete before the last print is called).
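A small self-contained demonstration of that guarantee (the class name and counter are mine): once join has returned for every worker, all of their writes are visible to the main thread, so a statement placed after the joins always sees the finished state.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class JoinDemo {
    // Start n workers, join all of them, then read the shared counter.
    // After every join has returned, all n increments are guaranteed visible.
    public static int runWorkers(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(done::incrementAndGet);
            workers[i].start();
        }
        for (Thread w : workers) { // note: every thread, including the last
            w.join();
        }
        return done.get();
    }
}
```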
