I have defined a filter for the termination condition of k-means.
If I run my app, it always computes only one iteration.
I think the problem is here:
DataSet<GeoTimeDataCenter> finalCentroids = loop.closeWith(newCentroids, newCentroids.join(loop).where("*").equalTo("*").filter(new MyFilter()));
or maybe the filter function:
public static final class MyFilter implements FilterFunction<Tuple2<GeoTimeDataCenter, GeoTimeDataCenter>> {
private static final long serialVersionUID = 5868635346889117617L;
public boolean filter(Tuple2<GeoTimeDataCenter, GeoTimeDataCenter> tuple) throws Exception {
if(tuple.f0.equals(tuple.f1)) {
return true;
}
else {
return false;
}
}
}
Best regards,
Paul
My full code is here:
public void run() {
//load properties
Properties pro = new Properties();
FileSystem fs = null;
try {
pro.load(FlinkMain.class.getResourceAsStream("/config.properties"));
fs = FileSystem.get(new URI(pro.getProperty("hdfs.namenode")),new org.apache.hadoop.conf.Configuration());
} catch (Exception e) {
e.printStackTrace();
}
int maxIteration = Integer.parseInt(pro.getProperty("maxiterations"));
String outputPath = fs.getHomeDirectory()+pro.getProperty("flink.output");
// set up execution environment
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// get input points
DataSet<GeoTimeDataTupel> points = getPointDataSet(env);
DataSet<GeoTimeDataCenter> centroids = null;
try {
centroids = getCentroidDataSet(env);
} catch (Exception e1) {
e1.printStackTrace();
}
// set number of bulk iterations for KMeans algorithm
IterativeDataSet<GeoTimeDataCenter> loop = centroids.iterate(maxIteration);
DataSet<GeoTimeDataCenter> newCentroids = points
// compute closest centroid for each point
.map(new SelectNearestCenter(this.getBenchmarkCounter())).withBroadcastSet(loop, "centroids")
// count and sum point coordinates for each centroid
.groupBy(0).reduceGroup(new CentroidAccumulator())
// compute new centroids from point counts and coordinate sums
.map(new CentroidAverager(this.getBenchmarkCounter()));
// feed new centroids back into next iteration with termination condition
DataSet<GeoTimeDataCenter> finalCentroids = loop.closeWith(newCentroids, newCentroids.join(loop).where("*").equalTo("*").filter(new MyFilter()));
DataSet<Tuple2<Integer, GeoTimeDataTupel>> clusteredPoints = points
// assign points to final clusters
.map(new SelectNearestCenter(-1)).withBroadcastSet(finalCentroids, "centroids");
// emit result
clusteredPoints.writeAsCsv(outputPath+"/points", "\n", " ");
finalCentroids.writeAsText(outputPath+"/centers");//print();
// execute program
try {
env.execute("KMeans Flink");
} catch (Exception e) {
e.printStackTrace();
}
}
public static final class MyFilter implements FilterFunction<Tuple2<GeoTimeDataCenter, GeoTimeDataCenter>> {
private static final long serialVersionUID = 5868635346889117617L;
public boolean filter(Tuple2<GeoTimeDataCenter, GeoTimeDataCenter> tuple) throws Exception {
if(tuple.f0.equals(tuple.f1)) {
return true;
}
else {
return false;
}
}
}
I think the problem is the filter function (modulo the code you haven't posted). Flink's termination criterion works the following way: the termination criterion is met if the provided termination DataSet is empty; otherwise the next iteration is started, as long as the maximum number of iterations has not been exceeded.
Flink's filter function keeps only those elements for which the FilterFunction returns true. Thus, with your MyFilter implementation you only keep the centroids which are identical before and after the iteration. This implies that you'll obtain an empty DataSet if all centroids have changed, and thus the iteration terminates. This is clearly the inverse of the actual termination criterion. The termination criterion should be: continue with k-means as long as there is a centroid which has changed.
You can do this with a coGroup function where you emit elements only if there is no matching centroid from the preceding centroid DataSet. This is similar to a left outer join, just that you discard the non-null matches.
public static void main(String[] args) throws Exception {
// set up the execution environment
final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Element> oldDS = env.fromElements(new Element(1, "test"), new Element(2, "test"), new Element(3, "foobar"));
DataSet<Element> newDS = env.fromElements(new Element(1, "test"), new Element(3, "foobar"), new Element(4, "test"));
DataSet<Element> filtered = newDS.coGroup(oldDS).where("*").equalTo("*").with(new FilterCoGroup());
filtered.print();
}
public static class FilterCoGroup implements CoGroupFunction<Element, Element, Element> {
@Override
public void coGroup(
Iterable<Element> newElements,
Iterable<Element> oldElements,
Collector<Element> collector) throws Exception {
List<Element> persistedElements = new ArrayList<Element>();
for(Element element: oldElements) {
persistedElements.add(element);
}
for(Element newElement: newElements) {
boolean contained = false;
for(Element oldElement: persistedElements) {
if(newElement.equals(oldElement)){
contained = true;
}
}
if(!contained) {
collector.collect(newElement);
}
}
}
}
public static class Element implements Key {
private int id;
private String name;
public Element(int id, String name) {
this.id = id;
this.name = name;
}
public Element() {
this(-1, "");
}
@Override
public int hashCode() {
return 31 + 7 * name.hashCode() + 11 * id;
}
@Override
public boolean equals(Object obj) {
if(obj instanceof Element) {
Element element = (Element) obj;
return id == element.id && name.equals(element.name);
} else {
return false;
}
}
@Override
public int compareTo(Object o) {
if(o instanceof Element) {
Element element = (Element) o;
if(id == element.id) {
return name.compareTo(element.name);
} else {
return id - element.id;
}
} else {
throw new RuntimeException("Comparing incompatible types.");
}
}
@Override
public void write(DataOutputView dataOutputView) throws IOException {
dataOutputView.writeInt(id);
dataOutputView.writeUTF(name);
}
@Override
public void read(DataInputView dataInputView) throws IOException {
id = dataInputView.readInt();
name = dataInputView.readUTF();
}
@Override
public String toString() {
return "(" + id + "; " + name + ")";
}
}
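Applied to the k-means loop from your question, the termination DataSet is then produced by a coGroup instead of the join/filter pair. A sketch, assuming a FilterCoGroup variant typed over GeoTimeDataCenter analogous to the one above:

// Emit only the new centroids that have no identical counterpart in the
// previous iteration: the termination DataSet is empty exactly when no
// centroid has changed, which ends the iteration.
DataSet<GeoTimeDataCenter> changedCentroids = newCentroids
    .coGroup(loop)
    .where("*").equalTo("*")
    .with(new FilterCoGroup()); // hypothetical GeoTimeDataCenter version of the FilterCoGroup above
DataSet<GeoTimeDataCenter> finalCentroids = loop.closeWith(newCentroids, changedCentroids);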
I'm having these errors:
1) Cannot cast from HashBasedTable to Table.
This is the line in error:
this.restores = (Table<UUID, PotionEffectType, PotionEffect>)HashBasedTable.create();
2) The method put(UUID, PotionEffectType, PotionEffect) in the type Table is not applicable for the arguments (Object, Object, Object)
This is the line in error:
this.restores.put((Object)player.getUniqueId(), (Object)active.getType(), (Object)active);
public class MageRestorer implements Listener
{
private final Table<UUID, PotionEffectType, PotionEffect> restores;
public MageRestorer(final HCF plugin) {
this.restores = (Table<UUID, PotionEffectType, PotionEffect>)HashBasedTable.create();
plugin.getServer().getPluginManager().registerEvents((Listener)this, (Plugin)plugin);
}
@EventHandler(ignoreCancelled = true, priority = EventPriority.MONITOR)
public void onPvpClassUnequip(final PvpClassUnequipEvent event) {
this.restores.rowKeySet().remove(event.getPlayer().getUniqueId());
}
public void setRestoreEffect(final Player player, final PotionEffect effect) {
boolean shouldCancel = true;
final Collection<PotionEffect> activeList = (Collection<PotionEffect>)player.getActivePotionEffects();
for (final PotionEffect active : activeList) {
if (active.getType().equals((Object)effect.getType())) {
if (effect.getAmplifier() < active.getAmplifier()) {
return;
}
if (effect.getAmplifier() == active.getAmplifier() && effect.getDuration() < active.getDuration()) {
return;
}
this.restores.put((Object)player.getUniqueId(), (Object)active.getType(), (Object)active);
shouldCancel = false;
}
}
player.addPotionEffect(effect, true);
if (shouldCancel && effect.getDuration() > 100 && effect.getDuration() < MageClass.DEFAULT_MAX_DURATION) {
this.restores.remove((Object)player.getUniqueId(), (Object)effect.getType());
}
}
@EventHandler(ignoreCancelled = true, priority = EventPriority.MONITOR)
public void onPotionEffectExpire(final PotionEffectExpiresEvent event) {
final LivingEntity livingEntity = event.getEntity();
if (livingEntity instanceof Player) {
final Player player = (Player)livingEntity;
final PotionEffect previous = (PotionEffect)this.restores.remove((Object)player.getUniqueId(), (Object)event.getEffect().getType());
if (previous != null) {
new BukkitRunnable() {
public void run() {
player.addPotionEffect(previous, true);
}
}.runTask((Plugin)HCF.getPlugin());
}
}
}
}
I don't know how to fix this. This is a Bukkit class.
For the first error, can we see the stack trace? Also, why do you need to cast it? You can store it as a Table; a cast is unnecessary.
For the second error, remove the casts to Object. Simply leave them as their actual types. this.restores.put(player.getUniqueId(), active.getType(), active);
This should help you get through the issue.
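For the first error, the assignment without the cast would look like this (a minimal sketch of just that statement):

// No cast needed: HashBasedTable.create() infers the row/column/value types
// (UUID, PotionEffectType, PotionEffect) from the field it is assigned to.
this.restores = HashBasedTable.create();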
I'm working on implementing a tree that represents an electric circuit (without any cycles, as in this picture).
I use this implementation:
Binary_Oprtator
public abstract class Binary_Oprtator {
abstract int calc(int x, int y);
@Override
public String toString() {
return super.toString().substring(0, super.toString().indexOf('@'));
}
}
And gate
public class and extends Binary_Oprtator {
public int calc(int x, int y){
return (x&y);
}
}
Or gate
public class or extends Binary_Oprtator {
public int calc(int x, int y){
return (x|y);
}
}
gate_node
public class gate_node {
gate_node father_c;
gate_node right_c, left_c;
Binary_Oprtator op;
int value;
int right_v, left_v;
int array_index;
int arr_size;
boolean leaf;
boolean isRightChild;
public gate_node(Binary_Oprtator op, int array_index, int arr_size, boolean right) {
this.array_index = array_index;
this.arr_size = arr_size;
this.left_c = null;
this.right_c = null;
this.op = op;
right_v = left_v = -1;
this.leaf = false;
this.isRightChild = right;
}
void set_left_son(Binary_Oprtator op) {
this.left_c = new gate_node(op, array_index, arr_size / 2,false);
this.left_c.father_c = this;
this.left_c.leaf = false;
this.left_c.isRightChild = false;
}
void set_right_son(Binary_Oprtator op) {
this.right_c = new gate_node(op, array_index + arr_size / 2,
arr_size / 2,true);
this.right_c.father_c = this;
this.right_c.leaf = false;
this.right_c.isRightChild = true;
}
void set_left_son_as_leaf(Binary_Oprtator op) throws InterruptedException {
this.left_c = new gate_node(op, array_index, arr_size / 2,false);
this.left_c.father_c = this;
this.left_c.leaf = true;
this.left_c.left_v = main_class.arr[array_index];
this.left_c.right_v = main_class.arr[array_index + 1];
this.left_c.isRightChild = false;
main_class.queue.put(this.left_c);
}
void set_right_son_as_leaf(Binary_Oprtator op) throws InterruptedException {
this.right_c = new gate_node(op, array_index + arr_size / 2,
arr_size / 2,true);
this.right_c.father_c = this;
this.right_c.left_v = main_class.arr[array_index + 2];
this.right_c.right_v = main_class.arr[array_index + 3];
this.right_c.leaf = true;
this.right_c.isRightChild = true;
main_class.queue.put(this.right_c);
}
gate_node get_left() {
return this.left_c;
}
gate_node get_right() {
return this.right_c;
}
int compute() {
/*
* The following use of a static sInputCounter assumes that the
* static/global input array is ordered from left to right, irrespective
* of "depth".
*/
final int left, right;
if (this.left_c.leaf != true) {
left = this.left_c.compute();
} else {
left = this.left_c.op.calc(this.left_c.left_v, this.left_c.right_v);
}
if (this.right_c.leaf != true) {
right = this.right_c.compute();
} else {
right = this.right_c.op.calc(this.right_c.left_v,
this.right_c.right_v);
}
return op.calc(left, right);
}
int compute_with_print() {
/*
* The following use of a static sInputCounter assumes that the
* static/global input array is ordered from left to right, irrespective
* of "depth".
*/
final int left, right;
System.out.print(this.op + "(");
if (null != this.left_c) {
left = this.left_c.compute_with_print();
System.out.print(",");
} else {
left = main_class.arr[array_index];
System.out.print(left + ",");
}
if (null != this.right_c) {
right = this.right_c.compute_with_print();
System.out.print(")");
} else {
right = main_class.arr[array_index + 1];
System.out.print(right + ")");
}
return op.calc(left, right);
}
}
tree
public class tree {
gate_node head;
public tree(Binary_Oprtator op,int array_index,int arr_size) {
this.head = new gate_node(op,array_index,arr_size,true);
head.father_c=null;
}
void calc_head_value(){
int t_value = head.op.calc(head.left_v,head.right_v);
/* System.out.println(head.left_v+" "+head.op.toString()+" "+head.right_v+" = "+head.op.calc(head.left_v,head.right_v));
*/ head.value = t_value;
}
int compute() {
return head.compute();
}
int compute_with_print(){
return head.compute_with_print();
}
void set_left_son(Binary_Oprtator op){
head.left_c = new gate_node(op,head.array_index,head.arr_size/2,false);
head.left_c.father_c=head;
}
void set_right_son(Binary_Oprtator op){
head.right_c = new gate_node(op,head.array_index + head.arr_size/2,head.arr_size/2,true);
head.right_c.father_c=head;
}
void set_right_son_as_leaf(Binary_Oprtator op) throws InterruptedException {
head.right_c = new gate_node(op,head.array_index + head.arr_size/2,head.arr_size/2,true);
head.right_c.father_c = head;
head.right_c.left_v = main_class.arr[head.array_index + 2];
head.right_c.right_v = main_class.arr[head.array_index + 3];
head.right_c.leaf = true;
head.right_c.isRightChild = true;
main_class.queue.put(head.right_c);
}
void set_left_son_as_leaf(Binary_Oprtator op) throws InterruptedException {
head.left_c = new gate_node(op, head.array_index, head.arr_size / 2,false);
head.left_c.father_c = head;
head.left_c.leaf = true;
head.left_c.left_v = main_class.arr[head.array_index];
head.left_c.right_v = main_class.arr[head.array_index + 1];
head.left_c.isRightChild = false;
main_class.queue.put(head.left_c);
}
gate_node get_left(){
return head.left_c;
}
gate_node get_right(){
return head.right_c;
}
}
main_class
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
public class main_class {
public static int arr[] = { 1, 0, 0, 0, 1, 0, 0, 1 };
static final BlockingQueue<gate_node> queue = new ArrayBlockingQueue<>(6);
public static void main(String[] args) throws InterruptedException {
/*************************************
* compute using multi threads
************************************/
System.out.println("compute using Multi threading");
//start a consumer... wait for nodes to be inserted into the queue
Consumer consumer = new Consumer();
consumer.start();
tree t = new tree(new and(), 0, arr.length);
t.set_left_son(new or());
t.get_left().set_left_son_as_leaf(new and());
t.get_left().set_right_son_as_leaf(new or());
t.set_right_son(new and());
t.get_right().set_left_son_as_leaf(new or());
t.get_right().set_right_son_as_leaf(new or());
consumer.join();
t.calc_head_value(); //calc the head
System.out.println("The result is: " + t.head.value);
System.out.println();
/******************************
* compute with a single thread
********************************/
System.out.println("compute with a single thread");
int res = t.compute();
System.out.println("The result is: " + res);
/***********************************************
* printing an arithmetic expression of the tree
*************************************************/
System.out.println();
t.compute_with_print();
}
}
Consumer
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Consumer extends Thread {
Consumer() {
}
@Override
public void run() {
gate_node temp;
// the threads pool parts
ExecutorService executor = Executors.newFixedThreadPool(4);
try {
while ((temp = main_class.queue.take()).father_c != null) {
Runnable worker = new computingThread(temp);
executor.execute(worker);
}
} catch (InterruptedException e) {
e.printStackTrace();
}
executor.shutdown();
while (!executor.isTerminated()) {
}
}
}
computingThread
public class computingThread implements Runnable {
gate_node head;
int t_value;
public computingThread(gate_node head) {
this.head = head;
this.t_value = -1;
}
@Override
public void run() {
/* System.out.println("Start: "+this.hashCode()); */
t_value = head.op.calc(head.left_v,head.right_v);
/* System.out.println("thread: "+this.hashCode()+" is running ==> "+head.left_v+" "+head.op.toString()+" "+head.right_v+" = "+head.op.calc(head.left_v,head.right_v));
*/ head.value = this.t_value;
// update the father
if (head.isRightChild == true) { //update right fathers entire
head.father_c.right_v = t_value;
/*System.out.println("isRightChild");*/
} else { //update left fathers entire
head.father_c.left_v = t_value;
}
if ((head.father_c.right_v != -1) && (head.father_c.left_v != -1)){ //father is ready to compute-> to the queue!
try {
main_class.queue.put(head.father_c);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
/* try {
Thread.sleep(1);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}*/
/* System.out.println("thread: "+this.hashCode()+" is done!");
*/ return;
}
}
Here is what I'm trying to do:
I'm trying to do a parallel computation that uses multiple threads to compute the final value of the tree (each node gets two values, produces an outcome based on its operator, and passes it up the tree, until the root is calculated). What I did is set up a queue with a fixed number of slots.
I insert the leaves into the queue as the tree is built. Then I start a consumer that takes each leaf, calculates it, and passes the result to the corresponding entry of its father; when both entries of a father node have been filled in, the father also goes into the queue, and so on, until the root is calculated.
The only problem is that I cannot use a queue that is smaller than the number of leaves in the tree, and I don't know why.
Maybe it's because, while I'm building the tree, I'm inserting the leaves into the queue, and if the queue is smaller than the number of leaves, I end up calling main_class.queue.put(this.right_c); when the queue is already full. That causes the program to wait until space in the queue is freed, which never happens (because I haven't started the threads yet).
Does anyone have a solution to that?
And another question: is this considered a parallel computation? Meaning, if I set a queue of size 2, does that mean I will do all the computation with only two threads (because I want to set it to the number of CPU cores of a given computer)?
Thanks, and sorry for my bad spelling.
I think you modelled it in a more complicated way than needed. I would not base the modelling on a tree: an electric circuit is not always a tree. You could have more than one node acting as the circuit's output, right?
I would base my modelling on the gate node. I would have a Gate class with two inputs and one output. Inputs and outputs would be of type GateValue. The output would be calculated differently depending on whether the gate is an and gate or an or gate.
Then I would combine them building my circuit, like this:
gate1.Input1 = gate2.Output
gate1.Input2 = gate3.Output
etc.
Then, I would calculate the value of the last gate (the output of the whole circuit), which would cause the other gates to calculate their values in turn. This way, you don't need a "parallel" calculation mechanism. As long as you have no feedback loops in your circuit, this works fine.
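For illustration, a minimal sketch of this modelling, using plain int values instead of a dedicated GateValue type (the class and field names are just examples):

abstract class Gate {
    Gate input1, input2;   // upstream gates; both null when this node is a plain circuit input
    int constantValue;     // the value carried when this node is a circuit input

    // Asking a gate for its output pulls the input values recursively,
    // so evaluating the last gate evaluates the whole circuit.
    int output() {
        if (input1 == null && input2 == null) {
            return constantValue;
        }
        return combine(input1.output(), input2.output());
    }

    abstract int combine(int a, int b);
}

class AndGate extends Gate {
    @Override
    int combine(int a, int b) { return a & b; }
}

class OrGate extends Gate {
    @Override
    int combine(int a, int b) { return a | b; }
}

Wiring then follows the pattern above (gate1.input1 = gate2; gate1.input2 = gate3;), and gate1.output() evaluates the circuit.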
Hope I helped!
I have some places in an Excel file; each point has a lng and lat coordinate.
Now I am trying to create a static map for each point using the Google Static Maps API.
I have two components, a parser and a loader.
The Parser is used to read the Excel file, while the loader is used to load tiles.
And I make the loader run in a separate Thread.
public class Parser {
private static Parser instance;
private StaticMapLoader loader;
private Parser(StaticMapLoader loader) {
this.loader = loader;
}
public synchronized static Parser getInstance(StaticMapLoader loader) {
if (instance == null) {
instance = new Parser(loader);
}
return instance;
}
public List<Branch> parse(String path) {
List<Branch> result = new ArrayList<Branch>();
InputStream inp;
try {
inp = new FileInputStream(path);
Workbook wb = WorkbookFactory.create(inp);
Sheet sheet = wb.getSheetAt(0);
int rows = sheet.getLastRowNum();
for (Row r : sheet) {
loader.addTask(r.type,r.name,r.x,r.y);
}
} catch (InvalidFormatException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
// Branch bc = new Branch("Branch 1", null, null);
return result;
}
}
Loader:
public class StaticMapLoader extends Thread {
private final static Logger log = Logger.getLogger(StaticMapLoader.class);
private List<Task> tasks = new ArrayList<Task>();
private String tilePath;
private boolean running = false;
public StaticMapLoader(String saveDir) {
this.tilePath = saveDir;
}
@Override
public void run() {
while (running) {
log.debug("run " + tasks.size());
if (tasks.size() > 0) {
Task t = tasks.get(0);
if (t != null && t.status == Status.waiting) {
tasks.remove(0);
t.status = Status.running;
downLoad(t);
}
}
}
}
private void downLoad(Task t) {
log.debug(String.format("load data for " + t.toString()));
//down tiles and save
t.status=Status.success;
}
public void addTask(String type, String name, double x, double y) {
log.debug(String.format("add task of :%s,%s", type, name));
tasks.add(new Task(type,name,x,y));
}
public void startRunning() {
running = true;
this.start();
}
public void stopRunning() {
running = false;
this.interrupt();
}
class Task {
Status status = Status.waiting;
String type, name;
double x,y;
Task(String type, String name, double x,double y) {
this.type = type;
this.name = name;
this.x = x;
this.y = y;
}
}
enum Status {
waiting, running, fail, success
}
}
The process is rather simple: the StaticMapLoader has an ArrayList field, and whenever the Parser parses a record (a place), the record is thrown onto that list.
The loader then iterates over the list and downloads the data.
However, I've met a strange problem here:
@Override
public void run() {
while (running) {
log.debug("run " + tasks.size());
if (tasks.size() > 0) {
Task t = tasks.get(0);
if (t != null && t.status == Status.waiting) {
tasks.remove(0);
t.status = Status.running;
downLoad(t);
}
}
}
}
The above code runs, and I get logs like this:
run 1
add task of ..
run 2
add task of ...
However, if I comment out the log line, downLoad is never called, and I get:
run 1
run 2
......
It seems that this may be caused by the Thread. Am I missing anything?
BTW, the above code runs inside an HttpServlet context, and I start it like this:
@Override
public void init() throws ServletException {
super.init();
try {
URL fileUrl = getServletContext().getResource(getInitParameter("xlsxFile"));
URL tilePath = getServletContext().getResource(getInitParameter("tilePath"));
StaticMapLoader loader = new StaticMapLoader(tilePath.getPath());
loader.startRunning();
Parser.getInstance(loader).parse(fileUrl.getPath());
} catch (MalformedURLException e) {
}
}
I'm having a bit of a problem with writing a multithreaded algorithm in Java. Here's what I've got:
public class NNDFS implements NDFS {
//Array of all worker threads
private Thread[] threadArray;
//Concurrent HashMap containing a mapping of graph-states and
//algorithm specific state objects (NDFSState)
private ConcurrentHashMap<State, NDFSState> stateStore;
//Whether the algorithm is done and whether a cycle is found
private volatile boolean done;
private volatile boolean cycleFound;
/**
Constructor that creates the threads, each with their own graph
@param file The file from which we can create the graph
@param stateStore Mapping between graph-states and state belonging to our algorithm
@param nrWorkers Number of working threads we need
*/
public NNDFS(File file, Map<State, NDFSState> stateStore, int nrWorkers) throws FileNotFoundException {
int i;
this.stateStore = new ConcurrentHashMap<State, NDFSState>(stateStore);
threadArray = new Thread[nrWorkers];
for(i=0;i<nrWorkers;i++){
Graph graph = GraphFactory.createGraph(file);
threadArray[i] = new Thread(new NDFSRunnable(graph, i));
}
}
/**
Class which implements a single thread running the NDFS algorithm
*/
class NDFSRunnable implements Runnable{
private Graph graph;
//Necessary as Java apparently doesn't allow us to get this ID
private long threadId;
NDFSRunnable(Graph graph, long threadId){
this.graph = graph;
this.threadId = threadId;
}
public void run(){
try {
System.out.printf("Thread id = %d\n", threadId);
//Start by executing the blue DFS for the first graph
mcdfsBlue(graph.getInitialState(), threadId);
} catch (CycleFound e) {
//We must catch all exceptions that are thrown from within our thread
//If exceptions "exit" the thread, the thread will silently fail
//and we don't want that. We use two booleans instead, to indicate the status of the algorithm
cycleFound = true;
}
//Either the algorithm was aborted because of a CycleFound exception
//or we completed our Blue DFS without finding a cycle. We are done!
done = true;
}
public void mcdfsBlue(State s, long id) throws CycleFound {
if(done == true){
return;
}
//System.out.printf("Thread %d begint nu aan een dfsblue\n", id);
int i;
int counter = 0;
NDFSState state = stateStore.get(s);
if(state == null){
state = new NDFSState();
stateStore.put(s,state);
}
state.setColor(id, Color.CYAN);
List<State> children = graph.post(s);
i = state.incNextBlue();
while(counter != children.size()){
NDFSState child = stateStore.get(children.get(i%children.size()));
if(child == null){
child = new NDFSState();
stateStore.put(children.get(i % children.size()),child);
}
if(child.getLocalColor(id) == Color.WHITE && !child.isRed()){
mcdfsBlue(children.get(i % children.size()), id);
}
i++;
counter++;
}
if(s.isAccepting()){
state.incRedDFSCount();
mcdfsRed(s, id);
}
state.setColor(id, Color.BLUE);
}
public void mcdfsRed(State s, long id) throws CycleFound {
if(done == true){
return;
}
int i;
int counter = 0;
NDFSState state = stateStore.get(s);
state.setPink(id, true);
List<State> children = graph.post(s);
i = state.incNextRed();
while(counter != children.size()){
NDFSState child = stateStore.get(children.get(i%children.size()));
if(child == null){
child = new NDFSState();
stateStore.put(children.get(i%children.size()),child);
}
if(child.getLocalColor(id) == Color.CYAN){
throw new CycleFound();
}
if(!child.isPink(id) && !child.isRed()){
mcdfsRed(children.get(i%children.size()), id);
}
i++;
counter++;
}
if(s.isAccepting()){
state.decRedDFSCountAndWait();
}
state.setRed();
state.setPink(id, false);
}
}
public void init() {}
public void ndfs() throws Result {
int i;
done = false;
cycleFound = false;
for(i=0;i<threadArray.length;i++){
System.out.printf("Launch thread %d\n",i);
threadArray[i].run();
}
try {
for(i=0;i<threadArray.length;i++){
threadArray[i].join();
}
} catch (InterruptedException e) {
}
//We want to show the result by throwing an exception (weird, but yeah :-/)
if (cycleFound) {
throw new CycleFound();
} else {
throw new NoCycleFound();
}
}
}
However, when I run this, it seems like the first thread is called, completes, and then the next is called, and so on. What I obviously want is for all threads to start simultaneously! Otherwise the algorithm has very little use...
Thanks for your time/help!
Regards,
Linus
Use threadArray[i].start(); to launch your thread.
If you use threadArray[i].run();, all it does is call the method normally, in the same thread as the caller.
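Applied to your ndfs() method, the launch loop becomes (only the one line changes):

for(i=0;i<threadArray.length;i++){
    System.out.printf("Launch thread %d\n",i);
    threadArray[i].start(); // start() spawns a new thread; run() executed NDFSRunnable.run() in the calling thread
}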
So, I'm working on a plugin at work and I've run into a situation where I could use a ContentProposalAdapter to my benefit. Basically, a person will start typing someone's name, and then a list of names matching the current query will be returned in a type-ahead manner (à la Google). So, I created a class implementing IContentProposalProvider which, upon calling its getProposals() method, fires off a thread that handles getting the proposals in the background. The problem I am having is that I run into a race condition, where I try to get the proposals before the HTTP processing that retrieves them has actually finished.
Now, I'm trying not to run into an issue of thread hell, and that isn't getting me very far anyway. So, here is what I've done so far. Does anyone have any suggestions as to what I can do?
public class ProfilesProposalProvider implements IContentProposalProvider, PropertyChangeListener {
private IContentProposal[] props;
@Override
public IContentProposal[] getProposals(String arg0, int arg1) {
Display display = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getShell().getDisplay();
RunProfilesJobThread t1 = new RunProfilesJobThread(arg0, display);
t1.run();
return props;
}
@Override
public void propertyChange(PropertyChangeEvent arg0) {
if (arg0.getSource() instanceof RunProfilesJobThread){
RunProfilesJobThread thread = (RunProfilesJobThread)arg0.getSource();
props = thread.getProps();
}
}
}
public class RunProfilesJobThread extends Thread {
private ProfileProposal[] props;
private Display display;
private String query;
public RunProfilesJobThread(String query, Display display){
this.query = query;
}
@Override
public void run() {
if (!(query.equals(""))){
GetProfilesJob job = new GetProfilesJob("profiles", query);
job.schedule();
try {
job.join();
} catch (InterruptedException e) {
e.printStackTrace();
}
GetProfilesJobInfoThread thread = new GetProfilesJobInfoThread(job.getResults());
try {
thread.join();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
props = thread.getProps();
}
}
public ProfileProposal[] getProps(){
return props;
}
}
public class GetProfilesJobInfoThread extends Thread {
private ArrayList<String> names;
private ProfileProposal[] props;
public GetProfilesJobInfoThread(ArrayList<String> names){
this.names = names;
}
@Override
public void run() {
if (names != null){
props = new ProfileProposal[names.size()];
for (int i = 0; i < props.length - 1; i++){
ProfileProposal temp = new ProfileProposal(names.get(i), names.get(i));
props[i] = temp;
}
}
}
public ProfileProposal[] getProps(){
return props;
}
}
OK, I'll try it...
I haven't tried to run it, but it should work more or less. At least it's a good start. If you have any questions, feel free to ask.
public class ProfilesProposalProvider implements IContentProposalProvider {
private List<IContentProposal> proposals;
private String proposalQuery;
private Thread retrievalThread;
public void setProposals( List<IContentProposal> proposals, String query ) {
synchronized( this ) {
this.proposals = proposals;
this.proposalQuery = query;
}
}
public IContentProposal[] getProposals( String contents, int position ) {
// Synchronize incoming thread and retrieval thread, so that the proposal list
// is not replaced while we're processing it.
synchronized( this ) {
/**
* Get proposals if query is longer than one char, or if the current list of proposals does with a different
* prefix than the new query, and only if the current retrieval thread is finished.
*/
if ( retrievalThread == null && contents.length() > 1 && ( proposals == null || !contents.startsWith( proposalQuery ) ) ) {
getProposals( contents );
}
/**
* Select valid proposals from retrieved list.
*/
if ( proposals != null ) {
List<IContentProposal> validProposals = new ArrayList<IContentProposal>();
for ( IContentProposal prop : proposals ) {
if(prop == null) {
continue;
}
String propVal = prop.getContent();
if ( isProposalValid( propVal, contents )) {
validProposals.add( prop );
}
}
return validProposals.toArray( new IContentProposal[ validProposals.size() ] );
}
}
return new IContentProposal[0];
}
protected void getProposals( final String query ) {
retrievalThread = new Thread() {
@Override
public void run() {
GetProfilesJob job = new GetProfilesJob("profiles", query);
job.schedule();
try {
job.join();
ArrayList<String> names = job.getResults();
if (names != null){
List<IContentProposal> props = new ArrayList<IContentProposal>();
for ( String name : names ) {
props.add( new ProfileProposal( name, name ) );
}
setProposals( props, query );
}
} catch (InterruptedException e) {
e.printStackTrace();
}
retrievalThread = null;
}
};
retrievalThread.start();
}
protected boolean isProposalValid( String proposalValue, String contents ) {
return ( proposalValue.length() >= contents.length() && proposalValue.substring(0, contents.length()).equalsIgnoreCase(contents));
}
}
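In case it helps, this is roughly how such a provider is attached to a text field with JFace (a sketch; the parent composite and the null key stroke/auto-activation settings are placeholder choices, not from the question):

Text text = new Text(parent, SWT.BORDER);
ContentProposalAdapter adapter = new ContentProposalAdapter(
    text,                            // the control to decorate
    new TextContentAdapter(),        // reads and replaces the Text widget's contents
    new ProfilesProposalProvider(),  // the provider sketched above
    null,                            // no explicit key stroke...
    null);                           // ...and no auto-activation characters: activate on any keystroke
adapter.setProposalAcceptanceStyle(ContentProposalAdapter.PROPOSAL_REPLACE);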