I am supposed to go, for example, from point 1B to 5D. How am I supposed to reach it? Can anyone give me a hint on how to get started? Thanks. I'm not asking for code, just tips/hints. Thanks.
   A  B  C  D  E
1  5  1  4  4  1
2  3  4  3  3  4
3  4  3  1  1  3
4  4  3  4  2  5
5  3  4  1  1  3
Start by defining the data structures, then apply the algorithm to them.
You may refer to the pseudo-code on Wikipedia: A* Search Algorithm.
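To make that concrete, here is a minimal A* sketch in Java for the grid above. It assumes the numbers are the cost of entering each cell, movement is 4-way, and Manhattan distance is the heuristic (admissible here because every cell costs at least 1); the class and variable names are mine.

import java.util.*;

public class GridAStar {
    // The cost grid from the question; rows 1-5, columns A-E.
    static final int[][] COST = {
        {5, 1, 4, 4, 1},
        {3, 4, 3, 3, 4},
        {4, 3, 1, 1, 3},
        {4, 3, 4, 2, 5},
        {3, 4, 1, 1, 3},
    };

    public static void main(String[] args) {
        int n = COST.length;
        int sr = 0, sc = 1;        // start: 1B
        int gr = 4, gc = 3;        // goal: 5D

        int[][] g = new int[n][n]; // best known cost to reach each cell
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        g[sr][sc] = 0;

        // Frontier ordered by f = g + h; entries are {row, col, f}.
        PriorityQueue<int[]> open =
            new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[2]));
        open.add(new int[]{sr, sc, Math.abs(sr - gr) + Math.abs(sc - gc)});

        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int r = cur[0], c = cur[1];
            if (r == gr && c == gc) break; // goal reached at cheapest cost
            for (int[] d : dirs) {
                int nr = r + d[0], nc = c + d[1];
                if (nr < 0 || nc < 0 || nr >= n || nc >= n) continue;
                int ng = g[r][c] + COST[nr][nc];
                if (ng < g[nr][nc]) { // found a cheaper way into (nr, nc)
                    g[nr][nc] = ng;
                    open.add(new int[]{nr, nc,
                        ng + Math.abs(nr - gr) + Math.abs(nc - gc)});
                }
            }
        }
        System.out.println("Cheapest path cost 1B -> 5D: " + g[gr][gc]);
    }
}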
How to write a Java program to print the pattern below?
1
2 2
3 3
2 2
1
I think this is what you are looking for:
System.out.println("\t 1\n\t 2 2\n\t3 3\n\t 2 2\n\t 1 ");
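If you want it to work for any height rather than a hard-coded string, a small loop produces the same output; n and the class name are just illustrative:

public class Pattern {
    public static void main(String[] args) {
        int n = 3; // the peak value of the pattern
        for (int i = 1; i <= 2 * n - 1; i++) {
            int v = i <= n ? i : 2 * n - i;      // 1, 2, 3, 2, 1
            String pad = v < n ? "\t " : "\t";   // mimics the spacing above
            System.out.println(v == 1 ? pad + v : pad + v + " " + v);
        }
    }
}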
Can someone explain to me how Order Crossover works? I will give this example, and I want to understand it in a generic way so I can implement it afterwards.
Parent 1 = 1 2 3 | 4 5 6 7 | 8 9
Parent 2 = 4 5 2 | 1 8 7 6 | 9 3
and the solution is these two children:
Child 1 = 2 1 8 | 4 5 6 7 | 9 3
Child 2 = 3 4 5 | 1 8 7 6 | 9 2
I understand some parts but not others.
Thanks
Basically, a swath of consecutive alleles from Parent 1 drops down, and the remaining values are placed in the child in the order in which they appear in Parent 2. (A Java sketch of the full procedure follows the steps below.)
Step 1: Select a random swath of consecutive alleles from Parent 1 (shown between the | marks in the example above).
Step 2: Drop the swath down to Child 1 and mark out these alleles in Parent 2.
Step 3: Starting just to the right of the swath, read the alleles of Parent 2 in order, wrapping around to the front, and insert each one that is not marked out into Child 1, also starting just right of the swath and wrapping. In the example, 9 is the first allele after the swath in Parent 2, so it is inserted into Child 1 first; 3 follows; 4, 5, 6 and 7 are skipped because they are marked out; and 2, 1 and 8 then fill the first three spots of Child 1.
Step 4: If you desire a second child from the two parents, flip Parent 1 and Parent 2 and go back to Step 1.
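Here is a minimal Java sketch of those steps as I understand them; the method name and the fixed swath indices are mine, and a real GA would choose the swath randomly. Run on the parents above with the swath at indices 3..6, it reproduces both children from the question.

import java.util.*;

public class OrderCrossover {
    // Produces one child via OX: copy parent1[start..end] (inclusive),
    // then fill the rest from parent2, reading from just right of the
    // swath and wrapping, skipping alleles already in the swath.
    static int[] crossover(int[] parent1, int[] parent2, int start, int end) {
        int n = parent1.length;
        int[] child = new int[n];

        // Step 2: drop the swath from parent 1 into the child.
        Set<Integer> inSwath = new HashSet<>();
        for (int i = start; i <= end; i++) {
            child[i] = parent1[i];
            inSwath.add(parent1[i]);
        }

        // Step 3: take the remaining alleles from parent 2 in order,
        // starting just right of the swath and wrapping around.
        int childPos = (end + 1) % n;
        for (int k = 0; k < n; k++) {
            int allele = parent2[(end + 1 + k) % n];
            if (!inSwath.contains(allele)) {
                child[childPos] = allele;
                childPos = (childPos + 1) % n;
            }
        }
        return child;
    }

    public static void main(String[] args) {
        int[] p1 = {1, 2, 3, 4, 5, 6, 7, 8, 9};
        int[] p2 = {4, 5, 2, 1, 8, 7, 6, 9, 3};
        System.out.println(Arrays.toString(crossover(p1, p2, 3, 6)));
        // Step 4: flip the parents for the second child.
        System.out.println(Arrays.toString(crossover(p2, p1, 3, 6)));
    }
}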
One such solution for Ordered Crossover is detailed in this post.
This answer provides some sample Java code with documentation detailing the process used for the Ordered Crossover.
Additionally, this paper from Moscato provides a breakdown of the OX Process.
Hope this helps!
I've been looking into algorithms using a class on Coursera. In one of the first lectures, Quick Union Weighted is discussed. I get what it does, I've tested it out using their code, and I've written a small test for it.
Everything is clear but one point: when you union two objects, the object with the smaller tree is added to the bigger one. At the same time, the size of the larger tree is incremented by the size of the smaller tree in a separate array, which is used to determine which tree is bigger. Since that array is initialized with value 1 at every index (every node on its own is basically a tree of one object), why isn't the value at the absorbed index set to 0 instead of remaining 1?
In order to illustrate this:
// Quick Union Weighted
ID: 0 1 2 3 4 5 6 7 8 9
SZ: 1 1 1 1 1 1 1 1 1 1
quw.union(2, 4);
ID: 0 1 2 3 2 5 6 7 8 9
SZ: 1 1 2 1 1 1 1 1 1 1
quw.union(5, 4);
ID: 0 1 2 3 2 2 6 7 8 9
SZ: 1 1 3 1 1 1 1 1 1 1
quw.union(2, 7);
ID: 0 1 2 3 2 2 6 2 8 9
SZ: 1 1 4 1 1 1 1 1 1 1
// Whereas I would've expected to end up with this
// to point out that the index is empty.
SZ: 1 1 4 1 0 0 1 0 1 1
Why are the sizes of merged indices 1 instead of 0?
You can find the code to test it out here. Note that the implementation is the same as the example provided by the lecturers, which is why I'm assuming my code is correct.
I think this is because the node itself is also size 1, even when it does not have any children (it can, however, have children). I'm actually not familiar with Quick-Union Weighted, but if it's a bit like the other union-find algorithms I've seen, you can for example do:
quw.union(0, 1);
ID: 0 0 2 3 2 2 6 2 8 9
SZ: 2 1 4 1 1 1 1 1 1 1
quw.union(0, 2);
ID: 2 2 2 3 2 2 6 2 8 9
SZ: 2 1 6 1 1 1 1 1 1 1
So now 0 and 1 have merged, and the entire tree starting from 0 is then merged with 2, still making the subtree starting at 0 size 2.
Like I said, I'm not sure if that's possible in Quick-Union Weighted, but the reason for the '1' is still that a node on its own is also size 1.
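For reference, here is a minimal weighted quick-union sketch (along the lines of the course code, though tie-breaking details may differ). Note that sz[i] is only ever read when i is a root, so the stale 1 (or any stale value) at an absorbed index is simply never looked at again:

public class WeightedQuickUnion {
    private final int[] id; // id[i] = parent of i; a root has id[i] == i
    private final int[] sz; // sz[i] = size of the tree rooted at i

    public WeightedQuickUnion(int n) {
        id = new int[n];
        sz = new int[n];
        for (int i = 0; i < n; i++) {
            id[i] = i; // every node starts as its own root...
            sz[i] = 1; // ...of a tree containing just itself
        }
    }

    private int root(int i) {
        while (i != id[i]) i = id[i]; // follow parents up to the root
        return i;
    }

    public void union(int p, int q) {
        int rp = root(p), rq = root(q);
        if (rp == rq) return;
        // Attach the smaller tree under the larger; only the surviving
        // root's size entry is updated. The absorbed root keeps its old
        // sz value because it will never be read as a root again.
        if (sz[rp] < sz[rq]) { id[rp] = rq; sz[rq] += sz[rp]; }
        else                 { id[rq] = rp; sz[rp] += sz[rq]; }
    }
}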
Because I can't divide the list into segments. As in my example above, if 5 threads are set, then the first segment would take the first 2 objects and the second the 3rd and 4th, so neither finds dups; but there are dups if we merge them, namely the 2nd and 3rd items.
There could be a more complex strategy that takes from the first threads... ah, never mind, too hard to explain.
And of course, the selection itself is in my plans.
Thanks.
EDIT:
...in a chunk, and then continue analyzing that chunk till the end. ;/
I think the process of dividing up the items to be de-duped is going to have to look at the end of each section and move forward to encompass any dups past it. For example, if you had:
1 1 2 . 2 4 4 . 5 5 6
And you were dividing into blocks of 3, then the dividing process would take 1 1 2 but see that there was another 2, so it would generate 1 1 2 2 as the first block. It would move forward 3 again and generate 4 4 5, but see that there were dups ahead and generate 4 4 5 5. The 3rd thread would just have 6. The list would become:
1 1 2 2 . 4 4 5 5 . 6
The sizes of the blocks are going to be inconsistent, but as the number of items in the entire list gets large, these small changes become insignificant. The last thread may have very little to do, or be short-changed altogether, but again, as the number of elements gets large, this should not impact the performance of the algorithm.
I think this method would be better than somehow having one thread handle the overlapping blocks. With that method, if you had a lot of dups, one thread could end up handling a lot more than 2 contiguous blocks if you were unlucky in the positioning of the dups. For example:
1 1 2 . 2 4 5 . 5 5 6
One thread would have to handle that entire list because of the 2s and the 5s.
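Here is a small sketch of that dividing process; it assumes the items are sorted so duplicates sit next to each other (as in the examples), and all names are mine:

import java.util.*;

public class DupChunker {
    // Splits a sorted array into blocks of roughly blockSize items,
    // pushing each block's end forward so a run of equal values is
    // never split across two blocks. Ranges are {start, endExclusive}.
    static List<int[]> chunkRanges(int[] sorted, int blockSize) {
        List<int[]> ranges = new ArrayList<>();
        int start = 0;
        while (start < sorted.length) {
            int end = Math.min(start + blockSize, sorted.length);
            // Extend past the nominal boundary while the value repeats.
            while (end < sorted.length && sorted[end] == sorted[end - 1]) end++;
            ranges.add(new int[]{start, end});
            start = end;
        }
        return ranges;
    }

    public static void main(String[] args) {
        int[] items = {1, 1, 2, 2, 4, 4, 5, 5, 6};
        for (int[] r : chunkRanges(items, 3))
            System.out.println(Arrays.toString(
                Arrays.copyOfRange(items, r[0], r[1])));
        // Prints [1, 1, 2, 2], [4, 4, 5, 5] and [6], as in the example.
    }
}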
I would use a chunk-based division, a task queue (e.g. ExecutorService) and private hash tables to collect duplicates.
Each thread in the pool will take chunks on demand from the queue and, for each item, add 1 to the value stored under that item's key in its private hash table. At the end, each thread merges its table into the global hash table.
Finally, just scan the global hash table and see which keys have a value greater than 1.
For example with a chunk size of 3 and the items:
1 2 2 2 3 4 5 5 6 6
Assume we have 2 threads in the pool. Thread 1 will take 1 2 2 and thread 2 will take 2 3 4. The private hash tables will look like this:
1 1
2 2
3 0
4 0
5 0
6 0
and
1 0
2 1
3 1
4 1
5 0
6 0
Next, thread 1 will process 5 5 6 and thread 2 will process 6:
1 1
2 2
3 0
4 0
5 2
6 1
and
1 0
2 1
3 1
4 1
5 0
6 1
At the end, the duplicates are 2, 5 and 6:
1 1
2 3
3 1
4 1
5 2
6 2
This may take up some extra space due to each thread's private table, but it allows the threads to operate in parallel until the merge phase at the end.
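A compact sketch of this pipeline is below. For brevity it uses one private HashMap per submitted chunk instead of one per thread, which is the same idea with a few more small merges; all names are illustrative:

import java.util.*;
import java.util.concurrent.*;

public class ParallelDupCounter {
    public static void main(String[] args) throws Exception {
        int[] items = {1, 2, 2, 2, 3, 4, 5, 5, 6, 6};
        int chunkSize = 3;

        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<Map<Integer, Integer>>> futures = new ArrayList<>();
        for (int start = 0; start < items.length; start += chunkSize) {
            int from = start, to = Math.min(start + chunkSize, items.length);
            // Each task counts its chunk into a private table.
            futures.add(pool.submit(() -> {
                Map<Integer, Integer> local = new HashMap<>();
                for (int i = from; i < to; i++)
                    local.merge(items[i], 1, Integer::sum);
                return local;
            }));
        }

        // Merge phase: combine the private tables into a global one.
        Map<Integer, Integer> global = new HashMap<>();
        for (Future<Map<Integer, Integer>> f : futures)
            f.get().forEach((k, v) -> global.merge(k, v, Integer::sum));
        pool.shutdown();

        // Keys with a count greater than 1 are the duplicates: 2, 5, 6.
        global.forEach((k, v) -> {
            if (v > 1) System.out.println("duplicate: " + k);
        });
    }
}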
How can I group rows in an HtmlDataTable?
I am using JSF.
A short example:
TransNum  TransAmount  InvoiceNum  InvoiceAmount
1         50           1           10
1         50           2           15
1         50           3           30
2         10           1           6
2         10           2           5
If I select the grouping column as "InvoiceNum", then the table should look like this (i.e. grouping is done on InvoiceNum):
TransNum  TransAmount  InvoiceNum  InvoiceAmount
1
1         50           1           10
2         10           1           6
2
1         50           2           15
2         10           2           5
3
1         50           3           30
Similarly, grouping can be done based on multiple columns' values too.
Thanks in advance.
JSF h:dataTable has no built-in grouping.
Either you find a component that fits your needs in one of the component libraries, such as PrimeFaces, RichFaces or ICEfaces,
or you implement it yourself in the backing bean by sorting and grouping the list the way you want.
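For the backing-bean route, here is a minimal sketch of the grouping step itself; the Row record and every name in it are hypothetical, and it uses a Java 16+ record just for brevity:

import java.util.*;

public class InvoiceGrouper {
    // Hypothetical row type matching the table in the question.
    record Row(int transNum, int transAmount, int invoiceNum, int invoiceAmount) {}

    // Groups rows by invoice number; TreeMap keeps the groups sorted.
    static Map<Integer, List<Row>> groupByInvoice(List<Row> rows) {
        Map<Integer, List<Row>> groups = new TreeMap<>();
        for (Row r : rows)
            groups.computeIfAbsent(r.invoiceNum(), k -> new ArrayList<>()).add(r);
        return groups;
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row(1, 50, 1, 10), new Row(1, 50, 2, 15), new Row(1, 50, 3, 30),
            new Row(2, 10, 1, 6),  new Row(2, 10, 2, 5));
        groupByInvoice(rows).forEach((inv, group) -> {
            System.out.println(inv);            // the group header row
            group.forEach(System.out::println); // the rows in that group
        });
    }
}

In the view you would then render one nested table (or one set of rows) per map entry.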