I need your help with the Floyd-Warshall algorithm. My question is: is it possible to find the shortest path between two vertices? Below I will give an example of what I want.
1 -----50(w)----- 5
| \__ / |
| \5(w) /20(w)|
10(w) \__3__/ |25(w)
| \___ |
| 18(w)\ |
2--------8(w)------4
If my picture is hard to read, here is a screenshot of it: http://prntscr.com/b6n43d
This is the final distance matrix I got:
0 10 5 18 25
10 0 15 8 33
5 15 0 18 20
18 8 18 0 25
25 33 20 25 0
And this is the vertex (path-reconstruction) matrix I got:
0 1 2 1 2
0 0 0 3 3
0 0 0 3 4
1 1 2 0 4
2 3 2 3 0
And I need to find the vertex that is furthest from the first vertex, and then the shortest way to reach it. What I mean is: vertex 1 and vertex 5 share an edge of weight 50 (so they are the furthest from each other), and I need to find the shortest path between 1 and 5. Any ideas how I can do that?
The whole code was written from the pseudocode at https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm.
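A minimal sketch in Python of how this can be done, assuming the Wikipedia-style "next vertex" matrix for path reconstruction. Vertices are 0-based here, and the edge list is my reading of the drawing:

```python
INF = float('inf')

def floyd_warshall(w):
    """w is the n*n weight matrix (INF where there is no edge, 0 on the diagonal).
    Returns the distance matrix and the 'next vertex' matrix."""
    n = len(w)
    dist = [row[:] for row in w]
    nxt = [[j if w[i][j] != INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, u, v):
    """Walk the 'next vertex' matrix to reconstruct the shortest path u -> v."""
    if nxt[u][v] is None:
        return []
    p = [u]
    while u != v:
        u = nxt[u][v]
        p.append(u)
    return p

# my reading of the drawing (0-based): 1-2:10, 1-3:5, 1-5:50,
# 2-4:8, 3-4:18, 3-5:20, 4-5:25
w = [
    [0,   10,  5,   INF, 50 ],
    [10,  0,   INF, 8,   INF],
    [5,   INF, 0,   18,  20 ],
    [INF, 8,   18,  0,   25 ],
    [50,  INF, 20,  25,  0  ],
]
dist, nxt = floyd_warshall(w)
farthest = max(range(5), key=lambda v: dist[0][v])  # furthest vertex from vertex 1
```

With this example, `farthest` picks vertex 5 (index 4), and `path(nxt, 0, 4)` then gives the shortest route 1 → 3 → 5.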
I want to find an efficient algorithm that branches based on which subset of flags is set; a different condition is to be executed for each subset.
For example: I have 4 flags ABCD, and each subset has a separate condition. What is the most efficient algorithm to handle the conditions below? It can be done easily, but I want the most efficient approach. Is there already an algorithm that solves this kind of problem?
A B C D
0 0 0 0 Subset 1 Execute Condition 1
0 0 0 1 Subset 2 Execute Condition 2
0 0 1 0 Subset 3 Execute Condition 3
0 0 1 1 Subset 4 Execute Condition 4
0 1 0 0 Subset 5 Execute Condition 5
0 1 0 1 Subset 6 Execute Condition 6
0 1 1 0 Subset 7 Execute Condition 7
0 1 1 1 Subset 8 Execute Condition 8
1 0 0 0 Subset 9 Execute Condition 9
1 0 0 1 Subset 10 Execute Condition 10
1 0 1 0 Subset 11 Execute Condition 11
1 0 1 1 Subset 12 Execute Condition 12
1 1 0 0 Subset 13 Execute Condition 13
1 1 0 1 Subset 14 Execute Condition 14
1 1 1 0 Subset 15 Execute Condition 15
1 1 1 1 Subset 16 Execute Condition 16
Bitmasking can be used to generate all subsets. There are four values. Therefore, you have 2^4 subsets. All you have to do is iterate this mask 2^4 times and mask it with each of the four values. In each iteration, the result of masking is a subset of the given values. Here's an idea:
allSubsets = {}
for mask in range(1 << 4):          # 2^4 masks: 0..15
    subset = []
    for i in range(4):              # test all four bit positions
        if mask & (1 << i):
            subset.append(a[i])     # here array a holds the 4 values; can be just 1s and 0s as in your case
    allSubsets[mask] = subset       # keep each generated subset
# do your operation by iterating on each of the subsets in allSubsets
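For the "execute a condition per subset" part, a common trick is to treat the four flags as a 4-bit number and index a dispatch table with it, so selecting the condition is O(1). A sketch, where the handlers are made up for illustration:

```python
# hypothetical handlers, one per 4-bit mask value (0..15),
# standing in for the 16 conditions from the table
def make_condition(n):
    return lambda: "condition %d" % n

handlers = [make_condition(i + 1) for i in range(16)]

def dispatch(a, b, c, d):
    # pack the flags A..D into a 4-bit mask and jump straight to the handler
    mask = (a << 3) | (b << 2) | (c << 1) | d
    return handlers[mask]()
```

The table lookup replaces a 16-way if/else chain with a single index operation.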
I'm having a hard time thinking of an appropriate data structure to use to represent an adjacency matrix for an undirected graph.
I want to be able to take the nodes from these graphs and insert them into random positions in arrays, and then "score" the arrays based on how well they've managed to keep the adjacent nodes apart. i.e if node A and node B are connected in my graph, and the array places them next to each other, +1 would be added to the array's score, with the lowest scoring array being the best.
So what would be the best data structure to use to represent a collection of nodes, and the neighbouring nodes of each one in the collection?
I'm not sure I fully understand your question, but here is my take.
For an adjacency matrix, I think the best way to go is an array. You can access each position in O(1), and since it is an undirected graph, it should be easy to create. See the graph below:
0 --- 1------5---6
| \ \ | /
| \ \ | /
2 3----4---7
0 1 2 3 4 5 6 7
-----------------
0 | 0 1 1 1 0 0 0 0
1 | 1 0 0 0 1 1 0 0
2 | 1 0 0 0 0 0 0 0
3 | 1 0 0 0 1 0 0 0
4 | 0 1 0 1 0 0 0 1
5 | 0 1 0 0 0 0 1 1
6 | 0 0 0 0 0 1 0 1
7 | 0 0 0 0 1 1 1 0
------------------
You can implement your matrix like this and perform whatever operations you want on it. All that matters is that if an entry is non-zero, those two nodes are connected, and you can just pick the highest value for whatever you are doing.
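A sketch of the scoring idea from the question, under my interpretation: +1 whenever two array-neighbours are also graph-neighbours, so a lower score is better. It uses the matrix drawn above:

```python
# adjacency matrix of the 8-node example graph above
adj = [
    [0, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 1, 0],
]

def score(order, adj):
    # count pairs of array-neighbours that are also graph-neighbours;
    # lower is better, since we want adjacent nodes kept apart
    return sum(adj[order[k]][order[k + 1]] for k in range(len(order) - 1))
```

Shuffling `order` at random and keeping the lowest-scoring arrangement would then implement the search described in the question.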
I'm new to Hive and would like help writing a UDF for a weighting-factor calculation.
The calculation seems simple.
I have a table with columns KEY and VALUE, grouped by GROUP_ID. For each row of a group I want to calculate the weighting factor, a float between 0 and 1 that is the weight of that element within the group.
The sum of the weighting factors within a group must be 1.
In this example the value is a distance, so the weight is inversely proportional to the distance.
GROUP_ID | KEY | VALUE(DISTANCE)
====================================
1 10 4
1 11 3
1 12 2
2 13 1
2 14 5
3 .. ..
...
Math function: w_i = 1 / (x_i * sum_{k=1..N} (1/x_k))
GROUP_ID | KEY | VALUE | WEIGHTING_FACTOR
=======================================================
1 10 4 1/(4*(1/4+1/3+1/2)) = 0.23
1 11 3 1/(3*(1/4+1/3+1/2)) = 0.31
1 12 2 1/(2*(1/4+1/3+1/2)) = 0.46
2 13 1 1/(1*(1/1+1/5)) = 0.83
2          14      5        1/(5*(1/1+1/5)) = 0.17
3 .. ..
...
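The same computation sketched in plain Python, just to check the formula against the table: one pass per group for the sum of reciprocals, then one pass for the weight of each row.

```python
from collections import defaultdict

# rows of the example table: (group_id, key, distance)
rows = [
    (1, 10, 4), (1, 11, 3), (1, 12, 2),
    (2, 13, 1), (2, 14, 5),
]

# first pass: sum of reciprocals per group
inv_sum = defaultdict(float)
for g, _, x in rows:
    inv_sum[g] += 1.0 / x

# second pass: w_i = 1 / (x_i * sum_k 1/x_k); weights in each group sum to 1
weights = {(g, k): 1.0 / (x * inv_sum[g]) for g, k, x in rows}
```

This is exactly the shape that windowing functions give in SQL: the per-group sum is a window aggregate, and the weight is a per-row expression over it.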
Do you have a suggestion on whether to use a UDF, UDAF or UDTF?
Or must I use a "Transform"?
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Transform
Solved using Windowing and Analytics Functions
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.0.2/ds_Hive/language_manual/ptf-window.html
Source: https://stackoverflow.com/a/18919834/2568351
I've been looking into algorithms through a class on Coursera. In one of the first lectures, weighted quick-union is discussed. I get what it does, I've tested it out using their code, and I've written a small test for it.
Everything is clear except one point: when you union two objects, the smaller tree is attached to the bigger one. At the same time, the size of the larger tree is incremented by the size of the smaller tree in a separate array, which is used to determine which tree is bigger. Since that array is initialised with value 1 at every index (every node on its own is basically a tree of 1 object), why isn't the value at the absorbed index set to 0 instead of remaining 1?
In order to illustrate this:
// Quick Union Weighted
ID: 0 1 2 3 4 5 6 7 8 9
SZ: 1 1 1 1 1 1 1 1 1 1
quw.union(2, 4);
ID: 0 1 2 3 2 5 6 7 8 9
SZ: 1 1 2 1 1 1 1 1 1 1
quw.union(5, 4);
ID: 0 1 2 3 2 2 6 7 8 9
SZ: 1 1 3 1 1 1 1 1 1 1
quw.union(2, 7);
ID: 0 1 2 3 2 2 6 2 8 9
SZ: 1 1 4 1 1 1 1 1 1 1
// Whereas I would've expected to end up with this
// to point out that the index is empty.
SZ: 1 1 4 1 0 0 1 0 1 1
Why are the sizes of merged indices 1 instead of 0?
You can find the code to test it out here. Note that the implementation is the same as the example provided by the lecturers, which is why I'm assuming my code is correct.
I think this is because the node itself is also size 1, even when it does not have any children; it can, however, have children. I'm actually not familiar with weighted quick-union, but if it's a bit like the other union-find algorithms I've seen, you can for example do
quw.union(0, 1);
ID: 0 0 2 3 2 2 6 2 8 9
SZ: 1 1 4 1 1 1 1 1 1 1
quw.union(0, 2);
ID: 2 0 2 3 2 2 6 2 8 9
SZ: 2 1 6 1 1 1 1 1 1 1
So now 0 and 1 have merged, and the entire tree starting from 0 is merged with 2 again, which still makes the subtree starting at 0 size 2.
Like I said, I'm not sure if that's possible in weighted quick-union, but the reason for the '1' is still that each node is also size 1 on its own.
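For reference, here is a minimal Python sketch of weighted quick-union (modelled on the course's version, without path compression). It shows why zeroing is unnecessary: `sz[i]` is only ever read when `i` is a root, so once a node is absorbed into another tree its size entry is simply never looked at again.

```python
class WeightedQuickUnion:
    """A minimal sketch of weighted quick-union, no path compression."""

    def __init__(self, n):
        self.id = list(range(n))  # parent links; each node starts as its own root
        self.sz = [1] * n         # sz[r] is only meaningful while r is a root

    def root(self, i):
        while self.id[i] != i:
            i = self.id[i]
        return i

    def union(self, p, q):
        rp, rq = self.root(p), self.root(q)
        if rp == rq:
            return
        if self.sz[rp] < self.sz[rq]:   # attach smaller tree under larger
            self.id[rp] = rq
            self.sz[rq] += self.sz[rp]  # sz[rp] is stale now, but never read again
        else:
            self.id[rq] = rp
            self.sz[rp] += self.sz[rq]
```

Running the three unions from the question reproduces the traces above, with the stale 1s left in place.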
I am supposed to go, for example, from point 1B to 5D. How am I supposed to reach it? Can anyone give me a hint on how to get started? Thanks. Not asking for code, just the tips/hints.
A B C D E
1 5 1 4 4 1
2 3 4 3 3 4
3 4 3 1 1 3
4 4 3 4 2 5
5 3 4 1 1 3
Start by defining the data structures, then apply the algorithm to them.
You may refer to the pseudocode on the wiki: Wiki A Star Search Algorithm
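A sketch of how this could look, under two assumptions of mine: each number is the cost of stepping onto that cell, and movement is 4-directional. Rows 1-5 and columns A-E map to 0-based indices, so 1B is (0, 1) and 5D is (4, 3). Manhattan distance is an admissible A* heuristic here because every step costs at least 1.

```python
import heapq

# the example grid; rows 1..5 -> indices 0..4, columns A..E -> 0..4
grid = [
    [5, 1, 4, 4, 1],
    [3, 4, 3, 3, 4],
    [4, 3, 1, 1, 3],
    [4, 3, 4, 2, 5],
    [3, 4, 1, 1, 3],
]

def astar(grid, start, goal):
    """A* over 4-connected cells; returns (total cost, path) or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # admissible heuristic
    frontier = [(h(*start), 0, start, [start])]
    settled = {}
    while frontier:
        _, cost, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost, path
        if settled.get((r, c), float('inf')) <= cost:
            continue  # already expanded this cell more cheaply
        settled[(r, c)] = cost
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = cost + grid[nr][nc]  # pay the cost of the cell entered
                heapq.heappush(frontier, (step + h(nr, nc), step,
                                          (nr, nc), path + [(nr, nc)]))
    return None
```

Calling `astar(grid, (0, 1), (4, 3))` returns the cheapest cost from 1B to 5D together with the cells visited along the way.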