I was solving a problem which states the following:
There are n buckets in a row. A gardener waters the buckets. Each day he waters the buckets between positions i and j (inclusive). He does this for t days, with different i and j each day.
Output the volume of water in each bucket, assuming all buckets start with zero volume and each watering increases the volume by 1.
Input: the first line contains t and n separated by spaces.
The next t lines contain i and j separated by spaces.
Output: a single line showing the volume in the n buckets separated by spaces.
Example:
Input:
2 2
1 1
1 2
Output:
2 1
Constraints:
0 <= t <= 10^4; 1 <= n <= 10^5
I tried this problem, but I used an O(n*t) algorithm: for each watering I increment every bucket from i to j. This exceeds the time limit. Is there a more efficient algorithm for this problem? A small hint would suffice.
P.S.: I have used C++ and Java as tags because the program can be written in either language.
Instead of remembering the amount of water in each bucket, remember the difference between each bucket and the previous one.
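If you later want to see the hint spelled out, a minimal Java sketch of that difference-array idea (1-based positions, as in the problem statement) could look like this; treat it as an illustration rather than a reference solution:

import java.util.Scanner;

public class Buckets {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int t = in.nextInt();
        int n = in.nextInt();
        // diff[k] holds the difference between bucket k and bucket k-1
        long[] diff = new long[n + 2];
        for (int q = 0; q < t; q++) {
            int i = in.nextInt();
            int j = in.nextInt();
            diff[i] += 1;      // volume rises by 1 starting at bucket i
            diff[j + 1] -= 1;  // and drops back after bucket j
        }
        StringBuilder out = new StringBuilder();
        long volume = 0;
        for (int k = 1; k <= n; k++) {
            volume += diff[k]; // prefix sum restores the actual volume
            out.append(volume).append(k < n ? " " : "\n");
        }
        System.out.print(out);
    }
}

This does O(1) work per watering and a single O(n) pass at the end, so the total is O(n + t).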
Have two lists of the intervals, one sorted by upper bound, one by lower bound,
then iterate over the bucket positions n, starting with a volume v of 0.
On each iteration over n:
check whether the next interval (by lower bound) starts at n;
if so, increase v by one and check the next interval;
do the same for the upper bounds, but decrease the volume;
print v;
repeat with the next n (a rough sketch follows below).
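Here is that sweep in Java, assuming the interval bounds have already been read into arrays lower[] and upper[] (a sketch only, not the answerer's code):

import java.util.Arrays;

class IntervalSweep {
    // lower[k] and upper[k] are the bounds of the k-th watering interval.
    static String volumes(int n, int[] lower, int[] upper) {
        Arrays.sort(lower);
        Arrays.sort(upper);
        StringBuilder out = new StringBuilder();
        int li = 0, ui = 0, v = 0;
        for (int pos = 1; pos <= n; pos++) {
            while (li < lower.length && lower[li] == pos) { v++; li++; } // intervals starting at pos
            out.append(v).append(pos < n ? " " : "");
            while (ui < upper.length && upper[ui] == pos) { v--; ui++; } // intervals ending at pos
        }
        return out.toString();
    }
}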
I think the key observation here is that you need to figure out a way to represent your (possibly) 10^5 buckets without actually allocating space for each and every one of them, and tracking them separately. You need to come up with a sparse representation to track your buckets, and the water inside.
The fact that your input comes in ranges gives you a good hint: you should probably make use of ranges in your sparse representation. You can do this by just tracking the buckets on the ends of each range.
I suggest you do this with a linked list. Each list node will contain 2 pieces of information:
a bucket number
the amount of water in that bucket
You assume that all buckets between the current bucket and the next bucket have the same volume of water.
Here's an example:
Input:
5 30
1 5
4 20
7 13
25 30
19 27
Here's what would happen on each step of the algorithm, with step 1 being the initial state, and each successive step being what you do after parsing a line.
1:0→NULL (all buckets are 0)
1:1→6:0→NULL (1-5 have 1, rest are 0)
1:1→4:2→6:1→21:0→NULL (1-3 have 1, 4-5 have 2, 6-20 have 1, rest have 0)
1:1→4:2→6:1→7:2→14:1→21:0→NULL
1:1→4:2→6:1→7:2→14:1→21:0→25:1→NULL
1:1→4:2→6:1→7:2→14:1→19:2→21:1→25:2→28:1→NULL
You should be able to infer from the above example that the complexity with this method is actually O(t^2) instead of O(n×t), so this should be much faster. As I said in my comment above, the bottleneck this way should actually be the parsing and output rather than the actual computation.
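A minimal sketch of the list node described above (field names are mine, not part of the answer):

// Each node marks the first bucket of a run and the volume shared by every
// bucket up to (but not including) the next node's bucket.
class RunNode {
    int bucket;     // index of the first bucket in this run
    int volume;     // water volume for every bucket in the run
    RunNode next;   // next run, or null if this run extends to the last bucket

    RunNode(int bucket, int volume, RunNode next) {
        this.bucket = bucket;
        this.volume = volume;
        this.next = next;
    }
}

Each watering adds at most two new nodes and bumps the volume of the nodes it covers, which is where the O(t^2) bound mentioned above comes from.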
Here's an algorithm with space and time complexity O(n)
I am using Java since I am used to it.
1) Create a hash set of n elements.
2) Each time a watering is made, increase the respective element's count.
3) After file parsing is complete, iterate over the hash set to calculate the result.
I implemented a custom HashMap class (in C++, but shouldn't matter). The implementation is simple -
A large array holds pointers to Items.
Each item contains the key - value pair, and a pointer to an Item (to form a linked list in case of key collision).
I also implemented an iterator for it.
My implementation of incrementing/decrementing the iterator is not very efficient. From the present position, the iterator scans the array of hashes for the next non-null entry. This is very inefficient, when the map is sparsely populated (which it would be for my use case).
Can anyone suggest a faster implementation, without affecting the complexity of other operations like insert and find? My primary use case is find, secondary is insert. Iteration is not even needed, I just want to know this for the sake of learning.
PS: Why did I implement a custom class? Because I need to find strings with some error tolerance, while the ready-made hash maps I have seen provide only exact matching.
EDIT: To clarify, I am talking about incrementing/decrementing an already obtained iterator. Yes, this is mostly done in order to traverse the whole map.
The errors in the strings (keys) in my case come from OCR errors, so I cannot use the error-handling techniques used to detect typing errors. The chance of the first character being wrong is almost the same as that of the last one.
Also, my keys are always strings, one word to be exact. The number of entries will be less than 5000, so a hash table size of 2^16 is enough for me. Even though it will still be sparsely populated, that's OK.
My hash function:
hash code size is 16 bits.
First 5 bits for the word length. ==> Max possible key length = 32. Reasonable, given that key is a single word.
Last 11 bits for sum of the char codes. I only store the English alphabet characters, and do not need case sensitivity. So 26 codes are enough, 0 to 25. So a key with 32 'z' = 25 * 32 = 800. Which is well within 2^11. I even have scope to add case sensitivity, if needed in future.
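Roughly, that packing could look like the following (the question's implementation is in C++; this Java sketch and its names are mine):

class ErrorTolerantHash {
    // 16-bit hash: top 5 bits = key length - 1, low 11 bits = sum of char codes.
    static int hash16(String key) {
        int sum = 0;
        for (char c : key.toLowerCase().toCharArray()) {
            sum += c - 'a';                         // 'a' -> 0 ... 'z' -> 25
        }
        int lengthBits = (key.length() - 1) & 0x1F; // supports lengths 1..32
        int sumBits = sum & 0x7FF;                  // stays well inside 11 bits
        return (lengthBits << 11) | sumBits;
    }
}

For "hello" this gives (4 << 11) | 47, matching the (00100)(00000101111) worked out below.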
Now when you compare a key containing an error with the correct one,
say "hell" with "hello"
1. Length of the keys is approx the same
2. sum of their chars will differ by the sum of the dropped/added/distorted chars.
In the hash code, since the first 5 bits are for length, the whole table has fixed sections for every possible key length. All sections are of the same size. The first section stores keys of length 1, the second keys of length 2, and so on.
Now 'hello' is stored in the 5th section, as its length is 5. When we try to find 'hello',
Hashcode of 'hello' = (length - 1) (sum of chars) = (4) (7 + 4 + 11 + 11 + 14) = (4) (47)
= (00100)(00000101111)
similarly, hashcode of 'helo' = (3)(36)
= (00011)(00000100100)
We jump to its bucket, and don't find it there.
So we try to check for ONE distorted character. This will not change the length, but it changes the sum of characters by between -25 and +25. So we search from 25 places backwards to 25 places forwards, i.e. we check the sum part from (36-25) to (36+25) in the same section. We won't find it.
Next we check for an additional-character error. That would mean the correct string contains only 3 characters, so we go to the third section. The sum of chars would have increased by at most 25 due to the additional char, and that has to be compensated, so we search the appropriate 25 places in the third section, from (36 - 0) to (36 - 25). Again we don't find it.
Now we consider the case of a missing character. The original string would then contain 5 chars, and the second part of the hashcode, the sum of chars in the original string, would be larger by 0 to 25. So we search the corresponding 25 buckets in the 5th section, (36 + 0) to (36 + 25). As 47 (the sum part of 'hello') lies in this range, we will find a match of the hashcode. And we also know that this match will be due to a missing character. So we compare the keys allowing for a tolerance of one missing character. And we get a match!
In reality, this has been implemented to allow more than one error in key.
It can also be optimized to use only 25 places for the first section (since it has only one character) and so on.
Also, checking 25 places seems overkill, as we already know the largest and smallest char of the key. But it gets complex in case of multiple errors.
You mention an 'error tolerance' for the string. Why not build the tolerance into the hash function itself and thus obviate the need for iteration?
You could go the way of Java's LinkedHashMap class. It adds efficient iteration to a hash map by also making it a doubly-linked list.
The entries are key-value pairs that have pointers to the previous and next entries. The hashmap itself has the large array as well as the head of the linked list.
Insertion/deletion are constant time for both data structures, searches are done via the hashmap, and iteration via the linked list.
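As a rough illustration of that entry layout (this is a sketch, not the actual LinkedHashMap source):

// Each entry sits in its hash bucket's collision chain and, independently,
// in a doubly-linked list that threads through all entries of the map.
class Entry<K, V> {
    final K key;
    V value;
    Entry<K, V> nextInBucket;  // collision chain inside one hash bucket
    Entry<K, V> before;        // previous entry in iteration order
    Entry<K, V> after;         // next entry in iteration order

    Entry(K key, V value) {
        this.key = key;
        this.value = value;
    }
}

Incrementing an iterator then just follows the after pointer, so iteration cost is proportional to the number of entries rather than to the size of the (sparsely populated) table.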
This is a more descriptive version of my previous question. The problem I am trying to solve relates to block-matching or image-within-image recognition.
I see an image, extract the [x,y] of every black pixel and create a set for that image, such as
{[8,0], [9,0], [11,0]}
The set is then augmented so that the first pixel in the set is at [0,0], but the relationship of the pixels is preserved. For example, I see {[8,0], [9,0]} and change the set to {[0,0], [1,0]}. The point of the extraction is that now if I see {[4,0], [5,0]}, I can recognize that basic relationship as two vertically adjacent pixels, my {[0,0], [1,0]}, since it is the same image but only in a different location.
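For reference, that translation step is just a per-coordinate subtraction; a minimal sketch in Java (names are illustrative, not from the question):

class PixelSets {
    // Shift a pixel set so its first pixel lands on [0,0] while the relative
    // positions of all pixels are preserved.
    static int[][] normalize(int[][] pixels) {
        int dx = pixels[0][0];
        int dy = pixels[0][1];
        int[][] shifted = new int[pixels.length][2];
        for (int i = 0; i < pixels.length; i++) {
            shifted[i][0] = pixels[i][0] - dx;
            shifted[i][1] = pixels[i][1] - dy;
        }
        return shifted;
    }
}

For example, normalize applied to {[4,0], [5,0]} yields {[0,0], [1,0]}, which can then be looked up among the 'seen images'.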
I have a list of these pixel sets, called "seen images". Each 'seen image' has a unique identifier, that allows it to be used as a nested component of other sets. For example:
{[0,0], [1,0]} has the identifier 'Z'
So if I see:
{[0,0], [1, 0], [5,6]}
I can identify and store it as:
{[z], [5, 6]}
The problem with this is that I have to generate every combination of [x,y]'s within the pixel set to check for a pattern match, and to build the best representation. Using the previous example, I have to check:
{[0,0], [1,0]},
{[0,0], [5,6]},
{[1,0], [5,6]} which is {[0,0], [4,6]}
{[0,0], [1,0], [5,6]}
And then if a match occurs, that subset gets replaced with its ID, merged with the remainder of the original set, and the new combination needs to be checked to see whether it is a 'seen image':
{[z],[5, 6]}
The point of all that is to match as many of the [x,y]'s as possible, using the fewest pre-existing pieces as components, to represent a newly seen image concisely. The greedy solution of taking the component that matches the largest subset is not the right one. Complexity arises in generating all of the combinations that I need to check, and then the combinations that spawn from finding a match, meaning that if some match and swap produces {[z], [1,0], [2,0]}, then I need to check (and if matched, repeat the process):
{[z], [1,0]}
{[z], [2,0]}
{[1,0], [2,0]} which is {[0,0], [1,0]}
{[z], [1,0], [2,0]}
Currently I generate the pixel combinations this way (here I use numbers to represent pixels, 1 == [x,y]). E.g., for (1, 2, 3, 4): make 3 lists:
1.) 2.) 3.)
12 23 34
13 24
14
Then, for each number, for each list starting at that number's index + 1, concatenate the number with each item and store the result on the appropriate list, e.g. (1+23) = 123, (1+24) = 124:
1.) 2.) 3.)
12 23 34
13 24
14
---- ---- ----
123 234
124
134
So those are all the combinations I need to check against my 'seen images'. This is a bad way to do this whole process. I have considered different variations / optimizations, including: once the second half of a list has been generated (below the ----), check every item on the list for matches, destroy the list to save space, and then continue generating combinations. Another option would be to generate a single combination, then check it for a match, and somehow index the combinations so you know which one to generate next.
Does anyone recognize this process in general? Can you help me optimize what I am doing for a set of roughly a million items? I also have not yet come up with a non-recursive or efficient way to handle the fact that each match generates additional combinations to check.
I'm trying to come up with an algorithm for the following problem :
I've got a collection of triplets of integers - let's call these integers A, B, C. The values stored inside can be big, so generally it's impossible to create an array of size A, B, or C. The goal is to minimize the size of the collection. To do this, we're provided a simple rule that allows us to merge the triplets:
For two triplets (A, B, C) and (A', B', C'), remove the original triplets and place the triplet (A | A', B, C) if B == B' and C == C', where | is bitwise OR. Similar rules hold for B and C also.
In other words, if two values of two triplets are equal, remove these two triplets, bitwise OR the third values and place the result to the collection.
The greedy approach is usually misleading in similar cases and so it is for this problem, but I can't find a simple counterexample that'd lead to a correct solution. For a list with 250 items where the correct solution is 14, the average size computed by greedy merging is about 30 (varies from 20 to 70). The sub-optimal overhead gets bigger as the list size increases.
I've also tried playing around with set bit counts, but I've found no meaningful results. Just the obvious fact that if the records are unique (which is safe to assume), the set bit count always increases.
Here's the stupid greedy implementation (it's just a conceptual thing, please disregard the code style):
import java.util.ArrayList;
import java.util.List;

public class Record {
    long A;
    long B;
    long C;

    public static void main(String[] args) {
        List<Record> data = new ArrayList<>();
        // Fill it with some data
        boolean found;
        do {
            found = false;
            // Keep scanning for a mergeable pair until none is left.
            outer:
            for (int i = 0; i < data.size(); ++i) {
                for (int j = i + 1; j < data.size(); ++j) {
                    try {
                        Record r = merge(data.get(i), data.get(j));
                        found = true;
                        data.remove(j);
                        data.remove(i);
                        data.add(r);
                        break outer;
                    } catch (IllegalArgumentException ignored) {
                    }
                }
            }
        } while (found);
    }

    public static Record merge(Record r1, Record r2) {
        if (r1.A == r2.A && r1.B == r2.B) {
            Record r = new Record();
            r.A = r1.A;
            r.B = r1.B;
            r.C = r1.C | r2.C;
            return r;
        }
        if (r1.A == r2.A && r1.C == r2.C) {
            Record r = new Record();
            r.A = r1.A;
            r.B = r1.B | r2.B;
            r.C = r1.C;
            return r;
        }
        if (r1.B == r2.B && r1.C == r2.C) {
            Record r = new Record();
            r.A = r1.A | r2.A;
            r.B = r1.B;
            r.C = r1.C;
            return r;
        }
        throw new IllegalArgumentException("Unable to merge these two records!");
    }
}
Do you have any idea how to solve this problem?
This is going to be a very long answer, sadly without an optimal solution (sorry). It is however a serious attempt at applying greedy problem solving to your problem, so it may be useful in principle. I didn't implement the last approach discussed, perhaps that approach can yield the optimal solution -- I can't guarantee that though.
Level 0: Not really greedy
By definition, a greedy algorithm has a heuristic for choosing the next step in a way that is locally optimal, i.e. optimal right now, hoping to reach the global optimum which may or may not be possible always.
Your algorithm chooses any mergable pair and merges them and then moves on. It does no evaluation of what this merge implies and whether there is a better local solution. Because of this I wouldn't call your approach greedy at all. It is just a solution, an approach. I will call it the blind algorithm just so that I can succinctly refer to it in my answer. I will also use a slightly modified version of your algorithm, which, instead of removing two triplets and appending the merged triplet, removes only the second triplet and replaces the first one with the merged one. The order of the resulting triplets is different and thus the final result possibly too. Let me run this modified algorithm over a representative data set, marking to-be-merged triplets with a *:
0: 3 2 3 3 2 3 3 2 3
1: 0 1 0* 0 1 2 0 1 2
2: 1 2 0 1 2 0* 1 2 1
3: 0 1 2*
4: 1 2 1 1 2 1*
5: 0 2 0 0 2 0 0 2 0
Result: 4
Level 1: Greedy
To have a greedy algorithm, you need to formulate the merging decision in a way that allows for comparison of options, when multiple are available. For me, the intuitive formulation of the merging decision was:
If I merge these two triplets, will the resulting set have the maximum possible number of mergable triplets, when compared to the result of merging any other two triplets from the current set?
I repeat, this is intuitive for me. I have no proof that this leads to the globally optimal solution, not even that it will lead to a better-or-equal solution than the blind algorithm -- but it fits the definition of greedy (and is very easy to implement). Let's try it on the above data set, showing between each step, the possible merges (by indicating the indices of triplet pairs) and resulting number of mergables for each possible merge:
mergables
0: 3 2 3 (1,3)->2
1: 0 1 0 (1,5)->1
2: 1 2 0 (2,4)->2
3: 0 1 2 (2,5)->2
4: 1 2 1
5: 0 2 0
Any choice except merging triplets 1 and 5 is fine; if we take the first pair, we get the same interim set as with the blind algorithm (I will this time collapse indices to remove gaps):
mergables
0: 3 2 3 (2,3)->0
1: 0 1 2 (2,4)->1
2: 1 2 0
3: 1 2 1
4: 0 2 0
This is where this algorithm does things differently: it chooses triplets 2 and 4, because there is still one merge possible after merging them, in contrast to the choice made by the blind algorithm:
mergables
0: 3 2 3 (2,3)->0 3 2 3
1: 0 1 2 0 1 2
2: 1 2 0 1 2 1
3: 1 2 1
Result: 3
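For what it's worth, here is roughly what this Level 1 heuristic could look like in code. It is only a sketch: the methods are meant to sit next to merge() in the Record class from the question, and mergedOrNull() is a small non-throwing wrapper I've made up for convenience.

// Non-throwing wrapper around the question's merge(); assumed helper.
static Record mergedOrNull(Record r1, Record r2) {
    try {
        return merge(r1, r2);
    } catch (IllegalArgumentException e) {
        return null;
    }
}

// Count how many pairs in the set could still be merged.
static int countMergables(List<Record> data) {
    int count = 0;
    for (int i = 0; i < data.size(); i++)
        for (int j = i + 1; j < data.size(); j++)
            if (mergedOrNull(data.get(i), data.get(j)) != null)
                count++;
    return count;
}

// One greedy step: of all possible merges, perform the one that leaves the
// largest number of mergable pairs behind. Returns false when no merge exists.
static boolean greedyStep(List<Record> data) {
    int bestI = -1, bestJ = -1, bestScore = -1;
    for (int i = 0; i < data.size(); i++) {
        for (int j = i + 1; j < data.size(); j++) {
            Record merged = mergedOrNull(data.get(i), data.get(j));
            if (merged == null)
                continue;
            List<Record> candidate = new ArrayList<>(data);
            candidate.set(i, merged);   // replace the first triplet ...
            candidate.remove(j);        // ... and drop the second one
            int score = countMergables(candidate);
            if (score > bestScore) {
                bestScore = score;
                bestI = i;
                bestJ = j;
            }
        }
    }
    if (bestI < 0)
        return false;
    data.set(bestI, mergedOrNull(data.get(bestI), data.get(bestJ)));
    data.remove(bestJ);
    return true;
}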
Level 2: Very greedy
Now, a second step from this intuitive heuristic is to look ahead one merge further and to ask the heuristic question then. Generalized, you would look ahead k merges further and apply the above heuristic, backtrack and decide the best option. This gets very verbose by now, so to exemplify, I will only perform one step of this new heuristic with lookahead 1:
mergables
0: 3 2 3 (1,3)->(2,3)->0
1: 0 1 0 (2,4)->1*
2: 1 2 0 (1,5)->(2,4)->0
3: 0 1 2 (2,4)->(1,3)->0
4: 1 2 1 (1,4)->0
5: 0 2 0 (2,5)->(1,3)->1*
(2,4)->1*
Merge sequences marked with an asterisk are the best options when this new heuristic is applied.
In case a verbal explanation is necessary:
Instead of checking how many merges are possible after each possible merge for the starting set, this time we check how many merges are possible after each possible merge for each resulting set after each possible merge for the starting set. And this is for lookahead 1. For lookahead n, you'd be seeing a very long sentence repeating the part after each possible merge for each resulting set n times.
Level 3: Let's cut the greed
If you look closely, the previous approach has a disastrous performance for even moderate inputs and lookaheads (*). For inputs beyond 20 triplets, anything beyond a 4-merge lookahead takes unreasonably long. The idea here is to cut out merge paths that seem to be worse than an existing solution. If we want to perform lookahead 10, and a specific merge path yields fewer mergables after three merges than another path does after 5 merges, we may just as well cut the current merge path and try another one. This should save a lot of time and allow large lookaheads, which would get us closer to the globally optimal solution, hopefully. I haven't implemented this one for testing though.
(*): Assuming a large reduction of input sets is possible, the number of merges is
proportional to input size, and
lookahead approximately indicates how much you permute those merges.
So you have (|input| choose lookahead) options, which is
the binomial coefficient that for lookahead ≪ |input| can be approximated as
O(|input|^lookahead) -- which is also (rightfully) read as: you are thoroughly screwed.
Putting it all together
I was intrigued enough by this problem that I sat and coded this down in Python. Sadly, I was able to prove that different lookaheads yield possibly different results, and that even the blind algorithm occasionally gets it better than lookahead 1 or 2. This is a direct proof that the solution is not optimal (at least for lookahead ≪ |input|). See the source code and helper scripts, as well as proof-triplets on github. Be warned that, apart from memoization of merge results, I made no attempt at optimizing the code CPU-cycle-wise.
I don't have the solution, but I have some ideas.
Representation
A helpful visual representation of the problem is to consider the triplets as points in 3D space. You have integers, so the records will be nodes of a grid. And two records are mergeable if and only if the nodes representing them lie on a common axis-parallel line, i.e. they agree in two of the three coordinates.
Counter-example
I found a (minimal) example where a greedy algorithm may fail. Consider the following records:
(1, 1, 1) \
(2, 1, 1) | (3, 1, 1) \
(1, 2, 1) |==> (3, 2, 1) |==> (3, 3, 1)
(2, 2, 1) | (2, 2, 2) / (2, 2, 2)
(2, 2, 2) /
But by choosing the wrong way, it might get stuck at three records:
(1, 1, 1) \
(2, 1, 1) | (3, 1, 1)
(1, 2, 1) |==> (1, 2, 1)
(2, 2, 1) | (2, 2, 3)
(2, 2, 2) /
Intuition
I feel that this problem is somehow similar to finding the maximum matching in a graph. Most of those algorithms find the optimal solution by beginning with an arbitrary, suboptimal solution, and making it 'more optimal' in each iteration by searching for augmenting paths, which have the following properties:
they are easy to find (polynomial time in the number of nodes),
an augmenting path and the current solution can be combined into a new solution which is strictly better than the current one,
if no augmenting path is found, the current solution is optimal.
I think that the optimal solution to your problem can be found in a similar spirit.
Based on your problem description:
I'm given a bunch of events in time that's usually got some pattern. The goal is to find the pattern. Each of the bits in the integer represents "the event occurred in this particular year/month/day". For example, the representation of March 7, 2014 would be [1 << (2014-1970), 1 << 3, 1 << 7]. The pattern described above allows us to compress these events so that we can say 'the event occurred every 1st in years 2000-2010'. – Danstahr Mar 7 at 10:56
I'd like to encourage you with the answers that MicSim has pointed at, specifically
Based on your problem description, you should check out this SO answers (if you didn't do it already): stackoverflow.com/a/4202095/44522 and stackoverflow.com/a/3251229/44522 – MicSim Mar 7 at 15:31
The description of your goal is much clearer than the approach you are using. I'm afraid you won't get anywhere with the idea of merging: the answer you get depends upon the order in which you manipulate your data, and you don't want that.
It seems you need to keep data and summarize. So, you might try counting those bits instead of merging them. Try clustering algorithms, sure, but more specifically try regression analysis. I should think you would get great results using a correlation analysis if you create some auxiliary data. For example, if you create data for "Monday", "Tuesday", "first Monday of the month", "first Tuesday of the month", ... "second Monday of the month", ... "even years", "every four years", "leap years", "years without leap days", ... "years ending in 3", ...
What you have right now is "1st day of the month", "2nd day of the month", ... "1st month of the year", "2nd month of the year", ... These don't sound like sophisticated enough descriptions to find the pattern.
If you feel it is necessary to continue the approach you have started, then you might treat it more as a search than a merge. What I mean is that you're going to need a criteria/measure for success. You can do the merge on the original data while requiring strictly that A==A'. Then repeat the merge on the original data while requiring B==B'. Likewise C==C'. Finally compare the results (using the criteria/measure). Do you see where this is going? Your idea of bit counting could be used as a measure.
Another point, you could do better at performance. Instead of double-looping through all your data and matching up pairs, I'd encourage you to do single passes through the data and sort it into bins. The HashMap is your friend. Make sure to implement both hashCode() and equals(). Using a Map you can sort data by a key (say where month and day both match) and then accumulate the years in the value. Oh, man, this could be a lot of coding.
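As a rough illustration of that binning idea, reusing the Record class from the question (the string key is purely for brevity; a small key class with hashCode() and equals() is the cleaner route suggested above):

// Bin triplets by (B, C) -- e.g. month mask and day mask -- and OR together
// the A (year) masks of everything that falls into the same bin.
// (Requires java.util.HashMap and java.util.Map imports.)
static Map<String, Long> binByMonthAndDay(List<Record> data) {
    Map<String, Long> bins = new HashMap<>();
    for (Record r : data) {
        String key = r.B + ":" + r.C;          // illustrative composite key
        bins.merge(key, r.A, (a, b) -> a | b); // accumulate the year masks
    }
    return bins;
}

This is a single pass over the data instead of the double loop, and the analogous pairings (A,B) and (A,C) work the same way.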
Finally, if the execution time isn't an issue and you don't need performance, then here's something to try. Your algorithm is dependent on the ordering of the data: you get different answers based on different sortings, and your criterion for success is the answer with the smallest size after merging. So, repeatedly loop through this algorithm: shuffle the original data, do your merge, save the result. Every time through the loop, keep the result which is the smallest so far. Whenever you get a result smaller than the previous minimum, print out the number of iterations and the size. This is a very simplistic algorithm, but given enough time it will find small solutions. Based on your data size, it might take too long ...
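A sketch of that shuffle-and-keep-the-best loop, again assuming the Record class from the question and a blindMerge() helper that stands for the question's do/while merge loop refactored to take a list and return the merged list:

// Random-restart search: shuffle, run the order-dependent merge, and keep the
// smallest result seen so far.
// (Requires java.util.ArrayList, List, Collections and Random imports.)
static List<Record> randomRestartMerge(List<Record> original, int iterations) {
    Random rnd = new Random();
    List<Record> best = null;
    for (int it = 0; it < iterations; it++) {
        List<Record> shuffled = new ArrayList<>(original);
        Collections.shuffle(shuffled, rnd);
        List<Record> merged = blindMerge(shuffled); // assumed helper, see lead-in
        if (best == null || merged.size() < best.size()) {
            best = merged;
            System.out.println("iteration " + it + ": new best size " + merged.size());
        }
    }
    return best;
}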
Kind Regards,
-JohnStosh
I have a problem with my Java program. Can anyone help me?
This is the problem:
Introduction
On the website "MukaBuku", every user has the following properties:
Name, the name of the user
Posts, the number of posts the user has published
Friends, the number of friends the user has
Using these properties, Mr. C wants to sort the users in descending order as follows:
1. Compare users by their number of Friends. If two users have the same number of Friends, continue the sorting process with point 2.
2. Compare users by their number of Posts. If two users have the same number of Posts, continue with point 3.
3. Finally, compare the two users by their names alphabetically.
Input Format
The input consists of N+1 lines. The first line contains the number of users N on the website "MukaBuku", where 1 ≤ N ≤ 1000. Each of the next N lines contains a user's name, number of posts, and number of friends.
Output Format
The output consists of N lines.
example
input :
6
Budi 13 14,
Andi 90 87,
Fawwaz 39 109,
Bayu 41 9,
Ani 77 77,
Ano 77 77,
output :
Total Sorting : 8
Fawwaz 109 39
Andi 87 90
Ano 77 77
Ani 77 77
Budi 14 13
Bayu 9 41
In addition to what the above post says, have your MukabuMember implement the Comparable<MukabuMember> interface. Then override public int compareTo(MukabuMember m) with the comparison process that you explained.
Once this is done, you can call Arrays.sort(yourlist), and it will be sorted your way.
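A minimal sketch of what that could look like, assuming fields name, posts and friends and the descending order shown in the example output:

import java.util.Arrays;

class MukabuMember implements Comparable<MukabuMember> {
    String name;
    int posts;
    int friends;

    MukabuMember(String name, int posts, int friends) {
        this.name = name;
        this.posts = posts;
        this.friends = friends;
    }

    @Override
    public int compareTo(MukabuMember m) {
        if (friends != m.friends) return Integer.compare(m.friends, friends); // more friends first
        if (posts != m.posts) return Integer.compare(m.posts, posts);         // then more posts first
        return m.name.compareTo(name);                                        // then name, descending
    }

    public static void main(String[] args) {
        MukabuMember[] users = {
            new MukabuMember("Budi", 13, 14),
            new MukabuMember("Ani", 77, 77),
            new MukabuMember("Ano", 77, 77),
        };
        Arrays.sort(users); // uses compareTo
        for (MukabuMember u : users) {
            System.out.println(u.name + " " + u.friends + " " + u.posts);
        }
    }
}

With the sample data this prints Ano before Ani and Budi last, matching the expected output order.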
Have a class (say MukabuMember) with all the properties that you've mentioned as fields. Now have a main class and create N objects of the MukabuMember class. Create a list with MukabuMember as the type parameter. Use an iterator to traverse the list for sorting. Instead of a single comparison statement, you need three comparisons (in a nested-if fashion) to sort according to your requirements. This should be enough.
Here are the steps you can follow:
Create a class called MukabukuMember that has the fields name, numberOfPosts and numberOfFriends.
Make this class implement the Comparable interface. This interface has a single method compareTo, wherein you can write all the comparison logic.
Make sure you also override the equals and hashCode methods in the MukabukuMember class. Take special care to ensure that the logic used to determine that two MukabukuMember objects are equal is the same in both compareTo and equals.
Once you are done with the above three steps, you can store these objects in a List and iterate over it. Within each iteration you can then compare two objects of the MukabukuMember class, say obj1 and obj2, as follows:
obj1.compareTo(obj2) - returns a positive value if obj1 > obj2, 0 if obj1 == obj2, and a negative value if obj1 < obj2
obj1.equals(obj2) - returns true if obj1 and obj2 are equal