Is this use of objects redundant and/or inefficient? - Java

I'm fairly inexperienced with using objects so I would really like some input.
I'm trying to remove comments from a list that have certain "unwanted words" in them, both the comments and the list of "unwanted words" are in ArrayList objects.
This is inside of a class called FormHelper, which contains the private member comments as an ArrayList. The auditList ArrayList is created locally in a member function called populateComments(), which then calls this function (below). populateComments() is called by the constructor, and so this function only gets called once, when an instance of FormHelper is created.
private void filterComments(ArrayList<String> auditList) {
    for (String badWord : auditList) {
        for (String thisComment : this.comments) {
            if (thisComment.contains(badWord)) {
                int index = this.comments.indexOf(thisComment);
                this.comments.remove(index);
            }
        }
    }
}
Something about the way I implemented this doesn't feel right, and I'm also concerned that I'm using ArrayList functions inefficiently. Is my suspicion correct?

It is not particularly efficient. However, finding a more efficient solution is not straightforward.
Let's step back to a simpler problem.
private void findBadWords(List<String> wordList, List<String> auditList) {
    for (String badWord : auditList) {
        for (String word : wordList) {
            if (word.equals(badWord)) {
                System.err.println("Found a bad word");
            }
        }
    }
}
Suppose that wordList contains N words and auditList contains M words. Some simple analysis will show that the inner loop is executed N x M times. The N factor is unavoidable, but the M factor is disturbing. It means that the more "bad" words you have to check for the longer it takes to check.
There is a better way to do this:
private void findBadWords(List<String> wordList, HashSet<String> auditWords) {
    for (String word : wordList) {
        if (auditWords.contains(word)) {
            System.err.println("Found a bad word");
        }
    }
}
Why is that better? It is better (faster) because HashSet::contains doesn't need to check all of the audit words one at a time. In fact, in the optimal case it will check none of them (!) and the average case just one or two of them. (I won't go into why, but if you want to understand read the Wikipedia page on hash tables.)
But your problem is more complicated. You are using String::contains to test if each comment contains each bad word. That is not a simple string equality test (as per my simplified version).
What to do?
Well, one potential solution is to split the comments into arrays of words (e.g. using String::split) and then use the HashSet lookup approach. However:
That changes the behavior of your code. (In a good way, actually: read up on the Scunthorpe problem!) You will now only match the audit words if they are actual words in the comment text.
Splitting a string into words is not cheap. If you use String::split it entails creating and using a Pattern object to find the word boundaries, creating substrings for each word and putting them into an array. You can probably do better, but it is always going to be a non-trivial calculation.
So the real question will be whether the optimization is going to pay off. That ultimately depends on the value of M, i.e. the number of bad words you are looking for. The larger M is, the more likely it is to pay off to split the comments into words and use a HashSet to test those words.
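For illustration, a minimal sketch of that approach, assuming the comments are whitespace-separated words (a real tokenizer would also strip punctuation):

private void filterComments(Set<String> auditWords) {
    // Remove a comment as soon as one of its words is in the audit set.
    this.comments.removeIf(comment -> {
        for (String word : comment.split("\\s+")) {
            if (auditWords.contains(word)) {
                return true;
            }
        }
        return false;
    });
}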
Another possible solution doesn't involve splitting the comments. You could take the list of audit words and assemble them into a single regex like this: \b(word-1|word-2|...|word-n)\b. Then use this regex with Matcher::find to search each comment string for bad words. The performance will depend on the optimizing capability of the regex engine in your Java platform. It has the potential to be faster than splitting.
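A sketch of that idea (Pattern::quote guards against audit words that contain regex metacharacters):

// Build \b(word-1|word-2|...|word-n)\b once, up front.
Pattern badWords = Pattern.compile(
        "\\b(" + auditList.stream()
                          .map(Pattern::quote)
                          .collect(Collectors.joining("|")) + ")\\b");

// Then test each comment with a single find() call.
boolean hasBadWord = badWords.matcher(comment).find();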
My advice would be to benchmark and profile your entire application before you start. Only optimize:
when the benchmarking says that the overall performance of the requests where this comment checking occurs is concerning. (If it is OK, don't waste your time optimizing.)
when the profiling says that this method is a performance hotspot. (There is a good chance that the real hotspots are somewhere else. If so, you should optimize them rather than this method.)
Note there is an assumption that you have (sufficiently) completed your application and created a realistic benchmark for it before you think about optimizing. (Premature optimization is a bad idea ... unless you really know what you are doing.)

As a general approach, removing individual elements from an ArrayList in a loop is inefficient, because it requires shifting all of the "following" elements along one position in the array.
A B C D E
  ^ if you remove this
    ^---^ you have to shift these 3 along by one
   /  /  /
A C D E
If you remove lots of elements, this will have a substantial impact on the time complexity. It's better to identify the elements to remove, and then remove them all at once.
I suggest that a neater way to do this would be using removeIf, which (at least for collection implementations such as ArrayList) does this "all at once" removal:
this.comments.removeIf(
        c -> auditList.stream().anyMatch(c::contains));
This is concise, but probably quite slow because it has to keep checking the entire comment string to see if it contains each bad word.
A probably faster way would be to use regex:
Pattern p = Pattern.compile(
        auditList.stream()
                 .map(Pattern::quote)
                 .collect(joining("|")));
this.comments.removeIf(
        c -> p.matcher(c).find());
This would be better because the compiled regex would search for all of the bad words in a single pass over each comment.
The other advantage of a regex-based approach is that you can check case insensitively, by supplying the appropriate flag when compiling the regex.
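For example (a sketch, extending the snippet above):

Pattern p = Pattern.compile(
        auditList.stream()
                 .map(Pattern::quote)
                 .collect(joining("|")),
        Pattern.CASE_INSENSITIVE | Pattern.UNICODE_CASE);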


Is use of AtomicInteger for indexing in Stream a legit way?

I would like to get an answer pointing out the reasons why the following idea, described below on a very simple example, is commonly considered bad, and to know its weaknesses.
I have a sentence of words and my goal is to make every second one uppercase. My starting point for both of the cases is exactly the same:
String sentence = "Hi, this is just a simple short sentence";
String[] split = sentence.split(" ");
The traditional and procedural approach is:
StringBuilder stringBuilder = new StringBuilder();
for (int i = 0; i < split.length; i++) {
    if (i % 2 == 0) {
        stringBuilder.append(split[i]);
    } else {
        stringBuilder.append(split[i].toUpperCase());
    }
    if (i < split.length - 1) {
        stringBuilder.append(" ");
    }
}
When I want to use java-stream, the usage is limited due to the effectively-final or final constraint on variables used in a lambda expression. I have to use a workaround with an array and its first and only index, which was suggested in the first comment of my question How to increment a value in Java Stream. Here is the example:
int[] index = {0};
String result = Arrays.stream(split)
        .map(i -> index[0]++ % 2 == 0 ? i : i.toUpperCase())
        .collect(Collectors.joining(" "));
Yeah, it's a bad solution, and I have heard a few good reasons somewhere, hidden in the comments of a question I am unable to find (if you remind me of some of them, I'd upvote twice if possible). But what if I use AtomicInteger: does it make any difference, and is it a good and safe way with no side effects compared to the previous one?
AtomicInteger atom = new AtomicInteger(0);
String result = Arrays.stream(split)
        .map(i -> atom.getAndIncrement() % 2 == 0 ? i : i.toUpperCase())
        .collect(Collectors.joining(" "));
Regardless of how ugly it might look to anyone, I'm asking for a description of the possible weaknesses and their reasons. I don't care about the performance, but about the design and possible weaknesses of the 2nd solution.
Please don't associate AtomicInteger with multi-threading issues. I used this class since it receives, increments, and stores the value in the way I need for this example.
As I often say in my answers, the Java Stream-API is not the silver bullet for everything. My goal is to explore and find the edge where this sentence is applicable, since I find the last snippet quite clear, readable, and brief compared to the StringBuilder snippet.
Edit: Does there exist any alternative way, applicable to the snippets above, for all the cases where one needs to work with both the item and its index while iterating using the Stream-API?
The documentation of the java.util.stream package states that:
Side-effects in behavioral parameters to stream operations are, in general, discouraged, as they can often lead to unwitting violations of the statelessness requirement, as well as other thread-safety hazards.
[...]
The ordering of side-effects may be surprising. Even when a pipeline is constrained to produce a result that is consistent with the encounter order of the stream source (for example, IntStream.range(0,5).parallel().map(x -> x*2).toArray() must produce [0, 2, 4, 6, 8]), no guarantees are made as to the order in which the mapper function is applied to individual elements, or in what thread any behavioral parameter is executed for a given element.
This means that the elements may be processed out of order, and thus the Stream-solutions may produce wrong results.
This is (at least for me) a killer argument against the two Stream-solutions.
By the process of elimination, we only have the "traditional solution" left. And honestly, I do not see anything wrong with this solution. If we wanted to get rid of the for-loop, we could re-write this code using a foreach-loop:
StringBuilder stringBuilder = new StringBuilder();
boolean toUpper = false; // 1st word is not capitalized
for (String word : split) {
    if (stringBuilder.length() > 0) {
        stringBuilder.append(" "); // keep the words space-separated
    }
    stringBuilder.append(toUpper ? word.toUpperCase() : word);
    toUpper = !toUpper;
}
For a streamified and (as far as I know) correct solution, take a look at Octavian R.'s answer.
Your question wrt. the "limits of streams" is opinion-based.
The answer to the question (s) ends here. The rest is my opinion and should be regarded as such.
In Octavian R.'s solution, an artificial index-set is created through an IntStream, which is then used to access the String[]. For me, this has a higher cognitive complexity than a simple for- or foreach-loop, and I do not see any benefit in using streams instead of loops in this situation.
In Java, compared with Scala, you must be inventive. One solution without mutation is this one:
String sentence = "Hi, this is just a simple short sentence";
String[] split = sentence.split(" ");
String result = IntStream.range(0, split.length)
        .mapToObj(i -> i % 2 == 0 ? split[i] : split[i].toUpperCase())
        .collect(Collectors.joining(" "));
System.out.println(result);
In Java streams you should avoid mutation. Your solution with AtomicInteger is ugly, and it's a bad practice.
Kind regards!
As explained in Turing85’s answer, your stream solutions are not correct, as they rely on the processing order, which is not guaranteed. This can lead to incorrect results with parallel execution today, but even if it happens to produce the desired result with a sequential stream, that’s only an implementation detail. It’s not guaranteed to work.
Besides that, there is no advantage in rewriting code to use the Stream API with logic that basically still is a loop, but obfuscated by a different API. The best way to describe the idea of the new APIs is to say that you should express what to do, but not how.
Starting with Java 9, you could implement the same thing as
String result = Pattern.compile("( ?+[^ ]* )([^ ]*)").matcher(sentence)
.replaceAll(m -> m.group(1)+m.group(2).toUpperCase());
which expresses the wish to replace every second word with its upper case form, but doesn’t express how to do it. That’s up to the library, which likely uses a single StringBuilder instead of splitting into an array of strings, but that’s irrelevant to the application logic.
As long as you’re using Java 8, I’d stay with the loop and even when switching to a newer Java version, I would consider replacing the loop as not being an urgent change.
The pattern in the above example has been written in a way to do exactly the same as your original code splitting at single space characters. Usually, I’d encode “replace every second word” more like
String result = Pattern.compile("(\\w+\\W+)(\\w+)").matcher(sentence)
.replaceAll(m -> m.group(1)+m.group(2).toUpperCase());
which would behave differently when encountering multiple spaces or other separators, but usually is closer to the actual intention.

Match a String against a large list of regexps, performance, in Java

I have the following:
private static List<Pattern> pats;
This list contains around 90 patterns; it is instantiated before the iteration. The patterns are complex, like:
System.out.println("pat: " + pats.get(0).toString());
// pat: \bsingle1\b|\bsingle2\b|(?=.*\bcombo1\b)(?=.*\bcombo2\b)|\bsingle3\b|\bwild.*card\b ...
Some of the patterns contains around 40-50 single words or combination of words, as the regex above shows. The words can contain wildcards.
Now, I have a list of strings, sentences of around 30-60 characters each. I iterate through them, and for every string in the list I iterate through the list of patterns and perform pattern.matcher("This is one of the strings in my list").find() until I get a match, which I mark down and save somewhere else; then I break out of the iteration through the patterns and continue with the next string in the list.
This is a categorization job, so several strings can match on the same pattern.
My problem is that this of course takes a lot of execution time, I am looking for a more efficient way to solve this problem.
Any suggestions?
One thing that solved my problem (to 90%) was to partially give up regex where String.indexOf() made more sense from a performance perspective.
This post inspired me: Quickest way to return list of Strings by using wildcard from collection in Java
I wrote my own implementation since the one in the link handles only full words, while I'm dealing with sentences.
It helped performance-wise with wildcards "*" and pipes "hel(l|lo)", the former more than the latter.
The reason for this direction was several recommendations, and it improved performance by cutting the time for 200,000 sentences from 1.5 hours down to 15 minutes.
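The gist of that fast path might look like this (a sketch; it assumes the plain words have already been separated from the patterns that genuinely need regex):

// Cheap literal containment first; fall back to regex only where needed.
static boolean matchesAny(String sentence, List<String> literals, List<Pattern> regexes) {
    for (String literal : literals) {
        if (sentence.indexOf(literal) >= 0) {
            return true;
        }
    }
    for (Pattern pattern : regexes) {
        if (pattern.matcher(sentence).find()) {
            return true;
        }
    }
    return false;
}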
You could also offload the regular expression matching to a dedicated service. I believe that it could be faster (and perhaps safer) than partially giving up regexp.
If your app is intended to run on multiple servers, you may also gain performance by centralizing the computation cost.
Here is an example of such implementation via a REST api : http://www.rex-daemon.com/tutorial/more-advanced-queries/

Would Java indexOf (brute force method) be more practical for me or some other substring algorithm?

I'm looking at finding very short substrings (pattern, needle) in many short lines of text (haystack). However, I'm not quite sure which method to use outside the naive, brute force method.
Background: I'm doing a side project for fun where I receive text messaging chat logs of multiple users (anywhere from 2000-15000 lines of text and 2-50 users), and I want to find all the various pattern matches in the chat logs based on predetermined words that I've come up with. So far I have about 1600 patterns that I'm looking for, but I may look for more.
So for example, I want to find the number of food-related words that are used in an average text message log, such as "hamburger", "pizza", "coke", "lunch", "dinner", "restaurant", "McDonalds". While I gave English examples, I will actually be using Korean for my program. Each of these designated words has its own respective score, which I put in a hashmap as key and value. I then show the top scorers for food-related words, as well as the most frequent food words used by those users.
My current method is to split each line of text on whitespace and process each individual word from the haystack, using the contains method (which uses the indexOf method and the naive substring search algorithm) to check whether the word contains the pattern.
wordFromInput.contains(wordFromPattern);
To give an example, with 17 users in chat, 13000 lines of text, and the 1600 patterns, I've found that this whole program took 12-13 seconds with this method. And on the Android app that I'm developing, it took 2 minutes and 30 seconds to process, which is far too slow.
Originally, I tried to use a hash map and to merely get the pattern instead of searching for it in the ArrayList, but I then realized that is not possible with a hash table for what I am trying to do with a substring.
I've looked around through Stack Overflow and found a lot of helpful and related questions, such as these two: 1 and 2. I'm somewhat more familiar with the various string algorithms (Boyer-Moore, KMP, etc.).
I initially thought that the naive method would of course be the worst type of algorithm for my case, but having found this question, I've realized that my case (short pattern, short text) might actually be handled most effectively with the naive method. But I wanted to know if there was something that I was neglecting completely.
Here is a snippet of my code, though, if anyone wants to see my issue more concretely. While I removed large parts of the code to simplify it, the primary method that I use to actually match substrings is there in the method matchWords().
I know that's really ugly and bad code (5 for loops...), so if there are any suggestions for that, I'm happy to hear it as well.
So to clean it up:
lines of text from chat logs (2000-10,000+), haystack
1600+ patterns, needle(s)
mostly using Korean characters, although some English is included
Brute force naive method is simply too slow, but debating whether there are other alternatives and even if there are, whether they are practical given the nature of short patterns and text.
I just want some input on my thought process, and possibly some general advice. But additionally, I would like some specific suggestion for a particular algorithm or method if that is possible.
You can replace the hashtable with a Trie.
Split the line of text into words using white space to separate words. Then check if the word is in the Trie. If it is in the Trie, update a counter associated with the word. Ideally, the counter would be integrated into the Trie.
This approach is O(C), where C is the number of characters in the text. It's highly unlikely that you can avoid checking each character at least once, so this approach should be as good as you can get, at least in terms of big O.
However, it sounds like you may not want to list all of the possible words you are searching for. Therefore, you might simply build a counting Trie from all of the words. If nothing else, that'll probably make it easier for any pattern matching algorithm you use, although it might require some modifications to the Trie.
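A minimal counting Trie might look like this (a sketch; it uses a Map per node so it works for any alphabet, including Korean):

import java.util.HashMap;
import java.util.Map;

class CountingTrie {
    private static class Node {
        final Map<Character, Node> children = new HashMap<>();
        int count = -1; // -1 means "not a search word"
    }

    private final Node root = new Node();

    void addWord(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Node());
        }
        node.count = 0; // mark as a word to count
    }

    // Bumps the counter if the word is a search word.
    boolean countIfPresent(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.get(c);
            if (node == null) {
                return false;
            }
        }
        if (node.count < 0) {
            return false;
        }
        node.count++;
        return true;
    }
}

Split each line of text on whitespace and call countIfPresent on each word.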
What you're describing sounds like an excellent use case for the Aho-Corasick string-matching algorithm. This algorithm finds all matches of a set of pattern strings inside of a source string and does so in linear time (plus the time to report the matches). If you have a fixed set of strings to search for, you can do linear preprocessing work up front on the patterns to search for all matches very quickly.
There's a Java implementation of Aho-Corasick available here. I haven't tried it out, but it might be a good match.
Hope this helps!
I'm pretty sure String.contains is already highly optimized, so replacing it with something else is not going to do you a lot of good.
So the way to go, I suspect, is not to look for each and every bank-word in your chat words, but rather do multiple comparisons at once.
The first way to do it would be to create one huge regular expression that will match all your bank-words. Compile it and hope the regular expression package is efficient enough (chances are - it is). You will have a rather lengthy setup stage (the regex compilation), but matches should be a lot faster.
You can build an index of the words you need to match and count them as you process them. If you can use a HashMap to lookup the patterns for each word, the cost will be O(n * m)
You can use a HashMap for all the possible words, and then dissect the words later.
E.g. say you need to match red and apple; you can combine the sums of:
redapple = 1
applered = 0
red = 10
apple = 15
This means that red is actually 11 (10 + 1), and apple is 16 (15 + 1).
I don't know Korean, so I imagine the strategies used to tinker with Strings in English aren't necessarily possible in Korean in the same way. But perhaps this strategy in pseudocode can be applied with your knowledge of Korean to make it work. (Java is of course still the same; but, for example, in Korean is it still highly likely for the letters "ough" to be in succession? Are there even letters for "ough"? With that being said, hopefully the principle can be applied.)
I would use String.toCharArray to create a two-dimensional array (or an ArrayList if a variable size is needed). Then:
if (first letter of word matches keyword's first letter) // we have a candidate
    skip to last letter of the current word // see comment below
    if (last letter of word matches keyword's last letter) // strong candidate
        iterate backwards to start+1, checking the remainder of the letters
The reason I suggest skipping to the last letter is that, statistically, a "consonant, vowel" start for the first two letters of a word is significantly common, especially for nouns, which will make up a lot of your keywords since any food is a noun (almost all the keyword examples you gave matched that consonant-vowel structure). And since there are only 5 vowels (plus y), the likelihood of the second letter "i" showing up in the keyword "pizza" is inherently high, yet after that point there is still a good chance that the word may turn out not to be a match.
However, if you know that the first letter and the last letter match, then you probably have a much stronger candidate and can then iterate in reverse. I think over larger sets of data this would eliminate candidates much faster than checking letters in order: basically, checking in order lets too many false candidates past the second iteration, increasing your overall number of conditional operations. It might sound like something small, but in a project like this there's lots of reiterating, so micro-optimizations accumulate very quickly.
If this approach can be applied in a language that's probably structurally very different from English (I'm speaking from ignorance here, though), then I think it might provide some efficiency for you, whether you make it happen by iterating over a char array, with a scanner, or with any other construct.
The trick is to realise that if you can describe the string you are searching for as a regular expression you can also, by definition, describe it with a state machine.
At every character in your message start a state machine for every one of your 1600 patterns and pass the character through it. This sounds scary but believe me most of them will terminate immediately anyway so you aren't really doing a huge amount of work. Bear in mind that a state machine can usually be encoded with a simple switch/case or a ch == s.charAt at each step so they are close to the ultimate in light-weight.
Obviously you know what to do whenever one of your search machines terminates at the end of their search. Any that terminate before full-match can be discarded immediately.
private static class Matcher {
    private final int where;
    private final String s;
    private int i = 0;

    public Matcher(String s, int where) {
        this.s = s;
        this.where = where;
    }

    public boolean match(char ch) {
        return s.charAt(i++) == ch;
    }

    public int matched() {
        return i == s.length() ? where : -1;
    }
}

// Words I am looking for.
String[] watchFor = new String[]{"flies", "like", "arrow", "banana", "a"};
// Test string to search.
String test = "Time flies like an arrow, fruit flies like a banana";

public void test() {
    // Use a LinkedList because it is O(1) to remove anywhere.
    List<Matcher> matchers = new LinkedList<>();
    int pos = 0;
    for (char c : test.toCharArray()) {
        // Fire off all of the matchers at this point.
        for (String s : watchFor) {
            matchers.add(new Matcher(s, pos));
        }
        // Discard all matchers that fail here.
        for (Iterator<Matcher> i = matchers.iterator(); i.hasNext(); ) {
            Matcher m = i.next();
            // Should it be removed?
            boolean remove = !m.match(c);
            if (!remove) {
                // Still matches! Is it complete?
                int matched = m.matched();
                if (matched >= 0) {
                    // Todo - Should use getters.
                    System.out.println(" " + m.s + " found at " + m.where + " active matchers " + matchers.size());
                    // Complete!
                    remove = true;
                }
            }
            // Remove it where necessary.
            if (remove) {
                i.remove();
            }
        }
        // Step pos to keep track.
        pos += 1;
    }
}
prints
flies found at 5 active matchers 6
like found at 11 active matchers 6
a found at 16 active matchers 2
a found at 19 active matchers 2
arrow found at 19 active matchers 6
flies found at 32 active matchers 6
like found at 38 active matchers 6
a found at 43 active matchers 2
a found at 46 active matchers 3
a found at 48 active matchers 3
banana found at 45 active matchers 6
a found at 50 active matchers 2
There are several simple optimisations. With some simple pre-processing the most obvious is to use the current character to determine which matchers may be applicable.
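For instance (a sketch building on the code above; only the matcher-creation loop changes):

// Pre-processing: group the watch-words by their first character.
Map<Character, List<String>> byFirstChar = new HashMap<>();
for (String s : watchFor) {
    byFirstChar.computeIfAbsent(s.charAt(0), k -> new ArrayList<>()).add(s);
}

// In test(), only fire off matchers that can possibly match at this point:
for (String s : byFirstChar.getOrDefault(c, Collections.emptyList())) {
    matchers.add(new Matcher(s, pos));
}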
This is a pretty broad question, so I won't go into too much detail, but roughly:
Pre-process the haystacks using something like a broad lemmatizer to create "topic word only" versions of the messages, by noting which topics all words in a message cover. For example, any occurrences of "hamburger", "pizza", "coke", "lunch", "dinner", "restaurant", or "McDonalds" would cause the "topic" word "food" to be collected for that message. Some words may have multiple topics, e.g. "McDonalds" may be in the topics "food" and "business". Most words won't have any topic.
After this process, you'll have haystacks consisting of only "topic" words. Then create a Map<String, Set<Integer>> and populate it with the topic word and the Set of chat message ids that contain it. This is a reverse index of topic word to the chat messages that contain it.
The runtime code to find all documents that contain all n words is then trivial and super fast - near O(#terms):
private Map<String, Set<Integer>> index; // pre-populated

Set<Integer> search(String... topics) {
    Set<Integer> results = null;
    for (String topic : topics) {
        Set<Integer> hits = index.get(topic);
        if (hits == null)
            return Collections.emptySet();
        if (results == null)
            results = new HashSet<Integer>(hits);
        else
            results.retainAll(hits);
        if (results.isEmpty())
            return Collections.emptySet(); // exit early
    }
    return results;
}
This will perform near O(1), and tell you which messages share all search terms. If you just want the number, use the trivial size() of the returned Set.

Fastest way to create a keyword filter?

I'm trying to filter tweets based on a keyword filter. The filter could have 10 words or more. A tweet passes if it contains the keywords. The only thing I can think of is to split the tweet's text into tokens. Then I would loop over the filter words and compare every token to every word in the filter. However, this way seems very slow. Suppose the keyword filter has N keywords and the number of tokens is M; then it needs O(N*M).
Is there a better approach?
There are a number of interesting aspects to this question and ways to approach the problem. Every one of them has trade-offs.
When people go on about HashMaps and such being O(1), they're still missing some of the compile-time optimizations that can be done. Knowing the set of words at compile time allows you to put it into an Enum, which then allows you to use the lesser-known EnumMap (doc) and EnumSet (doc). An Enum gives you an ordinal type that allows you to size the backing array or bitfield once and never worry about expanding it. Likewise, the hash of an enum is its ordinal value, so you don't have complex hash lookups (especially of non-interned strings). The EnumSet is kind of a type-safe bitfield.
import java.util.EnumSet;

public class Main {
    public static void main(String[] args) {
        EnumSet<Words> s = EnumSet.noneOf(Words.class);
        for (String a : args) {
            s.clear();
            for (String w : a.split("\\s+")) {
                try {
                    s.add(Words.valueOf(w.toUpperCase()));
                } catch (IllegalArgumentException e) {
                    // nothing really
                }
            }
            System.out.print(a);
            if (s.size() == 4) { System.out.println(": All!"); }
            else { System.out.println(": Only " + s.size()); }
        }
    }

    enum Words {
        STACK,
        SOUP,
        EXCHANGE,
        OVERFLOW
    }
}
When run with some example strings on the command line:
"stack exchange overflow soup foo"
"stack overflow"
"stack exchange blah"
One gets the results:
stack exchange overflow soup foo: All!
stack overflow: Only 2
stack exchange blah: Only 2
You've moved what one matches into the core language, hoping it's well optimized. It turns out this looks like it's ultimately just a Map<String, T> (and digging even further, it's a HashMap hidden deep within the Class class).
You've got a String. Splitting it into tokens of some sort is unavoidable. Each token needs to be examined to see if it matches. But comparing them against all the tokens is as you've noted expensive.
However, the language of "matches exactly these strings" is a regular one. This means we can use a regular expression to filter out the words that are not going to match. The regular expression runs in O(n) time (see What is the complexity of regular expression? ).
This doesn't get rid of O(wordsInString * keyWords), because that still is the worst case (which is what O() represents), but it does mean that for each unmatched word you've only spent O(charsInWord) on eliminating it.
package com.michaelt.so.keywords;

import java.util.EnumSet;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {
    final static Pattern pat =
            Pattern.compile("S(?:TACK|OUP)|EXCHANGE|OVERFLOW", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) {
        EnumSet<Words> s = EnumSet.noneOf(Words.class);
        Matcher m = pat.matcher("");
        for (String a : args) {
            s.clear();
            for (String w : a.split("\\s+")) {
                m.reset(w);
                if (m.matches()) {
                    try {
                        s.add(Words.valueOf(w.toUpperCase()));
                    } catch (IllegalArgumentException e) {
                        // nothing really
                    }
                } else {
                    System.out.println("No need to look at " + w);
                }
            }
            System.out.print(a);
            if (s.size() == 4) { System.out.println(": All!"); }
            else { System.out.println(": Only " + s.size()); }
            System.out.println();
        }
    }

    enum Words {
        STACK,
        SOUP,
        EXCHANGE,
        OVERFLOW
    }
}
And this gives the output of:
No need to look at foo
stack exchange overflow soup foo: All!
stack overflow: Only 2
No need to look at blah
stack exchange blah: Only 2
Now, the big letdown: despite all of this, it is probably still faster for Java to compute the hash of the string and look it up in a hash table to see if it exists or not.
The only thing here that would be better would be to make a regex that matches all the strings. As mentioned, it is a regular language.
(?:stack\b.+?\bexchange\b.+?\bsoup\b.+?\boverflow)|(?:soup\b.+?\bexchange\b.+?\bstack\b.+?\boverflow) ...
The above regex will match the string stack exchange pea soup overflow
There are four words here; that means 4! parts for (s1)|(s2)|(s3)|...|(s24). A regex with 10 keywords approached this way would be (s1)|...|(s3628800), which could be considered very impractical, and some engines might choke on a regex that large. Still, it would trim it down to O(n), where n is the length of the string you've got.
Further note that this is an all filter rather than an any filter or a some filter.
If you want to match one keyword out of ten, then the regex is only ten groups long. If you want to match two keywords out of ten, then it's only 90 groups long (a bit long, but the engine might not choke on it). This regex can be programmatically generated.
This will get you back down to O(N) time where N is the length of the tweet. No splitting required.
One way I'm thinking of solving this is by creating a HashSet and putting all of the tweet's text tokens inside it. Then I would loop over the words in the keyword filter and check if they are all in the HashSet.
Searching in a HashMap is more or less O(1) so if you store the keys in a HashMap (for example) you will only need to check M times, so it will be O(M).
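A sketch of that idea (tweet and keywords are assumed names):

// One pass to build the token set, then one ~O(1) lookup per keyword.
Set<String> tokens = new HashSet<>(Arrays.asList(tweet.split("\\s+")));
boolean passes = tokens.containsAll(keywords); // "all keywords present" filter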
If you have enough time for preprocessing you could build up an index: a list (in some easy to search data structure, like a tree or hash table) of all the words contained in all tweets. Each word has the ids of tweets attached that contain this word.
Then you can lookup the keywords in the index and compute the intersection of the IDs.
This technique is known as an inverted index.
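Building such an index takes only a few lines (a sketch; tweets is an assumed List<String>, and the ids are just list positions):

Map<String, Set<Integer>> index = new HashMap<>();
for (int id = 0; id < tweets.size(); id++) {
    for (String word : tweets.get(id).split("\\s+")) {
        index.computeIfAbsent(word, k -> new HashSet<>()).add(id);
    }
}
// index.get(keyword) now yields the ids of all tweets containing that word;
// intersect the sets (e.g. with retainAll) to require several keywords.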
I think you can do it using a HashSet in O(M+N), but if you need to save some space you can also try a Bloom filter, which gives false positives with low probability.
It depends:
Is it real-time filtering?
Are you going to re-run the filtering with a different set of words?
If it's real-time, it also depends on the number of words: you can use the contains method, or build a regex and hope that it will be fast.
If it's offline work and you are not going to change the set of words, you can use the same methods as in the real-time case. If you think you are going to change the filter, you will want to build an index:
for each word, save a hash where the key is the tweet id (the value is a bit)
to find all the tweets with the filter words, go over the words and intersect the tweet ids for each word

Fastest way to find Strings in String collection that begin with certain chars

I have a large collection of Strings. I want to be able to find the Strings that begin with "Foo" or the Strings that end with "Bar". What would be the best Collection type to get the fastest results? (I am using Java)
I know that a HashSet is very fast for complete matches, but not for partial matches I would think? So, what could I use instead of just looping through a List? Should I look into LinkedList's or similar types? Are there any Collection Types that are optimized for this kind of queries?
The best collection type for this problem is SortedSet. You would need two of them in fact:
Words in regular order.
Words with their characters inverted.
Once these SortedSets have been created, you can use method subSet to find what you are looking for. For example:
Words starting with "Foo":
forwardSortedSet.subSet("Foo","Fop");
Words ending with "Bar":
backwardSortedSet.subSet("raB","raC");
The reason we are "adding" 1 to the last search character is to obtain the whole range. The "ending" word is excluded from the subSet, so there is no problem.
EDIT: Of the two concrete classes that implement SortedSet in the standard Java library, use TreeSet. The other (ConcurrentSkipListSet) is oriented to concurrent programs and thus not optimized for this situation.
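Putting it together, a sketch (words is the assumed source collection):

NavigableSet<String> forwardSortedSet = new TreeSet<>(words);
NavigableSet<String> backwardSortedSet = new TreeSet<>();
for (String w : words) {
    // Store each word with its characters inverted.
    backwardSortedSet.add(new StringBuilder(w).reverse().toString());
}

// All words starting with "Foo":
SortedSet<String> startsWithFoo = forwardSortedSet.subSet("Foo", "Fop");

// All words ending with "Bar" (results are reversed; reverse them back when reading):
SortedSet<String> endsWithBar = backwardSortedSet.subSet("raB", "raC");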
It's been a while but I needed to implement this now and did some testing.
I already have a HashSet<String> as the source, so generation of all the other data structures is included in the search time. 100 different sources are used, and each time the data structures need to be regenerated. I only need to match a few single Strings each time. These tests ran on Android.
Methods:
1. Simple loop through the HashSet and call endsWith() on each string.
2. Simple loop through the HashSet and perform a precompiled Pattern match (regex) on each string.
3. Convert the HashSet to a single String joined by \n and do a single match on the whole String.
4. Generate a TreeSet with the reversed Strings from the HashSet, then match with subSet() as explained by @Mario Rossi.
Results:
Duration for method 1: 173ms (data setup:0ms search:173ms)
Duration for method 2: 6909ms (data setup:0ms search:6909ms)
Duration for method 3: 3026ms (data setup:2377ms search:649ms)
Duration for method 4: 2111ms (data setup:2101ms search:10ms)
Conclusion:
SortedSet/TreeSet is extremely fast in searching: much faster than just looping through all Strings. However, creating the structure takes a lot of time. Regexes are much slower, but generating a single large String out of hundreds of Strings is more of a bottleneck on Android/Java.
If only a few matches need to be made, then you'd better loop through your collection. If you have many more matches to make, it may be very useful to use a TreeSet!
If the list of words is stable (not many words are added or deleted), a very good second alternative is to create 2 lists:
One with the words in normal order.
The second with the characters in each word reversed.
For speed purposes, make them ArrayLists; never LinkedLists or other variants, which perform extremely badly on random access (the core of binary search; see below).
After the lists are created, they can be sorted with method Collections.sort (only once each) and then searched with Collections.binarySearch. For example:
Collections.sort(forwardList);
Collections.sort(backwardList);
And then to search for words starting in "Foo":
int i = Collections.binarySearch(forwardList, "Foo");
if (i < 0) {
    i = -i - 1; // "Foo" itself is not in the list; start at its insertion point
}
while (i < forwardList.size() && forwardList.get(i).startsWith("Foo")) {
    // Process String forwardList.get(i)
    i++;
}
And words ending in "Bar":
int i = Collections.binarySearch(backwardList, "raB");
if (i < 0) {
    i = -i - 1; // "raB" itself is not in the list; start at its insertion point
}
while (i < backwardList.size() && backwardList.get(i).startsWith("raB")) {
    // Process String backwardList.get(i) (reverse it to recover the original word)
    i++;
}
