I have an object with five String fields, named String1 to String5.
When I remove String1 from the object, I have to shift the values so that String1 takes String2's value and so on, and String5 becomes null.
Say we have a HashMap as below:
HashMap<Integer,String>
It has five entries: keys 1 to 5 and the corresponding String values.
Now we have to remove the 1st value such that the 2nd value becomes the 1st, the 3rd becomes the 2nd, and so on.
How can we achieve this?
e.g. the HashMap has
(1,"Art")
(2,"Math")
(3,"Science")
(4,"History")
(5,"Physics")
Now if I have to delete the 1st value, the HashMap will become
(1,"Math")
(2,"Science")
(3,"History")
(4,"Physics")
(5,null)
If I have to delete the 2nd value, the 1st one will remain the same, but the lower entries will change as follows:
(1,"Art")
(2,"Science")
(3,"History")
(4,"Physics")
(5,null)
As mentioned in the comments, you might be better off using a List<String>
private static List<String> removeTopic(List<String> topics, String topic) {
List<String> topicsCopy = new ArrayList<>(topics);
topicsCopy.remove(topic);
topicsCopy.add(null);
return topicsCopy;
}
Then:
List<String> topics = Arrays.asList("Art", "Math", "Science", "History", "Physics");
System.out.println(topics);
topics = removeTopic(topics, "Math");
System.out.println(topics);
[Art, Math, Science, History, Physics]
[Art, Science, History, Physics, null]
You can do it like this.
Map<Integer, String> map = new HashMap<>(Map.of(1,"Art", 2, "Math",
3,"Science", 4,"History", 5,"Physics"));
System.out.println(map);
delete(2,map);
System.out.println(map);
public static void delete(int key, Map<Integer,String> map) {
    // shift every later value down by one key; the last key ends up mapped to null
    for (int i = key; i <= map.size(); i++) {
        map.put(i, map.get(i + 1));
    }
}
But there is no reason to do so. Your keys imply a linear ordering which will certainly work. But why not just use a List of values and simply delete them with the built in methods?
Then you don't need a fancy method to do so. You can just do it like this.
List<String> list = new ArrayList<>(List.of("Art", "Math", "Science", "History", "Physics"));
System.out.println(list);
list.remove("Math");
System.out.println(list);
If you really want a null value at the end you can add it. But it serves little value as far as I can tell.
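For instance, following the removal shown above, padding might look like this:
list.add(null); // pad the list back to its original size of five, mirroring the (5, null) entry in the question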
private Map<Integer, String> values = new HashMap<>();

public void add(String s) {
    values.put(values.size(), s);
}

public void remove(int slot) {
    for (int i = slot; i <= values.size(); i++) {
        if (i == slot) {
            values.remove(slot);
        } else {
            String s = values.get(i);
            values.put(i - 1, s);
            values.remove(i);
        }
    }
    values.put(values.size(), null);
}

public static void main(String[] args) {
    Main4 m = new Main4();
    for (int i = 0; i < 10; i++) {
        m.add(String.valueOf(i));
    }
    System.out.println(m.values);
    m.remove(3);
    System.out.println(m.values);
}
I have an ArrayList<String>, and I want to remove repeated strings from it. How can I do this?
If you don't want duplicates in a Collection, you should consider why you're using a Collection that allows duplicates. The easiest way to remove repeated elements is to add the contents to a Set (which will not allow duplicates) and then add the Set back to the ArrayList:
Set<String> set = new HashSet<>(yourList);
yourList.clear();
yourList.addAll(set);
Of course, this destroys the ordering of the elements in the ArrayList.
Although converting the ArrayList to a HashSet effectively removes duplicates, if you need to preserve insertion order, I'd suggest using this variant instead:
// list is some List of Strings
Set<String> s = new LinkedHashSet<>(list);
Then, if you need to get back a List reference, you can use again the conversion constructor.
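For example, something along these lines:
// Rebuild a List from the LinkedHashSet; insertion order is preserved
List<String> deduplicated = new ArrayList<>(s);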
In Java 8:
List<String> deduped = list.stream().distinct().collect(Collectors.toList());
Please note that the hashCode-equals contract for list members should be respected for the filtering to work properly.
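For instance, a custom element type would typically override both methods consistently; a minimal sketch (the Person class and its field are illustrative, not from the original answer):
import java.util.Objects;

class Person {
    private final String name;

    Person(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return Objects.equals(name, ((Person) o).name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name); // must agree with equals so distinct() can deduplicate
    }
}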
Suppose we have a list of String like:
List<String> strList = new ArrayList<>(5);
// insert up to five items to list.
Then we can remove duplicate elements in multiple ways.
Prior to Java 8
List<String> deDupStringList = new ArrayList<>(new HashSet<>(strList));
Note: If we want to maintain the insertion order then we need to use LinkedHashSet in place of HashSet
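For example:
// LinkedHashSet keeps the first occurrence of each string in its original position
List<String> deDupOrderedList = new ArrayList<>(new LinkedHashSet<>(strList));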
Using Guava
List<String> deDupStringList2 = Lists.newArrayList(Sets.newHashSet(strList));
Using Java 8
List<String> deDupStringList3 = strList.stream().distinct().collect(Collectors.toList());
Note: In case we want to collect the result in a specific list implementation e.g. LinkedList then we can modify the above example as:
List<String> deDupStringList3 = strList.stream().distinct()
.collect(Collectors.toCollection(LinkedList::new));
We can also use parallelStream in the above code, but it may not give the expected performance benefits. Check this question for more.
If you don't want duplicates, use a Set instead of a List. To convert a List to a Set you can use the following code:
// list is some List of Strings
Set<String> s = new HashSet<String>(list);
If really necessary you can use the same construction to convert a Set back into a List.
Java 8 streams provide a very simple way to remove duplicate elements from a list. Using the distinct method.
If we have a list of cities and we want to remove duplicates from that list it can be done in a single line -
List<String> cityList = new ArrayList<>();
cityList.add("Delhi");
cityList.add("Mumbai");
cityList.add("Bangalore");
cityList.add("Chennai");
cityList.add("Kolkata");
cityList.add("Mumbai");
cityList = cityList.stream().distinct().collect(Collectors.toList());
How to remove duplicate elements from an ArrayList
You can also do it this way, and preserve order:
// delete duplicates (if any) from 'myArrayList'
myArrayList = new ArrayList<String>(new LinkedHashSet<String>(myArrayList));
Here's a way that doesn't affect your list ordering:
ArrayList<YourClass> l1 = new ArrayList<>();
ArrayList<YourClass> l2 = new ArrayList<>();

Iterator<YourClass> iterator = l1.iterator();
while (iterator.hasNext()) {
    YourClass o = iterator.next();
    if (!l2.contains(o)) l2.add(o);
}
l1 is the original list, and l2 is the list without repeated items.
(Make sure YourClass overrides equals according to what you want to count as equality.)
This can solve the problem:
private List<SomeClass> clearListFromDuplicateFirstName(List<SomeClass> list1) {
    // entries with the same first name overwrite each other; the order of first insertion is kept
    Map<String, SomeClass> cleanMap = new LinkedHashMap<String, SomeClass>();
    for (int i = 0; i < list1.size(); i++) {
        cleanMap.put(list1.get(i).getFirstName(), list1.get(i));
    }
    List<SomeClass> list = new ArrayList<SomeClass>(cleanMap.values());
    return list;
}
It is possible to remove duplicates from an ArrayList without using a HashSet or an additional ArrayList.
Try this code:
ArrayList<String> lst = new ArrayList<String>();
lst.add("ABC");
lst.add("ABC");
lst.add("ABCD");
lst.add("ABCD");
lst.add("ABCE");
System.out.println("Duplicates List "+lst);
Object[] st = lst.toArray();
for (Object s : st) {
if (lst.indexOf(s) != lst.lastIndexOf(s)) {
lst.remove(lst.lastIndexOf(s));
}
}
System.out.println("Distinct List "+lst);
Output is
Duplicates List [ABC, ABC, ABCD, ABCD, ABCE]
Distinct List [ABC, ABCD, ABCE]
There is also ImmutableSet from Guava as an option (see the Guava documentation):
ImmutableSet.copyOf(list);
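If a List is ultimately needed, the immutable set can be viewed as a list again, for example:
// Deduplicates while preserving encounter order; asList() returns an ImmutableList view
List<String> deduped = ImmutableSet.copyOf(list).asList();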
Probably a bit overkill, but I enjoy this kind of isolated problem. :)
This code uses a temporary Set (for the uniqueness check) but removes elements directly inside the original list. Since element removal inside an ArrayList can induce a huge amount of array copying, the remove(int)-method is avoided.
public static <T> void removeDuplicates(ArrayList<T> list) {
    int size = list.size();
    int out = 0;
    {
        final Set<T> encountered = new HashSet<T>();
        for (int in = 0; in < size; in++) {
            final T t = list.get(in);
            final boolean first = encountered.add(t);
            if (first) {
                list.set(out++, t);
            }
        }
    }
    while (out < size) {
        list.remove(--size);
    }
}
While we're at it, here's a version for LinkedList (a lot nicer!):
public static <T> void removeDuplicates(LinkedList<T> list) {
final Set<T> encountered = new HashSet<T>();
for (Iterator<T> iter = list.iterator(); iter.hasNext(); ) {
final T t = iter.next();
final boolean first = encountered.add(t);
if (!first) {
iter.remove();
}
}
}
Use the marker interface to present a unified solution for List:
public static <T> void removeDuplicates(List<T> list) {
if (list instanceof RandomAccess) {
// use first version here
} else {
// use other version here
}
}
EDIT: I guess the generics-stuff doesn't really add any value here.. Oh well. :)
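For completeness, one possible way to fill in those placeholders (a sketch, not the author's exact code) is to inline both strategies against the List interface:
public static <T> void removeDuplicates(List<T> list) {
    final Set<T> encountered = new HashSet<>();
    if (list instanceof RandomAccess) {
        // index-based compaction, as in the ArrayList version above
        int out = 0;
        for (int in = 0; in < list.size(); in++) {
            T t = list.get(in);
            if (encountered.add(t)) {
                list.set(out++, t); // keep the first occurrence, compact in place
            }
        }
        while (list.size() > out) {
            list.remove(list.size() - 1); // trim the stale tail
        }
    } else {
        // iterator removal, as in the LinkedList version above
        for (Iterator<T> iter = list.iterator(); iter.hasNext(); ) {
            if (!encountered.add(iter.next())) {
                iter.remove();
            }
        }
    }
}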
public static void main(String[] args) {
    ArrayList<Object> al = new ArrayList<Object>();
    al.add("abc");
    al.add('a');
    al.add('b');
    al.add('a');
    al.add("abc");
    al.add(10.3);
    al.add('c');
    al.add(10);
    al.add("abc");
    al.add(10);
    System.out.println("Before Duplicate Remove:" + al);
    for (int i = 0; i < al.size(); i++) {
        for (int j = i + 1; j < al.size(); j++) {
            if (al.get(i).equals(al.get(j))) {
                al.remove(j);
                j--;
            }
        }
    }
    System.out.println("After Removing duplicate:" + al);
}
If you're willing to use a third-party library, you can use the method distinct() in Eclipse Collections (formerly GS Collections).
ListIterable<Integer> integers = FastList.newListWith(1, 3, 1, 2, 2, 1);
Assert.assertEquals(
FastList.newListWith(1, 3, 2),
integers.distinct());
The advantage of using distinct() instead of converting to a Set and then back to a List is that distinct() preserves the order of the original List, retaining the first occurrence of each element. It's implemented by using both a Set and a List.
MutableSet<T> seenSoFar = UnifiedSet.newSet();
int size = list.size();
for (int i = 0; i < size; i++)
{
T item = list.get(i);
if (seenSoFar.add(item))
{
targetCollection.add(item);
}
}
return targetCollection;
If you cannot convert your original List into an Eclipse Collections type, you can use ListAdapter to get the same API.
MutableList<Integer> distinct = ListAdapter.adapt(integers).distinct();
Note: I am a committer for Eclipse Collections.
If you are using a model type List<T>/ArrayList<T>, I hope this helps you.
Here is my code, without using any other data structure like a Set or a HashMap:
for (int i = 0; i < Models.size(); i++) {
    for (int j = i + 1; j < Models.size(); j++) {
        if (Models.get(i).getName().equals(Models.get(j).getName())) {
            Models.remove(j);
            j--;
        }
    }
}
If you want to preserve your order, it is best to use a LinkedHashSet, because if you pass this List to an insert query by iterating over it, the order will be preserved.
Try this
// listOfValues is the List that may contain duplicates
LinkedHashSet<String> link = new LinkedHashSet<>(listOfValues);
List<String> listWithoutDuplicates = new ArrayList<>(link);
This conversion will be very helpful when you want to return a List and not a Set.
These three lines of code can remove the duplicated elements from an ArrayList or any collection:
List<Entity> entities = repository.findByUserId(userId);
Set<Entity> s = new LinkedHashSet<Entity>(entities);
entities.clear();
entities.addAll(s);
for (int a = 0; a < myArray.size(); a++) {
    for (int b = a + 1; b < myArray.size(); b++) {
        if (myArray.get(a).equalsIgnoreCase(myArray.get(b))) {
            myArray.remove(b);
            dups++;
            b--;
        }
    }
}
When you are filling the ArrayList, use a condition for each element. For example:
ArrayList<Integer> al = new ArrayList<Integer>();
// fill 1
for ( int i = 0; i <= 5; i++ )
if ( !al.contains( i ) )
al.add( i );
// fill 2
for (int i = 0; i <= 10; i++ )
if ( !al.contains( i ) )
al.add( i );
for( Integer i: al )
{
System.out.print( i + " ");
}
We will get the list {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Code:
List<String> duplicatList = new ArrayList<String>();
duplicatList = Arrays.asList("AA","BB","CC","DD","DD","EE","AA","FF");
//above AA and DD are duplicate
Set<String> uniqueList = new HashSet<String>(duplicatList);
duplicatList = new ArrayList<String>(uniqueList); // let the GC free the old backing list
System.out.println("Removed Duplicate : "+duplicatList);
Note: Definitely, there will be memory overhead.
ArrayList<String> city=new ArrayList<String>();
city.add("rajkot");
city.add("gondal");
city.add("rajkot");
city.add("gova");
city.add("baroda");
city.add("morbi");
city.add("gova");
HashSet<String> hashSet = new HashSet<String>();
hashSet.addAll(city);
city.clear();
city.addAll(hashSet);
Toast.makeText(getActivity(),"" + city.toString(),Toast.LENGTH_SHORT).show();
You can use a nested loop as follows:
ArrayList<Class1> l1 = new ArrayList<Class1>();
ArrayList<Class1> l2 = new ArrayList<Class1>();

Iterator<Class1> iterator1 = l1.iterator();
while (iterator1.hasNext()) {
    Class1 c1 = iterator1.next();
    boolean repeated = false; // reset for every element of l1
    for (Class1 _c : l2) {
        if (_c.getId() == c1.getId()) {
            repeated = true;
        }
    }
    if (!repeated) {
        l2.add(c1);
    }
}
LinkedHashSet will do the trick.
String[] arr2 = {"5","1","2","3","3","4","1","2"};
Set<String> set = new LinkedHashSet<String>(Arrays.asList(arr2));
for(String s1 : set)
System.out.println(s1);
System.out.println( "------------------------" );
String[] arr3 = set.toArray(new String[0]);
for(int i = 0; i < arr3.length; i++)
System.out.println(arr3[i].toString());
//output: 5,1,2,3,4
List<String> result = new ArrayList<String>();
Set<String> set = new LinkedHashSet<String>();
String s = "ravi is a good!boy. But ravi is very nasty fellow.";
StringTokenizer st = new StringTokenizer(s, " ,. ,!");
while (st.hasMoreTokens()) {
result.add(st.nextToken());
}
System.out.println(result);
set.addAll(result);
result.clear();
result.addAll(set);
System.out.println(result);
output:
[ravi, is, a, good, boy, But, ravi, is, very, nasty, fellow]
[ravi, is, a, good, boy, But, very, nasty, fellow]
This can be used for a list of your custom objects:
public List<Contact> removeDuplicates(List<Contact> list) {
    // Set set1 = new LinkedHashSet(list);
    // Note: this comparator is not a full ordering; it only treats equal ids as duplicates
    Set<Contact> set = new TreeSet<>(new Comparator<Contact>() {
        @Override
        public int compare(Contact o1, Contact o2) {
            if (o1.getId().equalsIgnoreCase(o2.getId()) /*&&
                o1.getName().equalsIgnoreCase(o2.getName())*/) {
                return 0;
            }
            return 1;
        }
    });
    set.addAll(list);
    final List<Contact> newList = new ArrayList<>(set);
    return newList;
}
As said before, you should use a class implementing the Set interface instead of List to be sure of the uniqueness of elements. If the elements need to be kept in sorted order, the SortedSet interface can be used; the TreeSet class implements that interface.
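For example, a TreeSet removes duplicates and keeps the elements in their natural (sorted) order:
// list is some List of Strings; duplicates are dropped and iteration follows sorted order
Set<String> sortedUnique = new TreeSet<>(list);
List<String> result = new ArrayList<>(sortedUnique);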
import java.util.*;
class RemoveDupFrmString
{
public static void main(String[] args)
{
String s="appsc";
Set<Character> unique = new LinkedHashSet<Character> ();
for(char c : s.toCharArray()) {
System.out.println(unique.add(c));
}
for(char dis:unique){
System.out.println(dis);
}
}
}
public Set<Object> findDuplicates(List<Object> list) {
Set<Object> items = new HashSet<Object>();
Set<Object> duplicates = new HashSet<Object>();
for (Object item : list) {
if (items.contains(item)) {
duplicates.add(item);
} else {
items.add(item);
}
}
return duplicates;
}
ArrayList<String> list = new ArrayList<String>();
HashSet<String> unique = new LinkedHashSet<String>();
HashSet<String> dup = new LinkedHashSet<String>();
boolean b = false;
list.add("Hello");
list.add("Hello");
list.add("how");
list.add("are");
list.add("u");
list.add("u");
for(Iterator iterator= list.iterator();iterator.hasNext();)
{
String value = (String)iterator.next();
System.out.println(value);
if(b==unique.add(value))
dup.add(value);
else
unique.add(value);
}
System.out.println(unique);
System.out.println(dup);
If you want to remove duplicates from an ArrayList, see the logic below:
public static Object[] removeDuplicate(Object[] inputArray)
{
long startTime = System.nanoTime();
int totalSize = inputArray.length;
Object[] resultArray = new Object[totalSize];
int newSize = 0;
for(int i=0; i<totalSize; i++)
{
Object value = inputArray[i];
if(value == null)
{
continue;
}
for(int j=i+1; j<totalSize; j++)
{
if(value.equals(inputArray[j]))
{
inputArray[j] = null;
}
}
resultArray[newSize++] = value;
}
long endTime = System.nanoTime()-startTime;
System.out.println("Total Time-B:"+endTime);
return resultArray;
}
I need to check if all Strings from one ArrayList are present in another ArrayList. I could use containsAll, but this is not what I want to achieve. Let me show you an example:
assertThat(firstArray).containsAll(secondArray);
This code will check if all items from one array are in another one. But I need to check whether every single item from one array is contained anywhere in the second array.
List<String> firstArray = new ArrayList<>();
List<String> secondArray = new ArrayList<>();
firstArray.add("Bari 1908");
firstArray.add("Sheffield United");
firstArray.add("Crystal Palace");
secondArray.add("Bari");
secondArray.add("Sheffield U");
secondArray.add("C Palace");
So I want to check whether the first item from secondArray is contained in firstArray (true), then the second (true) and the third (false). I wrote code which does this job, but it's quite complicated, and I would like to know if there is a simpler way to achieve this goal (maybe using Hamcrest matchers or something like that).
ArrayList<String> notMatchedTeam = new ArrayList<>();
for (int i = 0; i < secondArray.size(); i++) {
    String team = secondArray.get(i);
    boolean teamMatched = false;
    for (int j = 0; j < firstArray.size(); j++) {
        teamMatched = firstArray.get(j).contains(team);
        if (teamMatched) {
            break;
        }
    }
    if (!teamMatched) {
        notMatchedTeam.add(team);
    }
}
You can do something like this
List<String> firstArray = new ArrayList<>();
List<String> secondArray = new ArrayList<>();
firstArray.add("Bari 1908");
firstArray.add("Sheffield United");
firstArray.add("Crystal Palace");
secondArray.add("Bari");
secondArray.add("Sheffield U");
secondArray.add("C Palace");
Set<String> firstSet= firstArray
.stream()
.collect(Collectors.toSet());
long count= secondArray.stream().filter(x->firstSet.contains(x)).count();
// Alternatively, map each string in the second list to whether it appears in the first:
Map<String, Boolean> result =
        secondArray.stream().collect(Collectors.toMap(s -> s, firstSet::contains));
If count is less than secondArray.size(), some items in the second list are not present in the first.
result maps each string to whether it was found.
Thanks
If you have space concerns like you have millions of words in one file and need to check entry of second file in first then use trie. From first make trie and check every entry of second in first.
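For illustration, a minimal trie sketch (class and method names are assumptions, not part of the original answer) that supports inserting words from the first file and checking words from the second might look like this:
import java.util.HashMap;
import java.util.Map;

class Trie {
    private static final class Node {
        final Map<Character, Node> children = new HashMap<>();
        boolean isWord;
    }

    private final Node root = new Node();

    // Insert a word from the first file.
    void insert(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Node());
        }
        node.isWord = true;
    }

    // Check whether a word from the second file was previously inserted.
    boolean contains(String word) {
        Node node = root;
        for (char c : word.toCharArray()) {
            node = node.children.get(c);
            if (node == null) {
                return false;
            }
        }
        return node.isWord;
    }
}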
Situation:
In your question you said that you wanted to return, for each element, whether it exists or not, while in your actual code you are only collecting the elements that were not matched.
Solution:
You need to return a list of Boolean results instead, this is the code you need:
public static List<Boolean> whichElementsFound(List<String> firstList, List<String> secondList){
ArrayList<Boolean> resultList = new ArrayList<>();
for (int i = 0; i < secondList.size(); i++) {
String team = secondList.get(i);
resultList.add(firstList.contains(team));
}
return resultList;
}
Demo:
This is a working demo using this method, returning a List<Boolean> that reflects which elements from the second list are found in the first.
Edit:
If you want to return the list of elements that were not found, use the following code:
public static List<String> whichElementsAreNotFound(List<String> firstList, List<String> secondList){
ArrayList<String> resultList = new ArrayList<>();
for (int i = 0; i < secondList.size(); i++) {
String team = secondList.get(i);
if(!firstList.contains(team)){
resultList.add(team);
}
}
return resultList;
}
Here is the updated demo.
I am comparing three arrays of Strings using the two classes below. Without using any hash maps or changing the structure of my code too much (I can't change the signature of findMatchingElements()), is there a way to minimize the number of comparisons that my method makes, in order to construct the new array of shared elements?
In TestRun.java I tested my code on three arrays with 8 elements each, which resulted in 46 comparisons made. I want to achieve a lower number of comparisons. Is there a way?
I tried using the remove() method to remove a string from the collection once it was successfully compared to a matching element from the query collection. That prevented some redundant comparisons, but it did not result in a significant reduction.
import java.util.*;
public class CommonElements {
int originalCollectionCount = 0;
Object[] originalCollections;
int listCount = 1;
int matchCount;
int comparisonCount = 0;
public Comparable[] findMatchingItems(Object[] collections)
{
String[] queryArray = (String[])collections[0];
String[] secondaryArray = (String[])collections[1];
ArrayList<String> queryList = new ArrayList(Arrays.asList(queryArray));
ArrayList<String> secondaryList = new ArrayList(Arrays.asList(secondaryArray));
ArrayList<String> commonList = new ArrayList();
int i = 0;
if(listCount == 1){
originalCollectionCount = collections.length;
originalCollections = collections;
}
listCount ++;
for(String x:queryList)
{
for(String y:secondaryList)
{
comparisonCount++;
if(x.compareTo(y) == 0)
{
commonList.add(x); //add mutually shared item to commonList
secondaryList.remove(y); //remove mutually shared item from consideration
if(originalCollectionCount == listCount) //if every list has been examined
{
System.out.println(commonList.get(i));
}
i++;
break;
}
}
}
String[] commonListResult = new String[commonList.size()];
commonList.toArray(commonListResult);
if(originalCollectionCount > listCount){
findMatchingItems(new Object[] {commonListResult,originalCollections[listCount]});}
if (collections.length == 0) {
return new Comparable[0];
} else if (collections.length == 1) {
return (Comparable[]) collections[0];
}
return commonListResult;
}
public int getComparisons(){
return comparisonCount;}
}
public class TestRun {
private final static String[] COLLECTION_5_1 = {"Pittsburgh", "New York", "Chicago", "Cleveland", "Miami", "Dallas", "Atlanta", "Detroit"};
private final static String[] COLLECTION_5_2 = {"Dallas", "Atlanta", "Cleveland", "Chicago", "Washington", "Houston", "Baltimore", "Denver"};
private final static String[] COLLECTION_5_3 = {"Chicago", "Kansas City", "Cleveland", "Jacksonville", "Atlanta", "Tampa Bay", "Dallas", "Seattle"};
public static void main(String[] args) {
new TestRun();
}
public TestRun() {
CommonElements commonElements = new CommonElements();
Object[] input = new Object[3];
input[0] = COLLECTION_5_1;
input[1] = COLLECTION_5_2;
input[2] = COLLECTION_5_3;
System.out.println("Matching items:");
commonElements.findMatchingItems(input);
System.out.println(commonElements.comparisonCount + " comparisons made.");
}
}
You could run a single enhanced for loop as below, provided arrayStr2 is a collection such as a List (wrap an array with Arrays.asList); if the lengths differ, adjust accordingly.
for (String str : arrayStr1) {
    if (arrayStr2.contains(str)) {
        newArray.add(str);
    }
}
List<String> list_5_1 = new ArrayList<>(Arrays.asList(COLLECTION_5_1));
//[Pittsburgh, New York, Chicago, Cleveland, Miami, Dallas, Atlanta, Detroit]
List<String> list_5_2 = new ArrayList<>(Arrays.asList(COLLECTION_5_2));
//[Dallas, Atlanta, Cleveland, Chicago, Washington, Houston, Baltimore, Denver]
list_5_1.retainAll(list_5_2);
//[Chicago, Cleveland, Dallas, Atlanta]
We wrap the list returned from Arrays.asList in a new ArrayList, because Arrays.asList returns a fixed-size list and retainAll needs to remove elements from it.
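The same call can then be repeated against the third collection from the question, for example:
// Intersect the running result with COLLECTION_5_3 as well
list_5_1.retainAll(Arrays.asList(COLLECTION_5_3));
// [Chicago, Cleveland, Dallas, Atlanta]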
I am comparing three arrays of Strings using the two classes below. Without using any hash maps or changing the structure of my code too much (I can't change the signature of findMatchingElements()), is there a way to minimize the number of comparisons that my method makes, in order to construct the new array of shared elements?
Sure. Your nested loops have a complexity of O(m*n). When you create a temporary HashMap, you can reduce it to O(m+n) and gain a lot for big inputs. From a practical point of view, somewhere around a length of 10 it should get faster than your solution.
I'm giving no code as it's too straightforward.
I have text files within a directory. What I need to do is:
---for each word in all files
---find the positional indexes of each word within a file
---find each file in which the word appears
In order to do this:
HashMap<String, HashMap<Integer, ArrayList<Integer>>>
I want to use a structure like the one above.
String word;
String pattern = "[[^\\w\\süÜıİöÖşŞğĞçÇ]\\d]+";
while ((word = infile.readLine()) != null) {
    String[] wordList = word.replaceAll(pattern, " ").split("\\s+");
    for (int j = 0; j < wordList.length; j++) {
        if (!wordList[j].isEmpty()) {
            if (!refinedDict.containsKey(wordList[j])) {
                refinedDict.put(wordList[j], 1);
            } else {
                refinedDict.put(wordList[j], refinedDict.get(wordList[j]) + 1);
            }
        } else {
            // do something
        }
    } // end for
} // end while
Set<String> keys = refinedDict.keySet();
List<String> list = sortList(keys);
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String key = it.next();
    outfile.write(key + "\t" + refinedDict.get(key) + "\n");
}
How can I use the ArrayList inside the inner HashMap within the outer HashMap?
EDIT
After applying toto2's solution, the implementation works. However, what can be done in order to write it to a file as ---> word[fileId{positions}, fileId{positions}...]?
Implementing Serializable is not useful for such a design.
I define two new classes FileId and PositionInFile instead of Integers for clarity.
Map<String, Map<FileId, List<PositionInFile>>> wordsWithLocations;
for (int j = 0; j < wordList.length; j++) {
    if (!wordList[j].isEmpty()) {
        if (!wordsWithLocations.containsKey(wordList[j])) {
            Map<FileId, List<PositionInFile>> map = new HashMap<>();
            List<PositionInFile> list = new ArrayList<>();
            list.add(wordPosition[j]);
            map.put(fileId, list);
            wordsWithLocations.put(wordList[j], map);
        } else {
            Map<FileId, List<PositionInFile>> map =
                    wordsWithLocations.get(wordList[j]);
            if (map.containsKey(fileId)) {
                map.get(fileId).add(wordPosition[j]);
            } else {
                List<PositionInFile> list = new ArrayList<>();
                list.add(wordPosition[j]);
                map.put(fileId, list);
            }
        }
    }
}
...
for (String word : wordsWithLocations.keySet()) {
    int nAppearances = 0;
    for (List<PositionInFile> positions :
            wordsWithLocations.get(word).values()) {
        nAppearances += positions.size();
    }
    System.out.println(word + " appears " + nAppearances + " times.");
}
However I think it would be simpler and cleaner to define:
public class WordLocation {
FileId fileId;
PositionInFile position;
...
}
and then just have a Map<String, List<WordLocation>>. The downside is that you don't have such an explicit mapping to the files. The information is still there however, and the List<WordLocation> should even have the locations listed in the same order as the files were processed.
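A brief sketch of filling that simpler structure (assuming WordLocation gets a (fileId, position) constructor, which the snippet above leaves out):
Map<String, List<WordLocation>> wordLocations = new HashMap<>();
// for each non-empty token wordList[j] read from the current file:
wordLocations
        .computeIfAbsent(wordList[j], k -> new ArrayList<>())
        .add(new WordLocation(fileId, wordPosition[j])); // assumed constructor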
Not sure exactly, but here's a general pattern I use for a Map whose value is a Collection type:
Map<String, Collection<SomeType>> map = new HashMap<>();
// ... while doing some job, for each (foundKey, foundValue) pair:
if (map.containsKey(foundKey)) {
    map.get(foundKey).add(foundValue);
} else {
    Collection<SomeType> collection = new ArrayList<>();
    collection.add(foundValue);
    map.put(foundKey, collection);
}
You can also check Google Guava multi-maps.
Hope that helps...
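For reference, a small sketch of the Guava option (ArrayListMultimap handles the create-if-absent step for you):
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;

// Several positions can be stored per word without managing the inner lists manually
Multimap<String, Integer> positionsByWord = ArrayListMultimap.create();
positionsByWord.put("word", 0);
positionsByWord.put("word", 7);
System.out.println(positionsByWord.get("word")); // [0, 7]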
A nested Map would work. However, I would create a class for that, i.e.
class WordsInFile {
    String fileName;
    Map<String, List<Integer>> wordIdxMap;
}
This actually makes no big difference compared with nesting maps, but it is more readable, and you can add methods like findWord(...) to avoid getting lost invoking the maps' get(Object) calls twice. It lets you know what you are about to get.
I don't know if it is a good idea...
Assuming you have your HashMap defined as above and add an entry like this:
HashMap<String, HashMap<Integer, ArrayList<Integer>>> outer = ...
HashMap<Integer, ArrayList<Integer>> inner = ...
inner.put(1, new ArrayList<Integer>());
outer.put("key1", inner);
you can retrieve the ArrayList as:
ArrayList<Integer> arr = outer.get("key1").get(1);