java.lang.UnsupportedOperationException: This operation is not supported on Query Results
at org.datanucleus.store.query.AbstractQueryResult.contains(AbstractQueryResult.java:250)
at java.util.AbstractCollection.retainAll(AbstractCollection.java:369)
at namespace.MyServlet.doGet(MyServlet.java:101)
I'm attempting to take one list I retrieved from a datastore query and keep only the results that are also in a list I retrieved from a list of keys. Both my lists are populated as expected, but I can't seem to use retainAll on either one of them.
List<Data> listOne = new ArrayList(query.execute(theQuery));
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
List<Data> listTwo = new ArrayList(ds.get(keys).values());
listOne.retainAll(listTwo);
EDIT
OK, in an attempt to simplify, since this is apparently multiple problems in one, I have stopped using the low-level datastore API and instead am just pulling the objects one by one with a loop.
List<MyClass> test = (List<MyClass>) query.execute();
List<MyClass> test2 = new ArrayList<MyClass>();
for (String key : favorites) {
    test2.add(pm.getObjectById(MyClass.class, key));
}
log.info(test.toString());
test.retainAll(test2);
The above works. It doesn't throw the exception. The below does throw the exception. The only difference is the log.info. I'm stumped.
List<MyClass> test = (List<MyClass>) query.execute();
List<MyClass> test2 = new ArrayList<MyClass>();
for (String key : favorites) {
    test2.add(pm.getObjectById(MyClass.class, key));
}
test.retainAll(test2);
It will not let me do new ArrayList() on the query result since it returns an array of objects.
You do, however, need to put them in a new ArrayList. The returned List implementation apparently doesn't support retainAll(); that's what the exception is telling you.
A "plain" ArrayList supports it. If passing the result through the ArrayList constructor is not possible due to a difference in generic type, then you'll need to loop over it manually and cast each item before adding.
List<Data> listTwo = new ArrayList<Data>();
for (Object object : ds.get(keys).values()) {
    listTwo.add((Data) object);
}
listOne.retainAll(listTwo);
Update: as per your update, the entities are apparently lazily loaded/filled. Most ORMs (DataNucleus is one) may indeed do that. As I don't use DataNucleus, I can't go into detail on how to fix that in a "nice" way, but you at least now know the root cause of the problem, and it can be solved the same way as above: fill the list test in a loop as well.
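A minimal sketch of that workaround, reusing the names from your edit (the cast to Collection is only there because JDO's query.execute() is declared to return Object; this assumes it actually returns a collection of MyClass instances):
List<MyClass> test = new ArrayList<MyClass>();
for (Object o : (Collection<?>) query.execute()) {
    test.add((MyClass) o); // copy each lazily-loaded result into a plain ArrayList
}
// test is now a regular ArrayList, so retainAll() is supported
test.retainAll(test2);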
If the type of collection you use for your "list of keys" does not support retainAll that exception will be thrown. Which type are you using?
TIP: you don't need to iterate to fill listTwo. Just do:
listTwo.addAll(ds.get(keys).values());
Related
I have two lists, each containing a large number of objects (N elements each):
List<Foo> objectsFromDB = {{MailId=100, Status=""}, {MailId=200, Status=""}, {MailId=300, Status=""} ... {MailId=N, Status=N}}
List<Foo> feedBackStatusFromCsvFiles = {{MailId=100, Status="OPENED"}, {MailId=200, Status="CLICKED"}, {MailId=300, Status="HARDBOUNCED"} ... {MailId=N, Status=N}}
Little Insights:
objectsFromDB retrieves rows of my database by calling a Hibernate method.
feedBackStatusFromCsvFiles calls a CSV parser method and unmarshals the rows to Java objects.
My entity class Foo has all setters and getters. So I know that the basic idea is to use a foreach like this:
for (Foo fooDB : objectsFromDB) {
    for (Foo fooStatus : feedBackStatusFromCsvFiles) {
        if (fooDB.getMailId().equals(fooStatus.getMailId())) {
            fooDB.setStatus(fooStatus.getStatus());
        }
    }
}
As far as my modest junior-developer knowledge goes, I think it is very bad practice to do it like this. Should I implement a Comparator and use it for iterating over my list of objects? Should I also check for null cases?
Thanks to all of you for your answers!
Assuming Java 8, and considering that feedbackStatus may contain more than one element with the same ID:
Transform the list into a Map using ID as key and having a list of elements.
Iterate the list and use the Map to find all messages.
The code would be:
final Map<String, List<Foo>> listMap =
        objectsFromDB.stream().collect(
                Collectors.groupingBy(item -> item.getMailId())
        );
for (final Foo feedBackStatus : feedBackStatusFromCsvFiles) {
    listMap.getOrDefault(feedBackStatus.getMailId(), Collections.emptyList())
           .forEach(item -> item.setStatus(feedBackStatus.getStatus()));
}
Use maps from collections to avoid the nested loops.
List<Foo> aList = new ArrayList<>();
List<Foo> bList = new ArrayList<>();
for (int i = 0; i < 5; i++) {
    Foo foo = new Foo();
    foo.setId((long) i);
    foo.setValue("FooA" + String.valueOf(i));
    aList.add(foo);
    foo = new Foo();
    foo.setId((long) i);
    foo.setValue("FooB" + String.valueOf(i));
    bList.add(foo);
}
final Map<Long, Foo> bMap = bList.stream().collect(Collectors.toMap(Foo::getId, Function.identity()));
aList.stream().forEach(it -> {
    Foo bFoo = bMap.get(it.getId());
    if (bFoo != null) {
        it.setValue(bFoo.getValue());
    }
});
The only other solution would be to have the DTO layer return a map of MailId -> Foo objects, as you could then stream the CSV list and simply look up the DB Foo object. Otherwise, the expense of sorting or iterating over both of the lists is not worth the trade-off in performance. That statement holds true until it definitively causes a memory constraint on the platform; until then, let the garbage collector do its job, and do yours as simply as possible.
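A minimal sketch of that lookup-based approach might look like this, where fooDao.findAllByMailIdAsMap() is a hypothetical DAO/DTO-layer method standing in for "have the DTO layer return a map", and the MailId key type is assumed to be Long:
Map<Long, Foo> dbByMailId = fooDao.findAllByMailIdAsMap(); // hypothetical DAO call returning MailId -> Foo
feedBackStatusFromCsvFiles.stream()
        .filter(csv -> dbByMailId.containsKey(csv.getMailId()))
        .forEach(csv -> dbByMailId.get(csv.getMailId()).setStatus(csv.getStatus())); // copy the status onto the DB object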
Given that your lists may contain tens of thousands of elements, you should be concerned that your simple nested-loop approach will be too slow. It will certainly perform a lot more comparisons than it needs to.
If memory is comparatively abundant, then the fastest suitable approach would probably be to form a Map from mailId to (list of) corresponding Foo from one of your lists, somewhat as @MichaelH suggested, and to use that to match mailIds. If mailId values are not certain to be unique in one or both lists, however, then you'll need something a bit different from Michael's specific approach. Even if mailIds are sure to be unique within both lists, it will be a bit more efficient to form only one map.
For the most general case, you might do something like this:
// The initial capacity is set (more than) large enough to avoid any rehashing
Map<Long, List<Foo>> dbMap = new HashMap<>(3 * objectsFromDB.size() / 2);

// Populate the map.
// This could be done more efficiently if the objects were ordered by mailId,
// which perhaps the DB could be enlisted to ensure.
for (Foo foo : objectsFromDB) {
    Long mailId = foo.getMailId();
    List<Foo> foos = dbMap.get(mailId);
    if (foos == null) {
        foos = new ArrayList<>();
        dbMap.put(mailId, foos);
    }
    foos.add(foo);
}

// Use the map
for (Foo fooStatus : feedBackStatusFromCsvFiles) {
    List<Foo> dbFoos = dbMap.get(fooStatus.getMailId());
    if (dbFoos != null) {
        String status = fooStatus.getStatus();
        // Iterate over only the Foos that we already know have matching Ids
        for (Foo fooDB : dbFoos) {
            fooDB.setStatus(status);
        }
    }
}
On the other hand, if you are space-constrained, so that creating the map is not viable, yet it is acceptable to reorder your two lists, then you should still get a performance improvement by sorting both lists first. Presumably you would use Collections.sort() with an appropriate Comparator for this purpose. Then you would obtain an Iterator over each list, and use them to iterate cooperatively over the two lists. I present no code, but it would be reminiscent of the merge step of a merge sort (but the two lists are not actually merged; you only copy status information from one to the other). But this makes sense only if the mailIds from feedBackStatusFromCsvFiles are all distinct, for otherwise the expected result of the whole task is not well determined.
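Although that answer presents no code, a rough sketch of the merge-style pass it describes might look like this (assuming getMailId() returns a Comparable type such as Long, and that mailIds are unique in both lists):
Comparator<Foo> byMailId = Comparator.comparing(Foo::getMailId);
Collections.sort(objectsFromDB, byMailId);
Collections.sort(feedBackStatusFromCsvFiles, byMailId);

Iterator<Foo> dbIt = objectsFromDB.iterator();
Iterator<Foo> csvIt = feedBackStatusFromCsvFiles.iterator();
Foo db = dbIt.hasNext() ? dbIt.next() : null;
Foo csv = csvIt.hasNext() ? csvIt.next() : null;
while (db != null && csv != null) {
    int cmp = byMailId.compare(db, csv);
    if (cmp == 0) {
        db.setStatus(csv.getStatus());               // matching ids: copy the status over
        db = dbIt.hasNext() ? dbIt.next() : null;
        csv = csvIt.hasNext() ? csvIt.next() : null;
    } else if (cmp < 0) {
        db = dbIt.hasNext() ? dbIt.next() : null;    // this DB entry has no CSV match
    } else {
        csv = csvIt.hasNext() ? csvIt.next() : null; // this CSV entry has no DB match
    }
}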
Your problem is merging each Foo's last status into the database objects, so you can do it in two steps, which makes it clearer and more readable:
filter the Foos that need to be merged;
merge the Foos with the last status.
// because the status is always the last one, you needn't use groupingBy to create a complex Map
Map<String, String> lastStatus = feedBackStatusFromCsvFiles.stream()
        .collect(toMap(Foo::getMailId, Foo::getStatus,
                (previous, current) -> current));
// find the Foos in the database that need to be merged
Predicate<Foo> fooThatNeedMerge = it -> lastStatus.containsKey(it.getMailId());
// merge each Foo's last status from the CSV
Consumer<Foo> mergingFoo = it -> it.setStatus(lastStatus.get(it.getMailId()));
objectsFromDB.stream().filter(fooThatNeedMerge).forEach(mergingFoo);
The title of the question may give you the impression that it is a duplicate, but in my opinion it is not.
I am just a few months old in Java and a month old in MongoDB, SpringBoot and REST.
I have a Mongo collection with 3 fields in a document: _id (the default field), appName and appKey. I am using a list to iterate through all the documents and find the one whose appName and appKey match the ones that are passed in. This collection right now has only 4 entries, and thus it is running smoothly. But I was reading a bit about collections and found that if there is a higher number of documents in a collection, the lookup with a list will be much slower than with a HashMap.
But as I have already said, I am quite new to Java and am having a bit of trouble converting my code to use a HashMap, so I was hoping someone could guide me through this.
I am also attaching my code for reference.
public List<Document> fetchData() {
    // Collection that stores appName and appKey
    MongoCollection<Document> collection = db.getCollection("info");
    List<Document> nameAndKeyList = new ArrayList<Document>();
    // Getting the list of appName and appKey from info DB
    AggregateIterable<Document> output = collection
            .aggregate(Arrays.asList(new BasicDBObject("$group", new BasicDBObject("_id",
                    new BasicDBObject("_id", "$id").append("appName", "$appName").append("appKey", "$appKey"))
            )));
    for (Document doc : output) {
        nameAndKeyList.add((Document) doc.get("_id"));
    }
    return nameAndKeyList;
} // End of method
And then I am calling it in another method of the same class:
List<Document> nameAndKeyList = new ArrayList<>();
//InfoController is the name of the class
InfoController obj1 = new InfoController();
nameAndKeyList = obj1.fetchData();
// Fetching and checking if the appName & appKey pair
// is present in the DB one by one.
// If appName & appKey mismatches, it increments the value
// of 'i' and check them with the other values in DB
for (int i = 0; i < nameAndKeyList.size(); i++) {
"followed by my code"
And if I am not wrong, then there will be no need for the above loop either.
Thanks in advance.
You just need a simple find query to get the record you need directly from Mongo DB.
Document document = collection
        .find(new Document("appName", someappname).append("appKey", someappkey)).first();
First of all, a list is not much slower or faster than a HashMap. A HashMap is commonly used to store key-value pairs such as "ID", "Name" or something like that. In your case I see you are using an ArrayList without a specified size for the list; better to use a LinkedList when you do not know the size, because an ArrayList holds an array behind the scenes and extends it by copying. If you want to generate a HashMap out of the list, or use a HashMap, you need to map an ID and the value to the records.
HashMap<String /* type of the identifier */, String /* type of the value */> map = new HashMap<String, String>();
for (Document doc : output) {
    map.put(doc.getString("_id"), doc.getString("_value"));
}
First, avoid premature optimization (look up the expression if you don't know what it is). Put a realistic number of thousands of items containing near-realistic data in your list. Try to retrieve an item that isn't there. This will force your for loop to traverse the entire list. See how long it takes. Try a number of times to get an impression of whether you get impatient. If you don't, you're done.
If you find out that you need a speed-up, I agree that HashMap is one of the obvious solutions to try. One of the first things to consider with this is a key type for your HashMap. As I understand it, what you need to search for is an item where appName and appKey are both right. A good solution is to write a simple class with these two fields and equals and hashCode methods (I'll call it DocumentHashMapKey for now; think of a better name). For hashCode(), try Objects.hash(appName, appKey). If it doesn't give satisfactory performance with the data you have, consider alternatives. Now you are ready to build your HashMap<DocumentHashMapKey, Document>.
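A minimal sketch of such a key class (the name and fields are just the placeholders from the paragraph above) could look like this:
final class DocumentHashMapKey {
    private final String appName;
    private final String appKey;

    DocumentHashMapKey(String appName, String appKey) {
        this.appName = appName;
        this.appKey = appKey;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof DocumentHashMapKey)) return false;
        DocumentHashMapKey other = (DocumentHashMapKey) o;
        return java.util.Objects.equals(appName, other.appName)
                && java.util.Objects.equals(appKey, other.appKey);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(appName, appKey); // as suggested above
    }
}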
If you’re lazy or just want a first impression of how a HashMap performs, you may also build your keys by concatenating appName + "$##" + appKey (where the string in the middle is something that is unlikely to be part of a name or key) and use HashMap<String, Document>.
Everything I said can be refined depending on your needs. This was just to get you started.
Thanks everyone for your help, without which I would not have got to a solution.
public HashMap<String, String> fetchData() {
    // Collection that stores appName and appKey
    MongoCollection<Document> collection = db.getCollection("info");
    HashMap<String, String> appKeys = new HashMap<String, String>();
    // Getting the list of appName and appKey from info DB
    AggregateIterable<Document> output = collection
            .aggregate(Arrays.asList(new BasicDBObject("$group", new BasicDBObject("_id",
                    new BasicDBObject("_id", "$id").append("appName", "$appName").append("appKey", "$appKey"))
            )));
    String appName = null;
    String appKey = null;
    for (Document doc : output) {
        Document temp = (Document) doc.get("_id");
        appName = (String) temp.get("appName");
        appKey = (String) temp.get("appKey");
        appKeys.put(appName, appKey);
    }
    return appKeys;
} // End of method
Calling the above method in another method of the same class:
InfoController obj = new InfoController();
// Fetching the values of 'appName' & 'appKey' sent from 'info' DB
HashMap<String, String> appKeys = obj.fetchData();
storedAppkey = appKeys.get(appName);
// Handling the case of mismatch
if (storedAppkey == null || storedAppkey.compareTo(appKey) != 0)
{ // Then the response and further processing that I need to do.
Now what the HashMap has done is make my code more readable, and the 'for' loop that I was using for iterating is gone, although it might not make much difference in performance as of now.
Thanks once again to everyone for your help and support.
I'd like to imagine there's existing API functionality for this. Suppose there was Java code that looks something like this:
JavaRDD<Integer> queryKeys = ...; //values not particularly important
List<Document> allMatches = db.getCollection("someDB").find(queryKeys); //doesn't work, I'm aware
JavaPairRDD<Integer, Iterator<ObjectContainingKey>> dbQueryResults = ...;
Goal of this: After a bunch of data transformations, I end up with an RDD of integer keys that I'd like to make a single db query with (rather than a bunch of queries) based on this collection of keys.
From there, I'd like to turn the query results into a pair RDD of the key and all of its results in an iterator (making it easy to hit the ground going again for the next steps I'm intending to take). And to clarify, I mean a pair of the key and its results as an iterator.
I know there's functionality in MongoDB capable of coordinating with Spark, but I haven't found anything that'll work with this yet (it seems to lean towards writing to a database rather than querying it).
I managed to figure this out in an efficient enough manner.
JavaRDD<Integer> queryKeys = ...;
JavaRDD<BasicDBObject> queries = queryKeys.map(value -> new BasicDBObject("keyName", value));
BasicDBObject orQuery = SomeHelperClass.buildOrQuery(queries.collect());
List<Document> queryResults = db.getCollection("docs").find(orQuery).into(new ArrayList<>());
JavaRDD<Document> parallelResults = sparkContext.parallelize(queryResults);
JavaRDD<ObjectContainingKey> results = parallelResults.map(doc -> SomeHelperClass.fromJSONtoObj(doc));
JavaPairRDD<Integer, Iterable<ObjectContainingKey>> keyResults = results.groupBy(obj -> obj.getKey());
And the method buildOrQuery here:
public static BasicDBObject buildOrQuery(List<BasicDBObject> queries) {
    BasicDBList or = new BasicDBList();
    for (BasicDBObject query : queries) {
        or.add(query);
    }
    return new BasicDBObject("$or", or);
}
Note that there's a fromJSONtoObj method that will convert your object back from JSON into all of the required field variables. Also note that obj.getKey() is simply a getter method associated to whatever "key" it is.
I am getting a ConcurrentModificationException in the following code. I checked the API and it has to do with modifying an object while another thread is iterating over it. I am clueless on the matter. I have created a comment above the line causing the exception. The Employee class doesn't contain anything other than the three variables for storing information.
I will include the entire class, as I would also like to know whether there is a way to simplify my code, since it repeats many things such as object creation and adding everything to the lists.
When you call employeesByAge in here with dep.employees:
dep.employeesByAge(dep.employees)
that will pass in dep.employees to employeesByAge such that in:
public class Department{
LinkedList<Employee> employees = ...;
public LinkedList<Employee> employeesByAge(LinkedList<Employee> outputList) {
...
}
}
both the employees member field and the outputList parameter refer to the same list: not just two lists with the same content, but the same list instance.
Then you do:
for (Employee emp : employees) {
    // the list is empty: add the first employee
    if (outputList.isEmpty()) {
        outputList.add(emp);
    } else
        ...
}
which iterates over employees and modifies outputList, but remember that these two are the same list object. Thus, ConcurrentModificationException.
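One way around this, sticking with the skeleton shown above, is to build the result in a fresh list inside the method instead of passing the same list back in (the actual insertion-by-age logic is elided, as in the original):
public LinkedList<Employee> employeesByAge() {
    LinkedList<Employee> outputList = new LinkedList<Employee>();
    for (Employee emp : employees) {
        // ... insert emp into outputList at the position dictated by age ...
        outputList.add(emp);
    }
    return outputList; // a separate list, so iterating employees is safe
}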
What you're attempting to do is similar to this...
List list = ...;
for (Object item : list) {
    list.add(item);
}
That is, you're updating a collection with elements by iterating over the same collection. All the
outputList.add(...);
calls in Department are adding elements to the collection from that same collection, 'employees'.
In main(), by doing
dep.employeesByAge(dep.employees)
you're attempting to update 'dep.employees' with 'dep.employees', which results in a ConcurrentModificationException.
ArrayList<Persons> persList = new ArrayList<Persons>();
for (Persons p : persList) {
    Persons pers = new Persons();
    pers = service.getPersons(id);
    p.setAddress(pers.getAddress());
    persList.add(pers);
}
Is this the right way to add all found Persons to persList? Thank you in advance.
No, you shouldn't modify a list while you're iterating over it, other than via the Iterator.remove method. Aside from anything else, even if this code didn't throw an exception, it would go on forever unless persList was empty... there'd always be new people to iterate over!
You should basically create a new list collecting the items to add, and then use addAll at the end:
ArrayList<Persons> persList = new ArrayList<Persons>();
// Populate the list, presumably

List<Persons> extraPeople = new ArrayList<Persons>();
for (Persons p : persList) {
    // Note: there's no point in creating a new object only to ignore it...
    Persons pers = service.getPersons(id);
    p.setAddress(pers.getAddress());
    extraPeople.add(pers);
}
persList.addAll(extraPeople);
This code still doesn't make much sense in my view, as you're fetching via the same id value on every iteration... I can only hope this was an example rather than real code.
Also note that if each instance of your Persons class is meant to be a single person, it would be better to call it Person.