I have a MongoDB collection that needs to be cleaned before a certain process starts. I do this with the mongoTemplate.dropCollection() method because it is much faster than using the repository's deleteAll() method.
The problem arose when I introduced indexes. My model is annotated as follows:
@Document
public class TestModel {

    @Indexed
    private String testField;
}
and the repository:
public interface TestModelRepository extends MongoRepository<TestModel, String> {
}
This makes sure that the index is created at application start time.
I noticed that using the repository's deleteAll() method instead of dropping the collection preserves the index, but I was wondering if there is a way with spring-data to make sure the indexes are in place before I make an insert.
Any method to re-create the indexes from the annotated model after the drop would also be appreciated, something like:
mongoTemplate.createIndexes( TestModel.class );
How can I achieve this?
There is no method like
mongoTemplate.createIndexes( TestModel.class );
Instead, fetch the index info before dropping the collection, then recreate the indexes afterwards:
List<IndexInfo> indexInfo = mongoTemplate.indexOps(TestModel.class).getIndexInfo();
mongoTemplate.dropCollection(TestModel.class);

indexInfo.forEach(index -> {
    if (!index.getName().equals("_id_")) {
        // Rebuild the key document from the indexed fields; note that the index
        // *name* (e.g. "testField_1") is not the same thing as the indexed key.
        DBObject indexKeys = new BasicDBObject();
        index.getIndexFields().forEach(field ->
                indexKeys.put(field.getKey(), Sort.Direction.DESC.equals(field.getDirection()) ? -1 : 1));
        CompoundIndexDefinition indexDefinition = new CompoundIndexDefinition(indexKeys);
        indexDefinition.named(index.getName());
        mongoTemplate.indexOps(TestModel.class).ensureIndex(indexDefinition);
    }
});
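If you would rather rebuild the indexes from the @Indexed annotations themselves, which is effectively the mongoTemplate.createIndexes( TestModel.class ) you were asking for, Spring Data's index resolver can be used directly. A minimal sketch, assuming a reasonably recent spring-data-mongodb (the resolver API has shifted a little between versions, so verify against yours):
import org.springframework.data.mongodb.core.index.IndexOperations;
import org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexResolver;
import org.springframework.data.mongodb.core.mapping.MongoMappingContext;
import org.springframework.data.util.ClassTypeInformation;

// Resolve the index definitions declared via annotations on TestModel
// and ensure each one on the freshly re-created collection.
MongoMappingContext mappingContext =
        (MongoMappingContext) mongoTemplate.getConverter().getMappingContext();
MongoPersistentEntityIndexResolver resolver =
        new MongoPersistentEntityIndexResolver(mappingContext);

IndexOperations indexOps = mongoTemplate.indexOps(TestModel.class);
resolver.resolveIndexFor(ClassTypeInformation.from(TestModel.class))
        .forEach(indexOps::ensureIndex);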
I have an interface that requires implementing classes to define and implement a function that returns an org.apache.lucene.search.Query. These classes create various queries like TermQuery, PhraseQuery, etc. Is it possible to take the org.apache.lucene.search.Query that gets returned and iterate over all of the queries and terms it is composed of?
public interface BaseQuery {

    public Query getQuery();

    default Query toQuery() {
        Query query = getQuery();
        // iterate through Query and do things to each term
        return query;
    }
}
public class ContainsQuery implements BaseQuery {

    @Override
    public Query getQuery() {
        PhraseQuery.Builder queryBuilder = new PhraseQuery.Builder();
        queryBuilder.add(new Term("field", "value"));
        return queryBuilder.build();
    }
}
Since you can't update the terms in place (there is no setTerms or similar, not even on this implementation, PhraseQuery), maybe this works.
You could first retrieve the terms and loop over them. Whatever modification you wish to make, update a term, create a new one, or even discard the unwanted ones.
Then, assign query to a newly constructed object built from the modified terms, something like a manual update/set for a Query object. In the example I just add the terms to the builder, but you could carry over the previous parameters as well (slop, ...).
default Query toQuery() {
    Query query = getQuery();
    if (!(query instanceof PhraseQuery)) {
        return query; // getTerms() exists on PhraseQuery, not on the base Query class
    }
    Term[] terms = ((PhraseQuery) query).getTerms();
    List<Term> modifiedTerms = new ArrayList<>();
    for (Term t : terms) {
        /* Your modifications here, e.g. copy one term, create two and discard one:
        Term toAdd = null;
        toAdd = new Term(t.field() + "suffix", t.text());
        ...
        toAdd = new Term("5", "6");
        ...
        (do nothing to discard)
        if (toAdd != null)
            modifiedTerms.add(toAdd);
        */
    }
    PhraseQuery.Builder builder = new PhraseQuery.Builder();
    for (int i = 0; i < modifiedTerms.size(); i++) {
        builder.add(modifiedTerms.get(i), i);
    }
    query = builder.build();
    return query;
}
This overrides the reference and assigns a new object, very much like a set/update would if those were implemented. The previous query simply becomes eligible for the next GC, so there is no big gap here; in any case, avoiding the new object would almost surely be an unneeded micro-optimization, no worries.
The way to do this is to use QueryTermExtractor (org.apache.lucene.search.highlight.QueryTermExtractor, from the lucene-highlighter module):
WeightedTerm[] terms = QueryTermExtractor.getTerms(query);
for (WeightedTerm term : terms) {
    System.out.println("THE TERM: " + term.getTerm());
}
The issue I was having is that all the examples I found were calling .getTerms() on an org.apache.lucene.search.Query, but .getTerms() no longer seems to be implemented on the base Query class.
Also, @aran's suggested approach of constructing a new Query object is an appropriate way to "modify" the terms of an already constructed, immutable Query object.
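On newer Lucene versions (8.2+), the Query.visit API is another way to walk the terms of an arbitrary query without casting to concrete types. A minimal sketch, assuming a query variable as above:
import java.util.HashSet;
import java.util.Set;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.QueryVisitor;

// Collect every term referenced by the query, whatever its concrete type.
Set<Term> terms = new HashSet<>();
query.visit(QueryVisitor.termCollector(terms));
for (Term term : terms) {
    System.out.println("THE TERM: " + term);
}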
This is my first attempt to implement an Entity Component System in my project and I'm not sure how some of its mechanics work. For example, how do I remove an entity? Since all systems use the entity list throughout the whole game loop, every attempt to delete an element of that list ends in a ConcurrentModificationException. Going by this advice I've tried setting some kind of "toRemove" flag on entities and checking for it every time a system iterates through the list:
public class DrawingSystem extends System {

    public DrawingSystem(List<Entity> entityList) {
        super(entityList);
    }

    public void update(Batch batch) {
        for (Entity entity : entityList) {
            removeIfNecessary(entity);
            //code
        }
    }

    public void removeIfNecessary(Entity entity) {
        // Removing from entityList while the for-each above is still iterating it
        // is what triggers the ConcurrentModificationException.
        if (entity.toRemove) {
            entityList.remove(entity);
        }
    }
}
but that didn't help me get rid of the exception. I'm sure there is an elegant solution to this problem, since this design pattern is broadly used, but I'm just not aware of it.
Check out iterators:
"Iterators allow the caller to remove elements from the underlying collection during the iteration with well-defined semantics."
https://docs.oracle.com/javase/8/docs/api/index.html?java/util/Iterator.html
Iterator<Entity> it = entityList.iterator();
while (it.hasNext()) {
    Entity entity = it.next();
    if (entity.toRemove) {
        it.remove(); // safe removal during iteration
    }
}
You could also mark the entities to remove during the pass and then remove the dead entities in an extra step after the update/render, as sketched below.
This has the advantage that you do not miss entities in later steps of your update.
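A minimal sketch of that deferred removal, using the toRemove flag from the question:
// During the update/render pass: only flag entities, never mutate the list.
for (Entity entity : entityList) {
    // ... update/draw the entity, possibly setting entity.toRemove = true
}

// Extra step after the pass: removeIf iterates safely under the hood.
entityList.removeIf(entity -> entity.toRemove);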
I have the following domain object:
@Document
class Foo {

    @Id
    private final String bar;

    private final String baz;

    // getters, setters, constructor omitted
}
Which is inserted as follows:
Collection<Foo> foos = ...;
mongoTemplate.insert(foos, Foo.class);
How can I save all of them in one call, ignoring all duplicate key exceptions?
In my case it was not suitable to allow modification/overwriting of the existing documents as in @marknorkin's answer. Instead, I only wanted to insert new documents. I came up with this using MongoOperations, which is injectable in Spring. The code below is in Kotlin.
try {
    // We do not want to overwrite existing documents, especially not behind the event horizon.
    // We hence use unordered inserts and suppress the duplicate key exceptions,
    // as described in: https://docs.mongodb.com/v3.2/reference/method/db.collection.insertMany/#unordered-inserts
    mongoOps.bulkOps(BulkOperations.BulkMode.UNORDERED, EventContainer::class.java)
        .insert(filtered)
        .execute()
} catch (ex: BulkOperationException) {
    if (!isDuplicateKeyException(ex)) {
        throw ex
    }
}
With this little helper:
private fun isDuplicateKeyException(ex: BulkOperationException): Boolean {
    val duplicateKeyErrorCode = 11000
    return ex.errors.all { it.code == duplicateKeyErrorCode }
}
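For a plain-Java project, a rough equivalent of the Kotlin snippet above could look like the sketch below, reusing the question's Foo class; treat the exact exception accessors as an assumption to verify against your Spring Data version:
import org.springframework.data.mongodb.BulkOperationException;
import org.springframework.data.mongodb.core.BulkOperations;

try {
    // Unordered bulk insert: Mongo attempts every document instead of
    // stopping at the first duplicate key error.
    mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, Foo.class)
            .insert(foos)
            .execute();
} catch (BulkOperationException ex) {
    int duplicateKeyErrorCode = 11000;
    boolean onlyDuplicates = ex.getErrors().stream()
            .allMatch(error -> error.getCode() == duplicateKeyErrorCode);
    if (!onlyDuplicates) {
        throw ex;
    }
}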
I searched through the spring-data-mongo documentation and other resources, but didn't find the expected answer.
It seems that with an ordered insert Mongo inserts batch docs until the unique key constraint is violated, and the rest is up to the DB.
So, for example, if you need to insert 100 docs and the document at position 50 already exists in the DB, then the first 49 will be inserted and the remaining 51 will not.
What I came up with is the following solution:
Set<String> ids = foos.stream().map(Foo::getBar).collect(toSet()); // collect the ids of all docs that will be inserted
WriteResult writeResult = mongoTemplate.remove(new Query(Criteria.where("_id").in(ids)), Foo.class); // remove any existing docs with those ids
mongoTemplate.insert(foos, Foo.class); // now the batch can safely be inserted
So the DB will be called twice.
Also, as bar is an indexed field (it is the _id), the remove operation will be fast.
I'm trying to merge these three objects into a single complex object:
public class Person {
    private String name;
    private List<Event> events;
    // getters and setters
}

public class Event {
    private String name;
    private List<Gift> gifts;
    // getters and setters
}

public class Gift {
    private String name;
    private String recipient; // the name of the person
    private String eventName;
    // getters and setters
}
My goal is to save the Person object in MongoDB using Morphia, and this is how I want my document laid out. I've created a document builder, of sorts, that combines lists of each object. Each Person gets a list of all Events, but should only receive their specific Gifts. While my document builder does create a document that Morphia can persist, only the Gifts of the last recipient (in sort order) end up in the Events of all Persons, though at least under the correct Events.
public void merge() {
    for (Person person : listOfPersons) {
        for (Event event : listOfEvents) {
            // somePersonsGifts: a sublist of gifts based on Event and Person.
            List<Gift> somePersonsGifts = new ArrayList<Gift>();
            for (Gift gift : listOfGifts) {
                if (person.getName().equals(gift.getRecipient()) && gift.getEventName().equals(event.getName())) {
                    somePersonsGifts.add(gift);
                }
            }
            event.setGifts(somePersonsGifts);
        }
        person.setEvents(listOfEvents);
    }
}
If I modify the code slightly to process one person at a time, removing the outer loop and having the method take an argument for a specific index of the Persons list:
public void merge(int p) {
    Person person = listOfPersons.get(p);
    //...and so on
I get one complete Person object with the correct gifts. If I try to feed this modified version into a loop, the problem comes back. I've tried using regular for-loops and synchronized collections. I've tried Google Guava's ImmutableList and still no luck. I know the problem is that I'm changing the lists while accessing them, but I can't find any way around it. I wrote a DAO that uses the MongoDB driver directly and it works properly, but it's a lot more code and quite ugly. I really want this approach to work; the answer is right in front of me, but I just can't see it. Any help would be greatly appreciated.
Here is your problem:
List<Gift> somePersonsGifts = new ArrayList<Gift>();
....
event.setGifts(somePersonsGifts);
You add the gifts only for one person; if you want to aggregate all the gifts into the event, re-use the existing list.
I don't know anything about MongoDB or Morphia, but I suspect the problem is your use of the setters event.setGifts(somePersonsGifts) and person.setEvents(events). Your code does not merge the existing gift and event lists with the ones you calculate further down in the loop, which is how you would want it to behave (if I understand the question correctly).
You should retrieve the already existing gift list (and event list, too) instead of overwriting them with empty new ones.
I don't know where the merge() method is defined, but I assume it has access to the lists, since you are using the events list here:
person.setEvents(events);
Maybe you meant
person.setEvents(listOfEvents)
Notice that you are adding all the events to every person. If all the persons went to all the events, it is unnecessary to keep the events inside each person.
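Pulling the answers together: because every Person is handed the same shared Event instances, each pass through the outer loop overwrites the gifts set for the previous person, which matches the symptom described. A sketch of a merge() that gives each Person its own Event copies, assuming Event has a no-arg constructor and the getters/setters shown above:
public void merge() {
    for (Person person : listOfPersons) {
        List<Event> personEvents = new ArrayList<>();
        for (Event event : listOfEvents) {
            // A fresh Event per person, so setGifts() never clobbers another person's data.
            Event copy = new Event();
            copy.setName(event.getName());
            List<Gift> somePersonsGifts = new ArrayList<>();
            for (Gift gift : listOfGifts) {
                if (person.getName().equals(gift.getRecipient())
                        && gift.getEventName().equals(event.getName())) {
                    somePersonsGifts.add(gift);
                }
            }
            copy.setGifts(somePersonsGifts);
            personEvents.add(copy);
        }
        person.setEvents(personEvents);
    }
}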
I'm using ORMLite in my Android app. I need to persist this class, which has a HashMap. What is a good way of persisting it? It's my first time trying to persist a HashMap, and also my first time with ORMLite, so any advice would be greatly appreciated!
Edit:
If it makes any difference, the Exercise class is simply a String (which also works as the id in the database), and the Set class has an int id (also the id in the database), an int weight and int reps.
@DatabaseTable
public class Workout {

    @DatabaseField(generatedId = true)
    int id;

    @DatabaseField(canBeNull = false)
    Date created;

    /*
     * The hashmap needs to be persisted somehow
     */
    HashMap<Exercise, ArrayList<Set>> workoutMap;

    public Workout() {
    }

    public Workout(HashMap<Exercise, ArrayList<Set>> workoutMap, Date created) {
        this.workoutMap = workoutMap;
        this.created = created;
    }

    public void addExercise(Exercise e, ArrayList<Set> setList) {
        workoutMap.put(e, setList);
    }

    ...
}
Wow. Persisting a HashMap whose value is a List of Sets. Impressive.
So in ORMLite you can persist any Serializable field. Here's the documentation about the type and how you have to configure it:
http://ormlite.com/docs/serializable
So your field would look something like:
@DatabaseField(dataType = DataType.SERIALIZABLE)
Map<Exercise, List<Set>> workoutMap;
Please note that if the map is at all large then this will most likely not be very performant. Also, your Exercise class (and the List and Set classes) need to implement Serializable.
If you need to search this map, you might consider storing the values in another table instead, in which case you might want to take a look at how ORMLite persists "foreign objects"; a sketch follows below.
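A hypothetical sketch of that table-based alternative; the WorkoutSet class and its fields are invented for illustration, mirroring the weight/reps fields described in the question:
// One row per set performed: links a Workout and an Exercise,
// replacing the in-memory HashMap with queryable rows.
@DatabaseTable
public class WorkoutSet {

    @DatabaseField(generatedId = true)
    int id;

    @DatabaseField(foreign = true, canBeNull = false)
    Workout workout;

    @DatabaseField(foreign = true, canBeNull = false)
    Exercise exercise;

    @DatabaseField
    int weight;

    @DatabaseField
    int reps;
}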