I have a persistent class NewsClass with persistent field newsSource.
@PersistenceCapable
class NewsClass {
    @Persistent
    String newsSource;
    // other persistent fields
}
Now, to query this entity:
Query q = pm.newQuery(NewsClass.class);
q.setFilter("newsSource=='http://somerandomurl'");
List<NewsClass> result = (List<NewsClass>) q.execute();
It turns out that JDO doesn't look up the newsSource field; instead the query parser reads the leading new as a constructor keyword and tries an instantiation like new sSource(). I also tried workarounds such as q.setFilter("\"newsSource\"=='http://somerandomurl'");, but that didn't work either.
There's about 1 GB of data already (on the App Engine datastore, which uses a soft schema), so renaming the field doesn't really look like a good idea.
How do I make this query work?
EDIT
Here's what I got in my logger.
CreatorExpression defined with class of sSourceId yet this class is not found
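For reference, one variation that might sidestep the parse problem (an untested guess, not something confirmed here) is to keep the URL literal out of the filter string and to qualify the field with this., so the parser has no opportunity to read a leading new as a constructor keyword:
Query q = pm.newQuery(NewsClass.class);
// pass the URL as a declared parameter instead of embedding it in the filter string
q.setFilter("this.newsSource == src");
q.declareParameters("String src");
List<NewsClass> result = (List<NewsClass>) q.execute("http://somerandomurl");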
While trying to update an entity, I first retrieve it from the database, then I map the TO from the frontend onto it using Orika Mapper.
Then I try to retrieve some data unrelated to this entity using a JpaRepository and its findAllByOrderByCode method. During this operation I get a strange error saying: "An unexpected exception occurred: detached entity passed to persist:".
The error refers not to a basic field of the entity but to an object in a collection belonging to this entity.
To summarize:
I have entity A, which has a bidirectional one-to-many mapping to entity B:
class A {
    List<B> b;
}
Then I want to update the whole A with an object from the frontend, which I mapped using Orika Mapper.
And while trying to fetch some unrelated data, I get an error.
I found that Orika by default makes a deep copy of collections. So after
entityA = customsClearanceOrderRepository.findById(requestTo.getId());
the List of B entities inside entityA, which were tracked and included in the persistence context, is replaced with a deep copy; the copies have a different identity, which means they are no longer tracked by Hibernate.
So I tried to map those collections myself, just updating the fields instead of creating new objects, and then the problem was gone.
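A minimal sketch of that manual mapping (the TO and field names here are made up for illustration): match each incoming item to the managed one by id and copy the fields onto it, so Hibernate keeps tracking the same instances.
for (BTo bTo : requestTo.getBs()) {
    entityA.getBs().stream()
            .filter(b -> b.getId().equals(bTo.getId()))
            .findFirst()
            // update fields on the managed instance instead of replacing it
            .ifPresent(b -> b.setSomeField(bTo.getSomeField()));
}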
Everything would be fine, but when I removed this line
List<SthTo> all = someRefersToDb.findAllByOrderByCode(); // error appears here
the problem also disappeared, even though I was again using Orika, which makes this deep copy. I understand why that works: saveAndFlush in fact performs EntityManager.merge(entity) when updating, and the different identity of the entities is not a problem for merge (because it copies the untracked object into the persistence context).
entityA = entityARepository.findById(requestTo.getId());
entityAMapper.map(requestTo, entityA);
List<SthTo> all = someRefersToDb.findAllByOrderByCode(); // error appears here
EntityA entityASaved = entityARepository.saveAndFlush(entityA);
So I want to know what's going on here in someRefersToDb.findAllByOrderByCode();
Is there some kind of check of the state of entityA?
Everything is left at the defaults; there is no magical @Transactional(propagation = Propagation.REQUIRES_NEW) or anything like that.
I know why!
While running someRefersToDb.findAllByOrderByCode(), Hibernate in fact also calls session.flush(), which synchronizes session data with the database. And since Orika replaced the entity instances, they are no longer part of the persistence context, and the synchronization fails.
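For what it's worth, that flush-before-query behaviour is the default flush mode at work. A minimal sketch of the mechanism (the setFlushMode call is illustrative only, not a recommended fix):
// With the default FlushModeType.AUTO, pending changes to entityA are flushed before the
// repository query runs; flushing the replaced (untracked) B instances is what fails.
// Deferring the flush until commit makes the timing visible:
entityManager.setFlushMode(FlushModeType.COMMIT); // javax.persistence.FlushModeType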
I have some stored procedures which select data from multiple entities, so where should I define them, given that they don't get their data from a single repository?
I have defined the stored procedures that strictly read from a single table in their respective entity classes, like this:
@Entity
@Table(name = "accounts", schema = "ma_db")
@NamedStoredProcedureQueries({
    @NamedStoredProcedureQuery(name = "getAccountsList", procedureName = "GET_ACCOUNT", parameters = {
        @StoredProcedureParameter(mode = ParameterMode.IN, name = "UserId", type = String.class)
    })
})
It's really impossible to tell without more information about your application and the SP in question.
But here are some guidelines:
Think about what the SP is doing. What is the main domain concept it is concerned with?
This doesn't have to be an entity; maybe you don't need such an entity in your Java code, or maybe you need it and haven't realized it yet.
One typical example where I have seen this situation is with reports or exports. Those are domain objects as well, although they often don't map to a JPA entity.
If there really isn't a matching entity to associate the SP with, maybe the right thing to do is to just create a simple class that executes the SP using a JdbcTemplate.
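A minimal sketch of that last option, assuming Spring's SimpleJdbcCall (built on JdbcTemplate); the procedure name GET_ACCOUNT_REPORT and the result-set name are made up for illustration:
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.core.ColumnMapRowMapper;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Repository;

@Repository
public class AccountReportDao {

    private final SimpleJdbcCall reportCall;

    public AccountReportDao(DataSource dataSource) {
        // a hypothetical procedure that joins data from several tables
        this.reportCall = new SimpleJdbcCall(dataSource)
                .withProcedureName("GET_ACCOUNT_REPORT")
                .returningResultSet("rows", new ColumnMapRowMapper());
    }

    public List<Map<String, Object>> fetchReport(String userId) {
        Map<String, Object> out = reportCall.execute(Map.of("UserId", userId));
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> rows = (List<Map<String, Object>>) out.get("rows");
        return rows;
    }
}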
I'm using simple Java classes as the schema for my MongoDB collections.
There are several frameworks for serialization/deserialization to/from JSON and for CRUD operations against Mongo (I've looked into the Jackson serializer and Morphia).
But none of them seems to provide a solution for handling changes:
Let's say I have this class as my schema:
class Person
{
String name;
int age;
String occupation;
}
In my code, I will probably use a setter in some place for age:
Person newDbEntry = new Person();
newDbEntry.setAge(45);
newDbEntry.setOccupation("Carpenter");
Now let's say that at some point in the development process, it was decided that the age field needs to be renamed to "theAge", and also that the "occupation" field should be removed from this collection completely and moved to a new collection.
The problem that I'm faced with is that all my queries look like this:
JsonObject query = new JsonObject().put("age", new JsonObject().put("$gte", 22));
In other words, all field names appear in queries as Strings (and likewise in all other Mongo APIs: update, findAndModify, etc.).
I'm looking for a way to "bind" all mentions of the field "age" in my code to the POJO class, so that when something in the POJO schema changes (like renaming this field), I'll (ideally) get compiler errors in all queries that mention this field.
As it currently stands, changes to the schema cause no compiler errors and, more critically, usually no runtime errors either: the old string query just quietly returns no results, or something similar. This makes changes to the schema very hard to implement.
How should this be done correctly?
Here's the solution that I ended up using:
Project Lombok now supports generating field-name constants:
https://projectlombok.org/features/experimental/FieldNameConstants
So instead of using the name hardcoded as a string:
serviceRepository.setField(id, "service.serviceName", "newName");
I use:
serviceRepository.setField(id, ConnectivityServiceDetails.Fields.service + "." + ConnectivityService.Fields.serviceName, "newName");
This way, when we search in IntelliJ for usages of this field (or try to refactor it), it will also find those places automatically.
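For completeness, here's a hedged sketch of how the question's Person class could use this (with @FieldNameConstants, the generated inner class is called Fields by default):
import lombok.experimental.FieldNameConstants;

@FieldNameConstants
class Person
{
    String name;
    int age;
    String occupation;
}

// Queries can then reference the generated constants instead of hardcoded strings,
// so renaming the field breaks the query at compile time:
JsonObject query = new JsonObject().put(Person.Fields.age, new JsonObject().put("$gte", 22));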
I'm using Hibernate Envers in my app to track changes in all fields of my entities.
I'm using the @Audited(withModifiedFlag = true) annotation to do it.
The records are being correctly recorded in the database, and the _mod fields correctly indicate the changed fields.
I want to get a particular revision of some entity together with the information about which fields were changed. I'm using the following method to do it:
List<Object[]> results = reader.createQuery()
.forRevisionsOfEntity(this.getDao().getClazz(), false, true)
.add(AuditEntity.id().eq(id))
.getResultList();
This method returns a list of object arrays with my entity as the first element.
The problem is that the returned entity doesn't have any information about the changed fields. So, my question is: how do I get the information about the changed fields?
I know that this question is a bit old now but I was trying to do this and didn't really find any answers.
There doesn't seem to be a nice way to achieve this, but here is how I went about it.
Firstly, you need to use projections, which means you no longer get a nice entity model already mapped for you. You'll still get back an array of Objects, but each object in the array corresponds to one of the projections you added (in order).
final List<Object[]> resultList = reader.createQuery()
.forRevisionsOfEntity(this.getDao().getClazz(), false, true)
// if you want revision properties like revision number/type etc
.addProjection(AuditEntity.revisionNumber())
// for your normal entity properties
.addProjection(AuditEntity.id())
.addProjection(AuditEntity.property("title")) // for each of your entity's properties
// for the modification properties
.addProjection(new AuditProperty<Object>(new ModifiedFlagPropertyName(new EntityPropertyName("title"))))
.add(AuditEntity.id().eq(id))
.getResultList();
You then need to map each result manually. This part is up to you, but I use a separate class as a revision model, since it contains extra data on top of the normal entity data. If you wanted, you could probably achieve this with @Transient properties on your entity class instead.
final List<MyEntityRevision> results = resultList.stream().map(this::transformRevisionResult)
.collect(Collectors.toList());
private MyEntityRevision transformRevisionResult(Object[] revisionObjects) {
final MyEntityRevision rev = new MyEntityRevision();
rev.setRevisionNumber((Integer) revisionObjects[0]);
rev.setId((Long) revisionObjects[1]);
rev.setTitle((String) revisionObjects[2]);
rev.setTitleModified((Boolean) revisionObjects[3]);
return rev;
}
I have the following section of code that I want to use to return a collection of my object:
Session session = HibernateUtil.getSession();
List<MyObj> myObjList = (List<MyObj>)
session.createCriteria(MyObj.class)
.add(Restrictions.eq("searchField", searchField)).list();
Iterator<MyObj> myObjIt = myObjList.listIterator();
log.debug("list size: " + myObjList.size());
while(myObjIt.hasNext()){
MyObj myObj = myObjIt.next();
log.debug(myObj.getMyField());
}
However, my log keeps printing the same record as many times as the size of the list. If I refactor slightly, my code works correctly like this:
SQLQuery query = session.createSQLQuery(
"select my_field from my_table where search_field = :searchField"
);
query.setParameter("searchField", searchField);
List result = query.list();
for(Iterator iter = result.iterator(); iter.hasNext();){
Object[] row = (Object[]) iter.next();
log.debug(row[0]);
}
Am I doing something wrong in my first code segment? I should be able to go about this either way, and since I'm using Hibernate, I'd rather the ORM be working as expected, so I'd prefer the former method over the latter. Anyone have any thoughts?
Fwiw, I am using Hibernate 3.5.4 Final, Hibernate Validator 4.2.0 Final, Hibernate Search 3.4.0 Final, and hibernate-c3p0 3.6.5 Final, all from the Maven repos.
Edited to clarify based on comments.
From what you have described in the question, both of your code segments should return the same results. Assuming that in the first code segment Hibernate executes pretty much the same query as in the second (you can check this in the log; just enable the 'hibernate.show_sql' config parameter), the problem is somewhere in converting the result set to the MyObj list. It is pretty unlikely that this is due to a bug in Hibernate, so it is probably caused by an incorrect entity class mapping. If you do not see any problems with the mapping, please add more details to the question (your entity class with mappings, the db table schema and a data sample) so anyone can reproduce the issue.
Most likely your MyObj class does not have its Id column mapping properly defined. For example, if the field/property mapped as the Id has the same value for all the objects in the result list, Hibernate will return the same object repeatedly (as in your case).
Regarding using primitives as the Id type: Hibernate allows primitive types, but the docs contain the following line:
We recommend that you declare consistently-named identifier properties on persistent classes and that you use a nullable (i.e., non-primitive) type.
Summarizing, possible Id mapping issues:
1. The column mapped as the Id is not unique in the db.
2. The setter corresponding to the mapped Id getter is missing or does not actually store the passed value.
3. The Id field has a non-nullable (primitive) type.
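A minimal sketch of an Id mapping that avoids these issues (class, table and column names follow the question's snippets; the rest is assumed):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "my_table")
public class MyObj {

    @Id
    @GeneratedValue
    @Column(name = "id")
    private Long id;                 // nullable wrapper type, per the recommendation above

    @Column(name = "search_field")
    private String searchField;

    @Column(name = "my_field")
    private String myField;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }   // the setter really stores the value

    public String getSearchField() { return searchField; }
    public void setSearchField(String searchField) { this.searchField = searchField; }

    public String getMyField() { return myField; }
    public void setMyField(String myField) { this.myField = myField; }
}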