When migrating entities to HRD @Parent keys become null - java

I am in the process of moving an existing Google AppEngine application from the master-slave datastore (MSD) to the new high-replication datastore (HRD).
The application is written in Java, using Objectify 3.1 for persistence.
In my old (MSD) application, I have an entity like:
public class Session {
    @Id public Long id;
    public Key<Member> member;
    /* other properties and methods */
}
In the new (HRD) application, I have changed this into:
public class Session {
    @Id public Long id;
    // HRD: @Parent is needed to ensure strongly consistent queries.
    @Parent public Key<Member> member;
    /* other properties and methods */
}
I need the Session objects to be strongly consistent with their parent Member object.
When I migrate (a working copy of) my application using Google's HRD migration tool, all Members and Sessions are there. However, all member properties of Session objects become null. Apparently, these properties are not migrated.
I was prepared to re-parent my Session objects, but if the member property is null, that is impossible. Can anyone explain what I am doing wrong, and if this problem can be solved?

@Id and @Parent are not "real" properties in the underlying entity. They are part of the key which defines the entity; Objectify maps them to properties on your POJO.
The transformation you are trying to make is one of the more complicated problems in GAE. Remember that an entity with a different parent (say, some value vs null) is a different entity; it has a different key. For example, loading an entity with a null parent, setting the parent to a value, and saving the entity, does not change the entity -- it creates a new one. You would still need to delete the old entity and update any foreign key references.
Your best bet is to import the data as-is with the regular 'member' field. You can also have the @Parent field (call it anything; you can rename it at any time since it's not a "real" property). After you migrate, make a pass through your data:
Load each Session
Check for null parentMember. If null:
Assign parentMember and save entity
Delete entity with null parentMember
Be very careful of foreign key references if you do this.
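If it helps, here is a minimal sketch of that pass against the Objectify 3.x API. It assumes the Session POJO keeps both the imported member field and the new @Parent parentMember field; the names and the iteration style are illustrative, not from the question:

Objectify ofy = ObjectifyService.begin();
for (Session session : ofy.query(Session.class)) {
    if (session.parentMember == null) {
        // Remember the key of the parentless entity before changing anything.
        Key<Session> oldKey = new Key<Session>(Session.class, session.id);
        session.parentMember = session.member; // adopt the imported reference as the parent
        ofy.put(session);   // the key now includes the parent, so this writes a NEW entity
        ofy.delete(oldKey); // remove the old, parentless entity
    }
}

For a large dataset you would run this in batches (e.g. from a task queue) rather than in one loop, but the create-then-delete shape stays the same.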

Related

org.springframework.orm.jpa.JpaSystemException: identifier of an instance of com.cc.domain.User was altered from 90 to null; [duplicate]

org.hibernate.HibernateException: identifier of an instance of org.cometd.hibernate.User altered from 12 to 3
In fact, my user table really must change its values dynamically; my Java app is multithreaded.
Any ideas how to fix it?
Are you changing the primary key value of a User object somewhere? You shouldn't do that. Check that your mapping for the primary key is correct.
What does your mapping XML file or mapping annotations look like?
You must detach your entity from the session before modifying its ID fields.
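For example (a hedged sketch with the plain Hibernate Session API; the variable names are illustrative):

// Detach first so Hibernate no longer tracks the instance; otherwise the
// flush would see the altered identifier and throw this exception.
session.evict(user);
user.setId(newId); // now a plain POJO change, nothing is persisted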
In my case, the PK Field in hbm.xml was of type "integer" but in bean code it was long.
In my case, the getter and setter names were different from the variable name.
private Long stockId;

public Long getStockID() {
    return stockId;
}

public void setStockID(Long stockID) {
    this.stockId = stockID;
}
where it should be
public Long getStockId() {
    return stockId;
}

public void setStockId(Long stockID) {
    this.stockId = stockID;
}
In my case, I solved it by changing the @Id field type from long to Long.
In my particular case, this was caused by a method in my service implementation that needed the Spring @Transactional(readOnly = true) annotation. Once I added that, the issue was resolved. Unusual though, since it was just a select statement.
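For reference, a sketch of the shape of the fix (the method and repository names are illustrative, not the poster's actual code):

// Read-only transaction around a pure select; without the annotation the
// poster saw the "identifier altered" error on this service method.
@Transactional(readOnly = true)
public List<User> findUsersByName(String name) {
    return userRepository.findByName(name);
}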
Make sure you aren't trying to use the same User object more than once while changing the ID. In other words, if you were doing something in a batch type operation:
User user = new User(); // Using the same one over and over won't work
List<Customer> customers = fetchCustomersFromSomeService();
for (Customer customer : customers) {
    // User user = new User(); <-- This would work; you get a new one each time
    user.setId(customer.getId());
    user.setName(customer.getName());
    saveUserToDB(user);
}
In my case, a template had a typo: instead of checking for equality (==) it was using an assignment (=).
So I changed the template logic from:
if (user1.id = user2.id) ...
to
if (user1.id == user2.id) ...
and now everything is fine. So, check your views as well!
It is a problem in your update method. Just instantiate a new User before you save changes and you will be fine. If you use mapping between DTO and entity classes, then do this before mapping.
I had this error also. I had a User object and was trying to change its Location; Location was an FK in the User table. I solved this problem with:
@Transactional
public void update(User input) throws Exception {
    User userDB = userRepository.findById(input.getUserId()).orElse(null);
    userDB.setLocation(new Location());
    userMapper.updateEntityFromDto(input, userDB);
    User user = userRepository.save(userDB);
}
Also ran into this error message, but the root cause was of a different flavor from those referenced in the other answers here.
Generic answer:
Make sure that once Hibernate loads an entity, no code changes the primary key value in that object in any way. When Hibernate flushes all changes back to the database, it throws this exception because the primary key changed. If you don't change it explicitly, look for places where this may happen unintentionally, perhaps on related entities that only have LAZY loading configured.
In my case, I was using a mapping framework (MapStruct) to update an entity. In the process, other referenced entities were also being updated, as mapping frameworks tend to do by default. I later replaced the original entity with a new one (in DB terms, changed the value of the foreign key to reference a different row in the related table), but the primary key of the previously-referenced entity had already been updated, and Hibernate attempted to persist that update on flush.
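One way to stop the mapper from touching identifiers and referenced entities is to exclude them explicitly. A hedged MapStruct sketch (the mapper, DTO, and field names are illustrative, not from the original post):

import org.mapstruct.Mapper;
import org.mapstruct.Mapping;
import org.mapstruct.MappingTarget;

// Illustrative mapper: leave the PK and the referenced entity alone
// so only the scalar fields are copied onto the managed instance.
@Mapper(componentModel = "spring")
public interface UserMapper {

    @Mapping(target = "id", ignore = true)       // never touch the primary key
    @Mapping(target = "location", ignore = true) // swap references manually instead
    void updateEntityFromDto(UserDto dto, @MappingTarget User entity);
}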
I was facing this issue, too.
The target table is a relation table, wiring two IDs from different tables. I have a UNIQUE constraint on the value combination, replacing the PK.
When updating one of the values of a tuple, this error occured.
This is what the table looks like (MySQL):
CREATE TABLE my_relation_table (
    mrt_left_id BIGINT NOT NULL,
    mrt_right_id BIGINT NOT NULL,
    UNIQUE KEY uix_my_relation_table (mrt_left_id, mrt_right_id),
    FOREIGN KEY (mrt_left_id) REFERENCES left_table(lef_id),
    FOREIGN KEY (mrt_right_id) REFERENCES right_table(rig_id)
);
The Entity class for the RelationWithUnique entity looks basically like this:
@Entity
@IdClass(RelationWithUnique.class)
@Table(name = "my_relation_table")
public class RelationWithUnique implements Serializable {

    ...

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_left_id", referencedColumnName = "lef_id")
    private LeftTableEntity leftId;

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_right_id", referencedColumnName = "rig_id")
    private RightTableEntity rightId;

    ...
I fixed it by
// usually, we need to detach the object as we are updating the PK
// (rightId being part of the UNIQUE constraint) => PK
// but this would produce a duplicate entry,
// therefore, we simply delete the old tuple and add the new one
final RelationWithUnique newRelation = new RelationWithUnique();
newRelation.setLeftId(oldRelation.getLeftId());
newRelation.setRightId(rightId); // here, the value is updated actually
entityManager.remove(oldRelation);
entityManager.persist(newRelation);
Thanks a lot for the hint of the PK, I just missed it.
The problem can also be a mismatch between the type of the object's PK ("User" in your case) and the type you ask Hibernate for in session.get(type, id);.
In my case the error was identifier of an instance of <skipped> was altered from 16 to 32.
The object's PK type was Integer; Hibernate was asked for the Long type.
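In other words (a hedged illustration; the entity name is taken from the question):

// The entity's @Id is declared as Integer ...
session.get(User.class, 16L); // ... but a Long is passed: identifier type mismatch
session.get(User.class, 16);  // pass the id with the mapped type (autoboxes to Integer)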
In my case it was because the property was long on the object but int in the mapping XML. This exception should be clearer.
If you are using Spring MVC or Spring Boot, try to avoid:
@ModelAttribute("user") in one controller, and in another controller
model.addAttribute("user", userRepository.findOne(someId));
This situation can produce such an error.
This is an old question, but I'm going to add the fix for my particular issue (Spring Boot, JPA using Hibernate, SQL Server 2014) since it doesn't exactly match the other answers included here:
I had a foreign key, e.g. my_id = '12345', but the value in the referenced column was my_id = '12345 '. It had an extra space at the end, which Hibernate didn't like. I removed the space, fixed the part of my code that was allowing this extra space, and everything works fine.
Faced the same issue.
I had an association between two beans. In bean A I had defined the variable type as Integer and in bean B I had defined the same variable as Long.
I changed both of them to Integer. This solved my issue.
I solved this by creating a new instance of the depending object. For example:
instanceA.setInstanceB(new InstanceB());
instanceA.setInstanceB(YOUR NEW VALUE);
In my case I had a primary key in the database that had an accent, but the foreign key in the other table didn't. For some reason, MySQL allowed this.
It looks like you have changed the identifier of an instance of the org.cometd.hibernate.User object managed by the JPA entity context.
In this case, create a new User entity object with the appropriate id and set it in place of the original User object.
Are you using multiple transaction managers from the same service class? For example, does your project have two or more transaction configurations? If so, separate them first.
I got the issue when I tried fetching an existing DB entity, modified a few fields and executed
session.save(entity)
instead of
session.merge(entity)
Since the entity already exists in the DB, we should merge() instead of save().
You may have modified the primary key of the fetched entity and then tried saving it within the same transaction, creating a new record from the existing one.
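A hedged sketch of the working variant (the entity and session handling are illustrative):

// Fetch an existing row, change a few fields, and merge the state back.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

MyEntity entity = session.get(MyEntity.class, id); // row already exists in the DB
entity.setSomeField("new value");                  // modify a few fields
session.merge(entity);                             // merge, not save(): no new record

tx.commit();
session.close();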

How to save entities with manually assigned identifiers using Spring Data JPA?

I'm updating existing code that handles the copy of raw data from one table into multiple objects within the same database.
Previously, every kind of object had a generated PK using a sequence for each table.
Something like this:
@Id
@Column(name = "id")
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer id;
In order to reuse existing IDs from the import table, we removed @GeneratedValue for some entities, like this:
@Id
@Column(name = "id")
private Integer id;
For this entity, I did not change my JpaRepository, which looks like this:
public interface EntityRepository extends JpaRepository<Entity, Integer> {
    <S extends Entity> S save(S entity);
}
Now I'm struggling to understand the following behaviour, within a Spring transaction (@Transactional) with the default propagation and isolation level:
With @GeneratedValue on the entity, when I call entityRepository.save(entity) I can see, with Hibernate show-sql activated, that an insert request is fired (however, it seems to be only in the cache, since the database does not change)
Without @GeneratedValue on the entity, only a select request is fired (no insert attempt)
This is a big issue when my Entity (without generated value) is mapped to MyOtherEntity (with generated value) in a one-to-many relationship.
I thus have the following error:
ERROR: insert or update on table "t_other_entity" violates foreign key constraint "other_entity_entity"
Detail: Key (entity_id)=(110) is not present in table "t_entity"
Seems legit, since the insert has not been sent for Entity. But why? Again, if I change the ID of the Entity and use @GeneratedValue, I don't get any error.
I'm using Spring Boot 1.5.12, Java 8 and PostgreSQL 9
You're basically switching from automatically assigned identifiers to manually defined ones which has a couple of consequences both on the JPA and Spring Data level.
Database operation timing
On the plain JPA level, the persistence provider doesn't necessarily need to execute a single insert immediately, as it doesn't have to obtain an identifier value. That's why it usually delays the execution of the statement until it needs to flush, which happens on an explicit call to EntityManager.flush(), on a query execution (as that requires the data in the database to be up to date to deliver correct results), or on transaction commit.
Spring Data JPA repositories automatically use default transactions on the call to save(…). However, if you're calling repositories within a method annotated with @Transactional in turn, the database interaction might not occur until that method is left.
EntityManager.persist(…) vs. ….merge(…)
JPA requires the EntityManager client code to differentiate between persisting a completely new entity and applying changes to an existing one. Spring Data repositories want to free the client code from having to deal with this distinction, as business code shouldn't be overloaded with that implementation detail. That means Spring Data will somehow have to differentiate new entities from existing ones itself. The various strategies are described in the reference documentation.
In the case of manually assigned identifiers, the default of inspecting the identifier property for null values will not work, as the property will never be null by definition. A standard pattern is to tweak the entities to implement Persistable, keep a transient is-new flag around, and use entity callback annotations to flip the flag.
@MappedSuperclass
public abstract class AbstractEntity<ID extends SalespointIdentifier> implements Persistable<ID> {

    private @Transient boolean isNew = true;

    @Override
    public boolean isNew() {
        return isNew;
    }

    @PrePersist
    @PostLoad
    void markNotNew() {
        this.isNew = false;
    }

    // More code…
}
isNew is declared transient so that it doesn't get persisted. The type implements Persistable so that the Spring Data JPA implementation of the repository's save(…) method will use that. The code above results in entities created from user code using new having the flag set to true, but any kind of database interaction (saving or loading) turning the entity into an existing one, so that save(…) will trigger EntityManager.persist(…) initially but ….merge(…) for all subsequent operations.
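A hedged usage sketch against the question's repository (the setter is assumed to exist; ids come from the import table):

// Entity extends the AbstractEntity above, so isNew == true at this point.
Entity imported = new Entity();
imported.setId(110);             // manually assigned identifier
entityRepository.save(imported); // isNew -> EntityManager.persist(...), an INSERT is fired

// After the save (or any load), the @PrePersist/@PostLoad callback has flipped
// the flag, so subsequent save(...) calls go through EntityManager.merge(...).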
I took the chance to create DATAJPA-1600 and added a summary of this description to the reference docs.

JPA: getReference() vs new "mock" object with ID only?

I wonder, is it viable to associate an entity with a child entity not by using a proxy object, but by creating a new object and setting the Id manually? Like this:
@Transactional
public void save(@NonNull String name, @NonNull Long roleId) {
    User user = new User();
    user.setName(name);

    Role role = new Role();
    role.setRoleId(roleId);

    // Instead of:
    // roleRepository.getOne(roleId);
    user.setRole(role);

    userRepository.save(user);
}
I know that the accepted and well-documented way to do it is by calling something like:
em.getReference(Role.class, roleId);
or, if you use Spring Data:
roleRepository.getOne(roleId);
or the Hibernate way:
session.load(Role.class, roleId)
So the question is: what bad consequences can one face by cheating the JPA provider and using this new object with a manually set Id? Note, the only reason to call getOne() is to associate a newly created entity with an existing one. Since the Role mock object is not managed, there is no fear of losing any data. It simply does its job of connecting the two entities.
From the Hibernate documentation:
getReference() obtains a reference to the entity. The state may or may not be initialized. If the entity is already associated with the current running Session, that reference (loaded or not) is returned. If the entity is not loaded in the current Session and the entity supports proxy generation, an uninitialized proxy is generated and returned, otherwise the entity is loaded from the database and returned.
So after testing, I found that it basically does not even hit the database to check the presence of the ID, and save() would fail at commit if the FK constraint is violated. It just requires an additional dependency to autowire (RoleRepository).
So why should I have this proxy fetched by invoking getOne() instead of the mock object created with new, if my case is as simple as this one? What may go wrong with this approach, and when?
Thank you for clarifying things.
EDIT:
Hibernate/JPA, save a new entity while only setting id on @OneToOne association
This related topic doesn't answer the question. I am asking why calling JPA's getReference() API is better, and what may go wrong if I adopt this practice of creating a new "mock" object with a given Id using the new operator.

Hibernate: How to make entity and all associations readOnly by default? (or auto-evict the association from the session)

I have an entity which belongs to a customer entity.
I would like the entity including all associations to be kind of read-only.
public class Foo {

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "customer_id")
    private Customer customer;

    @Basic
    @Column(name = "firstname")
    private String firstName;

    // getters for both fields ....
}
Here is what I want:
Calls to setters of foo should not be persisted.
Calls to myFoo.getCustomer() should return a readonly customer so that calls to setters like myFoo.getCustomer().setSomething("should not be persisted") should not work.
Example:
List<Foo> list = fooDAO.getList();
for (Foo foo : list) {
    String f1 = foo.getCustomer().getSomeField(); // should work
    foo.getCustomer().setSomeField("change that should not be persisted"); // calling this setter should have no effect, or should even throw an UnsupportedOperationException or something
    foo.setFirstName("change should not be persisted"); // also this should not be persisted
}
Currently my solution for the association is kind of manual:
public Customers getCustomer() {
    // detach the referenced object from the Hibernate session
    // so that changes to this association are not persisted to the database
    getCustomersDAO().evict(customer); // calls session.evict(o) under the hood
    return customer;
}
Here is my question:
What ways are there to avoid changes to associations being persisted to the database? E.g. using an annotation?
In general I would like this behaviour to be the default.
But it should also be possible to allow changes to be persisted, so I need it configurable. So I thought about doing it on the query level.
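Hibernate 3.6 already has a few hooks for this, so here is a hedged sketch of the configurable part. Note that none of these make setters throw; they only stop dirty-checking and flushing, so the in-memory object can still be mutated:

Session session = sessionFactory.getCurrentSession();

// Option 1: everything loaded by this session is read-only by default
session.setDefaultReadOnly(true);

// Option 2: per query
List<Foo> list = session.createQuery("from Foo")
                        .setReadOnly(true)
                        .list();

// Option 3: per entity instance, after loading
session.setReadOnly(someFoo, true);

To get the UnsupportedOperationException behaviour you describe, you would still have to wrap or subclass the entities yourself; Hibernate's read-only mode does not intercept setter calls.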
My environment:
Hibernate v3.6
Spring 3.2 with HibernateDAOSupport / HibernateTemplate and annotation-based transaction handling.
I ran into this issue. I was storing a filename as a property of an object in the database, and then trying to manipulate it in the application after reading it, to reflect a storage path based on the environment. I tried making the DAO transaction readOnly, and Hibernate still persisted the changes to the property to the db. I then used entityManager.detach(object) before I manipulated the property, and that worked. But I later got tangled up in issues where other service methods that depended on the automatic persistence of objects to the db weren't behaving as they should.
The lesson I learned is that if you need to manipulate a property of an object that you don't want persisted, make a "transient" property for that object with its own getters and setters, and make sure the transient property's setter isn't handed references from the entity model that then get manipulated. For example, if you have a persistent property "filename" of type String and a transient property "s3Key", the first thing the "s3Key" setter should do is make a new String object from what is passed in, and use that for the manipulation.
@Transient
private String s3Key;

public String getS3Key() {
    String s3Key = s3Prefix + "/" + <other path info> + this.filename;
    return s3Key;
}
Though all the "readOnly" and "detach" machinery in JPA/Hibernate allows for custom handling of how objects are persisted to the db, going to that length to customize the behavior of the objects seems to cause supportability problems later, when the automatic persisting of entity changes is expected or counted on in other service functions. In my experience it is best to use a pattern of transient properties for handling changes to entities that you don't want persisted.

Persistence Error Message: An instance of a null PK has been incorrectly provided for the find operation

I am trying to use Netbeans 7.01 to follow a tutorial on JSF 2.0 and JPA. I am using Oracle XE and JDBC_6. I used the JSF Pages from Entities wizard to generate my JSF pages. Everything works fine, as I can retrieve data from the database and display it. However, when I attempt to create or update a record in the database, I get this error:
An instance of a null PK has been incorrectly provided for the find operation
How is this caused and how can I solve it?
This basically means that you did the following:
Entity entity = em.find(Entity.class, null);
Note that the PK is null here. To fix your problem, just make sure that it's not null.
This may be because you are running a find operation on an entity that has not been persisted yet. In that situation, the @Id field (if it is autogenerated) will not have a value, i.e. it will be null. You are then trying to find the entity and, as @BalusC points out, sending a null value into your find method.
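So guard the call; a minimal sketch (the entity and variable names are illustrative):

// Only attempt the lookup once the identifier is known.
if (entity.getId() == null) {
    em.persist(entity); // new record: persist instead of find
} else {
    Entity found = em.find(Entity.class, entity.getId());
    // ... update the found instance
}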
It means that when you are trying to persist an entity you are sending the PK of the entity as null.
So you have three options:
Manually define the PK for the Entity.
If your database uses a type like SERIAL (Informix, MS SQL Server, etc.), then the value will be autoincremented by the RDBMS, so you can use the IDENTITY strategy and pass a null value for your entity's PK:
@Entity
public class Inventory implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;
If your database uses sequences to generate PKs (Oracle, PostgreSQL, etc.), then the value will be provided by a sequence, so you can use:
@Entity
public class Inventory implements Serializable {

    @Id
    @GeneratedValue(generator = "InvSeq")
    @SequenceGenerator(name = "InvSeq", sequenceName = "INV_SEQ", allocationSize = 5)
    private long id;
For more information you can see: http://wiki.eclipse.org/EclipseLink/Examples/JPA/PrimaryKey
