What is the difference between insert() and createObject()? - java

I have a setChatsList() method, and it contains a huge amount of code:
public void setChatsList(final ChatsModel chatsModel) {
    Realm realm = Realm.getDefaultInstance();
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(@NonNull Realm realm) {
            ChatsModel realmChats = realm.createObject(ChatsModel.class);
            Response realmResponse = realm.createObject(Response.class);
            Item realmItem = realm.createObject(Item.class);
            Message realmMessage = realm.createObject(Message.class);
            Attachment realmAttachment = realm.createObject(Attachment.class);
            Video realmVideo = realm.createObject(Video.class);
            Response response = chatsModel.getResponse();
            RealmList<Item> items = new RealmList<>();
            Integer itemCount = response.getCount();
            RealmList<Item> itemList = response.getItems();
            if (itemList != null) {
                for (Item item : itemList) {
                    Message message = item.getMessage();
                    realmMessage.setId(message.getId());
                    realmMessage.setDate(message.getDate());
                    realmMessage.setOut(message.getOut());
                    realmMessage.setUserId(message.getUserId());
                    realmMessage.setReadState(message.getReadState());
                    realmMessage.setTitle(message.getTitle());
                    realmMessage.setBody(message.getBody());
                    realmMessage.setRandomId(message.getRandomId());
                    RealmList<Attachment> attachments = message.getAttachments();
                    RealmList<Attachment> attachmentList = new RealmList<>();
                    if (attachments != null) {
                        for (Attachment attachment : attachments) {
                            String type = attachment.getType();
                            Video video = attachment.getVideo();
                            realmVideo.setAccessKey(video.getAccessKey());
                            realmVideo.setCanAdd(video.getCanAdd());
                            realmVideo.setCanEdit(video.getCanEdit());
                            realmVideo.setComments(video.getComments());
                            realmVideo.setDate(video.getDate());
                            realmVideo.setDescription(video.getDescription());
                            realmVideo.setDuration(video.getDuration());
                            realmVideo.setId(video.getId());
                            realmVideo.setOwnerId(video.getOwnerId());
                            realmVideo.setPhoto130(video.getPhoto130());
                            realmVideo.setPhoto320(video.getPhoto320());
                            realmVideo.setPhoto640(video.getPhoto640());
                            realmVideo.setPlatform(video.getPlatform());
                            realmVideo.setTitle(video.getTitle());
                            realmVideo.setViews(video.getViews());
                            realmAttachment.setType(type);
                            realmAttachment.setVideo(realmVideo);
                            attachmentList.add(realmAttachment);
                        }
                        realmMessage.setAttachments(attachmentList);
                    }
                    realmResponse.getItems().add(item);
                }
            }
            realmResponse.setCount(itemCount);
            realmChats.setResponse(realmResponse);
        }
    });
}
It works correctly!
I then read in the official documentation about the insert() method, which also stores objects in the database. I rewrote the setChatsList() method like this:
public void setChatsList(final ChatsModel chatsModel) {
    Realm realm = Realm.getDefaultInstance();
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(@NonNull Realm realm) {
            realm.insert(chatsModel);
        }
    });
}
To my surprise, it works fine too, and with far less code!
But I'm sure it isn't all that smooth; I suspect there is a catch somewhere.
Question: What is the difference between insert() and createObject()?

insert()
saves an unmanaged object into the Realm (it is a no-op for already-managed objects), without creating a managed proxy object as a return value.
createObject()
creates a managed object in Realm, and returns a proxy to this managed object.
copyToRealm()
saves an unmanaged object into the Realm and returns a proxy to the newly created managed object.
The key difference between insert() and copyToRealm() is whether a proxy is returned or not, which means that inserting many items is much more efficient when you re-use a single unmanaged object and call insert() on it with the right parameters.
However, you generally need createObject() if you want to set up relations between objects.
P.S. insert(), copyToRealm() and createObject(clazz) have the counterparts insertOrUpdate(), copyToRealmOrUpdate() and createObject(clazz, primaryKeyValue) for objects with primary keys.
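For illustration, here is a minimal sketch of that re-use pattern, assuming a hypothetical Person RealmObject (no primary key) with setName() and setAge() setters:
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        // One unmanaged instance is re-used for every row; insert() copies its
        // current state into the Realm and returns nothing, so no proxy objects
        // are created inside the loop.
        Person unmanaged = new Person();
        for (int i = 0; i < 1000; i++) {
            unmanaged.setName("user" + i);
            unmanaged.setAge(20 + (i % 50));
            realm.insert(unmanaged);
        }
    }
});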

Assuming your primary key is an integer:
0 is the default value for int fields, so if you already have a RealmObject with 0 as the primary key value, realm.createObject(YourRealmClass.class) will fail with the error below, because it creates the new object with the default primary key value:
RealmPrimaryKeyConstraintException: Value already exists:
What is the better way to create RealmObjects?
copyToRealmOrUpdate() or insert().
I recommend using copyToRealmOrUpdate(), because it first checks whether the record exists: if it does, it updates it; if it does not, it inserts a new record.
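As a minimal sketch of both options, assuming a hypothetical User RealmObject with an @PrimaryKey long id field and a name field:
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        // For a class with a primary key, createObject() must be given the key value up front:
        User created = realm.createObject(User.class, 42L);
        created.setName("first name");
        // copyToRealmOrUpdate() inserts the row if the key is new, or updates the existing row:
        User unmanaged = new User();
        unmanaged.setId(42L);
        unmanaged.setName("updated name");
        realm.copyToRealmOrUpdate(unmanaged);
    }
});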

Related

Spring Data Rest: Limit sending values on Update method

I have implemented my project using Spring-Data-Rest. I am trying to update an existing record in a table, but when I send only a few fields (instead of all the fields present in the Entity class) in my request, Spring-Data-Rest treats the missing fields as null/empty values. When I then look at the database, the fields I did not send in my request have been overwritten with null/empty values. So my understanding is that even though I am not sending these values, Spring Data REST sees them on the Entity class and writes them as null/empty. My question is: is there a way to exclude the fields I am not sending in the request when doing an UPDATE? I appreciate any help.
Update: I was using the PUT method. After reading the comments, I changed it to PATCH and it's working perfectly now. I appreciate all the help.
Before updating, load the existing object from the database using the JPA findById method, and call the returned object target.
Then copy all fields that are not null/empty from the object you want to update onto target, and finally save the target object.
This is a code example:
public void update(Object objectWantToUpdate) {
    Object target = repository.findById(objectWantToUpdate.getId());
    copyNonNullProperties(objectWantToUpdate, target);
    repository.save(target);
}

public void copyNonNullProperties(Object source, Object target) {
    BeanUtils.copyProperties(source, target, getNullPropertyNames(source));
}

public String[] getNullPropertyNames(Object source) {
    final BeanWrapper src = new BeanWrapperImpl(source);
    PropertyDescriptor[] propDesList = src.getPropertyDescriptors();
    Set<String> emptyNames = new HashSet<String>();
    for (PropertyDescriptor propDesc : propDesList) {
        Object srcValue = src.getPropertyValue(propDesc.getName());
        if (srcValue == null) {
            emptyNames.add(propDesc.getName());
        }
    }
    String[] result = new String[emptyNames.size()];
    return emptyNames.toArray(result);
}
You can write a custom update query that updates only particular fields:
@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery(
            "update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
As some of the comments pointed out, using PATCH instead of PUT resolved the issue. I appreciate all the inputs. The following is from the Spring Data REST documentation:
"The PUT method replaces the state of the target resource with the supplied request body.
The PATCH method is similar to the PUT method but partially updates the resources state."
https://docs.spring.io/spring-data/rest/docs/current/reference/html/#customizing-sdr.hiding-repository-crud-methods
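As a client-side sketch of the difference (the /managers/1 URL and the username field are hypothetical; RestTemplate's default request factory cannot send PATCH, so an Apache HttpComponents based factory is used, which requires the Apache HttpClient dependency):
RestTemplate restTemplate = new RestTemplate(new HttpComponentsClientHttpRequestFactory());
Map<String, Object> partialUpdate = new HashMap<>();
partialUpdate.put("username", "newName"); // only the fields being changed are sent
restTemplate.patchForObject("http://localhost:8080/managers/1", partialUpdate, Void.class);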
Also, I like @Tran Quoc Vu's answer, but I am not implementing it for now since I don't have to use a custom controller. If there is some logic involved when updating the entity (e.g. validation), I am in favor of using the custom controller.

Weak reference and self refreshing cache manager

Sorry for the long question; I need to present the environment, otherwise you may misunderstand my issue.
Current state
I have a cache manager<K, V> that, for a given object of class K, returns a holder parametrized by the type V, representing the value that a web service associates with the corresponding K.
Holder
The Holder classes manage the fetch, synchronization, and scheduling of the next fetch, because the cache is designed for multiple parallel calls. The data fetched from the web service has an expiry date (provided in the header), after which the holder can fetch it again and schedules itself for the next expiry. I have 3 classes (for list, map, and other), but they are all used the same way. The Holder<V> class has 5 methods, 2 for direct access and 3 for IoC access:
void waitData() waits until the data has been fetched at least once. Internally it uses a CountDownLatch.
V copy() waits for the data to be fetched at least once, then returns a copy of the cached V. Simple items are returned as they are, while more complex ones (e.g. a Map of the prices in a given shop, referenced by furniture id) are copied in a synchronized loop (to avoid another fetch() corrupting the data).
void follow(JavaFX.Listener<V>) registers a new listener of V to be notified of modifications to the holder's data. If the holder has already received data, the listener is notified of this data as if it were new.
void unfollow(JavaFX.Listener<V>) unregisters a previously registered listener.
Observable asObservable() returns an Observable, which allows the holder to be used e.g. in a JavaFX GUI.
Typically this allows me to do things like streaming multiple data items in parallel in adequate time, e.g.
Stream.of(1l, 2l, 3l).parallel().map(cache::getPrice).mapToInt(p->p.copy().price).min();
or to build much more complex Bindings in JavaFX, e.g. when the price depends on the number of items you want to purchase.
Self Scheduling
The Holder class contains a SelfScheduling<V> object that is responsible for actually fetching the data, putting it in the holder, and rescheduling itself after the data expires.
The SelfScheduling uses a ScheduledExecutorService in the cache to schedule its own fetch() method. It starts by scheduling itself after 0 ms, rescheduling itself after 10 s on error, or after the expiry if new data was fetched. It can be paused, resumed, and stopped, and it is started on creation.
This is the behavior I want to modify: I want the self scheduler to remove the Holder from the cache on expiry, if the holder is not used anywhere in the code.
Cache manager
Just for information, my cache manager consists of a Map<K, Holder<V>> cachedPrices that holds the cache data, and a method getPrice(K) that synchronizes over the cache if the holder is missing, creates the holder if required (with a double check to avoid unnecessary synchronization), and returns the holder.
Global Code
Here is an example of what my code looks like:
public class CacheExample {

    public static class Holder<T> {
        SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
        // real code removed

        T copy() {
            return null;
        }

        Observable asObservable() {
            return null;
        }

        void follow(ChangeListener<? super T> listener) {
        }
    }

    public static class SelfScheduled implements Runnable {
        // should use enum
        private Object state = "start";

        public void schedule(long ms) {
            // check state, sync, etc.
        }

        @Override
        public void run() {
            long next = fetch();
            schedule(next);
        }

        public long fetch() {
            // set the value in the holder
            // return the next expiry
            return 0;
        }
    }

    public Map<Long, Holder<Object>> cachePrices = new HashMap<>();

    public Holder<Object> getPrice(long param) {
        Holder<Object> ret = cachePrices.get(param);
        if (ret == null) {
            // sync, re-check, etc.
            synchronized (cachePrices) {
                ret = cachePrices.get(param);
                if (ret == null) {
                    ret = new Holder<>();
                    // should be the fetch() call instead of null
                    makeSchedule(ret.data, null);
                }
            }
        }
        return ret;
    }

    public void makeSchedule(SimpleObjectProperty<Object> data, Runnable run) {
        // code removed.
        // creates a selfscheduler with fetch method and the data to store the result.
    }
}
Expected modifications
As I wrote above, I want to modify the way the cache holds the data in memory.
In particular, I see no reason to maintain a huge number of self scheduling entities fetching data when that data is no longer used. If the expiry is 5 s (some web services ARE that short), and I cache 1000 entries (that's a very low value), then that means I will make 200 fetch() calls per second for no reason.
What I expect is that, when the Holder is no longer used, the self scheduler stops itself and, instead of fetching data, actually removes the holder from the cache. Example:
Holder<Price> p = cache.getPrice(1);
// here, if fetch() is called, it should fetch the data
p.copy().price;
// now the price is no longer used, so on the next fetch() it should remove p from the cache.
// If that happens, and later I re-enter this code, the holder and the self scheduler will be re-created.
Holder<Price> p2 = cache.getPrice(22);
mylist.add(p2);
// now there is a strong reference to this price, so the fetch() method will keep scheduling the self scheduler
// until mylist is no longer strongly referenced.
Incorrect
However, my knowledge of the adequate technologies in that field is limited. From what I understand, I should use a weak reference in the cache manager and in the self scheduler to know when the holder is no longer strongly referenced (typically, start fetch() by checking whether the reference has become null, in which case just stop). However, this would lead to the holder being GC'd BEFORE the next expiry, which I don't want: some data have a very long expiry and are only used in a simple method, e.g. cache.getShopLocation() should not be GC'd just after the value returned by copy() is used.
Thus, this code is incorrect:
public class CacheExampleIncorrect {

    public static class Holder<T> {
        SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
        // real code removed

        T copy() {
            return null;
        }

        Observable asObservable() {
            return null;
        }

        void follow(ChangeListener<? super T> listener) {
        }
    }

    public static class SelfScheduled<T> implements Runnable {
        WeakReference<Holder<T>> holder;
        Runnable onDelete;

        public void schedule(long ms) {
            // check state, sync, etc.
        }

        @Override
        public void run() {
            Holder<T> h = holder.get();
            if (h == null) {
                onDelete.run();
                return;
            }
            long next = fetch(h);
            schedule(next);
        }

        public long fetch(Holder<T> h) {
            // set the value in the holder
            // return the next expiry
            return 0;
        }
    }

    public Map<Long, WeakReference<Holder<Object>>> cachePrices = new HashMap<>();

    public Holder<Object> getPrice(long param) {
        WeakReference<Holder<Object>> h = cachePrices.get(param);
        Holder<Object> ret = h == null ? null : h.get();
        if (h == null) {
            synchronized (cachePrices) {
                h = cachePrices.get(param);
                ret = h == null ? null : h.get();
                if (ret == null) {
                    ret = new Holder<>();
                    h = new WeakReference<>(ret);
                    // should be the fetch() call instead of null
                    SelfScheduled<Object> sched = makeSchedule(h, null);
                    cachePrices.put(param, h);
                    // should be synced on cachedprice
                    sched.onDelete = () -> cachePrices.remove(param);
                }
            }
        }
        return ret;
    }

    public <T> SelfScheduled<T> makeSchedule(WeakReference<Holder<Object>> h, Runnable run) {
        // creates a selfscheduler with fetch method and the data to store the result.
        return null;
    }
}

Detect objects to be deleted from Realm database

In my project I'm using Realm to store data from an API.
Before writing objects to Realm I'd like to check which objects are new (don't exist in the database) and which objects should be deleted (exist in the database, but not in the API response).
To check for new objects I iterate through the API response and use a simple Realm query to check which object is new:
for (Follower follower : results.data) {
    Follower followerFromDb = realm.where(Follower.class).equalTo("id", follower.id).findFirst();
    if (followerFromDb == null) {
        Log.d("REALM", "Object is not in the DB");
    }
}
My problem is: how do I efficiently check which objects should be deleted from the database?
I have a pretty nice trick for deleting objects that are not in the API response: I add an indexed field @Index private boolean isBeingSaved; to my RealmObject:
public class Thingy extends RealmObject {
    //...
    @Index
    private boolean isBeingSaved;
}
Then as I map the API response to RealmObjects, I set this to true:
ApiResponse apiResponse = retrofitService.getSomething();
Thingy thingy = new Thingy();
thingy.set/*...*/;
thingy.setIsBeingSaved(true);
realm.insertOrUpdate(thingy);
Afterwards, this flag is true for every element that was in the response, so you can delete everything where it is still false:
realm.where(Thingy.class)
.equalTo(ThingyFields.IS_BEING_SAVED, false)
.findAll()
.deleteAllFromRealm();
Then you'll need to iterate over the remaining objects and set their boolean field back to false:
for (Thingy thingy : realm.where(Thingy.class).findAll()) {
    thingy.setIsBeingSaved(false);
}
And it works!
Unfortunately I do not know of a more optimized solution; this is clearly O(N) because of the iteration at the end. But you can follow https://github.com/realm/realm-java/issues/762 for bulk update support.
In your particular case, the special flag is isBeingSaved, and I guess you don't want to delete them immediately, but this is how I did it when I needed this functionality.
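Put together, a minimal sketch of the whole sync inside one transaction (apiResponse.getThingies() is a hypothetical accessor for the Thingy objects mapped from the API response):
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        // 1. Save the API response; every saved object gets the flag set to true.
        for (Thingy apiThingy : apiResponse.getThingies()) {
            apiThingy.setIsBeingSaved(true);
            realm.insertOrUpdate(apiThingy);
        }
        // 2. Anything still flagged false was not in the response, so delete it.
        realm.where(Thingy.class)
             .equalTo(ThingyFields.IS_BEING_SAVED, false)
             .findAll()
             .deleteAllFromRealm();
        // 3. Reset the flag so the next sync starts from a clean state.
        for (Thingy thingy : realm.where(Thingy.class).findAll()) {
            thingy.setIsBeingSaved(false);
        }
    }
});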
It sounds like your database only contains data from the API, and the local data is defunct once an API call response is returned. If that's the case, you can simply delete everything in your database and add everything from the API response into your Realm.
Realm realm = Realm.getDefaultInstance();
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        realm.deleteAll();              // Delete everything
        object.deleteFromRealm();       // Delete a specific object
        realm.delete(SomeModel.class);  // Delete all objects of a specific type
    }
});
Remember to close your realm when you're done.
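A minimal sketch of that advice: Realm implements Closeable, so a try-with-resources block (or a finally block) guarantees the instance is released even if the transaction throws:
try (Realm realm = Realm.getDefaultInstance()) {
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            realm.deleteAll();
        }
    });
}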

Cannot seem to write custom object to Firebase database?

I'm working on a project using Firebase, which I've never used before, and I know almost nothing about Firebase itself, as the rest of my team has been responsible for most of the dealings with it. I'm writing a parser for some Excel data where I need to extract some specific data and then upload it to Firebase. The parsing is done, but I'm having trouble writing it to Firebase.
We have a sub-database called "families" in our root database that I need to write this data to. I have a class called RegistrationSheet which contains all the data in this particular spreadsheet broken up into objects to represent the structure of the JSON. I'm aware that you can write custom objects to the Firebase database and it will be converted to a JSON format that represents that data. I found a page detailing the different data types that can be written to the database and converted to JSON, and among them were Map and List. So here are my classes that represent the "families" database.
RegistrationSheet.java:
public class RegistrationSheet {
    public List<Object> families;

    public void addFamily(Family f) { families.add(f); }

    public RegistrationSheet() {
        families = new ArrayList<>();
    }

    public void writeToFirebase() {
        DatabaseReference ref = FirebaseDatabase.getInstance().getReference().child("families");
        ref.removeValue(); // Delete the data currently in the database so we can rewrite it.
        ref.setValue(families);
    }

    public File outputJSON() {
        return null;
    }
}
Family.java:
public class Family {
    public Map<String, Object> fields;

    public void addField(String str, Object obj) { fields.put(str, obj); }

    public Family() {
        fields = new HashMap<>();
    }
}
Child.java:
public class Child {
    public Map<String, Object> fields;

    public void addField(String str, Object obj) { fields.put(str, obj); }

    public Child() {
        fields = new HashMap<>();
    }
}
The families list contains Family objects, and one of the fields that can be added to the Map in a Family object is a List of Child objects. I figured that, because these are all objects that are valid to write to Firebase, simply writing the families list from the RegistrationSheet object would be enough:
public void writeToFirebase() {
    DatabaseReference ref = FirebaseDatabase.getInstance().getReference().child("families");
    ref.removeValue(); // Delete the data currently in the database so we can rewrite it.
    ref.setValue(families);
}
Is there something wrong with the structure of any of my classes or how I'm writing the data to Firebase? Because after executing this, the "families" sub-database disappears from Firebase and I have to restore it from my backup. It seems I have the correct DatabaseReference since removeValue() seems to be removing it, but why isn't it then writing the data from the families list?
I would appreciate any help that someone could provide.
Try the following code; it gives you the reason why the value is not being written.
public void writeToFirebase() {
    DatabaseReference ref = FirebaseDatabase.getInstance().getReference().child("families");
    ref.removeValue(); // Delete the data currently in the database so we can rewrite it.
    ref.setValue(object, new DatabaseReference.CompletionListener() {
        @Override
        public void onComplete(DatabaseError databaseError, DatabaseReference reference) {
            if (databaseError != null) {
                Log.e(TAG, "Failed to write message", databaseError.toException());
            }
        }
    });
}
Then you can debug your code based on the exception that is generated.

Create a new object if current field is null in database

I have a database model class Project, and it has two embedded object fields:
@Embedded
public ContacterInfo contacter = new ContacterInfo();

@Embedded
public CompanyInfo company = new CompanyInfo();
Because I don't want to check for null every time I use company and contacter, I decided to always create them.
What I expected is that, when there is nothing for contacter in the database, Java would create a new ContacterInfo for me, and then I could just use it for new data. But in fact, I found that contacter could still be set to null. I suspect that JPA loads null from the database and overrides my newly created object with it.
How can I fix this?
You can use JPA entity listeners (ObjectDB has a good explanation). For example:
@PostLoad
void onPostLoad() {
    if (contacter == null) {
        contacter = new ContacterInfo();
    }
    if (company == null) {
        company = new CompanyInfo();
    }
}
Every time JPA loads an instance of the current entity, onPostLoad() will be called.
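For context, a minimal sketch (re-using the Project entity from the question; field access and names are assumptions) showing where the callback can live; it can sit directly on the entity, as here, or in a separate class registered with @EntityListeners:
@Entity
public class Project {

    @Embedded
    public ContacterInfo contacter = new ContacterInfo();

    @Embedded
    public CompanyInfo company = new CompanyInfo();

    @PostLoad
    void onPostLoad() {
        // Replace a null column value with a fresh embedded object after loading.
        if (contacter == null) {
            contacter = new ContacterInfo();
        }
        if (company == null) {
            company = new CompanyInfo();
        }
    }
}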
Good luck!
