Q1: Please let me know the difference between the two implementations below (for getting a Realm instance). I want to know which one is faster and lighter on memory, and which is recommended.
1. Set and Get Realm as Default (with specific config)
private void setupCustomRealm() {
    if (!Utils.isStringHasText(databaseName)) {
        databaseName = DbManager.getInstance().getCurrentDb();
    }
    // get config
    RealmConfiguration config = getRealmConfigByDBName(databaseName);
    Realm.setDefaultConfiguration(config);
    Realm.compactRealm(config);
}

public Realm getCustomRealm() {
    if (firstTime) {
        setupCustomRealm();
    }
    return Realm.getDefaultInstance();
}
2. Get Realm from config directly
public Realm getCustomRealm(Context context) {
    if (!Utils.isStringHasText(databaseName)) {
        databaseName = DbManager.getInstance().getCurrentDb();
    }
    // get config
    RealmConfiguration config = getRealmConfigByDBName(context, databaseName);
    Realm.compactRealm(config);
    return Realm.getInstance(config);
}
Q2: In my application, we are now considering two ways of implementation.
1: We create a new Realm instance every time we need to do something with the database (on both worker threads and the UI thread) and close it when the task is done.
2: We create only one Realm instance and let it live as long as the application; when the application quits, we close that instance.
Please explain the advantages and disadvantages of each, and which way is recommended (my application uses a Service to handle database and network connections).
If I have 2 heavy tasks (whose transactions take a long time to complete), what is the difference between executing the 2 tasks with one Realm instance and executing them with 2 Realm instances on 2 separate threads (I mean each thread has one Realm instance that executes one of the 2 tasks), and which one is safer and faster?
What will happen if there is a problem while executing a transaction (for example, it stops responding or throws an exception)?
Note: I am not an official Realm person, but I've been using Realm for a while now.
Here's a TL;DR version
1.) It's worth noting a few things:
A given Realm file should be accessed only with the same RealmConfiguration throughout the application, so the first solution here is preferable (don't create a new config for each Realm).
Realm.compactRealm(RealmConfig) works only when there are no open Realm instances on any threads. So either at application start, or at application finish (personally I found that it makes start-up slower, so I call compactRealm() when my activity count reaches 0, which I manage with a retained fragment bound to the activity - but that's just me).
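One way to implement that "activity count reaches 0" bookkeeping without a retained fragment is `Application.ActivityLifecycleCallbacks` (API 14+). This is only a sketch of the idea; `realmConfig` is assumed to be a field initialized elsewhere:

```java
import android.app.Activity;
import android.app.Application;
import android.os.Bundle;

import io.realm.Realm;
import io.realm.RealmConfiguration;

public class App extends Application {
    private RealmConfiguration realmConfig; // assumed to be initialized elsewhere
    private int startedActivities = 0;

    @Override
    public void onCreate() {
        super.onCreate();
        registerActivityLifecycleCallbacks(new ActivityLifecycleCallbacks() {
            @Override public void onActivityStarted(Activity activity) {
                startedActivities++;
            }

            @Override public void onActivityStopped(Activity activity) {
                if (--startedActivities == 0) {
                    // No Activity is visible any more; compacting succeeds
                    // only if every Realm instance on every thread is closed.
                    Realm.compactRealm(realmConfig);
                }
            }

            // The remaining callbacks are not needed for the counter.
            @Override public void onActivityCreated(Activity a, Bundle b) {}
            @Override public void onActivityResumed(Activity a) {}
            @Override public void onActivityPaused(Activity a) {}
            @Override public void onActivitySaveInstanceState(Activity a, Bundle b) {}
            @Override public void onActivityDestroyed(Activity a) {}
        });
    }
}
```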
2.) It's worth noting that Realm.getInstance() on first call creates a thread-local cache (the cache is shared among Realm instances that belong to the same thread), and increments a counter to indicate how many Realm instances are open on that given thread. When that counter reaches 0 as a result of calling realm.close() on all instances, the cache is cleared.
It's also worth noting that the Realm instance is thread-confined, so you will need to open a new Realm on any thread where you use it. This means that if you're using it in an IntentService, you'll need to open a new Realm (because it's in a background thread).
It is extremely important to call realm.close() on Realm instances that are opened on background threads.
Realm realm = null;
try {
    realm = Realm.getDefaultInstance();
    // do database operations
} finally {
    if (realm != null) {
        realm.close();
    }
}
Or API 19+:
try (Realm realm = Realm.getDefaultInstance()) {
    // do database operations
}
When you call realm.close() on a particular Realm instance, it invalidates the results and objects that belong to it. So it makes sense either to open/close Realms in Activity onCreate() and onDestroy(), or to open one within the Application and share the same UI-thread Realm instance for queries on the UI thread.
(It's not as important to close the Realm instance on the UI thread unless you intend to compact it after all of them are closed, but you have to close Realm instances on background threads.)
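As a sketch of the onCreate()/onDestroy() pairing described above (`Foo` here is a placeholder model class):

```java
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import io.realm.Realm;
import io.realm.RealmResults;

public class FooActivity extends AppCompatActivity {
    private Realm realm;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        realm = Realm.getDefaultInstance(); // UI-thread instance scoped to this Activity
        RealmResults<Foo> foos = realm.where(Foo.class).findAll(); // usable until close()
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        realm.close(); // invalidates the results/objects tied to this instance
    }
}
```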
Note: calling RealmConfiguration realmConfig = new RealmConfiguration.Builder(appContext).build() can fail on some devices if you call it in Application.onCreate(), because getFilesDir() can return null, so it's better to initialize your RealmConfiguration only after the first activity has started.
With all that in mind, the answer to 2) is:
While I personally create a single instance of Realm for the UI thread, you'll still need to open (and close!) a new Realm instance for any background threads.
I use a single instance of Realm for the UI thread because it's easier to inject that way, and also because executeTransactionAsync()'s RealmAsyncTask gets cancelled if the underlying Realm instance is closed while it's still executing, so I didn't really want that to happen. :)
Don't forget that you need a Realm instance on the UI thread to show RealmResults<T> from Realm queries (unless you intend to use copyFromRealm(), which makes everything use more memory and is generally slower).
IntentService works like a normal background thread, so you should also close the realm instance there as well.
Both heavy tasks work whether they use the same Realm instance or two different ones (just make sure you have a Realm instance open on the given thread), but I'd recommend executing these tasks serially, one after the other.
If there's an exception during the transaction, you should call realm.cancelTransaction() (the docs mention beginTransaction()/commitTransaction(), but tend to forget about cancelTransaction()).
If you don't want to manually manage begin/commit/cancel, you should use realm.executeTransaction(new Realm.Transaction() { ... });, because it automatically calls begin/commit/cancel for you. Personally I use executeTransaction() everywhere because it's convenient.
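For illustration, a minimal executeTransaction() call could look like the sketch below (`Foo` and its `setName()` setter are hypothetical; Realm handles begin/commit for you and cancels if execute() throws):

```java
import io.realm.Realm;

public class FooWriter {
    // Call this on a thread that owns the given Realm instance.
    public static void insertFoo(Realm realm, final String name) {
        realm.executeTransaction(new Realm.Transaction() {
            @Override
            public void execute(Realm realm) {
                // beginTransaction() was already called; commitTransaction()
                // runs after this method returns, cancelTransaction() on exception.
                Foo foo = realm.createObject(Foo.class);
                foo.setName(name);
            }
        });
    }
}
```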
Related
In a Spring boot with data JPA project with Hibernate on a PostgreSQL database, multiple tasks are executing simultaneously. There's a TaskExecutor pool and a database connection pool.
Sometimes these tasks require some of the same objects (update: with object we mean objects stored in the database). In an attempt to ensure the tasks don't conflict (update: "don't try to access/modify the same records at the same time"), a locking service was created. A task gets a lock on the objects it requires and only releases the lock when the task is done, at which time the next task can get a lock on them and start its work.
In practice, this isn't working. One particular case of a record being deleted in task A and still being visible during part of task B keeps popping up. The actual exception is a foreign key constraint not being fulfilled: task B first selects the (deleted) object as one for which a relationship is to be created (so task B still sees the deleted object at this point!), but then upon creation of the relationship in task B it fails because the deleted object is no longer present.
After consultation with a colleague, the idea came up that flushing a repository isn't quite the same as committing. Hence, task A unlocks when its logic is done and changes are flushed, but the changed data has not yet actually been committed to the database. In the meantime, task B gets a lock and starts reading the data, and only a little later the commit for task A happens.
To make sure the lock of task A is only released after its database changes have been committed, I tried this code:
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCompletion(int status) {
        logDebugMessage("after completion method called");
        // Release the locks
        if (lockObtained) {
            logDebugMessage("releasing locks");
            lockService.releaseLocks(taskHolder.getId());
            logDebugMessage("done releasing locks");
        } else {
            logDebugMessage("no locks to release");
        }
    }
});
That, in itself, didn't make the issue disappear. The next brainwave was that the next task, task B, already has a transaction open while it's waiting for a lock. When it gets the lock, it reads using this already-open transaction and then, for some reason, reads data from before the commit. I admit this doesn't make much sense, but some desperation is beginning to set in. Anyway, with this additional idea, every task is now run as such:
@Override
public void run() {
    try {
        // Start the progress
        taskStartedDateTime = LocalDateTime.now();
        logDebugMessage("started");
        // Let the task determine which objects need to be locked
        List<LockableObjectId> objectsToLock = getObjectsToLock();
        // Try to obtain a lock
        lockObtained = locksObtained(objectsToLock);
        if (lockObtained) {
            // Do the actual task in a new transaction
            TransactionTemplate transactionTemplate = new TransactionTemplate(transactionManager);
            transactionTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
            transactionTemplate.execute(new TransactionCallbackWithoutResult() {
                @Override
                protected void doInTransactionWithoutResult(TransactionStatus status) {
                    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
                        @Override
                        public void afterCompletion(int status) {
                            logDebugMessage("after completion method called");
                            // Release the locks
                            if (lockObtained) {
                                logDebugMessage("releasing locks");
                                lockService.releaseLocks(taskHolder.getId());
                                logDebugMessage("done releasing locks");
                            } else {
                                logDebugMessage("no locks to release");
                            }
                        }
                    });
                    try {
                        // Run the actual task
                        runTask();
But this still doesn't resolve the issue.
Is it possible the commit to the database was done from Java side and task B reads the database before the commit is done in the database itself? Does the afterCompletion method get called after Java sent the commit, but before the database has actually executed it? If so, is there a way to get a database confirmation that the commit has actually been executed?
Or are we entirely on the wrong track here?
I am using three activities which are open at the same time. All activities retrieve data from SQLite. I don't close or reopen my connection when going from activity A -> B or from B -> C.
I just dispose of my db when the activity is being destroyed.
Activity A:

SqliteConnection db;

OnCreate method:
    db = new SqliteConnection(mypath);

OnDestroy:
    db.Dispose();
    db = null;

OnButtonClick:
    startActivity(new Intent(this, ActivityB));

The same code runs when I go from activity B -> C.
Inside the same activity I use SQLite plenty of times.
Is this good practice? Should I dispose of my connection immediately after each use? Or should I close my connection on pause and reopen it on resume? Or can I pass the same opened connection to the next activity? Which is the best approach?
Question modified
class databaseHelper
{
    private static SqliteConnection db;

    public static SqliteConnection openDatabase(Context context)
    {
        if (db == null)
            db = new SqliteConnection(mypath);
        return db;
    }
}
And inside my activity's onCreate():
databaseHelper.openDatabase(this).myquery....
I don't work with Java or Xamarin. Here is Kotlin code; it is pretty self-explanatory.
class DatabaseHelper { // Public class
    companion object { // This is the equivalent of Java's static.
        private var instance: YourDatabase? = null

        // This function returns the initialized DB instance.
        fun getDbInstance(context: Context): YourDatabase? {
            if (instance == null)
                instance = YourDatabase(context) // initializing the DB only one time
            return instance
        }
    }
}
Just create a public class and name it, for example, "DatabaseHelper". Inside the class, create one static variable of your database type, and a public function that returns it. Inside the function, first check whether the static instance is null; if it is, initialize it with your database instance. This way, when you need your database instance, just call the static function with a context and it will return the initialized database instance.
In Kotlin
DatabaseHelper.getDbInstance(this).yourDbFunction()
UPDATE
Since this answer took off, I would like to suggest an improvement to my previous solution. Instead of passing an Activity context to initialize the database, use the application context. If you give an Activity context to the static database instance, a memory leak will occur, because the database instance holds a strong reference to the Activity, and the Activity will NOT be eligible for garbage collection.
Proper usage:
val myDb = MyDb(applicationContext)
In general, we should encapsulate access to a local store in another class, such as a DAO/Repository/Service, instead of having it directly in the Activity. This promotes loose coupling between views and data/network access. It also decouples the lifecycle of your DB connection from the lifecycle of the currently running Activity, giving you more control and opportunity for reuse.
Try using a bound Service and keep your DB connection there. Because it is a bound Service, it will only be around while there is an Activity bound to it. Each Activity binds to the same instance of the Service, so you won't have duplicate connections. When no Activities are bound to it, it is automatically destroyed, destroying the connection along with it.
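A minimal sketch of such a bound Service might look like this (the database name and the use of the framework's SQLiteDatabase are assumptions; substitute your own SQLite wrapper):

```java
import android.app.Service;
import android.content.Intent;
import android.database.sqlite.SQLiteDatabase;
import android.os.Binder;
import android.os.IBinder;

public class DbService extends Service {
    public class LocalBinder extends Binder {
        public DbService getService() { return DbService.this; }
    }

    private final IBinder binder = new LocalBinder();
    private SQLiteDatabase db;

    @Override
    public void onCreate() {
        super.onCreate();
        // One shared connection for every bound Activity.
        db = openOrCreateDatabase("app.db", MODE_PRIVATE, null);
    }

    @Override
    public IBinder onBind(Intent intent) { return binder; }

    @Override
    public void onDestroy() {
        // Last Activity unbound: the connection goes away with the Service.
        db.close();
        super.onDestroy();
    }
}
```

Activities would bind with bindService(new Intent(this, DbService.class), connection, BIND_AUTO_CREATE) and run their queries through getService().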
For a more modern, structured approach, using Jetpack components, you can look at https://github.com/googlesamples/android-sunflower
Background
I am using Realm within my app. When data is loaded, it undergoes intense processing, so the processing occurs on a background thread.
The coding pattern in use is the Unit of Work pattern and Realm only exists within a repository under a DataManager. The idea here is that each repository can have a different database/file storage solution.
What I have tried
Below is an example of some similar code to what I have in my FooRespository class.
The idea here is that an instance of Realm is obtained, used to query the realm for objects of interest, return them and close the realm instance. Note that this is synchronous and at the end copies the objects from Realm to an unmanaged state.
public Observable<List<Foo>> getFoosById(List<String> fooIds) {
    Realm realm = Realm.getInstance(fooRealmConfiguration);
    RealmQuery<Foo> findFoosByIdQuery = realm.where(Foo.class);
    for (String id : fooIds) {
        findFoosByIdQuery.equalTo(Foo.FOO_ID_FIELD_NAME, id);
        findFoosByIdQuery.or();
    }
    return findFoosByIdQuery
        .findAll()
        .asObservable()
        .doOnUnsubscribe(realm::close)
        .filter(RealmResults::isLoaded)
        .flatMap(foos -> Observable.just(new ArrayList<>(realm.copyFromRealm(foos))));
}
This code is later used in conjunction with the heavy processing code via RxJava:
dataManager.getFoosById(foo)
    .flatMap(this::processtheFoosInALongRunningProcess)
    .subscribeOn(Schedulers.io()) // could be Schedulers.computation() etc.
    .subscribe(tileChannelSubscriber);
After reading the docs, my belief is that the above should work, as it is NOT asynchronous and therefore does not need a Looper thread. I obtain the Realm instance on the same thread, so it is not being passed between threads, and neither are the objects.
The problem
When the above is executed I get
Realm access from incorrect thread. Realm objects can only be accessed
on the thread they were created.
This doesn't seem right. The only thing I can think of is that the pool of Realm instances is giving me an existing instance created elsewhere on the main thread.
Kay so
return findFoosByIdQuery
    .findAll()
    .asObservable()
This happens on UI thread, because that's where you're calling it from initially
.subscribeOn(Schedulers.io())
Aaaaand then you're tinkering with them on Schedulers.io().
Nope, that's not the same thread!
As much as I dislike the approach of copying from a zero-copy database, your current approach is riddled with issues due to misuse of realmResults.asObservable(), so here's a spoiler for what your code should be:
public Observable<List<Foo>> getFoosById(List<String> fooIds) {
    return Observable.defer(() -> {
        try (Realm realm = Realm.getInstance(fooRealmConfiguration)) { // try-finally also works
            RealmQuery<Foo> findFoosByIdQuery = realm.where(Foo.class);
            for (String id : fooIds) {
                findFoosByIdQuery.equalTo(FooFields.ID, id);
                findFoosByIdQuery.or(); // please guarantee this works?
            }
            RealmResults<Foo> results = findFoosByIdQuery.findAll();
            return Observable.just(realm.copyFromRealm(results));
        }
    }).subscribeOn(Schedulers.io());
}
Note that you are creating the Realm instance outside of your entire RxJava processing pipeline, and thus on the main thread (or whichever thread you are on when calling getFoosById()).
Just because the method returns an Observable doesn't mean it runs on another thread. Only the processing pipeline of the Observable created by the last statement of your getFoosById() method runs on the correct thread (the filter(), the flatMap(), and all the processing done by the caller).
You thus have to ensure that the call to getFoosById() is already made on the thread used by Schedulers.io().
One way to achieve this is by using Observable.defer():
Observable.defer(() -> dataManager.getFoosById(foo))
    .flatMap(this::processtheFoosInALongRunningProcess)
    .subscribeOn(Schedulers.io()) // could be Schedulers.computation() etc.
    .subscribe(tileChannelSubscriber);
On Tomcat 6, I have a servlet running which accepts requests and passes these onto an external system.
There is a throttling limitation on the external system - if the number of requests exceed a certain number per second, then the external system responds with a Http 503.
No further requests may hit the external system for at least 2 seconds or else the external system will restart its throttling timer.
Initially, I detected the 503 response and did a Thread.sleep(2000), but that is wrong, as it doesn't prevent the servlet from servicing other requests on other threads; once a 503 response is detected, I need to block all threads for at least 2 seconds.
Ideally, I would prefer the blocked threads not to wake up all at the same time but, say, 100ms apart, so that requests would be handled in order.
I've looked at the Condition and ReentrantLock but unsure if these are appropriate.
Just create a global (static) date variable in the servlet. When you get a 503, set this variable from null to the current time. The servlet should always check this variable before contacting the external system: if the variable is null, or more than 2 seconds have passed, you can proceed; otherwise, block the thread (or throw an exception).
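A minimal sketch of that "global timestamp" gate, with hypothetical names, could look like this:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the suggestion above: a shared timestamp checked before each call.
public class ThrottleGate {
    private static final long BACKOFF_MS = 2000;
    private static final AtomicLong blockedSince = new AtomicLong(0); // 0 = no 503 seen yet

    // Call this when the external system answers with a 503.
    public static void recordRejection() {
        blockedSince.set(System.currentTimeMillis());
    }

    // Check this before contacting the external system.
    public static boolean mayProceed() {
        long since = blockedSince.get();
        return since == 0 || System.currentTimeMillis() - since >= BACKOFF_MS;
    }
}
```

Each request-handling thread would call ThrottleGate.mayProceed() first and either wait or fail fast, and call recordRejection() on a 503.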
This looks like calling Amazon services to me, and it can be managed quite easily.
You need a central, managed module for doing it, and it comes as a single module.
The important thing is that you should not hit the throttling limit at all; if you receive more requests than that limit allows, respond to your client to check the result later (i.e. handle the work asynchronously).
If the request is important business (such as capturing a payment), you also have to implement failover in the module, simply by persisting the request data to the database, so if anything fails, you still have the data.
If you are familiar with MQ architecture, that would be the best solution, since message queues are designed for exactly this kind of thing; but if you'd like your own, you can accept all requests and let the module manage the calls to the external system.
First, you may have an entity class which carries the request info, like:
class entity {
    public String id, srv, blah_blah;
}
Second, a stand-alone module for accepting and processing the requests, which also acts as the context for pending requests, like the following:
class business {
    private business() {} // fan of OOP? K, go for singleton

    private static final ArrayList<entity> ctx = new ArrayList<entity>();

    public static void accept_request(entity e) { _persist(e); ctx.add(e); }

    private static void _persist(entity e) { /* persist it to the db */ }

    private static void _done(entity e) { _remove(e); /* inform 3rd parties, if any */ }

    private static void _remove(entity e) { /* remove it from the db, it's done */ }

    private static int do_work(entity e) { /* do the real business */ return 0; } // 0 success, 1 fail, 2 ...
}
But it's not complete yet: now you need a way to call do_work(), so I suggest a background thread (it could be a daemon thread, too!).
So clients just push requests to this context-like class, and here is where we need the thread, like the following:
class business {
    // ...
    public static void accept_request(entity e) {
        _persist(e);
        synchronized (ctx) { ctx.add(e); ctx.notify(); }
    }
    // ...
    private static final Runnable r = new Runnable() {
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    synchronized (ctx) { while (ctx.isEmpty()) { ctx.wait(); } }
                    while (!ctx.isEmpty()) {
                        entity e;
                        synchronized (ctx) { e = ctx.remove(0); }
                        if (do_work(e) == 0) { _done(e); }
                        else { synchronized (ctx) { ctx.add(e); } } // give it another chance, maybe!
                        Thread.sleep(100 /* appropriate sleep time */);
                    }
                }
            } catch (Throwable wt) { /* interrupted or dying; let the thread end */ }
        }
    };

    private static Thread t;

    public static void start_module() { t = new Thread(r); t.start(); }
    public static void stop_module() { t.interrupt(); }
}
Tip: do not start the thread (by calling start_module()) outside the container's initialization process, or you will have a memory leak! The best solution is to start this module (once) from a servlet's init() method, and to halt the thread on application shutdown (destroy()).
In my web application, I maintain the list of URLs authorized for each user in a HashMap and compare the requested URL against it, allowing or denying the request as per the authorization. This map has the role as key and the URLs, in the form of a List, as value. My problem is where I should keep this map.
In the session: it may contain hundreds of URLs, and that can increase the burden on the session.
In a cache populated at application load: the URLs may be modified on the fly, and then I would need to resync by restarting the server.
In a cache that updates periodically: an application-level cache that refreshes periodically.
I need a well-optimized approach that serves this purpose; please help me with the same.
I would prefer to make it a singleton class and have a thread that updates it periodically. The thread will maintain the state of the cache; it is started when you get the first instance of the cache.
public class CacheSingleton {

    private static CacheSingleton instance = null;

    private HashMap<String, Role> authMap;

    protected CacheSingleton() {
        // Exists only to defeat instantiation.
        // Start the thread that maintains your map.
    }

    // synchronized: servlets are multithreaded, so lazy init must be thread-safe.
    public static synchronized CacheSingleton getInstance() {
        if (instance == null) {
            instance = new CacheSingleton();
        }
        return instance;
    }

    // Add your cache logic here,
    // like getRole(), checkURL(), etc.
}
Wherever you need it in your code, you can get the cached data:
CacheSingleton.getInstance().yourMethod();
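If you'd rather not manage the refresh thread by hand, the same "update periodically" idea can be sketched with a ScheduledExecutorService (all names here are hypothetical; the Supplier stands in for your "load roles and URLs from the database" code):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class UrlAuthCache {
    private final Map<String, List<String>> authMap = new ConcurrentHashMap<>();
    private final Supplier<Map<String, List<String>>> loader; // e.g. reads role -> URLs from the DB
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "auth-cache-refresh");
                t.setDaemon(true); // don't keep the container's JVM alive
                return t;
            });

    public UrlAuthCache(Supplier<Map<String, List<String>>> loader, long periodSeconds) {
        this.loader = loader;
        refresh(); // initial load
        scheduler.scheduleAtFixedRate(this::refresh, periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }

    public final void refresh() {
        Map<String, List<String>> fresh = loader.get();
        authMap.keySet().retainAll(fresh.keySet()); // drop roles removed on the fly
        authMap.putAll(fresh);
    }

    public boolean isAuthorized(String role, String url) {
        List<String> urls = authMap.get(role);
        return urls != null && urls.contains(url);
    }
}
```

Readers see a consistent-enough view via the ConcurrentHashMap, and no server restart is needed when the URLs change.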