Where to maintain the cache in application - java

Hello. In my web application I maintain a list of authorized URLs per user in a HashMap, compare the requested URL against it, and respond according to the authorization. The Map has Role as its key and a List of URLs as its value. My problem is where I should keep this Map:
In the session: it may hold hundreds of URLs, which increases the memory burden of each session.
In a cache populated at application loading: the URLs may be modified on the fly, and then I would need to restart the server to resync the cache.
In a cache that updates periodically: an application-level cache that refreshes itself on a schedule.
I'm looking for a well-optimized approach that serves this purpose; please help me with the same.

I would prefer to make it a singleton class and have a thread that updates it periodically. The thread maintains the state of the cache and is started when the first instance of the cache is obtained.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheSingleton {

    private static CacheSingleton instance = null;

    // Role -> authorized URLs, as described in the question.
    // ConcurrentHashMap so request threads and the refresh thread
    // can access it safely.
    private final Map<String, List<String>> authMap = new ConcurrentHashMap<>();

    protected CacheSingleton() {
        // Exists only to defeat instantiation.
        // Start the thread that maintains your map here.
    }

    // synchronized so two threads racing on the first call
    // cannot create two instances
    public static synchronized CacheSingleton getInstance() {
        if (instance == null) {
            instance = new CacheSingleton();
        }
        return instance;
    }

    // Add your cache logic here,
    // e.g. getRole(), checkURL(), ...
}
Wherever you need it in your code, you can then get the cached data:
CacheSingleton.getInstance().yourMethod();
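
A minimal sketch of the refresh thread, using a ScheduledExecutorService inside CacheSingleton; loadRoleUrlsFromDb() is a hypothetical DAO call that rebuilds the role-to-URLs map:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Called from the CacheSingleton constructor.
private void startRefreshThread() {
    ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "auth-cache-refresher");
                t.setDaemon(true); // don't keep the JVM alive just for the cache
                return t;
            });
    refresher.scheduleAtFixedRate(() -> {
        // loadRoleUrlsFromDb() is a hypothetical DAO call
        Map<String, List<String>> fresh = loadRoleUrlsFromDb();
        authMap.keySet().retainAll(fresh.keySet()); // drop removed roles
        authMap.putAll(fresh);                      // add/update the rest
    }, 0, 5, TimeUnit.MINUTES); // refresh every 5 minutes
}

Because authMap is a ConcurrentHashMap, readers never block while the refresh runs; they may briefly observe a partially updated map, so swap in a fresh map reference instead if you need atomic updates.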

How to create a reusable Map

Is there a way to populate a Map once from the DB data (through a Mongo repository) and reuse it when required from multiple classes, instead of hitting the database through the repository each time?
As per your comment, what you are looking for is a caching mechanism. Caches are components that keep data in memory, as opposed to files, databases or other media, so as to allow fast retrieval of information (at the cost of a higher memory footprint).
There are various tutorials online, but usually all caches have the following behaviour:
1. They are key-value pair structures.
2. Each entity living in the cache also has a Time To Live (TTL), that is, how long it will be considered valid.
You can implement this in the repository layer, so the cache mechanism will be transparent to the rest of your application (but you might want to consider exposing functionality that allows you to clear/invalidate part or all of the cache).
So basically, when a query comes to your repository layer, check in the cache. If it exists in there, check the time to live. If it is still valid, return that.
If the key does not exist or the TTL has expired, you add/overwrite the data in the cache. Keep in mind that when you update the data model yourself, you should also invalidate the cache accordingly so that fresh data will be pulled from the DB on the next call.
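A minimal sketch of that repository-layer pattern; the class and method names (CachingRepository, findById, loadFromDb) are illustrative, not from any particular library:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingRepository {

    private static final long TTL_MILLIS = 30_000; // 30s time-to-live

    private static final class Entry {
        final Object value;
        final long loadedAt;
        Entry(Object value, long loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public Object findById(String id) {
        Entry e = cache.get(id);
        long now = System.currentTimeMillis();
        if (e != null && now - e.loadedAt < TTL_MILLIS) {
            return e.value;                   // cache hit, still valid
        }
        Object fresh = loadFromDb(id);        // hit the database
        cache.put(id, new Entry(fresh, now)); // add/overwrite the entry
        return fresh;
    }

    public void invalidate(String id) {
        cache.remove(id); // call after updating the data model yourself
    }

    private Object loadFromDb(String id) {
        // hypothetical: delegate to the real Mongo repository here
        return null;
    }
}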
You can declare the map field as public static, which would allow application-wide access via ClassLoadingData.mapField.
I think a better solution, if I understood the problem correctly, would be a memoized function, that is, a function that stores the result of its call. Here is a sketch of how this could be done (note this does not handle possible synchronization problems in a multi-threaded environment):
class ClassLoadingData {

    private static Map<KeyType, ValueType> memoizedValues = new HashMap<>();

    public Map<KeyType, ValueType> getMyData() {
        if (memoizedValues.isEmpty()) { // you can use a more complex check to handle data refresh
            populateData();
        }
        return memoizedValues;
    }

    private void populateData() {
        // do your query, and put the result into memoizedValues
    }
}
Premise: I suggest you use an object-relational mapping tool like Hibernate in your Java project to map the object-oriented domain model to a relational database, and let the tool handle the caching mechanism implicitly. Hibernate specifically implements a multi-level caching scheme (take a look at the following link for more information: https://www.tutorialspoint.com/hibernate/hibernate_caching.htm ).
Regardless of my suggestion in the premise, you can also manually create a singleton class that is used by every class in the project that interacts with the DB:
import java.util.Map;
import org.joda.time.DateTime;
import org.joda.time.Seconds;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MongoDBConnector {

    private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBConnector.class);

    private static MongoDBConnector instance;

    //Cache period in seconds
    public static int DB_ELEMENTS_CACHE_PERIOD = 30;

    //Latest cache update time
    private DateTime latestUpdateTime;

    //The cached data layer from the DB
    private Map<KType, VType> elements;

    private MongoDBConnector() {
    }

    public static synchronized MongoDBConnector getInstance() {
        if (instance == null) {
            instance = new MongoDBConnector();
        }
        return instance;
    }
}
Here you can then define a load method that updates the map with the values stored in the DB, and a write method that writes values to the DB, with the following characteristics:
1- These methods should be synchronized in order to avoid issues if multiple calls are performed concurrently.
2- The load method should apply a cache-period logic (maybe with a configurable period) to avoid loading the data from the DB on every call.
Example: suppose your cache period is 30 seconds. If 10 reads are performed from different points of the code within 30 seconds, you will load data from the DB only on the first call, while the others will read from the cached map, improving performance.
Note: the greater the cache period, the better the performance of your code; but if the DB is also modified externally (from another tool or manually), you will create inconsistencies with the cache. So choose the value that fits you best.
public synchronized Map<KType, VType> getElements() throws ConnectorException {
    final DateTime currentTime = new DateTime();
    if (latestUpdateTime == null || (Seconds.secondsBetween(latestUpdateTime, currentTime).getSeconds() > DB_ELEMENTS_CACHE_PERIOD)) {
        LOGGER.debug("Cache is expired. Reading values from DB");
        //Read from DB and update the cache
        //....
        latestUpdateTime = currentTime;
    }
    return elements;
}
3- The store method should automatically update the cache if the insert is performed correctly, regardless of whether the cache period has expired:
public synchronized void storeElement(final VType object) throws ConnectorException {
    //Insert the object into the DB (throws a ConnectorException if the insert fails)
    //...
    //Update the cache regardless of the cache period
    loadElementsIgnoreCachePeriod();
}
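loadElementsIgnoreCachePeriod() is not shown in the answer; a possible shape for it, as a sketch (the actual DB read stays elided):

private synchronized void loadElementsIgnoreCachePeriod() throws ConnectorException {
    LOGGER.debug("Forcing cache reload from DB");
    //Read from DB and update the elements map
    //...
    latestUpdateTime = new DateTime();
}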
Then you can get the elements from any point in your code as follows:
Map<KType, VType> liveElements = MongoDBConnector.getInstance().getElements();

Understanding clients and servers

I'm pretty much new to Ignite and have a question about the responsibilities of client and server nodes. As far as I understood from the documentation, client nodes are very small machines, so it's not their purpose to perform heavy cache operations. For instance, I need to load data from some persistence store, perform some heavy cache-related computations, and put the resulting data into the cache. It looks like this:
I.
//This is on a client node
public class Loader {

    private DataSource dataSource;

    @IgniteInstanceResource
    private Ignite ignite;

    public void load() {
        String key;
        String value;
        //retrieve key and value from the dataSource
        IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("cache");
        String result;
        //process the value
        streamer.addData(key, result); //<---------1
    }
}
The question is about //1. Is it the client node's responsibility to process the loaded data and put it into the cache? I actually intend to do the following: create a task for each loaded String key and String value and perform all evaluation and cache-related operations on a server node, like the following:
II.
public class LoaderJob extends ComputeJobAdapter {

    private String key;
    private String value;

    @Override
    public Object execute() {
        //perform all computation and putting into cache here
        //and return Tuple2(key, result);
    }
}

public class LoaderTask extends ComputeTaskSplitAdapter<Void, Void> {

    @IgniteInstanceResource
    private Ignite ignite;

    //... (split() elided)

    @Override
    public Void reduce(List<ComputeJobResult> results) throws IgniteException {
        results.stream().forEach(result -> {
            Tuple2<String, String> jobResult = result.getData();
            ignite.dataStreamer("cache").addData(jobResult._1, jobResult._2);
        });
        return null;
    }
}
In the second case, all the client does is load data from the persistence store and then publish tasks to the servers.
What is the common way of doing things like this?
It depends on the amount of data and the computational complexity. With a big amount of data you can load the data right from a server node, without using a client.
The simplest approach is to use a DataStreamer; you only need to add the loading of data from your persistent store and do the calculations before handing the data to the streamer.
It also depends on other things, like the client configuration (CPU, RAM, network) and the connection between the client and the server nodes. If the client has a good configuration, for example comparable to a server, and it's on the same network as the server nodes, then it's not a problem to load and compute on the client and only then stream the data into the cache.
Creating a dedicated job for each piece of data yourself is a bad idea. The streamer already does something like this internally (data is buffered and sent to the specific node where it will be stored).
client nodes are very small machines, so it's not their purpose to perform some heavy cache operations
This is not a true statement. You are able to give the client JVM enough resources to load the data.
You should create one data streamer on the client side and load the data from that machine. The streamer instance is also thread-safe, so you can load data from several threads simultaneously.
IgniteDataStreamer is the fastest way to load data into a cache. So the first case is valid.
I think the second case makes sense only if the data is gathered from the persistence store on the server nodes and the client sends just the parameters of the load.
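A minimal, self-contained sketch of the first case; the cache name "cache" and the loadFromStore()/process() helpers are illustrative stand-ins:

import java.util.Collections;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamingLoader {

    public static void main(String[] args) {
        Ignition.setClientMode(true); // start this JVM as a client node
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("cache"); // make sure the cache exists
            try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("cache")) {
                for (Map.Entry<String, String> e : loadFromStore()) {
                    streamer.addData(e.getKey(), process(e.getValue()));
                }
            } // close() flushes the remaining buffered entries
        }
    }

    private static Iterable<Map.Entry<String, String>> loadFromStore() {
        return Collections.emptyList(); // stand-in for the real persistence store
    }

    private static String process(String value) {
        return value; // stand-in for the heavy computation
    }
}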

Deeper understanding about how Realm works?

Q1: Please let me know the difference between the two ways of getting a Realm instance shown below. I want to know which one is faster and lighter on memory, and which is recommended.
1. Set and Get Realm as Default (with specific config)
private void setupCustomRealm() {
    if (!Utils.isStringHasText(databaseName)) {
        databaseName = DbManager.getInstance().getCurrentDb();
    }
    // get config
    RealmConfiguration config = getRealmConfigByDBName(databaseName);
    Realm.setDefaultConfiguration(config);
    Realm.compactRealm(config);
}

public Realm getCustomRealm() {
    if (firstTime) {
        setupCustomRealm();
    }
    return Realm.getDefaultInstance();
}
2. Get Realm from config directly
public Realm getCustomRealm(Context context) {
    if (!Utils.isStringHasText(databaseName)) {
        databaseName = DbManager.getInstance().getCurrentDb();
    }
    // get config
    RealmConfiguration config = getRealmConfigByDBName(context, databaseName);
    Realm.compactRealm(config);
    return Realm.getInstance(config);
}
Q2: In my application we are now considering two ways of implementation:
1: We create a new Realm instance every time we need to do something with the database (on both worker threads and the UI thread) and close it when the task is done.
2: We create only one Realm instance and let it live as long as the application; when the application quits we close that instance.
Please explain the advantages and disadvantages of each one and which way is recommended (my application uses a Service to handle the database and the network connection).
If I have 2 heavy tasks (each taking a long time to complete its transaction), what is the difference between executing the 2 tasks with one Realm instance and executing them with 2 Realm instances on 2 separate threads (one Realm instance per thread, each instance executing one of the 2 tasks), and which one is safer and faster?
What will happen if there is a problem while executing a transaction (for example, it stops responding or throws an exception)?
Note: I am not an official Realm person, but I've been using Realm for a while now.
Here's a TL;DR version
1.) It's worth noting a few things:
A given Realm file should be accessed with the same RealmConfiguration throughout the application, so the first solution here is preferable (don't create a new config for each Realm).
Realm.compactRealm(realmConfig) works only when there are no open Realm instances on any thread. So call it either at application start or at application finish (personally I found that it makes start-up slower, so I call compactRealm() when my activity count reaches 0, which I track with a retained fragment bound to the activity - but that's just me).
2.) It's worth noting that Realm.getInstance() on its first call creates a thread-local cache (the cache is shared among Realm instances that belong to the same thread) and increments a counter indicating how many Realm instances are open on that given thread. When that counter reaches 0 as a result of calling realm.close() on all instances, the cache is cleared.
It's also worth noting that the Realm instance is thread-confined, so you will need to open a new Realm on any thread where you use it. This means that if you're using it in an IntentService, you'll need to open a new Realm (because it's in a background thread).
It is extremely important to call realm.close() on Realm instances that are opened on background threads.
Realm realm = null;
try {
    realm = Realm.getDefaultInstance();
    //do database operations
} finally {
    if (realm != null) {
        realm.close();
    }
}
Or API 19+:
try (Realm realm = Realm.getDefaultInstance()) {
    //do database operations
}
When you call realm.close() on a particular Realm instance, it invalidates the results and objects that belong to it. So it makes sense either to open/close Realms in the Activity's onCreate() and onDestroy(), or to open it in the Application and share the same UI-thread Realm instance for queries on the UI thread.
(It's not as important to close the Realm instance on the UI thread unless you intend to compact it after all of them are closed, but you have to close Realm instances on background threads.)
Note: calling RealmConfiguration realmConfig = new RealmConfiguration.Builder(appContext).build() can fail on some devices if you call it in Application.onCreate(), because getFilesDir() can return null, so it's better to initialize your RealmConfiguration only after the first activity has started.
With all that in mind, the answer to 2) is:
While I personally create a single instance of Realm for the UI thread, you'll still need to open (and close!) a new Realm instance for any background threads.
I use a single instance of Realm for the UI thread because it's easier to inject that way, and also because executeTransactionAsync()'s RealmAsyncTask gets cancelled if the underlying Realm instance is closed while it's still executing, so I didn't really want that to happen. :)
Don't forget that you need a Realm instance on the UI thread to show RealmResults<T> from Realm queries (unless you intend to use copyFromRealm(), which makes everything use more memory and is generally slower).
An IntentService works like a normal background thread, so you should close the Realm instance there as well.
Both heavy tasks work whether it's the same Realm instance or another one (just make sure you have a Realm instance on that given thread), but I'd recommend executing these tasks serially, one after the other.
If there's an exception during a manually managed transaction, you should call realm.cancelTransaction() (the docs talk about begin/commit, but tend to forget about cancel).
If you don't want to manually manage begin/commit/cancel, you should use realm.executeTransaction(new Realm.Transaction() { ... });, because it automatically calls begin/commit/cancel for you. Personally I use executeTransaction() everywhere because it's convenient.
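For reference, a minimal sketch of the two styles; the cancel-on-exception handling is exactly what executeTransaction() automates:

// Manual management: begin/commit, and cancel on failure.
realm.beginTransaction();
try {
    //modify Realm objects here
    realm.commitTransaction();
} catch (RuntimeException e) {
    realm.cancelTransaction(); // roll back on any failure
    throw e;                   // rethrow after rolling back
}

// The convenient equivalent: begin/commit/cancel are handled for you.
realm.executeTransaction(new Realm.Transaction() {
    @Override
    public void execute(Realm realm) {
        //modify Realm objects here; throwing cancels the transaction
    }
});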

Why are my objects garbage-collected in this class

I have a class that loads data from an API and stores/handles the results as a POJO. It looks roughly like this (some things that don't concern the question are omitted):
public class ResultLoader {

    Search search;
    Result result;

    static ResultLoader instance;

    private ResultLoader() {
    }

    public static synchronized ResultLoader getInstance() {
        if (instance == null) {
            instance = new ResultLoader();
        }
        return instance;
    }

    public void init(@NonNull Search search) {
        this.search = search;
    }
}
The Result object can get so large that it can no longer be passed between Activities via Intents, so, as you can see, I designed ResultLoader as a singleton so that every class can access the Result object.
I simulate the Android device running low on memory by limiting background processes to one, then switching between some apps and going back to my app.
On my ResultLoader instance, either the Result object became null or the instance was recreated; I checked this with
ResultLoader.getInstance().getResult() == null
How can this be and what can I do to prevent this?
Your objects were GC'ed because Android killed your app, destroying all loaded classes along with their static data.
The next time you go back to your app, a new app process is created and the ResultLoader class is loaded again with its instance field equal to null. When you then fetch the instance via getInstance(), a new instance is created with an empty result.
You should save your Result to persistent storage, e.g. when you get the result, save it to a file. When you create the ResultLoader, check whether that file exists and load the data from it, or load the result again if your app got killed.
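A minimal sketch of that file-based fallback, assuming the Result can be serialized to JSON with Gson (the library choice and the file name are assumptions):

import java.io.*;
import android.content.Context;
import com.google.gson.Gson;

// Inside ResultLoader: write the Result to a private file after loading it.
public void saveResult(Context context, Result result) throws IOException {
    try (Writer w = new FileWriter(new File(context.getFilesDir(), "result_cache.json"))) {
        new Gson().toJson(result, w);
    }
}

// Fall back to the file when the process was killed and the field is null.
public Result getResult(Context context) {
    if (result == null) {
        File f = new File(context.getFilesDir(), "result_cache.json");
        if (f.exists()) {
            try (Reader r = new FileReader(f)) {
                result = new Gson().fromJson(r, Result.class);
            } catch (IOException ignored) {
                // fall through: the caller should reload from the API
            }
        }
    }
    return result;
}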
In Android, your app can and will be recreated for numerous reasons (low memory being one of them).
See this answer on how to implement saved-instance-state behavior on a custom class.

How do I remotely invalidate user's servlet session on-the-fly?

I have a page that only accounts with alpha permissions may access. The JSP checks the session for an attribute named "AlphaPerm".
The problem I'm struggling with is that if I find a user is messing with/abusing the alpha-testing permissions, I want to stop him immediately. I can change his permissions in my database right away, but that doesn't stop the abuser right away.
A possible solution is checking my database every time my users do something, but I don't want to do that because it would slow the database down.
So how do I kill his session on the fly (creating an admin page is my plan, but how do I get the user's session object)? Basically I want to make an admin page so I can BAN a user.
You can keep references to user sessions by implementing an HttpSessionListener. This example shows how to implement a session counter, but you could also keep references to the individual sessions by storing them in a context-scoped collection. You could then access the sessions from your admin page, inspect their attributes, and invalidate some of them. This post may also have useful info.
Edit: Here's a sample implementation (not tested):
public class MySessionListener implements HttpSessionListener {

    public static Map<String, HttpSession> getSessionMap(ServletContext appContext) {
        Map<String, HttpSession> sessionMap = (Map<String, HttpSession>) appContext.getAttribute("globalSessionMap");
        if (sessionMap == null) {
            sessionMap = new ConcurrentHashMap<String, HttpSession>();
            appContext.setAttribute("globalSessionMap", sessionMap);
        }
        return sessionMap;
    }

    @Override
    public void sessionCreated(HttpSessionEvent event) {
        Map<String, HttpSession> sessionMap = getSessionMap(event.getSession().getServletContext());
        sessionMap.put(event.getSession().getId(), event.getSession());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent event) {
        Map<String, HttpSession> sessionMap = getSessionMap(event.getSession().getServletContext());
        sessionMap.remove(event.getSession().getId());
    }
}
You can then access the session map from any servlet:
Collection<HttpSession> sessions = MySessionListener.getSessionMap(getServletContext()).values();
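From the admin page you could then invalidate a specific session; a sketch, where the "sessionId" request parameter is an assumed part of the admin form:

// In the admin servlet: kill one user's session by its id.
String sessionId = request.getParameter("sessionId");
HttpSession target = MySessionListener.getSessionMap(getServletContext()).get(sessionId);
if (target != null) {
    target.invalidate(); // triggers sessionDestroyed(), which removes it from the map
}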
As far as I understand your question, checking against the DB on every request is definitely a bad thing.
Also, you must be comparing some values against some standard values to decide whether the user is messing around.
So, as an alternative to DB checking, you can store these values in the user's session and check against those.
Also, instead of creating an admin page (probably a JSP page), I would suggest using a ServletFilter to do this work, as sketched below.
One thing I would personally suggest is that instead of invalidating the whole session, you put some restrictions on the user, either for some time or until the next login (for example, restricting access to some resources).
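A minimal sketch of such a filter; the context-scoped "bannedUsers" set and the "userId" session attribute are hypothetical names:

import java.io.IOException;
import java.util.Set;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class BanFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpSession session = request.getSession(false);
        @SuppressWarnings("unchecked")
        Set<String> banned = (Set<String>) request.getServletContext().getAttribute("bannedUsers");
        if (session != null && banned != null
                && banned.contains((String) session.getAttribute("userId"))) {
            session.invalidate(); // kick the abuser out immediately
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}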
