Store variable in application scope (Java, GlassFish)

I'm trying to store a number in the application scope of a GlassFish web service.
The web service:
@WebService()
public class datacheck {
    // TODO 080 disable sql_log in the settings of hibernate
    // TODO 090 check todo's from webservice_1
    private int counter = 5;
When I request the counter variable, I get 5,
and
@WebMethod(operationName = "increaseCounter")
public Integer increaseCounter() {
    counter++;
    return counter;
}
returns 6, but
when I try this afterwards, I get 5 again:
@WebMethod(operationName = "getCounter")
public Integer getCounter() {
    return counter;
}
How do I store a variable that is available to all methods in the web service?

This depends on your use case and architecture to an extent. If every user should see the result of increaseCounter, then you could declare it static in your code:
private static int counter = 5;
This will only work if your application runs in a single JVM, though, and it requires careful thought about synchronization.
Alternatively, you could persist it externally (to a database or a file, for example).
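A minimal sketch of the static-field approach, using an AtomicInteger to sidestep explicit synchronization (the class and method names mirror the question but are illustrative, not a drop-in @WebService):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for the question's web service class.
class DataCheck {
    // static: one counter per JVM; AtomicInteger keeps concurrent requests safe
    private static final AtomicInteger counter = new AtomicInteger(5);

    public int increaseCounter() {
        return counter.incrementAndGet();
    }

    public int getCounter() {
        return counter.get(); // sees increments made through any instance
    }
}
```

Because the field is static, a second service instance (e.g. one created per request by the container) still observes the incremented value.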

Implementing the Singleton pattern should work: you will end up with the same instance in the whole JVM. Beware, though: writing to a singleton from different threads can become a contended lock, and there be dragons!
There's also ThreadLocal if you want to constrain an object to one thread (I believe GlassFish uses one thread per request, but don't quote me on that).
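A sketch of the singleton route, using an enum (which the JVM guarantees to instantiate exactly once) plus an AtomicInteger so concurrent writers don't corrupt the value; names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Enum-based singleton: one instance per JVM, thread-safe initialization for free.
enum CounterHolder {
    INSTANCE;

    private final AtomicInteger counter = new AtomicInteger(5);

    public int increment() {
        return counter.incrementAndGet();
    }

    public int current() {
        return counter.get();
    }
}
```

Any class in the application can then call CounterHolder.INSTANCE.increment() and see the same shared state.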


How to access variable from different class scope

I'm trying to create an application that uses the java_rosbridge library, but I am having issues with accessing and updating the variable status across class scopes.
Boolean isDoorbellRinging() {
    Boolean status = false;
    bridge.subscribe(SubscriptionRequestMsg.generate("/doorbell").setType("std_msgs/Bool").setThrottleRate(1)
            .setQueueLength(1), new RosListenDelegate() {
        public void receive(JsonNode data, String stringRep) {
            MessageUnpacker<PrimitiveMsg<String>> unpacker = new MessageUnpacker<PrimitiveMsg<String>>(
                    PrimitiveMsg.class);
            PrimitiveMsg<String> msg = unpacker.unpackRosMessage(data);
            logger.info(data.get("msg").get("data").asText());
            status = ((data.get("msg").get("data").asInt() > 0) ? true : false);
        }
    });
    return status;
}
It's receiving the data correctly, as I get the right output from logger.info(..) when not trying to access status. However, when I include the line status = ((data.get("msg")...
I'm currently receiving this error:
Local variables referenced from an inner class must be final or effectively final
What that message means is that in closures (lambdas and methods in anonymous classes), variables from the outer scope must not be reassigned after their definition. You can circumvent this using containers (collections, arrays, atomics, and so on). In this case, AtomicBoolean may come in handy: define your variable as an AtomicBoolean and use set instead of assignment. If you have to distinguish between null and false, use AtomicReference<Boolean>.
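A minimal sketch of that workaround: the status reference itself is never reassigned (so it stays effectively final and the closure compiles), while its contents can still change. The anonymous listener is shortened to a Runnable here:

```java
import java.util.concurrent.atomic.AtomicBoolean;

class CaptureDemo {
    static boolean runCallback() {
        AtomicBoolean status = new AtomicBoolean(false); // the local is never reassigned...
        Runnable callback = () -> status.set(true);      // ...only its contents change, so this compiles
        callback.run();
        return status.get();
    }

    public static void main(String[] args) {
        System.out.println(CaptureDemo.runCallback()); // true
    }
}
```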
'status' lives on a stack frame corresponding to the execution of isDoorbellRinging().
The 'receive' call will presumably execute at some future time. At that point, isDoorbellRinging() will have returned. There will be no 'status' to modify; it will have ceased to be.
By the same token, isDoorbellRinging() cannot 'now' return a value that will only be determined at some point in the future.
You need some way to handle the asynchronous nature of this.
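One way to bridge that asynchronous gap is a CompletableFuture: the callback completes the future, and the caller decides how long to block for the answer. This is a sketch; the spawned thread stands in for the rosbridge callback, and the names are illustrative, not the java_rosbridge API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class DoorbellDemo {
    // Simulates subscribe(): the "message" arrives later on another thread,
    // and the callback completes the future instead of writing to a dead local.
    static CompletableFuture<Boolean> subscribeToDoorbell() {
        CompletableFuture<Boolean> result = new CompletableFuture<>();
        new Thread(() -> result.complete(true)).start();
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Block (with a timeout) until the asynchronous answer exists.
        boolean ringing = subscribeToDoorbell().get(1, TimeUnit.SECONDS);
        System.out.println(ringing);
    }
}
```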

Spring cache value not being renewed when unless condition is not met

I'm having an issue with the unless condition of the Cacheable annotation.
From the documentation, I understand that the unless condition is verified after the method being annotated is called and the value the method returns is cached (and actually returned) only if the unless condition is not met. Otherwise, the cached value should be returned.
Firstly, is this assumption true?
EDIT:
[From the Spring documentation] As the name implies, @Cacheable is used to demarcate methods that are cacheable - that is, methods for which the result is stored in the cache so that, on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method.
[My understanding] So, for a given key, the method will always be executed until the unless condition is not met once. Then the cached value will be returned for all subsequent calls to the method.
To illustrate my issue, I tried to break down my code into four simple classes:
1) DummyObject, which represents the instances to be cached and retrieved. It's a timestamp wrapper that shows the last value that was cached. The toBeCached boolean is a flag that should be checked in the unless condition to know whether the returned instance should be cached or not.
2) DummyDAO, which returns DummyObject instances based on provided keys. Upon retrieval of an instance, the DAO checks when the last value was retrieved and verifies whether it should be cached or not (independently of what key is provided; it doesn't matter that this logic is "broken", as I'm always using the same key in my example). The DAO then marks the returned instance with the toBeCached flag. If the value is marked to be cached, the DAO also updates its lastRetrieved timestamp, as the instance should eventually be cached by the CachedDAO (because the unless condition won't be met).
3) DummyCachedDAO, which calls the DummyDAO to retrieve instances of DummyObject. If instances are marked toBeCached, it should cache the newly returned value; otherwise, it should return the previously cached value.
4) The Application, which retrieves a value (that will be cached), sleeps for a short time (not long enough for the cache duration to pass), retrieves a value (that should be the cached one), sleeps again (long enough for the cache duration to pass), and retrieves a value again (that should be a new value to be cached).
Unfortunately, this code does not work as expected as the retrieved value is always the original value that has been cached.
To ensure that the logic worked as expected, I checked whether the unless conditions were met or not by replacing retrieveTimestamp with retrieveTimestampBypass in the Application class. Since internal calls bypass the Spring proxy, the retrieveTimestamp method is actually executed, and whatever breakpoints or logs I put in are caught/shown.
What would cause the value to never be cached again? Does the cache need to be clean from previous values first?
public class DummyObject
{
    private long timestamp;
    private boolean toBeCached;

    public DummyObject(long timestamp, boolean toBeCached)
    {
        this.timestamp = timestamp;
        this.toBeCached = toBeCached;
    }

    public long getTimestamp()
    {
        return timestamp;
    }

    public boolean isToBeCached()
    {
        return toBeCached;
    }
}
@Service
public class DummyDAO
{
    private long cacheDuration = 3000;
    private long lastRetrieved;

    public DummyObject retrieveTimestamp(String key)
    {
        long renewalTime = lastRetrieved + cacheDuration;
        long time = System.currentTimeMillis();
        boolean markedToBeCached = renewalTime < time;
        System.out.println(renewalTime + " < " + time + " = " + markedToBeCached);
        if (markedToBeCached)
        {
            lastRetrieved = time;
        }
        return new DummyObject(time, markedToBeCached);
    }
}
@Service
public class DummyCachedDAO
{
    @Autowired
    private DummyDAO dao;

    // to check the flow.
    public DummyObject retrieveTimestampBypass(String key)
    {
        return retrieveTimestamp(key);
    }

    @Cacheable(cacheNames = "timestamps", unless = "#result.isToBeCached() != true")
    public DummyObject retrieveTimestamp(String key)
    {
        return dao.retrieveTimestamp(key);
    }
}
@SpringBootApplication
@EnableCaching
public class Application
{
    public final static String KEY = "cache";
    public final static String MESSAGE = "Cached timestamp is: %s [%s]";

    public static void main(String[] args) throws InterruptedException
    {
        SpringApplication app = new SpringApplication(Application.class);
        ApplicationContext context = app.run(args);
        DummyCachedDAO cache = context.getBean(DummyCachedDAO.class);
        // new value
        long value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));
        Thread.sleep(1000);
        // expecting same value
        value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));
        Thread.sleep(5000);
        // expecting new value
        value = cache.retrieveTimestamp(KEY).getTimestamp();
        System.out.println(String.format(MESSAGE, value, new Date(value)));
        SpringApplication.exit(context, () -> 0);
    }
}
There are many details and possible issues here, but first of all you should remove
private long lastRetrieved;
from the DummyDAO class.
DummyDAO is a singleton instance, so the lastRetrieved field is not thread safe.
As you can also see from the logs, after you cache the item the first time, it will always be retrieved from the cache, as it was cached in the first call.
Otherwise you would have seen the following log:
3000 < 1590064933733 = true
The problem is actually quite simple.
There is no solution to my problem and rightfully so.
The original assumption I had was that "the unless condition is verified every time after the method being annotated is called and the value the method returns is cached (and actually returned) only if the unless condition is not met. Otherwise, the cached value should be returned."
However, this was not the actual behavior, because, as the documentation states, "@Cacheable is used to demarcate methods that are cacheable - that is, methods for which the result is stored in the cache so that, on subsequent invocations (with the same arguments), the value in the cache is returned without having to actually execute the method."
So for a given key, the method will be always executed until the unless condition is not met once. Then the cached value will be returned for all subsequent calls to the method.
So I tried to approach the problem in a different way for my experiment, by trying to use a combination of annotations (@Caching with @Cacheable and @CachePut, although the documentation advises against this combination).
The value that I was retrieving was always the new one while the one in the cache was always the expected one. (*)
That's when it clicked: I couldn't update the value in the cache based on an internal timestamp generated in the method being cached AND, at the same time, retrieve the cached value if the unless condition was met, or the new one otherwise.
What would be the point of executing the method every single time to compute the latest value but returning the cached one (because of the unless condition I was setting)? There is no point...
What I wanted to achieve (updating the cache once a period has expired) would have been possible if @Cacheable's condition let you specify when to return the cached version versus retrieving/generating a new one. As far as I am aware, @Cacheable's conditions only specify whether a result should be cached in the first place.
That is the end of my experiment. The need to test this arose when I came across an issue in an actual production project that used an internal timestamp with this kind of unless condition.
FYI, the most obvious solution to this problem is to use a cache provider that actually provides TTL capabilities.
(*) PS: I also tried a few @Caching combinations of @CacheEvict (with condition = "#root.target.myNewExpiringCheckMethod() == true") and @Cacheable, but that failed as well, since @CacheEvict enforces the execution of the annotated method.

How to create a reusable Map

Is there a way to populate a Map once from the DB (through a Mongo repository) and reuse it when required from multiple classes, instead of hitting the database through the repository each time?
As per your comment, what you are looking for is a caching mechanism. Caches are components which allow data to live in memory, as opposed to files, databases or other media, so as to allow fast retrieval of information (at the cost of a higher memory footprint).
There are probably various tutorials online, but usually caches all have the following behaviour:
1. They are key-value pair structures.
2. Each entry living in the cache also has a Time To Live (TTL), that is, how long it will be considered valid.
You can implement this in the repository layer, so the cache mechanism will be transparent to the rest of your application (but you might want to consider exposing functionality that allows clearing/invalidating part or all of the cache).
So basically, when a query comes to your repository layer, check the cache. If the key exists in there, check its time to live; if it is still valid, return the cached value.
If the key does not exist or the TTL has expired, add/overwrite the data in the cache. Keep in mind that when you update the data model yourself, you should also invalidate the cache accordingly, so that new/fresh data will be pulled from the DB on the next call.
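That check-TTL-then-fallback flow can be sketched with a plain map; loadFromDb stands in for the Mongo repository call, and the TTL value and all names are arbitrary:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class TtlCache<K, V> {
    private static final long TTL_MILLIS = 30_000; // arbitrary cache period

    // A cached value together with the time it was written.
    private static final class Entry<V> {
        final V value;
        final long storedAt;
        Entry(V value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();

    // Return the cached value if still fresh; otherwise reload and overwrite it.
    V get(K key, Function<K, V> loadFromDb) {
        Entry<V> e = cache.get(key);
        if (e != null && System.currentTimeMillis() - e.storedAt < TTL_MILLIS) {
            return e.value;
        }
        V fresh = loadFromDb.apply(key);
        cache.put(key, new Entry<>(fresh, System.currentTimeMillis()));
        return fresh;
    }

    // Call this when you update the underlying data yourself.
    void invalidate(K key) {
        cache.remove(key);
    }
}
```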
You can declare the map field as public static, which would allow application-wide access via ClassLoadingData.mapField.
I think a better solution, if I understood the problem, would be a memoized function, that is, a function storing the value of its call. Here is a sketch of how this could be done (note this does not handle possible synchronization problems in a multi-threaded environment):
class ClassLoadingData {
    private static Map<KeyType, ValueType> memoizedData = new HashMap<>();

    public Map<KeyType, ValueType> getMyData() {
        if (memoizedData.isEmpty()) { // you can use a more complex check to handle data refresh
            populateData();
        }
        return memoizedData;
    }

    private void populateData() {
        // do your query, and assign the result to memoizedData
    }
}
Premise: I suggest you use an object-relational mapping tool like Hibernate in your Java project to map the object-oriented domain model to a relational database and let the tool handle the caching mechanism implicitly. Hibernate specifically implements a multi-level caching scheme (see the following link for more information: https://www.tutorialspoint.com/hibernate/hibernate_caching.htm).
Regardless of my suggestion in the premise, you can also manually create a singleton class that will be used by every class in the project that interacts with the DB:
public class MongoDBConnector {
    private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBConnector.class);

    private static MongoDBConnector instance;

    // Cache period in seconds
    public static int DB_ELEMENTS_CACHE_PERIOD = 30;

    // Latest cache update time
    private DateTime latestUpdateTime;

    // The cache data layer from DB
    private Map<KType, VType> elements;

    private MongoDBConnector() {
    }

    public static synchronized MongoDBConnector getInstance() {
        if (instance == null) {
            instance = new MongoDBConnector();
        }
        return instance;
    }
}
Here you can then define a load method that updates the map with values stored in the DB, and also a write method that writes values to the DB, with the following characteristics:
1- These methods should be synchronized in order to avoid issues if multiple calls are performed concurrently.
2- The load method should apply a cache-period logic (maybe with a configurable period) to avoid loading the data from the DB on every call.
Example: suppose your cache period is 30s. This means that if 10 reads are performed from different points of the code within 30s, you will load data from the DB only on the first call, while the others will read from the cached map, improving performance.
Note: the greater the cache period, the better the performance of your code; but if an insertion is performed externally (from another tool or manually), you'll create inconsistency with the cache. So choose the best value for your case.
public synchronized Map<KType, VType> getElements() throws ConnectorException {
    final DateTime currentTime = new DateTime();
    if (latestUpdateTime == null || (Seconds.secondsBetween(latestUpdateTime, currentTime).getSeconds() > DB_ELEMENTS_CACHE_PERIOD)) {
        LOGGER.debug("Cache is expired. Reading values from DB");
        // Read from DB and update cache
        // ....
        latestUpdateTime = currentTime;
    }
    return elements;
}
3- The store method should automatically update the cache if the insert is performed correctly, regardless of whether the cache period has expired:
public synchronized void storeElement(final VType object) throws ConnectorException {
    // Insert object on DB ( throws a ConnectorException if insert fails )
    // ...
    // Update cache regardless of the cache period
    loadElementsIgnoreCachePeriod();
}
Then you can get the elements from any point in your code as follows:
Map<KType, VType> liveElements = MongoDBConnector.getInstance().getElements();

Performance Testing for a back-end service in Java

I have a back-end service in Java whose performance I need to test. It is not exposed to the web and may never be. I was wondering how I can test this multi-threaded service's (simply a class with public methods) performance under heavy traffic (100K+ calls per second).
If you want to generate 100K+ calls per second from your own program, you can use an ExecutorService to create as many threads as you want for testing the public methods of your class.
For example, the following code calls your public methods simultaneously from 1000 threads:
ExecutorService executor = Executors.newFixedThreadPool(1000);
List<Callable<Object>> callingList = new ArrayList<>();
for (int i = 0; i < 1000; i++) {
    callingList.add(new EntryPoint(yourInstance));
}
executor.invokeAll(callingList); // blocks until all calls have completed

private static class EntryPoint implements Callable<Object> {
    private final YourClass example;

    EntryPoint(YourClass example) {
        this.example = example;
    }

    @Override
    public Object call() throws Exception {
        example.yourPublicMethod();
        return null;
    }
}
If you want to measure the time taken by each thread for each method, use AspectJ for an interceptor: you can record the time taken by each method in a list throughout the run of the 1000 threads, and finally iterate over the list to get the time taken per method. If you are looking for tools, you can use VisualVM or JConsole; they give you information about CPU usage, memory usage, thread states, the garbage collector, the number of objects created and bytes consumed by them, the number of classes loaded, and much more.
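Without external tools, a rough end-to-end timing can also be taken around invokeAll itself. A sketch, where yourPublicMethod is a placeholder for the method under test and the thread/call counts are arbitrary:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class LoadTestSketch {
    // Placeholder for the service method under test.
    static void yourPublicMethod() { }

    // Fires `calls` invocations across `threads` threads; returns elapsed milliseconds.
    static long run(int threads, int calls) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        List<Callable<Object>> tasks = new ArrayList<>();
        for (int i = 0; i < calls; i++) {
            tasks.add(() -> { yourPublicMethod(); return null; });
        }
        long start = System.nanoTime();
        executor.invokeAll(tasks); // blocks until every call has finished
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        executor.shutdown();
        return elapsedMs;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100, 10_000) + " ms for 10000 calls");
    }
}
```

Dividing the call count by the elapsed time gives a crude throughput figure; for percentile latencies, a tool like JMeter (mentioned below) is more appropriate.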
JMeter works well for load testing.
http://jmeter.apache.org/

Semi static field in Java application

I have written a web-service application that, in its main class, generates a random value per request (for logging).
I cannot make it a static field, because the next request would override it.
I also cannot pass it to every class that I use in the main one (as an argument or with a setter).
Is it possible to create some semi-static field - visible to one request but not to the other requests that hit the web service?
You can safely assume that, in the Java EE model, each single request is served by a single thread and that there is no contention from concurrent requests.
Having said that, you can employ a singleton backed by a ThreadLocal: let the servlet populate the value, and have the underlying classes access the singleton without any notion of threads or the HTTP request context:
public class RandomValueHolder {
    // Created once; each thread gets its own independent value.
    private static final ThreadLocal<Long> randomValue = new ThreadLocal<>();

    public static Long getRandomValue() {
        return randomValue.get();
    }

    public static void setRandomValue(Long value) {
        randomValue.set(value);
    }
}
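To illustrate the per-request isolation this gives (assuming one thread per request), a small self-contained sketch: each thread sees only the value it set itself:

```java
class ThreadLocalDemo {
    private static final ThreadLocal<Long> requestId = new ThreadLocal<>();

    // Returns {value seen by a second thread, value seen by the calling thread}.
    static long[] demo() throws InterruptedException {
        requestId.set(1L); // "request" handled by the calling thread
        long[] seen = new long[2];
        Thread other = new Thread(() -> {
            requestId.set(2L); // a different "request" on its own thread
            seen[0] = requestId.get();
        });
        other.start();
        other.join();
        seen[1] = requestId.get(); // untouched by the other thread
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] seen = demo();
        System.out.println("other=" + seen[0] + " main=" + seen[1]);
    }
}
```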
Why not use the HttpServletRequest and store the value as an attribute?
Save the data in the request itself with request.setAttribute() and use the corresponding request.getAttribute() to retrieve it.
