Weak reference and self refreshing cache manager - java

Sorry for the long question, I need to present the environment otherwise you may misunderstand my issue.
Current state
I have a cache manager<K, V> that, for a given object of class K, returns a holder parametrized by the type V, representing the value associated with the corresponding K on a web service.
Holder
The Holder classes manage the fetch, synchronization, and scheduling of the next fetch, because the cache is designed for multiple parallel calls. The data fetched from the web service has an expiry date (provided in the header), after which the holder can fetch it again and schedule itself for the next expiry. I have 3 classes (for list, map, and other), but they are all used the same way. The Holder<V> class has 5 methods, 2 for direct access and 3 for IoC access:
void waitData() waits until the data is fetched at least once. Internally it uses a CountDownLatch.
V copy() waits for the data to be fetched at least once, then returns a copy of the cached V. Simple items are returned as they are, while more complex ones (e.g. the Map of prices in a given shop, referenced by furniture id) are copied in a synchronized loop (to prevent another fetch() from corrupting the data).
void follow(JavaFX.Listener<V>) registers a new listener of V to be notified of modifications to the holder's data. If the holder has already received data, the listener is notified of this data as if it were new.
void unfollow(JavaFX.Listener<V>) unregisters a previously registered listener.
Observable asObservable() returns an Observable, which allows the holder to be used e.g. in a JavaFX GUI.
Typically this allows me to do things like fetching multiple data in parallel within adequate time, e.g.
Stream.of(1l, 2l, 3l).parallel().map(cache::getPrice).mapToInt(p->p.copy().price).min();
or to make much more complex Bindings in javafx, eg when the price depends on the number of items you want to purchase
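For illustration, such a binding could look roughly like this (the price field, the quantity property, and the Price type are assumptions based on the description above, not the real code):
Holder<Price> priceHolder = cache.getPrice(1L);
IntegerProperty quantity = new SimpleIntegerProperty(1);
// recomputed whenever the holder receives new data or the quantity changes
NumberBinding total = Bindings.createIntegerBinding(
    () -> priceHolder.copy().price * quantity.get(),
    priceHolder.asObservable(), quantity);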
Self Scheduling
The Holder class contains a SelfScheduling<V> object that is responsible for actually fetching the data, putting it in the holder, and rescheduling itself after the data expires.
The SelfScheduling uses a ScheduledExecutorService in the cache to schedule its own fetch() method. It starts by scheduling itself after 0 ms, then reschedules itself after 10 s on error, or after the expiry if new data was fetched. It can be paused, resumed, and stopped, and it is started on creation.
This is the behavior I want to modify: I want the self scheduler to remove the Holder from the cache on expiry if the holder is not used anywhere in the code.
Cache manager
Just for information, my cache manager consists of a Map<K, Holder<V>> cachedPrices that holds the cache data, and a method getPrice(K) that synchronizes on the cache if the holder is missing, creates the holder if required (with a double check to avoid unnecessary synchronization), and returns the holder.
Global Code
Here is an example of what my code looks like:
public class CacheExample {
public static class Holder<T>{
SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
// real code removed
T copy() {
return null;
}
Observable asObservable() {
return null;
}
void follow(ChangeListener<? super T> listener) {
}
}
public static class SelfScheduled implements Runnable {
// should use enum
private Object state = "start";
public void schedule(long ms) {
// check state, sync, etc.
}
@Override
public void run() {
long next = fetch();
schedule(next);
}
public long fetch() {
// set the value in the holder
// return the next expiry
return 0;
}
}
public Map<Long, Holder<Object>> cachePrices = new HashMap<>();
public Holder<Object> getPrice(long param) {
Holder<Object> ret = cachePrices.get(param);
if (ret == null) {
// sync, re check, etc.
synchronized (cachePrices) {
ret = cachePrices.get(param);
if (ret == null) {
ret = new Holder<>();
// should be the fetch() call instead of null
makeSchedule(ret.data, null);
cachePrices.put(param, ret);
}
}
}
return ret;
}
public void makeSchedule(SimpleObjectProperty<Object> data, Runnable run) {
// code removed.
// creates a selfscheduler with fetch method and the data to store the
// result.
}
}
Expected modifications
As I wrote above, I want to modify the way the cache holds the data in memory.
In particular, I see no reason to maintain a huge number of self-scheduling entities fetching data that is no longer used anywhere. If the expiry is 5 s (some web services ARE that short), and I cache 1000 entries (that's a very low value), that means I will make 200 fetch() calls per second for no reason.
What I expect is that, when the Holder is no longer used, the self scheduling stops itself and, instead of fetching data, actually removes the holder from the cache. Example:
Holder<Price> p = cache.getPrice(1);
// here if fetch() is called it should fetch the data
p.copy().price;
// now the price is no longer used, so on the next fetch() it should remove p from the cache.
// If that happens, and later I re-enter that code, the holder and the self scheduler will be re-created.
Holder<Price> p2 = cache.getPrice(22);
mylist.add(p2);
// now there is a strong reference to this price, so the fetch() method will keep scheduling the self scheduler
// until mylist is no longer strongly referenced.
Incorrect
However, my knowledge of the adequate technologies in that field is limited. From what I understand, I should use a weak reference in the cache manager and in the self scheduling to know when the holder is no longer strongly referenced (typically, start fetch() by checking whether the reference became null, in which case just stop). However, this would lead to the holder being GC'd BEFORE the next expiry, which I don't want: some data have a very long expiry and are only used in a simple method; e.g. the holder behind cache.getShopLocation() should not be GC'd just after the value returned by copy() is used.
Thus, this code is incorrect :
public class CacheExampleIncorrect {
public static class Holder<T>{
SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
// real code removed
T copy() {
return null;
}
Observable asObservable() {
return null;
}
void follow(ChangeListener<? super T> listener) {
}
}
public static class SelfScheduled<T> implements Runnable {
WeakReference<Holder<T>> holder;
Runnable onDelete;
public void schedule(long ms) {
// check state, sync, etc.
}
@Override
public void run() {
Holder<T> h = holder.get();
if (h == null) {
onDelete.run();
return;
}
long next = fetch(h);
schedule(next);
}
public long fetch(Holder<T> h) {
// set the value in the holder
// return the next expiry
return 0;
}
}
public Map<Long, WeakReference<Holder<Object>>> cachePrices = new HashMap<>();
public Holder<Object> getPrice(long param) {
WeakReference<Holder<Object>> h = cachePrices.get(param);
Holder<Object> ret = h == null ? null : h.get();
if (ret == null) {
synchronized (cachePrices) {
h = cachePrices.get(param);
ret = h == null ? null : h.get();
if (ret == null) {
ret = new Holder<>();
h = new WeakReference<>(ret);
// should be the fetch() call instead of null
SelfScheduled<Object> sched = makeSchedule(h, null);
cachePrices.put(param, h);
// should be synced on cachePrices
sched.onDelete = () -> cachePrices.remove(param);
}
}
}
return ret;
}
public <T> SelfScheduled<T> makeSchedule(WeakReference<Holder<T>> h, Runnable run) {
// creates a selfscheduler with fetch method and the data to store the
// result.
return null;
}
}

Related

How to share parent ThreadLocal object reference with the Child threads?

Use case
I have a gRPC + Guice based service application where, for a particular service request, the code flow looks like A -> B -> C and A -> X -> Y,
where A = top-level Service operation/Activity class; B = class which creates an ExecutorService thread pool with class C as the task; X and Y are normal classes.
I want an object ContainerDoc shared across classes B, C, and Y, but I do not want to pass it around via method parameters. So I have decided to use InheritableThreadLocal.
But I want to understand how to enforce sharing the parent thread's ContainerDoc with the child threads, so that any update done to the ContainerDoc by a child thread is also visible to the parent thread.
Does overriding the childValue method to return the same contained object as the parent make it work? (See the implementation below.)
How do I ensure thread safety?
Sample Implementation
class ContainerDoc implements ServiceDoc {
private final Map < KeyEnum, Object > containerMap;
public ContainerDoc() {
this.containerMap = new HashMap < KeyEnum, Object > ();
// Should it be ConcurrentHashmap to account for concurrent updates?
}
public < T > T getEntity(final KeyEnum keyEnum) {
return (T) containerMap.get(keyEnum);
}
public void putEntity(final KeyEnum keyEnum, final Object value) {
containerMap.put(keyEnum, value);
}
enum KeyEnum {
Key_A,
Key_B;
}
}
public enum MyThreadLocalInfo {
THREADLOCAL_ABC(ContainerDoc.class, new InheritableThreadLocal < ServiceDoc > () {
// Sets the initial value to an empty class instance.
@Override
protected ServiceDoc initialValue() {
return new ContainerDoc();
}
// Just for reference. The below impl shows default
// behavior. This method is invoked when any new
// thread is created from a parent thread.
// This ensures every child thread will have same
// reference as parent.
@Override
protected ServiceDoc childValue(final ServiceDoc parentValue) {
return parentValue;
// Returning same reference but I think this
// value gets copied over to each Child thread as
// separate object instead of reusing the same
// copy for thread-safety. So, how to ensure
// using the same reference as of parent thread?
}
}),
THREADLOCAL_XYZ(ABC.class, new InheritableThreadLocal < ServiceDoc > () {
....
....
});
private final Class<? extends ServiceDoc> contextClazz;
private final InheritableThreadLocal < ServiceDoc > threadLocal;
MyThreadLocalInfo(final Class<? extends ServiceDoc> contextClazz,
final InheritableThreadLocal < ServiceDoc > threadLocal) {
this.contextClazz = contextClazz;
this.threadLocal = threadLocal;
}
public ServiceDoc getDoc() {
return threadLocal.get();
}
public void setDoc(final ServiceDoc serviceDoc) {
Validate.isTrue(contextClazz.isAssignableFrom(serviceDoc.getClass()));
threadLocal.set(serviceDoc);
}
public void clearDoc() {
threadLocal.remove();
}
}
Client code (from a child thread class or a regular class):
MyThreadLocalInfo.THREADLOCAL_ABC.setDoc(new ContainerDoc());
MyThreadLocalInfo.THREADLOCAL_ABC.getDoc().putEntity(KeyEnum.Key_A, new Object());
MyThreadLocalInfo.THREADLOCAL_ABC.clearDoc();
Returning same reference
but I think this value gets copied over to each Child thread as separate object
How would such a "separate object" be instantiated by the runtime? This theory is incorrect. Your childValue() implementation is exactly the same as the default.
An InheritableThreadLocal is assigned a value based on the parent when a new thread is created. An ExecutorService could have any implementation, and you don't specify how yours creates threads, but for your approach to work, the parent thread would need to set the value, create a new thread, and then execute the task with that new thread. In other words, it can only work with un-pooled threads.
ThreadLocal is a kludge to work around design flaws in third-party code that you can't change. Even if it works, it's a last resort—and here, it doesn't work.
Pass the ServiceDoc as a method or constructor parameter as necessary to B, C, and Y.
This probably means X needs to pass along the ServiceDoc as well, but, since there is no Executor involved in the X-Y code path, A could conditionally initialize a ThreadLocal before calling X. It's just probably uglier than passing it as a parameter.
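As a rough sketch of the parameter-passing approach (class and field names here are illustrative, not from the original code), B and C would simply receive the same ServiceDoc instance through their constructors:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class B {
    private final ServiceDoc doc;
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    B(ServiceDoc doc) {
        this.doc = doc;            // injected by A, e.g. via Guice or a plain constructor call
    }

    void process() {
        pool.submit(new C(doc));   // C receives the same instance; no ThreadLocal involved
    }
}

class C implements Runnable {
    private final ServiceDoc doc;

    C(ServiceDoc doc) {
        this.doc = doc;
    }

    @Override
    public void run() {
        // reads and writes go to the shared ServiceDoc; make it thread-safe
        // (e.g. back it with a ConcurrentHashMap) if several tasks update it.
    }
}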

How to create a custom BodyPublisher for Java 11 HttpRequest

I'm trying to create a custom BodyPublisher that would serialize my JSON object. I could just serialize the JSON when I'm creating the request and use the ofByteArray method of BodyPublishers, but I would rather use a custom publisher.
public class CustomPublisher implements HttpRequest.BodyPublisher {
private byte[] bytes;
public CustomPublisher(ObjectNode jsonData) {
...
// Deserialize jsonData to bytes
...
}
@Override
public long contentLength() {
if(bytes == null) return 0;
return bytes.length;
}
@Override
public void subscribe(Flow.Subscriber<? super ByteBuffer> subscriber) {
CustomSubscription subscription = new CustomSubscription(subscriber, bytes);
subscriber.onSubscribe(subscription);
}
private class CustomSubscription implements Flow.Subscription {
private final Flow.Subscriber<? super ByteBuffer> subscriber;
private boolean cancelled;
private Iterator<Byte> byterator;
private CustomSubscription(Flow.Subscriber<? super ByteBuffer> subscriber, byte[] bytes) {
this.subscriber = subscriber;
this.cancelled = false;
List<Byte> bytelist = new ArrayList<>();
for(byte b : bytes) {
bytelist.add(b);
}
this.byterator = bytelist.iterator();
}
@Override
public void request(long n) {
if(cancelled) return;
if(n < 0) {
subscriber.onError(new IllegalArgumentException());
} else if(byterator.hasNext()) {
subscriber.onNext(ByteBuffer.wrap(new byte[]{byterator.next()}));
} else {
subscriber.onComplete();
}
}
@Override
public void cancel() {
this.cancelled = true;
}
}
}
This implementation works, but only if the subscription's request method gets called with 1 as the parameter. But that is what happens when I use it with the HttpRequest.
I'm pretty sure this is not the preferred or optimal way of creating the custom subscription, but I have yet to find a better way to make it work.
I would greatly appreciate it if anyone can lead me to a better path.
You are right to avoid making a byte array out of it, as that would create memory issues for large objects.
I wouldn’t try to write a custom publisher. Rather, just take advantage of the factory method HttpRequest.BodyPublishers.ofInputStream.
HttpRequest.BodyPublisher publisher =
HttpRequest.BodyPublishers.ofInputStream(() -> {
PipedInputStream in = new PipedInputStream();
ForkJoinPool.commonPool().submit(() -> {
try (PipedOutputStream out = new PipedOutputStream(in)) {
objectMapper.writeTree(
objectMapper.getFactory().createGenerator(out),
jsonData);
}
return null;
});
return in;
});
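For reference, a publisher built this way is then used like any other body publisher when building the request (the URL and Content-Type below are placeholders):
HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api"))
    .header("Content-Type", "application/json")
    .POST(publisher)
    .build();
HttpResponse<String> response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString());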
As you have noted, you can use HttpRequest.BodyPublishers.ofByteArray. That is fine for relatively small objects, but I program for scalability out of habit. The problem with assuming code won’t need to scale is that other developers will assume it is safe to pass large objects, without realizing the impact on performance.
Writing your own body publisher will be a lot of work. Its subscribe method is inherited from Flow.Publisher.
The documentation for the subscribe method starts with this:
Adds the given Subscriber if possible.
Each time your subscribe method is called, you need to add the Subscriber to some sort of collection, create an implementation of Flow.Subscription, and immediately pass it to the subscriber's onSubscribe method. Your Subscription implementation needs to send back one or more ByteBuffers, only when the Subscription's request method is called, by invoking the corresponding Subscriber's (not just any Subscriber's) onNext method; once you've sent all of the data, you must call the same Subscriber's onComplete() method. On top of that, the Subscription implementation needs to handle cancel requests.
You can make a lot of this easier by extending SubmissionPublisher, which is a default implementation of Flow.Publisher, and then adding a contentLength() method to it. But as the SubmissionPublisher documentation shows, you still have a fair amount of work to do, for even a minimal working implementation.
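As a rough illustration only (my sketch, not a complete or battle-tested implementation: it assumes the body is already in memory and that the publisher is subscribed to only once), the SubmissionPublisher approach could look like this:
import java.net.http.HttpRequest;
import java.nio.ByteBuffer;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

class JsonBodyPublisher extends SubmissionPublisher<ByteBuffer>
        implements HttpRequest.BodyPublisher {

    private final byte[] body; // already-serialized JSON

    JsonBodyPublisher(byte[] body) {
        this.body = body;
    }

    @Override
    public long contentLength() {
        return body.length; // use -1 if the length is unknown
    }

    @Override
    public void subscribe(Flow.Subscriber<? super ByteBuffer> subscriber) {
        super.subscribe(subscriber);       // SubmissionPublisher manages the Subscription
        submit(ByteBuffer.wrap(body));     // delivered when the subscriber calls request(n)
        close();                           // signals onComplete once the buffer is consumed
    }
}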
The HttpRequest.BodyPublishers.of… methods will do all of this for you. ofByteArray is okay for small objects, but ofInputStream will work for any object you could ever pass in.

Java concurrency - pass data to other waiting thread

I have two components:
The manager, on which add(Data) can be called. This will add some data to the manager.
The clients, which can call retrieve(predicate) on the manager. A list of Data objects which match the given predicate is returned. If there is no such data, retrieve keeps waiting.
A typical blocking priority queue cannot be used here, since a client is not interested in every new object; only those allowed by the requirements defined in its predicate are useful to it.
How can this be implemented in Java? I could get it working with an x.notifyAll() call after each call to add(Data) in the manager, and an x.wait() in the retrieve(predicate) method. I was wondering if the java.util.concurrent package has higher-level functionality which can be used for this problem.
Here is an outline of something that may give you an idea. For simplicity I am going to assume that predicates and data are strings.
As you stated, you do not know your predicates ahead of time, so I would try to dynamically update and cache based on new incoming predicates.
Manager
public class Manager {
    private final Map<String, Set<String>> jobs = new HashMap<>();
    private final Set<String> knownPredicates = new HashSet<>();
    private final static String GENERAL = "GENERAL_DATA";

    public void addJob(String data) {
        Set<String> matchingPredicates = getMatchingPredicates(data);
        if (matchingPredicates.isEmpty()) {
            updateJobs(GENERAL, data);
        } else {
            for (String predicate : matchingPredicates) {
                updateJobs(predicate, data);
            }
        }
        synchronized (this) {
            notifyAll();
        }
    }

    private Set<String> getMatchingPredicates(String data) {
        Set<String> matchingPredicates = new HashSet<>();
        for (String knownPredicate : knownPredicates) {
            // Check if the data matches the predicate. If so, add it to the set.
        }
        return matchingPredicates;
    }

    private void updateJobs(String predicate, String data) {
        Set<String> dataList;
        if (jobs.containsKey(predicate)) {
            dataList = jobs.get(predicate);
        } else {
            dataList = new HashSet<>();
        }
        dataList.add(data);
        jobs.put(predicate, dataList);
    }

    public synchronized Set<String> retrieve(String predicate) {
        Set<String> jobsToReturn = new HashSet<>();
        knownPredicates.add(predicate);
        if (jobs.containsKey(predicate)) {
            jobsToReturn = jobs.remove(predicate);
        }
        for (String unknownData : jobs.getOrDefault(GENERAL, new HashSet<>())) {
            // Check if unknownData matches the new predicate; if it does, add it to jobsToReturn.
        }
        cleanupData(jobsToReturn);
        return jobsToReturn;
    }

    // Removes data that may match more than one predicate
    private void cleanupData(Set<String> dataSet) {
        for (String data : dataSet) {
            for (Set<String> predicateSet : jobs.values()) {
                predicateSet.remove(data);
            }
        }
    }
}
Client
public class Client implements Runnable {
    private final Manager managerRef;

    public Client(Manager m) {
        managerRef = m;
    }

    public void run() {
        while (true) {
            String predicate = null; // Get the predicate somehow
            Set<String> workToDo = managerRef.retrieve(predicate);
            if (workToDo.isEmpty()) {
                synchronized (managerRef) {
                    try {
                        managerRef.wait();
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            } else {
                // Do something
            }
        }
    }
}
The above is only a skeleton though. You would have to resolve some issues regarding clearing your known predicates, etc.
You might consider implementing predicate-based caching with the following behavior (a rough sketch follows below):
If the retrieve(predicate) method has never been called and the add(Data) method is executed, the new Data object is simply added to the manager and the cache remains empty.
If the retrieve(predicate) method is called, the client checks the cache for the requested predicate in order to retrieve references to the corresponding Data objects. If the cache is empty or no match is found, the system runs a search for the specified predicate against all Data objects in the manager and updates the cache. To improve performance, if no match is found, flag this in the cache so that subsequent queries for the same predicate return faster.
If the add(Data) method is called and the cache isn't empty, the Data object being added is checked against all predicates already in the cache, and matching objects are associated by reference with the corresponding predicates in the cache.
Note that, as with any caching mechanism, it will be slower at the start but will improve as more objects fill up the cache.
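A minimal sketch of that idea, assuming Data is a String and predicates are java.util.function.Predicate instances (all names here are illustrative, not from the original post):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

public class PredicateCachingManager {
    private final List<String> allData = new ArrayList<>();
    // predicate -> matching data; an empty set doubles as the "no match yet" flag
    private final Map<Predicate<String>, Set<String>> cache = new HashMap<>();

    public synchronized void add(String data) {
        allData.add(data);
        // scan the new data against every predicate already in the cache
        for (Map.Entry<Predicate<String>, Set<String>> entry : cache.entrySet()) {
            if (entry.getKey().test(data)) {
                entry.getValue().add(data);
            }
        }
        notifyAll();
    }

    public synchronized Set<String> retrieve(Predicate<String> predicate) throws InterruptedException {
        while (true) {
            // first call for this predicate: run a full search and cache the result
            Set<String> matches = cache.computeIfAbsent(predicate, p -> {
                Set<String> found = new HashSet<>();
                for (String data : allData) {
                    if (p.test(data)) {
                        found.add(data);
                    }
                }
                return found;
            });
            if (!matches.isEmpty()) {
                return matches;
            }
            wait(); // nothing matches yet; add() will wake us when new data arrives
        }
    }
}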

synchronized cache service implementation

I am developing a Java application where I need to implement a cache service to serve the requests. The requirements are:
1) One or more threads come to fetch some data, and if the data is null in
the cache, then only one thread goes to the DB to load the data into the cache.
2) Once done, all subsequent threads will be served from the cache.
So the implementation looks like this:
public List<Tag> getCachedTags() throws Exception
{
// get data from cache
List<Tag> tags = (List<Tag>) CacheUtil.get(Config.tagCache,Config.tagCacheKey);
if(tags == null) // if data is null
{
// one thread will go to DB and others wait here
synchronized(Config.tagCacheLock)
{
// first thread get this null and go to db, subsequent threads returns from here.
tags = (List<Tag>) CacheUtil.get(Config.tagCache,Config.tagCacheKey);
if(tags == null)
{
tags = iTagService.getTags(null);
CacheUtil.put(Config.tagCache, Config.tagCacheKey, tags);
}
}
}
return tags;
}
Now, is this the correct approach? And since I am locking on a static String, won't it be a class-level lock? Please suggest a better approach.
If you want to globally synchronize, just use a custom object for this purpose:
private static final Object lock = new Object();
Do not use a String constant: string literals are interned, so a string constant with the same content declared in a completely different part of your program will be the same String object. And in general, avoid locking on static fields. It is better to instantiate your class and declare the lock as non-static. Currently you may use it as a singleton (with some method like Cache.getInstance()), but later, when you realize that you have to support several independent caches, you will need less refactoring to achieve this.
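To illustrate the interning problem (a hypothetical example, not from the original code):
static final String LOCK_A = "tagCacheLock";          // in one class
static final String LOCK_B = "tagCache" + "Lock";     // in a completely unrelated class
// Both are compile-time constants, so LOCK_A == LOCK_B is true: any code
// synchronizing on either constant is contending for the same monitor.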
In Java 8, the preferred way to fetch an object once is to use ConcurrentHashMap.computeIfAbsent, like this:
private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
public List<Tag> getCachedTags() throws Exception {
List<Tag> tags = (List<Tag>)cache.computeIfAbsent(Config.tagCacheKey,
k -> iTagService.getTags(null));
return tags;
}
This is simple and robust. In previous Java versions you could use an AtomicReference to wrap the objects:
private final ConcurrentHashMap<String, AtomicReference<Object>> cache =
new ConcurrentHashMap<>();
public List<Tag> getCachedTags() throws Exception {
AtomicReference<Object> ref = cache.get(Config.tagCacheKey);
if(ref == null) {
ref = new AtomicReference<>();
AtomicReference<Object> oldRef = cache.putIfAbsent(Config.tagCacheKey, ref);
if(oldRef != null) {
ref = oldRef;
}
synchronized(ref) {
if(ref.get() == null) {
ref.set(iTagService.getTags(null));
}
}
}
return (List<Tag>)ref.get();
}

Efficient state machine pattern in java

I am writing a Java simulation application which has a lot of entities to simulate. Each of these entities has a certain state at any time in the system. A possible and natural approach to model such an entity would be the state (or state machine) pattern. The problem is that it creates a lot of objects at runtime if there are a lot of state switches, which might cause bad system performance. What design alternatives do I have? I want performance to be the main criterion after maintainability.
Thanks
The code below will give you a high-performance (~10 ns/event), zero-runtime-GC state machine implementation. Use explicit state machines whenever you have a concept of state in a system or component; this not only makes the code clean and scalable but also lets people (not even necessarily programmers) see immediately what the system does, without having to dig through numerous callbacks:
abstract class Machine {
enum State {
ERROR,
INITIAL,
STATE_0,
STATE_1,
STATE_2;
}
enum Event {
EVENT_0,
EVENT_1,
EVENT_2;
}
public static final int[][] fsm;
static {
fsm = new int[State.values().length][];
for (State s: State.values()) {
fsm[s.ordinal()] = new int[Event.values().length];
}
}
protected State state = State.INITIAL;
// child class constructor example
// public Machine() {
// // specify allowed transitions
// fsm[State.INITIAL.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
// fsm[State.STATE_0.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
// fsm[State.STATE_0.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
// fsm[State.STATE_1.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
// fsm[State.STATE_1.ordinal()][Event.EVENT_2.ordinal()] = State.STATE_2.ordinal();
// fsm[State.STATE_1.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
// fsm[State.STATE_2.ordinal()][Event.EVENT_2.ordinal()] = State.STATE_2.ordinal();
// fsm[State.STATE_2.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
// fsm[State.STATE_2.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
// }
public final void onEvent(Event event) {
final State next = State.values()[ fsm[state.ordinal()][event.ordinal()] ];
if (next == State.ERROR) throw new RuntimeException("invalid state transition");
if (acceptEvent(event)) {
final State prev = state;
state = next;
handleEvent(prev, event);
}
}
public abstract boolean acceptEvent(Event event);
public abstract void handleEvent(State prev, Event event);
}
If fsm is replaced with a one-dimensional array of size S*E, it will also improve the cache locality characteristics of the state machine.
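For example, the lookup in onEvent could then index a flat array roughly like this (a sketch reusing the State and Event enums above):
// one-dimensional transition table: row = state, column = event
int[] flatFsm = new int[State.values().length * Event.values().length];
int index = state.ordinal() * Event.values().length + event.ordinal();
State next = State.values()[flatFsm[index]];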
My suggestion:
Have you "transitions managment" be configurable (i.e - via XML).
Load the XML to a repository holding the states.
The internal data structure will be a Map:
Map<String,Map<String,Pair<String,StateChangeHandler>>> transitions;
The reason for my selection is that this is a map from a state name to a map of "inputs" and new states: each inner map maps a possible input to the new state it leads to, defined by the state name and a StateChangeHandler that I will elaborate on later.
The changeState method of the repository would have the signature:
void changeState(StateOwner owner, String input)
This way the repository is stateless with respect to the state owners using it; you can keep a single copy and not worry about thread-safety issues.
StateOwner will be an interface that your classes that need state changing should implement.
I think the interface should look like this:
public interface StateOwner {
String getState();
void setState(String newState);
}
In addition, you will have a ChangeStateHandler interface:
public interface StateChangeHandler {
void onChangeState(StateOwner owner, String newState);
}
When the repository's changeState method is called, it will
check in the data structure whether the current state of the stateOwner has a map of "inputs".
If it has such a map, it will check whether the input leads to a new state, and invoke the onChangeState method, as sketched after the list below.
I suggest you have a default implementation of the StateChangeHandler, and of course subclasses that define the state change behavior more explicitly.
As I previously mentioned, all this can be loaded from an XML configuration, and using reflection you can instantiate StateChangeHandler objects based on their names (as given in the XML); these will be held in the repository.
Efficiency and good performance rely on the following points:
a. The repository itself is stateless - no internal references to StateOwner should be kept.
b. You load the XML once, when the system starts; after that you work with the in-memory data structure.
c. You provide a specific StateChangeHandler implementation only when needed; the default implementation should do basically nothing.
d. No need to instantiate new Handler objects (as they should be stateless).
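A rough sketch of what the repository's changeState could look like with that Map layout (the Pair type and its accessors are placeholders for whatever tuple class you use):
public class StateRepository {
    // state name -> (input -> (new state name, handler))
    private final Map<String, Map<String, Pair<String, StateChangeHandler>>> transitions = new HashMap<>();

    public void changeState(StateOwner owner, String input) {
        Map<String, Pair<String, StateChangeHandler>> byInput = transitions.get(owner.getState());
        if (byInput == null) {
            return; // the current state has no outgoing transitions
        }
        Pair<String, StateChangeHandler> target = byInput.get(input);
        if (target == null) {
            return; // this input is not handled in the current state
        }
        String newState = target.getFirst();               // placeholder accessor
        target.getSecond().onChangeState(owner, newState); // placeholder accessor
        owner.setState(newState);
    }
}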
This proposal isn't universal and it isn't UML compliant, but for simple things it's a simple means.
import java.util.HashMap;
import java.util.Map;
class Mobile1
{
enum State {
FIRST, SECOND, THIRD
}
enum Event {
FIRST, SECOND, THIRD
}
public Mobile1() { // initialization may be done by loading a file
Map< Event, State > tr;
tr = new HashMap<>();
tr.put( Event.FIRST, State.SECOND );
_fsm.put( State.FIRST, tr );
tr = new HashMap<>();
tr.put( Event.SECOND, State.THIRD );
_fsm.put( State.SECOND, tr );
tr = new HashMap<>();
tr.put( Event.THIRD, State.FIRST );
_fsm.put( State.THIRD, tr );
}
public void activity() { // May be a long process, generating events,
System.err.println( _state ); // as opposed to "action()", see below
}
public void handleEvent( Event event ) {
Map< Event, State > trs = _fsm.get( _state );
if( trs != null ) {
State futur = trs.get( event );
if( futur != null ) {
_state = futur;
// here we may call "action()" a small piece of code executed
// once per transition
}
}
}
private final Map<
State, Map<
Event, State >> _fsm = new HashMap<>();
private /* */ State _state = State.FIRST;
}
public class FSM_Test {
public static void main( String[] args ) {
Mobile1 m1 = new Mobile1();
m1.activity();
m1.handleEvent( Mobile1.Event.FIRST );
m1.activity();
m1.handleEvent( Mobile1.Event.SECOND );
m1.activity();
m1.handleEvent( Mobile1.Event.FIRST ); // Event not handled
m1.activity();
m1.handleEvent( Mobile1.Event.THIRD );
m1.activity();
}
}
output:
FIRST
SECOND
THIRD
THIRD
FIRST
