Efficient state machine pattern in Java

I am writing a Java simulation application that has a lot of entities to simulate. Each of these entities has a certain state at any time in the system. A natural approach to model such an entity would be the state (or state machine) pattern. The problem is that it creates a lot of objects at runtime if there are many state switches, which might hurt performance. What design alternatives do I have? I want performance to be the main criterion after maintainability.
Thanks

The code below gives you a high-performance (~10 ns/event), zero-runtime-GC state machine implementation. Use explicit state machines whenever you have a concept of state in a system or component: this not only keeps the code clean and scalable, but also lets people (even non-programmers) see immediately what the system does without having to dig through numerous callbacks:
abstract class Machine {

    enum State {
        ERROR,   // ordinal 0, so unspecified transitions default to ERROR
        INITIAL,
        STATE_0,
        STATE_1,
        STATE_2;
    }

    enum Event {
        EVENT_0,
        EVENT_1,
        EVENT_2;
    }

    public static final int[][] fsm;

    static {
        fsm = new int[State.values().length][];
        for (State s : State.values()) {
            fsm[s.ordinal()] = new int[Event.values().length];
        }
    }

    protected State state = State.INITIAL;

    // child class constructor example
    // public Machine() {
    //     // specify allowed transitions
    //     fsm[State.INITIAL.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
    //     fsm[State.STATE_0.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
    //     fsm[State.STATE_0.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
    //     fsm[State.STATE_1.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
    //     fsm[State.STATE_1.ordinal()][Event.EVENT_2.ordinal()] = State.STATE_2.ordinal();
    //     fsm[State.STATE_1.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
    //     fsm[State.STATE_2.ordinal()][Event.EVENT_2.ordinal()] = State.STATE_2.ordinal();
    //     fsm[State.STATE_2.ordinal()][Event.EVENT_1.ordinal()] = State.STATE_1.ordinal();
    //     fsm[State.STATE_2.ordinal()][Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
    // }

    public final void onEvent(Event event) {
        final State next = State.values()[fsm[state.ordinal()][event.ordinal()]];
        if (next == State.ERROR)
            throw new RuntimeException("invalid state transition");
        if (acceptEvent(event)) {
            final State prev = state;
            state = next;
            handleEvent(prev, event);
        }
    }

    public abstract boolean acceptEvent(Event event);

    public abstract void handleEvent(State prev, Event event);
}
If fsm is replaced with a one-dimensional array of size S*E, it will also improve the cache-locality characteristics of the state machine.
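A sketch of that flattening (a hypothetical FlatMachine with a reduced state/event set, not part of the answer's code): the two-dimensional lookup becomes a single index, state.ordinal() * E + event.ordinal(), into one contiguous array:

```java
// Sketch of a one-dimensional transition table for better cache locality.
// Hypothetical class, illustrating the suggestion above.
class FlatMachine {
    enum State { ERROR, INITIAL, STATE_0 }
    enum Event { EVENT_0, EVENT_1 }

    private static final int E = Event.values().length;
    // One contiguous int[] of size S*E instead of int[][]; cells left at 0
    // map to State.ERROR (ordinal 0), i.e. "transition not allowed".
    private static final int[] fsm = new int[State.values().length * E];

    static {
        fsm[State.INITIAL.ordinal() * E + Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
        fsm[State.STATE_0.ordinal() * E + Event.EVENT_0.ordinal()] = State.STATE_0.ordinal();
    }

    State state = State.INITIAL;

    void onEvent(Event event) {
        State next = State.values()[fsm[state.ordinal() * E + event.ordinal()]];
        if (next == State.ERROR) throw new IllegalStateException("invalid transition");
        state = next;
    }
}
```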

My suggestion:
Make your transition management configurable (e.g. via XML).
Load the XML into a repository holding the states.
The internal data structure will be a Map:
Map<String,Map<String,Pair<String,StateChangeHandler>>> transitions;
The reason for my choice is that this is a map from a state name to a map of "inputs" and new states: each inner map maps a possible input to the new state it leads to, defined by the state name and a StateChangeHandler that I will elaborate on later.
The changeState method of the repository would have the signature:
void changeState(StateOwner owner, String input)
This way the repository is stateless with respect to the state owners using it; you can keep a single copy and not worry about thread-safety issues.
StateOwner will be an interface that your state-changing classes should implement.
I think the interface should look like this:
public interface StateOwner {
    String getState();
    void setState(String newState);
}
In addition, you will have a StateChangeHandler interface:
public interface StateChangeHandler {
    void onChangeState(StateOwner owner, String newState);
}
When the repository's changeState method is called, it checks in the data structure whether the current state of the stateOwner has a map of inputs.
If it has such a map, it checks whether the input leads to a new state, and if so invokes the onChangeState method.
I suggest you have a default implementation of the StateChangeHandler and, of course, subclasses that define the state-change behavior more explicitly.
As I previously mentioned, all this can be loaded from an XML configuration, and using reflection you can instantiate StateChangeHandler objects based on their names (as given in the XML); these will be held in the repository.
Efficiency and good performance are obtained through the following points:
a. The repository itself is stateless - no internal references to StateOwner objects should be kept.
b. You load the XML once, when the system starts; after that you work with the in-memory data structure.
c. You provide a specific StateChangeHandler implementation only when needed; the default implementation should do basically nothing.
d. There is no need to instantiate new Handler objects (as they should be stateless).
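A minimal, self-contained sketch of the repository described above (names are mine, not a prescribed API; transition registration in code stands in for the XML loading, and a small holder class replaces the Pair):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the stateless transition repository described in the answer.
interface StateOwner {
    String getState();
    void setState(String newState);
}

interface StateChangeHandler {
    void onChangeState(StateOwner owner, String newState);
}

class TransitionRepository {
    // Small holder standing in for Pair<String, StateChangeHandler>.
    private static final class Target {
        final String newState;
        final StateChangeHandler handler;
        Target(String newState, StateChangeHandler handler) {
            this.newState = newState;
            this.handler = handler;
        }
    }

    // state name -> (input -> [new state, handler])
    private final Map<String, Map<String, Target>> transitions = new HashMap<>();

    // In the real design this would be populated from XML at startup.
    void register(String from, String input, String to, StateChangeHandler handler) {
        transitions.computeIfAbsent(from, k -> new HashMap<>())
                   .put(input, new Target(to, handler));
    }

    void changeState(StateOwner owner, String input) {
        Map<String, Target> byInput = transitions.get(owner.getState());
        if (byInput == null) return;          // current state has no inputs
        Target target = byInput.get(input);
        if (target == null) return;           // this input leads nowhere from here
        owner.setState(target.newState);
        target.handler.onChangeState(owner, target.newState);
    }
}
```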

This proposal isn't universal and it isn't UML-compliant, but for simple things it's a simple means.
import java.util.HashMap;
import java.util.Map;

class Mobile1 {

    enum State {
        FIRST, SECOND, THIRD
    }

    enum Event {
        FIRST, SECOND, THIRD
    }

    public Mobile1() { // initialization may be done by loading a file
        Map<Event, State> tr;
        tr = new HashMap<>();
        tr.put( Event.FIRST, State.SECOND );
        _fsm.put( State.FIRST, tr );
        tr = new HashMap<>();
        tr.put( Event.SECOND, State.THIRD );
        _fsm.put( State.SECOND, tr );
        tr = new HashMap<>();
        tr.put( Event.THIRD, State.FIRST );
        _fsm.put( State.THIRD, tr );
    }

    public void activity() {          // May be a long process, generating events,
        System.err.println( _state ); // as opposed to "action()", see below
    }

    public void handleEvent( Event event ) {
        Map<Event, State> trs = _fsm.get( _state );
        if( trs != null ) {
            State futur = trs.get( event );
            if( futur != null ) {
                _state = futur;
                // here we may call "action()", a small piece of code executed
                // once per transition
            }
        }
    }

    private final Map<State, Map<Event, State>> _fsm = new HashMap<>();
    private State _state = State.FIRST;
}
public class FSM_Test {
    public static void main( String[] args ) {
        Mobile1 m1 = new Mobile1();
        m1.activity();
        m1.handleEvent( Mobile1.Event.FIRST );
        m1.activity();
        m1.handleEvent( Mobile1.Event.SECOND );
        m1.activity();
        m1.handleEvent( Mobile1.Event.FIRST ); // Event not handled
        m1.activity();
        m1.handleEvent( Mobile1.Event.THIRD );
        m1.activity();
    }
}
output:
FIRST
SECOND
THIRD
THIRD
FIRST

Related

Am I affecting the values or the references? Removing multiple Observers without a LifecycleOwner

I want to be able to connect different LiveData<X> instances to the same Observer.
Up to now my little module has been working fine, but to avoid linking the ViewModel to a LifecycleOwner, I added a way for the module to use the observeForever() function if the owner is null.
The observers are wrapped inside a bigger one that stores an int value indicating whether the onChange() of each LiveData<> is the first one or not... this was done because in some cases I needed to ignore the initial onChange() callback.
Because there may be many observers (each keeping track of this int value), depending on the number of LiveData<X> instances ("sources", as the docs call them), it was easy for me to clear all the observers with liveData.removeObservers(owner);, which automatically clears all the observers for that specific owner.
But because the owner is null, I now need to keep a reference to all the observers and remove THEM individually with liveData.removeObserver(observer);.
My first concern is that by declaring new on each iteration, I'm losing the reference to that observer forever.
If that's the case, I could remove the Observer inside the wrapper, which is, as intended, the same among all observers. But the first thing that comes intuitively to mind is that the obvious thing to do is not to destroy the innermost observer but the outer one, because that destroys the inner one as well.
The problem is that the outer ones are different, while the inner one is the common one. So:
Which one should I remove, and how should they be declared?
private RunTimeObserverWrapper<? super T> runTimeObserverWrapper;
/* This is the recent change I made in the hope of being able to remove it/them */

private void connectObservers(
    List<LiveData<T>> liveDatas,
    boolean ignoreInitialization
) {
    this.listLiveData = liveDatas;
    Observer<? super T> itemObserver = itemObserverInitializer();
    /* This is the common observer */
    for (LiveData<T> liveData : listLiveData) {
        if (!liveData.hasObservers()) {
            runTimeObserverWrapper = new RunTimeObserverWrapper<>(ignoreInitialization, itemObserver);
            /* This is the recent change I made, placing it as a field variable */
            // RunTimeObserverWrapper<? super T> runTimeObserverWrapper = new RunTimeObserverWrapper<>(ignoreInitialization, itemObserver);
            /* This was working as intended, but I want to disassociate the module from the LifecycleOwner */
            if (owner != null) {
                liveData.observe(owner, runTimeObserverWrapper);
            } else {
                liveData.observeForever(runTimeObserverWrapper);
            }
            /* New snippet */
            // liveData.observe(owner, runTimeObserverWrapper);
            /* Old snippet */
        }
    }
}
public void destroyObserversAndList() {
    for (LiveData<T> liveData : listLiveData) {
        // liveData.removeObservers(owner);
        /* This was correctly removing ALL Observers */
        if (owner != null) {
            liveData.removeObservers(owner);
        } else {
            liveData.removeObserver(runTimeObserverWrapper);
        }
        /* This is the new snippet to account for a lack of LifecycleOwner */
    }
    listLiveData.clear();
}
So, as you can see, my concern is that by calling liveData.removeObserver(runTimeObserverWrapper); I'll only be removing the last observer defined by new inside the iteration.
What should I do?
The solution was a lot harder than I first realized.
Primarily because the answer was found by solving a secondary, unrelated problem: the database would at times return an empty value for a table in which all the rows had been erased, and my list-differ module didn't know what type of list it should erase.
The solution to this problem helped solve two problems at once, maybe a third one, and one of them was the problem mentioned above.
The solution was to wrap both the LiveData and its own Observer; both are given an Id, so that each answer can be stored with its Id in a LinkedHashMap.
If the answer was empty, it would just store the empty list under its corresponding Id, erasing the previous value.
Now that the LiveData was bundled with its own Observer, I could add a method that releases it from its own Observer, and, by iterating through each LiveDataWrapper, I could call this method and remove them all.
public void destroyObserversAndList() {
    if (listLiveData != null) {
        for (LiveDataSourceWrapper<T> liveData : listLiveData) {
            liveData.removeObserver();
        }
        listLiveData.clear();
    }
}
And the LiveDataSourceWrapper class:
public class LiveDataSourceWrapper<X> {
    private int liveDataId;
    private SourceObserverWrapper<X> identityLiveDataObserver;
    private LiveData<X> liveData;

    public LiveDataSourceWrapper(
        LiveData<X> liveData,
        int liveDataId
    ) {
        this.liveData = liveData;
        this.liveDataId = liveDataId;
    }

    public void observeForever(@NonNull SourceObserver<X> observer) {
        identityLiveDataObserver = new SourceObserverWrapper<X>(liveDataId, observer);
        liveData.observeForever(identityLiveDataObserver);
    }

    public void removeObserver() {
        liveData.removeObserver(identityLiveDataObserver);
    }

    public boolean hasObservers() {
        return liveData.hasObservers();
    }
}
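The idea generalizes beyond Android. A plain-Java analog (hypothetical Source and Observer types standing in for the LiveData API) shows why bundling each source with the exact observer instance registered on it makes removal reliable:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal observable source, standing in for LiveData.
class Source<T> {
    interface Observer<V> { void onChanged(V value); }

    private final List<Observer<T>> observers = new ArrayList<>();

    void observe(Observer<T> o) { observers.add(o); }
    void removeObserver(Observer<T> o) { observers.remove(o); }

    void publish(T value) {
        // copy to allow removal during notification
        for (Observer<T> o : new ArrayList<>(observers)) o.onChanged(value);
    }
}

// Bundles a source with the observer instance registered on it, so
// removal does not depend on keeping a separate external reference
// to each observer created inside a loop.
class SourceWrapper<T> {
    private final Source<T> source;
    private Source.Observer<T> registered;

    SourceWrapper(Source<T> source) { this.source = source; }

    void observe(Source.Observer<T> downstream) {
        registered = downstream;
        source.observe(registered);
    }

    void removeObserver() {
        if (registered != null) source.removeObserver(registered);
    }
}
```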

Weak reference and self refreshing cache manager

Sorry for the long question; I need to present the environment, otherwise you may misunderstand my issue.
Current state
I have a cache manager<K, V> that, for a given object of class K, returns a holder parametrized by the type V, representing the value associated on a web service with the corresponding K.
Holder
The Holder classes manage the fetch, synchronization, and scheduling of the next fetch, because the cache is designed for multiple parallel calls. The data fetched from the web service has an expiry date (provided in the header), after which the holder can fetch it again and schedules itself for the next expiry. I have 3 classes (for list, map, and other), but they are all used the same way. The Holder<V> class has 5 methods, 2 for direct access and 3 for IoC access:
void waitData() waits until the data has been fetched at least once. Internally it uses a CountDownLatch.
V copy() waits for the data to be fetched at least once, then returns a copy of the cached V. Simple items are returned as they are, while more complex ones (e.g. a Map of the prices in a given shop, referenced by furniture id) are copied in a synchronized loop (to avoid another fetch() corrupting the data).
void follow(JavaFX.Listener<V>) registers a new listener of V to be notified of modifications of the holder's data. If the holder has already received data, the listener is notified of this data as if it were new.
void unfollow(JavaFX.Listener<V>) unregisters a previously registered listener.
Observable asObservable() returns an Observable, which allows the holder to be used e.g. in a JavaFX GUI.
Typically this allows me to do things like streaming multiple data in parallel with adequate timing, e.g.
Stream.of(1L, 2L, 3L).parallel().map(cache::getPrice).mapToInt(p -> p.copy().price).min();
or to build much more complex Bindings in JavaFX, e.g. when the price depends on the number of items you want to purchase.
Self Scheduling
The holder class contains a SelfScheduling<V> object that is responsible for actually fetching the data, putting it in the holder, and rescheduling itself after the data expires.
The SelfScheduling uses a ScheduledExecutorService in the cache to schedule its own fetch() method. It starts by scheduling itself after 0 ms, rescheduling itself after 10 s on error, or after the expiry if new data was fetched. It can be paused, resumed, is started on creation, and can be stopped.
This is the behavior I want to modify: I want the self scheduler to remove the Holder from the cache on expiry if the holder is not used anywhere in the code.
Cache manager
Just for information, my cache manager consists of a Map<K, Holder<V>> cachedPrices holding the cache data, and a method getPrice(K) that syncs over the cache if the holder is missing, creates the holder if required (with a double check to avoid unnecessary synchronization), and returns the holder.
Global Code
Here is an example of what my code looks like:
public class CacheExample {

    public static class Holder<T> {
        SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
        // real code removed
        T copy() {
            return null;
        }
        Observable asObservable() {
            return null;
        }
        void follow(ChangeListener<? super T> listener) {
        }
    }

    public static class SelfScheduled implements Runnable {
        // should use enum
        private Object state = "start";

        public void schedule(long ms) {
            // check state, sync, etc.
        }

        @Override
        public void run() {
            long next = fetch();
            schedule(next);
        }

        public long fetch() {
            // set the value in the holder
            // return the next expiry
            return 0;
        }
    }

    public Map<Long, Holder<Object>> cachePrices = new HashMap<>();

    public Holder<Object> getPrice(long param) {
        Holder<Object> ret = cachePrices.get(param);
        if (ret == null) {
            // sync, re-check, etc.
            synchronized (cachePrices) {
                ret = cachePrices.get(param);
                if (ret == null) {
                    ret = new Holder<>();
                    // should be the fetch() call instead of null
                    makeSchedule(ret.data, null);
                    cachePrices.put(param, ret); // store the new holder in the cache
                }
            }
        }
        return ret;
    }

    public void makeSchedule(SimpleObjectProperty<Object> data, Runnable run) {
        // code removed.
        // creates a self-scheduler with the fetch method and the data to store the result.
    }
}
Expected modifications
As I wrote above, I want to modify the way the cache holds the data in memory.
In particular, I see no reason to maintain a huge number of self-scheduling entities fetching data that is no longer used. If the expiry is 5 s (some web services ARE that fast) and I cache 1000 entries (a very low value), that means I will make 200 fetch() calls per second for no reason.
What I expect is that when the Holder is no longer used, the self scheduler stops itself and, instead of fetching data, removes the holder from the cache. Example:
Holder<Price> p = cache.getPrice(1);
// here, if fetch() is called, it should fetch the data
p.copy().price;
// now the price is no longer used; on the next fetch() it should remove p from the cache.
// If that happens and I later re-enter this code, the holder and the self-scheduler will be re-created.
Holder<Price> p2 = cache.getPrice(22);
mylist.add(p2);
// now there is a strong reference to this price, so the fetch() method will keep scheduling the self-scheduler
// until mylist is no longer strongly referenced.
Incorrect
However, my knowledge of the adequate technologies in that field is limited. From what I understand, I should use a weak reference in the cache manager and the self scheduler to know when the holder is no longer strongly referenced (typically, start fetch() by checking whether the reference became null, in which case just stop). However, this could lead to the holder being GC'd BEFORE the next expiry, which I don't want: some data have a very long expiry and are only used in a simple method; e.g. cache.getShopLocation() should not be GC'd just after the value returned by copy() is used.
Thus, this code is incorrect:
public class CacheExampleIncorrect {

    public static class Holder<T> {
        SimpleObjectProperty<T> data = new SimpleObjectProperty<>();
        // real code removed
        T copy() {
            return null;
        }
        Observable asObservable() {
            return null;
        }
        void follow(ChangeListener<? super T> listener) {
        }
    }

    public static class SelfScheduled<T> implements Runnable {
        WeakReference<Holder<T>> holder;
        Runnable onDelete;

        public void schedule(long ms) {
            // check state, sync, etc.
        }

        @Override
        public void run() {
            Holder<T> h = holder.get();
            if (h == null) {
                onDelete.run();
                return;
            }
            long next = fetch(h);
            schedule(next);
        }

        public long fetch(Holder<T> h) {
            // set the value in the holder
            // return the next expiry
            return 0;
        }
    }

    public Map<Long, WeakReference<Holder<Object>>> cachePrices = new HashMap<>();

    public Holder<Object> getPrice(long param) {
        WeakReference<Holder<Object>> h = cachePrices.get(param);
        Holder<Object> ret = h == null ? null : h.get();
        if (ret == null) { // also covers a cleared WeakReference, not just a missing entry
            synchronized (cachePrices) {
                h = cachePrices.get(param);
                ret = h == null ? null : h.get();
                if (ret == null) {
                    ret = new Holder<>();
                    h = new WeakReference<>(ret);
                    // should be the fetch() call instead of null
                    SelfScheduled<Object> sched = makeSchedule(h, null);
                    cachePrices.put(param, h);
                    // should be synced on cachePrices
                    sched.onDelete = () -> cachePrices.remove(param);
                }
            }
        }
        return ret;
    }

    public <T> SelfScheduled<T> makeSchedule(WeakReference<Holder<Object>> h, Runnable run) {
        // creates a self-scheduler with the fetch method and the data to store the result.
        return null;
    }
}
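One possible direction for the "GC'd before expiry" problem, sketched below with a hypothetical ExpiryGuard class (my name, not from the question): the scheduler keeps a strong reference to the holder until the data expires and only then falls back to the weak reference, so the holder stays reachable at least until its next fetch:

```java
import java.lang.ref.WeakReference;

// Hypothetical sketch: holds a strong reference until release() is called
// (e.g. by the self-scheduler once the cached data expires). After that,
// only the WeakReference remains, so the GC may reclaim the value as soon
// as no client code references it anymore.
class ExpiryGuard<T> {
    private T strong;                      // pins the value until expiry
    private final WeakReference<T> weak;

    ExpiryGuard(T value) {
        this.strong = value;
        this.weak = new WeakReference<>(value);
    }

    // Called on expiry: drop the pin. The value now survives only while
    // something else (client code, a collection) strongly references it.
    void release() {
        strong = null;
    }

    // Returns the value, or null once it has been released AND collected.
    T get() {
        T s = strong;
        return s != null ? s : weak.get();
    }
}
```

The self-scheduler would call release() at each expiry and stop itself (removing the cache entry) when get() returns null.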

Putting methods that handle a HashMap of all instances of a class in a separate class

I have a class that creates index cards, and within it a static HashMap that stores all the instances created.
I have been thinking a lot about it, and I think the methods that handle the operations on that HashMap should go in a different class, because those methods don't operate directly on any index card; they operate on the list of index cards.
This way, I would have an IndexCard class and a ListAdministrator class, and the two classes would handle different functions.
The problem is that this new class (ListAdministrator) would only have static methods, because there is only one list and there is no reason to create any new list of index cards; I only need one.
Should I move those methods to another class, or should I keep it as it is? Is that good practice?
This is the code:
class IndexCard {
    public static HashMap<String, IndexCard> list = new HashMap<>();
    public String name;
    public String address;
    public String phone;
    public String email;
    public LocalDate dateRegister;

    IndexCard(String name, String dni, String address, String phone, String email) {
        this.name = name;
        this.address = address;
        this.phone = phone;
        this.email = email;
        dateRegister = LocalDate.now();
        if (Utils.validarDni(dni) && !list.containsKey(dni)) {
            list.put(dni, this);
        } else {
            throw new InvalidParameterException("Error when entering the data or the DNI has already been previously registered");
        }
    }

    /**
     * Update the data of the selected card.
     */
    public void update() throws IllegalAccessException {
        String key = getKeyWithObject(this);
        Scanner reader = new Scanner(System.in);
        Field[] fields = this.getClass().getFields();
        for (Field field : fields) {
            String nameField = Utils.splitCamelCase(field.getName());
            if (!Modifier.isStatic(field.getModifiers()) && field.getType().equals(String.class)) {
                System.out.println("Enter new " + nameField);
                String value = reader.nextLine().trim();
                field.set(this, value);
            }
        }
        reader.close();
        list.put(key, this);
        System.out.println("Updated data \n \n");
    }

    /**
     * Delete the selected card.
     */
    public void delete() throws IllegalAccessException {
        String key = getKeyWithObject(this);
        Field[] fields = this.getClass().getFields();
        for (Field field : fields) {
            if (!Modifier.isStatic(field.getModifiers())) {
                field.set(this, null);
            }
        }
        list.remove(key);
    }

    /**
     * Displays the data of the selected card on screen.
     */
    public void print() throws IllegalAccessException {
        Field[] fields = this.getClass().getFields();
        for (Field field : fields) {
            if (!Modifier.isStatic(field.getModifiers())) {
                String nameFieldConSpaces = Utils.splitCamelCase(field.getName());
                Object value = field.get(this);
                System.out.println(nameFieldConSpaces + ":" + value);
            }
        }
    }

    /**
     * Print all the entries of the desired sublist with the ID, Name and phone number.
     */
    public static <T extends IndexCard> void SubClasslist(Class<T> subClass) {
        for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
            String key = entry.getKey();
            IndexCard card = entry.getValue();
            if (card.getClass().equals(subClass)) {
                System.out.println("ID:" + key + "| Name:" + card.name + "| Phone:" + card.phone);
            }
        }
    }

    /**
     * Returns the object stored in the list of cards when entering the corresponding key.
     */
    public static IndexCard GetObjetWithKey(String key) {
        try {
            return list.get(key);
        } catch (IllegalArgumentException e) {
            System.out.println(e + ": The indicated key does not appear in the database.");
            return null;
        }
    }

    /**
     * Obtain the Key when entering the corresponding card.
     */
    public static String getKeyWithObject(Object obj) {
        for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
            if (obj.equals(entry.getValue())) {
                return entry.getKey();
            }
        }
        throw new IllegalArgumentException("The indicated data does not appear in the database, and therefore we could not obtain the key.");
    }

    /**
     * Returns a list of cards when entering the main data of the card.
     * @param data Corresponds to the identifying name of the file.
     */
    public static ArrayList<IndexCard> SearchByName(String data) {
        try {
            ArrayList<IndexCard> listCards = new ArrayList<>();
            for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
                IndexCard card = entry.getValue();
                String name = entry.getValue().name;
                if (name.toLowerCase().trim().contains(data.toLowerCase().trim())) {
                    listCards.add(card);
                }
            }
            return listCards;
        } catch (IllegalArgumentException e) {
            System.out.println(e + "The indicated data does not appear in the database, you may have entered it incorrectly.");
            return null;
        }
    }
}
All those static methods are what I would put in the new class.
This is how the new class ListAdministrator would look. It would not even need a constructor.
class ListAdministrator {
    public static HashMap<String, IndexCard> list = new HashMap<>();

    /**
     * Print all the entries of the desired sublist with the ID, Name and phone number.
     */
    public static <T extends IndexCard> void SubClasslist(Class<T> subClass) {
        for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
            String key = entry.getKey();
            IndexCard card = entry.getValue();
            if (card.getClass().equals(subClass)) {
                System.out.println("ID:" + key + "| Name:" + card.name + "| Phone:" + card.phone);
            }
        }
    }

    /**
     * Returns the object stored in the list of cards when entering the corresponding key.
     */
    public static IndexCard GetObjetWithKey(String key) {
        try {
            return list.get(key);
        } catch (IllegalArgumentException e) {
            System.out.println(e + ": The indicated key does not appear in the database.");
            return null;
        }
    }

    /**
     * Obtain the Key when entering the corresponding card.
     */
    public static String getKeyWithObject(Object obj) {
        for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
            if (obj.equals(entry.getValue())) {
                return entry.getKey();
            }
        }
        throw new IllegalArgumentException("The indicated data does not appear in the database, and therefore we could not obtain the key.");
    }

    /**
     * Returns a list of cards when entering the main data of the card.
     * @param data Corresponds to the identifying name of the file.
     */
    public static ArrayList<IndexCard> SearchByName(String data) {
        try {
            ArrayList<IndexCard> listCards = new ArrayList<>();
            for (HashMap.Entry<String, IndexCard> entry : list.entrySet()) {
                IndexCard card = entry.getValue();
                String name = entry.getValue().name;
                if (name.toLowerCase().trim().contains(data.toLowerCase().trim())) {
                    listCards.add(card);
                }
            }
            return listCards;
        } catch (IllegalArgumentException e) {
            System.out.println(e + "The indicated data does not appear in the database, you may have entered it incorrectly.");
            return null;
        }
    }
}
You should keep the concerns of managing the IndexCards and the IndexCards themselves separate, because of the Single Responsibility Principle. Furthermore, the ListAdministrator should handle everything that deals with the management of the IndexCards, including deletion and creation of the managed objects.
The name ListAdministrator somewhat misses the point, as it does not administrate lists; maybe use something like IndexCardRegistry.
To deal with concurrency you could use a ConcurrentMap as your main data storage.
Having ListAdministrator all static might come in handy if your IndexCards need access to it or to other IndexCards, but this would not be the best design. Do they need to know anyway? From my understanding, the IndexCards could be simple POJOs that contain only data and no logic at all.
On the other hand, with an all-static ListAdministrator you will not be able to use two instances of managed objects at the same time in the future without major refactoring of your code. Even if you would never expect this today, a well-defined object registry that can handle any object might come in handy in projects to come. Therefore I would rather use real instances for the ListAdministrator (and program against its interface to stay flexible).
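A sketch of the ConcurrentMap suggestion, using the real ConcurrentHashMap API: computeIfAbsent gives an atomic check-then-create, so the registry needs no explicit synchronization for lookups and inserts (the store and card names below are illustrative, not from the question's code):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class IndexCardStore {
    static class IndexCard {
        final String name;
        IndexCard(String name) { this.name = name; }
    }

    private final ConcurrentMap<String, IndexCard> cards = new ConcurrentHashMap<>();

    // Atomically returns the existing card or creates one; the mapping
    // function runs at most once per key, with no explicit synchronized
    // block or double-check needed.
    IndexCard getOrCreate(String dni, String name) {
        return cards.computeIfAbsent(dni, key -> new IndexCard(name));
    }
}
```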
In more detail referring to your comments:
The idea of this approach is to keep concerns clearly separated, which will make future changes to your code feasible in case the project grows (most projects tend to do so). My understanding is that the ListAdministrator should manage your IndexCards. In a way this is the same as how Object-Relational Mappers work, but at the moment your database is a HashMap. If you create an interface for ListAdministrator you may even swap out the HashMap for a database without having to change its clients.
On second investigation of your code I found that IndexCards not only store the data but also have methods to update the data. This represents another break of the Single Responsibility Principle and should be dealt with. If the ListAdministrator provided an update method for a given IndexCard, it could be used by as many different clients as you can think of without changing any code behind the ListAdministrator's API. Your first client would be the command-line interface you have already programmed; the next might be a web service.
With an all-static ListAdministrator you have one static class that manages one static data set. It will always deal only with IndexCards; everything you add will end up in the same HashMap (if allowed/compatible). Every part of your application with access to the class ListAdministrator would have full access to the data. If you needed another ListAdministrator (handling create, delete, update, search) for a different type, you would have to refactor everything to accommodate this, or start duplicating code. Why not create an instance-based solution in the first place? You would have your repository for IndexCards, and could add new repositories at will.
Maybe this is over-engineering for your use case, but by keeping the responsibilities clearly separated you will find that many extensions of your code happen orthogonally (not affecting existing code), and this is where the fun really begins. And how do you want to practice this if not with smaller projects?
More details about the reason for using interfaces for flexible code (in response to the latest comment):
The short answer is: always code against an interface (as stated in numerous articles and Java books). But why?
A Java interface is like a contract between a class and its clients. It defines some methods, but does not implement them itself. To implement an interface you define a class with class XYZ implements SomeInterface, and the source code of the class does whatever it finds reasonable to answer the methods defined in the interface. You try to keep the interface small, containing only the essential methods, because the smaller the interface is, the fewer methods you have to take into account when changes have to be made.
A common idiom in Java is to define a List<T> return type (the interface) for a method, which would most likely return an ArrayList (concrete class), but could return a LinkedList (another concrete class) as well, or anything else that implements the List interface. By returning just the List interface you prevent your client from using other methods of the concrete class that would otherwise be returned, which would greatly reduce your freedom to change the internal implementation of your "ListProvider". You hide the internal implementation but agree to return something that fulfills the given interface. If you want to concede even fewer obligations, you could return the interface Iterable instead of List.
Check out the Java API: you will find that standard classes like ArrayList implement many interfaces. You could always use an ArrayList internally and return it as the smallest interface that does the job.
Back to your project. It would be essential to refer to the Registry (ListAdministrator) via its interface, not its concrete class. The interface would define methods like
interface IndexCardRegistry {
    void delete(Long id) throws IllegalAccessException;
    void update(Long id, Some data) throws IllegalAccessException;
    // ...
}
What it does is of no concern to the client; it just trusts that everything goes right. So if a client calls the registry's update method, it relies on the registry to update the targeted IndexCard. The registry could store the data however it wants, in a HashMap, in a List, or even in a database; it would not matter to the clients.
class IndexCardMapBasedRegistry implements IndexCardRegistry {
    private Map store = new HashMap();

    public void delete(Long id) throws IllegalAccessException {
        // code to remove the IndexCard with id from the hashmap
    }

    public void update(Long id, Some data) throws IllegalAccessException {
        // code to get the IndexCard with id from
        // the hashmap and update its contents
    }
    // ...
}
Now, for the next iteration, at creation of your registry you swap out IndexCardMapBasedRegistry for the new
class IndexCardDatabaseRegistry implements IndexCardRegistry {
    private Database db;

    public void delete(Long id) throws IllegalAccessException {
        // code to remove the IndexCard with id from the database
    }

    public void update(Long id, Some data) throws IllegalAccessException {
        // code to update the IndexCard with id in the database
    }
    // ...
}
IndexCardRegistry indexCards = new IndexCardMapBasedRegistry(); becomes
IndexCardRegistry indexCards = new IndexCardDatabaseRegistry();
The client does not have to change at all, but the Registry is now able to handle an amount of IndexCards that would otherwise blow your computer's memory.
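To see the whole swap end to end, here is a self-contained sketch of the same dependency-inversion idea (simplified names, checked exceptions dropped for brevity; these types are illustrative, not from the question):

```java
import java.util.HashMap;
import java.util.Map;

interface Registry {
    void delete(Long id);
    boolean contains(Long id);
}

// In-memory implementation; a database-backed one would implement the
// same interface and could be swapped in without touching the client.
class MapRegistry implements Registry {
    private final Map<Long, String> store = new HashMap<>();

    MapRegistry() { store.put(1L, "card #1"); }

    @Override public void delete(Long id) { store.remove(id); }
    @Override public boolean contains(Long id) { return store.containsKey(id); }
}

// The client is compiled against the interface only.
class RegistryClient {
    private final Registry registry;

    RegistryClient(Registry registry) { this.registry = registry; }

    void remove(Long id) { registry.delete(id); }
}
```

Replacing `new MapRegistry()` with a hypothetical `new DatabaseRegistry()` changes exactly one line, while RegistryClient stays untouched.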
Stay with the IndexCard class; there is no need to create a new ListAdministrator class.
The IndexCard class already holds the list as a HashMap, which is your in-memory data structure, and it has a number of methods that operate on that structure. I suggest staying with the single class, as it already serves a single responsibility.

How to deal with GlazedLists's PluggableList requirement for shared publisher and lock

I have just started using GlazedLists in a Java project which uses beansbinding extensively (MVVM pattern).
PluggableList allows me to bind a source list to a table and then change the source list at runtime. To make this work, every source list must share the same ListEventPublisher and ReadWriteLock, since PluggableList must share a lock and publisher with its source. I accomplish this by creating a static publisher and lock in the class that owns the potential source lists, and I use those static values to create the list in every instantiation of the class, as well as the PluggableList, as shown in the pseudo code below:
public class ModelClass
{
    final static EventList LIST = new BasicEventList();
    final static ListEventPublisher LISTEVENTPUBLISHER = LIST.getPublisher();
    final static ReadWriteLock READWRITELOCK = LIST.getReadWriteLock();

    final EventList sourceList =
        new BasicEventList(LISTEVENTPUBLISHER, READWRITELOCK);
}

public class UiControllerClass
{
    final PluggableList pluggableList =
        new PluggableList(ModelClass.LISTEVENTPUBLISHER, ModelClass.READWRITELOCK);

    // ... call pluggableList.setSource(someSourceList)
}
I have two concerns with this:
(1) I have to make a decision in the Model because of a specific requirement of a component in the UiController. This seems to violate the MVVM pattern.
(2) The shared lock potentially hurts the performance of the lists if there are very many of them and they are accessed frequently. Each of these lists should otherwise be able to operate independently without caring about the others.
Am I going about this incorrectly? Is there a better way to make PluggableLists work without the ModelClass having to know about a special UiControllerClass requirement and without the potential performance hit?
I came up with an elegant solution that preserves the MVVM pattern as well as eliminates the need for a shared lock and publisher.
I created a custom list transformation that extends PluggableList and overrides its setSource method. The new source list is then synchronized with a new list created by the PluggableList (which has the same publisher and lock as the PluggableList).
public class HotSwappablePluggableList<T>
        extends PluggableList<T>
{
    private EventList<T> syncSourceList = new BasicEventList<>();
    private ListEventListener<T> listEventListener = null;

    public HotSwappablePluggableList()
    {
        super(new BasicEventList<T>());
    }

    @Override
    public void setSource(final EventList<T> sourceList)
    {
        getReadWriteLock().writeLock().lock();
        try
        {
            if (listEventListener != null)
            {
                syncSourceList.removeListEventListener(listEventListener);
            }
            syncSourceList = sourceList;
            final EventList<T> syncTargetList = createSourceList();
            listEventListener = GlazedLists.syncEventListToList(syncSourceList, syncTargetList);
            super.setSource(syncTargetList);
        }
        finally
        {
            getReadWriteLock().writeLock().unlock();
        }
    }
}

How to enforce that only a given module in my app can change properties in my object?

I have a class which has a State property. When instantiated, this property contains the current state of the object. The states have a well defined flow (example: from state 1 you can only go to states 2 and 3, from 2 you can only go to 4, and so on...), so I intend to create a module in my system that will manage these changes.
Ideally it would receive the object and the action performed and it would set the new state.
I know how to do that, but I am missing one point: how can I force everyone to use this module to change the state? I don't want anyone else changing the state, only this module.
Is there a design pattern or OO "trick" I could use?
I don't know if this is useful, but I am using Java EE 6.
Off the top of my head:
class A {
    protected State state;

    public State getState() { return state; }
}

class B extends A {
    public void setState(State s) { this.state = s; }
}
So the property remains read-only for all users of class A, but becomes writable once you cast the instance to B.
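The trick in action, as a minimal, self-contained sketch (the State class here is a stand-in for your real state type):

```java
class State {
    final String name;
    State(String name) { this.name = name; }
}

class A {
    protected State state = new State("initial");
    public State getState() { return state; }
}

class B extends A {
    // Only code holding a B-typed reference can mutate the state.
    public void setState(State s) { this.state = s; }
}
```

The state-managing module constructs B instances and keeps them typed as B; everyone else receives them typed as A and can only read.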
UPDATE:
To use the callback mechanism:
interface StateChanger {
    void call(State state);
}

class A {
    protected State state;

    public State getState() { return state; }

    public void setState(StateChanger stateChanger) {
        stateChanger.call(state);
    }
}
then you can use it as:
classAInstance.setState( new StateChanger(){
    @Override
    public void call( State state ){
        // do stuff
    }
} );
Perhaps you also need to pass this into StateChanger.call() as a parameter.
In dynamic languages like Groovy this looks really compact.
You simply use package access visibility for your mutators/setters and put the instance to be secured in the same package as the classes in the module that may change it.
Example:
package com.bla.bla.bla.models;

public class ProtectedData {
    private String protectedField;

    public String getProtectedField() {
        return protectedField;
    }

    void setProtectedField(String newValue) { // package visibility
        this.protectedField = newValue;
    }
}
package com.bla.bla.bla.models; // the same package (no matter the name)

public class ProtectedWorkflowController {
    public void closeProtectedData(ProtectedData data) {
        data.setProtectedField(null);
    }
}
PS: the notion of a module is not defined in the Java language, so I consider it reasonable to treat module == package in your case.
