Workaround for abstract AND static method in Java

Our framework / rules engine defines a Flow as an atomic sequence where a user is asked a question and then his or her results are interpreted, most often resulting in a transition to a subsequent flow for more questions and processing. For reuse purposes, we have a pattern we call "isNeeded", where a static method hangs off a Flow class, letting other Flows know if it's needed at any point in a sequence through the application. For example, a "Process Payment Flow" might have an isNeeded method like this:
public static boolean isNeeded() {
    ReasonTracker rt;
    if (User has payment due) {
        rt = new ReasonTracker(ProcessPaymentFlow.class, true, "payment due");
    } else {
        rt = new ReasonTracker(ProcessPaymentFlow.class, false, "no payment due");
    }
    return rt.isNeeded();
}
So, if you're anywhere in the application and want to see if the user should be given the ProcessPaymentFlow, you can call the isNeeded() method and then send the user in where appropriate. Furthermore, there is logging so we can figure out why certain users hit a particular flow and others do not.
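For example, a caller elsewhere in the application might do something like this (usage sketch only; the surrounding routing code is assumed):
// e.g. in whatever routing code decides the next Flow for the user:
if (ProcessPaymentFlow.isNeeded()) {
    // send the user into the Process Payment Flow
}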
Now, as a modification to our framework, I'm trying to standardize the use of this method. In my head, there is a final static "isNeeded()" method that calls an overridable abstract static "isNeededInner()" method, where one can define the cases, logging, and results, in an OO-enforceable way. However! I do recognize that this is a contradictory concept.
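To make the intended contract concrete, here is roughly the non-static shape of that idea, i.e. what it would look like if isNeeded() were an instance method on an abstract base class; everything below beyond ReasonTracker and the flow names is illustrative only, and userHasPaymentDue() is a stub:
public abstract class Flow {

    // the standardized entry point the framework would call
    public final boolean isNeeded() {
        return isNeededInner().isNeeded();
    }

    // each concrete Flow defines its cases, logging and result here
    protected abstract ReasonTracker isNeededInner();
}

public class ProcessPaymentFlow extends Flow {
    @Override
    protected ReasonTracker isNeededInner() {
        if (userHasPaymentDue()) { // placeholder for the real check
            return new ReasonTracker(getClass(), true, "payment due");
        }
        return new ReasonTracker(getClass(), false, "no payment due");
    }

    private boolean userHasPaymentDue() { return false; } // stub
}
Note that getClass() works here precisely because the method is an instance method, which is also what the bonus question runs into with statics.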
Without resorting to hacks / trickery, IS there a way to mimic the concept of an abstract static method in Java, or am I constrained by the way we've thought this up, so far? Bonus - Is there some way to work in this or super with getClass() to avoid manually inserting the class name for logging?

Related

Java store reflected Method statically in class: Safe?

Is something like the following 'safe' in Java, and why?
public final class Utility {
    private Utility() {}

    private static Method sFooMethod = null;

    public static void callFoo(SomeThing thing) {
        try {
            if (sFooMethod == null)
                sFooMethod = SomeThing.class.getMethod("foo");
            sFooMethod.invoke(thing);
        } catch (Exception e) {} // Just for simplicity here
    }
}
My rationale would be that even if another thread writes to sFooMethod in the background and the current thread sees it suddenly somewhere during execution of callFoo(), it would still just result in the same old reflective invoke of thing.foo()?
Extra question: In what ways does the following approach differ (positive/negative) from the above? Would it be preferred?
public final class Utility {
    private Utility() {}

    private static final Method sFooMethod;
    static {
        // assign via a local so the blank final is definitely assigned on every path
        Method m = null;
        try {
            m = SomeThing.class.getMethod("foo");
        } catch (Exception e) {}
        sFooMethod = m;
    }

    public static void callFoo(SomeThing thing) {
        try {
            if (sFooMethod != null)
                sFooMethod.invoke(thing);
        } catch (Exception e) {}
    }
}
Background update from comment:
I am writing an Android app and I need to call a method that was private until API 29, when it was made public without being changed. In an alpha release (can't use this yet) of the AndroidX core library Google provides a HandlerCompat method that uses reflection to call the private method if it is not public. So I copied Google's method into my own HandlerCompatP class for now, but I noticed that if I call it 1000 times, then the reflective lookup will occur 1000 times (I couldn't see any caching). So that got me thinking about whether there is a good way to perform the reflection once only, and only if needed.
"Don't use reflection" is not an answer here as in this case it is required, and Google themselves intended for it to happen in their compatibility library. My question is also not whether using reflection is safe and/or good practice, I'm well aware it's not good in general, but instead whether given that I am using reflection, which method would be safe/better.
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.
The Java Language Specification states the following:
17.4.5. Happens-before Order
Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.
If we have two actions x and y, we write hb(x, y) to indicate that x happens-before y.
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
Since, in your case, writing to and then reading from the static field happen in the same thread, the happens-before relation is established, so the read operation will always see the effects of the write.
Also, all threads are going to write the same data. At worst, several eligible threads will write to the variable at the same time; the variable will end up holding a reference to whichever object was assigned last, and the other, now unreferenced, objects will be garbage collected.
There won't be many threads in your app entering the same method at once, so the extra object creation will not cause a significant performance hit. But if you want to set the variable only once, the second approach is better, as static initializer blocks are thread-safe.
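For what it's worth, a third variant that is sometimes used is the initialization-on-demand holder idiom; the sketch below reuses the SomeThing/foo names from the question, stays lazy (the nested class is only initialized on the first callFoo()), and relies on the same thread safety of static initializers:
import java.lang.reflect.Method;

public final class Utility {
    private Utility() {}

    private static final class FooHolder {
        static final Method METHOD;
        static {
            Method m = null;
            try {
                m = SomeThing.class.getMethod("foo");
            } catch (NoSuchMethodException e) {
                // leave METHOD null; callFoo will then simply do nothing
            }
            METHOD = m;
        }
    }

    public static void callFoo(SomeThing thing) {
        try {
            if (FooHolder.METHOD != null)
                FooHolder.METHOD.invoke(thing);
        } catch (Exception e) {} // simplified, as in the question
    }
}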
Is something like the following 'safe' in Java, and why?
No, I would not recommend using reflection unless you have to.
Most of the time developers design their classes in such a way that access to a hidden field or method is never required; there will most likely be a better way to access the hidden content.
In particular, hidden fields and methods can change their names when the library that contains them is updated. Your code could then suddenly stop working and you would not know why, since the compiler would not output any errors.
It is also faster to access a method or field directly than through reflection, because reflection first needs to search for it while direct access doesn't.
So don't use reflection if you don't have to.
I'm not sure what your goal is -- there is probably a better way to do what you're trying to do.
The second approach, with a static initializer, is preferable because your first implementation has a race condition.

Akka Java FSM by Example

Please note: I am a Java developer with no working knowledge of Scala (sadly). I would ask that any code examples provided in the answer would be using Akka's Java API.
I am trying to use the Akka FSM API to model the following super-simple state machine. In reality, my machine is much more complicated, but the answer to this question will allow me to extrapolate to my actual FSM.
And so I have 2 states: Off and On. You can go from Off -> On by powering the machine on, i.e. by calling SomeObject#powerOn(<someArguments>). You can go from On -> Off by powering the machine off, i.e. by calling SomeObject#powerOff(<someArguments>).
I'm wondering what actors and supporting classes I'll need in order to implement this FSM. I believe the actor representing the FSM has to extend AbstractFSM. But what classes represent the 2 states? What code exposes and implements the powerOn(...) and powerOff(...) state transitions? A working Java example, or even just Java pseudo-code, would go a long way for me here.
I think we can do a bit better than copypasta from the FSM docs (http://doc.akka.io/docs/akka/snapshot/java/lambda-fsm.html). First, let's explore your use case a bit.
You have two triggers (or events, or signals), powerOn and powerOff. You would like to send these signals to an Actor and have it change state, of which the two meaningful states are On and Off.
Now, strictly speaking an FSM needs one additional component: an action you wish to take on transition.
FSM:
State (S) x Event (E) -> Action (A), State (S')
Read: "When in state S, if signal E is received, produce action A and advance to state S'"
You don't NEED an action, but an Actor cannot be directly inspected, nor directly modified. All mutation and acknowledgement occurs through asynchronous message passing.
In your example, which provides no action to perform on transition, you basically have a state machine that's a no-op. Signals arrive, the state transitions without side effects, and that state is invisible, so a working machine is identical to a broken one. And since this all occurs asynchronously, you don't even know when the broken thing has finished.
So allow me to expand your contract a little bit, and include the following actions in your FSM definitions:
When in Off, if powerOn is received, advance state to On and respond to the caller with the new state
When in On, if powerOff is received, advance state to Off and respond to the caller with the new state
Now we might be able to build an FSM that is actually testable.
Let's define a pair of classes for your two signals (the AbstractFSM DSL expects to match on class):
public static class PowerOn {}
public static class PowerOff {}
Let's define a pair of enums for your two states:
enum LightswitchState { on, off }
Let's define an AbstractFSM Actor (http://doc.akka.io/japi/akka/2.3.8/akka/actor/AbstractFSM.html). Extending AbstractFSM allows us to define an actor using a chain of FSM definitions similar to those above, rather than defining message behavior directly in an onReceive() method. It provides a nice little DSL for these definitions, and (somewhat bizarrely) expects that the definitions be set up in an instance initializer block.
A quick detour, though: AbstractFSM has two generics defined which are used to provide compile time type checking.
S is the base of State types we wish to use, and D is the base of Data types. If you're building an FSM that will hold and modify data (maybe a power meter for your light switch?), you would build a separate class to hold this data rather than trying to add new members to your subclass of AbstractFSM. Since we have no data, let's define a dummy class just so you can see how it gets passed around:
public static class NoDataItsJustALightswitch {}
And so, with this out of the way, we can build our actor class.
public class Lightswitch extends AbstractFSM<LightswitchState, NoDataItsJustALightswitch> {
    { // instance initializer: the FSM definitions are set up here
        // when a new Lightswitch is born, it starts in the off state and holds
        // a fresh NoDataItsJustALightswitch() object as its data
        startWith(LightswitchState.off, new NoDataItsJustALightswitch());

        // our first FSM definition
        when(LightswitchState.off,                       // when in off,
            matchEvent(PowerOn.class,                    // if we receive a PowerOn message,
                NoDataItsJustALightswitch.class,         // and have data of this type,
                (powerOn, noData) ->                     // we'll handle it using this function:
                    goTo(LightswitchState.on)            // go to the on state,
                        .replying(LightswitchState.on)   // and reply to the sender with the new state
            )
        );

        // our second FSM definition
        when(LightswitchState.on,
            matchEvent(PowerOff.class,
                NoDataItsJustALightswitch.class,
                (powerOff, noData) -> {
                    // a block-bodied lambda needs an explicit return; here you could also
                    // use the contents of the event (powerOff) or data (noData) to make
                    // decisions, alter the state's content, etc.
                    return goTo(LightswitchState.off)
                        .replying(LightswitchState.off);
                }
            )
        );

        initialize(); // boilerplate
    }
}
I'm sure you're wondering: how do I use this?! So let's make you a test harness using straight JUnit and the Akka Testkit for java:
public class LightswitchTest {
    @Test
    public void testLightswitch() {
        // should make this static if you're going to test a lot of things;
        // actor systems are a bit expensive
        ActorSystem system = ActorSystem.create("lightswitchtest");
        new JavaTestKit(system) {{ // an instance initializer again, this time of an anonymous JavaTestKit subclass
            // Here is our lightswitch: an ActorRef, i.e. a reference to an actor of type
            // Lightswitch that will be created on our behalf. As mentioned earlier, we can't
            // actually touch the instance of Lightswitch, but we can send messages to it
            // via this reference.
            ActorRef lightswitch = system.actorOf(Props.create(Lightswitch.class));

            lightswitch.tell(  // using the reference to our actor, tell it
                new PowerOn(), // to "Power On," using our message type
                getRef());     // and give it an actor to call back (in this case, the JavaTestKit itself)

            // Because this is asynchronous, tell() returns immediately. Somewhere off in the
            // distance, on another thread, our lightswitch is receiving its message.
            expectMsgEquals(LightswitchState.on); // we block until the lightswitch sends us back a
                                                  // message with its current state ("on"); if our
                                                  // actor is broken, this call will time out and fail.

            lightswitch.tell(new PowerOff(), getRef());
            expectMsgEquals(LightswitchState.off);

            system.stop(lightswitch); // switch works: kill the instance, leave the system up for further use
        }};
    }
}
And there you are: an FSM lightswitch. Honestly though, an example this trivial doesn't really show the power of FSMs, as a data-free example can be performed as a set of "become/unbecome" behaviors in like half as many LoC with no generics or lambdas. Much more readable IMO.
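For comparison, here is a minimal sketch of that become/unbecome style. It assumes a newer Akka (2.5+) where AbstractActor exposes createReceive()/receiveBuilder(); the class name BecomeLightswitch is mine, and it reuses the PowerOn/PowerOff messages and the LightswitchState enum from above:
import akka.actor.AbstractActor;

public class BecomeLightswitch extends AbstractActor {

    private Receive off() {
        return receiveBuilder()
            .match(PowerOn.class, msg -> {
                getSender().tell(LightswitchState.on, getSelf());
                getContext().become(on()); // swap the message handler to the "on" behaviour
            })
            .build();
    }

    private Receive on() {
        return receiveBuilder()
            .match(PowerOff.class, msg -> {
                getSender().tell(LightswitchState.off, getSelf());
                getContext().become(off()); // and back again
            })
            .build();
    }

    @Override
    public Receive createReceive() {
        return off(); // start in the off state
    }
}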
PS consider learning Scala, if only to be able to read other peoples' code! The first half of the book Atomic Scala is available free online.
PPS if all you really want is a composable state machine, I maintain Pulleys, a state machine engine based on statecharts in pure java. It's getting on in years (lot of XML and old patterns, no DI integration) but if you really want to decouple the implementation of a state machine from inputs and outputs there may be some inspiration there.
I know about Actors in Scala.
This Java starter code may help you get going:
Yes, extend your SimpleFSM from AbstractFSM.
The State is an enum in the AbstractFSM.
Your <someArguments> can be the Data part of your AbstractFSM.
Your powerOn and powerOff are actor messages/events.
The state switching happens in the transitions part:
// states
enum State {
    Off, On
}

enum Uninitialized implements Data {
    Uninitialized
}

public class SimpleFSM extends AbstractFSM<State, Data> {
    {
        // fsm body
        startWith(Off, Uninitialized);

        // transitions
        when(Off,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(On).using(...))); // powerOn(<someArguments>)

        when(On,
            matchEvent(... .class ...,
                (... Variable Names ...) ->
                    goTo(Off).using(...))); // powerOff(<someArguments>)

        initialize();
    }
}
For a real working project, see: Scala and Java 8 with Lambda Template for an Akka AbstractFSM.
Well, this is a really old question, but if you got here from Google and are still interested in implementing an FSM with Akka, I suggest looking at this part of the documentation.
If you want to see a practical, model-driven state machine implementation, you can check my blog posts (blog1, blog2).

Time taken to execute all methods in a method stack

A lot of times while writing applications, I wish to profile and measure the time taken by all methods in a stack trace. What I mean is, say:
Method A --> Method B --> Method C ...
Method A internally calls B, which might call another method. I wish to know the time taken to execute inside each method. This way, in a web application, I can know precisely what percentage of time is being consumed by which part of the code.
To explain further: most of the time in a Spring application, I write an aspect which collects information for every method call of a class and finally gives me a summary. But I hate doing this; it's repetitive and verbose, and I need to keep changing the regex to accommodate different classes. Instead I would like this:
@Monitor
public void generateReport(int id) {
    ...
}
Adding this annotation to a method would trigger the instrumentation API to collect statistics on the time taken by this method and any methods it subsequently calls, and to stop collecting when the method exits. I think this should be relatively easy to implement.
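For reference, such a marker annotation just needs runtime retention so that an agent or aspect can see it at run time; a minimal sketch:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// marker annotation; RUNTIME retention makes it visible to instrumentation and aspects
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Monitor {}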
The question is: are there any reasonable alternatives that let me do this for general Java code? Or any quick way of collecting this information? Or even a Spring plugin for Spring applications?
PS: Exactly like XRebel, which generates beautiful summaries of the time taken by the security, DAO, service, etc. parts of the code. But it costs a bomb. If you can afford it, you should definitely buy it.
You want to write a Java agent. Such an agent allows you to redefine a class when it is loaded. This way, you can implement an aspect without polluting your source code. I have written a library, Byte Buddy, which makes this fairly easy.
For your monitor example, you could use Byte Buddy as follows:
new AgentBuilder.Default()
    .rebase(declaresMethod(isAnnotatedWith(Monitor.class)))
    .transform((builder, type) ->
        builder
            .method(isAnnotatedWith(Monitor.class))
            .intercept(MethodDelegation.to(MonitorInterceptor.class)));
class MonitorInterceptor {
    @RuntimeType
    public static Object intercept(@Origin String method,
                                   @SuperCall Callable<?> zuper) throws Exception {
        long start = System.currentTimeMillis();
        try {
            return zuper.call();
        } finally {
            System.out.println(method + " took " + (System.currentTimeMillis() - start));
        }
    }
}
The above agent can then be installed on an instance of the Instrumentation interface, which is provided to any Java agent.
As an advantage over using Spring, the above agent will work for any Java instance, not only for Spring beans.
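For context, that wiring usually lives in a Java agent's premain entry point. This is only a sketch of the shell; the MonitorAgent class name is an assumption, and the Premain-Class manifest entry is the standard Java agent convention rather than anything from the answer above:
import java.lang.instrument.Instrumentation;

public class MonitorAgent {
    // Called by the JVM before main() when the jar is attached via -javaagent
    // (the jar manifest needs a Premain-Class entry pointing at this class).
    public static void premain(String agentArgs, Instrumentation instrumentation) {
        // Assemble the AgentBuilder chain from the snippet above and register it:
        //   new AgentBuilder.Default()
        //       .rebase(...)
        //       .transform(...)
        //       .installOn(instrumentation);
    }
}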
I don't know whether there is already a library doing this, nor can I give you ready-to-use code, but I can describe how you can implement it on your own.
First of all, I assume it is no problem to include AspectJ in your project. Then create an annotation, e.g. @Monitor, which acts as a marker for the time measurement of whatever you like.
Then create a simple data structure holding the information you want to track.
An example could be the following:
public class OperationMonitoring {
    boolean active = false;
    List<MethodExecution> methodExecutions = new ArrayList<>();
}

public class MethodExecution {
    MethodExecution invoker;
    List<MethodExecution> invocations = new ArrayList<>();
    long startTime;
    long endTime;
}
Then create an @Around advice for all methods. On execution, check whether the called method is annotated with your monitoring annotation. If it is, start monitoring each method execution in this thread. Simple example code could look like this:
@Aspect
public class MonitoringAspect {

    private ThreadLocal<OperationMonitoring> operationMonitorings = new ThreadLocal<>();

    @Around("execution(* *.*(..))")
    public void monitoring(ProceedingJoinPoint pjp) {
        Method method = extractMethod(pjp);
        if (method != null) {
            OperationMonitoring monitoring = null;
            if (method.isAnnotationPresent(Monitoring.class)) {
                monitoring = operationMonitorings.get();
                if (monitoring != null) {
                    if (!monitoring.active) {
                        monitoring.active = true;
                    }
                } else {
                    // create a new OperationMonitoring object and set it on the ThreadLocal
                }
            }
            if (monitoring == null) {
                // this method is not annotated, but is tracking already active?
                monitoring = operationMonitorings.get();
            }
            if (monitoring != null && monitoring.active) {
                // do the monitoring bookkeeping and invoke the called method (pjp.proceed())
            } else {
                // invoke the called method without monitoring
            }
            // Stop the monitoring by setting monitoring.active = false if this method
            // was annotated with the monitoring annotation (and it started the monitoring).
        }
    }

    private Method extractMethod(JoinPoint joinPoint) {
        if (joinPoint.getKind().equals(JoinPoint.METHOD_EXECUTION)
                && joinPoint.getSignature() instanceof MethodSignature) {
            return ((MethodSignature) joinPoint.getSignature()).getMethod();
        }
        return null;
    }
}
The code above is just a how-to; I would also restructure it, and I wrote it in a text field, so please be aware of architectural flaws. As mentioned in the comment at the end, this solution does not support multiple annotated methods along the call path, but it would be easy to add that.
A limitation of this approach is that it fails when you start additional threads during a tracked path. Adding support for threads started inside a monitored thread is not that easy; that is also the reason why IoC frameworks have their own features for handling threads, so that they are able to track this.
I hope you understand the general concept of this, if not feel free to ask further questions.
This is the exact reason why I built the open source tool stagemonitor, which uses Byte Buddy to insert profiling code. If you want to monitor a web application you don't have to alter or annotate your code. If you have a standalone application, there is a @MonitorRequests annotation you can use.
You say you want to know the percentage of time taken within each routine on the stack.
I assume you mean inclusive time.
I also assume you mean wall-clock time, on the theory that if one of those lower-level callees happens to do some I/O, locking, etc., you don't want to be blind to that.
So a stack-sampling profiler that samples on wall-clock time will be getting the right kind of information.
The percentage time that A takes is the percentage of samples containing A, same for B, etc.
To get the percentage of A's time used by B, it is the percentage of samples containing A that happen to have B at the next level below.
The information is all in the stack samples, but it may be hard to get the profiler to extract just the information you want.
You also say you want precise percentage.
That means you also need a large number of stack samples.
For example, if you want to shrink the uncertainty of your measurements by a factor of 10, you need 100 times as many samples.
In my experience finding performance problems, I am willing to tolerate an uncertainty of 10% or more, because my goal is to find big wastage, not to know with precision how bad it is.
So I take samples manually, and look at them manually.
In fact, if you look at the statistics, you only have to see something wasteful on as few as two samples to know it's bad, and the fewer samples you take before seeing it twice, the worse it is.
(Example: If the problem wastes 30% of time, it takes on average 2/30% = 6.67 samples to see it twice. If it wastes 90% of time, it only takes 2.2 samples, on average.)
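As a toy illustration of that arithmetic (the expected number of samples needed to see the problem twice is roughly 2 divided by the fraction of time wasted; the class name and values here are just for demonstration):
public class SampleCount {
    public static void main(String[] args) {
        double[] wasteFractions = { 0.10, 0.30, 0.90 };
        for (double p : wasteFractions) {
            // expected number of stack samples needed to see the wasteful code twice
            System.out.printf("waste %.0f%% -> ~%.1f samples%n", p * 100, 2 / p);
        }
    }
}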

Java pattern for nested callbacks?

I'm looking for a Java pattern for making a nested sequence of non-blocking method calls. In my case, some client code needs to asynchronously invoke a service to perform some use case, and each step of that use case must itself be performed asynchronously (for reasons outside the scope of this question). Imagine I have existing interfaces as follows:
public interface Request {}

public interface Response {}

public interface Callback<R extends Response> {
    void onSuccess(R response);
    void onError(Exception e);
}
There are various paired implementations of the Request and Response interfaces, namely RequestA + ResponseA (given by the client), RequestB + ResponseB (used internally by the service), etc.
The processing flow looks like this:
In between the receipt of each response and the sending of the next request, some additional processing needs to happen (e.g. based on values in any of the previous requests or responses).
So far I've tried two approaches to coding this in Java:
anonymous classes: gets ugly quickly because of the required nesting
inner classes: neater than the above, but still hard for another developer to comprehend the flow of execution
Is there some pattern to make this code more readable? For example, could I express the service method as a list of self-contained operations that are executed in sequence by some framework class that takes care of the nesting?
Since the implementation (not only the interface) must not block, I like your list idea.
Set up a list of "operations" (perhaps Futures?), for which the setup should be pretty clear and readable. Then upon receiving each response, the next operation should be invoked.
With a little imagination, this sounds like the chain of responsibility. Here's some pseudocode for what I'm imagining:
public void setup() {
    this.operations.add(new Operation(new RequestA(), new CallbackA()));
    this.operations.add(new Operation(new RequestB(), new CallbackB()));
    this.operations.add(new Operation(new RequestC(), new CallbackC()));
    this.operations.add(new Operation(new RequestD(), new CallbackD()));
    startNextOperation();
}

private void startNextOperation() {
    if (this.operations.isEmpty()) {
        reportAllOperationsComplete();
        return; // nothing left to start
    }
    Operation op = this.operations.remove(0);
    op.request.go(op.callback);
}

private class CallbackA implements Callback<ResponseA> {
    public void onSuccess(ResponseA response) {
        // store response? etc?
        startNextOperation();
    }
}
...
In my opinion, the most natural way to model this kind of problem is with Future<V>.
So instead of using a callback, just return a "thunk": a Future<Response> that represents the response that will be available at some point in the future.
Then you can either model subsequent steps as things like Future<ResponseB> step2(Future<ResponseA>), or use ListenableFuture<V> from Guava. Then you can use Futures.transform() or one of its overloads to chain your functions in a natural way, but while still preserving the asynchronous nature.
If used in this way, Future<V> behaves like a monad (in fact, I think it may qualify as one, although I'm not sure off the top of my head), and so the whole process feels a bit like IO in Haskell as performed via the IO monad.
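To make that concrete, here is a rough sketch of the ListenableFuture chaining, assuming a recent Guava where transform/transformAsync take an explicit Executor; callA, callB, buildRequestB, the no-arg constructors and the ChainedCalls class are hypothetical stand-ins for your existing service plumbing:
import com.google.common.util.concurrent.*;
import java.util.concurrent.Executors;

class ChainedCalls {
    // hypothetical synchronous adapters around the existing service calls
    ResponseA callA(RequestA request) { return null; }
    ResponseB callB(RequestB request) { return null; }
    RequestB buildRequestB(ResponseA previous) { return null; }

    void run() {
        ListeningExecutorService pool =
                MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(4));

        // step 1: issue RequestA without blocking the caller
        ListenableFuture<ResponseA> a = pool.submit(() -> callA(new RequestA()));

        // step 2: when ResponseA arrives, derive RequestB from it and issue it,
        // still asynchronously
        ListenableFuture<ResponseB> b = Futures.transformAsync(
                a,
                responseA -> pool.submit(() -> callB(buildRequestB(responseA))),
                MoreExecutors.directExecutor());

        // finally, attach a terminal callback for success / error handling
        Futures.addCallback(b, new FutureCallback<ResponseB>() {
            @Override public void onSuccess(ResponseB response) { /* done, update caller */ }
            @Override public void onFailure(Throwable t) { /* handle the error */ }
        }, MoreExecutors.directExecutor());
    }
}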
You can use the actor computing model. In your case, the client, the services, and the callbacks [B-D] can all be represented as actors.
There are many actor libraries for Java. Most of them, however, are heavyweight, so I wrote a compact and extendable one: df4j. It treats the actor model as a specific case of the more general dataflow computing model and, as a result, allows the user to create new types of actors to optimally fit their requirements.
I am not sure if I understand your question correctly. If you want to invoke a service and, on its completion, pass the result to another object which can continue processing with that result, you can look at using the Composite and Observer patterns to achieve this.

Pattern for request-response flow with inner classes

I have an application that consists of two processes, one client process with a (SWT-based) GUI and one server process. The client process is very lightweight, which means a lot of GUI operations will have to query the server process or request it to do something, for example in response to the user clicking a button or choosing a menu item. This means that there will be a lot of event handlers that look like this:
// Method invoked e.g. in response to the user choosing a menu item
void execute(Event event) {
    // This code is executed on the client, and now we need some info off the server:
    server.execute(new RemoteRequest() {
        public void run() {
            // This code is executed on the server, and we need to update the client
            // GUI with current progress
            final Result result = doSomeProcessing();
            client.execute(new RemoteRequest() {
                public void run() {
                    // This code is again executed on the client
                    updateUi(result);
                }
            });
        }
    });
}
However, since server.execute implies serialization (it is executed on a remote machine), this pattern is not possible without making the whole class serializable, because the RemoteRequest inner classes are not static. (Just to be clear: it is not a requirement that the Request implementation can access the parent instance; for the sake of the application they could be static.)
Of course, one solution is to create separate (possibly static inner) classes for the Request and Response, but this hurts readability and makes it harder to understand the execution flow.
I have tried to find a standard pattern for solving this problem, but I have not found anything that addresses my concern about readability.
To be clear, there will be a lot of these operations, and the operations are often quite short. Note that Future objects are not entirely useful here, since in many cases one request to the server will need to do multiple things on the client (often varying), and it is also not always a result being returned.
Ideally, I would like to be able to write code like this: (obvious pseudo-code now, please disregard the obvious errors in details)
String personName = nameField.getText();
async exec on server {
String personAddress = database.find(personName);
async exec on client {
addressField.setText(personAddress);
}
Order[] orders = database.searchOrderHistory(personName);
async exec on client {
orderListViewer.setInput(orders);
}
}
Now I want to be clear that the underlying architecture is in place and works well; the reason this solution exists is of course not the above example. The only thing I am looking for is a way to write code like the above without having to define static classes for each process transition. I hope that I did not just complicate things by giving this example...
My preference is to use the Command pattern and generic asynchronous callbacks. This kind of approach is used in GWT for communicating with the server, for example. Commands are Serializable, and AsyncCallback is an interface.
Something along these lines:
// from the client
server.execute(new GetResultCommand(args), new AsyncCallback<Result>() {
    public void onSuccess(Result result) {
        updateUi(); // happens on the client
    }
});
The server then needs to receive a command, process it and issue an appropriate response with Result.
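For illustration, one possible shape for that command/callback contract is sketched below; the exact interfaces are assumptions modelled on the GWT style rather than an existing API:
import java.io.Serializable;

interface Command<R> extends Serializable {
    R execute() throws Exception;     // runs on the server
}

interface AsyncCallback<R> {
    void onSuccess(R result);         // runs back on the client
    void onFailure(Throwable caught); // runs back on the client
}
A GetResultCommand would then carry its arguments as serializable fields; the server calls execute() and sends the Result back, and the client-side plumbing invokes onSuccess with it.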
I ran into a similar problem the other day.
There is a solution which makes use of anonymous classes (and thus does not require you to define static inner classes), yet ensures those anonymous classes have no reference to the outer object.
Simply define the anonymous classes in a static method, like this:
void execute(Event event) {
    static_execute(server, client, event);
}

// An anonymous class declared in a static method has no enclosing instance. Let's use that.
static void static_execute(final TheServer server, final TheClient client, final Event event) {
    server.execute(new RemoteRequest() {
        public void run() {
            final Result result = doSomeProcessing();
            // See note below!
            client.execute(new RemoteRequest() {
                public void run() {
                    updateUi(result);
                }
            });
        }
    });
}
The main advantage of this approach, compared to using named static inner classes, is probably that you avoid having to define fields and constructors for those classes.
-- Come to think of it, the same trick probably needs to be applied once more, for the server->client direction. I'll leave this as an exercise for the reader :-)
First: without using a specific pattern, I would suggest you make a separate class to handle all the requests. Just pass the instance of each event-generated object to that class and delegate the event request to other classes. Delegation leads to a much clearer approach; it just requires using instanceof and then delegating further. Every event can be confined to its own place.
Along with the above approach, yes, the Command pattern is definitely a good option for logging requests, but since you are getting an event state for every request raised, you could also try the State pattern, as it allows an object to alter its behaviour when its state changes.
I actually ended up solving this by creating a base class with a custom serializer that takes care of this. I still hope it is solved in the language eventually.
