Scheduler after return keyword - Java

I have a method which opens a web service session. The method structure looks something like this:
public Soap getServicePort()
{
// TODO: open a connection and return the SOAP object
return soap;
}
I have a requirement to add a monitor straight after the return. The monitor's job is to wait for 2 hours, invalidate the session, and rebuild a new one - the reason being that the current session will be invalid by then, so we need to rebuild and return a new session.
Can anyone suggest a reasonable way of doing this?
Thanks.

public Soap getServicePort()
{
try {
return soap;
} finally {
// add monitor here.
}
}
But be careful: the monitor should not throw exceptions. Wrap its initialization in a try/catch.
A probably better solution is the wrapper pattern. For example, you can define an interface with the method getServicePort() and two implementations: one is your real implementation, and the other wraps the real one and adds the monitor. This solution is more flexible; for example, you may later have to start your monitor after other methods, or even after methods implemented in other classes.
In that case you can use AOP. There are several ways to do it: one is Java's dynamic proxies, another is a dedicated tool like AspectJ.
So, choose your solution. The choice should depend on the complexity of the task and the number of methods/classes that need this behavior: if it is only one method, use try/finally; if it is several methods in the same class, use the wrapper pattern; if it is required for several methods across several classes, use a dynamic proxy or AspectJ.
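For the monitor part specifically, here is a minimal sketch of the wrapper idea with a scheduled rebuild (the class name is made up and openNewSession() is a placeholder for your actual connection code):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RefreshingSoapProvider {

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile Soap soap;

    public synchronized Soap getServicePort() {
        if (soap == null) {
            soap = openNewSession();
            // rebuild the session every 2 hours, since it becomes invalid by then
            scheduler.scheduleAtFixedRate(this::refresh, 2, 2, TimeUnit.HOURS);
        }
        return soap;
    }

    private void refresh() {
        try {
            soap = openNewSession();
        } catch (RuntimeException e) {
            // the monitor itself must not throw: log and keep the old (possibly stale) session
        }
    }

    private Soap openNewSession() {
        // TODO: open a connection and return the SOAP object
        return null;
    }
}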

You can try logic like this instead; no need for a monitor at all:
private Soap soap = null;

public Soap getServicePort() {
    try {
        if (soap != null && soap.isValid()) {
            // isValid() is a placeholder: some condition that checks whether the session is still usable
            return soap;
        } else {
            // create a new soap session and return it
            soap = openNewSession(); // placeholder for the actual connection code
            return soap;
        }
    } catch (Exception e) {
        throw new RuntimeException("Could not obtain SOAP session", e);
    }
}
Call the method as many times as you want...

Related

How to avoid if

I have a Request object with a field request_type and a number of other fields. request_type can be 'additionRequest', 'deletionRequest', or 'informationRequest'.
Based on request_type, the other fields in the Request object are processed differently.
My simple-minded approach is:
if additionRequest
algorithm1
else if deletionRequest
algorithm2
else if informationRequest
algorithm3
end
How I can avoid these if statements and still apply proper algorithm?
If you want to avoid conditional statements, you can leverage object-oriented features such as:
Map<String, Function<Request, Result>> parsers = new HashMap<>();
parsers.put("additionRequest", request -> {
// parse and generate Result here
return result;
});
Result result = parsers.get(request.request_type).apply(request);
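One caveat worth noting (this concerns calling code the question does not show): parsers.get(...) returns null for an unknown request_type, so a default handler avoids a NullPointerException, for example:
// Fall back to a default handler for unknown request types (illustrative only)
Function<Request, Result> handler = parsers.getOrDefault(
        request.request_type,
        req -> { throw new IllegalArgumentException("Unknown request_type: " + req.request_type); });
Result result = handler.apply(request);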
Seems to me that a Command pattern could come in handy here. If you build an object structure of these commands and encapsulate the algorithm you want to execute within each object, you can construct the specific sub-objects and later use the command's execute method to invoke the algorithm. Just make sure you are using polymorphic references:
if additionRequest
algorithm1
else if deletionRequest
algorithm2
else if informationRequest
algorithm3
end
will become
void theRequestExecutor(Request req) {
req.executeAlgorithm();
}
https://en.wikipedia.org/wiki/Command_pattern
Use a HashMap<RequestType, RequestHandler> for this case. RequestHandler can be an interface with one implementation for each situation you want to handle.
Hope this helps.
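A minimal sketch of that idea (the RequestType enum, the handler classes and request.getType() are illustrative assumptions, not taken from the question):
import java.util.HashMap;
import java.util.Map;

enum RequestType { ADDITION, DELETION, INFORMATION }

interface RequestHandler {
    Result handle(Request request);
}

// one implementation per case, e.g. AdditionHandler implements RequestHandler
Map<RequestType, RequestHandler> handlers = new HashMap<>();
handlers.put(RequestType.ADDITION, new AdditionHandler());
handlers.put(RequestType.DELETION, new DeletionHandler());
handlers.put(RequestType.INFORMATION, new InformationHandler());

// dispatch without an if/else chain
Result result = handlers.get(request.getType()).handle(request);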
You can create a Map with a String key and a Function<RequestType, ResponseType> value. Depending on the type of the request, it will call the corresponding function.
Example:
Map<String, Function<RequestType, ResponseType>> requestProcessors = new HashMap<>();
requestProcessors.put("additionRequest", this::methodToHandleAddRequest);
requestProcessors.put("deletionRequest", this::methodToHandleDeleteRequest);
Inside the request handler do:
return this.requestProcessors.get(request.request_type).apply(request);
Note that you may have to create a response interface if the responses differ; each response type would then implement that interface.
The object-oriented solution is always to include the logic with the data. In this case, include whatever you want a request to do in the request object itself, instead of (presumably) passing around pure data.
Something like
public interface Request {
Response apply();
}
Anything else, like creating a map of functions, creating a thin abstraction layer or applying some pattern, is a workaround. These are suitable only if the first solution cannot be applied. This might be the case if the Request objects are not under your control, for example automatically generated or third-party classes.
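For illustration, the concrete request classes would then carry their own algorithms (the class names here are assumptions):
public class AdditionRequest implements Request {
    // fields specific to an addition request...

    @Override
    public Response apply() {
        // algorithm1 works directly on this object's fields
        return new Response(/* ... */);
    }
}

public class DeletionRequest implements Request {
    @Override
    public Response apply() {
        // algorithm2
        return new Response(/* ... */);
    }
}

// The caller no longer branches on request_type:
Response response = request.apply();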

How can I manage multiple method calls as one transaction

I have a problem establishing a transaction manager/scope for my REST API (Java). I have the functions below in my API back end and I want to execute all of them as one transaction:
Call third party WS end point
Decrypt the response
Save the response in to DB1
Save the response in to DB2
I want to make sure all of the above steps complete, or roll back if any one of them fails. I have enough information to do the rollback, but I have no idea what the best practice would be for implementing a proper transaction management mechanism, because the above steps happen in 3 separate classes per API call.
This is pseudo code for my class structure:
class CallWS {
    public People callThWS() {
        // functions related to call third party WS and decryption (step 1,2)
    }
}

class PeopleServices {
    public People getPeopleData() {
        CallWS ppl = new CallWS();
        People pplObj = ppl.callThWS();
        // save to DB1 (step 3)
        return pplObj;
    }
}

class PeopleContr {
    public People getAllPeople() {
        PeopleServices ppSer = new PeopleServices();
        People pplObj2 = ppSer.getPeopleData();
        // save to DB2 (step 4)
        return pplObj2;
    }
}
Please help me with this.
Thanks
What you need is distributed transactions (XA). Check for examples of the various transaction managers which support XA. Check this article for using an XA provider in standalone applications (warning: old article).
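For orientation, a minimal sketch of driving such a transaction manually through JTA (assuming a JTA transaction manager is available and both databases are reached through XA-capable DataSources; the helper methods are made up):
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

UserTransaction utx = (UserTransaction) new InitialContext().lookup("java:comp/UserTransaction");
utx.begin();
try {
    // steps 3 and 4 run against XA resources enlisted in the same transaction
    saveToDb1(response); // hypothetical helper writing to DB1
    saveToDb2(response); // hypothetical helper writing to DB2
    utx.commit();
} catch (Exception e) {
    utx.rollback();
    throw e;
}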
If you control the sources of all the classes listed and you can refactor your code so that you have a single entry point, you can do it quite easily, except for the call to the external web service. The pseudo code is below.
Here we have to assume that all the resources you call in your methods are transactional. As I mentioned earlier, the call to an external WS does not fall into that category, because calls to external web services are not transactional by their nature. If you do not change data within the call to the external service, you may consider simply leaving it outside the transaction. You still have a bit of control: for example, you can roll back the transaction if the call to the external service was unsuccessful, and as long as you have not changed anything on the other side, there is nothing to roll back there.
However, you still have some options for making a transactional call to an external WS, like Web Services Atomic Transactions, but I bet you would need control over the sources and maybe even the environment on the other side. In such lucky circumstances you would rather achieve the goal by avoiding the WS call altogether.
class RestAPIEntryPointResource {
@Inject
CallWS callWS;
@Inject
PeopleServices peopleServices;
@Inject
PeopleContr peopleContr;
/*Put some transaction demarcation here.
If your class is an EJB, it is already done for you.
With Spring you have various options to mark the method transactional.
You also may want to take a manual control, but it look redundant here. */
public void entryPointMethod() {
callWS.callThWS();
peopleServices.getPeopleData();
peopleContr.getAllPeople();
}
}
class CallWS {
public People callThWS() {
// functions related to call third party WS and decryption (step 1,2)
}
}
class PeopleServices {
public People getPeopleData() {
..........
}
}
class PeopleContr {
public People getAllPeople() {
.........
}
}

Record method calls in one session for replaying in future test sessions?

I have a backend system which we access from our own applications through a third-party Java API. I can access the system as a normal user along with other users, but I do not have godly powers over it.
Hence, to simplify testing, I would like to run a real session and record the API calls, and persist them (preferably as editable code), so we can do dry test runs later with the API calls just returning the corresponding responses from the recording session - and this is the important part - without needing to talk to the above-mentioned backend system.
So if my application contains a line of the form:
Object b = callBackend(a);
I would like the framework to first capture that callBackend() returned b given the argument a, and then when I do the dry run at any later time say "hey, given a this call should return b". The values of a and b will be the same (if not, we will rerun the recording step).
I can override the class providing the API so all the method calls to capture will go through my code (i.e. byte code instrumentation to alter behavior of classes outside my control is not necessary).
What framework should I look into to do this?
EDIT: Please note that bounty hunters should provide actual code demonstrating the behavior I am looking for.
Actually, you can build such a framework or template yourself by using the proxy pattern. Here I explain how you can do it with the dynamic proxy pattern. The idea is to:
Write a proxy manager that hands out recorder and replayer proxies of the API on demand.
Write a wrapper class to store the collected invocation information, and implement hashCode and equals on that wrapper class for efficient lookup in a Map-like data structure.
And finally, use the recorder proxy for recording and the replayer proxy for replaying.
How the recorder works:
invokes the real API
collects the invocation information
persists the data in the expected persistence context
How the replayer works:
collects the method information (method name, parameters)
if the collected information matches previously recorded information, returns the previously recorded return value
if no matching recording is found, invokes the real method (as in the recording step) and persists the collected information (as you wanted)
Now, let's look at the implementation. If your API is MyApi like below:
public interface MyApi {
public String getMySpouse(String myName);
public int getMyAge(String myName);
...
}
Now we will record and replay the invocation of public String getMySpouse(String myName). To do that we can use a class to store the invocation information, like below:
public class RecordedInformation {
private String methodName;
private Object[] args;
private Object returnValue;
public String getMethodName() {
return methodName;
}
public void setMethodName(String methodName) {
this.methodName = methodName;
}
public Object[] getArgs() {
return args;
}
public void setArgs(Object[] args) {
this.args = args;
}
public Object getReturnValue() {
return returnValue;
}
public void setReturnValue(Object returnValue) {
this.returnValue = returnValue;
}
@Override
public int hashCode() {
return super.hashCode(); //change your implementation as you like!
}
@Override
public boolean equals(Object obj) {
return super.equals(obj); //change your implementation as you like!
}
}
Now here comes the main part, the RecordReplyManager. This RecordReplyManager gives you a proxy object for your API, depending on whether you need recording or replaying.
public class RecordReplyManager implements java.lang.reflect.InvocationHandler {
private Object objOfApi;
private boolean isForRecording;
public static Object newInstance(Object obj, boolean isForRecording) {
return java.lang.reflect.Proxy.newProxyInstance(
obj.getClass().getClassLoader(),
obj.getClass().getInterfaces(),
new RecordReplyManager(obj, isForRecording));
}
private RecordReplyManager(Object obj, boolean isForRecording) {
this.objOfApi = obj;
this.isForRecording = isForRecording;
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
Object result;
if (isForRecording) {
try {
System.out.println("recording...");
System.out.println("method name: " + method.getName());
System.out.print("method arguments:");
for (Object arg : args) {
System.out.print(" " + arg);
}
System.out.println();
result = method.invoke(objOfApi, args);
System.out.println("result: " + result);
RecordedInformation recordedInformation = new RecordedInformation();
recordedInformation.setMethodName(method.getName());
recordedInformation.setArgs(args);
recordedInformation.setReturnValue(result);
//persist your information
} catch (InvocationTargetException e) {
throw e.getTargetException();
} catch (Exception e) {
throw new RuntimeException("unexpected invocation exception: " +
e.getMessage());
} finally {
// do nothing
}
return result;
} else {
try {
System.out.println("replying...");
System.out.println("method name: " + method.getName());
System.out.print("method arguments:");
for (Object arg : args) {
System.out.print(" " + arg);
}
RecordedInformation recordedInformation = new RecordedInformation();
recordedInformation.setMethodName(method.getName());
recordedInformation.setArgs(args);
//if your invocation information (this RecordedInformation) is found in the previously collected map, then return the returnValue from that RecordedInformation.
//if a corresponding RecordedInformation does not exist, then invoke the real method (like in the recording step), wrap the collected information into a RecordedInformation and persist it as you like! For example:
result = findRecordedReturnValue(recordedInformation); // placeholder: look up the previously persisted recordings
if (result == null) {
result = method.invoke(objOfApi, args);
recordedInformation.setReturnValue(result);
//persist recordedInformation here
}
} catch (InvocationTargetException e) {
throw e.getTargetException();
} catch (Exception e) {
throw new RuntimeException("unexpected invocation exception: " +
e.getMessage());
} finally {
// do nothing
}
return result;
}
}
}
If you want to record the method invocations, all you need is to get an API proxy like below:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithRecorder = (MyApi) RecordReplyManager.newInstance(realApi, true); // true for recording
myApiWithRecorder.getMySpouse("richard"); // to record getMySpouse
myApiWithRecorder.getMyAge("parker"); // to record getMyAge
...
And to replay all you need:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithReplayer = (MyApi) RecordReplyManager.newInstance(realApi, false); // false for replaying
myApiWithReplayer.getMySpouse("richard"); // to replay getMySpouse
myApiWithReplayer.getMyAge("parker"); // to replay getMyAge
...
And you are done!
Edit:
The basic steps of the recorder and replayer can be done in the above-mentioned way. Now it's up to you how you want to use or perform those steps: you can do whatever you like in the recorder and replayer code blocks; just choose your implementation!
I should prefix this by saying I share some of the concerns in Yves Martin's answer: that such a system may prove frustrating to work with and ultimately less helpful than it would seem at first blush.
That said, from a technical standpoint, this is an interesting problem, and I couldn't not take a go at it. I put together a gist to log method calls in a fairly general way. The CallLoggingProxy class defined there allows usage such as the following.
Calendar original = CallLoggingProxy.create(Calendar.class, Calendar.getInstance());
original.getTimeInMillis(); // 1368311282470
CallLoggingProxy.ReplayInfo replayInfo = CallLoggingProxy.getReplayInfo(original);
// Persist the replay info to disk, serialize to a DB, whatever floats your boat.
// Come back and load it up later...
Calendar replay = CallLoggingProxy.replay(Calendar.class, replayInfo);
replay.getTimeInMillis(); // 1368311282470
You could imagine wrapping your API object with CallLoggingProxy.create prior to passing it into your testing methods, capturing the data afterwards, and persisting it using whatever your favorite serialization system happens to be. Later, when you want to run your tests, you can load the data back up, create a new instance based on the data with CallLoggingProxy.replay, and pass that into your methods instead.
The CallLoggingProxy is written using Javassist, as Java's native Proxy is limited to working against interfaces. This should cover the general use case, but there are a few limitations to keep in mind:
Classes declared final can't be proxied by this method. (Not easily fixable; this is a system limitation)
The gist assumes the same input to a method will always produce the same output. (More easily fixable; the ReplayInfo would need to keep track of sequences of calls for each input instead of single input/output pairs.)
The gist is not even remotely threadsafe (Fairly easily fixable; just requires a little thought and effort)
Obviously the gist is simply a proof of concept, so it's also not been very thoroughly tested, but I believe the general principle is sound. It's also possible there's a more fully baked framework out there to achieve this sort of goal, but if such a thing does exist, I'm not aware of it.
If you do decide to continue with the replay approach, then hopefully this will be enough to give you a possible direction to work in.
I had the same needs some months ago for non-regression testing when planning a heavy technical refactoring of a large application, and... I found nothing available as a framework.
In fact, replaying may be particularly difficult and may only work in a specific context - no (or few) applications of standard complexity can really be considered stateless. It is a common problem when testing persistence code with a relational database. To be relevant, the complete initial state of the system must be restored, and each replay step must impact the global state in the same way. It becomes a challenge when the system state is distributed across pieces like databases, files, memory... Guess what happens if a timestamp taken from the system clock is used somewhere!
So a more practical option is to only record... and then do a clever comparison for subsequent runs.
Depending on the number of runs you plan, a human-driven session on the application may be enough, or you may have to invest in an automated scenario with a robot driving your application's user interface.
First, to record: you can use dynamic proxy interfaces or aspect-oriented programming to intercept method calls and capture state before and after each invocation. That may mean dumping the relevant database tables, copying some files, or serializing Java objects in a text format like XML.
Then compare this reference capture with a new run. The comparison should be tuned to exclude any irrelevant elements from each piece of state, like row identifiers, timestamps, file names... so that you only compare the data where your backend's added value shines.
Finally, there is nothing really standard here, and often a few specific scripts and bits of code are enough to achieve the aim: detect as many errors as possible and try to prevent unexpected side effects.
This can be done with AOP, aspect-oriented programming. It allows you to intercept method calls through bytecode manipulation. Do a bit of searching for examples.
In one case this does the recording, in the other the replaying.
Pointers: wikipedia, AspectJ, Spring AOP.
Unfortunately this moves a bit outside plain Java syntax, and a simple example with an explanation is better sought elsewhere.
Maybe combine it with unit tests / some mocking test framework for offline testing with the recorded data.
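As a rough illustration of the interception part with Spring AOP (the pointcut expression and the BackendApi interface are assumptions):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class RecordingAspect {

    @Around("execution(* com.example.BackendApi.*(..))") // hypothetical package and interface
    public Object record(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed(); // call the real backend
        // here you would persist pjp.getSignature(), pjp.getArgs() and result for later replay
        return result;
    }
}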
You could look into Mockito.
Example:
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
when(mockedList.get(1)).thenThrow(new RuntimeException());
//following prints "first"
System.out.println(mockedList.get(0));
//following throws runtime exception
System.out.println(mockedList.get(1));
//following prints "null" because get(999) was not stubbed
System.out.println(mockedList.get(999));
Afterwards you can replay each test as many times as you want, and it will return the data that you put in.
// pseudocode
class LogMethod {
    List<String> parameters;
    String method;

    void addCallTo(String method, List<String> params) {
        this.method = method;
        this.parameters = params;
    }
}
Have a list of LogMethods and call new LogMethod().addCallTo() before every call in your test method.
The idea of playing back the API calls sounds like a use case for the event sourcing pattern. Martin Fowler has a good article on it here. This is a nice pattern that records events as a sequence of objects which are then stored; you can then replay the sequence of events as required.
There is an implementation of this pattern using Akka called Eventsourced, which may help you build the type of system you require.
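A very small sketch of the idea applied to the question's callBackend example (the event class and in-memory log are only illustrative; Eventsourced provides the real journaling and replay machinery):
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Record each API call as an event, append it to a log, replay later.
class ApiCallEvent implements Serializable {
    final String method;
    final Object[] args;
    final Object result;

    ApiCallEvent(String method, Object[] args, Object result) {
        this.method = method;
        this.args = args;
        this.result = result;
    }
}

List<ApiCallEvent> eventLog = new ArrayList<>();
eventLog.add(new ApiCallEvent("callBackend", new Object[]{a}, b)); // during the real session

// later: replay by reading the stored result instead of calling the backend
Object replayed = eventLog.get(0).result;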
I had a similar problem some years ago. None of the above solutions would have worked for methods that are not pure functions (side-effect free). The major tasks are, in my opinion:
how to extract a snapshot of the recorded object(s) (not only restricted to objects implementing Serializable)
how to generate test code of a serialized representation in a readable way (not only restricted to beans, primitives and collections)
So I had to go my own way - with testrecorder.
For example, given:
ResultObject b = callBackend(a);
...
ResultObject callBackend(SourceObject source) {
...
}
you will only have to annotate the method like this:
@Recorded
ResultObject callBackend(SourceObject source) {
...
}
and start your application (the one that should be recorded) with the testrecorder agent. Testrecorder will manage all tasks for you, such as:
serializing arguments, results, state of this, exceptions (complete object graph!)
finding a readable representation for object construction and object matching
generating a test from the serialized data
you can extend recordings to global variables, input and output with annotations
An example for the test will look like this:
void testCallBackend() {
//arrange
SourceObject sourceObject1 = new SourceObject();
sourceObject1.setState(...); // testrecorder can use setters but is not limited to them
... // setting up backend
... // setting up globals, mocking inputs
//act
ResultObject resultObject1 = backend.callBackend(sourceObject1);
//assert
assertThat(resultObject1, new GenericMatcher() {
... // property matchers
}.matching(ResultObject.class));
... // assertions on backend and sourceObject1 for potential side effects
... // assertions on outputs and globals
}
If I understood your question correctly, you should try db4o.
You would store the objects with db4o and restore them later to mock the backend in JUnit tests.

How to automate Hibernate boilerplate in RMI remote methods

I have an RMI class that accepts remote calls from clients.
This class uses Hibernate to load entities and perform some business logic, in general read-only.
Currently most of the remote method bodies look like this:
try {
HibernateUtil.currentSession().beginTransaction();
//load entities, do some business logic...
} catch (HibernateException e) {
logger.error("Hibernate problem...", e);
throw e;
} catch (other exceptions...) {
logger.error("other problem happened...", e);
throw e;
} finally {
HibernateUtil.currentSession().getTransaction().rollback(); //this because it's read-only, we make sure we don't commit anything
HibernateUtil.currentSession().close();
}
I would like to know if there is some pattern that I could (relatively easily) implement in order to automatically have this "try to open session/catch hibernate exception/finally close hibernate resources" behavior without having to code it in every method.
Something similar to the "open session in view" pattern that is used in webapps, but applied to remote RMI method calls instead of HTTP requests.
Ideally I would like to be able to still call the methods directly, and not use reflection with method names passed as strings.
I would suggest you use the Spring + Hibernate stack. This saves a lot of repetitive code, which I guess is what you are looking for. Please check this link. It is actually an example of a web application, but the same can be used for a standalone application as well.
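For a sense of what that saves, a sketch of one read-only method with Spring's declarative transactions (assuming a configured transaction manager and session factory; the names here are made up):
import org.springframework.transaction.annotation.Transactional;

public class ReadOnlyLookupService {

    @Transactional(readOnly = true)
    public MyEntity findEntity(long id) {
        // load entities and run the read-only business logic here;
        // Spring opens the session/transaction and cleans it up, replacing the try/catch/finally boilerplate
        return loadFromCurrentSession(id); // hypothetical helper using the current Hibernate session
    }
}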
All I wanted was a "quick and clean" solution, if possible, so no new framework for now (I might use the Spring + Hibernate stack later on, though).
So I ended up with a "quick-and-not-so-dirty" solution involving a variant of the Command pattern, where the Hibernate calls are encapsulated inside anonymous inner classes implementing my generic Command interface, and the command executer wraps the call with the Hibernate session and exception handling. The generics are there to allow different return value types for the execute method.
I am not 100% satisfied with this solution since it still implies some boilerplate code wrapped around my business logic (I am especially unhappy about the explicit casting needed for the return value) and it makes it slightly more complicated to understand and debug.
However the gain in repetitive code is still significant (from about 10 lines to 3-4 lines per method), and more importantly the Hibernate handling logic is concentrated in one class, so it can be changed easily there if needed and it's less error-prone.
Here is some of the code:
The Command interface:
public interface HibernateCommand<T> {
public T execute(Object... args) throws Exception;
}
The Executer:
public class HibernateCommandExecuter {
private static final Logger logger = Logger.getLogger(HibernateCommandExecuter.class);
public static Object executeCommand(HibernateCommand<?> command, boolean commit, Object... args) throws RemoteException{
try {
HibernateUtil.currentSession().beginTransaction();
return command.execute(args);
} catch (HibernateException e) {
logger.error("Hibernate problem : ", e);
throw new RemoteException(e.getMessage());
}catch(Exception e){
throw new RemoteException(e.getMessage(), e);
}
finally {
try{
if(commit){
HibernateUtil.currentSession().getTransaction().commit();
}else{
HibernateUtil.currentSession().getTransaction().rollback();
}
HibernateUtil.currentSession().close();
}catch(HibernateException e){
logger.error("Error while trying to clean up Hibernate context :", e);
}
}
}
}
Sample use in a remotely called method (but it could be used locally also):
@Override
public AbstractTicketingClientDTO doSomethingRemotely(final Client client) throws RemoteException {
return (MyDTO) HibernateCommandExecuter.executeCommand(new HibernateCommand<MyDTO>() {
public MyDTO execute(Object... args) throws Exception {
MyDTO dto = someService.someBusinessmethod(client);
return dto;
}
}, false);
}
Note how the client argument is declared final, so it can be referenced inside the inner class. If it is not possible to declare it final, it could be passed as a parameter to the executeCommand method.

How should a service which needs to be stoppable, but not by clients, be stopped?

The easy answer is to have an interface with all the regular operations which also includes a stop() method.
interface Service {
operation( parameters ...);
somethingElse( parameters ... );
stop();
}
The main problem with the stop method is that most clients who get a reference to a Service should probably not also be able to stop the service.
Another alternative is to simply define two interfaces, the Service and the Stoppable:
interface Service {
operation( parameters ...);
somethingElse( parameters ... );
}
interface Stoppable {
void stop();
}
The only problem with this approach is that if the implementation is wrapped by another Service, the stop method is hidden away.
The original problem of clients being able to "stop" your service also remains: they just need to check whether the reference is an instance of Stoppable, and then they can "stop" it.
How would you solve this problem?
I have an idea which solves the problem elegantly (well for me) without leaving a public stop available. However before I show it, I'd like some ideas.
Use interface inheritance:
interface Service {
operation( parameters ...);
somethingElse( parameters ... );
}
interface StoppableService extends Service {
void stop();
}
Clients that only need the raw Service are given that by an appropriate factory method. Anything that should be able to stop the service requests the StoppableService.
If you are worried about clients casting to StoppableService and calling stop(), then you'll need your concrete implementations to separate out this function. Give them a concrete implementation that has a no-op version of stop(). You can still have your factory provide a working implementation of stop() to anyone who should be able to stop it. Perhaps when they ask for an implementation from your factory, they can pass in appropriate credentials so you can determine what they should be able to do and give them the correct version.
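A rough sketch of such a factory (RealService, Credentials and the concrete method signatures are assumptions, since the interfaces above are only sketched):
class ServiceFactory {

    private final RealService real = new RealService(); // implements StoppableService

    StoppableService getService(Credentials credentials) {
        if (credentials.mayStop()) { // hypothetical privilege check
            return real;             // privileged callers get the working stop()
        }
        // everyone else gets a delegating wrapper whose stop() is a no-op
        return new StoppableService() {
            public void operation(Object... parameters) { real.operation(parameters); }
            public void somethingElse(Object... parameters) { real.somethingElse(parameters); }
            public void stop() { /* deliberately does nothing for unprivileged clients */ }
        };
    }
}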
You could do something like
interface Service {
operation( parameters ...);
somethingElse( parameters ... );
<E> E getControlInterface(Class<E> clazz);
}
and then
Stoppable stoppable = service.getControlInterface(Stoppable.class);
if (stoppable != null) {
stoppable.stop();
}
But I think this is overcomplicating things; implementing the Stoppable interface should be enough. The wrappers should be aware of Stoppable and implement it as well.
