I'm relatively new to WebFlux and am trying to zip two objects together in order to return a tuple, which is then used to build an object in the following step.
I'm doing it like so:
//I will shorten my code quite a bit.
//I'm including only the pieces that are involved in my problem "Flux.zip" call.
//This is a repository that is used in my "problem" code. It is simply an
//interface which extends ReactiveCrudRepository from Spring Data.
MyRepository repo;
//wiring in my repository...
public MyClass(MyRepository repo) {
this.repo = repo;
}
//Below is later in my reactive chain
//Starting halfway down the chain, we have a Flux of ObjectA
//(flatMapMany returning Flux<ObjectA>)
//Problem code below...
//Some context: I am zipping ObjectA with the result of saving an object
//from earlier in the chain, which is stored in an
//AtomicReference<ObjectB>
.flatMap(obj ->
    Flux.zip(Mono.just(obj), repo.save(atomicReferenceFromEarlier.get()))
        //Below, when calling "getId()" it logs the SAME ID FOR EACH OBJECT,
        //even though I want it to return EACH OBJECT'S ID THAT WAS SAVED.
        .map(myTuple2 -> log("I've saved this object {} ::", myTuple2.getT2().getId())))
//Further processing....
So, my ultimate issue is that the "second" parameter I'm zipping, repo.save(atomicReferenceFromEarlier.get()), is the same for every "zipped" tuple.
In the following step, I'm logging something like "I'm now building object", just to see which object I've returned for each event, but the "second" object in my tuple is always the same...
How can I zip and ensure that the "save" call to the repo returns a unique object for each event in the flux?
However, when I check the database, I really have saved unique entities for each event in my flux. So, the save is happening as expected, but when the repo returns a Mono, it's the same one for each tuple returned.
Please let me know if I should clarify something if anything is unclear. Thank you in advance for any and all help.
In my Java app, I have the following service method that calls another method and accumulates responses, then returns those responses as a list. If no exception occurs, it works properly. However, it is possible for one of the calls in the loop to throw an exception. In that case, the method cannot return the responses retrieved before the exception (if there are 10 processes in the loop and an exception occurs on the 6th, the previous 5 responses already added to the list are lost).
public List<CommandResponse> process(final UUID uuid) {
    final Site site = siteRepository.findByUuid(uuid)
            .orElseThrow(() -> new EntityNotFoundException(SITE_ENTITY_NAME));
    // code omitted for brevity
    for (Type providerType : providerTypeList) {
        // operations
        responses.add(demoService.demoMethod());
    }
    return responses;
}
Under these conditions, I am wondering whether I should use a try-catch mechanism, or return the response inside the loop (as below) and finally return null. What would you suggest for this situation?
public CommandResponse operation(final UUID uuid) {
    final Site site = siteRepository.findByUuid(uuid)
            .orElseThrow(() -> new EntityNotFoundException(SITE_ENTITY_NAME));
    // code omitted for brevity
    for (Type providerType : providerTypeList) {
        // operations
        return demoService.demoMethod();
    }
    return null;
}
Well, following best practices, the method demoMethod() should not throw an exception; instead it should capture the exception and send it as part of the response.
This implies that CommandResponse must be able to hold an error response as well as a success response. Following this, the code looks as follows:
class CommandResponse<T> {
    private T successResponse;
    private T errorResponse;
    private boolean success;

    public T getErrorResponse() { return errorResponse; }
    public T getSuccessResponse() { return successResponse; }
    public boolean isSuccess() { return success; }
}
And then later, while rendering the response, you can handle failures/exceptions as the use case requires.
OR
Another way to handle this is to have an interface Response with two implementations, one for success and another for failure, and make the process method return List<Response> (a sketch of this follows below).
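A minimal sketch of that second option (Response, Success and Failure are hypothetical names; CommandResponse is the original, non-generic response type from the question):

interface Response { }

class Success implements Response {
    private final CommandResponse commandResponse;
    Success(CommandResponse commandResponse) { this.commandResponse = commandResponse; }
    public CommandResponse getCommandResponse() { return commandResponse; }
}

class Failure implements Response {
    private final Exception cause;
    Failure(Exception cause) { this.cause = cause; }
    public Exception getCause() { return cause; }
}

In the loop of process(), each iteration would then add either new Success(demoService.demoMethod()) inside a try block or new Failure(e) in the catch block, and the method would return List<Response>.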
It all depends on the requirements, the contract between your process() method and its callers.
I can imagine two different styles of contract:
All Or Nothing: the caller needs the complete responses list and can't sensibly proceed if some partial response is missing. I'd recommend throwing an exception in case of an error. Typically, this is the straightforward approach, and it applies to many real-world situations (it's the reason the concept of exceptions was introduced).
Partial Results: the caller wants to get as much of the complete results list as is currently possible (plus, perhaps, information about which parts are missing). Return a data structure consisting of the partial results plus error descriptions (a sketch of such a structure follows below). This places an additional burden on the caller (extracting results from a structure instead of getting them directly, having to explicitly deal with error messages, etc.), so I'd only go that way if there is a convincing use case.
Both contracts can be valid. Only you know which one matches your situation. So, choose the right one, and document the decision.
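If you choose the Partial Results contract, a minimal sketch of such a result type could look like this (ProcessingResult and its members are made-up names, not part of the original code):

import java.util.ArrayList;
import java.util.List;

class ProcessingResult {
    private final List<CommandResponse> responses = new ArrayList<>();
    private final List<String> errors = new ArrayList<>();

    void addResponse(CommandResponse response) { responses.add(response); }
    void addError(String error) { errors.add(error); }

    List<CommandResponse> getResponses() { return responses; }
    List<String> getErrors() { return errors; }
}

The loop then becomes roughly:

for (Type providerType : providerTypeList) {
    try {
        result.addResponse(demoService.demoMethod());
    } catch (Exception e) {
        result.addError(providerType + " failed: " + e.getMessage());
    }
}
return result;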
Spring Session stores serialized objects in my database. The problem is, sometimes my code changes. Sometimes my objects change. This is normal. However, I get errors like this:
org.springframework.core.convert.ConversionFailedException: Failed to convert from type [byte[]] to type [java.lang.Object] for value '{-84, ..., 112}'; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.InvalidClassException: com.mysite.MyClass; local class incompatible: stream classdesc serialVersionUID = 1432849980928799324, local class serialVersionUID = 8454085305026634675
I get this error by invoking a Spring Boot endpoint with HttpSession as an argument, such as this one:
#GetMapping("/stuff")
public #ResponseBody MyClass getStuff(HttpSession session) {
try {
Object myObject = session.getAttribute("MyClass");
if (myObject != null && myObject instanceof MyClass) {
return (MyClass) myObject;
} else {
return null;
}
} catch (Exception e) {
logger.warn("Invalid session data", e);
return null;
}
}
However, because the exception is thrown before the method gets invoked, I am not able to recover from this normal, expected error.
As a workaround, I am forced to delete the entire session table on each deployment, even though most of the objects are still compatible!
To be clear, the solution is NOT to add a serialVersionUID, because the objects really do change in incompatible ways from one deployment to the next. This is not a serialization question; this is a Spring Session error-recovery question.
My question is: How can I gracefully recover from these issues?
You did not provide details, but I assume you are using Spring's JDBC session implementation, enabled by @EnableJdbcHttpSession?
In this case you can take a look at JdbcHttpSessionConfiguration, particularly at setSpringSessionConversionService and setConversionService. I believe that if you provide your own implementation (see createConversionServiceWithBeanClassLoader for an example), you should be able to catch the deserialization error and return an empty session.
I think all you need is to derive MyNotFailingSessionDeserializer from DeserializingConverter, override the convert method, catch SerializationFailedException and return null or an empty session (not sure which of the two works).
Then you create your conversion service the way createConversionServiceWithBeanClassLoader does, but use your MyNotFailingSessionDeserializer instead of DeserializingConverter. A sketch of this is below.
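A rough, untested sketch of that idea (the class name follows the suggestion above; the bean name springSessionConversionService is what JdbcHttpSessionConfiguration appears to expect, so verify it against your Spring Session version):

import org.springframework.core.serializer.support.DeserializingConverter;
import org.springframework.core.serializer.support.SerializationFailedException;

public class MyNotFailingSessionDeserializer extends DeserializingConverter {

    @Override
    public Object convert(byte[] source) {
        try {
            return super.convert(source);
        } catch (SerializationFailedException e) {
            // Incompatible class from a previous deployment:
            // treat the attribute as absent instead of failing the whole request.
            return null;
        }
    }
}

Registered the way createConversionServiceWithBeanClassLoader does it, but with the custom deserializer:

import org.springframework.core.convert.support.GenericConversionService;
import org.springframework.core.serializer.support.SerializingConverter;

@Bean("springSessionConversionService")
public GenericConversionService springSessionConversionService() {
    GenericConversionService conversionService = new GenericConversionService();
    conversionService.addConverter(Object.class, byte[].class, new SerializingConverter());
    conversionService.addConverter(byte[].class, Object.class, new MyNotFailingSessionDeserializer());
    return conversionService;
}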
I have a backend system which we access via a third-party Java API from our own applications. I can access the system as a normal user along with other users, but I do not have godly powers over it.
Hence, to simplify testing, I would like to run a real session and record the API calls, and persist them (preferably as editable code), so we can do dry test runs later with API calls just returning the corresponding response from the recording session - and this is the important part - without needing to talk to the above-mentioned backend system.
So if my application contains a line of the form:
Object b = callBackend(a);
I would like the framework to first capture that callBackend() returned b given the argument a, and then when I do the dry run at any later time say "hey, given a this call should return b". The values of a and b will be the same (if not, we will rerun the recording step).
I can override the class providing the API so all the method calls to capture will go through my code (i.e. byte code instrumentation to alter behavior of classes outside my control is not necessary).
What framework should I look into to do this?
EDIT: Please note that bounty hunters should provide actual code demonstrating the behavior I look for.
Actually, you can build such a framework or template yourself by using the proxy pattern. Here I explain how you can do it using the dynamic proxy pattern. The idea is to:
Write a proxy manager that hands out recorder and replayer proxies of the API on demand.
Write a wrapper class to store the collected information, and implement that wrapper's hashCode and equals methods for efficient lookup in a Map-like data structure.
Finally, use the recorder proxy for recording and the replayer proxy for replaying.
How the recorder works:
invokes the real API
collects the invocation information
persists the data in the desired persistence context
How the replayer works:
collects the method information (method name, parameters)
if the collected information matches previously recorded information, returns the previously recorded return value
if no matching recording exists, invokes the real method (as in the recording step) and persists the newly collected information (as you wanted)
Now, let's look at the implementation. If your API is MyApi, like below:
public interface MyApi {
    public String getMySpouse(String myName);
    public int getMyAge(String myName);
    ...
}
Now we will record and replay the invocation of public String getMySpouse(String myName). To do that, we can use a class to store the invocation information, like below:
public class RecordedInformation {
    private String methodName;
    private Object[] args;
    private Object returnValue;

    public String getMethodName() {
        return methodName;
    }

    public void setMethodName(String methodName) {
        this.methodName = methodName;
    }

    public Object[] getArgs() {
        return args;
    }

    public void setArgs(Object[] args) {
        this.args = args;
    }

    public Object getReturnValue() {
        return returnValue;
    }

    public void setReturnValue(Object returnValue) {
        this.returnValue = returnValue;
    }

    @Override
    public int hashCode() {
        return super.hashCode(); //change this implementation as you like (e.g. based on methodName and args)!
    }

    @Override
    public boolean equals(Object obj) {
        return super.equals(obj); //change this implementation as you like (e.g. based on methodName and args)!
    }
}
Now here comes the main part, the RecordReplyManager. This RecordReplyManager gives you a proxy object of your API, depending on whether you need recording or replaying.
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class RecordReplyManager implements java.lang.reflect.InvocationHandler {

    private Object objOfApi;
    private boolean isForRecording;

    public static Object newInstance(Object obj, boolean isForRecording) {
        return java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new RecordReplyManager(obj, isForRecording));
    }

    private RecordReplyManager(Object obj, boolean isForRecording) {
        this.objOfApi = obj;
        this.isForRecording = isForRecording;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        Object result;
        if (isForRecording) {
            try {
                System.out.println("recording...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                System.out.println();

                result = method.invoke(objOfApi, args);
                System.out.println("result: " + result);

                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);
                recordedInformation.setReturnValue(result);

                //persist your information

            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        } else {
            try {
                System.out.println("replaying...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                System.out.println();

                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);

                //if this RecordedInformation is found in the previously persisted recordings,
                //return the return value stored in that RecordedInformation.
                //if no corresponding RecordedInformation exists, invoke the real method (as in the
                //recording step), wrap the collected information into a RecordedInformation and persist it as you like!
                result = null; //placeholder: plug in your lookup against the persisted recordings here

            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        }
    }
}
If you want to record the method invocations, all you need is to get an API proxy like below:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithRecorder = (MyApi) RecordReplyManager.newInstance(realApi, true); // true for recording
myApiWithRecorder.getMySpouse("richard"); // to record getMySpouse
myApiWithRecorder.getMyAge("parker"); // to record getMyAge
...
And to replay, all you need is:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithReplayer = (MyApi) RecordReplyManager.newInstance(realApi, false); // false for replaying
myApiWithReplayer.getMySpouse("richard"); // to replay getMySpouse
myApiWithRecorder.getMyAge("parker"); // to replay getMyAge
...
And you are done!
Edit:
The basic steps of the recorder and replayer can be done in the above-mentioned way. Now it's up to you how you want to perform those steps; you can do whatever you like in the recorder and replayer code blocks, just choose your own implementation!
I should prefix this by saying I share some of the concerns in Yves Martin's answer: that such a system may prove frustrating to work with and ultimately less helpful than it would seem at first blush.
That said, from a technical standpoint, this is an interesting problem, and I couldn't not take a go at it. I put together a gist to log method calls in a fairly general way. The CallLoggingProxy class defined there allows usage such as the following.
Calendar original = CallLoggingProxy.create(Calendar.class, Calendar.getInstance());
original.getTimeInMillis(); // 1368311282470
CallLoggingProxy.ReplayInfo replayInfo = CallLoggingProxy.getReplayInfo(original);
// Persist the replay info to disk, serialize to a DB, whatever floats your boat.
// Come back and load it up later...
Calendar replay = CallLoggingProxy.replay(Calendar.class, replayInfo);
replay.getTimeInMillis(); // 1368311282470
You could imagine wrapping your API object with CallLoggingProxy.create prior to passing it into your testing methods, capturing the data afterwards, and persisting it using whatever your favorite serialization system happens to be. Later, when you want to run your tests, you can load the data back up, create a new instance based on the data with CallLoggingProxy.replay, and passing that into your methods instead.
The CallLoggingProxy is written using Javassist, as Java's native Proxy is limited to working against interfaces. This should cover the general use case, but there are a few limitations to keep in mind:
Classes declared final can't be proxied by this method. (Not easily fixable; this is a system limitation)
The gist assumes the same input to a method will always produce the same output. (More easily fixable; the ReplayInfo would need to keep track of sequences of calls for each input instead of single input/output pairs.)
The gist is not even remotely threadsafe (Fairly easily fixable; just requires a little thought and effort)
Obviously the gist is simply a proof of concept, so it's also not been very thoroughly tested, but I believe the general principle is sound. It's also possible there's a more fully baked framework out there to achieve this sort of goal, but if such a thing does exist, I'm not aware of it.
If you do decide to continue with the replay approach, then hopefully this will be enough to give you a possible direction to work in.
I had the same need some months ago for non-regression testing when planning a heavy technical refactoring of a large application, and... I found nothing available as a framework.
In fact, replaying may be particularly difficult and may only work in a specific context; no (or very few) applications of ordinary complexity can really be considered stateless. It is a common problem when testing persistence code with a relational database. To be relevant, the complete initial system state must be restored and each replay step must impact the global state in the same way. It becomes a challenge when the system state is distributed across pieces like databases, files, memory... Guess what happens if a timestamp taken from the system clock is used somewhere!
So a more practical option is to only record... and then do a clever comparison on subsequent runs.
Depending on the number of runs you plan, a human-driven session on the application may be enough, or you may have to invest in an automated scenario with a robot driving your application's user interface.
First, to record: you can use a dynamic proxy or aspect-oriented programming to intercept method calls and capture state before and after each invocation. That may mean dumping the relevant database tables, copying some files, or serializing Java objects in a text format like XML.
Then compare this reference capture with a new run. This comparison should be tuned to exclude any irrelevant elements from each piece of state, like row identifiers, timestamps, file names... so that you only compare data where your backend's added value shines.
Finally, there is nothing really standard here, and often a few specific scripts and a bit of code are enough to achieve the aim: detect as many errors as possible and try to prevent unexpected side effects.
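To make the "clever comparison" step concrete, here is a rough, hypothetical sketch (the volatile key names and the map-based capture are my own illustration, not from this answer): capture each piece of state as a key/value map, strip the entries that legitimately differ between runs, then diff against the reference capture.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

class StateComparator {

    // Keys that legitimately differ between runs and must not be compared.
    private static final Set<String> VOLATILE_KEYS = Set.of("id", "timestamp", "fileName");

    static Map<String, Object> normalize(Map<String, Object> raw) {
        Map<String, Object> normalized = new HashMap<>(raw);
        normalized.keySet().removeAll(VOLATILE_KEYS);
        return normalized;
    }

    static List<String> diff(Map<String, Object> reference, Map<String, Object> current) {
        Map<String, Object> expected = normalize(reference);
        Map<String, Object> actual = normalize(current);
        List<String> differences = new ArrayList<>();
        for (Map.Entry<String, Object> entry : expected.entrySet()) {
            Object actualValue = actual.get(entry.getKey());
            if (!Objects.equals(entry.getValue(), actualValue)) {
                differences.add(entry.getKey() + ": expected=" + entry.getValue() + ", actual=" + actualValue);
            }
        }
        return differences;
    }
}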
This can be done with AOP, aspect-oriented programming. It allows you to intercept method calls through bytecode manipulation. Do a bit of searching for examples.
In one case it can do the recording, in the other the replaying.
Pointers: wikipedia, AspectJ, Spring AOP.
Unfortunately this moves a bit outside plain Java syntax, and a simple example (with an explanation) is better sought elsewhere; a rough sketch follows below.
Maybe combine it with unit tests / some mocking test framework for offline testing with the recorded data.
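For illustration, a minimal Spring AOP recording aspect could look like the following (the pointcut expression and MyApi are assumptions, and the persistence is left as a println):

import java.util.Arrays;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class RecordingAspect {

    // Intercept every call on the (assumed) backend API interface.
    @Around("execution(* com.example.backend.MyApi.*(..))")
    public Object record(ProceedingJoinPoint joinPoint) throws Throwable {
        Object result = joinPoint.proceed(); // call the real backend
        // Persist method name, arguments and result however you like;
        // a replay aspect would look the call up instead and skip proceed().
        System.out.println("recorded " + joinPoint.getSignature().getName()
                + Arrays.toString(joinPoint.getArgs()) + " -> " + result);
        return result;
    }
}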
You could look into Mockito.
Example:
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
when(mockedList.get(1)).thenThrow(new RuntimeException());
//following prints "first"
System.out.println(mockedList.get(0));
//following throws runtime exception
System.out.println(mockedList.get(1));
//following prints "null" because get(999) was not stubbed
System.out.println(mockedList.get(999));
Afterwards you can replay each test as many times as you want, and it will return the data that you put in.
import java.util.List;

class LogMethod {
    private String method;
    private List<String> parameters;

    void addCallTo(String method, List<String> params) {
        this.method = method;
        this.parameters = params;
    }
}
Have a list of LogMethods and call new LogMethod().addCallTo() before every call in your test method.
The idea of playing back the API calls sounds like a use case for the event sourcing pattern. Martin Fowler has a good article on it here. This is a nice pattern that records events as a sequence of objects which are then stored; you can then replay the sequence of events as required.
There is an implementation of this pattern using Akka called Eventsourced, which may help you build the type of system you require.
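As a very rough illustration of the pattern in this context (all names below are hypothetical; Eventsourced/Akka gives you persistence, snapshots and much more on top of this):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A recorded backend call, stored as an immutable event.
record BackendCallEvent(String method, Object argument, Object result) { }

class EventJournal {
    private final List<BackendCallEvent> events = new ArrayList<>();

    // Recording: append each event (a real implementation would persist it).
    void append(BackendCallEvent event) {
        events.add(event);
    }

    // Replay: feed every stored event back through a handler, e.g. a stub backend.
    void replay(Consumer<BackendCallEvent> handler) {
        events.forEach(handler);
    }
}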
I had a similar problem some years ago. None of the above solutions would have worked for methods that are not pure functions (side-effect free). The major tasks, in my opinion, are:
how to extract a snapshot of the recorded object(s) (not only restricted to objects implementing Serializable)
how to generate test code of a serialized representation in a readable way (not only restricted to beans, primitives and collections)
So I had to go my own way - with testrecorder.
For example, given:
ResultObject b = callBackend(a);
...
ResultObject callBackend(SourceObject source) {
...
}
you will only have to annotate the method like this:
@Recorded
ResultObject callBackend(SourceObject source) {
...
}
and start your application (the one that should be recorded) with the testrecorder agent. Testrecorder will manage all tasks for you, such as:
serializing arguments, results, state of this, exceptions (complete object graph!)
finding a readable representation for object construction and object matching
generating a test from the serialized data
you can extend recordings to global variables, input and output with annotations
An example of the generated test will look like this:
void testCallBackend() {
    //arrange
    SourceObject sourceObject1 = new SourceObject();
    sourceObject1.setState(...); // testrecorder can use setters but is not limited to them
    ... // setting up backend
    ... // setting up globals, mocking inputs

    //act
    ResultObject resultObject1 = backend.callBackend(sourceObject1);

    //assert
    assertThat(resultObject1, new GenericMatcher() {
        ... // property matchers
    }.matching(ResultObject.class));
    ... // assertions on backend and sourceObject1 for potential side effects
    ... // assertions on outputs and globals
}
If I understood your question correctly, you should try db4o.
You would store the objects with db4o during the recording run and restore them later for your mocks and JUnit tests.
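A very small sketch of what that could look like with db4o's embedded API (the file name and sample values are made up, and I'm reusing the RecordedInformation class from the proxy answer above):

import com.db4o.Db4oEmbedded;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

ObjectContainer db = Db4oEmbedded.openFile(Db4oEmbedded.newConfiguration(), "recordings.db4o");
try {
    // Recording run: store each captured invocation.
    RecordedInformation recording = new RecordedInformation();
    recording.setMethodName("getMySpouse");
    recording.setArgs(new Object[] { "richard" });
    recording.setReturnValue("mary");
    db.store(recording);

    // Replay run: query the stored invocations back and feed them into your stubs.
    ObjectSet<RecordedInformation> recordings = db.query(RecordedInformation.class);
    for (RecordedInformation stored : recordings) {
        // wire stored.getReturnValue() into your mocked backend
    }
} finally {
    db.close();
}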
In a GWT app I present items that can be edited by users. Loading and saving the items is performed using the GWT RequestFactory. What I now want to achieve is that if two users concurrently edit an item, the user who saves first wins, in the fashion of optimistic concurrency control. That means that when the second user saves his changes, the RequestFactory backend recognizes that the version or presence of the item stored in the backend has changed since it was transferred to the client, and the RequestFactory/backend then somehow prevents the item from being updated/saved.
I tried to implement this in the service method that saves the items, but this does not work, because RequestFactory hands in the items just retrieved from the backend with the user's changes applied, meaning the versions of these items are the current versions from the backend and a comparison is pointless.
Are there any hooks in the RequestFactory processing I could leverage to achieve the requested behaviour? Any other ideas? Or do I have to use GWT-RPC instead?
No: http://code.google.com/p/google-web-toolkit/issues/detail?id=6046
Until the proposed API is implemented (EntityLocator, in comment #1, but it's not clear to me how the version info could be reconstructed from its serialized form), you'll have to somehow send the version back to the server.
As I said in the issue, this cannot be done simply by making the version property available in the proxy and setting it. But you could add another property: getting it would always return null (or a similar nonexistent value), so that setting it on the client side to the value of the "true" version property would always produce a change, which guarantees the value will be sent to the server as part of the "property diff". On the server side, you could handle things either in the setter (when RequestFactory applies the "property diff" and calls the setter, throw an exception if the value differs from the "true" version), or in the service methods (compare the version sent from the client, which you'd read from a different getter than the one mapped on the client, as that one must always return null, with the "true" version of the object, and raise an error if they don't match).
Something like:
@ProxyFor(MyEntity.class)
interface MyEntityProxy extends EntityProxy {
    String getServerVersion();
    String getClientVersion();
    void setClientVersion(String clientVersion);
    …
}

@Entity
class MyEntity {
    private String clientVersion;
    @Version private String serverVersion;

    public String getServerVersion() { return serverVersion; }
    public String getClientVersion() { return null; }

    public void setClientVersion(String clientVersion) {
        this.clientVersion = clientVersion;
    }

    public void checkVersion() {
        if (!Objects.equal(serverVersion, clientVersion)) {
            throw new OptimisticConcurrencyException();
        }
    }
}
Note that I haven't tested this, this is pure theory.
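For what it's worth, the service-method variant mentioned above might then look roughly like this (the save method name is made up, and this is as untested as the rest):

public static void save(MyEntity entity) {
    entity.checkVersion();   // throws if the client-sent version differs from the @Version field
    // ... proceed with the normal save, e.g. entityManager.merge(entity)
}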
We came up with another workaround for optimistic locking in our app. Since the version can't be passed with the proxy itself (as Thomas explained), we are passing it to the request factory via an HTTP GET parameter.
On the client:
MyRequestFactory factory = GWT.create(MyRequestFactory.class);
RequestTransport transport = new DefaultRequestTransport() {
    @Override
    public String getRequestUrl() {
        return super.getRequestUrl() + "?version=" + getMyVersion();
    }
};
factory.initialize(new SimpleEventBus(), transport);
On the server we create a ServiceLayerDecorator and read the version from RequestFactoryServlet.getThreadLocalRequest():
public static class MyServiceLayerDecorator extends ServiceLayerDecorator {

    @Override
    public final <T> T loadDomainObject(final Class<T> clazz, final Object domainId) {
        HttpServletRequest threadLocalRequest = RequestFactoryServlet.getThreadLocalRequest();
        String clientVersion = threadLocalRequest.getParameter("version");

        T domainObject = super.loadDomainObject(clazz, domainId);
        String serverVersion = ((HasVersion) domainObject).getVersion();

        if (versionMismatch(serverVersion, clientVersion)) {
            report("Version error!");
        }

        return domainObject;
    }
}
The advantage is that loadDomainObject() is called before any changes are applied to the domain object by RF.
In our case we're just tracking one entity, so we're using a single version parameter, but the approach can be extended to multiple entities.