I'm working on a project revamp for a company where they want to split their system into a front-end/client and a back-end/server (more of a middleman between the front-end and the database server), and I'm supposed to use JAX-WS RPC while maintaining the current functionality.
By maintaining functionality they mean that some methods are supposed to return null, which is forbidden by WS-I.
Searching for possible solutions, I stumbled upon this article: http://victor-ichim.blogspot.com.br/2011/03/rpcliteral-and-null-object-pattern.html which basically solves a similar problem by using EJB Interceptors to intercept and replace null results with empty objects.
Building on that concept, I thought of intercepting the results the same way: replacing null with something like a string template on the server, then intercepting it again on the client and replacing that template back with null.
My questions are:
They don't use EJB by default, so no Interceptors per se. Is there some implementation that could work for both Tomcat and JBoss?
Even if I'm able to intercept the return server-side, how could I do it client-side?
If I can use SOAPHandlers, how can I avoid raising the SOAP Fault for trying to return null?
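To make the idea concrete, here is roughly the kind of handler I have in mind (an untested sketch; the marker value and the actual body rewriting are placeholders):
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class NullMaskingHandler implements SOAPHandler<SOAPMessageContext> {

    private static final String NULL_MARKER = "'NULL'"; // hypothetical sentinel value

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (outbound) {
            // Server side: walk the response body and replace null results
            // with NULL_MARKER; a client-side twin would do the reverse.
        }
        return true; // continue the handler chain
    }

    @Override
    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    @Override
    public void close(MessageContext context) {
    }

    @Override
    public Set<QName> getHeaders() {
        return null;
    }
}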
Since I also had problems with JAXB not handling interfaces, what I ended up doing was using the @XmlJavaTypeAdapter annotation to enable (selectively, since every return and parameter that could practically be null needs to be annotated) converting values to and from null, in a sort of hack-job manner. I created a generic-ish adapter for Serializable objects, and followed the same sort of approach for other kinds of objects:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;
import javax.xml.bind.annotation.adapters.XmlAdapter;

public class SerializableAdapter extends XmlAdapter<String, Serializable> {

    private static final String NULL = "'NULL'"; // Will hopefully never collide

    @Override
    public Serializable unmarshal(String e) throws Exception {
        if (NULL.equals(e)) { // compare by value, not by reference
            return null;
        }
        byte[] eB = e.getBytes("ISO-8859-1");
        InputStream iS = new ByteArrayInputStream(Base64.getDecoder().decode(eB));
        ObjectInputStream oIS = new ObjectInputStream(iS);
        return (Serializable) oIS.readObject();
    }

    @Override
    public String marshal(Serializable o) throws Exception {
        if (o == null) {
            return NULL;
        }
        ByteArrayOutputStream bAOS = new ByteArrayOutputStream();
        ObjectOutputStream oOS = new ObjectOutputStream(bAOS);
        oOS.writeObject(o);
        return Base64.getEncoder().encodeToString(bAOS.toByteArray());
    }
}
And then I annotated every Serializable instance with @XmlJavaTypeAdapter(SerializableAdapter.class), since using package-level @XmlJavaTypeAdapters didn't work for some reason, and so forth for the other cases. JAXB seems to eagerly cast the types encoded to and from when calling the adapters, so it will compile just fine even if the object to be marshalled isn't an instance of the expected class/interface, and will throw exceptions only at runtime.
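For illustration, an affected web method ends up looking something like this (the method and names here are made up):
@WebMethod
@XmlJavaTypeAdapter(SerializableAdapter.class)
public Serializable findDocument(
        @XmlJavaTypeAdapter(SerializableAdapter.class) Serializable key) {
    return repository.lookup(key); // hypothetical lookup; may legitimately return null
}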
I don't recommend doing it this way, since it requires annotating every single method/parameter or package, and it will break at the first one that didn't get annotated and yet received null. This adapter still serves a purpose for cases where I need to work with interfaces and the implementing classes also implement Serializable, although there are cases that still need specific adapters; but that's usually badly thought-out code.
Partially because of the hackiness of this and the hassle of annotating everything, I managed to convince the company to move away from SOAP RPC bindings, so I was able to have null parameters and returns without all of this.
Related
We are trying to create a Java client for an API created with Spring Data.
Some endpoints return hal+json responses containing _embedded and _links attributes.
Our main problem at the moment is trying to wrap our heads around the following structure:
{
"_embedded": {
"plans": [
{
...
}
]
},
...
}
When you hit the plans endpoint you get a paginated response, the content of which is within the _embedded object. So the logic is that you call plans and you get back a response containing an _embedded object that contains a plans attribute holding an array of plan objects.
The content of the _embedded object can vary as well, and a solution using generics, like the following example, ended up returning a List of LinkedHashMap objects instead of the expected type.
class PaginatedResponse<T> {
    @JsonProperty("_embedded")
    Embedded<T> embedded;
    ....
}

class Embedded<T> {
    @JsonAlias({"plans", "projects"})
    List<T> content; // This instead of type T ends up deserialising as a List of LinkedHashMap objects
    ....
}
I am not sure if the above issue is relevant to this Jackson bug report dating from 2015.
The only solution we have so far is to either create a paginated response for each type of content, with explicitly defined types, or to include a List<type_here> for each type of object we expect to receive and make sure that we only read from the populated list and not the null ones.
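For reference, the explicitly-typed variant looks roughly like this (Plan is one of our content types; the page attributes are omitted):
class PlanPaginatedResponse {
    @JsonProperty("_embedded")
    PlanEmbedded embedded;
    // ... page and _links attributes
}

class PlanEmbedded {
    @JsonProperty("plans")
    List<Plan> plans;
}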
So our main question in this quite spread-out issue is: how is one supposed to navigate such an API without the use of Spring?
We do not consider using Spring in any form an acceptable solution. At the same time, and I may be quite wrong here, it looks like in the Java world Spring is the only framework actively supporting/promoting HAL/HATEOAS?
I'm sorry if there are wrongly expressed concepts, assumptions and terminology in this question, but we are trying to wrap our heads around the philosophy of such an implementation and how to deal with it from a Java point of view.
You can try consuming a HATEOAS API using super type tokens: a kind of generic way to handle any kind of HATEOAS response.
For example, below is a generic class to handle responses:
public class Resource<T> {
    private T content;
    private List<Link> links;

    protected Resource() {
        this.content = null;
    }

    public Resource(T content, Link... links) {
        this(content, Arrays.asList(links));
    }

    public Resource(T content, List<Link> links) {
        this.content = content;
        this.links = links;
    }
}
Below is the code to read the response for various objects:
ObjectMapper objectMapper = new ObjectMapper();
Resource<ObjectA> objectA = objectMapper.readValue(response, new TypeReference<Resource<ObjectA>>() {});
Resource<ObjectB> objectB = objectMapper.readValue(response, new TypeReference<Resource<ObjectB>>() {});
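If the content class is only known at runtime (so the anonymous TypeReference cannot be written), Jackson's TypeFactory can build the same parameterized type programmatically; a small sketch:
// Equivalent to the TypeReference above, but composable at runtime.
JavaType type = objectMapper.getTypeFactory()
        .constructParametricType(Resource.class, ObjectA.class);
Resource<ObjectA> resource = objectMapper.readValue(response, type);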
You can refer to the posts below:
http://www.java-allandsundry.com/2012/12/json-deserialization-with-jackson-and.html
http://www.java-allandsundry.com/2014/01/consuming-spring-hateoas-rest-service.html
The problem
We have upgraded our Spring Boot version from 2.0.5 to 2.1.8.
As a result, Spring AMQP was also upgraded, from 2.0.6 to 2.1.8.
Since then, Jackson2JsonMessageConverter has been unable to parse answer messages coming from methods annotated with @RabbitListener, because they return an interface (actually declared as a generic in the code) and this interface is used to set the message _TypeId_ property.
With version 2.0 it used to set _TypeId_ to the actual concrete class.
I did some digging and here is my understanding of the problem (code will follow).
When the method annotated with @RabbitListener returns, MessagingMessageListenerAdapter#onMessage is invoked and encapsulates the result in an InvocationResult object, which contains a genericType property.
This genericType is set from the return type of the method annotated with @RabbitListener.
It is then used by the Jackson2JsonMessageConverter#createMessage method to set the _TypeId_ property.
On the other side, Jackson2JsonMessageConverter#fromMessage can then parse the JSON using this property to find the actual type.
The problem is that, since the introduction of InvocationResult and genericType, our method annotated with @RabbitListener is declared as returning an interface, so the _TypeId_ property is set to the interface instead of the actual concrete class. Here is the bit of code from Jackson2JsonMessageConverter#createMessage (actually from AbstractJackson2MessageConverter) which has changed (among others):
if (getClassMapper() == null) {
getJavaTypeMapper().fromJavaType(this.objectMapper.constructType(
genericType == null ? objectToConvert.getClass() : genericType), messageProperties);
}
else {
getClassMapper().fromClass(objectToConvert.getClass(), messageProperties); // NOSONAR never null
}
Since genericType is not null and contains the interfaceType... you can see our trouble.
Prior to version 2.1, we had no problem, since Jackson2JsonMessageConverter#createMessage() always used objectToConvert.getClass() directly:
if (getClassMapper() == null) {
getJavaTypeMapper().fromJavaType(this.jsonObjectMapper.constructType(objectToConvert.getClass()),
messageProperties);
}
else {
getClassMapper().fromClass(objectToConvert.getClass(),
messageProperties);
}
The code
Here is our code:
public abstract class AWorker<I extends RequestDtoInterface, O extends ResponseDtoInterface> {

    @RabbitListener(queues = "${rabbit.worker.queue}"
            , errorHandler = "workerErrorHandler"
            , returnExceptions = "true")
    public O receiveMessage(I inputMessage, @Header(LoggerUtil.LOGGER_MDC_ID) String mdcId, @Header(RabbitConstants.CONTEXT_INFO_HEADER_KEY) String contextInfoStr) {
        if (inputMessage instanceof RequestDtoFirstImplementation) {
            return (O) new ResponseDtoFirstImplementation(); // unchecked cast, simplified example
        } else {
            return (O) new ResponseDtoSecondImplementation();
        }
    }
}
Of course, the content of the receiveMessage method is simplified; the point is that the actual implementation can return different concrete types depending on the input concrete type.
Possible solution
I figured out two possible workarounds, but neither is really nice or easy to maintain.
The first would be to use a RemoteInvocationAwareMessageConverterAdapter to encapsulate the Jackson2jsonMessageConverter.
If I do that, MessageConverter#toMessage(Object object, MessageProperties messageProperties) is called instead of MessageConverter#toMessage(Object object, MessageProperties messageProperties, @Nullable Type genericType), and so we fall back to the old way.
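For illustration, the wrapping itself would just be something like this (a sketch; where the converter is registered depends on your configuration):
// The adapter does not expose the genericType-aware toMessage overload,
// so the converter falls back to objectToConvert.getClass() for _TypeId_.
Jackson2JsonMessageConverter jackson = new Jackson2JsonMessageConverter();
MessageConverter converter = new RemoteInvocationAwareMessageConverterAdapter(jackson);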
But that sounds more like a hack than a proper solution. And if RemoteInvocationAwareMessageConverter changes its behaviour, we are back to the initial problem.
The second would be to use an existing ClassMapper, but even if it works, that sounds more difficult to maintain (especially when it comes to trustedPackages if we have a lot of classes to serialize coming from different packages). Or implement a custom ClassMapper, which also adds a bit of work.
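A rough sketch of that second workaround, assuming the stock DefaultClassMapper (the package names are illustrative):
// With a ClassMapper set, fromClass(objectToConvert.getClass(), ...) is used
// (see the quoted converter code above), bypassing genericType entirely.
Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
DefaultClassMapper classMapper = new DefaultClassMapper();
classMapper.setTrustedPackages("com.example.requests", "com.example.responses");
converter.setClassMapper(classMapper);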
Since it used to work fine prior to version 2.1, I'm not sure it's absolutely necessary to reimplement ClassMapper.
But I can't find any simple way to get genericType set to the concrete type.
I have a backend system that we access from our own applications through a third-party Java API. I can access the system as a normal user along with other users, but I do not have godly powers over it.
Hence, to simplify testing, I would like to run a real session and record the API calls, and persist them (preferably as editable code), so we can do dry test runs later with the API calls just returning the corresponding responses from the recording session - and this is the important part - without needing to talk to the above-mentioned backend system.
So if my application contains a line of the form:
Object b = callBackend(a);
I would like the framework to first capture that callBackend() returned b given the argument a, and then when I do the dry run at any later time say "hey, given a this call should return b". The values of a and b will be the same (if not, we will rerun the recording step).
I can override the class providing the API so all the method calls to capture will go through my code (i.e. byte code instrumentation to alter behavior of classes outside my control is not necessary).
What framework should I look into to do this?
EDIT: Please note that bounty hunters should provide actual code demonstrating the behavior I look for.
Actually, you can build such a framework or template by using the proxy pattern. Here I explain how you can do it using the dynamic proxy pattern. The idea is to:
Write a proxy manager to get recorder and replayer proxies of the API on demand!
Write a wrapper class to store your collected information, and implement the hashCode and equals methods of that wrapper class for efficient lookup in a Map-like data structure.
And finally use the recorder proxy to record and the replayer proxy for replaying purposes.
How the recorder works:
invokes the real API
collects the invocation information
persists the data in the expected persistence context
How the replayer works:
collects the method information (method name, parameters)
if the collected information matches previously recorded information, returns the previously recorded return value
if no recorded information matches, invokes the real API and persists the newly collected information (as you wanted)
Now, let's look at the implementation. If your API is MyApi, like below:
public interface MyApi {
public String getMySpouse(String myName);
public int getMyAge(String myName);
...
}
Now we will record and replay the invocation of public String getMySpouse(String myName). To do that, we can use a class to store the invocation information, like below:
public class RecordedInformation {
    private String methodName;
    private Object[] args;
    private Object returnValue;

    public String getMethodName() {
        return methodName;
    }

    public void setMethodName(String methodName) {
        this.methodName = methodName;
    }

    public Object[] getArgs() {
        return args;
    }

    public void setArgs(Object[] args) {
        this.args = args;
    }

    public Object getReturnValue() {
        return returnValue;
    }

    public void setReturnValue(Object returnValue) {
        this.returnValue = returnValue;
    }

    @Override
    public int hashCode() {
        return super.hashCode(); //change your implementation as you like!
    }

    @Override
    public boolean equals(Object obj) {
        return super.equals(obj); //change your implementation as you like!
    }
}
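For the replay lookup to work, those placeholder hashCode/equals implementations need to be based on the recorded data; a minimal sketch keyed on the method name and arguments (using java.util.Arrays):
@Override
public int hashCode() {
    // combine the method name with a deep hash of the argument array
    return 31 * methodName.hashCode() + Arrays.deepHashCode(args);
}

@Override
public boolean equals(Object obj) {
    if (this == obj) return true;
    if (!(obj instanceof RecordedInformation)) return false;
    RecordedInformation other = (RecordedInformation) obj;
    return methodName.equals(other.methodName) && Arrays.deepEquals(args, other.args);
}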
Now here comes the main part: the RecordReplyManager. This RecordReplyManager gives you a proxy object of your API, depending on your need for recording or replaying.
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class RecordReplyManager implements java.lang.reflect.InvocationHandler {

    private Object objOfApi;
    private boolean isForRecording;

    public static Object newInstance(Object obj, boolean isForRecording) {
        return java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new RecordReplyManager(obj, isForRecording));
    }

    private RecordReplyManager(Object obj, boolean isForRecording) {
        this.objOfApi = obj;
        this.isForRecording = isForRecording;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        Object result = null; // set by the recording or replaying branch
        if (isForRecording) {
            try {
                System.out.println("recording...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                System.out.println();
                result = method.invoke(objOfApi, args);
                System.out.println("result: " + result);
                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);
                recordedInformation.setReturnValue(result);
                //persist your information
            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        } else {
            try {
                System.out.println("replaying...");
                System.out.println("method name: " + method.getName());
                System.out.print("method arguments:");
                for (Object arg : args) {
                    System.out.print(" " + arg);
                }
                RecordedInformation recordedInformation = new RecordedInformation();
                recordedInformation.setMethodName(method.getName());
                recordedInformation.setArgs(args);
                //if your invocation information (this RecordedInformation) is found in the previously collected map, then return the returnValue from that RecordedInformation.
                //if a corresponding RecordedInformation does not exist, then invoke the real method (like in the recording step), wrap the collected information into a RecordedInformation and persist it as you like!
            } catch (InvocationTargetException e) {
                throw e.getTargetException();
            } catch (Exception e) {
                throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
            }
            return result;
        }
    }
}
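To make the replay branch concrete, the lookup hinted at by those comments could look like this (a sketch, assuming a Map<RecordedInformation, Object> named store that was loaded from the persisted recording session):
// in place of the two comments in the replaying branch:
Object recorded = store.get(recordedInformation);
if (recorded != null) {
    result = recorded; // replay the previously captured return value
} else {
    // no recording found: fall back to the real API and persist the new invocation
    result = method.invoke(objOfApi, args);
    recordedInformation.setReturnValue(result);
    store.put(recordedInformation, result);
}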
If you want to record the method invocations, all you need is to get an API proxy like below:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithRecorder = (MyApi) RecordReplyManager.newInstance(realApi, true); // true for recording
myApiWithRecorder.getMySpouse("richard"); // to record getMySpouse
myApiWithRecorder.getMyAge("parker"); // to record getMyAge
...
And to replay, all you need is:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithReplayer = (MyApi) RecordReplyManager.newInstance(realApi, false); // false for replaying
myApiWithReplayer.getMySpouse("richard"); // to replay getMySpouse
myApiWithReplayer.getMyAge("parker"); // to replay getMyAge
...
And you are done!
Edit:
The basic steps of the recorder and replayer can be done in the above-mentioned way. Now it's up to you how you want to use or perform those steps. You can do whatever you want and whatever you like in the recorder and replayer code blocks, and just choose your implementation!
I should prefix this by saying I share some of the concerns in Yves Martin's answer: that such a system may prove frustrating to work with and ultimately less helpful than it would seem at first blush.
That said, from a technical standpoint, this is an interesting problem, and I couldn't not take a go at it. I put together a gist to log method calls in a fairly general way. The CallLoggingProxy class defined there allows usage such as the following.
Calendar original = CallLoggingProxy.create(Calendar.class, Calendar.getInstance());
original.getTimeInMillis(); // 1368311282470
CallLoggingProxy.ReplayInfo replayInfo = CallLoggingProxy.getReplayInfo(original);
// Persist the replay info to disk, serialize to a DB, whatever floats your boat.
// Come back and load it up later...
Calendar replay = CallLoggingProxy.replay(Calendar.class, replayInfo);
replay.getTimeInMillis(); // 1368311282470
You could imagine wrapping your API object with CallLoggingProxy.create prior to passing it into your testing methods, capturing the data afterwards, and persisting it using whatever your favorite serialization system happens to be. Later, when you want to run your tests, you can load the data back up, create a new instance based on the data with CallLoggingProxy.replay, and pass that into your methods instead.
The CallLoggingProxy is written using Javassist, as Java's native Proxy is limited to working against interfaces. This should cover the general use case, but there are a few limitations to keep in mind:
Classes declared final can't be proxied by this method. (Not easily fixable; this is a system limitation)
The gist assumes the same input to a method will always produce the same output. (More easily fixable; the ReplayInfo would need to keep track of sequences of calls for each input instead of single input/output pairs.)
The gist is not even remotely threadsafe (Fairly easily fixable; just requires a little thought and effort)
Obviously the gist is simply a proof of concept, so it's also not been very thoroughly tested, but I believe the general principle is sound. It's also possible there's a more fully baked framework out there to achieve this sort of goal, but if such a thing does exist, I'm not aware of it.
If you do decide to continue with the replay approach, then hopefully this will be enough to give you a possible direction to work in.
I had the same needs some months ago for non-regression testing when planning a heavy technical refactoring of a large application and... I found nothing available as a framework.
In fact, replaying may be particularly difficult and may only work in a specific context - no (or few) applications of ordinary complexity can really be considered stateless. It is a common problem when testing persistence code with a relational database. To be relevant, the complete initial state of the system must be restored, and each replay step must impact the global state the same way. It becomes a challenge when the system state is distributed across pieces like databases, files, memory... Guess what happens if a timestamp taken from the system clock is used somewhere!
So a more practical option is to only record... and then do a clever comparison on subsequent runs.
Depending on the number of runs you plan, a human-driven session on the application may be enough, or you may have to invest in an automated scenario with a robot driving your application's user interface.
First, to record: you can use dynamic proxy interfaces or aspect programming to intercept method calls and capture state before and after invocation. That may mean: dumping the relevant database tables, copying some files, serializing Java objects in a text format like XML.
Then compare this reference capture with a new run. This comparison should be tuned to exclude any irrelevant elements from each piece of state, like row identifiers, timestamps, file names... so you only compare data where your backend's added value shines.
Finally, nothing really standard, and often a few specific scripts and bits of code are enough to achieve the aim: detect as many errors as possible and try to prevent unexpected side-effects.
This can be done with AOP, aspect-oriented programming. It allows you to intercept method calls through byte code manipulation. Search around a bit for examples.
In one case this can do the recording, in the other the replaying.
Pointers: wikipedia, AspectJ, Spring AOP.
Unfortunately this moves a bit outside the Java syntax, and a simple example is better sought elsewhere, with explanation.
Maybe combine it with unit tests / some mocking test framework for offline testing with recorded data.
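Still, a minimal Spring AOP sketch of the recording side might look like this (the pointcut and type names are assumptions, not from any real project):
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class RecordingAspect {

    // intercept every call on the (hypothetical) backend API facade
    @Around("execution(* com.example.backend.BackendApi.*(..))")
    public Object record(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed(); // invoke the real backend
        // persist pjp.getSignature().getName(), pjp.getArgs() and result here;
        // a replaying aspect would return the stored result without proceeding
        return result;
    }
}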
You could look into Mockito.
Example:
import static org.mockito.Mockito.*;

//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
when(mockedList.get(1)).thenThrow(new RuntimeException());
//following prints "first"
System.out.println(mockedList.get(0));
//following throws runtime exception
System.out.println(mockedList.get(1));
//following prints "null" because get(999) was not stubbed
System.out.println(mockedList.get(999));
Afterwards you can replay each test as many times as you like and it will return the data that you put in.
// pseudocode
class LogMethod {
    List<String> parameters;
    String method;

    void addCallTo(String method, List<String> params) {
        this.method = method;
        this.parameters = params;
    }
}
Have a list of LogMethods and call new LogMethod().addCallTo() before every call in your test method.
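For instance, the bookkeeping could look like this (a sketch; the log list and the stringified argument are my assumptions, with java.util imports):
List<LogMethod> callLog = new ArrayList<>();

LogMethod entry = new LogMethod();
entry.addCallTo("callBackend", Collections.singletonList(String.valueOf(a)));
callLog.add(entry);

Object b = callBackend(a); // the call being logged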
The idea of playing back the API calls sounds like a use case for the event sourcing pattern. Martin Fowler has a good article on it here. This is a nice pattern that records events as a sequence of objects which are then stored, you can then replay the sequence of events as required.
There is an implementation of this pattern using Akka called Eventsourced, which may help you build the type of system you require.
I had a similar problem some years ago. None of the above solutions would have worked for methods that are not pure functions (side-effect free). The major tasks are, in my opinion:
how to extract a snapshot of the recorded object(s) (not only restricted to objects implementing Serializable)
how to generate test code of a serialized representation in a readable way (not only restricted to beans, primitives and collections)
So I had to go my own way - with testrecorder.
For example, given:
ResultObject b = callBackend(a);
...
ResultObject callBackend(SourceObject source) {
...
}
you will only have to annotate the method like this:
@Recorded
ResultObject callBackend(SourceObject source) {
...
}
and start your application (the one that should be recorded) with the testrecorder agent. Testrecorder will manage all tasks for you, such as:
serializing arguments, results, state of this, exceptions (complete object graph!)
finding a readable representation for object construction and object matching
generating a test from the serialized data
you can extend recordings to global variables, input and output with annotations
An example for the test will look like this:
void testCallBackend() {
//arrange
SourceObject sourceObject1 = new SourceObject();
sourceObject1.setState(...); // testrecorder can use setters but is not limited to them
... // setting up backend
... // setting up globals, mocking inputs
//act
ResultObject resultObject1 = backend.callBackend(sourceObject1);
//assert
assertThat(resultObject1, new GenericMatcher() {
... // property matchers
}.matching(ResultObject.class));
... // assertions on backend and sourceObject1 for potential side effects
... // assertions on outputs and globals
}
If I understood your question correctly, you should try db4o.
You would store the objects with db4o and restore them later for mocks and JUnit tests.
In a GWT app I present items that can be edited by users. Loading and saving the items is performed using the GWT RequestFactory. What I now want to achieve is that if two users concurrently edit an item, the user who saves first wins, in the fashion of optimistic concurrency control. Meaning that when the second user saves his changes, the RequestFactory backend recognizes that the version or presence of the item stored in the backend has changed since it was transferred to the client, and the RequestFactory/backend then somehow prevents the item from being updated/saved.
I tried to implement this in the service method that is used to save the items, but this does not work, because RequestFactory hands in the items just retrieved from the backend with the user's changes applied, meaning the versions of these items are the current versions from the backend and a comparison is pointless.
Are there any hooks in the RequestFactory processing I could leverage to achieve the requested behaviour? Any other ideas? Or do I have to use GWT-RPC instead...
No: http://code.google.com/p/google-web-toolkit/issues/detail?id=6046
Until the proposed API is implemented (EntityLocator, in comment #1, but it's not clear to me how the version info could be reconstructed from its serialized form), you'll have to somehow send the version back to the server.
As I said in the issue, this cannot be done by simply making the version property available in the proxy and setting it. But you could add another property: getting it would always return null (or a similar nonexistent value), so that setting it on the client side to the value of the "true" version property would always produce a change, which guarantees the value will be sent to the server as part of the "property diff". On the server side, you could handle things either in the setter (when RequestFactory applies the "property diff" and calls the setter, if the value is different from the "true" version, throw an exception) or in the service methods (compare the version sent from the client –which you'd get from a different getter than the one mapped on the client, as that one must always return null– to the "true" version of the object, and raise an error if they don't match).
Something like:
@ProxyFor(MyEntity.class)
interface MyEntityProxy extends EntityProxy {
    String getServerVersion();
    String getClientVersion();
    void setClientVersion(String clientVersion);
    …
}

@Entity
class MyEntity {
    private String clientVersion;
    @Version private String serverVersion;

    public String getServerVersion() { return serverVersion; }
    public String getClientVersion() { return null; }
    public void setClientVersion(String clientVersion) {
        this.clientVersion = clientVersion;
    }

    public void checkVersion() {
        // a mismatch means the client read a stale version
        if (!Objects.equal(serverVersion, clientVersion)) {
            throw new OptimisticConcurrencyException();
        }
    }
}
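A service method could then trigger the check before saving (same caveat, untested; the method name is made up):
public void saveMyEntity(MyEntity entity) {
    entity.checkVersion(); // throws OptimisticConcurrencyException on a stale client version
    // ... persist the changes
}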
Note that I haven't tested this, this is pure theory.
We came up with another workaround for optimistic locking in our app. Since the version can't be passed within the proxy itself (as Thomas explained), we pass it via an HTTP GET parameter to the request factory.
On the client:
MyRequestFactory factory = GWT.create( MyRequestFactory.class );
RequestTransport transport = new DefaultRequestTransport() {
    @Override
    public String getRequestUrl() {
        return super.getRequestUrl() + "?version=" + getMyVersion();
    }
};
factory.initialize(new SimpleEventBus(), transport);
On the server we create a ServiceLayerDecorator and read the version from RequestFactoryServlet.getThreadLocalRequest():
public static class MyServiceLayerDecorator extends ServiceLayerDecorator {

    @Override
    public final <T> T loadDomainObject(final Class<T> clazz, final Object domainId) {
        HttpServletRequest threadLocalRequest = RequestFactoryServlet.getThreadLocalRequest();
        String clientVersion = threadLocalRequest.getParameter("version");
        T domainObject = super.loadDomainObject(clazz, domainId);
        String serverVersion = ((HasVersion) domainObject).getVersion();
        if (versionMismatch(serverVersion, clientVersion)) {
            report("Version error!");
        }
        return domainObject;
    }
}
The advantage is that loadDomainObject() is called before any changes are applied to the domain object by RF.
In our case we're only tracking one entity, so we're using a single version parameter, but the approach can be extended to multiple entities.
I have a servlet that invokes generic actions, passing in a form and an object (depending on what the action needs).
CommitmentServlet.java
CommitmentListDAO clDAO = new CommitmentListDAO();
CommitmentItemForm form = new CommitmentItemForm(clDAO);
CommitmentItem obj = new CommitmentItem();

actionMap.put(null, new ListAction(form));
actionMap.put("list", new ListAction(form));
actionMap.put("view", new ViewAction(form, obj));
actionMap.put("delete", new DeleteAction(form, obj));
actionMap.put("edit", new EditAction(form, obj));

ControllerAction action = (ControllerAction) actionMap.get(request.getParameter("method"));
action.service(request, response);
EditAction.java
public class EditAction implements ControllerAction {
    private Form form;
    private Object obj;

    public EditAction(Form form, Object obj) {
        this.form = form;
        this.obj = obj;
    }

    public void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        obj = form.edit(request);

        request.setAttribute("obj", obj);
        request.setAttribute("form", form);

        if (form.isSucces()) {
            RequestDispatcher view = request.getRequestDispatcher(successPage); // placeholder for the success view path
            view.forward(request, response);
        } else {
            RequestDispatcher view = request.getRequestDispatcher(failurePage); // placeholder for the failure view path
            view.forward(request, response);
        }
    }
}
The actual business logic is located in the form object passed in to the generic action.
The generic actions allow me to quickly get CRUD controller functionality for any new object. I just have to code the business logic form, such as here:
CommitmentItemForm.java
public Object edit(HttpServletRequest request) {
CommitmentItem commitmentItem = null;
STKUser authenticatedUser = (STKUser) request.getSession().getAttribute("STKUserSession");
String ownedByBadge = null;
List deptSupervisorList = null;
try {
deptSupervisorList = STKUserDAO.getList(authenticatedUser.getDepartment()); //<--- Static call is it OK??
commitmentItem = CommitmentListDAO.retreive(request.getParameter("commitment_id"), authenticatedUser);
ownedByBadge = commitmentItem.getOwned_by();
}
catch (DAOException e) {
setError(FORM_RESULTS, e.getMessage());
}
catch (ValidatorException e) {
// ValidatorExceptions are thrown when the DAO can not find a record
setError(FORM_RESULTS, e.getMessage());
LOGGER.log(Level.INFO, e.getMessage(), authenticatedUser);
}
if (ownedByBadge != null) {
    if (!ownedByBadge.equals(authenticatedUser.getBadge()) && !ownedByBadge.equals(authenticatedUser.getAtaBadge())) {
        setError(FORM_RESULTS, "You are not authorized to edit this data.");
        LOGGER.log(Level.INFO, "Error - You are not authorized to edit this data '" + commitmentItem.getCommitment_id() + "'", authenticatedUser);
    }
}
request.setAttribute("deptSupervisorList", deptSupervisorList); // <--- Is this acceptable to do???
return commitmentItem;
}
1) Is my approach of setting a request attribute and returning an object from the method unorthodox?
2) I'm making a static call to get the deptSupervisorList. Is this asking for trouble?
3) Do my servlet, generic actions, and business forms seem like an acceptable approach for developing a Java web application without using a framework?
EDIT:
What is the difference?
Static
deptSupervisorList = STKUserDAO.getList(authenticatedUser.getDepartment());
vs
non-static
STKUserDAO userDAO = new STKUserDAO();
deptSupervisorList = userDAO.getList(authenticatedUser.getDepartment());
public static List getList(String dept) throws DAOException {
...
}
First some caveats:
This is subjective
I agree with SidCool that the answer is to take a look at some of the existing web application frameworks out there. If anything, just to find out how they do it.
I'm a massive fan of dependency injection
To answer your questions:
It's not great to pass data around in request attributes because: it's not type safe; it's a bit of an invisible bag of things -- always better if you can see output objects in the type signature; at some point you'll find yourself wanting to store two things in the request attributes under the same name
Dependency injection is the way of the future. Making a static call is bad because: you've now tightly coupled the two objects making reuse harder as well as making it harder to test
I'd definitely have a look at some other frameworks here. Most of them tend to have a single dispatch servlet, I think you'll end up writing a lot of very similar looking servlets. A lot of frameworks will also use reflection to try and get the transformation between request and POJO done as early and as easily as possible.
Other:
All of your actions are keyed off parameters, i.e. ?method=[list,view,delete,edit]. Often it is preferable to use routes (e.g. index.html is usually used for 'list').
To answer your feedback / questions from the comments:
Running on an older version of Java
Wow, that sucks. There are frameworks that run on Java 1.4, though. Spring MVC would be my recommendation but there are more here. That said, the reason that I suggested looking at other frameworks wasn't just to use them but more to be inspired by them. Writing your own Web Application Framework is practically a rite of passage and can be pretty fun. Writing it in such a constrained environment just adds to the challenge.
What I'd suggest:
Try out a recent Java framework or even a non Java one (e.g. Ruby on Rails), just to see what's possible
When writing your own framework, just use one servlet and dispatch down to your various 'controllers'. The reason for this is that servlets are not great at letting you put your whole application together (what Spring MVC does is load up the 'application' using a ContextListener, and then servlets and filters look up the 'application' from the ServletContext)
The tight coupling of static
Tight coupling is when two objects can't be used without each other, ever. Why is this bad, you ask? Because you can never reuse the code for something else (say, if you decided to load some data from a file, introduced a caching layer, used it in a different project etc.). Most importantly, some would say, is that it is difficult to test it. This is because you can't just replace the object that you statically call with another one. Interfaces are usually used to decouple objects but realistically, you can do it just by setting the object in via dependency injection (which is a complicated way of saying: put it in the constructor or as a setter).
OO and being a civil engineer
It's all good. Some of the best programmers I know didn't start out that way. For me, using the Dependency Injection pattern is an awesome way to write 'good' code by default. Note: if you look at Dependency Injection, you don't need a framework for it. You just need to construct all of your objects in one place and all of your objects should get all of their dependencies either in the constructor or in a setter. No static methods or singletons allowed.
What's the difference
An alternative 'what's the difference' that better illustrates what I mean would look like this:
// code in your application builder
// assuming an interface called UserDAO
UserDAO userDao = new STKUserDAO();
CommitmentItemForm form = new CommitmentItemForm(userDao);
public class CommitmentItemForm {
private final UserDAO userDao;
public CommitmentItemForm(UserDAO userDao) { this.userDao = userDao; }
public Object edit(HttpServletRequest request) {
...
deptSupervisorList = userDao.getList(authenticatedUser.getDepartment());
...
}
}
vs
public class CommitmentItemForm {
public CommitmentItemForm() {}
public Object edit(HttpServletRequest request) {
...
deptSupervisorList = STKUserDAO.getList(authenticatedUser.getDepartment());
...
}
}
The static method definitely looks like less work, so why is it so bad? Essentially, it's because in the static version you can never look up the deptSupervisorList from anything but an STKUserDAO. In the non-static version, you can supply any implementation of the UserDAO interface. This means that you could use the same CommitmentItemForm code regardless of whether:
You were doing it in a test and creating a mock version of UserDAO that threw an exception every time, so that you could test your error handling
You found out that you needed to retrieve your departments list from a JSON HTTP REST web service, or from a file
It's also immediately obvious from the signature of CommitmentItemForm that it needs a UserDAO to function (because it's required in the constructor).
This is one of those little things: if you do it with all of your code, you will find that your code is not only more flexible but also more testable and more reusable, and the parts you suddenly find you need to change in the future are better isolated.