I have an RMI class that accepts remote calls from clients.
This class uses Hibernate to load entities and perform some business logic, in general read-only.
Currently most of the remote method bodies look like this:
try {
HibernateUtil.currentSession().beginTransaction();
//load entities, do some business logic...
} catch (HibernateException e) {
logger.error("Hibernate problem...", e);
throw e;
} catch (other exceptions...) {
logger.error("other problem happened...", e);
throw e;
} finally {
HibernateUtil.currentSession().getTransaction().rollback(); //this because it's read-only, we make sure we don't commit anything
HibernateUtil.currentSession().close();
}
I would like to know if there is some pattern that I could (relatively easily) implement in order to automatically have this "try to open session/catch hibernate exception/finally close hibernate resources" behavior without having to code it in every method.
Something similar to the "open session in view" pattern that is used in webapps, but that could be applied to remote RMI method calls instead of HTTP requests.
Ideally I would like to be able to still call the methods directly, rather than using reflection and passing method names as strings.
I would suggest using the Spring + Hibernate stack. It saves a lot of the repetitive code which I guess you are looking to avoid. Please check this link. It's actually an example of a web application, but the same can be used for a standalone application as well.
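For a rough idea only, here is a sketch of what one of the remote methods could look like with Spring managing the session and transaction (this assumes a configured SessionFactory and HibernateTransactionManager; the class and method names are illustrative, not from your code):
@Service
public class ReadOnlyTicketingService {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional(readOnly = true) // Spring opens the session and transaction, and cleans both up when the method returns
    public MyDTO doSomething(Client client) {
        // load entities via sessionFactory.getCurrentSession() and run the business logic here
        return new MyDTO();
    }
}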
All I wanted was a "quick and clean" solution, if possible, so no new framework for now (I might use the Spring + Hibernate stack later on though).
So I ended up using a "quick-and-not-so-dirty" solution involving a variant of the "Command" pattern, where the hibernate calls are encapsulated inside anonymous inner classes implementing my generic Command interface, and the command executer wraps the call with the Hibernate session and exception handling. The generic bit is in order to have different return value types for the execute method.
I am not 100% satisfied with this solution since it still implies some boilerplate code wrapped around my business logic (I am especially unhappy about the explicit casting needed for the return value) and it makes it slightly more complicated to understand and debug.
However the gain in repetitive code is still significant (from about 10 lines to 3-4 lines per method), and more importantly the Hibernate handling logic is concentrated in one class, so it can be changed easily there if needed and it's less error-prone.
Here is some of the code:
The Command interface:
public interface HibernateCommand<T> {
public T execute(Object... args) throws Exception;
}
The Executer:
public class HibernateCommandExecuter {
private static final Logger logger = Logger.getLogger(HibernateCommandExecuter.class);
public static Object executeCommand(HibernateCommand<?> command, boolean commit, Object... args) throws RemoteException{
try {
HibernateUtil.currentSession().beginTransaction();
return command.execute(args);
} catch (HibernateException e) {
logger.error("Hibernate problem : ", e);
throw new RemoteException(e.getMessage());
} catch (Exception e) {
throw new RemoteException(e.getMessage(), e);
} finally {
try {
if (commit) {
HibernateUtil.currentSession().getTransaction().commit();
} else {
HibernateUtil.currentSession().getTransaction().rollback();
}
HibernateUtil.currentSession().close();
} catch (HibernateException e) {
logger.error("Error while trying to clean up Hibernate context :", e);
}
}
}
}
Sample use in a remotely called method (but it could also be used locally):
@Override
public MyDTO doSomethingRemotely(final Client client) throws RemoteException {
return (MyDTO) HibernateCommandExecuter.executeCommand(new HibernateCommand<MyDTO>() {
public MyDTO execute(Object... args) throws Exception {
MyDTO dto = someService.someBusinessmethod(client);
return dto;
}
},false);
}
Note how the client argument is declared final, so it can be referenced inside the inner class. If it cannot be declared final, it could be passed as a parameter to the executeCommand method.
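For example (just an illustration reusing the sample above), the argument could travel through the executer's varargs instead of being captured, so client would no longer need to be final:
@Override
public MyDTO doSomethingRemotely(Client client) throws RemoteException {
    return (MyDTO) HibernateCommandExecuter.executeCommand(new HibernateCommand<MyDTO>() {
        public MyDTO execute(Object... args) throws Exception {
            return someService.someBusinessmethod((Client) args[0]); // the argument arrives via the varargs
        }
    }, false, client);
}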
Related
Suppose I've got an endpoint in Dropwizard, say
@GET
public Response foo() { throw new NullPointerException(); }
When I hit this endpoint it logs the exception and everything, which is great! I love it. What I love less is that it returns a big status object to the user with status: ERROR (which is fine) as well as a gigantic stack trace, which I'm less excited about.
Obviously it's best to catch and deal with exceptions on my own, but from time to time they're going to slip through. Writing a try catch block around the entire resource every time is fine, but (a) it's cumbersome, and (b) I always prefer automated solutions to "you have to remember" solutions.
So what I would like is something that does the following:
Logs the stack trace (I use slf4j but I assume it would work for whatever)
Returns a general purpose error response, which does not expose potentially privileged information about my server!
I feel like there must be a built-in way to do this -- it already handles exceptions in a relatively nice way -- but searching the docs hasn't turned up anything. Is there a good solution for this?
As alluded to by reek in the comments, the answer is an ExceptionMapper. You'll need a class like this:
@Provider
public class RuntimeExceptionMapper implements ExceptionMapper<RuntimeException> {
@Override
public Response toResponse(RuntimeException runtime) {
// ...
}
}
You can do whatever logging etc. you like in the toResponse method, and the return value is what is actually sent back to the requester. This way you have complete control, and should set up sane defaults -- remember this is for errors that slip through, not for errors you actually expect to see! This is also a good time to set up different behaviors depending on what kind of exceptions you're getting.
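As a sketch only (assuming an slf4j Logger field named logger; the response body here is just an example), toResponse could log the stack trace and return a generic payload:
@Override
public Response toResponse(RuntimeException runtime) {
    logger.error("Unhandled exception", runtime); // the full stack trace stays in the server log
    return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
            .type(MediaType.APPLICATION_JSON)
            .entity("{\"status\": \"ERROR\", \"message\": \"Internal server error\"}") // no stack trace leaks to clients
            .build();
}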
To actually make this do anything, simply insert the following line (or similar) in the run method of your main dropwizard application:
environment.jersey().register(new RuntimeExceptionMapper());
where environment is the Environment parameter to the Application's run method. Now when you have an uncaught RuntimeException somewhere, this will trigger, rather than whatever dropwizard was doing before.
NB: this is still not an excuse not to catch and deal with your exceptions carefully!
Add the following to your yaml file. Note that it will remove all the default exception mappers that dropwizard adds.
server:
registerDefaultExceptionMappers: false
Write a custom exception mapper as below:
public class CustomExceptionMapper implements ExceptionMapper<RuntimeException> {
@Override
public Response toResponse(RuntimeException runtime) {
// ...
}
}
Then register the exception mapper in jersey:
environment.jersey().register(new CustomExceptionMapper());
I already mentioned this in the comments, but thought I would give it a try with a use case.
I would suggest you start differentiating the exceptions that you throw. Use custom exceptions for the failures you know about and throw those with proper logging. At the same time, a RuntimeException should actually be fixed. Anyhow, if you don't want to display a stack trace to the end user, you can catch a generic exception, log the details and customize the Response and entity accordingly.
You can define a
public class ErrorResponse {
private int code;
private String message;
public ErrorResponse() {
}
public ErrorResponse(int code, String message) {
this.code = code;
this.message = message;
}
... setters and getters
}
and then within your resource code you can modify the method like this:
@GET
public Response foo() {
try {
...
return Response.status(HttpStatus.SC_OK).entity(response).build();
} catch (CustomBadRequestException ce) {
log.error("Bad request", ce);
return Response.status(HttpStatus.SC_BAD_REQUEST).entity(new ErrorResponse(HttpStatus.SC_BAD_REQUEST, ce.getMessage())).build();
} catch (Exception e) {
log.error("Request failed", e);
return Response.status(HttpStatus.SC_INTERNAL_SERVER_ERROR).entity(new ErrorResponse(HttpStatus.SC_INTERNAL_SERVER_ERROR, e.getMessage())).build();
}
}
This article details Checked and Unchecked Exceptions implementation for Jersey with customized ExceptionMapper:
https://www.codepedia.org/ama/error-handling-in-rest-api-with-jersey/
Official Dropwizard documentation also covers a simpler approach, just catching using WebApplicationException:
@GET
@Path("/{collection}")
public Saying reduceCols(#PathParam("collection") String collection) {
if (!collectionMap.containsKey(collection)) {
final String msg = String.format("Collection %s does not exist", collection);
throw new WebApplicationException(msg, Status.NOT_FOUND);
}
// ...
}
https://www.dropwizard.io/en/stable/manual/core.html#responses
It worked for me by simply registering the custom exception mapper created in the run method of the main class.
environment.jersey().register(new CustomExceptionMapper());
where CustomExceptionMapper implements the ExceptionMapper interface like this
public class CustomExceptionMapper implements ExceptionMapper<Exception>
I'm implementing simultaneous writes into a database and Oracle Coherence 3.7.1 and want to make the whole operation transactional.
I would like to have a critique on my approach.
Currently, I've created façade class like this:
public class Facade {
@EJB
private JdbcDao jdbcDao;
@EJB
private CoherenceDao coherenceDao;
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
private void updateMethod(List<DomainObject> list) {
jdbcDao.update(list);
coherenceDao.update(list);
}
}
I guess the JDBC DAO would not need to do anything specific about transactions; if something happens, Hibernate would throw some kind of RuntimeException.
public class JdbcDao {
private void update(List<DomainObject> list) {
// I presume there is nothing specific I have to do about transactions.
// if I don't catch any exceptions it would work just fine
}
}
Here is interesting part. How do I make Coherence support transactions?
I guess I should open a Coherence transaction inside the update() method, and on any exception inside it I should throw a RuntimeException myself?
I'm currently thinking of something like this:
public class CoherenceDao {
private void update(List<DomainObject> list) {
// how should I make it transactional?
// I guess it should somehow throw RuntimeException?
TransactionMap mapTx = CacheFactory.getLocalTransaction(cache);
mapTx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
mapTx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
// gather the cache(s) into a Collection
Collection txnCollection = Collections.singleton(mapTx);
try {
mapTx.begin();
// put into mapTx here
CacheFactory.commitTransactionCollection(txnCollection, 1);
} catch (Throwable t) {
CacheFactory.rollbackTransactionCollection(txnCollection);
throw new RuntimeException(t);
}
}
}
Would this approach work as expected?
I know that you asked this question a year ago and my answer now might not be of as much value to you after a year, but I'll still give it a try.
What you are trying to do works as long as there is no RuntimeException after the call to coherenceDao.update(list). You might be assuming that you don't have any lines of code after that line, but that's not the whole story.
As an example: you might have some deferrable constraints in your database. Those constraints will be applied when the container tries to commit the transaction, which happens on exit of updateMethod(List<DomainObject> list) and therefore after your call to coherenceDao.update(list). Another case would be a connection timeout to the database after coherenceDao.update(list) has executed but still before the transaction commit.
In both cases the update method of your CoherenceDao class executes safe and sound and its Coherence transaction is no longer rolled back, which will put your cache in an inconsistent state: those DB or Hibernate exceptions will surface as a RuntimeException and cause your container-managed transaction to be rolled back, while the Coherence changes stay committed!
I have a backend system which we use a third-party Java API to access from our own applications. I can access the system as a normal user along with other users, but I do not have godly powers over it.
Hence to simplify testing I would like to run a real session and record the API calls, and persist them (preferably as editable code), so we can do dry test runs later with API calls just returning the corresponding response from the recording session - and this is the important part - without needing to talk to the above mentioned backend system.
So if my application contains a line of the form:
Object b = callBackend(a);
I would like the framework to first capture that callBackend() returned b given the argument a, and then when I do the dry run at any later time say "hey, given a this call should return b". The values of a and b will be the same (if not, we will rerun the recording step).
I can override the class providing the API so all the method calls to capture will go through my code (i.e. byte code instrumentation to alter behavior of classes outside my control is not necessary).
What framework should I look into to do this?
EDIT: Please note that bounty hunters should provide actual code demonstrating the behavior I look for.
Actually you can build such a framework or template yourself using the proxy pattern. Here I explain how you can do it with the dynamic proxy pattern. The idea is to:
Write a proxy manager to get recorder and replayer proxies of the API on demand.
Write a wrapper class to store your collected information, and implement the hashCode and equals methods of that wrapper class for efficient lookup in a Map-like data structure.
And finally use the recorder proxy to record and the replayer proxy for replaying.
How recorder works:
invokes the real API
collects the invocation information
persists data in expected persistence context
How replayer works:
Collects the method information (method name, parameters)
If the collected information matches previously recorded information, then returns the previously recorded return value.
If no matching recording is found, invokes the real method and persists the newly collected information (as you wanted).
Now, let's look at the implementation. If your API is MyApi like below:
public interface MyApi {
public String getMySpouse(String myName);
public int getMyAge(String myName);
...
}
Now we will record and replay the invocation of public String getMySpouse(String myName). To do that, we can use a class to store the invocation information, like below:
public class RecordedInformation {
private String methodName;
private Object[] args;
private Object returnValue;
public String getMethodName() {
return methodName;
}
public void setMethodName(String methodName) {
this.methodName = methodName;
}
public Object[] getArgs() {
return args;
}
public void setArgs(Object[] args) {
this.args = args;
}
public Object getReturnValue() {
return returnValue;
}
public void setReturnValue(Object returnValue) {
this.returnValue = returnValue;
}
@Override
public int hashCode() {
return super.hashCode(); //change your implementation as you like!
}
@Override
public boolean equals(Object obj) {
return super.equals(obj); //change your implementation as you like!
}
}
Now here comes the main part, the RecordReplyManager. The RecordReplyManager gives you a proxy object of your API, depending on whether you need recording or replaying.
public class RecordReplyManager implements java.lang.reflect.InvocationHandler {
private Object objOfApi;
private boolean isForRecording;
public static Object newInstance(Object obj, boolean isForRecording) {
return java.lang.reflect.Proxy.newProxyInstance(
obj.getClass().getClassLoader(),
obj.getClass().getInterfaces(),
new RecordReplyManager(obj, isForRecording));
}
private RecordReplyManager(Object obj, boolean isForRecording) {
this.objOfApi = obj;
this.isForRecording = isForRecording;
}
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
Object result;
if (isForRecording) {
try {
System.out.println("recording...");
System.out.println("method name: " + method.getName());
System.out.print("method arguments:");
for (Object arg : args) {
System.out.print(" " + arg);
}
System.out.println();
result = method.invoke(objOfApi, args);
System.out.println("result: " + result);
RecordedInformation recordedInformation = new RecordedInformation();
recordedInformation.setMethodName(method.getName());
recordedInformation.setArgs(args);
recordedInformation.setReturnValue(result);
//persist your information
} catch (InvocationTargetException e) {
throw e.getTargetException();
} catch (Exception e) {
throw new RuntimeException("unexpected invocation exception: " +
e.getMessage());
} finally {
// do nothing
}
return result;
} else {
try {
System.out.println("replaying...");
System.out.println("method name: " + method.getName());
System.out.print("method arguments:");
for (Object arg : args) {
System.out.print(" " + arg);
}
RecordedInformation recordedInformation = new RecordedInformation();
recordedInformation.setMethodName(method.getName());
recordedInformation.setArgs(args);
//if your invocation information (this RecordedInformation) is found in the previously collected map, then return the returnValue from that RecordedInformation.
//if no corresponding RecordedInformation exists, then invoke the real method (like in the recording step), wrap the collected information into a RecordedInformation and persist it as you like!
//placeholder so the sketch compiles: replace this with the lookup in your persisted recordings
result = method.invoke(objOfApi, args);
} catch (InvocationTargetException e) {
throw e.getTargetException();
} catch (Exception e) {
throw new RuntimeException("unexpected invocation exception: " +
e.getMessage());
} finally {
// do nothing
}
return result;
}
}
}
If you want to record the method invocations, all you need is to get an API proxy like below:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithRecorder = (MyApi) RecordReplyManager.newInstance(realApi, true); // true for recording
myApiWithRecorder.getMySpouse("richard"); // to record getMySpouse
myApiWithRecorder.getMyAge("parker"); // to record getMyAge
...
And to replay, all you need is:
MyApi realApi = new RealApi(); // using new or whatever way get your service implementation (API implementation)
MyApi myApiWithReplayer = (MyApi) RecordReplyManager.newInstance(realApi, false); // false for replaying
myApiWithReplayer.getMySpouse("richard"); // to replay getMySpouse
myApiWithReplayer.getMyAge("parker"); // to replay getMyAge
...
And you are done!
Edit:
The basic steps of the recorder and replayer can be done in the above-mentioned way. Now it's up to you how you want to use or perform those steps. You can do whatever you like in the recorder and replayer code blocks; just choose your implementation!
I should prefix this by saying I share some of the concerns in Yves Martin's answer: that such a system may prove frustrating to work with and ultimately less helpful than it would seem at first blush.
That said, from a technical standpoint this is an interesting problem, and I couldn't not have a go at it. I put together a gist to log method calls in a fairly general way. The CallLoggingProxy class defined there allows usage such as the following.
Calendar original = CallLoggingProxy.create(Calendar.class, Calendar.getInstance());
original.getTimeInMillis(); // 1368311282470
CallLoggingProxy.ReplayInfo replayInfo = CallLoggingProxy.getReplayInfo(original);
// Persist the replay info to disk, serialize to a DB, whatever floats your boat.
// Come back and load it up later...
Calendar replay = CallLoggingProxy.replay(Calendar.class, replayInfo);
replay.getTimeInMillis(); // 1368311282470
You could imagine wrapping your API object with CallLoggingProxy.create prior to passing it into your testing methods, capturing the data afterwards, and persisting it using whatever your favorite serialization system happens to be. Later, when you want to run your tests, you can load the data back up, create a new instance based on the data with CallLoggingProxy.replay, and passing that into your methods instead.
The CallLoggingProxy is written using Javassist, as Java's native Proxy is limited to working against interfaces. This should cover the general use case, but there are a few limitations to keep in mind:
Classes declared final can't be proxied by this method. (Not easily fixable; this is a system limitation)
The gist assumes the same input to a method will always produce the same output. (More easily fixable; the ReplayInfo would need to keep track of sequences of calls for each input instead of single input/output pairs.)
The gist is not even remotely threadsafe (Fairly easily fixable; just requires a little thought and effort)
Obviously the gist is simply a proof of concept, so it's also not been very thoroughly tested, but I believe the general principle is sound. It's also possible there's a more fully baked framework out there to achieve this sort of goal, but if such a thing does exist, I'm not aware of it.
If you do decide to continue with the replay approach, then hopefully this will be enough to give you a possible direction to work in.
I had the same needs some months ago for non-regression testing when planning a heavy technical refactoring of a large application and... I have found nothing available as a framework.
In fact, replaying may be particularly difficult and may only work in a specific context - few (if any) applications of ordinary complexity can really be considered stateless. It is a common problem when testing persistence code with a relational database. To be relevant, the complete initial system state must be restored and each replay step must impact the global state in the same way. It becomes a challenge when the system state is distributed across pieces like databases, files, memory... Just imagine what happens if a timestamp taken from the system clock is used somewhere!
So a more practical option is to only record... and then do a clever comparison on subsequent runs.
Depending on the number of runs you plan, a human-driven session on the application may be enough, or you may have to invest in an automated scenario with a robot driving your application's user interface.
First, to record: you can use a dynamic proxy interface or aspect programming to intercept method calls and capture state before and after invocation. It may mean: dump the concerned database tables, copy some files, serialize Java objects in a text format like XML.
Then compare this reference capture with a new run. This comparison should be tuned to exclude any irrelevant elements from each piece of state, like row identifiers, timestamps, file names... so that you only compare data where your backend's added value shines.
Finally, nothing is really standard, and often a few specific scripts and a bit of code may be enough to achieve the aim: detect as many errors as possible and try to prevent unexpected side effects.
This can be done with AOP, aspect-oriented programming. It allows you to intercept method calls through bytecode manipulation. Do a bit of searching for examples.
In one case this can do recording, in the other replaying.
Pointers: wikipedia, AspectJ, Spring AOP.
Unfortunately this moves a bit outside plain Java syntax, and a simple example (with explanation) is better sought elsewhere.
Maybe combine this with unit tests / a mocking test framework for offline testing with the recorded data.
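For a rough idea only, a sketch of what such an intercepting aspect could look like in AspectJ/Spring AOP annotation style (the pointcut and class names are made up for illustration; persistence of the captured data is left as a comment):
@Aspect
public class RecordingAspect {

    @Around("execution(* com.example.backend.BackendApi.*(..))")
    public Object recordCall(ProceedingJoinPoint call) throws Throwable {
        Object result = call.proceed(); // invoke the real backend method
        // persist call.getSignature().getName(), call.getArgs() and result here for later replay
        return result;
    }
}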
You could look into Mockito.
Example:
//You can mock concrete classes, not only interfaces
LinkedList mockedList = mock(LinkedList.class);
//stubbing
when(mockedList.get(0)).thenReturn("first");
when(mockedList.get(1)).thenThrow(new RuntimeException());
//following prints "first"
System.out.println(mockedList.get(0));
//following throws runtime exception
System.out.println(mockedList.get(1));
//following prints "null" because get(999) was not stubbed
System.out.println(mockedList.get(999));
Afterwards you can replay each test as many times as you like, and it will return the data you put in.
// originally sketched as pseudocode; as plain Java:
class LogMethod {
    List<String> parameters;
    String method;

    void addCallTo(String method, List<String> params) {
        this.method = method;
        this.parameters = params;
    }
}
Have a list of LogMethods and call new LogMethod().addCallTo() before every call in your test method.
The idea of playing back the API calls sounds like a use case for the event sourcing pattern. Martin Fowler has a good article on it here. This is a nice pattern that records events as a sequence of objects which are then stored; you can then replay the sequence of events as required.
There is an implementation of this pattern using Akka called Eventsourced, which may help you build the type of system you require.
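In a very reduced sketch (this is just the shape of the pattern, not the Eventsourced API; the event class is made up for illustration):
// record each backend call as an event object during the real session
class BackendCallEvent {
    final String method;
    final Object[] args;
    final Object result;

    BackendCallEvent(String method, Object[] args, Object result) {
        this.method = method;
        this.args = args;
        this.result = result;
    }
}

List<BackendCallEvent> journal = new ArrayList<>();
journal.add(new BackendCallEvent("callBackend", new Object[] { a }, b));
// later, replay the journal in order and hand back each stored result instead of calling the backend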
I had a similar problem some years ago. None of the above solutions would have worked for methods that are not pure functions (side-effect free). The major tasks are, in my opinion:
how to extract a snapshot of the recorded object(s) (not only restricted to objects implementing Serializable)
how to generate test code from a serialized representation in a readable way (not only restricted to beans, primitives and collections)
So I had to go my own way - with testrecorder.
For example, given:
ResultObject b = callBackend(a);
...
ResultObject callBackend(SourceObject source) {
...
}
you will only have to annotate the method like this:
@Recorded
ResultObject callBackend(SourceObject source) {
...
}
and start your application (the one that should be recorded) with the testrecorder agent. Testrecorder will manage all tasks for you, such as:
serializing arguments, results, state of this, exceptions (complete object graph!)
finding a readable representation for object construction and object matching
generating a test from the serialized data
you can extend recordings to global variables, input and output with annotations
An example for the test will look like this:
void testCallBackend() {
//arrange
SourceObject sourceObject1 = new SourceObject();
sourceObject1.setState(...); // testrecorder can use setters but is not limited to them
... // setting up backend
... // setting up globals, mocking inputs
//act
ResultObject resultObject1 = backend.callBackend(sourceObject1);
//assert
assertThat(resultObject1, new GenericMatcher() {
... // property matchers
}.matching(ResultObject.class));
... // assertions on backend and sourceObject1 for potential side effects
... // assertions on outputs and globals
}
If I understood your question correctly, you should try db4o.
You store the objects with db4o and restore them later for mocks and JUnit tests.
So I have a try/finally block. I need to execute a number of methods in the finally block. However, each one of those methods can throw an exception. Is there a way to ensure that all these methods are called (or attempted) without nested finally blocks?
This is what I do right now, which is pretty ugly:
protected void verifyTable() throws IOException {
Configuration configuration = HBaseConfiguration.create();
HTable hTable = null;
try {
hTable = new HTable(configuration, segmentMatchTableName);
//...
//various business logic here
//...
} finally {
try {
try {
if(hTable!=null) {
hTable.close(); //This can throw an IOException
}
} finally {
try {
generalTableHelper.deleteTable(configuration, segmentMatchTableName); //This can throw an IOException
} finally {
try {
generalTableHelper.deleteTable(configuration, wordMatchTableName); //This can throw an IOException
} finally {
generalTableHelper.deleteTable(configuration, haplotypeTableName); //This can throw an IOException
}
}
}
} finally {
HConnectionManager.deleteConnection(configuration, true); //This can throw an IOException
}
}
}
Is there a more-elegant way to do this?
The standard (working) way to do resource management right in Java (the principle applies to other languages as well) is:
Resource resource = acquire(resource);
try {
use(resource);
} finally {
resource.release();
}
Or using the shortcut (with an extra bit of cleverness) in the current version of Java SE:
try (Resource resource = acquire(resource)) {
use(resource);
}
(As Joe K points out, you may need to wrap the resource to make it conform to the specific interface that the Java language depends upon.)
Two resources, and you just apply the idiom twice:
Resource resource = acquire(resource);
try {
SubResource sub = resource.acquire();
try {
use(sub);
} finally {
sub.release();
}
} finally {
resource.release();
}
And in Java SE 7:
try (
Resource resource = acquire(resource);
SubResource sub = resource.acquire()
) {
use(resource, sub);
}
The really great advantage of the new language feature is that hand-written resource handling was more often than not broken.
You might have more complicated exception handling. For instance, you don't want to throw low-level exceptions such as IOException through to the application proper - you probably want to wrap them in some subtype of RuntimeException. This can, with Java's typical verbosity, be factored out using the Execute Around idiom (see this excellent question). From Java SE 8, there will also be a shorter lambda syntax with somewhat different semantics.
with(new ResourceSubAction() { public void use(Resource resource, SubResource sub) {
... use resource, sub ...
}});
If this is Java 7, you could consider using the new try-with-resources construct. You may need to create some basic AutoCloseable wrappers for deleting the tables.
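A minimal sketch of such a wrapper, reusing the helper and field names from the question (GeneralTableHelper is an assumed type name for the generalTableHelper field):
class TableDeleter implements AutoCloseable {
    private final GeneralTableHelper helper;
    private final Configuration configuration;
    private final String tableName;

    TableDeleter(GeneralTableHelper helper, Configuration configuration, String tableName) {
        this.helper = helper;
        this.configuration = configuration;
        this.tableName = tableName;
    }

    @Override
    public void close() throws IOException {
        helper.deleteTable(configuration, tableName); // runs automatically when the try block exits
    }
}

try (TableDeleter segment = new TableDeleter(generalTableHelper, configuration, segmentMatchTableName);
     TableDeleter word = new TableDeleter(generalTableHelper, configuration, wordMatchTableName)) {
    // business logic
}
Resources declared this way are closed in reverse order, and an exception thrown while closing one of them is attached as a suppressed exception rather than silently lost.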
In general, there's no way out of this. You need multiple finally blocks.
However, I don't want to comment on your specific code, whether or not that's an appropriate design. It certainly looks pretty odd.
There is no way, I'm afraid. There was a similar pattern when closing IO resources. E.g. what do you do when closing a file throws an IOException? Usually you just had to ignore it. As this was a bit of an anti-pattern, they introduced the try-with-resources syntax in Java 7. For your example, though, I think there is no other option. Perhaps put each finally block into its own method to make it clearer.
To call multiple methods from a finally block, you have to ensure that none of them throw -- which is a good idea anyway, because any exception thrown from a finally block will override the exception or return value thrown from the try/catch.
The most common use case is a file or database connection, in which case you write a "close quietly" method (or use one from an existing library, such as Jakarta Commons IO). If the things you need to clean up don't let you use a pre-existing method, you write your own (in your case, deleteTableQuietly()).
If you're using JDK-7, you can also use the "try with resource" construct.
You could create an abstract class Action with an execute method, and derive one class from it for each exception-throwing method you want to call, calling that method from execute. Then you can create a list of Actions and loop over the elements of the list, calling their execute methods in a try block and ignoring exceptions.
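A sketch of that idea, reusing the names from the question (purely illustrative):
abstract class Action {
    abstract void execute() throws Exception;
}

List<Action> cleanup = Arrays.asList(
    new Action() { void execute() throws Exception { hTable.close(); } },
    new Action() { void execute() throws Exception { generalTableHelper.deleteTable(configuration, segmentMatchTableName); } },
    new Action() { void execute() throws Exception { generalTableHelper.deleteTable(configuration, wordMatchTableName); } }
);

for (Action action : cleanup) {
    try {
        action.execute();
    } catch (Exception e) {
        logger.error("Cleanup step failed, continuing with the rest", e); // swallow and carry on
    }
}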
deleteTableSilently(table1);
deleteTableSilently(table2);
deleteTableSilently(table3);

private void deleteTableSilently(String table) {
    try {
        generalTableHelper.deleteTable(configuration, table);
    } catch (Exception e) {
        log.error("Could not delete table " + table, e);
    }
}
Consider using the java.util.concurrent framework -- if you code each call as an individual Callable (named or anonymous), you could use ExecutorService.invokeAll.
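A rough sketch of that idea (a single-threaded executor keeps the cleanup sequential; every task is attempted even if another one throws, and failures surface through the returned Futures):
ExecutorService cleanupExecutor = Executors.newSingleThreadExecutor();

List<Callable<Void>> tasks = new ArrayList<>();
tasks.add(new Callable<Void>() {
    public Void call() throws Exception {
        hTable.close();
        return null;
    }
});
tasks.add(new Callable<Void>() {
    public Void call() throws Exception {
        generalTableHelper.deleteTable(configuration, segmentMatchTableName);
        return null;
    }
});

List<Future<Void>> results = cleanupExecutor.invokeAll(tasks); // each failure is reported via Future.get()
cleanupExecutor.shutdown();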
I'm using the java.util.concurrency framework for the first time. Here's a very simplified version of what I'm doing. For those not intimately familiar with the framework, future.get() executes a Callable object defined in the future. future.getOriginatingRequest() returns an object I set in the future for use by the Callable object, and I'm just trying to log which originating request object failed (it's enough to know its class name).
try {
future.get();
} catch (ExecutionException e) {
logger.error("Failed to execute future with id '" +
future.getOriginatingRequest().getClass().getName() + "'");
}
The problem I'm having is that the logging framework is outputting this:
Failed to execute future with id '$Proxy22'
Thus instead of the real class name I am getting $Proxy22 or some other number. Is there a way to get hold of the real class name rather than the proxy name? Bonus points if someone can clearly explain why I'm getting the proxy string in the first place!
I can answer the bonus question: the string is the name of a dynamic Proxy class, generated in runtime.
As for how you can get to the masked class, there's not even a guarantee that one exists at all. The only thing you can do is to call Proxy.getInvocationHandler() on your proxy object and hope that the invocation handler will reveal more information (unlikely but may be worth a shot).
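A small sketch of that, assuming getOriginatingRequest() returns the proxied object:
Object request = future.getOriginatingRequest();
if (Proxy.isProxyClass(request.getClass())) {
    InvocationHandler handler = Proxy.getInvocationHandler(request);
    logger.error("Failed to execute future handled by '" + handler.getClass().getName() + "'");
}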
I found a solution that works well for me at http://www.techper.net/2009/06/05/how-to-acess-target-object-behind-a-spring-proxy/
@SuppressWarnings({"unchecked"})
protected <T> T getTargetObject(Object proxy, Class<T> targetClass) throws Exception {
if (AopUtils.isJdkDynamicProxy(proxy)) {
return (T) ((Advised)proxy).getTargetSource().getTarget();
} else {
return (T) proxy; // expected to be cglib proxy then, which is simply a specialized class
}
}
Usage
@Override
protected void onSetUp() throws Exception {
getTargetObject(fooBean, FooBeanImpl.class).setBarRepository(new MyStubBarRepository());
}
I'm guessing that the proxy class is a subclass of the class you're looking for, especially if the class you're looking for is a class you wrote.
Can you access the inheritance tree for the object you found, maybe through reflection?
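A quick way to check that with plain reflection (for a JDK dynamic proxy the interesting part is usually getInterfaces(), since such proxies extend java.lang.reflect.Proxy rather than your own class):
Class<?> type = future.getOriginatingRequest().getClass();
while (type != null) {
    System.out.println(type.getName() + " implements " + Arrays.toString(type.getInterfaces()));
    type = type.getSuperclass();
}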