I'm trying to implement a singleton pattern using AspectJ and the pertypewithin clause.
This is for educational purposes, to fully understand the clause. I already did this using other techniques, but I can't make it work as I wish.
package resourceManager;
//For every Type that extends resource
public aspect ResourcePool pertypewithin(resource+) {
//The resource
public Object cached;
Object around(Object instance) : execution(*.new(..)) && !within(resource) && this(instance) {
    if (cached == null) {
        proceed(instance);
        cached = instance;
    }
    System.out.println(instance + " " + cached);
    return cached;
}
}
My problem is that at the end I return cached, but the new object in my main function still holds the value of instance. Inside the aspect it behaves as it should: the first time I instantiate a resource, cached and instance hold the same value; the second time, they differ.
instance: resourceManager.RB@4ffac352 cached: resourceManager.RB@4ffac352
instance: resourceManager.RB@582d6583 cached: resourceManager.RB@4ffac352
But when I print the two new objects, this happens:
resourceManager.RB@4ffac352
resourceManager.RB@582d6583
I just found this old question today after it was edited recently.
What you want is a caching solution based on pertypewithin. While the idea is nice, there is one technical problem: you cannot return an object from an around() : execution(*.new(..)) advice, because the object in question is not fully instantiated yet and a constructor execution join point returns void. Try it for yourself: change the advice return type to void and do not return anything. It works - surprise, surprise! ;-)
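For illustration, here is a minimal, untested sketch of that experiment, reusing the pointcut from the question but with a void return type:
// Inside the ResourcePool aspect: a void around advice on constructor
// execution compiles and runs, unlike the Object-returning variant.
void around(Object instance) : execution(*.new(..)) && !within(resource) && this(instance) {
    proceed(instance);
    System.out.println("constructed " + instance);
}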
So what can you do instead? Use around() : call(*.new(..)) instead in order to manipulate the result of a constructor call. You can even skip object creation altogether by not calling proceed() from there.
There are several ways to utilise this in order to make your resource objects cached singletons. But because you specifically asked for a pertypewithin use case, I am going to choose a solution involving this type of aspect instantiation. The drawback here is that you need to combine two aspects in order to achieve the desired outcome.
Sample resource classes:
package de.scrum_master.resource;
public class Foo {}
package de.scrum_master.resource;
public class Bar {}
Driver application creating multiple instances of each resource type:
package de.scrum_master.app;
import de.scrum_master.resource.Bar;
import de.scrum_master.resource.Foo;
public class Application {
public static void main(String[] args) {
System.out.println(new Foo());
System.out.println(new Foo());
System.out.println(new Bar());
System.out.println(new Bar());
}
}
Modified version of resource pool aspect:
package de.scrum_master.resourceManager;
public aspect ResourcePool pertypewithin(de.scrum_master.resource..*) {
private Object cached;
after(Object instance) : execution(*.new(..)) && this(instance) {
// Not necessary because SingletonAspect only proceeds when 'cached == null'
//if (cached != null) return;
cached = instance;
System.out.println("Cached instance = " + cached);
}
public Object getCachedInstance() {
return cached;
}
}
Aspect skipping object creation and returning cached objects:
package de.scrum_master.resourceManager;
public aspect SingletonAspect {
Object around() : call(de.scrum_master.resource..*.new(..)) {
Object cached = ResourcePool
.aspectOf(thisJoinPointStaticPart.getSignature().getDeclaringType())
.getCachedInstance();
return cached == null ? proceed() : cached;
}
}
Console log:
Cached instance = de.scrum_master.resource.Foo@5caf905d
de.scrum_master.resource.Foo@5caf905d
de.scrum_master.resource.Foo@5caf905d
Cached instance = de.scrum_master.resource.Bar@8efb846
de.scrum_master.resource.Bar@8efb846
de.scrum_master.resource.Bar@8efb846
I am somewhat new to Spring and have recently generated a JHipster monolith application with the WebFlux option. My current aim is to make it compatible with Firestore and implement some missing features like inserting document references. To do so, I currently have the following structure:
A domain object class "Device" which holds a field String firmwareType;
A domain object class "FirmwareType"
A DTO object DeviceDTO which holds a field FirmwareType firmwareType;
I also have the corresponding Repository (extending FirestoreReactiveRepository, which extends ReactiveCrudRepository) and Controller implementations, which all work fine. To perform the conversion from a "full object" of FirmwareType in the DTO object to a String firmwareTypeId in the Device object, I implemented a MapStruct Mapper:
@Mapper(unmappedTargetPolicy = org.mapstruct.ReportingPolicy.IGNORE, componentModel = "spring")
public abstract class DeviceMapper {
private final Logger logger = LoggerFactory.getLogger(DeviceMapper.class);
@Autowired
protected FirmwareTypeRepository fwTypeRepo;
public abstract Device dtoToDevice(DeviceDTO deviceDTO);
public abstract DeviceDTO deviceToDto(Device device);
public abstract List<DeviceDTO> devicesToDTOs(List<Device> devices);
public abstract List<Device> dtosToDevices(List<DeviceDTO> dtos);
public String map(FirmwareType value) {
if (value == null || value.getId() == null || value.getId().isEmpty()) {
return null;
}
return value.getId();
}
public FirmwareType map(String value) {
if (value == null || value.isEmpty()) {
return null;
}
return fwTypeRepo.findById(value).block(); // <<-- this gets stuck!
}
}
The FirmwareTypeRepository which is autowired as fwTypeRepo field:
@Repository
public interface FirmwareTypeRepository extends FirestoreReactiveRepository<FirmwareType> {
Mono<FirmwareType> findById(String id);
}
The corresponding map functions get called perfectly fine, but the fwTypeRepo.findById(..) call in the marked line (third from last in the mapper) seems to get stuck and never returns or throws an error. When fwTypeRepo is called via its Controller endpoint, it works without any issues.
I suppose it must be some kind of calling-context issue? Is there another way to synchronously force a result from a Mono other than block()?
Thanks for your help in advance, everyone!
Edit: At this point, I am sure it has something to do with autowiring the Repository. It seems to not do so correctly / use the correct instance. While a customized Interface+Impl is called correctly, the underlying logic (from FirestoreReactive/ReactiveCrudRepository) doesn't seem to supply data correctly (also when @Autowired is used in other components!). I found some hints pointing at the package structure (i.e. the Application class needs to be in a root package), but that isn't an issue.
MapStruct is not reactive as far as I know, so this approach won't work. You'd need MapStruct to return a Mono that builds the object itself, but that wouldn't make sense for a mapping library, which is meant for blocking transformations.
You could try using two Monos/mappers, one for each DB call, then Mono.zip(dbCall1, dbCall2) and set the mapped output of one call into the field of the other object.
var call1 = Mono.fromFuture(() -> db.loadObject1()).map(o -> mapper1.map(o));
var call2 = Mono.fromFuture(() -> db.loadObject2()).map(o -> mapper2.map(o));
Mono.zip(call1, call2)
    .map(t -> {
        var o1 = t.getT1();
        var o2 = t.getT2();
        o1.setField(o2);
        return o1;
    });
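Note that nothing executes until the zipped Mono is subscribed to, so return it from your reactive chain (for example from the controller) rather than forcing a result with block().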
I have to test a method which uses a mutable object
private final List<LogMessage> buffer;
...
void flushBuffer() {
sender.send(buffer);
buffer.clear();
}
I need to test that it sends buffers with exact size.
ArgumentCaptor is not applicable because the captured collection is cleared by the time of the assertion.
Is there a kind of matcher which can reuse Hamcrest's hasSize() and does check right in time of method call?
I would prefer something like this hypothetical collectionWhich matcher:
bufferedSender.flushBuffer();
verify(sender).send(collectionWhich(hasSize(5)));
A lightweight alternative to David's idea: Use an Answer to make a copy at the time of the call. Untested code, but this should be pretty close:
final List<LogMessage> capturedList = new ArrayList<>();
// This uses a lambda, but you could also do it with an anonymous inner class:
// new Answer<Void>() {
// @Override public Void answer(InvocationOnMock invocation) { /* ... */ }
// }
doAnswer(invocation -> {
    List<LogMessage> argument = (List<LogMessage>) invocation.getArguments()[0];
    capturedList.addAll(argument);
    return null; // send() is void, so the Answer returns null
}).when(sender).send(any());
bufferedSender.flushBuffer();
assertThat(capturedList).hasSize(5);
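If you want to reuse Hamcrest's hasSize() as asked in the question, the equivalent assertion would be assertThat(capturedList, hasSize(5)), using org.hamcrest.MatcherAssert.assertThat and org.hamcrest.Matchers.hasSize.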
The Jeff Bowman answer is fine, but I think we can improve it by inlining the assertion in the Answer object itself. That avoids creating unnecessary copies and additional local variables.
Besides, in cases where we would need to copy the state of custom objects (by performing a deep copy), this way is much simpler: it doesn't require any custom code or library to perform the copies, as the assertion is done on the fly.
In Java 8, it would give:
import static org.mockito.Mockito.*;
doAnswer(invocation -> {
    List<LogMessage> listAtMockTime = invocation.getArgument(0);
    Assert.assertEquals(5, listAtMockTime.size());
    return null; // send() is void, so the Answer returns null
}).when(sender).send(any());
bufferedSender.flushBuffer();
Note that InvocationOnMock.getArgument(int index) is generic, so no cast is required from the caller: the returned type is inferred from the target, here the declared variable to which we assign the result.
You would have the same issue as with ArgumentCaptor, since the verify() method checks the invocation against the state of the object after the execution. No capture is performed that would keep the state at invocation time.
So with a mutable object I think that a better way would be to not use Mockito and instead create a stub of the Sender class where you capture the actual size of the collection as send() is invoked.
Here is a sample stub class (a minimal example that you could of course enrich/adapt):
class SenderStub extends Sender {
private int bufferSize;
private boolean isSendInvoked;
public int getBufferSize() {
return bufferSize;
}
public boolean isSendInvoked(){
return isSendInvoked;
}
@Override
public void send(List<LogMessage> buffer) {
this.isSendInvoked = true;
this.bufferSize = buffer.size();
}
}
Now you have a way to check whether send() was invoked on the Sender and with which buffer size.
So put Mockito aside, create this stub, and verify its behavior:
SenderStub sender = new SenderStub();
MyClassToTest myClass = new MyClassToTest(sender);
// action
myClass.flushBuffer();
// assertion
Assert.assertTrue(sender.isSendInvoked());
Assert.assertEquals(5, sender.getBufferSize());
I am writing an application using the JVMTI. I am trying to instrument bytecode by injecting method calls on every method entry.
I know how to do that, but the problem is with the instrumentation class, say it's called Proxy, which I load using the JNI function DefineClass. My Proxy has a few dependencies on the Java class library, currently just java.lang.ThreadLocal<Boolean>.
Now, say I have this, where inInstrumentMethod is a plain boolean:
public static void onEntry(int methodID)
{
if (inInstrumentMethod) {
return;
} else {
inInstrumentMethod = true;
}
System.out.println("Method ID: " + methodID);
inInstrumentMethod = false;
}
The code compiles and works. However, if I make inInstrumentMethod a java.lang.ThreadLocal<Boolean>, I get a NoClassDefFoundError. The code:
private static ThreadLocal<Boolean> inInstrumentMethod = new ThreadLocal<Boolean>() {
@Override protected Boolean initialValue() {
return Boolean.FALSE;
}
};
public static void onEntry(int methodID)
{
if (inInstrumentMethod.get()) {
return;
} else {
inInstrumentMethod.set(true);
}
System.out.println("Method ID: " + methodID);
inInstrumentMethod.set(false);
}
My guess is that the dependencies have not been resolved correctly, and java.lang.ThreadLocal was not loaded (and thus could not be found). The question is, then, how do I force Java to load java.lang.ThreadLocal? I don't think I could use DefineClass in this case; is there an alternative?
I don’t think that there is a problem resolving the standard class java.lang.ThreadLocal, but rather with the inner class extending it, generated by
new ThreadLocal<Boolean>() {
@Override protected Boolean initialValue() {
return Boolean.FALSE;
}
};
Solving this via DefineClass might indeed be impossible due to the circular dependency between the inner and outer class: there is no order in which they can be defined, unless you have a full-fledged ClassLoader that returns the classes on demand.
The simplest solution is to avoid the generation of an inner class at all, which is possible with Java 8:
private static ThreadLocal<Boolean> inInstrumentMethod
= ThreadLocal.withInitial(() -> Boolean.FALSE);
If you use a version prior to Java 8, you can’t use it that way, so the best solution in that case is to rewrite the code to accept the default value of null as the initial value, eliminating the need to specify a different one:
private static ThreadLocal<Boolean> inInstrumentMethod = new ThreadLocal<>();
public static void onEntry(int methodID)
{
if (inInstrumentMethod.get()!=null) {
return;
} else {
inInstrumentMethod.set(true);
}
System.out.println("Method ID: " + methodID);
inInstrumentMethod.set(null);
}
You could also convert that anonymous inner class to a top-level class. Since that class then has no dependency on what was formerly its outer class, defining that subtype of ThreadLocal first, before defining the class that uses it, should solve the issue.
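A minimal sketch of such a top-level class (the class name is my own invention):
public class FalseInitialThreadLocal extends ThreadLocal<Boolean> {
    @Override
    protected Boolean initialValue() {
        return Boolean.FALSE;
    }
}
Define this class via DefineClass before defining the Proxy class that references it.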
Maybe I'm not thinking hard enough, or the answer is really elusive. Quick scenario (try the code out; it compiles):
Consider a legacy interface
public interface LegacyInterfaceNoCodeAvailable{
void logInfo(String message);
}
Then consider a legacy implementation of the interface above:
public abstract class LegacyClassNoCodeAvailable implements LegacyInterfaceNoCodeAvailable{
public abstract void executeSomething();
public void rockItOldSchool(){
logInfo("bustin' chops, old-school style");
}
@Override
public void logInfo(String message){
System.out.println(message);
}
}
Now I come in as this ambitious person and write a class for a 'new' system that runs inside the 'legacy' framework, hence I have to extend the legacy base class.
public class SpankingShiny extends LegacyClassNoCodeAvailable {
public void executeSomething(){
rockItOldSchool();
logInfo("I'm the King around here now");
System.out.println("this new stuff rocks!!");
}
}
Everything works great, just like you would expect:
SpankingShiny shiny = new SpankingShiny();
shiny.executeSomething();
The above code yields (as expected):
bustin' chops, old-school style
I'm the King around here now
this new stuff rocks!!
Now as you can see, System.out.println() faithfully prints the desired output. But I wish to replace it with a logger.
Problem:
I'm unable to get the CGLIB proxy to intercept the call to 'logInfo(String)' and print my desired message through a logger (I have done the logging configuration right, by the way). That method invocation apparently does not hit the proxy.
Code:
public class SpankingShinyProxy implements MethodInterceptor{
private SpankingShiny realShiny;
private final Logger logger = Logger.getLogger(SpankingShinyProxy.class);
public SpankingShinyProxy(SpankingShiny realShiny) {
super();
this.realShiny = realShiny;
}
@Override
public Object intercept(Object proxyObj, Method proxyMethod, Object[] methodParams, MethodProxy methodProxy) throws Throwable {
String methodName = proxyMethod.getName();
if("logInfo".equals(methodName)){
logger.info(methodParams[0]);
}
return proxyMethod.invoke(realShiny, methodParams);
}
public static SpankingShiny createProxy(SpankingShiny realObj){
Enhancer e = new Enhancer();
e.setSuperclass(realObj.getClass());
e.setCallback(new SpankingShinyProxy(realObj));
SpankingShiny proxifiedObj = (SpankingShiny) e.create();
return proxifiedObj;
}
}
Main method:
public static void main(String... args) {
SpankingShiny shiny = new SpankingShiny();
shiny.executeSomething();
SpankingShiny shinyO = SpankingShinyProxy.createProxy(shiny);
shinyO.executeSomething();
}
The above code yields (NOT as expected):
bustin' chops, old-school style
I'm the King around here now
this new stuff rocks!!
bustin' chops, old-school style
I'm the King around here now
this new stuff rocks!!
Where would I be going wrong?
Thanks!
I had the same problem. In my case, the realObj was a proxy itself (a Spring bean, a @Component).
So what I had to do was change the .setSuperclass() part in:
Enhancer e = new Enhancer();
e.setSuperclass(realObj.getClass());
e.setCallback(new SpankingShinyProxy(realObj));
SpankingShiny proxifiedObj = (SpankingShiny) e.create();
I changed:
e.setSuperclass(realObj.getClass());
To:
e.setSuperclass(realObj.getClass().getSuperclass());
This worked because, as said, realObj.getClass() was a CGLIB proxy itself, and that method returned a crazy-name CGLIB-generated class, such as a.b.c.MyClass$$EnhancerBySpringCGLIB$$1e18666c. When I added .getSuperclass(), it returned the class it should have been returning in the first place.
Well, first of all, you are lucky that your proxy is not hit. If you were referencing the actual proxy within intercept, you would end up with an endless loop, since your reflective method invocation would get dispatched by the same SpankingShinyProxy. Again and again.
The proxy is not working because you simply delegate the executeSomething method call on your proxy to an unproxied object. You must not use realObj: all method calls must be dispatched by your proxy, including those the object invokes on itself.
Change the last line in your intercept method to methodProxy.invokeSuper(proxyObj, methodParams). Then construct your object by using the Enhancer. If your constructor for SpankingShiny does not need arguments, calling create() without any arguments is fine; otherwise, supply the objects you would normally pass to the constructor to the create method. Then use only the object that you get from create() and you are good.
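Putting that together, the interceptor would look roughly like this (an untested sketch; the realShiny delegate becomes unnecessary):
@Override
public Object intercept(Object proxyObj, Method proxyMethod, Object[] methodParams, MethodProxy methodProxy) throws Throwable {
    if ("logInfo".equals(proxyMethod.getName())) {
        logger.info(methodParams[0]);
    }
    // Dispatch to the superclass implementation on the proxy object itself, so
    // self-invocations such as rockItOldSchool() calling logInfo() also hit this interceptor.
    return methodProxy.invokeSuper(proxyObj, methodParams);
}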
If you want more information on cglib, you might want to read this blog article: http://mydailyjava.blogspot.no/2013/11/cglib-missing-manual.html
I'm trying to create instances of CDI managed beans using the BeanManager rather than Instance.select().get().
This was suggested as a workaround to an issue I've been having with ApplicationScoped beans and garbage collection of their dependents - see CDI Application and Dependent scopes can conspire to impact garbage collection? for background and this suggested workaround.
If you use the Instance programmatic lookup method on an ApplicationScoped bean, the Instance object and any beans you get from it are all ultimately dependent on the ApplicationScoped bean, and therefore share its lifecycle. If you create beans with the BeanManager, however, you have a handle on the Bean instance itself and can apparently destroy it explicitly, which I understand means it will be GCed.
My current approach is to create the bean within a BeanManagerUtil class, and return a composite object of Bean, instance, and CreationalContext:
public class BeanManagerUtil {
@Inject private BeanManager beanManager;
@SuppressWarnings("unchecked")
public <T> DestructibleBeanInstance<T> getDestructibleBeanInstance(final Class<T> type,
final Annotation... qualifiers) {
DestructibleBeanInstance<T> result = null;
Bean<T> bean = (Bean<T>) beanManager.resolve(beanManager.getBeans(type, qualifiers));
if (bean != null) {
CreationalContext<T> creationalContext = beanManager.createCreationalContext(bean);
if (creationalContext != null) {
T instance = bean.create(creationalContext);
result = new DestructibleBeanInstance<T>(instance, bean, creationalContext);
}
}
return result;
}
}
public class DestructibleBeanInstance<T> {
private T instance;
private Bean<T> bean;
private CreationalContext<T> context;
public DestructibleBeanInstance(T instance, Bean<T> bean, CreationalContext<T> context) {
this.instance = instance;
this.bean = bean;
this.context = context;
}
public T getInstance() {
return instance;
}
public void destroy() {
bean.destroy(instance, context);
}
}
From this, in the calling code, I can then get the actual instance, put it in a map for later retrieval, and use as normal:
private Map<Worker, DestructibleBeanInstance<Worker>> beansByTheirWorkers =
new HashMap<Worker, DestructibleBeanInstance<Worker>>();
...
DestructibleBeanInstance<Worker> destructible =
beanUtils.getDestructibleBeanInstance(Worker.class, workerBindingQualifier);
Worker worker = destructible.getInstance();
...
When I'm done with it, I can look up the destructible wrapper and call destroy() on it, and the bean and its dependents should be cleaned up:
DestructibleBeanInstance<JamWorker> workerBean =
beansByTheirWorkers.remove(worker);
workerBean.destroy();
worker = null;
However, after running several workers and leaving my JBoss (7.1.0.Alpha1-SNAPSHOT) for 20 minutes or so, I can see GC occurring
2011.002: [GC
Desired survivor size 15794176 bytes, new threshold 1 (max 15)
1884205K->1568621K(3128704K), 0.0091281 secs]
Yet a JMAP histogram still shows the old workers and their dependent instances hanging around, unGCed. What am I missing?
Through debugging, I can see that the context field of the bean created has the contextual of the correct Worker type, no incompleteInstances and no parentDependentInstances. It has a number of dependentInstances, which are as expected from the fields on the worker.
One of these fields on the Worker is actually an Instance, and when I compare this field with that of a Worker retrieved via programmatic Instance lookup, they have a slightly different CreationalContext makeup. The Instance field on the Worker looked up via Instance has the worker itself under incompleteInstances, whereas the Instance field on the Worker retrieved from the BeanManager doesn't. They both have identical parentDependentInstances and dependentInstances.
This suggests to me that I haven't mirrored the retrieval of the instance correctly. Could this be contributing to the lack of destruction?
Finally, when debugging, I can see bean.destroy() being called in my DestructibleBeanInstance.destroy(), and this goes through to ManagedBean.destroy, and I can see dependent objects being destroyed as part of the .release(). However they still don't get garbage collected!
Any help on this would be very much appreciated! Thanks.
I'd change a couple of things in the code you pasted.
Make that class a regular Java class, no injection, and pass in the BeanManager. Something could be messing things up that way. It's not likely, but possible.
Create a new CreationalContext by using BeanManager.createCreationalContext(null), which will give you essentially a dependent scope that you can release when you're done by calling CreationalContext.release().
You may be able to get everything to work correctly the way you want by calling the release method on the CreationalContext you already have in the DestructibleBeanInstance, assuming there's no other Beans in that CreationalContext that would mess up your application. Try that first and see if it messes things up.
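That variant would be a one-line change in DestructibleBeanInstance (assuming no other beans share the context):
public void destroy() {
    context.release();
}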
Passing in null should only be done when you are injecting some class other than a bean. In your case, you are injecting a bean. However, I would still expect GC to work in this case, so could you file a JIRA in the Weld issue tracker with a test case and steps to reproduce?
A nicer way to solve your problem could be to use a dynamic proxy to handle the bean destruction. The code to obtain a bean class instance programmatically would be:
public static <B> B getBeanClassInstance(BeanManager beanManager, Class<B> beanType, Annotation... qualifiers) {
final B result;
Set<Bean<?>> beans = beanManager.getBeans(beanType, qualifiers);
if (beans.isEmpty())
result = null;
else {
final Bean<B> bean = (Bean<B>) beanManager.resolve(beans);
if (bean == null)
result = null;
else {
final CreationalContext<B> cc = beanManager.createCreationalContext(bean);
final B reference = (B) beanManager.getReference(bean, beanType, cc);
Class<? extends Annotation> scope = bean.getScope();
if (scope.equals(Dependent.class)) {
if (beanType.isInterface()) {
result = (B) Proxy.newProxyInstance(bean.getBeanClass().getClassLoader(), new Class<?>[] { beanType,
Finalizable.class }, new InvocationHandler() {
@Override
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
if (method.getName().equals("finalize")) {
bean.destroy(reference, cc);
}
try {
return method.invoke(reference, args);
} catch (InvocationTargetException e) {
throw e.getCause();
}
}
});
} else
throw new IllegalArgumentException("If the resolved bean is dependent scoped then the received beanType should be an interface in order to manage the destruction of the created dependent bean class instance.");
} else
result = reference;
}
}
return result;
}
interface Finalizable {
void finalize() throws Throwable;
}
This way the user code is simpler: it doesn't have to take care of the destruction.
The limitation of this approach is that the case where the received beanType isn't an interface and the resolved bean class is @Dependent is not supported. But that is easy to work around: just use an interface.
I tested this code (with JBoss 7.1.1) and it also works for dependent stateful session beans.
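For completeness, a hypothetical usage of the helper above (assuming Worker is an interface with a resolvable bean, and doWork() is one of its methods):
BeanManager beanManager = ...; // e.g. injected, or looked up via JNDI
Worker worker = getBeanClassInstance(beanManager, Worker.class);
worker.doWork();
// Once the proxy becomes unreachable, the JVM's finalize() call is routed through
// the InvocationHandler, which triggers bean.destroy(reference, cc).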