I have two nodes, and both of them are subscribed to a topic.
When one of the nodes publishes a message, the other one does not get the message the first time. If the node publishes a message a second time, then the other node gets it.
If I call hazelcastInstance.getTopic(TopicX) during the application initialization phase, the message listeners work as desired.
I think it is related to the lazy-init attribute.
Is there a more reliable way to avoid this problem? Could a reliable topic be the solution? If so, is there any sample code for implementing a reliable topic with Spring?
@vourla, I'd suggest using ReliableTopic, since it's backed by a Ringbuffer; as long as the backing ringbuffer is not full, listeners can read the first message properly.
Also, please see the related doc section: https://docs.hazelcast.org/docs/3.11.1/manual/html-single/index.html#configuring-reliable-topic
Instead of adding the listener programmatically, add it via configuration. Also, for a plain Topic, events are fire-and-forget: if you add the listener after the event has been fired from another node, you won't get it, whether you define the listener programmatically or via config. With ReliableTopic, both approaches should work.
You can check the Spring-related doc section and the related code samples as well: https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/spring-configuration
@gokhan-oner, thank you for the answer.
Actually, I tried to implement a reliable topic at first, but I couldn't find a sample implementation for Spring. The syntax is a little different in Spring.
Now, the implementation is done like this:
<hz:hazelcast id="instance">
    <hz:ringbuffer name="topicX" capacity="1000" time-to-live-seconds="5"/>
    <hz:ringbuffer name="topicY" capacity="1000" time-to-live-seconds="5"/>
    <hz:reliable-topic name="topicX" topic-overload-policy="BLOCK"/>
    <hz:reliable-topic name="topicY" topic-overload-policy="BLOCK"/>
</hz:hazelcast>
However, declaring the topic listeners in the configuration did not work, so I added the listeners programmatically while the context was initializing.
What is not working for me:
<hz:reliable-topic name="topicZ" topic-overload-policy="BLOCK">
    <hz:message-listeners>
        <hz:message-listener class-name="tr.com.test.HazelcastTopicListener"/>
    </hz:message-listeners>
</hz:reliable-topic>
What is working:
HazelcastTopicListener hazelcastTopicListener = new HazelcastTopicListener();
HazelcastInstance hazelcastInstance = SpringIntegration.getBean(HazelcastInstance.class);
ITopic<Message> testTopic = hazelcastInstance.getTopic("topicZ");
testTopic.addMessageListener(hazelcastTopicListener);
@vourla, please check the hazelcast-spring-XX.xsd file. The attribute name is class-or-bean-name, not class-name. Can you try it like below:
<hz:reliable-topic name="topicZ" topic-overload-policy="BLOCK">
    <hz:message-listeners>
        <hz:message-listener class-or-bean-name="tr.com.test.HazelcastTopicListener"/>
    </hz:message-listeners>
</hz:reliable-topic>
I have integrated open-telemetry collector with spring-cloud-sleuth.
I have created a Spring aspect where I am trying to add a custom baggage field serviceName to the current trace using the following line of code (Reference):
tracer.createBaggage( "serviceName", "spring-cloud-sleuth-otel-slf4j" );
This works, but the baggage is only added from the next span onwards. I found the following solution (here), which is supposed to solve this, but it works with Brave only:
@Bean
ScopeDecorator mdcScopeDecorator() {
    return MDCScopeDecorator.newBuilder()
            .clear()
            .add(SingleCorrelationField.newBuilder(countryCodeField())
                    .flushOnUpdate()
                    .build())
            .build();
}
However, I could not find anything similar when not using brave. Can somebody suggest how it can be achieved?
Software versions
Spring Version 5.3.18 and earlier
JDK Version 1.8.0_202
Overview
When I use Spring's ApplicationListener, my ApplicationListener implementation class injects itself in order to prevent transaction invalidation (of course, the code could be written differently to avoid this problem). Written this way, my listener is triggered twice after the event is published. I think this is not normal, but I am not sure whether it is a bug, so I want to ask everyone's opinion.
@Component
public class EventDemoListener implements ApplicationListener<EventDemo> {

    @Autowired
    DemoService1 demoService1;

    @Autowired
    DemoService2 demoService2;

    @Autowired
    EventDemoListener eventDemoListener;

    @Override
    public void onApplicationEvent(EventDemo event) {
        eventDemoListener.testTransaction();
        System.out.println("receiver " + event.getMessage());
    }

    @Transactional(rollbackFor = Exception.class)
    public void testTransaction() {
        demoService1.doService();
        demoService2.doService();
    }
}
Through this demo project, this problem can be reproduced. Please read the README.md document before running.
https://github.com/ZiFeng-Wu/spring-study
Analysis
After analysis: because the listener injects itself, property population during the creation of EventDemoListener triggers DefaultSingletonBeanRegistry#getSingleton(String, boolean) in advance.
The singletonFactory.getObject() call executed inside getSingleton() then causes the unproxied EventDemoListener object to be put into AbstractAutoProxyCreator#earlyProxyReferences.
After the properties are populated, calling AbstractAutowireCapableBeanFactory#initializeBean(String, Object, RootBeanDefinition) and executing ApplicationListenerDetector#postProcessAfterInitialization(Object, String) causes the unproxied EventDemoListener object to be put into the AbstractApplicationEventMulticaster.DefaultListenerRetriever#applicationListeners container.
When the event is published, AbstractApplicationEventMulticaster.DefaultListenerRetriever#getApplicationListeners() runs, and the listener obtained via ApplicationListener<?> listener = beanFactory.getBean(listenerBeanName, ApplicationListener.class) is the proxied EventDemoListener object.
At this point only the unproxied EventDemoListener object is in the applicationListeners container, so the proxied EventDemoListener object is also added to the final returned allListeners collection, as shown in the figure below, which eventually causes the listener to be triggered twice.
Updated answer
Now, with your updated GitHub project, I can reproduce the problem. It also occurs when using a Spring AOP aspect targeting the listener class, not just in the special case of self-injection + @Transactional. IMO, it is a Spring Core bug, which is why I created PR #28322 in order to fix issue #28283, which you raised either before or after cross-posting here. You should have linked to that issue in your question; I only found it because I was searching for keywords before creating an issue for it myself.
See also my comment in the issue, starting with this one.
Original answer (for reference)
OK, in your main class I changed
String configFile = "src/main/resources/spring-context.xml";
AbstractApplicationContext context = new FileSystemXmlApplicationContext(configFile);
to
AbstractApplicationContext context = new AnnotationConfigApplicationContext("com.zifeng.spring");
Now the application starts, also without DB configuration. It simply prints:
receiver test
There is no exception. I.e., if it does not work for you, there is probably a bug in your XML configuration. But actually, you do not need the XML at all, because you already use component and service annotations.
So if I need a database setup in order to reproduce this, please, like I said in my comment, update the project to provide an H2 configuration which works out of the box.
I am trying to understand how the new functional model of Spring Cloud Stream works and how the configuration actually works under the hood.
One of the properties I am unable to figure out is spring.cloud.stream.source.
What does this property actually signify ?
I could not understand the documentation:

Note that preceding example does not have any source functions defined (e.g., Supplier bean) leaving the framework with no trigger to create source bindings, which would be typical for cases where configuration contains function beans. So to trigger the creation of source binding we use spring.cloud.stream.source property where you can declare the name of your sources. The provided name will be used as a trigger to create a source binding.
What if I did not need a Supplier ?
What exactly is a source binding and why is it important ?
What if I only wanted to produce to a messaging topic ? Would I still need this property ?
I also could not understand how it is used in the sample here.
Spring Cloud Stream looks for java.util.function.Function<?, ?>, Consumer<?>, and Supplier<?> beans and creates bindings for them.
In the supplier case, the framework polls the supplier (every second by default) and sends the resulting data.
For example
@Bean
public Supplier<String> output() {
    return () -> "foo";
}
spring.cloud.stream.bindings.output-out-0.destination=bar
will send foo to destination bar each second.
But what if you don't need a polled source, and instead want to configure a binding to which you can send arbitrary data? Enter spring.cloud.stream.source.
spring.cloud.stream.source=output
spring.cloud.stream.bindings.output-out-0.destination=bar
allows you to send arbitrary data via the StreamBridge:
bridge.send("output-out-0", "test");
In other words, it allows you to configure one or more output bindings that you can use with the StreamBridge; otherwise, when you send to the bridge, the binding is created dynamically.
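As a hypothetical sketch (the names output1 and output2 and the destinations are invented for illustration), the docs indicate that multiple source names can be declared by separating them with semicolons, which pre-creates several output bindings for use with the StreamBridge:

```properties
# Pre-create two output bindings for StreamBridge
spring.cloud.stream.source=output1;output2
spring.cloud.stream.bindings.output1-out-0.destination=orders
spring.cloud.stream.bindings.output2-out-0.destination=audit
```

Each name follows the usual <name>-out-0 binding-name convention, so bridge.send("output1-out-0", payload) would target the orders destination.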
I am just getting started with OSGi and Declarative Services (DS) using Equinox and Eclipse PDE.
I have 2 bundles, A and B.
Bundle A exposes a component which is consumed by Bundle B. Both bundles also expose this service to the OSGi service registry again.
Everything works fine so far, and Equinox is wiring the components together, which means Bundle A and Bundle B are instantiated by Equinox (by calling the default constructor) and then the wiring happens via the bind / unbind methods.
Now, as Equinox is creating the instances of those components / services, I would like to know: what is the best way of getting such an instance?
So assume there is a third class which is NOT instantiated by OSGi:
class WantsToUseComponentB {
    public void doSomethingWithComponentB() {
        // how do I get componentB??? Something like this maybe?
        ComponentB component = (ComponentB) someComponentRegistry.getComponent(ComponentB.class.getName());
    }
}
I see the following options right now:
1. Use a ServiceTracker in the Activator to get the service ComponentBundleA.class.getName() (I have tried that already and it works, but it seems like too much overhead to me) and make it available via a static factory method:
public class Activator {
    private static ServiceTracker componentBServiceTracker;

    public void start(BundleContext context) {
        componentBServiceTracker = new ServiceTracker(context, ComponentB.class.getName(), null);
        componentBServiceTracker.open(); // the tracker must be opened before getService() returns anything
    }

    public static ComponentB getComponentB() {
        return (ComponentB) componentBServiceTracker.getService();
    }
}
2. Create some kind of registry where each component registers itself as soon as the activate() method is called:
public class ComponentB {
    public void bind(ComponentA componentA) {
        someRegistry.registerComponent(this);
    }
}
or
public class ComponentB {
    public void activate(ComponentContext context) {
        someRegistry.registerComponent(this);
    }
}
3. Use an existing registry inside OSGi / Equinox which already has those instances? I mean, OSGi is already creating the instances and wiring them together, so it must have the objects somewhere. But where? How can I get them?
Conclusion
Where does the class WantsToUseComponentB (which is NOT a component and NOT instantiated by OSGi) get an instance of ComponentB from? Are there any patterns or best practices? As I said, I managed to use a ServiceTracker in the Activator, but I thought it would be possible without it.
What I am looking for is actually something like the BeanContainer of the Spring Framework, where I can just write something like Container.getBean(ComponentA.BEAN_NAME). But I don't want to use Spring DS.
I hope that was clear enough. Otherwise I can also post some source code to explain in more detail.
Thanks
Christoph
UPDATED:
Answer to Neil's comment:
Thanks for clarifying this question against the original version, but I think you still need to state why the third class cannot be created via something like DS.
Hmm, I don't know. Maybe there is a way, but I would need to refactor my whole framework to be based on DS, so that there are no "new MyThirdClass(arg1, arg2)" statements anymore.
I don't really know how to do that, but I read something about ComponentFactories in DS. So instead of doing
MyThirdClass object = new MyThirdClass(arg1, arg2);
I might do a
ComponentFactory myThirdClassFactory = myThirdClassServiceTracker.getService();
if (myThirdClassFactory != null) {
    MyThirdClass object = myThirdClassFactory.newInstance();
    object.setArg1("arg1");
    object.setArg2("arg2");
} else {
    // here I can assume that some service of ComponentA or B went away,
    // so the MyThirdClass component cannot be created because of missing dependencies?
}
At the time of writing I don't know exactly how to use ComponentFactories, so this is supposed to be some kind of pseudocode :)
Thanks
Christoph
Christoph,
Thanks for clarifying this question against the original version, but I think you still need to state why the third class cannot be created via something like DS.
DS causes components to be published as services, therefore the only way to "get" any component from DS is to access it via the service registry. Unfortunately the service registry can be hard to use correctly using the lower level APIs because it is dynamic, so you have to cope with the possibility of services going away or not being available at precisely the moment you want them to be available, and so on. This is why DS exists: it gives you an abstraction for depending on services and managing the lifecycle of your components based on the availability of services that they reference.
If you really need to access a service without using DS or something like it (and there is quite a choice of "things like it" e.g. Spring-DM, iPOJO, Guice/Peaberry, etc) then you should use ServiceTracker. I agree there is a lot of overhead -- again, this is why DS exists instead.
To answer your suggestion (2): no, you should not create your own registry of services, because the service registry already exists. If you created a separate parallel registry, you would still have to handle all the dynamics, but in two places instead of one. The same applies to suggestion (3).
I hope this helps.
Regards,
Neil
UPDATED: Incidentally, although Spring has the Container.getBean() backdoor, you will notice that all Spring documentation strongly recommends not using that backdoor: to get hold of a Spring bean, just create another bean that references it. The same applies to DS, i.e., the best way to get hold of a DS component is to create another DS component.
Also note that in the OSGi world, even if you're using Spring-DM, there is no easy way to just call getBean(), because you first need to get hold of the Spring ApplicationContext. That is itself an OSGi service, so how do you get that service?
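As a rough sketch of that advice (the component, package, and method names here are invented for illustration, and the exact descriptor syntax depends on your DS version), the consuming class would itself be declared as a DS component that references ComponentB, so SCR hands it the instance instead of the class looking it up:

```xml
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0"
               name="example.consumerOfB">
    <implementation class="com.example.internal.ConsumerOfB"/>
    <reference name="componentB"
               interface="com.example.ComponentB"
               bind="bindComponentB"
               unbind="unbindComponentB"/>
</scr:component>
```

SCR then instantiates ConsumerOfB and calls bindComponentB(ComponentB b) once the service is available, and unbindComponentB when it goes away, so the dynamics are handled for you.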
Christoph,
I don't know if I really understand your problem.
For example:
Bundle A is providing a service using DS component:
<service>
    <provide interface="org.redview.lnf.services.IRedviewLnfSelectedService"/>
</service>
Bundle B requires this service using DS component:
<implementation class="ekke.xyz.rcp.application.internal.XyzApplicationLnfComponent"/>
As soon as Bundle A provides the service, Bundle B "gets" it through the bind() method of the implementation class:
public class XyzApplicationLnfComponent {
    public void bind(IRedviewLnfSelectedService lnfSelectedService) {
        // here it is
    }
}
hope this helps
ekke
Easy way: Inject the DS component into your Activator class with Riena:
http://wiki.eclipse.org/Riena_Getting_Started_with_injecting_services_and_extensions
Then you can call it from everywhere: Activator.getDefault().getWhateverService()
I'm configuring the logging for a Java application. What I'm aiming for is two logs: one for all messages and one for just messages above a certain level.
The app uses the java.util.logging.* classes: I'm using it as is, so I'm limited to configuration through a logging.properties file.
I don't see a way to configure two FileHandlers differently: the docs and examples I've seen set properties like:
java.util.logging.FileHandler.level = INFO
While I want two different Handlers logging at different levels to different files.
Any suggestions?
http://java.sun.com/j2se/1.4.2/docs/guide/util/logging/overview.html is helpful. You can only set one Level for any individual Logger (as you can tell from the setLevel() method on the Logger). However, you can set the logger to the lower of the two levels and then filter programmatically.
Unfortunately, you can't do this just with the configuration file. To switch with just the configuration file you would have to switch to something like log4j, which you've said isn't an option.
So I would suggest altering the logging in code, with Filters, with something like this:
class LevelFilter implements Filter {
    private final Level level;

    public LevelFilter(Level level) {
        this.level = level;
    }

    public boolean isLoggable(LogRecord record) {
        // Pass only records at or above the configured threshold.
        return record.getLevel().intValue() >= level.intValue();
    }
}
And then on the second handler, call setFilter(new LevelFilter(Level.INFO)) or whatever. If you want it to be file-configurable, you could use a logging properties setting you've made up yourself, and read it with the normal Properties methods.
I think the configuration code for setting up the two file handlers and the programmatic code is fairly simple once you have the design, but if you want more detail, add a comment and I'll edit.
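To make that design concrete, here is a minimal self-contained sketch (the logger name, file names, and the WARNING threshold are made up for illustration) that attaches two FileHandlers to one Logger, with a filter restricting the second file to WARNING and above:

```java
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class DualLogDemo {
    public static void main(String[] args) throws Exception {
        Logger logger = Logger.getLogger("demo.app");
        logger.setUseParentHandlers(false); // keep output off the console
        logger.setLevel(Level.ALL);         // let all records reach the handlers

        // First handler: records everything.
        FileHandler allHandler = new FileHandler("all.log");
        allHandler.setFormatter(new SimpleFormatter());
        allHandler.setLevel(Level.ALL);

        // Second handler: a Filter restricts it to WARNING and above.
        FileHandler importantHandler = new FileHandler("important.log");
        importantHandler.setFormatter(new SimpleFormatter());
        importantHandler.setFilter((LogRecord record) ->
                record.getLevel().intValue() >= Level.WARNING.intValue());

        logger.addHandler(allHandler);
        logger.addHandler(importantHandler);

        logger.fine("detail");     // ends up only in all.log
        logger.warning("trouble"); // ends up in both files

        allHandler.close();
        importantHandler.close();
    }
}
```

The same split could not be expressed in logging.properties alone, since both handlers would share the single java.util.logging.FileHandler.* property namespace.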
I think you should be able to just subclass a Handler and then override its methods to allow output to go to multiple files, depending on the level of the message. This would be done by overriding the publish() method.
Alternatively, if you have to use the system-provided FileHandler, you could call setFilter() on it to inject your own filter into the mix and, in that filter code, send ALL messages to your other file and return true if the LogRecord level is INFO or higher, causing FileHandler.publish() to write it to the real file.
I'm not sure this is the way filters are meant to be used, but I can't see why it wouldn't work.
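A rough sketch of that filter trick (the class name, file name, and message format are made up for illustration; error handling is omitted) might look like:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.logging.Filter;
import java.util.logging.Level;
import java.util.logging.LogRecord;

// A Filter that writes EVERY record to a secondary "all messages" file,
// then only lets INFO-or-higher records through to the FileHandler it is
// attached to.
public class TeeFilter implements Filter {
    private final PrintWriter allLog;

    public TeeFilter(String allLogPath) throws IOException {
        // Auto-flush so the secondary log is written as records arrive.
        this.allLog = new PrintWriter(new FileWriter(allLogPath, true), true);
    }

    @Override
    public boolean isLoggable(LogRecord record) {
        // Side effect: unconditionally record the message in the secondary file.
        allLog.println(record.getLevel() + ": " + record.getMessage());
        // Only INFO and above reach the handler this filter is attached to.
        return record.getLevel().intValue() >= Level.INFO.intValue();
    }
}
```

Attached with fileHandler.setFilter(new TeeFilter("all.log")), the handler's own file then receives only INFO and above, while all.log receives everything; the caveat is that the side effect bypasses the handler's formatter and synchronization.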