I'm developing a non-OSGi app and I need to update the values of some properties used in Camel routes (loaded via a BridgePropertyPlaceholder).
So I thought:
Use Hawtio, the cool management console, to update Camel over JMX
Create a JMX MBean that will update the properties
I successfully created the MBean operations and can call them over JMX, but I can't figure out how to update the Camel routes that depend on these properties.
Is there a way to update the Camel context externally?
Update:
Example use case: when a remote server doesn't return a response, we keep sending messages until we reach the maximum number of unsuccessful attempts (messages without an ack).
In Camel we build a routing pattern based on a property loaded from the file system.
This property can change occasionally, and we want to pick up the change without restarting the server, but the problem is that Camel parses the routes when the context starts, and I can't find any way to update the routes accordingly.
I am grateful for any proposal that could help :)
If you use Camel error handling to retry (redeliver), then you can use retryWhile to keep retrying until you return false. This allows you to use Java code, and thus to read the updated configuration option.
See more details at
http://camel.apache.org/exception-clause.html
And if you have a copy of the Camel in Action book, see page 152.
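A minimal sketch of that idea (route, endpoint and helper names are illustrative assumptions; the point is that the decision bean re-reads the current max-attempts value on every redelivery instead of fixing it at startup):

import org.apache.camel.Exchange;
import org.apache.camel.Header;
import org.apache.camel.builder.RouteBuilder;

public class RetryRouteBuilder extends RouteBuilder {

    @Override
    public void configure() {
        onException(Exception.class)
            .retryWhile(method(RetryDecider.class, "shouldRetry"))
            .redeliveryDelay(2000);

        from("direct:send")
            .to("http://remote-server/ack"); // assumed remote endpoint
    }

    public static class RetryDecider {
        public boolean shouldRetry(@Header(Exchange.REDELIVERY_COUNTER) Integer counter) {
            // read the limit on every redelivery, e.g. as updated by your JMX MBean
            int maxAttempts = CurrentConfig.getMaxUnsuccessfulAttempts();
            return counter == null || counter < maxAttempts;
        }
    }

    // assumed holder for the dynamic value; in your case the MBean would update it
    public static class CurrentConfig {
        private static volatile int maxUnsuccessfulAttempts = 5;
        public static int getMaxUnsuccessfulAttempts() { return maxUnsuccessfulAttempts; }
        public static void setMaxUnsuccessfulAttempts(int value) { maxUnsuccessfulAttempts = value; }
    }
}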
For the properties you want to be dynamic, you can move them to a database and fetch them whenever you read them. I think a redesign of your Camel route is required.
To change endpoint parameters such as URLs dynamically, the following procedure has to be used (sketched in code below the list):
stop the route
remove the route
change the endpoint
add the route
start the route
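A minimal sketch of those steps with the Camel 2.x CamelContext API (the route id, endpoint URI and helper method are illustrative assumptions):

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class RouteSwapper {

    public void replaceRoute(CamelContext context) throws Exception {
        final String routeId = "remoteSenderRoute";              // assumed route id
        final String newEndpoint = readEndpointFromProperties(); // assumed helper

        context.stopRoute(routeId);            // stop the route
        context.removeRoute(routeId);          // remove the route
        context.addRoutes(new RouteBuilder() { // re-add it with the changed endpoint
            @Override
            public void configure() {
                from("direct:send").routeId(routeId)
                    .to(newEndpoint);
            }
        });
        context.startRoute(routeId);           // start the route again
    }

    private String readEndpointFromProperties() {
        // assumed: read the current endpoint URI from your properties source
        return "http://remote-server/ack";
    }
}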
If the to endpoint has to be configurable, you may use the recipient list component. There you may read properties from a database and/or from the file system using the appropriate Camel component.
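A minimal sketch of the recipient list variant (bean and endpoint names are illustrative assumptions); the lookup bean is called for every exchange, so it can return whatever the current configuration says:

import org.apache.camel.builder.RouteBuilder;

public class RecipientListRouteBuilder extends RouteBuilder {

    @Override
    public void configure() {
        from("direct:send")
            .recipientList(method(EndpointLookup.class, "currentEndpoints"));
    }

    // assumed bean; it could read the current target URI(s) from a database
    // or the file system instead of returning a fixed value
    public static class EndpointLookup {
        public String currentEndpoints() {
            return "http://remote-server/ack";
        }
    }
}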
It is a web service deployed on Apache Karaf using camel-cxf. I am able to see the CXF service listing at the URL localhost:8181/cxf, which shows some REST and SOAP services deployed on it.
The problem is that it returns the service listing whenever any request comes in with the keyword "services". For example, the URL http://localhost:8181/abcd/services returns the CXF service listing page instead of processing the actual request.
I learned from http://cxf.apache.org/docs/jaxrs-services-description.html that this is because the default value of the service-list-path of the CXFServlet is services.
Here is my question: to override this, I should set this property in etc/org.apache.cxf.osgi.cfg, but this cfg file is not present under the etc folder in my Karaf. What are the steps to take if I create this property file manually? What features do I need to install? Or is creating this cfg file sufficient?
Appreciate your help!
There should be no extra installation requirements; just create a new file etc/org.apache.cxf.osgi.cfg.
There are three settings you may be interested in:
org.apache.cxf.servlet.context = /mycxf
org.apache.cxf.servlet.service-list-path = /myservices
org.apache.cxf.servlet.hide-service-list-page = false
While the default URL for the CXF service listing is usually http://localhost:8181/cxf/services, with the changes above the URL would become http://localhost:8181/mycxf/myservices.
If you change hide-service-list-page from false (the default) to true, then your services will be hidden and you will instead get a page stating "No service was found".
Because these are initialisation settings, you need to restart Karaf for the changes to apply.
I see several points here:
The CXF framework is installed by default in karaf under the context-path /cxf.
/cxf/services can be considered a CXF-internal app that displays the list of services deployed in CXF. I don't think you can configure the name "services" here (and why would you want to change it?).
the "url-pattern in web.xml" you speak of (if I understand correctly) determines the context path of your servlet/application. You can specify this is camel like this:
<cxf:rsServer id="secureRsServer" address="https://0.0.0.0:8182/my/path/"
    serviceClass="...." />
(for the RS Server, probably same for the WS server).
I am very new to multi-tenancy. We have an application based on Java, Spring, Hibernate/JPA, etc. which doesn't support multi-tenancy.
Now we want to convert that application into a multi-tenant one. I have read about multi-tenancy and even wrote a standalone application using Hibernate with the separate-schema approach. The link I referred to is here.
I am thinking about the logging part, which is bound to change, as log files will now be maintained per tenant (client). So for each tenant there will be a separate log file, and the log file of one tenant shouldn't be accessible to another tenant.
Is there any logging API that specifically supports multi-tenancy? If not, how should I go about implementing logging in a multi-tenant application? What should be taken care of while implementing logging in a multi-tenant application?
You can use MDC (mapped diagnostic context) support to route the logging for each tenant into a separate file/dir/whatever.
You can read up on the concept here. It exists in slf4j/logback and log4j.
Simply put, you set a property like tenantName in the MDC at the beginning of every request, according to the specific tenant making the request, and then use this property in your logging configuration to determine the log file into which log messages are written.
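A minimal sketch of the request-side part (filter, header and key names are illustrative assumptions); the logging configuration, e.g. a logback SiftingAppender keyed on tenantName, can then send each tenant's messages to its own file:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

public class TenantLoggingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // how the tenant is resolved is application specific; a header is assumed here
        String tenant = ((HttpServletRequest) request).getHeader("X-Tenant-Id");
        MDC.put("tenantName", tenant != null ? tenant : "unknown");
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.remove("tenantName"); // always clean up, request threads are pooled
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}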
I have just discovered that Apache commons-configuration can read properties from a DataSource, but it does not cache them. My application needs to read properties many times, and it is too slow to access the database each time.
I have a Camel application that sends all messages to routes that end in my custom beans. These beans are created with scope prototype (I believe in OOP) and they need to read some properties, plus a data source (whose url/name/etc. come from those properties) that depends on the current user, from a SQL db. Every message I receive creates a bean, so the properties are reread.
Unfortunately, I am not free to choose where the properties are read from, because there is now another piece of software (a GUI), not written by me, that manages users/properties and writes them to the db. So I need to read the properties from there.
Can you suggest me an alternative?
You could use the Netflix Archaius project, which adds the caching behavior you are looking for as well as dynamic refresh capabilities. Archaius is built around Commons Configuration.
So, rather than subclassing the DatabaseConfiguration, you could use Archaius' DynamicConfiguration, which extends Commons' AbstractConfiguration. This class will cache whatever source you would like, and refresh the properties at an interval you specify using their poll scheduling class.
The only class you would have to implement is a PolledConfigurationSource which pulls data from the database and places it in a Map. Should be pretty simple.
https://github.com/Netflix/archaius/wiki/Users-Guide
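A minimal sketch of such a source (table and column names are assumptions), wrapped in a DynamicConfiguration that re-polls and caches the values on a fixed schedule:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import com.netflix.config.DynamicConfiguration;
import com.netflix.config.FixedDelayPollingScheduler;
import com.netflix.config.PollResult;
import com.netflix.config.PolledConfigurationSource;

public class DbConfigurationSource implements PolledConfigurationSource {

    private final DataSource dataSource;

    public DbConfigurationSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public PollResult poll(boolean initial, Object checkPoint) throws Exception {
        Map<String, Object> props = new HashMap<>();
        try (Connection c = dataSource.getConnection();
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT prop_key, prop_value FROM app_properties")) {
            while (rs.next()) {
                props.put(rs.getString("prop_key"), rs.getString("prop_value"));
            }
        }
        return PollResult.createFull(props); // full snapshot, cached by Archaius
    }

    public static DynamicConfiguration create(DataSource ds) {
        // initial delay 0 ms, then re-poll every 60 s; deletes are honoured
        return new DynamicConfiguration(
                new DbConfigurationSource(ds),
                new FixedDelayPollingScheduler(0, 60000, false));
    }
}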
I have a traditional (com.ibm.mq.jar) MQ application in Java for testing purposes. Now I need to use that application to send some messages to JMS. When I try to set any JMS property on an MQ message, for example:
message.setStringProperty("JMSDestination", "queue:///" + queueName);
I always get the error 2471 - MQRC_PROPERTY_NOT_AVAILABLE. It works if I just remove "JMS" from the property name.
Is it possible to set JMS properties directly on an MQMessage? What is the correct way to do that at the MQ level?
Btw, I have the same application in .NET, where setting JMS properties this way is possible, so I'm only trying to use the same code in Java.
It is not allowed to do this manually. Please use the JMS API to set JMS properties.
Restrictions to MQ properties are explained here.
One thing on that documentation page is interesting, though:
The names of properties specified directly as MQRFH2 elements are not guaranteed to be validated by the MQPUT call.
You could perhaps work around this on a short-term basis, though there seems to be no guarantee that setting the MQRFH2 elements directly will not be validated.
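Going back to the recommended approach, here is a minimal sketch of sending via the JMS classes instead (connection details and queue name are assumptions); the provider then manages the JMS* header fields such as JMSDestination for you:

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class JmsSendExample {
    public static void main(String[] args) throws Exception {
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("localhost");        // assumed host
        cf.setPort(1414);                   // assumed port
        cf.setQueueManager("QM1");          // assumed queue manager
        cf.setChannel("DEV.APP.SVRCONN");   // assumed channel
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("queue:///MY.QUEUE"); // assumed queue
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage("hello");
            // application-defined properties are fine; JMS* header fields
            // (e.g. JMSDestination) are set by the provider on send
            msg.setStringProperty("myProp", "value");
            producer.send(msg);
        } finally {
            conn.close();
        }
    }
}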
There is only one file, and it is written to simultaneously by the running copies of the web app.
How do you filter the log messages of one session out from the other log lines?
Using a servlet filter with either NDC or MDC information is the best way I've seen. A quick comparison of the two is available at http://wiki.apache.org/logging-log4j/NDCvsMDC.
I've found MDC has worked better for me in the past. Remember that you'll need to update your log4j properties file to include whichever version you prefer (pattern definitions at http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html).
A full example of configuring MDC with a servlet filter is available at http://veerasundar.com/blog/2009/11/log4j-mdc-mapped-diagnostic-context-example-code/.
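For reference, a minimal sketch of the pattern part of such a log4j.properties (the appender name FILE and the MDC key sessionId are assumptions; use whichever key your filter puts into the MDC):

log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d [%t] %X{sessionId} %-5p %c - %m%n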
A slightly easier to configure, but significantly inferior, option: you could opt to just print out the thread ID (via the properties file) for each request and make sure that the first thing you log about each request is a session identifier. It isn't as proper (or useful), but it can work for low-volume applications.
You could set a context message including the identifier of the specific app instance using org.apache.log4j.NDC, like this:
String appInstanceId = "My App Instance 1";
org.apache.log4j.NDC.push(appInstanceId);
try {
    // handle request
} finally {
    org.apache.log4j.NDC.pop();
}
You can set up the context during the initialization of your web app instance, or inside the doPost() method of your servlets. As its name implies, you can also nest contexts within contexts with multiple push calls at different levels.
See the section "Nested Diagnostic Contexts" in the Log4J manual.
Here is a page that sets up an MDC filter for web-app -> http://rtner.de/software/MDCUserServletFilter.html
Being a servlet filter it will free you from managing MDC/NDC in each of your servlets.
Of course, you should modify it to save information more pertinent to your web-app.
If you want to differentiate sessions within the same application, then MDC is the way to go. But if you want to differentiate the web applications writing to the same file, MDC won't help, because it works on a per-thread basis.
In that case I used to write my own appender which knows which application instance it serves; this can be done through appender configuration properties. Such an appender sticks the application name into each logging event as a property before writing it to the media, and you can then use a layout to show this property value in the text file it writes to.
Using MDC in that case won't work, because every thread would have to call MDC.put(applicationName), which is quite ugly. MDC is only good for a single process, not for several processes. If someone knows another way, I'd like to hear it.
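A hedged sketch of that idea for log4j 1.x (the class name, package and the appName property are illustrative, not from the original answer):

import org.apache.log4j.FileAppender;
import org.apache.log4j.spi.LoggingEvent;

// Configured per application instance in log4j.properties, e.g.:
//   log4j.appender.A=com.example.AppNameFileAppender   (assumed package)
//   log4j.appender.A.File=/var/log/shared.log
//   log4j.appender.A.appName=instance1
//   log4j.appender.A.layout=org.apache.log4j.PatternLayout
public class AppNameFileAppender extends FileAppender {

    private String appName = "unknown";

    public void setAppName(String appName) {
        this.appName = appName;
    }

    public String getAppName() {
        return appName;
    }

    @Override
    protected void subAppend(LoggingEvent event) {
        // prefix every line with the application name so lines from different
        // app instances can be told apart in the shared file
        this.qw.write("[" + appName + "] ");
        super.subAppend(event);
    }
}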