How can I distinguish between published OSGi services implementing the same interface by their properties?
Assuming that you want to retrieve registered services based on certain values for properties, you need to use a filter (which is based on the LDAP syntax).
For example:
int myport = 5000;
String filter = "(&(objectClass=" + MyInterface.class.getName()
        + ")(port=" + myport + "))";
ServiceReference[] serviceReferences = bundleContext.getServiceReferences(null, filter);
This looks for services that both implement MyInterface and have a port property equal to myport.
Here is the relevant javadoc for getting the references.
Remark 1:
The above example and javadoc refer to Release 4.2. If you are not restricted to a J2SE 1.4 runtime, I suggest you have a look at the Release 4.3 API, where you can use generics.
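For reference, with the 4.3 API the same lookup can be written with generics, along these lines (a sketch reusing MyInterface and myport from the example above):
// OSGi 4.3+ typed lookup (sketch): the objectClass clause is implied by the Class argument
String filter = "(port=" + myport + ")";
Collection<ServiceReference<MyInterface>> refs =
        bundleContext.getServiceReferences(MyInterface.class, filter);
for (ServiceReference<MyInterface> ref : refs) {
    MyInterface service = bundleContext.getService(ref);
    // ... use the service, then release it with bundleContext.ungetService(ref)
}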
Remark 2: (courtesy of Ray)
You can also pre-check the correctness of your filter by instead creating a Filter object from a filterStr string:
Filter filter = bundleContext.createFilter(filterStr);
which also allows you to match the filter against other criteria. You still pass filterStr to get the references, since there is no overload that accepts a Filter argument. Please be aware, however, that this way you will check the correctness twice: both getServiceReferences and createFilter throw InvalidSyntaxException when parsing the filter. Certainly not a show-stopper inefficiency, I guess, but it is worth mentioning.
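For completeness, a Filter built this way can be matched not only against a ServiceReference but also against an arbitrary Dictionary of properties, for instance in a sketch like this (someServiceReference is a placeholder for any reference you already hold):
Filter filter = bundleContext.createFilter(filterStr);

// match against a service reference you already hold (placeholder name)
boolean refMatches = filter.match(someServiceReference);

// or match against an arbitrary set of properties
Dictionary<String, Object> props = new Hashtable<String, Object>();
props.put("port", 5000);
boolean propsMatch = filter.match(props);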
Luca's answer above is correct; however, it assumes you are using the low-level API for accessing services.
If you are using Declarative Services (which I would generally recommend), then the filter can be added to the target attribute of the service reference. For example (using the bnd annotations for DS):
@Reference(target = "(port=8080)")
public void setHttpService(HttpService http) {
    // ...
}
In Blueprint you can specify the filter attribute on the reference or reference-list element. For example:
<reference id="sampleRef"
interface="org.sample.MyInterface"
filter="(port=5000)"/>
I am using OSGi Declarative Services R6. As per the documentation, I created an @interface with method declarations like this:
@ObjectClassDefinition(name = "Hello World OSGi Service")
public @interface Configuration {

    @AttributeDefinition(name = "My Foo Property",
            description = "A sample string property",
            type = AttributeType.STRING)
    String my_foo_property() default "bar";
}
My property ID gets generated as my.foo.property and the default value will be "bar". However, the problem I am having with SonarQube is that the Sonar way quality profile doesn't like the my_foo_property method declaration, because "Method names should comply with a naming convention" (squid:S00100) means it wants something like myFooProperty.
So my question is: with OSGi DS R6, how can I override the generated property ID so that my method declaration can be myFooProperty but the key stays my.foo.property?
If that is impossible, how can I add an exception in SonarQube? I don't want to remove this rule; it's a good rule.
The names that rule sees as acceptable are the ones that match the regex with which it's configured, which makes this simply a question of crafting a regex that also accepts underscore-separated names. Probably something like ([a-z][a-zA-Z0-9]*)|([a-z]+_[a-z]+_[a-z]+)
However, you mention you're using the Sonar way profile. In later versions of SonarQube, this profile is not editable, so you'll need to create your own profile, set it as default, and copy the rules from Sonar way into it before you can update the regex on this rule.
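If you want to sanity-check a candidate regex before touching the profile, a quick throwaway test with java.util.regex is enough (the pattern below is just the one suggested above):
import java.util.regex.Pattern;

public class NamingRegexCheck {

    public static void main(String[] args) {
        // first alternative: camelCase names; second alternative: names like my_foo_property
        Pattern pattern = Pattern.compile("([a-z][a-zA-Z0-9]*)|([a-z]+_[a-z]+_[a-z]+)");

        System.out.println(pattern.matcher("myFooProperty").matches());   // true
        System.out.println(pattern.matcher("my_foo_property").matches()); // true
        System.out.println(pattern.matcher("MyFooProperty").matches());   // false
    }
}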
How can we add descriptions to the fields and operations exposed for JMX?
JBoss version : JBoss EAP 5.1.2
We have a service bean like this:
@Service
@Management(MyConfigMgnt.class)
public class MyConfigService implements MyConfigLocal, MyConfigMgnt {
    private String myValue;

    public void setMyValue(String myValue) { this.myValue = myValue; }
    public String getMyValue() { return myValue; }
}
These methods are declared in the MyConfigMgnt interface.
This is visible in the JBoss JMX console, but the operations and the field are shown without any meaningful description.
How do we add relevant and proper information to the fields and the MBean?
Thanks
There are two ways of doing this.
Re-implement your service as a DynamicMBean, which is slightly more complicated but allows for the definition of attribute and operation meta-data (i.e. MyConfigMgnt extends DynamicMBean); a minimal sketch of where the descriptions go with this approach is shown at the end of this answer.
An easier way (but possibly not future-proof) is to use an XMBean descriptor. XMBeans are a proprietary JBoss JMX extension where meta-data is defined in an external XML resource. It would require no actual changes to the source code except the addition of the XMBean resource location which looks something like this:
@Service(objectName = XMBeanService.OBJECT_NAME, xmbean = "resource:META-INF/service-xmbean.xml")
If you have a very large number of attributes and operations, the XMBean XML descriptor can be arduous to write, but twiddle has a helper command which will generate a template specific to your existing simple MBean, so you can save the output, fill in the details and go from there.
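For the DynamicMBean route, the descriptions are supplied through the MBeanInfo you return from getMBeanInfo(). Here is a minimal sketch using only the standard javax.management API (the attribute and operation names mirror the MyConfigService example above; how you register the MBean stays the same):
import javax.management.Attribute;
import javax.management.AttributeList;
import javax.management.DynamicMBean;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanOperationInfo;
import javax.management.MBeanParameterInfo;

public class MyConfigDynamicMBean implements DynamicMBean {

    private String myValue;

    public MBeanInfo getMBeanInfo() {
        MBeanAttributeInfo valueAttr = new MBeanAttributeInfo(
                "MyValue", String.class.getName(),
                "Description of the MyValue attribute shown in the JMX console",
                true, true, false);

        MBeanOperationInfo setOp = new MBeanOperationInfo(
                "setMyValue",
                "Description of the setMyValue operation shown in the JMX console",
                new MBeanParameterInfo[] {
                        new MBeanParameterInfo("myValue", String.class.getName(), "The new value") },
                "void", MBeanOperationInfo.ACTION);

        return new MBeanInfo(
                MyConfigDynamicMBean.class.getName(),
                "Description of the MBean itself",
                new MBeanAttributeInfo[] { valueAttr },
                null,                                    // constructors
                new MBeanOperationInfo[] { setOp },
                null);                                   // notifications
    }

    public Object getAttribute(String name) {
        return "MyValue".equals(name) ? myValue : null;
    }

    public void setAttribute(Attribute attribute) {
        if ("MyValue".equals(attribute.getName())) {
            myValue = (String) attribute.getValue();
        }
    }

    public Object invoke(String actionName, Object[] params, String[] signature) {
        if ("setMyValue".equals(actionName)) {
            myValue = (String) params[0];
        }
        return null;
    }

    public AttributeList getAttributes(String[] names) { return new AttributeList(); }

    public AttributeList setAttributes(AttributeList attributes) { return new AttributeList(); }
}
If you go the XMBean route instead, the same descriptions live in the external XML descriptor, so no Java changes are needed beyond the @Service annotation shown above.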
I want to create a log line such as System.out.println("RuleName : " + ruleName); in the IBM ODM rule engine.
So this is what I did:
1- Create a BOM virtual method which is static and takes a parameter named instance of type ilog.rules.engine.IlrRuleInstance:
instance ilog.rules.engine.IlrRuleInstance
2- Create the BOM-to-XOM mapping with the following body:
System.out.println("inside Log");
String ruleName = "";
if (instance != null) {
    ruleName = instance.getRuleName();
} else {
    System.out.println("instance is null!");
}
if (ruleName != null) {
    System.out.println("RuleName: " + ruleName);
}
return;
3- Call it in the rule flow as an initial or final action:
utility.logla(ruleInstance);
But when I execute the flow my log doesn't work: instance is null, and therefore ruleName is also null.
How should I configure and set up this logging feature using the BOM? Could you give me an example of it?
Thanks.
You could use Decision Warehouse, which is part of the execution server, to trace each execution. The trace can include which rules were fired during the execution, depending on what filters you apply.
Here is the documentation on DW and how to set it up: http://pic.dhe.ibm.com/infocenter/dmanager/v8r5/topic/com.ibm.wodm.dserver.rules.res.managing/topics/con_res_dw_overview.html
From what I can see in your description, it is because you are calling getRuleName() outside of the context of a rule instance, within your rule flow.
If you had a BOM method called from within the action of a rule, you could then call IlrRuleInstance.getRuleName() and it would return the name of the rule (I have done such a thing myself before).
What are you trying to achieve with this logging?
There is a much better way of logging from rules. In your virtual method, pass the name of the rule itself rather than the ruleInstance. You can also verbalize your method and use the same verbalization in each rule.
For example:
From BAL:
Log the name of this rule ;
From IRL:
Log(ilog.rules.brl.IlrNameUtil.getBusinessIdentifier(?instance.ruleName));
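On the XOM side, the implementation behind such a verbalized Log method only needs to accept the rule name as a String and print it. A minimal sketch (the class and method names here are just placeholders; the question's method was called logla):
// Hypothetical XOM class backing the verbalized "Log" virtual method
public class RuleLogger {

    public static void log(String ruleName) {
        if (ruleName != null) {
            System.out.println("RuleName : " + ruleName);
        } else {
            System.out.println("RuleName not available");
        }
    }
}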
Another approach is to use the BAL provided above (the name of this rule) within the Rule Flow (Orchestration) for your Rule Application or Module.
Of course this solution should be used only for debugging or troubleshooting scenarios.
Hope this helps.
I'm trying to find a solution for configuring a server-side Java application such that different users of the system interact with the system as if it were configured differently (Multitenancy). For example, when my application services a request from user1, I wish my application to respond in Klingon, but for all other users I want it to reply in English. (I've picked a deliberately absurd example, to avoid specifics: the important thing is that I want the app to behave differently for different requests).
Ideally there's a generic solution (i.e. one that allows me to add
user-specific overrides to any part of my config without having to change code).
I've had a look at Apache Commons Configuration, which has built-in support for multitenant configuration, but as far as I can tell this is done by combining some base config with some set of overrides. This means that I'd have a config specifying:
application.lang=english
and, say a user1.properties override file:
application.lang=klingon
Unfortunately it's much easier for our support team if they can see all related configurations in one place, with overrides specified somehow inline, rather than having separate files for base vs. overrides.
I think some combination of Commons Config's multitenancy + something like a Velocity template to describe the conditional elements within underlying config is kind of what I'm aiming for - Commons Config for the ease of interacting with my configuration and Velocity for very expressively describing any overrides, in a single configuration, e.g.:
#if ($user=="user1")
application.lang=klingon
#else
application.lang=english
#end
What solutions are people using for this kind of problem?
Is it acceptable for you to code each server operation like in the following?
void op1(String username, ...)
{
    String userScope = getConfigurationScopeForUser(username);
    String language = cfg.lookupString(userScope, "language");
    int fontSize = cfg.lookupInt(userScope, "font_size");
    ... // business logic expressed in terms of language and fontSize
}
(The above pseudocode assumes the name of a user is passed as a parameter, but you might pass it via another mechanism, for example, thread-local storage.)
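If you went the thread-local route instead of passing the username around, a minimal holder might look like the sketch below (the class name UserContext is purely illustrative; you would set it at the start of each request, e.g. in a servlet filter, and clear it at the end):
// Illustrative thread-local holder for the current user's name
public final class UserContext {

    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<String>();

    private UserContext() {
    }

    public static void set(String username) {
        CURRENT_USER.set(username);
    }

    public static String get() {
        return CURRENT_USER.get();
    }

    // call in a finally block when the request completes, to avoid leaking across pooled threads
    public static void clear() {
        CURRENT_USER.remove();
    }
}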
If the above is acceptable, then Config4* could satisfy your requirements. Using Config4*, the getConfigurationScopeForUser() method used in the above pseudocode can be implemented as follows (this assumes cfg is a Configuration object that has been previously initialized by parsing a configuration file):
String getConfigurationScopeForUser(String username)
{
    if (cfg.type("user", username) == Configuration.CFG_SCOPE) {
        return Configuration.mergeNames("user", username);
    } else {
        return "user.default";
    }
}
Here is a sample configuration file to work with the above. Most users get their configuration from the "user.default" scope, but Mary and John have their own overrides of some of those default values:
user.default {
    language = "English";
    font_size = "12";
    # ... many other configuration settings
}
user.John {
    @copyFrom "user.default";
    language = "Klingon"; # override a default value
}
user.Mary {
    @copyFrom "user.default";
    font_size = "18"; # override a default value
}
If the above sounds like it might meet your needs, then I suggest you read Chapters 2 and 3 of the "Getting Started Guide" to get a good-enough understanding of the Config4* syntax and API to be able to confirm/refute the suitability of Config4* for your needs. You can find that documentation on the Config4* website.
Disclaimer: I am the maintainer of Config4*.
Edit: I am providing more details in response to comments by bacar.
I have not put Config4* in a Maven repository. However, it is trivial to build Config4* with its bundled Ant build file, because Config4* does not have any dependencies on third-party libraries.
Another approach for using Config4* in a server application (prompted by a comment by bacar) is as follows...
Implement each server operation like in the following pseudo-code:
void op1(String username, ...)
{
    Configuration cfg = getConfigurationForUser(username);
    String language = cfg.lookupString("settings", "language");
    int fontSize = cfg.lookupInt("settings", "font_size");
    ... // business logic expressed in terms of language and fontSize
}
The getConfigurationForUser() method used above can be implemented as shown in the following pseudocode:
HashMap<String,Configuration> map = new HashMap<String,Configuration>();

synchronized Configuration getConfigurationForUser(String username)
{
    Configuration cfg = map.get(username);
    if (cfg == null) {
        // Create a config object tailored for the user & add it to the map
        cfg = Configuration.create();
        cfg.insertString("", "user", username); // in global scope
        cfg.parse("/path/to/file.cfg");
        map.put(username, cfg);
    }
    return cfg;
}
Here is a sample configuration file to work with the above.
user ?= ""; # will be set via insertString()
settings {
    @if (user @in ["John", "Sam", "Jane"]) {
        language = "Klingon";
    } @else {
        language = "English";
    }
    @if (user == "Mary") {
        font_size = "12";
    } @else {
        font_size = "10";
    }
    # ... many other configuration settings
}
The main comments I have on the two approaches are as follows:
The first approach (one Configuration object that contains lots of variables and scopes) is likely to use slightly less memory than the second approach (many Configuration objects, each with a small number of variables). But my guess is that the memory usage of either approach will be measured in KB or tens of KB, and this will be insignificant compared to the overall memory footprint of your server application.
I prefer the first approach because a single Configuration object is initialized just once, and then it is accessed via read-only lookup()-style operations. This means you don't have to worry about synchronizing access to the Configuration object, even if your server application is multi-threaded. In contrast, the second approach requires you to synchronize access to the HashMap if your server application is multi-threaded (one way to avoid that explicit synchronization is sketched at the end of this answer).
The overhead of a lookup()-style operation is in the order of, say, nanoseconds or microseconds, while the overhead of parsing a configuration file is in the order of, say, milliseconds or tens of milliseconds (depending on the size of the file). The first approach performs that relatively expensive parsing of a configuration file only once, and that is done in the initialization of the application. In contrast, the second approach performs that relatively expensive parsing of a configuration file "N" times (once for each of "N" users), and that repeated expense occurs while the server is processing requests from clients. That performance hit may or may not be an issue for your application.
I think ease of use is more important than ease of implementation. So, if you feel that the second approach will make it easier to maintain the configuration file, then I suggest you use that approach.
In the second approach, you may wonder why I put most of the variables in a named scope (settings) rather than in the global scope along with the "injected" user variable. I did that for a reason that is outside the scope of your question: separating the "injected" variables from the application-visible variables makes it easier to perform schema validation on the application-visible variables.
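As mentioned above, if you do choose the second approach and you are on Java 8 or later, one way to avoid the synchronized method is to cache the per-user Configuration objects in a ConcurrentHashMap. A sketch, reusing the Config4* calls from the pseudocode above (with parse errors wrapped in an unchecked exception):
private final ConcurrentHashMap<String, Configuration> configCache =
        new ConcurrentHashMap<String, Configuration>();

Configuration getConfigurationForUser(String username)
{
    return configCache.computeIfAbsent(username, name -> {
        Configuration cfg = Configuration.create();
        cfg.insertString("", "user", name); // in global scope
        try {
            cfg.parse("/path/to/file.cfg");
        } catch (Exception ex) { // parsing problems are reported via an exception
            throw new RuntimeException(ex);
        }
        return cfg;
    });
}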
Normally user profiles go into a DB and the user must open a session with a login. The user name may go into the HttpSession (via cookies), and on every request the server can get the user name and read the profile from the DB. Sure, the "DB" can also be a set of config files like joe.properties, jim.properties, admin.properties, etc.
I have a SOAP service that exposes a method
TradeDetail getTradeDetail()
TradeDetail stores 5 fields: transaction number, dates, etc.
I need to add a couple of fields to TradeDetail. I want to keep backward compatibility (for a while) and it looks as if my options are limited to creating a new class with the extra fields
TradeDetail2 getTradeDetail2()
Now this will work - I've done it before. But are there any other solutions that people have used?
E.g.
Fundamentally change TradeDetail2 to hold name-value pairs.
Inherit TradeDetail2 from TradeDetail; this would reduce code but increase coupling.
Return XML or JSON instead
I will be able to retire the original interface pretty quickly so the code will get cleaned up and the extra TradeDetail2 won't last forever!
thanks
I sympathise - some of my webservices are riddled with myMethod(), myMethod2(), myMethod3() etc simply because I needed to add a few new fields.
Would it make sense for you to keep the method name and create a new endpoint for each version of your API instead? eg:
http://my.domain.com/servicename/v1
http://my.domain.com/servicename/v1.1
http://my.domain.com/servicename/v1.5
http://my.domain.com/servicename/v2
Then your method names stay sensible, regardless of how many future changes you need to make.
Any apps using your webservice would probably need to be rewritten and/or rebuilt against a new WSDL anyway in order to take advantage of the new fields, so why not just have them rewritten/rebuilt against the new v1.1 API.
I find that this also helps when communicating with the owners/developers of the apps using your service - eg, "Version [old] of our webservice API will no longer be supported after [date], please ensure that you are using at least version [new]."
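For what it's worth, if the service is published with plain JAX-WS, running the versions side by side can be as simple as publishing one implementor per version at the addresses above. A sketch (TradeServiceV1 and TradeServiceV1_1 are hypothetical implementor classes, one per version of the WSDL):
import javax.xml.ws.Endpoint;

public class TradeServicePublisher {

    public static void main(String[] args) {
        // hypothetical v1 implementor keeps serving the original five-field TradeDetail
        Endpoint.publish("http://localhost:8080/servicename/v1", new TradeServiceV1());

        // hypothetical v1.1 implementor serves the extended TradeDetail with the new fields
        Endpoint.publish("http://localhost:8080/servicename/v1.1", new TradeServiceV1_1());
    }
}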
This is why I prefer to have complete control over the XML-to-object mapping, so that I can separate the model from the XML interface. In your case, I would simply add the new fields to TradeDetail and consider them "optional" for backwards compatibility. This is what the example XML-to-object mapping for TradeDetail would look like in the framework my team uses, written for your interface:
// this would go into my client endpoint class
public TradeDetail getTradeDetail() {
    Element requestRoot = new Element("GetTradeDetail");
    Element responseRoot = invokeWebServiceAndReturnJdomElement(requestRoot);
    return mapTradeDetail(responseRoot);
}

// this would go into my client XO mapping class
public TradeDetail mapTradeDetail(Element root) {
    TradeDetail tradeDetail = new TradeDetail();
    tradeDetail.setField1(fetchString(root, "/GetTradeDetail/Field1"));
    tradeDetail.setField2(fetchInteger(root, "/GetTradeDetail/Field2"));
    tradeDetail.setField3(mapField3(root, "/GetTradeDetail/Field3"));
    tradeDetail.setField4(fetchString(root, "/GetTradeDetail/Field4"));
    return tradeDetail;
}
This kind of client would ignore new fields, and thus stay compatible with the new version of the protocol, until I add something like this to the end of that same mapping method in version 2:
if (fetchXPath(root, "/GetTradeDetail/Field5") != null) {
    // so we're talking to a server which speaks the new version of the protocol
    tradeDetail.setField5(fetchString(root, "/GetTradeDetail/Field5"));
}
The server would work with similar code, possibly checking the client version and mapping the extra fields only if the client supports the new version of the protocol.
In my view, a client should be written so that a few extra fields added to the protocol don't break it. I don't have the luxury of being down simply because an upstream provider added new functionality and didn't inform me about it. If the provider changes existing mandatory fields, of course, the client needs modification. This is why an upstream provider should version the protocol and support the old version for at least a couple of months.