It looks like the rules were not being added properly to the container; if you want to know what the solution was, just check this link.
QUESTIONS
When I add a new rule that uses a Java interface that has never been used before (used = referenced in any rule) to a container with already-running sessions, do I always have to reload the container?
Is there something I should have taken into account when using interfaces in Drools rules? (Maybe the problem is there.)
Since I have isolated my problem to what I think is a class type cache problem: is there a way to instruct Drools to reset that cache, so that I can avoid reloading the whole container? (In my case, reloading a container takes too long.)
SUMMARY
When I add a rule to a container that has active sessions, and the rule uses an interface that has never been used in any rule in that container before, the rule is never triggered. But if I reload the whole container, that same rule is triggered and works properly.
Just to clarify, I am using Drools version 7.73.0.Final.
Steps to reproduce
Below you will find the code for every step.
1. Drools KieContainer fully operational (it needs to have at least one session that has fired all rules).
2. Create a rule and add it to the container (the rule will use an interface that has never been used before in any rule).
3. Create a new KieSession in that container.
4. Fire all rules in that container.
5. Check whether the rule has been triggered (the rule does not trigger at all).
6. Reload the whole container.
7. Fire the rules again (the process of firing the rules is exactly the same as in the previous steps).
8. Check whether the rule has been triggered (the rule is triggered and works exactly as expected).
How the rule is added to the container
Resource resource = newRule.toResource();
kieFileSystem.write(resource);
IncrementalResults results = ((InternalKieBuilder) kieBuilder).incrementalBuild();
// ... process results (this does not affect the container) ...
kieContainer.updateToVersion(releaseId);
Summary of the rule itself
package ...
// some imports ...

rule "rule-name"
when
    $something : SomeInterface(property1 == "some value", property2 == "some other value")
then
    // some logic to write a message
end
What I mean when I say "reload the container"
KieServices kieServices = KieServices.Factory.get();
ReleaseId releaseId = kieServices.newReleaseId(--GROUP--, --ARTIFACT--, --VERSION--);
KieFileSystem kieFileSystem = kieServices.newKieFileSystem().generateAndWritePomXML(releaseId);
Set<Rule> rules = ... we gather all our rules ...
... add all those rules to kieFileSystem
KieBuilder kieBuilder = kieServices.newKieBuilder(kieFileSystem);
kieBuilder.buildAll();
// We also process the messages produced by this buildAll(), but that is not relevant here since there are none.
Possible Drools bug
I am not sure whether what I found is a bug, since I am new to Drools and all this could just be because I am doing something wrong. But I have been debugging the Drools code to be sure I was not missing something.
When the Drools agenda needs to decide whether a given rule should be triggered, it uses a class type cache. My problem is that the cache does not contain the same data when I just add the rule versus when I reload the whole container.
Differences between just adding a rule and reloading the container
Both examples come from the class ClassObjectTypeConf.
getObjectTypeNodes()
public ObjectTypeNode[] getObjectTypeNodes() {
    if ( this.objectTypeNodes == null ) {
        this.objectTypeNodes = getMatchingObjectTypes( this.cls );
    }
    return this.objectTypeNodes;
}
getMatchingObjectTypes(final Class<?> clazz)
private ObjectTypeNode[] getMatchingObjectTypes(final Class<?> clazz) {
    final List<ObjectTypeNode> cache = new ArrayList<ObjectTypeNode>();
    for ( ObjectTypeNode node : kBase.getRete().getObjectTypeNodes( this.entryPoint ).values() ) {
        if ( clazz == DroolsQuery.class ) {
            // for query objects only add direct matches
            if ( ((ClassObjectType) node.getObjectType()).getClassType() == clazz ) {
                cache.add( node );
            }
        } else if ( node.isAssignableFrom( new ClassObjectType( clazz ) ) ) {
            cache.add( node );
        }
    }
    Collections.sort( cache, OBJECT_TYPE_NODE_COMPARATOR );
    return cache.toArray( new ObjectTypeNode[cache.size()] );
}
The difference is that when I fire the rules in the fourth step, getMatchingObjectTypes(final Class clazz) is never invoked, because this.objectTypeNodes in getObjectTypeNodes() is an empty array instead of null.
But in the seventh step, at the same line of code, this.objectTypeNodes is null, so getMatchingObjectTypes(final Class clazz) is invoked and the cache is updated for the given entry point.
Thanks to this update the agenda is able to "know who SomeInterface is" and therefore trigger the rule.
Of course, if in the fourth step, instead of letting the flow proceed normally, we set this.objectTypeNodes to null in a debugger (basically forcing Drools to rebuild the cache for this entry point), everything works smoothly.
Final notes
I suspect this might be a Drools bug, but since I am new I want to ask before reporting it directly to Drools; it is quite a bit more probable that my code is wrong rather than theirs.
And thank you for the answers that pointed out that the question was poorly written; it was true, and I was not aware of it.
I am connected to a Gremlin Server (version 3.4.0) from my Java application using the gremlin-driver (version 3.4.0). I am using the following code to connect to the server from Java.
Cluster cluster = Cluster.build("localhost").port(8182).create();
Client client = cluster.connect();
GraphTraversalSource graphTraversalSource = AnonymousTraversalSource.traversal()
.withRemote(DriverRemoteConnection.using(client, "g"));
// To get the list of vertices
List<Vertex> vertices = graphTraversalSource.V().toList();
//To add a vertex
GraphTraversal newNode = graphTraversalSource.addV("Label 1");
//To add properties to the vertex
newNode.property("key1","value1");
newNode.property("key2",1002);
Now, I have a requirement that each vertex must have some predefined but dynamic properties like name, uuid, etc. These predefined properties may vary from vertex to vertex (based on the vertex label) and can change in the future; hence "dynamic". Because of this dynamism I cannot use a predefined Gremlin schema.
Now I think I have two options for implementing it.
Approach 1: Keep the validation logic in my Java application and send the mutation to Gremlin only if it is valid.
Approach 2: Implement a traversal strategy such as EventStrategy.
The first option is straightforward; no rocket science there. For the second option I am facing the following problems:
Issue 1: I cannot find any reference where remote and strategy are both used with the same GraphTraversalSource.
Issue 2: How do I stop the creation of a vertex if there is a validation failure?
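For reference, Approach 1 would look something like this in plain Java, validating before any Gremlin call is made; the label-to-required-keys map and the class name are illustrative assumptions of mine, not part of any Gremlin API:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical client-side validator: checks that a vertex's properties
// contain every key required for its label before the traversal is built.
public class VertexValidator {

    // Required property keys per vertex label (illustrative; in a real
    // application this map could be loaded from configuration so it
    // stays "dynamic" as described above).
    private static final Map<String, Set<String>> REQUIRED =
            Map.of("Label 1", Set.of("name", "uuid"));

    // Returns true when all required keys for the label are present,
    // so the caller can safely issue addV(label) and set the properties.
    public static boolean isValid(String label, Map<String, Object> props) {
        Set<String> required = REQUIRED.getOrDefault(label, Set.of());
        return props.keySet().containsAll(required);
    }
}
```

Only if isValid(...) returned true would the application go on to call graphTraversalSource.addV(label) and attach the properties.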
I tried the following to use remote and strategy with the same GraphTraversalSource, but it gives me a serialization error.
// Here GremlinMutationListener is a class which implements MutationListener
MutationListener mutationListener = new GremlinMutationListener();
EventStrategy eventStrategy = EventStrategy.build().addListener(mutationListener).create();
GraphTraversalSource graphTraversalSource = AnonymousTraversalSource.traversal()
.withRemote(DriverRemoteConnection.using(client, "g"))
.withStrategies(eventStrategy);
The error I get is:
Caused by: java.lang.IllegalArgumentException: Class is not registered: org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy
Note: To register this class use: kryo.register(org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy.class);
Also, in the MutationListener I cannot find a way to stop the execution and return the validation error other than throwing an exception, which might have a lot of overhead.
public class GremlinMutationListener implements MutationListener {
    private static final Logger LOGGER =
            LoggerFactory.getLogger(GremlinMutationListener.class);

    @Override
    public void vertexAdded(Vertex vertex) {
        LOGGER.info("SS: vertexAdded " + StringFactory.vertexString(vertex));
        // How can I return the validation error from here besides throwing an exception?
        // Is there some other interface I should implement?
    }

    // ...
}
Now the question is: which is the best approach here, 1 or 2, considering performance? And if it is 2, how do I resolve the two issues I am facing?
EventStrategy isn't a good way to do validation. You won't get notification of the event until after the change has already occurred in the underlying graph so a validation error would come too late.
I do think that a TraversalStrategy can be a neat way to implement validation though. I think that you would:
Implement your own ValidationTraversalStrategy that looks for any mutation steps and examines their contents for "bad data", throwing an exception if there is a problem. Since strategy application occurs before traversal iteration, you would stop the traversal before it makes any modifications to the underlying graph.
Configure "g" in Gremlin Server to have the strategy set up server-side, so that all connections to it get the benefit of that strategy automatically.
The downside here is that not all graphs support the ability to include custom traversal strategies so you need to be ok with reduced code portability by taking this approach.
Another approach which is more portable (and perhaps easier) is to build a Gremlin DSL. In this way you can implement your validation client-side right at the time the traversal is constructed. For example you could add a step like:
public default GraphTraversal<S, Vertex> person(String personId, String name) {
if (null == personId || personId.isEmpty()) throw new IllegalArgumentException("The personId must not be null or empty");
if (null == name || name.isEmpty()) throw new IllegalArgumentException("The name of the person must not be null or empty");
return coalesce(__.V().has(VERTEX_PERSON, KEY_PERSON_ID, personId),
__.addV(VERTEX_PERSON).property(KEY_PERSON_ID, personId)).
property(KEY_NAME, name);
}
That example is taken from the KillrVideo example repo - you can look there for more inspiration and also consider the related blog posts tied to that repo:
https://www.datastax.com/dev/blog/gremlin-dsls-in-java-with-dse-graph
https://academy.datastax.com/content/gremlin-dsls-python-dse-graph
https://academy.datastax.com/content/gremlin-dsls-net-dse-graph
Even though these blog posts use different programming languages, the content of each post is applicable to anyone using Gremlin from any language.
In short, when @CacheEvict is invoked on a method and the key for the entry is not found, GemFire throws an EntryNotFoundException.
Now in detail,
I have a class
class Person {
    String mobile;
    int dept;
    String name;
}
I have two cache regions defined, personRegion and personByDeptRegion, and the service is as below:
@Service
class PersonServiceImpl {

    @Cacheable(value = "personRegion")
    public Person findByMobile(String mobile) {
        return personRepository.findByMobile(mobile);
    }

    @Cacheable(value = "personByDeptRegion")
    public List<Person> findByDept(int deptCode) {
        return personRepository.findByDept(deptCode);
    }

    @Caching(
        evict = { @CacheEvict(value = "personByDeptRegion", key = "#p0.dept") },
        put = { @CachePut(value = "personRegion", key = "#p0.mobile") }
    )
    public Person updatePerson(Person p1) {
        return personRepository.save(p1);
    }
}
When updatePerson is called and there are no entries in personByDeptRegion, it throws an EntryNotFoundException for the key 1 (or whatever the dept code is). There is a very good chance that this method will be called before the @Cacheable methods, and I want to avoid this exception.
Is there any way we could tweak the GemFire behavior so it gracefully returns when the key does not exist in a given region?
Alternatively, I am also eager to know if there is a better implementation of the above scenario using Gemfire as cache.
Spring Data GemFire: 1.7.4
GemFire version: v8.2.1
Note: the above code is for representation purposes only; I have multiple services with the same issue in the actual project.
First, I commend you for using Spring's caching annotations on your application @Service components. All too often developers enable caching in their Repositories, which I think is bad form, especially if complex business rules (or even additional IO, e.g. calling a web service from a service component) are involved before or after the Repository interaction(s), particularly in cases where caching behavior should not be affected (or determined).
I also think your caching use case (updating one cache (personRegion) while invalidating another (personByDeptRegion) on a data store update) by following a @CachePut with a @CacheEvict seems reasonable to me. Though, I would point out that the seemingly intended use of the @Caching annotation is to combine multiple caching annotations of the same type (e.g. multiple @CacheEvict or multiple @CachePut), as explained in the core Spring Framework Reference Guide. Still, there is nothing preventing your intended use.
I created a similar test class here, modeled after your example above, to verify the problem. Indeed the jonDoeUpdateSuccessful test case fails (with the GemFire EntryNotFoundException, shown below) since no people in Department "R&D" were previously cached in the "DepartmentPeople" GemFire Region prior to the update, unlike the janeDoeUpdateSuccessful test case, which causes the cache to be populated before the update (even if the entry has no values, which is of no consequence).
com.gemstone.gemfire.cache.EntryNotFoundException: RESEARCH_DEVELOPMENT
at com.gemstone.gemfire.internal.cache.AbstractRegionMap.destroy(AbstractRegionMap.java:1435)
NOTE: My test uses GemFire as both a "cache provider" and a System of Record (SOR).
The problem really lies in SDG's use of Region.destroy(key) in the GemfireCache.evict(key) implementation rather than, and perhaps more appropriately, Region.remove(key).
GemfireCache.evict(key) has been implemented with Region.destroy(key) since inception. However, Region.remove(key) was not introduced until GemFire v5.0. Still, I can see no discernible difference between Region.destroy(key) and Region.remove(key) other than the EntryNotFoundException thrown by Region.destroy(key). Essentially, they both destroy the local entry (both key and value) as well as distribute the operation to other caches in the cluster (providing a non-LOCAL Scope is used).
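As a rough stdlib analogy for the semantics SGF-539 asks for, java.util.Map.remove is simply a no-op (returning null) for an absent key rather than throwing; the helper below is only an illustration of that contrast, not GemFire code:

```java
import java.util.Map;

// Illustrates the "gracefully return" eviction semantics wanted above:
// Map.remove never throws for a missing key, unlike Region.destroy(key).
public class GracefulRemoveDemo {

    // Removes a key and returns what was there (or null if absent).
    public static String evict(Map<String, String> region, String key) {
        return region.remove(key);
    }
}
```

Evicting an absent key here just yields null, which is the behavior the ticket asks GemfireCache.evict(key) to adopt via Region.remove(key).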
So, I have filed SGF-539 to change SDG to call Region.remove(key) in GemfireCache.evict(key) rather than Region.destroy(key).
As for a workaround, well, there are basically only two things you can do:
Restructure your code and your use of the @CacheEvict annotation, and/or...
Make use of the condition attribute on @CacheEvict.
It is unfortunate that a condition cannot be specified using a class type, something akin to a Spring Condition (in addition to SpEL), but that interface is intended for another purpose and the @CacheEvict condition attribute does not accept a class type.
At the moment, I don't have a good example of how this might work so I am moving forward on SGF-539.
You can follow this ticket for more details and progress.
Sorry for the inconvenience.
-John
I'm trying to implement a few tests with jBPM 6. I'm currently working on a simple hello world BPMN2 file, which is loaded correctly.
My understanding of the documentation is that persistence should be disabled by default: "By default, if you do not configure the process engine otherwise, process instances are not made persistent."
However, when I try to implement it, without doing anything special to enable persistence, I hit persistence-related problems every time I try to do anything.
javax.persistence.PersistenceException: No Persistence provider for EntityManager named org.jbpm.persistence.jpa
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:69)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:47)
at org.jbpm.runtime.manager.impl.jpa.EntityManagerFactoryManager.getOrCreate(EntityManagerFactoryManager.java:33)
at org.jbpm.runtime.manager.impl.DefaultRuntimeEnvironment.init(DefaultRuntimeEnvironment.java:73)
at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:400)
at org.jbpm.runtime.manager.impl.RuntimeEnvironmentBuilder.get(RuntimeEnvironmentBuilder.java:74)
I create my runtime environment the following way:
RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
.newDefaultInMemoryBuilder()
.persistence(false)
.addAsset(ResourceFactory.newClassPathResource("examples/helloworld.bpmn2.xml"), ResourceType.BPMN2)
.addAsset(ResourceFactory.newClassPathResource("examples/newBPMNProcess.bpmn"), ResourceType.BPMN2)
.get();
As my understanding is that persistence should be disabled by default, I don't see what I'm doing wrong. It could be linked to something included in my dependencies, but I have not found anything on that either.
Has anybody faced the same issue already, or does anyone have any advice?
Thanks
A RuntimeManager is a combination of a process engine and a human task service. The human task service needs persistence (to start the human tasks etc.), that's why it's still asking for a datasource, even if you configure the engine to not use persistence.
If you want to use an engine without our human task service, you don't need persistence at all, but I wouldn't use a RuntimeManager in that case, simply create a ksession from the kbase directly:
http://docs.jboss.org/jbpm/v6.1/userguide/jBPMCoreEngine.html#d0e1805
The InMemoryBuilder you use in your code is supposed to (as per the API documentation) not be persistent, but it actually adds a persistence manager to the environment, just with an InMemoryMapper instead of a JPAMapper, because of the way the init() method in DefaultRuntimeEnvironment is implemented:
public void init() {
    if (emf == null && getEnvironmentTemplate().get(EnvironmentName.CMD_SCOPED_ENTITY_MANAGER) == null) {
        emf = EntityManagerFactoryManager.get().getOrCreate("org.jbpm.persistence.jpa");
    }
    addToEnvironment(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
    if (this.mapper == null) {
        if (this.usePersistence) {
            this.mapper = new JPAMapper(emf);
        } else {
            this.mapper = new InMemoryMapper();
        }
    }
}
As you can see above, this still tries to getOrCreate() a persistence unit. (I have seen a better implementation somewhere that also checks the value of the persistence attribute, but the issue here is that DefaultRuntimeEnvironment doesn't do that.)
What you need to start with to get away without persistence is newEmptyBuilder():
RuntimeEnvironment env = RuntimeEnvironmentBuilder.Factory.get()
.newEmptyBuilder()
.knowledgeBase(KieServices.Factory.get().getKieClasspathContainer().getKieBase("my-knowledge-base"))
// ONLY REQUIRED FOR PER-REQUEST AND PER-INSTANCE STRATEGY
//.addEnvironmentEntry("IS_JTA_TRANSACTION", false)
.persistence(false)
.get();
Do mind, though, that this will only work for Singleton runtime managers; PerProcessInstance and PerRequest expect to be able to suspend a running transaction if necessary, which is only possible if you have an entity manager to persist state.
For testing with those two strategies, also use the addEnvironmentEntry() shown above.
I want to create a log line such as System.out.println("RuleName : " + ruleName); in the IBM ODM rule engine.
Here is what I did:
1. Created a BOM virtual method, which is static and takes a parameter named instance, whose type is ilog.rules.engine.IlrRuleInstance:
instance ilog.rules.engine.IlrRuleInstance
2. Created the BOM-to-XOM mapping as follows:
System.out.println("Log icinde");
String ruleName = "";
if (instance != null)
    ruleName = instance.getRuleName();
else
    System.out.println("instance null!");
if (ruleName != null) {
    System.out.println("RuleName: " + ruleName);
}
return;
3. Called it in the rule flow as an initial or final action:
utility.logla(ruleInstance);
But when I execute the flow my log does not work: instance is null, and ruleName is also null.
How should I configure and set up the logging feature using the BOM? Could you give me an example?
Thanks.
You could use Decision Warehouse, which is part of the execution server, to trace each execution. This can include which rules were fired during the execution, depending on the filters you apply.
Here is the documentation on DW and how to set it up: http://pic.dhe.ibm.com/infocenter/dmanager/v8r5/topic/com.ibm.wodm.dserver.rules.res.managing/topics/con_res_dw_overview.html
It is because you are calling getRuleName() outside of the context of a rule instance within your rule flow, from what I can see of your description.
If you had a BOM method called from within the action of a rule, you could then call IlrRuleInstance.getRuleName() and it would return the name of the rule (I have done such a thing myself before).
What are you trying to achieve with this logging?
There is a much better way of logging from rules. In your virtual method, pass the name of the rule itself rather than ruleInstance. You can also verbalize your method and use the same verbalization in each rule.
For example:
From BAL:
Log the name of this rule ;
From IRL:
Log(ilog.rules.brl.IlrNameUtil.getBusinessIdentifier(?instance.ruleName));
Another approach is to use the BAL provided above ("the name of this rule") within the rule flow (orchestration) for your rule application or module.
Of course this solution should be used only for debugging or troubleshooting scenarios.
Hope this helps.
I'm trying to find a solution for configuring a server-side Java application such that different users of the system interact with the system as if it were configured differently (Multitenancy). For example, when my application services a request from user1, I wish my application to respond in Klingon, but for all other users I want it to reply in English. (I've picked a deliberately absurd example, to avoid specifics: the important thing is that I want the app to behave differently for different requests).
Ideally there's a generic solution (i.e. one that allows me to add user-specific overrides to any part of my config without having to change code).
I've had a look at Apache Commons Configuration which has built in support for multitenant configuration, but as far as I can tell this is done by combining some base config with some set of overrides. This means that I'd have a config specifying:
application.lang=english
and, say a user1.properties override file:
application.lang=klingon
Unfortunately, it's much easier for our support team if they can see all related configuration in one place, with overrides specified inline somehow, rather than having separate files for base vs. overrides.
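To illustrate what I mean by inline overrides, here is a sketch; the user.&lt;name&gt;. key prefix is just a convention I made up for the example, not a Commons Configuration feature:

```java
import java.util.Properties;

// Resolves a property for a user by first trying a user-prefixed key
// and falling back to the base key, so base values and their overrides
// can live side by side in a single properties file.
public class InlineOverrideConfig {
    private final Properties props;

    public InlineOverrideConfig(Properties props) {
        this.props = props;
    }

    public String lookup(String user, String key) {
        // e.g. "user.user1.application.lang" overrides "application.lang"
        String override = props.getProperty("user." + user + "." + key);
        return override != null ? override : props.getProperty(key);
    }
}
```

A single file would then contain both application.lang=english and user.user1.application.lang=klingon next to each other, which is what support wants to see.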
I think some combination of Commons Config's multitenancy support plus something like a Velocity template describing the conditional elements within the underlying config is roughly what I'm aiming for: Commons Config for the ease of interacting with my configuration, and Velocity for very expressively describing any overrides in a single configuration, e.g.:
#if ($user=="user1")
application.lang=klingon
#else
application.lang=english
#end
What solutions are people using for this kind of problem?
Is it acceptable for you to code each server operation like the following?
void op1(String username, ...)
{
String userScope = getConfigurationScopeForUser(username);
String language = cfg.lookupString(userScope, "language");
int fontSize = cfg.lookupInt(userScope, "font_size");
... // business logic expressed in terms of language and fontSize
}
(The above pseudocode assumes the name of a user is passed as a parameter, but you might pass it via another mechanism, for example, thread-local storage.)
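If you go the thread-local route, a minimal holder could look like the following; this class is illustrative, not part of Config4*:

```java
// A minimal per-thread holder for the current username, so server
// operations can call CurrentUser.get() instead of threading a
// username parameter through every method signature.
public final class CurrentUser {
    private static final ThreadLocal<String> USER = new ThreadLocal<>();

    private CurrentUser() {}

    public static void set(String username) { USER.set(username); }

    public static String get() { return USER.get(); }

    // Always clear at the end of a request to avoid leaking the value
    // to the next request served by a pooled thread.
    public static void clear() { USER.remove(); }
}
```

A request handler would call CurrentUser.set(...) on entry and CurrentUser.clear() in a finally block.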
If the above is acceptable, then Config4* could satisfy your requirements. Using Config4*, the getConfigurationScopeForUser() method used in the above pseudocode can be implemented as follows (this assumes cfg is a Configuration object that has been previously initialized by parsing a configuration file):
String getConfigurationScopeForUser(String username)
{
if (cfg.type("user", username) == Configuration.CFG_SCOPE) {
return Configuration.mergeNames("user", username);
} else {
return "user.default";
}
}
Here is a sample configuration file to work with the above. Most users get their configuration from the "user.default" scope, but Mary and John have their own overrides of some of those default values:
user.default {
language = "English";
font_size = "12";
# ... many other configuration settings
}
user.John {
#copyFrom "user.default";
language = "Klingon"; # override a default value
}
user.Mary {
#copyFrom "user.default";
font_size = "18"; # override a default value
}
If the above sounds like it might meet your needs, then I suggest you read Chapters 2 and 3 of the "Getting Started Guide" to get a good-enough understanding of the Config4* syntax and API to be able to confirm/refute the suitability of Config4* for your needs. You can find that documentation on the Config4* website.
Disclaimer: I am the maintainer of Config4*.
Edit: I am providing more details in response to comments by bacar.
I have not put Config4* in a Maven repository. However, it is trivial to build Config4* with its bundled Ant build file, because Config4* does not have any dependencies on third-party libraries.
Another approach for using Config4* in a server application (prompted by a comment by bacar) is as follows...
Implement each server operation like in the following pseudo-code:
void op1(String username, ...)
{
Configuration cfg = getConfigurationForUser(username);
String language = cfg.lookupString("settings", "language");
int fontSize = cfg.lookupInt("settings", "font_size");
... // business logic expressed in terms of language and fontSize
}
The getConfigurationForUser() method used above can be implemented as shown in the following pseudocode:
HashMap<String,Configuration> map = new HashMap<String,Configuration>();
synchronized Configuration getConfigurationForUser(String username)
{
Configuration cfg = map.get(username);
if (cfg == null) {
// Create a config object tailored for the user & add to the map
cfg = Configuration.create();
cfg.insertString("", "user", username); // in global scope
cfg.parse("/path/to/file.cfg");
map.put(username, cfg);
}
return cfg;
}
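The caching pattern in that pseudocode, minus the Config4* API itself, can be written with stdlib types; here a Map&lt;String,String&gt; stands in for the parsed Configuration object, and the loader function is an assumption representing "parse the file with the user variable injected":

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Caches one parsed configuration per user; computeIfAbsent provides
// the synchronization that the pseudocode above achieves with a
// synchronized method.
public class PerUserConfigCache {
    private final Map<String, Map<String, String>> cache = new ConcurrentHashMap<>();
    private final Function<String, Map<String, String>> loader;

    // 'loader' stands in for the expensive per-user parse, done once.
    public PerUserConfigCache(Function<String, Map<String, String>> loader) {
        this.loader = loader;
    }

    public Map<String, String> getConfigurationForUser(String username) {
        return cache.computeIfAbsent(username, loader);
    }
}
```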
Here is a sample configuration file to work with the above.
user ?= ""; // will be set via insertString()
settings {
#if (user #in ["John", "Sam", "Jane"]) {
language = "Klingon";
} #else {
language = "English";
}
#if (user == "Mary") {
font_size = "12";
} #else {
font_size = "10";
}
... # many other configuration settings
}
The main comments I have on the two approaches are as follows:
The first approach (one Configuration object that contains lots of variables and scopes) is likely to use slightly less memory than the second approach (many Configuration objects, each with a small number of variables). But my guess is that the memory usage of either approach will be measured in KB or tens of KB, and this will be insignificant compared to the overall memory footprint of your server application.
I prefer the first approach because a single Configuration object is initialized just once, and then it is accessed via read-only lookup()-style operations. This means you don't have to worry about synchronizing access to the Configuration object, even if your server application is multi-threaded. In contrast, the second approach requires you to synchronize access to the HashMap if your server application is multi-threaded.
The overhead of a lookup()-style operation is in the order of, say, nanoseconds or microseconds, while the overhead of parsing a configuration file is in the order of, say, milliseconds or tens of milliseconds (depending on the size of the file). The first approach performs that relatively expensive parsing of a configuration file only once, and that is done in the initialization of the application. In contrast, the second approach performs that relatively expensive parsing of a configuration file "N" times (once for each of "N" users), and that repeated expense occurs while the server is processing requests from clients. That performance hit may or may not be an issue for your application.
I think ease of use is more important than ease of implementation. So, if you feel that the second approach will make it easier to maintain the configuration file, then I suggest you use that approach.
In the second approach, you may wonder why I put most of the variables in a named scope (settings) rather than in the global scope along with the "injected" user variable. I did that for a reason that is outside the scope of your question: separating the "injected" variables from the application-visible variables makes it easier to perform schema validation on the application-visible variables.
Normally user profiles go into a DB, and the user must open a session with a login. The user name may go into the HTTP session (cookies), and on every request the server gets the user name and may read the profile from the DB. Sure, the "DB" can be a set of config files like joe.properties, jim.properties, admin.properties, etc.