I am new to Kie Workbench. I am using Java Rest calls to fire rules in kie workbench. Please find the code below:
public class RuleEngineConnector {

    @Value("${brms.execution.server.url}")
    private String SERVER_URL;
    @Value("${brms.execution.server.username}")
    private String USER;
    @Value("${brms.execution.server.password}")
    private String PASSWORD;
    @Value("${brms.containerId}")
    private String CONTAINER_ID;

    private static final MarshallingFormat FORMAT = MarshallingFormat.JAXB;

    public String getAdapter(AdapterRuleDO adapterRule) {
        KieServicesConfiguration cfg = KieServicesFactory.newRestConfiguration(SERVER_URL, USER, PASSWORD);
        cfg.setMarshallingFormat(FORMAT);
        Set<Class<?>> extraClasses = new HashSet<Class<?>>();
        extraClasses.add(AdapterRuleDO.class);
        cfg.addJaxbClasses(extraClasses);
        KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(cfg);
        ServiceResponse<ExecutionResults> response = getRulesResponse(adapterRule, kieServicesClient);
        List<AdapterRuleDO> listOfObjects = (List<AdapterRuleDO>) response.getResult().getValue("get-adapter"); // to be changed
        return listOfObjects.get(0).getAdapterName();
    }

    private ServiceResponse<ExecutionResults> getRulesResponse(AdapterRuleDO adapterRule, KieServicesClient kieServicesClient) {
        List<Command<?>> commands = new ArrayList<Command<?>>();
        KieCommands commandsFactory = KieServices.Factory.get().getCommands();
        commands.add(commandsFactory.newInsert(adapterRule, "adapterRule"));
        commands.add(commandsFactory.newFireAllRules());
        commands.add(commandsFactory.newGetObjects("get-adapter"));
        BatchExecutionCommand batchExecution = commandsFactory.newBatchExecution(commands);
        RuleServicesClient ruleServicesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
        ServiceResponse<ExecutionResults> response = ruleServicesClient.executeCommandsWithResults(CONTAINER_ID, batchExecution);
        return response;
    }
}
The rules fire correctly and the values are properly updated in the AdapterRuleDO class after the rules run. The problem is that when I call this method a second time, I receive two AdapterRuleDO objects, and each subsequent call returns one additional object. It seems the objects in the session are stored and not cleared between calls. How can I ensure that every call returns only one AdapterRuleDO object?
Please note that I have only one decision table where this fact is used.
After searching different blogs and user forums, I found a solution that worked fine. The above problem can be resolved with the following steps:
1) Use "adapterRule" to fetch the result instead of "get-adapter".
2) In KIE Workbench, search for the deployment descriptor and make the following change:
<runtime-strategy>PER_REQUEST</runtime-strategy>
By default, runtime strategy is SINGLETON.
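For reference, the runtime strategy lives in the container's deployment descriptor (kie-deployment-descriptor.xml). A minimal sketch of the relevant fragment is below; the surrounding elements the Workbench generates are omitted, so compare against your own descriptor:

```xml
<deployment-descriptor>
    <!-- other generated elements (persistence units, marshalling strategies, ...) omitted -->
    <!-- PER_REQUEST gives every REST call its own session, so facts
         inserted by earlier calls are not retained -->
    <runtime-strategy>PER_REQUEST</runtime-strategy>
</deployment-descriptor>
```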
Hope this makes sense and helps somebody out.
If you are interested in stateless evaluation, try configuring your session as stateless. This will create a new session for each request.
You should be able to do this in the KIE Workbench.
Hope it helps,
In place of the line below:
BatchExecutionCommand batchExecution = commandsFactory.newBatchExecution(commands);
use this line (where ksessionName is the name of the KIE session defined in your container):
BatchExecutionCommand batchExecution = commandsFactory.newBatchExecution(commands, ksessionName);
I have implemented my project using Spring-Data-Rest. I am trying to update an existing record in a table, but when I send only a few of the fields (present in the Entity class) in my request, Spring-Data-Rest treats the missing fields as null/empty values. When I then look at the database, the fields I did not send in the request have been overwritten with null/empty values. So my understanding is that even though I am not sending these values, Spring Data Rest sees them on the Entity class and writes them as null/empty. My question is: is there a way to exclude the fields I am not sending in the request when doing an UPDATE? I appreciate any help.
Update: I was using the PUT method. After reading the comments, I changed it to PATCH and it's working perfectly now. I appreciate all the help.
Before updating, load the object from the database using the JPA method findById; call the returned object target.
Then copy all fields that are not null/empty from the object you want to update onto target, and finally save target.
Here is a code example:
public void update(Object objectWantToUpdate) {
    // Note: with Spring Data JPA 2.x, findById returns an Optional,
    // so you may need findById(...).orElseThrow() here.
    Object target = repository.findById(objectWantToUpdate.getId());
    copyNonNullProperties(objectWantToUpdate, target);
    repository.save(target);
}

public void copyNonNullProperties(Object source, Object target) {
    BeanUtils.copyProperties(source, target, getNullPropertyNames(source));
}

public String[] getNullPropertyNames(Object source) {
    final BeanWrapper src = new BeanWrapperImpl(source);
    PropertyDescriptor[] propDesList = src.getPropertyDescriptors();
    Set<String> emptyNames = new HashSet<String>();
    for (PropertyDescriptor propDesc : propDesList) {
        Object srcValue = src.getPropertyValue(propDesc.getName());
        if (srcValue == null) {
            emptyNames.add(propDesc.getName());
        }
    }
    String[] result = new String[emptyNames.size()];
    return emptyNames.toArray(result);
}
You can write a custom update query which updates only particular fields:

@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery("update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
As some of the comments pointed out, using PATCH instead of PUT resolved the issue. I appreciate all the inputs. The following is from the Spring Data REST documentation:
"The PUT method replaces the state of the target resource with the supplied request body.
The PATCH method is similar to the PUT method but partially updates the resources state."
https://docs.spring.io/spring-data/rest/docs/current/reference/html/#customizing-sdr.hiding-repository-crud-methods
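To make the difference concrete, here is a hedged sketch of issuing a partial update with the JDK 11+ HttpClient; the endpoint and JSON payload are hypothetical, and java.net.http has no patch() shortcut, so the verb is set explicitly:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PatchRequestDemo {

    // Builds a PATCH request carrying only the fields to change;
    // fields omitted from the body keep their stored values.
    public static HttpRequest buildPatch() {
        String partialJson = "{\"name\":\"new name\"}"; // only one field sent
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/items/1")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(partialJson))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(buildPatch().method()); // PATCH
    }
}
```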
Also, I like @Tran Quoc Vu's answer, but I am not implementing it for now since I don't have to use a custom controller. If there is some logic involved when updating the entity (e.g. validation), I am in favor of using the custom controller.
I am new to Kie Workbench and Execution Server. I am using Java Rest calls to run rules in kie workbench. Please find the code below:
private String kieServerUrl;
private String kieServerContainerId;
private String kieServerUsername;
private String kieServerPassword;
private RuleServicesClient ruleClient;

private static final String INPUT_OUT_IDENTIFIER = "Input";
private static final String SESSION_OBJECTS = "SessionObjects";
private static final String RUN_ALL_RULES = "RunAllRules";

public void init() {
    final KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(kieServerUrl, kieServerUsername, kieServerPassword);
    config.setMarshallingFormat(MarshallingFormat.XSTREAM);
    KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config);
    ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class);
}
@Override
public Output process(final Input input) {
    Output output = null;
    logger.debug("Running rules ..");
    BatchExecutionCommandImpl executionCommand = new BatchExecutionCommandImpl();
    executionCommand.getCommands().add(new InsertObjectCommand(input, INPUT_OUT_IDENTIFIER));
    executionCommand.getCommands().add(new FireAllRulesCommand(RUN_ALL_RULES));
    executionCommand.getCommands().add(new GetObjectsCommand(null, SESSION_OBJECTS));
    logger.debug("Sending commands to the server");
    ServiceResponse<ExecutionResults> response = ruleClient.executeCommandsWithResults(kieServerContainerId, executionCommand);
    if (response.getType().equals(ServiceResponse.ResponseType.SUCCESS)) {
        logger.debug("Commands executed with success! Response: ");
        final ExecutionResultImpl result = (ExecutionResultImpl) response.getResult();
        ArrayList<Object> values = (ArrayList<Object>) result.getValue(SESSION_OBJECTS);
    } else {
        logger.error("Error executing rules. Message: {}", response.getMsg());
    }
    logger.debug("...finished running rules.");
    return output;
}
The rules are correctly executed and the Output objects are instantiated during rule execution. The problem is that when I call this method a second time, I receive two Output objects, and each subsequent call returns one additional object. It seems the objects in the session are stored and not cleared between calls. How can I ensure that every call returns only one Output object?
Since you are new to Drools, you may not know that Drools has two session types: stateless and stateful. Verify that the KIE Execution Server session configuration is stateless, as a stateful session keeps the facts from prior requests.
Verify it is stateless by its settings in the project editor:
Open Project Editor -> Knowledge bases and sessions
Review the existing one or create one with:
Add Knowledge Sessions -> and set the State to Stateless
OK, so I have an interesting problem. I am using java/maven/spring-boot/cassandra... and I am trying to create a dynamic instantiation of the Mapper setup they use.
For example:
// Users.java
import com.datastax.driver.mapping.annotations.Table;

@Table(keyspace = "mykeyspace", name = "users")
public class Users {
    @PartitionKey
    public UUID id;
    // ...
}
Now, in order to use this I would have to explicitly say ...
Users user = (DB).mapper(Users.class);
obviously replacing (DB) with my db class.
Which is a great model, but I am running into the problem of code repetition. My Cassandra database has 2 keyspaces, both keyspaces have the exact same tables with the exact same columns in the tables, (this is not my choice, this is an absolute must have according to my company). So when I need to access one or the other based on a form submission it becomes a mess of duplicated code, example:
// myWebController.java
import ...;

@RestController
public class MyRestController {

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        if (Objects.equals(client, "first_client_name")) {
            // do all the things to get first keyspace objects like....
            FirstClientUsers users = (db).Mapper(FirstClientUsers.class);
            // ...
        } else if (Objects.equals(client, "second_client_name")) {
            SecondClientUsers users = (db).Mapper(SecondClientUsers.class);
            // ....
        }
        return "";
    }
}
I have been trying to use methods like...
Class cls = Class.forName(STRING_INPUT_VARIABLE_HERE);
and that works OK for base classes, but it breaks down with the Accessor classes: Accessors have to be interfaces, and a raw Class obtained via Class.forName doesn't give me a typed interface to work with.
I am trying to find another way to make this dynamic so I don't have to duplicate the code for every possible client. Each client will have its own keyspace in Cassandra, with exactly the same tables as all the others.
I cannot change the database model, this is a must according to the company.
With PHP this is extremely simple since it doesn't care about typecasting as much, I can easily do...
function getData($name) {
    $className = $name . 'Accessor';
    $class = new $className();
}
and poof I have a dynamic class, but the problem I am running into is the Type specification where I have to explicitly say...
FirstClientUsers users = new FirstClientUsers();
//or even
FirstClientUsers users = Class.forName("FirstClientUsers");
I hope this is making sense, I can't imagine that I am the first person to have this problem, but I can't find any solutions online. So I am really hoping that someone knows how I can get this accomplished without duplicating the exact same logic for every single keyspace we have. It makes the code not maintainable and unnecessarily long.
Thank you in advance for any help you can offer.
Do not specify the keyspace in your model classes, and instead, use the so-called "session per keyspace" pattern.
Your model class would look like this (note that the keyspace is left undefined):
#Table(name = "users")
public class Users {
#PartitionKey
public UUID id;
//...
}
Your initialization code would have something like this:
Map<String, Mapper<Users>> mappers = new ConcurrentHashMap<String, Mapper<Users>>();
Cluster cluster = ...;
Session firstClientSession = cluster.connect("keyspace_first_client");
Session secondClientSession = cluster.connect("keyspace_second_client");
MappingManager firstClientManager = new MappingManager(firstClientSession);
MappingManager secondClientManager = new MappingManager(secondClientSession);
mappers.put("first_client", firstClientManager.mapper(Users.class));
mappers.put("second_client", secondClientManager.mapper(Users.class));
// etc. for all clients
You would then store the mappers object and make it available through dependency injection to other components in your application.
Finally, your REST service would look like this:
import ...

@RestController
public class MyRestController {

    @javax.inject.Inject
    private Map<String, Mapper<Users>> mappers;

    @RequestMapping(value = "/orders", method = RequestMethod.POST)
    public String getOrders(...) {
        Mapper<Users> usersMapper = getUsersMapperForClient(client);
        // process the request with the right client's mapper
    }

    private Mapper<Users> getUsersMapperForClient(String client) {
        if (mappers.containsKey(client)) {
            return mappers.get(client);
        }
        throw new RuntimeException("Unknown client: " + client);
    }
}
Note how the mappers object is injected.
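Stripped of the driver types, the pattern above is just a keyed lookup; a self-contained sketch (all names hypothetical), parameterized so it works for any per-client handle such as a Mapper:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic per-client registry: in the real application T would be
// Mapper<Users>; String stands in here to keep the sketch runnable.
public class ClientRegistry<T> {

    private final Map<String, T> byClient = new ConcurrentHashMap<>();

    public void register(String client, T handle) {
        byClient.put(client, handle);
    }

    // Fail fast on unknown clients instead of returning null.
    public T forClient(String client) {
        T handle = byClient.get(client);
        if (handle == null) {
            throw new IllegalArgumentException("Unknown client: " + client);
        }
        return handle;
    }

    public static void main(String[] args) {
        ClientRegistry<String> registry = new ClientRegistry<>();
        registry.register("first_client", "first keyspace mapper");
        System.out.println(registry.forClient("first_client")); // first keyspace mapper
    }
}
```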
Small nit: I would name your class User in the singular instead of Users (in the plural).
I want to write a method which has to return all the root nodes a certain user is able to write/read.
Currently I am doing the following:
public List<String> getAllowedRootPaths(String username) throws RepositoryException {
    SecuritySupport securitySupport = Components.getComponent(SecuritySupport.class);
    UserManager userManager = securitySupport.getUserManager();
    myUser = userManager.getUser(username);
    Session session = MgnlContext.getJCRSession("website");
    List<String> results = new ArrayList<String>();
    if (getRoles().contains("rootPublisher")) {
        // check user access and add to array
    }
    return results;
}

public Collection<String> getRoles() {
    return magnoliaUser.getAllRoles();
}
My old method was to use the
HierarchyManager hm = MgnlContext.getHierarchyManager("website");
and test
hm.isGranted(node.getPath(), Permission.READ)
but since that's deprecated I'm currently looking for another solution. I am aware that Session has a check for access rights, but it seems that only works for a user session.
Maybe someone has an idea how to do that without manually grabbing the roles and checking the int values.
Greetings,
Hellfiend
As Ducaz035 suggested, you can call MgnlContext.getAccessManager("website") and then isGranted() on the AccessManager.
Alternatively, you can just call PermissionUtil.isGranted(node, permission) and let that method locate the proper access manager itself.
AccessManager#isGranted(node.getPath(), Permission.READ)
Seems like the one you are looking for.
Hope that helps,
Cheers
EDIT: replaced 'retrieve.name == "name1"' by 'retrieve.name.equals("name1")'.
EDIT2: Added #BeforeClass and #AfterClass (credit: http://digitalsanctum.com/2012/06/01/play-framework-2-tutorial-ebean-orm/).
I'm writing JUnit tests for a play web app and for some odd reason I can't seem to modify the database entries. Here's the simplest example I could come up with that illustrates the problem:
@BeforeClass
public static void setup() throws IOException {
    app = Helpers.fakeApplication(Helpers.inMemoryDatabase());
    Helpers.start(app);
    server = Ebean.getServer("default");
    ServerConfig config = new ServerConfig();
    config.setDebugSql(true);
    ddl = new DdlGenerator((SpiEbeanServer) server, new H2Platform(), config);
    // drop
    String dropScript = ddl.generateDropDdl();
    ddl.runScript(false, dropScript);
    // create
    String createScript = ddl.generateCreateDdl();
    ddl.runScript(false, createScript);
}

@AfterClass
public static void stopApp() {
    // drop
    String dropScript = ddl.generateDropDdl();
    ddl.runScript(false, dropScript);
    Helpers.stop(app);
}

@Test
public void UserModify() {
    // create user (User extends Model)
    User user = new User();
    user.id = (long) 1;
    user.name = "name1";
    user.save();
    // modify
    user.name = "name2";
    user.update();
    user.save();
    // look-up
    User retrieve = User.find.byId((long) 1);
    assertFalse("Old name", retrieve.name.equals("name1"));
}
Needless to say this should pass, but it doesn't... I know you can use "update()" to change database fields, because someone else on the project says he uses it like that and it works.
Play Framework documentation: http://www.playframework.com/documentation/2.1.1/Home
Any ideas why this test fails?
This is happening because of a problem in Play Framework.
Play doesn't enhance code in "test" folder, only in "app" folder.
Because of that, getters and setters are not generated, and Ebean relies on setters to detect that an object is dirty and to support lazy loading. So in your case, Ebean doesn't know that the property was updated on the object.
As a simple solution, you can create the getters and setters yourself. Also, this seems to be fixed already and will probably be included in the next Play release: https://github.com/playframework/Play20/blob/master/framework/test/integrationtest-java/test/models/EbeanEnhancementTest.java
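A hedged sketch of that workaround, with field names taken from the test above (in a real Play app the class would extend Play's Model base class, omitted here to keep the sketch self-contained):

```java
// Model with hand-written accessors: without Play's bytecode enhancement,
// Ebean only notices a change when it goes through a setter.
public class User {

    public Long id;
    public String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public static void main(String[] args) {
        User user = new User();
        user.setName("name2"); // call the setter instead of assigning the field
        System.out.println(user.getName()); // name2
    }
}
```

In the test, prefer user.setName("name2") over user.name = "name2" so the change is tracked.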
Here's a simple Play project with User model and working test: https://github.com/pharod/so-play2-issue1
See more details on getters and setters generation by Play here, in "Caveats" section near bottom of page: http://www.playframework.com/documentation/2.1.1/JavaEbean
As others have stated, you should use .equals() for string equality.
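A minimal illustration of why == is the wrong check here:

```java
public class StringEqualityDemo {
    public static void main(String[] args) {
        String a = "name1";
        String b = new String("name1"); // equal contents, distinct object
        System.out.println(a == b);      // false: compares object identity
        System.out.println(a.equals(b)); // true: compares character contents
    }
}
```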
But the main problem is that to run this kind of test (one that accesses the database) you need a running application.
In Play, this can be done by running a fake application during the test. Check out this guide:
http://www.playframework.com/documentation/2.0/JavaTest