Java service vs rules engine implementation - Java

I am confused about choosing between a Java service and IBM Rules Designer. I am aware that we should use a rules engine to reduce development effort and whenever the business requirements are subject to change frequently. But I have requirements which could be developed using either Java or a rules engine. Considering performance, maintenance cost, reusability and other long-term factors, which is the better option to implement? What are the ideal cases for using each of them?

I believe the question is a bit subjective.
For me, if certain "changeable logic" is related to routine work (e.g. settings that are required when introducing a new user to the system, or a new product to be sold, etc.), I will consider using a rule engine (or another "soft coding" technique, as mentioned in the comment on the OP), because we should not have to redeploy the application just to do such a routine job.
However, if some logic is tied to a requirement and a change to it is not triggered by routine work, I am inclined to write it in code.

Related

Migrating a large-scale application from JavaEE to Akka

Suppose I have a very large-scale server-side web application written in JavaEE (and related technologies classically combined with it), and I have decided to migrate it completely to Akka (and related technologies usually combined with it, including moving the code to Scala). The reasons for the migration decision are not important: suppose I have to do it, and that's all there is to it.
My question is: What would be the strategy to follow here, aiming to optimize the migration time and the scalability of the resulting application?
If the question lacks details, I can provide some, although I would prefer to hear strategies without being very specific.
This is an open-ended question, but let me try to give you some ideas. Having worked with both J2EE and Play2/Akka/Spray.io (Scala) based systems, I can offer the following high-level, general guidance for the migration.
Partition your system: Partition your current system based on functionality and rank the partitions according to their criticality to the business, your stakeholders and clients. Partitioning can be done along different dimensions (architectural components at runtime, business features, development teams/modules, etc.). You also need to find the dependencies between these partitions.
Identify candidate partition: Once you have ranked partitions, it’s useful to pick the smallest possible partition that overlaps in as many dimensions as possible and has the least amount of coupling. Usually this is the case if your initial architecture is modular.
Implement a prototype: Take the candidate partition and create a prototype that provides the same functional capability. Now evaluate and compare the new capability against the old in terms of various quality attributes (performance, modifiability, extensibility etc). The prototype will also give you an estimate of technical risk, challenges, and effort.
Create a new architecture: I think at this point you should have enough input to create the first version of your new architecture. Also identify how the capabilities of the other partitions will be implemented in this new architecture. Selecting the most complex partition and trying to map it to this new architecture is a really good exercise and can massively reduce your technical risk in the future.
Field the prototype: Try to field the prototype to a small subset of users/stakeholders and get feedback. Decoupling the prototype using REST/pub-sub interfaces is a good idea.
Plan for migration: Create a plan and schedule for the rest of your system.
I can be more specific if you provide more targeted questions.

Complex decision system modified by user - Java

I have a problem and I would like to ask for alternatives to my existing technologies. The feature to be programmed is complex and will be given to users, so the front end should be as simple as possible. I need a Java-based technology.
What I need to do:
I have a basic structure with a lot of data. The data is mostly well typed (Integers, Dates, Booleans, etc.), so the values can be compared easily.
I need to model decisions against batches of requirements which can be defined and altered by many sources, such as internal business processes and government regulations.
So I am thinking of giving the users a scripting ability (most of them have university degrees, so some complexity is OK).
Let's see a simplified example.
Let A be a structure with the following.
A.budget - Integer
A.bankRelatedDebt - Integer
A.privateRelatedDebt - Integer
A.deadLine - Date
A.hasPermissionFromGovernment - Boolean
A.hasProblematicContracts - Boolean
I need to define rules that decide whether the rule stands or falls, so I need a boolean back.
Rule1: The budget is over 1 million EUR
Rule2: Has no problematic contracts or has permission from the government
Rule3: The deadline is not within the next month
Rule4: The overall debt (bank + private) doesn't exceed 100,000 EUR
These rules could be hardcoded in other cases, but here the system has to be super-dynamic and driven by the given data.
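For illustration only, here is a minimal sketch of what these rules would look like as hardcoded Java predicates (the class and field names simply mirror the example above, and java.time.LocalDate stands in for the Date field):

import java.time.LocalDate;
import java.util.List;
import java.util.function.Predicate;

class A {
    int budget;
    int bankRelatedDebt;
    int privateRelatedDebt;
    LocalDate deadLine;
    boolean hasPermissionFromGovernment;
    boolean hasProblematicContracts;
}

class HardcodedRules {
    // Rule1: the budget is over 1 million EUR
    static final Predicate<A> RULE_1 = a -> a.budget > 1_000_000;
    // Rule2: has no problematic contracts or has permission from the government
    static final Predicate<A> RULE_2 = a -> !a.hasProblematicContracts || a.hasPermissionFromGovernment;
    // Rule3: the deadline is more than a month away
    static final Predicate<A> RULE_3 = a -> a.deadLine.isAfter(LocalDate.now().plusMonths(1));
    // Rule4: the overall debt (bank + private) does not exceed 100,000 EUR
    static final Predicate<A> RULE_4 = a -> a.bankRelatedDebt + a.privateRelatedDebt <= 100_000;

    // A record passes only if every rule holds.
    static boolean passesAll(A a) {
        return List.of(RULE_1, RULE_2, RULE_3, RULE_4).stream().allMatch(r -> r.test(a));
    }
}

The point of the question, of course, is that exactly these conditions must stay editable by users rather than being compiled in.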
We have the options of Drools and ANTLR; I would appreciate alternatives if you can mention any. Mentioning technologies to avoid is helpful and welcome as well, so I can steer clear of them.
For what it's worth, I would love to build such an expert system too, so bear with my ramblings. First, some negative points, as you asked what to avoid.
There are many pitfalls.
The "programming" is done by the users: there is probably no version control system to restore from, and there may be no staging system, so work happens directly in production. Think of extending a common library rule just to test something. No unit tests?
Then there is user acceptance. In particular there is a competitor, Excel programming, which you have to supersede: generating reports with human-selectable text blocks, diagrams.
Your numbered rules still lack some life: the system could assist by providing categories to select from, e.g. Rule1 - a restriction on a monetary resource. It would be nice to propose "Would you also like to restrict on limited resources? (a) Rule1, (b) ...".
Also what is the product? What are the advantages? What are the goals?
Reports, calculation scenarios (what-ifs, tolerances calculated through).
I certainly would first write a technical document along the above lines, and then search for the tools - as you seem to be doing. Drools is too basic. ANTLR for a DSL I find risky.
Tools
Data mining seems to be the keyword you are searching for.
The JVM programming language Scala (not easily learned) is productive for DSLs and parsing.
Many functional languages are a bit easier and offer scripting too (via the Java Scripting API).
What about a web project, maybe using Jetty as an embedded web server? Then you can apply HTML and JavaScript. HTML5?
A rich client platform (Eclipse or NetBeans) requires experience for rapid development. For nice graphics, maybe JavaFX (still early days).
Develop a DSL for your needs using either Groovy or Scala.
We use CodeMirror to provide syntax highlighting in a web page.
Works great for us with Groovy.
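A rough sketch of how a user-written Groovy rule could be evaluated from Java using Groovy's embedding API (GroovyShell); the variable name "a" and the record object passed in are assumptions for illustration:

import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class GroovyRuleRunner {

    // Evaluates one user-written rule (a Groovy expression that should
    // return a boolean) against a record exposed to the script as "a".
    public static boolean passes(String ruleScript, Object record) {
        Binding binding = new Binding();
        binding.setVariable("a", record);
        Object result = new GroovyShell(binding).evaluate(ruleScript);
        return Boolean.TRUE.equals(result);
    }
}

A rule stored by the user could then be something like "!a.hasProblematicContracts || a.hasPermissionFromGovernment"; CodeMirror only takes care of editing and highlighting that text in the browser.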
I would vote against Drools because I have had terrible experiences with it, but some people like it.
I would propose a language already integrated into Java: JavaScript. Why?
It is simple enough and has nice access to Java beans: instead of budget.getDeadLine() you can use budget.deadLine
you have tons of places to check for information
you can add simple functions to make it more easy to use
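As a rough sketch of that idea, using the standard Java Scripting API (JSR 223); the engine lookup and the variable name "a" are assumptions, and the JavaScript engine bundled with the JDK varies by version (Rhino up to Java 7, Nashorn in Java 8-14, none in newer JDKs):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class JsRuleRunner {

    private final ScriptEngine engine =
            new ScriptEngineManager().getEngineByName("JavaScript");

    // Evaluates a user-written JavaScript expression against a record
    // exposed to the script as the variable "a".
    public boolean passes(String ruleScript, Object record) throws ScriptException {
        engine.put("a", record);
        Object result = engine.eval(ruleScript);
        return Boolean.TRUE.equals(result);
    }
}

With bean-style property access, a user can write a.deadLine instead of a.getDeadLine() in such an expression.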
But if you choose JavaScript, Python, Drools, or ANTLR, remember:
Users do not have version control systems like SVN/Git, so it is up to you to make that happen.
Give them a tool (a webpage or whatever) that automatically save every version of every script they wrote.
Give them a way to test what they wrote without damaging anything.
Give them tools to rollback whatever changes they made.
Make as many static tests as possible once they commit the code, before executing it.
Syntax highlighting will make them happier.
And remember: they will use the tool in ways you don't expect, and you will end up writing (or rewriting) most of the scripts. No university degree guarantees they understand what they are doing (not even a CS degree!).
So if you can make the system less dynamic, it would be to your benefit.
It's like the strategy pattern: all the different rules are different algorithms applied to the context (A), and the algorithm can be selected at runtime.
Add a filter chain design pattern to that, so that you can apply several algorithms (rules) at the same time.
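A minimal sketch of that combination (hypothetical names; each rule is a strategy, and the chain applies whichever rules were selected at runtime):

import java.util.List;

// Strategy: each rule encapsulates one decision over a context object of type T.
interface Rule<T> {
    boolean applies(T context);
}

// Filter chain: runs a configurable list of rules against the same context.
class RuleChain<T> {
    private final List<Rule<T>> rules;

    RuleChain(List<Rule<T>> rules) {
        this.rules = rules;
    }

    boolean allPass(T context) {
        return rules.stream().allMatch(rule -> rule.applies(context));
    }
}

Which Rule implementations go into the chain can then be decided at runtime, for example from configuration or user selection.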
Roolie is a very simple Java rule engine that might be helpful for you. As Roolie says:
Roolie is an extremely simple Java Rule Engine (Non-JSR 94) that uses rules you create in Java. Simply create your basic rules, implement the single "passes" method for each, then chain them together in an XML file to create more complex rules.
If you had the records in a database, you could select the matching ones with SQL syntax.
For example:
SELECT * FROM data
WHERE budget > 100000
AND privatCredits < 50000

Authoring JBoss Drools rules using UI

We're using JBoss Drools to externalise some particularly prone to change business logic in some services we are building.
Where these rules can be created and maintained by our developers this is working very well and we have a good level of integration and integrated workflow.
We are looking to expand its use to a new service that requires a very high level of customisation. Essentially an "expert user" needs to be able to set up rules of two different kinds:
"standard" rules - these are almost implicit rules that we know are common requirements and which we could build UI for to set e.g. only allowing certain operations to take place between two dates etc.
"custom" rules - completely off the wall requests that whilst we could try and anticipate we'd rather just let people write and test their own rules against :)
My question is: is it possible (and indeed is there anything out there as an example) to use Drools for both 1 & 2? Basically, to have a fixed UI in the application author Drools rules effectively AND to have a "free text" rule editor embedded in our UI?
Any suggestions appreciated!
You have a few options.
For (2), you can simply embed the rules editor from Guvnor in your web application. All editors in Guvnor are embeddable components, so you can choose what you want to use and what you don't. The problem that I see in this approach is that you might be giving too much power to the users :). In other words, the ability to write any rules based on the model requires disciplines that are typically only known to technical users. For instance, writing tests to validate the rules. Some business users have enough technical knowledge for that, but I would say it is probably the exception, not the rule.
What I prefer and recommend most of the time is to develop your own domain specific GUI, that uses/exposes concepts and terminology that are familiar to the business users and a way to write rules that "makes sense" for their specific job. Sometimes, they will not even know they are writing "rules", but they will. Behind the scenes, your application takes the input from the Domain Specific GUI and generates the rules dynamically, either using the drools API or a string based template. This solves your (1) requirement, but might be powerful enough to solve (2) as well.
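For the string-based template route, the generation step can be as plain as filling a DRL skeleton with the values collected from the GUI. A schematic sketch (the fact type Application, its fields and the rule consequence are all hypothetical):

// Turns structured GUI input (field, operator, threshold) into Drools rule text.
public class RuleTemplating {

    public static String comparisonRule(String ruleName, String field, String operator, long threshold) {
        return "rule \"" + ruleName + "\"\n"
             + "when\n"
             + "    $a : Application(" + field + " " + operator + " " + threshold + ")\n"
             + "then\n"
             + "    $a.setApproved(false);\n"
             + "end\n";
    }

    public static void main(String[] args) {
        // e.g. the GUI captured: flag applications whose budget is below 1,000,000
        System.out.println(comparisonRule("Minimum budget", "budget", "<", 1_000_000));
    }
}

The generated text can then be compiled and loaded like any hand-written rule, so the "standard" and "custom" paths end up in the same rule base.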

Real life experience with the Axon Framework [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
As part of researching CQRS for use with a project, I ran across the Axon Framework, and I was wondering if anyone has any real life experience with it. Just to be clear, I'm asking about the framework, not CQRS as an architectural pattern.
My project already uses Spring and Spring Integration, which fits nicely with Axon's own requirements, but before I dedicate a lot of time to it, I would like to know if anyone has some first-hand experience. In particular I'm interested in possible pitfalls that are not immediately apparent from the documentation.
"The framework relies heavily on eventsourcing, which means that all state changes are written to the data store as events."
This is completely untrue; it does not rely heavily on event sourcing. One of the implementations for storing the aggregate in this framework uses event sourcing, but you can just as easily use the provided classes for a standard relational model.
It is just better with event-sourcing.
"So you have a historical reference of all your data. This is nice but makes changing your domain after you've gone in production a very daunting proposition especially if you sold the customer on the system's 'strong auditability'."
I don't think it is a lot easier with a standard relational model that only stores the current state.
"The framework encourages denormalizing your data, to the point that some have suggested having a table per view in the application. This makes your application extremely difficult to maintain, especially when the original developers are gone."
This is unrelated to the framework; it relates to the architectural pattern in use (CQRS).
And sorry to mention it, but having one denormalizer per view is a good idea, as each view stays a simple object.
So maintenance is easy, because the SQL queries and insertions are also easy.
So this argument is not very strong.
How about a view built on a model of 1000 tables, with inner joins everywhere and complex SQL queries?
Again, CQRS helps because, basically, the view data is just a SELECT * from the table which corresponds to the view.
"If somehow you made a mistake in one of the eventhandlers, your only option is to 'replay' the eventlog, which depending on the size of your data can take a very long time. The tooling for this however is non-existent."
I agree on the point that currently there is a lack of tooling to replay events and that this can take a long time. However, it is theoretically possible to replay only a portion of the events rather than the entire content of the event store.
"Replaying can have side effects, so developers become scared of doing it."
"Replaying events has side effects" - that's untrue. For me, a side effect means modifying the state of the system. In an event-sourced CQRS application, the state is stored in the event store. Replaying the events does not modify the event store.
You can have side effects on the query side of the model, yes. But you don't care if you have made a mistake, because you are still able to correct it and replay the events once again.
"It's extremely easy to have developers mess up using this framework. If they don't store changes to domain objects in events, next time you replay your events you are in for a surprise."
Well, if you misuse and misunderstand the architecture, the concepts, etc., then OK, I agree with you. But perhaps the problem here is not the framework.
"Should you store deltas? Absolute values? If you don't keep tabs on your developers you are bound to end up with both and you will be f***ed."
I could say that about every system; it is not directly related to the framework itself. It's like saying, "Java is crap because you can mess up everything if someone codes a bad implementation of the hashCode and equals methods."
And as for the last part of your comment, I have already seen hello-world-style samples with the Spring framework.
Of course it is completely useless in a simple example.
Be careful in your comment to distinguish between the concepts (CQRS + event sourcing) and the framework. Please make that distinction.
Since you have stated that you want to use CQRS for your project (and I assume that the JVM is your target platform) I think Axon Framework is an excellent choice.
I have built a fairly complex trading platform on it (no, the trading sample is not complex) and I have not seen any obvious flaws of the framework.
Since I use EventSourcing, the test fixtures made it very easy to write BDD style "given, when, then" tests. This lets you treat an aggregate as a black box and concentrate on checking that the correct set of events come out when you put in a certain command.
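Roughly, such a test reads like the sketch below. The fixture API follows Axon's test module as it looked in the 1.x/2.x era and may differ in later versions; the aggregate, command and event classes are made-up placeholders:

import org.axonframework.test.FixtureConfiguration;
import org.axonframework.test.Fixtures;
import org.junit.Before;
import org.junit.Test;

public class TradeOrderTest {

    private FixtureConfiguration<TradeOrder> fixture;

    @Before
    public void setUp() {
        // The aggregate under test is treated as a black box by the fixture.
        fixture = Fixtures.newGivenWhenThenFixture(TradeOrder.class);
    }

    @Test
    public void confirmingAPlacedOrderEmitsOrderConfirmed() {
        fixture.given(new OrderPlacedEvent("order-1", 100))
               .when(new ConfirmOrderCommand("order-1"))
               .expectEvents(new OrderConfirmedEvent("order-1"));
    }
}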
About pitfalls: before jumping in, make sure
That you have the concepts of CQRS figured out.
Make a list (paper, whiteboard, whatever) of all your aggregates, command handlers, event handlers, sagas, commands and events. This is the hard part of building your system, figuring out what it should do and how. After this, the reference manual should show you how to wire it all together with Axon.
Some non Axon specific points:
Being able to rebuild the view store from events is a concept of EventSourcing, and not something that is exclusive to Axon, but I found it pretty easy to create a service that will send me all events from an aggregate type, aggregate id or a certain event type.
Being able to build a new reporting component one year after the project is launched and instantly get reports on data from the time of the project launch and onwards is awesome.
I've been using AxonFramework for more than one year on a complex project developed for a big bank.
The requirements were demanding, customer's expectations were high, and release times narrow.
I chose Axon Framework because, at project kick-off, it was the most complete and best documented implementation of CQRS available in Java: well designed, and easy to integrate, test and extend.
After more than one year I think that these considerations are still valid and current.
Another consideration guided my choice: I wanted the commitment to such a difficult project to become a training opportunity for me and other members of the team.
We started to develop with AxonFramework version 1.0 and moved to version 1.4 as newer versions were released.
Our team experience with CQRS and the implementation provided by the AxonFramework was absolutely positive.
It provided us with a consistent and uniform way to develop each feature, which guided us and made us feel at ease.
Without it some features of the application would have been much more complicated to develop.
I am referring mainly to the various long-running processes that need to be handled and to the related compensation logic, but also to the many pieces of business logic that were necessary here and there, which fitted nicely and stayed decoupled in the event-driven architecture promoted by CQRS.
Our choice was to be conservative in the write model, so we preferred a JPA based persistence instead of the event sourced one.
The query model is made up of views. We have tried to make sure that each view contains all the required data from a single page using intermediate views when necessary.
Anyhow, we developed the write model as if we were applying event sourcing, so we took care to modify the state of aggregates exclusively through events. When the customer asked for a cloning function for a very complex aggregate, it was just a matter of replaying the source events (with UUIDs translated) onto a brand-new instance - the downside in this case was the event upcasting (but this functionality was greatly improved in the imminent 2.0 version).
As in any project, during development we found a lot of bugs, mainly in our code, but also in components supposed to be mature and stable, such as the application server, the IoC container, the cache, the workflow engine and some of the other libraries easily found in any large J2EE application.
Like any other human product, Axon Framework was not immune to bugs either, but surprisingly for a young and niche project, they were few, not critical, and quickly resolved by new releases.
The kind and immediate support provided by the author on the mailing list is another invaluable feature and helped me a lot when I was in trouble.
The application was released in production a year ago and is currently maintained and under active development of new features.
The customer is satisfied and asks for more.
When to use Axon Framework is more a matter of when to use CQRS. For an answer, it's worth going back to the official documentation: http://www.axonframework.org/docs/1.4/introduction.html#d4e51
In our case it was definitely worth it.
The OP specifically asks about the pitfalls of the Axon Framework rather than of CQRS. This makes the question difficult to answer, as Axon started as a fairly faithful implementation of the famous book by Eric Evans (Domain-Driven Design).
The main advantage is that it does exactly what it says on the tin: it handles the hard parts of a CQRS based design for you: aggregates, sagas, event sourcing, command handlers, event handlers, BASE consistency etc. When you follow the best practices, you end up with a highly responsive and horizontally scalable application. If you use it with event sourcing, your data is completely auditable, and at least in theory, you can determine the state your application had at any given point in time. Tooling to do this is not provided; you will have to roll your own.
The main developer of the framework is very approachable and extremely knowledgeable on the subject of high-performance and scalable computing in Java. He tends to answer every question on the mailing list within a few hours. This is both an advantage and the major pitfall: at this time (early 2014), the Axon Framework depends heavily on one person. The rest of the pitfalls I would like to mention are probably more the result of event sourcing than of CQRS or Axon. (As of 2018 the framework is supported by the company AxonIQ.)
Design your data model very carefully upfront. Though it is easy to add to, making fundamental changes to your data model can be very difficult. If you make a fundamental mistake in the data model, your application may not perform well, or may even fail to work at all. For example, if you choose a tree-shaped data model with one long-lived aggregate root at the top, this aggregate may grow very large as it accrues more and more events over time, and it may take a long time to load and store. I don't know what will happen if this goes on until an instance of the aggregate no longer fits in RAM, but I imagine it would be bad. Don't do it that way.
Another pitfall (event sourcing related) is that, after a number of revisions, it can become increasingly difficult to reason about the state of an aggregate, as you sometimes have to keep in mind not only what the code does today, but also what it did in the past. This definitely makes replaying (a portion of) the event store to rebuild a view table a non trivial task.
Fixing data errors can be more difficult than with a 'traditional' design. Rather than a simple SQL statement, you will often need to issue a command to change the state of your application. If the error in your data was caused by a faulty event handler, you can usually just fix the bug, clear the snapshots and let the events for the aggregate be replayed. If your bug caused spurious events to be applied, it can be much more trouble to fix: the faulty events will stay in the event store, and you may have to apply some new ones to restore your data to the correct state, or change the code to ignore or fix their behaviour.
While the framework itself is written decent enough, using it in a real world project has been nothing short of a nightmare and the choice of this framework imo was a major contributing factor to this project's failing.
The framework relies heavily on eventsourcing, which means that all state changes are written to the data store as events. So you have a historical reference of all your data. This is nice but makes changing your domain after you've gone in production a very daunting proposition especially if you sold the customer on the system's "strong auditability"
You cannot have ops guys make ad-hoc changes to the database
The framework encourages denormalizing your data, to the point that some have suggested having a table per view in the application. This makes your application extremely difficult to maintain, especially when the original developers are gone
If somehow you made a mistake in one of the event handlers, your only option is to "replay" the event log, which depending on the size of your data can take a very long time. The tooling for this, however, is non-existent. Replaying can have side effects, so developers become scared of doing it.
It's extremely easy to have developers mess up using this framework. If they don't store changes to domain objects in events, the next time you replay your events you are in for a surprise. Should you store deltas? Absolute values? If you don't keep tabs on your developers you are bound to end up with both and you will be f***ed.
There is practically no adoption of this framework, so googling for answers will not do you any good
Even though the framework does not yet support distribution, it's written with it in mind, and the APIs are a pain to work with because of it. Firing off a command is async by default, and if you want to check whether an exception was raised while executing the command, say a duplicate-username exception, you need to pass a listener to your command handler, which is a future; then you wait for the future's result to come in, handle any checked exceptions, InterruptedException etc., and then you can grab the exception that was thrown from the future. Of course, which exceptions a command can raise is not apparent from the API, defeating the purpose of checked exceptions.
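From memory, the dance being described looks roughly like this (class names follow the Axon 1.x/2.x API the answer complains about and may not be exact; the command and exception types are hypothetical):

import java.util.concurrent.ExecutionException;
import org.axonframework.commandhandling.CommandBus;
import org.axonframework.commandhandling.GenericCommandMessage;
import org.axonframework.commandhandling.callbacks.FutureCallback;

public class RegistrationService {

    private final CommandBus commandBus;

    public RegistrationService(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    public void register(String username) throws InterruptedException {
        FutureCallback<Object> callback = new FutureCallback<>();
        commandBus.dispatch(
                GenericCommandMessage.asCommandMessage(new CreateUserCommand(username)),
                callback);
        try {
            callback.get(); // block until the command handler has run
        } catch (ExecutionException e) {
            // The actual failure is buried in the future; nothing in the API
            // tells you which exceptions a given command can raise.
            if (e.getCause() instanceof DuplicateUsernameException) {
                throw new IllegalArgumentException("username already taken", e.getCause());
            }
            throw new RuntimeException(e.getCause());
        }
    }
}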
Check out some of the example apps. I somehow need a unit of work listener to create an addressbook application? My goodness...
I am currently with a team working on an online casino platform, launching our brand Casumo this summer. The domain and platform are built using Axon Framework and so far it has served us solidly.
A lot of time has been saved by not having to build all the infrastructure needed for command handling, event routing, event sourcing, snapshotting etc., and the APIs are really nice to work with. The one bug we found in the framework so far was fixed in a release 12 hours later, and Allard is always quick to take suggestions on new features and to discuss ways to leverage the framework to fulfil your needs.

Implement the business services in PL/SQL or Java? Pros/cons?

I work for an enterprise that will create a web-service stack architecture (probably REST based), and I'm the technical leader involved. This architecture will be created on the Java platform, but I have a problem with some team members: they are from the Oracle old school (i.e. they built the legacy system using PL/SQL, and in their heads the business logic should live only in the database, with just a thin Java layer calling it). I have some arguments about this, but I would like to know your arguments for and against.
In favor of Java (in my opinion):
Scalability
Monitoring
Object Oriented Language
Sync/Async process
Rich domain
Testability
You may find the following articles interesting and helpful:
A Working Definition of Business Logic, with Implications for CRUD Code
Business Logic: From Working Definition to Rigorous Definition
Theorems Regarding Business Logic
Can You Really Create A Business Logic Layer?
I worked on such a project using MS SQL rather than Oracle. It was not a pleasant experience. The trouble is that T-SQL is not a very modern language and so we weren't as productive as we could have been and there was more code duplication than there would have been otherwise.
There's an argument to be made that the productivity of the developer is more important than the language, so if these guys are just that good, so what. But you're not going to find a lot of young developers who will want to work that way.
It has to be a judicious decision; either one can be more suitable depending on the use case.
For a simple example
If you have a business rule which, say, requires data from a number of tables and, based on the data retrieved, decides to perform a final database operation (insert or update), then in my view a PL/SQL procedure is the place to do it, since this will save network time and bandwidth and will be a touch faster.
It's a bit of a tricky question, but a very thought-provoking one.
Starting from Oracle 9i, Oracle has complete support for Java stored procedures, i.e. you can store Java classes inside the database server and execute them as you execute PL/SQL procedures.
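Schematically, that option looks like the sketch below: a plain Java class loaded into the database (for example with the loadjava utility) and exposed through a PL/SQL call specification. The class, method and call-spec names here are made up for illustration:

// Hypothetical class to be loaded into the Oracle database and published
// through a call specification along these lines:
//
//   CREATE OR REPLACE FUNCTION total_debt(bank NUMBER, priv NUMBER) RETURN NUMBER
//   AS LANGUAGE JAVA
//   NAME 'DebtRules.totalDebt(int, int) return int';
//
public class DebtRules {

    // Must be public static to be callable as an Oracle Java stored procedure.
    public static int totalDebt(int bankRelatedDebt, int privateRelatedDebt) {
        return bankRelatedDebt + privateRelatedDebt;
    }
}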
It is always a good option to choose the best-suited worker rather than sticking to sentiment! As you have stated, Java has presence and support for versatile concepts across various domains.
Here I can think of the following options for an optimal solution acceptable to everybody:
Write the required procedures in Java and store them in the server.
Write at least the major requirements in Java, which really reduces the complexity of the task compared to PL/SQL, and call them from PL/SQL.
Without polluting either environment, maintain separate layers, which is the second-best option.
