There are plenty of resources that outline how URLs should be organized for RESTful APIs, but for the web in general there is little to be found.
How can I structure the URLs of the web pages so that they are
Sensible from the perspective of a user browsing the web
Sensible from a separation-of-concerns perspective in a Spring framework controller
To give this some context, let's assume there are groups that contain elements, and there are use cases to create, view, edit and delete both.
You may have had trouble finding information about this since it's a question that touches on Information Architecture (IA) and SEO, in addition to application design. If your application or site is available on the internet (rather than an internal private network) then you have to optimize multiple, sometimes conflicting, constraints:
Making the URLs sensible and understandable to users
Making your scheme manageable and scalable
Understanding which portions of your app or site need to be indexable by search engines
Maintaining good application design (think SOLID)
Probably several others ...
In general, I would suggest that you start with identifying your constraints, and consider "what makes sense to users" as a high priority one. Then try to work in other constraints from there. Since you mention separation of concerns, you have a good sense of what some of your design constraints are. Ultimately, it's up to you (and maybe your business SMEs) to determine which constraints need to be rigid, and which others can be relaxed.
After some consideration I've arrived at this:
/groups/create
/group/{gid}/
/group/{gid}/edit
/group/{gid}/delete
/group/{gid}/elements/create
/group/{gid}/element/{eid}/
/group/{gid}/element/{eid}/edit
/group/{gid}/element/{eid}/delete
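Since the question mentions Spring, here is a rough sketch of how the first scheme could map onto a controller. The class, method and view names are hypothetical, and the Spring mappings are shown as comments so the snippet stays framework-free:

```java
// Hypothetical controller mirroring the /group/{gid}/... scheme above.
// In a real Spring MVC app each method would carry the annotation shown
// in its comment; here the methods return plain view names so the
// URL-to-handler shape is clear.
public class GroupController {

    // @GetMapping("/groups/create") - show the group creation form
    public String createForm() { return "group/create"; }

    // @GetMapping("/group/{gid}/") - view a single group
    public String view(long gid) { return "group/view"; }

    // @GetMapping("/group/{gid}/edit") - edit form for a group
    public String editForm(long gid) { return "group/edit"; }

    // @PostMapping("/group/{gid}/delete") - delete, then go back to the list
    public String delete(long gid) { return "redirect:/groups"; }
}
```

A separate ElementController could then own the /group/{gid}/element/... paths, keeping group and element concerns in separate classes.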
The only drawback is that groups are created against /groups/create rather than /group/create, because otherwise "create" would become an illegal group name.
Another variant is to attach the id at the very end, but the URLs quickly become clumsy:
/groups/create
/groups/view/{id}
/groups/edit/{id}
/groups/delete/{id}
/groups/view/{id}/element/create
/groups/view/{id}/element/view/{eid}
/groups/view/{id}/element/edit/{eid}
/groups/view/{id}/element/delete/{eid}
We have a requirement such that users need to be presented with different facts based on some constraints.
Similar Hypothetical Example
If a user belongs to Australia and earns more than $10k, then show the XYZ view of data/facts
If a user belongs to the USA and earns less than $5k, then show the ABC view of data/facts
...
...
Now we can either,
keep this mapping in the user model and have these business rules in the code
or
we can pull these rules out into JSON or a DSL, where we can simply change a rule without having to deploy the code for every single change.
We don't know how frequently these rules will change.
I have read arguments for and against the idea of a custom mini rule engine.
Arguments for:
Every small change will not require a deployment
All the rules related to this specific functionality are in one place, (probably) making it easier to get an overview
Arguments against:
(Martin Fowler article) It will become difficult to reason about your code
Anemic Domain Model anti-pattern
Over time it will be difficult to manage these rules
Conflicts between rules, or the chance of some facts not matching any rule
In general it depends on your use case. Looking at the examples you have provided, it looks like an excellent application of a rules engine. By externalising that logic you'll make your application more declarative and easier to maintain, and you'll deliver value to your users faster.
In your case this is the key statement:
We don't know how frequently these rules will change.
That suggests that you really need to externalize that logic, either to a rules engine or to a DSL of your choice. If you don't, you'll be signing up to deploy new code every time those rules change.
Your examples as written are pretty classic examples of production rules:
https://martinfowler.com/dslCatalog/productionRule.html
https://en.wikipedia.org/wiki/Production_system_(computer_science)
There are many good open source and commercial rules engines available. I'd consider those before creating a custom DSL. The logic you've written matches very well with those systems.
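As a minimal sketch of the production-rule shape (not any particular engine's API; the countries, thresholds and view names come from the hypothetical example above):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Minimal production-rule sketch: each rule pairs a condition on the user
// facts with the name of the view to show. Countries, thresholds and view
// names are illustrative only.
public class RuleEngine {

    public record User(String country, int income) {}

    public record Rule(Predicate<User> condition, String view) {}

    private final List<Rule> rules;

    public RuleEngine(List<Rule> rules) { this.rules = rules; }

    // First matching rule wins; a real engine would also handle rule
    // conflicts and the "no rule matched" case explicitly.
    public Optional<String> viewFor(User user) {
        return rules.stream()
                .filter(r -> r.condition().test(user))
                .map(Rule::view)
                .findFirst();
    }

    public static RuleEngine sample() {
        return new RuleEngine(List.of(
                new Rule(u -> u.country().equals("AU") && u.income() > 10_000, "XYZ"),
                new Rule(u -> u.country().equals("US") && u.income() < 5_000, "ABC")));
    }
}
```

The point of the shape is that the List<Rule> could just as well be deserialized from JSON or a DSL file, so rule changes don't require a redeploy.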
Some of the technical downsides of rules engines and DSLs are:
Rules systems can be difficult to test
You have to carefully design the inputs and outputs to your rules
You'll need to understand, document, and integrate another tool or custom DSL parser
Building rules is a different mental model than some developers are used to and it can take time to do it well
There is nothing wrong with having business logic abstracted, and declarative rules seem appropriate in your scenario. One should be able to extract a human-readable report showing the business logic, i.e. the rules applied.
So the first stage would be the requirements: what you would like as a product.
This can become extra code/modeling without impinging on the existing code base.
Do not start in the wild: do not search for a solution library while the problem and solution are still unclear. Often a "solution framework" is adopted and the problem modeled after it, resulting in a lot of boilerplate code that doesn't exactly match what you actually want.
At this stage you could make a simple prototype of a do-it-yourself rule engine, even a fast and rough one. Then look for existing rule engines and make prototypes with them as well. Preferably not inside your application, but using test-driven development in unit tests.
A bad idea is to immediately leave rule definition and maintenance to the end admin users. Such functionality has implications: no test staging, working directly on the production system, versioning, and the technical qualifications of end users (such as seeing the big integrative picture).
As a last remark: this might have been something for the Software Engineering forum.
For a RESTful service, can the noun be omitted and discarded?
Instead of /service/customers/555/orders/111
Can / should I expose: /service/555/111 ?
Is the first option mandatory or are there several options and this is debatable?
It's totally up to you. I think the nice thing about having the nouns is that they help you see from the URL what the service is trying to achieve.
Also take into account that under customer you can have something like the below, and from the URL you can distinguish between an order and a quote for a customer:
/service/customers/555/quote/111
/service/customers/555/order/111
One of the core aspects of REST is that URLs should be treated as opaque entities. A client should never create a URL, just use URLs that have been supplied by the server. Only the server hosting the entities needs to know something about the URL structure.
So use the URL scheme that makes most sense to you when designing the service.
Regarding the options you mentioned:
Omitting the nouns makes it hard to extend your service if e.g. you want to add products, receipts or other entities.
Having the orders below the customers surprises me (but once again, that's up to you designing the service). I'd expect something like /service/customers/555 and /service/orders/1234567.
Anyway, the RESTful customer document returned from the service should contain links to his or her orders and vice versa (plus all other relevant relationships).
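To make the "links supplied by the server" point concrete, a customer representation could carry its own links, so clients follow them instead of constructing paths. This is only a sketch; the field and relation names are invented, not a standard:

```java
import java.util.List;
import java.util.Map;

// Sketch of a hypermedia-style customer representation: the client never
// builds URLs itself, it follows the links the server hands out.
public class CustomerRepresentation {

    public static Map<String, Object> customer(long id, List<Long> orderIds) {
        return Map.of(
                "id", id,
                "_links", Map.of(
                        // the server is free to change these paths later
                        "self", "/service/customers/" + id,
                        "orders", orderIds.stream()
                                .map(o -> "/service/orders/" + o)
                                .toList()));
    }
}
```

A client that only follows _links never needs to know whether orders live under the customer or at the top level, which is exactly what makes the URL scheme an implementation detail of the server.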
To a certain degree, the "rules" for naming RESTful endpoints should follow the same naming rules that "Clean Code", for example, teaches.
Meaning: names should mean something. And they should say what they mean, and mean what they say.
Coming from there: it probably depends on the nature of that service. If you can only "serve" customers, then you could omit the customer part, because that doesn't add (much) meaningful information. But what if you later want to serve other kinds of clients?
In other words: we can't tell you what is right for your application - because that depends on the requirements / goals of your environment.
And worth noting: do not only consider today's requirements. Step back and consider the "future growth paths" that seem most likely. Then make sure that the API you are defining today will work nicely with the future extensions that are most likely to happen.
Instead of /service/customers/555/orders/111
Can / should I expose: /service/555/111 ?
The question is broad, but since you use REST paths to define nested information, the path has to be as explicit as required.
If providing long paths in the URL is a problem for you, as an alternative provide the contextual information in the body of the request.
I think that the short way /service/555/111 lacks consistency.
Suppose that /service/555/111 corresponds to invoking the service for customer 555 and order 111.
You know that. But the client of the API doesn't necessarily know what the path means.
Besides, suppose now that you wish to invoke the same service for customer 555, but for the year 2018. How would you do that?
Like this? /service/555/2018 would be error-prone, as you would have to add a parameter to convey the meaning of the last path value, while /service/555/years/2018 would make your API definition inconsistent.
Clarity, simplicity and consistency matters.
In my view the usage of nouns is not necessary, nor mandated by any standard, but their usage helps make your endpoint more specific and simpler to understand.
So if a nomenclature makes your URL more human-readable or easier to understand, that is the kind of URL I usually prefer to create, keeping things simple. It also helps consumers of your service, who can partially understand the functionality of a service from its name alone.
Please follow https://restfulapi.net/resource-naming/ for the best practices.
For a RESTful service, can the noun be omitted and discarded?
Yes. REST doesn't care what spelling you use for your resource identifiers.
URL shorteners work just fine.
Choices of spelling are dictated by local convention, they are much like variables in that sense.
Ideally, the spellings are independent of the underlying domain and data models, so that you can change the models without changing the API. Jim Webber expressed the idea this way:
The web is not your domain, it's a document management system. All the HTTP verbs apply to the document management domain. URIs do NOT map onto domain objects - that violates encapsulation. Work (ex: issuing commands to the domain model) is a side effect of managing resources. In other words, the resources are part of the anti-corruption layer. You should expect to have many many more resources in your integration domain than you do business objects in your business domain.
Resources adapt your domain model for the web
That said, if you are expecting clients to discover URIs in your documentation (rather than by reading them out of well-specified hypermedia responses), then it's going to be a good idea to use URI spellings that follow a simple conceptual model.
I am planning on introducing Java rules and am currently evaluating Drools to externalize (physically and logically) the business rules from the application.
Since these business rules would be changed very often by the business, I would want the business to make the necessary changes to the rules via a GUI.
I have Googled integrating a Java web app + Drools + Guvnor and I'm not getting anywhere.
My Questions:
Does Drools support a lightweight GUI for editing the rules?
Is Drools Guvnor a lightweight GUI, or is there a way to slim it down?
How easy is it to integrate an application with Guvnor to read the rules?
Any other suggestions on a generally simple implementation integrating a Java application + Drools + Guvnor would be great.
Any pointers to tutorials would also do it for me.
I'm in the process of doing something akin to what you want to do.
Since these business rules would be changed very often by the business, I would want the business to make the necessary changes to the rules via a GUI.
WARNING WARNING WARNING!!! It's a common misconception that as long as there's a GUI, non-programmers can use it. That's... not the conclusion I've come to. It helps, but the hard part of programming isn't writing code, it's coming up with good solutions to the right problems. I'm certain some of the more intelligent and technically inclined business folks can do stuff with Guvnor, but don't expect it to be some kind of yellow brick road to the Land of Magical Bliss. You still have to provide the business people with a sane data model and API which lets them do what they need but which prevents them from doing by accident what they don't want to do.
1. Does Drools support a lightweight GUI for editing the rules?
2. Is Drools Guvnor a lightweight GUI, or is there a way to slim it down?
Well, "lightweight" is up for discussion, but Guvnor works fairly well and doesn't require more than a browser, so I think it's OK.
3. How easy is it to integrate an application with Guvnor to read the rules?
Well, once you've got Guvnor up and running, wiring up your application to use a KnowledgeAgent to connect to Guvnor and listen for new rule updates isn't particularly hard.
Unfortunately, Drools in general and Guvnor in particular have a bunch of quirks which you may have to work around (for instance, you have to explode the Guvnor WAR file and edit files in WEB-INF/beans.xml to set up a non-default configuration...). But once you've got that straightened out, I think it works pretty well.
As for documentation, the Javadocs can be a bit sparse at times, but the web site has some pretty good stuff, and there are several books.
All in all, Drools and Guvnor are powerful tools, but it's not trivial to get them working. If you really need what they have to offer, they're worth it, but it might also be worth considering whether a simpler scripting solution will suffice.
Now, if you find yourself actually needing Drools, here's my top piece of advice - keep separate data models for your database and your rules. It does mean there's a lot of boring conversion code to write, but it's well worth it.
The app I'm working on uses JPA for database matters, and it wouldn't have been very pleasant to use that data model in our rules. Several reasons:
What fits your data layer doesn't necessarily fit Drools, and vice versa.
We have a tree structure, which in JPA is naturally a @OneToMany relation with the children in a list in the parent node. Working with lists in Drools was rather painful, so we flattened it to a structure where we insert one ParentNode object and a bunch of ChildNode objects, which was far easier to work with.
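A sketch of that flattening, with invented class names: instead of handing the rules a nested tree, insert one ParentNode fact plus one ChildNode fact per child:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of flattening a JPA-style tree (parent owning a list of children)
// into separate flat facts that are easier to match in rules.
// All class and field names here are invented for illustration.
public class FactFlattener {

    // JPA-side entity shape: parent owns its children in a list.
    public record TreeNode(long id, List<TreeNode> children) {}

    // Rules-side facts: flat objects linked by id instead of nesting.
    public record ParentNode(long id) {}
    public record ChildNode(long id, long parentId) {}

    // Produce the flat facts to insert into the rule session.
    public static List<Object> flatten(TreeNode root) {
        List<Object> facts = new ArrayList<>();
        facts.add(new ParentNode(root.id()));
        for (TreeNode child : root.children()) {
            facts.add(new ChildNode(child.id(), root.id()));
        }
        return facts;
    }
}
```

A rule can then match `ChildNode(parentId == ...)` directly instead of iterating a list inside a pattern.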
Dare to refactor.
The rules' data model needs to live inside Guvnor, too - and that means you could break all the rules if you rename an entity class or something like that. A separate data model for rules means you can refactor your database stuff without worries.
Show them what they need to see.
Databases can grow fairly complex, but the rules normally don't need to care about many of these complexities. Exposing these complexities to the people editing rules can cause a lot of unnecessary confusion. For example, we've found that for our scenarios, there's absolutely no need to expose rule writers to many-to-many relationships (which proved very complicated to handle in Drools) - so we make them look like one-to-many instead, and everything becomes more natural.
Protection.
We've designed most our rules to work like this pseudo-code:
rule "Say hello to user"
when
    user : User()
then
    insert(new ShowMessageCommand("Hello " + user.getName() + "!"))
end
... and so, for each rule package, it's clearly defined which response commands you can insert, and what they do. After we've run the rules in our app, we pick out the objects inserted into the session by the rules and act upon them (the visitor pattern has proven very useful to avoid long if instanceof/else if instanceof/else chains).
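A minimal sketch of that visitor setup, with invented command names: the rules only insert command objects, and the application visits them afterwards:

```java
// Sketch of the visitor idea mentioned above: rules insert command objects,
// and the application visits them afterwards instead of chaining
// instanceof checks. Command and visitor names are invented.
public class CommandVisitorDemo {

    public interface CommandVisitor {
        void visit(ShowMessageCommand cmd);
        void visit(SendEmailCommand cmd);
    }

    public interface Command {
        void accept(CommandVisitor v);
    }

    public record ShowMessageCommand(String message) implements Command {
        public void accept(CommandVisitor v) { v.visit(this); }
    }

    public record SendEmailCommand(String to) implements Command {
        public void accept(CommandVisitor v) { v.visit(this); }
    }

    // Collect the messages that the rules asked the app to show.
    public static java.util.List<String> messages(java.util.List<Command> inserted) {
        java.util.List<String> out = new java.util.ArrayList<>();
        CommandVisitor v = new CommandVisitor() {
            public void visit(ShowMessageCommand cmd) { out.add(cmd.message()); }
            public void visit(SendEmailCommand cmd) { /* handled elsewhere */ }
        };
        inserted.forEach(c -> c.accept(v));
        return out;
    }
}
```

Adding a new command type then forces every visitor to handle it at compile time, which is exactly the safety the long if/else chains lack.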
I'm very happy we did it this way, instead of letting the rule writers do whatever they think they want to do with our JPA objects.
Anyway, hope this helps.
The answer above is well explained, but here is how to integrate Java and Drools/Guvnor:
private static KnowledgeBase readKnowledgeBase() throws Exception {
    KnowledgeAgent kagent = KnowledgeAgentFactory.newKnowledgeAgent("MortgageAgent");
    kagent.applyChangeSet(ResourceFactory.newClassPathResource("changeset.xml"));
    KnowledgeBase kbase = kagent.getKnowledgeBase();
    kagent.dispose();
    return kbase;
}
<?xml version="1.0" encoding="UTF-8"?>
<change-set xmlns='http://drools.org/drools-5.0/change-set'
xmlns:xs='http://www.w3.org/2001/XMLSchema-instance'
xs:schemaLocation='http://drools.org/drools-5.0/change-set drools-change-set-5.0.xsd'>
<add>
<resource
source='http://localhost:8080/guvnor-webapp/org.drools.guvnor.Guvnor/package/mortgages/LATEST'
type='PKG' basicAuthentication='enabled' username='admin' password='admin' />
</add>
</change-set>
Hope it is helpful as well.
After talking with some people at DevCon in London and after looking at the Records Management source code, I noticed that there's actually no good example of how to implement a custom document lifecycle.
I know there are examples of rules, content modeling and even workflows, but those solutions can't really be used to implement something more serious like Records Management.
What I'm wondering is how to effectively map a Java solution (I have more experience with OO and Java than with Alfresco) to Alfresco. What should be defined as a Java class, and what should be a type/aspect in the content model? When should behaviours be favored over rules, and when should workflows actually be used? In my first few projects I used workflows to implement the document lifecycle, and I wrote quite a lot of business/domain logic in workflow nodes - as actions (JS). I found out later that this is quite hard to maintain, since you have some code in workflows, some in the repository as scripts (Data Dictionary/Scripts), some in Java, ...
Is Records Management a good example to start learning from and to see some best practices in implementing a full document lifecycle? Are there any other resources?
I'm struggling the most with how to implement a full lifecycle in Java and how to "centralize" the business/domain logic.
The scope of ECM is huge, and therefore it's quite hard to come up with completely general guidelines: you really need to stick with the use case you have to address and find the best solution to it. RM is a great example of how to implement a records management solution on top of Alfresco, but it's absolutely useless when it comes to implementing a web publishing process, for which the WCM QS is what you want to look at as a starting point.
Among the whole set of APIs Alfresco offers to developers, their inner characteristics are ultimately the best guide to understanding when to use each. Let's see if I can make sense of (at least the most important among) them.
Content Types
This is where you always need to start implementing an Alfresco project. You need to work closely with someone with deep domain knowledge of the document processes you need to implement, and define root elements for the different document lifecycles. In Alfresco you must assign one and only one content type to a given node. This is done at content creation time, and it's not often changed along the content lifecycle. Thus, content types are normally used to identify content items with radically different lifecycles (e.g. cm:document and ws:article), and defining a content type means extracting the basic metadata properties that will be used or useful along the whole document lifecycle.
Aspects
While content types are basically a static vertical classification and enrichment of documents, aspects are their dynamic cousins. As opposed to content types, you can apply or remove aspects dynamically, with little to no destructive consequences for the content nodes. They may or may not enrich the document with more metadata, and can be applied to items regardless of their content types. These characteristics make aspects possibly the most flexible feature of the Alfresco content model: you can use them to mark content or to enable/disable operations shared among different content lifecycles (e.g. cm:versionable, rma:filePlanComponent). By nature, aspects are meant to handle cross-cutting concepts that occur in several distinct lifecycles or lifecycle steps.
Behaviors
Here we start an overview of how to add logic to your Alfresco solution. Behaviors are automatic computations fired by specific triggers, where a trigger is defined as a [type/aspect, policy] pair (e.g. [cm:versionable, onCreateNode]). They are in general executed within the same transaction as the event that fires the trigger; there's no guarantee on the order of execution, and there's no coordination or orchestration. This makes them perfect for automatic content generation or handling (e.g. creating a thumbnail or updating some metadata) that needs to be an integral part of the content lifecycle, but is not strictly part of a formalized process.
They're rather accessory or supplementary operations for normal operations or workflows. They require Java coding, thus forming a rather fixed part of your solution. You normally identify and design behaviors right after finishing the content modeling phase and before starting designing the workflows.
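The trigger idea can be sketched in plain Java without the real Alfresco API (all names here are invented): a registry binds a [type-or-aspect, policy] pair to callbacks and fires them when a matching event occurs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Framework-free sketch of the behavior concept: a trigger is a
// [type-or-aspect, policy] pair bound to a callback, fired when the
// matching event occurs on a node. This only mimics the shape of
// Alfresco behaviors; it is not the real Alfresco API.
public class BehaviorRegistry {

    public record Trigger(String typeOrAspect, String policy) {}

    private final Map<Trigger, List<Consumer<String>>> bindings = new HashMap<>();

    // Called once at startup to register a behavior.
    public void bind(String typeOrAspect, String policy, Consumer<String> behavior) {
        bindings.computeIfAbsent(new Trigger(typeOrAspect, policy), k -> new ArrayList<>())
                .add(behavior);
    }

    // Called by the repository whenever an event happens on a node;
    // every behavior bound to that trigger runs, in no guaranteed order.
    public void fire(String typeOrAspect, String policy, String nodeRef) {
        bindings.getOrDefault(new Trigger(typeOrAspect, policy), List.of())
                .forEach(b -> b.accept(nodeRef));
    }
}
```

In real Alfresco the registration goes through the policy component and the callbacks receive node references, but the shape - bind once, fire on event, no orchestration - is the same.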
Rules
Similar to behaviors, rules are triggered upon specific events, but they're much more generic and dynamic. You can configure rules only on folders, at runtime, and bind them to events that happen within the folder. This makes them ideal for creating special buckets within your content repository (e.g. send an email whenever content is added to a specific folder), where side effects happen when you deal with the content within. They're implemented as hidden nodes within folders, and are thus integral parts of an export: you can in theory reuse them in different Alfresco implementations, provided the required pieces are available.
They are normally used when a piece of logic applies to content of several different types, but possibly not all items of the affected types, and only when you can store all the affected content nodes within a sub-branch of the repository. Even if such a constraint might sound heavy, rules turn out to be quite a handy tool (e.g. generate a thumbnail for all the documents with mime type image/png in /images).
Actions
Actions are bundled pieces of logic that can be invoked against a node on demand. They're the building blocks for rules, and are often used within workflows (e.g. send an email). They are also handy to bind directly to UI components/buttons, in order to expose the available features of your application directly to the user. You normally end up developing an action when you need (or want to enable) reusing the same piece of logic in different contexts, such as a workflow, a rule and/or direct user interaction.
Workflows
This is probably the core business of document management: workflows allow you to build a coordinated process that guides users through a defined sequence of steps, basically implementing a human algorithm. Workflows allow you to write custom code, but for the sake of maintainability you probably want to limit such code to the bare minimum of what the workflow itself needs in order to execute, and externalize more complex operations to actions or scripts.
If you're doing document management, the design and implementation of a workflow can start right after content modeling, possibly spawning several other development activities such as accessory actions and scripts, and they're likely to last until you call your code feature complete, and you start fiddling with all the infinite change requests or leftovers on the UI :-)
First of all, I have a very superficial knowledge of SAP. According to my understanding, they provide a number of industry-specific solutions. The concept seems very interesting, and I work on something similar for the banking industry. The biggest challenge we face is how to adapt our products for different clients. Many concepts are quite similar across enterprises, but there are always some client-specific requirements that have to be resolved through configuration and customization. Often this requires reimplementing and developing customer-specific features.
I wonder how efficient in this sense SAP products are. How much effort has to be spent in order to adapt the product so it satisfies specific customer needs? What are the mechanisms used (configuration, programming etc)? How would this compare to developing custom solution from scratch? Are they capable of leveraging and promoting best practices?
Disclaimer: I'm talking about the ABAP-based part of SAP software only.
Disclaimer 2, re PATRY's response: HR is quite a bit different from the rest of the SAP/ABAP world. I do feel rather competent as a general-purpose ABAP developer, but HR programming is so far off my personal beacon that I've never even tried to understand what they're doing there. %-|
According to my understanding, they provide a number of industry specific solutions.
They do - but be careful when comparing your own programs to these solutions. For example, IS-H (SAP for Healthcare) started off as an extension of the SD (Sales & Distribution) system, but has become very much more since then. While you could technically use all of the techniques they use for their IS, you really should ask a competent technical consultant before you do - there are an awful lot of pits to avoid.
The concept seems very interesting, and I work on something similar for the banking industry.
Note that a SAP for Banking IS already exists. See here for the documentation.
The biggest challenge we face is how to adapt our products for different clients.
I'd rather rephrase this as "The biggest challenge is to know where the product is likely to be adapted, and to structurally prepare the product for adaptation." The adaptation techniques are well researched and easily employed once you know where the customer is likely to deviate from your idea of the perfect solution.
How much effort has to be spent in order to adapt the product so it satisfies specific customer needs?
That obviously depends on how far the customer's needs deviate from the standard path - but that won't help you. With an SAP-based system, you always have three choices. You can try to customize the system within its limits. Customizing basically means tweaking settings (think configuration tables, tens of thousands of them) and adding stuff (program fragments, forms, ...) in places that are intended for it. For the technology, see below.
Sometimes customizing isn't enough - you can develop things additionally. A very frequent requirement is some additional reporting tool. With the SAP system, you get the entire development environment delivered - the very same tools that all the standard applications were written with. Your programs can peacefully coexist with the standard programs and even use common routines and data. Of course you can really screw things up, but show me a real programming environment where you can't.
The third option is to modify the standard implementations. Modifications are like a really sharp two-edged kitchen knife - you might be able to cook really cool things in half of the time required by others, but you might hurt yourself really badly if you don't know what you're doing. Even if you don't really intend to modify the standard programs, it's very comforting to know that you could and that you have full access to the coding.
(Note that this is about the application programs only - you have no chance whatsoever to tweak the kernel, but fortunately, that's rarely necessary.)
What are the mechanisms used (configuration, programming etc)?
Configuration is mostly about configuration tables with more or less sophisticated dialog applications. For the programming part of customizing, there's the extension framework - see http://help.sap.com/saphelp_nw70ehp1/helpdata/en/35/f9934257a5c86ae10000000a155106/frameset.htm for details. It's basically a controlled version of dependency injection. As a solution developer, you have to anticipate the extension points, define the interface that has to be implemented by the customer code, and then embed the call in your code. As a project developer, you have to create an implementation that adheres to the interface and activate it. The basic runtime system takes care of gluing the two programs together; you don't have to worry about that.
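That extension-point pattern can be sketched in plain Java (the names are invented; SAP's actual framework is ABAP-based): the solution developer defines the interface and embeds the call, and a project implementation is optionally plugged in:

```java
import java.util.Optional;

// Sketch of the extension-point idea described above: the solution
// developer anticipates the hook and defines its interface; the project
// developer supplies and activates an implementation. All names invented.
public class PricingService {

    // Extension point defined by the solution developer.
    public interface DiscountExtension {
        double adjust(double price);
    }

    private final Optional<DiscountExtension> extension;

    public PricingService(Optional<DiscountExtension> extension) {
        this.extension = extension;
    }

    // Standard logic, with the customer hook embedded at the right spot.
    public double finalPrice(double base) {
        double price = base * 1.19; // standard behavior (e.g. tax markup)
        return extension.map(e -> e.adjust(price)).orElse(price);
    }
}
```

The runtime (here, the Optional; in SAP, the extension framework) glues the two sides together, so the standard code never has to know whether a customer implementation exists.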
How would this compare to developing custom solution from scratch?
IMHO this depends on how much of the solution is the same for all customers and how much of it has to be adapted. It's really hard to be more specific without knowing more about what you want to do.
I can only speak for the Human Resource component, but this is a component where there is a lot of difference between customers, based on a common need.
First, most of the time you set the value for a group, and then associate the object (person, location...) with a group depending on one or two values. This is akin to an indirection, and allows for great flexibility, as you can change the association for a given location without changing the others. In a few cases, there is a 3-level indirection...
Second, there is a lot of customization that is nearly programming. Payroll and administrative operations are first-class examples of this. In the latter case, you get a table with the operation (hiring, for example), the event (creation, modification...), a code for the action (I for test, F to call a function, O for a standard operation) and a text field describing the parameters of a function ("C P0001, begda, endda" to create a structure P0001 with default values).
Third, you can also use such a table to indicate a function or class (ABAP OO) that will be dynamically called. You get a developer to create this function or class, and then indicate it in the table. This is a way to replace one functionality with another, or to extend it. This is used extensively in ESS/MSS.
Last, there are also extension points or files that you can modify. This is nearly the same as the previous one, except that you don't need to indicate the change: the file is always used (ZXPADU01/02 for HR modification of infotypes).
Hope this helps,
Guillaume PATRY