Creating empty string for 'rep:glob' - java

To restrict a user's node access (using a principal-based ACL), I tried the following snippet, but it turns out to be ineffective:
Map<String, Value> restrictions = new HashMap<String, Value>();
// Apply privilege to user to have read-only access to the root folder
restrictions.put("rep:nodePath", valueFactory.createValue(ROOT, PropertyType.PATH));
restrictions.put("rep:glob", valueFactory.createValue(""));
accessControlList.addEntry(userPrincipal, privileges, true, restrictions);
accessControlManager.setPolicy(accessControlList.getPath(), accessControlList);
Is there any other way to set the rep:glob property to ""?
When I grant access to the root ('/'), the user should be able to access only that node. Instead, all the nodes under the root are accessible to the user.

When you don't provide the rep:glob restriction while restricting principal access, the ACL by default affects the whole subtree of the target node.
Thus the optional rep:glob restriction saves you the work of creating restrictions on every path where you want to limit the policy. It accepts a JCR Value, and the String used to create that Value has the following effect on node/subnode restriction when the entry is applied to the node with path "/foo":
------------------------------------------------------------------------------
rep:glob value    | Effect
------------------------------------------------------------------------------
null              | matches "/foo" and all its children
"" (empty string) | matches "/foo" only
* (wildcard)      | all descendants of "/foo"
/*bar             | all children whose paths end with "bar"
/*/bar            | all non-direct descendants of "/foo" named "bar"
/bar*             | all children whose paths begin with "bar"
*bar              | all siblings and descendants of "/foo" that begin with "bar"
------------------------------------------------------------------------------
If the restriction is not applied only to your target node, then there is a mistake somewhere, and you may need to post the whole code snippet for further help.

Even this GlobPattern (http://jackrabbit.apache.org/api/2.2/org/apache/jackrabbit/core/security/authorization/GlobPattern.html) didn't work for the root. Anyway, I have used another approach to skip the root access check while getting the user: provide a SimpleWorkspaceAccessManager in the SecurityManager tag.
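For reference, that setting lives in Jackrabbit's repository.xml. A sketch of what the SecurityManager section might look like (assuming the default security manager; adjust to your configuration):

<SecurityManager class="org.apache.jackrabbit.core.DefaultSecurityManager" workspaceName="security">
  <!-- skips the per-workspace access check, so resolving the user at the root succeeds -->
  <WorkspaceAccessManager class="org.apache.jackrabbit.core.security.simple.SimpleWorkspaceAccessManager"/>
</SecurityManager>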

REST API for updating information with empty or null values

I have a general question about how best to build an API that can modify records in a database.
Suppose we have a table with 10 columns and we can query these 10 columns using REST (GET). The JSON response will contain all 10 fields. This is easy and works without problems.
The next step is that someone wants to create a new record via POST. In this case the person sends only 8 of the 10 fields in the JSON request. We would then fill only those 8 fields in the database (the rest would be NULL). This also works without problems.
But what happens if someone wants to update a record? We see here different possibilities with advantages and disadvantages.
Only what should be updated is sent.
Problem: How can you explicitly empty / delete a field? If a "NULL" is passed in the JSON, we get NULL in the object, but any other field that is not passed is NULL as well. Therefore we cannot distinguish which field should be cleared and which field should be left untouched.
The complete object is sent.
Problem: Here the object could be fetched via a GET beforehand, changed accordingly, and returned via PUT. We then get all the information back and could write it directly into the database, because empty fields were either already empty before or were cleared by the user.
But what happens if the objects are extended by an update of the API? Suppose we extend the database by five more fields. The user of the API does a GET, receives the 15 fields, but can only read the 10 fields he knows on his side (because he hasn't updated yet). Then he changes some of the 10 fields and sends them back via PUT. We would then update only those 10 fields on our side, and the 5 new fields would be emptied in the database.
Or do you have to create a separate endpoint for each field? We have also thought about sending a map of key/value pairs describing exactly what should be changed.
About the technique: we use WildFly 15 with RESTEasy and Jackson.
For example:
Database at the beginning
+----+----------+---------------+-----+--------+-------+
| ID | Name | Country | Age | Weight | Phone |
+----+----------+---------------+-----+--------+-------+
| 1 | Person 1 | Germany | 22 | 60 | 12345 |
| 2 | Person 2 | United States | 32 | 62 | 56789 |
| 3 | Person 3 | Canada | 52 | 102 | 99999 |
+----+----------+---------------+-----+--------+-------+
GET .../person/2
{
  "id": 2,
  "name": "Person 2",
  "country": "United States",
  "age": 32,
  "weight": 62,
  "phone": "56789"
}
Now I want to update his weight and remove the phone number
PUT .../person/2
{
  "id": 2,
  "name": "Person 2",
  "country": "United States",
  "age": 32,
  "weight": 78
}
or
{
  "id": 2,
  "name": "Person 2",
  "country": "United States",
  "age": 32,
  "weight": 78,
  "phone": null
}
Now the database should look like this:
+----+----------+---------------+-----+--------+-------+
| ID | Name | Country | Age | Weight | Phone |
+----+----------+---------------+-----+--------+-------+
| 1 | Person 1 | Germany | 22 | 60 | 12345 |
| 2 | Person 2 | United States | 32 | 78 | NULL |
| 3 | Person 3 | Canada | 52 | 102 | 99999 |
+----+----------+---------------+-----+--------+-------+
The problem is:
We extend the table like this (salary):
+----+----------+---------------+-----+--------+--------+-------+
| ID | Name | Country | Age | Weight | Salary | Phone |
+----+----------+---------------+-----+--------+--------+-------+
| 1 | Person 1 | Germany | 22 | 60 | 1929 | 12345 |
| 2 | Person 2 | United States | 32 | 78 | 2831 | NULL |
| 3 | Person 3 | Canada | 52 | 102 | 3921 | 99999 |
+----+----------+---------------+-----+--------+--------+-------+
The person using the API does not know that there is a new field in JSON for the salary. And this person now wants to change the phone number of someone again, but does not send the salary. This would also empty the salary:
{
  "id": 3,
  "name": "Person 3",
  "country": "Canada",
  "age": 52,
  "weight": 102,
  "phone": null
}
+----+----------+---------------+-----+--------+--------+-------+
| ID | Name | Country | Age | Weight | Salary | Phone |
+----+----------+---------------+-----+--------+--------+-------+
| 1 | Person 1 | Germany | 22 | 60 | 1929 | 12345 |
| 2 | Person 2 | United States | 32 | 78 | 2831 | NULL |
| 3 | Person 3 | Canada | 52 | 102 | NULL | NULL |
+----+----------+---------------+-----+--------+--------+-------+
But salary should not be set to null, because it was simply not present in the JSON request.
You could deserialize your JSON into a Map.
This way, if a property has not been sent, the key is absent from the Map. If it was sent as null, the key is present in the Map with a null value.
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.HashMap;

ObjectMapper mapper = new ObjectMapper();
TypeReference<HashMap<String, Object>> typeReference = new TypeReference<HashMap<String, Object>>() {};
HashMap<String, Object> jsonMap = mapper.readValue(json, typeReference);
// print only the keys that were actually present in the request
jsonMap.keySet().forEach(System.out::println);
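With the map in hand, "absent" and "explicitly null" can be told apart, e.g.:

// uses the jsonMap built above
if (!jsonMap.containsKey("phone")) {
    // field was not sent -> leave the column untouched
} else if (jsonMap.get("phone") == null) {
    // field was sent as null -> clear the column
}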
Not a very convenient solution, but it might work for you.
A common technique is to track changes on the entity POJO.
Load Dog with color = black, size = null and age = null
Set size to null (the setter will mark this field as changed)
Run update SQL
The POJO will have internal state knowing that size was changed, and thus include that field in the UPDATE. age, on the other hand, was never set, and is thus left unchanged. jOOQ works like that; I'm sure there are others.
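A minimal sketch of that idea (a hypothetical Dog class; libraries like jOOQ generate this bookkeeping for you):

import java.util.HashSet;
import java.util.Set;

public class Dog {
    private String color;
    private Integer size;
    private Integer age;
    // names of the fields explicitly set since the entity was loaded
    private final Set<String> changedFields = new HashSet<>();

    public void setSize(Integer size) {
        this.size = size;
        changedFields.add("size"); // marked as changed even when set to null
    }

    public void setAge(Integer age) {
        this.age = age;
        changedFields.add("age");
    }

    // an UPDATE statement would include only the fields returned here
    public Set<String> getChangedFields() {
        return changedFields;
    }
}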
Only what should be updated is sent. Problem: How can you explicitly empty / delete a field? If a "NULL" is passed in the JSON, we get NULL in the object, but any other field that is not passed is NULL as well. Therefore we cannot distinguish which field should be cleared and which field should be left untouched.
The problem you have identified is genuine; I have faced this too. I think it is reasonable to not provide a technical solution for this, but rather document the API usage to let the caller know the impact of leaving out a field or sending it as null. Of course, assuming that the validations on the server side are tight and ensure sanity.
The complete object is sent. Problem: Here the object could be fetched via a GET beforehand, changed accordingly, and returned via PUT. We then get all the information back and could write it directly into the database, because empty fields were either already empty before or were cleared by the user.
This is "straighter-forward" and should be documented in the API.
What happens if the objects are extended by an update of the API?
With the onus put on the caller through the documentation, this too is handled implicitly.
Or do you have to create a separate endpoint for each field?
This, again, is a design issue, the solution to which varies from person to person. I would rather keep the API at the record level than at the level of individual values. However, there may be cases where it needs to be that way, e.g. status updates.
Suppose we extend the database by five more fields. The user of the API does a GET, receives the 15 fields, but can only read the 10 fields he knows on his side (because he hasn't updated yet). Then he changes some of the 10 fields and sends them back via PUT. We would then update only those 10 fields on our side, and the 5 new fields would be emptied in the database.
So let's start with an example: what would happen on the web, where clients interact with your API via HTML rendered in browsers? The client would GET a form, and that form would have input controls for each of the fields. The client updates the fields in the form, submits it, and you apply those changes to your database.
When you want to extend the API to include more fields, you add those fields to the form. The client doesn't know about those fields. So what happens?
One way to manage this is that you make sure that you include in the form the correct default values for the new fields; then, if the client ignores the new fields, the correct value will be returned when the form is submitted.
More generally, the representations we exchange in our HTTP payloads are messages; if we want to support old clients, then we need the discipline of evolving the message schema in a backwards compatible way, and our clients have to be written with the understanding that the message schema may be extended with additional fields.
The person using the API does not know that there is a new field in JSON for the salary.
The same idea holds here - the new representation includes a field "salary" that the client doesn't know about, so it is the responsibility of the client to forward that data back to you unchanged, rather than just dropping it on the floor assuming it is unimportant.
There's a bunch of prior art on this from 15-20 years ago, because people writing messages in XML were facing exactly the same sort of problems. They have left some of their knowledge behind. The easiest way to find it is to search for some key phrases; for instance must ignore or must forward.
See:
Versioning XML Vocabularies
Extensibility, XML Vocabularies, and XML Schema
Events in an event store have the same kinds of problems. Greg Young's book Versioning in an Event Sourced System covers a lot of the same ground (representations of events are also messages).
The accepted answer works well, but it has a huge caveat: it is completely untyped. If the object's fields change, you'll get no compile-time warning that you're looking for the wrong fields.
I would therefore argue that it's better to force all fields to be present in the request body. Then a null means the user explicitly set it to null, while if the user misses a field they'll receive a 400 Bad Request with a body describing the error in detail.
Here's a great post on how to achieve this: Configure Jackson to throw an exception when a field is missing
Here's my example in Kotlin:
data class PlacementRequestDto(
    val contentId: Int,
    @param:JsonProperty(required = true)
    val tlxPlacementId: Int?,
    val keywords: List<Int>,
    val placementAdFormats: List<Int>
)
Notice that the nullable field is marked as required. This way the user has to explicitly include it in the request body.
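For Java users, the same effect can be achieved with a @JsonCreator constructor; Jackson enforces required = true for creator properties. A minimal sketch (DTO and field names are hypothetical):

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class PersonUpdateDto {
    private final int id;
    private final String phone;

    @JsonCreator
    public PersonUpdateDto(
            @JsonProperty(value = "id", required = true) int id,
            @JsonProperty(value = "phone", required = true) String phone) {
        this.id = id;
        this.phone = phone;
    }

    public int getId() { return id; }
    public String getPhone() { return phone; }
}

With this, {"id": 2} fails deserialization with a MismatchedInputException (which you can map to a 400), while {"id": 2, "phone": null} succeeds with phone == null.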
You can control empty or null values as below
public class Person {
    @JsonInclude(JsonInclude.Include.NON_NULL)
    private BigDecimal salary; // this will make sure salary can't be null or empty
    private String phone;      // allow phone number to be empty
    // same logic for other fields
}
i) As you're updating the weight and removing the phone number, ask the client to send only the fields that need to be updated, along with the record identifier (id in this case):
{
  "id": 2,
  "weight": 78,
  "phone": null
}
ii) As you're adding salary as one more mandatory column, the client should be aware of it; you may have to redesign the contract.

Putting integers into quotes for xpath

I was debugging someone else's Selenium code.
They had an XPath something like td['6']. This was failing. I used intuition and changed it to td[6], which fixed it. However, td['6'] did not give an error as I thought it would. It located an element, but a completely different one than without the quotes.
So it got me to thinking: what does putting the number in quotes, like td['6'], actually mean?
Per the XPath 1.0 Specification, td combines the axis (here the default, child) and the node-test (tag name td), and everything inside the square brackets is a predicate expression, which is either evaluated as a boolean (true or false) or, in the special case of a number, as a positional filter on the node-set:
if the result is not a number, then the result will be converted as if by a call to the boolean function. Thus a location path para[3] is equivalent to para[position()=3].
Explained by Example
The following 3 XPath predicate cases are explained by evaluating them against this sample table (w3schools):
+-------------------------------+-------------------+---------+
| Company | Contact | Country |
+-------------------------------+-------------------+---------+
| Alfreds Futterkiste | Maria Anders | Germany |
| Centro comercial Moctezuma | Francisco Chang | Mexico |
| Ernst Handel | Roland Mendel | Austria |
| Island Trading | Helen Bennett | UK |
| Laughing Bacchus Winecellars | Yoshi Tannamuri | Canada |
| Magazzini Alimentari Riuniti | Giovanni Rovelli | Italy |
+-------------------------------+-------------------+---------+
case 1: number as predicate
td[6] selects the 6th child table-data element, since the number 6 is evaluated as shorthand for the predicate position()=6.
Example: select each cell of the third column
XPath: //td[3]
Output:
<td>Germany</td>
<td>Mexico</td>
<td>Austria</td>
<td>UK</td>
<td>Canada</td>
<td>Italy</td>
case 2: quoted string as predicate
td['6'] selects every child table-data element, since the string '6' is non-empty (has non-zero length) and thus evaluates to true (see boolean conversion). The node-set of td elements is therefore not filtered further (the predicate is always true).
Example: select each cell (because the string predicate is always true)
XPath: //td['3']
Output:
<td>Alfreds Futterkiste</td>
<td>Maria Anders</td>
<td>Germany</td>
<td>Centro comercial Moctezuma</td>
<td>Francisco Chang</td>
<td>Mexico</td>
<td>Ernst Handel</td>
<td>Roland Mendel</td>
<td>Austria</td>
<td>Island Trading</td>
<td>Helen Bennett</td>
<td>UK</td>
<td>Laughing Bacchus Winecellars</td>
<td>Yoshi Tannamuri</td>
<td>Canada</td>
<td>Magazzini Alimentari Riuniti</td>
<td>Giovanni Rovelli</td>
<td>Italy</td>
case 3: conditional predicate testing the elements
This is the real benefit of the predicate expression: it allows you to test the elements themselves, for example to find all merged table cells with a colspan attribute:
td[@colspan]
See this sophisticated use-case:
Xpath expression with multiple predicates
Example: select all cells where contents starts with 'A'
XPath: //tr/td[starts-with(., 'A')]
Output:
<td>Alfreds Futterkiste</td>
<td>Austria</td>
td[predicate] means:
return the td nodes for which predicate is true.
Every non-empty string evaluates to true, so td['6'] matches every td node; Selenium's findElement then returns the first one found in the DOM.
td[6] is shorthand for the td[position()=6] expression, which means:
return the td that is the sixth td child of its parent.
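To see the two predicate behaviours outside the browser, here is a small self-contained check using the JDK's built-in XPath 1.0 engine (class and variable names are mine):

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class PredicateDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<tr><td>a</td><td>b</td><td>c</td></tr>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // numeric predicate: positional filter -> only the third cell
        NodeList positional = (NodeList) xpath.evaluate("//td[3]", doc, XPathConstants.NODESET);
        // string predicate: non-empty string is always true -> every cell
        NodeList always = (NodeList) xpath.evaluate("//td['3']", doc, XPathConstants.NODESET);

        System.out.println(positional.getLength()); // 1
        System.out.println(always.getLength());     // 3
    }
}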

Spring security expressions fail if user has multiple authorities

I have 3 controllers annotated with @PreAuthorize("hasAnyAuthority(x)") where x is
'ROLE_ADMIN'
'ROLE_ADMIN','ROLE_MID'
'ROLE_ADMIN','ROLE_MID','ROLE_LOW'
Crucial point: If a user has ONE authority, these annotations work just fine. Ex: A user with only ROLE_ADMIN can access all methods on all 3.
BUT if a user has some other role as well, e.g. ROLE_ADMIN,ROLE_OTHER, then I get "Access Denied" across all three controllers.
See this table for what I'm talking about. (hAA=hasAnyAuthority):
+-----------------------+-------------------+------------------------------+-----------------------------------------+
| Authorities | hAA('ROLE_ADMIN') | hAA('ROLE_ADMIN','ROLE_MID') | hAA('ROLE_ADMIN','ROLE_MID','ROLE_LOW') |
+-----------------------+-------------------+------------------------------+-----------------------------------------+
| ROLE_ADMIN | YES | YES | YES |
| ROLE_MID | NO | YES | YES |
| ROLE_LOW | NO | NO | YES |
| ROLE_ADMIN,ROLE_OTHER | NO | NO | NO |
| ROLE_MID,ROLE_OTHER | NO | NO | NO |
| ROLE_LOW,ROLE_OTHER | NO | NO | NO |
+-----------------------+-------------------+------------------------------+-----------------------------------------+
Just to drive the point home: I have a user whose getAuthorities returns (as a string) "[ROLE_MID,mid.mid12345]", and all three controllers fail for that user. Shouldn't hasAnyAuthority('ROLE_MID') work for him? Why would having "mid.mid12345" cause it to fail?
PS: I've tried hasAnyRole as well, with the same results.
I figured out that in older versions (Spring Security 4), the whole comma-separated list was accepted without single quotes around each authority.
Example:
hasAnyAuthority('AUTHORITY1,AUTHORITY2,AUTHORITY3')
In Spring Security 5 this is no longer accepted as a list, so it should be replaced with
hasAnyAuthority('AUTHORITY1', 'AUTHORITY2', 'AUTHORITY3')
It took some time to figure this out, so I would like to share it here.
I am not sure, but try it without single quotes, just comma-separated; take a look at the documentation (https://docs.spring.io/spring-security/site/docs/current/reference/html/el-access.html):
hasAnyAuthority([authority1,authority2]) Returns true if the current principal has any of the supplied roles (given as a comma-separated list of strings)
Also, I think your roles should not start with ROLE_:
Returns true if the current principal has any of the supplied roles (given as a comma-separated list of strings). By default if the supplied role does not start with 'ROLE_' it will be added. This can be customized by modifying the defaultRolePrefix on DefaultWebSecurityExpressionHandler.
I figured out the issue. I have a UserDetailsService set up with the sole loadUserByUsername method overridden, in order to connect my database entity user, Customuser, to Spring Security's UserDetails.User.
But my conversion from one to the other was incorrect.
INCORRECT
User user = new User(customuser.getUsername(), customuser.getPassword(), AuthorityUtils.createAuthorityList(customuser.getAuthorityString()));
CORRECT
User user = new User(customuser.getUsername(), customuser.getPassword(), customuser.getAuthoritiesList()); // see below
My Customuser object stores a comma-separated list of the authorities the user has. When using AuthorityUtils.createAuthorityList, it was taking that string, whether it was "ROLE_ADMIN" or "ROLE_ADMIN,ROLE_OTHER", and making the entire string ONE authority. The latter of the two above would have become a single authority of "ROLE_ADMIN,ROLE_OTHER". (This explains why users with only one authority were working just fine and any user with multiple was failing.)
I had to create a custom method within Customuser to chop that CSV string into individual authorities before creating my UserDetails.User object in loadUserByUsername.
public Collection<? extends GrantedAuthority> getAuthoritiesList()
{
    // split the comma-separated authority string into one authority per token
    StringTokenizer st = new StringTokenizer(authorityString, ",");
    List<SimpleGrantedAuthority> returnlist = new ArrayList<SimpleGrantedAuthority>();
    while (st.hasMoreTokens())
    {
        returnlist.add(new SimpleGrantedAuthority(st.nextToken()));
    }
    return returnlist;
}
Hope this helps someone else!
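As an aside, Spring Security ships a helper that performs exactly this split, so the tokenizer can likely be replaced (a sketch, assuming the same authorityString contents):

import org.springframework.security.core.authority.AuthorityUtils;
import org.springframework.security.core.userdetails.User;

// splits "ROLE_ADMIN,ROLE_OTHER" into two separate authorities
User user = new User(customuser.getUsername(), customuser.getPassword(),
        AuthorityUtils.commaSeparatedStringToAuthorityList(customuser.getAuthorityString()));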

Hibernate PersistentMap returning wrong values

I am working on an application running Grails 2.4.5 and Hibernate 3.6.10. There is a domain object that has a child PersistentMap. This map stores 4 key-value pairs where the value is always a String.
In our dev and test environments everything works fine, and then occasionally the persistent map starts returning "1" for either the key or the value. The other values in the parent domain object are fine. When the problem occurs, it has been resolved by updating one of the records for the map directly in the database. This makes me think it's a caching issue of some sort, but I haven't been able to recreate it in a local environment.
The database underneath is MySQL.
The following is not the actual code but is representative of the structure.
class MyDomain {
    static belongsTo = [owner: Owner]
    static hasMany = [relatedDomains: RelatedDomain]

    Set relatedDomains = []
    Map flags = [:]
    String simpleItem
    String anotherItem

    static constraints = {
        owner(nullable: true)
        relatedDomains(nullable: true)
        flags(nullable: true)
        simpleItem(nullable: true)
        anotherItem(nullable: true)
    }
}
This results in a couple of tables (ignoring RelatedDomain and Owner):
mydomain table
| id | version | owner_id | simple_item | another_item  |
|----|---------|----------|-------------|---------------|
| 1  | 1       | 1        | A value     | Another value |
mydomain_flags table
| flags | flags_ids | flags_elt   |
|-------|-----------|-------------|
| 1     | KEY_ONE   | VALUE_ONE   |
| 1     | KEY_TWO   | VALUE_TWO   |
| 1     | KEY_THREE | VALUE_THREE |
When the MyDomain instance is retrieved, the flags map will contain:
["KEY_ONE": "VALUE_ONE", "KEY_TWO": "VALUE_TWO", "KEY_THREE": "VALUE_THREE"]
Occasionally the map contains:
["KEY_ONE": "1", "KEY_TWO": "1", "KEY_THREE": "1"]
OR
["1": "VALUE_ONE", "1": "VALUE_TWO", "1": "VALUE_THREE"]
The rest of the data in the MyDomain instance is correct. It is just the flags map that seems to have an issue. The application only reads the information for mydomain and flags; it never updates the data. It's basically configuration data for the application.
Has anyone else seen behavior like this? I don't know if it's related to Hibernate (version 3.6.10) or Grails/GORM or both. I've been unable to reproduce it locally, but it has happened in two separate environments.
I tracked it down to an issue with Hibernate. The aliases generated for the persistent map resulted in the same alias for the key and the element. This is because the aliases are based on a static counter in the org.hibernate.mapping.Table class (in 3.6.10). The reason it was sporadic is that Grails loads all the domain classes into a HashSet and then iterates over the set, binding each one. Since the Set is unordered, sometimes the domain class with the persistent map would be the 3rd class mapped, resulting in a key alias identical to the element alias.
This problem was fixed in Hibernate version 4.1.7
https://hibernate.atlassian.net/browse/HHH-7545
To get around the problem in Grails, I subclassed the GrailsAnnotationConfiguration class and, in the constructor, created and discarded 10 Hibernate Table instances. This incremented the static counter to a safer seed value prior to loading the Grails domain classes.
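A sketch of that workaround (package and class names as in Grails 2.x / Hibernate 3.6; treat this as illustrative rather than verified):

import org.codehaus.groovy.grails.orm.hibernate.cfg.GrailsAnnotationConfiguration;
import org.hibernate.mapping.Table;

public class AliasSafeConfiguration extends GrailsAnnotationConfiguration {
    public AliasSafeConfiguration() {
        // each discarded Table instance bumps Hibernate's static alias counter,
        // moving it past the values where key and element aliases collide
        for (int i = 0; i < 10; i++) {
            new Table();
        }
    }
}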

How can I determine which alternative node was chosen in ANTLR

Suppose I have the following:
variableDeclaration: Identifier COLON Type SEMICOLON;
Type: T_INTEGER | T_CHAR | T_STRING | T_DOUBLE | T_BOOLEAN;
where those T_ names are just defined as "integer", "char" etc.
Now suppose I'm in the exitVariableDeclaration method of a test program called LittleLanguage. I can refer to LittleLanguageLexer.T_INTEGER (etc.), but I can't see how to determine which of these types was found through the context.
I had thought it would be context.Type().getSymbol().getType(), but that doesn't return the right value. I know that I COULD use context.Type().getText(), but I really don't want to be doing string comparisons.
What am I missing?
Thanks
You are losing information in the lexer by combining the tokens prematurely. It is better to combine them in a parser rule:
variableDeclaration: Identifier COLON type SEMICOLON;
type: T_INTEGER | T_CHAR | T_STRING | T_DOUBLE | T_BOOLEAN;
Now, the matched alternative inside type is a TerminalNode whose underlying token instance has a unique type:
VariableDeclarationContext ctx = .... ;
TerminalNode typeNode = (TerminalNode) ctx.type().getChild(0);
switch (typeNode.getSymbol().getType()) {
    case YourLexer.T_INTEGER:
        ...
}
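If the child cast feels clumsy, ANTLR 4's ParserRuleContext also exposes the rule's first token directly, which yields the same token type (a small sketch):

// equivalent, without casting the first child to TerminalNode
int tokenType = ctx.type().getStart().getType();
if (tokenType == YourLexer.T_INTEGER) {
    // handle integer type
}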
