I already have a working synonym.txt in Solr. Now I want to use that same txt file in Elasticsearch. How can I do that? In Solr it was easy; I just kept the file on the file system. In Elasticsearch I added the settings below and also ran some commands, but it is not working.
PUT /test_index
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "synonym": {
            "tokenizer": "whitespace",
            "filter": [ "synonym" ]
          }
        },
        "filter": {
          "synonym": {
            "type": "synonym",
            "synonyms_path": "analysis/synonym.txt"
          }
        }
      }
    }
  }
}
What's wrong? Do I need to reindex, or do I need to map this analyzer to a field? My search results depend on multiple fields.
I hope you have applied your synonym analyzer to the relevant existing fields in your ES mapping; you have only provided your index settings, so please share the index mapping to confirm it.
Also, adding an analyzer to an existing field is a breaking change, and you have to reindex the data again to see the updated tokens.
Use the Analyze API to inspect the tokens produced in your index, and also cross-check that synonym.txt was picked up properly and that there was no error while creating the index settings with this file.
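As a rough illustration of both steps (not your exact setup), here is a minimal sketch using Elasticsearch's low-level Java REST client. The index name comes from your request, but the `title` field, the mapping endpoint (which differs across Elasticsearch versions; older versions need the mapping type in the URL and `string` instead of `text`), and the sample text are placeholders:

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class SynonymMappingSketch {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {

            // 1) Attach the custom "synonym" analyzer to a field (here a hypothetical
            //    "title" field). For an existing field this is a breaking change, so
            //    the index has to be recreated and the data reindexed.
            Request mapping = new Request("PUT", "/test_index/_mapping");
            mapping.setJsonEntity(
                    "{ \"properties\": { \"title\": { \"type\": \"text\", \"analyzer\": \"synonym\" } } }");
            client.performRequest(mapping);

            // 2) Verify with the Analyze API which tokens the analyzer actually produces.
            Request analyze = new Request("GET", "/test_index/_analyze");
            analyze.setJsonEntity("{ \"analyzer\": \"synonym\", \"text\": \"laptop\" }");
            Response response = client.performRequest(analyze);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```

If the `_analyze` response does not show the expanded synonyms, the filter never picked up `analysis/synonym.txt`; the path is resolved relative to the config directory, and the file has to be present on every node.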
I'm using https://github.com/json-path/JsonPath, and I want to set fields that might not yet be set in the document. For example, doc.set(JsonPath.compile("$.some.array[0].value"), "abc"); on an {} document would result in:
{
  "some": {
    "array": [
      {
        "value": "abc"
      }
    ]
  }
}
I was thinking of trying to use the compiled path and get access to the path tokens to work my way through the document, but it doesn't seem that they are accessible. Is there any good way, or alternative way, of doing this?
In short: I want to be able to access the path tokens in order to create the fields that are needed.
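One possible workaround (just a sketch, and it sidesteps the compiled path tokens entirely) is to create each missing level explicitly with DocumentContext.put() and add() before the leaf value is set; the path and value below mirror the example above:

```java
import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonPathCreateSketch {
    public static void main(String[] args) {
        DocumentContext doc = JsonPath.parse("{}");

        // Create each missing level explicitly before adding the leaf value.
        doc.put("$", "some", new LinkedHashMap<String, Object>());
        doc.put("$.some", "array", new ArrayList<Object>());

        Map<String, Object> element = new LinkedHashMap<>();
        element.put("value", "abc");
        doc.add("$.some.array", element);

        // Prints {"some":{"array":[{"value":"abc"}]}}
        System.out.println(doc.jsonString());
    }
}
```

This only covers the simple case shown here; deeper or more dynamic paths would need to be split and walked level by level in the same fashion.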
I would like to implement a PDP engine using the authzforce-ce-core-pdp-engine jar file as mentioned in the README, except that the XML policy files should be dynamic. The main idea is similar to a file-sharing system: one user can share multiple files with another user, and each file may have a different policy. I was thinking of storing the policy files in some sort of DB like MySQL or MongoDB, and the PDP would refer to it and decide whether to grant or deny access based on the request.
I found that the PDP core engine supports MongoDB, as mentioned here.
Here is my PDP configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Testing parameter 'maxPolicySetRefDepth' -->
<pdp xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://authzforce.github.io/core/xmlns/pdp/6.0" xmlns:ext="http://authzforce.github.io/core/xmlns/test/3" version="6.0.0">
<refPolicyProvider id="refPolicyProvider" xsi:type="ext:MongoDBBasedPolicyProvider" serverHost="localhost" serverPort="27017" dbName="testXACML" collectionName="policies" />
<rootPolicyProvider id="rootPolicyProvider" xsi:type="StaticRefBasedRootPolicyProvider">
<policyRef>root-rbac-policyset</policyRef>
</rootPolicyProvider>
</pdp>
So now the question is: how can I store the policy XML files, given that they need to be stored as JSON in MongoDB? I tried to convert XML to JSON using the JSON Maven dependency, but I have a problem converting it back to XML. For example, with a policy XML file like this, it creates a JSON file something like this:
{"Policy": {
"xmlns": "urn:oasis:names:tc:xacml:3.0:core:schema:wd-17",
"Target": "",
"Description": "Policy for Conformance Test IIA001.",
"Version": 1,
"xmlns:xsi": "http://www.w3.org/2001/XMLSchema-instance",
"RuleCombiningAlgId": "urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides",
"Rule": {
"Target": {"AnyOf": [
{"AllOf": {"Match": {
"AttributeValue": {
"DataType": "http://www.w3.org/2001/XMLSchema#string",
"content": "Julius Hibbert"
},
"AttributeDesignator": {
"Category": "urn:oasis:names:tc:xacml:1.0:subject-category:access-subject",
"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
"MustBePresent": false,
"DataType": "http://www.w3.org/2001/XMLSchema#string"
},
"MatchId": "urn:oasis:names:tc:xacml:1.0:function:string-equal"
}}},
{"AllOf": {"Match": {
"AttributeValue": {
"DataType": "http://www.w3.org/2001/XMLSchema#anyURI",
"content": "http://medico.com/record/patient/BartSimpson"
},
"AttributeDesignator": {
"Category": "urn:oasis:names:tc:xacml:3.0:attribute-category:resource",
"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
"MustBePresent": false,
"DataType": "http://www.w3.org/2001/XMLSchema#anyURI"
},
"MatchId": "urn:oasis:names:tc:xacml:1.0:function:anyURI-equal"
}}},
{"AllOf": [
{"Match": {
"AttributeValue": {
"DataType": "http://www.w3.org/2001/XMLSchema#string",
"content": "read"
},
"AttributeDesignator": {
"Category": "urn:oasis:names:tc:xacml:3.0:attribute-category:action",
"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
"MustBePresent": false,
"DataType": "http://www.w3.org/2001/XMLSchema#string"
},
"MatchId": "urn:oasis:names:tc:xacml:1.0:function:string-equal"
}},
{"Match": {
"AttributeValue": {
"DataType": "http://www.w3.org/2001/XMLSchema#string",
"content": "write"
},
"AttributeDesignator": {
"Category": "urn:oasis:names:tc:xacml:3.0:attribute-category:action",
"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
"MustBePresent": false,
"DataType": "http://www.w3.org/2001/XMLSchema#string"
},
"MatchId": "urn:oasis:names:tc:xacml:1.0:function:string-equal"
}}
]}
]},
"Description": "Julius Hibbert can read or write Bart Simpson's medical record.",
"RuleId": "urn:oasis:names:tc:xacml:2.0:conformance-test:IIA1:rule",
"Effect": "Permit"
},
"PolicyId": "urn:oasis:names:tc:xacml:2.0:conformance-test:IIA1:policy"
}}
but when I try to convert it back to XML, it becomes an entirely different XML file. So how can I store the XML file in MongoDB? Also, how can I ensure that the PDP engine core finds the correct policy to evaluate? I saw there is a mention of the JSON adapter in the README, like this, but I am not sure how to implement it properly.
I answered this question on AuthzForce's GitHub. In a nutshell, David is mostly right about the format (XML content stored as a JSON string). More precisely, for the AuthzForce MongoDB policy provider, you have to store policies as shown by the part of the unit test class's setupBeforeClass method that populates the database with test policies. You'll see that we use the Jongo library (using Jackson object mapping behind the curtains) to map PolicyPOJO Java objects to JSON in the MongoDB collection. So from the PolicyPOJO class, you can pretty much infer the storage format of policies in JSON: it is a JSON object with the following fields (key-value pairs):
"id" (string): the Policy(Set) ID
"version" (string): the Policy(Set) version
"type" (string): the Policy(Set) type, i.e. '{urn:oasis:names:tc:xacml:3.0:core:schema:wd-17}Policy' (resp. '{urn:oasis:names:tc:xacml:3.0:core:schema:wd-17}PolicySet') for XACML 3.0 Policy (resp. PolicySet)
"content" (string): the actual Policy(Set)'s XML document as string (plain text)
The XML content is automatically escaped properly by the Java library (Jongo/Jackson) to fit into a JSON string. If you use another library or language, make sure that is the case as well.
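To make that concrete, here is a hedged sketch of populating the collection with Jongo, in the spirit of the unit test mentioned above. The POJO below is a simplified stand-in for AuthzForce's PolicyPOJO, the database and collection names match the PDP configuration from the question, and the policy file name and version are placeholders:

```java
import com.mongodb.MongoClient;
import org.jongo.Jongo;
import org.jongo.MongoCollection;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PolicyStoreSketch {

    // Simplified stand-in for AuthzForce's PolicyPOJO; the field names follow
    // the JSON layout described above ("id", "version", "type", "content").
    public static class PolicyPojo {
        public String id;
        public String version;
        public String type;
        public String content;

        public PolicyPojo() {
        }

        public PolicyPojo(String id, String version, String type, String content) {
            this.id = id;
            this.version = version;
            this.type = type;
            this.content = content;
        }
    }

    public static void main(String[] args) throws Exception {
        // Read the XACML policy document exactly as it is on disk (placeholder file name)
        String policyXml = new String(
                Files.readAllBytes(Paths.get("IIA001Policy.xml")), StandardCharsets.UTF_8);

        MongoClient mongoClient = new MongoClient("localhost", 27017);
        try {
            // Database and collection names taken from the pdp configuration above
            Jongo jongo = new Jongo(mongoClient.getDB("testXACML"));
            MongoCollection policies = jongo.getCollection("policies");

            // Jongo/Jackson escapes the XML content into a JSON string automatically
            policies.save(new PolicyPojo(
                    "urn:oasis:names:tc:xacml:2.0:conformance-test:IIA1:policy",
                    "1.0",
                    "{urn:oasis:names:tc:xacml:3.0:core:schema:wd-17}Policy",
                    policyXml));
        } finally {
            mongoClient.close();
        }
    }
}
```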
There is currently no JSON format for XACML policies; one is under consideration by the OASIS XACML Technical Committee. Bernard Butler at Waterford Institute of Technology has done some initial translation work which might be of value to you.
The only other option I can think of for the time being is to create a JSON wrapper around the policies, e.g.
{
"policy":"the xml policy contents escaped as valid json value or in base64"
}
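As an illustration of that wrapper idea (a sketch only, using the org.json library already mentioned in the question plus Java's built-in Base64; the file name is a placeholder), wrapping and unwrapping could look like this:

```java
import org.json.JSONObject;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

public class PolicyWrapperSketch {
    public static void main(String[] args) throws Exception {
        // Read the XACML policy exactly as it is on disk
        String policyXml = new String(
                Files.readAllBytes(Paths.get("IIA001Policy.xml")), StandardCharsets.UTF_8);

        // Wrap it: the XML is Base64-encoded, so no lossy XML<->JSON conversion is needed
        JSONObject wrapper = new JSONObject();
        wrapper.put("policy", Base64.getEncoder()
                .encodeToString(policyXml.getBytes(StandardCharsets.UTF_8)));
        System.out.println(wrapper.toString(2));

        // Unwrap it later: decoding gives back the original XML byte for byte
        String restoredXml = new String(
                Base64.getDecoder().decode(wrapper.getString("policy")), StandardCharsets.UTF_8);
        System.out.println(restoredXml);
    }
}
```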
I'm writing a tool in Groovy to modify a huge JSON file. I read the file, add a new entry, and save it, but I would like to avoid changes in places I didn't touch.
I'm using new JsonBuilder( o ).toPrettyString() to get pretty-printed JSON output, but this method gives me a result like this:
{
"key": "Foo",
"items": [
{
"Bar1": 1
},
{
"Bar2": 2
}
]
}
when I need to get this:
{
"key": "Foo",
"items":
[
{
"Bar1": 1
},
{
"Bar2": 2
}
]
}
There should be a newline before the [.
This is important to me because otherwise I cannot tell from the Git history what I actually changed.
Do you have any idea how to achieve this?
The JsonBuilder method toPrettyString() delegates directly to JsonOutput.prettyPrint() as follows:
public String toPrettyString() {
    return JsonOutput.prettyPrint(toString());
}
The latter method is not really customizable at all. However, the source is freely available from Maven Central or any mirror. I would suggest finding the source and creating your own variant of the method that behaves the way you would like it to. The source for JsonOutput.prettyPrint() is only about 65 lines long and shouldn't be that hard to change.
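As an alternative to patching JsonOutput itself, a lighter-weight option (my own sketch, not part of Groovy's API) is to post-process the pretty-printed string and push every opening [ that follows a key onto its own line:

```java
import java.util.regex.Pattern;

public class BracketNewlineSketch {

    // Moves an opening '[' that follows a key onto its own line, aligned with
    // the key, e.g. `"items": [` becomes `"items":` followed by `[`.
    static String bracketOnOwnLine(String prettyJson) {
        return Pattern
                .compile("(?m)^(\\s*)(\"[^\"]+\"\\s*:)\\s*\\[$")
                .matcher(prettyJson)
                .replaceAll("$1$2\n$1[");
    }

    public static void main(String[] args) {
        String pretty = "{\n    \"key\": \"Foo\",\n    \"items\": [\n        {\n            \"Bar1\": 1\n        }\n    ]\n}";
        System.out.println(bracketOnOwnLine(pretty));
    }
}
```

Since the transformation only touches lines that end in a key followed by [, the rest of the pretty-printed output stays byte-for-byte identical, which keeps the Git diff small.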
I am creating spreadsheets via the Java API, and there seems to be no method for setting the column width. According to this document: https://developers.google.com/sheets/samples/rowcolumn there seems to be a way via JSON:
{
  "requests": [
    {
      "updateDimensionProperties": {
        "range": {
          "sheetId": sheetId,
          "dimension": "COLUMNS",
          "startIndex": 0,
          "endIndex": 1
        },
        "properties": {
          "pixelSize": 160
        },
        "fields": "pixelSize"
      }
    }
  ]
}
Is there a way to set these via SheetProperties or GridProperties?
I think there is no way to set it using those properties, so the approach specified in the docs is the only available option as of now.
I checked the SpreadsheetProperties reference and GridProperties as well, and neither mentions what you're asking for.
If you plan to use
POST https://sheets.googleapis.com/v4/spreadsheets/spreadsheetId:batchUpdate
from Java, you can always resort to XHR-style raw HTTP requests.
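That said, the same request from the docs can also be expressed with the generated Sheets v4 Java client (google-api-services-sheets) instead of a hand-written HTTP call. A minimal sketch, assuming you already have an authorized Sheets service instance and know the numeric sheetId:

```java
import com.google.api.services.sheets.v4.Sheets;
import com.google.api.services.sheets.v4.model.BatchUpdateSpreadsheetRequest;
import com.google.api.services.sheets.v4.model.DimensionProperties;
import com.google.api.services.sheets.v4.model.DimensionRange;
import com.google.api.services.sheets.v4.model.Request;
import com.google.api.services.sheets.v4.model.UpdateDimensionPropertiesRequest;

import java.io.IOException;
import java.util.Collections;

public class ColumnWidthSketch {

    // Sets the first column of the given sheet to 160 px wide.
    static void setFirstColumnWidth(Sheets sheetsService, String spreadsheetId, int sheetId)
            throws IOException {
        Request request = new Request().setUpdateDimensionProperties(
                new UpdateDimensionPropertiesRequest()
                        .setRange(new DimensionRange()
                                .setSheetId(sheetId)
                                .setDimension("COLUMNS")
                                .setStartIndex(0)
                                .setEndIndex(1))
                        .setProperties(new DimensionProperties().setPixelSize(160))
                        .setFields("pixelSize"));

        BatchUpdateSpreadsheetRequest body = new BatchUpdateSpreadsheetRequest()
                .setRequests(Collections.singletonList(request));
        sheetsService.spreadsheets().batchUpdate(spreadsheetId, body).execute();
    }
}
```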
I am trying to create a new river between MongoDB and Elasticsearch using the Java API. With the REST API it is pretty easy: just make a PUT request with the following JSON:
{
  "type": "mongodb",
  "mongodb": {
    "servers": [
      { "host": "127.0.0.1", "port": 27017 }
    ],
    "options": { "secondary_read_preference": true },
    "db": "test",
    "collection": "collectionTest"
  },
  "index": {
    "name": "testIndex",
    "type": "default"
  }
}
But I am having several problems with the Java API. I am trying to use the CreateIndexRequestBuilder class but I don't know how to specify the params.
Are they custom params? What about source? I'm pretty lost...
Thank you in advance!
You need to add a document with id _meta to the _river index. The type is the name that you want to give to your river. The document to send is a JSON object containing the configuration needed for your river. Besides the custom configuration that depends on the river, the JSON document needs to contain the property type, which holds the name used within the river itself to register the RiverModule; for the MongoDB river it's mongodb. The JSON that you posted is exactly the source that you have to send.
Here is the code that you need:
client.index(Requests.indexRequest("_river").type("my_river").id("_meta").source(source)).actionGet();
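For completeness, here is a rough sketch of the whole call with the source built via XContentBuilder rather than a raw JSON string. It assumes the old transport Client and the long-removed rivers feature, so the exact packages and signatures depend on your Elasticsearch version, and the river name "my_mongodb_river" is a placeholder:

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

import java.io.IOException;

public class MongoRiverSketch {

    static void createRiver(Client client) throws IOException {
        // Same JSON as the REST example, built programmatically
        XContentBuilder source = XContentFactory.jsonBuilder()
            .startObject()
                .field("type", "mongodb")
                .startObject("mongodb")
                    .startArray("servers")
                        .startObject().field("host", "127.0.0.1").field("port", 27017).endObject()
                    .endArray()
                    .startObject("options").field("secondary_read_preference", true).endObject()
                    .field("db", "test")
                    .field("collection", "collectionTest")
                .endObject()
                .startObject("index")
                    .field("name", "testIndex")
                    .field("type", "default")
                .endObject()
            .endObject();

        // The type is the river name; the id must be "_meta"
        client.index(Requests.indexRequest("_river")
                .type("my_mongodb_river")
                .id("_meta")
                .source(source)).actionGet();
    }
}
```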