Expecting 'STRING', got 'EOF' - java

I'm trying to create a config.gateway.json file for a Ubiquiti firewall, which I then need to upload to the device. When I paste the following into jsonlint.com to validate it:
{
  "LOAD_BALANCE": {
    "description": "LOAD_BALANCE",
    "rule": {
      "2000": {
        "action": "modify",
        "modify": {
          "lb-group": "wan2_failover"
        },
        "source": {
          "address": "172.16.7.0/24"
        },
        "interfaces": {
          "bridge": {
            "br0": {
              "aging": "300",
              "bridged-conntrack": "disable",
              "hello-time": "2",
              "max-age": "20",
              "priority": "32768",
              "promiscuous": "disable",
              "stp": "false"
            }
          },
          "load-balance": {
            "group": {
              "wan2_failover": {
                "flush-on-active": "disable",
                "interface": {
                  "br0": {
                    "failover-only": "''"
                  },
                  "eth0": "''"
                },
                "lb-local": "enable",
                "lb-local-metric-change": "enable"
              },
I get the error: Expecting 'STRING', got 'EOF'.
If one of the fine Java gurus could help me I'd appreciate it!

It looks like your JSON has some unbalanced curly brackets: the validator reports 'EOF' (end of input) because it runs out of text while several objects are still open. Maybe this is what you're after:
{
  "LOAD_BALANCE": {
    "description": "LOAD_BALANCE",
    "rule": {
      "2000": {
        "action": "modify",
        "modify": {
          "lb-group": "wan2_failover"
        },
        "source": {
          "address": "172.16.7.0/24"
        },
        "interfaces": {
          "bridge": {
            "br0": {
              "aging": "300",
              "bridged-conntrack": "disable",
              "hello-time": "2",
              "max-age": "20",
              "priority": "32768",
              "promiscuous": "disable",
              "stp": "false"
            }
          },
          "load-balance": {
            "group": {
              "wan2_failover": {
                "flush-on-active": "disable",
                "interface": {
                  "br0": {
                    "failover-only": "''"
                  },
                  "eth0": "''"
                },
                "lb-local": "enable",
                "lb-local-metric-change": "enable"
              }
            }
          }
        }
      }
    }
  }
}
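If you'd rather catch this locally before uploading, you can sanity-check the file with any JSON parser. A minimal sketch using Jackson (the library choice, the file path, and Java 11+ Files.readString are assumptions, not part of the original question):

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ValidateConfig {
    public static void main(String[] args) throws Exception {
        // Read the candidate config file (path is an assumption)
        String json = Files.readString(Paths.get("config.gateway.json"));
        try {
            new ObjectMapper().readTree(json); // throws if the JSON is malformed
            System.out.println("JSON is well-formed");
        } catch (JsonProcessingException e) {
            // The location pinpoints where parsing gave up, e.g. the very
            // end of the input when curly brackets are unbalanced
            System.out.println("Invalid JSON: " + e.getOriginalMessage()
                    + " at " + e.getLocation());
        }
    }
}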


How to disable ES highlighting of synonyms?

I only want to highlight the words I actually search for in the query, not their synonyms, but I still want ES to include synonym matches in the search results. Here is an example:
PUT /my_test_index/
{
  "settings": {
    "analysis": {
      "filter": {
        "native_synonym": {
          "type": "synonym",
          "ignore_case": true,
          "expand": true,
          "synonyms": [
            "apple,fruit"
          ]
        }
      },
      "analyzer": {
        "test_analyzer": {
          "tokenizer": "whitespace",
          "filter": [
            "native_synonym"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "desc": {
        "type": "text",
        "analyzer": "test_analyzer"
      }
    }
  }
}

POST /my_test_index/_doc
{
  "desc": "apple"
}

POST /my_test_index/_doc
{
  "desc": "fruit"
}

GET /my_test_index/_search
{
  "query": {
    "match": {
      "desc": "apple"
    }
  },
  "highlight": {
    "fields": {
      "desc": {}
    }
  }
}
However, ES highlights both fruit and apple, while I only want apple to be highlighted.
Does anyone know how to solve this? Thanks in advance :)
The relevant part of the search response:
"hits": [
{
"_index": "my_test_index",
"_type": "_doc",
"_id": "RMyZrXAB7JsJEwsbVF33",
"_score": 0.29171452,
"_source": {
"desc": "apple"
},
"highlight": {
"desc": [
"<em>apple</em>"
]
}
},
{
"_index": "my_test_index",
"_type": "_doc",
"_id": "RcyarXAB7JsJEwsboF2V",
"_score": 0.29171452,
"_source": {
"desc": "fruit"
},
"highlight": {
"desc": [
"<em>fruit</em>"
]
}
}
]
You can add a highlight query that behaves differently from your actual search query. All you need then is a field indexed without the synonyms, and you should be able to get what you want:
PUT /my_test_index/
{
  "settings": {
    "analysis": {
      "filter": {
        "native_synonym": {
          "type": "synonym",
          "ignore_case": true,
          "expand": true,
          "synonyms": [
            "apple,fruit"
          ]
        }
      },
      "analyzer": {
        "test_analyzer": {
          "tokenizer": "whitespace",
          "filter": [
            "native_synonym"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "desc": {
        "type": "text",
        "analyzer": "test_analyzer",
        "fields": {
          "raw": {
            "type": "text",
            "analyzer": "whitespace"
          }
        }
      }
    }
  }
}

GET /my_test_index/_search
{
  "query": {
    "match": {
      "desc": "apple"
    }
  },
  "highlight": {
    "fields": {
      "desc.raw": {
        "highlight_query": {
          "match": {
            "desc.raw": "apple"
          }
        }
      }
    }
  }
}
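With this mapping, the match on desc still returns both documents (the synonym expansion still applies there), but the highlight is driven by desc.raw, which has no synonym filter, so only the literal term should be wrapped. A sketch of the expected hit fragment (IDs and scores will differ):

"highlight": {
  "desc.raw": [
    "<em>apple</em>"
  ]
}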

Apache Velocity: remove key/value from json

I have the following JSON:
{
  "Id": "xxx",
  "Type": "Transaction.Create",
  "Payload": {
    "result": 2,
    "description": "Pending",
    "body": {
      "redirect": {
        "url": "xxx",
        "fields": {
          "MD": "8a829449620619e80162252adeb66a39"
        }
      },
      "card": {
        "expiryMonth": "1",
        "expiryYear": "2033"
      },
      "order": {
        "amount": 1
      }
    }
  }
}
and I want to remove the card info from it, like this:
{
  "Id": "xxx",
  "Type": "Transaction.Create",
  "Payload": {
    "result": 2,
    "description": "Pending",
    "body": {
      "redirect": {
        "url": "xxx",
        "fields": {
          "MD": "8a829449620619e80162252adeb66a39"
        }
      },
      "order": {
        "amount": 1
      }
    }
  }
}
How can I do this with Apache Velocity?
What works is:
#set($content = $util.urlEncode($input.json('$')))
#set($new = $content.replaceAll("2033","2055"))
Action=SendMessage&MessageBody={"body": "$new","Event-Signature": "$util.urlEncode($input.params('Event-Signature'))"}
This gives me:
{
  "Id": "xxx",
  "Type": "Transaction.Create",
  "Payload": {
    "result": 2,
    "description": "Pending",
    "body": {
      "redirect": {
        "url": "xxx",
        "fields": {
          "MD": "8a829449620619e80162252adeb66a39"
        }
      },
      "card": {
        "expiryMonth": "1",
        "expiryYear": "2055"
      },
      "order": {
        "amount": 1
      }
    }
  }
}
But now I also want to remove the card part, and this does not work:
#set($content = $util.urlEncode($input.json('$')))
#set($new = $content.delete("$.Payload.body.card"))
Action=SendMessage&MessageBody={"body": "$new","Event-Signature": "$util.urlEncode($input.params('Event-Signature'))"}
What am I doing wrong?
The main goal is to transform a mapping template in API Gateway for a webhook. The webhook contains too much information, and we want to remove part of the JSON POST payload.
Try using the below (note that card sits under Payload.body, not directly under Payload):
#set($dummy = $content.Payload.body.remove("card"))
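For context, remove() is only available if $content is an object graph rather than a string: $input.json('$') returns a plain JSON string (and urlEncode keeps it a string), which is why neither delete nor remove works in the attempt above. A minimal sketch of the relevant part of the template under that assumption, using $input.path('$') instead:

## $input.path('$') returns a modifiable object, unlike $input.json('$')
#set($content = $input.path('$'))
## remove the nested card object; remove() returns the removed value,
## so assign it to a throwaway variable to keep it out of the output
#set($dummy = $content.Payload.body.remove("card"))

Note that you still have to render the modified object back into the MessageBody as JSON; how $content prints depends on the runtime, so verify the output in API Gateway's mapping-template test console.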

Elasticsearch: creating an index with source and settings via the REST Java client

I'm trying to create an index following this guide: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/master/java-rest-high-create-index.html#_providing_the_whole_source
The problem is that the index is not created properly. It looks like the whole settings section, as well as the completion type, is ignored.
My JSON file:
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "analysis": {
      "filter": {},
      "analyzer": {
        "keyword_analyzer": {
          "filter": [
            "lowercase",
            "asciifolding",
            "trim"
          ],
          "char_filter": [],
          "type": "custom",
          "tokenizer": "keyword"
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "first": {
          "type": "text",
          "fields": {
            "keywordstring": {
              "type": "text",
              "analyzer": "keyword_analyzer"
            },
            "completion": {
              "type": "completion"
            }
          },
          "analyzer": "standard"
        },
        "second": {
          "type": "text",
          "fields": {
            "keywordstring": {
              "type": "text",
              "analyzer": "keyword_analyzer"
            },
            "completion": {
              "type": "completion"
            }
          },
          "analyzer": "standard"
        },
        "third": {
          "type": "text",
          "fields": {
            "keywordstring": {
              "type": "text",
              "analyzer": "keyword_analyzer"
            },
            "completion": {
              "type": "completion"
            }
          },
          "analyzer": "standard"
        },
        "fourth": {
          "type": "text",
          "fields": {
            "keywordstring": {
              "type": "text",
              "analyzer": "keyword_analyzer"
            },
            "completion": {
              "type": "completion"
            }
          },
          "analyzer": "standard"
        }
      }
    }
  }
}
Java code:
CreateIndexRequest createIndexRequest = new CreateIndexRequest(ESClientConfiguration.INDEX_NAME);
URL url = Resources.getResource(TERYT_INDEX_CONFIGURATION_FILE_NAME);
Try.of(() -> Resources.toString(url, Charsets.UTF_8))
        .map(jsonIndexConfiguration -> createIndexRequest.source(jsonIndexConfiguration, XContentType.JSON))
        .get();
createIndexRequest.setTimeout(TimeValue.timeValueMinutes(2));
Try.of(() -> client.indices().create(createIndexRequest, RequestOptions.DEFAULT))...
The index is created, but when I look at the index metadata, it looks completely wrong:
{
  "state": "open",
  "settings": {
    "index": {
      "creation_date": "1556379012380",
      "number_of_shards": "1",
      "number_of_replicas": "1",
      "uuid": "L5fmkrjeQ6eKmuDyZ3MP3g",
      "version": {
        "created": "7000099"
      },
      "provided_name": "my_index"
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "first": {
          "type": "text",
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          }
        },
        "second": {
          "type": "text",
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          }
        },
        "third": {
          "type": "text",
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          }
        },
        "fourth": {
          "type": "text",
          "fields": {
            "keyword": {
              "ignore_above": 256,
              "type": "keyword"
            }
          }
        }
      }
    }
  },
  "aliases": [],
  "primary_terms": {
    "0": 1
  },
  "in_sync_allocations": {
    "0": [
      "Cx6tBeohR8mzbTO74dwsCw",
      "FcTUhpb_SL2LiaEyy_uwkg"
    ]
  }
}
There is only 1 shard instead of 3, and I also don't see any information about the completion type. Could someone tell me what I'm doing wrong here?
I think that this line:
Try.of(() -> client.indices().create(createIndexRequest, RequestOptions.DEFAULT))...
is hiding an important exception.
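To surface that exception instead of losing it, you can hook the failure case; a minimal sketch assuming Vavr's Try, as used in the question:

Try.of(() -> client.indices().create(createIndexRequest, RequestOptions.DEFAULT))
        // print the real cause instead of silently dropping it
        .onFailure(e -> System.err.println("Index creation failed: " + e))
        .get(); // rethrows the underlying exception on failure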
Here you are using Elasticsearch 7.0.0, which no longer allows giving a "type" name in your mapping.
Instead of
"mappings": {
"my_type": {
"properties": {
You should write:
"mappings": {
"properties": {
Because of the exception, and because you are probably indexing some documents after the index creation, the default index settings and mapping get applied.
That explains what you are seeing.
You need to fix your index settings first.
I'd recommend doing that in the Kibana Dev Console.
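For instance, a trimmed sketch of the corrected request in the Dev Console (only the first field is shown, the others follow the same pattern; the index name my_index is taken from the metadata above):

PUT /my_index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "analysis": {
      "analyzer": {
        "keyword_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase", "asciifolding", "trim"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "first": {
        "type": "text",
        "analyzer": "standard",
        "fields": {
          "keywordstring": { "type": "text", "analyzer": "keyword_analyzer" },
          "completion": { "type": "completion" }
        }
      }
    }
  }
}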

How to stop getting back AutoResponded code for an Envelope

I know that, in the case of an invalid email address, DocuSign sends back AutoResponded as the envelope status, and whenever I get AutoResponded back, some of my services break. Is there a way to turn this behavior off for my DocuSign account?
In simple words: I just want to be able to ignore a status of AutoResponded.
Thanks
You can exclude the AutoResponded recipientEventStatusCode from the eventNotification.
Here is a sample CreateEnvelope request which includes all eventNotifications. You can remove the events that you do not want to receive.
{
  "eventNotification": {
    "url": "[Callback Url]",
    "loggingEnabled": "true",
    "requireAcknowledgment": "true",
    "envelopeEvents": [
      { "envelopeEventStatusCode": "Delivered" },
      { "envelopeEventStatusCode": "Completed" },
      { "envelopeEventStatusCode": "Declined" },
      { "envelopeEventStatusCode": "Voided" },
      { "envelopeEventStatusCode": "Sent" }
    ],
    "recipientEvents": [
      { "recipientEventStatusCode": "Sent" },
      { "recipientEventStatusCode": "Delivered" },
      { "recipientEventStatusCode": "Completed" },
      { "recipientEventStatusCode": "Declined" },
      { "recipientEventStatusCode": "AuthenticationFailed" },
      { "recipientEventStatusCode": "AutoResponded" }
    ]
  },
  "recipients": {
    "signers": [
      {
        "name": "john smith",
        "email": "johnsmith@foo.com",
        "recipientId": "1",
        "routingOrder": "1"
      }
    ]
  },
  "documents": [
    {
      "documentId": "1",
      "name": "Agreement",
      "fileExtension": "pdf",
      "documentBase64": "[Document Bytes]"
    }
  ],
  "status": "sent",
  "emailSubject": "Envelope for auto responded status"
}
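If you build envelopes with the DocuSign Java SDK, the same idea looks roughly like the sketch below; it assumes the generated fluent setters of the docusign-esign client, and the callback URL is a placeholder:

import com.docusign.esign.model.EnvelopeDefinition;
import com.docusign.esign.model.EnvelopeEvent;
import com.docusign.esign.model.EventNotification;
import com.docusign.esign.model.RecipientEvent;
import java.util.Arrays;

// Build an eventNotification whose recipientEvents list simply omits
// AutoResponded, so your listener never receives that status.
EventNotification notification = new EventNotification()
        .url("https://example.com/docusign/callback") // hypothetical callback URL
        .loggingEnabled("true")
        .requireAcknowledgment("true")
        .envelopeEvents(Arrays.asList(
                new EnvelopeEvent().envelopeEventStatusCode("Sent"),
                new EnvelopeEvent().envelopeEventStatusCode("Delivered"),
                new EnvelopeEvent().envelopeEventStatusCode("Completed"),
                new EnvelopeEvent().envelopeEventStatusCode("Declined"),
                new EnvelopeEvent().envelopeEventStatusCode("Voided")))
        .recipientEvents(Arrays.asList(
                new RecipientEvent().recipientEventStatusCode("Sent"),
                new RecipientEvent().recipientEventStatusCode("Delivered"),
                new RecipientEvent().recipientEventStatusCode("Completed"),
                new RecipientEvent().recipientEventStatusCode("Declined"),
                new RecipientEvent().recipientEventStatusCode("AuthenticationFailed")));

EnvelopeDefinition envelope = new EnvelopeDefinition();
envelope.setEventNotification(notification);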

Elasticsearch returns an error about exceeding the fielddata limit

I have a problem with sorting: it works, but only for the price field. When I try to sort by start_date, end_date, uid, cat, or title, I get a message about exceeding the fielddata limit:
Data too large, data for ["name of field here"] would be larger than limit of [19798897459/18.4gb]
I do not know why this is happening; the code looks correct to me. Here is the setup:
Mapping:
"auctions": {
"_all": { "enabled": false },
"properties": {
"cat": { "store": true, "type": "long" },
"curr": { "index": "not_analyzed", "store": true, "type": "string" },
"end_date": { "store": true, "type": "long" },
"price": { "store": true, "type": "long" },
"start_date": { "store": true, "type": "long" },
"tcat": { "store": true, "type": "long" },
"title": { "store": true, "type": "string" },
"uid": { "store": true, "type": "long" }
}
},
Request:
/search?uids=335,547&title=Karta&orderBy=uid&orderDir=asc
Method:
private NativeSearchQueryBuilder getSearchQuery(AuctionIndexSearchParams searchParams, Pageable pageable) {
    final List<FilterBuilder> filters = Lists.newArrayList();
    final NativeSearchQueryBuilder searchQuery = new NativeSearchQueryBuilder();
    Optional.ofNullable(searchParams.getCategoryId()).ifPresent(v -> filters.add(boolFilter().must(termFilter("cat", v))));
    Optional.ofNullable(searchParams.getCurrency()).ifPresent(v -> filters.add(boolFilter().must(termFilter("curr", v))));
    Optional.ofNullable(searchParams.getTreeCategoryId()).ifPresent(v -> filters.add(boolFilter().must(termFilter("tcat", v))));
    Optional.ofNullable(searchParams.getUid()).ifPresent(v -> filters.add(boolFilter().must(termFilter("uid", v))));
    final BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
    // access for many uids
    if (searchParams.getUids() != null) {
        if (searchParams.getItemId() != null || searchParams.getTitle() != null) {
            Optional.ofNullable(searchParams.getUids().split(",")).ifPresent(v -> {
                filters.add(boolFilter().must(termsFilter("uid", v)));
            });
        } else {
            for (String user : searchParams.getUids().split(",")) {
                boolQueryBuilder.should(queryStringQuery(user).field("uid"));
            }
        }
    }
    // access for many categories
    if (searchParams.getCategories() != null) {
        Optional.ofNullable(searchParams.getCategories().split(",")).ifPresent(v -> {
            filters.add(boolFilter().must(termsFilter("cat", v)));
        });
    }
    if (searchParams.getItemId() != null) {
        boolQueryBuilder.must(queryStringQuery(searchParams.getItemId()).field("_id"));
    }
    if (Optional.ofNullable(searchParams.getTitle()).isPresent()) {
        boolQueryBuilder.must(queryStringQuery(searchParams.getTitle()).analyzeWildcard(true).field("title"));
    }
    if (Optional.ofNullable(searchParams.getStartDateFrom()).isPresent()
            || Optional.ofNullable(searchParams.getStartDateTo()).isPresent()) {
        filters.add(rangeFilter("start_date").from(searchParams.getStartDateFrom()).to(searchParams.getStartDateTo()));
    }
    if (Optional.ofNullable(searchParams.getEndDateFrom()).isPresent()
            || Optional.ofNullable(searchParams.getEndDateTo()).isPresent()) {
        filters.add(rangeFilter("end_date").from(searchParams.getEndDateFrom()).to(searchParams.getEndDateTo()));
    }
    if (Optional.ofNullable(searchParams.getPriceFrom()).isPresent()
            || Optional.ofNullable(searchParams.getPriceTo()).isPresent()) {
        filters.add(rangeFilter("price").from(searchParams.getPriceFrom()).to(searchParams.getPriceTo()));
    }
    searchQuery.withQuery(boolQueryBuilder);
    FilterBuilder[] filterArr = new FilterBuilder[filters.size()];
    filterArr = filters.toArray(filterArr);
    searchQuery.withFilter(andFilter(filterArr));
    if (searchParams.getOrderBy() != null && searchParams.getOrderDir() != null) {
        if (searchParams.getOrderDir().toLowerCase().equals("asc")) {
            searchQuery.withSort(SortBuilders.fieldSort(searchParams.getOrderBy()).order(SortOrder.ASC));
        } else {
            searchQuery.withSort(SortBuilders.fieldSort(searchParams.getOrderBy()).order(SortOrder.DESC));
        }
    }
    if (pageable != null) {
        searchQuery.withPageable(pageable);
    }
    System.out.println(searchQuery.build().getQuery());
    System.out.println(searchQuery.build().getFilter());
    System.out.println(searchQuery.build().getSort());
    return searchQuery;
}
System.out.println(searchQuery.build().getQuery());
{
  "bool": {
    "must": {
      "query_string": {
        "query": "card",
        "fields": ["title"],
        "analyze_wildcard": true
      }
    }
  }
}
System.out.println(searchQuery.build().getFilter());
{
  "and": {
    "filters": [{
      "bool": {
        "must": {
          "terms": {
            "uid": ["335", "547"]
          }
        }
      }
    }]
  }
}
System.out.println(searchQuery.build().getSort());
null
Any ideas what might cause this exception?
I should add that I've already tried the solutions from:
FIELDDATA Data is too large
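For context, what that linked answer tunes is the fielddata circuit breaker. A sketch of the setting (it only raises the ceiling rather than shrinking fielddata itself, and the exact property name can differ across 1.x releases):

PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}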
But the effect was even worse: afterwards, no query ran quickly at all.
I will be extremely grateful for any help!
/_stats/fielddata?fields=*
{
  "_shards": {
    "total": 10,
    "successful": 5,
    "failed": 0
  },
  "_all": {
    "primaries": {
      "fielddata": {
        "memory_size_in_bytes": 19466671904,
        "evictions": 0,
        "fields": {
          "_id": { "memory_size_in_bytes": 0 },
          "cat": { "memory_size_in_bytes": 0 },
          "price": { "memory_size_in_bytes": 3235221240 },
          "title": { "memory_size_in_bytes": 16231450664 }
        }
      }
    },
    "total": {
      "fielddata": {
        "memory_size_in_bytes": 19466671904,
        "evictions": 0,
        "fields": {
          "_id": { "memory_size_in_bytes": 0 },
          "cat": { "memory_size_in_bytes": 0 },
          "price": { "memory_size_in_bytes": 3235221240 },
          "title": { "memory_size_in_bytes": 16231450664 }
        }
      }
    }
  },
  "indices": {
    "allek": {
      "primaries": {
        "fielddata": {
          "memory_size_in_bytes": 19466671904,
          "evictions": 0,
          "fields": {
            "_id": { "memory_size_in_bytes": 0 },
            "cat": { "memory_size_in_bytes": 0 },
            "price": { "memory_size_in_bytes": 3235221240 },
            "title": { "memory_size_in_bytes": 16231450664 }
          }
        }
      },
      "total": {
        "fielddata": {
          "memory_size_in_bytes": 19466671904,
          "evictions": 0,
          "fields": {
            "_id": { "memory_size_in_bytes": 0 },
            "cat": { "memory_size_in_bytes": 0 },
            "price": { "memory_size_in_bytes": 3235221240 },
            "title": { "memory_size_in_bytes": 16231450664 }
          }
        }
      }
    }
  }
}
Edit:
I solved the problem as follows: after some investigation, it turned out that I'm using version 1.7. In the documentation I found that doc_values must be set to true in the mapping if you want to sort or aggregate on a field, and that analyzed string fields need an additional not_analyzed multifield.
So I changed the mapping to something more or less like this:
{
  "_all": { "enabled": false },
  "properties": {
    "cat": { "store": true, "type": "long", "doc_values": true },
    "curr": { "index": "not_analyzed", "store": true, "type": "string", "doc_values": true },
    "end_date": { "store": true, "type": "long", "doc_values": true },
    "price": { "store": true, "type": "long", "doc_values": true },
    "start_date": { "store": true, "type": "long", "doc_values": true },
    "tcat": { "store": true, "type": "long", "doc_values": true },
    "title": {
      "store": true,
      "type": "string",
      "fields": {
        "raw": {
          "type": "string",
          "index": "not_analyzed",
          "ignore_above": 256,
          "doc_values": true
        }
      }
    },
    "uid": { "store": true, "type": "long", "doc_values": true }
  }
}
Sorting works, but it slowed down searching across the whole system somewhat; the documentation suggests by about 10-20%.
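One more note: with this mapping, sorting on the string field has to target the not_analyzed subfield, since that is where the doc_values live. A minimal sketch against the same query builder as above:

// Sort on the not_analyzed, doc_values-backed subfield instead of
// the analyzed "title" field itself
searchQuery.withSort(SortBuilders.fieldSort("title.raw").order(SortOrder.ASC));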
You should also remember to reindex your data!
Thanks!
