I have a template for a Data Factory pipeline with a destination dataset sink, which I want to deploy using Resource Manager and the Azure SDK for Java:
{
"name": "[concat(parameters('factoryName'), '/', parameters('pipeline_pipelineConfiguration_pipelineTemplate_destinationDataset01'))]",
"type": "Microsoft.DataFactory/factories/datasets",
"apiVersion": "2018-06-01",
"properties": {
"linkedServiceName": {
"referenceName": "[parameters('pipeline_pipelineConfiguration_pipelineTemplate_destinationLinkedService01')]",
"type": "LinkedServiceReference"
},
"annotations": [],
"type": "DelimitedText",
"typeProperties": {
"location": {
"type": "AzureBlobStorageLocation",
"fileName": {
"value": "[concat('#concat(utcnow(\'yyyy-MM-dd\'),\'-',parameters('pipeline_pipelineConfiguration_destination'),',.txt\'')]",
"type": "Expression"
},
"container": ""
},
"columnDelimiter": ",",
"escapeChar": "\\",
"firstRowAsHeader": true,
"quoteChar": "\""
},
"schema": []
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/', parameters('pipeline_pipelineConfiguration_pipelineTemplate_DestinationLinkedService01'))]"
]
}
I get this exception:
com.fasterxml.jackson.core.JsonParseException: Unrecognized character escape ''' (code 39)
most probably because of the value expression for fileName.
What would be the best way to provide a file name whose date part is calculated at export time and whose other part is taken from a parameter?
Should I use a variable and compute its value there,
or use the replace() function?
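One sketch of a working expression, assuming the destination parameter is resolved at deployment time: in ARM template strings a literal single quote is escaped by doubling it (''), not with \', which is exactly the escape Jackson rejects. The ARM concat below builds the ADF runtime expression @concat(utcnow('yyyy-MM-dd'),'-<destination>.txt'):

```json
"fileName": {
    "value": "[concat('@concat(utcnow(''yyyy-MM-dd''),''-', parameters('pipeline_pipelineConfiguration_destination'), '.txt'')')]",
    "type": "Expression"
}
```

Because the parameter is known when the template is deployed, it can be concatenated at the ARM level, leaving only the utcnow() call for Data Factory to evaluate at run time.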
Related
I'm trying to remove the key and value below from a JSON string in Java. I couldn't really crack the pattern. Can anyone help me find what I'm doing wrong here?
"appointment_request_id": "77bl5ii169daj0abqaowl0ggmnwxdk1219mug023", // (the trailing comma is included)
String newTransformedJsonString = jsonString.replaceAll("\"appointment_request_id\":\".*\",","");
I think I need a different quantifier to make sure the starting and ending " of the value are matched correctly. I tried ?, and surrounding the " as ["]. No luck.
The value will never be empty.
The value will have spaces trimmed.
Assume the value can have any character.
{
"appointment_request_id": "77bl5ii169daj0abqaowl0ggmnwxdk1219mug023",
"app_spec_version": "0.0.61-5",
"previous_invoice_ids": [
"18000-A-qa4wl0kvka",
"18101-A-y49daj0ppp"
],
"contracts": [
{
"name": "bcbs.patient",
"definitions": [
{
"base_path": "/patient/v1",
"swagger": {
"swagger": "2.0",
"info": {
"version": "1.0.0",
"title": "patient-v1"
},
"basePath": "",
"tags": [
{
"name": "patient-v1",
"description": "PatientServiceResource"
}
],
"schemes": [
"http"
],
"webpages": {
"/patient/v1/insurace": {
"get": {
"tags": [
"patient-v1"
],
"summary": "Returnsanerror,butwaitsbeforedoingso.",
"operationId": "getInsurance",
"produces": [
"application/json"
],
"parameters": [
{
"name": "statusCode",
"in": "query",
"description": "",
"required": false,
"type": "integer",
"format": "int32"
}
]
}
}
}
}
}
]
}
]
}
This may be helpful:
How to make a regex to replace the value of a key in a json file
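The core problem is that `.*` is greedy: in `\"appointment_request_id\":\".*\",` it runs to the last `\",` it can reach, swallowing neighbouring keys. Matching the value with `[^\"]*` (anything but a quote) stops at the value's own closing quote, and `\\s*` tolerates pretty-printed whitespace around the colon. A minimal sketch (the `removeKey` helper is hypothetical, and for anything beyond quick string surgery a real JSON parser is the safer tool):

```java
public class RemoveJsonKey {

    // Strips a string-valued key from a JSON string.
    // [^"]* matches the value without running past its closing quote,
    // unlike the greedy .* which can swallow everything up to the last "," .
    static String removeKey(String json, String key) {
        return json.replaceAll(
            "\"" + key + "\"\\s*:\\s*\"[^\"]*\"\\s*,?", "");
    }

    public static void main(String[] args) {
        String json = "{\"appointment_request_id\": \"77bl5ii169daj0abqaowl0ggmnwxdk1219mug023\", "
                    + "\"app_spec_version\": \"0.0.61-5\"}";
        System.out.println(removeKey(json, "appointment_request_id"));
    }
}
```

Note the caveat in the question: if a value can contain an escaped quote, the character-class approach breaks down, which is another argument for deserializing, removing the field, and re-serializing instead.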
I need a JSONPath expression for the JSON below,
to find the appId corresponding to a given name.
[
{
"name": "a0vudemo",
"appId": "80af20be-eddf-4b20-8d82"
},
{
"name": "a1app",
"appId": "55507d25-d025-4454-9443"
},
{
"name": "a1appswan",
"appId": "86cfa844-cf58-48b7-b56d"
}
]
.name=="a1app"
$..[?(@.name == 'a1app')].appId
similar question
I'm trying to create SQL tables from a JSON file written following the OpenAPI Specification. Here is an example of an input file I must convert:
"definitions": {
"Order": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"petId": {
"type": "integer",
"format": "int64"
},
"quantity": {
"type": "integer",
"format": "int32"
},
"shipDate": {
"type": "string",
"format": "date-time"
},
"status": {
"type": "string",
"description": "Order Status",
"enum": [
"placed",
"approved",
"delivered"
]
},
"complete": {
"type": "boolean",
"default": false
}
},
"xml": {
"name": "Order"
}
},
"Category": {
"type": "object",
"properties": {
"id": {
"type": "integer",
"format": "int64"
},
"name": {
"type": "string"
}
},
"xml": {
"name": "Category"
}
},
My aim is to create two tables named "Order" and "Category", whose columns must be the ones listed in the "properties" field. I'm using Java.
The input file is mutable, so I used Gson to read it. I managed to get an output like this:
CREATE TABLE ORDER
COLUMNS:
id->
type: integer
format: int64
petId->
type: integer
format: int64
quantity->
type: integer
format: int32
shipDate->
type: string
format: date-time
status->
type: string
description: Order Status
Possibilities:
-placed
-approved
-delivered
complete->
type: boolean
default: false
CREATE TABLE CATEGORY
COLUMNS:
id->
type: integer
format: int64
name->
type: string
I'm stuck here, trying to convert the "type" and "format" fields into a type that can be read by PostgreSQL or MySQL. Furthermore, it is hard to work directly on the code to get a readable SQL string due to the nesting, so I thought it might be a good idea to work on this output and "translate" it to SQL. Is there any class/package that could help me read a file like this? I'm trying to avoid thousands of IF ELSE conditions. Thank you.
Your assignment involves two phases:
One is "parsing" the given JSON object and understanding the content.
The second is "translating" the parsed content into a working SQL query.
Here your Java program should work as a kind of translation engine.
For parsing JSON objects, many Java libraries are available.
To translate the parsed JSON into an SQL query, you can simply use basic String manipulation methods.
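For the type mapping itself, a small lookup table keyed on "type:format" avoids the cascade of IF ELSE conditions. A minimal sketch (the class name and the chosen SQL column types are illustrative, not part of any standard mapping):

```java
import java.util.Map;

public class OpenApiSqlTypes {

    // Hypothetical lookup table: OpenAPI "type:format" -> SQL column type.
    // An empty format slot covers properties that declare no "format".
    private static final Map<String, String> TYPES = Map.of(
        "integer:int32",   "INTEGER",
        "integer:int64",   "BIGINT",
        "string:",         "VARCHAR(255)",
        "string:date-time","TIMESTAMP",
        "boolean:",        "BOOLEAN"
    );

    static String sqlType(String type, String format) {
        String key = type + ":" + (format == null ? "" : format);
        String sql = TYPES.get(key);
        if (sql == null) {
            throw new IllegalArgumentException("Unmapped type/format: " + key);
        }
        return sql;
    }

    public static void main(String[] args) {
        // e.g. the "shipDate" property from the Order definition
        System.out.println("shipDate " + sqlType("string", "date-time"));
    }
}
```

Enum-typed properties (like "status" above) would need an extra branch, e.g. emitting a CHECK constraint or a native ENUM depending on the target database.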
I am using Mule to transform some web service responses in my project, and currently I am using the DataWeave message transformer.
JSON that i should transform :
{
"odata.metadata": "http://mchwtatmsdb/Across/CrossTank/api/v1/$metadata#Translations",
"value": [
{
"SourceSentence": {
"Id": 2750901,
"Text": "Refrigerator:",
"Language": 1033
},
"TargetSentence": {
"Id": 2750902,
"Text": "Kühlschrank:",
"Language": 1031
},
"Id": 2264817,
"Similarity": 100,
"CreationDate": "2009-02-25T12:56:15",
"Creator": "41e8d49d-0de7-4a96-a220-af96d94fe4b0",
"ModificationDate": "2009-02-25T12:56:15",
"Modificator": "00000000-0000-0000-0000-000000000000",
"State": "SmartInserted",
"Note": ""
},
{
"SourceSentence": {
"Id": 2750906,
"Text": "Refrigerator*",
"Language": 1033
},
"TargetSentence": {
"Id": 2750907,
"Text": "Kühlschrank*",
"Language": 1031
},
"Id": 2264822,
"Similarity": 100,
"CreationDate": "2009-02-25T12:55:46",
"Creator": "41e8d49d-0de7-4a96-a220-af96d94fe4b0",
"ModificationDate": "2009-02-25T12:55:46",
"Modificator": "00000000-0000-0000-0000-000000000000",
"State": "SmartInserted",
"Note": ""
}
]
}
I am basically using the transformer and defining metadata for the JSON files included in the project.
So the transformer part is quite simple:
<dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
"odata.metadata": payload."odata.metadata",
value: payload.value map ((value , indexOfValue) -> {
SourceSentence: {
Id: value.SourceSentence.Id,
Text: value.SourceSentence.Text as :string,
Language: value.SourceSentence.Language
},
TargetSentence: {
Id: value.TargetSentence.Id,
Text: value.TargetSentence.Text,
Language: value.TargetSentence.Language
},
Similarity: value.Similarity
})
}]]></dw:set-payload>
The transformation runs as expected and extracts the fields I've set in the DataWeave transformer. But after the transformer is applied to the JSON string, the encoding somehow changes and the output doesn't show special characters. For example:
{
"odata.metadata": "http://mchwtatmsdb/Across/CrossTank/api/v1/$metadata#Translations",
"value": [
{
"SourceSentence": {
"Id": 2750901,
"Text": "Refrigerator:",
"Language": 1033
},
"TargetSentence": {
"Id": 2750902,
"Text": "K252hlschrank:",
"Language": 1031
},
"Similarity": 100
},
{
"SourceSentence": {
"Id": 2750906,
"Text": "Refrigerator*",
"Language": 1033
},
"TargetSentence": {
"Id": 2750907,
"Text": "K252hlschrank*",
"Language": 1031
},
"Similarity": 100
}
]
}
The "Text": "K252hlschrank*" part of the string shows the "ü" character as "252". I tried to run the project on both Windows and Linux. On Linux the character is shown as "\u00", so I think this is somehow an OS-related problem. I've tried several things to fix it:
Tried to change project properties, set encoding to "UTF-8". It didn't work.
Tried to change run configuration, set encoding to "UTF-8". It didn't work.
Tried to give -Dfile.encoding="UTF-8" parameter into run parameters of Java, again it didn't work.
What is the source of this problem? Are the transformers directly using the operating system's encoding? Without the transformation, the main JSON file shows "ü" correctly, with no encoding problem.
I solved this problem by changing my Windows language settings from Turkish to English (United Kingdom)... I don't know how that has an effect, but it did the magic.
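If anyone hits the same issue, it may also be worth pinning the encoding on the DataWeave output directive itself rather than relying on OS settings (a sketch, assuming DataWeave 1.0, where writer properties go on the %output line):

```
%dw 1.0
%output application/json encoding="UTF-8"
---
payload
```

That way the JSON writer's charset is fixed in the transform instead of falling back to the JVM default, which is what -Dfile.encoding was meant to influence.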
I'm trying to load contacts from a given account using the REST API of SugarCRM in Java. This "kind of" works... My test scenario should return two records. It does, but the records (a) do not contain all fields and (b) all the returned fields have empty values:
See the entry_list part of the returned JSON:
{"entry_list": [
{
"id": null,
"module_name": "Contacts",
"name_value_list": {
"name": {
"name": "name",
"value": ""
},
"deleted": {
"name": "deleted",
"value": 0
},
"do_not_call": {
"name": "do_not_call",
"value": "0"
}
}
},
{
"id": null,
"module_name": "Contacts",
"name_value_list": {
"name": {
"name": "name",
"value": ""
},
"deleted": {
"name": "deleted",
"value": 0
},
"do_not_call": {
"name": "do_not_call",
"value": "0"
}
}
}
]
}
Here is what I set as rest_data in my request:
rest_data.put("session", sessionId);
rest_data.put("module_name", moduleName);
rest_data.put("module_id", sourceId);
rest_data.put("link_field_name", relationField);
rest_data.put("related_module_query", "");
rest_data.put("related_fields", Arrays.asList());
rest_data.put("related_module_link_name_to_fields_array", Arrays.asList());
rest_data.put("offset", 0);
rest_data.put("order_by", "name ASC");
rest_data.put("limit", 0);
So I'd expect to receive:
- all records -> works
- all fields for every record -> does not work
- all values for all fields for every record -> does not work
I'm using v4_1 of the REST api.
Does anybody have some hints on this?
Unlike with get_entry_list, you cannot specify an empty array for the related_fields parameter. You must specify at least one field name.
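Applied to the rest_data above, that might look like the sketch below (the field names passed in related_fields are illustrative; list whichever Contacts fields you actually need):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class SugarRestData {

    // Sketch of the corrected get_relationships arguments: unlike
    // get_entry_list, related_fields must name at least one field,
    // otherwise the records come back with empty values.
    static Map<String, Object> buildRestData(String sessionId, String moduleName,
                                             String sourceId, String relationField) {
        Map<String, Object> restData = new HashMap<>();
        restData.put("session", sessionId);
        restData.put("module_name", moduleName);
        restData.put("module_id", sourceId);
        restData.put("link_field_name", relationField);
        restData.put("related_module_query", "");
        // Key fix: list the fields you want back instead of an empty array.
        restData.put("related_fields",
            Arrays.asList("id", "name", "deleted", "do_not_call"));
        restData.put("related_module_link_name_to_fields_array", Arrays.asList());
        restData.put("offset", 0);
        restData.put("order_by", "name ASC");
        restData.put("limit", 0);
        return restData;
    }
}
```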