Change values of respective columns in InfluxDB - java

I have the measurement sampleMeasurement1 shown below; I want to update the values of certain columns in InfluxDB. How can I update them?
SELECT * FROM sampleMeasurement1;
{ "results": [ { "series": [ { "name": "sampleMeasurement1", "columns": [ "time", "disk_type", "field1", "field2", "hostname" ], "values": [ [ 1520315870774000000, null, 12212, 22.44, "server001" ], [ 1520315870843000000, "HDD", 112, 21.44, "localhost" ] ] } ] } ] }

We can't change tag values via InfluxDB commands. We can, however, write a client script that changes the value of a tag by inserting "duplicate" points into the measurement with the same timestamp, field set, and tag set, except that the desired tag has its value changed.
A point with the wrong tag (see the line protocol reference: https://docs.influxdata.com/influxdb/v1.4/write_protocols/line_protocol_reference/#syntax ):
cpu,hostname=machine.lan cpu=50 1514970123
After running
INSERT cpu,hostname=machine.mydomain.com cpu=50 1514970123
a SELECT * FROM cpu would return both points:
cpu,hostname=machine.lan cpu=50 1514970123
cpu,hostname=machine.mydomain.com cpu=50 1514970123
After the script runs all the INSERT commands, you'll need to drop the obsolete series of points with the old tag value:
DROP SERIES FROM cpu WHERE hostname='machine.lan'
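The rewrite step of such a client script boils down to a string transformation on line protocol points. Here is a minimal Python sketch, using the example point from above; actually reading the old points and writing the rewritten ones back to InfluxDB is left to whatever client library you use:

```python
def retag_point(line: str, tag: str, old: str, new: str) -> str:
    """Rewrite one line-protocol point, replacing tag=old with tag=new.

    Line protocol: measurement[,tag=value...] field=value[,...] [timestamp]
    Only the tag set (everything before the first space) is touched.
    """
    head, _, rest = line.partition(" ")
    parts = head.split(",")
    for i, part in enumerate(parts[1:], start=1):
        if part == f"{tag}={old}":
            parts[i] = f"{tag}={new}"
    return ",".join(parts) + " " + rest

# The point with the wrong tag from the answer above:
old_point = "cpu,hostname=machine.lan cpu=50 1514970123"
print(retag_point(old_point, "hostname", "machine.lan", "machine.mydomain.com"))
# cpu,hostname=machine.mydomain.com cpu=50 1514970123
```

The rewritten line is what the script would INSERT; the old series is then dropped with DROP SERIES as shown above.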

Related

How to add another data in my sql column of type JSON

In my table I have one column of JSON type that is null, so I am using this command to update the value:
update myTable
set columnJson = '[{"id" : "someId1" , "name": "someNamme2"}
,{"id" : "someId2", "name": "someNamme2"}]'
where id = "rowID1";
This is working fine, and I have two entries. Now I want to add one more entry to that column, using the same command:
update myTable
set columnJson = '[{"id" : "someId3", "name": "someNamme3"}]'
where id = "rowID1";
But the previous value is getting overwritten. Is there any way I can append n values? I am doing this in Java.
You need JSON functions like JSON_ARRAY_APPEND; see the MySQL documentation for more functions to manipulate JSON.
JSON needs some special functions which have to be learned. We usually recommend not using JSON, because in a normalized table you can use all the SQL functionality that exists; JSON always takes a moderate learning effort.
update myTable
set columnJson = '[{"id" : "someId1" , "name": "someNamme2"}
,{"id" : "someId2", "name": "someNamme2"}]'
where id = "rowID1";
Rows matched: 1 Changed: 1 Warnings: 0
update myTable
set columnJson = JSON_ARRAY_APPEND(columnJson, '$[0]', '{"id" : "someId3", "name": "someNamme3"}')
Rows matched: 1 Changed: 1 Warnings: 0
SELECT * FROM myTable
id      | columnJson
rowID1  | [[{"id": "someId1", "name": "someNamme2"}, "{"id" : "someId3", "name": "someNamme3"}"], {"id": "someId2", "name": "someNamme2"}]
fiddle
And if you want another position, you change the path to the point where it should be inserted:
update myTable
set columnJson = '[{"id" : "someId1" , "name": "someNamme2"}
,{"id" : "someId2", "name": "someNamme2"}]'
where id = "rowID1";
Rows matched: 1 Changed: 1 Warnings: 0
update myTable
set columnJson = JSON_ARRAY_APPEND(columnJson, '$[1]', '{"id" : "someId3", "name": "someNamme3"}')
Rows matched: 1 Changed: 1 Warnings: 0
SELECT * FROM myTable
id      | columnJson
rowID1  | [{"id": "someId1", "name": "someNamme2"}, [{"id": "someId2", "name": "someNamme2"}, "{"id" : "someId3", "name": "someNamme3"}"]]
fiddle
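If the goal is instead to append the new object at the end of the top-level array rather than nesting it under one element, the path '$' does that: JSON_ARRAY_APPEND(columnJson, '$', CAST('{...}' AS JSON)). Note the CAST: without it the third argument is appended as a JSON string, which is why the results above show the new element wrapped in quotes. The effect of the '$' append can be sketched in Python:

```python
import json

# The column value after the first UPDATE in the question:
column_json = '[{"id": "someId1", "name": "someNamme2"}, {"id": "someId2", "name": "someNamme2"}]'
new_entry = {"id": "someId3", "name": "someNamme3"}

# JSON_ARRAY_APPEND(columnJson, '$', CAST(... AS JSON)) appends the new
# object at the end of the top-level array:
arr = json.loads(column_json)
arr.append(new_entry)
print(json.dumps(arr))
```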

how to fetch all great grand children from a json object efficiently

I am trying to figure out an efficient way to fetch all great-grandchildren from the root. I understand it is not the best way to store information, but I have no control over why we have this structure. Here is my JSON:
{
    "root": "some value",
    "other-attribute": "some value",
    "Level1": [
        {
            "attr": "value",
            "Level2": [
                {
                    "attr": "value",
                    "Level3": [
                        {
                            "attr": "value",
                            "Level4": [
                                {"attr": "value"}
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
How can I fetch a list of all "Level3" and "Level4" elements from the JSON? If I have to traverse each level of the hierarchy, won't my program's complexity be O(n³) for 3 levels and O(n⁴) for 4 levels?
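A single recursive pass avoids the O(n³)/O(n⁴) worry: a depth-first walk visits every node exactly once, so collecting all "Level3" and "Level4" entries is linear in the total size of the document, regardless of nesting depth. A minimal sketch in Python, with the key names taken from the question and the data assumed to already be parsed into dicts and lists:

```python
def collect(node, wanted, found=None):
    """Depth-first walk; gather the value of every key listed in `wanted`."""
    if found is None:
        found = {key: [] for key in wanted}
    if isinstance(node, dict):
        for key, value in node.items():
            if key in wanted:
                found[key].append(value)
            collect(value, wanted, found)   # keep descending
    elif isinstance(node, list):
        for item in node:
            collect(item, wanted, found)
    return found

doc = {
    "root": "some value",
    "Level1": [{
        "attr": "value",
        "Level2": [{
            "attr": "value",
            "Level3": [{
                "attr": "value",
                "Level4": [{"attr": "value"}],
            }],
        }],
    }],
}

result = collect(doc, {"Level3", "Level4"})
print(len(result["Level3"]), len(result["Level4"]))  # 1 1
```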

Is aggregation (count) on dimension but not on metrics supported by Druid

For example there are two dimensions: [country, website] and one metric: [PV].
I want to know the average PV of website for each country.
It's easy to get the total PV in each country; however, it's difficult to get the count of websites in each country, and the expected result is the total PV (in each country) divided by the count of websites (in each country).
What I can do is run a "groupBy" query on country & website as below, and then group the result by country in my application. It's very, very slow, because the query extracts lots of data from Druid, most of which is meaningless when all I need is a sum.
{
"queryType": "groupBy",
"dataSource": "--",
"dimensions": [
"country",
"website"
],
"granularity": "all",
"intervals": [
"--"
],
"aggregations": [
{
"type": "longSum",
"name": "PV",
"fieldName": "PV"
}
]
}
Can anyone help with this? I find it hard to believe such a common query is not supported by Druid.
Thanks in advance.
To be clear, here is my expected result described in SQL. If you already know what I want to do, or are not familiar with SQL, please ignore the following part.
SELECT country, sum(a.PV_all) / count(a.website) as PV_AVG FROM
(SELECT country, website, SUM(PV) as PV_all FROM DB GROUP BY country, website ) a
GROUP BY country
Have you tried using a nested groupBy query? Druid supports that.
In a nutshell, you can have something like:
{
"queryType": "groupBy",
"dataSource":{
"type": "query",
"query": {
"queryType": "groupBy",
"dataSource": "yourDataSource",
"granularity": "--",
"dimensions": ["country", "website"],
"aggregations": [
{
"type": "longSum",
"name": "PV",
"fieldName": "PV"
}
],
"intervals": [ "2012-01-01T00:00:00.000/2020-01-03T00:00:00.000" ]
}
},
"granularity": "all",
"dimensions": ["country"],
"aggregations": [
----
],
"intervals": [ "2012-01-01T00:00:00.000/2020-01-03T00:00:00.000" ]
}
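For the average the question asks for, the elided outer aggregations could count the inner rows (one per country/website pair, i.e. the website count) and sum PV per country, with a post-aggregation doing the division. A hedged sketch of just that fragment; the output names PV_all and website_count are mine, not from the question:

```
"aggregations": [
    { "type": "longSum", "name": "PV_all", "fieldName": "PV" },
    { "type": "count",   "name": "website_count" }
],
"postAggregations": [
    { "type": "arithmetic", "name": "PV_AVG", "fn": "/",
      "fields": [
          { "type": "fieldAccess", "fieldName": "PV_all" },
          { "type": "fieldAccess", "fieldName": "website_count" }
      ]
    }
]
```

This mirrors the SQL above: the inner query is the subselect grouped by country and website, and the outer query's count/sum/divide reproduces sum(PV_all) / count(website) per country.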

JSON file parsing procedure for fields that are not present in every array

Say I have a JSON file:
"ref": [{
"af": [
1
],
"speaker": true,
"name": "Fahim"
},
{
"aff": [
1
],
"name": "Grewe"
}]
During parsing, if a field is not present in every array element (like speaker here), it throws a NullPointerException. So what is the procedure for parsing fields that are not present in every element?
A nice JSON parsing library like this one will have different levels of validation:
https://code.google.com/p/quick-json/
You can set custom validation rules, or use a non-validating version, which will just parse without checking standards etc.
Have you tried:
var ref = YourObject.ref;
for (var i = 0; i < ref.length; i++) {
    // a missing property is undefined, not null, so test against undefined
    if (ref[i].speaker !== undefined) {
        // do something
    }
}
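The same defensive lookup in Python, assuming the JSON has been parsed with json.loads: dict.get returns None (or a default) instead of raising when the key is absent, so elements without the field are simply skipped. Data taken from the question; note that one object has the speaker key and the other does not:

```python
import json

data = json.loads("""
{"ref": [
    {"af": [1], "speaker": true, "name": "Fahim"},
    {"aff": [1], "name": "Grewe"}
]}
""")

# entry.get("speaker") is None for objects without the key, so no
# exception is thrown for elements missing the field:
speakers = [entry["name"] for entry in data["ref"] if entry.get("speaker")]
print(speakers)  # ['Fahim']
```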

How to use Ext.data.writer.Json to create a model object with nameProperty applied

I have nameProperty set in the proxy of my store like this :
Code:
writer : {
type : 'json',
nameProperty : 'mapping',
root : 'PerVO'
}
And in my model , the mapping like this:
{
name : 'modules',
mapping : 'modules.collection',
defaultValue: []
},
This works fine when I call the CRUD operations on the store. Now I want to get the data in the format being sent to the server for other operations. The problem is that when I get the record from the store, the mapping is lost. So how can I use Ext.data.writer.Json or some other API to generate the data EXACTLY as it would be generated while being sent to the server, with the mapping applied?
This is what the store sends on save (I need the data like this; note the difference is the 'collection' wrapper):
"modules": {
"collection": {
"isActive": 1,
"metadata": {
"actionCodes": "",
"memos": ""
},
"moduleCode": "OM",
"moduleId": 250,
"moduleName": "Org Management",
"success": false,
"viewId": 0
}
},
This is what I get from the store when the record is looked up:
"modules": {
"isActive": 1,
"metadata": {
"actionCodes": "",
"memos": ""
},
"moduleCode": "OM",
"moduleId": 250,
"moduleName": "Org Management",
"success": false,
"viewId": 0
},
I need to get the data as sent by the store to server.
Thanks
Nohsib
Simply change the name in the mapping to the following
{
name : 'collection',
mapping : 'modules.collection',
defaultValue: []
}
The name config is the name by which the field is referenced within the model. If you look at the data property of a model object, the field's name is the key under which the value is stored, and the same name is used when retrieving data from a model object, e.g. model.get(name).
From the API docs:
"The name by which the field is referenced within the Model."
