DataTables: Rearrange array entries during render - java

I use DataTables with server-side processing. The JSON object I receive contains an array for each LocalDateTime element:
...
"SimpleDate": [ 2000,12,31,0,0 ]
...
My columns definition in the initialization script is the following:
"columns": [
{ "data": "SimpleDate"}
]
By default, the column is rendered comma-separated: 2000,12,31,0,0
How can I change it to 31.12.2000?
I tried columnDefs and render like:
"columnDefs": [
{
"render": function ( data, type, row ) {
return data.2 + '.' + data.1 + '.' + data.0;
},
"targets": 0
}
]
but this simply stops the table from rendering. I assume accessing the array via data.x is not possible this way.
So, how do I do it?

You are not accessing the elements of the data array properly.
"render": function ( data, type, row ) {
return data[2] + '.' + data[1] + '.' + data[0];
},

Try something like below.
"columnDefs": ["targets": 0 , "data": "SimpleDate","render": function ( data, type, row ) { return data[2] + '.' + data[1]+ '.' + data[0]; }}


How to return multiple columns as a JSON with other columns

I have a database with the following schema, using PostgreSQL and the PostGIS plugin:
table_id | id | mag | time | felt | tsunami | geom
I have the following SQL to select some rows and return those columns as JSON:
SELECT ROW_TO_JSON(t) as properties
FROM (
SELECT id, mag, time, felt, tsunami FROM earthquakes
) t
I would like to create a SQL statement that returns the table_id, properties and geom, like:
SELECT table_id, properties, GeometryType(geom)
from earthquakes
How can I return the table_id and geom with properties as a JSON?
Edit:
I've created this SQL:
SELECT table_id,
row_to_json((SELECT d FROM (SELECT id, mag, time, felt, tsunami ) d)) AS properties,
GeometryType(geom)
FROM earthquakes ORDER BY table_id ASC;
But when I make a request with Postman, it returns this:
[
{
"table_id": 1,
"properties": {
"type": "json",
"value": "{\"id\" : \"ak16994521\", \"mag\" : 2.3}"
}
},
...
]
How can I return the values as an object?
My expected result should be:
[
{
"table_id": 1,
"properties": {"id" : "ak16994521", "mag" : 2.3}
},
...
]
Java Method:
public List<Map<String, Object>> readTable(String nameTable) {
try {
String SQL = "SELECT table_id, GeometryType(geom) FROM " + nameTable + " ORDER BY table_id ASC;";
return jdbcTemplate.queryForList(SQL);
} catch( BadSqlGrammarException error) {
log.info("ERROR READING TABLE: " + nameTable);
return null;
}
}
With this code it returns this JSON:
[
{
"table_id": 1,
"geometrytype": "POINT"
},
{
"table_id": 2,
"geometrytype": "POINT"
},
....
]
My expected result should be:
[
{
"table_id": 1,
"properties": {"id" : "ak16994521", "mag" : 2.3, "time": 1507425650893, "felt": "null", "tsunami": 0 },
"geometrytype": "POINT"
},
...
]
Why not do the whole conversion to JSON on the database?
SELECT json_agg(x) as json_values FROM (
SELECT
table_id,
row_to_json((select d from (select id, mag, time, felt, tsunami) d)) as properties,
GeometryType(geom)
FROM earthquakes
ORDER BY table_id ASC
) x;
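If the conversion stays split between SQL and Java, the json value produced by row_to_json comes back through JDBC as a serialized string, which is why Postman shows the escaped value. A minimal client-side sketch, assuming the same jdbcTemplate as in the question plus Jackson's ObjectMapper (method and variable names are illustrative, not part of the original code):
public List<Map<String, Object>> readTableAsObjects(String nameTable) throws Exception {
    String sql = "SELECT table_id, "
            + "row_to_json((SELECT d FROM (SELECT id, mag, time, felt, tsunami) d)) AS properties, "
            + "GeometryType(geom) FROM " + nameTable + " ORDER BY table_id ASC";
    ObjectMapper mapper = new ObjectMapper(); // com.fasterxml.jackson.databind.ObjectMapper
    List<Map<String, Object>> rows = jdbcTemplate.queryForList(sql);
    for (Map<String, Object> row : rows) {
        Object properties = row.get("properties");
        if (properties != null) {
            // Re-parse the serialized JSON string into a Map so the response
            // carries a nested object instead of an escaped string.
            row.put("properties", mapper.readValue(properties.toString(), Map.class));
        }
    }
    return rows;
}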
I found this SQL that creates a GeoJSON FeatureCollection, so when I call the method it returns perfectly as a serialized string.
SELECT row_to_json(fc) FROM (
SELECT 'FeatureCollection' As type,
array_to_json(array_agg(f)) As features FROM
(SELECT 'Feature' As type,
ST_AsGeoJSON(lg.geom)::json As geometry,
row_to_json((SELECT l FROM (SELECT id, mag, time, felt, tsunami) As l )) As properties FROM terremotos As lg ) As f ) As fc;
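To call that query from the question's jdbcTemplate, queryForObject is enough, since row_to_json(fc) yields a single row with a single column. A hedged sketch (the method name is illustrative):
public String readTableAsGeoJson(String nameTable) {
    String sql = "SELECT row_to_json(fc) FROM ("
            + " SELECT 'FeatureCollection' As type,"
            + " array_to_json(array_agg(f)) As features FROM"
            + " (SELECT 'Feature' As type,"
            + " ST_AsGeoJSON(lg.geom)::json As geometry,"
            + " row_to_json((SELECT l FROM (SELECT id, mag, time, felt, tsunami) As l)) As properties"
            + " FROM " + nameTable + " As lg) As f) As fc";
    // One row, one column: the whole FeatureCollection as a serialized string.
    return jdbcTemplate.queryForObject(sql, String.class);
}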

Mule Dataweave Cannot coerce a :null to a :number

I am building a Java map from two different JSON data streams. However, if one of the streams contains a null value in the quantity, the flow errors with the following condition:
($.itemid) : '"' ++ skuLookup[$.sku][0].qty as :number as :string {format: "#.0000"} ++ '"'
^
Cannot coerce a :null to a :number
(com.mulesoft.weave.mule.exception.WeaveExecutionException). Message payload is of type: String
The ideal scenario would be to ignore the null values within the iteration. Here is the DataWeave, untouched:
%output application/java
%var skuLookup = flowVars.SSRCreateStarshipItems groupBy $.sku
---
flowVars.SSRGetOrderItems map ({
($.itemid) : '"' ++ skuLookup[$.sku][0].qty as :number as :string
{format: "#.0000"} ++ '"'
})
Here is my ideal solution; however, I can't seem to get it to work:
%output application/java
%var skuLookup = flowVars.SSRCreateStarshipItems groupBy $.sku
---
flowVars.SSRGetOrderItems map ({
($.itemid) : '"' ++ skuLookup[$.sku][0].qty as :number as :string
{format: "#.0000"} ++ '"'
}) when skuLookup[$.sku][0].qty != null
Setting a default value resolves your problem.
For example, if you were passing in a payload such as this:
{
"testMap": [
{
"testName": "TestName",
"Value": "3"
},
{
"testName": "TestName",
"Value": null
}
]
}
You could convert the Value key to a number or pass null with this:
%dw 1.0
%input payload application/json
%output application/json
---
payload.testMap map ((data, index) -> {
testName: data.testName,
Value: data.Value as :number as :string {format: "#.0000"} default null
})
And this is the result:
[
{
"testName": "TestName",
"Value": "3.0000"
},
{
"testName": "TestName",
"Value": null
}
]
Also, if you want to remove the key completely, add skipNullOn to the JSON output directive:
%output application/json skipNullOn="everywhere"

DB script for changing the model of mongoDB collection [duplicate]

In MongoDB, is it possible to update the value of a field using the value from another field? The equivalent SQL would be something like:
UPDATE Person SET Name = FirstName + ' ' + LastName
And the MongoDB pseudo-code would be:
db.person.update( {}, { $set : { name : firstName + ' ' + lastName } } );
The best way to do this is in version 4.2+, which allows using an aggregation pipeline in the update document of the updateOne, updateMany, or update (deprecated in most if not all language drivers) collection methods.
MongoDB 4.2+
Version 4.2 also introduced the $set pipeline stage operator, which is an alias for $addFields. I will use $set here as it maps with what we are trying to achieve.
db.collection.<update method>(
{},
[
{"$set": {"name": { "$concat": ["$firstName", " ", "$lastName"]}}}
]
)
Note that square brackets in the second argument to the method specify an aggregation pipeline instead of a plain update document because using a simple document will not work correctly.
MongoDB 3.4+
In 3.4+, you can use $addFields and the $out aggregation pipeline operators.
db.collection.aggregate(
[
{ "$addFields": {
"name": { "$concat": [ "$firstName", " ", "$lastName" ] }
}},
{ "$out": <output collection name> }
]
)
Note that this does not update your collection but instead replaces the existing collection or creates a new one. Also, for update operations that require "typecasting", you will need client-side processing, and depending on the operation, you may need to use the find() method instead of the .aggregate() method.
MongoDB 3.2 and 3.0
The way we do this is by $projecting our documents and using the $concat string aggregation operator to return the concatenated string.
You then iterate the cursor and use the $set update operator to add the new field to your documents using bulk operations for maximum efficiency.
Aggregation query:
var cursor = db.collection.aggregate([
{ "$project": {
"name": { "$concat": [ "$firstName", " ", "$lastName" ] }
}}
])
MongoDB 3.2 or newer
You need to use the bulkWrite method.
var requests = [];
cursor.forEach(document => {
requests.push( {
'updateOne': {
'filter': { '_id': document._id },
'update': { '$set': { 'name': document.name } }
}
});
if (requests.length === 500) {
//Execute per 500 operations and re-init
db.collection.bulkWrite(requests);
requests = [];
}
});
if(requests.length > 0) {
db.collection.bulkWrite(requests);
}
MongoDB 2.6 and 3.0
From this version, you need to use the now deprecated Bulk API and its associated methods.
var bulk = db.collection.initializeUnorderedBulkOp();
var count = 0;
cursor.snapshot().forEach(function(document) {
bulk.find({ '_id': document._id }).updateOne( {
'$set': { 'name': document.name }
});
count++;
if(count%500 === 0) {
// Execute per 500 operations and re-init
bulk.execute();
bulk = db.collection.initializeUnorderedBulkOp();
}
})
// clean up queues
if(count > 0) {
bulk.execute();
}
MongoDB 2.4
cursor["result"].forEach(function(document) {
db.collection.update(
{ "_id": document._id },
{ "$set": { "name": document.name } }
);
})
You should iterate through. For your specific case:
db.person.find().snapshot().forEach(
function (elem) {
db.person.update(
{
_id: elem._id
},
{
$set: {
name: elem.firstname + ' ' + elem.lastname
}
}
);
}
);
Apparently there is a way to do this efficiently since MongoDB 3.4, see styvane's answer.
Obsolete answer below
You cannot refer to the document itself in an update (yet). You'll need to iterate through the documents and update each document using a function. See this answer for an example, or this one for server-side eval().
For a database with high activity, you may run into issues where your updates affect actively changing records; for this reason I recommend using snapshot():
db.person.find().snapshot().forEach( function (hombre) {
hombre.name = hombre.firstName + ' ' + hombre.lastName;
db.person.save(hombre);
});
http://docs.mongodb.org/manual/reference/method/cursor.snapshot/
Starting Mongo 4.2, db.collection.update() can accept an aggregation pipeline, finally allowing the update/creation of a field based on another field:
// { firstName: "Hello", lastName: "World" }
db.collection.updateMany(
{},
[{ $set: { name: { $concat: [ "$firstName", " ", "$lastName" ] } } }]
)
// { "firstName" : "Hello", "lastName" : "World", "name" : "Hello World" }
The first part {} is the match query, filtering which documents to update (in our case all documents).
The second part [{ $set: { name: { ... } } }] is the update aggregation pipeline (note the square brackets signifying the use of an aggregation pipeline). $set is a new aggregation operator and an alias of $addFields.
Regarding this answer, the snapshot function is deprecated in version 3.6, according to this update. So, on version 3.6 and above, it is possible to perform the operation this way:
db.person.find().forEach(
function (elem) {
db.person.update(
{
_id: elem._id
},
{
$set: {
name: elem.firstname + ' ' + elem.lastname
}
}
);
}
);
I tried the above solution but I found it unsuitable for large amounts of data. I then discovered the stream feature:
MongoClient.connect("...", function(err, db){
var c = db.collection('yourCollection');
var s = c.find({/* your query */}).stream();
s.on('data', function(doc){
c.update({_id: doc._id}, {$set: {name: doc.firstName + ' ' + doc.lastName}}, function(err, result) { /* result == true? */ });
});
s.on('end', function(){
// stream can end before all your updates do if you have a lot
})
})
The update() method takes an aggregation pipeline as a parameter, like:
db.collection_name.update(
{
// Query
},
[
// Aggregation pipeline
{ "$set": { "id": "$_id" } }
],
{
// Options
"multi": true // false when a single doc has to be updated
}
)
The field can be set or unset with existing values using the aggregation pipeline.
Note: use $ with the field name to specify the field that has to be read.
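Since this thread is tagged java, here is a minimal sketch of the same pipeline-style update through the MongoDB Java driver (the List<Bson> overload of updateMany is available from driver 3.11 on); the connection string, database and field names are assumptions:
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import java.util.Arrays;
import java.util.Collections;

public class PipelineUpdate {
    public static void main(String[] args) {
        MongoCollection<Document> people = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test").getCollection("person");
        // An empty filter matches all documents; the List argument is treated
        // as an aggregation pipeline, so "$firstName" and "$lastName" are read
        // from each document being updated.
        people.updateMany(new Document(), Collections.singletonList(
                new Document("$set", new Document("name",
                        new Document("$concat",
                                Arrays.asList("$firstName", " ", "$lastName"))))));
    }
}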
Here's what we came up with for copying one field to another for ~150_000 records. It took about 6 minutes, but it is still significantly less resource-intensive than it would have been to instantiate and iterate over the same number of Ruby objects.
js_query = %({
$or : [
{
'settings.mobile_notifications' : { $exists : false },
'settings.mobile_admin_notifications' : { $exists : false }
}
]
})
js_for_each = %(function(user) {
if (!user.settings.hasOwnProperty('mobile_notifications')) {
user.settings.mobile_notifications = user.settings.email_notifications;
}
if (!user.settings.hasOwnProperty('mobile_admin_notifications')) {
user.settings.mobile_admin_notifications = user.settings.email_admin_notifications;
}
db.users.save(user);
})
js = "db.users.find(#{js_query}).forEach(#{js_for_each});"
Mongoid::Sessions.default.command('$eval' => js)
With MongoDB version 4.2+, updates are more flexible, as the update, updateOne and updateMany methods allow the use of an aggregation pipeline. You can now transform your documents using the aggregation operators and then update, without the need to explicitly state the $set command (instead we use $replaceRoot: {newRoot: "$$ROOT"}).
Here we use the aggregation query to extract the timestamp from MongoDB's ObjectID "_id" field and update the documents (I am not an expert in SQL, but I think SQL does not provide an auto-generated ObjectId that carries a timestamp; you would have to create that date yourself).
var collection = "person"
agg_query = [
{
"$addFields" : {
"_last_updated" : {
"$toDate" : "$_id"
}
}
},
{
$replaceRoot: {
newRoot: "$$ROOT"
}
}
]
db.getCollection(collection).updateMany({}, agg_query, {upsert: true})
(I would have posted this as a comment, but couldn't)
For anyone who lands here trying to update one field using another in the document with the C# driver...
I could not figure out how to use any of the UpdateXXX methods and their associated overloads since they take an UpdateDefinition as an argument.
// we want to set Prop1 to Prop2
class Foo { public string Prop1 { get; set; } public string Prop2 { get; set;} }
void Test()
{
var update = new UpdateDefinitionBuilder<Foo>();
update.Set(x => x.Prop1, <new value; no way to get a hold of the object that I can find>)
}
As a workaround, I found that you can use the RunCommand method on an IMongoDatabase (https://docs.mongodb.com/manual/reference/command/update/#dbcmd.update).
var command = new BsonDocument
{
{ "update", "CollectionToUpdate" },
{ "updates", new BsonArray
{
new BsonDocument
{
// Any filter; here the check is if Prop1 does not exist
{ "q", new BsonDocument{ ["Prop1"] = new BsonDocument("$exists", false) }},
// set it to the value of Prop2
{ "u", new BsonArray { new BsonDocument { ["$set"] = new BsonDocument("Prop1", "$Prop2") }}},
{ "multi", true }
}
}
}
};
database.RunCommand<BsonDocument>(command);
MongoDB 4.2+ Golang
result, err := collection.UpdateMany(ctx, bson.M{},
	mongo.Pipeline{
		bson.D{{"$set",
			bson.M{"name": bson.M{"$concat": []string{"$lastName", " ", "$firstName"}}},
		}},
	},
)

mongo + spring data + aggregate sum

I am looking for a solution without Spring Data; my project requirement is to do it without Spring Data.
Calculating the sum with the aggregate function via a mongo command works and I am able to get the output, but doing the same from Java I get an exception.
Sample mongo query :
db.getCollection('events_collection').aggregate(
{ "$match" : { "store_no" : 3201 , "event_id" : 882800} },
{ "$group" : { "_id" : "$load_dt", "event_id": { "$first" : "$event_id" }, "start_dt" : { "$first" : "$start_dt" }, "count" : { "$sum" : 1 } } },
{ "$sort" : { "_id" : 1 } },
{ "$project" : { "load_dt" : "$_id", "ksn_cnt" : "$count", "event_id" : 1, "start_dt" : 1, "_id" : 0 } }
)
The same thing done in Java:
String json = "[ { \"$match\": { \"store_no\": 3201, \"event_id\": 882800 } }, { \"$group\": { \"_id\": \"$load_dt\", \"event_id\": { \"$first\": \"$event_id\" }, \"start_dt\": { \"$first\": \"$start_dt\" }, \"count\": { \"$sum\": 1 } } }, { \"$sort\": { \"_id\": 1 } }, { \"$project\": { \"load_dt\": \"$_id\", \"ksn_cnt\": \"$count\", \"event_id\": 1, \"start_dt\": 1, \"_id\": 0 } } ]";
BasicDBList pipeline = (BasicDBList) JSON.parse(json);
System.out.println(pipeline);
AggregationOutput output = col.aggregate(pipeline);
The exception is:
com.mongodb.CommandFailureException: { "serverUsed" : "somrandomserver/10.10.10.10:27001" , "errmsg" : "exception: pipeline element 0 is not an object" , "code" : 15942 , "ok" : 0.0}
Could someone please suggest how to use the aggregate function with Spring?
Try the following (untested) Spring Data MongoDB aggregation equivalent:
import static org.springframework.data.domain.Sort.Direction.*;
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;
MongoTemplate mongoTemplate = repository.getMongoTemplate();
Aggregation agg = newAggregation(
match(Criteria.where("store_no").is(3201).and("event_id").is(882800)),
group("load_dt")
.first("event_id").as("event_id")
.first("start_dt").as("start_dt")
.count().as("ksn_cnt"),
sort(ASC, previousOperation()),
project("ksn_cnt", "event_id", "start_dt")
.and("load_dt").previousOperation()
.and(previousOperation()).exclude()
);
AggregationResults<OutputType> result = mongoTemplate.aggregate(agg,
"events_collection", OutputType.class);
List<OutputType> mappedResult = result.getMappedResults();
As a first step, filter the input collection by using a match operation which accepts a Criteria query as an argument.
In the second step, group the intermediate filtered documents by the "load_dt" field and calculate the document count and store the result in the new field "ksn_cnt".
Sort the intermediate result by the id-reference of the previous group operation as given by the previousOperation() method.
Finally, in the fourth step, select the "ksn_cnt", "event_id", and "start_dt" fields from the previous group operation. Note that "load_dt" again implicitly references a group-id field. Since you do not want an implicitly generated id to appear, exclude the id from the previous operation via and(previousOperation()).exclude().
Note that if you provide an input class as the first parameter to the newAggregation method, the MongoTemplate will derive the name of the input collection from this class. Otherwise, if you don't specify an input class, you must provide the name of the input collection explicitly. If both an input class and an input collection are provided, the latter takes precedence.
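OutputType above is just a result-mapping class that Spring Data fills from the projected documents. A minimal sketch, assuming the projected names map one-to-one and guessing the field types from the sample pipeline (this class is not part of the original answer):
import java.util.Date;
import org.springframework.data.mongodb.core.mapping.Field;

public class OutputType {
    @Field("event_id") private Long eventId;
    @Field("start_dt") private Date startDt;
    @Field("load_dt") private Date loadDt;
    @Field("ksn_cnt") private Long ksnCnt;

    public Long getEventId() { return eventId; }
    public Date getStartDt() { return startDt; }
    public Date getLoadDt() { return loadDt; }
    public Long getKsnCnt() { return ksnCnt; }
}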

Read out elements of a string in specific order

For certain reasons I have to use a specific string in my project. This is the text file (it's a JSON file):
{"algorithm":
[
{ "key": "onGapLeft", "value" : "moveLeft" },
{ "key": "onGapFront", "value" : "moveForward" },
{ "key": "onGapRight", "value" : "moveRight" },
{ "key": "default", "value" : "moveBackward" }
]
}
I've defined it in Java like this:
static String input = "{\"algorithm\": \n"+
"[ \n" +
"{ \"key\": \"onGapLeft\", \"value\" : \"moveLeft\" }, \n" +
"{ \"key\": \"onGapFront\", \"value\" : \"moveForward\" }, \n" +
"{ \"key\": \"onGapRight\", \"value\" : \"moveRight\" }, \n" +
"{ \"key\": \"default\", \"value\" : \"moveBackward\" } \n" +
"] \n" +
"}";
Now I have to isolate the keys and values in an array:
key[0] = onGapLeft; value[0] = moveLeft;
key[1] = onGapFront; value[1] = moveForward;
key[2] = onGapRight; value[2] = moveRight;
key[3] = default; value[3] = moveBackward;
I'm new to Java and don't understand the String class very well. Is there an easy way to get to that result? You would really help me!
Thanks!
UPDATE:
I didn't explain it well enough, sorry. This program will run on a LEGO NXT robot. JSON libraries won't work there the way I want, so I have to interpret this JSON file as a normal STRING! Hope that explains what I want :)
I propose a solution in several steps.
1) Let's get the different parts of your ~JSON String. We will use a pattern to get the different {.*} parts:
public static void main(String[] args) throws Exception{
List<String> lines = new ArrayList<String>();
Pattern p = Pattern.compile("\\{.*\\}");
Matcher matcher = p.matcher(input);
while (matcher.find()) {
lines.add(matcher.group());
}
}
(you should take a look at Pattern and Matcher)
Now, lines contains 4 Strings:
{ "key": "onGapLeft", "value" : "moveLeft" }
{ "key": "onGapFront", "value" : "moveForward" }
{ "key": "onGapRight", "value" : "moveRight" }
{ "key": "default", "value" : "moveBackward" }
Given a String like one of those, you can remove the curly brackets with a call to String#replaceAll():
List<String> cleanLines = new ArrayList<String>();
for(String line : lines) {
//replace curly brackets with... nothing.
//added a call to trim() in order to remove whitespace characters.
cleanLines.add(line.replaceAll("[{}]","").trim());
}
(You should take a look at String String#replaceAll(String regex))
Now, cleanLines contains :
"key": "onGapLeft", "value" : "moveLeft"
"key": "onGapFront", "value" : "moveForward"
"key": "onGapRight", "value" : "moveRight"
"key": "default", "value" : "moveBackward"
2) Let's parse one of those lines:
Given a line like :
"key": "onGapLeft", "value" : "moveLeft"
You can split it on the , character using String#split(). It will give you a String[] containing 2 elements:
//parts[0] = "key": "onGapLeft"
//parts[1] = "value" : "moveLeft"
String[] parts = line.split(",");
(You should take a look at String[] String#split(String regex))
Let's clean those parts (remove "") and assign them to some variables:
String keyStr = parts[0].replaceAll("\"","").trim(); //Now, keyStr = key: onGapLeft
String valueStr = parts[1].replaceAll("\"","").trim();//Now, valueStr = value : moveLeft
//Then, you split `key: onGapLeft` with character `:`
String key = keyStr.split(":")[1].trim();
//And the same for `value : moveLeft` :
String value = valueStr.split(":")[1].trim();
That's it!
You should also take a look at Oracle's tutorial on regular expressions (This one is really important and you should invest time on it).
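Putting those steps together, here is a self-contained sketch (class name and output format are illustrative; the logic is exactly the recipe described above):
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AlgorithmStringParser {
    static String input = "{\"algorithm\": \n"
            + "[ \n"
            + "{ \"key\": \"onGapLeft\", \"value\" : \"moveLeft\" }, \n"
            + "{ \"key\": \"onGapFront\", \"value\" : \"moveForward\" }, \n"
            + "{ \"key\": \"onGapRight\", \"value\" : \"moveRight\" }, \n"
            + "{ \"key\": \"default\", \"value\" : \"moveBackward\" } \n"
            + "] \n"
            + "}";

    public static void main(String[] args) {
        // Step 1: collect the { ... } parts ('.' does not match newlines,
        // so each key/value line yields exactly one match).
        List<String> lines = new ArrayList<String>();
        Matcher matcher = Pattern.compile("\\{.*\\}").matcher(input);
        while (matcher.find()) {
            lines.add(matcher.group());
        }
        // Step 2: strip brackets and quotes, then split into key and value.
        String[] key = new String[lines.size()];
        String[] value = new String[lines.size()];
        for (int i = 0; i < lines.size(); i++) {
            String[] parts = lines.get(i).replaceAll("[{}\"]", "").split(",");
            key[i] = parts[0].split(":")[1].trim();
            value[i] = parts[1].split(":")[1].trim();
            System.out.println("key[" + i + "] = " + key[i]
                    + "; value[" + i + "] = " + value[i]);
        }
    }
}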
You need to use a JSON parser library here. For example, with org.json you could parse it as:
String input = "{\"algorithm\": \n"+
"[ \n" +
"{ \"key\": \"onGapLeft\", \"value\" : \"moveLeft\" }, \n" +
"{ \"key\": \"onGapFront\", \"value\" : \"moveForward\" }, \n" +
"{ \"key\": \"onGapRight\", \"value\" : \"moveRight\" }, \n" +
"{ \"key\": \"default\", \"value\" : \"moveBackward\" } \n" +
"] \n" +
"}";
JSONObject root = new JSONObject(input);
JSONArray map = root.getJSONArray("algorithm");
for (int i = 0; i < map.length(); i++) {
JSONObject entry = map.getJSONObject(i);
System.out.println(entry.getString("key") + ": "
+ entry.getString("value"));
}
Output:
onGapLeft: moveLeft
onGapFront: moveForward
onGapRight: moveRight
default: moveBackward
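If you need the pairs in arrays exactly as sketched in the question, the same loop can fill them (variable names follow the question):
String[] key = new String[map.length()];
String[] value = new String[map.length()];
for (int i = 0; i < map.length(); i++) {
    JSONObject entry = map.getJSONObject(i);
    key[i] = entry.getString("key");     // e.g. key[0] = "onGapLeft"
    value[i] = entry.getString("value"); // e.g. value[0] = "moveLeft"
}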
