Execute raw MongoDB query from the Java MongoDB driver - java

I'm developing a web application to run native Mongo queries through the Java driver, so that I can see the results in a nice UI. I didn't find a direct way to do that, but running JS functions seems to be one way.
I can run the following script from mongo shell.
rs1:PRIMARY> function showShortedItems() { return db.Items.find({});}
rs1:PRIMARY> showShortedItems()
But trying the same thing from the Java driver fails:
val db = connection.getDatabase(Database.Name)
val command = new BasicDBObject("eval", "function() { return db.Items.find(); }")
val result = db.runCommand(command)
Error:
Caused by: com.mongodb.MongoCommandException: Command failed with error 13: 'not authorized on shipping-db to execute command { eval: "function() { return db.Items.find(); }" }' on server localhost:27017.
The full response is { "ok" : 0.0, "errmsg" : "not authorized on shipping-db to execute command { eval: \"function() { return db.Items.find(); }\" }", "code" : 13 }
rs1:PRIMARY> db.system.users.find({}) returns nothing (no users are defined).
mongo.conf
storage:
journal:
enabled: false
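For reference, the eval command requires elevated privileges (and is deprecated), which explains the error 13 above. Below is a minimal sketch, assuming MongoDB 3.2+ and reusing the connection, database, and collection names from the question, of running the native query without eval by sending the find command through runCommand:

import com.mongodb.client.MongoDatabase;
import org.bson.Document;

// Sketch: send the native "find" command directly instead of wrapping
// the query in a JS function for eval (which needs special privileges).
MongoDatabase db = connection.getDatabase("shipping-db");
Document command = new Document("find", "Items")
        .append("filter", new Document());  // equivalent to db.Items.find({})
Document result = db.runCommand(command);   // first batch is in result.cursor.firstBatch
System.out.println(result.toJson());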

Related

Jenkins Declarative pipeline: send email notification with test report in post stage

I have a simple pipeline script with two stages (a test stage and a post stage). In the post stage I send a notification if the build is failed or unstable. I am able to send a notification with the job name, build number and console output, but I also need to include a link to the failing test report in the notification itself (the test reports are HTML files in the workspace folder in Jenkins).
For this, I have developed a Groovy script (see test.groovy below), but I am not sure how to add it to the mail block in the post stage of the pipeline script.
pipeline script
#!/usr/bin/env groovy
@Library(['piper-lib', 'piper-lib-os']) _
pipeline {
    agent any
    stages {
        stage('Campaign Log') {
            steps {
                echo "RUNNING ${env.BUILD_ID} on ${env.JENKINS_URL}"
                uiVeri5ExecuteTests script: this,
                    testRepository: "https://github.wdf.sap.corp/grc-iag-acert/qa.ui.automation.wdf.git",
                    gitBranch: "sg-pipeline",
                    testOptions: "-v --params.user='iag.administrator.testing@sap.com' --params.pass='Iagadministrator123#\$' ./logs/campaign_log/config.js --baseUrl=https://parrot-one-flp-iag-acert-qa-hcl.cfapps.sap.hana.ondemand.com/cp.portal/site#Shell-home"
            }
        }
    }
    post {
        failure {
            // def console_output = "${env.BUILD_URL}/console"
            mail bcc: '', body: "Details: ${env.JOB_NAME} Build Number: ${env.BUILD_NUMBER} Build: ${env.BUILD_URL} Console Output: ${env.BUILD_URL}/console", cc: '', from: 'noreply+jaas@sap.com', replyTo: '', subject: 'Failing UIVeri5 Tests', to: 'surendra.gurram@sap.com'
        }
        unstable {
            mail bcc: '', body: "Details: ${env.JOB_NAME} Build Number: ${env.BUILD_NUMBER} Build: ${env.BUILD_URL} Console Output: ${env.BUILD_URL}/console", cc: '', from: 'noreply+jaas@sap.com', replyTo: '', subject: 'Failing UIVeri5 Tests', to: 'surendra.gurram@sap.com'
        }
    }
}
test.groovy (The code I developed to add in post-stage)
// Note: new File() reads from the local filesystem, so the report is read from
// the workspace here; the BUILD_URL form is kept only for the link that gets printed.
def reportUrl = "${env.BUILD_URL}execution/node/3/ws/logs/campaign_log/target/report/report.html"
def test = new File("${env.WORKSPACE}/logs/campaign_log/target/report/report.html").readLines()[160]
def check = test.substring(18, test.length() - 7)
if (check.toInteger() > 0) {
    println(reportUrl)
} else {
    println("this test passed")
}
BTW, I am pretty new to Jenkins and Pipeline scripting and I am stuck here; could someone help me?

Calling a REST API (locally running) from Node.js fails

We created a REST API in Python and it is running locally. The 'http://127.0.0.1:5002/business' endpoint shows the content {"business name": "something"} if I open it in Google Chrome. However, when we call this API from Node.js, it always gives me an error. But if I use another API (exactly the same code but a different URI) in Node.js, it works.
// assuming: const requestAPI = require('request-promise');
async function get_recommend_initial() {
    // https://ViolaS.api.stdlib.com/InitialRecommendation@dev/
    // agent.add('providing recommendations...');
    const options = {
        method: 'GET',
        uri: 'http://127.0.0.1:5002/business'
        // uri: 'https://ViolaS.api.stdlib.com/InitialRecommendation@dev/'
        // json: true
    };
    // return request(options).then(response => {
    //     console.log(response)
    //     return (response)
    // }).catch(function (err) {
    //     console.log('No recommend data');
    //     console.log(err);
    // });
    return requestAPI(options).then(function (data) {
        let initial_recommendation = JSON.parse(data);
        console.log(initial_recommendation);
        // return initial_recommendation.information[0].name;
    }).catch(function (err) {
        console.log('No recommend data');
        console.log(err);
    });
}
The API is created by a Python file that is running locally. Thanks!!!
The Python code is as follows:
# Imports added for completeness (the question uses Flask with Flask-RESTful)
from flask import Flask, request
from flask_restful import Resource, Api

app = Flask(__name__)
# Add resources to be much cleaner
api = Api(app)
features = {}

class Business(Resource):
    def get(self):
        return {'business name': 'something'}

    def post(self):
        some_json = request.get_json()
        print(some_json)
        countNumber = features.get('count', 0) + 1
        features['count'] = countNumber
        return {'You sent': some_json,
                'Count:': countNumber}, 201

    def put(self):
        some_json = request.get_json()
        print(some_json)
        # record the count number
        countNumber = features.get('count', 0) + 1
        features['count'] = countNumber
        features['ok'] = 'yes'
        return {'You sent': some_json,
                'Count:': countNumber,
                'Ok:': features['ok']}, 201

api.add_resource(Business, '/business')  # Route_1

if __name__ == '__main__':
    app.run(port='5002')
The error is as follows:
dialogflowFirebaseFulfillment
Error: Unknown response type: "undefined" at WebhookClient.addResponse_ (/srv/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:277:13) at WebhookClient.add (/srv/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:245:12) at Sys_Recommend (/srv/index.js:31:11) at <anonymous>
And the log is:
No recommend data

Unable to Connect to BigQuery from local App Engine instance in Eclipse

I'm new to Google App Engine and I'm trying to run through some of the tutorials to see how this would work for my organization. We are looking at putting some of our data into BigQuery and converting some of our web applications to App Engine, which would need to access BigQuery data.
I am using the java-docs-samples-master code, specifically bigquery/cloud-client/src/main/java/com/example/bigquery/SimpleApp.java
I can run this from the command line using
mvn exec:java -Dexec.mainClass=com.example.bigquery.SimpleAppMain
I incorporated the code into App Engine, which I'm running in Eclipse, and created a wrapper so I could still run it from the command line. It works when running from the command line, but I get an error when I run it from App Engine in Eclipse.
Is there something I'm missing to configure my local App Engine to connect to Big Query?
Error:
com.google.cloud.bigquery.BigQueryException: Invalid project ID 'no_app_id'. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:86)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.create(HttpBigQueryRpc.java:170)
at com.google.cloud.bigquery.BigQueryImpl$3.call(BigQueryImpl.java:208)
...
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400
{ "code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "Invalid project ID 'no_app_id'. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.",
"reason" : "invalid"
} ],
"message" : "Invalid project ID 'no_app_id'. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash."
}
Code:
package com.example.bigquery;

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValue;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryResponse;
import com.google.cloud.bigquery.QueryResult;

import java.util.List;
import java.util.UUID;

public class SimpleApp {
  public void runBQ() throws Exception {
    // [START create_client]
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    // [END create_client]

    // [START run_query]
    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder(
                "SELECT "
                    + "APPROX_TOP_COUNT(corpus, 10) as title, "
                    + "COUNT(*) as unique_words "
                    + "FROM `publicdata.samples.shakespeare`;")
            // Use standard SQL syntax for queries.
            // See: https://cloud.google.com/bigquery/sql-reference/
            .setUseLegacySql(false)
            .build();

    // Create a job ID so that we can safely retry.
    JobId jobId = JobId.of(UUID.randomUUID().toString());
    Job queryJob = bigquery.create(JobInfo.newBuilder(queryConfig).setJobId(jobId).build());

    // Wait for the query to complete.
    queryJob = queryJob.waitFor();

    // Check for errors
    if (queryJob == null) {
      throw new RuntimeException("Job no longer exists");
    } else if (queryJob.getStatus().getError() != null) {
      // You can also look at queryJob.getStatus().getExecutionErrors() for all
      // errors, not just the latest one.
      throw new RuntimeException(queryJob.getStatus().getError().toString());
    }

    // Get the results.
    QueryResponse response = bigquery.getQueryResults(jobId);
    // [END run_query]

    // [START print_results]
    QueryResult result = response.getResult();

    // Print all pages of the results.
    while (result != null) {
      for (List<FieldValue> row : result.iterateAll()) {
        List<FieldValue> titles = row.get(0).getRepeatedValue();
        System.out.println("titles:");
        for (FieldValue titleValue : titles) {
          List<FieldValue> titleRecord = titleValue.getRecordValue();
          String title = titleRecord.get(0).getStringValue();
          long uniqueWords = titleRecord.get(1).getLongValue();
          System.out.printf("\t%s: %d\n", title, uniqueWords);
        }
        long uniqueWords = row.get(1).getLongValue();
        System.out.printf("total unique words: %d\n", uniqueWords);
      }
      result = result.getNextPage();
    }
    // [END print_results]
  }
}
From the looks of your error, it's probably due to your project ID not being set ('no_app_id'). Here is how to set your project ID for App Engine: https://developers.google.com/eclipse/docs/appengine_appid_version.
Not sure if I am late, but I encountered such an error while working with Firestore, and it was due to no project being set on the 'Cloud Platform' tab in the App Engine run configuration. When I logged into an account and selected a project ID, this error went away.
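As a minimal sketch of what both answers point at (the project ID value below is a placeholder, not from the original code), the project can also be pinned explicitly on the client instead of relying on the App Engine environment to supply it:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

// Sketch: set the project ID explicitly so the client does not fall back
// to "no_app_id" when the environment cannot provide one.
BigQuery bigquery = BigQueryOptions.newBuilder()
        .setProjectId("my-project-id")  // placeholder project ID
        .build()
        .getService();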

Cannot start Neo4j Server after Spatial data load

I've been trying to use the Neo4j Spatial plugin with data loaded via Java. I have added the plugin, and when I start an empty database this is confirmed by the response to the following GET request to the server:
{
  "extensions": {
    "SpatialPlugin": {
      "addSimplePointLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addSimplePointLayer",
      "findClosestGeometries": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findClosestGeometries",
      "addNodesToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodesToLayer",
      "addGeometryWKTToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addGeometryWKTToLayer",
      "findGeometriesWithinDistance": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesWithinDistance",
      "addEditableLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addEditableLayer",
      "addCQLDynamicLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addCQLDynamicLayer",
      "addNodeToLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/addNodeToLayer",
      "getLayer": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/getLayer",
      "findGeometriesInBBox": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/findGeometriesInBBox",
      "updateGeometryFromWKT": "http://localhost:7474/db/data/ext/SpatialPlugin/graphdb/updateGeometryFromWKT"
    }
  },
  "node": "http://localhost:7474/db/data/node",
  "node_index": "http://localhost:7474/db/data/index/node",
  "relationship_index": "http://localhost:7474/db/data/index/relationship",
  "extensions_info": "http://localhost:7474/db/data/ext",
  "relationship_types": "http://localhost:7474/db/data/relationship/types",
  "batch": "http://localhost:7474/db/data/batch",
  "cypher": "http://localhost:7474/db/data/cypher",
  "indexes": "http://localhost:7474/db/data/schema/index",
  "constraints": "http://localhost:7474/db/data/schema/constraint",
  "transaction": "http://localhost:7474/db/data/transaction",
  "node_labels": "http://localhost:7474/db/data/labels",
  "neo4j_version": "2.3.2"
}
However, when I stop the server and load my spatial data via Java with a SpatialIndexProvider.SIMPLE_WKT_CONFIG index, adding nodes with:
try (Transaction tx = db.beginTx()) {
    Index<Node> index = db.index().forNodes("location", SpatialIndexProvider.SIMPLE_WKT_CONFIG);
    for (String line : lines) {
        String[] columns = line.split(",");
        Node node = db.createNode();
        node.setProperty("wkt", String.format("POINT(%s %s)", columns[4], columns[3]));
        node.setProperty("name", columns[0]);
        index.add(node, "dummy", "value");
    }
    tx.success();
}
After a restart, I get the error:
2016-02-23 13:44:36.747+0000 ERROR [o.n.k.KernelHealth] setting TM not OK. Kernel has encountered some problem, please perform necessary action (tx recovery/restart) No index provider 'spatial' found. Maybe the intended provider (or one more of its dependencies) aren't on the classpath or it failed to load.
in Messages.log inside the graph.db directory. Is there anything obvious that I'm doing wrong?
I'm on Windows 8, Neo4j 2.3.2, Java 8 and neo4j-spatial-0.15-neo4j-2.3.0.jar.
Did you unzip the full spatial zip into the plugins directory?
Otherwise some classes that spatial needs can't be found.
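For completeness, here is a sketch of querying the same layer once the provider loads, based on the neo4j-spatial 0.15 LayerNodeIndex API (this snippet is an assumption for illustration, not part of the original answer):

import java.util.HashMap;
import java.util.Map;
import org.neo4j.gis.spatial.indexprovider.LayerNodeIndex;
import org.neo4j.gis.spatial.indexprovider.SpatialIndexProvider;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.index.Index;
import org.neo4j.graphdb.index.IndexHits;

// Sketch: find nodes within 10 km of a point; the parameter names are the
// LayerNodeIndex constants ("point", "distanceInKm").
try (Transaction tx = db.beginTx()) {
    Index<Node> index = db.index().forNodes("location", SpatialIndexProvider.SIMPLE_WKT_CONFIG);
    Map<String, Object> params = new HashMap<>();
    params.put(LayerNodeIndex.POINT_PARAMETER, new Double[]{60.0, 15.0}); // {lat, lon} assumed
    params.put(LayerNodeIndex.DISTANCE_IN_KM_PARAMETER, 10.0);
    IndexHits<Node> hits = index.query(LayerNodeIndex.WITHIN_DISTANCE_QUERY, params);
    for (Node hit : hits) {
        System.out.println(hit.getProperty("name", "?"));
    }
    tx.success();
}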

Performance of the MongoDB Java driver

I'm using the MongoDB Java driver in my project to execute queries (finds, aggregations, map-reduce, ...) over a big collection (5 million documents).
The driver version is:
<!-- MongoDB driver -->
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongo-java-driver</artifactId>
    <version>3.0.3</version>
</dependency>
My problem is that when I use the find API with some filters from Java, the operation takes 15 seconds.
....
Iterable<Document> messageList = collection.find().filter(... some filters).sort(... fields);

// Find documents
for (Document message : messageList) {
    ....
    // some code
    ....
}
I checked the mongo server log file and saw that the operation is logged as a COMMAND instead of a QUERY:
2015-09-01T12:11:47.496+0200 I COMMAND [conn503] command b.$cmd command: count { count: "logs", query: { timestamp: { $gte: new Date(1433109600000) }, aplicacion: "APP1", event: "Event1" } } planSummary: IXSCAN { timestamp: 1, aplicacion: 1 } keyUpdates:0 writeConflicts:0 numYields:19089 reslen:44 locks:{ Global: { acquireCount: { r: 19090 } }, MMAPV1Journal: { acquireCount: { r: 19090 } }, Database: { acquireCount: { r: 19090 } }, Collection: { acquireCount: { R: 19090 } } } 14297ms
If I run the same query from a MongoDB client (Robomongo), it takes 0.05 ms.
db.getCollection('logs').find({ timestamp: { $gte: new Date(1427839200000) }, aplicacion: "APP1", event: "Event1" })
and it appears in the server log as a QUERY.
Are all queries made with the Java driver (find, aggregate, ...) transformed into commands? The performance is much worse than in the mongo shell.
I think the issue is that when you run the query in the mongo shell it returns only the top 20 results at a time, whereas here you are trying to read all the documents and put them into an array.
Try this query and see:
List<Document> messageList = collection.find(filter).sort(... fields).limit(20).into(new ArrayList<>());
It's also highly recommended to create an index on the query fields.
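Putting both suggestions together, a sketch (the filter values come from the logged command above; the compound index and the limit are the suggested additions, not the asker's original code):

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Sketch: back the filter with a compound index and bound the result set,
// mirroring the shell behaviour of fetching only the first batch.
MongoCollection<Document> collection = db.getCollection("logs");
collection.createIndex(new Document("timestamp", 1)
        .append("aplicacion", 1)
        .append("event", 1));

List<Document> messages = collection.find(
                Filters.and(
                        Filters.gte("timestamp", new Date(1433109600000L)),
                        Filters.eq("aplicacion", "APP1"),
                        Filters.eq("event", "Event1")))
        .sort(Sorts.descending("timestamp"))
        .limit(20)
        .into(new ArrayList<>());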
