Unable to collect data from metric query language (MQL) - GCP - Java

I want to execute MQL (metric query language) queries using the library below:
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-monitoring</artifactId>
    <version>v3-rev540-1.25.0</version>
</dependency>
Here is my code snippet, which creates the monitoring client and tries to collect data from GCP Monitoring:
public void queryTimeSeriesData() throws IOException {
    // create monitoring
    Monitoring m = createAuthorizedMonitoringClient();
    QueryTimeSeriesRequest req = new QueryTimeSeriesRequest();
    String query = "fetch consumed_api\n" +
            "| metric 'serviceruntime.googleapis.com/api/request_count'\n" +
            "| align rate(2m)\n" +
            "| every 2m\n" +
            "| group_by [metric.response_code],\n" +
            "    [value_request_count_max: max(value.request_count)]";
    req.setQuery(query);

    HashMap<String, Object> queryTransformationSpec = new HashMap<String, Object>();
    HashMap<String, Object> timingState = new HashMap<String, Object>();
    HashMap<String, Object> absoluteWindow = new HashMap<String, Object>();
    absoluteWindow.put("startTime", "2020-09-03T12:40:00.000Z");
    absoluteWindow.put("endTime", "2020-09-03T13:41:00.000Z");
    timingState.put("absoluteWindow", absoluteWindow);
    timingState.put("graphPeriod", "60s");
    timingState.put("queryPeriod", "60s");
    queryTransformationSpec.put("timingState", timingState);
    req.set("queryTransformationSpec", queryTransformationSpec);
    req.set("reportPeriodicStats", false);
    req.set("reportQueryPlan", false);

    QueryTimeSeriesResponse res = m.projects().timeSeries()
            .query("projects/MY_PROJECT_NAME", req)
            .execute();
    System.out.println(res);
}
The above code works fine, but it does not return data for the given startTime and endTime;
it always returns the latest available data point. Is there any problem with my code?

I found a way to execute an MQL query with a given time range. The
new, working code is the following:
public void queryTimeSeriesData() throws IOException {
    // create monitoring
    Monitoring m = createAuthorizedMonitoringClient();
    QueryTimeSeriesRequest req = new QueryTimeSeriesRequest();
    String query = "fetch consumed_api\n" +
            "| metric 'serviceruntime.googleapis.com/api/request_count'\n" +
            "| align rate(5m)\n" +
            "| every 5m\n" +
            "| group_by [metric.response_code],\n" +
            "    [value_request_count_max: max(value.request_count)]\n" +
            "| within d'2020/09/03-12:40:00', d'2020/09/03-12:50:00'\n";
    req.setQuery(query);
    QueryTimeSeriesResponse res = m.projects().timeSeries()
            .query("projects/MY_PROJECT_NAME", req)
            .execute();
    System.out.println(res);
}
I included the query start time and end time in the query itself by using the within operator. As per the Google docs for MQL queries:
within - Specifies the time range of the query output.
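Both snippets rely on createAuthorizedMonitoringClient(), which is not shown in the question. A minimal sketch of such a helper, assuming Application Default Credentials and the same google-api-services-monitoring client (the application name is a placeholder; imports from com.google.api.client.* are omitted for brevity):
private Monitoring createAuthorizedMonitoringClient() throws IOException {
    // Picks up credentials from GOOGLE_APPLICATION_CREDENTIALS or the GCE/GKE metadata server.
    GoogleCredential credential = GoogleCredential.getApplicationDefault();
    if (credential.createScopedRequired()) {
        credential = credential.createScoped(MonitoringScopes.all());
    }
    try {
        HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
        JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
        return new Monitoring.Builder(transport, jsonFactory, credential)
                .setApplicationName("mql-query-example") // placeholder
                .build();
    } catch (GeneralSecurityException e) {
        throw new IOException("Could not create HTTP transport", e);
    }
}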

Related

Null Pointer in BigQueryIO Write

I'm getting a null pointer exception in my Dataflow pipeline, but all of the values at the location of the exception are defined correctly. In this code I am reading from a database, doing some conversions on the result set, and then trying to create a table in an existing dataset based on the table rows in that result set. I've confirmed that the values passed to the BigQueryIO.writeTableRows() call are all valid, but I still get an exception on the line where I try to carry out the write to BigQuery (the "Write to BQ" step in the code below).
// Gather First query results
WriteResult results = pipeline
    .apply("Connect", JdbcIO.<TableRow>read()
        .withDataSourceConfiguration(buildDataSourceConfig(options, URL))
        .withQuery(query)
        .withRowMapper(new JdbcIO.RowMapper<TableRow>() {
            // Convert ResultSet to PCollection
            public TableRow mapRow(ResultSet rs) throws Exception {
                ResultSetMetaData md = rs.getMetaData();
                int columnCount = md.getColumnCount();
                TableRow tr = new TableRow();
                for (int i = 1; i <= columnCount; i++) {
                    String name = md.getColumnName(i);
                    tr.set(name, rs.getString(name));
                }
                return tr;
            }
        }))
    .setCoder(TableRowJsonCoder.of())
    .apply("Write to BQ",
        BigQueryIO.writeTableRows()
            .withSchema(schema)
            .to(dataset)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_TRUNCATE)
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED));
2023-01-10T20:33:22.4214526Z WARNING: Unable to infer a schema for type com.google.api.services.bigquery.model.TableRow. Attempting to infer a coder without a schema.
2023-01-10T20:33:22.4216783Z Exception in thread "main" java.lang.NullPointerException
2023-01-10T20:33:22.4218945Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.validateNoJsonTypeInSchema(BigQueryIO.java:3035)
2023-01-10T20:33:22.4221029Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.continueExpandTyped(BigQueryIO.java:2949)
2023-01-10T20:33:22.4222727Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expandTyped(BigQueryIO.java:2880)
2023-01-10T20:33:22.4226464Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:2776)
2023-01-10T20:33:22.4228072Z at org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO$Write.expand(BigQueryIO.java:1786)
2023-01-10T20:33:22.4234778Z at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:548)
2023-01-10T20:33:22.4237961Z at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:499)
2023-01-10T20:33:22.4240010Z at org.apache.beam.sdk.values.PCollection.apply(PCollection.java:376)
2023-01-10T20:33:22.4242466Z at edu.mayo.mcc.aide.sqaTransfer.SqaTransfer.buildPipeline(SqaTransfer.java:133)
2023-01-10T20:33:22.4244722Z at edu.mayo.mcc.aide.sqaTransfer.SqaTransfer.main(SqaTransfer.java:99)
2023-01-10T20:33:22.4246444Z . exit status 1
The current error is due to a bad schema passed to the withSchema(schema) method of BigQueryIO.
The schema can be created with a TableSchema object:
TableSchema schema =
    new TableSchema()
        .setFields(
            Arrays.asList(
                new TableFieldSchema()
                    .setName("string_field")
                    .setType("STRING")
                    .setMode("REQUIRED"),
                new TableFieldSchema()
                    .setName("int64_field")
                    .setType("INT64")
                    .setMode("NULLABLE"),
                new TableFieldSchema()
                    .setName("float64_field")
                    .setType("FLOAT64"), // default mode is "NULLABLE"
                new TableFieldSchema().setName("numeric_field").setType("NUMERIC"),
                new TableFieldSchema().setName("bool_field").setType("BOOL"),
                new TableFieldSchema().setName("bytes_field").setType("BYTES"),
                new TableFieldSchema().setName("date_field").setType("DATE"),
                new TableFieldSchema().setName("datetime_field").setType("DATETIME"),
                new TableFieldSchema().setName("time_field").setType("TIME"),
                new TableFieldSchema().setName("timestamp_field").setType("TIMESTAMP"),
                new TableFieldSchema().setName("geography_field").setType("GEOGRAPHY"),
                new TableFieldSchema()
                    .setName("array_field")
                    .setType("INT64")
                    .setMode("REPEATED")
                    .setDescription("Setting the mode to REPEATED makes this an ARRAY<INT64>."),
                new TableFieldSchema()
                    .setName("struct_field")
                    .setType("STRUCT")
                    .setDescription(
                        "A STRUCT accepts a custom data class, the fields must match the custom class fields.")
                    .setFields(
                        Arrays.asList(
                            new TableFieldSchema().setName("string_value").setType("STRING"),
                            new TableFieldSchema().setName("int64_value").setType("INT64")))));

return schema;

// In the IO.
rows.apply(
    "Write to BigQuery",
    BigQueryIO.writeTableRows()
        .to(String.format("%s:%s.%s", project, dataset, table))
        .withSchema(schema)
        ...
You can also use a JSON schema:
String tableSchemaJson =
""
+ "{"
+ " \"fields\": ["
+ " {"
+ " \"name\": \"source\","
+ " \"type\": \"STRING\","
+ " \"mode\": \"NULLABLE\""
+ " },"
+ " {"
+ " \"name\": \"quote\","
+ " \"type\": \"STRING\","
+ " \"mode\": \"REQUIRED\""
+ " }"
+ " ]"
+ "}";
// In the IO.
rows.apply(
"Write to BigQuery",
BigQueryIO.writeTableRows()
.to(String.format("%s:%s.%s", project, dataset, table))
.withJsonSchema(tableSchemaJson)
...
You can check the documentation for more details.
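Since the pipeline builds its TableRows dynamically from ResultSetMetaData and stores every value with rs.getString(...), another option (a sketch, not from the original post; the helper name is illustrative) is to derive a matching all-STRING schema from the same metadata:
// Sketch: build a TableSchema whose fields mirror the JDBC result set.
// Every field is declared STRING because the RowMapper above stores each value
// with rs.getString(name); adjust the types if you map values differently.
static TableSchema schemaFrom(ResultSetMetaData md) throws SQLException {
    List<TableFieldSchema> fields = new ArrayList<>();
    for (int i = 1; i <= md.getColumnCount(); i++) {
        fields.add(new TableFieldSchema()
                .setName(md.getColumnName(i))
                .setType("STRING")
                .setMode("NULLABLE"));
    }
    return new TableSchema().setFields(fields);
}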

Using facebook graph API 2.5 for batch request in Java

I was using a Facebook FQL query to fetch the share count for multiple URLs with this code, without needing any access token.
// Constants reconstructed from the fragments in the original post.
private static final String fbURL = "https://graph.facebook.com/fql?q=";
private static final String queryPrefix = "SELECT url, total_count, share_count FROM link_stat WHERE url in (";

// Note: the generic type parameters of the arguments were stripped by the original formatting.
private void callFB(List validUrlList, Map dataMap, long timeStamp, Double calibrationFactor) {
    try {
        StringBuilder urlString = new StringBuilder();
        System.out.println("List Size " + validUrlList.size());
        for (int i = 0; i < (validUrlList.size() - 1); i++) {
            urlString.append("\"" + validUrlList.get(i) + "\",");
        }
        urlString.append("\"" + validUrlList.get(validUrlList.size() - 1) + "\"");
        String out = getConnection(fbURL + URLEncoder.encode(
                queryPrefix + urlString.toString() + ")", "utf-8"));
        dataMap = getSocialPopularity(validUrlList.toArray(), dataMap);
        getJSON(out, dataMap, timeStamp, calibrationFactor);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But as Facebook has now deprecated FQL, I am planning to use
https://graph.facebook.com/v2.5/?ids=http://timesofindia.indiatimes.com/life-style/relationships/soul-curry/An-NRI-bride-who-was-tortured-to-hell/articleshow/50012721.cms&access_token=abc
However, I could not find any code to make a batch request with this endpoint. I am also using a page access token, so what would the rate limit be for it?
Could you please help me find out how to make the batch request using Java for this new version?
You will always be subject to rate limiting... If you're using the /?ids= endpoint, there is already "batch" functionality built in.
See
https://developers.facebook.com/docs/graph-api/using-graph-api/v2.5#multirequests
https://developers.facebook.com/docs/graph-api/advanced/rate-limiting
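Independent of any SDK, the ?ids= form can be called from Java with plain HTTP. A minimal sketch (the URLs and PAGE_ACCESS_TOKEN are placeholders; only java.net and java.io classes are used):
private String fetchShareData(List<String> urls, String accessToken) throws IOException {
    // The ids parameter takes a comma-separated list of URL-encoded URLs;
    // each URL becomes a key in the JSON response, so one request covers many URLs.
    StringBuilder ids = new StringBuilder();
    for (int i = 0; i < urls.size(); i++) {
        if (i > 0) {
            ids.append(",");
        }
        ids.append(URLEncoder.encode(urls.get(i), "utf-8"));
    }
    URL request = new URL("https://graph.facebook.com/v2.5/?ids=" + ids
            + "&access_token=" + accessToken);

    HttpURLConnection conn = (HttpURLConnection) request.openConnection();
    conn.setRequestMethod("GET");
    BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream(), "utf-8"));
    StringBuilder body = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
        body.append(line);
    }
    reader.close();
    return body.toString(); // parse with your preferred JSON library
}
A page access token used this way is still subject to the rate limits described in the second link above.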

Mapping several columns from sql to a java object

I am trying to retrieve and process data from JIRA; unfortunately, the pieces of information (which are in the Metadata plugin) are saved in a column, not a row.
[Picture of the JIRA MySQL database]
The goal is to save this in an object with following attributes:
public class DesiredObject {
    private String objectKey;          // Object_Key
    private String azeKundeName;       // aze.kunde.name
    private Long azeKundeSchluessel;   // aze.kunde.schluessel
    private String azeProjektName;     // aze.projekt.name
    private Long azeProjektSchluessel; // aze.projekt.schluessel
    // getters and setters here
}
My workbench is STS, and it's a Spring Boot application.
I can fetch a list of object keys with the JRJC using:
JiraController jiraconnect = new JiraController();
List<JiraProject> jiraprojects = new ArrayList<JiraProject>();
jiraprojects = jiraconnect.findJiraProjects();
This works perfectly, and the USER_KEY and USER_VALUE are also easily retrievable, but I hope there is a better way than performing
three SQL searches for each project and then somehow building an object from all those lists.
I was starting with
for (JiraProject jp : jiraprojects) {
    String SQL = "select * from jira_metadata where ENRICHED_OBJECT_KEY = ?";
    List<DesiredObject> desiredObjects = jdbcTemplateObject.query(SQL,
            new Object[] { "com.atlassian.jira.project.Project:" + jp.getProjectkey() }, XXX);
}
to get a list with every object, but I'm stuck, as I can't figure out an object mapper (the XXX placeholder) that is able to write this into an object.
Usually I go with
object.setter(rs.getString("SQL-Column"));
But that isn't working, as all my columns are called the same (USER_KEY and USER_VALUE).
The database is automatically created by JIRA, so I can't "fix" it.
The object keys are unique, which is why I tried to use those to collect all the data from my SQL table.
I hope everything you need to enlighten me is in this post; if not, feel free to ask for more!
Edit: Don't worry if there is a mix of 'project' and 'projekt'; that's because I gave most of my classes German names and descriptions.
I created a HashMap with the object key and a unique token in brackets, e.g. "(1)JIRA".
String SQL = "select * from ao_cc6aeb_jira_metadata";
List<JiraImportObjekt> jioList = jdbcTemplateObject.query(SQL, new JiraImportObjektMapper());
HashMap<String, String> hmap = new HashMap<String, String>();
Integer unique = 1;
for (JiraImportObjekt jio : jioList) {
hmap.put("(" + unique.toString() + ")" + jio.getEnriched_Object_Key(),
jio.getUser_Key() + "(" + jio.getUser_Value() + ")");
unique++;
}
I changed this into a TreeMap
Map<String, String> tmap = new TreeMap<String, String>(hmap);
Then I iterated through that TreeMap via
String aktuProj = new String();
for (String s : tmap.keySet()) {
    if (aktuProj.equals(s.replaceAll("\\([^\\(]*\\)", ""))) {
        // still the same object key
    } else {
        // add the element to the list and start a new element
    }
    // a lot of other stuff
}
What I did was put all the data in the right order, iterate through it, and process everything the way I wanted.
Object hinfo = hmap.get(s);
if (hinfo.toString().replaceAll("\\([^\\(]*\\)", "").equals("aze.kunde.schluessel")) {
    Matcher m = Pattern.compile("\\(([^)]+)\\)").matcher(hinfo.toString());
    while (m.find()) {
        jmo[obj].setAzeKundeSchluessel(Long.parseLong(m.group(1), 10));
        // logger.info("AzeKundeSchluessel: " + jmo[obj].getAzeKundeSchluessel());
    }
} else ...
After the loop I needed to add the last element.
Now I have a list with the elements, which is easy to use and ready for further steps.
I cut out a lot of code because most of it is customized for my problem; the roadmap should be enough to solve it, though.
Good luck!
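For completeness, here is a minimal sketch of that roadmap (not the cut-out code): it pivots the USER_KEY/USER_VALUE rows into DesiredObject instances in a single query. The table, column, and metadata key names are taken from the question; the extractor itself is illustrative, and Spring's ResultSetExtractor plus the usual java.sql and java.util imports are assumed.
String sql = "select ENRICHED_OBJECT_KEY, USER_KEY, USER_VALUE from ao_cc6aeb_jira_metadata";

Map<String, DesiredObject> byObjectKey = jdbcTemplateObject.query(sql,
        new ResultSetExtractor<Map<String, DesiredObject>>() {
            @Override
            public Map<String, DesiredObject> extractData(ResultSet rs) throws SQLException {
                Map<String, DesiredObject> result = new LinkedHashMap<String, DesiredObject>();
                while (rs.next()) {
                    String objectKey = rs.getString("ENRICHED_OBJECT_KEY");
                    DesiredObject obj = result.get(objectKey);
                    if (obj == null) {
                        // first row for this object key: start a new element
                        obj = new DesiredObject();
                        obj.setObjectKey(objectKey);
                        result.put(objectKey, obj);
                    }
                    String userKey = rs.getString("USER_KEY");
                    String userValue = rs.getString("USER_VALUE");
                    if ("aze.kunde.name".equals(userKey)) {
                        obj.setAzeKundeName(userValue);
                    } else if ("aze.kunde.schluessel".equals(userKey)) {
                        obj.setAzeKundeSchluessel(Long.valueOf(userValue));
                    } else if ("aze.projekt.name".equals(userKey)) {
                        obj.setAzeProjektName(userValue);
                    } else if ("aze.projekt.schluessel".equals(userKey)) {
                        obj.setAzeProjektSchluessel(Long.valueOf(userValue));
                    }
                }
                return result;
            }
        });

List<DesiredObject> desiredObjects = new ArrayList<DesiredObject>(byObjectKey.values());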

List all attachments stored in cloudant with java

I'm developing a demo and I'm stuck with this.
I want to list all the attachments (PDFs, for example) in a Java web app, but I am not able to retrieve and list them.
I'm only able to retrieve common data (Strings, ints).
Is there a standard way to retrieve and show them?
I've been reading all the posts, but nothing seems to work.
Here is where I add the vendor, with the attachment:
public void addVendor(final Vendor vendor, final InputStream inputStream, final long size, final String contentType)
{
final Database db = getDb();
final int id = Integer.valueOf(vendor.get_id()) + 1;
final Response r1 = db.saveAttachment(inputStream, vendor.getName() + ".txt", contentType, String.valueOf(id), null);
vendor.setAttachment(r1);
final Response r = db.post(vendor);
System.out.println("Vendor created successfully. Id: " + r.getId() + ", rev: " + r.getRev());
System.out.println("File created successfully. Id: " + r1.getId() + ", rev: " + r1.getRev());
}
Here is where I try to retrieve the data:
public List<Vendor> getAllVendors()
{
    List<Vendor> vendors = new ArrayList<Vendor>();
    final List<Vendor> vend2 = new ArrayList<Vendor>();
    // Get db
    final Database db = getDb();
    final InputStream s = null;
    // Get all documents
    vendors = db.view("_all_docs").includeDocs(true).query(Vendor.class);
    final Database db1 = getDb();
    for (final Vendor vend : vendors) {
        final Response r1 = vend.getAttachment();
        final int id = Integer.valueOf(vend.get_id()) + 1;
        // Here I look up the attachment with the _ID and _REV
        final InputStream in = db1.find(r1.getId(), r1.getRev());
        vend.setInput(in);
        vend2.add(vend);
    }
    return vendors;
}
In this last piece of code, I intended to create a new list with all my Vendor data plus the blob.
When I add the vendor (in the first part), I save the "response" of the attachment in the vendor object, so when I try to retrieve it I have the data to work with (_id and _rev).
I'm assuming you want to list all documents that contain attachments. If so, you can create a MapReduce view similar to this:
function(doc) {
if (doc._attachments) {
emit(doc._id, null);
}
}
You would then call the view using something like this to get a list of document ids of documents that contain attachments:
GET /dbname/_design/designdocname/_view/docswithattachments
The above GET request would look something like this in Java:
List<Foo> list = db.view("designdocname/docswithattachments")
.query(Foo.class);
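If you also need the attachment content itself (not just the list of documents that have attachments), one option that works regardless of the client library is the standard Cloudant/CouchDB REST endpoint /{db}/{docid}/{attachmentname}; the attachment names are the keys of the document's _attachments object. A rough sketch, where ACCOUNT, DB, USER, PASSWORD, docId, and attachmentName are placeholders (uses java.net, java.util.Base64 and java.nio.file):
String attachmentUrl = "https://ACCOUNT.cloudant.com/DB/" + docId + "/" + attachmentName;
HttpURLConnection conn = (HttpURLConnection) new URL(attachmentUrl).openConnection();
String auth = Base64.getEncoder().encodeToString("USER:PASSWORD".getBytes("UTF-8"));
conn.setRequestProperty("Authorization", "Basic " + auth);
try (InputStream in = conn.getInputStream()) {
    // stream the PDF (or other attachment) to a file, the servlet response, etc.
    Files.copy(in, Paths.get(attachmentName), StandardCopyOption.REPLACE_EXISTING);
}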

Aggregate data in CSV file using Java

I have a big CSV file, thousands of rows, and I want to aggregate some columns using Java code.
The file is in the form:
1,2012,T1
2,2015,T2
3,2013,T1
4,2012,T1
The results should be:
T, Year, Count
T1,2012, 2
T1,2013, 1
T2,2015, 1
Put your data into a Map-like structure, and each time a key (in your case "" + T + year) is found, add 1 to the stored value.
You can use a map like
Map<String, Integer> rowMap = new HashMap<>();
rowMap.put("T1" + 2012, 1);
rowMap.put("T1" + 2013, 1);
rowMap.put("T2" + 2015, 1);
Or you can define your own class with T and year fields, overriding the hashCode and equals methods. Then you can use
Map<YourClass, Integer> map = new HashMap<>();
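A minimal sketch of such a key class (the name YearType and its fields are illustrative); equals and hashCode are overridden so two rows with the same T and year land in the same map bucket:
public class YearType {
    private final String t;
    private final int year;

    public YearType(String t, int year) {
        this.t = t;
        this.year = year;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof YearType)) return false;
        YearType other = (YearType) o;
        return year == other.year && t.equals(other.t);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(t, year);
    }
}
Counting then becomes, for each parsed row, something like map.merge(new YearType(fields[2], Integer.parseInt(fields[1])), 1, Integer::sum);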
String csv =
        "1,2012,T1\n"
      + "2,2015,T2\n"
      + "3,2013,T1\n"
      + "4,2012,T1\n";

Map<String, Integer> map = new TreeMap<>();
BufferedReader reader = new BufferedReader(new StringReader(csv));
String line;
while ((line = reader.readLine()) != null) {
    String[] fields = line.split(",");
    String key = fields[2] + "," + fields[1];
    Integer value = map.get(key);
    if (value == null)
        value = 0;
    map.put(key, value + 1);
}
System.out.println(map);
// -> {T1,2012=2, T1,2013=1, T2,2015=1}
Use uniVocity-parsers for the best performance. It should take 1 second to process 1 million rows.
CsvParserSettings settings = new CsvParserSettings();
settings.selectIndexes(1, 2); // select the columns we are going to read (year and T)
final Map<List<String>, Integer> results = new LinkedHashMap<List<String>, Integer>(); // stores the results here

// Use a custom implementation of RowProcessor
settings.setRowProcessor(new AbstractRowProcessor() {
    @Override
    public void rowProcessed(String[] row, ParsingContext context) {
        // converts the input array to a List - lists implement hashCode and equals
        // based on their values, so they can be used as keys on your map.
        List<String> key = Arrays.asList(row);
        Integer count = results.get(key);
        if (count == null) {
            count = 0;
        }
        results.put(key, count + 1);
    }
});

// creates a parser with the above configuration and RowProcessor
CsvParser parser = new CsvParser(settings);

String input = "1,2012,T1"
        + "\n2,2015,T2"
        + "\n3,2013,T1"
        + "\n4,2012,T1";

// the parse() method will parse and submit all rows to your RowProcessor -
// use a FileReader to read a file instead of the String I'm using as an example.
parser.parse(new StringReader(input));

// Here are the results:
for (Entry<List<String>, Integer> entry : results.entrySet()) {
    System.out.println(entry.getKey() + " -> " + entry.getValue());
}
Output:
[2012, T1] -> 2
[2015, T2] -> 1
[2013, T1] -> 1
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
