I know we can't send null values using protobuf 3, but I can see that people are using oneof to achieve this:
import "google/protobuf/struct.proto"; // needed for google.protobuf.NullValue

message GetHomePageDataRequest {
  string client_id = 1;
  string user_id = 2;

  oneof one_of_importance {
    google.protobuf.NullValue null_importance = 3;
    string importance = 4;
  }
}
But I am not sure what the server/client code looks like for the above request. How can we write code so that we know whether importance is null or not?
I am using gRPC with protobuf 3 on Java 8.
In Java, the oneof API exposes an enum, per oneof, that tells you which option is selected; see docs here. Interestingly, you don't actually need a specific value for the null option - it is sufficient simply not to assign a value at all, and test for the *_NOT_SET enum.
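For example, a rough sketch of the server-side check (the generated names below, such as getOneOfImportanceCase() and ONEOFIMPORTANCE_NOT_SET, follow the usual protoc conventions for the message above, so verify the exact spelling against your generated code):

// Sketch only: generated class/enum names depend on your .proto.
void handle(GetHomePageDataRequest request) {
    switch (request.getOneOfImportanceCase()) {
        case IMPORTANCE:
            String importance = request.getImportance(); // a real value was sent
            break;
        case NULL_IMPORTANCE:
            // the client explicitly sent "null"
            break;
        case ONEOFIMPORTANCE_NOT_SET:
            // the client did not set the oneof at all
            break;
    }
}
// Client side: builder.setNullImportance(NullValue.NULL_VALUE) sends an explicit null,
// builder.setImportance("...") sends a value, and setting neither leaves the oneof unset.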
Note that there is also now experimental support for presence tracking in proto3, which works the same as optional did in proto2; essentially, if you use --experimental_allow_proto3_optional with protoc, you can then use:
message GetHomePageDataRequest {
  string client_id = 1;
  string user_id = 2;
  optional string importance = 4; // or 3, if you can still change it
}
and all of the usual APIs to test for a value, clear a value, etc. will be generated, meaning you will be able to distinguish explicit empty values from missing values. The proto3 presence APIs are discussed in more detail here.
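With that in place, the generated message exposes the usual presence methods; roughly (again, a sketch against the proto above, so check the generated code for the exact names):

// Sketch only: hasImportance()/clearImportance() are the presence APIs protoc
// generates for a proto3 optional field.
void handle(GetHomePageDataRequest request) {
    if (request.hasImportance()) {
        String importance = request.getImportance(); // explicitly set, even if it is ""
    } else {
        // the field was never set by the client
    }

    // The builder also gets clearImportance() to remove the value again.
    GetHomePageDataRequest cleared = request.toBuilder().clearImportance().build();
}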
I'm writing an application with Spring Boot 2. My method tries to generate a value until the value is unique. Each unique generated value is added to the cache. Generally it should generate the value on the first try, but the longer the application runs, the higher the chance that the generated value is a duplicate and has to be generated again.
I want to have a metric that shows the percentile of tryToGenerate values.
Let's say my code looks like the following:
public void generateUniqueValue() {
    String value;
    boolean unique;
    int tryToGenerate = 0;
    do {
        tryToGenerate++;
        value = generateRandom();
        unique = isUniqueValue(value);
    } while (!unique);
    addGeneratedValueToCache(value);
}
I'm using Micrometer but have no idea what I should start with, because in order to calculate a percentile I need to store not a single value but an array of values.
To determine the result of isUniqueValue(..), you're already storing the used values. You can use that storage to create a proper metric.
Let's assume that you're storing each used value within a list called usedIds. Additionally, let's say you're generating a unique number between 0 and 1000.
In that case, you could write a method like this:
public float getPercentageUsed() {
    return (float) usedIds.size() / 1000;
}
Now you can create a Gauge to add this value to the metrics. Let's say that the class that's used to generate unique values is called UniqueValueService, and it's a proper Spring bean. In that case, you could create a Gauge bean like this:
@Bean
public Gauge uniqueValueUsedGauge(MeterRegistry registry, UniqueValueService service) {
    return Gauge
        .builder("unique-values-used", service::getPercentageUsed)
        .baseUnit("%")
        .description("Percentage of possible unique values that have been used")
        .register(registry);
}
Edit: It seems I've misunderstood your question. If you want to get a histogram of tryToGenerate, to see how many attempts succeed within the first, second or nth attempt, you can use a DistributionSummary. For example:
@Bean
public DistributionSummary summary(MeterRegistry registry) {
    return DistributionSummary
        .builder("unique-value-attempts")
        .sla(1, 5, 10)
        .publishPercentileHistogram()
        .register(registry);
}
In this example, it will count how many calls succeeded within the 1st, 5th or 10th attempt.
In your service, you can now autowire the DistributionSummary and at the end of the loop, you can use it like this:
boolean unique;
do {
    tryToGenerate++;
    value = generateRandom();
    unique = isUniqueValue(value);
} while (!unique);
distributionSummary.record(tryToGenerate); // Add this
Now you can use /actuator/metrics/unique-value-attempts.histogram?tag=le:1 to see how many calls succeeded within the first try. The same can be done with the 5th and 10th try.
I have a case where I need to store the location of each key/value of a JSON document, so that for each key it automatically fetches the location and gives me the value for it from the JSON.
Here, I have the location of the key 'vehicle_id' inside the JSON array 'cars' assigned to a variable like:
String location="jresp.getJSONArray('cars').getJSONObject(0).getString('vehicle_id')"
How do I use this variable in Java such that the location fetches the value of vehicle_id for me from the JSON? I need it like:
String value = jresp.getJSONArray("cars").getJSONObject(0).getString("vehicle_id");
so that it gives me the value. I've searched the net but couldn't find it anywhere. Please help me!
Instead of having the complete statement as a variable like -
String location="jresp.getJSONArray('cars').getJSONObject(0).getString('vehicle_id')"
You could have 3 different variables like -
String jsonArray = "cars";//You might need to do some string processing to get these
int objSeq = 0;
String key = "vehicle_id";
Then you can definitely use it in your Java statement -
String value=jresp.getJSONArray(jsonArray).getJSONObject(objSeq).getString(key);
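If the location really has to stay a single string, a rough sketch of that string processing could look like this (it assumes every location has exactly the shape getJSONArray('...').getJSONObject(n).getString('...') and uses org.json, as in your snippet; the class and method names are just for illustration):

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.json.JSONObject;

public class LocationResolver {

    // Assumes the location always looks like:
    // jresp.getJSONArray('<array>').getJSONObject(<index>).getString('<key>')
    private static final Pattern LOCATION = Pattern.compile(
            "getJSONArray\\('([^']+)'\\)\\.getJSONObject\\((\\d+)\\)\\.getString\\('([^']+)'\\)");

    public static String resolve(JSONObject jresp, String location) {
        Matcher m = LOCATION.matcher(location);
        if (!m.find()) {
            throw new IllegalArgumentException("Unsupported location: " + location);
        }
        String arrayName = m.group(1);            // e.g. "cars"
        int index = Integer.parseInt(m.group(2)); // e.g. 0
        String key = m.group(3);                  // e.g. "vehicle_id"
        return jresp.getJSONArray(arrayName).getJSONObject(index).getString(key);
    }
}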
I've gone through the related questions on this site but haven't found a relevant solution.
When querying my Solr4 index using an HTTP request of the form
&facet=true&facet.field=country
The response contains all the different countries along with counts per country.
How can I get this information using SolrJ?
I have tried the following but it only returns total counts across all countries, not per country:
solrQuery.setFacet(true);
solrQuery.addFacetField("country");
The following does seem to work, but I do not want to have to explicitly set all the groupings beforehand:
solrQuery.addFacetQuery("country:usa");
solrQuery.addFacetQuery("country:canada");
Secondly, I'm not sure how to extract the facet data from the QueryResponse object.
So two questions:
1) Using SolrJ how can I facet on a field and return the groupings without explicitly specifying the groups?
2) Using SolrJ how can I extract the facet data from the QueryResponse object?
Thanks.
Update:
I also tried something similar to Sergey's response (below).
List<FacetField> ffList = resp.getFacetFields();
log.info("size of ffList:" + ffList.size());
for (FacetField ff : ffList) {
    String ffname = ff.getName();
    int ffcount = ff.getValueCount();
    log.info("ffname:" + ffname + "|ffcount:" + ffcount);
}
The above code shows ffList with size=1 and the loop goes through 1 iteration. In the output ffname="country" and ffcount is the total number of rows that match the original query.
There is no per-country breakdown here.
I should mention that on the same solrQuery object I am also calling addField and addFilterQuery. Not sure if this impacts faceting:
solrQuery.addField("user-name");
solrQuery.addField("user-bio");
solrQuery.addField("country");
solrQuery.addFilterQuery("user-bio:" + "(Apple OR Google OR Facebook)");
Update 2:
I think I got it, again based on what Sergey said below. I extracted the List<Count> object using FacetField.getValues().
List<FacetField> fflist = resp.getFacetFields();
for (FacetField ff : fflist) {
    String ffname = ff.getName();
    int ffcount = ff.getValueCount();
    List<Count> counts = ff.getValues();
    for (Count c : counts) {
        String facetLabel = c.getName();
        long facetCount = c.getCount();
    }
}
In the above code, facetLabel holds each facet grouping (e.g. a country) and facetCount is the corresponding count for that grouping.
Actually, you only need to set the facet field and faceting will be activated (check the SolrJ source code):
solrQuery.addFacetField("country");
Where did you look for the facet information? It must be in QueryResponse.getFacetFields() (then getValues() and getCount()).
In the Solr response you should use QueryResponse.getFacetFields() to get the List of FacetFields, among which is "country", so "country" is identified by QueryResponse.getFacetFields().get(0).
You then iterate over it to get the List of Count objects using
QueryResponse.getFacetFields().get(0).getValues().get(i)
and get the name of each facet value using QueryResponse.getFacetFields().get(0).getValues().get(i).getName()
and the corresponding count using
QueryResponse.getFacetFields().get(0).getValues().get(i).getCount()
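Putting that together, a minimal sketch (assuming a facet field called "country" as above; "server" stands for your SolrServer instance):

public void printCountryFacets(SolrServer server) throws SolrServerException {
    SolrQuery solrQuery = new SolrQuery("*:*");
    solrQuery.addFacetField("country"); // this alone enables faceting

    QueryResponse resp = server.query(solrQuery);
    for (FacetField.Count count : resp.getFacetField("country").getValues()) {
        System.out.println(count.getName() + ": " + count.getCount());
    }
}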
Hi,
HBase allows a column family to have different qualifiers in different rows. In my case a column family has the following specification:
abc[cnt] # where cnt is an integer that can be any positive integer
What I want to achieve is to get all the data from a different column family, but only if the value of the qualifier described above (in the other column family) matches.
For narrowing the Scan down I just add the two families I need for the query, but that is as far as I have got for now.
I already achieved the same behaviour with a SingleColumnValueFilter, but then the qualifier was known in advance. Here the qualifier can be abc1, abc2, ...; there would be too many options, and thus too many SingleColumnValueFilters.
Then I tried using a ValueFilter, but this filter only returns the columns that match the value, and thus the wrong column family.
Can you think of any way to achieve my goal, i.e. querying for a value within a dynamically created qualifier in one column family and returning the contents of that column family and another column family (as specified when creating the Scan)? Preferably with only one query.
Thanks in advance for any input.
UPDATE: (for clarification as discussed in the comments)
In a more graphical way, a row may contain the following:
colfam1:aaa
colfam1:aab
colfam1:aac
colfam2:abc1
colfam2:abc2
I want to get all of the family colfam1 if any value in colfam2 is, e.g., x, bearing in mind that colfam2:abc[cnt] is created dynamically with cnt being any positive integer.
I see two approaches for this: client-side filtering or server-side filtering.
Client-side filtering is more straightforward. The Scan adds only the two families "colfam1" and "colfam2". Then, for each Result you get from scanner.next(), you must filter according to the qualifiers in "colfam2".
byte[] queryValue = Bytes.toBytes("x");
Scan scan = new Scan();
scan.addFamily(Bytes.toBytes("colfam1"));
scan.addFamily(Bytes.toBytes("colfam2"));
ResultScanner scanner = myTable.getScanner(scan);
Result res;
while ((res = scanner.next()) != null) {
    NavigableMap<byte[], byte[]> colfam2 = res.getFamilyMap(Bytes.toBytes("colfam2"));
    boolean foundQueryValue = false;
    SearchForQueryValue:
    while (!colfam2.isEmpty()) {
        Entry<byte[], byte[]> cell = colfam2.pollFirstEntry();
        if (Bytes.equals(cell.getValue(), queryValue)) {
            foundQueryValue = true;
            break SearchForQueryValue;
        }
    }
    if (foundQueryValue) {
        NavigableMap<byte[], byte[]> colfam1 = res.getFamilyMap(Bytes.toBytes("colfam1"));
        LinkedList<KeyValue> listKV = new LinkedList<KeyValue>();
        while (!colfam1.isEmpty()) {
            Entry<byte[], byte[]> cell = colfam1.pollFirstEntry();
            listKV.add(new KeyValue(res.getRow(), Bytes.toBytes("colfam1"), cell.getKey(), cell.getValue()));
        }
        Result filteredResult = new Result(listKV);
    }
}
(This code was not tested)
And then finally filteredResult is what you want. This approach is not elegant and might also give you performance issues if you have a lot of data in those families. If "colfam1" has a lot of data, you don't want to transfer it to the client only to discard it because value "x" was not found in any qualifier of "colfam2".
Server-side filtering. This requires you to implement your own Filter class; I believe you cannot use the provided filter types to do this. Implementing your own Filter takes some work: you also need to compile it as a .jar and make it available to all RegionServers. But then it helps you avoid sending loads of "colfam1" data in vain.
It is too much work for me to show you how to implement a custom Filter, so I recommend reading a good book (HBase: The Definitive Guide, for example). However, the Filter code will look pretty much like the client-side filtering I showed you, so that's half of the work done.
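For orientation only, a very rough, untested skeleton of such a filter might look like this (written against the KeyValue-era Filter API that the code above implies; check the Filter API of your HBase version, and note that the query value would normally be passed in via the constructor and serialized):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: include every cell, but drop the whole row afterwards unless
// some cell in "colfam2" equals the query value.
public class MatchInFamilyFilter extends FilterBase {

    private static final byte[] COLFAM2 = Bytes.toBytes("colfam2");
    private final byte[] queryValue = Bytes.toBytes("x"); // hard-coded for this sketch

    private boolean matchFound = false;

    @Override
    public ReturnCode filterKeyValue(KeyValue kv) {
        if (Bytes.equals(kv.getFamily(), COLFAM2) && Bytes.equals(kv.getValue(), queryValue)) {
            matchFound = true;
        }
        return ReturnCode.INCLUDE; // keep everything; the row-level decision comes later
    }

    @Override
    public boolean filterRow() {
        return !matchFound; // drop the row if no colfam2 cell matched
    }

    @Override
    public boolean hasFilterRow() {
        return true;
    }

    @Override
    public void reset() {
        matchFound = false;
    }

    // Writable plumbing so the RegionServers can recreate the filter;
    // a real implementation would (de)serialize the query value here.
    @Override
    public void write(DataOutput out) throws IOException {
    }

    @Override
    public void readFields(DataInput in) throws IOException {
    }
}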
I have an entity and I'm trying to determine the type of its properties, preferably in Google App Engine's internal data types (as opposed to Java data types).
The below code is obviously simplified. In reality I do not know the entity's properties or anything else about it.
final DatastoreService dss = DatastoreServiceFactory.getDatastoreService();
final Query query = new Query("Person");
final PreparedQuery pq = dss.prepare(query);
for (Entity entity : pq.asIterable())
{
    final Object property = entity.getProperty("some_property");
    // Here I want to determine which data type 'property' represents - GAE-wise.
}
In App Engine's Java code I've found some hints:
DataTypeTranslator
DataTypeTranslator.typeMap (internal private member)
Property.Meaning.GD_PHONENUMBER
I'm unable to link those together into what I need - some sort of reflection.
I wish I was able to do something like this:
entity.getPropertyType("some_property");
Does anyone know better?
DataTypeTranslator source code here
Edit #1: <<
IGNORE this one. It's me who put these postfixes (I was confused by the doc).
Here's more important info I've found.
I'm getting it in Eclipse's tool-tip mini-window when I hover over an entity (one which I've just fetched from the Datastore).
The Datastore seems to send it (this payload) as raw text, which is nice; maybe I'll have to parse it (but how do I get it from code, LOL).
Pay attention to the types in here; they're written out plainly.
Here it is:
<Entity [Bird(9)]:
Int64Type:44rmna4kc2g23i9brlupps74ir#Int64Type = 1234567890
String:igt7qvk9p89nc3gjqn9s3jq69c = 7tns1l48vpttq5ff47i3jlq3f9
PhoneNumber:auih50aecl574ud23v9h4rfvt1#PhoneNumberType = 03-6491234
Date:k1qstkn9np0mpb6fp41cj6i3am = Wed Jul 20 23:03:13 UTC 2011
>
For example, the property named String:igt7qvk9p89nc3gjqn9s3jq69c has the value 7tns1l48vpttq5ff47i3jlq3f9 and it doesn't state its type; the same goes for the property Date:k1qstkn9np0mpb6fp41cj6i3am.
The property named Int64Type:44rmna4kc2g23i9brlupps74ir has the value "1234567890", and here it explicitly mentions that the data type is "Int64Type".
I'm searching for it too.
It's a bit of a hack, but at least my output includes the type (without needing a secret decoder ring). But my code is slightly different:
Query allusersentityquery = new Query();
allusersentityquery.setAncestor(userKey);
for (final Entity entity : datastore.prepare(allusersentityquery).asIterable()) {
    Map<String, Object> properties = entity.getProperties();
    String[] propertyNames = properties.keySet().toArray(
            new String[properties.size()]);
    for (final String propertyName : propertyNames) {
        // the class name of the value, properties.get(propertyName), contains
        // "com.google.appengine.api.datastore.PostalAddress" if it is a PostalAddress
    }
}
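In other words, the hack boils down to inspecting the runtime class of each property value. A minimal sketch of that (the class and method names here are just for illustration):

import java.util.Map;
import com.google.appengine.api.datastore.Entity;

public class PropertyTypeDumper {

    // Prints each property name together with the Java class of its value.
    // For GAE-specific types the class name itself is the answer, e.g.
    // com.google.appengine.api.datastore.PhoneNumber or ...PostalAddress;
    // plain Long/String/Date values correspond to the Int64/String/Date Datastore types.
    static void printPropertyTypes(Entity entity) {
        for (Map.Entry<String, Object> e : entity.getProperties().entrySet()) {
            Object value = e.getValue();
            String type = (value == null) ? "null (type not recoverable)" : value.getClass().getName();
            System.out.println(e.getKey() + " -> " + type);
        }
    }
}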
There seem to be no documents about determining the property types here.