How can I find a ProductVariant by its variantKey? - java

In my project I now need to obtain a product variant from its variantKey, but I have not found any method in the JVM SDK to do it.
I tried using ProductByKeyGet, but it only returns the product if the value corresponds to the product's own key; if the value corresponds to a variantKey it returns nothing.
Does anyone know a way to get the variant from its variantKey?
Thanks in advance.
Miguel de la Hoz

Today we released version 1.29.0 of our JVM SDK, where we added the missing support for querying product variants by key (see https://github.com/commercetools/commercetools-jvm-sdk/issues/1679).
With this version you can write the query in a typesafe fashion:
String myKey = "foo";
ProductProjectionType projectionType = ProductProjectionType.CURRENT;
ProductProjectionQuery query = ProductProjectionQuery.of(projectionType)
        .withPredicates(product -> product.allVariants()
                .where(variant -> variant.key().is(myKey)));
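To actually fetch the variant, you can execute the query and filter the matching product's variants, reusing the pattern from the answer below (sphereClient is assumed to be your configured blocking client):
PagedQueryResult<ProductProjection> result = sphereClient.executeBlocking(query);
// Variant keys are unique, so at most one product should match.
Optional<ProductVariant> variant = result.head().flatMap(p ->
        p.getAllVariants().stream()
                .filter(v -> myKey.equals(v.getKey()))
                .findFirst());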
Hope this helps!

For that you will need to use the Product Projection endpoint, where you can query for products that have either a variant or the master variant with the key you desire. Through the JVM SDK, you can achieve that by doing the following:
Build a QueryPredicate<EmbeddedProductVariantQueryModel> for the key you desire:
final String myKey = "foo";
final QueryPredicate<EmbeddedProductVariantQueryModel> queryPredicate =
        QueryPredicate.of("key=\"" + myKey + "\"");
Build a Function<ProductProjectionQueryModel, QueryPredicate<ProductProjection>> to query for the master variant:
final Function<ProductProjectionQueryModel, QueryPredicate<ProductProjection>> mvPredicateFunction =
        productQueryModel -> productQueryModel.masterVariant().where(queryPredicate);
Build a Function<ProductProjectionQueryModel, QueryPredicate<ProductProjection>> to query for the rest of the variants:
final Function<ProductProjectionQueryModel, QueryPredicate<ProductProjection>> variantsPredicateFunction =
        productQueryModel -> productQueryModel.variants().where(queryPredicate);
Combine both predicates with a semantic OR operator to build the ProductProjectionQuery (in this case on the staged projection):
final ProductProjectionQuery query = ProductProjectionQuery.ofStaged()
        .withPredicates(productQueryModel -> mvPredicateFunction.apply(productQueryModel)
                .or(variantsPredicateFunction.apply(productQueryModel)));
Execute the request:
final PagedQueryResult<ProductProjection> requestStage = sphereClient.executeBlocking(query);
Since variant keys are unique, you should expect at most one matching product projection:
final Optional<ProductProjection> optionalProductProjection = requestStage.head();
Traverse all variants (including the master variant) of the resulting product projection to fetch the one with the matching key:
final Optional<ProductVariant> optionalVariant = optionalProductProjection.flatMap(
        productProjection -> productProjection.getAllVariants().stream()
                .filter(productVariant -> myKey.equals(productVariant.getKey()))
                .findFirst());
Update:
Steps 1-4 can also be simplified to:
final String myKey = "foo";
final QueryPredicate<ProductProjection> productProjectionQueryPredicate = QueryPredicate
        .of("masterVariant(key = \"" + myKey + "\") OR variants(key = \"" + myKey + "\")");
final ProductProjectionQuery query = ProductProjectionQuery.ofStaged()
        .withPredicates(productProjectionQueryPredicate);

Related

Group by inside otherwise clause, spark java

I have a process in Spark (Java, an IntelliJ app) with a problem that I don't know how to resolve yet. First I declare the dataset:
private static final String CONTRA1 = "contra1";
query = "select contra1, ..., eadfinal, ..., data_date" + FROM + dbSchema + TBLNAME + " WHERE fech = '" + fechjmCto2 + "' AND s1emp=49";
Dataset<Row> jmCto2 = sql.sql(query);
Then I have the calculations: I analyze some fields to assign literal values. My problem is in the aggregate function:
Dataset<Row> contrCapOk1 = contrCapOk.join(jmCto2,
        contrCapOk.col(CONTRA1).equalTo(jmCto2.col(CONTRA1)), LEFT)
    .select(contrCapOk.col("*"),
            jmCto2.col("ind"),
            functions.when(jmCto2.col(CONTRA1).isNull(), functions.lit(NUEVES))
                    .when(jmCto2.col("ind").equalTo("N"), functions.lit(UNOS))
                    .otherwise(jmCto2.groupBy(CONTRA1).agg(functions.sum(jmCto2.col("eadfinal")))).as("EAD"),
What I want is to compute the sum in the otherwise part, but when I execute it the cluster gives me this message in the log:
User class threw exception: java.lang.RuntimeException: Unsupported literal type class org.apache.spark.sql.Dataset [contra1: int, sum(eadfinal): decimal(33,6)]
at line 211, the otherwise line.
Do you know what the problem could be?
Thanks.
You cannot use groupBy and an aggregation function in a column clause. To do what you want, you have to use a window.
For your case, you can define the following window:
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
...
WindowSpec window = Window
        .partitionBy(CONTRA1)
        .rangeBetween(Window.unboundedPreceding(), Window.unboundedFollowing());
Where:
partitionBy is the equivalent of groupBy for aggregation
rangeBetween determines which rows of the partition the aggregation function will use; here we take all rows
And then you use this window when calling your aggregation function, as follows:
import org.apache.spark.sql.functions;
...
Dataset<Row> contrCapOk1 = contrCapOk.join(
        jmCto2,
        contrCapOk.col(CONTRA1).equalTo(jmCto2.col(CONTRA1)),
        LEFT)
    .select(
        contrCapOk.col("*"),
        jmCto2.col("ind"),
        functions.when(jmCto2.col(CONTRA1).isNull(), functions.lit(NUEVES))
                .when(jmCto2.col("ind").equalTo("N"), functions.lit(UNOS))
                .otherwise(functions.sum(jmCto2.col("eadfinal")).over(window))
                .as("EAD"));
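If it helps to see the behavior in isolation, here is a self-contained toy sketch (the session setup, column names, and values are invented for illustration): a sum over a window keeps one row per input row, whereas groupBy().agg() would collapse each group to a single row.
import java.util.Arrays;
import org.apache.spark.sql.*;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
...
SparkSession spark = SparkSession.builder().master("local[*]").appName("windowDemo").getOrCreate();
Dataset<Row> df = spark.createDataFrame(
        Arrays.asList(
                RowFactory.create("c1", 10.0),
                RowFactory.create("c1", 5.0),
                RowFactory.create("c2", 7.0)),
        new StructType()
                .add("contra1", DataTypes.StringType)
                .add("eadfinal", DataTypes.DoubleType));
WindowSpec window = Window.partitionBy("contra1")
        .rangeBetween(Window.unboundedPreceding(), Window.unboundedFollowing());
// Every input row survives; "EAD" carries the per-contra1 total on each of them.
df.withColumn("EAD", functions.sum(df.col("eadfinal")).over(window)).show();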

AWS SDK Java DynamoDB - Query with Expression Attribute Name - An expression attribute value used in expression is not defined

Similar to: "I cannot query my dynamodb table from aws lambda due to wrong filterexpression?" and "DynamoDB update error Invalid UpdateExpression: An expression attribute value used in expression is not defined"
I am trying to code a way to query DynamoDB tables using partial matches on the partition key / sort key in Java.
The DynamoDB table I am trying to access has a partition key of "Type" (a reserved keyword in DynamoDB, I know, but not my choice) and a sort key of "Id". I know the "Type" but not the full Id, so I researched the query method in the AWS SDK 2.x source code and implemented it as shown below:
DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
        .region(Region.EU_WEST_1)
        .credentialsProvider(StaticCredentialsProvider.create(awsCredentials))
        .build();
String idKey = "wholeIdKey";
String idValue = "partialIdValue";
String typeValue = "typeValue";
Map<String, String> expressionNames = new HashMap<>();
expressionNames.put("#t", "Type");
QueryRequest request = QueryRequest.builder()
        .tableName(tableName)
        .keyConditionExpression("begins_with ( " + idKey + ", :" + idValue + " ) AND #t = :" + typeValue)
        .expressionAttributeNames(expressionNames)
        .build();
QueryResponse response = dynamoDbClient.query(request);
However, when I run this code, I get the following error message:
Exception in thread "main" software.amazon.awssdk.services.dynamodb.model.DynamoDbException:
Invalid KeyConditionExpression: An expression attribute value used in expression is not defined; attribute value: :typeValue
It's as if it's not recognizing that I have told the code to use the Expression Attribute Names feature to replace "#t" with "Type" (which is a reserved keyword in DynamoDB).
Can anyone help?
EDIT: References for code:
https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-dynamodb-src-main-java-com-example-dynamodb-Query.java.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ExpressionAttributeNames.html
https://www.javadoc.io/static/software.amazon.awssdk/dynamodb/2.7.14/software/amazon/awssdk/services/dynamodb/model/QueryRequest.html#expressionAttributeNames--
The name is fine, but you're prefixing both values with ':'. That causes a lookup in ExpressionAttributeValues, which you did not provide. Never try to write dynamic values directly into the query string.
Your expressionAttributeNames look fine, but you forgot to provide a value for :typeValue, so DynamoDB cannot know what to look for.
In addition to what you did, you need to add expressionAttributeValues where you can provide the actual values. See the documentation here.
Fixed code for whoever wants it in the future (thanks to @aherve and @MattTimmermans):
DynamoDbClient dynamoDbClient = DynamoDbClient.builder()
        .region(Region.EU_WEST_1)
        .credentialsProvider(StaticCredentialsProvider.create(awsCredentials))
        .build();
String idValue = "partialIdValue";
String typeValue = "typeValue";
Map<String, String> expressionNames = new HashMap<>();
expressionNames.put("#t", "Type");
expressionNames.put("#i", "Id");
Map<String, AttributeValue> expressionValues = new HashMap<>();
expressionValues.put(":typeName", AttributeValue.builder().s(typeValue).build());
expressionValues.put(":idName", AttributeValue.builder().s(idValue).build());
QueryRequest request = QueryRequest.builder()
        .tableName(tableName)
        .keyConditionExpression("#t = :typeName AND begins_with ( #i, :idName )")
        .expressionAttributeNames(expressionNames)
        .expressionAttributeValues(expressionValues)
        .build();
QueryResponse response = dynamoDbClient.query(request);
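To consume the results, you can then iterate over the returned items; a brief sketch (attribute names taken from the question's table):
// Each item is a Map<String, AttributeValue>; s() reads a string attribute.
for (Map<String, AttributeValue> item : response.items()) {
    System.out.println(item.get("Type").s() + " / " + item.get("Id").s());
}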

BigQuery, how to define an array-like field programmatically?

I'm trying to start a data warehouse project; this is what I would like my schema to look like:
table: event_log
schema:
-> info
-> user_id: "xyz"
-> user_properties // <- I want this to be array like
-> 0
-> key: "name"
-> value
-> int_value: null
-> string_value: "osp"
...
-> 1 // and it goes on
The problem is I don't know how to programmatically define this array-like structure.
I took the idea from here:
https://www.youtube.com/watch?v=pxNrkjBeHpw
Here is my code (Kotlin using the Java Google Cloud library) so far:
val tableId = TableId.of(datasetName, tableName)
// First part, general info fields
val generalInfoFields = ArrayList<Field>()
generalInfoFields.add(Field.of("user_id", LegacySQLTypeName.STRING))
generalInfoFields.add(Field.of("user_properties", {ARRAY LIKE TYPE??}))
val generalInfo = Field.of("general_info", LegacySQLTypeName.RECORD, FieldList.of(generalInfoFields))
// Combine fields and create the table
val tableSchema = Schema.of(generalInfo)
val tableDefinition = StandardTableDefinition.of(tableSchema)
val tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build()
val table = bigquery.create(tableInfo)
log.info("table created " + table.tableId.table)
Any help would be greatly appreciated
To define an array in a BigQuery schema you need to use the Field.Mode.REPEATED modifier. Check the official docs.
Your code will look something like this:
val arrayField = Field.newBuilder("user_properties", LegacySQLTypeName.RECORD, FieldList.of(<record nested fields here>))
        .setMode(Field.Mode.REPEATED).build()
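For the schema sketched in the question, the nested fields could be spelled out as follows. This is a sketch in Java against the same google-cloud-bigquery client (the field names simply mirror the question; the Kotlin equivalent is mechanical):
// "value" holds one of int_value / string_value, per the question's schema.
Field valueField = Field.of("value", LegacySQLTypeName.RECORD,
        Field.of("int_value", LegacySQLTypeName.INTEGER),
        Field.of("string_value", LegacySQLTypeName.STRING));
// REPEATED mode is what makes user_properties array-like.
Field userProperties = Field.newBuilder("user_properties", LegacySQLTypeName.RECORD,
        Field.of("key", LegacySQLTypeName.STRING), valueField)
        .setMode(Field.Mode.REPEATED)
        .build();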

How to define custom analyzer to do global search with hibernate-search and elasticsearch

I have an implementation of hibernate-search-orm (5.9.0.Final) with hibernate-search-elasticsearch (5.9.0.Final).
I defined a custom analyzer on an entity (see below) and indexed two entities:
id: "1"
title: "Médiatiques : récit et société"
abstract: ...
id: "2"
title: "Mediatique Com'7"
abstract: ...
The search works fine when I search on the title field:
"title:médiatique" => 2 results
"title:mediatique" => 2 results
My problem is when I do a global search, with or without accents:
search on "médiatique" => 1 result (id: 1)
search on "mediatique" => 1 result (id: 2)
Is there a way to resolve this?
Thanks.
Entity definition:
@Entity
@Table(name = "bibliographic")
@DynamicUpdate
@DynamicInsert
@Indexed(index = "bibliographic")
@FullTextFilterDefs({
        @FullTextFilterDef(name = "fieldsElasticsearchFilter",
                impl = FieldsElasticsearchFilter.class)
})
@AnalyzerDef(name = "customAnalyzer",
        tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
        filters = {
                @TokenFilterDef(factory = LowerCaseFilterFactory.class),
                @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
        })
@Analyzer(definition = "customAnalyzer")
public class BibliographicHibernate implements Bibliographic {
    ...
    @Column(name = "title", updatable = false)
    @Fields({
            @Field,
            @Field(name = "titleSort", analyze = Analyze.NO, store = Store.YES)
    })
    @SortableField(forField = "titleSort")
    private String title;
    ...
}
Search method:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
QueryDescriptor q = ElasticsearchQueries.fromQueryString(queryString);
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
if (filters!=null){
filters.stream().map((filter) -> filter.split(":")).forEach((f) -> {
query.enableFullTextFilter("fieldsElasticsearchFilter")
.setParameter("field", f[0])
.setParameter("value", f[1]);
}
);
}
if (facetFields!=null){
facetFields.stream().map((facet) -> facet.split(":")).forEach((f) ->{
query.getFacetManager()
.enableFaceting(qb.facet()
.name(f[0])
.onField(f[0])
.discrete()
.orderedBy(FacetSortOrder.COUNT_DESC)
.includeZeroCounts(false)
.maxFacetCount(10)
.createFacetingRequest() );
}
);
}
List<Bibliographic> bibs = query.getResultList();
To be honest I'm more surprised document 1 would match at all, since there's a trailing "s" on "Médiatiques" and you don't use any stemmer.
You are in a special case here: you are using a query string and passing it directly to Elasticsearch (that's what ElasticsearchQueries.fromQueryString(queryString) does). Hibernate Search has very little impact on the query being run, it only impacts the indexed content and the Elasticsearch mapping here.
When you run a QueryString query on Elasticsearch and you don't specify any field, it uses all fields in the document. I wouldn't bet that the analyzer used when analyzing your query is the same analyzer that you defined on your "title" field. In particular, it may not be removing accents.
An alternative solution would be to build a simple query string query using the QueryBuilder. The syntax of queries is a bit more limited, but is generally enough for end users. The code would look like this:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class).get();
Query q = qb.simpleQueryString()
        .onFields("title", "abstract")
        .matching(queryString)
        .createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
Users would still be able to target specific fields, but only those in the list you provided (which, by the way, is probably safer; otherwise they could target sort fields and so on, which you probably don't want to allow). By default, all the fields in that list are targeted.
This may lead to the exact same result as the query string, but the advantage is that you can override the analyzer used for the query. For instance:
FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
QueryBuilder qb = ftem.getSearchFactory().buildQueryBuilder().forEntity(Bibliographic.class)
        .overridesForField("title", "customAnalyzer")
        .overridesForField("abstract", "customAnalyzer")
        .get();
Query q = qb.simpleQueryString()
        .onFields("title", "abstract")
        .matching(queryString)
        .createQuery();
FullTextQuery query = ftem.createFullTextQuery(q, Bibliographic.class).setFirstResult(start).setMaxResults(rows);
... and this will use your analyzer when querying.
As an alternative, you can also use a more advanced JSON query by replacing ElasticsearchQueries.fromQueryString(queryString) with ElasticsearchQueries.fromJsonQuery(json). You will have to craft the JSON yourself, though, taking some precautions to avoid any injection from the user (use Gson to build the JSON) and taking care to follow the Elasticsearch query syntax.
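A hedged sketch of that approach (the JSON shape follows Elasticsearch's simple_query_string syntax, and building it with Gson escapes the user input rather than concatenating it):
JsonObject params = new JsonObject();
params.addProperty("query", queryString);          // raw user input, escaped by Gson
params.addProperty("analyzer", "customAnalyzer");  // the analyzer defined on the entity
JsonArray fields = new JsonArray();
fields.add("title");
fields.add("abstract");
params.add("fields", fields);
JsonObject queryClause = new JsonObject();
queryClause.add("simple_query_string", params);
JsonObject root = new JsonObject();
root.add("query", queryClause);
QueryDescriptor q = ElasticsearchQueries.fromJsonQuery(root.toString());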
You can find more information about simple query string queries in the official documentation.
Note: you may want to add FrenchMinimalStemFilterFactory to your list of token filters in your custom analyzer. It's not the cause of your problem, but once you manage to use your analyzer in search queries, you will very soon find it useful.

Neo4j Java OGM select with lock

I am trying to select a path and lock the last node in that path, using the Java OGM for Neo4j.
To do that in Cypher I have written the following query:
String q = "MATCH path = (p:Root)-[*1..100]-(m:Leaf) WHERE m.State = 'Non-Processed' WITH m, p, path ORDER BY length(path) LIMIT 1 SET m.State = 'Processing' RETURN path";
It selects the necessary path and locks the last leaf (by changing its State property).
However, when I try to execute this query:
session.query(Path.class, q, propertyMap)
I get a java.lang.RuntimeException: query() only allows read only cypher. To make modifications use execute()
What is the proper way to do this?
You're probably using an older version of neo4j-ogm which had the restriction on session.query(). Please upgrade to neo4j-ogm 1.1.4
Found a (probably not the best) solution:
String uid = UUID.randomUUID().toString();
String lockQuery = "MATCH path = (p:Root)-[*1..100]-(m:Leaf) "
        + "WHERE m.State = 'Non-Processed' "
        + "WITH m, p, path ORDER BY length(path) LIMIT 1 "
        + "SET m.lock = '" + uid + "'";
session.execute(lockQuery);
String getQuery = "MATCH path = (p:Root)-[*1..100]-(m:Leaf) "
        + "WHERE m.lock = '" + uid + "' RETURN path";
Path path = session.query(Path.class, getQuery, new HashMap<String, Object>());
Will this work?
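A hedged variant, assuming your OGM version accepts a parameter map on execute() the way query() does above: passing uid as a parameter avoids the quoting problem and Cypher injection altogether ({param} is the placeholder syntax of that Neo4j era):
String uid = UUID.randomUUID().toString();
Map<String, Object> params = new HashMap<>();
params.put("uid", uid);
// Mark the chosen leaf...
session.execute("MATCH path = (p:Root)-[*1..100]-(m:Leaf) "
        + "WHERE m.State = 'Non-Processed' "
        + "WITH m, p, path ORDER BY length(path) LIMIT 1 "
        + "SET m.lock = {uid}", params);
// ...then read the locked path back.
Path path = session.query(Path.class,
        "MATCH path = (p:Root)-[*1..100]-(m:Leaf) WHERE m.lock = {uid} RETURN path",
        params);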
