I have the string 1|1|1|1 and I want to sum the numbers inside a SpEL statement.
I tried the following but got an invalid expression:
T(java.util.Arrays).stream(SERVICE_CLASS.split('|')).reduce(Integer::sum)
The log:
org.springframework.expression.spel.SpelParseException: Expression [new Object[] {T(java.util.Arrays).stream(SERVICE_CLASS.split('|')).reduce(Integer::sum)}] #81: EL1043E: Unexpected token. Expected 'rparen())' but was 'colon(:)'
The full code:
@Test fun `test sum values`() {
val cont = ScriptingContext.default();
val contextrecord: Map<String, String> = mapOf(
"value" to "1|1|1|1"
)
val context = cont.getEvalContext()
context.setRootObject(contextrecord)
val value = cont.parser.parseExpression("new Object[] {T(java.util.Arrays).stream(value.split('|')).reduce(Integer::sum)}").getValue(context) as Array<Any>
println(value.toString())
assertEquals(value[0], "4")
}
Any suggestions on how to do this?
SpEL stands for Spring Expression Language. It is not Java: no lambdas or method references are supported, which is why the parser rejects the :: token. (Note also that String.split() takes a regular expression, so a literal pipe must be escaped as split('\\|'); an unescaped '|' splits between every character.)
You can, however, register plain static methods as SpEL functions; see Spring Expression Language - Java 8 forEach or stream on list.
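A minimal sketch of that approach (the SpelFunctions class and the sumPipes name are made up here): register a static Java method on a StandardEvaluationContext and call it from the expression with the # prefix.

import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

public class SpelFunctions {

    // Plain Java does the summing that SpEL cannot express natively.
    public static int sumPipes(String s) {
        int total = 0;
        for (String part : s.split("\\|")) { // escape the pipe: split() takes a regex
            total += Integer.parseInt(part);
        }
        return total;
    }

    public static void main(String[] args) throws NoSuchMethodException {
        ExpressionParser parser = new SpelExpressionParser();
        StandardEvaluationContext context = new StandardEvaluationContext();

        // Expose the static method to SpEL under the name #sumPipes
        context.registerFunction("sumPipes",
                SpelFunctions.class.getDeclaredMethod("sumPipes", String.class));
        context.setVariable("value", "1|1|1|1");

        Integer result = parser.parseExpression("#sumPipes(#value)")
                .getValue(context, Integer.class);
        System.out.println(result); // prints 4
    }
}

In your test, the same registerFunction call on the context from cont.getEvalContext() should let you rewrite the expression as #sumPipes(value), assuming that context is a StandardEvaluationContext.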
I want to create a variable in Scala which can have data of the following format:
"one" -> "two",
"three" -> {"four" -> "five", "six" -> "seven"},
"eight" -> {"nine" -> {"ten" -> "eleven", "twelve" -> "thirteen"},
            "fourteen" -> {"fifteen" -> "sixteen"}
           }
I tried creating a Java HashMap using:
var requiredVar = new HashMap[String, Object]()
I am able to do something like:
var hm = new HashMap[String, String]
hm.put("four","five")
requiredVar.put("three",hm)
But if I try to add:
requiredVar.get("three").put("six","seven")
I get the error that
value put is not a member of Object
How can I get this done?
I have tried something like native to Scala as well:
val a = Map("one" -> "two" , "three" -> Map("four"->"five"))
a.get("three").put("six"->"seven")
but get the following error:
error: value put is not a member of Option[Any]
In the first case, when using the Java HashMap, you get the error because the compiler doesn't know that the value retrieved from requiredVar is a Map.
You declared requiredVar to be a HashMap[String, Object], so the compiler only knows that anything retrieved from the map is an Object, nothing more specific.
Specifically, in your case:
requiredVar.get("three").put("six","seven")
requiredVar.get("three") returns an Object, which doesn't have a put() method.
You are running into a similar issue in the Scala version of your code as well. When you create the Map:
val a = Map("one" -> "two" , "three" -> Map("four"->"five"))
the compiler must infer the types of the keys and values, which it does by finding the closest common ancestor of all the values; for a String and another Map, that is Any, Scala's equivalent of Java's Object. So when you try to do
a.get("three").put("six"->"seven")
a.get("three") is returning an Option[Any], which doesn't have a put method. By the way, Scala's Map.get returns an Option, so that if the key is not present in the map, a None is returned instead an exception being thrown. You can also use the more concise method a("three"), which returns the value type directly (in this case Any), but will throw an exception if the key is not in the map.
There are a few ways I can think of to achieve what you want.
1) Casting
If you are absolutely sure that the value you are retrieving from the map is another Map instead of a String, you can cast the value:
requiredVar.get("three").asInstanceOf[HashMap[String, String]].put("six","seven")
This is a fairly brittle approach: if the value is actually a String, a runtime exception will be thrown.
2) Pattern Matching
Rather than casting arbitrarily, you can test the retrieved value for its type, and only call put on values you know are maps:
requiredVar.get("three") match {
case m: HashMap[String, String] => m.put("six", "seven")
case _ => // the value is probably a string here, handle this how you'd like
}
This allows you to guard against the case where the value is not a map. Note, however, that due to type erasure the type parameters in case m: HashMap[String, String] are not checked at runtime; the match only verifies the value is a HashMap. It is also still brittle because the value type is Any, so in the case _ branch you don't actually know the value is a String, and you would have to pattern match or cast again to use it as one.
3) Create a new value type
Rather than rely on a top type like Object or Any, you can create types of your own to use as the value type. Something like the following could work:
import scala.collection.mutable.Map
sealed trait MyVal
case class StringVal(s: String) extends MyVal
case class MapVal(m: Map[String, String]) extends MyVal
object MyVal {
def apply(s: String): StringVal = StringVal(s)
def apply(m: Map[String, String]): MapVal = MapVal(m)
}
var rv = Map[String, MyVal]()
rv += "one" -> MyVal("two")
rv += "three" -> MyVal(Map[String, String]("four" -> "five"))
rv.get("three") match {
case Some(v) => v match {
case MapVal(m) => m("six") = "seven"
case StringVal(s) => // handle the string case as you'd like
}
case None => // handle the key not being present in the map here
}
The usage may look similar, but the advantage now is that the pattern match on rv.get("three") is exhaustive.
4) Union types
If you happen to be using a 3.x version of Scala, you can use a union type to specify exactly what types of values you will have in your map, and achieve something like the above option much less verbosely:
import scala.collection.mutable.Map
val rv: Map[String, String | Map[String, String]] = Map()
rv += "one" -> "two"
rv += "three" -> Map[String, String]("four" -> "five")
rv.get("three") match {
case Some(m: Map[String, String]) => m += "six" -> "seven"
case Some(s: String) => // handle string values
case None => // handle key not present
}
One thing to note, though, with all of the above options: in Scala it is preferable to use immutable collections rather than mutable ones like HashMap or scala.collection.mutable.Map (which is a HashMap under the hood by default). I would do some research on immutable collections and think about how you can redesign your code accordingly.
Here is my Document:
{
"_id":"5b1ff7c53e3ac841302cfbc2",
"idProf":"5b1ff7c53e3ac841302cfbbf",
"pacientes":["5b20d2c83e3ac841302cfbdb","5b20d25f3e3ac841302cfbd0"]
}
I want to know how to find a duplicate entry in the array using MongoCollection in Java.
This is what I'm trying:
BasicDBObject query = new BasicDBObject("idProf", idProf);
query.append("$in", new BasicDBObject().append("pacientes", idJugador.toString()));
collection.find(query)
We can try to solve this in your Java application code:
import static com.mongodb.client.model.Filters.eq;

private final MongoCollection<Document> collection;

public boolean hasDuplicatePacientes(String idProf) {
    Document d = collection.find(eq("idProf", idProf)).first();
    if (d == null) {
        // no document for this idProf: nothing to check
        return false;
    }
    List<String> pacientes = (List<String>) d.get("pacientes");
    if (pacientes == null || pacientes.isEmpty()) {
        return false;
    }
    // a Set keeps only unique entries, so a smaller size means duplicates
    Set<String> unique = new HashSet<>(pacientes);
    return unique.size() != pacientes.size();
}
Or if you're searching for a way to do this fully on the db side, I believe it's also possible with something like what Neil Lunn provided.
The best approach really is to compare the length of the array to the length of an array which would have all duplicates removed. A "Set" does not have duplicate entries, so what you need to do is convert an array into a "Set" and test against the original.
Modern MongoDB $expr
Modern MongoDB releases have $expr which can be used with aggregation expressions in a regular query. Here the expressions we would use are $setDifference and $size along with $ne for the boolean comparison:
Document query = new Document(
"$expr", new Document(
"$ne", Arrays.asList(
new Document("$size", "$pacientes"),
new Document("$size",
new Document("$setDifference", Arrays.asList("$pacientes", Collections.emptyList()))
)
)
)
);
MongoCursor<Document> cursor = collection.find(query).iterator();
Which serializes as:
{
"$expr": {
"$ne": [
{ "$size": "$pacientes" },
{ "$size": { "$setDifference": [ "$pacientes", [] ] } }
]
}
}
Here it is actually the $setDifference which does the comparison and returns only unique elements. The $size returns the length, both of the original document array content and of the newly reduced "set". And of course, where these are "not equal" (the $ne), the condition is true, meaning that a duplicate was found in the document.
The $expr receives that boolean true/false value and uses it to decide whether the document matches the condition.
Earlier Version $where clause
Basically, $where takes a JavaScript expression that is evaluated on the server:
String whereClause = "this.pacientes.length != Object.keys(this.pacientes.reduce((o,e) => Object.assign(o, { [e.valueOf()]: null }), {})).length";
Document query = new Document("$where", whereClause);
MongoCursor<Document> cursor = collection.find(query).iterator();
JavaScript evaluation must not have been explicitly disabled on the server (it is enabled by default), and this is not as efficient as using $expr and the native aggregation operators. But JavaScript expressions can be evaluated in the same way using $where, and the argument in Java code is basically sent as a string.
In the expression the .length is a property of all JavaScript arrays, so you have the original document content and the comparison to the "set". The Array.reduce() uses each array element as a "key" in a resulting object, from which the Object.keys() will then return those "keys" as a new array.
Since JavaScript objects work like a Map, only unique keys are allowed and this is a way to get that "set" result. And of course the same != comparison will return true when the removal of duplicate entries resulted in a change of length.
In either case, $expr and $where are computed conditions which cannot use an index present on the collection. It is therefore generally recommended to combine them with additional criteria that use regular equality or range-based query expressions, since those can utilize an index; such additional criteria in the predicate greatly improve query performance where an index is in place.
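For instance, a sketch that reuses the $expr document built above and adds the idProf equality from your original query (assuming idProf is indexed, which your schema suggests but does not guarantee):

// The "idProf" equality can use an index, narrowing the set of documents
// the computed $expr condition has to evaluate.
Document combined = new Document("idProf", idProf)
    .append("$expr", new Document(
        "$ne", Arrays.asList(
            new Document("$size", "$pacientes"),
            new Document("$size",
                new Document("$setDifference",
                    Arrays.asList("$pacientes", Collections.emptyList())))
        )
    ));

MongoCursor<Document> cursor = collection.find(combined).iterator();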
I have a JSON file with this type of schema:
{
"name" : "john doe",
"phone-numbers" : {
"home": ["1111", "222"],
"country" : "England"
}
}
The home phone numbers array could sometimes be empty.
My Spark application receives a list of these JSONs and does this:
val dataframe = spark.read.json(filePaths: _*)
val result = dataframe.select($"name",
explode(dataframe.col("phone-numbers.home")))
When the 'home' array is empty, I receive the following error when I try to explode it:
org.apache.spark.sql.AnalysisException: cannot resolve
'phone-numbers['home']' due to data type mismatch: argument 2
requires integral type, however, ''home'' is of string type.;;
Is there an elegant way to prevent spark from exploding this field if it's empty or null?
The problem is not empty arrays ("home": []) but arrays which are null ("home": null), which do not work with explode.
So either filter the null-values first:
val result = df
.filter($"phone-numbers.home".isNotNull)
.select($"name", explode($"phone-numbers.home"))
or replace the null values with an empty array (which I would prefer in your situation). Note that Spark passes array columns to UDFs as Seq rather than Array, and the sample schema stores the numbers as strings; also, withColumn with a dotted name would create a new top-level column literally named "phone-numbers.home" instead of replacing the nested field, so the cleaned column is used directly in the select:
val nullToEmptyArr = udf(
  (arr: Seq[String]) => if (arr == null) Seq.empty[String] else arr
)
val result = df
  .select($"name", explode(nullToEmptyArr($"phone-numbers.home")))
In Spark there's a class called DataFrameNaFunctions; this class is specialized for working with missing data in DataFrames.
It contains three essential methods: drop, replace and fill.
To use these methods, you only have to call df.na, which returns a DataFrameNaFunctions for your df, and then apply one of the three methods, which returns your df with the specified operation applied.
To resolve your problem you can use something like this:
val dataframe = spark.read.json(filePaths: _*)
val result = dataframe.na.drop()
  .select(col("name"), explode(col("phone-numbers.home")))
Hope this helps. Best regards.
What is the equivalent Scala constructor (to create an immutable HashSet) to the Java
new HashSet<T>(c)
where c is of type Collection<? extends T>?
All I can find in the HashSet object is apply.
The most concise way to do this is probably to use the ++ operator:
import scala.collection.immutable.HashSet
val list = List(1,2,3)
val set = HashSet() ++ list
There are two parts to the answer. The first part is that Scala variable-argument methods that take a T* are sugaring over methods taking Seq[T]. You tell Scala to treat a Seq[T] as a list of arguments instead of a single argument using seq : _*.
The second part is converting a Collection[T] to a Seq[T]. There's no general built-in way to do it in Scala's standard library just yet, but one very easy (if not necessarily efficient) way is to call toArray. Here's a complete example.
scala> val lst : java.util.Collection[String] = new java.util.ArrayList
lst: java.util.Collection[String] = []
scala> lst add "hello"
res0: Boolean = true
scala> lst add "world"
res1: Boolean = true
scala> Set(lst.toArray : _*)
res2: scala.collection.immutable.Set[java.lang.Object] = Set(hello, world)
Note that scala.Predef.Set is scala.collection.immutable.Set, whose default implementation is a HashSet.
From Scala 2.13, use the from method on the companion object:
import scala.collection.immutable.HashSet
val list = List(1,2,3)
val set = HashSet.from(list)
I have a system where I query a REST / Atom server for documents. The queries are inspired by GData and look like :
http://server/base/feeds/documents?bq=[type in {'news'}]
I have to parse the "bq" parameter to know which type of documents will be returned without actually doing the query. So for example,
bq=[type = 'news'] -> return ["news"]
bq=[type in {'news'}] -> return ["news"]
bq=[type in {'news', 'article'}] -> return ["news", "article"]
bq=[type = 'news']|[type = 'article'] -> return ["news", "article"]
bq=[type = 'news']|[title = 'My Title'] -> return ["news"]
Basically, the query language is a list of predicates that can be combined with OR ("|") or AND (no separator). Each predicate is a constraint on a field. The constraint can be =, <, >, <=, >=, in, etc. There can be spaces anywhere they make sense.
I'm a bit lost between Regexp, StringTokenizer, StreamTokenizer, etc., and I am stuck with Java 1.4, so no Parser ...
Can anyone point me in the right direction?
Thanks!
The right way would be to use a parser generator like ANTLR, JFlex or JavaCC.
A quick and dirty way would be (note that backslashes in the split regexes must be doubled in Java string literals, and Java 1.4 has no generics or for-each):
String[] disjunctedPredicateGroups = query.split("\\|");
List normalizedPredicates = new ArrayList();
for (int i = 0; i < disjunctedPredicateGroups.length; i++) {
    normalizedPredicates.add(disjunctedPredicateGroups[i].split("\\[|\\]"));
}
// process each predicate
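If you only need the type names rather than a full parse tree, here is a rough regex-based sketch (the class name and patterns are mine, and it assumes well-formed queries like your examples; java.util.regex ships with Java 1.4, and raw collections are used since 1.4 has no generics):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BqTypeExtractor {

    // Matches predicates of the form: type = 'news'
    private static final Pattern TYPE_EQ = Pattern.compile("type\\s*=\\s*'([^']*)'");
    // Matches predicates of the form: type in {'news', 'article'}
    private static final Pattern TYPE_IN = Pattern.compile("type\\s+in\\s*\\{([^}]*)\\}");
    // Pulls individual quoted values out of an "in" list
    private static final Pattern QUOTED = Pattern.compile("'([^']*)'");

    public static List extractTypes(String bq) {
        List types = new ArrayList();
        Matcher eq = TYPE_EQ.matcher(bq);
        while (eq.find()) {
            types.add(eq.group(1));
        }
        Matcher in = TYPE_IN.matcher(bq);
        while (in.find()) {
            Matcher values = QUOTED.matcher(in.group(1));
            while (values.find()) {
                types.add(values.group(1));
            }
        }
        return types;
    }

    public static void main(String[] args) {
        // prints [news, article]
        System.out.println(extractTypes("[type in {'news', 'article'}]"));
        // prints [news] : the title predicate is ignored
        System.out.println(extractTypes("[type = 'news']|[title = 'My Title']"));
    }
}

This deliberately ignores constraints on other fields, which matches your examples, but it will not cope with nested or quoted brackets; for anything more elaborate, the parser generator route is safer.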