I have a domain class Coach which has a hasMany relationship to another domain class CoachProperty.
Hibernate/Grails is creating a third joined table in the database.
In the example below I am trying to fetch the coaches which have both foo AND bar as their text values. I have tried different solutions with 'or' and 'and' in Grails, which either return an empty list or a list with BOTH foo and bar.
Coach:
class Coach {
    static hasMany = [ coachProperties : CoachProperty ]
}
CoachProperty:
class CoachProperty {
    String text
    boolean active = true

    static constraints = {
        text(unique: true, nullable: false, blank: false)
    }
}
The join table which is auto-created and which I populated with some data. In this example I am trying to fetch coach 372, since that coach has both 1 and 2, i.e. foo and bar:
+---------------------------+-------------------+
| coach_coach_properties_id | coach_property_id |
+---------------------------+-------------------+
| 150                       | 2                 |
| 372                       | 1                 |
| 372                       | 2                 |
| 40                        | 3                 |
+---------------------------+-------------------+
Inside Coach.createCriteria().list(), among other filters. This should return coach 372 but returns an empty list:
def tempList = ["foo", "bar"]
coachProperties {
    for (String temp : tempList) {
        and {
            log.info "temp = " + temp
            ilike("text", temp)
        }
    }
}
I seem to remember this error. It was something about not being able to use both nullable and blank at the same time. Try with just 'nullable: true'.
I had to create a workaround with executeQuery, where ids is the list containing the ids of the CoachProperty instances I was trying to fetch.
def coaches = Coach.executeQuery '''
        select coach from Coach as coach
        join coach.coachProperties as props
        where props.id in :ids
        group by coach
        having count(coach) = :count''', [ids: ids.collect { it.toLong() }, count: ids.size().toLong()]
or {
    coaches.each {
        eq("id", it.id)
    }
}
Related
I am reading City and Country data from 2 CSV files and need to merge the result using Java Stream (I need to keep the same order as the first result). I thought of using a parallel stream or CompletableFuture, but as I need the result of the first fetch to pass as a parameter to the second fetch, I am not sure they are suitable for this scenario.
So, in order to read data from the first query, pass its result to the second one, and obtain the result, what should I do with Java Stream?
Here are the related entities. I have to relate them using country code values.
Assume that I just need the country names for the following cities. Please keep in mind that I need to keep the same order as the first result. For example, if the result is [Berlin, Kopenhag, Paris], then the second result should be in the same order: [Germany, Denmark, France].
City:
id | name     | countryCode |
-----------------------------
1  | Berlin   | DE          |
2  | Munich   | DE          |
3  | Köln     | DE          |
4  | Paris    | FR          |
5  | Kopenhag | DK          |
...
Country:
id  | name    | code |
----------------------
100 | Germany | DE   |
105 | France  | FR   |
108 | Denmark | DK   |
...
Here are the related classes:
public class City {
    @CsvBindByPosition(position = 0)
    private Integer id;

    @CsvBindByPosition(position = 1)
    private String name;

    @CsvBindByPosition(position = 2)
    private String countryCode;

    // setters, getters, etc.
}
public class Country {
    @CsvBindByPosition(position = 0)
    private Integer id;

    @CsvBindByPosition(position = 1)
    private String name;

    @CsvBindByPosition(position = 2)
    private String code;

    // setters, getters, etc.
}
You can merge your data with a stream, for example by adding a countryName field to City:
List<Country> countries = // Your CSV Country lines
List<City> cities = // Your CSV City lines
cities.forEach(city -> city.setCountryName(countries.stream()
.filter(country -> country.getCode().equals(city.getCountryCode()))
.map(Country::getName).findAny().orElse(null)));
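This nested stream scans the whole country list for every city. If the lists are large, a possible alternative (only a sketch, assuming country codes are unique) is to index the countries by code first and then look each city up in the map; because the cities list drives the stream, the result keeps the same order as the first result:
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Build a code -> name lookup once, then resolve each city's country name.
Map<String, String> nameByCode = countries.stream()
        .collect(Collectors.toMap(Country::getCode, Country::getName));

List<String> countryNames = cities.stream()
        .map(city -> nameByCode.get(city.getCountryCode()))
        .collect(Collectors.toList());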
I have a table like this called my_objects:
| code | description | open | closed |
+ ---- + ----------- + ---- + ------ +
| 1 | first | 0 | 1 |
| 1 | first | 1 | 0 |
| 2 | second | 1 | 0 |
| 2 | second | 1 | 0 |
I'm returning a JSON object like this:
{
"totalItems": 2
"myObjs": [
{
"code": 1,
"description": "first",
"openCount": 1,
"closedCount": 1
},
{
"code": 2,
"description": "second",
"openCount": 2,
"closedCount": 0
}
],
"totalPages": 1,
"curentPage": 0
}
My query in my repository (MyObjsRepository.java) looks like this:
@Query(
    value = "SELECT new myObjs(code, description, "
        + "COUNT(CASE open WHEN 1 THEN 1 ELSE null END) as openCount, "
        + "COUNT(CASE closed WHEN 1 THEN 1 ELSE null END) as closedCount) "
        + "FROM MyObjs "
        + "GROUP BY (code, description)"
)
Page<MyObjs> findMyObjs(Pageable pageable);
This works, but I run into an issue when trying to sort by my aggregated columns. When I try to sort by openCount, the Pageable object will contain a org.springframework.data.domain.Sort with an Order with the property openCount. The log for my application shows what's going wrong (formatted for readability):
select
myObjs0_.code as col_0_0_,
myObjs0_.description as col_1_0_,
count(case myObjs0_.open when 1 then 1 else null end) as col_2_0_,
count(case myObjs0_.closed when 1 then 1 else null end) as col_3_0_
from my_objects myObjs0_
group by (myObjs0_.code, myObjs0_.description)
order by myObjs0_.openCount asc limit ?
The aliases aren't preserved, so I get the following error:
Caused by: org.postgresql.util.PSQLException: ERROR: column myObjs0_.openCount does not exist
I've tried renaming the sorting parameters, adding columns with the aliased names to my entity, and adding open and closed to the group by clause. I think I may be able to solve this with a native query, but I'd really rather not do that. Is there a way to resolve this issue without a native query?
Edit:
The MyObjs entity looks like this:
@Entity
@Table(schema = "my_schmea", name = "my_objects")
public class MyObjs {
    @Column(name = "code")
    private Integer code;

    @Column(name = "description")
    private String description;

    @Column(name = "open")
    private Integer open;

    @Column(name = "closed")
    private Integer closed;

    /* getters, setters, and constructor */
}
The MyObjsDto looks like this:
@JsonAutoDetect(getterVisibility = JsonAutoDetect.Visibility.PUBLIC_ONLY)
public class MyObjsDto {
    @JsonProperty(value = "code")
    private String code;

    @JsonProperty(value = "description")
    private String description;

    @JsonProperty(value = "openCount")
    private String open;

    @JsonProperty(value = "closedCount")
    private String closed;

    /* getters, setters, and constructor */
}
Sort uses a column that is present in the table; here you are calculating it.
I would suggest you explore and use the @Formula annotation to perform the same action.
#Formula("COUNT(CASE open WHEN 1 THEN 1 ELSE null END)")
private Integer open;
#Formula("COUNT(CASE closed WHEN 1 THEN 1 ELSE null END)")
private Integer closed;
and use these attributes to apply the sorting.
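If that mapping works, the Sort can then reference the entity property name instead of the SQL alias. A hedged usage sketch (the page size and the repository variable name are assumptions; findMyObjs is the method from the question):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;

// Hypothetical call: sort by the @Formula-backed "open" property rather than the "openCount" alias.
Page<MyObjs> page = myObjsRepository.findMyObjs(
        PageRequest.of(0, 20, Sort.by(Sort.Direction.ASC, "open")));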
I am using QueryDSL within a Spring Boot, Spring Data JPA project.
I have the following schema for a table called test:
| id | key | value |
|----|------|-------|
| 1 | test | hello |
| 1 | test | world |
| 2 | test | hello |
| 2 | foo | bar |
| 3 | test | hello |
| 3 | test | world |
Now I want to write the following SQL in QueryDSL:
select id from test where key = 'test' and value = 'hello'
INTERSECT
select id from test where key = 'test' and value = 'world'
This would give me all ids where key is 'test' and the values include both 'hello' and 'world'.
I did not find any way of declaring this kind of SQL in QueryDSL yet. I am able to write the two select statements but then I am stuck at combining them with an INTERSECT.
JPAQueryFactory queryFactory = new JPAQueryFactory(em); // em is an EntityManager
QTestEntity qTestEntity = QTestEntity.testEntity;
var q1 = queryFactory.query().from(qTestEntity).select(qTestEntity.id)
        .where(qTestEntity.key.eq("test").and(qTestEntity.value.eq("hello")));
var q2 = queryFactory.query().from(qTestEntity).select(qTestEntity.id)
        .where(qTestEntity.key.eq("test").and(qTestEntity.value.eq("world")));
In the end I want to retrieve a list of ids which match the given query. In general the number of intersects can be around 20 or 30, depending on the number of key/value pairs I want to search for.
Does anyone know a way to do something like this with QueryDSL?
EDIT:
Assume the following schema now, with two tables: test and 'user':
test:
| userId | key | value |
|---------|------|-------|
| 1 | test | hello |
| 1 | test | world |
| 2 | test | hello |
| 2 | foo | bar |
| 3 | test | hello |
| 3 | test | world |
user:
| id | name |
|----|----------|
| 1 | John |
| 2 | Anna |
| 3 | Felicita |
The corresponding Java classes look like this. TestEntity has a composite key consisting of all of its properties.
@Entity
public class TestEntity {
    @Id
    @Column(name = "userId", nullable = false)
    private String pubmedId;

    @Id
    @Column(name = "value", nullable = false)
    private String value;

    @Id
    @Column(name = "key", nullable = false)
    private String key;
}
@Entity
class User {
    @Id
    private int id;
    private String name;

    @ElementCollection
    private Set<TestEntity> keyValues;
}
How can I map the test table to the keyValues property within the User class?
Your TestEntity is not really an Entity, since its id is not a primary key; it's the foreign key to the user table.
If it's only identifiable by using all its properties, it's an @Embeddable, and doesn't have any @Id properties.
You can map a collection of Embeddables as an @ElementCollection part of another entity which has the id as primary key. The id column in your case is not a property of the Embeddable, it's just the foreign key to the main table, so you map it as a @JoinColumn:
@Embeddable
public class TestEmbeddable {
    @Column(name = "value", nullable = false)
    private String value;

    @Column(name = "key", nullable = false)
    private String key;
}

@Entity
class User {
    @Id
    private int id;

    @ElementCollection
    @CollectionTable(
        name = "test",
        joinColumns = @JoinColumn(name = "id")
    )
    private Set<TestEmbeddable> keyValues;
}
In this case, the QueryDSL becomes something like this (I don't know the exact API):
user.keyValues.any().in(new TestEmbeddable("test", "hello"))
    .and(user.keyValues.any().in(new TestEmbeddable("test", "world")))
In this case I'd probably just use an OR expression:
queryFactory
    .query()
    .from(qTestEntity)
    .select(qTestEntity.id)
    .where(qTestEntity.key.eq("test").and(
        qTestEntity.value.eq("hello")
            .or(qTestEntity.value.eq("world"))));
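If the goal is the ids that have both values rather than either one, a hedged variation on the OR approach is to group by id and require two distinct matching values. This is only a sketch using the field names from the question's snippet; the id type is an assumption:
import java.util.List;

// Keep only the ids that matched both of the requested values.
List<Integer> ids = queryFactory
        .select(qTestEntity.id)
        .from(qTestEntity)
        .where(qTestEntity.key.eq("test")
                .and(qTestEntity.value.eq("hello").or(qTestEntity.value.eq("world"))))
        .groupBy(qTestEntity.id)
        .having(qTestEntity.value.countDistinct().eq(2L))
        .fetch();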
However, you specifically mention wanting to use a set operation. By the way, I think you want to perform a UNION operation instead of an INTERSECT operation, because the latter would be empty with the example given.
JPA doesn't support set operations such as those defined in ANSI SQL. However, Blaze-Persistence is an extension that integrates with most JPA implementations and extends JPQL with set operations. I have recently written a QueryDSL extension for Blaze-Persistence. Using that extension, you can do:
List<Document> documents = new BlazeJPAQuery<Document>(entityManager, cbf)
.union(
select(document).from(document).where(document.id.eq(41L)),
select(document).from(document).where(document.id.eq(42L))
).fetch();
For more information about the integration and how to set it up, the documentation is available at https://persistence.blazebit.com/documentation/1.5/core/manual/en_US/index.html#querydsl-integration
I'm using modelmapper-jooq to map jOOQ records to custom POJOs. Let's assume I have a table like
   | name | second_name | surname
---+------+-------------+---------
 1 | Mary | Jane        | McLeod
 2 | John | Henry       | Newman
 3 | Paul |             | Signac
 4 | Anna |             | Pavlova
so the second_name can be null. My Person POJO looks like:
public class Person {
    private String name;
    private String secondName;
    private String surname;
    // assume getters and setters
}
When I map Result<Record> into Collection<Person>, every element in this collection has secondName equal to null. When I map only the first two rows, everything is OK. How can I handle this properly, so that the secondName field is null only when the corresponding field in the database is null? I've checked that the fields in the Record instances have proper values. I configure ModelMapper this way:
ModelMapper modelMapper = new ModelMapper();
modelMapper.getConfiguration().addValueReader(new RecordValueReader());
modelMapper.getConfiguration().setSourceNameTokenizer(NameTokenizers.UNDERSCORE);
Also, I'm doing the mapping like this:
//...
private final Type collectionPersonType = new TypeToken<Collection<Person>>() {}.getType();
//...
Result<Record> result = query.fetch();
return modelMapper.map(result, collectionPersonType);
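For what it's worth, a hedged diagnostic sketch (an assumption, not a confirmed fix): mapping each Record individually instead of the whole Result can help narrow down whether the collection-level mapping is what loses secondName. Result<Record> implements List<Record>, so it can be streamed:
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical: map record by record with the same ModelMapper configuration.
List<Person> people = result.stream()
        .map(r -> modelMapper.map(r, Person.class))
        .collect(Collectors.toList());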
We have a collection 'message' with the following fields:
_id | messageId | chainId | createOn
 1  | 1         | A       | 155
 2  | 2         | A       | 185
 3  | 3         | A       | 225
 4  | 4         | B       | 226
 5  | 5         | C       | 228
 6  | 6         | B       | 300
We want to select all fields of the documents with the following criteria:
distinct by field 'chainId'
order(sort) by 'createdOn' in desc order
So, the expected result is:
_id | messageId | chainId | createOn
 3  | 3         | A       | 225
 5  | 5         | C       | 228
 6  | 6         | B       | 300
We are using Spring Data in our Java application. I tried different approaches; nothing has helped me so far.
Is it possible to achieve above with single query?
What you want is something that can be achieved with the aggregation framework. The basic form (which is useful to others) is:
db.collection.aggregate([
// Group by the grouping key, but keep the valid values
{ "$group": {
"_id": "$chainId",
"docId": { "$first": "$_id" },
"messageId": { "$first": "$messageId" },
"createOn": { "$first": "$createdOn" }
}},
// Then sort
{ "$sort": { "createOn": -1 } }
])
So that "groups" on the distinct values of "messageId" while taking the $first boundary values for each of the other fields. Alternately if you want the largest then use $last instead, but for either smallest or largest by row it probably makes sense to $sort first, otherwise just use $min and $max if the whole row is not important.
See the MongoDB aggregate() documentation for more information on usage, as well as the driver JavaDocs and SpringData Mongo connector documentation for more usage of the aggregate method and possible helpers.
Here is the solution using the MongoDB Java Driver:
final MongoClient mongoClient = new MongoClient();
final DB db = mongoClient.getDB("mstreettest");
final DBCollection collection = db.getCollection("message");
final BasicDBObject groupFields = new BasicDBObject("_id", "$chainId");
groupFields.put("docId", new BasicDBObject("$first", "$_id"));
groupFields.put("messageId", new BasicDBObject("$first", "$messageId"));
groupFields.put("createOn", new BasicDBObject("$first", "$createdOn"));
final DBObject group = new BasicDBObject("$group", groupFields);
final DBObject sortFields = new BasicDBObject("createOn", -1);
final DBObject sort = new BasicDBObject("$sort", sortFields);
final DBObject projectFields = new BasicDBObject("_id", 0);
projectFields.put("_id", "$docId");
projectFields.put("messageId", "$messageId");
projectFields.put("chainId", "$_id");
projectFields.put("createOn", "$createOn");
final DBObject project = new BasicDBObject("$project", projectFields);
final AggregationOutput aggregate = collection.aggregate(group, sort, project);
and the result will be:
{ "_id" : 5 , "messageId" : 5 , "createOn" : { "$date" : "2014-04-23T04:45:45.173Z"} , "chainId" : "C"}
{ "_id" : 4 , "messageId" : 4 , "createOn" : { "$date" : "2014-04-23T04:12:25.173Z"} , "chainId" : "B"}
{ "_id" : 1 , "messageId" : 1 , "createOn" : { "$date" : "2014-04-22T08:29:05.173Z"} , "chainId" : "A"}
I tried it with Spring Data Mongo and it didn't work when I grouped it by chainId (java.lang.NumberFormatException: For input string: "C" was the exception).
Replace this line:
final DBObject group = new BasicDBObject("$group", groupFields);
with this one:
final DBObject group = new BasicDBObject("_id", groupFields);
Here is the solution using springframework.data.mongodb:
Aggregation aggregation = Aggregation.newAggregation(
    Aggregation.group("chainId"),
    Aggregation.sort(new Sort(Sort.Direction.ASC, "createdOn"))
);
AggregationResults<XxxBean> results = mongoTemplate.aggregate(aggregation, "collection_name", XxxBean.class);
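For reference, a hedged sketch (untested) of how the Spring Data pipeline might also keep the other fields and apply the descending order, using first() projections; the field and bean names follow the snippets above and are otherwise assumptions:
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;

// Sort first so $first picks the newest document per chainId, then sort the groups.
Aggregation aggregation = Aggregation.newAggregation(
        Aggregation.sort(Sort.Direction.DESC, "createdOn"),
        Aggregation.group("chainId")
                .first("_id").as("docId")
                .first("messageId").as("messageId")
                .first("createdOn").as("createOn"),
        Aggregation.sort(Sort.Direction.DESC, "createOn")
);
AggregationResults<XxxBean> results =
        mongoTemplate.aggregate(aggregation, "message", XxxBean.class);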