ORMLite: foreignAutoCreate insert if not exists - Java

I noticed that foreignAutoCreate crashes when the related data already exists, throwing something like this:
E/SQLiteLog﹕ (2067) abort at 20 in [INSERT INTO `Group` (... etc,`id` ) VALUES (?,?,?)]:
UNIQUE constraint failed: Group.id
But I have a list, for example:
List<User> lstUsers = //values
I'm inserting the values with a for loop using createOrUpdate:
for(...) {
dao.createOrUpdate(user);
}
And User has related data with Group, for example:
@DatabaseField(canBeNull = true, foreign = true, foreignAutoCreate = true,
        foreignAutoRefresh = true)
private Group group;
When a Group id value is repeated, the operation fails:
lstUsers.get(0).getGroup().getId(); // group id = 1 <-- foreign insert
lstUsers.get(1).getGroup().getId(); // group id = 1 <-- crash
lstUsers.get(2).getGroup().getId(); // group id = 1 <-- crashed already
lstUsers.get(3).getGroup().getId(); // group id = 2 <-- crashed already
... etc.
I need foreignAutoCreate to insert a group automatically and only once, skipping groups that are repeated, without doing it manually:
lstUsers.get(0).getGroup().getId(); // group id = 1 <-- foreign insert
lstUsers.get(1).getGroup().getId(); // group id = 1 <-- foreign exists, skip
lstUsers.get(2).getGroup().getId(); // group id = 1 <-- foreign exists, skip
lstUsers.get(3).getGroup().getId(); // group id = 2 <-- foreign insert
Is there a way to do this?
UPDATE 1:
Please try with this test:
public void poblatingUsersAndGroupsList() {
    List<User> lstUsers = new ArrayList<>();
    Group group1 = new Group();
    // this group doesn't exist in the database
    group1.setId(1); // should be inserted by foreignAutoCreate
    lstUsers.add(new User("user1", group1));
    lstUsers.add(new User("user2", group1));
    lstUsers.add(new User("user3", group1));
    Group group2 = new Group();
    // this group doesn't exist in the database either
    group2.setId(2); // should be inserted by foreignAutoCreate
    lstUsers.add(new User("user4", group1));
    lstUsers.add(new User("user5", group2));
    lstUsers.add(new User("user6", group2));
    createUsers(lstUsers);
}
public void createUsers(List<User> lstUsers) {
    for (User user : lstUsers) {
        // here is the error:
        // group1 is inserted the 1st time;
        // the 2nd, 3rd, ..., nth times throw the error
        // (same for group2)
        dao.createOrUpdate(user);
    }
}
foreignAutoCreate should work like the following code, so that we can avoid this block:
public void createUsers(List<User> lstUsers) {
    for (User user : lstUsers) {
        // (unnecessary) calling or instantiating the groupDao
        // (unnecessary) checking whether the group already exists
        groupDao.createIfNotExists(user.getGroup());
        dao.createOrUpdate(user);
    }
}

This is an old question and I assume you've moved on. I was not able to reproduce this, however. I've expanded the test cases with multiple inserts using dao.createOrUpdate(...). See the ForeignObjectTest unit test code.
One thing that I wonder about: when you are creating a User with an associated Group, the Group must have been created already so that it has an id. Is that possibly the problem?
Group group = new Group();
// need to do this first so the group gets an id
groupDao.create(group);
User user = new User();
user.setGroup(group);
for(...) {
dao.createOrUpdate(user);
}
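If the Group ids are assigned by the application (as in the test above), a workaround is to de-duplicate the foreign objects yourself, building on the createIfNotExists(...) call from the question. A minimal sketch, assuming a groupDao for Group is available:
// Sketch: insert each distinct Group at most once, then persist the users.
// Dao.createIfNotExists(...) returns the existing row instead of inserting
// a duplicate, so repeated group ids no longer hit the UNIQUE constraint.
Set<Integer> seenGroupIds = new HashSet<>();
for (User user : lstUsers) {
    Group group = user.getGroup();
    if (group != null && seenGroupIds.add(group.getId())) {
        groupDao.createIfNotExists(group);
    }
    dao.createOrUpdate(user);
}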

Related

How to generate arbitrary subqueries/joins in a Jooq query

Situation: I am porting our application to jOOQ to eliminate several n+1 problems and to ensure custom queries are type-safe (the DB server is PostgreSQL 13). In my example we have documents (ID, file name, file size). Each document can have several unique document attributes (document ID as FK, archive attribute ID identifying the type of the attribute, and the value). Example data:
Document:
acme=> select id, file_name, file_size from document;
                  id                  |        file_name        | file_size
--------------------------------------+-------------------------+-----------
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | My Really cool book.pdf |     13264
(1 row)
Document Attributes:
acme=> select * from document_attribute;
             document_id              |         archive_attribute_id         |      value
--------------------------------------+--------------------------------------+------------------
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | b334e287-887f-4173-956d-c068edc881f8 | JustReleased
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | 2f86a675-4cb2-4609-8e77-c2063ab155f1 | Tax
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | 30bb9696-fc18-4c87-b6bd-5e01497ca431 | ShippingRequired
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | 2eb04674-1dcb-4fbc-93c3-73491deb7de2 | Bestseller
 1ae56478-d27c-4b68-b6c0-a8bdf36dd341 | a8e2f902-bf04-42e8-8ac9-94cdbf4b6778 | Paperback
(5 rows)
One can search for these documents and their attributes via a custom-created JDBC prepared statement. A user was able to create this query for a document ID and two document attributes with matching values, which returned the book 'My Really cool book.pdf':
SELECT d.id FROM document d WHERE d.id = '1ae56478-d27c-4b68-b6c0-a8bdf36dd341'
AND d.id IN(SELECT da.document_id AS id0 FROM document_attribute da WHERE da.archive_attribute_id = '2eb04674-1dcb-4fbc-93c3-73491deb7de2' AND da.value = 'Bestseller')
AND d.id IN(SELECT da.document_id AS id1 FROM document_attribute da WHERE da.archive_attribute_id = 'a8e2f902-bf04-42e8-8ac9-94cdbf4b6778' AND da.value = 'Paperback');
(After that, the application fetches all document attributes for the returned document IDs; this is the n+1 problem we want to solve.)
Please note that all document values and document attributes are optional. One can search for just the file name or file size of a document, but also for several document attributes.
Question/Problems:
I wanted to port this code to jOOQ and use a multiset, but I am struggling with how to apply the arbitrary subquery or join conditions for the document attributes:
1.) How can I achieve this arbitrary adding of subqueries?
2.) Is an INNER JOIN more performant than a subquery?
Code:
import org.jooq.Condition;
import org.jooq.impl.DSL;
import org.junit.jupiter.api.Test;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import static org.jooq.impl.DSL.multiset;
import static org.jooq.impl.DSL.selectDistinct;
public class InSelectExample extends BaseTest {
private record CustomDocumentAttribute(
UUID documentId, // ID of the document the attribute belongs to
UUID archiveAttributeId, // There are predefined attribute types in our application. This ID references them
String value // Real value of this attribute for the document
) {
}
private record CustomDocument(
UUID documentId, // ID of the document
String fileName, // File name of the document
Integer fileSize, // File size in bytes of the document
List<CustomDocumentAttribute> attributes // Attributes the document has
) {
}
@Test
public void findPdfDocumentsWithParameters() {
// Should print the single book
List<CustomDocument> documents = searchDocuments(UUID.fromString("1ae56478-d27c-4b68-b6c0-a8bdf36dd341"), "My Really cool book.pdf", 13264, Map.of(
UUID.fromString("2eb04674-1dcb-4fbc-93c3-73491deb7de2"), "Bestseller",
UUID.fromString("a8e2f902-bf04-42e8-8ac9-94cdbf4b6778"), "Paperback"
));
System.out.println("Size: " + documents.size()); // Should return 1 document
// Should print no books because one of the document attribute values doesn't match (Booklet instead of Paperback)
documents = searchDocuments(UUID.fromString("1ae56478-d27c-4b68-b6c0-a8bdf36dd341"), "My Really cool book.pdf", 13264, Map.of(
UUID.fromString("2eb04674-1dcb-4fbc-93c3-73491deb7de2"), "Bestseller",
UUID.fromString("a8e2f902-bf04-42e8-8ac9-94cdbf4b6778"), "Booklet"
));
System.out.println("Size: " + documents.size()); // Should return 0 documents
}
private List<CustomDocument> searchDocuments(UUID documentId, String fileName, Integer fileSize, Map<UUID, String> attributes) {
// Get the transaction manager
TransactionManager transactionManager = getBean(TransactionManager.class);
// Get the initial condition
Condition condition = DSL.noCondition();
// Check for an optional document ID
if (documentId != null) {
condition = condition.and(DOCUMENT.ID.eq(documentId));
}
// Check for an optional file name
if (fileName != null) {
condition = condition.and(DOCUMENT.FILE_NAME.eq(fileName));
}
// Check for an optional file size
if (fileSize != null) {
condition = condition.and(DOCUMENT.FILE_SIZE.eq(fileSize));
}
// Create the query
var step1 = transactionManager.getDslContext().select(
DOCUMENT.ID,
DOCUMENT.FILE_NAME,
DOCUMENT.FILE_SIZE,
multiset(
selectDistinct(
DOCUMENT_ATTRIBUTE.DOCUMENT_ID,
DOCUMENT_ATTRIBUTE.ARCHIVE_ATTRIBUTE_ID,
DOCUMENT_ATTRIBUTE.VALUE
).from(DOCUMENT_ATTRIBUTE).where(DOCUMENT_ATTRIBUTE.DOCUMENT_ID.eq(DOCUMENT.ID))
).convertFrom(record -> record.map(record1 -> new CustomDocumentAttribute(record1.value1(), record1.value2(), record1.value3())))
).from(DOCUMENT
).where(condition);
// TODO: What to do here?
var step3 = ...? What type?
for (Map.Entry<UUID, String> attributeEntry : attributes.entrySet()) {
// ???
// Reference: AND d.id IN(SELECT da.document_id AS id0 FROM document_attribute da WHERE da.archive_attribute_id = ? AND da.value = ?)
var step2 = step1.and(...??????)
}
// Finally fetch and return
return step1.fetch(record -> new CustomDocument(record.value1(), record.value2(), record.value3(), record.value4()));
}
}
Regarding your questions
1.) How can I achieve this arbitrary adding of subqueries?
You already found a solution to that question in your own answer, though I'll suggest an alternative that I personally prefer. Your approach creates N subqueries, hitting your table N times.
2.) Is an INNER JOIN more performant than a subquery?
There's no general rule for this. It's all just relational algebra. If the optimiser can prove two expressions are the same thing, they can be transformed into each other. However, an INNER JOIN is not the exact same thing as a semi join, i.e. an IN predicate (although sometimes it is, in the presence of constraints). So the two operators aren't exactly equivalent, logically.
An alternative approach
Your own approach maps the Map<UUID, String> to subqueries, hitting DOCUMENT_ATTRIBUTE N times. I'm guessing that the PG optimiser might not be able to see through this and factor out the common parts into a single subquery (though technically, it could). So I'd rather create a single subquery of the form:
WHERE document.id IN (
SELECT a.document_id
FROM document_attribute AS a
WHERE (a.archive_attribute_id, a.value) IN (
(?, ?),
(?, ?), ...
)
)
Or, dynamically, with jOOQ:
DOCUMENT.ID.in(
select(DOCUMENT_ATTRIBUTE.DOCUMENT_ID)
.from(DOCUMENT_ATTRIBUTE)
.where(row(DOCUMENT_ATTRIBUTE.ARCHIVE_ATTRIBUTE_ID, DOCUMENT_ATTRIBUTE.VALUE).in(
attributes.entrySet().stream().collect(Rows.toRowList(
Entry::getKey,
Entry::getValue
))
))
)
Using org.jooq.Rows::toRowList collectors.
Note: I don't think you have to further correlate the IN predicate's subquery by specifying a DOCUMENT_ATTRIBUTE.DOCUMENT_ID.eq(DOCUMENT.ID) predicate. That correlation is already implied by using IN itself.
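For illustration, here is how that single subquery could slot into the dynamic condition chain from the question. A sketch, assuming the same generated tables, static imports of DSL.select and DSL.row, and the org.jooq.Rows collectors available in recent jOOQ versions:
// One IN predicate backed by a single subquery over DOCUMENT_ATTRIBUTE,
// instead of one subquery per attribute entry.
if (attributes != null && !attributes.isEmpty()) {
    condition = condition.and(DOCUMENT.ID.in(
            select(DOCUMENT_ATTRIBUTE.DOCUMENT_ID)
                    .from(DOCUMENT_ATTRIBUTE)
                    .where(row(DOCUMENT_ATTRIBUTE.ARCHIVE_ATTRIBUTE_ID, DOCUMENT_ATTRIBUTE.VALUE)
                            .in(attributes.entrySet().stream()
                                    .collect(Rows.toRowList(Map.Entry::getKey, Map.Entry::getValue))))));
}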
After reading another question, jOOQ - join with nested subquery (and not immediately realizing the solution), and playing around with generating Java code via https://www.jooq.org/translate/, it clicked. In combination with reading https://www.jooq.org/doc/latest/manual/sql-building/column-expressions/scalar-subqueries/, one can simply add the subquery as an IN() condition before executing the query. To be honest, I am not sure whether this is the most performant solution. The searchDocuments method then looks like this:
private List<CustomDocument> searchDocuments(UUID documentId, String fileName, Integer fileSize, Map<UUID, String> attributes) {
// Get the transaction manager
TransactionManager transactionManager = getBean(TransactionManager.class);
// Get the initial condition
Condition condition = DSL.noCondition();
// Check for an optional document ID
if (documentId != null) {
condition = condition.and(DOCUMENT.ID.eq(documentId));
}
// Check for an optional file name
if (fileName != null) {
condition = condition.and(DOCUMENT.FILE_NAME.eq(fileName));
}
// Check for an optional file size
if (fileSize != null) {
condition = condition.and(DOCUMENT.FILE_SIZE.eq(fileSize));
}
// Check for optional document attributes
if (attributes != null && !attributes.isEmpty()) {
for (Map.Entry<UUID, String> entry : attributes.entrySet()) {
condition = condition.and(DOCUMENT.ID.in(
        select(DOCUMENT_ATTRIBUTE.DOCUMENT_ID)
                .from(DOCUMENT_ATTRIBUTE)
                .where(DOCUMENT_ATTRIBUTE.DOCUMENT_ID.eq(DOCUMENT.ID)
                        .and(DOCUMENT_ATTRIBUTE.ARCHIVE_ATTRIBUTE_ID.eq(entry.getKey())
                                .and(DOCUMENT_ATTRIBUTE.VALUE.eq(entry.getValue()))))));
}
}
// Create the query
return transactionManager.getDslContext().select(
DOCUMENT.ID,
DOCUMENT.FILE_NAME,
DOCUMENT.FILE_SIZE,
multiset(
selectDistinct(
DOCUMENT_ATTRIBUTE.DOCUMENT_ID,
DOCUMENT_ATTRIBUTE.ARCHIVE_ATTRIBUTE_ID,
DOCUMENT_ATTRIBUTE.VALUE
).from(DOCUMENT_ATTRIBUTE).where(DOCUMENT_ATTRIBUTE.DOCUMENT_ID.eq(DOCUMENT.ID))
).convertFrom(record -> record.map(record1 -> new CustomDocumentAttribute(record1.value1(), record1.value2(), record1.value3())))
).from(DOCUMENT
).where(condition
).fetch(record -> new CustomDocument(record.value1(), record.value2(), record.value3(), record.value4()));
}

Java EE + Oracle: how to use a GTT (global temporary table) to avoid a possibly long (1000+) IN clause?

I have a query for an Oracle database, built with Hibernate's CriteriaBuilder. It has an IN clause which already takes about 800+ params.
The team says this may surpass 1000, hitting the hard upper limit of Oracle itself, which only allows 1000 parameters in an IN clause. We need to optimize that.
select ih from ItemHistory as ih
where ih.number=:param0
and
ih.companyId in (
select c.id from Company as c
where (
( c.organizationId in (:param1) )
or
( c.organizationId like :param2 )
) and (
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param818)
)
)
order by ih.eventDate desc
So, two solutions I can think of:
The easy one: right now the list from :param3 to :param818 is below 1000, but in the future we may hit 1000, so we can split the list when its size > 1000 into another IN clause, so it becomes:
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param1002) or c.organizationId in (:param1003, ...)
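For reference, a minimal sketch of solution 1 with the JPA Criteria API (hand-rolled chunking; a chunk size of 999 keeps each expression list under Oracle's 1000-element limit):
// Build "column IN (chunk1) OR column IN (chunk2) OR ..." so that no
// single expression list exceeds Oracle's 1000-element limit.
private Predicate inChunks(CriteriaBuilder builder, Path<String> column,
                           List<String> values) {
    List<Predicate> chunks = new ArrayList<>();
    for (int i = 0; i < values.size(); i += 999) {
        chunks.add(column.in(values.subList(i, Math.min(i + 999, values.size()))));
    }
    return builder.or(chunks.toArray(new Predicate[0]));
}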
Neither the original code nor solution 1 is very efficient. Although the query can fetch 40K records in 25 seconds, we should use a GTT (Global Temporary Table), according to what I can find from professional DBAs on AskTom and other sites. But I can only find SQL examples, not Java code.
What I can imagine is:
createNativeQuery("create global temporary table GTT_COMPANIES if not exist (companyId varchar(32)) ON COMMIT DELETE ROWS;"); and execute(Do we need index here?)
createNativeQuery("insert into GTT_COMPANIES (list)"); query.bind("1", query.getCompanyIds()); and execute(can we bind a list and insert it?)
use CriteriaQuery to select from this table(but I doubt, as CriteriaQueryBuilder will require type safe meta model class to be generated beforehand, and here we don't have the entity; this is an ad-hoc table and no model entity is mapped to it)
and, do we need to create GTT even the list size is < 1000? As often it is big, 700~800.
So, any suggestions? Does someone have a working example of Hibernate CriteriaQuery + Oracle GTT? (A rough sketch of the imagined flow follows the method listing below.)
The whole method is like this:
public List<ItemHistory> findByIdTypePermissionAndOrganizationIds(final Query<String> query, final ItemIdType idType) throws DataLookupException {
String id = query.getObjectId();
String type = idType.name();
Set<String> companyIds = query.getCompanyIds();
Set<String> allowedOrgIds = query.getAllowedOrganizationIds();
Set<String> excludedOrgIds = query.getExcludedOrganizationIds();
// if no orgs are allowed, we should return empty list
if (CollectionUtils.isEmpty(allowedOrgIds)) {
return Collections.emptyList();
}
try {
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<ItemHistory> criteriaQuery = builder.createQuery(ItemHistory.class);
Subquery<String> subQueryCompanyIds = criteriaQuery.subquery(String.class);
Root<Company> companies = subQueryCompanyIds.from(Company.class);
companies.alias(COMPANY_ALIAS);
Path<String> orgIdColumn = companies.get(Company_.organizationId);
/* 1. get permission based restrictions */
// select COMPANY_ID where (ORG_ID in ... or like ...) and (ORG_ID not in ... and not like ...)
// actually query.getExcludedOrganizationIds() can also be very long list(1000+), but let's do it later
Predicate permissionPredicate = getCompanyIdRangeByPermission(
builder, query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds(), orgIdColumn
);
/* 2. get org id based restrictions, which was done on top of permission restrictions */
// ... and where (ORG_ID in ... or like ...)
// process companyIds with and without "*" by adding different predicates, like (xxx%, yyy%) vs in (xxx, yyy)
// here, query.getCompanyIds() could be very long, may be 1000+
Predicate orgIdPredicate = groupByWildcardsAndCombine(builder, query.getCompanyIds(), orgIdColumn, false);
/* 3. Join two predicates with AND, because originally filtering is done twice, 2nd is done on basis of 1st */
Predicate subqueryWhere = CriteriaQueryUtils.joinWith(builder, true, permissionPredicate, orgIdPredicate); // join predicates with AND
subQueryCompanyIds.select(companies.get(Company_.id)); // id -> COMPANY_ID
if (subqueryWhere != null) {
subQueryCompanyIds.where(subqueryWhere);
} else {
LOGGER.warn("Cannot build subquery of org id and permission. " +
"Org ids: {}, allowed companies: {}, excluded companies: {}",
query.getCompanyIds(), query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds());
}
Root<ItemHistory> itemHistory = criteriaQuery.from(ItemHistory.class);
itemHistory.alias(ITEM_HISTORY_ALIAS);
criteriaQuery.select(itemHistory)
.where(builder.and(
builder.equal(getColumnByIdType(itemHistory, idType), id),
builder.in(itemHistory.get(ItemHistory_.companyId)).value(subQueryCompanyIds)
))
.orderBy(builder.desc(itemHistory.get(ItemHistory_.eventDate)));
TypedQuery<ItemHistory> finalQuery = entityManager.createQuery(criteriaQuery);
LOGGER.trace(LOG_MESSAGE_FINAL_QUERY, finalQuery.unwrap(org.hibernate.Query.class).getQueryString());
return finalQuery.setMaxResults(MAX_LIST_FETCH_SIZE).getResultList();
} catch (NoResultException e) {
LOGGER.info("No item history events found by permission and org ids with {}={}", type, id);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_NO_RESULT);
} catch (Exception e) {
LOGGER.error("Error when fetching item history events by permission and org ids with {}={}", type, id, e);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_ERROR,
"Error when fetching item history events by permission and org ids with " + type + "=" + id);
}
}
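For reference, a rough sketch of the imagined GTT flow with plain native queries. The table and column names are hypothetical; it assumes the GTT is created once as schema DDL (by a DBA or migration), and that the inserts and the query run in the same transaction, since ON COMMIT DELETE ROWS clears the rows at commit:
// One-time DDL (migration script, not application code):
//   CREATE GLOBAL TEMPORARY TABLE GTT_COMPANIES (COMPANY_ID VARCHAR2(32))
//     ON COMMIT DELETE ROWS;

// Per request, inside a single transaction:
for (String companyId : companyIds) {
    entityManager.createNativeQuery(
            "INSERT INTO GTT_COMPANIES (COMPANY_ID) VALUES (?)")
            .setParameter(1, companyId)
            .executeUpdate();
}
// Semi-join against the GTT instead of binding a long IN list:
@SuppressWarnings("unchecked")
List<ItemHistory> events = entityManager.createNativeQuery(
        "SELECT IH.* FROM ITEM_HISTORY IH"
                + " WHERE IH.COMPANY_ID IN (SELECT COMPANY_ID FROM GTT_COMPANIES)"
                + " ORDER BY IH.EVENT_DATE DESC",
        ItemHistory.class)
        .getResultList();
// The GTT empties itself at commit, so no manual cleanup is needed.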

How to decide when to update, delete and insert

I am using Spring Data JPA to create services. I have to do insert, update and delete operations behind one save button. For save and update I am using the repository's save method in my code. To decide whether to update or insert, I am checking the count of records.
If I send one record, I am able to do the save and update operations successfully.
But my problem is that when I send two records which are already present in the DB, they need to go for update; since I am only checking the count of records, the code goes for save instead of update.
Can anyone tell me what additional condition I need to check so it goes for update? Or
tell me another way to decide when to go for update, when to go for insert and when to go for delete.
RoomInvestigatorMappingService class
public String updatePiDetails(List<PiDetails> roomInvestMapping) {
List<RoomInvestigatorMapping> currentRecord = new ArrayList<RoomInvestigatorMapping>();
for (PiDetails inputRecorObj : roomInvestMapping) {
currentRecord = roomInvestigatorMappingRepo.findByNRoomAllocationId(inputRecorObj.getnRoomAllocationId());
}
int currentRecordCount = currentRecord.size();
int inputRecordCount = roomInvestMapping.size();
// update existing record
if (inputRecordCount == currentRecordCount) {
for (PiDetails inputObject : roomInvestMapping) {
for (RoomInvestigatorMapping currentRecordObj : currentRecord) {
currentRecordObj.nInvestigatorId = inputObject.getnInvestigatorId();
currentRecordObj.nPercentageAssigned = inputObject.getnPercentageAssigned();
currentRecordObj.nRoomAllocationId = inputObject.getnRoomAllocationId();
roomInvestigatorMappingRepo.saveAll(currentRecord);
}
}
}
//insert new record
if (inputRecordCount > currentRecordCount) {
for (PiDetails inputObject : roomInvestMapping) {
RoomInvestigatorMapping investObj = new RoomInvestigatorMapping();
investObj.nInvestigatorId = inputObject.getnInvestigatorId();
investObj.nRoomAllocationId = inputObject.getnRoomAllocationId();
investObj.nPercentageAssigned = inputObject.getnPercentageAssigned();
roomInvestigatorMappingRepo.save(investObj);
}
}
return "sucessfully";
}
RoomInvestigatorMappingRepository interface
@Query("select roomInvestMapping from RoomInvestigatorMapping as roomInvestMapping where nRoomAllocationId=?1")
List<RoomInvestigatorMapping> findByNRoomAllocationId(Integer nRoomAllocationId);
JSON Input
[
{
"nInvestigatorId": 911294,
"nPercentageAssigned": 50,
"nRoomAllocationId": 1
},
{
"nInvestigatorId": 911294,
"nPercentageAssigned": 50,
"nRoomAllocationId": 2
}
]
Just use CrudRepository.existsById(ID id)
The documentation says:
Returns whether an entity with the given id exists.
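A minimal sketch of how that could look in the service above (assumptions: the repository extends CrudRepository, and nRoomAllocationId is the entity's id; adjust to the real id property):
// Decide per record instead of comparing counts.
for (PiDetails input : roomInvestMapping) {
    RoomInvestigatorMapping entity;
    if (roomInvestigatorMappingRepo.existsById(input.getnRoomAllocationId())) {
        // row exists -> load it and update its fields
        entity = roomInvestigatorMappingRepo.findById(input.getnRoomAllocationId()).get();
    } else {
        // no row yet -> create a new, transient entity
        entity = new RoomInvestigatorMapping();
        entity.nRoomAllocationId = input.getnRoomAllocationId();
    }
    entity.nInvestigatorId = input.getnInvestigatorId();
    entity.nPercentageAssigned = input.getnPercentageAssigned();
    roomInvestigatorMappingRepo.save(entity); // UPDATE if the id exists, INSERT otherwise
}
// For deletes: collect the ids present in the DB but missing from the
// input, then call deleteById(id) for each of them.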

Flink SQL: Repeating grouping keys in result of GROUP BY query

I want to do a simple query in Flink SQL on one table, including a GROUP BY statement. But the results contain duplicate rows for the column specified in the GROUP BY statement. Is that because I use a streaming environment and it doesn't remember state?
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
final StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
// configure Kafka consumer
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // Broker default host:port
props.setProperty("group.id", "flink-consumer"); // Consumer group ID
FlinkKafkaConsumer011<BlocksTransactions> flinkBlocksTransactionsConsumer = new FlinkKafkaConsumer011<>(args[0], new BlocksTransactionsSchema(), props);
flinkBlocksTransactionsConsumer.setStartFromEarliest();
DataStream<BlocksTransactions> blocksTransactions = env.addSource(flinkBlocksTransactionsConsumer);
tableEnv.registerDataStream("blocksTransactionsTable", blocksTransactions);
Table sqlResult
= tableEnv.sqlQuery(
"SELECT block_hash, count(tx_hash) " +
"FROM blocksTransactionsTable " +
"GROUP BY block_hash");
DataStream<Test> resultStream = tableEnv
.toRetractStream(sqlResult, Row.class)
.map(t -> {
Row r = t.f1;
String field2 = r.getField(0).toString();
long count = Long.valueOf(r.getField(1).toString());
return new Test(field2, count);
})
.returns(Test.class);
resultStream.print();
resultStream.addSink(new FlinkKafkaProducer011<>("localhost:9092", "TargetTopic", new TestSchema()));
env.execute();
I use the GROUP BY statement on the block_hash column, but I get the same block_hash several times. This is the result of print():
Test{field2='0x2c4a021d514e4f8f0beb8f0ce711652304928528487dc7811d06fa77c375b5e1', count=1}
Test{field2='0x2c4a021d514e4f8f0beb8f0ce711652304928528487dc7811d06fa77c375b5e1', count=1}
Test{field2='0x2c4a021d514e4f8f0beb8f0ce711652304928528487dc7811d06fa77c375b5e1', count=2}
Test{field2='0x780aadc08c294da46e174fa287172038bba7afacf2dff41fdf0f6def03906e60', count=1}
Test{field2='0x182d31bd491527e1e93c4e44686057207ee90c6a8428308a2bd7b6a4d2e10e53', count=1}
Test{field2='0x182d31bd491527e1e93c4e44686057207ee90c6a8428308a2bd7b6a4d2e10e53', count=1}
How can I fix this without using a BatchEnvironment?
A GROUP BY query that runs on a stream must produce updates. Consider the following example:
SELECT user, COUNT(*) FROM clicks GROUP BY user;
Every time the clicks table receives a new row, the count of the respective user needs to be incremented and updated.
When you convert a Table into a DataStream, these updates must be encoded in the stream. Flink uses retraction and add messages to do that. By calling tEnv.toRetractStream(table, Row.class), you convert the Table table into a DataStream<Tuple2<Boolean, Row>>. The Boolean flag is important and indicates whether the Row is added to or retracted from the result table.
Given the example query above and the input table clicks as
user | ...
------------
Bob | ...
Liz | ...
Bob | ...
You will receive the following retraction stream
(+, (Bob, 1)) // add first result for Bob
(+, (Liz, 1)) // add first result for Liz
(-, (Bob, 1)) // remove outdated result for Bob
(+, (Bob, 2)) // add updated result for Bob
You need to actively maintain the result yourself and add and remove rows as instructed by the Boolean flag of the retraction stream.
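A minimal conceptual sketch of that bookkeeping, applied to the question's block_hash query (plain Java over the Tuple2<Boolean, Row> messages; the surrounding wiring is assumed):
// flag == true:  this row was added to the result table
// flag == false: this (previous) row was retracted and is now outdated
Map<String, Long> countsByBlockHash = new HashMap<>();

void onMessage(Tuple2<Boolean, Row> message) {
    String blockHash = message.f1.getField(0).toString();
    Long count = Long.valueOf(message.f1.getField(1).toString());
    if (message.f0) {
        countsByBlockHash.put(blockHash, count);    // add / updated result
    } else {
        countsByBlockHash.remove(blockHash, count); // drop the outdated result
    }
}
// After processing the example stream above, the map holds the final
// counts: {Bob=2, Liz=1}.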

how to display a list based on a foreign key groovy grails

I have two tables: Region and District, where region_id is the foreign key in the District table (a region has one or many districts). So when I select a region in my list, I only want to display the districts associated with that particular region.
My current code displays all the districts, independently of region:
def list = {
params.max = Math.min(params.max? params.int('max') : 20, 100)
[districtInstanceList : District.list(params),
districtInstanceTotal: District.count()]
}
Does someone know how to display only the districts matching the foreign key constraint? I know I could write an SQL query in my list closure, but I suppose Grails probably has a way to do it.
My database is MySQL, and the Grails version is 2.0.1.
My District domain is:
class District {
def scaffold = true
String name
String description
String logo
String homepage
// defines the 1:n constraint with the Region table
static belongsTo = [region : Region]
// defines the 1:n constraint with the Stream table
static hasMany = [streams : Stream]
static constraints ={
name(blank:false, minSize:6, maxSize:30)
description(blank: false, maxSize:100)
}
public String toString(){
name
}
}
You can use a GORM dynamic finder:
def list = {
    params.max = Math.min(params.max ? params.int('max') : 20, 100)
    Region region = Region.get(params.id) // or whatever parameter you're using
    List districts = District.findAllByRegion(region, params)
    [districtInstanceList : districts,
     districtInstanceTotal: District.countByRegion(region)]
}
You can read about Grails GORM here: http://grails.org/doc/latest/guide/GORM.html
