Query aggregated values alongside database values with gql and django

I have taken over a project using gql and Django. It follows exactly the structure of this example: https://stackabuse.com/building-a-graphql-api-with-django/
Now I want to add extra functionality. On top of sending database values to the front end, I would also like to send aggregated ones as well, like the number of elements etc. I've added this to the Node as follows:
class DatabaseDataNode(DjangoObjectType):
    model = DatabaseAnnotationData
    print(DatabaseAnnotationData.objects.filter(database_id__exact=5).count())  # Prints correct value
    total = DatabaseAnnotationData.objects.filter(database_id__exact=5).count()

    class Meta:
        print('Hello world')
        filterset_fields = {
            'databaseid': ['exact'],
            'name': ['exact', 'icontains', 'istartswith'],
            'imagefile': ['exact', 'icontains', 'istartswith'],
            'annotations': ['exact'],
            'created_at': ['exact'],
            'modified_at': ['exact'],
            'slug': ['exact'],
            'total': ['exact']  # <-- Doesn't work
        }
        interfaces = (relay.Node, )
I also add it to the query:
const DATABASE = gql`
  query Database($name: String, $first: Int, $offset: Int) {
    database(databasename: $name) {
      databaseid
      id
      name
      total          # <-- Added
      modifiedAt
      images(first: $first, offset: $offset) {
        edges {
          node {
            imageid
            imagefile
            width
            height
            objectdataSet {
              edges {
                node {
                  x1
                  y1
                  width
                  height
                  concepts {
                    edges {
                      node {
                        conceptId
                      }
                    }
                  }
                  objectid
                  imageid {
                    imageid
                  }
                }
              }
            }
          }
        }
      }
    }
  }
`
But when I run it, I get an error saying: Cannot query field "total" on type "DatabaseDataNode".
How can I implement this, to send the other value as well?
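(For anyone who lands on the same problem: as far as I can tell, the usual graphene-django pattern is to declare the aggregate as an explicit field with a resolver on the node, not as an entry in filterset_fields, since filters can only reference real model fields. A minimal sketch, reusing the names from the snippet above; the database_id lookup is an assumption based on the filter shown there:

import graphene
from graphene import relay
from graphene_django import DjangoObjectType

class DatabaseDataNode(DjangoObjectType):
    # Non-model field: declared on the node itself, NOT in filterset_fields.
    total = graphene.Int()

    class Meta:
        model = DatabaseAnnotationData  # the Django model from the question
        filterset_fields = {
            'databaseid': ['exact'],
            'name': ['exact', 'icontains', 'istartswith'],
            # ...the other model fields, but no 'total' entry
        }
        interfaces = (relay.Node, )

    def resolve_total(self, info):
        # 'self' is the DatabaseAnnotationData instance being serialized,
        # so the count is computed per object instead of hard-coding id 5.
        return DatabaseAnnotationData.objects.filter(
            database_id__exact=self.database_id).count()

With that in place, total resolves per node and the "Cannot query field" error should go away; it just can't be filtered on, because it never exists as a database column.)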


Android Firebase Firestore real time update with pagination [duplicate]

I am working with Firestore right now and have a little bit of a problem with pagination.
Basically, I have a collection (assume 10 items) where each item has some data and a timestamp.
Now, I am fetching the first 3 items like this:
Firestore.firestore()
    .collection("collectionPath")
    .order(by: "timestamp", descending: true)
    .limit(to: 3)
    .addSnapshotListener(snapshotListener())
Inside my snapshot listener, I save the last document from the snapshot, in order to use that as a starting point for my next page.
So, at some time I will request the next page of items like this:
Firestore.firestore()
    .collection("collectionPath")
    .order(by: "timestamp", descending: true)
    .start(afterDocument: lastDocument)
    .limit(to: 3)
    .addSnapshotListener(snapshotListener2()) // Note that this is a new snapshot listener, I don't know how I could reuse the first one
Now I have the items from index 0 to index 5 (in total 6) in my frontend. Neat!
If the document at index 4 now updates its timestamp to the newest timestamp of the whole collection, things start to go wrong.
Remember that the timestamp determines its position because of the order clause!
What I expected was that, after the changes are applied, I would still show 6 items (still ordered by timestamp).
What actually happened was that only 5 items remain, because the item that got pushed out of the first snapshot is not added to the second snapshot automatically.
Am I missing something about Pagination with Firestore?
EDIT: As requested, I'm posting some more code here:
This is my function that returns a snapshot listener. The two methods I use to request the first and second pages are already posted above.
private func snapshotListener() -> FIRQuerySnapshotBlock {
    let index = self.index
    return { querySnapshot, error in
        guard let snap = querySnapshot, error == nil else {
            log.error(error)
            return
        }
        // Save the last doc, so we can later use pagination to retrieve further chats
        if snap.count == self.limit {
            self.lastDoc = snap.documents.last
        } else {
            self.lastDoc = nil
        }
        let offset = index * self.limit
        snap.documentChanges.forEach() { diff in
            switch diff.type {
            case .added:
                log.debug("added chat at index: \(diff.newIndex), offset: \(offset)")
                self.tVHandler.dataManager.insert(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
            case .removed:
                log.debug("deleted chat at index: \(diff.oldIndex), offset: \(offset)")
                self.tVHandler.dataManager.remove(itemAt: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
            case .modified:
                if diff.oldIndex == diff.newIndex {
                    log.debug("updated chat at index: \(diff.oldIndex), offset: \(offset)")
                    self.tVHandler.dataManager.update(item: Chat(dictionary: diff.document.data() as NSDictionary), at: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), in: nil)
                } else {
                    log.debug("moved chat at index: \(diff.oldIndex), offset: \(offset) to index: \(diff.newIndex), offset: \(offset)")
                    self.tVHandler.dataManager.move(item: Chat(dictionary: diff.document.data() as NSDictionary), from: IndexPath(row: Int(diff.oldIndex) + offset, section: 0), to: IndexPath(row: Int(diff.newIndex) + offset, section: 0), in: nil)
                }
            }
        }
        self.tableView?.reloadData()
    }
}
So again, I am asking whether I can have one snapshot listener that listens for changes in more than one page requested from Firestore.
Well, I contacted the guys over at Firebase Google Group for help, and they were able to tell me that my use case is not yet supported.
Thanks to Kato Richardson for attending to my problem!
For anyone interested in the details, see this thread
I came across the same use case today and have successfully implemented a working solution in an Objective-C client. Below is the algorithm, in case anyone wants to apply it in their program, and I would really appreciate it if the google-cloud-firestore team put my solution on their page.
Use case: a feature to allow paginating a long list of recent chats, with the option to attach real-time listeners that keep the list updated so the chat with the most recent message stays on top.
Solution: this can be made possible by using pagination logic like we do for other long lists, and attaching a real-time listener with the limit set to 1:
Step 1: On page load, fetch the chats using a pagination query as below:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    [self fetchChats];
}

- (void)fetchChats {
    __weak typeof(self) weakSelf = self;
    FIRQuery *paginateChatsQuery = [[[self.db collectionWithPath:MAGConstCollectionNameChats] queryOrderedByField:MAGConstFieldNameTimestamp descending:YES] queryLimitedTo:MAGConstPageLimit];
    if (self.arrChats.count > 0) {
        FIRDocumentSnapshot *lastChatDocument = self.arrChats.lastObject;
        paginateChatsQuery = [paginateChatsQuery queryStartingAfterDocument:lastChatDocument];
    }
    [paginateChatsQuery getDocumentsWithCompletion:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
        if (snapshot == nil) {
            NSLog(@"Error fetching documents: %@", error);
            return;
        }
        /// 2. Observe chat updates if not attached
        if (weakSelf.chatObserverState == ChatObserverStateNotAttached) {
            weakSelf.chatObserverState = ChatObserverStateAttaching;
            [weakSelf observeChats];
        }
        if (snapshot.documents.count < MAGConstPageLimit) {
            weakSelf.noMoreData = YES;
        }
        else {
            weakSelf.noMoreData = NO;
        }
        [weakSelf.arrChats addObjectsFromArray:snapshot.documents];
        [weakSelf.tblVuChatsList reloadData];
    }];
}
Step 2: In the success callback of the "fetchChats" method, attach the observer for real-time updates only once, with the limit set to 1.
- (void)observeChats {
    __weak typeof(self) weakSelf = self;
    self.chatsListener = [[[[self.db collectionWithPath:MAGConstCollectionNameChats] queryOrderedByField:MAGConstFieldNameTimestamp descending:YES] queryLimitedTo:1] addSnapshotListener:^(FIRQuerySnapshot * _Nullable snapshot, NSError * _Nullable error) {
        if (snapshot == nil) {
            NSLog(@"Error fetching documents: %@", error);
            return;
        }
        if (weakSelf.chatObserverState == ChatObserverStateAttaching) {
            weakSelf.chatObserverState = ChatObserverStateAttached;
        }
        for (FIRDocumentChange *diff in snapshot.documentChanges) {
            if (diff.type == FIRDocumentChangeTypeAdded) {
                /// New chat added
                NSLog(@"Added chat: %@", diff.document.data);
                FIRDocumentSnapshot *chatDoc = diff.document;
                [weakSelf handleChatUpdates:chatDoc];
            }
            else if (diff.type == FIRDocumentChangeTypeModified) {
                NSLog(@"Modified chat: %@", diff.document.data);
                FIRDocumentSnapshot *chatDoc = diff.document;
                [weakSelf handleChatUpdates:chatDoc];
            }
            else if (diff.type == FIRDocumentChangeTypeRemoved) {
                NSLog(@"Removed chat: %@", diff.document.data);
            }
        }
    }];
}
Step 3: In the listener callback, check the document changes and handle only the FIRDocumentChangeTypeAdded and FIRDocumentChangeTypeModified events; ignore FIRDocumentChangeTypeRemoved. We do this by calling the "handleChatUpdates" method for both added and modified events. In it, we first try to find the matching chat document in the local list; if it exists, we remove it, and then we insert the document received from the listener callback at the beginning of the list.
- (void)handleChatUpdates:(FIRDocumentSnapshot *)chatDoc {
    NSInteger chatIndex = [self getIndexOfMatchingChatDoc:chatDoc];
    if (chatIndex != NSNotFound) {
        /// Remove this object
        [self.arrChats removeObjectAtIndex:chatIndex];
    }
    /// Insert this chat object at the beginning of the array
    [self.arrChats insertObject:chatDoc atIndex:0];
    /// Refresh the tableview
    [self.tblVuChatsList reloadData];
}

- (NSInteger)getIndexOfMatchingChatDoc:(FIRDocumentSnapshot *)chatDoc {
    NSInteger chatIndex = 0;
    for (FIRDocumentSnapshot *chatDocument in self.arrChats) {
        if ([chatDocument.documentID isEqualToString:chatDoc.documentID]) {
            return chatIndex;
        }
        chatIndex++;
    }
    return NSNotFound;
}
Step 4. Reload the tableview to see the changes.
My solution is to create one maintainer query listener that observes the items removed from the first query; we update it every time a new message comes in.
To paginate with a snapshot listener, first we have to create a reference-point document from the collection. After that, we listen to the collection based on that reference-point document.
Let's say you have a collection called messages, where each document has a createdAt timestamp.
// get messages
async getMessages() {
    // First we will fetch the very last/latest document,
    // and use an array to hold the listeners.
    this.listenerArray = [];
    const very_last_document = await this.afs.collection('messages')
        .ref
        .limit(1)
        .orderBy('createdAt', 'desc')
        .get({ source: 'server' });
    // If very_last_document.empty is true, there are no messages yet,
    // so we can run a query without a limit; otherwise we apply the limit.
    if (!very_last_document.empty) {
        const start = very_last_document.docs[very_last_document.docs.length - 1].data().createdAt;
        // Listener for new messages:
        // all new messages will be registered on this listener.
        const listner_1 = this.afs.collection('messages')
            .ref
            .orderBy('createdAt', 'desc')
            .endAt(start) // makes sure the query fetches up to the 'start' point (including the 'start' document)
            .onSnapshot(messages => {
                for (const message of messages.docChanges()) {
                    if (message.type === "added") {
                        // do the job...
                    }
                    if (message.type === "modified") {
                        // do the job...
                    }
                    if (message.type === "removed") {
                        // do the job...
                    }
                }
            },
            err => {
                // on error
            });
        // Old messages will be registered on this listener.
        const listner_2 = this.afs.collection('messages')
            .ref
            .orderBy('createdAt', 'desc')
            .limit(20)
            .startAfter(start) // makes sure the query fetches after the 'start' point
            .onSnapshot(messages => {
                for (const message of messages.docChanges()) {
                    if (message.type === "added") {
                        // do the job...
                    }
                    if (message.type === "modified") {
                        // do the job...
                    }
                    if (message.type === "removed") {
                        // do the job...
                    }
                }
            },
            err => {
                // on error
            });
        this.listenerArray.push(listner_1, listner_2);
    } else {
        // No document found (very_last_document.empty is true),
        // so we can go with a query without a limit.
        const listner_1 = this.afs.collection('messages')
            .ref
            .orderBy('createdAt', 'desc')
            .onSnapshot(messages => {
                for (const message of messages.docChanges()) {
                    if (message.type === "added") {
                        // do the job...
                    }
                    if (message.type === "modified") {
                        // do the job...
                    }
                    if (message.type === "removed") {
                        // do the job...
                    }
                }
            },
            err => {
                // on error
            });
        this.listenerArray.push(listner_1);
    }
}

// to load more messages
LoadMoreMessage() {
    // Assuming the messages array holds the messages fetched so far,
    // the last element of the array is the starting point of the next batch.
    const endAt = this.messages[this.messages.length - 1].createdAt;
    const listner_2 = this.getService
        .collections('messages')
        .ref
        .limit(20)
        .orderBy('createdAt', 'asc') // should be in 'asc' order
        .endBefore(endAt) // gets the 20 documents (the limit we applied) before the 'endAt' point
        .onSnapshot(messages => {
            if (messages.empty && this.messages.length)
                this.messages[this.messages.length - 1].hasMore = false;
            for (const message of messages.docChanges()) {
                if (message.type === "added") {
                    // do the job...
                }
                if (message.type === "modified") {
                    // do the job
                }
                if (message.type === "removed") {
                    // do the job
                }
            }
        },
        err => {
            // on error
        });
    this.listenerArray.push(listner_2);
}

Using Spring Data and MongoDB, how can I avoid the Duplicate vertices error?

I get the error in one of the polygons I am importing.
Write failed with error code 16755 and error message 'Can't extract geo keys: { _id: "b9c5ac0c-e469-4b97-b059-436cd02ffe49", _class: .... ] Duplicate vertices: 0 and 15'
Full stack Trace: https://gist.github.com/boundaries-io/927aa14e8d1e42d7cf516dc25b6ebb66#file-stacktrace
GeoJson MultiPolygon I am importing using Spring Data MongoDB
public class MyPolgyon {
    @Id
    String id;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint position;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPoint location;
    @GeoSpatialIndexed(type=GeoSpatialIndexType.GEO_2DSPHERE)
    GeoJsonPolygon polygon;

    public static GeoJsonPolygon generateGeoJsonPolygon(List<LngLatAlt> coordinates) {
        List<Point> points = new ArrayList<Point>();
        for (LngLatAlt point : coordinates) {
            org.springframework.data.geo.Point dataPoint = new org.springframework.data.geo.Point(point.getLongitude(), point.getLatitude());
            points.add(dataPoint);
        }
        return new GeoJsonPolygon(points);
    }
}
How can I avoid this error in Java?
I can load the GeoJSON fine in http://geojson.io
Here is the GeoJSON: https://gist.github.com/boundaries-io/4719bfc386c3728b36be10af29860f4c#file-rol-ca-part1-geojson
Removal of duplicates using:
for (com.vividsolutions.jts.geom.Coordinate coordinate : geometry.getCoordinates()) {
    Point lngLatAlt = new Point(coordinate.x, coordinate.y);
    boolean isADup = points.contains(lngLatAlt);
    if (!isADup) {
        points.add(lngLatAlt);
    } else {
        LOGGER.debug("Duplicate, [" + lngLatAlt.toString() + "] index[" + count + "]");
    }
    count++;
}
Logging:
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[15]
2017-10-27 22:38:18 DEBUG TestBugs:58 - Duplicate, [Point [x=-97.009868, y=52.358242]] index[3348]
In this case you have a duplicate vertex at index 0 and index 1341 of the 2nd polygon.
[ -62.95859676499998, 46.20653318300003 ]
The insertion fails when MongoDB tries to build the 2dsphere index for the document. Remove the coordinate at index 1341 and you should be able to persist successfully.
You just have to cleanse the data when you find the error.
You can write a small program that reads the error from MongoDB and provides the update back to the client. The client can act on those messages and retry the request.
More information on geo errors can be found here.
You can look at the code here for GeoParser to find how/what errors are generated. For the specific error you got, take a look here: GeoParser. This error is generated by the S2 library that MongoDB uses for validation.
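One subtlety when cleansing: a valid GeoJSON ring must end on the same point it starts with, so a dedup pass has to drop repeated vertices without breaking that closure. A minimal sketch of such a pass, using the same Spring Data Point type as the mapping above (the class and method names are hypothetical):

import java.util.ArrayList;
import java.util.List;

import org.springframework.data.geo.Point;

public class RingCleanser {

    /** Removes duplicate vertices from a polygon ring, then re-closes it. */
    public static List<Point> removeDuplicateVertices(List<Point> ring) {
        List<Point> cleaned = new ArrayList<>();
        for (Point p : ring) {
            if (!cleaned.contains(p)) {  // Point is a value object, so contains() compares coordinates
                cleaned.add(p);
            }
        }
        // The closing vertex equals the first one, so the loop above drops it;
        // re-append it to satisfy the GeoJSON closure rule (first == last).
        if (!cleaned.isEmpty()) {
            cleaned.add(cleaned.get(0));
        }
        return cleaned;
    }
}

Running the points through a pass like this before calling generateGeoJsonPolygon should keep the 2dsphere index builder happy.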

How to read an updated value that has not been committed within a transaction?

I'm using Avaje Ebean API to CRUD data from PostgreSQL DB. Here is my code:
import com.avaje.ebean.Ebean;
import com.avaje.ebean.Query;
import com.avaje.ebean.SqlQuery;
import com.avaje.ebean.SqlUpdate;
...
class Example {
    private void insertStockItem(PickingFieldProductItemDto dto, StockDao stockDao) {
        try {
            Ebean.beginTransaction();
            List<PickingFieldProductItem> productList = dto.getItemList();
            for (PickingFieldProductItem product : productList) {
                StockDto stockDto = buildStockDto(product);
                Stock stock = stockDao.getStockItem(stockDto.ID);
                // stock.Quantity is always 500
                if (stock != null) {
                    if (stock.getQuantity().compareTo(product.getOutcomingQuantity()) == 0)
                        stockDao.deleteStock(stockDto);
                    else {
                        int updateQuantity = stock.getQuantity() - product.getOutcomingQuantity();
                        // updateQuantity = 300 and I will update this value to the stock table
                        int identity = stockDao.updateStock(stockDto.ID, updateQuantity);
                    }
                }
            }
            Ebean.commitTransaction();
        } catch (BusinessException e) {
            Ebean.rollbackTransaction();
        } finally {
            Ebean.endTransaction();
        }
    }
}
My problem is:
On the 1st loop iteration I get the stock object and see that its Quantity property is 500; then I update it with another value (300). The update statement ran successfully; I checked the identity var and it returned 1.
But on the 2nd iteration I get the stock object again, and the Quantity property is still 500. My expectation is 300. (Assume productList has only 2 elements.)
Can anyone help me get the expected value? Is it possible?
Thanks.
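(Not an authoritative answer, but a plausible explanation: if StockDao.updateStock issues a raw SqlUpdate, Ebean's persistence context, its L1 cache, never sees the change, so the next getStockItem for the same ID can hand back the cached bean with the old quantity. A sketch of one way around it, refreshing the bean after the update; the DAO method names are the question's own, and Ebean.refresh is part of the com.avaje.ebean API:

// Inside the loop, after the raw update:
int updateQuantity = stock.getQuantity() - product.getOutcomingQuantity();
stockDao.updateStock(stockDto.ID, updateQuantity);
// Re-read the bean's state from the DB. The refresh runs on the same
// transaction/connection, so it sees the uncommitted UPDATE and overwrites
// the stale copy held by the persistence context.
Ebean.refresh(stock);
// stock.getQuantity() now reflects the updated value (300)

Alternatively, setting the new quantity on the bean and saving it with Ebean.save(stock) keeps the persistence context coherent without any raw SQL at all.)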

Grails paginate not working

I have this list code in my controller that uses dynamic finders
def listPurchaseRequest(Integer max) {
    params.max = Math.min(max ?: 5, 100)
    def purchaseRequestList = PurchaseRequest.list(params)
    if (params.query) {
        purchaseRequestList = PurchaseRequest.findAllByRequestByLikeOrRequestNumberLike("%${params.query}%", "%${params.query}%", params)
    }
    [purchaseRequestInstanceList: purchaseRequestList,
     purchaseRequestInstanceTotal: /* this? */]
}
My search and list are working, except for the pagination.
<g:paginate total="${purchaseRequestInstanceTotal}" params="${params}" maxsteps="3" prev="«" next="»" />
If I use purchaseRequestList.totalCount, it works with the default list, but when the result is displayed after a search it gives me an Exception evaluating property 'totalCount' for java.util.ArrayList, Reason: groovy.lang.MissingPropertyException: No such property: totalCount for class: rms.PurchaseRequest error.
If I use purchaseRequestList.count(), it gives me this error instead: Could not find which method count() to invoke from this list: public java.lang.Number java.util.Collection#count(groovy.lang.Closure) public java.lang.Number java.util.Collection#count(java.lang.Object).
You need to use the CountBy* methods for the search. Try this
def listPurchaseRequest(Integer max) {
    params.max = Math.min(max ?: 5, 100)
    def purchaseRequestList, count
    if (params.query) {
        purchaseRequestList = PurchaseRequest.findAllByRequestByLikeOrRequestNumberLike("%${params.query}%", "%${params.query}%", params)
        count = PurchaseRequest.countByRequestByLikeOrRequestNumberLike("%${params.query}%", "%${params.query}%")
    } else {
        purchaseRequestList = PurchaseRequest.list(params)
        count = purchaseRequestList.totalCount
    }
    [purchaseRequestInstanceList: purchaseRequestList,
     purchaseRequestInstanceTotal: count]
}
FYI, I moved the .list() into an else clause to save you calling both list and findBy when params.query is set.

OneToOne relationship doesn't always fetch

I have a OneToOne relationship between two tables, as shown below:
PreRecordLoad.java:
@OneToOne(mappedBy = "preRecordLoadId", cascade = CascadeType.ALL)
private PreRecordLoadAux preRecordLoadAux;
PreRecordLoadAux.java:
@JoinColumn(name = "PRE_RECORD_LOAD_ID", referencedColumnName = "PRE_RECORD_LOAD_ID")
@OneToOne
private PreRecordLoad preRecordLoadId;
I'm using this method to pull back the PreRecordLoad object:
public PreRecordLoad FindPreRecordLoad(Long ID)
{
    print("Finding " + ID + "f");
    Query query;
    PreRecordLoad result = null;
    try
    {
        query = em.createNamedQuery("PreRecordLoad.findByPreRecordLoadId");
        query.setParameter("preRecordLoadId", ID);
        result = (PreRecordLoad)query.getSingleResult();
        //result = em.find(PreRecordLoad.class, ID);
    }
    catch(Exception e)
    {
        print(e.getLocalizedMessage());
    }
    return result;
}
The '+ "f"' is to see if the passed value somehow had something at the end. It didn't.
I originally used em.find, but the same issue occurred no matter which method I used.
I used to use a BigDecimal for the ID because it was the default, and noticed a precision difference between when it worked and when it didn't: the precision was 4 when it didn't work and 0 when it did. I couldn't work out why, so I changed the BigDecimal to a Long, as I never really needed a BigDecimal anyway.
When I save the new PreRecordLoad and PreRecordLoadAux objects to the database (inserting them for the first time) and then run this method to recall the objects, it retrieves the PreRecordLoad, but the PreRecordLoadAux is null. This is despite the entry being in the database and apparently fully committed, as I can access it from SQLDeveloper, which is a separate session.
However, if I stop and re-run the application, it successfully pulls back both objects. The ID being passed is the same both times, or at least appears to be.
Anyway, suggestions would be greatly appreciated. Thank you.
Edit:
Here is the code for when I am persisting the objects into the DB:
if (existingPreAux == null) {
    try {
        preLoad.setAuditSubLoadId(auditLoad);
        em.persist(preLoad);
        print("Pre Record Load entry Created");
        preAux.setPreRecordLoadId(preLoad);
        em.persist(preAux);
        print("Pre Record Load Aux entry Created");
    }
    catch (ConstraintViolationException e) {
        for (ConstraintViolation c : e.getConstraintViolations()) {
            System.out.println(c.getPropertyPath() + " " + c.getMessage());
        }
    }
}
else {
    try {
        preLoad.setPreRecordLoadId(existingPreLoad.getPreRecordLoadId());
        preLoad.setAuditSubLoadId(auditLoad);
        em.merge(preLoad);
        print("Pre Record Load entry found and updated");
        preAux.setPreRecordLoadAuxId(existingPreAux.getPreRecordLoadAuxId());
        preAux.setPreRecordLoadId(preLoad);
        em.merge(preAux);
        print("Pre Record Load Aux entry found and updated");
    }
    catch (ConstraintViolationException e) {
        for (ConstraintViolation c : e.getConstraintViolations()) {
            System.out.println(c.getPropertyPath() + " " + c.getMessage());
        }
    }
}
That's in a method, and after that code, the method ends.
It's your responsibility to maintain the coherence of the object graph. So, when you do preAux.setPreRecordLoadId(preLoad);, you must also do preLoad.setPreRecordLoadAux(preAux);.
If you don't, then every time you load the preAux in the same session, it will be retrieved from the first-level cache, and will thus return your incorrectly initialized instance of the entity.
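In code, next to the persist calls from the question (the setPreRecordLoadAux setter name is inferred from the preRecordLoadAux field above):

// Wire BOTH sides of the bidirectional @OneToOne before persisting,
// so the in-memory graph matches what ends up in the database.
preAux.setPreRecordLoadId(preLoad);    // owning side: writes PRE_RECORD_LOAD_ID
preLoad.setPreRecordLoadAux(preAux);   // inverse side: keeps the L1 cache coherent
em.persist(preLoad);
em.persist(preAux);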
