How to force composite indexes to appear in Google Cloud? - java

OK, see this picture in Google Cloud:
It says "Below are the composite indexes for this application. These indexes are managed in your app's index configuration file."
And see this following code:
public static long getPropertyMaxIDStronglyConsistent(String entityName, String propertyName, Key parentKey) {
    // Ancestor query: strongly consistent; sort descending so the max value comes first.
    Query q = new Query(entityName, parentKey).setAncestor(parentKey)
            .addSort(propertyName, SortDirection.DESCENDING);
    List<Entity> results = datastore.prepare(q)
            .asList(FetchOptions.Builder.withLimit(5));
    if (results.size() > 0) {
        Entity e = results.get(0);
        return (Long) e.getProperty(propertyName);
    } else {
        return 0;
    }
}
Suppose we run the function as getPropertyMaxIDStronglyConsistent("EnglishClass", "ClassNo", KeyFactory.createKey("EnglishClassParent", "EnglishClassParentName")).
What I found is that the function does not work unless the kind "EnglishClass" appears in the Indexes table with "serving" status.
I don't know what I did, but after struggling for a few hours the "EnglishClass" index suddenly appeared. Once it showed up with "serving" status, the app worked normally without any problem.
My questions are
What are composite indexes?
Why didn't it appear immediately after running the function1?
What does "serving" status mean?
How to force composite indexes to appear in Google Cloud?
Extra:
In the datastore-indexes-auto.xml I have
<datastore-indexes autoGenerate="true">
    <datastore-index kind="EnglishClass" ancestor="true" source="auto">
        <property name="ClassNo" direction="desc"/>
    </datastore-index>
</datastore-indexes>
But it still did not work

Indexes for the App Engine datastore are described in the official docs (written for Java 7, but the principles are the same for Java 8).
A composite index is an index that comprises more than one property of a model: for example, an index that sorts a model by Model.name, then by Model.creationDate. Composite indexes are used by queries that need to access datastore records in the order described by the query.
Some indexes must be declared explicitly in the datastore-indexes.xml file.
Serving status means the index is ready for use. When an index is first uploaded, App Engine must build it, and until the build completes, queries that use the index will throw an exception. It can therefore be helpful to update indexes before deploying the code that requires them.
Configure your app to automatically configure indexes.
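If auto-generation never picks the query up, one option is to declare the index by hand. A sketch (file location and element names per the App Engine docs; kind and property taken from the question) with auto-generation turned off:

```xml
<datastore-indexes autoGenerate="false">
    <!-- Ancestor query on EnglishClass sorted by ClassNo descending -->
    <datastore-index kind="EnglishClass" ancestor="true">
        <property name="ClassNo" direction="desc"/>
    </datastore-index>
</datastore-indexes>
```

After uploading the index definitions (for example with the SDK's update_indexes command), the index should appear in the console as "building" and then "serving".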

Related

CMIS query trying to retrieve folders/files under specific path returns no documents

Greetings to the community! I am using Alfresco Community Edition 6.0.0 and I just faced a very weird problem. I am using the Java API to access my Alfresco repository by running CMIS queries. I successfully fetched documents using cmis-strict queries like those shown below:
Example 1)
select * from cmis:document WHERE cmis:name like '%doc%' AND cmis:objectId = 'e318a431-0ff4-4a4a-9537-394d2bd761af'
Example 2)
SELECT * FROM cmis:document WHERE IN_FOLDER('63958f9c-819f-40f4-bedf-4a2e402f8b9f') AND cmis:name like '%temp%'
Both work perfectly. What I would like to do is retrieve files/folders under a specific path (e.g. fetch all folders under /app:company_home/app:user_homes).
What I do is run the following cmis-strict query from the Alfresco node browser:
SELECT * FROM cmis:folder WHERE CONTAINS('PATH:"//app:company_home/app:user_homes//*"')
but even though there are folders under that directory, nothing is returned. It seems the PATH argument is not recognized as it should be, because when I run the query
SELECT * FROM cmis:folder I get back many results that have as parent the
app:company_home/app:user_homes
node
Any idea what may be the problem? Any help would be greatly appreciated, thanks :)
EDIT:
I have also tried a Lucene query like
PATH:"/app:company_home/app:user_homes//*" but no results were returned either.
Your user-homes CONTAINS query works for me in both 5.2 and 6.1.1.
I like @Lista's suggestion of checking your index. If that doesn't bear fruit, you might get the CMIS object ID of the user homes folder and use it with the IN_FOLDER clause you've already proven works.
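That two-step fallback can be sketched as plain string building (a hypothetical helper, not an OpenCMIS API; the inputs are assumed not to contain single quotes):

```java
// Hypothetical helper: once you have the CMIS object ID of
// /app:company_home/app:user_homes, query with IN_FOLDER, which the
// asker has already proven works, instead of a PATH-based CONTAINS.
public class CmisQueries {
    static String inFolderByName(String folderObjectId, String namePattern) {
        return "SELECT * FROM cmis:folder WHERE IN_FOLDER('" + folderObjectId
                + "') AND cmis:name LIKE '" + namePattern + "'";
    }
}
```

The returned string can then be passed to the same query API used for the working examples above.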
I think both Lucene and CMIS queries (if using CONTAINS) end up on the index (not the database), so it's not a stretch to assume something is off with the index itself. Have you tried rebuilding it? Are your nodes even in the index? (There's a Solr admin console you can use to check.)
https://docs.alfresco.com/6.0/concepts/query-lang-support.html

E11000 duplicate key error

I am trying to insert documents into MongoDB from Java. The first record is inserted, but then I get the error 'E11000 duplicate key error'. I even tried to make the documents unique, but I still get the same error. Here is a screenshot of the error.
Mongodb version: v 3.4.10
@sowmyasurampalli, E11000 is a MongoDB error code meaning that some entry is duplicated. When you use a field as a unique field (in your case _id, which is unique by default), you must insert documents with distinct _ids, or this error will be thrown; your app should also catch the error to inform the user that the entry was duplicated.
Also, if you are sure that the docs you're inserting have unique ids, just remove the collection from the DB, because it still contains the documents from a previous insertion!
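A minimal sketch of the rule described above (a HashMap stands in for the collection; in real code you would let the MongoDB driver generate a fresh ObjectId rather than reuse one):

```java
import java.util.HashMap;
import java.util.Map;

// Simulates why E11000 fires: the _id value must be unique per collection.
public class UniqueIdDemo {
    final Map<String, String> collection = new HashMap<>();

    // Returns false instead of throwing, mirroring a duplicate-key rejection.
    boolean insert(String id, String doc) {
        if (collection.containsKey(id)) {
            return false; // duplicate _id -> E11000 in MongoDB
        }
        collection.put(id, doc);
        return true;
    }
}
```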
I just dropped the collection and everything started working fine after that
1.) Delete the database using the command: db.dropDatabase(); (don't consider this step too aggressive)
2.) Create a new db: use dbname
3.) Restart the server: npm start
Note: dropped indexes or the db will be rebuilt again from the schema file when the server is restarted.

find all items where a list field contains a value in dynamodb

I'm new to DynamoDb and I'm struggling to work out how to do this (using the java sdk).
I currently have a table (in Mongo) for notifications. The schema is basically as follows (I've simplified it):
id: string
notifiedUsers: [123, 345, 456, 567]
message: "this is a message"
created: 12345678000 (epoch millis)
I want to migrate to DynamoDB, but I can't work out the best way to select all notifications that went to a particular user after a certain date.
I gather I can't have an index on a list like notifiedUsers, and therefore can't use a query in this case - is that correct?
I'd prefer not to scan and then filter; there could be a lot of records.
Is there a way to do this using a query or another approach?
EDIT
This is what I'm trying now; it's not working, and I'm not sure where to take it (if anywhere).
Condition rangeKeyCondition = new Condition()
        .withComparisonOperator(ComparisonOperator.CONTAINS.toString())
        .withAttributeValueList(new AttributeValue().withS(userId));
if (startTimestamp != null) {
    // Note: this replaces the CONTAINS operator set above instead of adding a second condition
    rangeKeyCondition = rangeKeyCondition.withComparisonOperator(ComparisonOperator.GT.toString())
            .withAttributeValueList(new AttributeValue().withS(startTimestamp));
}
NotificationFeedDynamoRecord replyKey = new NotificationFeedDynamoRecord();
replyKey.setId(partitionKey);
DynamoDBQueryExpression<NotificationFeedDynamoRecord> queryExpression =
        new DynamoDBQueryExpression<NotificationFeedDynamoRecord>()
                .withHashKeyValues(replyKey)
                .withRangeKeyCondition(NOTIFICATIONS, rangeKeyCondition);
In case anyone else comes across this question: in the end we flattened the schema so that there is now one record per userId. This has led to problems, because DynamoDB cannot atomically batch-write records. With the original schema we had one record and could write it atomically, ensuring that all users got the notification. Now we cannot be certain of that, and it is causing pain.
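The flattening described above can be sketched like this (class and field names are illustrative, not the asker's actual model): one logical notification is expanded into per-user records, so userId can serve as the partition key and created as the sort key of a query.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical flattened record: one row per (userId, created) instead of a
// single row carrying a notifiedUsers list.
class UserNotification {
    final String userId;   // partition key in the flattened table
    final long created;    // sort key (epoch millis)
    final String message;

    UserNotification(String userId, long created, String message) {
        this.userId = userId;
        this.created = created;
        this.message = message;
    }
}

public class FanOut {
    // Expand one logical notification into per-user records before writing.
    static List<UserNotification> fanOut(List<String> notifiedUsers, long created, String message) {
        List<UserNotification> records = new ArrayList<>();
        for (String userId : notifiedUsers) {
            records.add(new UserNotification(userId, created, message));
        }
        return records;
    }
}
```

With this shape, "all notifications for user X after date D" becomes a plain key-condition query (userId equals X, created greater than D), at the cost of the multi-record write the answer warns about.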

How to apply aggregate functions(like MIN, MAX, COUNT) in JCR-SQL2?

I have some records stored as nodes in JCR, and the node name is the primary key, e.g. 1, 2, 3.
But the problem starts here:
the records are 1, 2, 6, 53, 54,
where the numbers above are nodes under the EMP unstructured node.
If I do
int count=empNode.getNodeIterator().getSize() I will get 5, as there are 5 nodes.
So I do count++, which gives me 6, but 6 already exists, so I can't create a node named 6 under EMP [nt:unstructured]. That's why I want to apply something like MAX(nodeNames) in the query. What should I do?
Update:
I use CQ 5.5. EMP is an unstructured node under content, i.e. /content/EMP.
Under EMP I have unstructured nodes that hold my data, and these nodes have names such as 1, 2, etc.
I tried to find a solution with my CQ 5.4 instance, but unfortunately my attempts were not successful. When I searched Google for 'sql2 count', I found a page where the same question was asked, and the answer was:
There is no count(*) or group by selector in JCR SQL 1, XPath [2] or
JCR-SQL2/AQM [3].
To implement such a tag cloud, you can run one query that fetches all
your content containing the relevant "tag" property:
//element(*, my:Article)[@tag]
and then iterate over the result and count your tags on the
application side by looking at the tag property values and using some
hashmap (tagid -> count).
http://www.day.com/specs/jcr/1.0/ (section 8.5)
http://www.day.com/specs/jcr/1.0/ (section 6.6)
http://www.day.com/specs/jcr/2.0/6_Query.html
I think you can connect this answer to MAX() and MIN().
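Applying that idea to the question, the application-side MAX can be a small loop over the child node names. A sketch with plain strings standing in for the names of the nodes under /content/EMP:

```java
import java.util.Collection;

// Compute the next node name as max(existing numeric names) + 1,
// instead of count + 1, which breaks when names have gaps (1, 2, 6, 53, 54).
public class NextNodeName {
    static String next(Collection<String> nodeNames) {
        long max = 0;
        for (String name : nodeNames) {
            try {
                max = Math.max(max, Long.parseLong(name));
            } catch (NumberFormatException e) {
                // skip non-numeric child names such as "jcr:content"
            }
        }
        return Long.toString(max + 1);
    }
}
```

In real code the names would come from iterating empNode's children with a NodeIterator and reading each node's name.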
I implemented a simple Apache Sling servlet that provides the count(*) function. More information here: https://github.com/artika4biz/sling-utils.
Official documentation can be found here: https://jackrabbit.apache.org/oak/docs/query/query-engine.html

Liquibase + H2 + Junit Primary Key Sequence starts over

I managed to integrate Liquibase into our Maven build to initialize an H2 in-memory database with a few entries. Those rows have primary keys generated using a sequence table, which works as expected (BigInt values incremented starting from 1).
My issue is that when I try to persist a new entity into that table from within a JUnit integration test, I get a "unique key constraint violation", because the new entity has the same primary key as the very first row inserted via the Liquibase changelog XMLs.
So the initialisation itself works exactly as expected; the Maven build uses the Liquibase changelog XMLs.
For now I just wipe the affected tables completely before any integration tests with my own runner, but that won't be a possibility in the future. It is currently quite a challenge to investigate such issues, since there is not yet much specific information on Liquibase available.
Update: workaround
While I'd prefer the answer below, using H2 brings up the problem that the following changeset won't work, because the required minValue is not supported:
<changeSet author="liquibase-docs" id="alterSequence-example">
    <alterSequence
        incrementBy="1"
        maxValue="371717"
        minValue="40"
        ordered="true"
        schemaName="public"
        sequenceName="seq_id"/>
</changeSet>
As a simple workaround, I now just drop the existing sequence that was used to insert my test data, in a second changeSet:
<changeSet id="2" author="Me">
    <dropSequence
        sequenceName="SEQ_KEY_MY_TBL"/>
    <createSequence
        sequenceName="SEQ_KEY_MY_TBL"
        incrementBy="1"
        startValue="40"/>
</changeSet>
This way, the values configured in the changelog-*.xml are inserted using the sequence with an initial value of 1. I insert 30 rows, so keys 1-30 are used. After that, the sequence is dropped and recreated with a higher startValue. When entities are then persisted from within a JUnit-based integration test, they get primary keys starting from 40, and the unique constraint problem is solved.
Note: H2 will probably soon release a version supporting minValue/maxValue, since the corresponding patch already exists.
Update:
We should mention that this is still just a workaround. Does anyone know whether H2 supports a sequence with Liquibase that won't start over after DB init?
You should instruct Liquibase to set the start value of those sequences to a value beyond the ones you have already used for the entries you created. Liquibase has an alterSequence element for this; you can add such elements at the end of your current Liquibase script.
