According to the official Drools documentation, it is possible to obtain results from a stateless session using a query.
// Set up a list of commands
List cmds = new ArrayList();
cmds.add( CommandFactory.newSetGlobal( "list1", new ArrayList(), true ) );
cmds.add( CommandFactory.newInsert( new Person( "jon", 102 ), "person" ) );
cmds.add( CommandFactory.newQuery( "Get People", "getPeople" ) );
// Execute the list
ExecutionResults results =
ksession.execute( CommandFactory.newBatchExecution( cmds ) );
// Retrieve the ArrayList
results.getValue( "list1" );
// Retrieve the inserted Person fact
results.getValue( "person" );
// Retrieve the query as a QueryResults instance.
results.getValue( "Get People" );
In the sample above, Get People is a Drools query which basically returns an object or a list of objects from a stateless (!) session.
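For completeness, here is a minimal sketch of how such a query result is typically read. The bound variable name "person" is an assumption of mine; it has to match whatever the getPeople query actually binds. QueryResults and QueryResultsRow come from the Drools/KIE rule runtime API (org.kie.api.runtime.rule in Drools 6+).
// Sketch only: iterating the query results returned by the batch execution.
QueryResults people = (QueryResults) results.getValue( "Get People" );
for (QueryResultsRow row : people) {
    Person p = (Person) row.get( "person" ); // "person" is an assumed binding name
    System.out.println( p );
}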
In my project I need to obtain an object created in a stateless KIE session, so I've created a query:
query "getCustomerProfileResponse"
$result: CustomerProfileResponse()
end
The CustomerProfileResponse object is constructed and inserted on the RHS:
insert(customerProfileResponse);
I wrote the following code to execute commands in batch mode and query the resulted CustomerProfileResponse:
// Creating a batch list
List<Command<?>> commands = new ArrayList<Command<?>>(10);
commands.add(CommandFactory.newInsert(customerProfile));
commands.add(CommandFactory.newQuery(GET_CUSTOMER_PROFILE_RESPONSE,
GET_CUSTOMER_PROFILE_RESPONSE));
// GO!
ExecutionResults results = kSession.execute(CommandFactory.newBatchExecution(commands));
FlatQueryResults queryResults = (FlatQueryResults) results.getValue(GET_CUSTOMER_PROFILE_RESPONSE); // size() is 0!
But queryResults returns an empty list.
I searched Stack Overflow for similar questions and found out that it is not possible to run queries against stateless sessions in Drools using batch mode, since the session closes immediately after the execute() method is called; the suggested workaround is to inject an empty CustomerProfileResponse object along with the CustomerProfile in the request.
Can anybody shed some light on this issue?
Adding CommandFactory.newFireAllRules() after newInsert and before newQuery should solve the problem. See http://drools-moved.46999.n3.nabble.com/rules-users-Query-in-stateless-knowledge-session-returns-no-results-td3210735.html
Your rules will not fire until all the commands have been executed, i.e. the implicit fireAllRules() happens once all commands have been executed. This means the query will be invoked before your rule fires to insert the object.
Instead you need to add the FireAllRules command before executing the query.
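Putting that together with the code from the question, a hedged sketch of the fixed batch might look like this. The identifiers GET_CUSTOMER_PROFILE_RESPONSE, customerProfile, kSession and CustomerProfileResponse are taken from the question; the "$result" binding follows the query shown above, and the cast to QueryResults is my assumption about the returned type.
List<Command<?>> commands = new ArrayList<Command<?>>();
commands.add(CommandFactory.newInsert(customerProfile));
commands.add(CommandFactory.newFireAllRules()); // fire rules so the RHS inserts customerProfileResponse
commands.add(CommandFactory.newQuery(GET_CUSTOMER_PROFILE_RESPONSE, GET_CUSTOMER_PROFILE_RESPONSE));

ExecutionResults results = kSession.execute(CommandFactory.newBatchExecution(commands));
QueryResults queryResults = (QueryResults) results.getValue(GET_CUSTOMER_PROFILE_RESPONSE);
for (QueryResultsRow row : queryResults) {
    CustomerProfileResponse response = (CustomerProfileResponse) row.get("$result");
    // use response...
}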
I have a list of Strings and I want to import all the elements into the graph database. By import I mean I want to set each String as a Node's property. The size of the list is going to be massive. So is there any way to automate Node naming? Because in the traditional way, you have to create Nodes by calling graphDb.createNode() 100 times if the size of the list is 100.
You can pass your list of strings as a parameter to a Cypher query. Here is a sample snippet:
List<String> names = ...;
try ( Transaction tx = graphDb.beginTx() )
{
String queryString = "UNWIND {names} AS name CREATE (n:User {name: name})";
Map<String, Object> parameters = new HashMap<>();
parameters.put( "names", names );
graphDb.execute( queryString, parameters );
tx.success();
}
Note: If the list of strings is "too long", the above approach will not work, as the server could run out of memory trying to do all that processing in a single transaction. In that case, you may want to use an APOC procedure like apoc.periodic.iterate to create the nodes in smaller batches.
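If APOC is not an option, one alternative (a sketch of my own, not part of the original answer) is to split the list in application code and run the same UNWIND query over smaller chunks, each in its own transaction. The batch size is illustrative.
int batchSize = 10_000; // illustrative chunk size
String queryString = "UNWIND {names} AS name CREATE (n:User {name: name})";
for (int i = 0; i < names.size(); i += batchSize) {
    List<String> chunk = names.subList(i, Math.min(i + batchSize, names.size()));
    try ( Transaction tx = graphDb.beginTx() )
    {
        Map<String, Object> parameters = new HashMap<>();
        parameters.put( "names", chunk );
        graphDb.execute( queryString, parameters );
        tx.success();
    }
}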
Riddle me this, Stack Overflow:
I have a query that I am sending to GAE. The query (When in String format) looks like this:
SELECT * FROM USER WHERE USER_ID = 5884677008
If I go to the GAE console and type it in via a manual GQL query, it returns the item just fine. If I browse via the GUI and scroll to it, I can see it just fine. But when I call it from the Java code, it returns nothing every time.
I have already confirmed the query is correct, as I printed it out as a String just so I could test it.
Anyone have any idea what is going on with this?
Code:
q = new Query(entityName); //entityName = "User", confirmed
q.setFilter(filter); //filter = "USER_ID = 5884677008", confirmed
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
PreparedQuery pq = datastore.prepare(q);
/*
This is always empty here. Calling either pq.countEntities() or
pq.toString() returns size 0 or an empty String.
*/
Thanks!
-Sil
Edit: I Do have an index built, but it did not seem to help with the problem.
From the docs, you don't necessarily need to do toString. Have you tried asIterable or asSingleEntity on pq? Something like:
PreparedQuery pq = datastore.prepare(q);
for (Entity result : pq.asIterable()) {
String test = (String) result.getProperty("prop1");
}
That's if you have multiple entries. In the event you only have one:
PreparedQuery pq = datastore.prepare(q);
Entity result = pq.asSingleEntity();
String test = (String) result.getProperty("prop1");
Basically, if you don't call asIterable or asSingleEntity, the query is JUST prepared and doesn't run.
Took quite a bit of testing, but found the issue.
The problem revolved around the filter being set. If I removed the filter, it worked fine (but returned everything). It turns out that what was being passed as a filter was a String version of the user_id rather than the Long version of it. There was really no way to tell, as the printed SQL query DID NOT read ( SELECT * FROM USER WHERE USER_ID = "5884677008" ), which would have been a dead giveaway.
I changed the passed filter parameter (which I had stored in a HashMap of (String, Object), by the way) from a String to a Long and that solved the issue.
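For illustration, a hedged sketch of the corrected filter construction: the kind and property names come from the question, everything else is illustrative.
// Build the filter with a Long value, not the String "5884677008".
Query.Filter filter = new Query.FilterPredicate("USER_ID",
        Query.FilterOperator.EQUAL,
        5884677008L);
Query q = new Query("User").setFilter(filter);

DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Entity result = datastore.prepare(q).asSingleEntity();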
One thing to point out though, as @Patrice brought up (and as I excluded from my code while posting to save space): to actually iterate through the list of results, you do need to call a method against it (either .asIterable() or .asSingleEntity()).
You can actually check the number of returned entities/results by calling pq.countEntities(), and it will return the correct number even before you call a formatting method against the pq; but as @tx802 pointed out, it is deprecated, and despite the fact that it worked for me, it may not work for someone using this post as a reference in the future.
I have a classic Java EE system, Web tier with JSF, EJB 3 for the BL, and Hibernate 3 doing the data access to a DB2 database. I am struggling with the following scenario: A user will initiate a process which involves retrieving a large data set from the database. The retrieval process takes some time and so the user does not receive an immediate response, gets impatient and opens a new browser and initiates the retrieval again, sometimes multiple times. The EJB container is obviously unaware of the fact that the first retrievals are no longer relevant, and when the database returns a result set, Hibernate starts populating a set of POJOs which take up vast amounts of memory, eventually causing an OutOfMemoryError.
A potential solution that I thought of was to use the Hibernate Session's cancelQuery method. However, the cancelQuery method only works before the database returns a result set. Once the database returns a result set and Hibernate begins populating the POJOs, the cancelQuery method no longer has an effect. In this case, the database queries themselves return rather quickly, and the bulk of the performance overhead seems to reside in populating the POJOs, at which point we can no longer call the cancelQuery method.
The solution implemented ended up looking like this:
The general idea was to maintain a map of all the Hibernate sessions that are currently running queries to the HttpSession of the user who initiated them, so that when the user would close the browser we would be able to kill the running queries.
There were two main challenges to overcome here. One was propagating the HTTP session-id from the web tier to the EJB tier without interfering with all the method calls along the way - i.e. not tampering with existing code in the system. The second challenge was to figure out how to cancel the queries once the database had already started returning results and Hibernate was populating objects with the results.
The first problem was overcome based on our realization that all methods called along the stack were handled by the same thread. This makes sense, as our application lives entirely within one container and does not make any remote calls. That being the case, we created a Servlet Filter that intercepts every call to the application and sets a ThreadLocal variable with the current HTTP session-id. This way the HTTP session-id is available to each of the method calls further down the line.
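A minimal sketch of such a filter, assuming the classic javax.servlet API; the class and field names here are illustrative, not the actual code from our system.
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

public class SessionIdCaptureFilter implements Filter {

    // Holds the HTTP session-id for the thread handling the current request.
    public static final ThreadLocal<String> SESSION_ID = new ThreadLocal<String>();

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            SESSION_ID.set(((HttpServletRequest) req).getSession().getId());
            chain.doFilter(req, res);
        } finally {
            SESSION_ID.remove(); // don't leak the id to the next request on a pooled thread
        }
    }

    public void init(FilterConfig filterConfig) { }
    public void destroy() { }
}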
The second challenge was a little more sticky. We discovered that the Hibernate method responsible for running the queries and subsequently populating the POJOs was called doQuery and located in the org.hibernate.loader.Loader.java class. (We happen to be using Hibernate 3.5.3, but the same holds true for newer versions of Hibernate.):
private List doQuery(
        final SessionImplementor session,
        final QueryParameters queryParameters,
        final boolean returnProxies) throws SQLException, HibernateException {

    final RowSelection selection = queryParameters.getRowSelection();
    final int maxRows = hasMaxRows( selection ) ?
            selection.getMaxRows().intValue() :
            Integer.MAX_VALUE;

    final int entitySpan = getEntityPersisters().length;

    final ArrayList hydratedObjects = entitySpan == 0 ? null : new ArrayList( entitySpan * 10 );
    final PreparedStatement st = prepareQueryStatement( queryParameters, false, session );
    final ResultSet rs = getResultSet( st, queryParameters.hasAutoDiscoverScalarTypes(), queryParameters.isCallable(), selection, session );

    final EntityKey optionalObjectKey = getOptionalObjectKey( queryParameters, session );
    final LockMode[] lockModesArray = getLockModes( queryParameters.getLockOptions() );
    final boolean createSubselects = isSubselectLoadingEnabled();
    final List subselectResultKeys = createSubselects ? new ArrayList() : null;
    final List results = new ArrayList();

    try {
        handleEmptyCollections( queryParameters.getCollectionKeys(), rs, session );

        EntityKey[] keys = new EntityKey[entitySpan]; //we can reuse it for each row

        if ( log.isTraceEnabled() ) log.trace( "processing result set" );

        int count;
        for ( count = 0; count < maxRows && rs.next(); count++ ) {

            if ( log.isTraceEnabled() ) log.debug("result set row: " + count);

            Object result = getRowFromResultSet(
                    rs,
                    session,
                    queryParameters,
                    lockModesArray,
                    optionalObjectKey,
                    hydratedObjects,
                    keys,
                    returnProxies
            );
            results.add( result );

            if ( createSubselects ) {
                subselectResultKeys.add(keys);
                keys = new EntityKey[entitySpan]; //can't reuse in this case
            }
        }

        if ( log.isTraceEnabled() ) {
            log.trace( "done processing result set (" + count + " rows)" );
        }
    }
    finally {
        session.getBatcher().closeQueryStatement( st, rs );
    }

    initializeEntitiesAndCollections( hydratedObjects, rs, session, queryParameters.isReadOnly( session ) );

    if ( createSubselects ) createSubselects( subselectResultKeys, queryParameters, session );

    return results; //getResultList(results);
}
In this method you can see that first the results are brought back from the database in the form of a good old-fashioned java.sql.ResultSet, after which it loops over each row and creates an object from it. Some additional initialization is performed in the initializeEntitiesAndCollections() method called after the loop. After debugging a little, we discovered that the bulk of the performance overhead was in these sections of the method, and not in the part that gets the java.sql.ResultSet from the database, whereas the cancelQuery method was only effective on the first part. The solution therefore was to add an additional condition to the for loop, checking whether the thread has been interrupted, like this:
// "currentThread" is a reference captured earlier via Thread.currentThread()
for ( count = 0; count < maxRows && rs.next() && !currentThread.isInterrupted(); count++ ) {
    // ...
}
as well as to perform the same check before calling the initializeEntitiesAndCollections() method:
if (!Thread.interrupted()) {
initializeEntitiesAndCollections(hydratedObjects, rs, session,
queryParameters.isReadOnly(session));
if (createSubselects) {
createSubselects(subselectResultKeys, queryParameters, session);
}
}
Additionally, by calling Thread.interrupted() for the second check, the flag is cleared and does not affect the further functioning of the program. Now, when a query is to be canceled, the canceling method accesses the Hibernate session and thread stored in a map keyed by the HTTP session-id, calls the cancelQuery method on the session, and calls the interrupt method of the thread.
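The canceling side can be sketched roughly like this; the registry and the names are illustrative, the essential calls are Session.cancelQuery() and Thread.interrupt() as described above.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.hibernate.Session;

public class QueryCanceller {

    // HTTP session-id -> the Hibernate session and the worker thread running its query.
    private static final Map<String, Running> RUNNING = new ConcurrentHashMap<String, Running>();

    private static class Running {
        final Session hibernateSession;
        final Thread workerThread;
        Running(Session s, Thread t) { this.hibernateSession = s; this.workerThread = t; }
    }

    // Called (e.g. from the DAO) right before a long query starts.
    public static void register(String httpSessionId, Session session) {
        RUNNING.put(httpSessionId, new Running(session, Thread.currentThread()));
    }

    // Called when the user closes the browser / abandons the request.
    public static void cancel(String httpSessionId) {
        Running r = RUNNING.remove(httpSessionId);
        if (r != null) {
            r.hibernateSession.cancelQuery(); // effective while the JDBC statement is still executing
            r.workerThread.interrupt();       // stops the patched result-processing loop
        }
    }
}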
I had a similar problem in a totally different environment. I did the following: before adding a new job to my queue, I first checked whether the 'same job' was already enqueued by that user. If so, I did not accept the second job and informed the user about that.
This doesn't answer your question on how to protect the user from an OutOfMemoryError if the data is too big to fit in the available RAM, but it's a good trick to protect your server from doing useless work.
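For what it's worth, here is a minimal sketch of that idea; all names are illustrative and the concrete queue will differ per system.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

public class DeduplicatingSubmitter {

    private final Set<String> pending = ConcurrentHashMap.newKeySet();
    private final ExecutorService pool;

    public DeduplicatingSubmitter(ExecutorService pool) {
        this.pool = pool;
    }

    /** Returns false if the same job is already enqueued or running for this user. */
    public boolean submit(final String userId, final String jobKey, final Runnable job) {
        final String key = userId + ":" + jobKey;
        if (!pending.add(key)) {
            return false; // reject the duplicate and inform the user
        }
        pool.submit(new Runnable() {
            public void run() {
                try {
                    job.run();
                } finally {
                    pending.remove(key); // allow the user to run the job again later
                }
            }
        });
        return true;
    }
}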
Too complicated for me :-) I would create a separate service for "heavy" queries, and store in it information about the query parameters and maybe the results, which would be valid for a limited time. If query execution takes too long, the user receives a message that execution of his task will take considerable time, and he may wait or cancel it. Such a scenario works fine for analytic queries. This variant also gives you simple access to the tasks running on the server in order to kill them.
But if you have problems with Hibernate, then I suppose the problem is not in analytic queries but in ordinary business queries. If their execution takes too long, can you try to use the L2 cache (a cold start may be very long, but hot data would be received instantly)? Or optimize Hibernate/JDBC parameters?
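Just to illustrate the L2 cache suggestion: a generic sketch, not specific to the asker's entities, assuming a cache provider such as Ehcache is configured and hibernate.cache.use_second_level_cache is enabled.
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // cache instances of this entity in the L2 cache
public class ReferenceData { // hypothetical entity name

    @Id
    private Long id;

    private String value;

    // getters/setters omitted
}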
I have a Spring Batch project running in Spring Boot that is working perfectly fine. For my reader I'm using JdbcPagingItemReader with a MySqlPagingQueryProvider.
@Bean
public ItemReader<Person> reader(DataSource dataSource) {
MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider()
provider.setSelectClause(ScoringConstants.SCORING_SELECT_STATEMENT)
provider.setFromClause(ScoringConstants.SCORING_FROM_CLAUSE)
provider.setSortKeys("p.id": Order.ASCENDING)
JdbcPagingItemReader<Person> reader = new JdbcPagingItemReader<Person>()
reader.setRowMapper(new PersonRowMapper())
reader.setDataSource(dataSource)
reader.setQueryProvider(provider)
//Setting these caused the exception
reader.setParameterValues(
startDate: new Date() - 31,
endDate: new Date()
)
reader.afterPropertiesSet()
return reader
}
However, when I modified my query with some named parameters to replace previously hard coded date values and set these parameter values on the reader as shown above, I get the following exception on the second page read (the first page works fine because the _id parameter hasn't been made use of by the paging query provider):
org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter '_id': No value registered for key '_id'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:336)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.getPreparedStatementCreator(NamedParameterJdbcTemplate.java:374)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:192)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:199)
at org.springframework.batch.item.database.JdbcPagingItemReader.doReadPage(JdbcPagingItemReader.java:218)
at org.springframework.batch.item.database.AbstractPagingItemReader.doRead(AbstractPagingItemReader.java:108)
Here is an example of the SQL, which has no WHERE clause by default. One does get created automatically when the second page is read:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p
On the second page, the SQL is modified to the following; however, it seems that the named parameter for _id didn't get supplied:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p WHERE id > :_id
I'm wondering if I simply can't use the MySqlPagingQueryProvider sort keys together with additional named parameters set in JdbcPagingItemReader. If not, what is the best alternative to solving this problem? I need to be able to supply parameters to the query and also page it (vs. using the cursor). Thank you!
I solved this problem with some intense debugging. It turns out that MySqlPagingQueryProvider utilizes a method getSortKeysWithoutAliases() when it builds up the SQL query to run for the first page and for subsequent pages. It therefore appends and (p.id > :_id) instead of and (p.id > :_p.id). Later on, when the second-page sort values are created and stored in JdbcPagingItemReader's startAfterValues field, it uses the original "p.id" String specified and eventually puts the pair ("_p.id", 10) into the named parameter map. However, when the reader tries to fill in _id in the query, no value exists for it, because the stored key still contains the alias.
Long story short, I had to remove the alias reference when defining my sort keys.
provider.setSortKeys("p.id": Order.ASCENDING)
had to be changed to the following in order for everything to work nicely together:
provider.setSortKeys("id": Order.ASCENDING)
I had the same issue and found another possible solution.
My table T has a primary key field INTERNAL_ID.
The query in JdbcPagingItemReader was like this:
SELECT INTERNAL_ID, ... FROM T WHERE ... ORDER BY INTERNAL_ID ASC
So, the key is: under some conditions, the query didn't return any results, and then the error above (No value supplied for...) was raised.
The solution is:
Check in a Spring Batch decider element whether there are rows (a sketch of such a decider appears after this answer).
If there are, continue with the chunk step: reader-processor-writer.
If not, go to another step.
Please note that these are two different scenarios:
At the beginning there are rows. You get them by paging and, finally, there are no more rows. This causes no problem and the decider trick is not required.
At the beginning there are no rows. Then this error is raised, and the decider solves it.
Hope this helps.
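For illustration, a hedged sketch of such a decider: the count query and the status names are placeholders, and the WHERE criteria should mirror whatever the paging reader selects (table T is the one from this answer).
import javax.sql.DataSource;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.jdbc.core.JdbcTemplate;

public class RowsExistDecider implements JobExecutionDecider {

    private final JdbcTemplate jdbcTemplate;

    public RowsExistDecider(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        // Count rows using the same criteria as the reader's query.
        Integer count = jdbcTemplate.queryForObject("SELECT COUNT(*) FROM T", Integer.class);
        return (count != null && count > 0)
                ? new FlowExecutionStatus("ROWS_FOUND") // continue with reader-processor-writer
                : new FlowExecutionStatus("NO_ROWS");   // route to another step
    }
}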
So in my database I have 3 rows; two rows have defaultFlag set to 0 and one has it set to 1. Now in my processing I am updating the default property of one object from 0 to 1, but I am not saving this object yet.
Before saving I need to query the database to find out whether any row has the defaultFlag set; there should be only 1 default set.
So before doing the update I run a query to find whether a default is set, and I get 2 values back. Note that if I go and check in the database there is only 1 row with the default set, but the query gives me two results because this object's default property has changed from 0 to 1, even though the object is not yet saved in the database.
I am really confused as to why the Hibernate query returns 2 when there is one row with the default set in the database and another object whose default property has changed but has not been saved.
Any thoughts would be helpful. I can provide the query if need be.
Update
Following the suggestions, I added session.clear() before running the query.
session.clear();
String sql = "SELECT * FROM BANKACCOUNTS WHERE PARTYID = :partyId AND CURRENCYID = :currencySymbol AND ISDEFAULTBANKACCOUNT= :defaultbankAccount";
SQLQuery q = session.createSQLQuery(sql);
q.addEntity(BankAccount.class);
q.setParameter("partyId", partyId);
q.setParameter("currencySymbol", currencySymbol);
q.setParameter("defaultbankAccount", 1);
return q.uniqueResult();
and it returns 1 row in the result as expected, but now I am getting:
nested exception is org.hibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session
Either query which row has the "default flag" set before you start changing it, or query for a list of rows with default flag set & clear all except the one you're trying to set.
Very easy: stop mucking about with your "brittle" current approach, which will break in the face of concurrency or if data is ever in an inconsistent state. Use a reliable approach instead, one which will always set the data to a valid state.
protected void makeAccountDefault (BankAccount acc) {
    // find & clear any existing 'Default Accounts', other than specified.
    //
    String sql = "SELECT * FROM BANKACCOUNTS WHERE PARTYID = :partyId AND CURRENCYID = :currencySymbol AND ISDEFAULTBANKACCOUNT= :defaultbankAccount";
    SQLQuery q = session.createSQLQuery(sql);
    q.addEntity(BankAccount.class);
    q.setParameter("partyId", partyId);
    q.setParameter("currencySymbol", currencySymbol);
    q.setParameter("defaultbankAccount", 1);
    //
    List<BankAccount> existingDefaults = q.list();
    for (BankAccount existing : existingDefaults) {
        if (! existing.equals( acc))
            existing.setDefaultBankAccount( false);
    }
    // set the specified Account as Default.
    acc.setDefaultBankAccount( true);
    // done.
}
This is how you write proper code: keep it simple and reliable. Never make or depend on weak assumptions about the reliability of data or internal state; always read and process the "beforehand state" before you do the operation. Implement your code cleanly and correctly and it will serve you well.
I think that your second query won't be executed at all because the entity is already in the first-level cache.
As your transaction is not yet committed, you don't see the changes in the underlying database.
(this is only a guess)
That's only a guess because you're not giving many details, but I suppose that you perform your myObject.setMyDefaultProperty(1) while your session is open.
In this case, be aware that you don't actually need to call session.update(myObject) to save the change. In the nominal case, the database update is done transparently by Hibernate.
So, in fact, I think that your change is saved... (but not committed, of course, and thus not seen when you check in the database).
To verify this, you should enable the hibernate.show_sql option. You will see whether an UPDATE statement is triggered. (I advise always enabling this option during the development phase anyway.)
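If you configure Hibernate programmatically, a minimal sketch of enabling this looks like the following; the property keys are the standard Hibernate settings, and where you put them depends on your setup (hibernate.cfg.xml, persistence.xml, etc.).
import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration()
        .setProperty("hibernate.show_sql", "true")    // log every SQL statement
        .setProperty("hibernate.format_sql", "true"); // pretty-print the logged SQL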