I'm calling commitChanges(), but I get a java.lang.NullPointerException. Log:
...
INFO: --- transaction started.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.access.dbsync.CreateIfNoSchemaStrategy processSchemaUpdate
INFO: Full or partial schema detected, skipping tables creation
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQuery
INFO: SELECT NEXT_ID FROM AUTO_PK_SUPPORT WHERE TABLE_NAME = 'ARTIST'
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logSelectCount
INFO: === returned 1 row. - took 16 ms.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQueryError
INFO: *** error.
java.lang.NullPointerException
at com.relx.jdbc.jdbc2.LinterStatementImpl.getUpdateCount(LinterStatementImpl.java:419)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.execute(SQLTemplateAction.java:190)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.performAction(SQLTemplateAction.java:124)
at org.apache.cayenne.access.DataNodeQueryAction.runQuery(DataNodeQueryAction.java:87)
at org.apache.cayenne.access.DataNode.performQueries(DataNode.java:280)
at org.apache.cayenne.dba.JdbcPkGenerator.longPkFromDatabase(JdbcPkGenerator.java:310)
at org.apache.cayenne.dba.JdbcPkGenerator.generatePk(JdbcPkGenerator.java:268)
at org.apache.cayenne.access.DataDomainInsertBucket.createPermIds(DataDomainInsertBucket.java:171)
at org.apache.cayenne.access.DataDomainInsertBucket.appendQueriesInternal(DataDomainInsertBucket.java:76)
at org.apache.cayenne.access.DataDomainSyncBucket.appendQueries(DataDomainSyncBucket.java:78)
at org.apache.cayenne.access.DataDomainFlushAction.preprocess(DataDomainFlushAction.java:188)
at org.apache.cayenne.access.DataDomainFlushAction.flush(DataDomainFlushAction.java:144)
at org.apache.cayenne.access.DataDomain.onSyncFlush(DataDomain.java:853)
at org.apache.cayenne.access.DataDomain$2.transform(DataDomain.java:817)
at org.apache.cayenne.access.DataDomain.runInTransaction(DataDomain.java:877)
at org.apache.cayenne.access.DataDomain.onSyncNoFilters(DataDomain.java:814)
at org.apache.cayenne.access.DataDomain$DataDomainSyncFilterChain.onSync(DataDomain.java:1031)
at org.apache.cayenne.access.DataDomain.onSync(DataDomain.java:785)
at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:817)
at org.apache.cayenne.access.DataContext.commitChanges(DataContext.java:756)
at CayenneTest2.main(CayenneTest2.java:61)
The AUTO_PK_SUPPORT table was created and populated by Apache Cayenne.
Why is this exception thrown?
From the stack trace, you are working with Cayenne 3.1. The code in question is here. Cayenne's SQLTemplateAction checks whether the result of the query is a ResultSet, and since the answer is "no", it assumes the result is an update count. So it tries to read the update count on line 190:
int updateCount = statement.getUpdateCount();
Somehow the underlying statement object (LinterStatementImpl) is not happy about that. I don't have access to the source code of the Linter DB driver, so I can't say what exactly is wrong, but the driver is not behaving the way Cayenne expects it to.
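For reference, here is a minimal sketch of the JDBC contract Cayenne is relying on at that point (illustrative only, not Cayenne's actual code; the method name is made up):
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch of the JDBC contract: execute() says whether the
// first result is a ResultSet; if not, getUpdateCount() must return the
// count, or -1 when there is no current result; it must never throw an NPE.
static void runAndInspect(Statement statement, String sql) throws SQLException {
    boolean isResultSet = statement.execute(sql);
    if (isResultSet) {
        try (ResultSet rs = statement.getResultSet()) {
            // ... read rows here
        }
    } else {
        int updateCount = statement.getUpdateCount();
        System.out.println("update count: " + updateCount);
    }
}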
Perhaps Linter is special enough to warrant its own Cayenne DbAdapter? Feel free to join the Cayenne dev mailing list to discuss what it takes to write one.
Related
I'm getting this error after Hibernate outputs the data. Any idea why this would be happening?
Sep 07, 2016 12:07:00 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl stop
INFO: HHH000030: Cleaning up connection pool [jdbc:postgresql://localhost:5432/bendb]
Sep 07, 2016 12:07:00 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 3 * Server responded with a response on thread http-nio-8080-exec-5
3 < 200
3 < Access-Control-Allow-Methods: GET, POST, DELETE, PUT
3 < Access-Control-Allow-Origin: *
3 < Allow: OPTIONS
3 < Content-Type: application/json
Sep 07, 2016 12:07:00 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 4 * Server responded with a response on thread http-nio-8080-exec-5
4 < 500
Sorry, I found a bug in my code. The change I had made was trying to map all the junction tables (collections) in that get-user REST call, so Jersey just broke while attempting to do that. Backing out that change and just passing the normal data sets solved the issue.
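For what it's worth, a hypothetical sketch of keeping mapped collections out of the REST payload instead of removing the mapping (entity and field names are made up; with Jackson you would use @JsonIgnore rather than @XmlTransient):
import java.util.Set;
import javax.xml.bind.annotation.XmlTransient;

// Hypothetical entity; "roles" stands in for the junction-table
// collections that broke serialization.
public class User {
    private String email;
    private Set<String> roles;

    public String getEmail() { return email; }

    @XmlTransient // excludes the mapped collection from the JSON/XML response
    public Set<String> getRoles() { return roles; }
}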
I'm getting an exception when I try to connect to OrientDB using Java. Below is the exception I'm getting.
Jun 07, 2016 12:43:40 PM com.orientechnologies.common.log.OLogManager log
INFO: OrientDB auto-config DISKCACHE=891MB (heap=891MB direct=891MB os=4,006MB), assuming maximum direct memory size equals to maximum JVM heap size
Jun 07, 2016 12:43:40 PM com.orientechnologies.common.log.OLogManager log
WARNING: MaxDirectMemorySize JVM option is not set or has invalid value, that may cause out of memory errors. Please set the -XX:MaxDirectMemorySize=4006m option when you start the JVM.
Jun 07, 2016 12:43:40 PM com.orientechnologies.common.log.OLogManager log
WARNING: MaxDirectMemorySize JVM option is not set or has invalid value, that may cause out of memory errors. Please set the -XX:MaxDirectMemorySize=4006m option when you start the JVM.
Exception in thread "main" com.orientechnologies.orient.core.exception.OFileLockedByAnotherProcessException: File 'F:\Program Files\orientdb-community-2.2.0\databases\mydbo\database.ocf' is locked by another process, maybe the database is in use by another process. Use the remote mode with a OrientDB server to allow multiple access to the same database
at com.orientechnologies.orient.core.storage.fs.OFileClassic.lock(OFileClassic.java:756)
at com.orientechnologies.orient.core.storage.fs.OFileClassic.openChannel(OFileClassic.java:813)
at com.orientechnologies.orient.core.storage.fs.OFileClassic.open(OFileClassic.java:603)
at com.orientechnologies.orient.core.storage.impl.local.OSingleFileSegment.open(OSingleFileSegment.java:51)
at com.orientechnologies.orient.core.storage.impl.local.OStorageConfigurationSegment.load(OStorageConfigurationSegment.java:80)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.open(OAbstractPaginatedStorage.java:186)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:231)
at orient.insert.Insert.main(Insert.java:12)
This is the code that I tried:
ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:F:/Program Files/orientdb-community-2.2.0/databases/mydbo").open("admin", "admin");
ODocument doc = new ODocument("Person");
doc.field("name", "Luke");
doc.field("surname", "Skywalker");
doc.field("city", new ODocument("City").field("name", "Rome").field("country", "Italy"));
doc.save();
db.close();
I can't figure out what is causing this error.
You have a server running, and you are trying to open the database from another process in plocal mode.
Could you please verify that you have no active OrientDB instances (console or external processes) while accessing it in plocal, and that you open only one plocal connection at a time?
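For example, a minimal sketch of the remote-mode variant (assuming a server is running on localhost and the same "admin"/"admin" credentials as above):
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;

// Connect through the server ("remote:") instead of "plocal:" so several
// processes can share the database.
ODatabaseDocumentTx db = new ODatabaseDocumentTx("remote:localhost/mydbo").open("admin", "admin");
try {
    ODocument doc = new ODocument("Person");
    doc.field("name", "Luke");
    doc.save();
} finally {
    db.close();
}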
I am migrating an application from Hibernate 4.3 to Hibernate 5.0.1-Final
I use ImplicitNamingStrategyComponentPathImpl as my hibernate.implicit_naming_strategy with Postgres 9.4.4, and my company uses hibernate.hbm2ddl.auto = update for deployment (I know it is bad practice, but I can't help it).
While the session factory initializes, it throws the error below. Apparently the generated identifier is too long for Postgres. How do we go about this situation? I have tried assigning @Table(name=..) annotations to work around it, but it is getting worse, as every relationship from that point on gets screwed up as well.
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Unable to execute schema management to JDBC target [create table public.ReferenceDocumentVersion_ReferenceDocumentSourceFilesStoreDescriptor (ReferenceDocumentVersion_unid uuid not null, sourceFilesStore_filesDescriptorMap_unid uuid not null, filesDescriptorMap_KEY text not null, primary key (ReferenceDocumentVersion_unid, filesDescriptorMap_KEY))]
at org.hibernate.tool.schema.internal.TargetDatabaseImpl.accept(TargetDatabaseImpl.java:59)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.applySqlString(SchemaMigratorImpl.java:371)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.applySqlStrings(SchemaMigratorImpl.java:360)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.createTable(SchemaMigratorImpl.java:181)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigrationToTargets(SchemaMigratorImpl.java:134)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:59)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:129)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:97)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:481)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:444)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:802)
... 29 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "referencedocumentversion_referencedocumentsourcefilesstoredescr" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:618)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:454)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:382)
at org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at net.bull.javamelody.JdbcWrapper.doExecute(JdbcWrapper.java:404)
at net.bull.javamelody.JdbcWrapper$StatementInvocationHandler.invoke(JdbcWrapper.java:129)
at net.bull.javamelody.JdbcWrapper$DelegatingInvocationHandler.invoke(JdbcWrapper.java:286)
at com.sun.proxy.$Proxy93.executeUpdate(Unknown Source)
at org.hibernate.tool.schema.internal.TargetDatabaseImpl.accept(TargetDatabaseImpl.java:56)
... 39 more
I have addressed the situation with a custom ImplicitNamingStrategy that truncates Hibernate-generated identifiers to 63 chars (the maximum identifier length for Postgres).
Previous versions of Hibernate (4.x) encountered the same error, but they just ignored it and proceeded with initializing the SessionFactory. Hibernate 5.x, however, has a new bootstrap API which throws a SchemaManagementException in such cases and aborts. Hibernate logs from my test scenarios are pasted below for reference.
Hibernate 4.X
INFO: HHH000396: Updating schema
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
ERROR: HHH000388: Unsuccessful: create table ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres (unid uuid not null, path text, primary key (unid))
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
ERROR: ERROR: relation "referencedocumentversionentitywithareallyreallyreallylongnamebe" already exists
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000232: Schema update complete
Hibernate 5.0.2.Final
Oct 04, 2015 1:39:16 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000228: Running hbm2ddl schema update
Oct 04, 2015 1:39:16 PM org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl processGetTableResults
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:39:16 PM org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl processGetTableResults
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.813 sec <<< FAILURE!
testApp(org.foobar.AppTest) Time elapsed: 0.788 sec <<< ERROR!
javax.persistence.PersistenceException: [PersistenceUnit: org.foobar.persistence.default] Unable to build Hibernate SessionFactory
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.persistenceException(EntityManagerFactoryBuilderImpl.java:877)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:805)
at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:58)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:55)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:39)
at org.foobar.AppTest.testApp(AppTest.java:18)
Solution
Custom ImplicitNamingStrategy
package org.foobar.persistence;

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.ImplicitNamingStrategyComponentPathImpl;
import org.hibernate.boot.spi.MetadataBuildingContext;

public class PGConstrainedImplicitNamingStrategy extends ImplicitNamingStrategyComponentPathImpl {

    private static final int POSTGRES_IDENTIFIER_MAXLENGTH = 63;

    public static final PGConstrainedImplicitNamingStrategy INSTANCE = new PGConstrainedImplicitNamingStrategy();

    public PGConstrainedImplicitNamingStrategy() {
    }

    @Override
    protected Identifier toIdentifier(String stringForm, MetadataBuildingContext buildingContext) {
        // Truncate every generated identifier to the Postgres limit.
        return buildingContext.getMetadataCollector()
                .getDatabase()
                .getJdbcEnvironment()
                .getIdentifierHelper()
                .toIdentifier(stringForm.substring(0, Math.min(POSTGRES_IDENTIFIER_MAXLENGTH, stringForm.length())));
    }
}
persistence.xml
<properties>
<property name="hibernate.implicit_naming_strategy" value="org.foobar.persistence.PGConstrainedImplicitNamingStrategy"/>
</properties>
This is not a scalable solution at all, but it helps keep the show running. The permanent solution would be to explicitly supply identifiers so that Hibernate does not generate really long ones; see the answer written by maaartinus.
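For instance, a hypothetical sketch of the explicit-naming approach (all table and column names below are made up):
import java.util.Map;
import java.util.UUID;
import javax.persistence.CollectionTable;
import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.JoinColumn;

// Naming the collection table explicitly keeps Hibernate from deriving
// an overlong identifier from the component path.
@Entity
public class ReferenceDocumentVersion {

    @Id
    private UUID unid;

    @ElementCollection
    @CollectionTable(
            name = "RefDocVersion_SourceFiles",
            joinColumns = @JoinColumn(name = "refDocVersion_unid"))
    private Map<String, String> filesDescriptorMap;
}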
Try following the migration guide in the Hibernate documentation at this link:
https://github.com/hibernate/hibernate-orm/blob/5.0/migration-guide.adoc
The OP's solution may lead to collisions (that's why he calls it not scalable, right?). Explicitly supplying all identifiers sounds like a terrible idea to me. I'd suggest one of the following:
provide a Map<String, String> mapping all overlong names to something shorter
shorten all overlong names to POSTGRES_IDENTIFIER_MAXLENGTH - N and append N characters generated from the hash of the cut-away part, so that the probability of collisions is minimized (see the sketch after this list)
Use some identifier abbreviating function like {"Reference" -> "Ref", "Document" -> "Doc", ...} and apply it to your identifiers before they get processed, so that you get RefDocVersion_RefDocSourceFileDescr... instead of referencedocumentversion_referencedocumentsourcefilesstoredescr....
Consider using abbreviated names in your code itself. This is often advised against, as it easily leads to incomprehensible nonsense, but IMHO it increases readability when used right (use only a couple of abbreviations and use them systematically; provide a list of them).
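A sketch of the hash-suffix variant from the second suggestion (the suffix length and hash are arbitrary choices, not a vetted scheme):
public final class NameShortener {

    private static final int POSTGRES_IDENTIFIER_MAXLENGTH = 63;
    private static final int SUFFIX_LENGTH = 8; // the N above

    public static String shorten(String name) {
        if (name.length() <= POSTGRES_IDENTIFIER_MAXLENGTH) {
            return name;
        }
        int keep = POSTGRES_IDENTIFIER_MAXLENGTH - SUFFIX_LENGTH;
        String cutAway = name.substring(keep);
        // %08x renders the int hash as exactly eight hex characters, so
        // distinct overlong names rarely collide after truncation.
        return name.substring(0, keep) + String.format("%08x", cutAway.hashCode());
    }
}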
After I update some entities in GWT, I would like to save them. However, when I try to persist them, nothing changes when I look in the App Engine admin interface. The boolean has not changed.
Code
EntityManager em = EMF.get().createEntityManager();
for (OnixUser s: admin) {
log.info(s.email + ", " + s.isAdmin);
em.merge(s);
}
em.close();
Update with transaction
EntityManager em = EMF.get().createEntityManager();
em.getTransaction().begin();
for (OnixUser s: admin) {
log.info(s.email + ", " + s.isAdmin);
OnixUser merged = em.merge(s);
em.persist(merged);
// em.persist(s);
}
em.getTransaction().commit();
em.close();
Still did not save. No exceptions thrown.
Log
Oct 16, 2013 3:19:10 PM com.example.sdm.server.SDMServiceImpl setAdmin
INFO: chloe@example.com, true
App Engine admin interface for OnixUser entity
Log at FINEST level
FINE: Created ManagedConnection using DatastoreService = com.google.appengine.api.datastore.DatastoreServiceImpl@2fd9270d
Oct 16, 2013 4:03:14 PM org.datanucleus.store.connection.ConnectionManagerImpl allocateConnection
FINE: Connection added to the pool : com.google.appengine.datanucleus.DatastoreConnectionFactoryImpl$DatastoreManagedConnection@31c1f89d for key=org.datanucleus.ObjectManagerImpl@6977c57b in factory=ConnectionFactory:tx[com.google.appengine.datanucleus.DatastoreConnectionFactoryImpl@2b1f5f6b]
Oct 16, 2013 4:03:14 PM com.example.sdm.server.SDMServiceImpl setAdmin
INFO: chloe@example.com, true
Oct 16, 2013 4:03:14 PM org.datanucleus.state.LifeCycleState changeState
FINE: Object "com.example.sdm.shared.OnixUser@48ef6e99" (id="com.example.sdm.shared.OnixUser:6456332278300672") has a lifecycle change : "P_CLEAN"->"P_NONTRANS"
Oct 16, 2013 4:03:14 PM org.datanucleus.store.connection.ConnectionManagerImpl$1 managedConnectionPostClose
FINE: Connection removed from the pool : com.google.appengine.datanucleus.DatastoreConnectionFactoryImpl$DatastoreManagedConnection@31c1f89d for key=org.datanucleus.ObjectManagerImpl@6977c57b in factory=ConnectionFactory:tx[com.google.appengine.datanucleus.DatastoreConnectionFactoryImpl@2b1f5f6b]
Oct 16, 2013 4:03:14 PM org.datanucleus.state.LifeCycleState changeState
FINE: Object "com.example.sdm.shared.OnixUser@48ef6e99" (id="com.example.sdm.shared.OnixUser:6456332278300672") has a lifecycle change : "P_NONTRANS"->"DETACHED_CLEAN"
Oct 16, 2013 4:03:14 PM com.google.apphosting.utils.jetty.AppEngineAuthentication$AppEngineUserRealm disassociate
FINE: Ignoring disassociate call for: chloe@example.com
If, in spite of using transactions, your data is not persisted, then the best way forward is to enable logging to see what's happening underneath. As you are using DataNucleus as your persistence provider, you can refer to this link to configure SQL logging. The information relevant to you is given near the end of the page.
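For example, on App Engine (which uses java.util.logging, as your FINEST log above suggests) a hypothetical logging.properties fragment along these lines raises the DataNucleus loggers so the datastore calls and lifecycle changes become visible; the exact levels and logger names you need may differ:
# Hypothetical logging.properties fragment (java.util.logging).
.level=INFO
org.datanucleus.level=FINE
com.google.appengine.datanucleus.level=FINE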
I am working with Mahout and ran into an issue when I changed my CSV; previously it was giving me proper recommendations.
Example code:
// Imports omitted; the classes below come from org.apache.mahout.cf.taste.*
DataModel model = new FileDataModel(new File("E:\\WriteTest.csv"));
UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);
List<RecommendedItem> recommendations = recommender.recommend(1, 1);
for (RecommendedItem recommendation : recommendations) {
    System.out.println(recommendation);
}
I have just updated the values in my CSV, and it has stopped giving me suggestions.
CSV that is not giving me any result:
1,13,9.9
1,26,9.0
1,40,4.0
2,83,9.9
2,167,9.0
2,250,4.0
3,91,9.9
3,167,9.0
3,274,4.0
4,91,9.9
4,167,2.0
CSV which is giving me result:
1,101,5.0
1,102,3.0
1,103,3.0
2,101,5.0
2,102,2.5
2,103,3.0
2,104,2.1
3,101,5.0
3,102,2.5
3,105,4.0
3,107,5.0
4,102,2.0
4,104,4.0
4,105,2.5
4,106,3.0
4,107,2.6
5,101,5.0
5,102,3.4
5,104,2.5
5,105,2.5
5,106,1.0
Output on console respectively:
Result from 1st dataset:
Aug 27, 2011 2:45:06 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Creating FileDataModel for file WriteTest.csv
Aug 27, 2011 2:45:06 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Reading file info...
Aug 27, 2011 2:45:06 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Read lines: 11
Aug 27, 2011 2:45:06 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Processed 4 users
I was expecting item 167 but didn't get any recommendation.
Output of 2nd dataset:
Aug 27, 2011 2:52:42 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Creating FileDataModel for file WriteTest.csv
Aug 27, 2011 2:52:42 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Reading file info...
Aug 27, 2011 2:52:42 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Read lines: 21
Aug 27, 2011 2:52:42 AM org.slf4j.impl.JCLLoggerAdapter info
INFO: Processed 5 users
RecommendedItem[item:105, value:3.25]
The recommender is working correctly. The problem is that your data is too sparse. In the first dataset, user 1 (the user you are requesting recommendations for, with items 13, 26, and 40) has no items in common with users 2, 3, or 4, so no similarity can be computed that links user 1 to anyone, and nothing makes item 167 recommendable. Try a more realistic data set and I think the behavior will look less surprising.