RetryHandler Exceptions while using MapOnlyMapper in google appengine - java

I have a very large dataset and want to update certain entity kinds. I am exploring the MapReduce library in Google App Engine. I have followed the examples listed here:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/tree/master/java/example/src/com/google/appengine/demos/mapreduce/entitycount
What I am basically doing is this, in my MapSpecification:
MapSpecification<Entity, Entity, Void> spec = new MapSpecification.Builder<>(
        new DatastoreKeyInput(query, 2),
        new UrlFlattenMapper(),
        new DatastoreOutput())
        .setJobName("Flatten URLs entities")
        .build();
My mapper basically performs its operations on the Entity and then emits it, so that the DatastoreOutput writer writes it back into the datastore.
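For reference, the mapper is a MapOnlyMapper along these lines (a simplified sketch: the real flattening logic is omitted and the "url" property name is made up):

import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.mapreduce.MapOnlyMapper;

// Simplified sketch of the mapper: reads an Entity, modifies it, and emits it
// so the DatastoreOutput writes it back to the datastore.
public class UrlFlattenMapper extends MapOnlyMapper<Entity, Entity> {

    @Override
    public void map(Entity entity) {
        // ... perform the flattening operations on the entity's properties ...
        entity.setProperty("url", flatten((String) entity.getProperty("url")));
        emit(entity);   // handed to the DatastoreOutput writer
    }

    private String flatten(String url) {
        // illustrative placeholder for the real flattening logic
        return url == null ? null : url.trim().toLowerCase();
    }
}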
My problem is this: the entities are getting updated fine, and endSlice is also being called in my mapper task, but the job is not completing. I keep getting these errors:
[INFO] INFO: RetryHelper(28.07 ms, 1 attempts, java.util.concurrent.Executors$RunnableAdapter@7f0264e0): Attempt #1 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 1028 ms
[INFO] Apr 26, 2016 4:39:37 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(1.085 s, 2 attempts, java.util.concurrent.Executors$RunnableAdapter@7f0264e0): Attempt #2 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 2435 ms
[INFO] Apr 26, 2016 4:39:37 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(3.562 s, 3 attempts, java.util.concurrent.Executors$RunnableAdapter@6d7fcd47): Attempt #3 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=0, shardCount=2, lastWorkItem=Topics("jz63"), workerCallCount=289, workerTimeMillis=41536], inputExhausted=true, isFirstSlice=false]], sleeping for 3421 ms
[INFO] Apr 26, 2016 4:39:39 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(3.567 s, 3 attempts, java.util.concurrent.Executors$RunnableAdapter@7f0264e0): Attempt #3 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 3340 ms
[INFO] Apr 26, 2016 4:39:41 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(7.015 s, 4 attempts, java.util.concurrent.Executors$RunnableAdapter@6d7fcd47): Attempt #4 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=0, shardCount=2, lastWorkItem=Topics("jz63"), workerCallCount=289, workerTimeMillis=41536], inputExhausted=true, isFirstSlice=false]], sleeping for 6941 ms
[INFO] Apr 26, 2016 4:39:42 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
I haven't been able to get around this issue; any help or pointers on what I could be doing wrong would be greatly appreciated.

The culprit in my case was a small Datastore field I had used in the map job. I put a transient in front of the field, and the issue was solved.
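The field type and name below are just an illustration; the point is that any non-serializable field held by the mapper has to be excluded from serialization (the shard task is serialized between slices, which is what the "Can't serialize object: MapOnlyShardTask" retries were complaining about) and re-created when a slice starts:

public class UrlFlattenMapper extends MapOnlyMapper<Entity, Entity> {

    // transient: skipped when the shard task is checkpointed between slices
    private transient DatastoreService datastore;

    @Override
    public void beginSlice() {
        // re-acquire the handle at the start of every slice instead of serializing it
        datastore = DatastoreServiceFactory.getDatastoreService();
    }

    // map(...) as before
}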

Related

How to map a collection of entities, inside an entity, using Hibernate?

I'm developing a distributed software system and I'm trying to use the Hibernate framework for the first time. One of my classes (which is a mapped Entity) has an ArrayList<> containing objects of another class (which is also an Entity).
Ex:
I have an Event class, which is a @MappedSuperclass.
I have a CorporateEvent class, which is an entity that inherits from Event.
I have a Task class, which is also an entity.
My CorporateEvent class has an attribute called "tasks", which is an ArrayList.
How do I map this correctly to the database so that there's a table called Event and a table called Task, with a key connecting them to each other?
I'm using the newest Hibernate version and the database is on a PostgreSQL server. The dialect in Hibernate is set to PostgreSQL.
We've tried using @ElementCollection and @CollectionTable, but we're getting some weird NullPointerExceptions. It might have something to do with the inheritance aspect of the code.
Here are the 3 classes in question:
Event Class:
@MappedSuperclass
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public abstract class Event {

    @Column(name="title")
    private String title;

    @Column(name="location")
    private String location;

    @Column(name="date")
    private String date;
CorporateEvent Class:
@Entity
@Table(name="corporate_event")
public class CorporateEvent extends Event {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "id_generator")
    @SequenceGenerator(name = "id_generator", sequenceName = "corporate_event_id_seq", allocationSize = 1)
    @Column(name = "id", updatable = false, nullable = false)
    private int id;

    @ElementCollection
    private ArrayList<Task> tasks;

    @Column(name="expenses")
    private String expenses;
Task Class:
@Entity
@Table(name="task")
public class Task {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "id_generator")
    @SequenceGenerator(name="id_generator", sequenceName = "task_task_id_seq", allocationSize=1)
    @Column(name="task_id", updatable = false, nullable = false)
    private int id;

    @Column(name="name")
    private String taskName;

    @Column(name="description")
    private String description;
I expect the output to be that a CorporateEvent object is saved to the database and the Task objects in its ArrayList are each saved in the Task table with a CorporateEvent ID that connects them to the right event.
What I instead get is a NullPointerException on the line where I try to save the CorporateEvent object using Hibernate. This is the error message:
------------------------------------------------------------------------
Building Eventer 1.0-SNAPSHOT
------------------------------------------------------------------------
--- maven-resources-plugin:2.5:resources (default-resources) @ Eventer ---
[debug] execute contextualize
Using 'UTF-8' encoding to copy filtered resources.
Copying 11 resources
--- maven-compiler-plugin:3.6.1:compile (default-compile) @ Eventer ---
Nothing to compile - all classes are up to date
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Eventer ---
Nov 08, 2019 5:31:30 PM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {5.4.8.Final}
Nov 08, 2019 5:31:30 PM org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
Nov 08, 2019 5:31:31 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
WARN: HHH10001002: Using Hibernate built-in connection pool (not for production use!)
Nov 08, 2019 5:31:31 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001005: using driver [org.postgresql.Driver] at URL [jdbc:postgresql://tek-mmmi-db0a.tek.c.sdu.dk:5432/si3_2019_group_5_db]
Nov 08, 2019 5:31:31 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001001: Connection properties: {user=si3_2019_group_5, password=****}
Nov 08, 2019 5:31:31 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001003: Autocommit mode: false
Nov 08, 2019 5:31:31 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl$PooledConnections <init>
INFO: HHH000115: Hibernate connection pool size: 1 (min=1)
Nov 08, 2019 5:31:31 PM org.hibernate.dialect.Dialect <init>
INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
Nov 08, 2019 5:31:35 PM org.hibernate.cfg.AnnotationBinder bindClass
WARN: HHH000503: A class should not be annotated with both @Inheritance and @MappedSuperclass. @Inheritance will be ignored for: com.mycompany.domain.event.Event.
Nov 08, 2019 5:31:35 PM org.hibernate.boot.internal.InFlightMetadataCollectorImpl addIdentifierGenerator
WARN: HHH000069: Duplicate generator name id_generator
Exception in thread "main" java.lang.NullPointerException
at com.mycompany.repositories.EventRepository.saveCorporateEvent(EventRepository.java:51)
at com.mycompany.eventer.Eventer.main(Eventer.java:42)
------------------------------------------------------------------------
BUILD FAILURE
------------------------------------------------------------------------
Total time: 7.007s
Finished at: Fri Nov 08 17:31:35 CET 2019
Final Memory: 9M/100M
------------------------------------------------------------------------
Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:exec (default-cli) on project Eventer: Command execution failed. Process exited with an error: 1 (Exit value: 1) -> [Help 1]
To see the full stack trace of the errors, re-run Maven with the -e switch.
Re-run Maven using the -X switch to enable full debug logging.
The line it complains about is "session.close();" in this method:
public int saveCorporateEvent(CorporateEvent corporateEvent) {
    try {
        session = ConnectRepository.factory.getCurrentSession();
        session.beginTransaction();
        session.save(corporateEvent);
        session.getTransaction().commit();
        return corporateEvent.getId();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        session.close();
    }
    return 0;
}
And if I remove the session.close(), I get the following annotation error, which I don't understand, since an ArrayList should be supported:
(The beginning is the same, so I'm only pasting in the part that differs.)
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.mycompany.repositories.EventRepository.saveCorporateEvent(EventRepository.java:42)
at com.mycompany.eventer.Eventer.main(Eventer.java:42)
Caused by: org.hibernate.AnnotationException: java.util.ArrayList collection type not supported for property: com.mycompany.domain.event.CorporateEvent.tasks
at org.hibernate.cfg.annotations.CollectionBinder.getCollectionBinder(CollectionBinder.java:317)
at org.hibernate.cfg.AnnotationBinder.processElementAnnotations(AnnotationBinder.java:1939)
at org.hibernate.cfg.AnnotationBinder.processIdPropertiesIfNotAlready(AnnotationBinder.java:975)
at org.hibernate.cfg.AnnotationBinder.bindClass(AnnotationBinder.java:802)
at org.hibernate.boot.model.source.internal.annotations.AnnotationMetadataSourceProcessorImpl.processEntityHierarchies(AnnotationMetadataSourceProcessorImpl.java:254)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess$1.processEntityHierarchies(MetadataBuildingProcess.java:230)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:273)
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.build(MetadataBuildingProcess.java:83)
at org.hibernate.boot.internal.MetadataBuilderImpl.build(MetadataBuilderImpl.java:473)
at org.hibernate.boot.internal.MetadataBuilderImpl.build(MetadataBuilderImpl.java:84)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:689)
at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:724)
at com.mycompany.repositories.ConnectRepository.<clinit>(ConnectRepository.java:27)
... 2 more
------------------------------------------------------------------------
BUILD FAILURE
------------------------------------------------------------------------
Total time: 11.877s
Finished at: Fri Nov 08 17:44:27 CET 2019
Final Memory: 19M/208M
------------------------------------------------------------------------
Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:exec (default-cli) on project Eventer: Command execution failed. Process exited with an error: 1 (Exit value: 1) -> [Help 1]
To see the full stack trace of the errors, re-run Maven with the -e switch.
Re-run Maven using the -X switch to enable full debug logging.
UPDATE:
Now it's somewhat working, but a new problem has appeared. As you can see below, Hibernate only inserts into the corporate_event table, not the task table.
------------------------------------------------------------------------
Building Eventer 1.0-SNAPSHOT
------------------------------------------------------------------------
--- maven-resources-plugin:2.5:resources (default-resources) @ Eventer ---
[debug] execute contextualize
Using 'UTF-8' encoding to copy filtered resources.
Copying 11 resources
--- maven-compiler-plugin:3.6.1:compile (default-compile) @ Eventer ---
Nothing to compile - all classes are up to date
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Eventer ---
Nov 10, 2019 12:15:21 PM org.hibernate.Version logVersion
INFO: HHH000412: Hibernate Core {5.4.8.Final}
Nov 10, 2019 12:15:21 PM org.hibernate.annotations.common.reflection.java.JavaReflectionManager <clinit>
INFO: HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
Nov 10, 2019 12:15:22 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl configure
WARN: HHH10001002: Using Hibernate built-in connection pool (not for production use!)
Nov 10, 2019 12:15:22 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001005: using driver [org.postgresql.Driver] at URL [jdbc:postgresql://tek-mmmi-db0a.tek.c.sdu.dk:5432/si3_2019_group_5_db]
Nov 10, 2019 12:15:22 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001001: Connection properties: {user=si3_2019_group_5, password=****}
Nov 10, 2019 12:15:22 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl buildCreator
INFO: HHH10001003: Autocommit mode: false
Nov 10, 2019 12:15:22 PM org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl$PooledConnections <init>
INFO: HHH000115: Hibernate connection pool size: 1 (min=1)
Nov 10, 2019 12:15:22 PM org.hibernate.dialect.Dialect <init>
INFO: HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
Nov 10, 2019 12:15:26 PM org.hibernate.cfg.AnnotationBinder bindClass
WARN: HHH000503: A class should not be annotated with both @Inheritance and @MappedSuperclass. @Inheritance will be ignored for: com.mycompany.domain.event.Event.
Nov 10, 2019 12:15:26 PM org.hibernate.boot.internal.InFlightMetadataCollectorImpl addIdentifierGenerator
WARN: HHH000069: Duplicate generator name id_generator
Nov 10, 2019 12:15:26 PM org.hibernate.boot.internal.InFlightMetadataCollectorImpl addIdentifierGenerator
WARN: HHH000069: Duplicate generator name id_generator
Hibernate: select nextval ('corporate_event_id_seq')
Hibernate: insert into corporate_event (date, description, location, max_participants, title, expenses, corporate_id) values (?, ?, ?, ?, ?, ?, ?)
------------------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------------------
Total time: 9.780s
Finished at: Sun Nov 10 12:15:28 CET 2019
Final Memory: 9M/100M
------------------------------------------------------------------------
These are the changes I made using your help:
CorporateEvent class:
@OneToMany(cascade=CascadeType.ALL, mappedBy="corporateEvent")
private List<Task> tasks;
Task class:
@ManyToOne
@JoinColumn(name="corporate_id")
private CorporateEvent corporateEvent;
The code runs successfully, but it will not map the data to the Task table in my database. This is the test (main) code:
public static void main(String[] args) {
    ArrayList<Task> arrTasks = new ArrayList<>();
    arrTasks.add(new Task("task1", "taskDes1"));
    arrTasks.add(new Task("task2", "taskDes2"));
    arrTasks.add(new Task("task3", "taskDes3"));

    User user = new User("alex", "tholle", "sdu", "software engineering", "at@gmail.com", 70111213, "alexuser", "alexpass", new Date(), true, "student", "e2R213");
    Meetup meetup = new Meetup(user.getUserName(), "meetup at the pub", "old irish", new Date().toString(), "det bliver fed", 10);
    Task task = new Task("Cleaning", "Do some cleaing please");
    CorporateEvent corporateEvent = new CorporateEvent(null, arrTasks, "Tinderbox", "forest", new Date().toString(), "edm music festival", 5000);

    for (Task t : arrTasks) {
        t.setCorporateEvent(corporateEvent);
    }

    EventRepository eventRepo = new EventRepository();
    eventRepo.saveCorporateEvent(corporateEvent);
The database has the following tables regarding this issue:
create table task
(
task_id serial not null
constraint task_pk
primary key,
name varchar,
description varchar(1000),
corporate_id integer
constraint corporate_id
references corporate_event
on update cascade on delete cascade
);
alter table task
owner to si3_2019_group_5;
create unique index task_task_id_uindex
on task (task_id);
create table corporate_event
(
title varchar,
date varchar(1000),
location varchar,
description varchar,
max_participants integer,
expenses varchar(2000),
corporate_id serial not null
constraint corporate_event_pk
primary key,
tasks varchar(4000)
);
alter table corporate_event
owner to si3_2019_group_5;
create unique index corporate_event_id_uindex
on corporate_event (corporate_id);
Your error is likely due to your usage of the @ElementCollection annotation without a class that is annotated with @Embeddable. I would use a @OneToMany mapping instead.
Your CorporateEvent class:
@OneToMany(mappedBy="corporateEvent")
private List<Task> tasks;
Your Task class:
@ManyToOne
@JoinColumn(name="corporate_event_id")
private CorporateEvent corporateEvent;
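For the follow-up problem (the Task rows not being inserted), the usual requirements are that the owning side (Task.corporateEvent) is set on every task before saving and that the save cascades from the parent. A sketch of the two classes together, with an invented helper method, might look like this:

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
@Table(name = "corporate_event")
public class CorporateEvent extends Event {
    // ... id and other fields as shown above ...

    // cascade so that saving the event also saves its tasks;
    // mappedBy must be the Java field name on the Task side, not a column name
    @OneToMany(mappedBy = "corporateEvent", cascade = CascadeType.ALL)
    private List<Task> tasks = new ArrayList<>();

    // convenience method (name invented) that keeps both sides of the association in sync
    public void addTask(Task task) {
        tasks.add(task);
        task.setCorporateEvent(this);
    }
}

@Entity
@Table(name = "task")
public class Task {
    // ... id, name, description as shown above ...

    @ManyToOne
    @JoinColumn(name = "corporate_id")   // FK column from the task table DDL above
    private CorporateEvent corporateEvent;

    public void setCorporateEvent(CorporateEvent corporateEvent) {
        this.corporateEvent = corporateEvent;
    }
}

With a mapping like that, saving the CorporateEvent in one transaction should also insert one row per Task, each with corporate_id pointing back at the event.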

ParallelStream queue task in CommonPool rather than the custom pool

I wanted to use a custom ThreadPool for parallelStream. The reason is that I wanted to use the MDC context in the tasks. This is the code I wrote to use the custom ThreadPool:
final ExecutorService mdcPool = MDCExecutors.newCachedThreadPool();
mdcPool.submit(() -> ruleset.getOperationList().parallelStream().forEach(operation -> {
    log.info("Sample log line");
}));
When the MDC context was not getting copied to the tasks, I looked at the logs. These are the logs I found. The first log line is executed in "(pool-16-thread-1)", but the other tasks are executed on "ForkJoinPool.commonPool-worker" threads. The first log line also has the MDC context ID. But since I am using the custom ThreadPool to submit the task, all tasks should be executing in the custom ThreadPool.
16 Oct 2018 12:46:58,298 [INFO] 8fcfa6ee-d141-11e8-b84a-7da6cd73aa0b (pool-16-thread-1) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-11) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-4) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-13) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-9) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-2) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-15) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
Is this supposed to happen or am I missing something?
There is no support for running a parallel stream in a custom thread pool. It happens to be executed in a different Fork/Join pool when the terminal operation is initiated from a worker thread of that Fork/Join pool, but that does not seem to be an intended feature, as the Stream implementation will still use artifacts of the common pool internally for some operations.
In your case, it seems that the ExecutorService returned by MDCExecutors.newCachedThreadPool() is not a Fork/Join pool, so it does not exhibit this undocumented behavior at all.
There is a feature request, JDK-8032512, regarding more thread control. It’s open and, as far as I can see, without much activity.
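For completeness, the undocumented behavior mentioned above is usually exploited by initiating the terminal operation from inside a custom ForkJoinPool (not from a plain ExecutorService), so that most of the stream's work runs on that pool. A sketch, reusing the names from the question:

// Sketch only: relies on undocumented behavior, and some stream internals may
// still touch the common pool. MDC propagation would additionally require the
// context to be copied into the ForkJoinPool's worker threads.
ForkJoinPool customPool = new ForkJoinPool(8);   // java.util.concurrent.ForkJoinPool
try {
    customPool.submit(() ->
            ruleset.getOperationList().parallelStream()
                   .forEach(operation -> log.info("Sample log line"))
    ).get();   // wait for completion; get() throws InterruptedException/ExecutionException
} finally {
    customPool.shutdown();
}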

Returning NOBODY because of SkipAdminCheck

[INFO] Oct 06, 2016 11:24:54 AM com.google.apphosting.utils.jetty.AppEngineAuthentication$AppEngineAuthenticator authenticate
[INFO] INFO: Returning NOBODY because of SkipAdminCheck.
This message seems to be produced by the TaskQueue.
Queue qu = QueueFactory.getQueue(qname);
qu.add(TaskOptions.Builder.withUrl("/task/" + qname)
        .payload("{\"token\":\"asdf1234\"}", "UTF-8")
        .method(TaskOptions.Method.POST)
        .header("Host", ModulesServiceFactory.getModulesService().getVersionHostname(null, null)));
Any suggestions on how to fix it? Of course I googled; Google found just three pages about it, and the first two are about adding a Host header. As you can see above, I have already added the following, but the code still produces these messages in the log:
.header("Host", ModulesServiceFactory.getModulesService().getVersionHostname(null,null))

Migrating to Hibernate 5

I am migrating an application from Hibernate 4.3 to Hibernate 5.0.1-Final.
I use ImplicitNamingStrategyComponentPathImpl as my hibernate.implicit_naming_strategy with Postgres 9.4.4, and my company uses hibernate.hbm2ddl.auto = update for deployment (I know it is a bad practice but can't help it).
While the session factory initializes, it throws the error below. Apparently the generated alias is too long for Postgres. How do we go about this situation? I have tried assigning a @Table(name=...) annotation to work around it, but it gets worse, as every relationship from that point on gets affected.
Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Unable to execute schema management to JDBC target [create table public.ReferenceDocumentVersion_ReferenceDocumentSourceFilesStoreDescriptor (ReferenceDocumentVersion_unid uuid not null, sourceFilesStore_filesDescriptorMap_unid uuid not null, filesDescriptorMap_KEY text not null, primary key (ReferenceDocumentVersion_unid, filesDescriptorMap_KEY))]
at org.hibernate.tool.schema.internal.TargetDatabaseImpl.accept(TargetDatabaseImpl.java:59)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.applySqlString(SchemaMigratorImpl.java:371)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.applySqlStrings(SchemaMigratorImpl.java:360)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.createTable(SchemaMigratorImpl.java:181)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigrationToTargets(SchemaMigratorImpl.java:134)
at org.hibernate.tool.schema.internal.SchemaMigratorImpl.doMigration(SchemaMigratorImpl.java:59)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:129)
at org.hibernate.tool.hbm2ddl.SchemaUpdate.execute(SchemaUpdate.java:97)
at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:481)
at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:444)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:802)
... 29 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "referencedocumentversion_referencedocumentsourcefilesstoredescr" already exists
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:618)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:454)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:382)
at org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at net.bull.javamelody.JdbcWrapper.doExecute(JdbcWrapper.java:404)
at net.bull.javamelody.JdbcWrapper$StatementInvocationHandler.invoke(JdbcWrapper.java:129)
at net.bull.javamelody.JdbcWrapper$DelegatingInvocationHandler.invoke(JdbcWrapper.java:286)
at com.sun.proxy.$Proxy93.executeUpdate(Unknown Source)
at org.hibernate.tool.schema.internal.TargetDatabaseImpl.accept(TargetDatabaseImpl.java:56)
... 39 more
I have addressed the situation with a custom ImplicitNamingStrategy that truncates Hibernate-generated identifiers to 63 characters (the maximum identifier length for Postgres).
Previous versions of Hibernate (4.x) ran into the same error, but they just ignored it and proceeded with initializing the SessionFactory. Hibernate 5.x, however, has a new bootstrap API which throws a SchemaManagementException in such cases and aborts. Hibernate logs from my test scenarios are pasted below for reference.
Hibernate 4.X
INFO: HHH000396: Updating schema
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.DatabaseMetadata getTableMetadata
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
ERROR: HHH000388: Unsuccessful: create table ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres (unid uuid not null, path text, primary key (unid))
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
ERROR: ERROR: relation "referencedocumentversionentitywithareallyreallyreallylongnamebe" already exists
Oct 04, 2015 1:38:00 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000232: Schema update complete
Hibernate 5.0.2.Final
Oct 04, 2015 1:39:16 PM org.hibernate.tool.hbm2ddl.SchemaUpdate execute
INFO: HHH000228: Running hbm2ddl schema update
Oct 04, 2015 1:39:16 PM org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl processGetTableResults
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Oct 04, 2015 1:39:16 PM org.hibernate.tool.schema.extract.internal.InformationExtractorJdbcDatabaseMetaDataImpl processGetTableResults
INFO: HHH000262: Table not found: ReferenceDocumentVersionEntityWithAReallyReallyReallyLongNameBeyondPostGres
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.813 sec <<< FAILURE!
testApp(org.foobar.AppTest) Time elapsed: 0.788 sec <<< ERROR!
javax.persistence.PersistenceException: [PersistenceUnit: org.foobar.persistence.default] Unable to build Hibernate SessionFactory
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.persistenceException(EntityManagerFactoryBuilderImpl.java:877)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:805)
at org.hibernate.jpa.HibernatePersistenceProvider.createEntityManagerFactory(HibernatePersistenceProvider.java:58)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:55)
at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:39)
at org.foobar.AppTest.testApp(AppTest.java:18)
Solution
Custom ImplicitNamingStrategy
package org.foobar.persistence;

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.ImplicitNamingStrategyComponentPathImpl;
import org.hibernate.boot.spi.MetadataBuildingContext;

public class PGConstrainedImplicitNamingStrategy extends ImplicitNamingStrategyComponentPathImpl {

    private static final int POSTGRES_IDENTIFIER_MAXLENGTH = 63;

    public static final PGConstrainedImplicitNamingStrategy INSTANCE = new PGConstrainedImplicitNamingStrategy();

    public PGConstrainedImplicitNamingStrategy() {
    }

    @Override
    protected Identifier toIdentifier(String stringForm, MetadataBuildingContext buildingContext) {
        return buildingContext.getMetadataCollector()
                .getDatabase()
                .getJdbcEnvironment()
                .getIdentifierHelper()
                .toIdentifier(stringForm.substring(0, Math.min(POSTGRES_IDENTIFIER_MAXLENGTH, stringForm.length())));
    }
}
persistence.xml
<properties>
    <property name="hibernate.implicit_naming_strategy" value="org.foobar.persistence.PGConstrainedImplicitNamingStrategy"/>
</properties>
This is not a scalable solution at all, but it helps to keep the show running. The permanent solution would be to explicitly supply identifiers so that Hibernate does not generate really long ones; see the answer written by maaartinus.
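As an illustration of what "explicitly supply identifiers" can look like (the table, column, and field names here are invented, not the OP's actual mapping), the map-valued collection behind the failing CREATE TABLE could be given short, explicit names:

// Hypothetical example: explicit, short names so Hibernate never has to
// generate an identifier longer than Postgres's 63-character limit.
@ElementCollection
@CollectionTable(name = "ref_doc_version_source_files",
        joinColumns = @JoinColumn(name = "ref_doc_version_unid"))
@MapKeyColumn(name = "files_key")
private Map<String, UUID> filesDescriptorMap;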
Try to follow the migration guide in the Hibernate documentation at this link:
https://github.com/hibernate/hibernate-orm/blob/5.0/migration-guide.adoc
The OP's solution may lead to collisions (that's why he calls it not scalable, right?). Explicitly supplying all identifiers sounds like a terrible idea to me. I'd suggest one of the following:
provide a Map<String, String> mapping all overlong names to something shorter
shorten all overlong names to POSTGRES_IDENTIFIER_MAXLENGTH - N and append N characters generated from the hash of the cut-away part, so the probability of collisions gets minimized (see the sketch after this list)
use some identifier-abbreviating function like {"Reference" -> "Ref", "Document" -> "Doc", ...} and apply it to your identifiers before they get processed, so that you get RefDocVersion_RefDocSourceFileDescr... instead of referencedocumentversion_referencedocumentsourcefilesstoredescr...
consider using abbreviated names in your code itself. This is often advised against, as it easily leads to incomprehensible nonsense, but IMHO it increases readability when used right (use only a couple of abbreviations and use them systematically; provide a list of them).
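A minimal sketch of the second suggestion, as a drop-in change to the toIdentifier method above (the helper name and hash length are my own choices):

// Truncate an overlong name and replace its tail with a short hash of the
// cut-away part, so two different long names are very unlikely to collide.
private static final int POSTGRES_IDENTIFIER_MAXLENGTH = 63;
private static final int HASH_LENGTH = 8;   // N hex characters appended

static String constrain(String name) {
    if (name.length() <= POSTGRES_IDENTIFIER_MAXLENGTH) {
        return name;
    }
    int keep = POSTGRES_IDENTIFIER_MAXLENGTH - HASH_LENGTH;
    String cutAway = name.substring(keep);
    // %08x renders the int hash code as exactly 8 hex digits
    return name.substring(0, keep) + String.format("%08x", cutAway.hashCode());
}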

Jade DispatcherException problem when using remote Containers

I have two virtual machines in a private cloud, and I want to run Jade on both of them. They can access each other without problems. I started the Main Container on one of them, and on the other a Container which should connect to the main one. However, I get a DispatcherException when this connection is attempted:
--------
INFO: Adding node <Container-1> to the platform
Jun 22, 2011 12:54:34 PM jade.core.messaging.MessagingService clearCachedSlice
INFO: Clearing cache
Jun 22, 2011 12:54:34 PM jade.core.messaging.MessagingService$CommandTargetSink handleNewSlice
WARNING: Error notifying current information to new Messaging-Slice Container-1
jade.core.IMTPException: Dispatcher error [nested jade.imtp.leap.DispatcherException: DispatcherException in remote site. No skeleton for object-id 3447152]
    at jade.imtp.leap.NodeStub.accept(NodeStub.java:91)
    at jade.core.messaging.MessagingProxy.addRoute(MessagingProxy.java:257)
    at jade.core.messaging.MessagingService$CommandTargetSink.handleNewSlice(MessagingService.java:993)
    at jade.core.messaging.MessagingService$CommandTargetSink.consume(MessagingService.java:906)
    at jade.core.CommandProcessor$SinksFilter.accept(CommandProcessor.java:253)
    at jade.core.Filter.filter(Filter.java:89)
    at jade.core.Filter.filter(Filter.java:90)
    at jade.core.Filter.filter(Filter.java:90)
    at jade.core.CommandProcessor.processIncoming(CommandProcessor.java:229)
    at jade.core.PlatformManagerImpl.issueNewSliceCommand(PlatformManagerImpl.java:744)
    at jade.core.PlatformManagerImpl.localAddSlice(PlatformManagerImpl.java:445)
    at jade.core.PlatformManagerImpl.localAddNode(PlatformManagerImpl.java:293)
    at jade.core.PlatformManagerImpl.addNode(PlatformManagerImpl.java:245)
    at jade.imtp.leap.PlatformManagerSkel.executeCommand(PlatformManagerSkel.java:73)
    at jade.imtp.leap.Skeleton.processCommand(Skeleton.java:51)
    at jade.imtp.leap.CommandDispatcher.handleCommand(CommandDispatcher.java:949)
    at jade.imtp.leap.JICP.JICPServer$ConnectionHandler.run(JICPServer.java:439)
Nested Exception:
jade.imtp.leap.DispatcherException: DispatcherException in remote site. No skeleton for object-id 3447152
    at jade.imtp.leap.CommandDispatcher.checkRemoteExceptions(CommandDispatcher.java:516)
    at jade.imtp.leap.CommandDispatcher.dispatchSerializedCommand(CommandDispatcher.java:418)
    at jade.imtp.leap.CommandDispatcher.dispatchCommand(CommandDispatcher.java:343)
    at jade.imtp.leap.NodeStub.accept(NodeStub.java:83)
    at jade.core.messaging.MessagingProxy.addRoute(MessagingProxy.java:257)
    at jade.core.messaging.MessagingService$CommandTargetSink.handleNewSlice(MessagingService.java:993)
    at jade.core.messaging.MessagingService$CommandTargetSink.consume(MessagingService.java:906)
    at jade.core.CommandProcessor$SinksFilter.accept(CommandProcessor.java:253)
    at jade.core.Filter.filter(Filter.java:89)
    at jade.core.Filter.filter(Filter.java:90)
    at jade.core.Filter.filter(Filter.java:90)
    at jade.core.CommandProcessor.processIncoming(CommandProcessor.java:229)
    at jade.core.PlatformManagerImpl.issueNewSliceCommand(PlatformManagerImpl.java:744)
    at jade.core.PlatformManagerImpl.localAddSlice(PlatformManagerImpl.java:445)
    at jade.core.PlatformManagerImpl.localAddNode(PlatformManagerImpl.java:293)
    at jade.core.PlatformManagerImpl.addNode(PlatformManagerImpl.java:245)
    at jade.imtp.leap.PlatformManagerSkel.executeCommand(PlatformManagerSkel.java:73)
    at jade.imtp.leap.Skeleton.processCommand(Skeleton.java:51)
    at jade.imtp.leap.CommandDispatcher.handleCommand(CommandDispatcher.java:949)
    at jade.imtp.leap.JICP.JICPServer$ConnectionHandler.run(JICPServer.java:439)
Jun 22, 2011 12:54:34 PM jade.core.PlatformManagerImpl$1 nodeAdded
INFO: --- Node <Container-1> ALIVE ---
Jun 22, 2011 12:54:34 PM jade.core.nodeMonitoring.BlockingNodeFailureMonitor run
INFO: PING from node Container-1 exited with exception. Dispatcher error [nested jade.imtp.leap.DispatcherException: DispatcherException in remote site. No skeleton for object-id 3447152]
Jun 22, 2011 12:54:34 PM jade.core.PlatformManagerImpl$1 nodeUnreachable
WARNING: --- Node <Container-1> UNREACHABLE ---
Jun 22, 2011 12:54:34 PM jade.core.PlatformManagerImpl removeTerminatedNode
INFO: --- Node <Container-1> TERMINATED ---
Jun 22, 2011 12:54:34 PM jade.core.messaging.MessagingService clearCachedSlice
---------
On the other node I get the following:
--------
Jun 22, 2011 12:55:35 PM jade.core.AgentContainerImpl joinPlatform
SEVERE: Some problem occurred while joining agent platform.
jade.core.ServiceException: An error occurred during service booting [nested java.lang.NullPointerException]
    at jade.core.AgentContainerImpl.bootAllServices(AgentContainerImpl.java:465)
    at jade.core.AgentContainerImpl.startNode(AgentContainerImpl.java:408)
    at jade.core.AgentContainerImpl.joinPlatform(AgentContainerImpl.java:485)
    at jade.core.Runtime.createAgentContainer(Runtime.java:133)
    at BookBuyTest2.main(BookBuyTest2.java:25)
Exception in thread "main" java.lang.NullPointerException
    at BookBuyTest2.main(BookBuyTest2.java:35)
------
Any ideas about what I am doing wrong?
Thank you very much in advance,
The problem was that on the peripheral node I put:
local-host: 127.0.0.1
This was solved by putting:
local-host: <actual IP of the machine>
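For reference, if the container is started programmatically (as the second stack trace's Runtime.createAgentContainer call suggests), the equivalent fix is setting the local-host and main-host parameters on the Profile. A sketch with placeholder IP addresses:

import jade.core.Profile;
import jade.core.ProfileImpl;
import jade.core.Runtime;

public class StartRemoteContainer {
    public static void main(String[] args) {
        Profile profile = new ProfileImpl();
        profile.setParameter(Profile.MAIN, "false");            // peripheral container, not the Main Container
        profile.setParameter(Profile.MAIN_HOST, "10.0.0.1");    // machine running the Main Container
        profile.setParameter(Profile.LOCAL_HOST, "10.0.0.2");   // this machine's real IP, not 127.0.0.1
        Runtime.instance().createAgentContainer(profile);       // joins the remote platform
    }
}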
This worked for me:
String[] container = {
    "-gui",
    "-local-host 127.0.0.1",
    "-container",
    "Agent1:jogo.agents.Agent1;Agent2:jogo.agents.Agent2" // <- your custom agents
};
Boot.main(container);
