OpenNMS - PostgreSQL/Java database issue
I understand that the jicmp files aren't causing fatal errors, but when the OpenNMS installer goes to create a database user, it throws a Java exception.
Does anybody with experience know what's causing this?
Errors dumped from the OpenNMS installer
- using SQL directory... C:\Program Files\OpenNMS\etc
- using create.sql... C:\Program Files\OpenNMS\etc\create.sql
* using 'postgres' as the PostgreSQL user for OpenNMS
* using 'opennms' as the PostgreSQL database name for OpenNMS
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.opennms.bootstrap.Bootstrap$4.run(Bootstrap.java:460)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.opennms.core.schema.MigrationException: an error occurred creating the OpenNMS user
at org.opennms.core.schema.Migrator.createUser(Migrator.java:339)
at org.opennms.core.schema.Migrator.prepareDatabase(Migrator.java:447)
at org.opennms.install.Installer.install(Installer.java:254)
at org.opennms.install.Installer.main(Installer.java:989)
... 6 more
Caused by: org.postgresql.util.PSQLException: ERROR: unrecognized role option "createuser"
Position: 54
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:313)
at org.opennms.core.schema.Migrator.createUser(Migrator.java:337)
... 9 more
If you're still interested: PostgreSQL has changed the CREATE USER command. The CREATEUSER option is no longer valid and should be changed to CREATEROLE. Make this change in the create.sql file where it tries to create the user.
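A minimal sketch of the edit, assuming the offending statement in etc\create.sql looks something like the line below (the exact user name, password, and other options in your file may differ):

-- rejected by newer PostgreSQL releases, which dropped the CREATEUSER option:
-- CREATE USER opennms WITH PASSWORD 'opennms' CREATEDB CREATEUSER;
-- accepted replacement:
CREATE USER opennms WITH PASSWORD 'opennms' CREATEDB CREATEROLE;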
I had a similar case for OpenCMS in opencms\setup\database\postgresql\create_db.sql where I changed:
#
# replacer = "${database}"
############################
# Create the user;
CREATE USER ${user}
PASSWORD '${password}'
CREATEDB CREATEUSER;
#create the database
CREATE DATABASE ${database}
WITH ENCODING='UNICODE' OWNER=${user};
#commit all (if connection is not autocommit)
commit;
to:
#
# replacer = "${database}"
############################
# Create the user;
CREATE USER ${user}
PASSWORD '${password}'
CREATEDB CREATEROLE;
#create the database
CREATE DATABASE ${database}
WITH ENCODING='UNICODE' OWNER=${user};
#commit all (if connection is not autocommit)
commit;
This was after unzipping the WAR. I rezipped it afterwards and then it worked.
Related
I am running a Spring Boot project in IntelliJ IDEA for training that writes recipes to a database using RESTful services. The IDE runs the server and I use Postman to query it.
My problem is that the H2 database does not persist the data entered. In memory it's fine, but as soon as I shut down and start up again, the data is gone.
Looking at the database file, I see a trace file that seems to show the DB wasn't updated because there was a lock on it. I have searched for a lock file (locate *.lock.db), stopped all Java processes, and even rebooted, without any change in behavior. I've changed the FILE_LOCK options for H2, but the problem persists.
EDIT: I just noticed that I hadn't tried FILE_LOCK=FILE. I just tried it, and the DB still isn't updated; however, no trace file is produced.
EDIT 2: I forgot to say that I'm running on Ubuntu 20.04.
EDIT 3: I am being careful to shut down using "actuator/shutdown" from Postman. Using file locking, I do this:
1. Start the server from my IDE. I can see the lock file being created.
2. Create a new recipe.
3. Make sure I can GET it (returns the recipe, no error).
4. Shut down using the actuator (curl equivalent shown after this list). The lock file is removed.
5. Start the server again. A new lock file appears.
6. Try to GET the recipe. It returns a 404 error. No trace file.
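For reference, the shutdown request in step 4 is equivalent to the following (port 8881 comes from the application.properties shown below):

curl -X POST http://localhost:8881/actuator/shutdown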
What could be causing this? Here is my trace.db file [note: when I use FILE_LOCK=FILE, no trace file is created, so maybe the locking problem was a red herring?]:
2021-09-09 07:54:08 database: flush
org.h2.message.DbException: General error: "java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]" [50000-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.DbException.convert(DbException.java:347)
at org.h2.mvstore.db.MVTableEngine$1.uncaughtException(MVTableEngine.java:93)
at org.h2.mvstore.MVStore.handleException(MVStore.java:2877)
at org.h2.mvstore.MVStore.panic(MVStore.java:481)
at org.h2.mvstore.MVStore.<init>(MVStore.java:402)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:3579)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:170)
at org.h2.mvstore.db.MVTableEngine.init(MVTableEngine.java:103)
at org.h2.engine.Database.getPageStore(Database.java:2659)
at org.h2.engine.Database.open(Database.java:675)
at org.h2.engine.Database.openDatabase(Database.java:307)
at org.h2.engine.Database.<init>(Database.java:301)
at org.h2.engine.Engine.openSession(Engine.java:74)
at org.h2.engine.Engine.openSession(Engine.java:192)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:171)
at org.h2.engine.Engine.createSession(Engine.java:166)
at org.h2.engine.Engine.createSession(Engine.java:29)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:340)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:173)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:152)
at org.h2.Driver.connect(Driver.java:69)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.access$100(HikariPool.java:71)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:725)
at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:711)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.h2.jdbc.JdbcSQLNonTransientException: General error: "java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]" [50000-200]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:505)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:429)
... 33 more
Caused by: java.lang.IllegalStateException: The file is locked: nio:/home/knute/IdeaProjects/recipes_db.mv.db [1.4.200/7]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:950)
at org.h2.mvstore.FileStore.open(FileStore.java:166)
at org.h2.mvstore.MVStore.<init>(MVStore.java:381)
... 27 more
Caused by: java.nio.channels.OverlappingFileLockException
at java.base/sun.nio.ch.FileLockTable.checkList(FileLockTable.java:229)
at java.base/sun.nio.ch.FileLockTable.add(FileLockTable.java:123)
at java.base/sun.nio.ch.FileChannelImpl.tryLock(FileChannelImpl.java:1154)
at org.h2.store.fs.FileNio.tryLock(FilePathNio.java:121)
at java.base/java.nio.channels.FileChannel.tryLock(FileChannel.java:1165)
at org.h2.mvstore.FileStore.open(FileStore.java:163)
... 28 more
[note: the same flush error is logged a second time, verbatim, at 2021-09-09 07:54:08]
...and the application.properties file [note: in my current iteration, I am using FILE_LOCK=FILE]:
# Required by HyperSkill
server.port=8881
management.endpoints.web.exposure.include=*
management.endpoint.shutdown.enabled=true
spring.datasource.url=jdbc:h2:file:../recipes_db
# Solves file locking problem? No.
#spring.datasource.url=jdbc:h2:file:../recipes_db;DB_CLOSE_ON_EXIT=TRUE;FILE_LOCK=NO
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=NO
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=SOCKET
#spring.datasource.url=jdbc:h2:file:../recipes_db;FILE_LOCK=FS
# Needed?
spring.datasource.auto-commit=true
# To remove warning
spring.jpa.open-in-view=true
I found the problem! After adding debug=true to the application.properties file, I saw this:
Starting delayed evictData of schema as part of SessionFactory shut-down
So for some reason, Spring was explicitly dropping my table! Adding this to the application.properties file stopped that:
spring.jpa.hibernate.ddl-auto=update
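The likely mechanism: for embedded databases such as H2, Spring Boot defaults spring.jpa.hibernate.ddl-auto to create-drop, which drops the schema when the SessionFactory shuts down. The relevant application.properties entries then end up as follows (a sketch based on the settings quoted above):

spring.datasource.url=jdbc:h2:file:../recipes_db
# keep the schema between runs instead of dropping it at shutdown
spring.jpa.hibernate.ddl-auto=update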
I am trying to boot up a Spring Boot application which connects to a local MS SQL Server (2019). I have already set up the database manually and kept spring.jpa.hibernate.ddl-auto at update, but when the application starts up it still tries to create the table and throws an error. The application started, but there was an exception in the console:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: There is already an object named 'abc' in the database.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:262)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1624)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:868)
at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:768)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7194)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2979)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:248)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:223)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.execute(SQLServerStatement.java:744)
at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:54)
... 18 common frames omitted
I am not able to understand why, with the hibernate property set to update, it's not able to detect the existing table "abc".
Can anyone point out if I am doing anything wrong?
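One commonly suggested check (an assumption here, not something confirmed in this thread): with ddl-auto=update, Hibernate only skips CREATE TABLE when it can find the table through JDBC metadata, so a catalog/schema mismatch can make it try to recreate an existing table. A hedged sketch of settings to verify (the database name and the dbo schema are placeholders):

spring.datasource.url=jdbc:sqlserver://localhost:1433;databaseName=mydb
spring.jpa.hibernate.ddl-auto=update
# pin the schema Hibernate inspects so it matches where 'abc' actually lives
spring.jpa.properties.hibernate.default_schema=dbo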
I recently installed DbVisualizer to query Hive tables. I installed it on my Mac and downloaded/installed the JDBC jar files for Hive from this website: https://s3.amazonaws.com/public-repo-1.hortonworks.com/HDP/hive-jdbc4/1.0.42.1054/Simba_HiveJDBC41_1.0.42.1054.zip
When I connected to our DB and tested queries, a simple select worked:
select *
from table_name
limit 10
But when I added 'order by' or 'group by':
select *
from table_name
order by rollingtime
limit 10
I got the following error, and I have no clue why. Has anyone had a similar error and knows how to fix it?
09:56:17 START Executing for: 'NewDev' [Hive], Database: Hive, Schema: sdc
09:56:17 FAILED [SELECT - 0 rows, 0.504 secs] [Code: 500051, SQL State: HY000] [Simba][HiveJDBCDriver](500051) ERROR processing query/statement. Error Code: 2, SQL state: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1516123265840_0008_8_00, diagnostics=[Task failed, taskId=task_1516123265840_0008_8_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : java.lang.NoClassDefFoundError: Could not initialize class org.apache.tez.runtime.library.api.TezRuntimeConfiguration
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:107)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:186)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
, errorMessage=Cannot recover from this error:java.lang.NoClassDefFoundError: Could not initialize class org.apache.tez.runtime.library.api.TezRuntimeConfiguration
at org.apache.tez.runtime.library.output.OrderedPartitionedKVOutput.start(OrderedPartitionedKVOutput.java:107)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:186)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:188)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:172)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:370)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
at org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:1, Vertex vertex_1516123265840_0008_8_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1516123265840_0008_8_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1, Vertex vertex_1516123265840_0008_8_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1, Query: select *
from nomura_qa_mblock_capacity_stage
order by rollingtime
limit 10.
select *
from nomura_qa_mblock_capacity_stage
order by rollingtime
limit 10;
09:56:17 END Execution 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.504/0.000 secs [0 successful, 1 errors]
This is sometimes caused by an inability to write to the configured or hardcoded .staging directory, usually because of an incorrect path or invalid permissions.
The MapReduce exec engine is more verbose than the Tez engine in helping to identify the culprit. You can choose it by running this query in your Hive shell:
SET hive.execution.engine=mr
You may then be able to see the following error:
Permission denied: user=dbuser, access=WRITE,
inode="/user/dbuser/.staging":hdfs:hdfs:drwxr-xr-x
In this case, the "dbuser" staging directory points to a non-existent path; it should be /home/dbuser/.staging
At runtime, before executing any queries that perform actions needing staging (sort, order, group, distribute, etc.), and along with setting the exec engine to mr as shown above, set the staging path to a valid parent directory such as your user's home dir by running the following query:
SET yarn.app.mapreduce.am.staging-dir=/home/dbuser/.staging
Depending on version and environment, if that directive doesn't work, you can try
SET hive.exec.stagingdir=/home/dbuser/.staging
Of course, change "dbuser" to your actual home directory (or any other to which you have read/write access). The .staging directory will be created automatically, assuming the user running the query has write access to its parent.
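Putting the pieces together, a session that applies both settings before rerunning the failing query might look like this (a sketch; the staging path is the example one from above):

-- switch to the more verbose MapReduce engine for diagnosis
SET hive.execution.engine=mr;
-- point staging at a writable parent directory (try hive.exec.stagingdir if this has no effect)
SET yarn.app.mapreduce.am.staging-dir=/home/dbuser/.staging;
SELECT * FROM table_name ORDER BY rollingtime LIMIT 10;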
More info at http://doc.mapr.com/display/MapR/Default+mapred+Parameters
I recently installed the H2 database, but when I launch the program and try to access it from my browser (http://localhost:8082/login.do), I get this error:
IO Exception: "/root/test outside /opt/h2/DB" [90028-192] 90028/90028
org.h2.jdbc.JdbcSQLException: IO Exception: "/root/test outside /opt/h2/DB" [90028-192]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.engine.ConnectionInfo.setBaseDir(ConnectionInfo.java:182)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:114)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:102)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.server.web.WebServer.getConnection(WebServer.java:735)
at org.h2.server.web.WebApp.login(WebApp.java:955)
at org.h2.server.web.WebApp.process(WebApp.java:211)
at org.h2.server.web.WebApp.processRequest(WebApp.java:170)
at org.h2.server.web.WebThread.process(WebThread.java:133)
at org.h2.server.web.WebThread.run(WebThread.java:89)
at java.lang.Thread.run(Thread.java:745)
How can I fix this?
Just add a single "." before the name of your database. For example, this is the JDBC URL for my database: jdbc:h2:tcp://localhost:9101/~/test, and I changed it to jdbc:h2:tcp://localhost:9101/~./test to make it work. I've read in a forum that this bug relates to H2.
You should change the form of the JDBC URL so that the database path lies inside the base directory; here h2-data is the data path you passed when starting the H2 server:
jdbc:h2:/h2-data/test
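For context, the original error says the requested database file (/root/test, i.e. ~/test for root) lies outside the base directory the server was started with (/opt/h2/DB). A minimal sketch of a consistent setup, assuming the server is launched via the H2 Server tool:

# start the TCP server restricted to /opt/h2/DB
java -cp h2*.jar org.h2.tools.Server -tcp -baseDir /opt/h2/DB
# then connect with a database name relative to that base directory, e.g.:
#   jdbc:h2:tcp://localhost:9092/test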
I want to use the PigServer Java class to run Pig scripts on a remote Hadoop cluster. At the moment I am using the Cloudera CDH 4.4.0 distribution for testing purposes, and I'm running my Java program from within the Cloudera VM.
The directory with the Hadoop config files is included in the classpath.
When I run the same script in MapReduce mode from the shell, it works just fine. Thanks in advance for your answer.
The Java code:
// Point Pig at the cluster's NameNode and JobTracker.
Properties props = new Properties();
props.setProperty("fs.default.name", "hdfs://localhost.localdomain:8020");
props.setProperty("mapred.job.tracker", "localhost.localdomain:8021");
PigServer pigServer = new PigServer(ExecType.MAPREDUCE, props);
// registerQuery() parses each statement immediately; the "Unable to
// check name" error in the stack trace below is thrown during parsing.
pigServer.registerQuery("batting = load './Batting.csv' using PigStorage(',');");
pigServer.registerQuery("runs_raw = FOREACH batting GENERATE $0 as playerID, $1 as year, $8 as runs;");
pigServer.registerQuery("runs = FILTER runs_raw BY runs > 0;");
pigServer.registerQuery("grp_data = GROUP runs by (year);");
pigServer.registerQuery("max_runs = FOREACH grp_data GENERATE group as grp, MAX(runs.runs) as max_runs;");
pigServer.registerQuery("join_max_run = JOIN max_runs by ($0, max_runs), runs by (year,runs);");
pigServer.registerQuery("join_data = FOREACH join_max_run GENERATE $0 as year, $2 as playerID, $1 as runs;");
// Triggers plan compilation and job submission.
pigServer.store("join_data", "join_data");
I also added the following Maven dependency:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.2.0</version>
</dependency>
Upon running my Java program, I'm getting the following stack trace:
log4j:WARN No appenders could be found for logger (org.apache.hadoop.conf.Configuration.deprecation).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.PigServer.registerQuery(PigServer.java:531)
at MainTestPigServer.main(MainTestPigServer.java:24)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Caused by: Failed to parse: Pig script failed to parse:
<line 1, column 10> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1600)
... 9 more
Caused by:
<line 1, column 10> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:835)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3236)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1315)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:799)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:517)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:392)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
... 10 more
Caused by: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://localhost.localdomain:8020/user/cloudera
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:207)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:128)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:138)
at org.apache.pig.parser.QueryParserUtils.getCurrentDir(QueryParserUtils.java:91)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:827)
... 16 more
Caused by: java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).; Host Details : local host is: "localhost.localdomain/127.0.0.1"; destination host is: "localhost.localdomain":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy9.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy9.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1106)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1102)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1397)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:200)
... 20 more
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:89)
at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:108)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1398)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.<init>(RpcHeaderProtos.java:1362)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1492)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto$1.parsePartialFrom(RpcHeaderProtos.java:1487)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:996)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:891)
Process finished with exit code 1
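Not confirmed in this thread, but the InvalidProtocolBufferException above ("invalid tag (zero)") is the classic symptom of an RPC version mismatch: hadoop-client 2.2.0 speaks a newer wire protocol than the Hadoop 2.0.x base of CDH 4.4.0. A hedged sketch of a client dependency matched to the cluster (the exact version string, and the need to add Cloudera's Maven repository, are assumptions to check against your CDH documentation):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- assumed CDH 4.4.0 build; resolved from the Cloudera Maven repository -->
    <version>2.0.0-cdh4.4.0</version>
</dependency>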