When I try to sync my content sources, FirstSpirit raises the following exception:
java.lang.IllegalArgumentException: Entity xxx has no gid. Entities without gid are not supported.
Does anyone know how to fix this so the sync completes successfully?
Thanks a lot in advance.
Admin (Admin), session: 5167312680662795708, project: 8385, ip: 169.254.30.75
(de.espirit.common.base.control.AbstractActionProcessor): [JC_Main]Handle failed [ActionEvent[AE#addElementsToSyncFolder,[<CONTENT2 editor="1" id="39967" name="brands" revision="10166" tabletemplate="103">
<LANG displayname="Brands" language="INTL"/>
<LANG displayname="Brands" language="DE"/>
<LANG displayname="Brands" language="EN-US"/>
<LANG displayname="Brands" language="ES-MX"/>
<CONTENTPARAMETER templateid="103"/>
</CONTENT2>
]]#898975197]!
FSVersion=5.2.212.71463#4747;JDK=1.8.0_101 64bit Oracle Corporation;OS=Windows 7 6.1 amd64;Date=07.10.2016 10:29:21
java.lang.IllegalArgumentException: Entity Brands [9] has no gid. Entities without gid are not supported.
at de.espirit.firstspirit.client.gui.navigation.ppool.sync.FileSystemSyncModelImpl.addEntities(FileSystemSyncModelImpl.java:374)
at de.espirit.firstspirit.client.gui.navigation.ppool.sync.FileSystemSyncModelImpl.add(FileSystemSyncModelImpl.java:216)
at de.espirit.firstspirit.client.gui.navigation.ppool.sync.FileSystemSyncModelImpl.add(FileSystemSyncModelImpl.java:179)
at de.espirit.firstspirit.client.gui.navigation.ppool.sync.FileSystemSyncHandler.handleAddElements(FileSystemSyncHandler.java:353)
at de.espirit.firstspirit.client.gui.navigation.ppool.sync.FileSystemSyncHandler.getHandle(FileSystemSyncHandler.java:259)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at de.espirit.common.gui.RunsInEDTProxyFactory$RunsInEDTInvocationHandler.invoke(RunsInEDTProxyFactory.java:143)
at com.sun.proxy.$Proxy3.getHandle(Unknown Source)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate.handle(AbstractActionProcessor.java:1099)
at de.espirit.common.base.control.AbstractActionProcessor$AbstractActionProcess.handle(AbstractActionProcessor.java:1283)
at de.espirit.common.base.control.AbstractActionProcessor$InnerActionProcess.handle(AbstractActionProcessor.java:1575)
at de.espirit.common.base.control.AbstractActionProcessor$InnerActionProcess$1.onGrant(AbstractActionProcessor.java:1558)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$1.handleGrantResult(AbstractActionProcessor.java:988)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$1.onGrant(AbstractActionProcessor.java:970)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$2.handleGrantResult(AbstractActionProcessor.java:1014)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$2.onSuccess(AbstractActionProcessor.java:1010)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$2.onSuccess(AbstractActionProcessor.java:1005)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate$3.onGrant(AbstractActionProcessor.java:1035)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate.grant(AbstractActionProcessor.java:956)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate.requestGrant(AbstractActionProcessor.java:1029)
at de.espirit.common.base.control.AbstractActionProcessor$ActionProcessDelegate.grant(AbstractActionProcessor.java:993)
at de.espirit.common.base.control.AbstractActionProcessor$AbstractActionProcess.grant(AbstractActionProcessor.java:1278)
at de.espirit.common.base.control.AbstractActionProcessor$InnerActionProcess.grant(AbstractActionProcessor.java:1555)
at de.espirit.common.base.control.AbstractActionProcessor$InnerActionProcess.start(AbstractActionProcessor.java:1550)
at de.espirit.common.base.control.AbstractActionProcessor.doProcess(AbstractActionProcessor.java:435)
at de.espirit.common.base.control.AbstractActionProcessor.access$600(AbstractActionProcessor.java:37)
at de.espirit.common.base.control.AbstractActionProcessor$2.execute(AbstractActionProcessor.java:588)
at de.espirit.common.util.ExecutorScheduler$ExecuteCommand.run(ExecutorScheduler.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Entities need a GID (which is technically a UUID) for transport. It seems you have an older project where this is not the case. You can use the de.espirit.firstspirit.common.GidAgent to assign GIDs to these entities first. This only needs to be done once; new entities get a GID automatically.
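For reference, FirstSpirit agents are obtained through the broker, so a one-off script would look roughly like the sketch below. Only the requireSpecialist/TYPE lookup is the standard agent pattern; the actual assignment call is left as a placeholder, because the exact GidAgent method depends on your FirstSpirit version (check the Access API Javadoc):

import de.espirit.firstspirit.access.BaseContext;
import de.espirit.firstspirit.common.GidAgent;

public class AssignGidsScript {
    public void run(final BaseContext context) {
        // Standard FirstSpirit agent lookup; GidAgent.TYPE is assumed to follow
        // the usual SpecialistType convention of the Access API.
        final GidAgent gidAgent = context.requireSpecialist(GidAgent.TYPE);
        // Placeholder: call the agent's method for assigning GIDs to the
        // legacy entities of the affected dataset here (see its Javadoc).
    }
}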
Related
I know this kind of question has been asked previously, but I still couldn't find a solution after reading those posts, so I have decided to post the question again here.
I am working on a multi-threaded Java application where I am trying to run HQL queries using JDBC in a Hive environment. I have a bunch of Hive SQL queries, and I am executing them on Hive in parallel with multiple threads. I am getting the following exception when the query count is higher (for example, when I run more than 100 queries). Can someone please look into this and help me?
2020-06-16 06:00:45,314 ERROR [main]: Terminal exception
java.lang.Exception: Map step agg_cas_auth_reinstate_derive failed.
at com.mine.idn.magellan.ParallelExecGraph.execute(ParallelExecGraph.java:198)
at com.mine.idn.magellan.WarehouseSession.executeMap(WarehouseSession.java:332)
at com.mine.idn.magellan.StandAloneEnv.execute(StandAloneEnv.java:872)
at com.mine.idn.magellan.StandAloneEnv.execute(StandAloneEnv.java:778)
at com.mine.idn.magellan.StandAloneEnv.executeAndExit(StandAloneEnv.java:642)
at com.mine.idn.magellan.StandAloneEnv.main(StandAloneEnv.java:77)
Caused by: java.sql.SQLException: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.io.IOException: Bad file descriptor
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:380)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:257)
at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:348)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:362)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: java.io.IOException: Bad file descriptor
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2850)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2685)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2591)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1077)
at org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2007)
at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479)
at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:190)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:601)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:599)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:599)
at org.apache.hadoop.mapred.JobClient.getJobInner(JobClient.java:609)
at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:639)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:295)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:559)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:151)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:201)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79)
Caused by: java.io.IOException: Bad file descriptor
at java.io.FileInputStream.close0(Native Method)
at java.io.FileInputStream.access$000(FileInputStream.java:49)
at java.io.FileInputStream$1.close(FileInputStream.java:336)
at java.io.FileDescriptor.closeAll(FileDescriptor.java:212)
at java.io.FileInputStream.close(FileInputStream.java:334)
at java.io.BufferedInputStream.close(BufferedInputStream.java:483)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2676)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2661)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2741)
... 22 more
at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:385)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254)
at com.mine.idn.magellan.WarehouseSession.executeMapStep(WarehouseSession.java:797)
at com.mine.idn.magellan.WarehouseSession.access$000(WarehouseSession.java:23)
at com.mine.idn.magellan.WarehouseSession$ParallelExecResources.executeMapStep(WarehouseSession.java:91)
at com.mine.idn.magellan.ParallelExecGraph$Node.run(ParallelExecGraph.java:85)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
What I don't understand is why the Hadoop framework is throwing a Bad file descriptor exception. My Java code invokes the Hadoop-Hive code, and that is where this exception occurs.
Also, one more thing: this issue is intermittent, not consistent. If I re-run the same application, in most cases it goes through.
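For reference, a minimal sketch of the kind of parallel execution described (host, port, credentials and queries are hypothetical placeholders); note that it gives each task its own Connection, since Hive JDBC connections and statements are not safe to share across threads:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelHiveQueries {
    // Hypothetical HiveServer2 endpoint.
    private static final String URL = "jdbc:hive2://hive-host:10000/default";

    public static void runAll(List<String> queries) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (String hql : queries) {
            pool.submit(() -> {
                // One Connection per task instead of a shared one.
                try (Connection con = DriverManager.getConnection(URL, "user", "");
                     Statement st = con.createStatement()) {
                    st.execute(hql);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}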
Thank you for your help.
I am trying to read from BigQuery using the Java BigQueryIO.read method, but I am getting the error below.
public POutput expand(PBegin pBegin) {
final String queryOperation = "select query";
return pBegin
.apply(BigQueryIO.readTableRows().fromQuery(queryOperation));
}
2020-06-08 19:32:01.391 IST Error message from worker: java.io.IOException: Failed to start reading from source: org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter#77f0db34
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:792)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1320)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:151)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.UnsupportedOperationException: BigQuery source must be split before being read
at org.apache.beam.sdk.io.gcp.bigquery.BigQuerySourceBase.createReader(BigQuerySourceBase.java:173)
at org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.advance(UnboundedReadFromBoundedSource.java:467)
at org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.access$300(UnboundedReadFromBoundedSource.java:446)
at org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.advance(UnboundedReadFromBoundedSource.java:298)
at org.apache.beam.runners.core.construction.UnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.start(UnboundedReadFromBoundedSource.java:291)
at org.apache.beam.runners.dataflow.worker.WorkerCustomSources$UnboundedReaderIterator.start(WorkerCustomSources.java:787)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation$SynchronizedReaderIterator.start(ReadOperation.java:361)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:194)
at org.apache.beam.runners.dataflow.worker.util.common.worker.ReadOperation.start(ReadOperation.java:159)
at org.apache.beam.runners.dataflow.worker.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:1320)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker.access$1000(StreamingDataflowWorker.java:151)
at org.apache.beam.runners.dataflow.worker.StreamingDataflowWorker$6.run(StreamingDataflowWorker.java:1053)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
I suppose this issue might be connected with a missing tempLocation pipeline execution parameter when you use the DataflowRunner for cloud execution.
According to the documentation:
If tempLocation is not specified and gcpTempLocation is, tempLocation will not be populated.
Since this is just my presumption, I'd also encourage you to inspect the native Apache Beam runtime logs to gather more evidence, since the Stackdriver logs may not reflect the full picture of the problem.
A separate Jira ticket, BEAM-9043, was raised to track this vaguely worded error message.
Feel free to add more specific information to your original question for any further concerns or essential updates.
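For illustration, a hedged sketch of setting the option explicitly (the bucket name is hypothetical); the same value can also be passed on the command line as --tempLocation=gs://my-bucket/temp:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PipelineSetup {
    public static Pipeline create(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
        // Hypothetical bucket; Dataflow uses this location for staging temporary files.
        options.setTempLocation("gs://my-bucket/temp");
        return Pipeline.create(options);
    }
}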
I am trying to connect to SQL Server using the Active Directory Password authentication mode, but on executing the code I get the following error:
[pool-2-thread-1] INFO com.microsoft.aad.adal4j.AuthenticationAuthority
- [Correlation ID: 2febb587-963f-462a-9937-98b05d3a3fc8] Instance
discovery was successful
[pool-2-thread-1] ERROR com.microsoft.aad.adal4j.AuthenticationContext -
[Correlation ID: 2febb587-963f-462a-9937-98b05d3a3fc8] Execution of
class com.microsoft.aad.adal4j.AcquireTokenCallable failed.
java.lang.ClassCastException: java.util.Collections$SingletonList cannot
be cast to java.lang.String
at com.nimbusds.oauth2.sdk.util.URLUtils.serializeParameters(URLUtils.java:88)
at com.microsoft.aad.adal4j.AdalTokenRequest.toOAuthRequest(AdalTokenRequest.java:160)
at com.microsoft.aad.adal4j.AdalTokenRequest.executeOAuthRequestAndProcessResponse(AdalTokenRequest.java:86)
at com.microsoft.aad.adal4j.AuthenticationContext.acquireTokenCommon(AuthenticationContext.java:930)
at com.microsoft.aad.adal4j.AcquireTokenCallable.execute(AcquireTokenCallable.java:70)
at com.microsoft.aad.adal4j.AcquireTokenCallable.execute(AcquireTokenCallable.java:38)
at com.microsoft.aad.adal4j.AdalCallable.call(AdalCallable.java:47)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Exception in thread "main" com.microsoft.sqlserver.jdbc.SQLServerException: Failed to authenticate the user e9002802#ltfinc.net in Active Directory (Authentication=ActiveDirectoryPassword).
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:57)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getFedAuthToken(SQLServerConnection.java:3853)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.onFedAuthInfo(SQLServerConnection.java:3829)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.processFedAuthInfo(SQLServerConnection.java:3797)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onFedAuthInfo(tdsparser.java:261)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:103)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:4545)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:3406)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$100(SQLServerConnection.java:85)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:3370)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7347)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2713)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2261)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1921)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1762)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:1077)
at com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnectionInternal(SQLServerDataSource.java:1025)
at com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnection(SQLServerDataSource.java:69)
at main.AADUserPassword.main(AADUserPassword.java:22)
Caused by: java.util.concurrent.ExecutionException:
com.microsoft.aad.adal4j.AuthenticationException:
java.util.Collections$SingletonList cannot be cast to java.lang.String
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:55)
... 18 more
Caused by: com.microsoft.aad.adal4j.AuthenticationException: java.util.Collections$SingletonList cannot be cast to java.lang.String
at com.microsoft.sqlserver.jdbc.SQLServerADAL4JUtils.getSqlFedAuthToken(SQLServerADAL4JUtils.java:50)
... 18 more
I am not able to figure out what exactly that exception (cannot cast to java.lang.String) means; also, I have given the correct username and password. I checked them using SQL Server Management Studio and it connected successfully.
Please help; I am stuck. Any help would be greatly appreciated.
According to a comment, the asker solved the error himself:
It was some JAR problem. Now it connects to Azure using Active Directory Password authentication mode over JDBC.
I am posting this as an answer so that it can benefit other community members.
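For reference, a minimal connection sketch for this authentication mode (server, database and user values are hypothetical). The ClassCastException in com.nimbusds.oauth2.sdk.util.URLUtils typically points to mismatched adal4j / oauth2-oidc-sdk JAR versions on the classpath, which fits the "JAR problem" mentioned above:

import java.sql.Connection;
import com.microsoft.sqlserver.jdbc.SQLServerDataSource;

public class AADUserPassword {
    public static void main(String[] args) throws Exception {
        SQLServerDataSource ds = new SQLServerDataSource();
        ds.setServerName("myserver.database.windows.net"); // hypothetical
        ds.setDatabaseName("mydb");                        // hypothetical
        ds.setAuthentication("ActiveDirectoryPassword");
        ds.setUser("user@example.net");                    // hypothetical
        ds.setPassword("***");
        try (Connection con = ds.getConnection()) {
            System.out.println("Connected.");
        }
    }
}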
Yesterday we changed the rules of one of our projects on Sonar 5.6.1, and since then the analysis has been failing with the error below.
Stack:
java.lang.IllegalArgumentException: Value is too long for issue author login: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:148) ~[guava-17.0.jar:na]
at org.sonar.db.issue.IssueDto.setAuthorLogin(IssueDto.java:353) ~[sonar-db-5.6.1.jar:na]
at org.sonar.db.issue.IssueDto.toDtoForComputationInsert(IssueDto.java:129) ~[sonar-db-5.6.1.jar:na]
at org.sonar.server.computation.step.PersistIssuesStep.execute(PersistIssuesStep.java:69) ~[sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.step.ComputationStepExecutor.executeSteps(ComputationStepExecutor.java:64) ~[sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.step.ComputationStepExecutor.execute(ComputationStepExecutor.java:52) ~[sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.taskprocessor.report.ReportTaskProcessor.process(ReportTaskProcessor.java:75) ~[sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.executeTask(CeWorkerCallableImpl.java:81) [sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.call(CeWorkerCallableImpl.java:56) [sonar-server-5.6.1.jar:na]
at org.sonar.server.computation.taskprocessor.CeWorkerCallableImpl.call(CeWorkerCallableImpl.java:35) [sonar-server-5.6.1.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_101]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_101]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
2017.01.30 16:42:28 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=sgl | type=REPORT | id=AVnwr0Dz3syUbWgWOt8O | submitter=jenkins | time=13516ms
The message is right...
Because of a configuration error, a user committed some changesets with a giant login (300+ characters). We cannot change the changesets; they were already pushed, a long time ago.
1 - What can we do to stop this error in Sonar?
2 - Why did this error only show up after we changed the rules?
(I think it is because the old rules generated no issues in the files touched by the changesets with the giant login...)
You can update the author of the commit. Yes, there are consequences for the other developers in how they get the new upstream master. GitHub has a good script for this. On the other hand, you could (less clean solutions):
update the lines that report the issue and commit. You said it was an error to have such a long login
delete the file, commit then recreate the file with a new commit
exclude the file or issue from SonarQube analysis
deactivate SCM information. Look at SCM > Disable the SCM Sensor (see the snippet after this list)
depending on the rule you can personalize it to exclude some file patterns
wait for this ticket to be fixed : )
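If you go the SCM-sensor route, a minimal sketch of the analysis setting (it can go in sonar-project.properties, or be passed as -Dsonar.scm.disabled=true on the scanner command line); note that this disables blame data, and thus author logins, for the whole project:

# sonar-project.properties: skip SCM blame data, so author logins are not collected
sonar.scm.disabled=true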
You're right: the old rules raised no issues related to those commits, which is why the analysis didn't fail earlier.
I am using the embedded version of H2 (1.4.191) with the following connection string:
jdbc:h2:<Path of database file>
I am facing two problems:
Sometimes my database size goes beyond 10 GB and the system stops working. I think the data stored in the database should not take more than 5 MB. What is consuming the space?
Sometimes I am getting these exceptions:
org.h2.jdbc.JdbcSQLException: File corrupted while reading record: null. Possible solution: use the recovery tool [90030-191]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:168)
at org.h2.mvstore.db.MVTableEngine$Store.convertIllegalStateException(MVTableEngine.java:195)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:167)
at org.h2.mvstore.db.MVTableEngine.init(MVTableEngine.java:99)
at org.h2.engine.Database.getPageStore(Database.java:2460)
at org.h2.engine.Database.open(Database.java:692)
at org.h2.engine.Database.openDatabase(Database.java:270)
at org.h2.engine.Database.<init>(Database.java:264)
at org.h2.engine.Engine.openSession(Engine.java:65)
at org.h2.engine.Engine.openSession(Engine.java:175)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:153)
at org.h2.engine.Engine.createSession(Engine.java:136)
at org.h2.engine.Engine.createSession(Engine.java:28)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:349)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:107)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:91)
at org.h2.Driver.connect(Driver.java:72)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at com.j256.ormlite.jdbc.JdbcConnectionSource.makeConnection(JdbcConnectionSource.java:252)
at com.j256.ormlite.jdbc.JdbcConnectionSource.getReadWriteConnection(JdbcConnectionSource.java:184)
at com.j256.ormlite.table.TableUtils.doCreateTable(TableUtils.java:440)
at com.j256.ormlite.table.TableUtils.createTable(TableUtils.java:220)
at com.j256.ormlite.table.TableUtils.createTableIfNotExists(TableUtils.java:61)
at com.limetray.pos.dbmanagers.DAO.init(DAO.java:75)
at com.limetray.pos.dbmanagers.DAO.createDB(DAO.java:172)
at com.limetray.pos.controllers.MainApp.appInitialization(MainApp.java:91)
at com.limetray.pos.controllers.MainApp.init(MainApp.java:80)
at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:841)
at com.sun.javafx.application.LauncherImpl.lambda$launchApplication$155(LauncherImpl.java:182)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: File corrupted in chunk 1703, expected check value 1190, got 2040 [1.4.191/6]
at org.h2.mvstore.DataUtils.newIllegalStateException(DataUtils.java:773)
at org.h2.mvstore.Page.read(Page.java:667)
at org.h2.mvstore.Page.read(Page.java:195)
at org.h2.mvstore.MVStore.readPage(MVStore.java:1939)
at org.h2.mvstore.MVMap.readPage(MVMap.java:736)
at org.h2.mvstore.Page.getChildPage(Page.java:217)
at org.h2.mvstore.Cursor.min(Cursor.java:129)
at org.h2.mvstore.Cursor.hasNext(Cursor.java:36)
at org.h2.mvstore.MVStore.loadChunkMeta(MVStore.java:689)
at org.h2.mvstore.MVStore.readStoreHeader(MVStore.java:670)
at org.h2.mvstore.MVStore.<init>(MVStore.java:353)
at org.h2.mvstore.MVStore$Builder.open(MVStore.java:2888)
at org.h2.mvstore.db.MVTableEngine$Store.open(MVTableEngine.java:154)
and
org.h2.jdbc.JdbcSQLException: General error: "java.lang.NullPointerException" [50000-178]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:344)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.convert(DbException.java:294)
at org.h2.engine.Database.openDatabase(Database.java:293)
at org.h2.engine.Database.<init>(Database.java:256)
at org.h2.engine.Engine.openSession(Engine.java:57)
at org.h2.engine.Engine.openSession(Engine.java:164)
at org.h2.engine.Engine.createSessionAndValidate(Engine.java:142)
at org.h2.engine.Engine.createSession(Engine.java:125)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:150)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NullPointerException
at org.h2.mvstore.DataUtils.parseMap(DataUtils.java:630)
at org.h2.mvstore.MVStore.openMap(MVStore.java:411)
at org.h2.mvstore.db.TransactionStore.<init>(TransactionStore.java:96)
at org.h2.mvstore.db.MVTableEngine$Store.<init>(MVTableEngine.java:161)
at org.h2.mvstore.db.MVTableEngine.init(MVTableEngine.java:94)
at org.h2.engine.Database.getPageStore(Database.java:2355)
at org.h2.engine.Database.open(Database.java:659)
at org.h2.engine.Database.openDatabase(Database.java:262)
I think both are related to a database corruption issue in H2. How should I avoid this issue? If not, can I recover my data?
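For what it's worth, a minimal sketch of invoking the recovery tool that the first error message refers to (directory and database name are hypothetical). Recover writes a <database>.h2.sql script next to the database file, which can then be replayed into a fresh database with org.h2.tools.RunScript:

import org.h2.tools.Recover;

public class RecoverDb {
    public static void main(String[] args) throws Exception {
        // Equivalent to: java -cp h2*.jar org.h2.tools.Recover -dir /path/to/db -db mydb
        Recover.execute("/path/to/db", "mydb"); // hypothetical path and name
    }
}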