[INFO] Oct 06, 2016 11:24:54 AM com.google.apphosting.utils.jetty.AppEngineAuthentication$AppEngineAuthenticator authenticate
[INFO] INFO: Returning NOBODY because of SkipAdminCheck.
It seems this error is produced by the TaskQueue:
Queue qu = QueueFactory.getQueue(qname);
qu.add(TaskOptions.Builder.withUrl("/task/" + qname)
        .payload("{\"token\":\"asdf1234\"}", "UTF-8")
        .method(TaskOptions.Method.POST)
        .header("Host", ModulesServiceFactory.getModulesService().getVersionHostname(null, null)));
Any suggestions on how to fix it? Of course I googled, but Google found just three pages about it, and the first two are about adding the Host header. As you can see above, I have already added the following, but the code still produces these messages in the log:
.header("Host", ModulesServiceFactory.getModulesService().getVersionHostname(null,null))
I wanted to use a custom ThreadPool for parallelStream, because I wanted the MDC context to be available in the tasks. This is the code I wrote to use the custom ThreadPool:
final ExecutorService mdcPool = MDCExecutors.newCachedThreadPool();
mdcPool.submit(() -> ruleset.getOperationList().parallelStream().forEach(operation -> {
    log.info("Sample log line");
}));
When the MDC context was not getting copied to the tasks, I looked at the logs. These are the logs I found. The first log line is executed on "(pool-16-thread-1)", but the other tasks are executed on "ForkJoinPool.commonPool-worker" threads. The first log line also has the MdcContextID. But since I am using a custom ThreadPool for submitting the task, all tasks should be executing in that custom ThreadPool.
16 Oct 2018 12:46:58,298 [INFO] 8fcfa6ee-d141-11e8-b84a-7da6cd73aa0b (pool-16-thread-1) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-11) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-4) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-13) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,298 [INFO] (ForkJoinPool.commonPool-worker-9) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-2) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
16 Oct 2018 12:46:58,299 [INFO] (ForkJoinPool.commonPool-worker-15) com.amazon.rss.activity.business.VariablesEvaluator: Sample log line
Is this supposed to happen or am I missing something?
There is no support for running a parallel stream in a custom thread pool. The stream happens to execute in another Fork/Join pool when the operation is initiated from a worker thread of that pool, but this does not seem to be a planned feature, as the Stream implementation will still use artifacts of the common pool internally for some operations.
In your case, it seems that the ExecutorService returned by MDCExecutors.newCachedThreadPool() is not a Fork/Join pool, so it does not exhibit this undocumented behavior at all.
There is a feature request, JDK-8032512, regarding more thread control. It’s open and, as far as I can see, without much activity.
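For completeness, here is a minimal sketch of the undocumented behavior described above, assuming nothing beyond the JDK: a parallel stream initiated from a worker thread of a custom ForkJoinPool runs its tasks in that pool. An arbitrary ExecutorService, as in the question, does not trigger this.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class ParallelStreamInCustomPool {
    public static void main(String[] args) throws Exception {
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);

        // A parallel stream initiated from a Fork/Join worker thread runs
        // in that pool rather than in the common pool.
        ForkJoinPool customPool = new ForkJoinPool(4);
        customPool.submit(() ->
                data.parallelStream().forEach(i ->
                        System.out.println(Thread.currentThread().getName() + ": " + i))
        ).get(); // block until the whole pipeline has finished

        customPool.shutdown();
    }
}

Even then, this is unspecified behavior, and as noted above, some operations will still touch the common pool internally.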
I am trying to execute the sample producer and consumer code on the Kinesis Streams website: http://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-download.html
I've downloaded the source, and I am using Eclipse to run it. I've included the necessary jar files, so I would think everything is set up to run.
When I run the processor code that consumes the records from Kinesis, however, I get this error:
Aug 02, 2016 8:35:14 PM com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker initialize
SEVERE: Caught exception when initializing LeaseCoordinator
Can anyone tell me what is causing this error?
EDIT: Here is the full stack trace from the error on Eclipse:
Aug 02, 2016 9:02:27 PM com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker initialize
SEVERE: Caught exception when initializing LeaseCoordinator
com.amazonaws.services.kinesis.leases.exceptions.DependencyException: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User: arn:aws:sts::500238854089:assumed-role/NORD-NONPROD-a0121-Team/AEXM is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:us-west-2:500238854089:table/amazon-kinesis-learning (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: BGME094FRUAEK2KFCPQIAM5U8VVV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.services.kinesis.leases.impl.LeaseManager.createLeaseTableIfNotExists(LeaseManager.java:124)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibLeaseCoordinator.initialize(KinesisClientLibLeaseCoordinator.java:172)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.initialize(Worker.java:380)
at com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker.run(Worker.java:324)
at com.amazonaws.services.kinesis.samples.stocktrades.processor.StockTradesProcessor.main(StockTradesProcessor.java:96)
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: User: arn:aws:sts::500238854089:assumed-role/NORD-NONPROD-a0121-Team/AEXM is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:us-west-2:500238854089:table/amazon-kinesis-learning (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: BGME094FRUAEK2KFCPQIAM5U8VVV4KQNSO5AEMVJF66Q9ASUAAJG)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1401)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:945)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:723)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:475)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:437)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:386)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2074)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2044)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:899)
at com.amazonaws.services.kinesis.leases.impl.LeaseManager.createLeaseTableIfNotExists(LeaseManager.java:117)
... 4 more
Your stack trace is telling you exactly what the problem is:
User: arn:aws:sts::500238854089:assumed-role/NORD-NONPROD-a0121-Team/AEXM is not authorized to perform: dynamodb:CreateTable on resource: arn:aws:dynamodb:us-west-2:500238854089:table/amazon-kinesis-learning
Make sure you've provided credentials to the DynamoDB client that have CreateTable permissions: the LeaseCoordinator attempts to create the lease table in DynamoDB.
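In KCL terms, that means the AWSCredentialsProvider handed to the worker configuration must resolve to an IAM identity that is allowed to call dynamodb:CreateTable. A minimal sketch, assuming KCL 1.x; the profile name and worker id are illustrative, and the application/stream names follow the learning sample:

import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;

// Assumed profile; its IAM identity needs dynamodb:CreateTable (plus the
// other KCL lease-table permissions) as well as access to the stream.
AWSCredentialsProvider credentials =
        new ProfileCredentialsProvider("profile-with-dynamodb-access");

KinesisClientLibConfiguration config = new KinesisClientLibConfiguration(
        "amazon-kinesis-learning",  // application name, also used as the lease table name
        "StockTradeStream",         // stream name used by the sample
        credentials,
        "worker-1");                // arbitrary worker id

Alternatively, keep the code unchanged and attach the missing dynamodb:CreateTable permission to the assumed role shown in the error message.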
It is actually possible to configure logging for Scala Kinesis Enrich by running the jar file like this:
java -jar -Dorg.slf4j.simpleLogger.defaultLogLevel=debug snowplow-kinesis-enrich-0.5.0 --config enrich.conf --resolver resolver.json
This should print all debug messages from the Kinesis Client Library. (Watch out, because the output will become very verbose.) Could you try rerunning with this change to the logging? Hopefully that will provide more clues about what's going wrong.
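If editing the command line is inconvenient, the same SimpleLogger property can be set programmatically instead. A sketch, assuming slf4j-simple is the bound backend (which the -D flag above implies); it must run before the first SLF4J logger is created:

// Must run before SLF4J initializes the slf4j-simple backend.
System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "debug");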
I have a very large dataset and want to update certain entity kinds. I am exploring the MapReduce library in Google App Engine, and I have followed the examples listed here:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/tree/master/java/example/src/com/google/appengine/demos/mapreduce/entitycount
This is basically what I am doing in my MapSpecification:
MapSpecification<Entity, Entity, Void> spec = new MapSpecification.Builder<>(
        new DatastoreKeyInput(query, 2),
        new UrlFlattenMapper(),
        new DatastoreOutput())
        .setJobName("Flatten URLs entities")
        .build();
My mapper basically performs the operations on the Entity and then emits it, for the DatastoreOutput writer to write it back into the database.
My problem is this: the entities are getting updated fine, and endSlice is also being called in my mapper task, but the job is not completing. I keep getting these errors:
[INFO] INFO: RetryHelper(28.07 ms, 1 attempts, java.util.concurrent.Executors$RunnableAdapter#7f0264e0): Attempt #1 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 1028 ms
[INFO] Apr 26, 2016 4:39:37 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(1.085 s, 2 attempts, java.util.concurrent.Executors$RunnableAdapter#7f0264e0): Attempt #2 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 2435 ms
[INFO] Apr 26, 2016 4:39:37 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(3.562 s, 3 attempts, java.util.concurrent.Executors$RunnableAdapter#6d7fcd47): Attempt #3 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=0, shardCount=2, lastWorkItem=Topics("jz63"), workerCallCount=289, workerTimeMillis=41536], inputExhausted=true, isFirstSlice=false]], sleeping for 3421 ms
[INFO] Apr 26, 2016 4:39:39 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(3.567 s, 3 attempts, java.util.concurrent.Executors$RunnableAdapter#7f0264e0): Attempt #3 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=1, shardCount=2, lastWorkItem=Topics("jzdh"), workerCallCount=297, workerTimeMillis=42513], inputExhausted=true, isFirstSlice=false]], sleeping for 3340 ms
[INFO] Apr 26, 2016 4:39:41 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
[INFO] INFO: RetryHelper(7.015 s, 4 attempts, java.util.concurrent.Executors$RunnableAdapter#6d7fcd47): Attempt #4 failed [java.lang.RuntimeException: Can't serialize object: MapOnlyShardTask[context=IncrementalTaskContext[jobId=3c041e68-5041-458c-994b-290cd941f8bb, shardNumber=0, shardCount=2, lastWorkItem=Topics("jz63"), workerCallCount=289, workerTimeMillis=41536], inputExhausted=true, isFirstSlice=false]], sleeping for 6941 ms
[INFO] Apr 26, 2016 4:39:42 PM com.google.appengine.tools.cloudstorage.RetryHelper doRetry
I haven't been able to get around this issue; any help or pointers on what I could be doing wrong would be greatly appreciated.
The culprit in my case was a small Datastore field I had used in the map job. I put a transient in front of the field, and the issue was solved.
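For illustration, a sketch of what that fix looks like; the class and field names are assumed, since the original mapper isn't shown. App Engine MapReduce serializes the shard task, including the mapper, between slices, so any non-serializable field has to be transient and re-created in beginSlice():

public class UrlFlattenMapper extends MapOnlyMapper<Entity, Entity> {

    // A DatastoreService handle is not Serializable; transient keeps it
    // out of the serialized mapper state.
    private transient DatastoreService datastore;

    @Override
    public void beginSlice() {
        // Re-acquire the handle after each deserialization.
        datastore = DatastoreServiceFactory.getDatastoreService();
    }

    @Override
    public void map(Entity entity) {
        // ... flatten the URL fields on the entity ...
        emit(entity);
    }
}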
I'm calling commitChanges, but I catch a java.lang.NullPointerException. Log:
...
INFO: --- transaction started.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.access.dbsync.CreateIfNoSchemaStrategy processSchemaUpdate
INFO: Full or partial schema detected, skipping tables creation
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQuery
INFO: SELECT NEXT_ID FROM AUTO_PK_SUPPORT WHERE TABLE_NAME = 'ARTIST'
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logSelectCount
INFO: === returned 1 row. - took 16 ms.
Aug 04, 2015 12:33:59 PM org.apache.cayenne.log.CommonsJdbcEventLogger logQueryError
INFO: *** error.
java.lang.NullPointerException
at com.relx.jdbc.jdbc2.LinterStatementImpl.getUpdateCount(LinterStatementImpl.java:419)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.execute(SQLTemplateAction.java:190)
at org.apache.cayenne.access.jdbc.SQLTemplateAction.performAction(SQLTemplateAction.java:124)
at org.apache.cayenne.access.DataNodeQueryAction.runQuery(DataNodeQueryAction.java:87)
at org.apache.cayenne.access.DataNode.performQueries(DataNode.java:280)
at org.apache.cayenne.dba.JdbcPkGenerator.longPkFromDatabase(JdbcPkGenerator.java:310)
at org.apache.cayenne.dba.JdbcPkGenerator.generatePk(JdbcPkGenerator.java:268)
at org.apache.cayenne.access.DataDomainInsertBucket.createPermIds(DataDomainInsertBucket.java:171)
at org.apache.cayenne.access.DataDomainInsertBucket.appendQueriesInternal(DataDomainInsertBucket.java:76)
at org.apache.cayenne.access.DataDomainSyncBucket.appendQueries(DataDomainSyncBucket.java:78)
at org.apache.cayenne.access.DataDomainFlushAction.preprocess(DataDomainFlushAction.java:188)
at org.apache.cayenne.access.DataDomainFlushAction.flush(DataDomainFlushAction.java:144)
at org.apache.cayenne.access.DataDomain.onSyncFlush(DataDomain.java:853)
at org.apache.cayenne.access.DataDomain$2.transform(DataDomain.java:817)
at org.apache.cayenne.access.DataDomain.runInTransaction(DataDomain.java:877)
at org.apache.cayenne.access.DataDomain.onSyncNoFilters(DataDomain.java:814)
at org.apache.cayenne.access.DataDomain$DataDomainSyncFilterChain.onSync(DataDomain.java:1031)
at org.apache.cayenne.access.DataDomain.onSync(DataDomain.java:785)
at org.apache.cayenne.access.DataContext.flushToParent(DataContext.java:817)
at org.apache.cayenne.access.DataContext.commitChanges(DataContext.java:756)
at CayenneTest2.main(CayenneTest2.java:61)
The AUTO_PK_SUPPORT table was created and filled by Apache Cayenne.
Why is this exception thrown?
From the stack trace, you are working with Cayenne v3.1. The code in question is here. Cayenne's SQLTemplateAction checks whether the result of the query is a ResultSet, and since the answer is "no", it assumes the result is an update count. So it tries to read the update count on line 190:
int updateCount = statement.getUpdateCount();
Somehow the underlying statement object (LinterStatementImpl) is not happy about that. I don't have access to the source code of the Linter DB driver, so I can't say what exactly is wrong, but the driver is not behaving the way Cayenne expects it to.
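To make the failure mode concrete, here is roughly the JDBC contract involved (a simplified sketch, not the actual Cayenne source):

// After execute(), JDBC lets the caller distinguish a ResultSet from an
// update count.
boolean isResultSet = statement.execute(sql);
if (!isResultSet) {
    // Per the JDBC contract this returns the update count, or -1 once there
    // are no more results. LinterStatementImpl apparently fails internally
    // here instead, producing the NullPointerException seen in the log.
    int updateCount = statement.getUpdateCount();
}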
Perhaps Linter is special enough to warrant its own Cayenne DbAdapter? Feel free to join the Cayenne dev mailing list to discuss what it takes to write one.
I'm using Gargoyle Software's HtmlUnit in Java to get webpages after their JavaScript has been run (as you can't do this with the built-in Java libraries). I'm using this in a webscraper, so this code runs many, many times over without issue. However, quite randomly, I get this error every hour or so while the program is running:
Jan 13, 2015 8:03:52 AM com.gargoylesoftware.htmlunit.WebConsole info
INFO: ThreadStats: couldn't get the catalog (undefined)
Jan 13, 2015 8:03:52 AM com.gargoylesoftware.htmlunit.WebConsole info
INFO: ThreadStats: couldn't get the catalog ()
The code this error appears on is:
final HtmlPage page = webClient.getPage(URIget.getSearchURL(ToSearch, board));
I'm pretty sure the error doesn't originate from my URIget.getSearchURL method, as it doesn't do anything fancy; it only sticks a few strings together.
Try putting these lines after creating your WebClient; they should prevent the messages from showing:
// Requires: com.gargoylesoftware.htmlunit.SilentCssErrorHandler,
// org.apache.commons.logging.LogFactory, java.util.logging.Level.

// Swallow CSS errors and route Apache Commons Logging to a no-op logger.
webClient.setCssErrorHandler(new SilentCssErrorHandler());
LogFactory.getFactory().setAttribute("org.apache.commons.logging.Log",
        "org.apache.commons.logging.impl.NoOpLog");

// Turn off the java.util.logging loggers used by HtmlUnit and HttpClient.
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("org.apache.commons.httpclient").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit.javascript.StrictErrorReporter").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit.javascript.host.ActiveXObject").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit.javascript.host.html.HTMLDocument").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit.html.HtmlScript").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware.htmlunit.javascript.host.WindowProxy").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("com.gargoylesoftware").setLevel(Level.OFF);
java.util.logging.Logger.getLogger("org.apache").setLevel(Level.OFF);