Failed to set up GCP repository using Elasticsearch operator - java

I'm launching 3 Elasticsearch nodes using the Elastic operator, and I tried to set up automated snapshots for these instances.
I followed this doc.
I minified the JSON of the service account key, created a file called gcs.client.default.credentials_file (no file extension), and added this file to a Kubernetes secret.
I then added the secureSettings.secretName field to the Elasticsearch cluster spec, pointing it at that secret, which was named gcs-credentials.
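For reference, the setup described above would look roughly like the sketch below (the secret name, cluster name, version, and node count come from the question and the logs; the service account file name is an assumption):

# Create the secret; the key inside it must be the secure-setting name:
# kubectl create secret generic gcs-credentials \
#   --from-file=gcs.client.default.credentials_file=service-account-key.json

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-cluster
spec:
  version: 8.5.0
  secureSettings:
    - secretName: gcs-credentials
  nodeSets:
    - name: node
      count: 3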
But I get this error in the logs:
{"#timestamp":"2022-12-26T18:45:40.037Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elasticsearch-cluster-es-node-1","elasticsearch.cluster.name":"elasticsearch-cluster","error.type":"java.lang.IllegalStateException","error.message":"failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]","error.stack_trace":"java.lang.IllegalStateException: failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:607)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundle(PluginsService.java:482)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:290)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:159)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.lambda$getPluginsServiceCtor$14(PluginsService.java:634)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:406)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\nCaused by: java.lang.reflect.InvocationTargetException\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:79)\n\tat java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)\n\tat java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:484)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:600)\n\t... 9 more\nCaused by: java.lang.IllegalArgumentException: failed to load GCS client credentials from [gcs.client.default.credentials_file]\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:265)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.getClientSettings(GoogleCloudStorageClientSettings.java:221)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.load(GoogleCloudStorageClientSettings.java:209)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.reload(GoogleCloudStoragePlugin.java:88)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.<init>(GoogleCloudStoragePlugin.java:36)\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67)\n\t... 
12 more\nCaused by: java.io.IOException: Invalid PKCS#8 data.\n\tat com.google.auth.oauth2.ServiceAccountCredentials.privateKeyFromPkcs8(ServiceAccountCredentials.java:496)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromPkcs8(ServiceAccountCredentials.java:474)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromJson(ServiceAccountCredentials.java:212)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:548)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:520)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.lambda$loadCredential$13(GoogleCloudStorageClientSettings.java:257)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:569)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(SocketAccess.java:33)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:256)\n\t... 17 more\n"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch-cluster.log

Try adding the following lines to your configuration (for each Elasticsearch node):
elasticsearch01:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
  ...
  ulimits:
    memlock:
      soft: -1
      hard: -1
Also check this link on Elasticsearch for more detailed information.

Related

Spring Boot Redisson not able to read clusterServersConfig

Here is the application.yml I am using for my Spring WebFlux project:
redis:
  redisson:
    config: |
      clusterServersConfig:
        idleConnectionTimeout: 10000
        connectTimeout: ${REDISSON_CONNECT_TIMEOUT:20000}
        timeout: ${REDISSON_TIMEOUT:3000}
        retryAttempts: ${REDISSON_RETRY_ATTEMPTS:3}
        retryInterval: ${REDISSON_RETRY_INTERVAL:1500}
        subscriptionConnectionPoolSize: ${REDISSON_SUBSCRIPTION_POOL_SIZE:50}
        slaveConnectionMinimumIdleSize: ${REDISSON_SLAVE_MIN_IDLE_SIZE:24}
        slaveConnectionPoolSize: ${REDISSON_SLAVE_POOL_SIZE:48}
        masterConnectionMinimumIdleSize: ${REDISSON_MASTER_MIN_IDLE_SIZE:24}
        masterConnectionPoolSize: ${REDISSON_MASTER_POOL_SIZE:48}
        nodeAddresses:
          - "rediss://${APPS_REDIS:-}:${APPS_REDIS_PORT:6379}"
        password: ${APPS_REDIS_SECRET:-}
      threads: ${REDISSON_THREADS:16}
      nettyThreads: ${REDISSON_NETTY_THREADS:96}
But whenever I start the project on my laptop, this error comes up:
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'clusterServersConfig': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
I am not sure why it is saying clusterServersConfig is an unrecognized token. It is mentioned in the official doc as well, and there is an example of it there.
At first I thought it might be because I am running Redis locally on my M1 Mac, so Redis clusters aren't created by default. I even tried enabling clusters in redis.conf and running a Redis cluster with 3 nodes using redis-cli, but this still happens. I have tried almost everything I could think of or find on the net. Any help appreciated :)

How can I acquire JanusGraphManagement over a remote connection?

I have a docker container running the gremlin-server.
It was started via:
./bin/gremlin-server.sh conf/gremlin-server/gremlin-server.yaml
From within a docker container, running this image:
https://hub.docker.com/r/janusgraph/janusgraph
The server is up and is listening at port 8182
$ docker ps
6019adda6081 janusgraph/janusgraph "docker-entrypoint.s…" 2 days ago Up 26 hours 0.0.0.0:8182->8182/tcp
I am interested in using a schema and indexes.
Janus offers this here: https://docs.janusgraph.org/basics/schema/
The following is the configuration I use to attempt to connect to the gremlin-server:
AbstractConfiguration config = new BaseConfiguration();
config.setListDelimiter('/');
// contents of conf/remote-graph.properties
config.setProperty("gremlin.remote.driver.sourceName", "g");
config.setProperty("gremlin.remote.remoteConnectionClass", "org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection");
// contents of conf/remote-objects.yaml:
config.setProperty("clusterConfiguration.hosts", databaseUrl);
config.setProperty("clusterConfiguration.port", 8182);
config.setProperty("clusterConfiguration.serializer.className", "org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0/");
config.setProperty("storage.backend", "cql");
config.setProperty("clusterConfiguration.serializer.config.ioRegistries", "org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry");
When I call
GraphTraversalSource g = traversal().withRemote(config);
I get a traversal source and everything seems fine. However, to use the management stuff that Janus provides, I seem to need a JanusGraphManagement object. I cannot get the generic Graph object above and cast it to a JanusGraph. The docs suggest using a JanusGraphFactory: https://docs.janusgraph.org/basics/configuration/#janusgraphfactory
So I call
JanusGraph janusGraph = JanusGraphFactory.open(config);
I get the following stack trace:
Exception in thread "main" java.lang.IllegalArgumentException: Could not find implementation class: org.janusgraph.diskstorage.cql.CQLStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:60)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:440)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:411)
at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:50)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:161)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:132)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:112)
at com.activitystream.database.GraphMigration.migrateDatabase(GraphMigration.java:69)
at com.activitystream.runners.persistence.DataStores.migrateDatabase(DataStores.java:27)
at com.activitystream.runners.persistence.EntityPersistenceRunner.main(EntityPersistenceRunner.java:23)
Caused by: java.lang.ClassNotFoundException: org.janusgraph.diskstorage.cql.CQLStoreManager
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:56)
... 9 more
Is it possible to modify the schema over a remote connection?
If it is not possible, how can one modify the schema?
Any insight would be appreciated.
You basically have two choices - either:
Interact with your JanusGraphManagement object by way of scripts sent to Gremlin Server (typically by way of a session but I guess you could package an entire "management script" together and submit it as one request) or
Bypass Gremlin Server and instantiate your JanusGraphManagement object locally, as directed in the JanusGraph documentation.
There is no way to return a JanusGraphManagement object to your client, as it is not a serializable object that can be sent back from the server.
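For the first option, here is a minimal sketch using the TinkerPop driver, assuming the stock janusgraph/janusgraph image where the graph is bound to the server-side variable graph; the property key is just an example:

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

public class RemoteSchemaUpdate {
    public static void main(String[] args) {
        // Connect to the same Gremlin Server the remote traversal source uses.
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();
        // The whole management transaction is packaged as one Groovy script
        // and executed on the server, where 'graph' is the JanusGraph instance.
        String script =
            "mgmt = graph.openManagement()\n" +
            "if (mgmt.getPropertyKey('name') == null) {\n" +
            "  mgmt.makePropertyKey('name').dataType(String.class).make()\n" +
            "}\n" +
            "mgmt.commit()\n" +
            "'schema updated'";
        System.out.println(client.submit(script).one().getString());
        cluster.close();
    }
}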

Elastic BeanStalk loading configuration from another region failed

I uploaded a saved configuration file from a Beanstalk application in one region to a Beanstalk application in another region.
While loading that config I got this error:
Stack named 'awseb-e-sme7w3eym3-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBLoadBalancer]
Creating load balancer failed Reason: Property Listeners cannot be empty
Any idea about this issue?
See the config file:
AWSConfigurationTemplateVersion: 1.1.0.0
EnvironmentConfigurationMetadata:
  DateCreated: '1580272974000'
  DateModified: '1580273310143'
  Description: xxxxxxxxxxxxxxxxxxxxx
EnvironmentTier:
  Name: WebServer
  Type: Standard
OptionSettings:
  AWSEBAutoScalingGroup.aws:autoscaling:updatepolicy:rollingupdate:
    MaxBatchSize: '1'
    MinInstancesInService: '1'
    RollingUpdateEnabled: true
    RollingUpdateType: Health
  AWSEBAutoScalingLaunchConfiguration.aws:autoscaling:launchconfiguration:
    EC2KeyName: xxxxxxxxxxxxxxxxxxx
  AWSEBCloudwatchAlarmHigh.aws:autoscaling:trigger:
    UpperThreshold: '60'
  AWSEBCloudwatchAlarmLow.aws:autoscaling:trigger:
    BreachDuration: '2'
    LowerThreshold: '25'
    MeasureName: CPUUtilization
    Period: '1'
    Statistic: Maximum
    Unit: Percent
  AWSEBLoadBalancerSecurityGroup.aws:ec2:vpc:
    VPCId: vpc-xxxxxxxxxxxxxxxx
  AWSEBV2LoadBalancerListener.aws:elbv2:listener:default:
    ListenerEnabled: false
  AWSEBV2LoadBalancerListener443.aws:elbv2:listener:443:
    SSLCertificateArns: arn:aws:acm:us-east-2:xxxxxxxxxxx:certificate/xxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxxxx
  AWSEBV2LoadBalancerTargetGroup.aws:elasticbeanstalk:environment:process:default:
    HealthCheckPath: /rest/account/ping
    MatcherHTTPCode: '200'
    Port: '80'
    Protocol: HTTP
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    SecurityGroups:
      - sg-xxxxxxxxxxxxx
  aws:ec2:instances:
    InstanceTypes: t2.small
  aws:ec2:vpc:
    ELBSubnets: subnet-xxxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxx
    Subnets: subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx
  aws:elasticbeanstalk:application:environment:
    JDBC_CONNECTION_STRING: jdbc:mysql://xxxxxxxxxxxxxxxxxxxxxxxxxxxx?user=xxxxxxxx&password=xxxxxxxxxxx&rewriteBatchedStatements=true&characterEncoding=UTF-8
    aws.accessKeyId: xxxxxxxxxxxxxxxxxx
    aws.secretKey: xxxxxxxxxxxxxxxxxxxx
    com.aws.secretManger.secret.name: xxxxxxxxxxxxxxx
    com.aws.secretManger.secret.region: us-east-2
    com.decsond.loggly.token: xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx#xxxxx
    com.decsond.metakey: xxxxxxxxxxxxxxxxx/XXX==
    com.decsond.mode: debug
    com.decsond.server.db.environment: aws
    com.decsond.server.dpBinaryColumn: xxxxxxxxxxxx
    com.decsond.server.environment: xxxxxxxxxx
    com.decsond.server.type: pms
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    JVM Options: -XX:+CMSClassUnloadingEnabled -Dmvel.disable.jit=true -Ddrools.permgenThreshold=0
    Xms: 512m
    Xmx: 1024m
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
    ServiceRole: arn:aws:iam::xxxxxxxxxxxxxx:role/aws-elasticbeanstalk-service-role
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: SAT:03:01
  aws:elasticbeanstalk:managedactions:platformupdate:
    InstanceRefreshEnabled: true
    UpdateLevel: minor
  aws:elasticbeanstalk:xray:
    XRayEnabled: true
  aws:elbv2:listener:443:
    DefaultProcess: default
    ListenerEnabled: true
    Protocol: HTTPS
    Rules: ''
    SSLPolicy: ELBSecurityPolicy-2016-08
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8.5 with Java 8 running on 64bit Amazon Linux/3.3.1
Any idea about the issue?
The most likely reason is that you are referencing objects in the region the config was saved from.
Is this the first EB application / environment in the new region?
If it is, it's worth first creating a test application and environment using the features you want ... that will give EB a chance to create all the region-specific behind-the-scenes magic it relies on.

Kafka Stream Subscription Error - Invalid Version

When attempting to connect to a topic from a Java Jetty microservice, I'm getting this Kafka internal version mismatch error:
stream-thread [App-94d44dcd-f1d4-49a6-9dd3-8d4eee06f82a-StreamThread-1] Encountered the following error during processing:
java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
at org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.<init>(SubscriptionInfo.java:67)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Any ideas on what could cause such an exception?
I had come across this error myself, and it is most likely because you have used a non-unique APPLICATION_ID_CONFIG and/or CLIENT_ID_CONFIG:
// Give the Streams application a unique name. The name must be unique in the Kafka cluster
// against which the application is run.
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "my-client");

Apache-Kafka-Connect, Confluent-HDFS-Connector, Unknown-magic-byte

I am using the Confluent HDFS Connector to move data from Kafka topics to an HDFS log file. But when I run these commands:
./bin/connect-standalone \
etc/schema-registry/connect-avro-standalone.properties \
etc/kafka-connect-hdfs/quickstart-hdfs.properties
I get the following error. How can I solve this problem, and what is the reason for it?
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
[2017-06-03 13:44:41,895] ERROR Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:142)
This happens if you are trying to read data with the connector and have set key.converter and value.converter to the AvroConverter, but your input topic has data that was not serialized by the same AvroSerializer that uses the Schema Registry.
You have to match your converter to the input data. In other words, use a converter that can deserialize the data actually in the topic.
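For example, if the topic really does contain Confluent Avro data, the standalone worker properties would look roughly like this (the Schema Registry URL is an assumption for a local quickstart):

# connect-avro-standalone.properties (relevant converter settings only)
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081

If the topic instead holds plain JSON or raw strings, swap the AvroConverter for org.apache.kafka.connect.json.JsonConverter or org.apache.kafka.connect.storage.StringConverter.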
