I configured S3 in flink-conf.yaml and it worked. But now I need to override that S3 configuration programmatically in my Flink job.
Configuration conf = new Configuration();
conf.setString("s3.access-key", "xxxxxxxxxxxxxxxx");
conf.setString("s3.secret-key", "xxxxxxxxxxxxxxxx");
conf.setString("s3.endpoint", "http://localhost:9000");
conf.setBoolean("s3.path.style.access", true);
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);
System.out.println(env.getConfiguration());
The program printed the correct configuration properties.
{jobmanager.execution.failover-strategy=region, jobmanager.rpc.address=localhost, jobmanager.bind-host=localhost,
s3.secret-key=xxxxxxxxxxxxxxxx, execution.savepoint.ignore-unclaimed-state=false, s3.endpoint=http://localhost:9000,
s3.access-key=xxxxxxxxxxxxxxxxxxxx, s3.connection.maximum=1000, s3.path.style.access=true,................ }
But it still fails with the error "No AWS Credentials provided".
Caused by: org.apache.hadoop.fs.s3a.auth.NoAuthWithAWSException: No AWS Credentials provided by SimpleAWSCredentialsProvider EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : com.amazonaws.SdkClientException: Failed to connect to service endpoint:
at org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:159)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1257)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:833)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:783)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5259)
at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:6220)
at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:6193)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5244)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5206)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1438)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1374)
at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:392)
at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
... 47 more
The program only runs when S3 is configured in flink-conf.yaml; the configuration passed in from the program is not picked up.
Please help me!
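Not part of the original question, but as a sketch of one direction to try: Flink's S3 filesystems (flink-s3-fs-hadoop / flink-s3-fs-presto) are initialized from the cluster configuration, so settings passed only to the per-job Configuration may never reach them. Assuming a local or standalone setup where it is acceptable to (re)initialize the filesystems in the client JVM, something like this might work:

Configuration conf = new Configuration();
conf.setString("s3.access-key", "xxxxxxxxxxxxxxxx");
conf.setString("s3.secret-key", "xxxxxxxxxxxxxxxx");
conf.setString("s3.endpoint", "http://localhost:9000");
conf.setBoolean("s3.path.style.access", true);

// Re-initialize Flink's FileSystem factories with this configuration before any
// s3:// path is used; otherwise the values from flink-conf.yaml stay in effect.
// (org.apache.flink.core.fs.FileSystem; passing null means no plugin manager.)
org.apache.flink.core.fs.FileSystem.initialize(conf, null);

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(conf);

This is only a sketch: on a real cluster the S3 filesystem runs inside the task managers, so the credentials still have to reach the cluster configuration there.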
Related
I'm launching 3 Elasticsearch nodes using the Elastic operator, and I tried to set up automated snapshots for these instances.
I followed this doc.
I minified the JSON of the service-account key, created a file called gcs.client.default.credentials_file (no file extension), and added this file to a Kubernetes secret.
I then added the secureSettings.secretName field to the spec of the Elastic cluster, pointing it at the secret name, which was gcs-credentials.
But I get this error in the logs:
{"#timestamp":"2022-12-26T18:45:40.037Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elasticsearch-cluster-es-node-1","elasticsearch.cluster.name":"elasticsearch-cluster","error.type":"java.lang.IllegalStateException","error.message":"failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]","error.stack_trace":"java.lang.IllegalStateException: failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:607)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundle(PluginsService.java:482)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:290)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:159)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.lambda$getPluginsServiceCtor$14(PluginsService.java:634)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:406)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\nCaused by: java.lang.reflect.InvocationTargetException\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:79)\n\tat java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)\n\tat java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:484)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:600)\n\t... 9 more\nCaused by: java.lang.IllegalArgumentException: failed to load GCS client credentials from [gcs.client.default.credentials_file]\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:265)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.getClientSettings(GoogleCloudStorageClientSettings.java:221)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.load(GoogleCloudStorageClientSettings.java:209)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.reload(GoogleCloudStoragePlugin.java:88)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.<init>(GoogleCloudStoragePlugin.java:36)\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67)\n\t... 
12 more\nCaused by: java.io.IOException: Invalid PKCS#8 data.\n\tat com.google.auth.oauth2.ServiceAccountCredentials.privateKeyFromPkcs8(ServiceAccountCredentials.java:496)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromPkcs8(ServiceAccountCredentials.java:474)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromJson(ServiceAccountCredentials.java:212)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:548)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:520)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.lambda$loadCredential$13(GoogleCloudStorageClientSettings.java:257)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:569)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(SocketAccess.java:33)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:256)\n\t... 17 more\n"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch-cluster.log
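For reference, a minimal sketch of the setup described above (the secret name, cluster name, version, and node count come from the post; everything else is assumed):

kubectl create secret generic gcs-credentials \
  --from-file=gcs.client.default.credentials_file

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-cluster
spec:
  version: 8.5.0
  secureSettings:
  - secretName: gcs-credentials
  nodeSets:
  - name: node
    count: 3

The credentials file must contain the service-account JSON with the private key intact; if minification mangled the \n sequences inside the private_key field, that would produce exactly the "Invalid PKCS#8 data" error shown above.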
Try adding the following lines to your configuration (for each Elasticsearch node):
elasticsearch01:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
  ...
  ulimits:
    memlock:
      soft: -1
      hard: -1
Also check this link on Elasticsearch for more detailed information.
I am trying to use Azure storage as the checkpoint location in my Spark Structured Streaming application.
I have seen a few articles about reading from and writing to Azure storage, but I have not seen anyone explain how to use Azure storage as a checkpoint location. Below is my simple code: it reads from one Kafka topic and writes back to another, with the checkpoint location added.
SparkConf conf = new SparkConf().setMaster("local[*]");
conf.set(
    "fs.azure.account.key.<storage-name>.blob.core.windows.net",
    "<storage-key>");
conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
SparkSession spark = SparkSession.builder().appName("app-name").config(conf).getOrCreate();

Dataset<Row> df = spark
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "input")
    .load();

StreamingQuery ds = df
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("topic", "output")
    .option("checkpointLocation", "wasbs://<container-name>@<storage-account-name>.blob.core.windows.net/<directory-name>")
    .start();

ds.awaitTermination();
The Azure connection details are correct. When I run this application, I can see one file (metadata) getting created at the specified Azure storage location. However, the app crashes after a few seconds. Below is the exception.
Exception in thread "main" java.lang.IllegalArgumentException: Self-suppression not permitted
at java.lang.Throwable.addSuppressed(Throwable.java:1043)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:818)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:339)
at org.apache.commons.io.IOUtils.closeQuietly(IOUtils.java:298)
at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:85)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:124)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$2.apply(StreamExecution.scala:122)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.execution.streaming.StreamExecution.<init>(StreamExecution.scala:122)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.<init>(MicroBatchExecution.scala:49)
at org.apache.spark.sql.streaming.StreamingQueryManager.createQuery(StreamingQueryManager.scala:258)
at org.apache.spark.sql.streaming.StreamingQueryManager.startQuery(StreamingQueryManager.scala:299)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:296)
at com.test.Test.main(Test.java:73)
Caused by: java.io.IOException: Stream is already closed.
at com.microsoft.azure.storage.blob.BlobOutputStreamInternal.close(BlobOutputStreamInternal.java:332)
at java.io.FilterOutputStream.close(FilterOutputStream.java:159)
at org.apache.hadoop.fs.azure.NativeAzureFileSystem$NativeAzureFsOutputStream.close(NativeAzureFileSystem.java:818)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:320)
at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.close(WriterBasedJsonGenerator.java:883)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3561)
at com.fasterxml.jackson.databind.ObjectMapper.writeValue(ObjectMapper.java:2909)
at org.json4s.jackson.Serialization$.write(Serialization.scala:27)
at org.apache.spark.sql.execution.streaming.StreamMetadata$.write(StreamMetadata.scala:78)
... 9 more
Let me know if anything needs to be configured to enable Azure storage as the checkpoint location, or whether a version conflict is causing this problem.
Spark: 2.3.0
hadoop-azure : 2.7
azure-storage : 8.0
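Not part of the original question, but one thing worth checking, sketched here under the assumption that the Azure keys need to end up in Hadoop's Configuration rather than only in SparkConf: plain fs.* keys set on SparkConf are generally only forwarded to Hadoop when prefixed with spark.hadoop., or they can be set on the Hadoop configuration directly after the session is created.

SparkSession spark = SparkSession.builder()
        .appName("app-name")
        .master("local[*]")
        .getOrCreate();

// Set the Azure filesystem properties on the Hadoop configuration Spark actually uses.
org.apache.hadoop.conf.Configuration hadoopConf = spark.sparkContext().hadoopConfiguration();
hadoopConf.set("fs.azure.account.key.<storage-name>.blob.core.windows.net", "<storage-key>");
hadoopConf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");

// Equivalent alternative: set "spark.hadoop.fs.azure.account.key...." on SparkConf instead.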
I have the following problem when running this scheduled task.
@Singleton
public class TaskScheduler {

    private static final Logger LOG = LoggerFactory.getLogger(TaskScheduler.class);

    @Inject
    private BuildLayerJob buildLayerJob;

    @Scheduled(fixedDelay = "30s", initialDelay = "30s")
    public void loadRegistriesDescriptions(){
        try {
            LOG.info("Cargando lista de registries cada 30s."); // "Loading registries list every 30s."
            buildLayerJob.getBuildLayer().loadRegistries();
        }
        catch(Exception exception) {
            LOG.error("Error cargando lista de registries cada 30s: " + exception.getMessage()); // "Error loading registries list every 30s: ..."
            //exception.printStackTrace();
        }
    }
}
The first execution works fine, but when the delay expires and the task runs again, it throws the following error.
20:26:59.291 [pool-1-thread-6] ERROR i.m.s.DefaultTaskExceptionHandler - Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1719)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:981)
at io.micronaut.configuration.lettuce.health.$RedisHealthIndicatorDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:984)
at io.micronaut.management.health.monitor.$HealthMonitorTaskDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2407)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2393)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2084)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2058)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:618)
at io.micronaut.scheduling.processor.ScheduledMethodProcessor.lambda$process$5(ScheduledMethodProcessor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:305)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:234)
at io.lettuce.core.RedisClient.connect(RedisClient.java:207)
at io.lettuce.core.RedisClient.connect(RedisClient.java:192)
at io.micronaut.configuration.lettuce.AbstractRedisClientFactory.redisConnection(AbstractRedisClientFactory.java:51)
at io.micronaut.configuration.lettuce.DefaultRedisClientFactory.redisConnection(DefaultRedisClientFactory.java:52)
at io.micronaut.configuration.lettuce.$DefaultRedisClientFactory$RedisConnection1Definition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
... 31 common frames omitted
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:6379
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I understand this points to a problem connecting to Redis, but the microservice deployed on GCP keeps producing the same error.
app.yaml
runtime: java11
service: default
instance_class: B2

env_variables:
  LAYERS_SERVER_PORT: 8080
  REDIS_FIXEDDELAY: 1s
  REDISA_URL: "redis://A"
  REDISB_URL: "redis://B"
  REDISC_URL: "redis://C"
  REDISD_URL: "redis://D"

basic_scaling:
  max_instances: 1
  idle_timeout: 270s

vpc_access_connector:
  name: "projects/example/locations/us-central1/connectors/example"
Local settings (application.yml):
micronaut:
  application:
    name: example
  server:
    port: ${EXAMPLE_SERVER_PORT:3000}
    cors:
      enabled: true
---
redis:
  servers:
    REDISA:
      uri: redis://IP_A
    REDISB:
      uri: redis://IP_B
    REDISC:
      uri: redis://IP_C
    REDISD:
      uri: redis://IP_D
Repository layers.server.repo.InfoRepositoryImpl:
@Singleton
public class InfoRepositoryImpl implements InfoRepository {

    private BuildLayerJob buildLayerJob;

    @Inject @Named("REDISB") RedisAsyncCommands<String, String> reddisConnectionB;
    @Inject @Named("REDISA") RedisAsyncCommands<String, String> reddisConnectionA;

    private static final Logger LOG = LoggerFactory.getLogger(InfoRepositoryImpl.class);

    public InfoRepositoryImpl(BuildLayerJob buildLayerJob) {
        this.buildLayerJob = buildLayerJob;
    }

    // ... implementation of methods that process information with Redis
}
Please check whether you have the io.micronaut.redis:micronaut-redis-lettuce dependency on your classpath / in your build file.
By default, Micronaut assumes the Redis server is at localhost:6379; health checks are enabled by default when redis-lettuce is activated, so it will keep probing that address.
If you are using Micronaut's application.yml, you need to provide a server URI that is reachable from the running app.
Micronaut redis
Example - application.yml
redis:
  uri: redis://localhost
  ssl: true
  timeout: 30s
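Applied to the multi-server setup from the question, a sketch (assuming the REDIS*_URL environment variables defined in app.yaml are meant to supply the URIs of the named servers):

redis:
  servers:
    REDISA:
      uri: ${REDISA_URL}
    REDISB:
      uri: ${REDISB_URL}
    REDISC:
      uri: ${REDISC_URL}
    REDISD:
      uri: ${REDISD_URL}

With no default connection configured, the health indicator may still probe localhost:6379, so the lettuce dependency and health-check behaviour mentioned above are worth double-checking as well.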
You can also use the connection-string patterns below to provide details about the Redis server.
Redis Standalone
redis://[[username:]password@]host[:port][/database][?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]

Redis Standalone (SSL)
rediss://[[username:]password@]host[:port][/database][?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]

Redis Standalone (Unix Domain Sockets)
redis-socket://[[username:]password@]path[?[timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]]
For more details on the connection string, see: Redis connection string.
Micronaut redis configuration properties
Such errors can occur when that data source is auto-configured. You can disable Redis auto-configuration if you're not using it in the application. If you do need Redis, then you should set spring.redis.host and spring.redis.port.
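A sketch of the Spring Boot settings mentioned above (this only applies if Spring's Redis auto-configuration is actually on the classpath; the question itself is Micronaut):

# application.yml: disable Redis auto-configuration if Redis is not used
spring:
  autoconfigure:
    exclude: org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration

# ...or, if Redis is needed, point it at a reachable server instead
spring:
  redis:
    host: my-redis-host
    port: 6379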
In a Spring Boot application, I want to connect to 2 different Kafka servers simultaneously. I am using KafkaAdmin and AdminClient to make the connections and perform CRUD operations.
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    String krb5location = krb5Location;
    System.setProperty("java.security.krb5.conf", krb5location);
    System.setProperty("java.security.auth.login.config", jaasConfigLocation);

    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassowrd);
    return new KafkaAdmin(configs);
}

@Bean
@PostConstruct
public AdminClient config() {
    return AdminClient.create(kafkaAdmin.getConfig());
}
Server 2 is configured similarly in the same Spring Boot application.
If I load the configuration of both Kafka servers at once during app initialization, the following error is displayed:
>>>KRBError:
cTime is Sun Jun 03 14:23:02 IST 2001 991558382000
sTime is Tue Nov 20 10:46:53 IST 2018 1542691013000
suSec is 512097
error code is 7
error Message is Server not found in Kerberos database
cname is config1@servername.com
sname is config2@servernname.com
msgType is 30
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:474)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006)
at java.lang.Thread.run(Thread.java:748)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 22 more
2018-11-20 10:46:53.605 ERROR 8672 --- [| adminclient-4] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-4] Connection to node -1 failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.
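Not part of the original post, but for context: java.security.krb5.conf and java.security.auth.login.config are JVM-global system properties, so two KafkaAdmin beans setting them overwrite each other's JAAS settings. A hedged sketch of keeping the Kerberos login per client instead, via the sasl.jaas.config client property (the keytab path and principal are placeholders):

@Bean
public KafkaAdmin kafkaAdminCluster1() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("sasl.mechanism", "GSSAPI");
    configs.put("sasl.kerberos.service.name", "kafka"); // must match the broker principal's primary
    configs.put("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"/path/to/cluster1.keytab\" "
            + "principal=\"config1@SERVERNAME.COM\";");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassowrd);
    return new KafkaAdmin(configs);
}

A second bean for the other cluster can then carry its own keytab and principal without touching global system properties (both realms still need to be present in the single krb5.conf).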
I'm trying to connect to a remote HDFS cluster. I've read some documentation and getting-started guides, but I didn't find the best way to do it.
Situation: I have HDFS on xxx-something.com. I can connect to it via SSH and everything works.
But what I'm trying to do is get files from it onto my local machine.
What I've done:
I've created core-site.xml in my conf folder (I'm building a Play! application). There I've set the fs.default.name property to hdfs://xxx-something.com:8020 (not sure about the port).
Then I'm trying to launch a simple test:
val conf = new Configuration()
conf.addResource(new Path("conf/core-site.xml"))
val fs = FileSystem.get(conf)
val status = fs.listStatus(new Path("/data/"))
And I'm getting errors:
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.trash.interval; Ignoring.
13:56:09.012 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: hadoop.tmp.dir; Ignoring.
13:56:09.013 [specs2.DefaultExecutionStrategy1] WARN org.apache.hadoop.conf.Configuration - conf/core-site.xml:a attempt to override final parameter: fs.checkpoint.dir; Ignoring.
13:56:09.022 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.fs.FileSystem - Creating filesystem for hdfs://xxx-something.com:8021
13:56:09.059 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.conf.Configuration - java.io.IOException: config()
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:226)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:213)
at org.apache.hadoop.security.SecurityUtil.<clinit>(SecurityUtil.java:53)
at org.apache.hadoop.net.NetUtils.<clinit>(NetUtils.java:62)
Thanks in advance!
Update:
Probably the port was wrong. Now I've set it to 22; I'm still getting the same errors, but after 3 attempts it does say:
14:01:01.877 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - Connecting to xxx-something.com/someIp:22
14:01:02.187 [specs2.DefaultExecutionStrategy1] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva sending #0
14:01:02.188 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva: starting, having connections 1
14:01:02.422 [IPC Client (47) connection to xxx-something.com/someIp:22 from britva] DEBUG org.apache.hadoop.ipc.Client - IPC Client (47) connection to xxx-something.com/someIp:22 from britva got value #1397966893
And afterwards:
Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
java.io.IOException: Call to xxx-something.com/someIp:22 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1103)
at org.apache.hadoop.ipc.Client.call(Client.java:1071)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:122)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:33)
at HdfsSpec$$anonfun$1$$anonfun$apply$3.apply(HdfsSpec.scala:17)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.executeBody(MyNotifierRunner.scala:16)
at testingSupport.specs2.MyNotifierRunner$$anon$2$$anon$1.execute(MyNotifierRunner.scala:16)
Caused by: java.io.EOFException
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:807)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:745)
What does it mean?
You'll need to find the fs.default.name property in $HADOOP_HOME/conf/core-site.xml on the server running the NameNode (the HDFS master) to get the correct port. It might be 8020, or it could be something else, but that's what you should use. Port 22 is SSH, not the NameNode's RPC port, which is why the call ends with an EOFException. Also make sure there's no firewall between you and the server blocking connections on that port.
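As a sketch of the suggestion above, in Java against the same Hadoop API the Scala test uses (assuming the NameNode really listens on 8020; substitute whatever port the server's core-site.xml reports):

// Point the client at the NameNode RPC address, not the SSH port.
Configuration conf = new Configuration();
conf.set("fs.default.name", "hdfs://xxx-something.com:8020"); // older key; fs.defaultFS on newer Hadoop
FileSystem fs = FileSystem.get(conf);
for (FileStatus status : fs.listStatus(new Path("/data/"))) {
    System.out.println(status.getPath());
}

The classes involved are org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.FileStatus and org.apache.hadoop.fs.Path.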