Trying to connect to Amazon EC2 through Java with keypair

I'm trying to connect to an instance of Amazon's EC2 cloud using a keypair, and I've been having a good amount of trouble. Ultimately I want to transfer a file from my local machine to the cloud, but I still need to get connected first. Here's what I have now:
public static void main(String[] args)
        throws IOException, ClassNotFoundException, UserAuthException {
    SSHClient ssh = new SSHClient();
    // Pin the server's host key fingerprint instead of using a known_hosts lookup
    ssh.addHostKeyVerifier("2a:a5:fe:10:51:2c:e0:de:2e:99:cc:a4:7d:43:01:0f");
    ssh.connect("address for cloud instance");
    try {
        // Authenticate with the EC2 keypair file (this should be the private key)
        ssh.authPublickey("ec2-user", "C:\\Users\\Pat\\Desktop\\public_key.pem");
        final String src = "C:\\Users\\Pat\\Desktop\\test file.txt";
        ssh.newSCPFileTransfer().upload(new FileSystemFile(src), "/home/ec2-user/");
    } finally {
        ssh.disconnect();
    }
}
Previously, I ran this without the addHostKeyVerifier line, and it gave me an error saying that it couldn't verify the ssh-rsa key with the fingerprint that is now in that call. The strange thing is that the Amazon account shows a different fingerprint for this keypair; could that have anything to do with this? When I run the above code I get this error:
[main] INFO net.schmizz.sshj.common.SecurityUtils - BouncyCastle registration succeeded
[main] WARN net.schmizz.sshj.DefaultConfig - Disabling high-strength ciphers: cipher strengths apparently limited by JCE policy
[main] INFO net.schmizz.sshj.transport.TransportImpl - Client identity string: SSH-2.0-SSHJ_0_8_1_SNAPSHOT
[main] INFO net.schmizz.sshj.transport.TransportImpl - Server identity string: SSH-2.0-OpenSSH_6.2
[main] INFO net.schmizz.sshj.transport.TransportImpl - Disconnected - BY_APPLICATION
Exception in thread "main" java.lang.NoClassDefFoundError: org/bouncycastle/openssl/EncryptionException
at net.schmizz.sshj.userauth.keyprovider.PKCS8KeyFile$Factory.create(PKCS8KeyFile.java:45)
at net.schmizz.sshj.userauth.keyprovider.PKCS8KeyFile$Factory.create(PKCS8KeyFile.java:40)
at net.schmizz.sshj.common.Factory$Named$Util.create(Factory.java:74)
at net.schmizz.sshj.SSHClient.loadKeys(SSHClient.java:479)
at net.schmizz.sshj.SSHClient.loadKeys(SSHClient.java:438)
at net.schmizz.sshj.SSHClient.authPublickey(SSHClient.java:349)
at scpupload.SCPUpload.main(SCPUpload.java:28)
Caused by: java.lang.ClassNotFoundException: org.bouncycastle.openssl.EncryptionException
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
Any and all help is appreciated.
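The NoClassDefFoundError at the bottom of the trace means the BouncyCastle provider jar is missing from the runtime classpath. A hedged sketch of the Maven dependency that supplied org.bouncycastle.openssl in the sshj-0.8 era (artifact id and version are assumptions; match whatever version your sshj build was compiled against):
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-jdk16</artifactId>
    <version>1.46</version>
</dependency>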

Related

Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]

I have the following problem when running this scheduled task.
@Singleton
public class TaskScheduler {

    private static final Logger LOG = LoggerFactory.getLogger(TaskScheduler.class);

    @Inject
    private BuildLayerJob buildLayerJob;

    @Scheduled(fixedDelay = "30s", initialDelay = "30s")
    public void loadRegistriesDescriptions() {
        try {
            LOG.info("Loading registries list every 30s.");
            buildLayerJob.getBuildLayer().loadRegistries();
        } catch (Exception exception) {
            LOG.error("Error loading registries list every 30s: " + exception.getMessage());
            //exception.printStackTrace();
        }
    }
}
The first execution is fine, but when the delay expires and the task runs again, it throws the following error.
20:26:59.291 [pool-1-thread-6] ERROR i.m.s.DefaultTaskExceptionHandler - Error invoking scheduled task Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
io.micronaut.context.exceptions.BeanInstantiationException: Error instantiating bean of type [io.micronaut.configuration.lettuce.health.RedisHealthIndicator]
Message: Unable to connect to localhost:6379
Path Taken: new HealthMonitorTask(CurrentHealthStatus currentHealthStatus,[List healthIndicators]) --> new RedisHealthIndicator(BeanContext beanContext,HealthAggregator healthAggregator,[StatefulRedisConnection[] connections])
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1719)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:981)
at io.micronaut.configuration.lettuce.health.$RedisHealthIndicatorDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.addCandidateToList(DefaultBeanContext.java:2727)
at io.micronaut.context.DefaultBeanContext.getBeansOfTypeInternal(DefaultBeanContext.java:2639)
at io.micronaut.context.DefaultBeanContext.getBeansOfType(DefaultBeanContext.java:924)
at io.micronaut.context.AbstractBeanDefinition.lambda$getBeansOfTypeForConstructorArgument$9(AbstractBeanDefinition.java:1124)
at io.micronaut.context.AbstractBeanDefinition.resolveBeanWithGenericsFromConstructorArgument(AbstractBeanDefinition.java:1762)
at io.micronaut.context.AbstractBeanDefinition.getBeansOfTypeForConstructorArgument(AbstractBeanDefinition.java:1119)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:984)
at io.micronaut.management.health.monitor.$HealthMonitorTaskDefinition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2407)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2393)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2084)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2058)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:618)
at io.micronaut.scheduling.processor.ScheduledMethodProcessor.lambda$process$5(ScheduledMethodProcessor.java:123)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.runAndReset$$$capture(FutureTask.java:305)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to localhost:6379
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78)
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56)
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:234)
at io.lettuce.core.RedisClient.connect(RedisClient.java:207)
at io.lettuce.core.RedisClient.connect(RedisClient.java:192)
at io.micronaut.configuration.lettuce.AbstractRedisClientFactory.redisConnection(AbstractRedisClientFactory.java:51)
at io.micronaut.configuration.lettuce.DefaultRedisClientFactory.redisConnection(DefaultRedisClientFactory.java:52)
at io.micronaut.configuration.lettuce.$DefaultRedisClientFactory$RedisConnection1Definition.build(Unknown Source)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1693)
... 31 common frames omitted
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:6379
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I understand there is a problem connecting to Redis, but the microservice deployed in GCP keeps producing the same error.
app.yaml
runtime: java11
service: default
instance_class: B2
env_variables:
  LAYERS_SERVER_PORT: 8080
  REDIS_FIXEDDELAY: 1s
  REDISA_URL: "redis://A"
  REDISB_URL: "redis://B"
  REDISC_URL: "redis://C"
  REDISD_URL: "redis://D"
basic_scaling:
  max_instances: 1
  idle_timeout: 270s
vpc_access_connector:
  name: "projects/example/locations/us-central1/connectors/example"
Local settings. application.yml:
micronaut:
  application:
    name: example
  server:
    port: ${EXAMPLE_SERVER_PORT:3000}
    cors:
      enabled: true
---
redis:
  servers:
    REDISA:
      uri: redis://IP_A
    REDISB:
      uri: redis://IP_B
    REDISC:
      uri: redis://IP_C
    REDISD:
      uri: redis://IP_D
Repository layers.server.repo.InfoRepositoryImpl:
@Singleton
public class InfoRepositoryImpl implements InfoRepository {

    private static final Logger LOG = LoggerFactory.getLogger(InfoRepositoryImpl.class);

    private BuildLayerJob buildLayerJob;

    @Inject @Named("REDISB") RedisAsyncCommands<String, String> reddisConnectionB;
    @Inject @Named("REDISA") RedisAsyncCommands<String, String> reddisConnectionA;

    public InfoRepositoryImpl(BuildLayerJob buildLayerJob) {
        this.buildLayerJob = buildLayerJob;
    }

    // ... implementation of methods that process information with Redis
}
Can you please check whether you have the io.micronaut.redis:micronaut-redis-lettuce dependency added to your classpath / build file?
By default Micronaut assumes the Redis server is at localhost:6379, since health checks are enabled by default when redis-lettuce is activated, and it will keep probing for health checks.
If you are using a Micronaut application.yml, you need to provide a server URI that is accessible from the running app.
Micronaut redis
Example - application.yml
redis:
  uri: redis://localhost
  ssl: true
  timeout: 30s
You can also use the connection string patterns below to provide details about the Redis server.
Redis Standalone
redis://[[username:]password@]host[:port][/database][?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
Redis Standalone (SSL)
rediss://[[username:]password@]host[:port][/database][?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
Redis Standalone (Unix Domain Sockets)
redis-socket://[[username:]password@]path[?timeout=timeout[d|h|m|s|ms|us|ns]][&database=database]
For more details on the connection string, see: Redis connection string
Micronaut redis configuration properties
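For reference, the dependency check mentioned at the top of this answer would look like this in a Maven build (a sketch; the coordinates come from the answer, and the version is assumed to be managed by a Micronaut BOM):
<dependency>
    <groupId>io.micronaut.redis</groupId>
    <artifactId>micronaut-redis-lettuce</artifactId>
</dependency>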
Such errors can occur when the data source in question is autoconfigured. You can disable Redis autoconfiguration if you're not using it in the application. If you do need Redis, then set spring.redis.host and spring.redis.port. (Note that these are Spring Boot properties; the Micronaut equivalent is the redis.uri configuration shown above.)
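A hedged Spring Boot sketch of that advice, assuming the standard autoconfiguration class; exclude it when Redis is not used, or keep it and point spring.redis.host / spring.redis.port at a reachable server:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.data.redis.RedisAutoConfiguration;

// Excluding the autoconfiguration stops Spring Boot from probing localhost:6379
// when the application does not actually use Redis.
@SpringBootApplication(exclude = RedisAutoConfiguration.class)
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}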

Connect to multiple Kafka servers using springboot

In a Spring Boot application, I want to connect to 2 different Kafka servers simultaneously. I am using KafkaAdmin and AdminClient to make the connection and perform CRUD operations.
@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    // JVM-wide Kerberos settings
    System.setProperty("java.security.krb5.conf", krb5Location);
    System.setProperty("java.security.auth.login.config", jaasConfigLocation);
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassword);
    return new KafkaAdmin(configs);
}

@Bean
@PostConstruct
public AdminClient config() {
    return AdminClient.create(kafkaAdmin.getConfig());
}
Server 2 is configured similarly in the same Spring Boot application.
If I load the configuration of both Kafka servers at once during app initialization, the following error is displayed:
>>>KRBError:
cTime is Sun Jun 03 14:23:02 IST 2001 991558382000
sTime is Tue Nov 20 10:46:53 IST 2018 1542691013000
suSec is 512097
error code is 7
error Message is Server not found in Kerberos database
cname is config1@servername.com
sname is config2@servernname.com
msgType is 30
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:361)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator$2.run(SaslClientAuthenticator.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslToken(SaslClientAuthenticator.java:359)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:269)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:206)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:81)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:474)
at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006)
at java.lang.Thread.run(Thread.java:748)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 22 more
2018-11-20 10:46:53.605 ERROR 8672 --- [| adminclient-4] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-4] Connection to node -1 failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - UNKNOWN_SERVER)]) occurred when evaluating SASL token received from the Kafka Broker. This may be caused by Java's being unable to resolve the Kafka Broker's hostname correctly. You may want to try to adding '-Dsun.net.spi.nameservice.provider.1=dns,sun' to your client's JVMFLAGS environment. Users must configure FQDN of kafka brokers when authenticating using SASL and `socketChannel.socket().getInetAddress().getHostName()` must match the hostname in `principal/hostname@realm` Kafka Client will go to AUTHENTICATION_FAILED state.
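No answer is recorded here, but a hedged sketch of one common approach for two Kerberized clusters, assuming Kafka clients 0.10.2 or later: scope the JAAS settings per client with sasl.jaas.config instead of the JVM-global java.security.auth.login.config system property, so each AdminClient keeps its own Kerberos login context (the field names, keytab path, principal, and service name below are assumptions):
@Bean
public KafkaAdmin kafkaAdminServer2() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, server2); // hypothetical second-cluster address
    configs.put("security.protocol", "SASL_SSL");
    configs.put("ssl.truststore.location", sslTruststoreLocation);
    configs.put("ssl.truststore.password", sslTruststorePassword);
    configs.put("sasl.kerberos.service.name", "kafka"); // assumption: broker's service principal name
    // Per-client JAAS config replaces the JVM-wide login file, so the two
    // clusters no longer share one Kerberos login context.
    configs.put("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"/path/to/cluster2.keytab\" "   // assumption
            + "principal=\"config2@EXAMPLE.COM\";");   // assumption
    return new KafkaAdmin(configs);
}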

Sqoop Import using remote java client

I am writing a remote Java client for Sqoop (1.4.5) to import from MySQL to HDFS (Hadoop 1.2.1).
This is my code:
// Hadoop cluster endpoints
Configuration config = new Configuration();
config.set("fs.default.name", "hdfs://x.y.z.w:8020");
config.set("mapred.job.tracker", "x.y.z.w:9101");

// Sqoop import options
SqoopOptions options = new SqoopOptions(config);
options.setConnectString("jdbc:mysql://x.y.z.w:3306/testdb");
options.setUsername("user");
options.setPassword("password");
options.setTableName("test");
options.setTargetDir("/testOut");
options.setNumMappers(1);

int ret = new ImportTool().run(options);
I am getting the following error:
ERROR security.UserGroupInformation: PriviledgedActionException as:xxx cause:java.net.UnknownHostException: unknown host: xxxx
ERROR tool.ImportTool: Encountered IOException running import job: java.net.UnknownHostException: unknown host: xxxx
at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1239)
at org.apache.hadoop.ipc.Client.call(Client.java:1093)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy2.getProtocolVersion(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:103)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:119)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:179)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:413)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:97)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:381)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:454)
The hadoop logs show the following:
namenode log:
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: readAndProcess threw exception java.io.IOException: Connection reset by peer. Count of bytes read: 0
java.io.IOException: Connection reset by peer
jobtracker log:
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9101: readAndProcess threw exception java.io.IOException: Connection reset by peer. Count of bytes read: 0
java.io.IOException: Connection reset by peer
Can anybody help with this?
The issue can be that the hostname is not being resolved properly; check your hostname, as in the sketch below.
Also check that you have granted privileges to the specific hostname and user you are using in Sqoop to import from the database.
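For example, if the NameNode identifies itself by a hostname that the client machine cannot resolve, a quick test (both values below are placeholders) is a hosts-file entry on the client:
# /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows
x.y.z.w    namenode-hostname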

Running Hadoop MR jobs without Admin privilege on Windows

I have installed Hadoop 2.3.0 on Windows and am able to execute MR jobs successfully. But when I try to execute MR jobs with normal privileges (without admin privileges), the job fails with the following exception. Here I tried with a sample Pig script.
2014-10-15 12:02:32,822 WARN [main] org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:kaveen (auth:SIMPLE) cause:java.io.IOException: Split class org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit not found
2014-10-15 12:02:32,823 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Split class org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit not found
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:362)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:403)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.ClassNotFoundException: Class org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1794)
at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:360)
... 7 more
2014-10-15 12:02:32,827 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2014-10-15 12:02:32,827 WARN [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Output Path is null in abortTask()
Update:
I was able to drill down into the problem and found that the exception is raised at the following line in the method "MapTask.getSplitDetails(MapTask.java:363)".
private <T> T getSplitDetails(Path file, long offset)
        throws IOException {
    FileSystem fs = file.getFileSystem(conf);
    FSDataInputStream inFile = fs.open(file);
    inFile.seek(offset);
    String className = StringInterner.weakIntern(Text.readString(inFile));
    Class<T> cls;
    try {
        cls = (Class<T>) conf.getClassByName(className);
    } catch (ClassNotFoundException ce) {
        IOException wrap = new IOException("Split class " + className +
            " not found");
        wrap.initCause(ce);
        throw wrap;
    }
But if I start the NodeManager with admin privileges, the above exception does not occur. I don't know why the MR job doesn't work when I start the NodeManager with normal privileges.
If anyone knows the reason and a solution for the above problem, please guide me.
You can change the location of the tmp directory for Hadoop using the property below (it belongs in core-site.xml):
<property>
    <name>hadoop.tmp.dir</name>
    <value>/other/tmp</value>
</property>
Your default tmp location is c:\tmp, which requires admin privileges to access. Change the location to any user-writable directory and try the MR job without admin privileges.
Hope it helps.

RMI - Unmarshalled Exception

I can't seem to get this working. I'm just looking at it for a basic lab exercise, but I have no experience with RMI at all, and I can't work out why I'm getting the error.
Server
public static void runServer() {
    // Install security manager, if none is present
    if (System.getSecurityManager() == null) {
        System.setSecurityManager(new SecurityManager());
    }
    try {
        Registry registry = LocateRegistry.getRegistry();
        System.out.println("Reg: " + registry.toString());
        String name = "Server";
        Server server = new Server();
        I_Server stub = (I_Server) UnicastRemoteObject.exportObject(server, 0);
        registry.rebind(name, stub);
        System.out.println("All is well :-)\n");
    } catch (RemoteException re) {
        System.err.println("Remote Exception in DisplayGetEngine.main()\n");
        re.printStackTrace();
    }
}
I have the following run commands and arguments in NetBeans
Arguments: -cp C:\rmi\Server\src;C:\rmi\Server\dist\Server.jar -Djava.rmi.server.codebase=file:/C:/rmi/Server/dist/Server.jar
Working Directory: C:\rmi\Server
My stack trace, at the rebind() call, is:
Reg: RegistryImpl_Stub[UnicastRef [liveRef: [endpoint:[10.50.18.205:1099](remote),objID:[0:0:0, 0]]]]
Remote Exception in DisplayGetEngine.main()
java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
java.rmi.UnmarshalException: error unmarshalling arguments; nested exception is:
java.lang.ClassNotFoundException: server.I_Server
at sun.rmi.server.UnicastServerRef.oldDispatch(UnicastServerRef.java:419)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:267)
at sun.rmi.transport.Transport$1.run(Transport.java:177)
at sun.rmi.transport.Transport$1.run(Transport.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:273)
at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:251)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:377)
at sun.rmi.registry.RegistryImpl_Stub.rebind(Unknown Source)
at server.Server.runServer(Server.java:50)
at server.Server.main(Server.java:31)
If I don't run rmiregistry, this is my stack trace:
Remote Exception in DisplayGetEngine.main()
java.rmi.ConnectException: Connection refused to host: 10.50.18.205; nested exception is:
java.net.ConnectException: Connection refused: connect
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619)
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216)
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202)
at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:340)
at sun.rmi.registry.RegistryImpl_Stub.rebind(Unknown Source)
at server.Server.runServer(Server.java:50)
at server.Server.main(Server.java:31)
Try calling LocateRegistry.createRegistry(1099) first to make sure that you have a running registry on the default port. (Passing 0 to exportObject is fine, by the way; it just means an anonymous port is chosen for the exported object.)
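A minimal sketch of that suggestion, creating the registry in-process on the default port:
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Creates the registry inside this JVM, so no external rmiregistry
// process has to be started first; REGISTRY_PORT is the default 1099.
Registry registry = LocateRegistry.createRegistry(Registry.REGISTRY_PORT);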
There is no problem with running rmiregistry. If you look at the stack trace closely, the problem seems to be a ClassNotFoundException while the arguments are being unmarshalled in the RMI registry. You will have to check the codebase of the RMI server, i.e. whether the server.I_Server class is present in Server.jar when running it on the classpath.
From RMI FAQs
A.4 Why am I getting a ClassNotFoundException?
Most likely the java.rmi.server.codebase property has not been set (or has not been set correctly) on a VM that is exporting your remote object(s). Please take a look at our tutorial, Dynamic code downloading using Java RMI (Using the java.rmi.server.codebase Property).
The issue was in the build file. I needed to insert this statement.
<target name="startRMI" depends="init">
    <exec executable="C:\Program Files\Java\jdk1.7.0_51\jre\bin\rmiregistry"
          dir="${build.classes.dir}"></exec>
</target>
