Unable to Connect to HBase through Java - java

I am trying to connect to HBase from Java (HBase version 1.0.0), but I am unable to connect to it. Kindly tell me what I am missing, as I am new to HBase.
Here is my code:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseAddRetrieveData {
    public static void main(String[] args) throws IOException {
        TableName tableName = TableName.valueOf("stock-prices");
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.master", "LocalHost:60000");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("hbase.zookeeper.quorum", "LocalHost");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");
        System.out.println("Config set");

        Connection conn = ConnectionFactory.createConnection(conf);
        System.out.println("Connection");

        Admin admin = conn.getAdmin();
        if (!admin.tableExists(tableName)) {
            System.out.println("In admin");
            admin.createTable(new HTableDescriptor(tableName).addFamily(new HColumnDescriptor("cf")));
        }

        Table table = conn.getTable(tableName);
        Put p = new Put(Bytes.toBytes("AAPL10232015"));
        p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("close"), Bytes.toBytes(119));
        table.put(p);

        Result r = table.get(new Get(Bytes.toBytes("AAPL10232015")));
        System.out.println(r);
    }
}
Below is the error I am facing:
Exception in thread "main" org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:131)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:56)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:287)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:267)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:139)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:134)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:823)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:281)
at com.tcs.healthcare.HbaseRetrieveData.main(HbaseRetrieveData.java:32)
Kindly guide me through this.

Change the capital 'L' and 'H' in "LocalHost" to lowercase, i.e. use "localhost":
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.master","localhost:60000");
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("hbase.zookeeper.quorum", "localhost");
conf.set("zookeeper.znode.parent", "/hbase-unsecure");
If you are using a remote machine, then set conf.set("hbase.master", "remotehost:60000");. If it still does not work, check that all the (remote) jars are on the classpath.
If you are using Maven, you can point to the same jar versions that are on the cluster.
On the cluster, run the command below to see which jar versions the remote machine uses:
`hbase classpath`
The same jar versions should be on your client machine. For example, if hbase-x.jar is on the remote machine and you are using hbase-y.jar, it won't connect.
Also check whether you can ping the server/cluster from the client machine; there are often firewall restrictions.
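For reference, here is a minimal, self-contained sketch of the corrected client (assuming a standalone HBase on localhost and the stock-prices table from the question; the class name is made up). Note that the client discovers the master through ZooKeeper, so setting hbase.master is usually unnecessary:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseConnectionCheck {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");          // lowercase hostname
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");

        // try-with-resources closes the connection even if a call throws
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            System.out.println("Table exists: "
                    + admin.tableExists(TableName.valueOf("stock-prices")));
        }
    }
}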

Related

Connection issue when trying to connect to HBase on a remote server using Java

I am trying to connect to HBase on a remote server using Java. Below is my Java code:
String zookeeperHost = "myserverIP";
String tableName = "User";

Configuration hconfig = HBaseConfiguration.create();
hconfig.setInt("timeout", 1200);
hconfig.set("hbase.zookeeper.quorum", zookeeperHost);
hconfig.set("hbase.zookeeper.property.clientPort", "2181");
TableName tname = TableName.valueOf(tableName);

try {
    HBaseAdmin.checkHBaseAvailable(hconfig);
} catch (ServiceException e) {
    e.printStackTrace();
}

HTableDescriptor htable = new HTableDescriptor(tname);
htable.addFamily(new HColumnDescriptor("Id"));
htable.addFamily(new HColumnDescriptor("Name"));

System.out.println("Connecting...");
Connection connection = ConnectionFactory.createConnection(hconfig);
HBaseAdmin hbase_admin = (HBaseAdmin) connection.getAdmin();
System.out.println("Creating Table...");
hbase_admin.createTable(htable);
System.out.println("Done!");
But I keep getting this exception:
org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused: no further information
How can I resolve this issue?
There could be two reasons for the exception below:
org.apache.hadoop.hbase.MasterNotRunningException
First, the remote VM's hostname/IP is not accessible from the client machine. To fix that, update the inbound rules to allow access on the required ports. If you use a VM hostname, make sure you have added an entry to /etc/hosts that resolves the hostname to its IP address.
Second, the HBase master may not be running. Make sure the HMaster service is up on your remote server.
Also, add the following to 'hconfig':
hconfig.set("hbase.master", hbaseConnectionIpAddr);
hconfig.set("hbase.master.port", hbaseConnectionPortNum);

How to get an OracleConnection in Oracle 12c WebLogic without specifying the WebLogic server IP, using the Java Spring Framework

I have to deploy a WAR to an Oracle 12c WebLogic server and access its datasource, but I will deploy it to several WebLogic servers with different IPs. Is there any way to get a connection to the WebLogic datasource without specifying the IP of the WebLogic server itself, assuming the WAR is deployed on the same WebLogic server whose datasource it needs to access?
I can get an Oracle connection, but I must specify the IP. Here is my code:
try {
    String urlparam = "t3://101.102.103.104:7001";
    String datasourceparam = "jdbc/devtest";

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, urlparam);

    Context context = new InitialContext(env);
    ds = (javax.sql.DataSource) context.lookup(datasourceparam);
    conn = (OracleConnection) ds.getConnection();
    System.out.println("Connection object details : " + conn);
} catch (Exception ex) {
    ex.printStackTrace();
}
With this method, to deploy to 5 WebLogic servers I would have to generate 5 different WARs, differing only in the WebLogic server IP.
Please help.
It turns out that to look up the server's own datasource, I must not pass the Hashtable environment to the InitialContext. That way, the lookup resolves the datasource on the server the WAR is deployed to.
try {
    String datasourceparam = "jdbc/devtest";
    Context context = new InitialContext();
    ds = (javax.sql.DataSource) context.lookup(datasourceparam);
    conn = (OracleConnection) ds.getConnection();
} catch (Exception ex) {
    // handle the exception
    ex.printStackTrace();
}

Not able to create HBase schema from java client

I am trying to create an HBase schema from a Java client, but it throws the following exception:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:319)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:796)
at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:408)
The same thing works from the HBase shell. Here is my code snippet:
Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
HTableDescriptor table = new HTableDescriptor(TableName.valueOf("xyz"));
table.addFamily(new HColumnDescriptor("default"));
admin.createTable(table);
admin.close();
What could be the issue?
You should add the ZooKeeper quorum address to your configuration:
conf.set("hbase.zookeeper.quorum", "your.zookeeper.address");

Java Client for Secure HBase

Hi, I am trying to write a Java client for secure HBase.
I want to do the kinit from code as well; for that I'm using the UserGroupInformation class.
Can anyone point out where I am going wrong here?
This is the main method from which I am trying to connect to HBase.
I have to put the settings in the Configuration object rather than use the XML files, because the client can be located anywhere.
Please see the code below:
public static void main(String[] args) {
    try {
        System.setProperty(CommonConstants.KRB_REALM, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.realm"));
        System.setProperty(CommonConstants.KRB_KDC, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.kdc"));
        System.setProperty(CommonConstants.KRB_DEBUG, "true");

        final Configuration config = HBaseConfiguration.create();
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, AUTH_KRB);
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, AUTHORIZATION);
        config.set(CommonConfigurationKeysPublic.FS_AUTOMATIC_CLOSE_KEY, AUTO_CLOSE);
        config.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, defaultFS);
        config.set("hbase.zookeeper.quorum", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.host"));
        config.set("hbase.zookeeper.property.clientPort", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.port"));
        config.set("hbase.client.retries.number", Integer.toString(0));
        config.set("zookeeper.session.timeout", Integer.toString(6000));
        config.set("zookeeper.recovery.retry", Integer.toString(0));
        config.set("hbase.master", "gauravt-namenode.pbi.global.pvt:60000");
        config.set("zookeeper.znode.parent", "/hbase-secure");
        config.set("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
        config.set("hbase.security.authentication", AUTH_KRB);
        config.set("hbase.security.authorization", AUTHORIZATION);
        config.set("hbase.master.kerberos.principal", "hbase/gauravt-namenode.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.master.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        config.set("hbase.regionserver.kerberos.principal", "hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.regionserver.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");

        UserGroupInformation.setConfiguration(config);
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        UserGroupInformation.setLoginUser(userGroupInformation);

        User user = User.create(userGroupInformation);
        user.runAs(new PrivilegedExceptionAction<Object>() {
            @Override
            public Object run() throws Exception {
                HBaseAdmin admins = new HBaseAdmin(config);
                if (admins.isTableAvailable("ambarismoketest")) {
                    System.out.println("Table is available");
                }
                HConnection connection = HConnectionManager.createConnection(config);
                HTableInterface table = connection.getTable("ambarismoketest");
                admins.close();
                System.out.println(table.get(new Get(null)));
                return table.get(new Get(null));
            }
        });
        System.out.println(UserGroupInformation.getLoginUser().getUserName());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I'm getting the following exception:
Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:110)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:146)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:762)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:354)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:883)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:880)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
... 33 more
Any pointers would be helpful.
The above works nicely, but I've seen a lot of folks struggle with setting all of the right properties in the Configuration object. There's no definitive list I've found of exactly what you need and don't need, and it is painfully dependent on your cluster configuration.
The surefire way is to have a copy of your HBase configuration files on your classpath, since your client can be anywhere, as you mentioned. Then you can add the resources to your Configuration object without having to specify every property.
Configuration conf = HBaseConfiguration.create();
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
Here are some sources that back this approach: IBM and Scalding (Scala).
Also note that this approach doesn't limit you to using the internal ZooKeeper principal and keytab; i.e., you can create keytabs for applications or Active Directory users and leave the internally generated keytabs for the daemons to authenticate amongst themselves.
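A quick way to confirm the files were actually found is to read back a key property afterwards; a small snippet sketch (Hadoop skips a named resource it cannot find on the classpath, logging only a message, so a silently missing file shows up here as the shipped default):
Configuration conf = HBaseConfiguration.create();
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
// If these print defaults such as "localhost", the XML files were not on the classpath
System.out.println("quorum = " + conf.get("hbase.zookeeper.quorum"));
System.out.println("znode parent = " + conf.get("zookeeper.znode.parent"));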
Not sure if you still need help. I think setting the "hadoop.security.authentication" property is what's missing from your snippet.
I am using the following code snippet to connect to secure HBase (on CDH5). You can give it a try:
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for (TableName table : tables) {
    System.out.println(table.toString());
}
On JDK 1.8, you also need to set:
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for (TableName table : tables) {
    System.out.println(table.toString());
}
See also the client troubleshooting section of the HBase reference guide: http://hbase.apache.org/book.html#trouble.client (question 142.9).
I think the best resource is https://scalding.io/2015/02/making-your-hbase-client-work-in-a-kerberized-environment/
To make the code work, you don't have to change any line from the ones written at the top of this post; you just have to make your client able to access the full HBase configuration. That just means changing your runtime classpath to:
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hbase/conf:target/scala-2.11/hbase-assembly-1.0.jar
This will make everything run smoothly. It is specific to CDH 5.3, but you can adapt it to your cluster configuration.
P.S. There is no need for this:
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
because HBaseConfiguration already has:
public static Configuration addHbaseResources(Configuration conf) {
    conf.addResource("hbase-default.xml");
    conf.addResource("hbase-site.xml");
    // ... (version sanity checks)
    return conf;
}

Running Hbase ImportTSV job remotely

I am trying to run the HBase ImportTsv Hadoop job to load data into HBase from a TSV file. I am using the following code:
Configuration config = new Configuration();

// Print the effective configuration entries for debugging
Iterator<Map.Entry<String, String>> iter = config.iterator();
while (iter.hasNext()) {
    System.out.println(iter.next());
}

Job job = new Job(config);
job.setJarByClass(ImportTsv.class);
job.setJobName("ImportTsv");
job.getConfiguration().set("user", "hadoop");
job.waitForCompletion(true);
I am getting this error:
ERROR security.UserGroupInformation: PriviledgedActionException as:E317376 cause:org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=E317376, access=WRITE, inode="staging":hadoop:supergroup:rwxr-xr-x
I don't know where the user name E317376 comes from; it is the user on my Windows machine, from which I am trying to run this job on a remote cluster. My Hadoop user account on the Linux machine is "hadoop".
When I run this on a Linux machine that is part of the Hadoop cluster, under the hadoop user account, everything works well. But I want to run this job programmatically from a Java web application. Am I doing anything wrong? Please help...
You should have a property like the one below in your mapred-site.xml file:
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>
It may also be necessary to chmod the /user folder of your DFS file system to 777.
Do not forget to stop/start your jobtracker and tasktrackers (sh stop-mapred.sh and sh start-mapred.sh).
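If you would rather do that chmod from Java than from the shell, here is an untested sketch using the Hadoop FileSystem API (it assumes your client Configuration points at the cluster's namenode):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class OpenUpUserDir {
    public static void main(String[] args) throws Exception {
        // Equivalent of `hadoop fs -chmod 777 /user`; 777 is convenient for
        // testing but far too permissive for a production cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        fs.setPermission(new Path("/user"), new FsPermission((short) 0777));
        fs.close();
    }
}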
I haven't tested these solutions, but try adding something like this to your job configuration:
conf.set("hadoop.job.ugi", "hadoop");
The above may be obsolete, so you can also try the following, with user set to "hadoop" (code from http://hadoop.apache.org/common/docs/r1.0.3/Secure_Impersonation.html):
UserGroupInformation ugi =
        UserGroupInformation.createProxyUser(user, UserGroupInformation.getLoginUser());
ugi.doAs(new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
        // Submit a job
        JobClient jc = new JobClient(conf);
        jc.submitJob(conf);
        // OR access HDFS
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(someFilePath);
        return null;
    }
});
Note that impersonation via createProxyUser must also be enabled on the cluster side, through the hadoop.proxyuser.* properties in core-site.xml.
