SparkStreaming put a Kafka message into HBase [duplicate]

I am trying to use the HBase Java APIs to write data into HBase. I installed Hadoop/HBase through Ambari.
Here is how the configuration is currently set up:
final Configuration CONFIGURATION = HBaseConfiguration.create();
final HBaseAdmin HBASE_ADMIN;
HBASE_ADMIN = new HBaseAdmin(CONFIGURATION);
When I try to write to HBase, I first check whether the table exists:
!HBASE_ADMIN.tableExists(tableName)
and create a new one if it does not. However, the tableExists check itself throws an exception.
I'm wondering if I'm not correctly connected to HBase. Is there a good way to verify that the configuration is correct and that I am actually connecting to HBase? The exception I'm getting is below.
Thanks.
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:209)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:597)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:802)
at org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:359)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:287)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:301)
at com.business.project.hbase.HBaseMessageWriter.getTable(HBaseMessageWriter.java:40)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:59)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:54)
at com.business.project.storm.bolt.package.exampleBolt.execute(exampleBolt.java:19)
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659)
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415)
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58)
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794)
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)
at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:241)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:62)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1203)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1164)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:294)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:130)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)

In addition to the configuration parameters suggested by Yosr, specifying
conf.set("zookeeper.znode.parent", "VALUE")
would help resolve the issue.

The property below resolved my issue
For Hortonworks:
hconfig.set("zookeeper.znode.parent", "/hbase-unsecure")
For Cloudera:
hconfig.set("zookeeper.znode.parent", "/hbase")

You can use HBaseAdmin.checkHBaseAvailable(conf);

Configuration conf = HBaseConfiguration.create();
conf.set("hbase.master", "ip_address:60000");
conf.set("hbase.zookeeper.quorum","ip_address");
conf.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(conf);
boolean bool = admin.tableExists("table_name");
System.out.println(bool);
ip_address: the IP address of your HBase cluster; change the ZooKeeper port (2181) if it is not the same in your configuration files.
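For the checkHBaseAvailable route mentioned at the top of this answer, a minimal sketch (reusing the same conf values; the znode.parent line is only needed on distributions that relocate it, per the earlier answer; the static call simply throws if the master or ZooKeeper cannot be reached):
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "ip_address");
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("zookeeper.znode.parent", "/hbase-unsecure"); // or "/hbase", per the answer above
try {
    HBaseAdmin.checkHBaseAvailable(conf); // throws MasterNotRunningException etc. if unreachable
    System.out.println("HBase is up and reachable");
} catch (Exception e) {
    System.out.println("Cannot connect to HBase: " + e);
}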

Related

How can I acquire JanusGraphManagement over a remote connection?

I have a docker container running the gremlin-server.
It was started via:
./bin/gremlin-server.sh conf/gremlin-server/gremlin-server.yaml
From within a docker container, running this image:
https://hub.docker.com/r/janusgraph/janusgraph
The server is up and is listening at port 8182
$ docker ps
6019adda6081 janusgraph/janusgraph "docker-entrypoint.s…" 2 days ago Up 26 hours 0.0.0.0:8182->8182/tcp
I am interested in using a schema and indexes.
Janus offers this here: https://docs.janusgraph.org/basics/schema/
The following is the configuration I use to attempt to connect to the gremlin-server:
AbstractConfiguration config = new BaseConfiguration();
config.setListDelimiter('/');
// contents of conf/remote-graph.properties
config.setProperty("gremlin.remote.driver.sourceName", "g");
config.setProperty("gremlin.remote.remoteConnectionClass", "org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection");
// contents of conf/remote-objects.yaml:
config.setProperty("clusterConfiguration.hosts", databaseUrl);
config.setProperty("clusterConfiguration.port", 8182);
config.setProperty("clusterConfiguration.serializer.className", "org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0/");
config.setProperty("storage.backend", "cql");
config.setProperty("clusterConfiguration.serializer.config.ioRegistries", "org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry");
When I call
GraphTraversalSource g = traversal().withRemote(config);
I get a traversal source and everything seems fine. However, to use the management stuff that Janus provides, I seem to need a JanusGraphManagement object. I cannot get the generic Graph object above and cast it to a JanusGraph. The docs suggest using a JanusGraphFactory: https://docs.janusgraph.org/basics/configuration/#janusgraphfactory
So I call
JanusGraph janusGraph = JanusGraphFactory.open(config);
I get the following stack trace:
Exception in thread "main" java.lang.IllegalArgumentException: Could not find implementation class: org.janusgraph.diskstorage.cql.CQLStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:60)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:440)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:411)
at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:50)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:161)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:132)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:112)
at com.activitystream.database.GraphMigration.migrateDatabase(GraphMigration.java:69)
at com.activitystream.runners.persistence.DataStores.migrateDatabase(DataStores.java:27)
at com.activitystream.runners.persistence.EntityPersistenceRunner.main(EntityPersistenceRunner.java:23)
Caused by: java.lang.ClassNotFoundException: org.janusgraph.diskstorage.cql.CQLStoreManager
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:315)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:56)
... 9 more
Is it possible to modify the schema over a remote connection?
If it is not possible, how can one modify the schema?
Any insight would be appreciated.
You basically have two choices. Either:
Interact with your JanusGraphManagement object by way of scripts sent to Gremlin Server (typically by way of a session, though I guess you could package an entire "management script" together and submit it as one request), or
Bypass Gremlin Server and instantiate your JanusGraphManagement object locally, as directed in the JanusGraph documentation.
There is no way to return a JanusGraphManagement to your client, as it is not a serializable object that can be sent back from the server.
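For the first option, a rough sketch using the TinkerPop driver (the host, port, and property key are illustrative assumptions; it also assumes the server binds the graph to the variable graph, as the stock JanusGraph distribution does):
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

Cluster cluster = Cluster.build("localhost").port(8182).create();
Client client = cluster.connect();
// run the whole management transaction server-side as a single script
String script = "mgmt = graph.openManagement();"
        + "mgmt.makePropertyKey('customerId').dataType(String.class).make();"  // hypothetical key
        + "mgmt.commit()";
client.submit(script).all().get(); // block until the server acknowledges
cluster.close();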

Single JDBC OracleDataSource/HikariCP with primary/backup DB

I'm trying to set up a single connection pool that references our primary database until it becomes unhealthy, at which point the pool fails over and fills up against our backup. Until now I've been taking advantage of an undocumented feature of our application server's JNDI datasources, which allows me to specify two JDBC connection URL strings, thus:
jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB
I have the following code, no doubt partially cribbed from some Hikari/Spring documentation months ago.
@Bean(name = "dataSource")
public DataSource dataSource() throws SQLException {
String userName = "user";
String password = "pass";
String server = "primary";
String database = "DB";
OracleDataSource ods = new OracleDataSource();
ods.setServerName(server);
ods.setDatabaseName(database);
ods.setNetworkProtocol("tcp");
ods.setUser(userName);
ods.setPassword(password);
ods.setPortNumber(1521);
ods.setDriverType("thin");
HikariConfig hkConfig = new HikariConfig();
hkConfig.setDataSource(ods);
hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
hkConfig.setMaxLifetime(1800000); // 30 minutes
return new HikariDataSource(hkConfig);
}
My Google-Fu has failed me. Does anyone have any ideas on how to achieve the failover functionality?
Edit, re. @M. Deinum: "Remove the construction of the OracleDataSource and just set the url on the HikariConfig."
HikariConfig hkConfig = new HikariConfig();
hkConfig.setUsername(userName);
hkConfig.setPassword(password);
hkConfig.setJdbcUrl("jdbc:oracle:thin:@primary:1521:DB|jdbc:oracle:thin:@backup:1521:DB");
hkConfig.setDataSourceClassName("oracle.jdbc.pool.OracleDataSource");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
hkConfig.setMaxLifetime(1800000);
Unfortunately, this yields a fairly long stack, the base of which is this:
Caused by: java.sql.SQLException: Invalid Oracle URL specified: OracleDataSource.makeURL
at oracle.jdbc.pool.OracleDataSource.makeURL(OracleDataSource.java:1277)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:185)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:356)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:199)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:444)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:515)
Investigation of that (here: "Hikaricp Oracle connection issue", and here: "Invalid Oracle URL specified: OracleDataSource.makeURL") caused me to add some additional properties.
hkConfig.addDataSourceProperty("portNumber", "1521");
hkConfig.addDataSourceProperty("driverType", "thin");
Which now bombs with:
Caused by: java.net.UnknownHostException: null: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:370)
The JDBC URL is no longer being referenced, it would appear... and, confirmed: I took the backup connection string out of the URL and reached the same exception with a standard, single-server connection. So it appears the ODS demands to be configured as originally done (or mimicked with Properties).
As a last gasp for this method, I tried setting the serverName property to "primary|backup" and, as expected, that blew up as well:
Caused by: java.net.UnknownHostException: primary|backup: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:117)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
... 56 more
I have failed to note thus far that I am using ojdbc7.jar.
Use the standard way: support for Data Guard, failover, and RAC is a native feature of the Oracle JDBC drivers.
First, use tnsnames.ora as described here: "How to connect JDBC to tns oracle".
Second, use multiple hosts in tnsnames.ora:
DB =
(DESCRIPTION=
(ADDRESS_LIST=
(LOAD_BALANCE=off)
(FAILOVER=ON)
(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=backup)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=DB)))
The Oracle JDBC driver will connect to the host where the database is OPEN and the service named "DB" is present.
PS: you can also pass the whole TNS connection string directly to the JDBC driver as a parameter:
url="jdbc:oracle:thin:@(DESCRIPTION=
(LOAD_BALANCE=on)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=secondary)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=DB)))"
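Translated into the HikariCP setup from the question, that would look roughly like this (a sketch; note that the setDataSourceClassName call from the original snippet has to go, otherwise HikariCP ignores the jdbcUrl, which matches the behaviour observed above):
HikariConfig hkConfig = new HikariConfig();
hkConfig.setUsername(userName);
hkConfig.setPassword(password);
// let the thin driver parse the failover descriptor itself
hkConfig.setJdbcUrl("jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=off)(FAILOVER=on)"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=primary)(PORT=1521))"
        + "(ADDRESS=(PROTOCOL=TCP)(HOST=backup)(PORT=1521)))"
        + "(CONNECT_DATA=(SERVICE_NAME=DB)))");
hkConfig.setPoolName("springHikariRECPool");
hkConfig.setMaximumPoolSize(15);
hkConfig.setMinimumIdle(3);
return new HikariDataSource(hkConfig);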

IO Exception: "/root/test outside /opt/h2/DB"

I installed the H2 database, but when I launch the program and try to access it from my browser (http://localhost:8082/login.do), I get this error:
IO Exception: "/root/test outside /opt/h2/DB" [90028-192] 90028/90028 (Aide) org.h2.jdbc.JdbcSQLException: IO Exception: "/root/test outside /opt/h2/DB" [90028-192]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:345)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.engine.ConnectionInfo.setBaseDir(ConnectionInfo.java:182)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:114)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:102)
at org.h2.Driver.connect(Driver.java:72)
at org.h2.server.web.WebServer.getConnection(WebServer.java:735)
at org.h2.server.web.WebApp.login(WebApp.java:955)
at org.h2.server.web.WebApp.process(WebApp.java:211)
at org.h2.server.web.WebApp.processRequest(WebApp.java:170)
at org.h2.server.web.WebThread.process(WebThread.java:133)
at org.h2.server.web.WebThread.run(WebThread.java:89)
at java.lang.Thread.run(Thread.java:745)
How can I fix this?
Just add a single "." before the name of your database. For example, this is the JDBC URL for my database: jdbc:h2:tcp://localhost:9101/~/test, and changing it to jdbc:h2:tcp://localhost:9101/~./test makes it work. I've read in a forum that this bug relates to H2.
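The same adjusted URL works from code as well (a sketch; the credentials are placeholders):
// note the "." after "~", per the workaround above
Connection conn = DriverManager.getConnection("jdbc:h2:tcp://localhost:9101/~./test", "sa", "");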
Alternatively, change the form of the JDBC URL to point at the data path the H2 server was started with (h2-data here):
jdbc:h2:/h2-data/test

Using Yarn's registry error: Service RegistryOperations is in wrong state: INITED

I am trying to write a Yarn application master that submits itself into Yarn's registry (Hadoop 2.6)
In essence this is what the application master is trying to do:
ApplicationId id = ...
String path = ...
YarnConfiguration conf = new YarnConfiguration();
RegistryOperations registryOperations = RegistryOperationsFactory.createInstance(conf);
ServiceRecord record = new ServiceRecord();
record.set(YarnRegistryAttributes.YARN_ID, id.toString());
record.set(YarnRegistryAttributes.YARN_PERSISTENCE, PersistencePolicies.APPLICATION_ATTEMPT);
registryOperations.bind(path, record, BindFlags.CREATE | BindFlags.OVERWRITE);
When submitting this code to hadoop 2.6 I get the following exception:
org.apache.hadoop.service.ServiceStateException: Service RegistryOperations is in wrong state: INITED
at org.apache.hadoop.registry.client.impl.zk.CuratorService.checkServiceLive(CuratorService.java:184)
at org.apache.hadoop.registry.client.impl.zk.CuratorService.zkSet(CuratorService.java:633)
at org.apache.hadoop.registry.client.impl.zk.RegistryOperationsService.bind(RegistryOperationsService.java:114)
...
Googling the problem yielded no usable results, so I tried inspecting the relevant Yarn source code, so far without success.
Is anyone else having this problem? Any ideas on what is causing it or how to solve it?
The RegistryOperationsFactory javadoc says that calling any of the create* functions will initialize the resulting RegistryOperations instance.
What I didn't know is that while RegistryOperationsFactory initializes it, it still needs to be started, so this code works:
ApplicationId id = ...
String path = ...
YarnConfiguration conf = new YarnConfiguration();
RegistryOperations registryOperations = RegistryOperationsFactory.createInstance(conf);
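// createInstance() only init()s the service; it must also be started before use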
registryOperations.start();
ServiceRecord record = new ServiceRecord();
record.set(YarnRegistryAttributes.YARN_ID, id.toString());
record.set(YarnRegistryAttributes.YARN_PERSISTENCE, PersistencePolicies.APPLICATION_ATTEMPT);
registryOperations.bind(path, record, BindFlags.CREATE | BindFlags.OVERWRITE);

Storm Hbase Configuration not found

So I have set up a Storm spout consuming from Kafka and a bolt writing to HDFS. This all works fine. I now want to add a new bolt that writes to HBase. For some reason, my application is not picking up the HBase configuration, and I get the following error:
java.lang.IllegalArgumentException: HBase configuration not found using key 'null'
at org.apache.storm.hbase.bolt.AbstractHBaseBolt.prepare(AbstractHBaseBolt.java:58) ~[storm-hbase-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710.invoke(executor.clj:732) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:463) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
2015-04-16 02:05:44 b.s.d.executor [ERROR]
java.lang.IllegalArgumentException: HBase configuration not found using key 'null'
at org.apache.storm.hbase.bolt.AbstractHBaseBolt.prepare(AbstractHBaseBolt.java:58) ~[storm-hbase-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$fn__5697$fn__5710.invoke(executor.clj:732) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:463) ~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
2015-04-16 02:05:44 o.a.h.u.NativeCodeLoader [WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-04-16 02:05:44 b.s.util [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:322) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
at backtype.storm.daemon.worker$fn__6109$fn__6110.invoke(worker.clj:495) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.daemon.executor$mk_executor_data$fn__5530$fn__5531.invoke(executor.clj:245) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:475) [storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
When I look at the code, it shows the following block where the error is occurring:
public void prepare(Map map, TopologyContext topologyContext, OutputCollector collector) {
this.collector = collector;
final Configuration hbConfig = HBaseConfiguration.create();
Map<String, Object> conf = (Map<String, Object>)map.get(this.configKey);
if(conf == null) {
throw new IllegalArgumentException("HBase configuration not found using key '" + this.configKey + "'");
}
It looks like configKey isn't getting set anywhere, so I tried to set it on the HBaseBolt as below:
SimpleHBaseMapper mapper = new SimpleHBaseMapper()
.withRowKeyField("CustomerId")
.withColumnFields(new Fields("CustomerId"))
.withCounterFields(new Fields("Count"))
.withColumnFamily("cf1");
HBaseBolt hbase = new HBaseBolt("Customer", mapper).withConfigKey("/etc/hbase/conf/hbase-site.xml");
builder.setBolt("HBASE_BOLT", hbase, 1)
.fieldsGrouping("stormspout", new Fields("CustomerId"));
That didn't seem to do anything though, as I am still getting the same error.
Anyone have any suggestions? It feels like it's just not picking up my hbase-site.xml file, but I'm not sure why not.
After lots of work, I finally got this to work!!
In your topology's createConfig method, add:
Map<String, String> HBConfig = Maps.newHashMap();
HBConfig.put("hbase.rootdir","hdfs://<IP Address>:8020/hbase");
conf.put("HBCONFIG",HBConfig);
When you instantiate HBaseBolt, do so using
.withConfigKey("HBCONFIG")
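Wired up with the mapper and builder from the question, that looks like:
HBaseBolt hbase = new HBaseBolt("Customer", mapper).withConfigKey("HBCONFIG");
builder.setBolt("HBASE_BOLT", hbase, 1).fieldsGrouping("stormspout", new Fields("CustomerId"));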
I actually ended up writing my own HBase bolt that implements IRichBolt. I then overrode the prepare method and built the following configuration, which seemed to solve my problem :-)
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("zookeeper.session.timeout", "1200000");
conf.set("hbase.zookeeper.property.tickTime", "6000");
conf.set("zookeeper.znode.parent", "/hbase-unsecure");
conf.set("hbase.coprocessor.region.classes", "com.xasecure.authorization.hbase.XaSecureAuthorizationCoprocessor");
conf.set("hbase.coprocessor.master.classes", "com.xasecure.authorization.hbase.XaSecureAuthorizationCoprocessor");
admin = new HBaseAdmin(conf);
table = new HTable(conf, _tableName);
You should set a configuration section name for your HBase settings in your topology configuration:
Config cfg = new Config();
Map<String, String> HBConfig = Maps.newHashMap();
HBConfig.put(somekey,somevalue);
cfg.put("HBCONFIG",HBConfig);
StormSubmitter.submitTopology(TOPOLOGY_NAME, cfg, builder.createTopology());
Then in your HBase bolt configuration set this key as a bolt configuration key:
HBaseBolt bolt = new HBaseBolt("table_name", mapper).withConfigKey("HBCONFIG");
