NullPointerException for KeyAffinityService - java

I am trying to implement a distributed system in Infinispan, and I wanted to get the key associated with the local node. I tried to implement this using the KeyAffinityService, but I get a NullPointerException. I was hoping someone could help me figure out my mistake.
Code snippet
// Create the affinity service to find the Key for the manager
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
        cache,
        (KeyGenerator) new RndKeyGenerator(),
        Executors.newSingleThreadExecutor(),
        100);
The cache is created as follows:
EmbeddedCacheManager manager = new DefaultCacheManager();
try {
    manager = new DefaultCacheManager("democluster.xml");
} catch (IOException e) {
}
Cache<Integer, String> cache = manager.getCache();
XML file
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
    xmlns="urn:infinispan:config:5.1">
  <global>
    <transport clusterName="demoCluster"/>
    <globalJmxStatistics enabled="true"/>
  </global>
  <namedCache name="clusteredCache">
    <clustering mode="distributed">
      <hash numOwners="1">
        <groups enabled="true"/>
      </hash>
    </clustering>
  </namedCache>
</infinispan>
Error:
Exception in thread "main" java.lang.NullPointerException
at org.infinispan.affinity.KeyAffinityServiceFactory.newLocalKeyAffinityService(KeyAffinityServiceFactory.java:95)
at org.infinispan.affinity.KeyAffinityServiceFactory.newLocalKeyAffinityService(KeyAffinityServiceFactory.java:104)
at SimpleCache.start(SimpleCache.java:46)
at SimpleCache.main(SimpleCache.java:96)
I was wondering if anyone had encountered anything similar or might have any ideas regarding this problem.

You are using the wrong cache; you should be doing
Cache<Integer, String> cache = manager.getCache("clusteredCache");
KeyAffinityService only works with a distributed cache, and the default cache is local-only (because you don't have a <default> element in your configuration).
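Putting both pieces together, the corrected setup would look roughly like this (a sketch against your 5.1 config, not a tested drop-in; the key type becomes Object here because RndKeyGenerator produces random keys rather than Integers):
EmbeddedCacheManager manager = new DefaultCacheManager("democluster.xml");
// getCache("clusteredCache") returns the distributed cache from the XML;
// getCache() returns the default cache, which is local-only here
Cache<Object, String> cache = manager.getCache("clusteredCache");
KeyAffinityService<Object> keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
        cache,
        new RndKeyGenerator(),
        Executors.newSingleThreadExecutor(),
        100);
// a key that the consistent hash maps to the local node
Object localKey = keyAffinityService.getKeyForAddress(manager.getAddress());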

Wildfly 10.1 deserialize persisted session

Given Wildfly 10.1 with session persistence configured as
<subsystem xmlns="urn:jboss:domain:undertow:3.1">
  ...
  <servlet-container name="default">
    <jsp-config x-powered-by="false"/>
    <persistent-sessions path="ssn" relative-to="jboss.server.tmp.dir"/>
    <websockets/>
  </servlet-container>
  ...
</subsystem>
After server shutdown I have a file named after my WAR file. The contents are somewhat readable; I can see a hash map and session values inside.
I'd like to deserialize it.
After digging in the WildFly sources, I've found the only class able to read a session from file: org.wildfly.extension.undertow.DiskBasedModularPersistentSessionManager
But when I try to read the file from source:
DiskBasedModularPersistentSessionManager sessionManager = new DiskBasedModularPersistentSessionManager("C:/tmp", "");
sessionManager.start(null);
sessionManager.baseDir = new File("c:/tmp");
sessionManager.loadSerializedSessions("www.war");
It throws me:
Exception in thread "main" java.io.UTFDataFormatException: Invalid byte
at org.jboss.marshalling.UTFUtils.readUTFBytes(UTFUtils.java:173)
at org.jboss.marshalling.river.RiverUnmarshaller.readUTF(RiverUnmarshaller.java:1833)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadClassDescriptor(RiverUnmarshaller.java:959)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadNewObject(RiverUnmarshaller.java:1255)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:276)
at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)
at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)
at org.wildfly.extension.undertow.DiskBasedModularPersistentSessionManager.loadSerializedSessions(DiskBasedModularPersistentSessionManager.java:115)
at org.wildfly.extension.undertow.DiskBasedModularPersistentSessionManager.main(DiskBasedModularPersistentSessionManager.java:134)
After investigating the cause a little, I got an answer that this is some kind of mechanism to protect session migration from one major version to another. But in my case the file is generated by the same version of WildFly with the same library versions:
jboss-logging-3.3.0.Final.jar
jboss-marshalling-1.4.11.Final.jar
jboss-marshalling-river-1.4.11.Final.jar
jboss-msc-1.2.6.Final.jar
undertow-servlet-1.4.0.Final.jar
wildfly-undertow-10.1.0.Final.jar
xnio-api-3.4.0.Final.jar
How can I read the session content?
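One avenue I'm considering (a sketch under assumptions, not verified): since the stack trace shows JBoss Marshalling's River protocol, try reading the file with a plain River unmarshaller and see how far it gets. My understanding is that WildFly writes class descriptors through a module-aware class resolver (org.jboss.marshalling.ModularClassResolver), which may be exactly why a default unmarshaller chokes on them, so a compatible ClassResolver would probably need to be plugged into the MarshallingConfiguration:
import org.jboss.marshalling.Marshalling;
import org.jboss.marshalling.MarshallerFactory;
import org.jboss.marshalling.MarshallingConfiguration;
import org.jboss.marshalling.Unmarshaller;
import java.io.FileInputStream;

MarshallerFactory factory = Marshalling.getProvidedMarshallerFactory("river");
MarshallingConfiguration configuration = new MarshallingConfiguration();
// assumption: a module-aware resolver may be needed here, e.g.
// configuration.setClassResolver(...), to match what WildFly used when writing
try (FileInputStream in = new FileInputStream("C:/tmp/www.war")) {
    Unmarshaller unmarshaller = factory.createUnmarshaller(configuration);
    unmarshaller.start(Marshalling.createByteInput(in));
    Object sessions = unmarshaller.readObject(); // expecting the session map
    unmarshaller.finish();
    System.out.println(sessions);
}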

trying to find db connection leak in my code, using Spring / JPA / Hikari

I've got a problem with a Spring web application that periodically runs into an error fetching a connection from my connection pool. Eventually in the logs I see entries like:
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30000ms.
The only way I've found to recover once it hits this point is to restart Tomcat.
I think the most likely explanation is that I have some code somewhere that is not properly cleaning up its connection: not returning it to Hikari, leaving something open so Spring can't clean it up, etc.
To troubleshoot, I've set my Hikari config's leakDetectionThreshold to 5000 ms and enabled logging. After that, I see log entries like
2018-04-24 19:53:56 WARN ProxyLeakTask:87 - Connection leak detection triggered for org.postgresql.jdbc.PgConnection@664ec666, stack trace follows
java.lang.Exception: Apparent connection leak detected
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122)
at org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99)
at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.connection(StatementPreparerImpl.java:47)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:146)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:172)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:148)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1940)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1909)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1887)
at org.hibernate.loader.Loader.doQuery(Loader.java:932)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:349)
at org.hibernate.loader.Loader.doList(Loader.java:2615)
at org.hibernate.loader.Loader.doList(Loader.java:2598)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2430)
at org.hibernate.loader.Loader.list(Loader.java:2425)
at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:335)
at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:2129)
at org.hibernate.internal.AbstractSharedSessionContract.list(AbstractSharedSessionContract.java:981)
at org.hibernate.query.internal.NativeQueryImpl.doList(NativeQueryImpl.java:147)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1398)
at org.hibernate.query.internal.AbstractProducedQuery.getSingleResult(AbstractProducedQuery.java:1444)
at sun.reflect.GeneratedMethodAccessor191.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.orm.jpa.SharedEntityManagerCreator$DeferredQueryInvocationHandler.invoke(SharedEntityManagerCreator.java:379)
at com.sun.proxy.$Proxy163.getSingleResult(Unknown Source)
at com.mycompany.web.jpa.util.DBHelper.getPagedMappedDbResults(DBHelper.java:76)
at com.mycompany.web.jpa.repository.TaskRepositoryImpl.findTaskDetailsByStepIdAndIdIn(TaskRepositoryImpl.java:245)
......
So it is detecting a possible leak. It could be a false positive, I suppose? But this is also the only class in my app that does database access outside of the standard service/repository pattern often used in Spring apps, so it seems a likely culprit, and it's my best lead at the moment.
Anyway, the last piece of non-library code I see in the trace (i.e. stuff I wrote, so most likely to be the cause of the leak!) is my DBHelper::getPagedMappedDbResults method, the relevant bit of which is included here:
Query q = entityManager.createNativeQuery(countQueryText);
setQueryParameters(q, parameters);
long numActualResults = 0;
try {
    numActualResults = ((Number) q.getSingleResult()).longValue(); // line 76
} catch (Exception e) {
    System.out.println("just in case: " + e);
}
So basically I create a Query object from my EntityManager instance, set some parameters, and run it to get some results.
Is there something I need to be doing with a Query object when I'm done with it? q.cleanup()? I don't see anything like this from reading the docs, but am I not doing good housekeeping on this resource?
The entityManager itself comes from an @Autowired annotation. My understanding is that if I didn't instantiate it with "new" and instead let the Spring framework autowire it, then Spring will do whatever cleanup is necessary. Is that right? Or do I need to do some cleanup after I use the entityManager?
Version details:
Tomcat 8 / Java 8
Spring 5.0.0.RELEASE
Spring Data Kay-RELEASE
Hibernate 5.2.3.Final
Hikari 2.4.5
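For reference, the leak detection is wired up roughly like this (a minimal sketch with placeholder connection details, not my exact config):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

HikariConfig hikariConfig = new HikariConfig();
hikariConfig.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
hikariConfig.setUsername("user");     // placeholder
hikariConfig.setPassword("secret");   // placeholder
// warn with a stack trace when a connection is held out of the pool > 5000 ms
hikariConfig.setLeakDetectionThreshold(5000);
HikariDataSource dataSource = new HikariDataSource(hikariConfig);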
Any advice or suggestions would be greatly appreciated, thanks!
What is the query? Is it heavy? Maybe you have a deadlock here? The connection management looks fine: you do not acquire the connection explicitly, so there is no need to release it. The query might be long-running, so Hibernate is not able to complete it and release the connection.
Also, you can check the number of open connections on the DB side and do some analysis there as well.
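For example, since the pool fronts PostgreSQL here, something like this shows how many server-side connections exist and in what state (a rough sketch; URL and credentials are placeholders):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection(
         "jdbc:postgresql://localhost:5432/mydb", "user", "secret"); // placeholders
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT state, count(*) FROM pg_stat_activity GROUP BY state")) {
    while (rs.next()) {
        // e.g. a large "idle in transaction" count points at unclosed transactions
        System.out.println(rs.getString(1) + ": " + rs.getLong(2));
    }
}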

SparkStreaming put a kafka message into Hbase [duplicate]

I am trying to use the HBase Java APIs to write data into HBase. I installed Hadoop/HBase through Ambari.
Here is how the configuration is currently set up:
final Configuration CONFIGURATION = HBaseConfiguration.create();
final HBaseAdmin HBASE_ADMIN;
HBASE_ADMIN = new HBaseAdmin(CONFIGURATION);
When I try to write to HBase, I check to make sure that the table exists
!HBASE_ADMIN.tableExists(tableName)
If not, I create a new one. However, it appears that exceptions are thrown when attempting to check whether the table exists.
I'm wondering if I'm not correctly connected to HBase... is there any good way to verify that the configuration is correct and that I am connecting to HBase? The exception I'm getting is below.
Thanks.
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:209)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:288)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135)
at org.apache.hadoop.hbase.catalog.MetaReader.fullScan(MetaReader.java:597)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:802)
at org.apache.hadoop.hbase.catalog.MetaReader.tableExists(MetaReader.java:359)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:287)
at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:301)
at com.business.project.hbase.HBaseMessageWriter.getTable(HBaseMessageWriter.java:40)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:59)
at com.business.project.hbase.HBaseMessageWriter.write(HBaseMessageWriter.java:54)
at com.business.project.storm.bolt.package.exampleBolt.execute(exampleBolt.java:19)
at backtype.storm.daemon.executor$fn__5697$tuple_action_fn__5699.invoke(executor.clj:659)
at backtype.storm.daemon.executor$mk_task_receiver$fn__5620.invoke(executor.clj:415)
at backtype.storm.disruptor$clojure_handler$reify__1741.onEvent(disruptor.clj:58)
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
at backtype.storm.daemon.executor$fn__5697$fn__5710$fn__5761.invoke(executor.clj:794)
at backtype.storm.util$async_loop$fn__452.invoke(util.clj:465)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.getMetaReplicaNodes(ZooKeeperWatcher.java:269)
at org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:241)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:62)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1203)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1164)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:294)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:130)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:55)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:201)
In addition to the configuration parameters suggested by Yosr, specifying
conf.set("zookeeper.znode.parent", "VALUE")
would help resolve the issue.
The property below resolved my issue
For Hortonworks:
hconfig.set("zookeeper.znode.parent", "/hbase-unsecure")
For cloudera:
hconfig.set("zookeeper.znode.parent", "/hbase")
You can use HBaseAdmin.checkHBaseAvailable(conf);
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.master", "ip_address:60000");
conf.set("hbase.zookeeper.quorum","ip_address");
conf.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(conf);
boolean bool = admin.tableExists("table_name");
System.out.println(bool);
ip_address: the IP address of your HBase cluster. Change the ZooKeeper client port (2181) if it is not the same in your configuration files.
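Putting the two suggestions together, a minimal connectivity check might look like this (a sketch; the quorum address and znode parent are placeholders you must match to your cluster, e.g. /hbase-unsecure on Hortonworks, /hbase on Cloudera):
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "ip_address");        // placeholder
conf.set("hbase.zookeeper.property.clientPort", "2181");
conf.set("zookeeper.znode.parent", "/hbase-unsecure");   // match your distribution
// throws an exception (e.g. MasterNotRunningException) if the cluster is unreachable
HBaseAdmin.checkHBaseAvailable(conf);
HBaseAdmin admin = new HBaseAdmin(conf);
System.out.println(admin.tableExists("table_name"));
admin.close();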

OrientDB Complete Embedded Cluster Test

I am trying to create a simple test that:
Activates a full server embedded instance (Embedded Server and Distributed Configuration)
Creates an initial test database in document mode during the first run (Create a Database)
Opens the test database (Open a Database)
Insert a sample record
Fetch the sample record
Add another node and repeat
I can roughly understand the steps individually but I am having some difficulty piecing together a simple test case. For example, the API documentation assumes a remote connection. I am not sure whether that is the applicable method here, and if so, what URL I should specify.
Once I have completed steps 1, 2 and 3 correctly, I should be able to just refer to the API documentation for steps 4 and 5.
As a novice user, I find it difficult to interpret the documentation in context. Any help or clarification would be appreciated.
I am trying to run this as a JUnit test. Here is what I have so far:
public class TestOrientDb {
    private static final Logger log = Logger.getLogger(TestOrientDb.class);

    @Test
    public void testFullEmbeddedServer() throws Exception {
        log.debug("connecting to database server...");
        String orientdbHome = new File("src/test/resources").getAbsolutePath(); // Set OrientDB home to current directory
        log.debug("the orientdb home: " + orientdbHome);
        System.setProperty("ORIENTDB_HOME", orientdbHome);

        OServer server = OServerMain.create();
        URL configUrl = this.getClass().getResource("/orientdb-config.xml");
        server.startup(configUrl.openStream());
        server.activate();

        //HOW DO I CREATE A DATABASE HERE?
        //HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS: http://orientdb.com/docs/last/Document-Database.html
        //SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?

        log.debug("shutting down orientdb...");
        server.shutdown();
    }
}
Here is orientdb-config.xml:
<orient-server>
  <users>
    <user name="root" password="password" resources="*"/>
  </users>
  <properties>
    <entry value="/etc/kwcn/databases" name="server.database.path"/>
    <entry name="log.console.level" value="fine"/>
  </properties>
  <handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
    <parameters>
      <!-- NODE-NAME. IF NOT SET IS AUTO GENERATED THE FIRST TIME THE SERVER RUN -->
      <!-- <parameter name="nodeName" value="europe1" /> -->
      <parameter name="enabled" value="true"/>
      <parameter name="configuration.db.default" value="${ORIENTDB_HOME}/orientdb-config.json"/>
      <parameter name="configuration.hazelcast" value="${ORIENTDB_HOME}/hazelcast.xml"/>
    </parameters>
  </handler>
</orient-server>
Here is hazelcast.xml:
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>orientdb</name>
    <password>orientdb</password>
  </group>
  <network>
    <port auto-increment="true">2434</port>
    <join>
      <multicast enabled="true">
        <multicast-group>235.1.1.1</multicast-group>
        <multicast-port>2434</multicast-port>
      </multicast>
    </join>
  </network>
  <executor-service>
    <pool-size>16</pool-size>
  </executor-service>
</hazelcast>
Here is orientdb-config.json:
{ "autoDeploy": true, "hotAlignment": false, "executionMode": "asynchronous", "readQuorum": 1, "writeQuorum": 2, "failureAvailableNodesLessQuorum": false, "readYourWrites": true, "servers": { "*": "master" }, "clusters": { "internal": { }, "index": { }, "*": { "servers": [ "<NEW_NODE>" ] } } }
Here is the output:
2016-02-07 16:02:17:867 INFO  OrientDB auto-config DISKCACHE=10,695MB (heap=3,641MB os=16,384MB disk=71,698MB) [orientechnologies]
2016-02-07 16:02:18:016 INFO  Loading configuration from input stream [OServerConfigurationLoaderXml]
2016-02-07 16:02:18:127 INFO  OrientDB Server v2.2.0-beta is starting up... [OServer]
2016-02-07 16:02:18:133 INFO  Databases directory: /etc/kwcn/databases [OServer]
2016-02-07 16:02:18:133 WARNI Network configuration was not found [OServer]
2016-02-07 16:02:18:133 WARNI Found ORIENTDB_ROOT_PASSWORD variable, using this value as root's password [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is active v2.2.0-beta. [OServer]
2016-02-07 16:02:18:523 INFO  OrientDB Server is shutting down... [OServer]
2016-02-07 16:02:18:523 INFO  Shutting down plugins: [OServerPluginManager]
DEBUG [ kwcn.TestOrientDb]: shutting down orientdb...
2016-02-07 16:02:18:524 INFO  Shutting down databases: [OServer]
2016-02-07 16:02:18:565 INFO  OrientDB Engine shutdown complete [Orient]
2016-02-07 16:02:18:566 INFO  OrientDB Server shutdown complete
I suggest you take a look at
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/AbstractServerClusterTest.java
it's the base class of OrientDB distributed tests. Its class hierarchy seems quite complex, but in the end it just instantiates multiple servers and delegates to subclasses to test operations against them.
You can also check
https://github.com/orientechnologies/orientdb/blob/2.1.x/distributed/src/test/java/com/orientechnologies/orient/server/distributed/HATest.java
that is one of its subclasses. Actually, you could just copy or extend it and implement your own logic in the executeTest() method.
About your questions:
HOW DO I CREATE A DATABASE HERE?
As a normal plocal db:
new ODatabaseDocumentTx("plocal:...").create()
or
new OrientGraph("plocal:...")
//HOW DO I OPEN MY DATABASE TO USE THE API LIKE THIS:
same as above:
new ODatabaseDocumentTx("plocal:...").open("admin", "admin");
//SHOULD I PAUSE THE THREAD TO KEEP THE SERVER ACTIVE?
There is no need to pause the thread; the server creates some non-daemon threads, so it will remain active. Just make sure that someone, at the end of the tests, invokes server.shutdown() (even from another thread).
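So, for your steps 2 through 5, a minimal sketch with the document API could be (paths, class and field names are placeholders):
ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:/etc/kwcn/databases/testdb");
if (!db.exists()) {
    db.create();               // step 2: create the database on the first run
} else {
    db.open("admin", "admin"); // step 3: open it on later runs
}
try {
    // step 4: insert a sample record
    ODocument doc = new ODocument("Person");
    doc.field("name", "test");
    doc.save();
    // step 5: fetch it back
    List<ODocument> result = db.query(
            new OSQLSynchQuery<ODocument>("select from Person where name = 'test'"));
    System.out.println(result);
} finally {
    db.close();
}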

Hazelcast: cannot join cluster if using programmatic config

I'm trying to manually configure Hazelcast 2.5.1 instances through their programmatic API, but I find that it behaves differently when doing supposedly similar things.
So, my first approach is rather rudimentary, which is:
String confString = "<hazelcast><network><port auto-increment=\"true\">10555</port><join><multicast enabled=\"false\" /><tcp-ip enabled=\"true\"><interface>127.0.0.1</interface></tcp-ip></join><ssl enabled=\"false\" /></network></hazelcast>";
Config config = new InMemoryXmlConfig(confString);
Hazelcast.newHazelcastInstance(config);
This works, and starting different instances will allow them to join the cluster. For readability, here's the XML I'm building in memory:
<hazelcast>
  <network>
    <port auto-increment="true">10555</port>
    <join>
      <multicast enabled="false" />
      <tcp-ip enabled="true">
        <interface>127.0.0.1</interface>
      </tcp-ip>
    </join>
    <ssl enabled="false" />
  </network>
</hazelcast>
Starting different instances of this will make them join the cluster, which is the behavior that I want.
However, when I try to do this programmatically, Hazelcast won't allow new instances to join and complains with the following error:
Jul 09, 2015 9:39:33 AM com.hazelcast.impl.Node
WARNING: [127.0.0.1]:10556 [dev] Config seed port is 10555 and cluster size is 1. Some of the ports seem occupied!
This is the code that is supposed to do the same thing programmatically:
Config config = new Config();
config.setInstanceName("HazelcastService");
config.getNetworkConfig().setPortAutoIncrement(true);
config.getNetworkConfig().setPort(10555);
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
config.getNetworkConfig().getInterfaces().addInterface("127.0.0.1");
config.getNetworkConfig().getInterfaces().setEnabled(true);
SSLConfig sslConfig = new SSLConfig();
sslConfig.setEnabled(false);
config.getNetworkConfig().setSSLConfig(sslConfig);
Hazelcast.newHazelcastInstance(config);
What am I missing?
The interfaces you added in the Java code are not the same as the one you added in the XML.
This is what you set in the Java code: http://docs.hazelcast.org/docs/2.5/manual/html-single/#ConfigSpecifyInterfaces
For your configuration to work, you should add this:
config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("127.0.0.1");
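So the programmatic equivalent of your XML would be roughly (a sketch for 2.5.x; only the join section changes from your code):
Config config = new Config();
config.setInstanceName("HazelcastService");
config.getNetworkConfig().setPortAutoIncrement(true);
config.getNetworkConfig().setPort(10555);
config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
config.getNetworkConfig().getJoin().getTcpIpConfig().setEnabled(true);
// <tcp-ip><interface>127.0.0.1</interface></tcp-ip> maps to TCP/IP members,
// not to the top-level <interfaces> element
config.getNetworkConfig().getJoin().getTcpIpConfig().addMember("127.0.0.1");
SSLConfig sslConfig = new SSLConfig();
sslConfig.setEnabled(false);
config.getNetworkConfig().setSSLConfig(sslConfig);
Hazelcast.newHazelcastInstance(config);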
