Infinispan: clustered instances not sharing cache - Java

I've spent over two days doing nothing but trying to get Infinispan to work in a clustered environment, and it's not working. I don't want to run a separate Infinispan server; I just want to embed it in my application, which runs on a clustered GlassFish. Is that not possible? I have a sample JSF app where you can load values into a map that is supposed to sit in the cache. I bring up one clustered instance, add the values, and they show up. But when I go to the other clustered instance, it shows the map as empty.
I know I'm doing something wrong, I just don't know what. I've been searching the internet and there is no comprehensive tutorial on how to get this to work.
Config (copied from a tutorial that supposedly shows clustering: http://www.mastertheboss.com/infinispan/infinispan-tutorial-part-2/page-2 ):
<infinispan>
    <global>
        <transport clusterName="demoCluster"/>
        <globalJmxStatistics enabled="true"/>
    </global>
    <default>
        <jmxStatistics enabled="true"/>
        <clustering mode="distribution">
            <hash numOwners="2" rehashRpcTimeout="120000"/>
            <sync/>
        </clustering>
    </default>
</infinispan>
Context listener:
package hazelcache.test;

import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

@WebListener()
public class Listener implements ServletContextListener
{
    EmbeddedCacheManager manager;

    @Override
    public void contextInitialized(ServletContextEvent sce)
    {
        try
        {
            manager = new DefaultCacheManager("config.xml");
            manager.start();
            sce.getServletContext().setAttribute("cacheManager", manager);
        }
        catch (IOException ex)
        {
            Logger.getLogger(Listener.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce)
    {
        manager.stop();
    }
}
Bean:
package hazelcache.test;

import java.io.IOException;
import java.util.LinkedList;
import java.util.List;
import javax.faces.bean.ManagedBean;
import javax.faces.context.FacesContext;
import javax.servlet.ServletContext;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

@ManagedBean(name="clusterTest")
public class ClusteredCacheBean extends CacheTestBean
{
    EmbeddedCacheManager manager;

    public ClusteredCacheBean() throws IOException
    {
        System.out.println("Before setStuffz()");
        manager = (EmbeddedCacheManager) ((ServletContext) FacesContext.getCurrentInstance().
                getExternalContext().getContext()).getAttribute("cacheManager");
        setStuffz(manager.getCache("stuffz"));
        System.out.println("After setStuffz()");
    }// end ClusteredCacheBean()

    private static EmbeddedCacheManager createCacheManagerProgramatically() {
        return new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
    }

    @Override
    public String addToCache()
    {
        String forwardTo = null;
        manager.getCache("stuffz").put(getId(), getName());
        return forwardTo;
    }// end addToCache()

    @Override
    public List getStuffzList()
    {
        System.out.println("Stuffz: " + getStuffz().size());
        return new LinkedList(manager.getCache("stuffz").entrySet());
    }
}// end class ClusteredCacheBean
I really don't know what to do at this point...

A wonderful person on another forum helped me figure it out:
1) Set this JVM option: -Djava.net.preferIPv4Stack=true
asadmin> create-jvm-options --target ClusterName -Djava.net.preferIPv4Stack=true
2) Call getCache() in the listener once, just to create the cache as the application starts up:
setStuffz(manager.getCache("stuffz"));
3) Put the namespace on the configuration file:
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.1 http://docs.jboss.org/infinispan/schemas/infinispan-config-5.1.xsd"
    xmlns="urn:infinispan:config:5.1">
Thank you, Tristan from the jBoss forums (https://community.jboss.org/community/infinispan)!

Related

Why does Camunda generate a numeric process instance ID, instead of UUID?

Camunda normally uses UUIDs (e.g. 98631715-0b07-11ec-ab3b-68545a6e5055) as process instance IDs. In my project, a process instance ID like 124 is being generated, which looks suspicious to me.
This behavior can be reproduced as described below.
Step 1
Check out this repository and start the process engines
core-processes,
core-workflow and
domain-hello-world
so that all of them use the same shared database.
Step 2
Login to the Camunda UI at http://localhost:8080 and navigate to the tasklist.
Start the Starter process in tasklist.
Step 3
Go to the cockpit and navigate to Running process instances (http://localhost:8080/camunda/app/cockpit/default/#/processes).
Click on DomainProcess.
In the ID column you will see a numeric process instance ID (135 in my case), not a UUID.
Probable cause of the error
In the core-processes engine I have the following Config class:
import org.camunda.bpm.engine.impl.history.HistoryLevel;
import org.camunda.bpm.engine.impl.history.event.HistoryEvent;
import org.camunda.bpm.engine.impl.history.handler.CompositeHistoryEventHandler;
import org.camunda.bpm.engine.impl.history.handler.HistoryEventHandler;
import org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.ResourcePatternResolver;
import org.springframework.transaction.PlatformTransactionManager;

import javax.sql.DataSource;
import java.io.IOException;
import java.util.Collections;
import java.util.List;

import static org.apache.commons.lang3.ArrayUtils.addAll;

@Configuration
public class Config {

    private static final Logger LOGGER = LoggerFactory.getLogger(Config.class);

    @Autowired
    @Qualifier("camundaBpmDataSource")
    private DataSource dataSource;

    @Autowired
    @Qualifier("camundaTxManager")
    private PlatformTransactionManager txManager;

    @Autowired
    private ResourcePatternResolver resourceLoader;

    @Bean
    public SpringProcessEngineConfiguration processEngineConfiguration() {
        final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource);
        config.setTransactionManager(txManager);
        config.setDatabaseSchemaUpdate("true");
        config.setHistory(HistoryLevel.HISTORY_LEVEL_FULL.getName());
        config.setJobExecutorActivate(true);
        config.setMetricsEnabled(false);

        final Logger logger = LoggerFactory.getLogger("History Event Handler");
        final HistoryEventHandler testHistoryEventHandler = new HistoryEventHandler() {

            @Override
            public void handleEvent(final HistoryEvent evt) {
                LOGGER.debug("handleEvent | " + evt.getProcessInstanceId() + " | "
                        + evt.toString());
            }

            @Override
            public void handleEvents(final List<HistoryEvent> events) {
                for (final HistoryEvent curEvent : events) {
                    handleEvent(curEvent);
                }
            }
        };
        config.setHistoryEventHandler(new CompositeHistoryEventHandler(Collections.singletonList(testHistoryEventHandler)));

        try {
            final Resource[] bpmnResources = resourceLoader.getResources("classpath:*.bpmn");
            final Resource[] dmnResources = resourceLoader.getResources("classpath:*.dmn");
            config.setDeploymentResources(addAll(bpmnResources, dmnResources));
        } catch (final IOException exception) {
            exception.printStackTrace();
            LOGGER.error("An error occurred while trying to deploy BPMN and DMN files", exception);
        }
        return config;
    }
}
If I remove this configuration class (or comment out the @Configuration line), the issue disappears.
Questions
Why does Camunda generate a numeric process instance ID in this case (and not a UUID as in other cases)?
After adding the line
config.setIdGenerator(new StrongUuidGenerator());
in the configuration class
@Bean
public SpringProcessEngineConfiguration processEngineConfiguration() {
    final SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    config.setIdGenerator(new StrongUuidGenerator());
    config.setDataSource(dataSource);
the process instance IDs became UUIDs again.
For details see Camunda documentation on ID generators.
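A likely explanation for question 1 (my reading, not confirmed in the original post): defining your own SpringProcessEngineConfiguration bean makes the Camunda Spring Boot starter's auto-configuration back off, so the starter no longer installs its default StrongUuidGenerator, and the engine falls back to the database-backed DbIdGenerator, which hands out sequential numeric IDs. The explicit call shown above restores UUIDs; the import is assumed to be the following (verify against your Camunda version):

import org.camunda.bpm.engine.impl.persistence.StrongUuidGenerator;

// Inside processEngineConfiguration(), before returning config:
config.setIdGenerator(new StrongUuidGenerator());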

Java WebSocket session resulting to null after onOpen

I am using WebSockets for the first time in a JavaFX project. When I start the program, the session is set to the local variable session, but later, when I call the sendMessage function, the session is back to null. Below please find my client class.
package myclient;

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.util.logging.Level;
import java.util.logging.Logger;
import javafx.application.Application;
import javafx.fxml.FXMLLoader;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.stage.Stage;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.DeploymentException;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class Client extends Application {

    private static final Logger LOGGER = Logger.getLogger(Client.class.getName());

    private Session session;

    @OnOpen
    public void onOpen(Session session){
        this.session = session;
        System.out.println("Opened Session " + this.session);
    }

    @OnClose
    public void onClose(){
        System.out.println("Closed Session " + this.session);
    }

    @OnMessage
    public void onMessage(String msg){
        System.out.println("Websocket message received! " + msg);
    }

    @Override
    public void start(Stage stage) throws Exception {
        Parent root = FXMLLoader.load(getClass().getResource("FXMLClient.fxml"));
        Scene scene = new Scene(root);
        connectToWebSocket();
        stage.setScene(scene);
        stage.show();
    }

    private void connectToWebSocket() {
        System.out.println("Client WebSocket initialized>> " + this.session);
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        try {
            URI uri = URI.create("ws://localhost:8080/Server/endpoint");
            container.connectToServer(this, uri);
        }
        catch (DeploymentException | IOException ex) {
            LOGGER.log(Level.SEVERE, null, ex);
            System.exit(-1);
        }
    }

    public void sendMessage(String message) throws IOException{
        if(this.session != null){
            System.out.println(message + ", " + this.session);
            this.session.getBasicRemote().sendText(message);
        }
        else {
            System.out.println("Session is null");
        }
    }

    public static void main(String[] args) {
        launch(args);
    }
}
Any suggestions?
Thanks in advance
I think I now know the answer to this.
You are probably using Tomcat or some other server for this. Wherever you see "Tomcat" in this answer, substitute the name of the server you actually use.
When a connection to your WebSocket is opened, Tomcat will create an instance of the WebSocket (your Client) class by itself. This means the onOpen method will be called, and it will look as if it was you who created the instance and opened the connection, when really you did not. Tomcat did.
This in turn means that when you call sendMessage on your own Client instance, the session will be null, because that object never connected anywhere.
Oh, and you don't have access to the connected instance that was created by Tomcat.
One way of fixing this would be to do all the work inside the onOpen method, but that is not always practical. You may want to put the work in another method and call it from onOpen, as in the sketch below. That way, the instance created by Tomcat will do the necessary work.
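For illustration, a minimal sketch of that approach in the Client class above (doWork() is a made-up name; the point is that the work runs on the instance that actually owns the open session):

@OnOpen
public void onOpen(Session session) {
    this.session = session;
    System.out.println("Opened Session " + this.session);
    // Kick off the work from here, so it runs on the instance
    // the container actually connected.
    doWork();
}

private void doWork() {
    try {
        sendMessage("hello from the connected instance");
    } catch (IOException ex) {
        LOGGER.log(Level.SEVERE, null, ex);
    }
}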
In my project I needed to poll an MQTT topic and render the data on a website (university assignment). I did the polling in a separate class, which resulted in hard-to-debug errors whenever I tried to send the received data with my sendMessage method.
I hope this answer clears things up a little; if not for you, then maybe at least for future generations with the same university assignment...

How to create Couchbase bucket via java API?

I am using Spring Data Couchbase.
package com.CouchbaseMine.config;

import java.io.IOException;
import java.net.URI;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import com.couchbase.client.CouchbaseClient;

@Configuration
@EnableAutoConfiguration
public class CouchbaseMineCouchBaseConfig extends AbstractCouchbaseConfiguration {

    @Value("${couchbase.cluster.bucket}")
    private String bucketName;

    @Value("${couchbase.cluster.password}")
    private String password;

    @Value("${couchbase.cluster.ip}")
    private String ip;

    @Override
    protected String getBucketName() {
        List<URI> uris = new LinkedList<URI>();
        uris.add(URI.create("5x.xx.xxx.xx9"));
        CouchbaseClient client = null;
        try {
            System.err.println("-- > - > i am in ");
            client = new CouchbaseClient(uris, "default", "");
        } catch (IOException e) {
            System.err.println("IOException connetion to couchbase:" + e.getMessage());
            System.exit(1);
        }
        return this.bucketName;
    }

    @Override
    protected String getBucketPassword() {
        return this.password;
    }

    @Override
    protected List<String> bootstrapHosts() {
        // TODO Auto-generated method stub
        //return Collections.singletonList("54.89.127.249");
        return Arrays.asList(this.ip);
    }
}
This is the configuration class used to establish the connection.
The application.properties file follows:
server.port=3000
couchbase.cluster.ip=5x.xx.xxx.xx9
couchbase.cluster.bucket=DHxxxar
couchbase.cluster.password=1221
Bottom line: I have created the bucket (Dhxxxar) manually in Couchbase, but I need the bucket (database) to be created automatically when I run my Spring Boot application.
Any suggestions regarding this are appreciated. Thanks in advance.
Try this:
Cluster cluster = CouchbaseCluster.create("127.0.0.1");
ClusterManager clusterManager = cluster.clusterManager("Administrator", "12345");
BucketSettings bucketSettings = new DefaultBucketSettings.Builder()
        .type(BucketType.COUCHBASE)
        .name("hello")
        .quota(120)
        .build();
clusterManager.insertBucket(bucketSettings);
More details:
https://developer.couchbase.com/documentation/server/current/sdk/java/managing-clusters.html
IgorekPotworek's answer is great for Couchbase Java SDK version 2.x.
For version 3.x, the code looks a little different:
Cluster cluster = Cluster.connect("localhost", "Administrator", "password");
BucketManager bucketManager = cluster.buckets();
bucketManager.createBucket(
        BucketSettings.create("bucketName")
                .ramQuotaMB(100));
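To tie this back to the original Spring Boot setup, here is a minimal sketch using the 2.x API from the first answer: a helper that creates the configured bucket at startup if it is missing. The administrator credentials are placeholders, and ip, bucketName and password refer to the fields of the configuration class above; call something like this before Spring Data Couchbase opens the bucket.

import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.bucket.BucketType;
import com.couchbase.client.java.cluster.ClusterManager;
import com.couchbase.client.java.cluster.DefaultBucketSettings;

private void createBucketIfMissing() {
    // Connect with cluster-admin credentials (placeholders here).
    Cluster cluster = CouchbaseCluster.create(ip);
    ClusterManager clusterManager = cluster.clusterManager("Administrator", "12345");
    if (!clusterManager.hasBucket(bucketName)) {
        // Create the bucket with the name/password the application expects.
        clusterManager.insertBucket(new DefaultBucketSettings.Builder()
                .type(BucketType.COUCHBASE)
                .name(bucketName)
                .password(password)
                .quota(120)
                .build());
    }
    cluster.disconnect();
}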

RemoteApi calls from servlet not loading in local app engine app

I'm trying this example: https://cloud.google.com/appengine/docs/java/tools/remoteapi
Everything works fine if I run it as a Java application, but when I run it as a servlet it loads forever and doesn't throw any errors. It also works fine on localhost. I also noticed it happens when the query is made; when I comment it out (the datastore.put call), the servlet loads instantly.
import java.io.IOException;
import javax.servlet.http.*;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;

@SuppressWarnings("serial")
public class Gae_java_Servlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        RemoteApiOptions options = new RemoteApiOptions()
                .server("java-dot-project.appspot.com", 443)
                .useApplicationDefaultCredential();
        RemoteApiInstaller installer = new RemoteApiInstaller();
        installer.install(options);
        try {
            DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
            System.out.println("Key of new entity is " +
                    datastore.put(new Entity("Hello Remote API!")));
        } finally {
            installer.uninstall();
        }
    }
}
I figured it out: I needed to use RemoteApiOptions().useServiceAccountCredential("service email", "p12key") instead of useApplicationDefaultCredential().
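For reference, a minimal sketch of the changed options (the service-account email and .p12 key path are placeholders, not values from the original post):

RemoteApiOptions options = new RemoteApiOptions()
        .server("java-dot-project.appspot.com", 443)
        .useServiceAccountCredential(
                "my-service-account@my-project.iam.gserviceaccount.com", // placeholder email
                "/path/to/service-account-key.p12");                      // placeholder key path
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);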

How to make MongoDB Service Available?

I am developing an OSGi MongoDB bundle. I have also added the following dependencies:
com.mongodb
org.apache.felix.fileinstall
org.amdatu.mongo
org.apache.felix.configadmin
and all the dependency managers, but in the Gogo console I get the following output:
org.amdatu.mongo
org.osgi.service.cm.ManagedServiceFactory(service.pid=org.amdatu.mongo) registered
org.osgi.service.log.LogService service optional unavailable
[11] agenda.mongodb.mongo_gfs
agenda.mongo.inter.AgendaMongo() unregistered
org.amdatu.mongo.MongoDBService service required unavailable
The main problem is that MongoDBService is not available, and I require this service. To solve this problem I have read the book, which says:
From a development perspective, everything seems fine, but when you run the application, it will complain that the MongoDBService is unavailable. You can figure this out with the dm command in the shell. We did however set up MongoDB on our system and deployed the necessary dependencies in our runtime. Still, the MongoDBService was unable to start. How come? This is because the MongoDBService needs some mandatory configuration in order to know what database to connect to. The Amdatu MongoDB Service uses the Managed Service Factory pattern (see Chapter 4), and in order to bootstrap it, we need to supply a configuration file. In order to supply the configuration file, we need to create a new folder in our agenda project. Create a new folder called load. This is the default name that the runtime will look for in order to spot configuration files. Next, add an empty text file and call it something like org.amdatu.mongo-demo.xml. The configuration file needs at least the following information: dbName=demo
I have also applied this, but it's still unavailable.
This is interface:
package agenda.mongo.inter;

import java.io.InputStream;

public interface AgendaMongo {

    public String store_in_db();

    public InputStream getData(Object file_id);
}
This is the implementation for MongoDB:
package agenda.mongodb.gridfs;

import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.UnknownHostException;
import org.amdatu.mongo.MongoDBService;
import org.bson.types.ObjectId;
import agenda.mongo.inter.AgendaMongo;
import com.mongodb.DB;
import com.mongodb.DBCursor;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
import com.mongodb.gridfs.GridFSInputFile;

public class Gridfs_Mongodb implements AgendaMongo {

    GridFSInputFile gfsinput = null;

    private volatile MongoDBService mongoservice;

    public String store_in_db() {
        /*try {
            GridFS gfsHandler;
            gfsHandler = new GridFS(mongoservice.getDB(), "rest_data");// database
            File uri = new File("f:\\get1.jpg"); // name and
            gfsinput = gfsHandler.createFile(uri);
            gfsinput.saveChunks(1000);
            gfsinput.setFilename("new file");
            gfsinput.save();
            //System.out.println(gfsinput.getId());
            //save_filepath("file",gfsinput.getId());
            Object get_id = gfsinput.getId();//get_filename();
            //System.out.println(getData(get_id));
        } catch (UnknownHostException e) {
            // TODO Auto-generated catch block
            //System.out.println("Exception");
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            //System.out.println("Exception");
            e.printStackTrace();
        }*/
        System.out.println("DB:" + mongoservice.getDB());
        return mongoservice.getDB() + "";
    }

    /*
     * Retrieving the file
     */
    public InputStream getData(Object file_id) {
        GridFS gfsPhoto = new GridFS(mongoservice.getDB(), "rest_data");
        GridFSDBFile dataOutput = gfsPhoto.findOne((ObjectId) file_id);
        DBCursor cursor = gfsPhoto.getFileList();
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }
        System.out.println(dataOutput);
        return dataOutput.getInputStream();
    }

    void start() {
        System.out.println("hello");
        System.out.println(store_in_db());
    }
}
Here I was just trying to get the database name, because everything else can be done after that, but it was returning null because MongoDBService is unavailable.
And this is the Activator class:
package agenda.mongodb.gridfs;

import org.amdatu.mongo.MongoDBService;
import org.apache.felix.dm.DependencyActivatorBase;
import org.apache.felix.dm.DependencyManager;
import org.osgi.framework.BundleContext;
import agenda.mongo.inter.AgendaMongo;

public class Activator extends DependencyActivatorBase {

    @Override
    public void init(BundleContext arg0, DependencyManager manager)
            throws Exception {
        manager.add(createComponent()
                .setInterface(AgendaMongo.class.getName(), null)
                .setImplementation(Gridfs_Mongodb.class)
                .add(createServiceDependency()
                        .setService(MongoDBService.class)
                        .setRequired(true)));
    }

    @Override
    public void destroy(BundleContext arg0, DependencyManager arg1)
            throws Exception {
        // TODO Auto-generated method stub
    }
}
The Interface package is an exported package and the implementation package is private.
The configuration file should have a .cfg extension (not .xml).
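Putting the book's hint and this answer together, the configuration would be a small .cfg file in the load folder (the file name below is just an example following the org.amdatu.mongo factory PID convention), containing at least the database name:

# load/org.amdatu.mongo-demo.cfg
dbName=demo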
