JMX: Read attribute from Server - java

We're using Adobe CQ (5.5) as our CMS. Our CQ environment consists of one author server, where users create content, and two publish servers which serve the content to the internet.
A replication agent pushes content from the author server to both publish servers. Unfortunately, some articles block the queue of the replication agents, so no new content gets published. That by itself is not much of a problem, as it is easy to fix. The real problem is that we don't notice the blockage until users start to complain that their changes are no longer being published.
I searched around and found out that CQ provides a JMX API that monitoring applications can attach to. I then tried to find some open source software that would let me configure alerts so we can react faster, but I couldn't find anything.
That's when I decided to try writing my own Java application which just reads the attribute and sends a mail if the attribute is true. I guess that was more complicated than I thought.
First off, I'm not a Java developer, but since CQ is based on Java I thought I'd give it a try. I read some documentation about JMX and Java and managed to get a working connection to the CQ server, but that's about as far as I got.
I found out that the com.adobe.granite.replication domain has a type Agent with an id for every replication agent (the id is the name of the replication agent, for example id=replication-publish-1). Every replication agent exposes several attributes; the one relevant for me is "QueueBlocked".
This is the code I've got so far (it's based on this example):
public static void main(String[] args) {
    try {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://servername:9010/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
        // ClientListener and echo() come from the Oracle JMX example this code is based on
        ClientListener listener = new ClientListener();
        MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

        // This outputs the domains; one of them is com.adobe.granite.replication, the one I need to use.
        // This is why I'm sure that at least the connection works: I don't have any com.adobe.granite.replication
        // class in my Eclipse installation, so the output has to come from the server.
        String[] domains = mbsc.getDomains();
        for (int i = 0; i < domains.length; i++) {
            echo("\tDomain[" + i + "] = " + domains[i]);
        }

        ObjectName replication = new ObjectName("com.adobe.granite.replication:type=Agent,id=replication-publish-1");
        mbsc.getAttribute(replication, "QueueBlocked"); // This throws the error
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The error thrown is the following:
javax.management.InstanceNotFoundException: com.adobe.granite.replication:type=Agent,id=replication-publish-1
From what I understand I should be creating some kind of instance, but I don't really know which instance or how to create it. I'd really appreciate any help, whether it's documentation or a code snippet :)

Solved it :)
This is the code I'm using:
import java.io.IOException;
import java.util.Iterator;
import java.util.Set;
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class Client {

    public static void main(String[] args) {
        try {
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://servername:9010/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
            MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();

            ObjectName replication1 = new ObjectName("com.adobe.granite.replication:type=agent,id=\"replication-publish-1\"");
            ObjectName replication2 = new ObjectName("com.adobe.granite.replication:type=agent,id=\"replication-publish-2\"");

            String replication1Status = mbsc.getAttribute(replication1, "QueuePaused").toString();
            String replication2Status = mbsc.getAttribute(replication2, "QueuePaused").toString();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
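The difference that fixed the InstanceNotFoundException was the exact ObjectName format (lowercase type=agent and the id in quotes). If you're unsure of the format on your server, you can ask the MBean server for every name in the domain and copy the one you need. This is a minimal sketch using the same connection as above; the wildcard query is plain standard JMX, nothing CQ-specific is assumed:
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class ListReplicationAgents {

    // Prints every MBean name in the com.adobe.granite.replication domain,
    // so the exact type/id formatting can be copied into an ObjectName.
    public static void listAgents(MBeanServerConnection mbsc) throws Exception {
        ObjectName pattern = new ObjectName("com.adobe.granite.replication:*");
        Set<ObjectName> names = mbsc.queryNames(pattern, null);
        for (ObjectName name : names) {
            System.out.println(name.getCanonicalName());
        }
    }
}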

Related

Connecting to Siebel using Java databeans hangs forever

Hi, below is some sample code I've written:
import com.siebel.data.*;
import com.siebel.data.SiebelException;

public class DataBeanDemo
{
    private SiebelDataBean m_dataBean = null;
    private SiebelBusObject m_busObject = null;
    private SiebelBusComp m_busComp = null;

    public static void main(String[] args)
    {
        DataBeanDemo demo = new DataBeanDemo();
    }

    public DataBeanDemo()
    {
        try
        {
            m_dataBean = new SiebelDataBean();
            m_dataBean.login("Siebel://devServerXYZ:7777/XYZ/ecommunication_enu", "ROSADMIN", "ROSADMIN", "enu");
            System.out.println("Connected");
            m_busObject = m_dataBean.getBusObject("Opportunity");
            m_busComp = m_busObject.getBusComp("Opportunity");
            m_dataBean.logoff();
        }
        catch (SiebelException e)
        {
            System.out.println(e.getErrorMessage());
        }
    }
}
This code compiles and runs without errors, but gets stuck at m_dataBean.login() and never returns.
What could be the issue?
If I change the connect string (even just the port, from 7777 to any other number like 2320 or 2321), then I get the error "Could not open a session in 4 attempts. SBL-JCA-00200".
Three things to verify:
1. Parameters in the connect string: gateway server name, OM component name, port number, etc. (A username/password error is shown immediately, but the other problems either throw generic errors or hang forever. See the connect-string breakdown after this list.)
2. (This is something specific to Siebel.) Ensure that the Java subsystem profile has a classpath pointing to the siebel.jar and siebelJI_lang.jar files.
3. The Siebel Server is up and running.
Also note that if LDAP authentication is enabled, such logins cannot be done using Data Beans.
In my case it was 1 and 2 that were causing the issues. By the way, the component name is case sensitive.
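For item 1, here is a hedged breakdown of the connect string taken from the question itself; the part labels are my reading of the usual Siebel format (gateway host, port, enterprise, object manager plus language code), so double-check them against your own environment:
public class SiebelConnectString {

    public static void main(String[] args) {
        String gateway = "devServerXYZ";         // gateway server host
        int port = 7777;                          // gateway / SCBroker port
        String enterprise = "XYZ";                // enterprise name
        String objectManager = "ecommunication";  // object manager component (case sensitive)
        String language = "enu";                  // language code

        // Same shape as the connect string used in the question:
        String connectString = "Siebel://" + gateway + ":" + port + "/"
                + enterprise + "/" + objectManager + "_" + language;
        System.out.println(connectString);
        // => Siebel://devServerXYZ:7777/XYZ/ecommunication_enu
    }
}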

Matlab Java Interoperability

Our web app acts as an integration layer that lets users run, via the browser, Matlab code (Matlab is a scientific programming language) that was compiled to Java and packaged as jar files (the selected ones in the image above, except for remote_proxy-1.0.0.jar, which is not Matlab-compiled; it is used for RMI).
The problem is that the Matlab Java runtime, contained inside the javabuilder-1.0.0.jar file, has a process-wide locking mechanism. If the first user sends an HTTP request to execute cdf_read-1.0.0.jar (or any Matlab-compiled-to-Java jar), subsequent requests block until the first one finishes, and a single request takes no less than 5 seconds since JNI is used to invoke the native Matlab code. The app server just spawns a new thread to serve each request, but because of the process-wide lock these newly spawned threads simply block waiting for the first request to be fulfilled, so our app can effectively serve only one user at a time.
To work around this, for each such request we start a new JVM process, send the request to that process to run the job using RMI, return the result to the app server's process, and then destroy the spawned process. This solves the blocking issue, but it is not good at all in terms of memory use; this is a niche app, so the number of users is in the range of thousands. Below is the code used to start a new process that runs the BootStrap class, which starts a new RMI registry and binds a remote object to run the job.
package rmi;

import java.io.*;
import java.nio.file.*;
import static java.util.stream.Collectors.joining;
import java.util.stream.Stream;
import javax.enterprise.concurrent.ManagedExecutorService;
import org.slf4j.LoggerFactory;

//TODO: Remove sout
public class ProcessInit {

    public static Process startRMIServer(ManagedExecutorService pool, String WEBINF, int port, String jar) {
        ProcessBuilder pb = new ProcessBuilder();
        Path wd = Paths.get(WEBINF);
        pb.directory(wd.resolve("classes").toFile());
        Path lib = wd.resolve("lib");
        String cp = Stream.of("javabuilder-1.0.0.jar", "remote_proxy-1.0.0.jar", jar)
                .map(e -> lib.resolve(e).toString())
                .collect(joining(File.pathSeparator));
        pb.command("java", "-cp", "." + File.pathSeparator + cp, "rmi.BootStrap", String.valueOf(port));
        while (true) {
            try {
                Process p = pb.start();
                pool.execute(() -> flushIStream(p.getInputStream()));
                pool.execute(() -> flushIStream(p.getErrorStream()));
                return p;
            } catch (Exception ex) {
                ex.printStackTrace();
                System.out.println("Retrying....");
            }
        }
    }

    private static void flushIStream(InputStream is) {
        try (BufferedReader br = new BufferedReader(new InputStreamReader(is))) {
            br.lines().forEach(System.out::println);
        } catch (IOException ex) {
            LoggerFactory.getLogger(ProcessInit.class.getName()).error(ex.getMessage());
        }
    }
}
This class starts a new RMI registry so that each HTTP request executing Matlab code can be run in a separate process; we do this because each RMI registry is bound to a process, so we need a separate registry for each JVM process.
package rmi;

import java.rmi.RemoteException;
import java.rmi.registry.*;
import java.rmi.server.UnicastRemoteObject;
import java.util.logging.*;
import remote_proxy.*;

//TODO: Remove sout
public class BootStrap {

    public static void main(String[] args) {
        int port = Integer.parseInt(args[0]);
        System.out.println("Instantiating a task runner implementation on port: " + port);
        try {
            System.setProperty("java.rmi.server.hostname", "localhost");
            TaskRunner runner = new TaskRunnerRemoteObject();
            TaskRunner stub = (TaskRunner) UnicastRemoteObject.exportObject(runner, 0);
            Registry reg = LocateRegistry.createRegistry(port);
            reg.rebind("runner" + port, stub);
        } catch (RemoteException ex) {
            Logger.getLogger(BootStrap.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
This class submits the request to execute the Matlab code, returns the result, and kills the newly spawned process.
package rmi.tasks;

import java.rmi.*;
import java.rmi.registry.*;
import java.util.Random;
import java.util.concurrent.*;
import java.util.logging.*;
import javax.enterprise.concurrent.ManagedExecutorService;
import remote_proxy.*;
import rmi.ProcessInit;

public final class Tasks {

    /**
     * @param pool   This instance should be injected using @Resource(name = "java:comp/DefaultManagedExecutorService")
     * @param task   This is an implementation of the Task interface; this
     *               implementation should extend from the MATLAB class and accept any necessary
     *               arguments, e.g. Class1, and it must implement the Serializable interface
     * @param WEBINF WEB-INF directory
     * @param jar    Name of the jar that contains this MATLAB function
     * @throws RemoteException
     * @throws NotBoundException
     */
    public static final <T> T runTask(ManagedExecutorService pool, Task<T> task, String WEBINF, String jar) throws RemoteException, NotBoundException {
        int port = new Random().nextInt(1000) + 2000;
        Future<Process> process = pool.submit(() -> ProcessInit.startRMIServer(pool, WEBINF, port, jar));
        Registry reg = LocateRegistry.getRegistry(port);
        TaskRunner generator = (TaskRunner) reg.lookup("runner" + port);
        T result = generator.runTask(task);
        destroyProcess(process);
        return result;
    }

    private static void destroyProcess(Future<Process> process) {
        try {
            System.out.println("KILLING THIS PROCESS");
            process.get().destroy();
            System.out.println("DONE KILLING THIS PROCESS");
        } catch (InterruptedException | ExecutionException ex) {
            Logger.getLogger(Tasks.class.getName()).log(Level.SEVERE, null, ex);
            System.out.println("DONE KILLING THIS PROCESS");
        }
    }
}
The questions:
Do I have to start a new separate RMI registry and bind a remote to it for each new process?
Is there a better way to achieve the same result?
You don't want JVM startup time to be part of the perceived transaction time. I would start a large number of RMI JVMs ahead of time, depending on the expected number of concurrent requests, which could be in the hundreds or even thousands.
You only need one Registry: rmiregistry.exe. Start it on its default port and with an appropriate CLASSPATH so it can find all your stubs and application classes they depend on.
Bind each remote object into that Registry with sequentially-increasing names of the general form runner%d.
Have your RMI client pick a 'runner' at random from the known range 1-N where N is the number of runners. You may need a more sophisticated mechanism than mere randomness to ensure that the runner is free at the time.
You don't need multiple Registry ports or even multiple Registries.
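To make that concrete, here is a rough sketch of the client side under that scheme: one pre-started registry on the default port, runners bound as runner1..runnerN, and the client picking one at random. The Task and TaskRunner types come from the question's remote_proxy package; the runner count and the random selection are assumptions, and as noted above you would likely want something smarter than plain randomness to find a free runner.
import java.rmi.NotBoundException;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.util.concurrent.ThreadLocalRandom;
import remote_proxy.*;

public class RunnerClient {

    private static final int RUNNER_COUNT = 100; // assumption: N pre-started runner JVMs

    // Looks up a pre-started runner in the single registry (default port 1099)
    // and hands it the task, instead of spawning a new JVM per request.
    public static <T> T runTask(Task<T> task) throws RemoteException, NotBoundException {
        Registry registry = LocateRegistry.getRegistry("localhost"); // default port 1099
        int n = ThreadLocalRandom.current().nextInt(1, RUNNER_COUNT + 1);
        TaskRunner runner = (TaskRunner) registry.lookup("runner" + n);
        return runner.runTask(task);
    }
}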

Sending RDF/XML using Sesame or Apache Jena with Sockets

I am trying to send RDF/XML from a Client to a Server using sockets in Java. When I send the information the Server program hangs and does not receive the info unless I close the Socket or OutputStream on the Client side. Even if I flush the OutputStream on the Client-side the Server does not receive the data unless I close the Socket/Stream. I would like to send the information without closing the socket. Here is some example code for the Client (using Sesame):
import java.io.*;
import java.net.*;
import org.openrdf.rio.*;
import org.openrdf.rio.helpers.*;
import org.openrdf.model.URI;
import org.openrdf.model.Model;
import org.openrdf.model.ValueFactory;
import org.openrdf.model.Statement;
import org.openrdf.model.impl.*;
import org.openrdf.model.vocabulary.*;

public class SimpleRDFClient {

    private Socket socket = null;

    public static void main(String[] args) {
        new SimpleRDFClient(args[0], Integer.parseInt(args[1])).launch();
    }

    public SimpleRDFClient(String host, int port) {
        try {
            socket = new Socket(host, port);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    public void launch() {
        try {
            OutputStream out = socket.getOutputStream();
            BufferedOutputStream dos = new BufferedOutputStream(out);
            Model model = new LinkedHashModel();
            ValueFactory factory = new ValueFactoryImpl();
            URI clive = factory.createURI("http://www.site.org/cliveAnderson");
            Statement st = factory.createStatement(clive, RDF.TYPE, FOAF.PERSON);
            model.add(st);
            Rio.write(model, dos, RDFFormat.RDFXML);
            dos.flush();
            //Some other stuff
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
And the Server Handler:
import java.io.*;
import java.net.*;
import org.openrdf.rio.*;
import org.openrdf.rio.helpers.*;
import org.openrdf.model.*;
import org.openrdf.model.impl.*;

public class SimpleRDFSHandler implements Handler {

    public void handleConnection(Socket socket) {
        Model model = null;
        try {
            InputStream in = socket.getInputStream();
            model = Rio.parse(in, "www.blah.com", RDFFormat.RDFXML);
            for (Statement st : model) {
                System.out.println(st);
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
The problem seems to come from the Rio.parse() method hanging (I think because it does not know when the input ends). I get a similar problem when I use the Jena api in a similar way, i.e. using Model.write(outputstream,format) and Model.read(inputstream,format) instead of Rio. I have looked at the source and the javadoc for ages but can't solve the problem. I think it must be something simple I have misunderstood. Any ideas?
I don't think this is in any way a Jena/Sesame-specific issue, but rather a Java issue around your use of sockets. Is there actually a practical reason you want to not close the socket?
I don't see why this would ever be advisable unless you want to continuously post data and process it as it is received on the server side. If that is the case, both Jena and Sesame have APIs that specifically allow you to control what happens to data as it is parsed in, so that you aren't reliant on your read calls completing before you process the data.
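For example, with Sesame 2.x you can attach an RDFHandler to a Rio parser and react to each statement as soon as it is parsed, rather than waiting for Rio.parse() to return a complete Model. A minimal sketch (the base URI and the System.out handling are just placeholders; parse() still runs until the stream ends, but each statement is handled as it arrives):
import java.io.InputStream;
import org.openrdf.model.Statement;
import org.openrdf.rio.RDFFormat;
import org.openrdf.rio.RDFParser;
import org.openrdf.rio.Rio;
import org.openrdf.rio.helpers.RDFHandlerBase;

public class StreamingHandlerExample {

    // Processes statements one by one as the parser produces them.
    public static void parseIncrementally(InputStream in) throws Exception {
        RDFParser parser = Rio.createParser(RDFFormat.RDFXML);
        parser.setRDFHandler(new RDFHandlerBase() {
            @Override
            public void handleStatement(Statement st) {
                System.out.println(st); // handle each statement as soon as it is parsed
            }
        });
        parser.parse(in, "http://www.blah.com/");
    }
}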
Also why use sockets, both Sesame and Jena have comprehensive HTTP integration which is much easier to use and deploy than rolling your own socket based server and clients.
The Ugly Hacky Solution
If you really must do this then there is a workaround, but it is somewhat horrid and fragile, and I would strongly recommend that you do not do it.
On the client side, after you write the data, write a sequence of bytes that indicates end of stream. On the server side, wrap the socket stream with a custom InputStream implementation that recognizes this sequence and stops returning data once it is seen. This should allow the Jena/Sesame code, which expects the stream to finish, to function correctly.
The sequence of bytes needs to be carefully chosen so that it won't naturally occur in the data.
To be honest this is a terrible idea. If your aim is to continuously post data, it won't really solve your problem either, because you'll just be leaking sockets server-side unless you put the server-side socket handling code in a while (true) loop, which is likely another bad idea.
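For what it's worth, here is a rough sketch of that workaround, under the assumption that a single 0x00 byte is used as the sentinel (it cannot appear in RDF/XML text, so it is a reasonably safe choice). The client writes the RDF, then the sentinel, then flushes; the server wraps the socket stream in this before handing it to Rio.parse():
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Reports end-of-stream as soon as a 0x00 sentinel byte is seen, without closing the socket. */
public class DelimitedInputStream extends FilterInputStream {

    private static final int SENTINEL = 0x00;
    private boolean finished = false;

    public DelimitedInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        if (finished) {
            return -1;
        }
        int b = super.read();
        if (b == -1 || b == SENTINEL) {
            finished = true;
            return -1;
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        // Simple byte-at-a-time implementation; slow, but keeps the sentinel logic obvious.
        if (finished) {
            return -1;
        }
        int n = 0;
        while (n < len) {
            int b = read();
            if (b == -1) {
                break;
            }
            buf[off + n] = (byte) b;
            n++;
        }
        return (n == 0 && len > 0) ? -1 : n;
    }
}
Usage, sticking to the names in the question: on the client call dos.write(0); dos.flush(); after Rio.write(...), and on the server call Rio.parse(new DelimitedInputStream(in), "www.blah.com", RDFFormat.RDFXML).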

Cassandra Astyanax documentation

I am trying to use Astyanax for Cassandra with Java. I tried the example at https://github.com/Netflix/astyanax/wiki/Getting-Started. Here is the code, which I have copied from that link:
package def;

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.MutationBatch;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.OperationResult;
import com.netflix.astyanax.connectionpool.exceptions.ConnectionException;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.serializers.StringSerializer;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

public class sample {

    public static void main(String[] args) throws Exception {
        AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
                .forCluster("Test Cluster")
                .forKeyspace("KeyspaceName")
                .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                        .setDiscoveryType(NodeDiscoveryType.NONE)
                )
                .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
                        .setPort(9160)
                        .setMaxConnsPerHost(10)
                        .setSeeds("127.0.0.1:9160")
                )
                .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
                .buildKeyspace(ThriftFamilyFactory.getInstance());

        context.start();
        Keyspace keyspace = context.getEntity();

        ColumnFamily<String, String> CF_USER_INFO =
                new ColumnFamily<String, String>(
                        "Standard1",             // Column Family Name
                        StringSerializer.get(),  // Key Serializer
                        StringSerializer.get()); // Column Serializer

        // Inserting data
        MutationBatch m = keyspace.prepareMutationBatch();
        m.withRow(CF_USER_INFO, "acct1234")
                .putColumn("firstname", "john", null)
                .putColumn("lastname", "smith", null)
                .putColumn("address", "555 Elm St", null)
                .putColumn("age", 30, null);
        m.withRow(CF_USER_INFO, "acct1234")
                .incrementCounterColumn("loginCount", 1);

        try {
            OperationResult<Void> result = m.execute();
        } catch (ConnectionException e) {
        }

        System.out.println("completed the task!!!");

        OperationResult<ColumnList<String>> result =
                keyspace.prepareQuery(CF_USER_INFO)
                        .getKey("Key1")
                        .execute();
        ColumnList<String> columns = result.getResult();

        // Lookup columns in response by name
        int age = columns.getColumnByName("age").getIntegerValue();
        long counter = columns.getColumnByName("loginCount").getLongValue();
        String address = columns.getColumnByName("address").getStringValue();

        // Or, iterate through the columns
        for (Column<String> c : result.getResult()) {
            System.out.println(c.getName());
        }
    }
}
But when I run this I am getting an exception:
log4j:WARN No appenders could be found for logger (com.netflix.astyanax.connectionpool.impl.ConnectionPoolMBeanManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
completed the task!!!
Exception in thread "main" com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=0(0), attempts=1] InvalidRequestException(why:Keyspace KeyspaceName does not exist)
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$1.execute(ThriftSyncConnectionFactoryImpl.java:119)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:52)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1.execute(ThriftColumnFamilyQueryImpl.java:180)
at def.sample.main(sample.java:68)
Caused by: InvalidRequestException(why:Keyspace KeyspaceName does not exist)
at org.apache.cassandra.thrift.Cassandra$set_keyspace_result.read(Cassandra.java:4874)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_set_keyspace(Cassandra.java:489)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:476)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$1.execute(ThriftSyncConnectionFactoryImpl.java:109)
... 4 more
Can anyone tell me what's wrong with this? There isn't much documentation available either, so can you help me out and maybe point me to some links where I can find more examples?
why:Keyspace KeyspaceName does not exist
The error above is pretty self-explanatory: the keyspace does not exist when the application connects to localhost. So make sure that you create the keyspace, then re-run your application.
From the comment, I think you want to look into this. Excerpt from the thread:
The Keyspace serves as a client only and does not create the keyspace or column family on Cassandra. You can use the AstyanaxContext.Builder to construct a Cluster interface through which you can actually create the keyspace and column families.
The unit test in this link should provide you with sufficient information on how to create a keyspace in your cluster.
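For reference, creating the keyspace through the Cluster interface looks roughly like the sketch below. This is from memory of the Astyanax 1.x thrift API (makeKeyspaceDefinition / addKeyspace), so treat the method names as assumptions and verify them against the unit test linked above; the keyspace name and replication settings are placeholders matching the question.
import java.util.HashMap;
import java.util.Map;
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Cluster;
import com.netflix.astyanax.ddl.KeyspaceDefinition;

public class CreateKeyspaceSketch {

    // Assumes clusterContext was built with .buildCluster(ThriftFamilyFactory.getInstance())
    // using the same connection pool settings as in the question.
    public static void createKeyspace(AstyanaxContext<Cluster> clusterContext) throws Exception {
        clusterContext.start();
        Cluster cluster = clusterContext.getEntity();

        KeyspaceDefinition ksDef = cluster.makeKeyspaceDefinition();
        ksDef.setName("KeyspaceName");
        ksDef.setStrategyClass("SimpleStrategy"); // the fully qualified class name may be required

        Map<String, String> options = new HashMap<String, String>();
        options.put("replication_factor", "1");
        ksDef.setStrategyOptions(options);

        cluster.addKeyspace(ksDef);
    }
}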
Your code sample is written to talk to a running instance of a Cassandra server on your localhost at 127.0.0.1. If you have Cassandra running elsewhere, or not at all, then you'll need to install and set up that server environment prior to executing your code.

Easy way to start a standalone JNDI server (and register some resources)

For testing purposes, I'm looking for a simple way to start a standalone JNDI server, and bind my javax.sql.DataSource to "java:/comp/env/jdbc/mydatasource" programmatically.
The server should bind itself to some URL, for example: "java.naming.provider.url=jnp://localhost:1099" (doesn't have to be JNP), so that I can look up my datasource from another process. I don't care about which JNDI server implementation I'll have to use (but I don't want to start a full-blown JavaEE server).
This should be so easy, but to my surprise, I couldn't find any (working) tutorial.
The JDK contains a JNDI provider for the RMI registry. That means you can use the RMI registry as a JNDI server. So, just start rmiregistry, set java.naming.factory.initial to com.sun.jndi.rmi.registry.RegistryContextFactory, and you're away.
The RMI registry has a flat namespace, so you won't get a real java:/comp/env/jdbc hierarchy, but it will accept java:/comp/env/jdbc/mydatasource as a name and simply treat it as a single-component name (thanks, @EJP).
I've written a small application to demonstrate how to do this: https://bitbucket.org/twic/jndiserver/src
I still have no idea how the JNP server is supposed to work.
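To give a flavour of what that looks like, here is a minimal sketch (not the bitbucket demo itself). It assumes rmiregistry is already running on localhost:1099, and that whatever you bind is either a java.rmi.Remote object or a javax.naming.Reference/Referenceable, since that is what the registry provider can store; the factory class name in the Reference is hypothetical.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.Reference;

public class RmiRegistryJndiDemo {

    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.rmi.registry.RegistryContextFactory");
        env.put(Context.PROVIDER_URL, "rmi://localhost:1099");
        Context ctx = new InitialContext(env);

        // A real DataSource would normally be bound via a Reference plus an object factory;
        // "com.example.MyDataSourceFactory" is a hypothetical factory class name.
        Reference ref = new Reference("javax.sql.DataSource", "com.example.MyDataSourceFactory", null);

        // The registry namespace is flat: the whole string below is treated as one name.
        ctx.bind("java:/comp/env/jdbc/mydatasource", ref);
        System.out.println(ctx.lookup("java:/comp/env/jdbc/mydatasource"));
    }
}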
I worked on John's code and it now works well.
In this version I'm using libs from JBoss 5.1.0.GA; see the jar list below:
jboss-5.1.0.GA\client\jbossall-client.jar
jboss-5.1.0.GA\server\minimal\lib\jnpserver.jar
jboss-5.1.0.GA\server\minimal\lib\log4j.jar
jboss-remote-naming-1.0.1.Final.jar (downloaded from http://search.maven.com)
This is the new code:
import java.net.InetAddress;
import java.util.Hashtable;
import java.util.concurrent.Callable;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.jnp.server.Main;
import org.jnp.server.NamingBeanImpl;

public class StandaloneJNDIServer implements Callable<Object> {

    public Object call() throws Exception {
        setup();
        return null;
    }

    @SuppressWarnings("unchecked")
    private void setup() throws Exception {
        // configure the initial factory
        // **in John's code we did not have this**
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");

        // start the naming info bean
        final NamingBeanImpl _naming = new NamingBeanImpl();
        _naming.start();

        // start the JNP server
        final Main _server = new Main();
        _server.setNamingInfo(_naming);
        _server.setPort(5400);
        _server.setBindAddress(InetAddress.getLocalHost().getHostName());
        _server.start();

        // configure the environment for the initial context
        final Hashtable _properties = new Hashtable();
        _properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        _properties.put(Context.PROVIDER_URL, "jnp://10.10.10.200:5400");

        // bind a name
        final Context _context = new InitialContext(_properties);
        _context.bind("jdbc", "myJDBC");
    }

    public static void main(String... args) {
        try {
            new StandaloneJNDIServer().call();
        } catch (Exception _e) {
            _e.printStackTrace();
        }
    }
}
To have good logging, use this log4j properties:
log4j.rootLogger=TRACE, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
To consume the Standalone JNDI server, use this client class:
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

/**
 * @author fabiojm - Fábio José de Moraes
 */
public class Lookup {

    public Lookup() {
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        final Hashtable _properties = new Hashtable();
        _properties.put("java.naming.factory.initial", "org.jnp.interfaces.NamingContextFactory");
        _properties.put("java.naming.provider.url", "jnp://10.10.10.200:5400");
        try {
            final Context _context = new InitialContext(_properties);
            System.out.println(_context);
            System.out.println(_context.lookup("java:comp"));
            System.out.println(_context.lookup("java:jdbc"));
        } catch (Exception _e) {
            _e.printStackTrace();
        }
    }
}
Here's a code snippet adapted from the JBoss Remoting samples. The code in the samples (version 2.5.4.SP2) no longer works. While the fix is simple, it took me more hours than I want to think about to figure it out. Sigh. Anyway, maybe someone can benefit.
package org.jboss.remoting.samples.detection.jndi.custom;

import java.net.InetAddress;
import java.util.concurrent.Callable;
import org.jnp.server.Main;
import org.jnp.server.NamingBeanImpl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class StandaloneJNDIServer implements Callable<Object> {

    private static Logger logger = LoggerFactory.getLogger(StandaloneJNDIServer.class);

    // Default locator values - command line args can override transport and port
    private static String transport = "socket";
    private static String host = "localhost";
    private static int port = 5400;

    private int detectorPort = 5400;

    public StandaloneJNDIServer() {}

    @Override
    public Object call() throws Exception {
        StandaloneJNDIServer.println("Starting JNDI server... to stop this server, kill it manually via Control-C");
        //StandaloneJNDIServer server = new StandaloneJNDIServer();
        try {
            this.setupJNDIServer();
            // wait forever, let the user kill us at any point (at which point, the client will detect we went down)
            while (true) {
                Thread.sleep(1000);
            }
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        StandaloneJNDIServer.println("Stopping JBoss/Remoting server");
        return null;
    }

    private void setupJNDIServer() throws Exception
    {
        // start JNDI server
        String detectorHost = InetAddress.getLocalHost().getHostName();
        Main JNDIServer = new Main();

        // Next two lines add a naming implementation into
        // the server object that handles requests. Without this you get a nice NPE.
        NamingBeanImpl namingInfo = new NamingBeanImpl();
        namingInfo.start();

        JNDIServer.setNamingInfo(namingInfo);
        JNDIServer.setPort(detectorPort);
        JNDIServer.setBindAddress(detectorHost);
        JNDIServer.start();
        System.out.println("Started JNDI server on " + detectorHost + ":" + detectorPort);
    }

    /**
     * Outputs a message to stdout.
     *
     * @param msg the message to output
     */
    public static void println(String msg)
    {
        System.out.println(new java.util.Date() + ": [SERVER]: " + msg);
    }
}
I know I'm late to the party, but I ended up hacking this together like so
InitialContext ctx = new InitialContext();

// check if we have a JNDI binding for "jdbc". If we do not, we are
// running locally (i.e. through JUnit, etc)
boolean isJndiBound = true;
try {
    ctx.lookup("jdbc");
} catch (NameNotFoundException ex) {
    isJndiBound = false;
}

if (!isJndiBound) {
    // Create the "jdbc" sub-context (i.e. the directory)
    ctx.createSubcontext("jdbc");

    // parse the jetty-web.xml file
    Map<String, DataSource> dataSourceProperties = JettyWebParser.parse();

    // add the data sources to the sub-context
    for (String key : dataSourceProperties.keySet()) {
        DataSource ds = dataSourceProperties.get(key);
        ctx.bind(key, ds);
    }
}
Have you considered using Mocks? If I recall correctly you use Interfaces to interact with JNDI. I know I've mocked them out at least once before.
As a fallback, you could probably use Tomcat. It's not a full blown J2EE impl, it starts fast, and is fairly easy to configure JNDI resources for. DataSource setup is well documented. It's sub-optimal, but should work.
You imply you've found non-working tutorials; that may mean you've already seen these:
J2EE or J2SE? JNDI works with both
Standalone JNDI server using jnpserver.jar
I had a quick go, but couldn't get this working. A little more perseverance might do it, though.
For local, one-process, standalone-jar purposes I would use the spring-test package:
SimpleNamingContextBuilder builder = new SimpleNamingContextBuilder();
SQLServerConnectionPoolDataSource myDS = new SQLServerConnectionPoolDataSource();
//setup...
builder.bind("java:comp/env/jdbc/myDS", myDS);
builder.activate();
startup log:
22:33:41.607 [main] INFO org.springframework.mock.jndi.SimpleNamingContextBuilder - Static JNDI binding: [java:comp/env/jdbc/myDS] = [SQLServerConnectionPoolDataSource:1]
22:33:41.615 [main] INFO org.springframework.mock.jndi.SimpleNamingContextBuilder - Activating simple JNDI environment
I have been looking for a similar simple starter solution recently. The "file system service provider from Sun Microsystems" has worked well for me. See https://docs.oracle.com/javase/jndi/tutorial/basics/prepare/initial.html.
The problem with the RMI registry is that you need a viewer - here you just need to look at file contents.
You may need fscontext-4.2.jar - I obtained it from http://www.java2s.com/Code/Jar/f/Downloadfscontext42jar.htm
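A minimal sketch of what using it looks like, assuming fscontext is on the classpath: the directory path and binding name are placeholders, the directory must already exist, and as with the RMI provider the usual route for a DataSource is binding a JNDI Reference (the factory class name below is hypothetical).
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.Reference;

public class FsContextDemo {

    public static void main(String[] args) throws Exception {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, "file:///tmp/jndi"); // directory must already exist

        Context ctx = new InitialContext(env);

        // RefFSContext stores JNDI References in a .bindings file in that directory;
        // "com.example.MyDataSourceFactory" is a hypothetical object factory.
        Reference ref = new Reference("javax.sql.DataSource", "com.example.MyDataSourceFactory", null);
        ctx.rebind("mydatasource", ref);

        System.out.println(ctx.lookup("mydatasource"));
    }
}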
