This is my first question at Stack Overflow, so feel free to tell me if I do something wrong :)
I'm working on a project involving EJB and JBoss 4.2.3.GA. At one point, we try to access each node of the cluster, locate an EJB on it, and return it.
This is the piece of code that does the JNDI lookup:
public static <I> I getCache(Class<I> i, String clusterNode) {
    ServiceLocator serviceLocator = ServiceLocator.getInstance();
    String jndi = serviceLocator.getRemoteJNDIName(i);

    Properties props = new Properties();
    props.setProperty(Context.PROVIDER_URL, "jnp://" + clusterNode + ":" + jndiPort);
    props.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
    props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");

    Object result = null;
    try {
        InitialContext ctx = new InitialContext(props);
        result = ctx.lookup(jndi);
    } catch (NamingException e) {
        return null;
    }
    return (I) result;
}
Here:
clusterNode is a simple string with the IP address or DNS name of the node, e.g. "192.168.2.65" or "cluster1".
getRemoteJNDIName returns a String such as "MyEARName/MyEJBName/remote".
The problem is that when I call this method with, for example, "127.0.0.1" it works fine. Also, if I call it with an existing and working IP address where a server is up and running, it's OK too.
However, if I call the method with a non-existent or unreachable address or DNS name, instead of throwing a NamingException it returns the EJB from my own machine. Therefore, I can't tell whether the node is up or not.
I guess there may be better ways to do it. I'd like to hear about them, but we cannot make "big" changes to the product because it has been in production for a few years now.
That's it. Thank you in anticipation and best regards.
However, if I call the method with a non-existent or unreachable address or DNS name, instead of throwing a NamingException it returns the EJB from my own machine
I think this behavior can be explained by automatic naming discovery. When Context.PROVIDER_URL is not specified, or the nodes in the list are not reachable (which is your case), the client is allowed to search the network for available JNDI services.
However, this only works under certain conditions, among them: all cluster nodes running in the ALL configuration, and all nodes located in the same subnet.
You can disable this behavior by setting the InitialContext property jnp.disableDiscovery=true.
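As a minimal sketch, reusing the properties from the lookup method above, adding that flag makes an unreachable node fail the lookup instead of silently discovering another naming service:

Properties props = new Properties();
props.setProperty(Context.PROVIDER_URL, "jnp://" + clusterNode + ":" + jndiPort);
props.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
// Without this, an unreachable provider URL lets the client multicast for any
// available naming service and silently bind to it.
props.setProperty("jnp.disableDiscovery", "true");
InitialContext ctx = new InitialContext(props); // now throws NamingException if the node is down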
I guess there may be better ways to do it
Looking at the code, you are not caching the object pulled from JNDI; this means that every time you need to execute a service, a new lookup (which is a time-consuming operation) has to be done. The ServiceLocator pattern suggests caching the lookup result to improve performance.
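A rough sketch of that suggestion, assuming a simple per-node cache keyed by JNDI name (the ConcurrentHashMap, the key format, and the doLookup helper standing in for your existing lookup code are all illustrative, not part of your code):

// Illustrative cache; requires java.util.Map and java.util.concurrent.ConcurrentHashMap.
private static final Map<String, Object> PROXY_CACHE = new ConcurrentHashMap<>();

public static <I> I getCache(Class<I> i, String clusterNode) {
    String jndi = ServiceLocator.getInstance().getRemoteJNDIName(i);
    String key = clusterNode + "/" + jndi;
    Object proxy = PROXY_CACHE.get(key);
    if (proxy == null) {
        proxy = doLookup(jndi, clusterNode); // hypothetical helper wrapping the lookup above
        if (proxy != null) {
            PROXY_CACHE.put(key, proxy);
        }
    }
    return i.cast(proxy);
}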
I am trying to integrate Meek in an Android application. There is a sample here that shows how to instantiate the transport:
https://github.com/guardianproject/AndroidPluggableTransports/blob/master/sample/src/main/java/info/pluggeabletransports/sample/SampleClientActivity.java
The question is what to do from there. Assuming I have an application that uses OkHttp3, is there a way to reconcile the two and use Meek as the underlying transport mechanism while the app only interacts with OkHttp3?
I am also quite conflicted about how to instantiate the transport and what each option means. In the link provided above, the transport is instantiated as follows:
private void initMeekTransport() {
    new MeekTransport().register();

    Properties options = new Properties();
    String remoteAddress = "185.152.65.180:9001"; // a public Tor guard to test
    options.put(MeekTransport.OPTION_URL, "https://meek.azureedge.net/"); // the public Tor Meek endpoint
    options.put(MeekTransport.OPTION_FRONT, "ajax.aspnetcdn.com"); // the domain fronting address to use
    options.put(MeekTransport.OPTION_KEY, "97700DFE9F483596DDA6264C4D7DF7641E1E39CE"); // the key that is needed for this endpoint

    init(DispatchConstants.PT_TRANSPORTS_MEEK, remoteAddress, options);
}
However, in the README (https://github.com/guardianproject/AndroidPluggableTransports), the suggested approach is:
Properties options = new Properties();
String bridgeAddress = "https://meek.actualdomain.com";
options.put(MeekTransport.OPTION_FRONT, "www.somefrontabledomain.com");
options.put(MeekTransport.OPTION_KEY, "18800CFE9F483596DDA6264C4D7DF7331E1E39CE");
init("meek", bridgeAddress, options);

Transport transport = Dispatcher.get().getTransport(this, PT_TRANSPORTS_MEEK, options);
if (transport != null)
{
    Connection conn = transport.connect(bridgeAddress);
    // now use the connection, either as a proxy, or to read and write bytes directly
    if (conn.getLocalAddress() != null && conn.getLocalPort() != -1)
        setSocksProxy(conn.getLocalAddress(), conn.getLocalPort());

    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    baos.write("GET https://somewebsite.org/TheProject.html HTTP/1.0".getBytes());
    conn.write(baos.toByteArray());

    byte[] buffer = new byte[1024 * 64];
    int read = conn.read(buffer, 0, buffer.length);
    String response = new String(buffer);
}
In the latter approach, the bridge address is passed to the init() method. In the first approach it is the remote address that is passed to the init() method while the bridge address is passed as option. Which one of these approaches is the right one?
Furthermore, I would appreciate some comments on OPTION_URL, OPTION_FRONT, and OPTION_KEY. Where does each of these pieces of information come from? If I do not want to use these defaults, how do I go about setting up a "Meek endpoint", for instance, so as not to use the default Tor one? Does anything particular need to be done on the CDN? How would I use Amazon or Google instead of aspnetcdn?
As you can see, I am fairly confused.
Thanks for your interest in this library. Unfortunately, we haven't made a formal release yet, and it is still quite early in its development.
However, we think we can still help.
First, it should be made clear that the major cloud providers (Google, Amazon, Azure) that support domain fronting are now very much against it. Thus, you may run into limitations on any of those platforms, especially with a fronted domain that you do not control.
You can read about Tor and Signal's own struggle with this here: https://signal.org/blog/looking-back-on-the-front/ and https://blog.torproject.org/domain-fronting-critical-open-web
Now, that said, if you still want to proceed with domain fronting, and all you are using is HTTP/S, then you can do so without using Meek at all. You simply need to set the "Host" header on your request to the actual domain you want to access, while the request itself goes to the fronted domain. It is actually quite simple.
You can read more here: https://www.bamsoftware.com/papers/fronting/#sec:introduction
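A minimal sketch of that idea with OkHttp3; the host names below are the illustrative ones from your snippet, and whether the CDN still allows this kind of fronting is a separate question:

import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class FrontedRequest {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();

        // The URL points at the fronted (CDN) domain, while the Host header
        // carries the domain you actually want to reach behind it.
        Request request = new Request.Builder()
                .url("https://ajax.aspnetcdn.com/")
                .header("Host", "meek.azureedge.net")
                .build();

        Response response = client.newCall(request).execute();
        System.out.println(response.code());
        System.out.println(response.body().string());
    }
}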
I have a Java web application which needs to read a file from a network drive. It works perfectly when I run it on a localhost test server, as I am logged in with my Windows credentials. It does not, however, work when deployed on a company server.
I have been trying to implement a way to send user credentials along when accessing the file, and my current attempt uses the Java CIFS Client Library (jCIFS).
I am basing my attempt on the code in this answer, although my code needs to read from a file instead of writing to one. I am getting a NullPointerException I cannot explain.
Code:
public static void main(String[] args) {
    String filePath = "[myPath]";
    String USER = "domain;username:password";
    try {
        NtlmPasswordAuthentication auth = new NtlmPasswordAuthentication(USER);
        SmbFile sFile = new SmbFile(filePath, auth);
        if (sFile.exists()) {
            InputStream stream = new SmbFileInputStream(sFile); // throws exception
        }
    } catch (SmbException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
Error:
Exception in thread "main" java.lang.NullPointerException
at jcifs.smb.ServerMessageBlock.writeString(ServerMessageBlock.java:213)
at jcifs.smb.ServerMessageBlock.writeString(ServerMessageBlock.java:202)
at jcifs.smb.SmbComNTCreateAndX.writeBytesWireFormat(SmbComNTCreateAndX.java:170)
at jcifs.smb.AndXServerMessageBlock.writeAndXWireFormat(AndXServerMessageBlock.java:101)
at jcifs.smb.AndXServerMessageBlock.encode(AndXServerMessageBlock.java:65)
at jcifs.smb.SmbTransport.doSend(SmbTransport.java:439)
at jcifs.util.transport.Transport.sendrecv(Transport.java:67)
at jcifs.smb.SmbTransport.send(SmbTransport.java:655)
at jcifs.smb.SmbSession.send(SmbSession.java:238)
at jcifs.smb.SmbTree.send(SmbTree.java:119)
at jcifs.smb.SmbFile.send(SmbFile.java:775)
at jcifs.smb.SmbFile.open0(SmbFile.java:989)
at jcifs.smb.SmbFile.open(SmbFile.java:1006)
at jcifs.smb.SmbFileInputStream.<init>(SmbFileInputStream.java:73)
at jcifs.smb.SmbFileInputStream.<init>(SmbFileInputStream.java:65)
at Test.main(Test.java:45)
The user credentials are accepted: I've tried both valid and invalid credentials, and the invalid ones give user identification errors.
The exception is thrown when creating the input stream, which would normally make me think that the sFile parameter is null or has a null field, but I do not know which field that might be. Debugging shows that isExists = true, and the URL is also valid. Here is a screenshot of my sFile object from the debugger:
What am I missing here? Why do I get a NullPointerException?
After traversing the source code, I found that the unc variable was the one causing the NullPointerException. Long story short, my struggle was caused by not following the standard SMB URL pattern, and by the jCIFS library failing to tell me about it. The rules can be found here (right after the initial import statements). Here is a selection:
SMB URL Examples
smb://users-nyc;miallen:mypass@angus/tmp/
This URL references a share called tmp on the server angus as user miallen, whose password is mypass.
smb://Administrator:P%40ss@msmith1/c/WINDOWS/Desktop/foo.txt
A relatively sophisticated example that references a file on msmith1's desktop as user Administrator. Notice the '@' is URL encoded with the '%40' hexcode escape.
smb://angus/
This references only a server. The behavior of some methods is different in this context (e.g. you cannot delete a server), however as you might expect the list method will list the available shares on this server.
smb://myworkgroup/
This is syntactically identical to the above example. However, if myworkgroup happens to be a workgroup (which is indeed suggested by the name), the list method will return a list of servers that have registered themselves as members of myworkgroup.
smb://
Just as smb://server/ lists shares and smb://workgroup/ lists servers, the smb:// URL lists all available workgroups on a netbios LAN. Again, in this context many methods are not valid and return default values (e.g. isHidden will always return false).
smb://angus.foo.net/d/jcifs/pipes.doc
The server name may also be a DNS name as it is in this example. See Setting Name Resolution Properties for details.
smb://192.168.1.15/ADMIN$/
The server name may also be an IP address. See Setting Name Resolution Properties for details.
smb://domain;username:password@server/share/path/to/file.txt
A prototypical example that uses all of the fields.
smb://myworkgroup/angus/ <-- ILLEGAL
Despite the hierarchical relationship between workgroups, servers, and filesystems, this example is not valid.
smb://server/share/path/to/dir <-- ILLEGAL
URLs that represent workgroups, servers, shares, or directories require a trailing slash '/'.
smb://MYGROUP/?SERVER=192.168.10.15
SMB URLs support some query string parameters. In this example the SERVER parameter is used to override the server name service lookup to contact the server 192.168.10.15 (presumably known to be a master browser) for the server list in workgroup MYGROUP.
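Given those rules, a hedged sketch of what the corrected read could look like with the classic jcifs 1.x API used in the question; the server, share, path, and credentials are placeholders:

// Requires jcifs 1.x on the classpath (jcifs.smb.*).
public static void readFromShare() throws IOException {
    // The path must be a full smb:// URL; files take no trailing slash,
    // while shares and directories do (see the rules above).
    NtlmPasswordAuthentication auth =
            new NtlmPasswordAuthentication("MYDOMAIN", "myuser", "mypass");
    SmbFile sFile = new SmbFile("smb://fileserver/share/path/to/file.txt", auth);
    if (sFile.exists()) {
        try (InputStream in = new SmbFileInputStream(sFile)) {
            // read from the stream as usual
        }
    }
}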
I'm creating an app for library management with Java and MySQL (using JDBC to connect to the DB), and I have a problem. I checked a lot of topics, books, and websites, but I didn't find a good answer for my case. Is this a good way to deal with connections? I think that one connection for the entire app is a good option here. My idea is that every function in every class that needs a Connection object will take a connection parameter. In the main class I'll create a manager object, 'Man' for example, pass Man.getMyConn() as that parameter to every constructor etc., and call Man.close() when the main frame is closed. Is it a bad idea? Maybe I should use the singleton pattern or a connection pool?
Sorry for my English, I'm still learning.
public class manager {

    private Connection myConn;

    public manager() throws Exception {
        Properties props = new Properties();
        props.load(new FileInputStream("app.properties"));

        String user = props.getProperty("user");
        String password = props.getProperty("password");
        String dburl = props.getProperty("dburl");

        myConn = DriverManager.getConnection(dburl, user, password);
        System.out.println("DB connection successful to: " + dburl);
    }

    public Connection getMyConn() {
        return myConn;
    }

    // close class etc.
}
Usually not. The rest of the answer depends on the type of application. If you're making a web application, then you should definitely go with a connection pool. If you're making e.g. a desktop application (where only one user can access it at a time), then you can open and close a connection upon each request.
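For the web-application case, a minimal sketch of a pool-backed setup, assuming HikariCP (com.zaxxer.hikari) is on the classpath; the dburl, user, and password values reuse those loaded from your app.properties:

HikariConfig poolConfig = new HikariConfig();
poolConfig.setJdbcUrl(dburl);
poolConfig.setUsername(user);
poolConfig.setPassword(password);
poolConfig.setMaximumPoolSize(10);
HikariDataSource dataSource = new HikariDataSource(poolConfig);

// Each request borrows a connection and returns it to the pool on close().
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
    ps.executeQuery();
}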
I have working applications that do it your way. As @Branislav says, it's not adequate if you want to do multiple concurrent queries. There's also a danger that the connection to the database might be lost, and you would need to restart your application to get a new one, unless you write code to catch that and recreate the connection.
Using a singleton would be overcomplicated. Having a getConnection() method (as you have done) is very important, as it means you can easily change your code to use a pool later if you find you need to.
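As an illustration of that last point, here is a minimal sketch (plain JDBC, no pool) of a getMyConn() that recreates a dropped connection; it assumes dburl, user, and password are kept as fields of the manager class rather than constructor locals:

public Connection getMyConn() throws SQLException {
    // Connection.isValid(seconds) pings the database; if the old connection
    // has been dropped, open a fresh one instead of handing out a dead object.
    if (myConn == null || !myConn.isValid(2)) {
        myConn = DriverManager.getConnection(dburl, user, password);
    }
    return myConn;
}

Callers keep calling getMyConn(), so switching to a pool later only changes the inside of this method.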
I am trying to migrate an application from WebLogic 12.2.1 to Tomcat 8.5.4. What under WebLogic was a Foreign JNDI Providers entry for an LDAP connection has been migrated to a new Resource under Tomcat.
Following this advice on Stack Overflow, a custom LdapContextFactory has been packaged as a new jar file under the Tomcat lib folder.
In the Tomcat server.xml file the following GlobalNamingResources/Resource has been configured:
<Resource name="ldapConnection"
auth="Container"
type="javax.naming.ldap.LdapContext"
factory="com.sample.custom.LdapContextFactory"
singleton="false"
java.naming.referral="follow"
java.naming.factory.initial="com.sun.jndi.ldap.LdapCtxFactory"
java.naming.provider.url="ldap://some.host:389"
java.naming.security.authentication="simple"
java.naming.security.principal="CN=some,OU=some,OU=some,DC=some,DC=a,DC=b"
java.naming.security.credentials="password"
com.sun.jndi.ldap.connect.pool="true"
com.sun.jndi.ldap.connect.pool.maxsize="10"
com.sun.jndi.ldap.connect.pool.prefsize="4"
com.sun.jndi.ldap.connect.pool.timeout="30000" />
The connection above works fine when browsing the LDAP directory via an LDAP browser like Apache Directory Studio / LDAP Browser embedded in Eclipse.
The custom com.sample.custom.LdapContextFactory is quite simple:
public class LdapContextFactory implements ObjectFactory {

    public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable<?, ?> environment)
            throws Exception {
        Hashtable<Object, Object> env = new Hashtable<>();
        Reference reference = (Reference) obj;
        Enumeration<RefAddr> references = reference.getAll();
        while (references.hasMoreElements()) {
            RefAddr address = references.nextElement();
            String type = address.getType();
            String content = (String) address.getContent();
            env.put(type, content);
        }
        return new InitialLdapContext(env, null);
    }
}
However, at start-up Tomcat is throwing the following exception:
07-Sep-2016 15:04:01.064 SEVERE [main] org.apache.catalina.mbeans.GlobalResourcesLifecycleListener.createMBeans Exception processing Global JNDI Resources
javax.naming.NameNotFoundException: [LDAP: error code 32 - 0000208D: NameErr: DSID-031001E5, problem 2001 (NO_OBJECT), data 0, best match of:
''
]; remaining name ''
at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3160)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:3081)
at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2888)
at com.sun.jndi.ldap.LdapCtx.c_listBindings(LdapCtx.java:1189)
at com.sun.jndi.toolkit.ctx.ComponentContext.p_listBindings(ComponentContext.java:592)
at com.sun.jndi.toolkit.ctx.PartialCompositeContext.listBindings(PartialCompositeContext.java:330)
at com.sun.jndi.toolkit.ctx.PartialCompositeContext.listBindings(PartialCompositeContext.java:317)
at javax.naming.InitialContext.listBindings(InitialContext.java:472)
at org.apache.catalina.mbeans.GlobalResourcesLifecycleListener.createMBeans(GlobalResourcesLifecycleListener.java:136)
at org.apache.catalina.mbeans.GlobalResourcesLifecycleListener.createMBeans(GlobalResourcesLifecycleListener.java:145)
at org.apache.catalina.mbeans.GlobalResourcesLifecycleListener.createMBeans(GlobalResourcesLifecycleListener.java:110)
at org.apache.catalina.mbeans.GlobalResourcesLifecycleListener.lifecycleEvent(GlobalResourcesLifecycleListener.java:82)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:94)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:401)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:345)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:784)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:152)
at org.apache.catalina.startup.Catalina.start(Catalina.java:655)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:355)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:495)
Similar questions and investigations suggest an invalid LDAP DN, but:
The same LDAP configuration works fine via an LDAP Client
No search is actually performed; Tomcat throws this exception at start-up without issuing any query
The error reports an empty string '' as the remaining name, so apparently it is not really a case of something not being found
Question(s): Is this the correct way to migrate a Foreign JNDI Providers entry from WebLogic to Tomcat? How can an invalid LDAP DN with an empty remaining name be fixed? Could it be a missing baseDN that needs to be configured somewhere?
Update
The same exact error happens when changing the LdapContextFactory to the following, as suggested via comments:
public Object getObjectInstance(Object obj, Name name, Context nameCtx, Hashtable<?, ?> environment)
        throws Exception {
    Hashtable<Object, Object> env = new Hashtable<>();
    Reference reference = (Reference) obj;
    Enumeration<RefAddr> references = reference.getAll();
    String providerUrl = "no valid URL";
    while (references.hasMoreElements()) {
        RefAddr address = references.nextElement();
        String type = address.getType();
        String content = (String) address.getContent();
        switch (type) {
            case Context.PROVIDER_URL:
                env.put(Context.PROVIDER_URL, content);
                providerUrl = content;
                break;
            default:
                env.put(type, content);
                break;
        }
    }
    InitialLdapContext context = null;
    Object result = null;
    try {
        context = new InitialLdapContext(env, null);
        LOGGER.info("looking up for " + providerUrl);
        result = context.lookup(providerUrl);
    } finally {
        if (context != null) {
            context.close();
        }
    }
    LOGGER.info("Created new LDAP Context");
    return result;
}
Change is confirmed via logging, to make sure it was deployed properly.
The involved listener is defined by default at the top of the server.xml file as
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
It cannot be disabled, as per the official documentation:
The Global Resources Lifecycle Listener initializes the Global JNDI resources defined in server.xml as part of the Global Resources element. Without this listener, none of the Global Resources will be available.
The same also happens on Tomcat 8.5.5 and 7.0.69: simply add the new global resource as above and the additional jar providing the factory, and the exception pointing at an empty remaining name is thrown.
The stack trace went away after appending the LDAP schema DN to the java.naming.provider.url property, using the first factory implementation provided in the question.
Below is a screenshot of the LDAP client used in this context, the Apache Directory Studio / LDAP Browser embedded in Eclipse, from which it was possible to browse the LDAP in question simply using the initial values from the question.
By appending the schema DN of the Root element to the connection URL, the exception went away and the LDAP resource is now shared via JNDI in Tomcat 8.
Further details as outcome of the troubleshooting:
In Tomcat 8, global resources are handled via a global resource listener, the GlobalResourcesLifecycleListener, defined by default in the server.xml file. This listener invokes context.listBindings("") on bean creation, hence effectively browsing the LDAP directory.
This initial browsing is most probably the difference between Tomcat and WebLogic, where LDAP is looked up via JNDI only when required, that is, via a direct query rather than a generic one at start-up. As such, in Tomcat the LDAP URL needs one further detail: a slightly different configuration that points directly to a valid base DN as part of the URL.
From official WebLogic documentation:
On start up, WebLogic Server attempts to connect to the JNDI source. If the connection is successful, WebLogic Server sets up the requested objects and links in the local JNDI tree, making them available to WebLogic Server clients.
Hence, a plain connection is rather simpler than a listBindings, which (per the Javadoc):
Enumerates the names bound in the named context, along with the objects bound to them. The contents of any subcontexts are not included.
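To make the difference concrete, here is a hedged sketch of what the listener effectively does against the global resource; the base DN appended to the provider URL is illustrative and your actual root DN will differ:

// Roughly what GlobalResourcesLifecycleListener triggers at start-up
// (imports from javax.naming and javax.naming.ldap assumed).
public static void browse() throws NamingException {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    // With only "ldap://some.host:389" the listing starts at the LDAP root and fails
    // with NO_OBJECT; appending a base DN gives listBindings("") a valid starting point.
    env.put(Context.PROVIDER_URL, "ldap://some.host:389/DC=some,DC=a,DC=b");
    env.put(Context.SECURITY_AUTHENTICATION, "simple");
    env.put(Context.SECURITY_PRINCIPAL, "CN=some,OU=some,OU=some,DC=some,DC=a,DC=b");
    env.put(Context.SECURITY_CREDENTIALS, "password");

    LdapContext ctx = new InitialLdapContext(env, null);
    NamingEnumeration<Binding> bindings = ctx.listBindings(""); // the call that failed before
    while (bindings.hasMore()) {
        System.out.println(bindings.next().getName());
    }
    ctx.close();
}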
I have this code, where hazelcastAddrArray is an array of URL strings.
ClientConfig config = new ClientConfig();
config.setProperty("hazelcast.logging.type", LOGGING_TYPE);
for (String hazelcastAddr : hazelcastAddrArray) {
    LOG.trace("Hazelcast attempting to add network config for ip address <<{}>>.", hazelcastAddr);
    config.getNetworkConfig().addAddress(hazelcastAddr);
}
return config;

hazelcastInstance = HazelcastClient.newHazelcastClient(config);
If hazelcastAddrArray contains four different URL strings (for four different instances) and they are all currently up and running, then things go as expected. However, if one (or two, or three) of them is down, it throws an exception and fails to return a HazelcastClient instance. Since it is able to connect to at least one, it seems to me that it should still connect, and the running instance(s) should be able to give it the current state of the system so it works correctly.
Please advise how to get this behaviour, or let me know if I am missing something and this is in fact the correct behaviour.
Thanks in advance.
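For reference, a hedged sketch of the Hazelcast 3.x client network settings that control how the client retries the configured addresses before giving up; whether they explain the failure you see is an open question, and the addresses are placeholders:

ClientConfig config = new ClientConfig();
ClientNetworkConfig net = config.getNetworkConfig();
net.addAddress("10.0.0.1:5701", "10.0.0.2:5701"); // placeholder addresses
net.setConnectionAttemptLimit(5);    // how many times the client loops over the address list
net.setConnectionAttemptPeriod(3000); // pause in milliseconds between attempts
HazelcastInstance client = HazelcastClient.newHazelcastClient(config);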