Azure Storage - Proxy Setting in Java Code

In a Java service, I'm trying to upload a file to an Azure Storage directory, so I've written code like this:
import com.azure.core.util.*;
import com.azure.storage.file.share.*;
import com.azure.storage.file.share.models.*;
// Build the connection string
String connectStr = "DefaultEndpointsProtocol=https;AccountName=" + accountName
        + ";AccountKey=" + accountKey + ";EndpointSuffix=" + endpoint;
// ShareDirectoryClient
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
        .connectionString(connectStr)
        .shareName(shareName)
        .resourcePath(directoryName)
        .buildDirectoryClient();
// Create an empty file
dirClient.createFile(fileName, body.length());
The HTTPS request must go through a proxy server, and I get this error:
"Could not run 'sendFileInDirectoryProxyTest'
reactor.core.Exceptions$ReactiveException: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection timed out: no further information: "
I can't set or use a global proxy setting.
To set a proxy in the Java code, I've tried several things, like using the Configuration class:
Configuration configuration = new Configuration();
configuration.put("java.net.useSystemProxies", "true");
configuration.put("https.proxyHost", "xxxxxxxxx");
configuration.put("https.proxyPort", "xxxx");
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
        .connectionString(connectStr)
        .shareName(shareName)
        .resourcePath(directoryName)
        .configuration(configuration)
        .buildDirectoryClient();
But it did not solve the issue.
I'm sure it is pretty simple, any help would be appreciated.
Thanks. Charles de Saint Andre.

You need to configure ProxyOptions and set them on the HTTP client builder. All our Storage client builders have a .httpClient() method that accepts a client, and you can build a client with all defaults plus the proxy options using a NettyAsyncHttpClientBuilder, which has a .proxy() method. Please give that a try and let me know if you have any more issues.
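A minimal sketch of that approach, assuming azure-core-http-netty is on the classpath; the proxy host and port below are placeholders, and connectStr, shareName, and directoryName are the same variables as in the question:
import java.net.InetSocketAddress;
import com.azure.core.http.HttpClient;
import com.azure.core.http.ProxyOptions;
import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
// Build an HttpClient that routes all requests through the proxy
ProxyOptions proxyOptions = new ProxyOptions(ProxyOptions.Type.HTTP,
        new InetSocketAddress("proxy.mycompany.example", 8080)); // placeholder host/port
HttpClient httpClient = new NettyAsyncHttpClientBuilder()
        .proxy(proxyOptions)
        .build();
// Pass that client to the Storage builder instead of a Configuration object
ShareDirectoryClient dirClient = new ShareFileClientBuilder()
        .connectionString(connectStr)
        .shareName(shareName)
        .resourcePath(directoryName)
        .httpClient(httpClient)
        .buildDirectoryClient();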
Sample: azure-sdk-for-java/sdk/storage/azure-storage-blob at main · Azure/azure-sdk-for-java (github.com)

Related

redis.clients.jedis.exceptions.JedisConnectionException: java.net.UnknownHostException

I'm using Jedis to connect to my Redis instance/cluster in AWS, but I keep getting this error. I searched extensively on SO; the closest match I found is "String hostname from properties file: Java".
I tried both ways, neither worked for me.
So please help.
Here's my Java code:
public static void main(String[] args) {
    AWSCredentials credentials = null;
    try {
        credentials = new ProfileCredentialsProvider("default").getCredentials();
    } catch (Exception e) {
        throw new AmazonClientException("Cannot load the credentials from the credential profiles file. "
                + "Please make sure that your credentials file is at the correct "
                + "location (/Users/USERNAME/.aws/credentials), and is in valid format.", e);
    }
    AmazonElastiCacheClient client = new AmazonElastiCacheClient(credentials);
    client.setRegion(Region.getRegion(Regions.AP_NORTHEAST_2));
    DescribeCacheClustersRequest dccRequest = new DescribeCacheClustersRequest();
    dccRequest.setShowCacheNodeInfo(true);
    DescribeCacheClustersResult clusterResult = client.describeCacheClusters(dccRequest);
    List<CacheCluster> cacheClusters = clusterResult.getCacheClusters();
    for (CacheCluster cacheCluster : cacheClusters) {
        for (CacheNode cacheNode : cacheCluster.getCacheNodes()) {
            String addr = cacheNode.getEndpoint().getAddress();
            int port = cacheNode.getEndpoint().getPort();
            String url = addr + ":" + port;
            System.out.println("formed url is: " + url);
            Jedis jedis = new Jedis(url);
            System.out.println("Connection to server successfully");
            // check whether server is running or not
            System.out.println("Server is running: " + jedis.ping());
        }
    }
}
The last line in the above code keeps throwing this error; here's the stack trace:
Exception in thread "main" redis.clients.jedis.exceptions.JedisConnectionException: java.net.UnknownHostException: REDISNAME.nquffl.0001.apn2.cache.amazonaws.com:6379
at redis.clients.jedis.Connection.connect(Connection.java:207)
at redis.clients.jedis.BinaryClient.connect(BinaryClient.java:93)
at redis.clients.jedis.Connection.sendCommand(Connection.java:126)
at redis.clients.jedis.Connection.sendCommand(Connection.java:121)
at redis.clients.jedis.BinaryClient.ping(BinaryClient.java:106)
at redis.clients.jedis.BinaryJedis.ping(BinaryJedis.java:195)
at sporadic.AmazonElastiCacheClientExample.main(AmazonElastiCacheClientExample.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.net.UnknownHostException: REDISNAME.nquffl.0001.apn2.cache.amazonaws.com:6379
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at redis.clients.jedis.Connection.connect(Connection.java:184)
... 11 more
What am I doing wrong?
Please point it out.
Your setting should be this way:
Jedis jedis = new Jedis("REDISNAME.nquffl.0001.apn2.cache.amazonaws.com",6379);
NOT this way :
Jedis jedis = new Jedis("REDISNAME.nquffl.0001.apn2.cache.amazonaws.com:6379");
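Applied to the loop in the question, a minimal sketch of the corrected body (reusing the same cacheNode variable) would be:
String addr = cacheNode.getEndpoint().getAddress();
int port = cacheNode.getEndpoint().getPort();
// host and port are passed as separate arguments, not concatenated into one string
Jedis jedis = new Jedis(addr, port);
System.out.println("Server is running: " + jedis.ping());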
According to the AWS documentation (http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Access.Outside.html):
Amazon ElastiCache is an AWS service that provides cloud-based
in-memory key-value store. On the back end it uses either the
Memcached or Redis engine. The service is designed to be accessed
exclusively from within AWS. However, if the ElastiCache cluster is
hosted inside a VPC, you can use a Network Address Translation (NAT)
instance to provide outside access.
So you have the following two options:
Either host your app inside AWS, with proper security group settings that allow access to your ElastiCache cluster from the EC2 instance where your app is deployed.
Or, if you want to run your app outside of AWS, set up a Network Address Translation (NAT) instance to provide outside access.
IMO, it's easiest to deploy the code on an AWS EC2 instance and test it there if you are not very familiar with networking and NAT.
I used to have local Memcached and Redis instances that I connected to for local development; for other environments like QA, staging, and prod I deployed to an AWS EC2 instance.
Let me know if you have any issues.
In my case port 6379 was not accepting connections, so I configured Redis with a different port, and it worked.

Couchbase properties file in classpath

I'm working on a Java backend using Tomcat and trying to connect to a hosted instance of Couchbase. I have set up the path to my config directory in ../tomcat/Catalina/localhost/context.xml.default:
<Parameter name="CONFIG_DIRECTORY" value="/opt/platform/conf" override="false"/>
Also I have set a CLASSPATH param in ../tomcat/bin/setenv.sh
CLASSPATH=/opt/platform/conf/
Below is the snippet of code I am working with :
String initialNodes = RuntimeData.INSTANCE.getConfigurationValue("MYNODES");
String bucketId = RuntimeData.INSTANCE.getConfigurationValue("MYBUCKET");
System.out.println("Creating cluster for " + initialNodes);
try {
    System.out.println("HOST INET ADDRESS : " + InetAddress.getByName(initialNodes));
} catch (UnknownHostException e) {
    System.out.println("UNKNOWN HOST EXCEPTION : " + initialNodes);
}
cluster = CouchbaseCluster.create(initialNodes.split(","));
System.out.println("Creating bucket for " + bucketId);
bucket = cluster.openBucket(bucketId, 10, TimeUnit.SECONDS);
System.out.println("Creating graph.");
graph = new CBGraph();
In this code I have some debug logging, and I can confirm that I do pull in the correct values for initialNodes and the bucket. My issue currently comes in on the last line, when I try to create a new CBGraph().
I get this error:
Caused by: com.couchbase.client.core.config.ConfigurationException: No valid node found to bootstrap from. Please check your network configuration.
My guess is that somehow the properties file that contains all of the connection info for my Couchbase server is either not getting loaded into the classpath, or it is getting loaded later than I need it to be.
The only verification I have that adding the CLASSPATH setting to setenv.sh worked is that once Tomcat is running I do see the path in the classpath when I run ps auxw | grep tomcat.
Any help with this issue is welcome. I have looked at some other posts, but I'm not sure exactly what issue I'm trying to solve here other than the error I get.
ADDITION:
Looking at the INFO logs at runtime I can verify that the directory containing my Couchbase .properties file IS in the classpath. BUT it looks like Couchbase is trying to initialize using default values (copied below):
INFO: Using the following configuration ...
Oct 24, 2016 3:08:09 PM com.couchbase.graph.conn.ConnectionFactory createCon
INFO: hosts = ubuntu-local
The addresses you are specifying might not be valid hostnames (or IP addresses); you should also see one of these INFO log messages:
https://github.com/couchbase/couchbase-jvm-core/blob/4127377b8bd057ed291176d568a1e868796e4e5a/src/main/java/com/couchbase/client/core/message/cluster/SeedNodesRequest.java#L85-L102
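As a quick check, here is a minimal sketch (reusing the question's initialNodes value) that verifies each seed node actually resolves before bootstrapping:
import java.net.InetAddress;
import java.net.UnknownHostException;
// Verify each configured seed node resolves to an address
for (String node : initialNodes.split(",")) {
    try {
        InetAddress resolved = InetAddress.getByName(node.trim());
        System.out.println(node + " resolves to " + resolved.getHostAddress());
    } catch (UnknownHostException e) {
        System.out.println("Seed node does not resolve: " + node);
    }
}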

Android xmpp error host-unknown

I'm trying to make a connection to an XMPP server, and it returns this error:
W/AbstractXMPPConnection: Connection closed with error
org.jivesoftware.smack.XMPPException$StreamErrorException: host-unknown You can read more about the meaning of this stream error
at http://xmpp.org/rfcs/rfc6120.html#streams-error-conditions
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1003)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$300(XMPPTCPConnection.java:944)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:959)
at java.lang.Thread.run(Thread.java:818)
I tried to use this example on GitHub and put in this data:
private static final String DOMAIN = "10.20.0.125";
private static final String HOST = "10.20.0.125";
private static final int PORT = 5222;
private String userName = "admin2@localhost";
private String passWord = "asdfasdf";
The server is OK; we conducted another test from a PC to verify communication, but on Android this error persists.
I see mainly 2 errors:
in the demo configuration you have these lines of code:
XMPPTCPConnectionConfiguration.Builder configBuilder = XMPPTCPConnectionConfiguration.builder();
configBuilder.setUsernameAndPassword(userName, passWord);
configBuilder.setSecurityMode(ConnectionConfiguration.SecurityMode.disabled);
configBuilder.setResource("Android");
configBuilder.setServiceName(DOMAIN);
configBuilder.setHost(HOST);
configBuilder.setPort(PORT);
First problem (the main one):
The DOMAIN variable SHOULD (in fact MUST) be the server name you can read in the server configuration, not just the IP; otherwise some functionality will break outside of localhost.
Second problem:
I suggest splitting login from configuration (so first configure the connection and THEN log in). What I don't get is the username: localhost will not be resolved outside the server machine, so again it has to be replaced with the DOMAIN name (even if, in theory, the connection will give the user his domain, so it doesn't need to be this explicit).
so:
connection.connect();
login();
will be replaced with
connection.connect();
login(userName ,passWord,"Android" );
and you'll need to remove these 2 lines:
configBuilder.setResource("Android");
configBuilder.setUsernameAndPassword(userName, passWord);
About the DOMAIN name: you'll find it in the server configuration; in Openfire it's the "Server Name" you can read in the web interface on the Server Information page.
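Putting the pieces together, a minimal sketch of the corrected setup, assuming Smack 4.1.x (as used elsewhere in this thread) and a hypothetical Openfire Server Name of "myserver.example":
import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.ConnectionConfiguration;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
XMPPTCPConnectionConfiguration.Builder configBuilder = XMPPTCPConnectionConfiguration.builder();
configBuilder.setServiceName("myserver.example"); // the Openfire Server Name, not the raw IP
configBuilder.setHost(HOST);
configBuilder.setPort(PORT);
configBuilder.setSecurityMode(ConnectionConfiguration.SecurityMode.disabled);
AbstractXMPPConnection connection = new XMPPTCPConnection(configBuilder.build());
connection.connect();
// login is split from configuration; the resource is passed here instead of setResource(...)
connection.login(userName, passWord, "Android");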
I managed to find the solution.
I was using .jar files instead of the Gradle compile dependencies:
compile 'org.igniterealtime.smack:smack-android:4.1.4'
compile 'org.igniterealtime.smack:smack-tcp:4.1.4'
compile 'org.igniterealtime.smack:smack-im:4.1.4'
compile 'org.igniterealtime.smack:smack-extensions:4.1.4'
Thus the error has been resolved. Thank you for your help.

Getting the Server Name from the session

In a managed bean that resides in a database on the server "Development", I have this code:
s = ExtLibUtil.getCurrentSession();
theMap.put("Server Name", s.getServerName());
When I look at theMap after this has run, I see "Server Name" and the value is blank. After this I get a database RepID and then try to open the database by RepID with:
appDB = s.getDbDirectory(null).openDatabaseByReplicaID(repID);
if (appDB.isOpen()) {
    theMap.put(thisKey, repID);
} else {
    theMap.put("DB " + thisKey, "Is Not Open");
}
If I have a replica copy of the database locally, it opens it; if I remove the local replica, the open fails. If I change the line to:
appDB = s.getDbDirectory("Development").openDatabaseByReplicaID(repID);
the proper appDB opens. So it looks like the session thinks it is running locally, because it returns null for the server name. This is really strange; am I missing something? For the moment I have just hard-coded the server name in the getDbDirectory call, but that won't work in the real world.
Is this XPiNC? That would consider the database to be running locally unless you've set the application property "Run server-based XPages on server".
You can also read it from the server's notes.ini environment variables:
String serverName = s.getEnvironmentString("ServerName", true);
or
String serverName = s.getEnvironmentString("ServerKeyFileName_Owner", true);
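For example, a minimal sketch (assuming the same session variable s and map theMap as in the question) that falls back to notes.ini when getServerName() comes back blank:
String serverName = s.getServerName();
if (serverName == null || serverName.isEmpty()) {
    // session thinks it's running locally; fall back to the server's notes.ini entry
    serverName = s.getEnvironmentString("ServerName", true);
}
theMap.put("Server Name", serverName);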

Java Client For Secure Hbase

Hi, I am trying to write a Java client for secure HBase.
I want to do the kinit from code as well; for that I'm using the UserGroupInformation class.
Can anyone point out where I am going wrong here?
This is the main method that I'm trying to connect to HBase from.
I have to add the configuration to the Configuration object rather than using the XML files, because the client can be located anywhere.
Please see the code below:
public static void main(String[] args) {
    try {
        System.setProperty(CommonConstants.KRB_REALM, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.realm"));
        System.setProperty(CommonConstants.KRB_KDC, ConfigUtil.getProperty(CommonConstants.HADOOP_CONF, "krb.kdc"));
        System.setProperty(CommonConstants.KRB_DEBUG, "true");
        final Configuration config = HBaseConfiguration.create();
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, AUTH_KRB);
        config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, AUTHORIZATION);
        config.set(CommonConfigurationKeysPublic.FS_AUTOMATIC_CLOSE_KEY, AUTO_CLOSE);
        config.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, defaultFS);
        config.set("hbase.zookeeper.quorum", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.host"));
        config.set("hbase.zookeeper.property.clientPort", ConfigUtil.getProperty(CommonConstants.HBASE_CONF, "hbase.port"));
        config.set("hbase.client.retries.number", Integer.toString(0));
        config.set("zookeeper.session.timeout", Integer.toString(6000));
        config.set("zookeeper.recovery.retry", Integer.toString(0));
        config.set("hbase.master", "gauravt-namenode.pbi.global.pvt:60000");
        config.set("zookeeper.znode.parent", "/hbase-secure");
        config.set("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
        config.set("hbase.security.authentication", AUTH_KRB);
        config.set("hbase.security.authorization", AUTHORIZATION);
        config.set("hbase.master.kerberos.principal", "hbase/gauravt-namenode.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.master.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        config.set("hbase.regionserver.kerberos.principal", "hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt");
        config.set("hbase.regionserver.keytab.file", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        UserGroupInformation.setConfiguration(config);
        UserGroupInformation userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/gauravt-datanode2.pbi.global.pvt@pbi.global.pvt", "D:/var/lib/bda/secure/keytabs/hbase.service.keytab");
        UserGroupInformation.setLoginUser(userGroupInformation);
        User user = User.create(userGroupInformation);
        user.runAs(new PrivilegedExceptionAction<Object>() {
            @Override
            public Object run() throws Exception {
                HBaseAdmin admins = new HBaseAdmin(config);
                if (admins.isTableAvailable("ambarismoketest")) {
                    System.out.println("Table is available");
                }
                HConnection connection = HConnectionManager.createConnection(config);
                HTableInterface table = connection.getTable("ambarismoketest");
                admins.close();
                System.out.println(table.get(new Get(null)));
                return table.get(new Get(null));
            }
        });
        System.out.println(UserGroupInformation.getLoginUser().getUserName());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I'm getting the following exception:
Caused by: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.readStatus(HBaseSaslRpcClient.java:110)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:146)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:762)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:354)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:883)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:880)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:880)
... 33 more
Any pointers would be helpful.
The above works nicely, but I've seen a lot of folks struggle with setting all of the right properties in the Configuration object. There's no definitive list that I've found of exactly what you need and don't need, and it is painfully dependent on your cluster configuration.
The surefire way is to have a copy of your HBase configuration files on your classpath, since your client can be anywhere, as you mentioned. Then you can add the resources to your Configuration object without having to specify every property.
Configuration conf = HBaseConfiguration.create();
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
Here are some sources that back this approach:
IBM,
Scalding (Scala)
Also note that this approach doesn't limit you to using the internal ZooKeeper principal and keytab; i.e., you can create keytabs for applications or Active Directory users and leave the internally generated keytabs for the daemons to authenticate amongst themselves.
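A minimal sketch of that approach; the principal and keytab path below are hypothetical placeholders for an application-specific identity:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;
Configuration conf = HBaseConfiguration.create();
// Pick up the cluster's own settings from the classpath instead of setting each property
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
UserGroupInformation.setConfiguration(conf);
// Log in as an application principal (placeholder names, not the HBase service keytab)
UserGroupInformation.loginUserFromKeytab("myapp@EXAMPLE.COM", "/etc/security/keytabs/myapp.keytab");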
Not sure if you still need help. I think setting the "hadoop.security.authentication" property is missing from your snippet.
I am using the following code snippet to connect to secure HBase (on CDH5). You can give it a try.
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for (TableName table : tables) {
    System.out.println(table.toString());
}
In JDK 1.8, you need to set:
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
config.set("hbase.zookeeper.quorum", zookeeperHosts);
config.set("hbase.zookeeper.property.clientPort", zookeeperPort);
config.set("hadoop.security.authentication", "kerberos");
config.set("hbase.security.authentication", "kerberos");
config.set("hbase.master.kerberos.principal", HBASE_MASTER_PRINCIPAL);
config.set("hbase.regionserver.kerberos.principal", HBASE_RS_PRINCIPAL);
System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
UserGroupInformation.setConfiguration(config);
UserGroupInformation.loginUserFromKeytab(ZOOKEEPER_PRINCIPAL,ZOOKEEPER_KEYTAB);
HBaseAdmin admins = new HBaseAdmin(config);
TableName[] tables = admins.listTableNames();
for (TableName table : tables) {
    System.out.println(table.toString());
}
See also the HBase troubleshooting guide: http://hbase.apache.org/book.html#trouble.client
I think the best resource is https://scalding.io/2015/02/making-your-hbase-client-work-in-a-kerberized-environment/
To make the code work you don't have to change any line from the ones written at the top of this post; you just have to make your client able to access the full HBase configuration. This just means changing your runtime classpath to:
/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.30/lib/hbase/conf:target/scala-2.11/hbase-assembly-1.0.jar
This will make everything run smoothly. It is specific to CDH 5.3, but you can adapt it to your cluster configuration.
P.S. There is no need for this:
conf.addResource("core-site.xml");
conf.addResource("hbase-site.xml");
conf.addResource("hdfs-site.xml");
Because HBaseConfiguration has:
public static Configuration addHbaseResources(Configuration conf) {
    conf.addResource("hbase-default.xml");
    conf.addResource("hbase-site.xml");
    // ... (rest of method elided)
}
