Using redis 'standalone' in java application - java

I am following a Redis tutorial online that shows how to connect to Redis from a Java application.
I understand that there are many Java clients available, and the tutorial I was following uses Jedis. My question is: can a Java client (like Jedis) be used without actually installing the Redis server itself? The tutorial shows a simple call to:
Jedis jedis = new Jedis("localhost");
and then begins set/get operations, but I don't believe they installed Redis. I am new to Redis, but I picture installing the Redis server as the equivalent of installing something like Oracle, and then using a Java API to talk to that Oracle instance.
How is the Jedis API used without an actual Redis instance present? If the Jedis client were initialized without the host parameter, would it then expect to find an actual Redis server/instance running on port 6379?
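For what it's worth, here is a minimal sketch of what that call assumes (the key name is only an illustration): the no-argument Jedis constructor defaults to localhost:6379, so either form still expects a Redis server process to be running and listening there.
import redis.clients.jedis.Jedis;

public class JedisSmokeTest {
    public static void main(String[] args) {
        // new Jedis() with no arguments also targets localhost:6379 by default;
        // both forms fail with a connection error unless a Redis server is running there.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("greeting", "hello");            // simple SET
            System.out.println(jedis.get("greeting")); // GET prints "hello" if the server is up
        }
    }
}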

Related

How to make encrypted connection to Oracle using Java JDBC Library (OJDBC 7)?

My company is implementing encryption on all database connections. I am using WildFly and a Java project, and we also use plain JDBC connections.
Services connect to an Oracle DB to save or fetch data.
My connection string looks like:
jdbc:oracle:thin:@ora01.man.cosng.net:1521/int_prod
Nothing related to encryption is passed, and the Java code has nothing related to encryption either.
Is the Java driver encrypted by default (ojdbc7.jar), or do we need to set the extra properties defined in this article (but that would require changes in 140 projects)? -> https://docs.oracle.com/cd/B19306_01/java.102/b14355/clntsec.htm#EHAFHEIG
Or any other ideas?
WildFly also has the following settings; is there anything straightforward you can suggest here?
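For illustration, a hedged sketch of what passing the client-side encryption properties from that Oracle article might look like over a plain JDBC connection (the user, password, and chosen algorithms are placeholders; verify the property names against your ojdbc7 release before rolling this out to all projects):
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EncryptedOracleConnectionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "myuser");         // placeholder credentials
        props.setProperty("password", "mypassword"); // placeholder credentials
        // Native network encryption settings taken from the linked client-security docs
        props.setProperty("oracle.net.encryption_client", "REQUIRED");
        props.setProperty("oracle.net.encryption_types_client", "( AES256 )");
        props.setProperty("oracle.net.crypto_checksum_client", "REQUIRED");
        props.setProperty("oracle.net.crypto_checksum_types_client", "( SHA1 )");

        String url = "jdbc:oracle:thin:@ora01.man.cosng.net:1521/int_prod";
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected, closed=" + conn.isClosed());
        }
    }
}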

Cannot connect locally to ElastiCache cluster on AWS using Jedis lib

We are trying to access ElastiCache (Redis) on AWS using a Java client that runs locally with the Jedis lib. We were able to access Redis locally using redis-cli by following the steps here.
The problem is that when we try to connect to the AWS Redis using the Jedis lib, the public NAT addresses are translated to the Redis private IPs in order to calculate the slots (initializeSlotsCache). We couldn't find a way to disable this. Is there any workaround?
Here's how we connect using Jedis:
factory = new JedisConnectionFactory(new RedisClusterConfiguration(this.clusterProperties.getNodes()));
factory.setUsePool(true);
factory.setPoolConfig(this.jedisPoolConfig());
factory.afterPropertiesSet();
return factory;
We are using the mapped NAT IPs for each node, but the Jedis lib is caching the private IPs, so we get the following exception:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
Any suggestions would be great! We are running out of options. Thank you in advance.
You cannot connect to AWS-hosted Redis (ElastiCache) from outside the AWS environment.
We faced a similar issue, and the workaround we used was to install a local Redis instance for dev and unit testing.
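As a rough sketch of that workaround (assuming the same Spring Data Redis version and the jedisPoolConfig() helper from the question), a dev/test profile can point a standalone connection factory at the local instance instead of the cluster:
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

// Dev/test only: connect to a local standalone Redis instead of ElastiCache.
public JedisConnectionFactory localRedisConnectionFactory() {
    JedisConnectionFactory factory = new JedisConnectionFactory(); // standalone, not cluster
    factory.setHostName("localhost"); // local dev instance
    factory.setPort(6379);
    factory.setUsePool(true);
    factory.setPoolConfig(this.jedisPoolConfig()); // reuses the pool config shown above
    factory.afterPropertiesSet();
    return factory;
}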

Remote access to Cassandra Azure Windows instance

I have installed Cassandra on a Microsoft Azure instance from http://www.planetcassandra.org/cassandra/ and am trying to access it remotely from a Java client. I have enabled endpoints for port 9042 but could not access it remotely. After googling, I modified listen_address to the local IP of the Azure instance, rpc_address to the public IP, and broadcast_rpc_address to 255.255.255.255 in cassandra.yaml, but I still cannot access the instance from my Java client.
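For reference, this is roughly what the client side of that connection looks like with the DataStax Java driver once the endpoint and the cassandra.yaml addresses line up (driver 3.x API assumed; the public IP below is a placeholder):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CassandraRemoteSmokeTest {
    public static void main(String[] args) {
        String publicIp = "your.azure.public.ip"; // placeholder for the instance's public IP or DNS name
        Cluster cluster = Cluster.builder()
                .addContactPoint(publicIp)
                .withPort(9042) // native transport port exposed through the Azure endpoint
                .build();
        try (Session session = cluster.connect()) {
            System.out.println("Connected to cluster: " + cluster.getMetadata().getClusterName());
        } finally {
            cluster.close();
        }
    }
}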
Please refer to the article Running Cassandra with Linux on Azure and Accessing it from Node.js to set up your Cassandra using PowerShell. It's for a Linux instance, but I think it will be helpful for you. Also see the comment at the bottom of the article; a blog linked there shows more details.
Meanwhile, the simplest way to install Cassandra is to create a DataStax Enterprise instance from the Azure Marketplace; please see https://ms.portal.azure.com/?feature.relex=*%2CHubsExtension#blade/Microsoft_Azure_Marketplace/GalleryFeaturedMenuItemBlade/selectedMenuItemId/home/searchQuery/cassandra. Then you don't need to worry about installing it on Azure yourself; please see the tutorial for getting started with DataStax Enterprise.
Hope it helps.

H2 Database Auto Server mode : Accessing through web console remotely

I am fairly new to the H2 database. As part of a PoC, I am using H2 (version 1.4.187) to mock an MS SQL Server DB. I have one application, say app1, which generates data and saves it into H2. Another application, app2, needs to read from the H2 database and process the data it reads. I am trying to use Auto Server mode so that even if one of the applications is down, the other one is still able to read/write to/from the database.
After reading multiple examples, I found how to build the H2 URL, shown below:
jdbc:h2:~/datafactory;MODE=MSSQLServer;AUTO_SERVER=TRUE;
I enabled TCP and remote access as below:
org.h2.tools.Server.createTcpServer("-tcpAllowOthers","-webAllowOthers").start()
With this, I am able to write to the database. Now I want to read the data using the H2 web console. I am able to do that from my local machine; however, I cannot figure out how to connect to this database remotely from another machine.
My plan is to run these two apps on an Ubuntu machine and monitor the data from the web console on my own machine. Is that not possible with this approach?
How can I solve this?
Or do I need to use server mode and explicitly start the H2 server? Any help would be appreciated.
By default, remote connections are disabled in H2 for protection. To enable remote access to the TCP server, you need to start the TCP server with the -tcpAllowOthers option (the corresponding flags for the other servers are -webAllowOthers and -pgAllowOthers).
To start both the web console server (the H2 Console tool) and the TCP server with remote connections enabled, you will have to use something like the following:
java -jar /path/to/h2.jar -web -webAllowOthers -tcp -tcpAllowOthers -browser
More information can be found in the docs here, and the console settings can be configured from here.
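If you prefer to start both servers from inside one of the Java apps instead of the standalone jar, a sketch (using H2's default ports) looks like this:
import org.h2.tools.Server;

public class H2RemoteServers {
    public static void main(String[] args) throws java.sql.SQLException {
        // TCP server for JDBC clients, web server for the H2 Console; both allow remote clients.
        Server tcp = Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092").start();
        Server web = Server.createWebServer("-web", "-webAllowOthers", "-webPort", "8082").start();
        System.out.println("TCP server at " + tcp.getURL());
        System.out.println("Web console at " + web.getURL());
    }
}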
I'm not entirely sure, but looking at the documentation and other questions answered previously on the same topic, the URL should be something like this:
jdbc:h2:tcp://<host>:<port>/~/datafactory;MODE=MSSQLServer;AUTO_SERVER=TRUE;
It seems that the host may not be localhost and the database may not be in memory.
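A quick way to sanity-check that URL from the remote machine is a plain JDBC connection over the TCP server (host, port, and credentials below are placeholders; AUTO_SERVER should only matter for the embedded URL, so it is left out here):
import java.sql.Connection;
import java.sql.DriverManager;

public class RemoteH2Check {
    public static void main(String[] args) throws Exception {
        // "my-ubuntu-host" and port 9092 are placeholders for the machine and TCP port you started.
        String url = "jdbc:h2:tcp://my-ubuntu-host:9092/~/datafactory;MODE=MSSQLServer";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}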
Is there a need for the H2 web console?
You can use a different SQL tool with the TCP server you have already started. I use SQuirreL SQL Client (http://squirrel-sql.sourceforge.net/) to connect to different databases.
If you need a web interface, you could use Adminer (https://www.adminer.org/), which can connect to different database vendors, including MS SQL, which happens to be the mode you're running H2 in. There is an Adminer Debian package that should work on Ubuntu.

Get Hadoop Cluster Details

I have created a pseudo-distributed cluster on a VM and am trying to get cluster information, such as the number of nodes, live nodes, dead nodes, etc., from a Java program using Hadoop's API.
Unfortunately, I am not able to use any of the methods of the FSNamesystem class. Essentially, I need to do cluster discovery from a Java client and get the same information as the HTTP UI on port 50070.
If the following statement works, i.e. if I can create the object and it does not return null, then I can get all the details of my cluster:
FSNamesystem f= FSNamesystem.getFSNamesystem();
Also, how can I inject the dependency for the NameNode?
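FSNamesystem is meant to run inside the NameNode process, so as one possible client-side alternative (a sketch, assuming HDFS 2.x and a placeholder NameNode URI), the DistributedFileSystem API can report live and dead datanodes:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class ClusterReport {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder NameNode URI
        try (FileSystem fs = FileSystem.get(conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
            DatanodeInfo[] dead = dfs.getDataNodeStats(DatanodeReportType.DEAD);
            System.out.println("Live datanodes: " + live.length + ", dead datanodes: " + dead.length);
        }
    }
}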
