Querying a primary key through the Java client - Aerospike

I'm trying to emulate the query Select * from namespace.set where pk="something" through Aerospike's Java client. I know that we can query a secondary index through a "Filter" and create a "PredExp" for other predicates, but I'm unable to figure out how to query on a primary key.
Any help would be appreciated. Thanks a lot in advance.
Edit: I have multiple bins in my set, if that makes any difference.

I figured it out. You just have to construct a new Key and call get() on the Java Aerospike client:
Record record = aerospikeClient.get(null, new Key(namespace, cacheName, key), binNames);
Refer to the discussion: https://discuss.aerospike.com/t/primary-key-search/558/6
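For context, a slightly fuller sketch of that call (the host, namespace, set, and key values below are placeholders; passing null for the policy just uses the client defaults):

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class PrimaryKeyGet {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);

        // get() on a Key is the equivalent of: SELECT * FROM namespace.set WHERE pk = "something"
        Key key = new Key("namespace", "set", "something");
        Record record = client.get(null, key);                      // all bins
        // Record selected = client.get(null, key, "bin1", "bin2"); // or only the named bins

        if (record != null) {
            System.out.println(record.bins);                        // map of bin name -> value
        }
        client.close();
    }
}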

Related

Remove default order by in Casper dataset

I have been using the open source dataset provider Casper to get an in-memory representation of a collection of database objects in Java.
Github Repository : https://github.com/casperds/casperdatasets
Below is the code I have been using to pull data into Casper datasets:
String[] primaryKeys = { "QUESTION_ID" };
if (resultSet != null)
{
    container = CDataCacheDBAdapter.loadData(resultSet, null, primaryKeys,
            new HashMap<Object, Object>());
    lCDataRowset = container.getAll();
    preparedStatement.close();
    resultSet.close();
}
The problem with using this is that when I don't mention primary keys, the DBAdapter does not load any data, and if I mention some column as the primary key, the query's "ORDER BY" has no effect on the dataset; it is simply ordered by the primary key.
I want to be able to pull the data into the dataset in the order specified in the query.
Has anybody faced this issue? Any kind of help is appreciated! Thanks.
Well, it turned out to be a very silly issue. If you pass null for the primaryKeys parameter, the data comes back in the same order the query returns it in MySQL Query Browser.
I thought this could help someone someday, which is why I'm keeping this post instead of deleting it.
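In code, that just means passing null in place of the primaryKeys array; the call from the question is otherwise unchanged:

// null primaryKeys: rows keep the order produced by the query's ORDER BY
container = CDataCacheDBAdapter.loadData(resultSet, null, null, new HashMap<Object, Object>());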

Getting a MySQL table's key and engine information from a statement's metadata using java

I am in the process of writing a Java class that will read tables from a database on one database server and will then recreate those tables in another database that resides on a different server.
With that in mind, I am obtaining most of the tables' metadata from a result set that reads from the source database. I say most because I am unsure where I can find information on the keys, the auto-increment settings, and the engine.
Can I get this information via the statement's metadata? Or should I be looking elsewhere for it, possibly the database's metadata?
If this helps, here is a snippet of the code - as you can see, quite basic stuff.
Statement sourceStmt = sourceConnection.createStatement();
ResultSet sourceRS = sourceStmt.executeQuery("select * from " + tableName);

// This is how I am getting the metadata; I am not sure whether this is the right
// place to look for the key and engine type information.
sourceRS.getMetaData();
Any information you can offer is greatly appreciated.
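For what it's worth, the database-level route mentioned above might look roughly like this. It is only a sketch: getPrimaryKeys() is standard JDBC DatabaseMetaData, while SHOW TABLE STATUS is MySQL-specific and is one way to read the engine and auto-increment values; sourceConnection and tableName are the same variables as in the snippet above.

// Standard JDBC: primary key columns of the source table
DatabaseMetaData dbMeta = sourceConnection.getMetaData();
ResultSet pkRS = dbMeta.getPrimaryKeys(null, null, tableName);
while (pkRS.next()) {
    System.out.println("PK column: " + pkRS.getString("COLUMN_NAME"));
}

// MySQL-specific: engine and auto-increment come back from SHOW TABLE STATUS
Statement statusStmt = sourceConnection.createStatement();
ResultSet statusRS = statusStmt.executeQuery("SHOW TABLE STATUS LIKE '" + tableName + "'");
if (statusRS.next()) {
    System.out.println("Engine: " + statusRS.getString("Engine"));
    System.out.println("Auto_increment: " + statusRS.getLong("Auto_increment"));
}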

How can I get all database table names in Moodle using API

I need a list of the database tables used by Moodle.
How can I get it? Is there an API for that, either in Java or PHP?
I checked APIs such as the Data definition API, the Data manipulation API, and Web services, but I could not find what I require.
These APIs help with getting data from Moodle, but I need metadata.
Please help. Thanks in advance.
You can use this in Moodle:
$tables = $DB->get_tables();
and also:
foreach ($tables as $table) {
    $columns = $DB->get_columns($table);
    foreach ($columns as $column) {
        ...
    }
}
Or use the information_schema
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'yourmoodledatabasename'
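Since the question also asks about Java: as far as I know Moodle itself does not offer a Java API for this, but if you can connect directly to the Moodle database, plain JDBC metadata will list the tables. The connection details below are placeholders:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListMoodleTables {
    public static void main(String[] args) throws Exception {
        // Point this at your Moodle MySQL database
        String url = "jdbc:mysql://localhost:3306/yourmoodledatabasename";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            DatabaseMetaData meta = conn.getMetaData();
            // null catalog/schema plus the "%" pattern lists every table in the database
            try (ResultSet rs = meta.getTables(null, null, "%", new String[] { "TABLE" })) {
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}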

Transform Cassandra query result to POJO with Astyanax

I am working on a Spring web application that uses Cassandra with the Astyanax client. I want to transform the result data retrieved from Cassandra queries into a POJO, but I do not know which library or Astyanax API supports this.
For example, I have a User column family (CF) with some basic properties (username, password, email), and other related additional information can be added to this CF. I then fetch one User row from that CF, using an OperationResult<ColumnList<String>> to hold the returned data, like this:
OperationResult<ColumnList<String>> columns = getKeyspace().prepareQuery(getColumnFamily()).getRow(rowKey).execute();
What I want to do next is populate "columns" into my User object. I have two problems here; could you please help me solve them?
1/ What is the best structure for the User class to hold the corresponding data retrieved from the User CF? My suggestion is:
public class User {
    String userName, password, email; // Basic properties
    Map<String, Object> additionalInfo;
}
2/ How can I transform the Cassandra data into this POJO with a generic method (so that it can be applied to every CF that has a mapped POJO)?
I am sorry if there is anything silly in my questions; I have only been working with NoSQL concepts, Cassandra, and Astyanax for two weeks.
Thank you so much for your help.
You can try Achilles: https://github.com/doanduyhai/achilles, a JPA-compliant entity manager for Cassandra.
Right now there is a complete implementation using the Thrift API via Hector.
The CQL3 implementation using the DataStax Java Driver is in progress; a beta version should be available in a few months (July-August 2013).
CQL3 is great, but it is still too low level because you need to extract the data from the ResultSet yourself. It is like going back to the time when only JDBC Template was available.
Achilles is there to fill the gap.
I would suggest using a library like PlayOrm, with which you can easily perform CRUD operations on your entities. See this for an example of how you can create a User object; you can then get the POJO easily with
User user1 = mgr.find(User.class, email);
assuming that email is your NoSqlId (the primary key, i.e. the row key in Cassandra).
I use com.netflix.astyanax.mapping.Mapping and com.netflix.astyanax.mapping.MappingCache for exactly this purpose.
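If it helps, here is roughly how that gets wired up. Treat the method names below (Mapping.make, newInstance) as from memory and check them against your Astyanax version, since the mapping package has changed across releases; "columns" is the OperationResult from the question.

import com.netflix.astyanax.mapping.Column;
import com.netflix.astyanax.mapping.Id;
import com.netflix.astyanax.mapping.Mapping;

public class User {
    @Id("username")
    private String userName;

    @Column("password")
    private String password;

    @Column("email")
    private String email;

    // getters and setters omitted
}

// Elsewhere, map the fetched row onto the annotated POJO:
ColumnList<String> columnList = columns.getResult();
Mapping<User> mapping = Mapping.make(User.class);
User user = mapping.newInstance(columnList);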

The best way to import (merge) / export a Java DB database

I have, let's say, two PCs, PC-a and PC-b, which both have the same application installed with Java DB support. From time to time I want to copy the data from the database on PC-a to the database on PC-b, and vice versa, so that the two PCs have the same data at all times.
Is there an already implemented API in the database layer for this (i.e. 1. export/backup the database from PC-a, 2. import/merge it into the database on PC-b), or do I have to do this at the SQL layer (manually)?
As you mention in the comments that you want to "merge" the databases, this sounds like something you will need custom code for, as presumably there could be conflicts: the same key in both, but with different details against it, for example.
In short: you can't do this without some work on your side. SalesLogix solved this problem by giving everything a site code, so your table would look like this:
Customer:
    SiteCode varchar,
    CustomerID varchar,
    ....
    primary key (SiteCode, CustomerID)
So now you would take your databases and match up each record by primary key. Where there are conflicts, you would have to provide a report to the end user on what data was different.
Say machine1:
Record | SiteCode | CustomerID | CustName  | phone        | email
1      | XXX      | 0001       | Customer1 | 555.555.1212 | darth@example.com
and on machine2:
Record | SiteCode | CustomerID | CustName  | phone        | email
2      | XXY      | 0001       | customer2 | 555.555.1213 | darth@nowhere.com
3      | XXX      | 0001       | customer1 | 555.555.1212 | darth@nowhere.com
When performing a resolution:
Records 1 and 3 are in conflict, because the PK matches but the data doesn't (the email is different).
Record 2 is unique and can freely exist in both databases.
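As a rough illustration of that matching step (the Customer record, the two row collections, and reportConflict below are hypothetical names, not an existing API):

import java.util.HashMap;
import java.util.Map;

record Customer(String siteCode, String customerId, String custName, String phone, String email) {}

// Key each row on (SiteCode, CustomerID) and compare the two exports
Map<String, Customer> merged = new HashMap<>();
for (Customer c : machine1Rows) {
    merged.put(c.siteCode() + "|" + c.customerId(), c);
}
for (Customer c : machine2Rows) {
    String key = c.siteCode() + "|" + c.customerId();
    Customer existing = merged.get(key);
    if (existing == null) {
        merged.put(key, c);            // unique record (like record 2): safe to copy over
    } else if (!existing.equals(c)) {
        reportConflict(existing, c);   // same PK, different data (records 1 and 3): needs a human decision
    }
}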
There is NO way to do this automatically without errors, data corruption, or referential integrity issues.
I guess you are using Java DB (aka Derby) - in which case, assuming you just can't use a single instance, you can do a backup/restore.
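For reference, a minimal sketch of that backup/restore route (note that it copies a whole database rather than merging; the database name myappdb and the /backups path are placeholders):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class DerbyBackup {
    public static void main(String[] args) throws Exception {
        // On PC-a: back up the running database to a directory
        try (Connection conn = DriverManager.getConnection("jdbc:derby:myappdb");
             CallableStatement cs = conn.prepareCall("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
            cs.setString(1, "/backups");
            cs.execute();
        }

        // On PC-b: restore (or create a copy) by connecting with the restoreFrom/createFrom attribute, e.g.
        // DriverManager.getConnection("jdbc:derby:myappdb;restoreFrom=/backups/myappdb");
    }
}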
Why don't you keep the database on one PC and have all the other PCs request data from the host PC?
