Sort a table in Jackcess - java

I work with an MS-Access table in Java using Jackcess:
Database mdb = Database.open(new File(myPath));
Table myTable = mdb.getTable("TableName");
Is there a way to get the table sorted/ordered by one or more column(s)? Couldn't find anything in the docs.
Thanks for any hint.

If you iterate through the Table rows using a Cursor which is backed by an Index, you will get the rows ordered by the relevant Index.
This is an example (using the 1.x API) which iterates the table based on the order of the primary key:
for (Map<String, Object> row : Cursor.createIndexCursor(table, table.getPrimaryKeyIndex())) {
    // do something with row here...
}

I was having the same issue, and this answer helped.
For anyone using a newer version of Jackcess (e.g. 2.1.2), here is the equivalent:
for (Row row : CursorBuilder.createCursor(table.getIndex("IndexToBeSorted"))) {
    // your awesome code with the row here
}
Thanks!
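If the table has no index on the column(s) you need, another option (not from the answers above, just a sketch; the helper and column names are made up) is to read the rows into a list and sort them in memory. Jackcess hands rows back as column-name-to-value maps, so a generic comparator over one or more columns looks like this:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SortRowsDemo {

    // Sort rows (column-name -> value maps, the shape Jackcess returns)
    // by one or more columns. Assumes the column values are Comparable
    // and non-null.
    @SuppressWarnings("unchecked")
    static List<Map<String, Object>> sortBy(List<Map<String, Object>> rows,
                                            String... columns) {
        List<Map<String, Object>> sorted = new ArrayList<>(rows);
        sorted.sort((r1, r2) -> {
            for (String col : columns) {
                int c = ((Comparable<Object>) r1.get(col)).compareTo(r2.get(col));
                if (c != 0) {
                    return c;
                }
            }
            return 0;
        });
        return sorted;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> rows = new ArrayList<>();
        Map<String, Object> r1 = new HashMap<>();
        r1.put("Name", "Beta");
        Map<String, Object> r2 = new HashMap<>();
        r2.put("Name", "Alpha");
        rows.add(r1);
        rows.add(r2);
        System.out.println(sortBy(rows, "Name").get(0).get("Name")); // prints "Alpha"
    }
}
```

This only makes sense for tables that fit in memory; for large tables, an index-backed cursor as shown above is the right tool.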

Related

How to remove item in JTable also remove from Database?

I am fairly new to linking Java to a database.
I load items from a database (MS Access) into a Java table (JTable),
but when I delete a row in the JTable using this code
int numRows = tblweng.getSelectedRows().length;
for (int i = 0; i < numRows; i++) {
    ((DefaultTableModel) tblweng.getModel()).removeRow(tblweng.getSelectedRow());
}
it is deleted only from the JTable, not from the database. How can I remove it from both at the same time when I click the Remove button?
Please guide me, but don't go too deep; I am just a beginner. Thanks in advance.
1. Click the JTable row you want to remove.
2. Get the key data from dep_Table, e.g.:
int a = dep_Table.getSelectedRow();
String b = String.valueOf(dep_Table.getValueAt(a, 1));
3. Store whatever data you need in a String.
4. Connect to the database.
5. Use a DELETE query to remove the data from the database.
6. Reload the table.
These few basic steps should be enough, I think.
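The model-side part of those steps has a subtle pitfall: removing a row shifts the indices of the rows below it, so when several rows are selected they should be removed from the bottom up. A minimal sketch (helper names here are made up; the per-row SQL DELETE is only indicated by a comment, since the connection code depends on your setup):

```java
import java.util.Arrays;
import javax.swing.table.DefaultTableModel;

public class RemoveRowsDemo {

    // Remove the given view rows from the model, highest index first,
    // so earlier removals don't shift the indices of later ones.
    // In the real app, run the SQL DELETE for each row's key first,
    // then remove the row from the model.
    static void removeRows(DefaultTableModel model, int[] selectedRows) {
        Arrays.sort(selectedRows);
        for (int i = selectedRows.length - 1; i >= 0; i--) {
            // e.g. String key = String.valueOf(model.getValueAt(selectedRows[i], 0));
            // ...execute "DELETE FROM mytable WHERE id = ?" with key here...
            model.removeRow(selectedRows[i]);
        }
    }

    public static void main(String[] args) {
        DefaultTableModel model = new DefaultTableModel(
                new Object[][] { {"r0"}, {"r1"}, {"r2"}, {"r3"} },
                new Object[] { "id" });
        removeRows(model, new int[] {1, 3});
        System.out.println(model.getRowCount()); // prints 2 (r0 and r2 remain)
    }
}
```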

Inserting Map into Cassandra CQL3 column family by using Astyanax mutation batch

I have a Cassandra CQL3 column family with the following structure
CREATE TABLE mytable (
    A text,
    B text,
    C text,
    mymap map<text, text>,
    D text,
    PRIMARY KEY (A, B, C)
);
I am trying to insert a bunch of data into it using Astyanax.
The version of Cassandra that I am working with is 1.2, so I can't use BATCH insert.
I know that I can run CQL3 commands in a for loop using Prepared Statements.
I wanted to know if it's possible to use Astyanax mutation batch to insert the data into the above column family? I realize that this will make use of the Astyanax Thrift interface to insert into a CQL3 column family but for the sake of performant writes, is this a viable option?
I took a look at the structure of the column family in cassandra-cli and it looks something like this
ColumnFamily: mytable
Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
Default column value validator: org.apache.cassandra.db.marshal.BytesType
Cells sorted by: org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.ColumnToCollectionType(6974656d73:org.apache.cassandra.db.marshal.MapType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type)))
While I can insert data into the other columns (i.e. A, B, C, D) by creating a POJO with @Component on the various fields, I'm not sure how to deal with the map insert, i.e. inserting into the mymap column.
A sample POJO that I created is
public class TestColumn {
    @Component(ordinal = 0)
    String bComponent;
    @Component(ordinal = 1)
    String cComponent;
    @Component(ordinal = 2)
    String columnName;

    public TestColumn() {
    }

    public TestColumn(String bComponent, String cComponent, String columnName) {
        this.bComponent = bComponent;
        this.cComponent = cComponent;
        this.columnName = columnName;
    }
}
The insertion code is as follows:
AnnotatedCompositeSerializer<TestColumn> columnSerializer =
        new AnnotatedCompositeSerializer<>(TestColumn.class);
ColumnFamily<String, TestColumn> columnFamily =
        new ColumnFamily<>("mytable", StringSerializer.get(), columnSerializer);
final MutationBatch m = keyspace.prepareMutationBatch();
ColumnListMutation<TestColumn> columnListMutation = m.withRow(columnFamily, "AVal");
columnListMutation.putColumn(new TestColumn("BVal", "CVal", null),
        ByteBufferUtil.EMPTY_BYTE_BUFFER, timeToLive);
columnListMutation.putColumn(new TestColumn("BVal", "CVal", "D"), "DVal",
        timeToLive);
m.execute();
How exactly should I modify the above code so that I can insert the map value as well?
We solved this by using the DataStax Java Driver instead of Astyanax.
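For reference, a minimal sketch of what that looks like: with the DataStax Java Driver, a CQL3 map column can be bound directly as a java.util.Map, so no Thrift-level composite handling is needed. The statement text below matches the schema in the question; the driver calls shown in the comments are the usual prepare/bind pattern, not code from the original post:

```java
import java.util.HashMap;
import java.util.Map;

public class MapInsertCql {

    // CQL for inserting one row of mytable, including the map column.
    // With the DataStax Java Driver this would be prepared once and the
    // map bound as a java.util.Map<String, String>, roughly:
    //   PreparedStatement ps = session.prepare(insertCql());
    //   session.execute(ps.bind("AVal", "BVal", "CVal", mymap, "DVal"));
    static String insertCql() {
        return "INSERT INTO mytable (a, b, c, mymap, d) VALUES (?, ?, ?, ?, ?)";
    }

    public static void main(String[] args) {
        Map<String, String> mymap = new HashMap<>();
        mymap.put("k1", "v1"); // the value bound to the mymap column
        System.out.println(insertCql());
    }
}
```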

Delete all the columns and its data except for one columns using Astyanax client?

I am working on a project in which I need to delete all the columns and their data, except for one column and its data, in Cassandra using the Astyanax client.
I have a dynamic column family like the one below, and we already have a couple of million records in that column family.
create column family USER_TEST
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'UTF8Type'
and gc_grace = 86400
and column_metadata = [ {column_name : 'lmd', validation_class : DateType}];
I have user_id as the rowKey and other columns I have is something like this -
a1,a2,a3,a4,a5,a6,a7,a8,a9,a10,a11,a12,a13,a14,a15,lmd
Now I need to delete all the columns and their data except for the a15 column. That is, I want to keep the a15 column and its data for every user_id (rowKey) and delete the rest of the columns and their data.
I already know how to delete all the data for a particular rowKey using the Astyanax client:
public void deleteRecord(final String rowKey) {
    try {
        MutationBatch m = AstyanaxConnection.getInstance().getKeyspace().prepareMutationBatch();
        m.withRow(AstyanaxConnection.getInstance().getEmp_cf(), rowKey).delete();
        m.execute();
    } catch (ConnectionException e) {
        // some code
    } catch (Exception e) {
        // some code
    }
}
Now, how do I delete all the columns and their data except for one column, for every user id (row key)?
Any thoughts on how this can be done efficiently using the Astyanax client?
It appears that Astyanax does not currently support the slice-delete functionality that is a fairly recent addition to both the storage engine and the Thrift API. If you look at the Thrift API reference (http://wiki.apache.org/cassandra/API10),
you will see that the delete operation takes a SlicePredicate, which can hold either a list of columns or a SliceRange. A SliceRange could specify all columns greater than or less than the column you want to keep, so two slice-delete operations would let you delete all but one of the columns in a row.
Unfortunately, Astyanax can only delete an entire row or an explicit list of columns; it does not wrap the full SlicePredicate functionality. So it looks like you have three options:
1) Send a raw Thrift slice delete, bypassing the Astyanax wrapper;
2) Do a column read, followed by a row delete, followed by a column write. This is not ideally efficient, but if it isn't done too frequently it shouldn't be prohibitive; or
3) Read the entire row and explicitly delete all of the columns other than the one you want to preserve.
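Option 3 is straightforward to sketch. The only non-obvious part is computing the set of columns to delete; in Astyanax the deletes would then be issued per column on a ColumnListMutation (deleteColumn), batched through a MutationBatch. A sketch of the column filtering, with the Astyanax calls left as comments since they need a live keyspace (class and variable names here are made up):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class ColumnsToDelete {

    // Given the column names read back from a row, compute the ones to
    // delete: everything except the single column we want to keep.
    // In Astyanax this would feed something like:
    //   ColumnListMutation<String> row = m.withRow(cf, rowKey);
    //   for (String name : doomed) row.deleteColumn(name);
    //   m.execute();
    static List<String> columnsToDelete(Collection<String> all, String keep) {
        List<String> doomed = new ArrayList<>();
        for (String name : all) {
            if (!name.equals(keep)) {
                doomed.add(name);
            }
        }
        return doomed;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("a1", "a2", "a15", "lmd");
        System.out.println(columnsToDelete(cols, "a15")); // prints [a1, a2, lmd]
    }
}
```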
I should note that while the storage engine and thrift API both support slice deletes, this is also not yet explicitly supported by CQL.
I filed this ticket to address that last limitation:
https://issues.apache.org/jira/browse/CASSANDRA-6292

Java: update when data exists, insert if it doesn't [duplicate]

In MySQL, if you specify ON DUPLICATE KEY UPDATE and a row is inserted that would cause a duplicate value in a UNIQUE index or PRIMARY KEY, an UPDATE of the old row is performed. For example, if column a is declared as UNIQUE and contains the value 1, the following two statements have identical effect:
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
UPDATE table SET c=c+1 WHERE a=1;
I don't believe I've come across anything of the like in T-SQL. Does SQL Server offer anything comparable to MySQL's ON DUPLICATE KEY UPDATE?
I was surprised that none of the answers on this page contained an example of an actual query, so here you go:
A more complex example of inserting data and then handling duplicates:
MERGE
INTO MyBigDB.dbo.METER_DATA WITH (HOLDLOCK) AS target
USING (SELECT
77748 AS rtu_id
,'12B096876' AS meter_id
,56112 AS meter_reading
,'20150602 00:20:11' AS time_local) AS source
(rtu_id, meter_id, meter_reading, time_local)
ON (target.rtu_id = source.rtu_id
AND target.time_local = source.time_local)
WHEN MATCHED
THEN UPDATE
SET meter_id = '12B096876'
,meter_reading = 56112
WHEN NOT MATCHED
THEN INSERT (rtu_id, meter_id, meter_reading, time_local)
VALUES (77748, '12B096876', 56112, '20150602 00:20:11');
There's no ON DUPLICATE KEY UPDATE equivalent, but MERGE with WHEN MATCHED / WHEN NOT MATCHED clauses gives you the same behavior. See:
Inserting, Updating, and Deleting Data by Using MERGE
You can try it the other way around; it does more or less the same thing. (Note that without a transaction, this two-step pattern can race with a concurrent insert between the UPDATE and the INSERT.)
UPDATE tablename
SET field1 = 'Test1',
    field2 = 'Test2'
WHERE id = 1

IF @@ROWCOUNT = 0
    INSERT INTO tablename (id, field1, field2)
    VALUES (1, 'Test1', 'Test2')
SQL Server 2008 has this feature as part of T-SQL.
See the documentation on the MERGE statement here: http://msdn.microsoft.com/en-us/library/bb510625.aspx
SQL Server 2000 onward supports INSTEAD OF triggers, which can accomplish the desired functionality, although a trigger will be hiding behind the scenes.
Check the section "Insert or update?":
http://msdn.microsoft.com/en-us/library/aa224818(SQL.80).aspx
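Since the question is tagged Java, the UPDATE-then-INSERT pattern above translates directly to JDBC: executeUpdate returns the affected-row count, which plays the role of @@ROWCOUNT. The sketch below simulates the table with a plain Map so the control flow is runnable without a database; the commented lines show the corresponding JDBC calls, with statement text matching the MySQL example in the question:

```java
import java.util.HashMap;
import java.util.Map;

public class UpsertDemo {

    // Update-if-exists, insert-otherwise, mirroring:
    //   int n = stmt.executeUpdate("UPDATE table SET c = c + 1 WHERE a = 1");
    //   if (n == 0) stmt.executeUpdate("INSERT INTO table (a,b,c) VALUES (1,2,3)");
    // Here a Map<Integer, Integer> (a -> c) stands in for the table.
    static void upsert(Map<Integer, Integer> table, int a) {
        Integer c = table.get(a);
        if (c != null) {
            table.put(a, c + 1);  // the UPDATE branch
        } else {
            table.put(a, 3);      // the INSERT branch (c starts at 3)
        }
    }

    public static void main(String[] args) {
        Map<Integer, Integer> table = new HashMap<>();
        upsert(table, 1);  // no row yet -> insert, c = 3
        upsert(table, 1);  // row exists -> update, c = 4
        System.out.println(table.get(1)); // prints 4
    }
}
```

As noted above, two separate statements can race under concurrency; MERGE WITH (HOLDLOCK) avoids that.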

How to select columns by name pattern in SQL?

If I have a SQL table with columns:
NR_A, NR_B, NR_C, NR_D, R_A, R_B, R_C
and at runtime I add columns following the column sequence, so the next columns added would be R_D, then R_E.
My problem is that each time I re-run my script, I need to reset the values of the columns that start with R_ (labeled that way to indicate they are resettable) back to 0. The NR_ columns, by the way, are fixed. So the simplest way to put it would be something like:
UPDATE table SET col = 0 WHERE column name starts with 'R_'
I know that is not valid SQL, but I think it's the best way to state my problem.
Any thoughts?
EDIT: btw, I use postgres (if that would help) and java.
SQL doesn't support dynamically named columns or tables; your options are:
statically define the column references, or
use dynamic SQL to generate and execute the query/queries.
Java PreparedStatements do not insulate you from this; they have the same issue, just in Java.
Are you sure you have to add columns during normal operations? Dynamic data models are usually a really bad idea; you will run into locking and performance problems.
If you do need a dynamic data model, take a look at key-value storage. PostgreSQL also has the hstore extension; check contrib.
If you don't have many columns and you don't expect the schema to change, just list them explicitly:
UPDATE table SET R_A = 0, R_B = 0, R_C = 0;
Otherwise, a simple PHP script could dynamically build and execute your query:
<?php
$db = pg_connect("host=localhost port=5432 user=postgres password=mypass dbname=mydb");
if (!$db) die("Failed to connect");
$reset_cols = ["A", "B", "C", "D"];
foreach ($reset_cols as $col) {
    $sql = "UPDATE my_table SET R_" . $col . " = 0";
    pg_query($db, $sql);
}
?>
You could also look up the table's columns in PostgreSQL by querying the information_schema.columns table, but you would likely need to write a PL/pgSQL function to loop over the query results (one row per table column starting with "R_").
If you would rather do it with a SQL script, you first need all the columns of the given table.
This query returns them:
SELECT attname
FROM pg_attribute, pg_type
WHERE typname = 'tablename' -- your table name
  AND attrelid = typrelid
  AND attname NOT IN ('cmin', 'cmax', 'ctid', 'oid', 'tableoid', 'xmin', 'xmax')
  -- the names excluded above are system columns
The query returns all the columns of the given table except the system columns.
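Putting the pieces together from Java: once the column names are fetched (from pg_attribute as above, or from information_schema.columns), filtering for the R_ prefix and building a single UPDATE is plain string work. A sketch with the column list hard-coded in main (at runtime it would come from the catalog query; the table name is made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ResetColumnsSql {

    // Build one UPDATE that zeroes every column whose name starts with
    // the given prefix. Column names are assumed to come from the
    // database catalog, so they are trusted identifiers.
    static String buildReset(String table, List<String> columns, String prefix) {
        String sets = columns.stream()
                .filter(c -> c.startsWith(prefix))
                .map(c -> c + " = 0")
                .collect(Collectors.joining(", "));
        return "UPDATE " + table + " SET " + sets;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("NR_A", "NR_B", "R_A", "R_B", "R_C");
        System.out.println(buildReset("mytable", cols, "R_"));
        // prints: UPDATE mytable SET R_A = 0, R_B = 0, R_C = 0
    }
}
```

The resulting statement can then be run through a plain JDBC Statement, since no user input is interpolated.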
