MySQL column containing different Java datatypes - java

I have a table in my MySQL database containing calculated values. At the moment this is a DECIMAL field.
But now I need this table to also hold calculated dates, booleans, etc.
What's the best solution here: just change the MySQL field type to VARCHAR and handle the rest in my Java code? Or will I get myself into a mess with this approach?
Any ideas and pointers are welcome!

Use a single column per datatype and introduce a discriminator column stating which of the datatype columns should be referenced. The discriminator column can simply be an int representing an enumeration of possible datatypes.
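A minimal sketch of that layout on the Java side, assuming made-up column and enum names (none of these are prescribed by the answer):

```java
import java.math.BigDecimal;
import java.time.LocalDate;

// One typed field per datatype column, plus an int discriminator saying
// which one holds the value. Column names like VALUE_DECIMAL are illustrative.
public class CalculatedValue {
    enum ValueType { DECIMAL, DATE, BOOLEAN }

    int typeCode;             // the discriminator column
    BigDecimal decimalValue;  // VALUE_DECIMAL column
    LocalDate dateValue;      // VALUE_DATE column
    Boolean booleanValue;     // VALUE_BOOL column

    // Resolve the active value by decoding the discriminator.
    Object value() {
        switch (ValueType.values()[typeCode]) {
            case DECIMAL: return decimalValue;
            case DATE:    return dateValue;
            case BOOLEAN: return booleanValue;
            default:      throw new IllegalStateException("unknown type " + typeCode);
        }
    }
}
```

Reading code then switches on the discriminator instead of parsing a VARCHAR, so each value keeps its native SQL type and validation.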

How to determine size of fixed length array database column using JOOQ?

Given this PostgreSQL table with a fixed-length array column:
CREATE TABLE test (
    id integer,
    "values" integer[4]
);
Will jOOQ code generation create a Java constant or method that provides the maximum number of elements that can be stored in the values column (i.e. 4)?
After reading through the jOOQ documentation on code generation and support for SQL arrays, I couldn't find anything specific about fixed-length arrays. Also, nothing in the generated code jumps out at me as providing this information.
No, version 3.9 of jOOQ doesn't know any fixed size or size limit of a database array (neither with PostgreSQL array types nor with Oracle VARRAY types).
I have registered feature request #5932 for this.
I asked this question in part because I was worried about array overruns in the PostgreSQL database. While researching a way to use straight SQL to determine the size constraint, I noticed that the PostgreSQL ARRAY documentation makes this statement:
As before, however, PostgreSQL does not enforce the size restriction in any case.
Based on that statement, it appears that declaring an array size to enforce a limit is unnecessary, since all array columns are treated as variable length. So even if one could retrieve the declared PostgreSQL array size through jOOQ, straight SQL, or any other means, why bother?
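As a quick illustration of that documentation note (the table name here is made up), PostgreSQL happily accepts more elements than the declared bound:

```sql
-- The declared size 4 is effectively documentation only:
CREATE TABLE size_demo (id integer, vals integer[4]);
INSERT INTO size_demo VALUES (1, '{1,2,3,4,5,6}');  -- six elements, no error
SELECT array_length(vals, 1) FROM size_demo;        -- reports 6
```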

Java DAO: null and Integer object or negative number and int in mapped object for optional number column?

I'm currently working on a project where we recently had a discussion about this. Consider a database table with a numeric column that is allowed to have "no value" in the source code.
Some of my colleagues prefer to use the primitive type int in their mapped objects and set the value in this case to -1; I prefer to make this column nullable, use the Integer wrapper in the source instead of the primitive type, and set the value to null.
I'm now searching for the pros and cons of both variants but couldn't find anything so far.
The only article I found so far is this one:
http://www.javapractices.com/topic/TopicAction.do?Id=134
...
There is a very common exception to this rule. For Model Objects, which map roughly to database records, it's often appropriate to use null to represent optional fields stored as NULL in the database
...
#EDIT
I should add that these columns are used in this project as a kind of fake foreign key. They are not real database constraints but are used as such. Don't ask me why... I know, I know :-)
-1 in this case means the row has no relationship to the primary key/id of another table. So the value isn't needed for any calculations or the like...
For nullable columns, it is preferable to store null when you don't want to store a value. Your approach of using a null Integer reference is far better than -1.
Consider this: if your numeric column is used in a calculation, storing -1 (instead of null) could change the result. With null, your calculations remain correct.
Pros of using a nullable Integer:
If the numeric column is used in a calculation, your output is not affected.
When using an ORM, the underlying framework doesn't have to insert a value for that column in an insert query.
Your database will have fewer rows with -1 stored in nullable columns. (Think of 10 billion rows with -1 as the value of a nullable column where you wanted null.)
An Integer reference and a null reference occupy the same space, so making the field nullable costs no extra memory per object.
You don't have to check for -1 everywhere.
The reason your colleagues still use the primitive int may be that they haven't yet realized the benefit of using objects over primitives, or they are still used to JDK 1.4-style coding.
I hope I was able to answer your question...
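To make the calculation point above concrete, here is a small sketch with made-up values showing how a forgotten -1 sentinel silently skews an aggregate, while null forces an explicit filter:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class NullableSum {
    // Sum that treats null as "no value" and skips it.
    static int sumIgnoringNull(List<Integer> values) {
        return values.stream()
                .filter(Objects::nonNull)
                .mapToInt(Integer::intValue)
                .sum();
    }

    public static void main(String[] args) {
        List<Integer> withNull = Arrays.asList(10, null, 20);
        System.out.println(sumIgnoringNull(withNull)); // 30

        // With a -1 sentinel, every aggregation must remember to exclude it,
        // or the result is silently wrong:
        List<Integer> withSentinel = Arrays.asList(10, -1, 20);
        int naive = withSentinel.stream().mapToInt(Integer::intValue).sum();
        System.out.println(naive); // 29, not 30
    }
}
```

The null version fails fast (an unfiltered null throws a NullPointerException), whereas the sentinel version quietly produces a wrong number.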

Algorithm to generate column prefix patterns

I am wondering how to develop a dynamic way to generate column prefix patterns. The main idea is to standardize corporate naming patterns when defining column names. For example:
If I have to create a column that is a date, the prefix will be DT_*column_name*;
If it is a name column, it will be NM_*column_name*;
But if there isn't a well-defined pattern, you could suggest a name that needs to be approved.
Has anyone ever thought about something like this?
Thank you in advance
**EDIT**
Sorry, I think I didn't explain it well enough. It's not exactly for handling type prefixes, but for specific business/corporation names. For example (again):
Column customer should be prefixed with CSTM_
Column digit should be prefixed with DIGT_
Column franchising should be prefixed with FRCH_
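As a sketch of what such a tool could look like (the prefix map and method names are my own invention, not an established corporate standard):

```java
import java.util.Map;
import java.util.Optional;

public class ColumnPrefixer {
    // Approved business terms and their corporate prefixes (illustrative).
    static final Map<String, String> APPROVED = Map.of(
            "customer", "CSTM_",
            "digit", "DIGT_",
            "franchising", "FRCH_");

    // Returns the prefixed column name, or empty when the term has no
    // approved prefix yet and a name must be suggested for approval.
    static Optional<String> prefix(String term) {
        return Optional.ofNullable(APPROVED.get(term.toLowerCase()))
                .map(p -> p + term.toUpperCase());
    }
}
```

For example, prefix("customer") yields CSTM_CUSTOMER, while prefix("region") comes back empty, signalling that a prefix still needs to be approved.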
I don't think this is a good idea.
Consider that your organization may use different databases, like Oracle and MySQL, with different data types.
The column name should be used to describe the column. You already have the type of the column defined.
Another drawback appears if you want your application to be supported by multiple database engines. Are you going to change the schema?

How to add arbitrary columns to Cassandra using CQL with Datastax Java driver?

I have recently started taking a keen interest in CQL, as I am thinking of using the Datastax Java driver. Previously, I was using a column family instead of a table, with the Astyanax driver. I need to clarify something here:
I am using the column family definition below in my production cluster, and I can insert arbitrary columns (with their values) on the fly without actually modifying the column family schema.
create column family FAMILY_DATA
with key_validation_class = 'UTF8Type'
and comparator = 'UTF8Type'
and default_validation_class = 'BytesType'
and gc_grace = 86400;
But after going through this post, it looks like I would need to alter the schema every time I get a new column to insert, which is not what I want to do... as I believe CQL3 requires column metadata to exist...
Is there any other way I can still add arbitrary columns and their values if I go with the Datastax Java driver?
Any code samples/examples will help me understand better. Thanks!
I believe in CQL you solve this problem using collections.
You can define the data type of a field to be a map, and then insert an arbitrary number of key-value pairs into the map; that should mostly behave as dynamic columns did in traditional Thrift.
Something like:
CREATE TABLE data ( data_id int PRIMARY KEY, data_time bigint, data_values map<text, float> );
INSERT INTO data (data_id, data_time, data_values) VALUES (1, 21341324, {'sum': 2134, 'avg': 44.5});
Here is more information.
Additionally, you can find the mapping between the CQL3 types and the Java types used by the DataStax driver here.
If you enable compact storage for that table, it will be backwards compatible with Thrift and CQL 2.0, both of which allow you to enter dynamic column names.
With this approach you can have as many columns, with whatever names, as you want. The primary key is composed of two parts: the first element, which is the row key, and the remaining elements, which combined form a single column name.
See the tweets example here
Though you've said this is in production already, it may not be possible to alter a table with existing data to use compact storage.
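A hedged sketch of the wide-row layout described above (the table and column names are illustrative, not taken from the linked tweets example):

```sql
CREATE TABLE tweets (
    username text,
    posted_at timeuuid,
    body text,
    PRIMARY KEY (username, posted_at)
) WITH COMPACT STORAGE;
-- Internally each row is keyed by username, and every (posted_at -> body)
-- pair becomes one dynamic column, so new "columns" appear per insert
-- without any ALTER TABLE.
```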

Can astyanax return ordered column names?

Using com.netflix.astyanax, I add entries for a given row as follows:
final ColumnListMutation<String> columnList = m.withRow(columnFamily, key);
columnList.putEmptyColumn(columnName);
Later I retrieve all my columns with:
final OperationResult<ColumnList<String>> operationResult = keyspace
.prepareQuery(columnFamily).getKey(key).execute();
operationResult.getResult().getColumnNames();
This correctly returns all the columns I have added, but the columns are not ordered according to when they were entered into the database. Since each column has a timestamp associated with it, there ought to be a way to do exactly this, but I don't see it. Is there?
Note: If there isn't, I can always change the code above to:
columnList.putColumn(ip, new Date());
and then retrieve the column values, order them accordingly, but that seems cumbersome, inefficient, and silly since each column already has a timestamp.
I know from PlayOrm that if you do column slices, it returns those in order. In fact, PlayOrm uses that to enable S-SQL in partitions, and it basically batches the column slicing, which comes back in order or reverse order depending on how it is requested. You may want to do a column slice from 0 to Long.MAX_VALUE.
I am not sure about getting the row, though. I haven't tried that.
Oh, and PlayOrm is just a mapping layer on top of Astyanax, though not really relational and more NoSQL-ish really, as demonstrated by its patterns page:
http://buffalosw.com/wiki/Patterns-Page/
Cassandra will never order your columns in "insertion order".
Columns are always ordered lowest first. The ordering also depends on how Cassandra interprets your column names; you define that interpretation with the comparator you set when defining your column family.
From what you gave, it looks like you use string timestamp values. If you simply serialize your timestamps as e.g. "123141" and "231", be aware that with a UTF8Type comparator, "231" > "123141".
A better approach: use time-based UUIDs as column names, as many examples of time-series data in Cassandra propose. Then you can use the UUIDType comparator.
CREATE COLUMN FAMILY timeseries_data
WITH comparator = UUIDType
AND key_validation_class=UTF8Type;
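The lexicographic pitfall described above can be seen with a plain Java string comparison:

```java
public class LexOrder {
    public static void main(String[] args) {
        // UTF8Type compares byte-wise/lexicographically, not numerically,
        // so "231" sorts after "123141" because '2' > '1':
        System.out.println("231".compareTo("123141") > 0); // true
        // ...even though as numbers 231 < 123141:
        System.out.println(231 < 123141); // true
    }
}
```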