I am storing file data. I read the file through Java, and whenever I store the file data in Cassandra it gives me this error.
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:115 no viable alternative at input 'PRIMARY' (...* from sensorkeyspace.sensortable WHERE [PRIMARY]...)
Here is my query:
CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.texttable AS select * from sensorkeyspace.sensortable WHERE PRIMARY KEY (sensor_id) IS NOT NULL
Try altering your WHERE clause to this:
WHERE sensor_id IS NOT NULL PRIMARY KEY (sensor_id)
If you get an error indicating that:
No columns are defined for Materialized View other than primary key
Based on CASSANDRA-13564:
That error message implies you re-used only the partition key/primary key from the base as the partition key for your view (you had no extra clustering columns in your base primary key).
I get that message when I have a table with a simple PRIMARY KEY, and I try to create a view with that same, simple PRIMARY KEY.
For example, if I have this table:
CREATE TABLE stackoverflow.newtable (
    name text PRIMARY KEY,
    score float,
    value float,
    value2 blob);
This fails:
cassdba#cqlsh:stackoverflow> CREATE MATERIALIZED VIEW IF NOT EXISTS
stackoverflow.newtable_view AS SELECT * FROM stackoverflow.newtable
WHERE name IS NOT NULL PRIMARY KEY (name);
InvalidRequest: Error from server: code=2200 [Invalid query]
message="No columns are defined for Materialized View other than primary key"
But this works for the same table:
cassdba#cqlsh:stackoverflow> CREATE MATERIALIZED VIEW IF NOT EXISTS
stackoverflow.newtable_view AS SELECT * FROM stackoverflow.newtable
WHERE score IS NOT NULL AND name IS NOT NULL PRIMARY KEY (score,name);
Warnings :
Materialized views are experimental and are not recommended for production use.
Not really related, but do note that last part about how using MVs in Cassandra really isn't a good idea yet.
I'm very new to SQL, and I want the contracts_tb (query details below) to display and link the foreign key ids referred from:
med_idref (referred from med_id (INTEGER), PRIMARY KEY of mediaadv_tb),
mediatitle_ref (title (TEXT), mediaadv_tb),
mediatype_ref (mtype (TEXT), mediaadv_tb),
cus_idref (cus_id (INTEGER), PRIMARY KEY of customer_tb),
cus_companyref (referred from company (TEXT), in customer_tb)
All to be linked and displayed in contracts_tb. When I add/replace values from mediaadv_tb and customer_tb, I get this problem:
foreign key mismatch
Also, do I have to make or assign a parent table?
Query:
DROP TABLE IF EXISTS customer_tb;
CREATE TABLE IF NOT EXISTS customer_tb (
cus_id INTEGER PRIMARY KEY,
company TEXT,
firstname TEXT,
middlename TEXT,
lastname TEXT,
gender TEXT,
dob TEXT,
dateregistered TEXT,
contactno TEXT,
emailaddress TEXT,
description TEXT,
refpic INTEGER,
cuspic BLOB
);
DROP TABLE IF EXISTS mediaadv_tb;
CREATE TABLE IF NOT EXISTS mediaadv_tb (
med_id INTEGER PRIMARY KEY,
mtype TEXT,
duration TEXT,
title TEXT,
dateadded TEXT,
description TEXT,
previewimg BLOB,
filepath TEXT
);
DROP TABLE IF EXISTS contracts_tb;
CREATE TABLE IF NOT EXISTS contracts_tb (
contract_id INTEGER PRIMARY KEY,
customer_idref INTEGER REFERENCES customer_tb (cus_id),
media_idref INTEGER REFERENCES mediaadv_tb (med_id),
media_typeref TEXT REFERENCES mediaadv_tb(mtype),
media_titleref TEXT REFERENCES mediaadv_tb (title),
status TEXT,
priority TEXT,
dateadded TEXT,
dateexpiration TEXT,
amountpaid REAL,
areaofcoverage TEXT
);
Error :- foreign key mismatch - "contracts_tb" referencing "mediaadv_tb"
I believe your issue is that the foreign keys defined on the media_typeref and media_titleref columns are invalid, because the parent columns they reference (mtype and title in mediaadv_tb) do not have, and are not part of, UNIQUE indexes (they have no indexes at all). See SQLite Foreign Key Support - 3. Required and Suggested Database Indexes.
The referenced id columns, being INTEGER PRIMARY KEY, are implicitly backed by UNIQUE indexes.
Furthermore, the two columns (typeref and titleref) themselves aren't even needed, as the media_idref column identifies the referenced row and thus already leads to the respective values. Copying those values into the contracts table would be contrary to normalisation and may even create major headaches (e.g. if a value changed, you'd have to find all other uses and change them too).
As such I'd suggest that the contracts_tb be created using :-
DROP TABLE IF EXISTS contracts_tb;
CREATE TABLE IF NOT EXISTS contracts_tb (
contract_id INTEGER PRIMARY KEY,
customer_idref INTEGER REFERENCES customer_tb (cus_id),
media_idref INTEGER REFERENCES mediaadv_tb (med_id),
status TEXT,
priority TEXT,
dateadded TEXT,
dateexpiration TEXT,
amountpaid REAL,
areaofcoverage TEXT
);
Re comment :-
What I'm making is a Java NetBeans SQLite database program where,
using the Contracts frame, whenever one makes a new contract, there
will be a combobox that restricts the user to only the existing
ids or names referred to in contracts_tb, and then provides the
choices. Is it possible sir?
Yes.
More specifically:-
Assume that you have customers Fred, Bert and Harry (id's 1,2 and 3 respectively). And that you have mediaadv's M1, M2 and M3 (id's 10,11 and 12 (not 1,2 and 3 to help distinguish between mediaadv and customers)).
Additionally I'll assume the suggested contracts_tb table as opposed to the original in the question (i.e. 2 columns dropped as suggested)
Then, when inserting a new contract, you present a list (combobox) of the customers, e.g.
Fred
Bert
Harry
(this list could be generated from a query such as:-
SELECT cus_id,firstname FROM customer_tb; i.e. all existing customers)
If you wanted Fred James Bloggs then you could use :-
SELECT cus_id,firstname||' '||middlename||' '||lastname AS fullname FROM customer_tb;
Likewise a list of the existing mediaadv could be generated from a query such as:-
SELECT med_id, description FROM mediaadv_tb; e.g.
so the combobox would have:-
M1
M2
M3
Now if the contract were for Bert (id 2) and M1 (id 10) then you build SQL something like :-
INSERT INTO contracts_tb VALUES(null,2,10,'the_status','the_priority','yyyy-mm-dd','yyyy-mm-dd',500,'the_coverage');
The 1st value is null, i.e. no value, so as contract_id is an alias of the rowid it will be generated.
2 is the id of the customer (hence why cus_id was in the query: you need the id, as it's the value you are going to store).
10 is the id of the mediaadv (again, hence why med_id was in the query: you need the id, as it's the value you are going to store).
The other values are what they should be.
Note the above use of INSERT requires that all columns be given. You can skip columns by specifying a list of the columns e.g. INSERT INTO contracts_tb (customer_idref,media_idref) VALUES(2,10);
As a customer with a cus_id of 2 (Bert) exists, the constraint that customer_idref must be an existing id in customer_tb is good/met and there is no conflict.
Likewise, as there is a row in mediaadv_tb that has a med_id of 10, this constraint is good/met and there is no conflict.
However say the SQL were :-
INSERT INTO contracts_tb VALUES(null,2,100,'the_status','the_priority','yyyy-mm-dd','yyyy-mm-dd',500,'the_coverage');
Then, as there is no med_id of 100, the constraint that media_idref must reference an existing value (100, in this instance) in the med_id column of mediaadv_tb is not met, and the insert will fail.
So again Yes, I believe that what you want is feasible.
Note a foreign key is only a constraint it doesn't bind/associate columns or join tables.
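To tie the above together on the Java side, here is a minimal (untested) JDBC sketch of the combobox flow. The database path, the class name and the name-to-id maps feeding the comboboxes are my own assumptions; adapt as needed :-
import java.sql.*;
import java.util.LinkedHashMap;
import java.util.Map;

public class ContractEntry {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:media.db");
             Statement st = conn.createStatement()) {
            st.execute("PRAGMA foreign_keys = ON");   // SQLite leaves FK enforcement off by default

            // Build "display name -> id" maps; the key sets feed the comboboxes.
            Map<String, Integer> customers = new LinkedHashMap<>();
            try (ResultSet rs = st.executeQuery("SELECT cus_id, firstname FROM customer_tb")) {
                while (rs.next()) customers.put(rs.getString("firstname"), rs.getInt("cus_id"));
            }
            Map<String, Integer> media = new LinkedHashMap<>();
            try (ResultSet rs = st.executeQuery("SELECT med_id, title FROM mediaadv_tb")) {
                while (rs.next()) media.put(rs.getString("title"), rs.getInt("med_id"));
            }

            // The user picked "Bert" and "M1"; store the ids behind those names.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO contracts_tb (customer_idref, media_idref) VALUES (?, ?)")) {
                ps.setInt(1, customers.get("Bert"));   // id 2 in the example above
                ps.setInt(2, media.get("M1"));         // id 10 in the example above
                ps.executeUpdate();
            }
        }
    }
}
The important point is that the combobox displays names but the INSERT stores the ids, so the foreign key constraints stay satisfied.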
I am using Java to store data in Cassandra through a materialized view, but I ran into an issue: it's not saving the data to the Cassandra database. I get this error.
No columns are defined for Materialized View other than primary key
CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.maxtable AS select sensor_id,humidity from sensorkeyspace.sensortable where (humidity is not null) PRIMARY KEY (sensor_id)
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: No columns are defined for Materialized View other than primary key
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
at sparkproject.SparkApp.main(SparkApp.java:41)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: No columns are defined for Materialized View other than primary key
at com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:174)
at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:793)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:627)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
It looks like you're creating the materialized view with the same primary key as the base table. Please check the MV definition.
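For example (an untested sketch, using the same 3.x driver as your stack trace; the extra key column humidity and the contact point are assumptions), the view could be given a primary key that differs from the base table's, mirroring the working example earlier:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateSensorView {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // humidity leads the view's primary key, so it no longer matches the base table's
            session.execute(
                "CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.maxtable AS " +
                "SELECT sensor_id, humidity FROM sensorkeyspace.sensortable " +
                "WHERE humidity IS NOT NULL AND sensor_id IS NOT NULL " +
                "PRIMARY KEY (humidity, sensor_id)");
        }
    }
}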
I saw various examples online where cassandra triggers were used to write to an audit table. I was following this one :
https://github.com/apache/cassandra/blob/cassandra-3.0/examples/triggers/src/org/apache/cassandra/triggers/AuditTrigger.java
However in my use case, I have an audit table that has a composite partition key ( PRIMARY KEY ((col1,col2),col3,col4) ) and multiple clustering columns.
I have been able to add the clustering columns by adding audit.clustering(values) but I am not able to figure out how to implement the composite partition key.
RowUpdateBuilder gives me an error if I pass update.partitionKey.partition() as the 3rd parameter of RowUpdateBuilder.
The error is :
java.lang.IllegalArgumentException: Invalid number of components, expecting 2 but got 1.
I get the same error when I pass an array of size 2 as the 3rd parameter to rowUpdateBuilder.
Any help will be appreciated.
Build the composite partition key from all of your partition key columns.
To build a composite partition key from one or more partition key values, use the following method:
public DecoratedKey buildCompositePartitionKey(CFMetaData metadata, Object... partitionKey) {
    // serialize the key components with the table's key validator,
    // then decorate the result into the DecoratedKey that RowUpdateBuilder expects
    return metadata.decorateKey(
            CFMetaData.serializePartitionKey(
                    metadata.getKeyValidatorAsClusteringComparator().make(partitionKey)));
}
Example :
CFMetaData metadata = Schema.instance.getCFMetaData("test_ks", "test_cf");
DecoratedKey compositePartitionKey = buildCompositePartitionKey(metadata, "col1 value", "col2 value");
RowUpdateBuilder audit = new RowUpdateBuilder(metadata, FBUtilities.timestampMicros(), compositePartitionKey);
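Putting it together inside the trigger, the augment method might look like this (a sketch only; the clustering values, the "audited_value" column and its value are assumptions for illustration):
@Override
public Collection<Mutation> augment(Partition update) {
    CFMetaData metadata = Schema.instance.getCFMetaData("test_ks", "test_cf");
    DecoratedKey key = buildCompositePartitionKey(metadata, "col1 value", "col2 value");
    RowUpdateBuilder audit = new RowUpdateBuilder(metadata, FBUtilities.timestampMicros(), key);
    audit.clustering("col3 value", "col4 value")   // the clustering columns, in declared order
         .add("audited_value", "some value");      // at least one regular column
    return Collections.singletonList(audit.build());
}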
Context: Ebean, Play Framework, Model, optimistic locking
Is it possible to set an annotation on a value of a model which tells Ebean that it shouldn't throw an "optimistic locking exception" for this value, because it is independent of the previous data?
Example usage: I have a lastAction value which is updated frequently. It doesn't matter if this value is absolutely correct, because it is just used to determine the automated logout or deletion time (registered and guest users).
I believe you can achieve this by using 2 separate tables: one for the optimistic-lockable attributes, another one for the do-not-care attributes.
Later you can combine them in one DB view.
For example:
CREATE TABLE optimistic_lockable (
    id BIGINT PRIMARY KEY
    -- ... the attributes to protect ...
);

CREATE TABLE non_lockable (
    id BIGINT PRIMARY KEY,
    lockable_id BIGINT REFERENCES optimistic_lockable (id)
    -- ... the do-not-care attributes ...
);

CREATE VIEW model_view AS
SELECT * FROM optimistic_lockable ol, non_lockable nl
WHERE ol.id = nl.lockable_id;
You map your model to model_view. And IFF the DB engine allows inserts into the view, you'll probably be fine ;)
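For example, the split might look like this on the Ebean side (a sketch with hypothetical names, assuming Play 2.x entities; in Ebean only an entity carrying a @Version column gets the optimistic-lock check on update):
// Lockable.java
@javax.persistence.Entity
@javax.persistence.Table(name = "optimistic_lockable")
public class Lockable extends play.db.ebean.Model {
    @javax.persistence.Id
    public Long id;

    @javax.persistence.Version          // Ebean checks this column on update
    public Long version;

    // ... the attributes that should be protected by optimistic locking ...
}

// NonLockable.java
@javax.persistence.Entity
@javax.persistence.Table(name = "non_lockable")
public class NonLockable extends play.db.ebean.Model {
    @javax.persistence.Id
    public Long id;

    public Long lockableId;             // FK to optimistic_lockable

    public java.util.Date lastAction;   // updated freely, no version check
}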
I have two tables: in the first one I have 14 million rows, and in the second one 1.5 million.
So I wonder how I could transfer this data into other tables to be normalized?
And how do I convert one type to another? For example, I have a field called 'year', but its type is VARCHAR and I want it to be an INTEGER instead. How do I do that?
I thought about doing this using JDBC in a while loop from Java, but I think this would not be efficient.
-- 1.5 million rows
CREATE TABLE dbo.directorsmovies
(
movieid INT NULL,
directorid INT NULL,
dname VARCHAR (500) NULL,
addition VARCHAR (1000) NULL
)
-- 14 million rows
CREATE TABLE dbo.movies
(
movieid VARCHAR (20) NULL,
title VARCHAR (400) NULL,
mvyear VARCHAR (100) NULL,
actorid VARCHAR (20) NULL,
actorname VARCHAR (250) NULL,
sex CHAR (1) NULL,
as_character VARCHAR (1500) NULL,
languages VARCHAR (1500) NULL,
genres VARCHAR (100) NULL
)
And this is my new tables:
DROP TABLE actor
CREATE TABLE actor (
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
sex VARCHAR(1) NOT NULL
)
DROP TABLE actor_character
CREATE TABLE actor_character(
id INT PRIMARY KEY IDENTITY,
character VARCHAR(100)
)
DROP TABLE director
CREATE TABLE director(
id INT PRIMARY KEY IDENTITY,
name VARCHAR(200) NOT NULL,
addition VARCHAR(150)
)
DROP TABLE movie
CREATE TABLE movie(
id INT PRIMARY KEY IDENTITY,
title VARCHAR(200) NOT NULL,
year INT
)
DROP TABLE language
CREATE TABLE language(
id INT PRIMARY KEY IDENTITY,
language VARCHAR (100) NOT NULL
)
DROP TABLE genre
CREATE TABLE genre(
id INT PRIMARY KEY IDENTITY,
genre VARCHAR(100) NOT NULL
)
DROP TABLE director_movie
CREATE TABLE director_movie(
idDirector INT,
idMovie INT,
CONSTRAINT fk_director_movie_1 FOREIGN KEY (idDirector) REFERENCES director(id),
CONSTRAINT fk_director_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_director_movie PRIMARY KEY(idDirector,idMovie)
)
DROP TABLE genre_movie
CREATE TABLE genre_movie(
idGenre INT,
idMovie INT,
CONSTRAINT fk_genre_movie_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_genre_movie_2 FOREIGN KEY (idGenre) REFERENCES genre(id),
CONSTRAINT pk_genre_movie PRIMARY KEY (idMovie, idGenre)
)
DROP TABLE language_movie
CREATE TABLE language_movie(
idLanguage INT,
idMovie INT,
CONSTRAINT fk_language_movie_1 FOREIGN KEY (idLanguage) REFERENCES language(id),
CONSTRAINT fk_language_movie_2 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT pk_language_movie PRIMARY KEY (idLanguage, idMovie)
)
DROP TABLE movie_actor
CREATE TABLE movie_actor(
idMovie INT,
idActor INT,
CONSTRAINT fk_movie_actor_1 FOREIGN KEY (idMovie) REFERENCES movie(id),
CONSTRAINT fk_movie_actor_2 FOREIGN KEY (idActor) REFERENCES actor(id),
CONSTRAINT pk_movie_actor PRIMARY KEY (idMovie,idActor)
)
UPDATE:
I'm using SQL Server 2008.
Sorry guys, I forgot to mention that these are different databases: the non-normalized one is called disciplinedb, and my normalized one is called imdb.
Best regards,
Valter Henrique.
If both tables are in the same database, then the most efficient transfer is to do it all within the database, preferably by sending a SQL statement to be executed there.
Any movement of data from the d/b server to somewhere else and then back to the d/b server is to be avoided unless there is a reason it can only be transformed off-server. If the destination is a different server, then this is much less of an issue.
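For example (an untested sketch; the connection string is a placeholder, and I'm assuming both databases live on the same SQL Server instance), a single INSERT ... SELECT moves the rows and converts the year in one pass, without the data ever leaving the server:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TransferMovies {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=imdb;integratedSecurity=true");
             Statement stmt = conn.createStatement()) {
            // CAST handles the varchar -> int conversion; ISNUMERIC skips junk values
            int rows = stmt.executeUpdate(
                "INSERT INTO imdb.dbo.movie (title, [year]) " +
                "SELECT DISTINCT title, CAST(mvyear AS INT) " +
                "FROM disciplinedb.dbo.movies " +
                "WHERE ISNUMERIC(mvyear) = 1");
            System.out.println("Inserted " + rows + " movies");
        }
    }
}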
Though my tables were dwarfs compared to yours, I got over this kind of problem once with stored procedures. For MySQL, below is a simplified (and untested) essence of my script, but something similar should work with all major SQL databases.
First you should just add a new integer year column (int_year in the example) and then iterate over all rows using the procedure below:
DELIMITER //
DROP PROCEDURE IF EXISTS move_data//
CREATE PROCEDURE move_data()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE orig_id INT DEFAULT 0;
    DECLARE orig_year VARCHAR(100) DEFAULT '';
    DECLARE cur1 CURSOR FOR SELECT id, year FROM table1;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    OPEN cur1;
    -- prepare the statement once, execute it for every row
    PREPARE stmt FROM 'UPDATE table1 SET int_year = ? WHERE id = ?';
    read_loop: LOOP
        FETCH cur1 INTO orig_id, orig_year;
        IF done THEN
            LEAVE read_loop;
        END IF;
        -- EXECUTE ... USING only accepts user variables, so copy into them
        SET @year = orig_year;
        SET @id = orig_id;
        EXECUTE stmt USING @year, @id;
    END LOOP;
    CLOSE cur1;
END//
DELIMITER ;
And to start the procedure, just CALL move_data().
The above SQL has two major ideas to speed it up:
Use CURSORS to iterate over a large table
Use PREPARED statements to quickly execute pre-known commands
PS. for my case this sped things up from ages to seconds, though in your case it can still take a considerable amount of time. So it would probably be best to execute it from the command line, not some web interface (e.g. phpMyAdmin).
I just recently did this for ~150 GB of data. I used a pair of MERGE statements for each table. The first merge statement said "if it's not in the destination table, copy it there" and the second said "if it's in the destination table, delete it from the source". I put both in a while loop and only did 10000 rows in each operation at a time. Keeping it on the server (and not transferring it through a client) is going to be a huge boon for performance. Give it a shot!
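A sketch of that pattern, driven from JDBC (untested; the src/dst tables, their id/title columns, and the connection string are stand-ins for your real schema):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BatchedTransfer {
    public static void main(String[] args) throws Exception {
        // copy at most 10000 missing rows per pass ...
        String copy =
            "MERGE TOP (10000) INTO dst AS d USING src AS s ON d.id = s.id " +
            "WHEN NOT MATCHED BY TARGET THEN INSERT (id, title) VALUES (s.id, s.title);";
        // ... then delete at most 10000 already-copied rows from the source
        String purge =
            "MERGE TOP (10000) INTO src AS s USING dst AS d ON s.id = d.id " +
            "WHEN MATCHED THEN DELETE;";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=disciplinedb;integratedSecurity=true");
             Statement stmt = conn.createStatement()) {
            int copied, purged;
            do {
                copied = stmt.executeUpdate(copy);
                purged = stmt.executeUpdate(purge);
            } while (copied > 0 || purged > 0);
        }
    }
}
The small batches keep each transaction (and the transaction log) small, which is a big part of what makes the server-side loop fast.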