I am using Java to store data in Cassandra through a materialized view, but I ran into an issue: the data is not being saved to the Cassandra database. I get this error:
No columns are defined for Materialized View other than primary key
My CQL statement is:
CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.maxtable AS SELECT sensor_id, humidity FROM sensorkeyspace.sensortable WHERE (humidity IS NOT NULL) PRIMARY KEY (sensor_id)
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: No columns are defined for Materialized View other than primary key
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
at sparkproject.SparkApp.main(SparkApp.java:41)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: No columns are defined for Materialized View other than primary key
at com.datastax.driver.core.Responses$Error.asException(Responses.java:136)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:174)
at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:43)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:793)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:627)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:328)
It looks like you're creating the materialized view with the same primary key as the main table. Please check the MV definition.
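For the table in this question, that means pulling one more column into the view's primary key and giving every view key column an IS NOT NULL restriction. A minimal sketch through the 3.x DataStax driver, assuming sensortable has sensor_id as its only primary key column and humidity as a regular column (the contact point and class name are placeholders):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CreateMaxView {
    public static void main(String[] args) throws Exception {
        // Placeholder contact point; connect however your application already does.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Every column in the view's primary key needs an IS NOT NULL filter,
            // and the view key must differ from the base table's key, so humidity
            // is added in front of sensor_id here.
            session.execute(
                "CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.maxtable AS "
              + "SELECT sensor_id, humidity FROM sensorkeyspace.sensortable "
              + "WHERE humidity IS NOT NULL AND sensor_id IS NOT NULL "
              + "PRIMARY KEY (humidity, sensor_id)");
        }
    }
}
Since humidity becomes the view's partition key in this sketch, reads from maxtable would then filter by humidity first.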
I have a database that holds a one-to-many relationship between a 'Parent' and a 'Child', where the parent can have multiple children. The child table holds the reference to the parent via a foreign key constraint. Here are the table queries:
CREATE TABLE Parent (
id INT GENERATED ALWAYS AS IDENTITY,
name VARCHAR(55),
CONSTRAINT pk_parent_id PRIMARY KEY (id)
)
CREATE TABLE Child (
id INT GENERATED ALWAYS AS IDENTITY,
name VARCHAR(55),
parent INT CONSTRAINT fk_parent REFERENCES Parent,
CONSTRAINT pk_child_id PRIMARY KEY (id)
)
My Java code iterates over the parent list, first inserting the parent into the database so that I can get the parent's ID to use for the association with the child:
Statement s = this.conn.createStatement();
s.executeUpdate("INSERT INTO Parent (name) VALUES ('John')",
Statement.RETURN_GENERATED_KEYS);
ResultSet r = s.getGeneratedKeys();
r.next();
int parentId = r.getInt(1);
r.close();
s.close();
Now that I have the parent ID, I iterate over the children of that parent (a List in the parent POJO). The code is nearly identical, except it gets the child's ID and sets its own values. The queries look like this:
s.executeUpdate("INSERT INTO Child (name, parent) VALUES ('Rocky'," +
parentId + ")");
Running this sequence works the majority of the time, but eventually the ID returned by getGeneratedKeys() after the parent insert doesn't reference the proper parent ID, which causes this exception:
java.sql.SQLIntegrityConstraintViolationException: INSERT on table 'CHILD' caused a violation of foreign key constraint 'fk_parent_id' for key (97). The statement has been rolled back.
I inspected the database afterwards and noticed that the ID in the violation (97 in this case) doesn't even exist in the parent table. What I don't understand is why or how I am getting back an inaccurate ID after a single-row insert, especially after earlier references were created successfully.
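For reference, here is the same parent-then-child sequence written against a single java.sql.Connection with PreparedStatement. This is only a sketch of the pattern under discussion (the table and column names and the 'John'/'Rocky' values come from the question; it assumes conn is a plain java.sql.Connection) and does not by itself explain the wrong key:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ParentChildInsert {
    static void insertParentAndChild(Connection conn) throws SQLException {
        try (PreparedStatement parentPs = conn.prepareStatement(
                 "INSERT INTO Parent (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS);
             PreparedStatement childPs = conn.prepareStatement(
                 "INSERT INTO Child (name, parent) VALUES (?, ?)")) {
            parentPs.setString(1, "John");
            parentPs.executeUpdate();
            int parentId;
            // Read the key from the same statement that performed the insert.
            try (ResultSet keys = parentPs.getGeneratedKeys()) {
                keys.next();
                parentId = keys.getInt(1);
            }
            childPs.setString(1, "Rocky");
            childPs.setInt(2, parentId);
            childPs.executeUpdate();
        }
    }
}
Keeping the key retrieval on the same PreparedStatement that did the insert, and both inserts on the same connection, at least rules out mixing up statements between iterations.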
I am storing file data. I read the file through Java, and whenever I store the file data in Cassandra it gives me this error:
Exception in thread "main" com.datastax.driver.core.exceptions.SyntaxError: line 1:115 no viable alternative at input 'PRIMARY' (...* from sensorkeyspace.sensortable WHERE [PRIMARY]...)
Here is my query:
CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.texttable AS select * from sensorkeyspace.sensortable WHERE PRIMARY KEY (sensor_id) IS NOT NULL
Try altering your WHERE clause to this:
WHERE sensor_id IS NOT NULL PRIMARY KEY (sensor_id)
If you get an error indicating that:
No columns are defined for Materialized View other than primary key
Based on CASSANDRA-13564:
That error message implies you re-used only the partition key/primary key from the base as the partition key for your view (you had no extra clustering columns in your base primary key).
I get that message when I have a table with a simple PRIMARY KEY, and I try to create a view with that same, simple PRIMARY KEY.
For example, if I have this table:
CREATE TABLE stackoverflow.newtable (
name text PRIMARY KEY,
score float,
value float,
value2 blob);
This fails:
cassdba#cqlsh:stackoverflow> CREATE MATERIALIZED VIEW IF NOT EXISTS
stackoverflow.newtable_view AS SELECT * FROM stackoverflow.newtable
WHERE name IS NOT NULL PRIMARY KEY (name);
InvalidRequest: Error from server: code=2200 [Invalid query]
message="No columns are defined for Materialized View other than primary key"
But this works for the same table:
cassdba#cqlsh:stackoverflow> CREATE MATERIALIZED VIEW IF NOT EXISTS
stackoverflow.newtable_view AS SELECT * FROM stackoverflow.newtable
WHERE score IS NOT NULL AND name IS NOT NULL PRIMARY KEY (score,name);
Warnings :
Materialized views are experimental and are not recommended for production use.
Not really related, but do note that last part about how using MVs in Cassandra really isn't a good idea yet.
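Putting both fixes together for the original statement, a rough Java-driver version would look like the sketch below; using humidity as the extra key column is an assumption borrowed from the schema in the first question above, and the class and method names are placeholders:
import com.datastax.driver.core.Session;

public class TextViewDdl {
    // Runs the corrected DDL on an already-connected driver Session.
    static void createTextView(Session session) {
        session.execute(
            "CREATE MATERIALIZED VIEW IF NOT EXISTS sensorkeyspace.texttable AS "
          + "SELECT * FROM sensorkeyspace.sensortable "
          + "WHERE humidity IS NOT NULL AND sensor_id IS NOT NULL "
          + "PRIMARY KEY (humidity, sensor_id)");
    }
}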
I am using Hibernate 4.1.0.Final with Spring 3.
I have the following in my entity class:
@Id
@Column(name = "PROJECT_NO")
@GeneratedValue(strategy=GenerationType.TABLE)
private String projectNumber;
Is it possible to use a database trigger to populate the primary key of a table? Or do I have to use a custom generator for this?
When I tried the above, I got the following exception:
org.hibernate.id.IdentifierGenerationException: Unknown integral data type
for ids : java.lang.String
The database trigger doesn't use a sequence; it uses:
SELECT NVL (MAX (project_no), 0) + 1 FROM projects
Edit 1
@GeneratedValue(generator="trig")
@GenericGenerator(name="trig", strategy="select",
parameters=@Parameter(name="key", value="projectNo"))
The above throws the following exception
Hibernate: select PROJECT_NO from PROJECTS where PROJECT_NO =?
java.lang.NullPointerException
exception in save null
at org.hibernate.tuple.entity.AbstractEntityTuplizer.getPropertyValue(AbstractEntityTuplizer.java:645)
at org.hibernate.persister.entity.AbstractEntityPersister.getPropertyValue(AbstractEntityPersister.java:4268)
at org.hibernate.id.SelectGenerator$SelectGeneratorDelegate.bindParameters(SelectGenerator.java:138)
at org.hibernate.id.insert.AbstractSelectingDelegate.performInsert(AbstractSelectingDelegate.java:84)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2764)
at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:3275)
at org.hibernate.action.internal.EntityIdentityInsertAction.execute(EntityIdentityInsertAction.java:81)
The problem is that you're using a String instead of a numeric value. Use a Long instead of a String, and your error will disappear.
AFAIK, you can't use a trigger to populate the ID. Indeed, Hibernate would have to retrieve the generated ID, but since it doesn't have an ID, I don't see how it could read back the row it has just inserted (chicken and egg problem).
You could use your SQL query to get an ID before inserting the row, but that strategy is inefficient and risks duplicate IDs under concurrent inserts, so I wouldn't use it. You tagged your post with Oracle, so I suggest you use a sequence; that's what they're for.
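A minimal sketch of the sequence-based mapping suggested here, using Hibernate 4.x with JPA annotations; the PROJECT_NO column and PROJECTS table come from the question, while the entity name, the PROJECTS_SEQ sequence, and the generator name are assumptions:
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;
import javax.persistence.Table;

@Entity
@Table(name = "PROJECTS")
public class Project {

    @Id
    @Column(name = "PROJECT_NO")
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "project_seq")
    // PROJECTS_SEQ must exist in the Oracle schema; allocationSize = 1 keeps the
    // generated values in step with the sequence itself.
    @SequenceGenerator(name = "project_seq", sequenceName = "PROJECTS_SEQ", allocationSize = 1)
    private Long projectNumber;

    public Long getProjectNumber() {
        return projectNumber;
    }
}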
According to the Hibernate 3.3 documentation, you actually can populate the ID from a database trigger; it describes the select generation strategy:
select
retrieves a primary key, assigned by a database trigger, by selecting
the row by some unique key and retrieving the primary key value.
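If the trigger must stay, the select strategy from that documentation needs a second, naturally unique column that Hibernate can use to look the freshly inserted row back up; the "key" parameter names that property, not the ID itself. A rough sketch, assuming the PROJECTS table has a unique PROJECT_NAME column (that column, and the Long id type from the other answer, are assumptions):
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;

@Entity
@Table(name = "PROJECTS")
public class Project {

    @Id
    @Column(name = "PROJECT_NO")
    @GeneratedValue(generator = "trig")
    @GenericGenerator(name = "trig", strategy = "select",
            parameters = @Parameter(name = "key", value = "projectName"))
    private Long projectNumber;

    // The unique business key Hibernate selects by to read back the
    // trigger-assigned PROJECT_NO after the insert.
    @Column(name = "PROJECT_NAME", unique = true, nullable = false)
    private String projectName;
}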
I am using Hibernate and MySQL. When my table mapping is processed and I look at my DB table, it is created successfully, but there is no foreign key constraint, although the column is created.
When I insert a record into the child table and put an ID that does not exist in the parent table into the foreign key column, that row still gets inserted.
My table engine is InnoDB.
If I change the dialect to MS SQL, then the table gets created with the foreign key constraint.
Sorry, everyone, I found the issue: my DBA had not given me permission to run ALTER commands.
When a table with a foreign key is created, first a CREATE statement is executed and then an ALTER statement adds the constraint.
In my case the CREATE statement executed successfully, but the ALTER statement did not, because it didn't have permission.
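For illustration, the pair of statements Hibernate's schema export runs in this situation looks roughly like the following; the table and column names here are made up, and the point is only that the FOREIGN KEY arrives in a separate ALTER TABLE that needs its own privilege:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaExportSketch {
    static void createChildTable(Connection conn) throws SQLException {
        try (Statement s = conn.createStatement()) {
            // Step 1: the plain CREATE succeeds even without the ALTER privilege.
            s.executeUpdate("CREATE TABLE child (id BIGINT NOT NULL, parent_id BIGINT, "
                    + "PRIMARY KEY (id)) ENGINE=InnoDB");
            // Step 2: the constraint is added afterwards; this is the statement
            // that fails when the account lacks the ALTER privilege.
            s.executeUpdate("ALTER TABLE child ADD CONSTRAINT fk_child_parent "
                    + "FOREIGN KEY (parent_id) REFERENCES parent (id)");
        }
    }
}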
I'm trying to add a foreign key to an existing table and was having issues. I figured I had an error in my syntax, so I updated my hibernate.cfg.xml file to auto-update the schema.
As it turns out, Hibernate hit the same error. Here's my SQL to add the foreign key:
alter table pbi add index FKEA3F7BDE9BAB051 (FK_idP), add constraint FKEA3F7BDE9BAB051 foreign key (FK_idP) references p (idP)
and the error is:
Cannot add or update a child row: a foreign key constraint fails (`db`.`#sql-6f8_3`, CONSTRAINT `FKEA3F7BDE9BAB051` FOREIGN KEY (`fk_idP`) REFERENCES `p` (`idP`))
Can anyone think of a reason why this would fail?
This error means that the constraint cannot be applied because there are existing records that would violate it.
In your case, the pbi table has rows whose FK_idP column contains a value with no matching record in the idP column of the p table.
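One way to find the rows that block the constraint is to list the FK_idP values in pbi that have no match in p before re-running the ALTER. A rough JDBC sketch (the JDBC URL and credentials are placeholders; the table, column, and database names come from the question and the error message):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FindOrphanRows {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/db", "user", "password");
             Statement s = conn.createStatement();
             ResultSet r = s.executeQuery(
                 "SELECT pbi.FK_idP, COUNT(*) FROM pbi "
               + "LEFT JOIN p ON pbi.FK_idP = p.idP "
               + "WHERE p.idP IS NULL AND pbi.FK_idP IS NOT NULL "
               + "GROUP BY pbi.FK_idP")) {
            while (r.next()) {
                // Each value printed here must be fixed or nulled out
                // before the foreign key can be added.
                System.out.println(r.getLong(1) + ": " + r.getLong(2) + " orphan row(s)");
            }
        }
    }
}
Once those rows are corrected or their FK_idP set to NULL, the ALTER TABLE from the question should go through.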