Spark Datastax Java API Select statements - java

I'm following the tutorial in this GitHub repository to run Spark on Cassandra in a Java Maven project: https://github.com/datastax/spark-cassandra-connector.
I've already figured out how to use direct CQL statements, as I previously asked about here: Querying Data in Cassandra via Spark in a Java Maven Project
However, now I'm trying to use the DataStax Java API, fearing that the code from my original question won't work with the DataStax versions of Spark and Cassandra. For some reason it won't let me use .where, even though the documentation says I can use exactly that statement. Here is my code:
import org.apache.commons.lang3.StringUtils;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import java.io.Serializable;
import static com.datastax.spark.connector.CassandraJavaUtil.*;
public class App implements Serializable
{
// firstly, we define a bean class
public static class Person implements Serializable {
private Integer id;
private String fname;
private String lname;
private String role;
// Remember to declare no-args constructor
public Person() { }
public Integer getId() { return id; }
public void setId(Integer id) { this.id = id; }
public String getfname() { return fname; }
public void setfname(String fname) { this.fname = fname; }
public String getlname() { return lname; }
public void setlname(String lname) { this.lname = lname; }
public String getrole() { return role; }
public void setrole(String role) { this.role = role; }
// other methods, constructors, etc.
}
private transient SparkConf conf;
private App(SparkConf conf) {
this.conf = conf;
}
private void run() {
JavaSparkContext sc = new JavaSparkContext(conf);
createSchema(sc);
sc.stop();
}
private void createSchema(JavaSparkContext sc) {
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("role=?", "IT Engineer").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
System.out.println("Data as Person beans: \n" + StringUtils.join("\n", rdd.toArray()));
}
public static void main( String[] args )
{
if (args.length != 2) {
System.err.println("Syntax: com.datastax.spark.demo.JavaDemo <Spark Master URL> <Cassandra contact point>");
System.exit(1);
}
SparkConf conf = new SparkConf();
conf.setAppName("Java API demo");
conf.setMaster(args[0]);
conf.set("spark.cassandra.connection.host", args[1]);
App app = new App(conf);
app.run();
}
}
here is the error:
14/09/23 13:46:53 ERROR executor.Executor: Exception in task ID 0
java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:310)
at com.datastax.spark.connector.rdd.CassandraRDD.com$datastax$spark$connector$rdd$CassandraRDD$$fetchTokenRange(CassandraRDD.scala:317)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:10)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:205)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:91)
at com.datastax.spark.connector.cql.PreparedStatementCache$.prepareStatement(PreparedStatementCache.scala:45)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:28)
at com.sun.proxy.$Proxy8.prepare(Unknown Source)
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:293)
... 27 more
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.driver.core.Responses$Error.asException(Responses.java:97)
at com.datastax.driver.core.SessionManager$1.apply(SessionManager.java:156)
at com.datastax.driver.core.SessionManager$1.apply(SessionManager.java:131)
at com.google.common.util.concurrent.Futures$1.apply(Futures.java:711)
at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:849)
... 3 more
14/09/23 13:46:53 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
14/09/23 13:46:53 WARN scheduler.TaskSetManager: Loss was due to java.io.IOException
java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:310)
at com.datastax.spark.connector.rdd.CassandraRDD.com$datastax$spark$connector$rdd$CassandraRDD$$fetchTokenRange(CassandraRDD.scala:317)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:10)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:205)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/09/23 13:46:53 ERROR scheduler.TaskSetManager: Task 0.0:0 failed 1 times; aborting job
14/09/23 13:46:53 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/09/23 13:46:53 INFO scheduler.DAGScheduler: Failed to run toArray at App.java:65
Exception in thread "main" org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 1 times (most recent failure: Exception failure: java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/09/23 13:46:53 INFO cql.CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
I know that my error is specifically in this section:
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("role=?", "IT Engineer").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
When I remove the .where(), it works. But the GitHub documentation specifically says that you should be able to chain the .where and .map calls. Does anyone have an explanation or a solution for this? Thanks.
Edit
The error goes away when I use this statement instead:
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("id=?", "1").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
I have no idea why that variation works but the others don't. Here are the CQL statements I ran so you can see what my keyspace looks like:
session.execute("DROP KEYSPACE IF EXISTS tester");
session.execute("CREATE KEYSPACE tester WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}");
session.execute("CREATE TABLE tester.emp (id INT PRIMARY KEY, fname TEXT, lname TEXT, role TEXT)");
session.execute("CREATE TABLE tester.empByRole (id INT, fname TEXT, lname TEXT, role TEXT, PRIMARY KEY (role,id))");
session.execute("CREATE TABLE tester.dept (id INT PRIMARY KEY, dname TEXT)");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0001," +
"'Angel'," +
"'Pay'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0002," +
"'John'," +
"'Doe'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0003," +
"'Jane'," +
"'Doe'," +
"'IT Analyst'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0001," +
"'Angel'," +
"'Pay'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0002," +
"'John'," +
"'Doe'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0003," +
"'Jane'," +
"'Doe'," +
"'IT Analyst'" +
");");
session.execute(
"INSERT INTO tester.dept (id, dname) " +
"VALUES (" +
"1553," +
"'Commerce'" +
");");

The where method adds ALLOW FILTERING to your query under the covers. This is not a magic bullet, as it still doesn't support arbitrary fields as query predicates. In general, the field must either be indexed or a clustering column. If this isn't practical for your data model, you can simply use the filter method on the RDD. The downside is that the filter takes place in Spark and not in Cassandra.
So the id field works because it's supported in a CQL WHERE clause, whereas I'm assuming role is just a regular field. Please note that I am NOT suggesting that you index your field or change it to a clustering column, as I don't know your data model.
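For example, a minimal sketch of that approach, reusing the Person bean and table names from the question: drop the .where() and filter the rows on the Spark side instead (the whole table is read and the predicate runs on the workers):
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
        // no .where(): all rows are fetched and filtered in Spark, not in Cassandra
        .filter(new Function<Person, Boolean>() {
            @Override
            public Boolean call(Person person) throws Exception {
                return "IT Engineer".equals(person.getrole());
            }
        })
        .map(new Function<Person, String>() {
            @Override
            public String call(Person person) throws Exception {
                return person.toString();
            }
        });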

There is a limitation in the Spark Cassandra Connector that the where method will not work on partitioning keys. In your table empByRole, role is a partitioning key, hence the error. It should work correctly on clustering columns or indexed columns (secondary indexes).
This is being tracked as issue 37 in the GitHub project and work has been ongoing.
On the Java API documentation page, the examples use .where("name=?", "Anna"). I assume that name is not a partitioning key, but the example could be clearer about that.
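As a hedged sketch of the alternative (assuming you are free to change the schema, which may not fit your data model): where should be accepted on a column that is not part of the partition key once it has a secondary index. For example, after running CREATE INDEX ON tester.emp (role); in cqlsh, role is an ordinary indexed column in tester.emp and the predicate can be pushed down:
// Sketch only: assumes the secondary index on tester.emp(role) exists.
JavaRDD<Person> engineers = javaFunctions(sc)
        .cassandraTable("tester", "emp", Person.class)   // emp, not empByRole
        .where("role=?", "IT Engineer");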

Failed to convert from type [java.lang.Object[]] to type [qbr.entity.nameEntity]

I am not asking the question that was already asked here:
Failed to convert from type [java.lang.Object[]] to type
My entity looks like this:
@Entity
public class DuplicateManagerMetricsRelTagEntity {
@Id
@Column(name = "sn")
String sn;
@Column(name = "clientid")
String clientid;
@Column(name = "ticket_count")
String ticket_count;
public DuplicateManagerMetricsRelTagEntity(String sn, String clientid, String ticket_count) {
this.sn = sn;
this.clientid = clientid;
this.ticket_count = ticket_count;
}
public DuplicateManagerMetricsRelTagEntity() {
}
}
My controller looks like this:
@RequestMapping("/qbr/duplicatemanager/{clientid}/{appid}/{releasetag}/")
@CrossOrigin
public List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerFromReleaseTag(@PathVariable String clientid, @PathVariable String[] appid, @PathVariable String releasetag) {
logger.info("Returing all duplicate managers of client {} appId {} from release tag {} ", clientid, appid, releasetag);
System.out.println("data in controller : " + clientid + " " + appid + " " + releasetag);
return duplicateManagerMetricsService.getAllDuplicateManagerFromReleaseTag(clientid, appid, releasetag);
}
My service looks like this:
public List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerFromReleaseTag(String clientid, String[] appid, String releasetag) {
try {
System.out.println("data in service : "+ clientid + " " + appid + " " + releasetag);
return duplicateManagerMetricsRepository.getAllDuplicateManagerfromReleaseTag(clientid, appid, releasetag);
} catch (Exception e) {
logger.error(e);
return new ArrayList<>();
}
}
My repository looks like this:
@Query(value = "select a.sn, a.clientid, a.ticket_count from dbtable as a where a.clientid = ?1 AND a.appid in (?2) AND a.releasetag=?3", nativeQuery = true)
List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerfromReleaseTag(String clientid, String[] appid, String releasetag);
I am not getting the appid data (I was supposed to get [657-001], but it prints the array object instead), and the error I am getting is:
DuplicateManagerMetricsController - Returing all duplicate managers of client 657 appId [657-001] from release tag WIL657.2021.05-001
data in controller : 657 [Ljava.lang.String;@63108943 WIL657.2021.05-001
data in service : 657 [Ljava.lang.String;@63108943 WIL657.2021.05-001
2022-09-02 05:08:54 DEBUG org.hibernate.SQL - select a.sn, a.clientid, a.ticket_count from dbtable as a where a.clientid = ? AND a.appid in (?) AND a.releasetag=?
2022-09-02 05:08:54 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 933, SQLState: 42000
2022-09-02 05:08:54 ERROR o.h.e.jdbc.spi.SqlExceptionHelper - ORA-00933: SQL command not properly ended
[ERROR] 2022-09-02 05:08:54.566 [http-nio-8080-exec-2] DuplicateManagerMetricsService - org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet
This is because you are selecting individual fields (a.sn, a.clientid, a.ticket_count), which together are not of type DuplicateManagerMetricsRelTagEntity. Since you have not asked Hibernate to extract the complete entity but only a few fields (it does not matter that the fields are exactly the same as the fields in the object), Hibernate extracts them as an Object[] and then fails when trying to map that to DuplicateManagerMetricsRelTagEntity.
You can use a JPA projection or a DTO.
In your case you can use JPQL:
"select a from DuplicateManagerMetricsRelTagEntity a where a.clientid = ?1 AND a.appid in (?2) AND a.releasetag=?3"
Or for native queries you can try:
select * from table_name where conditions;
NOTE: It has been a long time since I used Hibernate or JPA, so the queries might not be completely accurate; please check the proper syntax. The main idea is to show how Hibernate understands and tries to map the type: since you select a, b, c, an Object[] is extracted and cannot be mapped to your entity.
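For example, a minimal sketch of an interface-based projection with Spring Data JPA; the interface name, the method name findSummaries, and the assumption that the entity also maps appid and releasetag columns are mine, not from your code:
// Projection interface: Spring Data maps the aliased selection items onto these getters.
public interface DuplicateManagerSummary {
    String getSn();
    String getClientid();
    String getTicketCount();
}
// Repository method using JPQL; the aliases must match the projection getter names.
@Query("select a.sn as sn, a.clientid as clientid, a.ticket_count as ticketCount " +
       "from DuplicateManagerMetricsRelTagEntity a " +
       "where a.clientid = ?1 and a.appid in (?2) and a.releasetag = ?3")
List<DuplicateManagerSummary> findSummaries(String clientid, List<String> appid, String releasetag);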

How to read uncommitted data using jdbc?

I want to test how JDBC transactions work. In particular, I want to see a read of uncommitted data. I've written an integration test in a Spring Boot environment using a locally installed PostgreSQL database.
I insert a row into a table, read it from one transaction, update it from another transaction without committing, and read it again from the first transaction, hoping it would change.
Table for the test (DDL):
create table users
(
id integer default nextval('user_id_sequence'::regclass) not null
constraint users_pkey
primary key,
first_name varchar(255) not null,
second_name varchar(255) not null,
email varchar(255)
);
alter table users
owner to postgres;
The test:
public void testHealthCheck() throws SQLException {
Connection zeroConnection = dataSource.getConnection();
Integer insertedUserId = insertUserSilently(zeroConnection, new User()
.setFirstName("John")
.setSecondName("Doe")
.setEmail("johndoe#gmail.com"));
zeroConnection.close();
Connection firstConnection = dataSource.getConnection();
firstConnection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
firstConnection.setAutoCommit(false);
Connection secondConnection = dataSource.getConnection();
secondConnection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
secondConnection.setAutoCommit(false);
List<User> users = getAllUsersSilently(firstConnection);
log.info("Got users: {}", silentToJsonString(users));
PersistenceUtils.updateUserEmailSilently(secondConnection, insertedUserId, "johndoe@yahoo.com");
users = getAllUsersSilently(firstConnection);
log.info("Got users: {}", silentToJsonString(users));
secondConnection.rollback();
secondConnection.close();
users = getAllUsersSilently(firstConnection);
log.info("Got users: {}", silentToJsonString(users));
firstConnection.close();
}
Utility class:
private static final String INSERT_USER_SQL = "insert into users(first_name, second_name, email) values (?, ?, ?)";
private static final String UPDATE_USER_SQL = "update users set email = ? where id = ?;";
private static final String SELECT_ALL_USERS_SQL = "select * from users";
public static List<User> extractUsersSilently(ResultSet resultSet) {
List<User> resultList = newArrayList();
try {
while (resultSet.next()) {
Integer id = resultSet.getInt(1);
String firstName = resultSet.getString(2);
String secondName = resultSet.getString(3);
String email = resultSet.getString(4);
resultList.add(new User(id, firstName, secondName, email));
}
} catch (SQLException e) {
log.error("Error while extracting result set", e);
return emptyList();
}
return resultList;
}
public static Integer insertUserSilently(Connection connection, User user) {
try {
PreparedStatement insertStatement = connection.prepareStatement(INSERT_USER_SQL, Statement.RETURN_GENERATED_KEYS);
insertStatement.setString(1, user.getFirstName());
insertStatement.setString(2, user.getSecondName());
insertStatement.setString(3, user.getEmail());
insertStatement.execute();
ResultSet resultSet = insertStatement.getGeneratedKeys();
resultSet.next();
return resultSet.getInt(1);
} catch (Exception exception) {
log.error(format("Exception while inserting user %s", user), exception);
return -1;
}
}
public static List<User> getAllUsersSilently(Connection connection) {
try {
PreparedStatement selectStatement = connection.prepareStatement(SELECT_ALL_USERS_SQL);
selectStatement.execute();
return extractUsersSilently(selectStatement.getResultSet());
} catch (Exception exception) {
log.error("Exception while getting all users", exception);
return Collections.emptyList();
}
}
public static void updateUserEmailSilently(Connection connection, Integer userId, String userEmail) {
try {
PreparedStatement updateStatement = connection.prepareStatement(UPDATE_USER_SQL);
updateStatement.setString(1, userEmail);
updateStatement.setInt(2, userId);
updateStatement.execute();
} catch (Exception exception) {
log.error(format("Exception while updating user %d", userId), exception);
}
}
}
The actual results are (you have to clear the table manually before the test):
Got users:
[{"id":55,"firstName":"John","secondName":"Doe","email":"johndoe@gmail.com"}]
Got users:
[{"id":55,"firstName":"John","secondName":"Doe","email":"johndoe@gmail.com"}]
Got users:
[{"id":55,"firstName":"John","secondName":"Doe","email":"johndoe@gmail.com"}]
However, the second read should have seen the uncommitted change to the email.
Cannot read uncommitted data in Postgres
See section 13.2. Transaction Isolation of the PostgreSQL documentation:
In PostgreSQL, you can request any of the four standard transaction isolation levels, but internally only three distinct isolation levels are implemented, i.e. PostgreSQL's Read Uncommitted mode behaves like Read Committed. This is because it is the only sensible way to map the standard isolation levels to PostgreSQL's multiversion concurrency control architecture.
This means that if you want to test TRANSACTION_READ_UNCOMMITTED, you need a DBMS other than PostgreSQL.
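As a rough sketch of what the same experiment could look like against a DBMS that really implements dirty reads (MySQL/InnoDB here; the url, user and password variables are placeholders, and the row with id 55 is assumed to exist):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Two connections: one writes without committing, the other reads at READ UNCOMMITTED.
try (Connection writer = DriverManager.getConnection(url, user, password);
     Connection reader = DriverManager.getConnection(url, user, password)) {
    writer.setAutoCommit(false);
    reader.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
    reader.setAutoCommit(false);

    try (Statement w = writer.createStatement()) {
        w.executeUpdate("update users set email = 'dirty@example.com' where id = 55"); // not committed
    }

    try (Statement r = reader.createStatement();
         ResultSet rs = r.executeQuery("select email from users where id = 55")) {
        while (rs.next()) {
            System.out.println(rs.getString(1)); // on MySQL this prints the uncommitted value
        }
    }

    writer.rollback(); // the dirty value is never committed
}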

Cassandra failure during read query at consistency QUORUM - ReadFailureException

I have a simple Scala/Java program to demo the Cassandra Java API.
I have a simple UDT class Address which is used in the class User. For some reason userMapper.get(userId) fails with no clear error message.
The code is part of a Scala project.
Runner code (java):
void exp02() {
log.debug("JAVA -- exp02");
Cluster cluster = null;
try {
CodecRegistry codecRegistry = new CodecRegistry();
cluster = Cluster.builder() // (1)
.withCodecRegistry(codecRegistry)
.addContactPoint("127.0.0.1")
.build();
log.debug("connect...exp02");
Session session = cluster.connect(); // (2)
MappingManager manager = new MappingManager(session);
Mapper<User> userMapper = manager.mapper(User.class);
// For some reason this will break
{
log.debug("create user *********************** isClosed: " + cluster.isClosed());
log.debug("get users");
ResultSet results = session.execute("SELECT * FROM cTest.user;");
Result<User> user = userMapper.map(results);
for (User u : user) {
log.debug("User : " + u);
}
log.debug("Users printed");
UUID userId = UUID.fromString("567378a9-8533-4d1c-80a8-71bf4b77189e");
User u2 = userMapper.get(userId); // <<<--- This line throws exception, (JRunner.java:67)
log.debug("Select user = " + u2);
}
} catch (RuntimeException e) {
log.error("Exception: " + e);
e.printStackTrace();
} finally {
log.debug("close...exp02");
if (cluster != null) cluster.close(); // (5)
}
}
Main (scala):
package com.example.crunner
import org.slf4j.{Logger, LoggerFactory}
object MainRunner {
val log: Logger = LoggerFactory.getLogger(getClass())
def main(args: Array[String]): Unit = {
val jrunner = new JRunner()
jrunner.exp02()
}
}
User class (java):
package com.example.crunner;
import java.util.UUID;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
@Table(keyspace = "cTest", name = "user",
readConsistency = "QUORUM",
writeConsistency = "QUORUM"
// caseSensitiveKeyspace = false,
// caseSensitiveTable = false
)
public class User {
@PartitionKey
@Column(name = "user_id")
private UUID userId;
private String name;
private Address address;
public User(UUID userId, String name, Address address) {
this.userId = userId;
this.name = name;
this.address = address;
}
public User() { address = new Address(); }
public UUID getUserId() {
return userId;
}
public void setUserId(UUID userId) {
this.userId = userId;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Address getAddress() {
return address;
}
public void setAddress(Address address) {
this.address = address;
}
@Override
public String toString() {
return "User{" +
"userId=" + userId +
", name='" + name + '\'' +
", address=" + address +
'}';
}
}
UDT Address class (java)
package com.example.crunner;
import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;
@UDT(keyspace = "cTest", name = "addressT") //, caseSensitiveType = true)
public class Address {
private String street;
private int zipCode;
public Address(String street, int zipCode) {
this.street = street;
this.zipCode = zipCode;
}
public Address() {
}
public String getStreet() {
return street;
}
public void setStreet(String street) {
this.street = street;
}
public int getZipCode() {
return zipCode;
}
public void setZipCode(int zipCode) {
this.zipCode = zipCode;
}
@Override
public String toString() {
return "Address{" +
"street='" + street + '\'' +
", zipCode=" + zipCode +
'}';
}
}
CQL (other tables not included here):
CREATE TYPE ctest.addresst (
street text,
zipcode int
);
CREATE TABLE ctest.user (
user_id uuid PRIMARY KEY,
address addresst,
name text
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
build.sbt
name := "CassJExp2"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.9"
resolvers += "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"
val cassandraVersion = "3.2.0"
val logbackVersion = "1.2.3"
libraryDependencies ++= Seq(
"ch.qos.logback" % "logback-classic" % logbackVersion withSources() withJavadoc(), //
"ch.qos.logback" % "logback-core" % logbackVersion withSources() withJavadoc(), //
"ch.qos.logback" % "logback-access" % logbackVersion withSources() withJavadoc(), //
"org.slf4j" % "slf4j-api" % "1.7.25" withSources() withJavadoc(), //
"joda-time" % "joda-time" % "2.9.9" withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-core" % cassandraVersion withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-mapping" % cassandraVersion withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-extras" % cassandraVersion withSources() withJavadoc() //
)
scalacOptions += "-deprecation"
When I run this code in the sbt console, I get the following output:
18:08:41.447 [run-main-f] DEBUG com.example.crunner.JRunner - JAVA -- exp02
18:08:41.497 [run-main-f] INFO c.d.driver.core.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
18:08:41.634 [run-main-f] INFO c.datastax.driver.core.ClockFactory - Using native clock to generate timestamps.
18:08:41.644 [run-main-f] DEBUG com.example.crunner.JRunner - connect...exp02
18:08:41.674 [run-main-f] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
18:08:42.049 [run-main-f] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
18:08:42.051 [run-main-f] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
18:08:42.107 [run-main-f] DEBUG com.example.crunner.JRunner - create user *********************** isClosed: false
18:08:42.108 [run-main-f] DEBUG com.example.crunner.JRunner - get users
18:08:42.139 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=54cbad6e-3f27-4b7e-bce0-8a4a4fbffbdf, name='John Doe', address=Address{street='street', zipCode=512}}
18:08:42.139 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=6122b896-8b28-448d-ac5c-4bc9b5c7c7ab, name='John Doe', address=Address{street='street', zipCode=512}}
... output truncated here, table contains about 150 rows ...
18:08:42.175 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=44f69277-ff97-4ba2-9216-bdf65eccd7c3, name='John Doe', address=Address{street='street', zipCode=512}}
18:08:42.175 [run-main-f] DEBUG com.example.crunner.JRunner - Users printed
18:08:42.203 [run-main-f] ERROR com.example.crunner.JRunner - Exception: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:130)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:30)
at com.datastax.driver.mapping.DriverThrowables.propagateCause(DriverThrowables.java:41)
at com.datastax.driver.mapping.Mapper.get(Mapper.java:435)
at com.example.crunner.JRunner.exp02(JRunner.java:67)
at com.example.crunner.MainRunner$.main(MainRunner.scala:18)
at com.example.crunner.MainRunner.main(MainRunner.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sbt.Run.invokeMain(Run.scala:67)
at sbt.Run.run0(Run.scala:61)
at sbt.Run.sbt$Run$$execute$1(Run.scala:51)
at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Logger$$anon$4.apply(Logger.scala:84)
at sbt.TrapExit$App.run(TrapExit.scala:248)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:142)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:140)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:179)
at com.datastax.driver.core.RequestHandler.access$2400(RequestHandler.java:49)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:799)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:633)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1075)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:998)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:88)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:38)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:289)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:269)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
... 20 more
18:08:42.205 [run-main-f] DEBUG com.example.crunner.JRunner - close...exp02
[success] Total time: 4 s, completed Apr 18, 2017 6:08:45 PM
At the same time I get the following error message in /var/log/cassandra/system.log:
WARN [ReadStage-2] 2017-04-18 18:08:42,202 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[ReadStage-2,10,main]: {}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.canRemoveRow(SinglePartitionReadCommand.java:895) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.reduceFilter(SinglePartitionReadCommand.java:859) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndSSTablesInTimestampOrder(SinglePartitionReadCommand.java:744) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:515) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:492) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:358) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:397) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1801) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2486) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.9.jar:3.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Cassandra version is [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
So the userMapper can map a ResultSet of users, but getting a single user fails. The userId I try to fetch exists in the user table. It is also possible to save a new user into the db using the userMapper without failure.
I don't know whether this is somehow related to having the UDT Address in the User class. Tables/mappers without UDT classes work fine.
EDIT:
As Marko Švaljek suggested, I tried the query at the command line:
cqlsh> SELECT * FROM cTest.user where user_id=567378a9-8533-4d1c-80a8-71bf4b77189e;
ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
Looks like the same error as with the Java client.
SELECT * FROM cTest.user works fine.
EDIT 2:
This is a single-instance environment.
nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 354.4 KiB 256 ? 33490146-da36-4359-bb24-42854bdb3c26 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
What's the reason for this error and how can I fix it? Thank you for your support.

Spring JDBC & MySQL - Getting EmptyResultDataAccessException when using queryForObject() but data exists

My code is:
public AppMetaDBO getAppMetaRecord() throws SQLException {
JdbcOperations jdbcOperations = new JdbcTemplate(dataSource);
String query = "SELECT * FROM " + TABLE_APP_META;
logger.config("Executing SQL query:[" + query + "]");
return jdbcOperations.queryForObject(query, new AppMetaRowMapper());
}
Where AppMetaRowMapper is:
public class AppMetaRowMapper implements org.springframework.jdbc.core.RowMapper<AppMetaDBO> {
@Override
public AppMetaDBO mapRow(ResultSet resultSet, int i) throws SQLException {
return new AppMetaDBO(resultSet.getDouble(COL_LAST_SUPPORTED_VER));
}
}
And AppMetaDBO is simply:
@Data
@AllArgsConstructor
public class AppMetaDBO implements Serializable {
private double last_supported_version;
}
This throws an EmptyResultDataAccessException even though the DB table contains the expected data.
It is important to note that the same server is able to write to the db with no issues, and even read from a different table with no issues.
The code that reads without issues is very similar and works fine:
UserDBO result = null;
try {
JdbcOperations jdbcOperations = new JdbcTemplate(dataSource);
String query = "SELECT *" + " FROM " + TABLE_USERS + " WHERE " + COL_UID + "=" + quote(uid);
logger.config("Executing SQL query:[" + query + "]");
result = jdbcOperations.queryForObject(query, new UserDboRowMapper());
} catch(EmptyResultDataAccessException ignored) {}
return result;
Also important to note that the db server is not on a remote machine. In the local machine these issues are non-existent.
Please help; I have absolutely no idea what I am doing wrong.
Thanks in advance.

org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1

I am using Struts and Hibernate. I have a parent-child relation using a set in the hbm mapping.
In the action I am using the session.saveOrUpdate() method to save, but while saving it shows the error below. Can anyone help with this and explain where I made the mistake?
Here is my hbm file:
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="com.model.cargo" table="cargo">
<id name="id" column="id" type="java.lang.Long">
<generator class="increment" />
</id>
<property name="cname" column="cname" />
<property name="cdate" column="cdate" />
<property name="csource" column="csource" />
<property name="cdestination" column="cdestination" />
<property name="create" column="createby" />
<property name="status" column="status" />
<set name="itemList" table="item" inverse="true"
cascade="all-delete-orphan">
<key>
<column name="id" />
</key>
<one-to-many class="com.model.Item" />
</set>
</class>
<class name="com.model.Item" table="item">
<id name="itemid" column="itemid" type="java.lang.Long">
<generator class="increment" />
</id>
<property name="itemName" column="itemname" />
<property name="weight" column="weight" />
<many-to-one class="com.model.cargo" name="cargo"
column="id" />
</class>
</hibernate-mapping>
My action
package com.action;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.apache.commons.beanutils.BeanUtils;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;
import org.apache.struts.actions.DispatchAction;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import com.plugin.HibernatePlugIn;
import com.form.cargoForm;
import com.model.cargo;
import com.model.Item;
public class CargoAction extends DispatchAction {
public ActionForward add(ActionMapping mapping, ActionForm form,
HttpServletRequest request, HttpServletResponse response)
throws Exception {
if (log.isDebugEnabled()) {
log.debug("Entering Master add method");
}
try {
cargoForm cargoForm = (cargoForm) form;
//System.out.println("ID" + cargoForm.getId());
cargo cargo = new cargo();
System.out.println("in cargo Action");
// copy customerform to model
cargoForm.reset(mapping, request);
BeanUtils.copyProperties(cargo, cargoForm);
cargoForm.reset(mapping, request);
// cargoForm.setInputParam("new");
// updateFormBean(mapping, request, cargoForm);
}
catch (Exception ex) {
ex.printStackTrace();
return mapping.findForward("failure");
}
return mapping.findForward("success1");
}
public ActionForward save(ActionMapping mapping, ActionForm form,
HttpServletRequest request, HttpServletResponse response)
throws Exception {
SessionFactory sessionFactory=null;
Session session =null;
System.out.println("in cargo Action");
try{
sessionFactory = (SessionFactory) servlet
.getServletContext().getAttribute(HibernatePlugIn.KEY_NAME);
session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
cargoForm carForm = (cargoForm) form;
cargo cargo = new cargo();
System.out.println("in cargo Action");
BeanUtils.copyProperties(cargo,carForm);
System.out.println("id"+ carForm.getId());
System.out.println("item id"+ carForm.getItemid());
Set itemset = carForm.getItemDtl();
System.out.println("size"+itemset.size());
Iterator iterator =itemset.iterator();
while(iterator.hasNext()) {
Item it = (Item)iterator.next();
System.out.println("name"+it.getItemName()); //log.debug("HERE");
it.setCargo(cargo); }
cargo.setItemList(itemset);
System.out.println("size"+ itemset.size());
session.saveOrUpdate("cargo",cargo);
tx.commit();
}catch(Exception e){
e.printStackTrace();
}
return mapping.findForward("success");
}
public ActionForward search(ActionMapping mapping, ActionForm form,
HttpServletRequest request, HttpServletResponse response)
throws Exception {
System.out.println("in cargo search Action");
SessionFactory sessionFactory = (SessionFactory) servlet
.getServletContext().getAttribute(HibernatePlugIn.KEY_NAME);
HttpSession session1 = request.getSession();
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
cargoForm cargoform = (cargoForm) form;
// System.out.println("Name"+cargoForm.getName());
cargo cargo = new cargo();
System.out.println("in cargo search Action");
// copy customerform to model
BeanUtils.copyProperties(cargo, cargoform);
String name;
String status;
String createby;
name = cargo.getCname();
status = cargo.getStatus();
createby = cargo.getCreate();
System.out.println("Name..." + name);
System.out.println("status..." + status);
System.out.println("createby..." + createby);
try {
if ((name.equals("")) && (createby.equals(""))
&& (status.equals("")))
return mapping.findForward("failure");
String SQL_QUERY = "from cargo c where c.cname=:name or c.status=:status or c.create=:createby";
Query query = session.createQuery(SQL_QUERY);
query.setParameter("name", name);
query.setParameter("status", status);
query.setParameter("createby", createby);
ArrayList al = new ArrayList();
for (Iterator i = query.iterate(); i.hasNext();) {
cargo cargo1 = (cargo) i.next();
al.add(cargo1);
System.out.println("Cargo ID is:" + cargo1.getId());
}
System.out.println("Cargo list is:" + al.size());
session1.setAttribute("clist", al);
} catch (Exception e) {
e.printStackTrace();
return mapping.findForward("failure");
}
System.out.println("search Cargo list is success");
return mapping.findForward("success");
}
public ActionForward edit(ActionMapping mapping, ActionForm form,
HttpServletRequest request, HttpServletResponse response)
throws Exception {
SessionFactory sessionFactory=null;
Session session =null;
if (log.isDebugEnabled()) {
log.debug("Entering Master Edit method");
}
try {
sessionFactory = (SessionFactory) servlet
.getServletContext().getAttribute(HibernatePlugIn.KEY_NAME);
session = sessionFactory.openSession();
Transaction transaction=session.beginTransaction();
cargoForm carForm = (cargoForm) form;
// System.out.println(carForm.getStatus());
// System.out.println(carForm.getCreate());
cargo cargo = new cargo();
BeanUtils.copyProperties(cargo, carForm);
System.out.println("In Cargo Edit "+cargo.getId());
String qstring = "from cargo c where c.id=:id";
Query query = session.createQuery(qstring);
query.setParameter("id", cargo.getId());
ArrayList all = new ArrayList();
cargo c = (cargo) query.iterate().next();
System.out.println("Edit Cargo list " + all.size());
Set purchaseArray = new HashSet();
System.out.println("Edit"+c.getItemList().size());
carForm.setItemDtl(purchaseArray);
BeanUtils.copyProperties(carForm,c);
// transaction.commit();
session.flush();
} catch (Exception e) {
e.printStackTrace();
return mapping.findForward("failure");
}
// return a forward to edit forward
System.out.println("Edit Cargo list is success");
return mapping.findForward("succ");
}
public ActionForward delete(ActionMapping mapping, ActionForm form,
HttpServletRequest request, HttpServletResponse response)
throws Exception {
try {
SessionFactory sessionFactory = (SessionFactory) servlet
.getServletContext().getAttribute(HibernatePlugIn.KEY_NAME);
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
cargoForm carForm = (cargoForm) form;
// System.out.println(carForm.getStatus());
// System.out.println(carForm.getCreate());
cargo cargo = new cargo();
BeanUtils.copyProperties(cargo, carForm);
System.out.println("In Cargo Delete "+cargo.getId());
//String qstring = "delete from cargo c where c.id=:id";
//Query query = session.createQuery(qstring);
session.delete("cargo",cargo);
// session.delete(cargo);
// session.flush();
//query.setParameter("id", cargo.getId());
//int row=query.executeUpdate();
//System.out.println("deleted row"+row);
tx.commit();
} catch (Exception e) {
e.printStackTrace();
return mapping.findForward("failure");
}
// return a forward to edit forward
System.out.println("Deleted success");
return mapping.findForward("succes");
}
}
My parent model
package com.model;
import java.util.HashSet;
import java.util.Set;
public class cargo {
private Long id;
private String cname;
private String cdate;
private String csource;
private String cdestination;
private String create;
private String status;
private Set itemList = new HashSet();
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getCname() {
return cname;
}
public void setCname(String cname) {
this.cname = cname;
}
public String getCdate() {
return cdate;
}
public void setCdate(String cdate) {
this.cdate = cdate;
}
public String getCsource() {
return csource;
}
public void setCsource(String csource) {
this.csource = csource;
}
public String getCdestination() {
return cdestination;
}
public void setCdestination(String cdestination) {
this.cdestination = cdestination;
}
public String getCreate() {
return create;
}
public void setCreate(String create) {
this.create = create;
}
public String getStatus() {
return status;
}
public void setStatus(String status) {
this.status = status;
}
public Set getItemList() {
return itemList;
}
public void setItemList(Set itemList) {
this.itemList = itemList;
}
}
My child model
package com.model;
public class Item{
private Long itemid;
private String itemName;
private String weight;
private cargo cargo;
public Long getItemid() {
return itemid;
}
public void setItemid(Long itemid) {
this.itemid = itemid;
}
public String getItemName() {
return itemName;
}
public void setItemName(String itemName) {
this.itemName = itemName;
}
public String getWeight() {
return weight;
}
public void setWeight(String weight) {
this.weight = weight;
}
public cargo getCargo() {
return cargo;
}
public void setCargo(cargo cargo) {
this.cargo = cargo;
}
}
And my form
package com.form;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionMapping;
import com.model.Item;
public class cargoForm extends ActionForm {
private Long id;
private String cname;
private String cdate;
private String csource;
private String cdestination;
private String create;
private String status;
private Long[] itemid;
private String[] itemName;
private String[] weight;
private Set itemset = new HashSet();
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public String getCname() {
return cname;
}
public void setCname(String cname) {
this.cname = cname;
}
public String getCdate() {
return cdate;
}
public void setCdate(String cdate) {
this.cdate = cdate;
}
public String getCsource() {
return csource;
}
public void setCsource(String csource) {
this.csource = csource;
}
public String getCdestination() {
return cdestination;
}
public void setCdestination(String cdestination) {
this.cdestination = cdestination;
}
public String getCreate() {
return create;
}
public void setCreate(String create) {
this.create = create;
}
public String getStatus() {
return status;
}
public void setStatus(String status) {
this.status = status;
}
public Long[] getItemid() {
return itemid;
}
public void setItemid(Long[] itemid) {
this.itemid = itemid;
}
public String[] getItemName() {
return itemName;
}
public void setItemName(String[] itemName) {
this.itemName = itemName;
}
public String[] getWeight() {
return weight;
}
public void setWeight(String[] weight) {
this.weight = weight;
}
/*
* public Set getItemset() { return itemset; }
*
* public void setItemset(Set itemset) { this.itemset = itemset; }
*/
public Set getItemDtl() {
if (itemid != null) {
itemset = new HashSet();
System.out.println("cargadd form" + itemid);
for (int i = 0; i < itemid.length; i++) {
Item it = new Item();
// it.setItemId(itemId[i]);
it.setItemName(itemName[i]);
System.out.println("cargadd form" + itemName[i]);
it.setWeight(weight[i]);
itemset.add(it);
System.out.println("cargadd form" + itemset.size());
}
}
return itemset;
}
public void setItemDtl(Set itemset) {
System.out.println("cargadd form" + itemset.size());
this.itemset = itemset;
System.out.println("cargadd form" + itemset.size());
}
public void reset(ActionMapping mapping, HttpServletRequest request) {
cname = "";
csource = "";
cdestination = "";
cdate = "";
status = "";
create = "";
}
}
The error:
Hibernate: select max(itemid) from item
Hibernate: insert into item (itemname, weight, position, id, itemid) values (?, ?, ?, ?, ?)
Hibernate: update cargo set name=?, date=?, source=?, destination=?, createby=?, status=? where id=?
Oct 4, 2010 10:44:08 AM org.hibernate.jdbc.BatchingBatcher doExecuteBatch
SEVERE: Exception executing batch:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:61)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:46)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:68)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:48)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:242)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:140)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at com.action.CargoAction.save(CargoAction.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.struts.actions.DispatchAction.dispatchMethod(DispatchAction.java:269)
at org.apache.struts.actions.DispatchAction.execute(DispatchAction.java:170)
at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58)
at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67)
at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51)
at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305)
at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191)
at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:647)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:879)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Unknown Source)
Oct 4, 2010 10:44:08 AM org.hibernate.event.def.AbstractFlushingEventListener performExecutions
SEVERE: Could not synchronize database state with session
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
... (remaining stack trace identical to the one above)
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
... (stack trace identical to the one above)
In the Hibernate mapping file, if you use a generator class for the id property, you should not set that property's value explicitly through its setter method.
If you set the id value explicitly, it leads to the error above, so check for that to avoid it.
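A minimal sketch of that rule, assuming a Cargo entity mapped with a generator class on its id (the form and sessionFactory names here are illustrative, not from the original code):
Cargo cargo = new Cargo();
cargo.setCname(form.getCname());
cargo.setStatus(form.getStatus());
// Do NOT call cargo.setId(...) here - the configured generator assigns the id on save.
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
session.save(cargo);   // Hibernate populates the generated id
tx.commit();
session.close();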
It also happens when you delete an object and then update that same object again.
After the delete, call:
session.clear();
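For example (a sketch only; the session and cargo variables are assumed from the surrounding Action code):
Transaction tx = session.beginTransaction();
session.delete(cargo);
session.flush();     // push the delete to the database
session.clear();     // detach everything so a later update does not target the deleted row
tx.commit();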
What I have experienced is that this exception is raised when you update an object whose id does not exist in the table. The exception message says "Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1", which means Hibernate was unable to find a record with the given id.
To avoid this, I always read the record with the same id first; if I find the record, I call update, otherwise I throw a "record not found" exception.
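A sketch of that guard, assuming the Cargo entity and its Long id from the question:
// Only update when a row with this id actually exists.
Cargo existing = (Cargo) session.get(Cargo.class, cargo.getId());
if (existing == null) {
    throw new RuntimeException("record not found for id " + cargo.getId());
}
session.merge(cargo);   // safe now: the row is known to exist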
It looks like a cargo can have one or more items, and each item holds a reference to its corresponding cargo.
From the log, the item object is inserted first, and then an attempt is made to update the cargo object (which does not exist yet).
I guess what you actually want is for the cargo object to be created first, and then the item objects created with the id of that cargo as the reference - so essentially re-examine the save() method in your Action class.
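A sketch of that save order (setCargo and the shape of the relationship are assumptions, not taken from the original code):
Transaction tx = session.beginTransaction();
session.save(cargo);                  // parent first: cargo.getId() is now assigned
for (Object o : form.getItemDtl()) {
    Item item = (Item) o;
    item.setCargo(cargo);             // hypothetical back-reference from child to parent
    session.save(item);
}
tx.commit();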
This error also appears when you delete an object and then update the same object again: after every update, Hibernate checks how many rows were affected as a safety measure, but by then the data may already have been deleted. Hibernate distinguishes objects based on the key you assigned or on the equals method.
So go through your code once with that check in mind, or implement equals and hashCode correctly, which might help.
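If you go the equals/hashCode route, a common sketch is to base both on the identifier (Cargo and its Long id are assumed here):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Cargo)) return false;
    Cargo other = (Cargo) o;
    // Two cargos are equal only when both carry the same persistent id.
    return id != null && id.equals(other.getId());
}

@Override
public int hashCode() {
    return (id != null) ? id.hashCode() : 0;
}
Note that basing hashCode on a generated id has a known caveat: it changes once the entity is saved, so a stable business key is often preferred when entities live in hash-based collections before being persisted.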
/*
* Thrown when a version number or timestamp check failed, indicating that the
* Session contained stale data (when using long transactions with versioning).
* Also occurs if we try delete or update a row that does not exist.
*
*/
if ( expectedRowCount > rowCount ) {
throw new StaleStateException(
"Batch update returned unexpected row count from update [" + batchPosition +"]; actual row count: " + rowCount +"; expected: " + expectedRowCount);
}
Add <property name="show_sql">true</property> to your Hibernate configuration.
This should show you the SQL that is executed and causes the problem.
*The StaleStateException would be thrown only after we successfully deleted one object and then tried to delete another. The reason is that, when persisting objects across sessions, an object must first be removed from the Session before it is deleted from the database; otherwise the subsequent delete causes the StaleStateException to be thrown.
Session.Remove(obj);
objectDAO.Delete(obj);
*The problem in my case was that the table must have a single primary key field (I had a composite key, which is not a good idea except for many-to-many relations). I solved it by adding a new auto-incremented id column.
*It can be fixed by using Hibernate session.update() - you need the table/view's primary key to match the corresponding bean property (e.g. id).
*For the update() and saveOrUpdate() methods, the id value should already exist in the database; for the save() method, an existing id is not required.
As mentioned above, be sure that you don't set any id fields that are supposed to be auto-generated.
To reproduce this problem during testing, make sure the database actually 'sees' the SQL, i.e. flush the session; otherwise everything may seem fine when it really is not.
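For example, in a test (a sketch; how the session is obtained is assumed to match your DAO code):
session.saveOrUpdate(cargo);
session.flush();   // force the batched SQL to run now, so a wrong row count fails the test here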
I encountered this problem when inserting my parent with a child into the db:
Insert the parent (with a manual ID).
Insert the child (with an autogenerated ID).
Update the foreign key in the child table to point to the parent.
The third statement failed. The entry with the ID autogenerated by Hibernate was indeed not in the table, because a trigger changed the ID on each insertion, so the update failed with no matching row found.
Since the table can also be updated without Hibernate, I added a check to the trigger so it only fills in the ID when it is null.
This often happens when SQL statements are implemented in some performance-costly way (implicit type conversions etc.).
I advise debugging the resulting SQL statements.
To debug, turn on SQL logging.
Turn on Hibernate SQL logging by adding the following lines to your log4j properties file:
# logs the SQL statements
log4j.logger.org.hibernate.SQL=debug
# logs the JDBC parameters passed to a query
log4j.logger.org.hibernate.type=trace
Before failing, you will see the last SQL statement attempted in your log; copy and paste that SQL into an external SQL client and run it.
I had the same problem as well. Setting the id to 0 with "(your model value).setId(0)" solved my problem.
Please do not set the id of the child class when its generator class is "foreign"; only set the parent class id, and only if the parent's id is assigned.
Just don't set the child class id via its setter method and your problem will definitely be fixed.
In my case I noticed that Hibernate was trying to update the record rather than insert it, which threw the exception mentioned.
I finally found that my entity had an updatedAt timestamp column:
<timestamp name="updatedDate" column="updated_date" />
and that the code was setting this field explicitly when initializing the object.
After removing that setUpdateDate(new Date()) call, it worked and did an insert instead.
FYI, another way this exception can occur is if:
Your transaction isolation is READ_COMMITTED
Transaction #1 queries for an entity, then deletes that entity
A simultaneous transaction #2 does the same thing
Then this can happen: TX #1 successfully commits before TX #2, and when TX #2 tries to delete the entity (again) it is no longer there - even though it was found by a query earlier in that same transaction. Note this anomaly is allowed under READ_COMMITTED isolation.
In my case the resulting exception looked like this:
HHH000315: Exception executing batch [org.hibernate.StaleStateException:
Batch update returned unexpected row count from update [0]; actual row
count: 0; expected: 1; statement executed: delete from Foobar where id=?],
SQL: delete from Foobar where id=?
If the given id does not exist in the DB, you may also get this exception:
Exception in thread "main" org.springframework.orm.hibernate3.HibernateOptimisticLockingFailureException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
