Cassandra failure during read query at consistency QUORUM - ReadFailureException - java

I have a simple Scala/Java program to demo the Cassandra Java API.
I have a simple UDT class Address which is used in the class User. For some reason userMapper.get(userId) fails with no clear error message.
The code is part of a Scala project.
Runner code (java):
void exp02() {
log.debug("JAVA -- exp02");
Cluster cluster = null;
try {
CodecRegistry codecRegistry = new CodecRegistry();
cluster = Cluster.builder() // (1)
.withCodecRegistry(codecRegistry)
.addContactPoint("127.0.0.1")
.build();
log.debug("connect...exp02");
Session session = cluster.connect(); // (2)
MappingManager manager = new MappingManager(session);
Mapper<User> userMapper = manager.mapper(User.class);
// For some reason this will break
{
log.debug("create user *********************** isClosed: " + cluster.isClosed());
log.debug("get users");
ResultSet results = session.execute("SELECT * FROM cTest.user;");
Result<User> user = userMapper.map(results);
for (User u : user) {
log.debug("User : " + u);
}
log.debug("Users printed");
UUID userId = UUID.fromString("567378a9-8533-4d1c-80a8-71bf4b77189e");
User u2 = userMapper.get(userId); // <<<--- This line throws exception, (JRunner.java:67)
log.debug("Select user = " + u2);
}
} catch (RuntimeException e) {
log.error("Exception: " + e);
e.printStackTrace();
} finally {
log.debug("close...exp02");
if (cluster != null) cluster.close(); // (5)
}
}
Main (scala):
package com.example.crunner
import org.slf4j.{Logger, LoggerFactory}
object MainRunner {
val log: Logger = LoggerFactory.getLogger(getClass())
def main(args: Array[String]): Unit = {
val jrunner = new JRunner()
jrunner.exp02()
}
}
User class (java):
package com.example.crunner;
import java.util.UUID;
import com.datastax.driver.mapping.annotations.Column;
import com.datastax.driver.mapping.annotations.PartitionKey;
import com.datastax.driver.mapping.annotations.Table;
@Table(keyspace = "cTest", name = "user",
readConsistency = "QUORUM",
writeConsistency = "QUORUM"
// caseSensitiveKeyspace = false,
// caseSensitiveTable = false
)
public class User {
@PartitionKey
@Column(name = "user_id")
private UUID userId;
private String name;
private Address address;
public User(UUID userId, String name, Address address) {
this.userId = userId;
this.name = name;
this.address = address;
}
public User() { address = new Address(); }
public UUID getUserId() {
return userId;
}
public void setUserId(UUID userId) {
this.userId = userId;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Address getAddress() {
return address;
}
public void setAddress(Address address) {
this.address = address;
}
@Override
public String toString() {
return "User{" +
"userId=" + userId +
", name='" + name + '\'' +
", address=" + address +
'}';
}
}
UDT Address class (java):
package com.example.crunner;
import com.datastax.driver.mapping.annotations.Field;
import com.datastax.driver.mapping.annotations.UDT;
@UDT(keyspace = "cTest", name = "addressT") //, caseSensitiveType = true)
public class Address {
private String street;
private int zipCode;
public Address(String street, int zipCode) {
this.street = street;
this.zipCode = zipCode;
}
public Address() {
}
public String getStreet() {
return street;
}
public void setStreet(String street) {
this.street = street;
}
public int getZipCode() {
return zipCode;
}
public void setZipCode(int zipCode) {
this.zipCode = zipCode;
}
@Override
public String toString() {
return "Address{" +
"street='" + street + '\'' +
", zipCode=" + zipCode +
'}';
}
}
CQL (other tables not included here):
CREATE TYPE ctest.addresst (
street text,
zipcode int
);
CREATE TABLE ctest.user (
user_id uuid PRIMARY KEY,
address addresst,
name text
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
build.sbt
name := "CassJExp2"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.9"
resolvers += "Typesafe Repository" at "http://repo.typesafe.com/typesafe/releases/"
val cassandraVersion = "3.2.0"
val logbackVersion = "1.2.3"
libraryDependencies ++= Seq(
"ch.qos.logback" % "logback-classic" % logbackVersion withSources() withJavadoc(), //
"ch.qos.logback" % "logback-core" % logbackVersion withSources() withJavadoc(), //
"ch.qos.logback" % "logback-access" % logbackVersion withSources() withJavadoc(), //
"org.slf4j" % "slf4j-api" % "1.7.25" withSources() withJavadoc(), //
"joda-time" % "joda-time" % "2.9.9" withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-core" % cassandraVersion withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-mapping" % cassandraVersion withSources() withJavadoc(), //
"com.datastax.cassandra" % "cassandra-driver-extras" % cassandraVersion withSources() withJavadoc() //
)
scalacOptions += "-deprecation"
When I run this code in the sbt console, I get the following output:
18:08:41.447 [run-main-f] DEBUG com.example.crunner.JRunner - JAVA -- exp02
18:08:41.497 [run-main-f] INFO c.d.driver.core.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
18:08:41.634 [run-main-f] INFO c.datastax.driver.core.ClockFactory - Using native clock to generate timestamps.
18:08:41.644 [run-main-f] DEBUG com.example.crunner.JRunner - connect...exp02
18:08:41.674 [run-main-f] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
18:08:42.049 [run-main-f] INFO c.d.d.c.p.DCAwareRoundRobinPolicy - Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
18:08:42.051 [run-main-f] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
18:08:42.107 [run-main-f] DEBUG com.example.crunner.JRunner - create user *********************** isClosed: false
18:08:42.108 [run-main-f] DEBUG com.example.crunner.JRunner - get users
18:08:42.139 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=54cbad6e-3f27-4b7e-bce0-8a4a4fbffbdf, name='John Doe', address=Address{street='street', zipCode=512}}
18:08:42.139 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=6122b896-8b28-448d-ac5c-4bc9b5c7c7ab, name='John Doe', address=Address{street='street', zipCode=512}}
... output truncated here, table contains about 150 rows ...
18:08:42.175 [run-main-f] DEBUG com.example.crunner.JRunner - User : User{userId=44f69277-ff97-4ba2-9216-bdf65eccd7c3, name='John Doe', address=Address{street='street', zipCode=512}}
18:08:42.175 [run-main-f] DEBUG com.example.crunner.JRunner - Users printed
18:08:42.203 [run-main-f] ERROR com.example.crunner.JRunner - Exception: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:130)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:30)
at com.datastax.driver.mapping.DriverThrowables.propagateCause(DriverThrowables.java:41)
at com.datastax.driver.mapping.Mapper.get(Mapper.java:435)
at com.example.crunner.JRunner.exp02(JRunner.java:67)
at com.example.crunner.MainRunner$.main(MainRunner.scala:18)
at com.example.crunner.MainRunner.main(MainRunner.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sbt.Run.invokeMain(Run.scala:67)
at sbt.Run.run0(Run.scala:61)
at sbt.Run.sbt$Run$$execute$1(Run.scala:51)
at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Run$$anonfun$run$1.apply(Run.scala:55)
at sbt.Logger$$anon$4.apply(Logger.scala:84)
at sbt.TrapExit$App.run(TrapExit.scala:248)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.exceptions.ReadFailureException.copy(ReadFailureException.java:142)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:140)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:179)
at com.datastax.driver.core.RequestHandler.access$2400(RequestHandler.java:49)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:799)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:633)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1075)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:998)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
... 1 more
Caused by: com.datastax.driver.core.exceptions.ReadFailureException: Cassandra failure during read query at consistency QUORUM (1 responses were required but only 0 replica responded, 1 failed)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:88)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:38)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:289)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:269)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:88)
... 20 more
18:08:42.205 [run-main-f] DEBUG com.example.crunner.JRunner - close...exp02
[success] Total time: 4 s, completed Apr 18, 2017 6:08:45 PM
At the same time I get the following error message in /var/log/cassandra/system.log:
WARN [ReadStage-2] 2017-04-18 18:08:42,202 AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread Thread[ReadStage-2,10,main]: {}
java.lang.AssertionError: null
at org.apache.cassandra.db.rows.BTreeRow.getCell(BTreeRow.java:212) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.canRemoveRow(SinglePartitionReadCommand.java:895) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.reduceFilter(SinglePartitionReadCommand.java:859) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndSSTablesInTimestampOrder(SinglePartitionReadCommand.java:744) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDiskInternal(SinglePartitionReadCommand.java:515) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryMemtableAndDisk(SinglePartitionReadCommand.java:492) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.SinglePartitionReadCommand.queryStorage(SinglePartitionReadCommand.java:358) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.db.ReadCommand.executeLocally(ReadCommand.java:397) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1801) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2486) ~[apache-cassandra-3.9.jar:3.9]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_121]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-3.9.jar:3.9]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.9.jar:3.9]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Cassandra version is [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
So userMapper can map a ResultSet of users, but getting a single user fails. The userId I try to fetch exists in the user table. It is also possible to save a new user into the db using the userMapper without failure.
I don't know if this is somehow related to having a UDT Address in the User class. Tables / mappers without UDT classes are working fine.
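For reference, the same single-row lookup can also be issued as a plain statement and mapped by hand. This is only a hedged diagnostic sketch, not part of the original program (the consistency level and variable names are illustrative), and it needs the extra driver-core imports for SimpleStatement, Statement and ConsistencyLevel:
// Hypothetical diagnostic: run the failing lookup as a plain statement at consistency ONE
// and map the result, to separate Mapper.get() behaviour from the server-side read failure.
Statement stmt = new SimpleStatement(
        "SELECT * FROM cTest.user WHERE user_id = ?", userId)
        .setConsistencyLevel(ConsistencyLevel.ONE);
Result<User> single = userMapper.map(session.execute(stmt));
for (User u : single) {
    log.debug("Diagnostic user: " + u);
}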
EDIT:
As Marko Švaljek suggested, I tried the query on the command line:
cqlsh> SELECT * FROM cTest.user where user_id=567378a9-8533-4d1c-80a8-71bf4b77189e;
ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
Looks like the same error as with the Java client.
SELECT * FROM cTest.user works fine.
EDIT 2:
This is single instance environment.
nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 354.4 KiB 256 ? 33490146-da36-4359-bb24-42854bdb3c26 rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
What's the reason for this error and how to fix it? Thank you for your support.

Failed to convert from type [java.lang.Object[]] to type [qbr.entity.nameEntity]

I am not asking the question that is already asked here: Failed to convert from type [java.lang.Object[]] to type
My entity looks like this:
@Entity
public class DuplicateManagerMetricsRelTagEntity {
@Id
@Column(name = "sn")
String sn;
@Column(name = "clientid")
String clientid;
@Column(name = "ticket_count")
String ticket_count;
public DuplicateManagerMetricsRelTagEntity(String sn, String clientid, String ticket_count) {
this.sn = sn;
this.clientid = clientid;
this.ticket_count = ticket_count;
}
public DuplicateManagerMetricsRelTagEntity() {
}
}
My controller looks like this:
@RequestMapping("/qbr/duplicatemanager/{clientid}/{appid}/{releasetag}/")
@CrossOrigin
public List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerFromReleaseTag(@PathVariable String clientid, @PathVariable String[] appid, @PathVariable String releasetag) {
logger.info("Returing all duplicate managers of client {} appId {} from release tag {} ", clientid, appid, releasetag);
System.out.println("data in controller : " + clientid + " " + appid + " " + releasetag);
return duplicateManagerMetricsService.getAllDuplicateManagerFromReleaseTag(clientid, appid, releasetag);
}
My service looks like this:
public List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerFromReleaseTag(String clientid, String[] appid, String releasetag) {
try {
System.out.println("data in service : "+ clientid + " " + appid + " " + releasetag);
return duplicateManagerMetricsRepository.getAllDuplicateManagerfromReleaseTag(clientid, appid, releasetag);
} catch (Exception e) {
logger.error(e);
return new ArrayList<>();
}
}
My repository looks like this:
@Query(value = "select a.sn, a.clientid, a.ticket_count from dbtable as a where a.clientid = ?1 AND a.appid in (?2) AND a.releasetag=?3", nativeQuery = true)
List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerfromReleaseTag(String clientid, String[] appid, String releasetag);
Since I am not getting the appid data (I was expecting [657-001], but it is printing the array's object reference), the error I am getting is:
DuplicateManagerMetricsController - Returing all duplicate managers of client 657 appId [657-001] from release tag WIL657.2021.05-001
data in controller : 657 [Ljava.lang.String;#63108943 WIL657.2021.05-001
data in service : 657 [Ljava.lang.String;#63108943 WIL657.2021.05-001
2022-09-02 05:08:54 DEBUG org.hibernate.SQL - select a.sn, a.clientid, a.ticket_count from dbtable as a where a.clientid = ? AND a.appid in (?) AND a.releasetag=?
2022-09-02 05:08:54 WARN o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 933, SQLState: 42000
2022-09-02 05:08:54 ERROR o.h.e.jdbc.spi.SqlExceptionHelper - ORA-00933: SQL command not properly ended
[ERROR] 2022-09-02 05:08:54.566 [http-nio-8080-exec-2] DuplicateManagerMetricsService - org.springframework.dao.InvalidDataAccessResourceUsageException: could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet
This is because you are extracting individual fields (a.sn, a.clientid, a.ticket_count), which is not a DuplicateManagerMetricsRelTagEntity. Since you have not asked Hibernate to extract the complete object but only a few fields (it does not matter that the fields are exactly the same as the fields in the object), Hibernate extracts each row as an Object[] and then fails when trying to map it to DuplicateManagerMetricsRelTagEntity, hence the issue.
You can use a JPA projection/DTO.
In your case you can use JPQL and:
"select a from DuplicateManagerMetricsRelTagEntity a where a.clientid = ?1 AND a.appid in (?2) AND a.releasetag=?3"
Or for native queries you can try:
select * from table_name where conditions;
NOTE: It has been a long time since I used Hibernate or JPA, so the queries might not be completely accurate. Please do check the proper syntax. The main idea is to show how Hibernate understands and tries to map the type: since you select a, b, c, an Object[] is extracted and cannot be mapped to your entity.
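As a hedged sketch of the JPQL approach (the com.example package prefix is a placeholder for the entity's real package, the array parameter is swapped for a List, and it assumes the entity actually maps appid and releasetag as the query above implies), a constructor expression can reuse the three-argument constructor the entity already has, so Hibernate builds entities directly instead of returning Object[] rows:
// Illustrative only: replace com.example with the actual package of the entity.
@Query("select new com.example.DuplicateManagerMetricsRelTagEntity(a.sn, a.clientid, a.ticket_count) "
     + "from DuplicateManagerMetricsRelTagEntity a "
     + "where a.clientid = ?1 and a.appid in ?2 and a.releasetag = ?3")
List<DuplicateManagerMetricsRelTagEntity> getAllDuplicateManagerfromReleaseTag(
        String clientid, List<String> appid, String releasetag);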

Resilience4j Retry is not working as expected

I have two services, "product-service" and "rating-service". I am making a REST call from product-service to rating-service to get the data. I have written the Retry configuration in product-service, expecting that whenever an exception is raised from rating-service, product-service retries the REST call as per the configuration. But that is not happening: whenever an exception is thrown from rating-service, product-service also throws the exception without retrying and without falling back.
Please check the code of both services below.
product-service >> ProductServiceImpl.java
@Retry(name = "rating-service", fallbackMethod = "getDefaultProductRating")
public List<ProductRatingDTO> getProductRating(String id) {
String reqRatingServiceUrl = ratingServiceUrl + "/" + id;
log.info("Making a request to " + reqRatingServiceUrl + " at :" + LocalDateTime.now());
ResponseEntity<List<ProductRatingDTO>> productRatingDTOListRE = restTemplate.exchange(reqRatingServiceUrl,
HttpMethod.GET, null, new ParameterizedTypeReference<List<ProductRatingDTO>>() {
});
List<ProductRatingDTO> productRatingDTOList = productRatingDTOListRE.getBody();
log.info("Retrieved rating for id {} are: {}", id, productRatingDTOList);
return productRatingDTOList;
}
public List<ProductRatingDTO> getDefaultProductRating(String id, Exception ex) {
log.warn("fallback method: " + ex.getMessage());
return new ArrayList<>();
}
product-service >> application.yml
resilience4j.retry:
  instances:
    rating-service:
      maxAttempts: 3
      waitDuration: 10s
      retryExceptions:
        - org.springframework.web.client.HttpServerErrorException
      ignoreExceptions:
        - java.lang.ArrayIndexOutOfBoundsException
rating-service >> RatingsServiceImpl.java
@Override
public List<RatingsDTO> getRatings(String productId) {
log.info("Ratings required for product id: "+productId);
List<RatingsDTO> ratingsDTOList = ratingsRepository.getRatingsByProductId(productId);
log.info("Ratings fetched for product id {} are : {}",productId,ratingsDTOList);
if (ThreadLocalRandom.current().nextInt(0,5) == 0){ // Erratic block
log.error("Erratic");
throw new org.springframework.web.client.HttpServerErrorException(HttpStatus.INTERNAL_SERVER_ERROR);
}
return ratingsDTOList;
}
Please let me know where I am making a mistake.

Error when retrieving values from MS SQL using Hibernate

I've written a program that has to retrieve data from a table in MS SQL. I'm using Hibernate, where AbtDebtbyCAN is an entity class. The connection so far is fine, but the only problem I'm facing is printing out the data from MS SQL using annotation mapping. debt is the name of the table (the table name is in lower case) that is to be mapped. Below is the row I want to print out using Hibernate. Can anybody help me with how to achieve fetching the data?
debt
id can bdrl_debt excess_ta_debt posting_ref debt_settlement_id debt_settlement_at debt_business_date
11425 1099112400000003 0 200 501728 137 2020-10-13 10:51:50.000 2020-10-13
AbtDebtbyCAN
@Data
@Entity
@Table(name = "debt")
public class AbtDebtbbyCAN implements Serializable{
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id")
private int debt_id;
@Column(name = "can")
private int debt_can;
@Column(name = "bdrl_debt")
private int bdrl_debt;
@Column(name = "excess_ta_debt")
private int excess_ta_debt;
@Column(name = "posting_ref")
private int posting_ref;
@Column(name = "debt_settlement_id")
private int debt_settlement_id;
@Column(name = "debt_settlement_at")
private DateTime debt_settlement_at;
@Column(name = "debt_business_date")
private Date debt_business_date;
}
AbtCANMapperTest
public class AbtCANMapperTest {
public static void main(String args[]) {
StandardServiceRegistry ssr = new StandardServiceRegistryBuilder().configure("hibernate.cfg.xml").build();
Metadata meta = new MetadataSources(ssr).getMetadataBuilder().build();
SessionFactory factory = meta.getSessionFactoryBuilder().build();
Session session = factory.openSession();
Transaction tx = null;
try {
tx = session.beginTransaction();
List abtdebt = session.createQuery("FROM AbtDebtbbyCAN WHERE debt_can=1099112400000003").getResultList();
for (Iterator iterator = abtdebt.iterator(); iterator.hasNext(); ) {
AbtDebtbbyCAN abtcan = (AbtDebtbbyCAN) iterator.next();
System.out.print("debt id: " + abtcan.getDebt_id());
System.out.print("debt can: " + abtcan.getDebt_can());
System.out.print("bdrl debt: " + abtcan.getBdrl_debt());
System.out.print("excess debt: " + abtcan.getExcess_ta_debt());
System.out.print("posting ref: " + abtcan.getPosting_ref());
System.out.print("debt settlement id: " + abtcan.getDebt_settlement_id());
System.out.print("debt settlement at: " + abtcan.getDebt_settlement_at());
System.out.println("debt business date: " + abtcan.getDebt_business_date());
}
tx.commit();
} catch (HibernateException e) {
if (tx != null) tx.rollback();
e.printStackTrace();
} finally {
session.close();
}
}
}
Error StackTrace
Exception in thread "main" javax.persistence.PersistenceException: org.hibernate.type.SerializationException: could not deserialize
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:154)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1542)
at org.hibernate.query.Query.getResultList(Query.java:165)
at AbtMainTestControl.AbtTestObj.AbtCANMapperTest.main(AbtCANMapperTest.java:36)
Caused by: org.hibernate.type.SerializationException: could not deserialize
at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:243)
at org.hibernate.internal.util.SerializationHelper.deserialize(SerializationHelper.java:287)
at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.fromBytes(SerializableTypeDescriptor.java:138)
at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:113)
at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:29)
at org.hibernate.type.descriptor.sql.VarbinaryTypeDescriptor$2.doExtract(VarbinaryTypeDescriptor.java:60)
at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:47)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:257)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:253)
at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:243)
at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:329)
at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:3088)
at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1907)
at org.hibernate.loader.Loader.hydrateEntityState(Loader.java:1835)
at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1808)
at org.hibernate.loader.Loader.getRow(Loader.java:1660)
at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:745)
at org.hibernate.loader.Loader.getRowsFromResultSet(Loader.java:1044)
at org.hibernate.loader.Loader.processResultSet(Loader.java:995)
at org.hibernate.loader.Loader.doQuery(Loader.java:964)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:350)
at org.hibernate.loader.Loader.doList(Loader.java:2887)
at org.hibernate.loader.Loader.doList(Loader.java:2869)
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2701)
at org.hibernate.loader.Loader.list(Loader.java:2696)
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:506)
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:400)
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:219)
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1415)
at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1565)
at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1533)
... 2 more
Caused by: java.io.StreamCorruptedException: invalid stream header: 0000AC53
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:899)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:357)
at org.hibernate.internal.util.SerializationHelper$CustomObjectInputStream.<init>(SerializationHelper.java:309)
at org.hibernate.internal.util.SerializationHelper$CustomObjectInputStream.<init>(SerializationHelper.java:299)
at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:218)
... 32 more
Caused by: org.hibernate.type.SerializationException: could not deserialize
> Task :AbtCANMapperTest.main() FAILED
Caused by: java.io.StreamCorruptedException: invalid stream header: 0000AC53
Execution failed for task ':AbtCANMapperTest.main()'.
> Process 'command 'C:/Program Files/Java/jdk1.8.0_241/bin/java.exe'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
Not sure what types you are exactly using here, but at least DateTime is not a standard type that is supported by Hibernate. If Hibernate does not know about a type, it tries to flush the value in its serialized form as a blob/byte[]. If you use java.sql.Timestamp, java.util.Calendar or any other type that Hibernate supports out of the box, this should work properly.
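A minimal sketch of that change on the entity, assuming debt_settlement_at is a SQL datetime column and debt_business_date is a SQL date column:
// Mapped with JDBC types Hibernate supports out of the box, so no serialization fallback is used.
@Column(name = "debt_settlement_at")
private java.sql.Timestamp debt_settlement_at;
@Column(name = "debt_business_date")
private java.sql.Date debt_business_date;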

Apache Ignite : ScanQuery giving exception

I am a newbie to Apache Ignite. On my Windows box, I have started Apache Ignite by double-clicking the ignite.bat file and am trying to run the following code:
Cache populating client code
package ignite;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
public class SpringIgniteClient {
public static void main(String[] args) throws Exception {
System.out.println("Run example!!");
Ignition.setClientMode(true);
// Start Ignite in client mode.
Ignite ignite = Ignition.start();
CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<Integer, Person>("myStreamCache");
cfg.setIndexedTypes(Integer.class, Person.class);
IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(cfg);
//for(int i = 1; i < 1000; i++){ cache.put(i, Integer.toString(i)+"sushil---"); }
for (int i = 0; i < 100; i++) {
Person person = new Person(i, i, "name_" + i, (i * 100) % 3000);
if(person.getSal() < 1000){
System.out.println(person);
}
cache.put(i, person);
}
}
}
Cache ScanQuery client code
package ignite;
import javax.cache.Cache.Entry;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;
public class SpringIgniteReceiverClient {
public static void main(String[] args) {
System.out.println("Run Receiver example!!");
Ignition.setClientMode(true);
// Start Ignite in client mode.
Ignite ignite = Ignition.start();
CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<Integer, Person>("myStreamCache");
cfg.setIndexedTypes(Integer.class, Person.class);
IgniteCache<Integer, Person> cache = ignite.getOrCreateCache(cfg);
IgniteBiPredicate<Integer, Person> filter = new MyIgniteBiPredicate();
ScanQuery<Integer, Person> query = new ScanQuery<Integer, Person>(filter);
//query.setLocal(true);
QueryCursor<Entry<Integer, Person>> cursor= cache.query(query);
System.out.println("ALL DATA ->"+cursor.getAll());
}
}
and the IgniteBiPredicate implementation is:
package ignite;
import java.io.Serializable;
import org.apache.ignite.lang.IgniteBiPredicate;
public class MyIgniteBiPredicate implements IgniteBiPredicate<Integer, Person>, Serializable{
/**
*
*/
private static final long serialVersionUID = 1L;
@Override public boolean apply(Integer key, Person p) {
return p.getSal() < 1000;
}
}
Serializable Java POJO
package ignite;
import java.io.Serializable;
public class Person implements Serializable{
/**
*
*/
private static final long serialVersionUID = 1L;
private int age;
private int empId;
private String name;
private int sal;
public Person(int age, int empId, String name, int sal) {
super();
this.age = age;
this.empId = empId;
this.name = name;
this.sal = sal;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public int getEmpId() {
return empId;
}
public void setEmpId(int empId) {
this.empId = empId;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getSal() {
return sal;
}
public void setSal(int sal) {
this.sal = sal;
}
@Override
public String toString() {
return "Person [age=" + age + ", empId=" + empId + ", name=" + name + ", sal=" + sal + "]";
}
}
During debugging, I found that in IgniteCacheProxy.class the following method is called and it returns null.
/**
* @param loc Enforce local.
* @return Local node cluster group.
*/
private ClusterGroup projection(boolean loc) {
if (loc || ctx.isLocal() || isReplicatedDataNode())
return ctx.kernalContext().grid().cluster().forLocal();
if (ctx.isReplicated())
return ctx.kernalContext().grid().cluster().forDataNodes(ctx.name()).forRandom();
return null;
}
And the ScanQuery program gives the following error:
Run Receiver example!!
[21:46:52] (wrn) Default Spring XML file not found (is IGNITE_HOME set?): config/default-config.xml
Mar 05, 2017 9:46:52 PM java.util.logging.LogManager$RootLogger log
SEVERE: Failed to resolve default logging config file: config/java.util.logging.properties
[21:46:53] __________ ________________
[21:46:53] / _/ ___/ |/ / _/_ __/ __/
[21:46:53] _/ // (7 7 // / / / / _/
[21:46:53] /___/\___/_/|_/___/ /_/ /___/
[21:46:53]
[21:46:53] ver. 1.8.0#20161205-sha1:9ca40dbe
[21:46:53] 2016 Copyright(C) Apache Software Foundation
[21:46:53]
[21:46:53] Ignite documentation: http://ignite.apache.org
[21:46:53]
[21:46:53] Quiet mode.
[21:46:53] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[21:46:53]
[21:46:53] OS: Windows 7 6.1 amd64
[21:46:53] VM information: Java(TM) SE Runtime Environment 1.8.0_65-b17 Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.65-b01
[21:46:53] Initial heap size is 124MB (should be no less than 512MB, use -Xms512m -Xmx512m).
[21:46:53] Configured plugins:
[21:46:53] ^-- None
[21:46:53]
[21:46:54] Security status [authentication=off, tls/ssl=off]
[21:46:58] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[21:46:58]
[21:46:58] Ignite node started OK (id=ae95174d)
[21:46:58] Topology snapshot [ver=3, servers=1, clients=2, CPUs=4, heap=4.4GB]
Exception in thread "main" javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Query execution failed: GridCacheQueryBean [qry=GridCacheQueryAdapter [type=SCAN, clsName=null, clause=null, filter=ignite.MyIgniteBiPredicate#294a6b8e, transform=null, part=null, incMeta=false, metrics=GridCacheQueryMetricsAdapter [minTime=9223372036854775807, maxTime=0, sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024, timeout=0, keepAll=true, incBackups=false, dedup=false, prj=null, keepBinary=false, subjId=ae95174d-ff1c-44b2-a7dc-24fab738729e, taskHash=0], rdc=null, trans=null]
at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1440)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.next(GridCacheQueryFutureAdapter.java:174)
at org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$5.onHasNext(GridCacheDistributedQueryManager.java:634)
at org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2$1.onHasNext(IgniteCacheProxy.java:518)
at org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
at org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
at org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:117)
at ignite.SpringIgniteReceiverClient.main(SpringIgniteReceiverClient.java:31)
Caused by: class org.apache.ignite.IgniteCheckedException: Query execution failed: GridCacheQueryBean [qry=GridCacheQueryAdapter [type=SCAN, clsName=null, clause=null, filter=ignite.MyIgniteBiPredicate#294a6b8e, transform=null, part=null, incMeta=false, metrics=GridCacheQueryMetricsAdapter [minTime=9223372036854775807, maxTime=0, sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024, timeout=0, keepAll=true, incBackups=false, dedup=false, prj=null, keepBinary=false, subjId=ae95174d-ff1c-44b2-a7dc-24fab738729e, taskHash=0], rdc=null, trans=null]
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.checkError(GridCacheQueryFutureAdapter.java:260)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.internalIterator(GridCacheQueryFutureAdapter.java:318)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.next(GridCacheQueryFutureAdapter.java:164)
... 7 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to execute query on node [query=GridCacheQueryBean [qry=GridCacheQueryAdapter [type=SCAN, clsName=null, clause=null, filter=ignite.MyIgniteBiPredicate#294a6b8e, transform=null, part=null, incMeta=false, metrics=GridCacheQueryMetricsAdapter [minTime=9223372036854775807, maxTime=0, sumTime=0, avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024, timeout=0, keepAll=true, incBackups=false, dedup=false, prj=null, keepBinary=false, subjId=ae95174d-ff1c-44b2-a7dc-24fab738729e, taskHash=0], rdc=null, trans=null], nodeId=366435c6-5fca-43dc-b1f2-5ff2b0d3ee2d]
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.onPage(GridCacheQueryFutureAdapter.java:383)
at org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.processQueryResponse(GridCacheDistributedQueryManager.java:398)
at org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.access$000(GridCacheDistributedQueryManager.java:63)
at org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:93)
at org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:91)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:827)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:369)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$400(GridCacheIoManager.java:95)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1345)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:102)
at org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2332)
at org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1042)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1900(GridIoManager.java:102)
at org.apache.ignite.internal.managers.communication.GridIoManager$6.run(GridIoManager.java:1011)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: ignite.MyIgniteBiPredicate
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9785)
at org.apache.ignite.internal.processors.cache.query.GridCacheQueryRequest.finishUnmarshal(GridCacheQueryRequest.java:322)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1298)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:364)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:293)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:95)
at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:238)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1082)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:710)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:102)
at org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:673)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException: ignite.MyIgniteBiPredicate
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:689)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:686)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1491)
at org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
at org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:298)
at org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9779)
... 13 more
Caused by: java.lang.ClassNotFoundException: ignite.MyIgniteBiPredicate
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8393)
at org.apache.ignite.internal.MarshallerContextAdapter.getClass(MarshallerContextAdapter.java:185)
at org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:680)
... 20 more
You need to deploy MyIgniteBiPredicate on the server nodes. Create a JAR file with this class and put this JAR into the IGNITE_HOME/libs folder prior to cluster startup.
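An alternative worth mentioning (a hedged sketch, not part of the answer above): Ignite's peer class loading can ship the predicate class to server nodes at runtime, but it has to be enabled consistently on every node, including the server started from ignite.bat (via its Spring XML configuration):
// Client-side sketch with peer class loading enabled; server nodes need the same setting.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
Ignite ignite = Ignition.start(cfg);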

Spark Datastax Java API Select statements

I'm using the tutorial in this GitHub repo to run Spark on Cassandra in a Java Maven project: https://github.com/datastax/spark-cassandra-connector.
I've figured out how to use direct CQL statements, as I previously asked about here: Querying Data in Cassandra via Spark in a Java Maven Project
However, now I'm trying to use the DataStax Java API, fearing that my original code from my original question will not work with the DataStax version of Spark and Cassandra. For some weird reason, it won't let me use .where even though the documentation outlines that I can use that exact statement. Here is my code:
import org.apache.commons.lang3.StringUtils;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import java.io.Serializable;
import static com.datastax.spark.connector.CassandraJavaUtil.*;
public class App implements Serializable
{
// firstly, we define a bean class
public static class Person implements Serializable {
private Integer id;
private String fname;
private String lname;
private String role;
// Remember to declare no-args constructor
public Person() { }
public Integer getId() { return id; }
public void setId(Integer id) { this.id = id; }
public String getfname() { return fname; }
public void setfname(String fname) { this.fname = fname; }
public String getlname() { return lname; }
public void setlname(String lname) { this.lname = lname; }
public String getrole() { return role; }
public void setrole(String role) { this.role = role; }
// other methods, constructors, etc.
}
private transient SparkConf conf;
private App(SparkConf conf) {
this.conf = conf;
}
private void run() {
JavaSparkContext sc = new JavaSparkContext(conf);
createSchema(sc);
sc.stop();
}
private void createSchema(JavaSparkContext sc) {
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("role=?", "IT Engineer").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
System.out.println("Data as Person beans: \n" + StringUtils.join("\n", rdd.toArray()));
}
public static void main( String[] args )
{
if (args.length != 2) {
System.err.println("Syntax: com.datastax.spark.demo.JavaDemo <Spark Master URL> <Cassandra contact point>");
System.exit(1);
}
SparkConf conf = new SparkConf();
conf.setAppName("Java API demo");
conf.setMaster(args[0]);
conf.set("spark.cassandra.connection.host", args[1]);
App app = new App(conf);
app.run();
}
}
Here is the error:
14/09/23 13:46:53 ERROR executor.Executor: Exception in task ID 0
java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:310)
at com.datastax.spark.connector.rdd.CassandraRDD.com$datastax$spark$connector$rdd$CassandraRDD$$fetchTokenRange(CassandraRDD.scala:317)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:10)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:205)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256)
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:91)
at com.datastax.spark.connector.cql.PreparedStatementCache$.prepareStatement(PreparedStatementCache.scala:45)
at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:28)
at com.sun.proxy.$Proxy8.prepare(Unknown Source)
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:293)
... 27 more
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.driver.core.Responses$Error.asException(Responses.java:97)
at com.datastax.driver.core.SessionManager$1.apply(SessionManager.java:156)
at com.datastax.driver.core.SessionManager$1.apply(SessionManager.java:131)
at com.google.common.util.concurrent.Futures$1.apply(Futures.java:711)
at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:849)
... 3 more
14/09/23 13:46:53 WARN scheduler.TaskSetManager: Lost TID 0 (task 0.0:0)
14/09/23 13:46:53 WARN scheduler.TaskSetManager: Loss was due to java.io.IOException
java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal
at com.datastax.spark.connector.rdd.CassandraRDD.createStatement(CassandraRDD.scala:310)
at com.datastax.spark.connector.rdd.CassandraRDD.com$datastax$spark$connector$rdd$CassandraRDD$$fetchTokenRange(CassandraRDD.scala:317)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at com.datastax.spark.connector.rdd.CassandraRDD$$anonfun$13.apply(CassandraRDD.scala:338)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at com.datastax.spark.connector.util.CountingIterator.hasNext(CountingIterator.scala:10)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$4.apply(RDD.scala:608)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:884)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:109)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:205)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/09/23 13:46:53 ERROR scheduler.TaskSetManager: Task 0.0:0 failed 1 times; aborting job
14/09/23 13:46:53 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/09/23 13:46:53 INFO scheduler.DAGScheduler: Failed to run toArray at App.java:65
Exception in thread "main" org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 1 times (most recent failure: Exception failure: java.io.IOException: Exception during preparation of SELECT "role", "id", "fname", "lname" FROM "tester"."empbyrole" WHERE token("role") > -5709068081826432029 AND token("role") <= -5491279024053142424 AND role=? ALLOW FILTERING: role cannot be restricted by more than one relation if it includes an Equal)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/09/23 13:46:53 INFO cql.CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
I know that my error is specifically in this section:
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("role=?", "IT Engineer").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
When I remove the .where(), it works. But it says specifically on GitHub that you should be able to execute the .where and .map functions respectively. Does anyone have an explanation for this, or a solution? Thanks.
EDIT:
I get the error to go away when I use this statement instead:
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
.where("id=?", "1").map(new Function<Person, String>() {
@Override
public String call(Person person) throws Exception {
return person.toString();
}
});
I have no idea why this option works but not the rest of my variations. Here are the statements I ran in CQL so that you know what my keyspace looks like:
session.execute("DROP KEYSPACE IF EXISTS tester");
session.execute("CREATE KEYSPACE tester WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}");
session.execute("CREATE TABLE tester.emp (id INT PRIMARY KEY, fname TEXT, lname TEXT, role TEXT)");
session.execute("CREATE TABLE tester.empByRole (id INT, fname TEXT, lname TEXT, role TEXT, PRIMARY KEY (role,id))");
session.execute("CREATE TABLE tester.dept (id INT PRIMARY KEY, dname TEXT)");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0001," +
"'Angel'," +
"'Pay'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0002," +
"'John'," +
"'Doe'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.emp (id, fname, lname, role) " +
"VALUES (" +
"0003," +
"'Jane'," +
"'Doe'," +
"'IT Analyst'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0001," +
"'Angel'," +
"'Pay'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0002," +
"'John'," +
"'Doe'," +
"'IT Engineer'" +
");");
session.execute(
"INSERT INTO tester.empByRole (id, fname, lname, role) " +
"VALUES (" +
"0003," +
"'Jane'," +
"'Doe'," +
"'IT Analyst'" +
");");
session.execute(
"INSERT INTO tester.dept (id, dname) " +
"VALUES (" +
"1553," +
"'Commerce'" +
");");
The where method adds ALLOW FILTERING to your query under the covers. This is not a magic bullet, as it still doesn't support arbitrary fields as query predicates. In general, the field must either be indexed or a clustering column. If this isn't practical for your data model, you can simply use the filter method on the RDD. The downside is that the filter takes place in Spark and not in Cassandra.
So the id field works because it's supported in a CQL WHERE clause, whereas I'm assuming role is just a regular field. Please note that I am NOT suggesting that you index your field or change it to a clustering column, as I don't know your data model.
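A hedged sketch of that filter-based alternative, written against the Person bean from the question (the predicate value is the same "IT Engineer" role; keep in mind the filtering happens in Spark, not in Cassandra):
JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
        // pull the rows into Spark and apply the predicate there instead of using .where()
        .filter(new Function<Person, Boolean>() {
            @Override
            public Boolean call(Person person) throws Exception {
                return "IT Engineer".equals(person.getrole());
            }
        })
        .map(new Function<Person, String>() {
            @Override
            public String call(Person person) throws Exception {
                return person.toString();
            }
        });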
There is a limitation in the Spark Cassandra Connector that the where method will not work on partitioning keys. In your table empByRole, role is a partitioning key, hence the error. It should work correctly on clustering columns or indexed columns (secondary indexes).
This is being tracked as issue 37 in the GitHub project and work has been ongoing.
On the Java API doc page, the examples shown used .where("name=?", "Anna"). I assume that name is not a partitioning key, but the example could be more clear about that.
