I prepared a PCollection<BeamRecord> object from a file containing JSON objects using the Beam SQL SDK.
The code below parses and maps the JSON lines to ChatHistory objects, then converts the mapped objects to BeamRecord. Finally I try to apply BeamSql to the returned PCollection<BeamRecord>, but I get the exception SerializableCoder cannot be cast to BeamRecordCoder.
PCollection<ChatHistory> json_objects = lines.apply(ParDo.of(new ExtractObjectsFn()));
// Convert them to BeamRecords with the same schema as defined above via a DoFn.
PCollection<BeamRecord> apps = json_objects.apply(
ParDo.of(new DoFn<ChatHistory, BeamRecord>() {
@ProcessElement
public void processElement(ProcessContext c) {
List<String> fields_list= new ArrayList<String>(Arrays.asList("conversation_id","message_type","message_date","message","message_auto_id"));
List<Integer> types_list= new ArrayList<Integer>(Arrays.asList(Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR));
BeamRecordSqlType brtype = BeamRecordSqlType.create(fields_list, types_list);
BeamRecord br = new BeamRecord(
brtype,
c.element().conversation_id,
c.element().message_type,
c.element().message_date,
c.element().message,
c.element().message_auto_id
);
c.output(br);
}
}));
return apps.apply(
BeamSql
.query("SELECT conversation_id, message_type, message, message_date, message_auto_id FROM PCOLLECTION")
);
Here is the generated stack trace
java.lang.ClassCastException: org.apache.beam.sdk.coders.SerializableCoder cannot be cast to org.apache.beam.sdk.coders.BeamRecordCoder
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.registerTables (BeamSql.java:173)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand (BeamSql.java:153)
at org.apache.beam.sdk.extensions.sql.BeamSql$QueryTransform.expand (BeamSql.java:116)
at org.apache.beam.sdk.Pipeline.applyInternal (Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform (Pipeline.java:472)
at org.apache.beam.sdk.values.PCollectionTuple.apply (PCollectionTuple.java:160)
at org.apache.beam.sdk.extensions.sql.BeamSql$SimpleQueryTransform.expand (BeamSql.java:246)
at org.apache.beam.sdk.extensions.sql.BeamSql$SimpleQueryTransform.expand (BeamSql.java:186)
at org.apache.beam.sdk.Pipeline.applyInternal (Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform (Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply (PCollection.java:286)
at com.mdm.trial.trial3$JsonParse.expand (trial3.java:123)
at com.mdm.trial.trial3$JsonParse.expand (trial3.java:1)
at org.apache.beam.sdk.Pipeline.applyInternal (Pipeline.java:537)
at org.apache.beam.sdk.Pipeline.applyTransform (Pipeline.java:472)
at org.apache.beam.sdk.values.PCollection.apply (PCollection.java:286)
at com.mdm.trial.trial3.main (trial3.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
at java.lang.Thread.run (Thread.java:748)
I saw a similar post, but it still doesn't fix my error: Running BeamSql WithoutCoder or Making Coder Dynamic
Best regards!
Ismail, in your case using .setCoder() should work.
I would try extracting the row type out of the ParDo and then applying it to apps before applying the SQL query:
PCollection<ChatHistory> json_objects = lines.apply(ParDo.of(new ExtractObjectsFn()));
// Create a row type first:
List<String> fields_list= new ArrayList<String>(Arrays.asList("conversation_id","message_type","message_date","message","message_auto_id"));
List<Integer> types_list= new ArrayList<Integer>(Arrays.asList(Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR,Types.VARCHAR));
final BeamRecordSqlType brtype = BeamRecordSqlType.create(fields_list, types_list);
// Convert them to BeamRecords with the same schema as defined above via a DoFn.
PCollection<BeamRecord> apps = json_objects.apply(
ParDo.of(new DoFn<ChatHistory, BeamRecord>() {
@ProcessElement
public void processElement(ProcessContext c) {
BeamRecord br = new BeamRecord(
brtype,
c.element().conversation_id,
c.element().message_type,
c.element().message_date,
c.element().message,
c.element().message_auto_id
);
c.output(br);
}
}));
return apps
.setCoder(brtype.getRecordCoder())
.apply(
BeamSql
.query("SELECT conversation_id, message_type, message, message_date, message_auto_id FROM PCOLLECTION")
);
A couple of examples:
This example sets the coder by using Create.withCoder(), which does the same thing;
This example filters and converts from Events using ToRow.parDo() and then also sets the coder, which is specified here.
Please note that BeamRecord has since been renamed to Row, and a few other changes are reflected in the examples above.
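For reference, in more recent Beam releases the same pipeline is usually written with the Schema/Row API and SqlTransform. A rough sketch under that assumption (field and variable names follow the question; this is not the exact code from the linked examples):
// Uses org.apache.beam.sdk.schemas.Schema, org.apache.beam.sdk.values.Row
// and org.apache.beam.sdk.extensions.sql.SqlTransform.
final Schema schema = Schema.builder()
        .addStringField("conversation_id")
        .addStringField("message_type")
        .addStringField("message_date")
        .addStringField("message")
        .addStringField("message_auto_id")
        .build();

PCollection<Row> rows = json_objects
        .apply(ParDo.of(new DoFn<ChatHistory, Row>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                ChatHistory e = c.element();
                c.output(Row.withSchema(schema)
                        .addValues(e.conversation_id, e.message_type, e.message_date,
                                   e.message, e.message_auto_id)
                        .build());
            }
        }))
        .setRowSchema(schema); // plays the role of setCoder(brtype.getRecordCoder()) above

PCollection<Row> result = rows.apply(
        SqlTransform.query(
            "SELECT conversation_id, message_type, message, message_date, message_auto_id FROM PCOLLECTION"));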
I have a problem trying to access a PubSub message's attributes.
The error message is the following:
Coder of type class org.apache.beam.sdk.coders.SerializableCoder has a #structuralValue method which does not return true when the encoding of the elements is equal.
stackTrace: [org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage.getAttribute(PubsubMessage.java:56),
transform1$3.processElement(transform1.java:37),
transform1$3$DoFnInvoker.invokeProcessElement(Unknown Source),
org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:218),
org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183),
org.apache.beam.repackaged.direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78),
org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:216),
org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:54),
org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160), org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124),
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511),
java.util.concurrent.FutureTask.run(FutureTask.java:266),
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149),
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624),
java.lang.Thread.run(Thread.java:748)]
I'm using the Dataflow Eclipse SDK to run the pipeline locally:
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
The line of code which produces the error is this:
String fieldId = c.element().getAttribute("evId");
The full code of the ptransform is the following:
public class transform1 extends DoFn<PubsubMessage, Event> {
public static TupleTag<ErrorHandler> failuresTag=new TupleTag<ErrorHandler>(){};
public static TupleTag<Event> validTag = new TupleTag<Event>(){};
public static PCollectionTuple process(PCollection<PubsubMessage> logStrings)
{
return logStrings.apply("Create PubSub objects", ParDo.of(new DoFn<PubsubMessage, Event>()
{
@ProcessElement
public void processElement(ProcessContext c)
{
try
{
Event event = new Event();
String fieldId = c.element().getAttribute("evId");
event.evId = "asa"; //this line is just to test to set a value
c.output(event);
<...>
I have seen a similar question, but I'm not sure how I could fix it.
The main pipeline code (if needed)
public static PipelineResult run(Options options) {
Pipeline pipeline = Pipeline.create(options);
/*
* Step 1: Read from PubSub
*/
PCollection<PubsubMessage> messages = null;
if (options.getUseSubscription()) {
messages = pipeline.apply("ReadPubSubSubscription", PubsubIO.readMessagesWithAttributes()
.fromSubscription(options.getInputSubscription()).withIdAttribute("messageId"));
} else {
messages = pipeline.apply("ReadPubSubTopic", PubsubIO.readMessagesWithAttributes()
.fromTopic(options.getInputTopic()).withIdAttribute("messageId"));
}
/*
* Step 2: Transform PubSubMessage to Event
*/
PCollectionTuple eventCollections = transform1.process(messages);
This is how I publish the message in PubSub to test the pipeline. The message body is:
{ "evId":"id", "payload":"payload" }
I also tried it as:
"{ "evId":"id", "payload":"payload" }"
After making more tests, it turned out that the way I was publishing to PubSub was the source of the error: if I add evId as a message attribute instead of putting it in the message body, the problem disappears.
The reason is that I was trying to access an attribute here:
String fieldId = c.element().getAttribute("evId");
but when I sent the message through the PubSub dashboard I didn't add any attributes, which caused the whole pipeline to crash.
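For anyone hitting the same thing, a defensive check in the DoFn avoids the crash when a message arrives without the expected attribute. A minimal sketch, assuming Beam's PubsubMessage.getAttributeMap() may return null in that case (the skip-vs-route decision is just an illustration):
// Guard against messages that were published without attributes.
// Needs java.util.Map in the imports.
Map<String, String> attributes = c.element().getAttributeMap();
if (attributes == null || attributes.get("evId") == null) {
    // No evId attribute on this message: skip it (or emit an ErrorHandler to
    // failuresTag) instead of letting getAttribute("evId") crash the pipeline.
    return;
}
String fieldId = attributes.get("evId");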
This is my first attempt at using a KTable. I have a Kafka stream that contains Avro-serialized objects of type A,B, and that part works fine: I can write a consumer that consumes just fine, or a simple KStream that simply counts records.
The B object has a field containing a country code. I'd like to feed that code into a KTable so it can count the number of records that contain a particular country code. To do so I'm trying to convert the stream into a stream of X,Y (or really: country-code, count). Eventually I look at the contents of the table and extract an array of KV pairs.
The code I have (included below) always errors out with the following (see the line with 'Caused by'):
2018-07-26 13:42:48.688 [com.findology.tools.controller.TestEventGeneratorController-16d7cd06-4742-402e-a679-898b9ef78c41-StreamThread-1; AssignedStreamsTasks] ERROR -- stream-thread [com.findology.tools.controller.TestEventGeneratorController-16d7cd06-4742-402e-a679-898b9ef78c41-StreamThread-1] Failed to process stream task 0_0 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_0, processor=KSTREAM-SOURCE-0000000000, topic=com.findology.model.traffic.CpaTrackingCallback, partition=0, offset=962649
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:240)
at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:94)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:411)
at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:922)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:802)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:749)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:719)
Caused by: org.apache.kafka.streams.errors.StreamsException: A serializer (key: org.apache.kafka.common.serialization.ByteArraySerializer / value: org.apache.kafka.common.serialization.ByteArraySerializer) is not compatible to the actual key or value type (key type: java.lang.Integer / value type: java.lang.Integer). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters.
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:92)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:43)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.kstream.internals.KStreamTransform$KStreamTransformProcessor.process(KStreamTransform.java:59)
at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:211)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.forward(AbstractProcessorContext.java:174)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:224)
... 6 more
Caused by: java.lang.ClassCastException: java.lang.Integer cannot be cast to [B
at org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:21)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:146)
at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:94)
at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:87)
... 19 more
And here is the code I'm using. I've omitted certain classes for brevity. Note that I'm not using the Confluent KafkaAvro classes.
private synchronized void createStreamProcessor2() {
if (streams == null) {
try {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, getClass().getName());
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
StreamsConfig config = new StreamsConfig(props);
StreamsBuilder builder = new StreamsBuilder();
Map<String, Object> serdeProps = new HashMap<>();
serdeProps.put("schema.registry.url", schemaRegistryURL);
AvroSerde<CpaTrackingCallback> cpaTrackingCallbackAvroSerde = new AvroSerde<>(schemaRegistryURL);
cpaTrackingCallbackAvroSerde.configure(serdeProps, false);
// This is the key to telling kafka the specific Serde instance to use
// to deserialize the Avro encoded value
KStream<Long, CpaTrackingCallback> stream = builder.stream(CpaTrackingCallback.class.getName(),
Consumed.with(Serdes.Long(), cpaTrackingCallbackAvroSerde));
// provide a way to convert CpaTrackingCallback info into just country codes
// (Long, CpaTrackingCallback) -> (countryCode:Integer, placeHolder:Long)
TransformerSupplier<Long, CpaTrackingCallback, KeyValue<Integer, Long>> transformer = new TransformerSupplier<Long, CpaTrackingCallback, KeyValue<Integer, Long>>() {
@Override
public Transformer<Long, CpaTrackingCallback, KeyValue<Integer, Long>> get() {
return new Transformer<Long, CpaTrackingCallback, KeyValue<Integer, Long>>() {
@Override
public void init(ProcessorContext context) {
// Not doing Punctuate so no need to store context
}
@Override
public KeyValue<Integer, Long> transform(Long key, CpaTrackingCallback value) {
return new KeyValue(value.getCountryCode(), 1);
}
@Override
public KeyValue<Integer, Long> punctuate(long timestamp) {
return null;
}
@Override
public void close() {
}
};
}
};
KTable<Integer, Long> countryCounts = stream.transform(transformer).groupByKey() //
.count(Materialized.as("country-counts"));
streams = new KafkaStreams(builder.build(), config);
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
streams.cleanUp();
streams.start();
try {
countryCountsView = waitUntilStoreIsQueryable("country-counts", QueryableStoreTypes.keyValueStore(),
streams);
}
catch (InterruptedException e) {
log.warn("Interrupted while waiting for query store to become available", e);
}
}
catch (Exception e) {
log.error(e);
}
}
}
The bare groupByKey() method on KStream uses the default serializer/deserializer (which you haven't set). Use the overload groupByKey(Serialized<K,V> serialized) instead, as in:
.groupByKey(Serialized.with(Serdes.Integer(), Serdes.Long()))
Also note that what you do in your custom TransformerSupplier can be done more simply with a KStream.map call, as sketched below.
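A rough sketch of that simplification, reusing the names from the question (treating the mapped value as a Long placeholder is an assumption; count() produces the Long you care about either way):
// Re-key by country code with map() instead of a TransformerSupplier, and give
// groupByKey() explicit serdes so the repartition topic does not fall back to
// the default ByteArraySerializer from StreamsConfig.
KTable<Integer, Long> countryCounts = stream
        .map((key, value) -> KeyValue.pair(value.getCountryCode(), 1L))
        .groupByKey(Serialized.with(Serdes.Integer(), Serdes.Long()))
        .count(Materialized.as("country-counts"));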
I am trying to write a DataFrame to Ignite using JDBC.
Spark version: 2.1
Ignite version: 2.3
JDK: 1.8
Scala: 2.11.8
This is my code snippet:
def WriteToIgnite(hiveDF:DataFrame,targetTable:String):Unit = {
val conn = DataSource.conn
var psmt:PreparedStatement = null
try {
OperationIgniteUtil.deleteIgniteData(conn,targetTable)
hiveDF.foreachPartition({
partitionOfRecords => {
partitionOfRecords.foreach(
row => for ( i <- 0 until row.length ) {
psmt = OperationIgniteUtil.getInsertStatement(conn, targetTable, hiveDF.schema)
psmt.setObject(i+1, row.get(i))
psmt.execute()
}
)
}
})
}catch {
case e: Exception => e.printStackTrace()
} finally {
conn.close
}
}
When I run it on Spark, it prints this error message:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:288)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:108)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2094)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:924)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply$mcV$sp(Dataset.scala:2305)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2305)
at org.apache.spark.sql.Dataset$$anonfun$foreachPartition$1.apply(Dataset.scala:2305)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2304)
at com.pingan.pilot.ignite.common.OperationIgniteUtil$.WriteToIgnite(OperationIgniteUtil.scala:72)
at com.pingan.pilot.ignite.etl.HdfsToIgnite$.main(HdfsToIgnite.scala:36)
at com.pingan.pilot.ignite.etl.HdfsToIgnite.main(HdfsToIgnite.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: org.apache.ignite.internal.jdbc2.JdbcConnection
Serialization stack:
- object not serializable (class: org.apache.ignite.internal.jdbc2.JdbcConnection, value: org.apache.ignite.internal.jdbc2.JdbcConnection@7ebc2975)
- field (class: com.pingan.pilot.ignite.common.OperationIgniteUtil$$anonfun$WriteToIgnite$1, name: conn$1, type: interface java.sql.Connection)
- object (class com.pingan.pilot.ignite.common.OperationIgniteUtil$$anonfun$WriteToIgnite$1, )
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:295)
... 27 more
Does anyone know how to fix it?
Thanks!
The problem here is that you cannot serialize the connection to Ignite, DataSource.conn. The closure you provide to foreachPartition contains the connection as part of its scope, which is why Spark cannot serialize it.
Fortunately, Ignite provides a custom implementation of RDD which allows you to save values to it. You will need to create an IgniteContext first, then retrieve Ignite's shared RDD, which provides distributed access to Ignite so you can save the rows of your DataFrame:
val igniteContext = new IgniteContext(sparkContext, () => new IgniteConfiguration())
...
// Retrieve Ignite's shared RDD
val igniteRdd = igniteContext.fromCache("partitioned")
igniteRdd.saveValues(hiveDF.rdd)
More information is available in the Apache Ignite documentation.
You have to extend the Serializable interface.
object Test extends Serializable {
def WriteToIgnite(hiveDF:DataFrame,targetTable:String):Unit = {
???
}
}
I hope this resolves your problem.
I wanted to create a simple template wizard for NetBeans that would take an existing Java class from the current user project and create a new Java class from it. To do so, I need to access field and annotation data from the selected class (Java file).
Now, I used org.netbeans.api.java.source.ui.TypeElementFinder for finding and selecting the wanted class, but as a result I get an ElementHandle and I don't know what to do with it. How do I get class info from this?
I managed to get a TypeMirror (com.sun.tools.javac.code.Type$ClassType) using this code snippet:
Project project = ...; // from Templates.getProject(WizardDescriptor);
ClasspathInfo ci = ClasspathInfoFactory.infoFor(project, true, true, true);
final ElementHandle<TypeElement> element = TypeElementFinder.find(ci);
FileObject fo = SourceUtils.getFile(element, ci);
JavaSource.forFileObject(fo).runUserActionTask(new Task<CompilationController>() {
@Override
public void run(CompilationController p) throws Exception {
p.toPhase(JavaSource.Phase.RESOLVED);
TypeElement typeElement = element.resolve(p);
TypeMirror typeMirror = typeElement.asType();
}
}, true);
But what to do from here? Am I going about this the wrong way altogether?
EDIT:
In response to francesco foresti's post:
I tried loads of different reflection/classloader approaches. I got to a point where an org.netbeans.api.java.classpath.ClassPath instance was created and contained the wanted class file, but when I tried loading said class with a classloader created from that ClassPath, I got a ClassNotFoundException. Here's my code:
Project project = ...; // from Templates.getProject(WizardDescriptor);
ClasspathInfo ci = ClasspathInfoFactory.infoFor(project, true, true, true);
final ElementHandle<TypeElement> element = TypeElementFinder.find(ci);
FileObject fo = SourceUtils.getFile(element, ci);
ClassPath cp = ci.getClassPath(ClasspathInfo.PathKind.SOURCE);
System.out.println("NAME: " + element.getQualifiedName());
System.out.println("CONTAINS: " + cp.contains(fo));
try {
Class clazz = Class.forName(element.getQualifiedName(), true, cp.getClassLoader(true));
System.out.println(clazz.getName());
} catch (ClassNotFoundException ex) {
Exceptions.printStackTrace(ex);
}
This yields:
NAME: hr.test.Test
CONTAINS: true
SEVERE [org.openide.util.Exceptions]
java.lang.ClassNotFoundException: hr.test.Test
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at org.openide.execution.NbClassLoader.findClass(NbClassLoader.java:210)
at org.netbeans.api.java.classpath.ClassLoaderSupport.findClass(ClassLoaderSupport.java:113)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:344)
Why don't you use plain old reflection? E.g.:
Class<?> theClazz = Class.forName("com.example.myClass");
Annotation[] annotations = theClazz.getAnnotations();
Field[] fields = theClazz.getFields();
Method[] methods = theClazz.getMethods();
// from here on, you write a file that happens to be in the classpath,
// and whose extension is '.java'
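If reflection is not an option (as in the edit above, where the selected class may only exist as source in the user's project and cannot be class-loaded), the TypeElement already resolved in the question can be inspected directly with the standard javax.lang.model API. A minimal sketch of that approach, placed inside run(CompilationController p):
// Needs javax.lang.model.element.* and javax.lang.model.util.ElementFilter.
TypeElement typeElement = element.resolve(p);

// Class-level annotations
for (AnnotationMirror am : typeElement.getAnnotationMirrors()) {
    System.out.println("class annotation: " + am.getAnnotationType());
}

// Fields and their annotations
for (VariableElement field : ElementFilter.fieldsIn(typeElement.getEnclosedElements())) {
    System.out.println("field: " + field.getSimpleName() + " : " + field.asType());
    for (AnnotationMirror am : field.getAnnotationMirrors()) {
        System.out.println("  annotated with: " + am.getAnnotationType());
    }
}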
I'm running my tests using gradle testFlavorType
JSONObject jsonObject1 = new JSONObject();
JSONObject jsonObject2 = new JSONObject();
jsonObject1.put("test", "test");
jsonObject2.put("test", "test");
assertEquals(jsonObject1.get("test"), jsonObject2.get("test"));
The above test succeeds.
jsonObject = new SlackMessageRequest(channel, message).buildBody();
String channelAssertion = jsonObject.getString(SlackMessageRequest.JSON_KEY_CHANNEL);
String messageAssertion = jsonObject.getString(SlackMessageRequest.JSON_KEY_TEXT);
assertEquals(channel, channelAssertion);
assertEquals(message, messageAssertion);
But the two asserts above fail. The stack trace says that channelAssertion and messageAssertion are null, but I'm not sure why. My question is: why are these two asserts failing?
Below is the SlackMessageRequest.
public class SlackMessageRequest
extends BaseRequest {
// region Variables
public static final String JSON_KEY_TEXT = "text";
public static final String JSON_KEY_CHANNEL = "channel";
private String mChannel;
private String mMessage;
// endregion
// region Constructors
public SlackMessageRequest(String channel, String message) {
mChannel = channel;
mMessage = message;
}
// endregion
// region Methods
@Override
public MethodType getMethodType() {
return MethodType.POST;
}
@Override
public JSONObject buildBody() throws JSONException {
JSONObject body = new JSONObject();
body.put(JSON_KEY_TEXT, getMessage());
body.put(JSON_KEY_CHANNEL, getChannel());
return body;
}
@Override
public String getUrl() {
return "http://localhost:1337";
}
public String getMessage() {
return mMessage;
}
public String getChannel() {
return mChannel;
}
// endregion
}
Below is the stacktrace:
junit.framework.ComparisonFailure: expected:<#tk> but was:<null>
at junit.framework.Assert.assertEquals(Assert.java:100)
at junit.framework.Assert.assertEquals(Assert.java:107)
at junit.framework.TestCase.assertEquals(TestCase.java:269)
at com.example.app.http.request.SlackMessageRequestTest.testBuildBody(SlackMessageRequestTest.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:252)
at junit.framework.TestSuite.run(TestSuite.java:247)
at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:86)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:49)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
EDIT 5:55PM EST
I've figured out that I can log with System.out.println("") and then see the results by running gradle testFlavorType --debug, and by trial and error I've discovered the following weird situation:
@Override
public JSONObject buildBody() throws JSONException {
System.out.println("buildBody mChannel = " + mChannel);
System.out.println("buildBody mMessage = " + mMessage);
JSONObject body = new JSONObject();
body.put(JSON_KEY_TEXT, getMessage());
body.put(JSON_KEY_CHANNEL, getChannel());
if (body.length() != 0) {
Iterator<String> keys = body.keys();
if (keys.hasNext()) {
do {
String key = keys.next();
System.out.println("keys: " + key);
} while (keys.hasNext());
}
} else {
System.out.println("There are no keys????");
}
return body;
}
For some reason, "There are no keys????" is printing out?!?!?!?! Why?!
EDIT 6:20PM EST
I've figured out how to debug unit tests. According to the debugger, the assigned JSONObject is returning "null". I have no clue what this means (see below). Since I think this is relevant, my gradle file includes the following:
testOptions {
unitTests.returnDefaultValues = true
}
It's especially strange because if I construct a JSONObject inside the test, then everything works fine. But if it is part of the original application's code, then it doesn't work and does the above.
As Lucas says, JSON is bundled up with the Android SDK, so you are working with a stub.
The current solution is to pull JSON from Maven Central like this:
dependencies {
...
testImplementation 'org.json:json:20210307'
}
You can replace the version 20210307 with the latest one, depending on the Android API. It is not known which version of the Maven artifact corresponds exactly (or most closely) to what ships with Android.
Alternatively, you can download and include the jar:
dependencies {
...
testImplementation files('libs/json.jar')
}
Note that you also need Android Studio 1.1 or higher and build tools version 22.0.0 or above for this to work.
Related issue: #179461
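With that dependency on the test classpath, the buildBody() test from the question should run against the real org.json implementation. A minimal sketch of such a test (class and constant names are taken from the question; the channel and message values are just examples):
import org.json.JSONObject;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SlackMessageRequestTest {

    @Test
    public void buildBody_putsChannelAndText() throws Exception {
        // With the real org.json library, put()/getString() actually store and
        // return values instead of the stubbed android.jar methods returning defaults.
        JSONObject body = new SlackMessageRequest("#tk", "hello world").buildBody();
        assertEquals("#tk", body.getString(SlackMessageRequest.JSON_KEY_CHANNEL));
        assertEquals("hello world", body.getString(SlackMessageRequest.JSON_KEY_TEXT));
    }
}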
The class JSONObject is part of the Android SDK. That means it is not available for unit testing by default.
From http://tools.android.com/tech-docs/unit-testing-support
The android.jar file that is used to run unit tests does not contain
any actual code - that is provided by the Android system image on real
devices. Instead, all methods throw exceptions (by default). This is
to make sure your unit tests only test your code and do not depend on
any particular behaviour of the Android platform (that you have not
explicitly mocked e.g. using Mockito).
When you set the test options to
testOptions {
unitTests.returnDefaultValues = true
}
you are fixing the "Method ... not mocked." problem, but the outcome is that when your code calls new JSONObject() you are not using the real implementation; you are using a mock that doesn't do anything and just returns a default value. That's the reason the object is null.
You can find different ways of solving the problem in this question: Android methods are not mocked when using Mockito
Well, my first hunch would be that your getMessage() method returns null. You could show the body of that method in your question and have us find the answer for you, but you should probably learn how to debug Android applications using breakpoints.
That way you can run your code step by step and see the values of each variable at every step. That would show you your problem in no time, and it's a skill you should definitely master as soon as possible if you intend to get seriously involved in programming.