I'm trying to make a live BigQuery table creation step before the insert process itself. Here is the code of the PTransform that I'm using -> Link
I would like to apply this transform to Pub/Sub messages that will later be inserted into a BQ table.
Phase 1. Getting pubsub messages:
PCollection<PubsubMessage> messages =
pipeline.apply(
"ReadPubSubSubscription",
PubsubIO.readMessagesWithAttributes()
.fromSubscription(options.getInputSubscription()));
Phase 2. Convert all pubsub messages to TableRow:
PCollectionTuple convertedTableRows =
messages
.apply("ConvertMessageToTableRow", new PubsubMessageToTableRow(options));
Phase 3. Here is the problem: I need to check whether the table exists and upload the result to BQ:
### here is the schema for our BQ table
public static final Schema schema1 =
Schema.of(
Field.of("name", StandardSQLTypeName.STRING),
Field.of("post_abbr", StandardSQLTypeName.STRING));
### here is the method that we are using to extract the table name from the pubsub attributes
static class PubSubAttributeExtractor implements SerializableFunction<ValueInSingleWindow<TableRow>, String> {
private final String attribute;
public PubSubAttributeExtractor(String attribute) {
this.attribute = attribute;
}
@Override
public String apply(ValueInSingleWindow<TableRow> input) {
TableRow row = input.getValue();
String tableName = (String) row.get("name");
return "my-project:myDS.pubsub_" + tableName;
}
}
### here is the part that doesn't work
WriteResult writeResult = convertedTableRows.get(TRANSFORM_OUT)
.apply(new BigQueryAutoCreateTable(
new PubSubAttributeExtractor("event_name"), schema1))
.apply(
"WriteSuccessfulRecords",
BigQueryIO.writeTableRows()
.withoutValidation()
.withCreateDisposition(CreateDisposition.CREATE_NEVER)
.withWriteDisposition(WriteDisposition.WRITE_APPEND)
.withExtendedErrorInfo()
.withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
.withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
.to(new ProbPartitionDestinations(options.getOutputTableSpec())
)
);
Error logs:
cannot find symbol
symbol: method apply(java.lang.String,org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write<com.google.api.services.bigquery.model.TableRow>)
location: interface org.apache.beam.sdk.values.POutput
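For context, the compiler error says the first apply() produces a plain POutput, and POutput has no apply() method to chain onto. A minimal sketch of one way out, assuming the linked BigQueryAutoCreateTable (not shown here) can be declared to return its input collection so the write can be chained:

static class BigQueryAutoCreateTable
        extends PTransform<PCollection<TableRow>, PCollection<TableRow>> {

    private final SerializableFunction<ValueInSingleWindow<TableRow>, String> tableFn;
    private final Schema schema;

    BigQueryAutoCreateTable(
            SerializableFunction<ValueInSingleWindow<TableRow>, String> tableFn,
            Schema schema) {
        this.tableFn = tableFn;
        this.schema = schema;
    }

    @Override
    public PCollection<TableRow> expand(PCollection<TableRow> input) {
        // ... ensure the table exists here, as in the linked code ...
        return input; // passing the rows through lets the caller keep chaining apply()
    }
}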
I am attempting to query a table that has a partition key and a sort key; however, the partition key and sort key are 1:1, and I want to query using only the partition key (in which case only one item would be returned).
QueryRequest query = new QueryRequest()
.withTableName(TABLE_NAME)
.withKeyConditionExpression("testId = :" + "1234567890");
QueryResult result = client.query(query);
This is the code I tried, but it did not work (testId is the partition key name and 1234567890 is the partition key value in string form). Do you know of a method I could use to query using only the partition key, keeping in mind that only one item will be returned since the partition key and sort key are 1:1? Thank you in advance. (This is my first Stack Overflow post; my apologies if I worded things poorly. I'm happy to answer any questions about my wording.)
FYI: this is the error statement I got when trying to use the code above:
errorMessage": "Invalid KeyConditionExpression: An expression attribute value used in expression is not defined
You should really update to the AWS SDK for Java V2; using the V1 SDK is no longer best practice for the Amazon DynamoDB API.
To learn more about the V2 Java API, read the Developer Guide here:
Developer guide - AWS SDK for Java 2.x
Now I will answer this question with V2. The solution that worked for me was to create a secondary index named year-index. This index uses just my partition key, year (and does not use the sort key).
I can successfully query using this index, as I verified in the AWS Management Console.
Now only movies with the year 2014 are returned. That is how you query when your table has a composite key made up of a partition key and a sort key and you only want to query on the partition key.
By the way, you said you have a secondary index; a table can have more than one secondary index.
Code that you need for V2 to query a secondary index
I will show you three ways to use V2 to query a secondary index.
First way - Use the V2 Enhanced Client
Once you create the secondary index, you can use it to query. As mentioned, I created a secondary index named year-index. I can use this secondary index to query data with the DynamoDB Enhanced Client.
Because I am querying the Movies table, I have to create a class named Movies, like this. Notice the use of the @DynamoDbSecondaryPartitionKey annotation.
package com.example.dynamodb;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbBean;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbPartitionKey;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbSecondaryPartitionKey;
import software.amazon.awssdk.enhanced.dynamodb.mapper.annotations.DynamoDbSortKey;
@DynamoDbBean
public class Movies {
private String title;
private int year;
private String info;
@DynamoDbSecondaryPartitionKey(indexNames = { "year-index" })
@DynamoDbPartitionKey
public int getYear() {
return this.year;
}
public void setYear(int year) {
this.year = year;
}
@DynamoDbSortKey
public String getTitle() {
return this.title;
}
public void setTitle(String title) {
this.title = title;
}
public String getInfo() {
return this.info;
}
public void setInfo(String info) {
this.info = info;
}
}
Finally, here is the V2 code that lets you query using the secondary index.
package com.example.dynamodb;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.pagination.sync.SdkIterable;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbEnhancedClient;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbIndex;
import software.amazon.awssdk.enhanced.dynamodb.DynamoDbTable;
import software.amazon.awssdk.enhanced.dynamodb.Key;
import software.amazon.awssdk.enhanced.dynamodb.TableSchema;
import software.amazon.awssdk.enhanced.dynamodb.model.Page;
import software.amazon.awssdk.enhanced.dynamodb.model.QueryConditional;
import software.amazon.awssdk.enhanced.dynamodb.model.QueryEnhancedRequest;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import java.util.List;
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*
* To get an item from an Amazon DynamoDB table using the AWS SDK for Java V2, it's better practice to use the
* Enhanced Client; see the EnhancedGetItem example.
*
* Create the Movies table by running the Scenario example and loading the Movies data from the JSON file. Next create a secondary
* index for the Movies table that uses only the year column. Name the index **year-index**. For more information, see:
*
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
*/
public class EnhancedGetItemUsingIndex {
public static void main(String[] args) {
String tableName = "Movies"; // args[0];
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
DynamoDbClient ddb = DynamoDbClient.builder()
.credentialsProvider(credentialsProvider)
.region(region)
.build();
queryIndex(ddb, tableName);
ddb.close();
}
public static void queryIndex(DynamoDbClient ddb, String tableName) {
try {
// Create a DynamoDbEnhancedClient and use the DynamoDbClient object.
DynamoDbEnhancedClient enhancedClient = DynamoDbEnhancedClient.builder()
.dynamoDbClient(ddb)
.build();
// Create a DynamoDbTable object based on Movies.
DynamoDbTable<Movies> table = enhancedClient.table("Movies", TableSchema.fromBean(Movies.class));
String dateVal = "2013";
// Use the year-index secondary index of the table.
DynamoDbIndex<Movies> secIndex = table.index("year-index");
AttributeValue attVal = AttributeValue.builder()
.n(dateVal)
.build();
// Create a QueryConditional object that's used in the query operation.
QueryConditional queryConditional = QueryConditional
.keyEqualTo(Key.builder().partitionValue(attVal)
.build());
// Get items in the table.
SdkIterable<Page<Movies>> results = secIndex.query(
QueryEnhancedRequest.builder()
.queryConditional(queryConditional)
.limit(300)
.build());
//Display the results.
results.forEach(page -> {
List<Movies> allMovies = page.items();
for (Movies myMovies: allMovies) {
System.out.println("The movie title is " + myMovies.getTitle() + ". The year is " + myMovies.getYear());
}
});
} catch (DynamoDbException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
}
This now returns all Movies where the year is 2013.
Second way - Use the V2 Service Client
package com.example.dynamodb;
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.DynamoDbException;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;
import java.util.HashMap;
import java.util.Map;
/**
* Before running this Java V2 code example, set up your development environment, including your credentials.
*
* For more information, see the following documentation topic:
*
* https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*
* Create the Movies table by running the Scenario example and loading the Movies data from the JSON file. Next create a secondary
* index for the Movies table that uses only the year column. Name the index **year-index**. For more information, see:
*
* https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html
*/
public class QueryItemsUsingIndex {
public static void main(String[] args) {
String tableName = "Movies"; // args[0];
ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
Region region = Region.US_EAST_1;
DynamoDbClient ddb = DynamoDbClient.builder()
.credentialsProvider(credentialsProvider)
.region(region)
.build();
queryIndex(ddb, tableName);
ddb.close();
}
public static void queryIndex(DynamoDbClient ddb, String tableName) {
try {
Map<String,String> expressionAttributesNames = new HashMap<>();
expressionAttributesNames.put("#year","year");
Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
expressionAttributeValues.put(":yearValue", AttributeValue.builder().n("2013").build());
QueryRequest request = QueryRequest.builder()
.tableName(tableName)
.indexName("year-index")
.keyConditionExpression("#year = :yearValue")
.expressionAttributeNames(expressionAttributesNames)
.expressionAttributeValues(expressionAttributeValues)
.build();
System.out.println("=== Movie Titles ===");
QueryResponse response = ddb.query(request);
response.items()
.forEach(movie-> System.out.println(movie.get("title").s()));
} catch (DynamoDbException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
}
Third way - Use PartiQL
Of course, you can query the partition key using PartiQL. For example:
public static void queryTable(DynamoDbClient ddb) {
String sqlStatement = "SELECT * FROM MoviesPartiQ where year = ? ORDER BY info";
try {
List<AttributeValue> parameters = new ArrayList<>();
AttributeValue att1 = AttributeValue.builder()
.n("2013")
.build();
parameters.add(att1);
// Get items in the table and write out the ID value.
ExecuteStatementResponse response = executeStatementRequest(ddb, sqlStatement, parameters);
System.out.println("ExecuteStatement successful: "+ response.toString());
} catch (DynamoDbException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
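The executeStatementRequest helper is referenced above but not shown; a minimal sketch of what it presumably does with the V2 client:

private static ExecuteStatementResponse executeStatementRequest(
        DynamoDbClient ddb, String statement, List<AttributeValue> parameters) {
    // Wrap the PartiQL statement and its positional parameters in a request.
    ExecuteStatementRequest request = ExecuteStatementRequest.builder()
            .statement(statement)
            .parameters(parameters)
            .build();
    return ddb.executeStatement(request);
}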
I have a problem trying to access a Pub/Sub message's attributes.
The error message is the following:
Coder of type class org.apache.beam.sdk.coders.SerializableCoder has a #structuralValue method which does not return true when the encoding of the elements is equal.
stackTrace: [org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage.getAttribute(PubsubMessage.java:56),
transform1$3.processElement(transform1.java:37),
transform1$3$DoFnInvoker.invokeProcessElement(Unknown Source),
org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:218),
org.apache.beam.repackaged.direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:183),
org.apache.beam.repackaged.direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78),
org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:216),
org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:54),
org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160), org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124),
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511),
java.util.concurrent.FutureTask.run(FutureTask.java:266),
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149),
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624),
java.lang.Thread.run(Thread.java:748)]
I'm using the Dataflow Eclipse SDK to run the pipeline locally with the Direct Runner:
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
<scope>runtime</scope>
</dependency>
The line of code which produces the error is this:
String fieldId = c.element().getAttribute("evId");
The full code of the PTransform is the following:
public class transform1 extends DoFn<PubsubMessage, Event> {
public static TupleTag<ErrorHandler> failuresTag=new TupleTag<ErrorHandler>(){};
public static TupleTag<Event> validTag = new TupleTag<Event>(){};
public static PCollectionTuple process(PCollection<PubsubMessage> logStrings)
{
return logStrings.apply("Create PubSub objects", ParDo.of(new DoFn<PubsubMessage, Event>()
{
@ProcessElement
public void processElement(ProcessContext c)
{
try
{
Event event = new Event();
String fieldId = c.element().getAttribute("evId");
event.evId = "asa"; // this line is just a test to set a value
c.output(event);
<...>
I have seen a similar question, but I'm not sure how I could fix it.
The main pipeline code (if needed):
public static PipelineResult run(Options options) {
Pipeline pipeline = Pipeline.create(options);
/*
* Step 1: Read from PubSub
*/
PCollection<PubsubMessage> messages = null;
if (options.getUseSubscription()) {
messages = pipeline.apply("ReadPubSubSubscription", PubsubIO.readMessagesWithAttributes()
.fromSubscription(options.getInputSubscription()).withIdAttribute("messageId"));
} else {
messages = pipeline.apply("ReadPubSubTopic", PubsubIO.readMessagesWithAttributes()
.fromTopic(options.getInputTopic()).withIdAttribute("messageId"));
}
/*
* Step 2: Transform PubSubMessage to Event
*/
PCollectionTuple eventCollections = transform1.process(messages);
PubSub message:
{ "evId":"id", "payload":"payload" }
I also tried it as:
"{ "evId":"id", "payload":"payload" }"
That is how I published the message in Pub/Sub to test the pipeline.
After making more tests, the way I was publishing to Pub/Sub seems to have been the source of the error: if I add it as an attribute instead of in the message body, the problem disappears.
The reason was that I was trying to access an attribute here:
String fieldId = c.element().getAttribute("evId");
But when I was sending the message through the Pub/Sub dashboard I didn't add any attributes, and that caused the whole pipeline to crash.
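For anyone hitting the same thing, here is a minimal sketch of publishing with evId as an attribute via the google-cloud-pubsub client (project and topic names are placeholders):

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

public class PublishWithAttribute {
    public static void main(String[] args) throws Exception {
        Publisher publisher = Publisher.newBuilder(
                TopicName.of("my-project", "my-topic")).build();
        PubsubMessage message = PubsubMessage.newBuilder()
                .setData(ByteString.copyFromUtf8("{ \"payload\":\"payload\" }"))
                .putAttributes("evId", "id") // read in the DoFn via getAttribute("evId")
                .build();
        publisher.publish(message).get(); // block until the publish completes
        publisher.shutdown();
    }
}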
Connected to azure-cosmosdb and able to fire default queries like findAll() and findById(String id). But I can't write a native query using the @Query annotation: the code is not considering it and always derives the query from the name of the function in the repository class/interface. I need a way to fire a custom or native query against Azure Cosmos DB.
I tried the @Query annotation, but it is not working:
List<MonitoringSessions> findBySessionID(@Param("sessionID") String sessionID);
@Query(nativeQuery = true, value = "SELECT * FROM MonitoringSessions M WHERE M.sessionID like :sessionID")
List<MonitoringSessions> findSessions(@Param("sessionID") String sessionID);
findBySessionID() is working as expected; findSessions() is not. The root error below came up while running the code:
Caused by: org.springframework.data.mapping.PropertyReferenceException: No property findSessions found for type MonitoringSessions
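As a side note: the JPA-style nativeQuery attribute has no effect with Cosmos. Recent versions of azure-spring-data-cosmos ship their own @Query annotation (com.azure.spring.data.cosmos.repository.Query), where parameters are bound with @ in Cosmos SQL. A hedged sketch, assuming that dependency is available:

// Sketch only: requires the azure-spring-data-cosmos @Query, not the JPA one.
@Query(value = "SELECT * FROM c WHERE c.sessionID = @sessionID")
List<MonitoringSessions> findSessions(@Param("sessionID") String sessionID);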
Thanks for the response. I got exactly what I wanted from the link below; credit goes to the author of that page.
https://cosmosdb.github.io/labs/java/technical_deep_dive/03-querying_the_database_using_sql.html
public class Program {
private final ExecutorService executorService;
private final Scheduler scheduler;
private AsyncDocumentClient client;
private final String databaseName = "UniversityDatabase";
private final String collectionId = "StudentCollection";
private int numberOfDocuments;
public Program() {
// public constructor
executorService = Executors.newFixedThreadPool(100);
scheduler = Schedulers.from(executorService);
client = new AsyncDocumentClient.Builder().withServiceEndpoint("uri")
.withMasterKeyOrResourceToken("key")
.withConnectionPolicy(ConnectionPolicy.GetDefault()).withConsistencyLevel(ConsistencyLevel.Eventual)
.build();
}
public static void main(String[] args) throws InterruptedException, JSONException {
FeedOptions options = new FeedOptions();
// as this is a multi collection enable cross partition query
options.setEnableCrossPartitionQuery(true);
// note that setMaxItemCount sets the number of items to return in a single page
// result
options.setMaxItemCount(5);
String sql = "SELECT TOP 5 s.studentAlias FROM coll s WHERE s.enrollmentYear = 2018 ORDER BY s.studentAlias";
Program p = new Program();
Observable<FeedResponse<Document>> documentQueryObservable = p.client
.queryDocuments("dbs/" + p.databaseName + "/colls/" + p.collectionId, sql, options);
// observable to an iterator
Iterator<FeedResponse<Document>> it = documentQueryObservable.toBlocking().getIterator();
while (it.hasNext()) {
FeedResponse<Document> page = it.next();
List<Document> results = page.getResults();
// here we iterate over all the items in the page result
for (Document doc : results) {
System.out.println(doc);
}
}
}
}
I'm facing a problem in which I don't get results from my query in Flink SQL.
I have some information stored in two Kafka topics; I want to store it in two tables and perform a join between them in a streaming fashion.
These are my Flink instructions:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
// configure Kafka consumer
Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092"); // Broker default host:port
props.setProperty("group.id", "flink-consumer"); // Consumer group ID
FlinkKafkaConsumer011<Blocks> flinkBlocksConsumer = new FlinkKafkaConsumer011<>(args[0], new BlocksSchema(), props);
flinkBlocksConsumer.setStartFromEarliest();
FlinkKafkaConsumer011<Transactions> flinkTransactionsConsumer = new FlinkKafkaConsumer011<>(args[1], new TransactionsSchema(), props);
flinkTransactionsConsumer.setStartFromEarliest();
DataStream<Blocks> blocks = env.addSource(flinkBlocksConsumer);
DataStream<Transactions> transactions = env.addSource(flinkTransactionsConsumer);
tableEnv.registerDataStream("blocksTable", blocks);
tableEnv.registerDataStream("transactionsTable", transactions);
Here is my SQL query:
Table sqlResult
= tableEnv.sqlQuery(
"SELECT block_timestamp,count(tx_hash) " +
"FROM blocksTable " +
"JOIN transactionsTable " +
"ON blocksTable.block_hash=transactionsTable.tx_hash " +
"GROUP BY blocksTable.block_timestamp");
DataStream<Test> resultStream = tableEnv
.toRetractStream(sqlResult,Row.class)
.map(t -> {
Row r = t.f1;
String field2 = r.getField(0).toString();
long count = Long.valueOf(r.getField(1).toString());
return new Test(field2,count);
})
.returns(Test.class);
Then, I print the results:
resultStream.print();
But I don't get any results; my program is stuck...
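One thing worth double-checking from the snippet alone: a streaming job only starts once the environment is executed, so after resultStream.print() there should be a call like:

env.execute("kafka-join-job"); // the job name is arbitrary; nothing runs until execute()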
For the schema used for serialization and deserialization, here is my test class, which stores the result of my query (two fields, a string and a long, for the block_timestamp and the count respectively):
public class TestSchema implements DeserializationSchema<Test>, SerializationSchema<Test> {
@Override
public Test deserialize(byte[] message) throws IOException {
return Test.fromString(new String(message));
}
@Override
public boolean isEndOfStream(Test nextElement) {
return false;
}
@Override
public byte[] serialize(Test element) {
return element.toString().getBytes();
}
@Override
public TypeInformation<Test> getProducedType() {
return TypeInformation.of(Test.class);
}
}
The same principle applies to the BlocksSchema and TransactionsSchema classes.
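For completeness, the Test class itself is not shown; Flink treats it as a POJO only if it has a public no-argument constructor and public (or getter/setter) fields. A rough sketch of the shape assumed by the map function above (field names are guesses):

public class Test {
    public String field2; // block_timestamp
    public long count;    // count(tx_hash)

    public Test() {} // Flink POJO rule: public no-arg constructor

    public Test(String field2, long count) {
        this.field2 = field2;
        this.count = count;
    }

    public static Test fromString(String s) {
        // naive inverse of toString(); assumes the "field2,count" format below
        String[] parts = s.split(",");
        return new Test(parts[0], Long.parseLong(parts[1]));
    }

    @Override
    public String toString() {
        return field2 + "," + count;
    }
}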
Do you know why I can't get the result of my query? Should I test with a BatchExecutionEnvironment?
I am facing an issue (getting a null response) when I am trying to query in Java using Spring Data MongoDB.
I need to query based on the placed timestamp range plus the release ffmCenterDesc and relStatus.
My document is as follows:
<ordersAuditRequest>
<ordersAudit>
<createTS>2013-04-19 12:19:17.165</createTS>
<orderSnapshot>
<orderId>43060151</orderId>
<placedTS>2013-04-19 12:19:17.165</placedTS>
<releases>
<ffmCenterDesc>TW</ffmCenterDesc>
<relStatus>d </relStatus>
</releases>
</ordersAudit>
</ordersAuditRequest>
I am using the following query, but it returns null:
Query query = new Query();
query.addCriteria(Criteria.where("orderSnapshot.releases.ffmCenterDesc").is(ffmCenterDesc)
.and("orderSnapshot.releases.relStatus").is(relStatus)
.andOperator(
Criteria.where("orderSnapshot.placedTS").gt(orderPlacedStart),
Criteria.where("orderSnapshot.placedTS").lt(orderPlacedEnd)
)
);
I can't reproduce your problem, which suggests that the issue is a mismatch between the values in the database and the values you're passing into the query. This is not unusual when you're trying to match dates: you need to make sure they're stored as ISODate in the database and queried using java.util.Date in the query.
I have a test that shows your query working, but I've made a number of assumptions about your data.
My test looks like this, hopefully this will help point you in the correct direction, or if you give me more feedback I can re-create your problem more accurately.
@Test
public void shouldBeAbleToQuerySpringDataWithDates() throws Exception {
// Setup - insert test data into the DB
SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd' 'hh:mm:ss.SSS");
MongoTemplate mongoTemplate = new MongoTemplate(new Mongo(), "TheDatabase");
// cleanup old test data
mongoTemplate.getCollection("ordersAudit").drop();
Release release = new Release("TW", "d");
OrderSnapshot orderSnapshot = new OrderSnapshot(43060151, dateFormat.parse("2013-04-19 12:19:17.165"), release);
OrdersAudit ordersAudit = new OrdersAudit(dateFormat.parse("2013-04-19 12:19:17.165"), orderSnapshot);
mongoTemplate.save(ordersAudit);
// Create and run the query
Date from = dateFormat.parse("2013-04-01 01:00:05.000");
Date to = dateFormat.parse("2014-04-01 01:00:05.000");
Query query = new Query();
query.addCriteria(Criteria.where("orderSnapshot.releases.ffmCenterDesc").is("TW")
.and("orderSnapshot.releases.relStatus").is("d")
.andOperator(
Criteria.where("orderSnapshot.placedTS").gt(from),
Criteria.where("orderSnapshot.placedTS").lt(to)
)
);
// Check the results
List<OrdersAudit> results = mongoTemplate.find(query, OrdersAudit.class);
Assert.assertEquals(1, results.size());
}
public class OrdersAudit {
private Date createdTS;
private OrderSnapshot orderSnapshot;
public OrdersAudit(final Date createdTS, final OrderSnapshot orderSnapshot) {
this.createdTS = createdTS;
this.orderSnapshot = orderSnapshot;
}
}
public class OrderSnapshot {
private long orderId;
private Date placedTS;
private Release releases;
public OrderSnapshot(final long orderId, final Date placedTS, final Release releases) {
this.orderId = orderId;
this.placedTS = placedTS;
this.releases = releases;
}
}
public class Release {
String ffmCenterDesc;
String relStatus;
public Release(final String ffmCenterDesc, final String relStatus) {
this.ffmCenterDesc = ffmCenterDesc;
this.relStatus = relStatus;
}
}
Notes:
This is a TestNG class, not JUnit.
I've used SimpleDateFormat to create Java Date classes, this is just for ease of use.
The XML value you pasted for relStatus included spaces, which I have stripped.
You showed us the document structure in XML, not JSON, so I've had to assume what your data looks like. I've translated it almost directly into JSON, so it looks like this in the database:
{
"_id" : ObjectId("51d689843004ec60b17f50de"),
"_class" : "OrdersAudit",
"createdTS" : ISODate("2013-04-18T23:19:17.165Z"),
"orderSnapshot" : {
"orderId" : NumberLong(43060151),
"placedTS" : ISODate("2013-04-18T23:19:17.165Z"),
"releases" : {
"ffmCenterDesc" : "TW",
"relStatus" : "d"
}
}
}
You can find out what yours really looks like by doing a db.<collectionName>.findOne() call in the MongoDB shell.
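If you'd rather check from Java than the shell, the same inspection works through the template (a sketch using the collection name from the test above):

// Print the first stored document to verify field names and types,
// e.g. that placedTS is an ISODate rather than a string.
DBObject first = mongoTemplate.getCollection("ordersAudit").findOne();
System.out.println(first);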