Firestore not updating values in embedded collections - Java

Here is the parent node: a document in the users collection. Inside it I have a clients subcollection.
I'm trying to update the lastName of the first client.
I have the correct IDs; I've checked multiple times. I'm using Java. Here is the code:
public void updateClient(ClientDto client) {
    if (client.getUserId() == null) {
        throw new IllegalArgumentException("userId cannot be null when" +
                " updating a client");
    }
    // Fetched but never used below
    final ClientDto clientById = this.getClientById(client.getClientId(), client.getUserId());
    Map<String, Object> data = new HashMap<>();
    data.put("name", client.getName());
    data.put("lastName", client.getLastName());
    data.put("middleName", client.getMiddleName());
    data.put("email", client.getEmail());
    data.put("phone", client.getPhone());
    data.put("lastUpdatedBy", client.getLastUpdatedBy());
    data.put("lastUpdatedDate", client.getLastUpdatedDate());
    // update() is asynchronous: it returns an ApiFuture that is never awaited here
    this.fireDao.getDb()
        .collection("users").document(client.getUserId())
        .collection("clients")
        .document(client.getClientId()).update(data);
}
this.fireDao.getDb() is an instance of Firestore, and I'm able to perform all other operations.

Here is what worked:
final ApiFuture<WriteResult> update = this.fireDao.getDb()
        .collection("users").document(client.getUserId())
        .collection("clients")
        .document(client.getClientId()).update(data);
try {
    log.info(format(
            "Updating userId %s for customer id %s at time %s",
            client.getUserId(),
            client.getClientId(),
            update.get().getUpdateTime()));
} catch (InterruptedException | ExecutionException e) {
    log.error(format(
            "Error updating userId %s for customer id %s",
            client.getUserId(),
            client.getClientId()), e);
}
I think this line did it, but I'm not sure why I need it:
update.get().getUpdateTime()
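What that line does: update() doesn't write synchronously; it returns an ApiFuture<WriteResult> and performs the write in the background. Calling update.get() blocks until the server acknowledges the write, so the method can no longer return (and the process can no longer exit) before the write actually lands, and any failure surfaces as an ExecutionException. If blocking is undesirable, a callback works as well; a minimal sketch, assuming the same fireDao, client, data and log objects as above:

ApiFuture<WriteResult> update = this.fireDao.getDb()
        .collection("users").document(client.getUserId())
        .collection("clients").document(client.getClientId())
        .update(data);

// Non-blocking alternative to update.get(): requires
// com.google.api.core.ApiFutures and Guava's MoreExecutors
ApiFutures.addCallback(update, new ApiFutureCallback<WriteResult>() {
    @Override
    public void onSuccess(WriteResult result) {
        log.info(format("Updated client %s at %s", client.getClientId(), result.getUpdateTime()));
    }

    @Override
    public void onFailure(Throwable t) {
        log.error(format("Update failed for client %s", client.getClientId()), t);
    }
}, MoreExecutors.directExecutor());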

Related

How to give a DynamoDB update expression (if_not_exists) in TransactionWriteRequest

I am new to DynamoDB and working on a Dynamo project. I am trying to update an item's amount in a transaction, with an if_not_exists() condition, using TransactionWriteRequest in DynamoDBMapper.
As per the docs, transactionWriteRequest.updateItem() takes a DynamoDBTransactionWriteExpression, which doesn't have any UpdateExpression. The class definition is attached below.
I wanted to know how I can provide if_not_exists() in a DynamoDBTransactionWriteExpression to update the item in a transaction, or whether there is no way to do this in a transactional write.
Please help here.
Thanks in advance
Judging from the snippet you shared, it seems you are using Java SDK v1. Below is a code snippet which combines one UpdateItem and one PutItem in a single TransactWriteItems request.
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
final String ORDER_TABLE_NAME = "test1";

/*
 * Update item with condition
 */
HashMap<String, AttributeValue> myPk = new HashMap<>();
myPk.put("pk", new AttributeValue("pkValue1"));

Map<String, AttributeValue> expressionAttributeValues = new HashMap<>();
expressionAttributeValues.put(":new_status", new AttributeValue("SOLD"));

// if_not_exists(createdAt, :new_status) evaluates to the existing createdAt
// value when that attribute is present, and to :new_status otherwise
Update markItemSold = new Update()
        .withTableName(ORDER_TABLE_NAME)
        .withKey(myPk)
        .withUpdateExpression("SET ProductStatus = if_not_exists(createdAt, :new_status)")
        .withExpressionAttributeValues(expressionAttributeValues)
        .withReturnValuesOnConditionCheckFailure(ReturnValuesOnConditionCheckFailure.ALL_OLD);

/*
 * Put item
 */
HashMap<String, AttributeValue> orderItem = new HashMap<>();
orderItem.put("pk", new AttributeValue("pkValue2"));
orderItem.put("OrderTotal", new AttributeValue("100"));

Put createOrder = new Put()
        .withTableName(ORDER_TABLE_NAME)
        .withItem(orderItem)
        .withReturnValuesOnConditionCheckFailure(ReturnValuesOnConditionCheckFailure.ALL_OLD);

/*
 * Transaction
 */
Collection<TransactWriteItem> actions = Arrays.asList(
        new TransactWriteItem().withUpdate(markItemSold),
        new TransactWriteItem().withPut(createOrder));

TransactWriteItemsRequest placeOrderTransaction = new TransactWriteItemsRequest()
        .withTransactItems(actions)
        .withReturnConsumedCapacity(ReturnConsumedCapacity.TOTAL);

try {
    client.transactWriteItems(placeOrderTransaction);
    System.out.println("Transaction Successful");
} catch (ResourceNotFoundException rnf) {
    System.err.println("One of the tables involved in the transaction was not found: " + rnf.getMessage());
} catch (InternalServerErrorException ise) {
    System.err.println("Internal Server Error: " + ise.getMessage());
} catch (TransactionCanceledException tce) {
    System.out.println("Transaction Canceled: " + tce.getMessage());
} catch (AmazonServiceException e) {
    System.out.println(e.getMessage());
}
With the v2 version of the SDK you can do it like this:
var table = enhancedClient.table(<table name>, TableSchema.fromClass(DynamoEntity.class));

var transactWriteItemsEnhancedRequest = TransactWriteItemsEnhancedRequest
        .builder()
        .addUpdateItem(table,
                TransactUpdateItemEnhancedRequest.builder(LoadTestEntity.class)
                        .item(<entity>)
                        .conditionExpression(Expression.builder().expression("attribute_not_exists(ID)").build())
                        .build())
        .build();

enhancedClient.transactWriteItems(transactWriteItemsEnhancedRequest);
You might need to play around with the expression builder; I haven't tested it.
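Note that the v2 snippet above uses a condition expression (attribute_not_exists), not if_not_exists() inside an update expression; as far as I know, the enhanced client derives its updates from the item bean and exposes no raw UpdateExpression. If you specifically need if_not_exists() with v2, one option is to drop to the low-level v2 client for that call. A minimal, untested sketch; the table name test1 and key pk mirror the v1 example above:

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItem;
import software.amazon.awssdk.services.dynamodb.model.TransactWriteItemsRequest;
import software.amazon.awssdk.services.dynamodb.model.Update;
import java.util.Map;

DynamoDbClient client = DynamoDbClient.create();

// The low-level Update model accepts a raw update expression, so
// if_not_exists() is available here
Update markItemSold = Update.builder()
        .tableName("test1")
        .key(Map.of("pk", AttributeValue.builder().s("pkValue1").build()))
        .updateExpression("SET ProductStatus = if_not_exists(ProductStatus, :new_status)")
        .expressionAttributeValues(Map.of(":new_status", AttributeValue.builder().s("SOLD").build()))
        .build();

client.transactWriteItems(TransactWriteItemsRequest.builder()
        .transactItems(TransactWriteItem.builder().update(markItemSold).build())
        .build());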

HBase client using a different user for read and write

I am using hbase client 2.1.7 to connect to my server(same version 2.1.7).
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.1.7</version>
Now there is a user who has permission to read/write on the table on the server.
User = LTzm#yA$U
For this my code looks like this:
String hadoop_user_key = "HADOOP_USER_NAME";
String user = "LTzm#yA$U";
System.setProperty(hadoop_user_key, user);
Now when I am trying to read a key from the table, I am getting the following error:
error.log:! Causing:
org.apache.hadoop.hbase.security.AccessDeniedException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions for user 'LTzm' (table=table_name, action=READ)
The weird part is that writes are working fine. To validate whether the right user is getting passed for writes, I removed the user and reran the code; the write failed with this error:
error.log:! org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions (user=LTzm#yA$U,
scope=table_name, family=d:visitId,
params=[table=table_name,family=d:visitId],action=WRITE)
Again, the read was failing with:
error.log:! org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions for user 'LTzm'
(table=table_name, action=READ)
Somehow LTzm is getting passed on the read call while LTzm#yA$U is getting passed on the write.
Can anyone help me figure out the issue here? Are # or other special symbols not allowed in an HBase user name (and if so, how is it working for the write calls)?
Edit 1:
Here is the function that creates the connection:
public static Connection createConnection() throws IOException {
    String hadoop_user_key = "HADOOP_USER_NAME";
    String user = "LTzm#yA$U";
    Map<String, String> configMap = new HashMap<>();
    configMap.put("hbase.rootdir", "hdfs://session/apps/hbase/data");
    configMap.put("hbase.zookeeper.quorum", "ip1, ip2");
    configMap.put("zookeeper.znode.parent", "/hbase");
    configMap.put("hbase.rpc.timeout", "400");
    configMap.put("hbase.rpc.shortoperation.timeout", "400");
    configMap.put("hbase.client.meta.operation.timeout", "5000");
    configMap.put("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
    configMap.put("hbase.client.retries.number", "3");
    configMap.put("hbase.client.operation.timeout", "3000");
    configMap.put(HConstants.HBASE_CLIENT_IPC_POOL_SIZE, "30");
    configMap.put("hbase.client.pause", "50");
    configMap.put("hbase.client.pause.cqtbe", "1000");
    configMap.put("hbase.client.max.total.tasks", "500");
    configMap.put("hbase.client.max.perserver.tasks", "50");
    configMap.put("hbase.client.max.perregion.tasks", "10");
    configMap.put("hbase.client.ipc.pool.type", "RoundRobinPool");
    configMap.put("hbase.rpc.read.timeout", "200");
    configMap.put("hbase.rpc.write.timeout", "200");
    configMap.put("hbase.client.write.buffer", "20971520");
    System.setProperty(hadoop_user_key, user);
    Configuration hConfig = HBaseConfiguration.create();
    for (String key : configMap.keySet())
        hConfig.set(key, configMap.get(key));
    UserGroupInformation.setConfiguration(hConfig);
    Connection hbaseConnection = ConnectionFactory.createConnection(hConfig);
    return hbaseConnection;
}
Here are the read and write calls:
protected Result read(String tableName, String rowKey) throws IOException {
    Get get = new Get(Bytes.toBytes(rowKey));
    get.addFamily(COLUMN_FAMILY_BYTES);
    Result res;
    Table hTable = null;
    try {
        hTable = getHbaseTable(tableName);
        res = hTable.get(get);
    } finally {
        if (hTable != null) {
            releaseHbaseTable(hTable);
        }
    }
    return res;
}
protected void writeRow(String tableName, String rowKey, Map<String, byte[]> columnData) throws IOException {
    Put cellPut = new Put(Bytes.toBytes(rowKey));
    for (String qualifier : columnData.keySet()) {
        cellPut.addColumn(COLUMN_FAMILY_BYTES, Bytes.toBytes(qualifier), columnData.get(qualifier));
    }
    Table hTable = null;
    try {
        hTable = getHbaseTable(tableName);
        if (hTable != null) {
            hTable.put(cellPut);
        }
    } finally {
        if (hTable != null) {
            releaseHbaseTable(hTable);
        }
    }
}
private Table getHbaseTable(String tableName) {
    try {
        return hbaseConnection.getTable(TableName.valueOf(tableName));
    } catch (IOException e) {
        LOGGER.error("Exception while adding table in factory.", e);
        return null;
    }
}

Why doesn't JPA save data immediately?

I want to save data and then check for it after calling the save method, but the value is not present within the same request.
I have two methods that depend on each other; they communicate with each other via Kafka.
The first method saves the data with JPA and then calls the second method.
The second method looks the record up from the database using JPA and checks the Optional with isPresent(), but it can't find the data that was just saved. After this request finishes, I can find the data.
It throws a NoSuchElementException.
I tried several ways, like:
1- using flush and saveAndFlush
2- a sleep of 10000 ms
3- using an EntityManager with @Transactional
but none of them worked.
I want to show you my two methods from the code.
I have a producer and a consumer, and this is the saveOrder method (the first method).
Note: the first method still shows all the ways I tried.
@PersistenceContext
private EntityManager entityManager;

@Transactional
public void saveOrder(Long branchId, AscOrderDTO ascOrderDTO) throws Exception {
    ascOrderDTO.validation();
    if (ascOrderDTO.getId() == null) {
        ascOrderDTO.setCreationDate(Instant.now());
        ascOrderDTO.setCreatedBy(SecurityUtils.getCurrentUserLogin().get());
        // add user
        ascOrderDTO.setStoreId(null);
        String currentUser = SecurityUtils.getCurrentUserLogin().get();
        AppUser appUser = appUserRepository.findByLogin(currentUser);
        ascOrderDTO.setAppUserId(appUser.getId());
    }
    log.debug("Request to save AscOrder : {}", ascOrderDTO);
    AscOrder ascOrder = ascOrderMapper.toEntity(ascOrderDTO);
    // send notify to branch
    if (!branchService.orderOk()) {
        throw new BadRequestAlertException("branch not accept order", "check order with branch", "branch");
    }
    ascOrder = ascOrderRepository.save(ascOrder);
    /*
     * log.debug("start sleep"); Thread.sleep(10000); log.debug("end sleep");
     */
    entityManager.setFlushMode(FlushModeType.AUTO);
    entityManager.flush();
    entityManager.clear();
    //ascOrderRepository.flush();
    try {
        producerOrder.addOrder(branchId, ascOrder.getId(), true);
        stateMachineHandler.stateMachine(OrderEvent.EMPTY, ascOrder.getId());
        stateMachineHandler.handling(ascOrder.getId());
        //return ascOrderMapper.toDto(ascOrder);
    } catch (Exception e) {
        // TODO: handle exception
        ascOrderRepository.delete(ascOrder);
        throw new BadRequestAlertException("cannot deliver order to Branch", "try agine", "Try!");
    }
}
This line goes to the producer:
producerOrder.addOrder(branchId, ascOrder.getId(), true);
And this is my producer:
public void addOrder(Long branchId, Long orderId, Boolean isAccept) throws Exception {
    ObjectMapper obj = new ObjectMapper();
    try {
        Map<String, String> map = new HashMap<>();
        map.put("branchId", branchId.toString());
        map.put("orderId", orderId.toString());
        map.put("isAccept", isAccept.toString());
        kafkaTemplate.send("orderone", obj.writeValueAsString(map));
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}
And this line goes to the consumer:
kafkaTemplate.send("orderone", obj.writeValueAsString(map));
This is my consumer:
@KafkaListener(topics = "orderone", groupId = "groupId")
public void processAddOrder(String mapping) throws Exception {
    try {
        log.debug("i am in consumer add Order");
        ObjectMapper mapper = new ObjectMapper();
        Map<String, String> result = mapper.readValue(mapping, HashMap.class);
        branchService.acceptOrder(Long.parseLong(result.get("branchId")),
                Long.parseLong(result.get("orderId")),
                Boolean.parseBoolean(result.get("isAccept")));
        log.debug(result.toString());
    } catch (Exception e) {
        throw new Exception(e.getMessage());
    }
}
And this call goes to acceptOrder (the second method):
branchService.acceptOrder(Long.parseLong(result.get("branchId")),
        Long.parseLong(result.get("orderId")),
        Boolean.parseBoolean(result.get("isAccept")));
This is my second method:
public AscOrderDTO acceptOrder(Long branchId, Long orderId, boolean acceptable) throws Exception {
    ascOrderRepository.flush();
    try {
        if (branchId == null || orderId == null || !acceptable) {
            throw new BadRequestAlertException("URl invalid query", "URL", "Check your Input");
        }
        if (!branchRepository.findById(branchId).isPresent() || !ascOrderRepository.findById(orderId).isPresent()) {
            throw new BadRequestAlertException("cannot find branch or Order", "URL", "Check your Input");
        }
        /*
         * if (acceptable) { ascOrder.setStatus(OrderStatus.PREPARING); } else {
         * ascOrder.setStatus(OrderStatus.PENDING); }
         */
        Branch branch = branchRepository.findById(branchId).get();
        AscOrder ascOrder = ascOrderRepository.findById(orderId).get();
        ascOrder.setDiscount(50.0);
        branch.addOrders(ascOrder);
        branchRepository.save(branch);
        log.debug("///////////////////////////////Add order sucess////////////////////////////////////////////////");
        return ascOrderMapper.toDto(ascOrder);
    } catch (Exception e) {
        // TODO: handle exception
        throw new Exception(e.getMessage());
    }
}
Adding Thread.sleep() inside saveOrder makes no sense.
processAddOrder executes on a completely different thread, with a completely different persistence context. All the while, your transaction from saveOrder might still be ongoing, with none of its changes visible to other transactions.
Try splitting saveOrder into a transactional method plus a separate notification step, making sure that the transaction has committed before the event handling has a chance to take place; a minimal sketch of that split follows below.
(Note that this approach introduces at-most-once semantics. You have been warned.)
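Something along these lines, assuming hypothetical bean names (OrderService, OrderFlow) around the repositories and producer from the question; a sketch, not the poster's code:

@Service
public class OrderService {

    private final AscOrderRepository ascOrderRepository;
    private final AscOrderMapper ascOrderMapper;

    public OrderService(AscOrderRepository ascOrderRepository, AscOrderMapper ascOrderMapper) {
        this.ascOrderRepository = ascOrderRepository;
        this.ascOrderMapper = ascOrderMapper;
    }

    @Transactional // commits when this method returns
    public Long saveOrderTransactional(Long branchId, AscOrderDTO dto) {
        AscOrder ascOrder = ascOrderMapper.toEntity(dto);
        return ascOrderRepository.save(ascOrder).getId();
    }
}

@Service
public class OrderFlow {

    private final OrderService orderService;
    private final ProducerOrder producerOrder;

    public OrderFlow(OrderService orderService, ProducerOrder producerOrder) {
        this.orderService = orderService;
        this.producerOrder = producerOrder;
    }

    // Deliberately NOT @Transactional: by the time addOrder() sends the
    // Kafka message, the save transaction has committed, so the consumer's
    // findById() can see the new row.
    public void saveAndNotify(Long branchId, AscOrderDTO dto) throws Exception {
        Long orderId = orderService.saveOrderTransactional(branchId, dto);
        producerOrder.addOrder(branchId, orderId, true);
    }
}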

How to make Vertx MongoClient operation synchronous yet not blocking event loop in Java?

I am trying to save a new document to MongoDB using the Vertx MongoClient as follows:
MongoDBConnection.mongoClient.save("booking", query, res -> {
    if (res.succeeded()) {
        documentID = res.result();
        System.out.println("MongoDB inserted successfully. Document ID is: " + documentID);
    } else {
        System.out.println("MongoDB insertion failed.");
    }
});
if (documentID != null) {
    // MongoDB document insertion successful. Reply with a booking ID
    String resMsg = "A confirmed booking has been successfully created with booking id as " + documentID +
            ". An email has also been triggered to the shared email id " + emailID;
    documentID = null;
    return new JsonObject().put("fulfillmentText", resMsg);
} else {
    // return intent response
    documentID = null;
    return new JsonObject().put("fulfillmentText",
            "There is some issues while booking the shipment. Please start afreash.");
}
The above code successfully writes the query JsonObject to the MongoDB collection booking. However, the function that contains this code always returns "There is some issues while booking the shipment. Please start afreash."
This is probably happening because the MongoClient save() handler "res" is asynchronous. But I want to return conditional responses based on whether the save() operation succeeded or failed.
How do I achieve this in Vert.x with Java?
Your assumption is correct: you don't wait for the async response from the database. What you can do is wrap it in a Future, like this:
public Future<JsonObject> save() {
    Future<JsonObject> future = Future.future();
    MongoDBConnection.mongoClient.save("booking", query, res -> {
        if (res.succeeded()) {
            documentID = res.result();
            if (documentID != null) {
                System.out.println("MongoDB inserted successfully. Document ID is: " + documentID);
                String resMsg = "A confirmed booking has been successfully created with booking id as " + documentID +
                        ". An email has also been triggered to the shared email id " + emailID;
                future.complete(new JsonObject().put("fulfillmentText", resMsg));
            } else {
                future.complete(new JsonObject().put("fulfillmentText",
                        "There is some issues while booking the shipment. Please start afreash."));
            }
        } else {
            System.out.println("MongoDB insertion failed.");
            future.fail(res.cause());
        }
    });
    return future;
}
Then I assume you have an endpoint that eventually calls this, e.g.:
router.route("/book").handler(this::addBooking);
...then you can call the save method and serve a different response based on the result:
public void addBooking(RoutingContext ctx) {
    save().setHandler(h -> {
        if (h.succeeded()) {
            ctx.response().end(h.result().encode()); // end() needs a String/Buffer, so encode the JsonObject
        } else {
            ctx.response().setStatusCode(500).end(h.cause().getMessage());
        }
    });
}
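A side note if you are on Vert.x 4.x: the no-argument Future.future() used above was removed there in favour of Promise. The same save() would look like this (a sketch using the same mongoClient and query as above):

public Future<JsonObject> save() {
    Promise<JsonObject> promise = Promise.promise();
    MongoDBConnection.mongoClient.save("booking", query, res -> {
        if (res.succeeded()) {
            // complete with whatever JsonObject the caller should receive
            promise.complete(new JsonObject().put("documentID", res.result()));
        } else {
            promise.fail(res.cause());
        }
    });
    return promise.future();
}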
You can use RxJava 2 and a reactive Mongo client (io.vertx.reactivex.ext.mongo.MongoClient).
Here is a code snippet:
Deployer
public class Deployer extends AbstractVerticle {
    private static final Logger logger = getLogger(Deployer.class);

    @Override
    public void start(Future<Void> startFuture) {
        DeploymentOptions options = new DeploymentOptions().setConfig(config());
        JsonObject mongoConfig = new JsonObject()
                .put("connection_string",
                        String.format("mongodb://%s:%s@%s:%d/%s",
                                config().getString("mongodb.username"),
                                config().getString("mongodb.password"),
                                config().getString("mongodb.host"),
                                config().getInteger("mongodb.port"),
                                config().getString("mongodb.database.name")));
        MongoClient client = MongoClient.createShared(vertx, mongoConfig);
        RxHelper.deployVerticle(vertx, new BookingsStorage(client), options)
                .subscribe(e -> {
                    logger.info("Successfully Deployed");
                    startFuture.complete();
                }, error -> {
                    logger.error("Failed to Deploy", error);
                    startFuture.fail(error);
                });
    }
}
BookingsStorage
public class BookingsStorage extends AbstractVerticle {
    private MongoClient mongoClient;

    public BookingsStorage(MongoClient mongoClient) {
        this.mongoClient = mongoClient;
    }

    @Override
    public void start() {
        var eventBus = vertx.eventBus();
        eventBus.consumer("GET_ALL_BOOKINGS_ADDRESS", this::getAllBookings);
    }

    private void getAllBookings(Message msg) {
        mongoClient.rxFindWithOptions("GET_ALL_BOOKINGS_COLLECTION", new JsonObject(), sortByDate())
                .subscribe(bookings -> {
                    // do something with bookings
                    msg.reply(bookings);
                },
                error -> {
                    fail(msg, error);
                });
    }

    private void fail(Message msg, Throwable error) {
        msg.fail(500, "An unexpected error occurred: " + error.getMessage());
    }

    private FindOptions sortByDate() {
        return new FindOptions().setSort(new JsonObject().put("date", 1));
    }
}
HttpRouterVerticle
// inside a router handler:
vertx.eventBus().rxSend("GET_ALL_BOOKINGS_ADDRESS", new JsonObject())
        .subscribe(bookings -> {
            // do something with bookings
        },
        e -> {
            // handle error
        });

How to know GigaSpaces is connected in a Java application at startup

I am using a Spring application, and my GigaSpaces client connects at startup. I do not get any exception if GigaSpaces is down.
@Override
public void onContextRefreshed(ContextRefreshedEvent event) {
    String gigaSpaceURL = null;
    LOGGER.info("({}) initializing gigaspaces client", getName());
    try {
        initGSProxy();
        Iterator<Map.Entry<ConfiguredSpace, Space>> entries = spaces.entrySet().iterator();
        while (entries.hasNext()) {
            Map.Entry<ConfiguredSpace, Space> entry = entries.next();
            LOGGER.info("({}) initialing space- key=" + entry.getKey() + ", value = " + entry.getValue(),
                    getName());
            // TODO : Need to verify Boolean Value Input
            gigaspace.createSpace(entry.getKey().name(), entry.getValue().getURL(), false);
            gigaSpaceURL = entry.getValue().getURL();
        }
    } catch (Exception e) {
        return;
    }
    GenericUtil.updateLogLevel("INFO", "com.renovite.ripps.ap.gs.Spaces");
    LOGGER.info("\n************************************\nConnected with Gigaspace successfully:URL:" + gigaSpaceURL
            + "\n************************************\n");
    GenericUtil.updateLogLevel("ERROR", "com.renovite.ripps.ap.gs.Spaces");
}
Take a reference to the space by using the getGigaSpace() method, which takes a space key as an argument. If it throws an exception at runtime, it means the application is not able to connect to the specified GigaSpaces URL.
Or, more elegantly: in your GigaSpaces proxy class (which actually implements IGigaspace), override the getGigaSpace() method so that it returns null if a connection is not possible.
/** The spaces, keyed by space key. */
private transient Map<String, Space> spaces = new HashMap<>();

@Override
public GigaSpace getGigaSpace(String spaceKey) {
    if (spaces.get(spaceKey) != null) {
        return spaces.get(spaceKey).getGigaSpace();
    }
    return null;
}
spaces is a map of all the URLs that are registered with GigaSpaces. If none is registered, the method above returns null.
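With that null-returning override in place, a startup check becomes a simple null test. A minimal sketch; the gigaspaceProxy field and the "mySpace" key are hypothetical names, not from the original post:

@Override
public void onContextRefreshed(ContextRefreshedEvent event) {
    // Hypothetical startup probe using the null-returning getGigaSpace()
    GigaSpace space = gigaspaceProxy.getGigaSpace("mySpace");
    if (space == null) {
        // Fail fast instead of starting with a dead space connection
        throw new IllegalStateException("Could not connect to GigaSpaces at startup");
    }
    LOGGER.info("GigaSpaces connection verified at startup");
}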