How to acknowledge the delete method of Morphia (MongoDB) - Java

In the Morphia Datastore there is a delete method, and it works fine. My doubt is: how can we get confirmation that the method actually deleted the data? Since delete() returns a WriteResult, which method of WriteResult should we use to be able to say the data has been deleted?
In my case I am using a REST web service, and in a REST web service we have to send HTTP responses like 200, 400, 500, etc. So when using the delete method, I need to know whether the data was really deleted. How can we achieve this?
Example:
int deleteMongoObject(MongoDataObject mongoDataObject) {
    Datastore datastore = MorphiaDatastoreTransaction.getDatastore(MongoDataObject.class);
    datastore.delete(mongoDataObject);
    if (success) { // success should be the acknowledgment of the delete method
        return 200;
    } else {
        return 403;
    }
}

delete() returns a WriteResult that shows the number of documents affected; the n field should list the number of documents deleted.
Also, as you can see in the Morphia Datastore implementation, the default WriteConcern is ACKNOWLEDGED unless you specify something else, for example via an annotation on your model.
And the description of ACKNOWLEDGED:
Write operations that use this write concern will wait for acknowledgement from the primary server before returning. Exceptions are raised for network issues, and server errors.
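
A minimal sketch of that check, assuming classic Morphia with the legacy MongoDB driver, where WriteResult.getN() exposes the n field (the class names follow the question, and returning 404 for "nothing deleted" is just one reasonable choice):

int deleteMongoObject(MongoDataObject mongoDataObject) {
    Datastore datastore = MorphiaDatastoreTransaction.getDatastore(MongoDataObject.class);
    WriteResult result = datastore.delete(mongoDataObject);
    if (result.getN() > 0) { // at least one document was actually removed
        return 200;
    } else {
        return 404; // nothing matched the delete, so nothing was deleted
    }
}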

Related

How to find the status of records loaded when forcefully interrupting batch execution by stopping the MSSQL database

We are implementing connection/flush retry logic for the database, with auto-commit=true:
RetryPolicy retryPolicy = new RetryPolicy()
        .retryOn(DataAccessException.class)
        .withMaxRetries(maxRetry)
        .withDelay(retryInterval, TimeUnit.SECONDS);

result = Failsafe.with(retryPolicy)
        .onFailure(throwable -> LOG.warn("Flush failure, will not retry. {} {}",
                throwable.getClass().getName(), throwable.getMessage()))
        .onRetry(throwable -> LOG.warn("Flush failure, will retry. {} {}",
                throwable.getClass().getName(), throwable.getMessage()))
        .get(cntx -> batch.execute());
We want to interrupt the storing, updating, inserting, and deleting of records by stopping the MSSQL DB service in the backend. At some point, even if we get an org.jooq.exception.DataAccessException, some of the records in the batch (a subset of the batch) have already been loaded into the DB.
Is there any way to find the failed and the successfully loaded records using the jOOQ API?
The jOOQ API cannot help you here out of the box, because such functionality is definitely out of scope for the relatively low-level jOOQ API, which helps you write type-safe embedded SQL. It does not make any assumptions about your business logic or infrastructure logic.
Ideally, you will run your own diagnostic here. For example, you already have a BATCHID column, which should make it possible to detect which records were inserted/updated by which process. When you re-run the batch, you need to detect that you've already attempted this batch, remember the previous BATCHID, and fetch the IDs of the previous attempt to do whatever needs to be done prior to a re-run.
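
For illustration, a hedged jOOQ sketch of that diagnostic; ctx is a jOOQ DSLContext, and the RECORDS table, its ID and BATCHID columns, and previousBatchId are hypothetical names standing in for your actual schema:

// Fetch the IDs written by the earlier, partially successful attempt,
// so they can be cleaned up or skipped before the re-run.
List<Long> previouslyLoadedIds = ctx
        .select(RECORDS.ID)
        .from(RECORDS)
        .where(RECORDS.BATCHID.eq(previousBatchId))
        .fetch(RECORDS.ID);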

How to send emails from a Java EE Batch Job

I have a requirement to process a large list of users daily and send them email and SMS notifications based on some scenario. I am using the Java EE batch processing model for this. My job XML is as follows:
<step id="sendNotification">
    <chunk item-count="10" retry-limit="3">
        <reader ref="myItemReader"></reader>
        <processor ref="myItemProcessor"></processor>
        <writer ref="myItemWriter"></writer>
        <retryable-exception-classes>
            <include class="java.lang.IllegalArgumentException"/>
        </retryable-exception-classes>
    </chunk>
</step>
MyItemReader's open() method reads all users from the database, and readItem() reads one user at a time using a list iterator. In myItemProcessor the actual email notification is sent to the user, and then the users are persisted in the database by the myItemWriter class for that chunk.
import java.io.Serializable;
import java.util.Iterator;
import java.util.List;
import javax.batch.api.chunk.AbstractItemReader;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class MyItemReader extends AbstractItemReader {

    private Iterator<User> iterator = null;
    private User lastUser;

    @Inject
    private MyService service;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        super.open(checkpoint);
        List<User> users = service.getUsers();
        iterator = users.iterator();
        if (checkpoint != null) {
            User checkpointUser = (User) checkpoint;
            System.out.println("Checkpoint Found: " + checkpointUser.getUserId());
            while (iterator.hasNext() && !iterator.next().getUserId().equals(checkpointUser.getUserId())) {
                System.out.println("skipping already read users ... ");
            }
        }
    }

    @Override
    public Object readItem() throws Exception {
        User user = null;
        if (iterator.hasNext()) {
            user = iterator.next();
            lastUser = user;
        }
        return user;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return lastUser;
    }
}
My problem is that the checkpoint stores the last record that was executed in the previous chunk. If I have a chunk with the next 10 users, and an exception is thrown in myItemProcessor for the 5th user, then on retry the whole chunk will be executed and all 10 users will be processed again. I don't want notifications to be sent again to the already processed users.
Is there a way to handle this? How should this be done efficiently?
Any help would be highly appreciated.
Thanks.
I'm going to build on the comments from @cheng. My credit to him here, and hopefully my answer provides additional value in organizing and presenting the options usefully.
Answer: Queue up messages for another MDB to get dispatched to send emails
Background:
As @cheng pointed out, a failure means the entire transaction is rolled back, and the checkpoint doesn't advance.
So how to deal with the fact that your chunk has sent emails to some users but not all? (You might say it rolled back but with "side effects".)
So we could restate your question then as: How to send email from a batch chunk step?
Well, assuming you had a way to send emails through a transactional API (implementing XAResource, etc.), you could use that API.
Assuming you don't, I would do a transactional write to a JMS queue, and then send the emails with a separate MDB (as @cheng suggested in one of his comments).
Suggested Alternative: Use ItemWriter to send messages to JMS queue, then use separate MDB to actually send the emails
With this approach you still gain efficiency by batching the processing and the updates to your DB (you were only sending the emails one at a time anyway), and you can benefit from simple checkpointing and restart without having to write complicated error handling.
This is also likely to be reusable as a pattern across batch jobs and outside of batch even.
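
A rough sketch of that writer, assuming JMS 2.0 on Java EE 7; the EmailQueue lookup and the choice to enqueue just the user ID are illustrative assumptions, not part of the original job:

import java.util.List;
import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemWriter;
import javax.inject.Inject;
import javax.inject.Named;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Named
public class MyItemWriter extends AbstractItemWriter {

    @Inject
    private JMSContext jmsContext;

    @Resource(lookup = "java:/jms/queue/EmailQueue") // hypothetical queue name
    private Queue emailQueue;

    @Override
    public void writeItems(List<Object> items) throws Exception {
        for (Object item : items) {
            User user = (User) item;
            // Persist the user as before; the DB write and the enqueue share the
            // chunk transaction, so a rollback also discards the queued messages.
            jmsContext.createProducer().send(emailQueue, user.getUserId());
        }
    }
}

An MDB listening on EmailQueue then performs the actual email send, outside the chunk transaction.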
Other alternatives
Some other ideas that I don't think are as good, listed for the sake of discussion:
Add batch application logic tracking users emailed (with ItemProcessListener)
You could build your own list of successful and/or failed emails using the ItemProcessListener methods afterProcess and onProcessError.
On restart, then, you could know which users had already been emailed in the current chunk, to which we are re-positioned since the entire chunk rolled back, even though some emails have already been sent.
This certainly complicates your batch logic, and you also have to persist this success or failure list somehow. Plus this approach is probably highly specific to this job (as opposed to queuing up for an MDB to process).
But it's simpler in that you have a single batch job without the need for a messaging provider and a separate app component.
If you go this route you might want to use a combination of both a skippable and a "no-rollback" retryable exception.
single-item chunk
If you define your chunk with item-count="1", then you avoid complicated checkpointing and error-handling code. You sacrifice efficiency, though, so this would only make sense if the other aspects of batch were very compelling: e.g., scheduling and management of jobs through a common interface, and the ability to restart a job at its failing step.
If you were to go this route, you might want to consider defining socket and timeout exceptions as "no-rollback" exceptions (using <no-rollback-exception-classes>), since there's nothing to be gained from rolling back, and you might want to retry on a network timeout issue.
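A minimal sketch of such a chunk definition; the exception class shown is illustrative:

<chunk item-count="1">
    <reader ref="myItemReader"></reader>
    <processor ref="myItemProcessor"></processor>
    <writer ref="myItemWriter"></writer>
    <no-rollback-exception-classes>
        <include class="java.net.SocketTimeoutException"/>
    </no-rollback-exception-classes>
</chunk>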
Since you specifically mentioned efficiency, I'm guessing this is a bad fit for you.
use a Transaction Synchronization
This could work perhaps, but the batch API doesn't especially make this easy, and you still could have a case where the chunk completes but one or more email sends fail.
Your current item processor is doing something outside the chunk transaction scope, which has caused the application state to be out of sync. If your requirement is to send out emails only after all items in a chunk have successfully completed, then you can move the emailing part to an ItemWriteListener.afterWrite(items).

Verify Azure Table SAS-based credentials

I'm looking for a simple way to verify an arbitrary Azure Table connection string that uses a SAS, such as the one below, using the Azure Storage Java SDK:
https://example.table.core.windows.net/example?sig=aaabbbcccdddeeefffggghhh%3D&se=2020-01-01T00%3A00%3A00Z&sv=2015-04-05&tn=example&sp=raud
I tried a bunch of different methods exposed by the CloudTable API, but none of them works:
CloudTable.exists() throws a StorageException, regardless of whether the credentials are valid
getName(), getStorageUri(), getUri(), and other getters - all work locally, regardless of the credentials
getServiceClient().downloadServiceProperties() and getServiceClient().getServiceStats() also throw various exceptions, while getServiceClient().getEndpoint() and getServiceClient().getCredentials() and others always work locally
Why don't I just query the Table for a row or two? Well, in many cases I need to verify a SAS that gives only write or update permissions (without delete or read permissions), and I do not want to execute a statement that changes something in the table just to check the credentials.
To answer your questions:
CloudTable.exists() throws a StorageException, regardless of whether the credentials are valid
I believe there's a bug in the SDK when using this method with a SAS token. I remember running into the same issue some time back.
getName(), getStorageUri(), getUri(), and other getters - all work locally, regardless of the credentials
These will work, as they don't make a network call. They simply use the data available to them in the different instance variables and return it.
getServiceClient().downloadServiceProperties() and getServiceClient().getServiceStats() also throw various exceptions, while getServiceClient().getEndpoint() and getServiceClient().getCredentials() and others always work locally.
In order for getServiceClient().someMethod() to work with SAS, you would need an Account SAS instead of a Service SAS (which is what you're using right now).
Why don't I just query the Table for a row or two? Well, in many cases I need to verify a SAS that gives only write or update permissions (without delete or read permissions), and I do not want to execute a statement that changes something in the table just to check the credentials.
One possible way to check the validity of a SAS token for write operations is to perform a write operation that you know will fail with an error. For example, you can try to insert an entity which is already there; in this case, you should get a Conflict (409) error. Another thing you could try is to perform an optimistic write by specifying some random ETag value and checking for a Precondition Failed (412) error. If you get a 403 or 404 error, that would indicate there's something wrong with your SAS token.
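
A hedged sketch of that optimistic-write probe, assuming the legacy com.microsoft.azure.storage SDK; table is a CloudTable built from the SAS connection string, and the probe keys and fake ETag are arbitrary:

import com.microsoft.azure.storage.StorageException;
import com.microsoft.azure.storage.table.DynamicTableEntity;
import com.microsoft.azure.storage.table.TableOperation;

// A probe entity with an ETag that should never match a real one.
DynamicTableEntity probe = new DynamicTableEntity("sasProbePk", "sasProbeRk");
probe.setEtag("W/\"datetime'2000-01-01T00:00:00.000Z'\"");
try {
    table.execute(TableOperation.replace(probe));
} catch (StorageException e) {
    int code = e.getHttpStatusCode();
    if (code == 412 || code == 409) {
        // Precondition Failed / Conflict: authentication succeeded,
        // so the SAS token is valid for write operations.
    } else if (code == 403 || code == 404) {
        // Something is wrong with the SAS token or its permissions.
    }
}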

How to get database updates using servlets or JSP

What I want is to get database updates. That is, if any change occurs in the database or a new record is inserted, it should notify the user.
Up to now, what I have implemented uses jQuery, as shown below:
$(document).ready(function() {
    var updateInterval = setInterval(function() {
        $('#chat').load('Db.jsp?elect=<%=emesg%>');
    }, 1000);
});
It worked fine for me, but my teacher told me that it's not a good approach and recommended using Comet or long-polling technology.
Can anyone give me examples of getting database updates using Comet or long polling in servlets/JSP? I'm using Tomcat as the server.
Just taking a shot in the dark, since I don't know your exact environment... You could have a database trigger fire a call to a servlet each time a row is committed, which would then run some code like the following.
First, get the script sessions that are active for the page we want to update; this eliminates the need to check every reverse-ajax script session running on the site. Once we have the script sessions, we can use the second code block to take some data and update a table on the client side. All the second code section does is send JavaScript to the client, to be executed via the open reverse-ajax connection.
String page = ServerContextFactory.get().getContextPath() + "/reverseajax/clock.html";
Browser.withPage(page, new Runnable() {
    public void run() {
        Util.setValue("clockDisplay", output);
    }
});

// Creates a new Person bean.
Person person = new Person(true);

// Creates a multi-dimensional array, containing a row and the row's column data.
String[][] data = {
    {person.getId(), person.getName(), person.getAddress(), person.getAge()+"", person.isSuperhero()+""}
};

// Call DWR's util which adds rows into a table. peopleTable is the id of the tbody,
// and data contains the row/column data.
Util.addRows("peopleTable", data);
Note that both of the above sections of code are pulled straight from the documentation examples at http://directwebremoting.org/dwr-demo/. These are only simple examples of how reverse ajax can send data to the client, but your exact situation seems to depend more on how you receive the notification than on how you update the client screen.
Without some type of database notification to the Java code, I think you will have to poll the system at set intervals. You could make the system a little more efficient, even when polling, by verifying that there are active reverse-ajax script sessions for the page before polling the database for info.
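
A hedged sketch of that gate, assuming DWR's ServerContext API; pollDatabaseAndPushUpdates() is a hypothetical helper that queries the DB and pushes results over reverse ajax:

import java.util.Collection;
import org.directwebremoting.ScriptSession;
import org.directwebremoting.ServerContext;
import org.directwebremoting.ServerContextFactory;

ServerContext serverContext = ServerContextFactory.get();
String page = serverContext.getContextPath() + "/reverseajax/clock.html";

// Only hit the database when someone is actually viewing the page.
Collection<ScriptSession> sessions = serverContext.getScriptSessionsByPage(page);
if (!sessions.isEmpty()) {
    pollDatabaseAndPushUpdates(); // hypothetical helper
}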

Avoiding multiple calls when consuming a web service

I have a task where a user consumes XML from a third party. The XML feed is only updated once a day. The XML is stored in a database and returned to the user when requested. If the XML is not in the database, then it is retrieved from the third party, stored in the database and returned to the user. All subsequent requests will simply read the XML from the database.
Now my question. Say it takes 10 seconds for the request to the third party to return. In this period, there are multiple server calls for the same data. I don't want each of these to fire off requests to the third party, and I don't want the user to receive nothing or an error. They should probably wait for the first request to complete, at which point the XML would be available. This is a relatively simple problem, but I want to know the best way of catering for it.
Do I just use a simple flag to control requests, or maybe something like a semaphore? Are there better solutions based on the stack I intend to use, which is the Play framework and a Cassandra backend? Is there something I could do with callbacks or triggers?
By the way, I need to lazy-load the data when the first request comes in, so in this task it isn't an option to fetch the data in a separate process or when the app starts...
Thanks
All you need to do is create a separate component that is responsible for getting the XML from the third party and saving it to the database.
In your code, the various threads try to "fetch" the XML from this component.
This component returns the XML from the database if it exists. If it does not exist, you use a ReentrantLock to synchronize.
So you do a tryLock, and only one of your threads succeeds; the rest are blocked. When the lock is released, the other threads are unblocked, but the XML has already been fetched from the third party and stored in the database by the thread that first managed to gain the lock. So the other threads just return the XML from the DB.
Example code (this is just "pseudo code" to get you started. You should handle exceptions etc., but the main skeleton can be used. Do NOT forget to unlock in a finally so that your code does not block indefinitely):
import java.util.concurrent.locks.ReentrantLock;

private final ReentrantLock globalLock = new ReentrantLock();

public String getXML() {
    String xml = getXMLFromDatabase();
    if (xml == null) {
        if (globalLock.tryLock()) {
            try {
                xml = getXMLFromThirdParty();
                storeXMLToDatabase(xml);
            } finally {
                globalLock.unlock(); // ok! got XML and stored it in the DB. Wake up the others!
            }
        } else {
            try { // Another thread got the lock and will do the query. Just wait on the lock!
                globalLock.lock();
            } finally {
                // woken up, but the XML has already been fetched
                xml = getXMLFromDatabase();
                globalLock.unlock();
            }
        }
    }
    return xml;
}
