I'm currently working with some legacy code in which transactions are managed manually.
Consider the following (example for illustrative purposes only, ignore syntax/design issues):
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;
private PlatformTransactionManager transactionManager;
private DefaultTransactionDefinition txDefRequired = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRED);
private DefaultTransactionDefinition txDefRequiresNew = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
void createPosts(Map<String, String> posts){
TransactionStatus status = transactionManager.getTransaction(txDefRequiresNew);
try {
// posts Map is made up for username:posttext key pair
for (Map.Entry<String, String> entry : posts.entrySet()) {
// Construct online post
createPost(entry.getKey(), entry.getValue());
}
transactionManager.commit(status);
} finally {
if (!status.isCompleted()) {
transactionManager.rollback(status);
}
}
}
void createPost(String username, String text){
TransactionStatus status = transactionManager.getTransaction(txDefRequired);
// Construct online post
OnlinePost p = new OnlinePost(text);
User u = resolveUser(username);
p.setUser(u);
transactionManager.commit(status); // flush the session
}
User resolveUser(String username){
TransactionStatus status = transactionManager.getTransaction(txDefRequired);
User u = getUser(username);
if(u == null)
u = createUser(username);
transactionManager.commit(status); // flush the session
return u;
}
Would it be correct to say that in this particular flow, tx.commit() inside the called methods, such as resolveUser(), only serves to flush the session (flush-mode = auto) and to return a new persistent entity with an artificial primary key generated and assigned, so that the entity can be used later in the same transaction?
No. A commit will invariably end the unit of work and push the updates to the database. A flush, on the other hand, does not commit the changes to the database (as in making them visible to everyone); you could technically roll back a flushed change.
See https://stackoverflow.com/a/14626510/2231632 for a possible duplicate.
As for the artificial primary key you're after, Session.flush() will get it generated and assigned; just don't commit the transaction if you want to keep using that key within the same transaction.
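For illustration, here is a minimal sketch of the flush-based approach (assuming a Hibernate Session named session bound to the current transaction; OnlinePost stands in for any entity with a generated id):
TransactionStatus status = transactionManager.getTransaction(txDefRequired);
OnlinePost post = new OnlinePost("some text");
session.persist(post); // schedules the INSERT in the current unit of work
session.flush(); // executes the INSERT; the generated id is assigned now
Serializable id = session.getIdentifier(post); // usable as a foreign key later in this txn
transactionManager.commit(status); // only now do the changes become visible to others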
Related
Tried to follow the high-low game pattern, but somehow DynamoDB isn't picking up any insertions.
I'm using a response interceptor that saves some session data into DynamoDB under a certain condition, as follows:
@Override
public void process(HandlerInput handlerInput, Optional<Response> optional) {
var sessionAttributes = handlerInput.getAttributesManager().getSessionAttributes();
var sessionState = (String) sessionAttributes.get(Constants.STATE);
var oldState = (String)sessionAttributes.get(Constants.STATE_OLD);
// Update persistent storage whenever there's a change
if (!oldState.equals(sessionState)) {
handlerInput.getAttributesManager().setPersistentAttributes(new HashMap<>(){{
put(Constants.STATE, sessionState);
}});
sessionAttributes.put(Constants.STATE_OLD, sessionState);
logger.info("Persistent state updated");
}
logger.info("SaveSessionInterceptor called");
}
And in the main stream handler:
private static Skill getSkill() {
logger.info("Skill Stream Handler Initialized");
return Skills.standard()
.withSkillId(config.ALEXA_SKILL_ID)
.addRequestHandlers(
// handlers
)
.addResponseInterceptor(new SaveSessionInterceptor())
// Add persistent storage support
.withAutoCreateTable(true)
.withTableName("AudioAdventure")
.build();
}
On AWS, the table has been created, but it just isn't receiving insertions through the persistent attributes.
Maybe I've been doing something wrong.
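One likely culprit (an assumption on my part, since the posted interceptor never saves): in the ASK SDK, setPersistentAttributes(...) only stages the values in memory, and nothing is written to DynamoDB until savePersistentAttributes() is called. A minimal sketch of the interceptor with that call added:
@Override
public void process(HandlerInput handlerInput, Optional<Response> optional) {
// ... same staging logic as above ...
handlerInput.getAttributesManager().setPersistentAttributes(new HashMap<>() {{
put(Constants.STATE, sessionState);
}});
handlerInput.getAttributesManager().savePersistentAttributes(); // performs the actual DynamoDB write
}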
I am creating a new state in the flow and then trying to consume that state by using it as an input. But every time the result shows the state as unconsumed, even though I provided the previous state in the transaction's input.
public SignedTransaction call() throws FlowException {
//------------------------------------------------------------------------------------------------------------
// STEP-1:
// FIRST FLOW MUST CREATE THE NEW STATE WHICH HAS NO INPUT ( THIS WILL CREATE NEW RECORD-ANCHOR WITH LINEARID )
//
//------------------------------------------------------------------------------------------------------------
// We retrieve the notary identity from the network map.
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
// We create the transaction components.
AnchorState outputState = new AnchorState(ownerId, contentHash, description, classid, timestamp, expiry, getOurIdentity(), otherParty, new UniqueIdentifier());
//required signers
List<PublicKey> requiredSigners = Arrays.asList(getOurIdentity().getOwningKey(),otherParty.getOwningKey());
//send create command with required signer signatures as below
Command command = new Command<>(new AnchorStateContract.Commands.CreateRecAnchorCmd(), requiredSigners);
// We create a transaction builder and add the components.
TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addOutputState(outputState, AnchorStateContract.ID)
.addCommand(command);
// Verifying the transaction.
txBuilder.verify(getServiceHub());
// Signing the transaction.
SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Creating a session with the other party.
FlowSession otherPartySession = initiateFlow(otherParty);
// Obtaining the counterparty's signature.
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.Companion.tracker()));
//notarized transaction
SignedTransaction notarizedTransaction = subFlow(new FinalityFlow(fullySignedTx, otherPartySession));
//------------------------------------------------------------------------------------------------------------
// STEP-2:
// SINCE NOW WE HAVE A NEW UNCONSUMED RECORD-ANCHOR SO WE MUST MAKE IT CONSUMED ( BY USING THE PREVIOUS OUTPUT AS AN INPUT)
//
//------------------------------------------------------------------------------------------------------------
StateAndRef oldStateref = getServiceHub().toStateAndRef(new StateRef(notarizedTransaction.getId(), 0));
Command storeCommand = new Command<>(new AnchorStateContract.Commands.ApproveRecAnchorCmd(), requiredSigners);
TransactionBuilder txBuilder2 = new TransactionBuilder(notary)
.addInputState(oldStateref)
.addOutputState(outputState, AnchorStateContract.ID)
.addCommand(storeCommand);
txBuilder2.verify(getServiceHub());
// signing
SignedTransaction signedTx2 = getServiceHub().signInitialTransaction(txBuilder2);
// Creating a session with the other party.
FlowSession otherPartySession2 = initiateFlow(otherParty);
// Finalising the transaction.
SignedTransaction fullySignedTx2 = subFlow(new CollectSignaturesFlow(
signedTx2, Arrays.asList(otherPartySession2), CollectSignaturesFlow.Companion.tracker()));
//notarized transaction
return subFlow(new FinalityFlow(fullySignedTx2, otherPartySession2));
}
In my flow initiator class, I first create a new state containing a hash, which I call AnchorState. This state comes from one of the participants, which then asks the other participant to sign. Afterward, the signed record is stored in the ledger, and its reference is used as an input for a new state change; I simply want to mark this state as consumed rather than leaving it unconsumed.
The responding flow class of participant B is as below
public SignedTransaction call() throws FlowException
{
//this class is used inside call function for the verification purposes before signed by this party
class SignTxFlow extends SignTransactionFlow
{
private SignTxFlow(FlowSession otherPartySession) {
super(otherPartySession);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
ContractState output = stx.getTx().getOutputs().get(0).getData();
require.using("This must be an AnchorState transaction.", output instanceof AnchorState);
AnchorState state = (AnchorState) output;
require.using("The AnchorState's value should be more than 6 characters", state.getContentHash().length() > 6);
return null;
});
}
}
SecureHash expectedTxId = subFlow(new SignTxFlow(otherPartySession)).getId();
return subFlow(new ReceiveFinalityFlow(otherPartySession, expectedTxId));
}
This flow runs successfully and returns the unique id of the transaction, but I have tried everything and could not find out how to change the state from unconsumed to consumed.
AFTER FIX
I realized that a vault query in Corda returns unconsumed states by default, which makes it clear why I was not able to see the consumed state in the first place. Another issue I found was the lack of Java resources for Corda: I found many Kotlin-based answers for a transaction with "creation and consumption" in a single workflow, but converting them to Java required some effort.
Kotlin Based answer
Some differences I observed between the Java and Kotlin approaches:
1) When I tried to reuse the session from the first transaction in my second transaction, I got this error:
java.util.concurrent.ExecutionException: net.corda.core.flows.UnexpectedFlowEndException: Tried to access ended session SessionId(toLong=1984916257986245538) with empty buffer
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at net.corda.core.internal.concurrent.CordaFutureImpl.get(CordaFutureImpl.kt)
This means we have to create a new session for each new transaction, regardless of whether they belong to a single workflow.
2) From the Kotlin solution I understood that we don't need to add an output to the transaction if we just want to mark the state consumed. However, when I do not add an output state to the second transaction, I get the following error, which means that even to consume the state I must add the same output to the transaction; otherwise the following error erupts again:
java.util.concurrent.ExecutionException: net.corda.core.flows.UnexpectedFlowEndException: Counter-flow errored
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at net.corda.core.internal.concurrent.CordaFutureImpl.get(CordaFutureImpl.kt)
at com.etasjil.Client.testFlow(Client.java:92)
So it is clear that, unlike in Kotlin, in Java we need to explicitly add the output state and a new session if we want to create and consume a state within the same workflow (a condensed sketch follows).
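To restate both points in code, here is a condensed sketch of the consuming transaction (my restatement, reusing the names from the initiator flow above):
FlowSession freshSession = initiateFlow(otherParty); // point 1: a brand-new session per transaction
TransactionBuilder consumeTx = new TransactionBuilder(notary)
.addInputState(oldStateref) // the previous output, now being consumed
.addOutputState(outputState, AnchorStateContract.ID) // point 2: an output is still required
.addCommand(storeCommand);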
Note: since this is a new learning curve for me, kindly correct me if I made any mistake in the above realization. This answer could be useful for newcomers to Corda who want to code in Java rather than Kotlin.
State
@BelongsToContract(AnchorStateContract.class)
public class AnchorState implements LinearState {
public String ownerId,contentHash,description,classid,timestamp,expiry;
public Party initiatorParty, otherParty;
public UniqueIdentifier linearId;
@Override
public List<AbstractParty> getParticipants() {
return Arrays.asList(initiatorParty, otherParty);
}
public AnchorState() {
}
@ConstructorForDeserialization
public AnchorState(String ownerId, String contentHash, String description, String classid, String timestamp, String expiry, Party initiatorParty, Party otherParty, UniqueIdentifier linearId) {
this.ownerId = ownerId;
this.contentHash = contentHash;
this.description = description;
this.classid = classid;
this.timestamp = timestamp;
this.expiry = expiry;
this.initiatorParty = initiatorParty;
this.otherParty = otherParty;
this.linearId = linearId;
}
...
FlowTest case
...
...
@Test
public void test1() {
Future data = a.startFlow(new Initiator("Owner1", "1234567", "Description", "c1", Instant.now().toString(), Instant.MAX.toString(), b.getInfo().getLegalIdentities().get(0).getName().toString()));
network.runNetwork();
try {
System.out.println(data.get());
}catch (Exception e){
System.out.println(e.getMessage());
}
QueryCriteria.VaultQueryCriteria criteria1 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED);
Vault.Page<AnchorState> results1 = a.getServices().getVaultService().queryBy(AnchorState.class, criteria1);
System.out.println("--------------------- "+ results1.getStates().size());
QueryCriteria.VaultQueryCriteria criteria2 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.ALL);
Vault.Page<AnchorState> results2 = a.getServices().getVaultService().queryBy(AnchorState.class, criteria2);
System.out.println("--------------------- "+ results2.getStates().size());
QueryCriteria.VaultQueryCriteria criteria3 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED);
Vault.Page<AnchorState> results3 = b.getServices().getVaultService().queryBy(AnchorState.class, criteria3);
System.out.println("--------------------- "+ results3.getStates().size());
QueryCriteria.VaultQueryCriteria criteria4 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.ALL);
Vault.Page<AnchorState> results4 = b.getServices().getVaultService().queryBy(AnchorState.class, criteria4);
System.out.println("--------------------- "+ results4.getStates().size());
}
I got 1, 2, 1, 2 as the outputs, which indicates 1 consumed state on each of nodes a and b, and 2 states in total on each node (1 consumed and 1 unconsumed).
public class Register {
@Autowired
private DataSource dataSource;
@Autowired
private DCNListener listener;
private OracleConnection oracleConnection = null;
private DatabaseChangeRegistration dcr = null;
private Statement statement = null;
private ResultSet rs = null;
@PostConstruct
public void init() {
this.register();
}
private void register() {
Properties props = new Properties();
props.put(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
props.setProperty(OracleConnection.DCN_IGNORE_DELETEOP, "true");
props.setProperty(OracleConnection.DCN_IGNORE_UPDATEOP, "true");
try {
oracleConnection = (OracleConnection) dataSource.getConnection();
dcr = oracleConnection.registerDatabaseChangeNotification(props);
statement = oracleConnection.createStatement();
((OracleStatement) statement).setDatabaseChangeRegistration(dcr);
rs = statement.executeQuery(listenerQuery);
while (rs.next()) {
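// drain the result set; executing this query on the registered statement ties its tables to the registration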
}
dcr.addListener(listener);
String[] tableNames = dcr.getTables();
Arrays.stream(tableNames)
.forEach(i -> log.debug("Table {} registered.", i));
} catch (SQLException e) {
e.printStackTrace();
close();
}
}
}
My Listener:
public class DCNListener implements DatabaseChangeListener {
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
TableChangeDescription[] tableChanges = databaseChangeEvent.getTableChangeDescription();
for (TableChangeDescription tableChange : tableChanges) {
RowChangeDescription[] rcds = tableChange.getRowChangeDescription();
for (RowChangeDescription rcd : rcds) {
RowOperation op = rcd.getRowOperation();
String rowId = rcd.getRowid().stringValue();
switch (op) {
case INSERT:
//process
break;
case UPDATE:
//do nothing
break;
case DELETE:
//do nothing
break;
default:
//do nothing
}
}
}
}
}
In my Spring Boot application, I have an Oracle DCN Register class that listens for INSERTs into an event table of my database; that is, I am listening for the insertion of new records.
In this event table, I have different types of events that my application supports, let's say EventA and EventB.
The application GUI allows you to upload these types of events in bulk, which translates into INSERTs into the Oracle database table I am listening to.
For one of the event types, my application fails to capture the INSERTs only when 20 or more events are uploaded in bulk; for the other event type, I do not experience this problem.
So let's say a user inserts fewer than 20 EventA records: my application captures the inserts. But if the number of EventA inserts reaches 20 or more, it does not.
This is not the case for EventB, which works smoothly. I'd like to understand whether I'm missing anything in terms of registration, whether there is anything I can look out for in the database, and what the issue could be here.
You should also check for the ALL_ROWS event:
EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
if(tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)){
// Invalidate the cache
}
Quote from the JavaDoc:
The ALL_ROWS event is sent when the table is completely invalidated and row level information isn't available. If the DCN_NOTIFY_ROWIDS option hasn't been turned on during registration, then all events will have this OPERATION_ALL_ROWS flag on. It can also happen in situations where too many rows have changed and it would be too expensive for the server to send the list of them.
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jajdb/oracle/jdbc/dcn/TableChangeDescription.TableOperation.html#ALL_ROWS
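Folding that check into the listener might look like the sketch below (an illustration, not the asker's code): when the server drops row-level detail, e.g. for a large bulk insert, fall back to re-reading the event table instead of expecting RowChangeDescription entries.
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.RowChangeDescription;
import oracle.jdbc.dcn.TableChangeDescription;
public class DCNListener implements DatabaseChangeListener {
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
for (TableChangeDescription tableChange : event.getTableChangeDescription()) {
if (tableChange.getTableOperations().contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
// No per-row ROWIDs were delivered: re-query the event table here.
continue;
}
for (RowChangeDescription rcd : tableChange.getRowChangeDescription()) {
// per-row INSERT handling as before
}
}
}
}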
I have a Java Spring based web application and I want to insert a record into a table only if the table does not already contain any rows that are "similar" (according to some specific, irrelevant criteria) to the new row.
Because this is a multi-threaded environment, I cannot use a SELECT+INSERT two-step combination, as it would expose me to a race condition.
The same question was first asked and answered here and here several years ago. Unfortunately, the questions got little attention and the provided answer is not sufficient for my needs.
Here's the code I currently have and it's not working:
@Component("userActionsManager")
@Transactional
public class UserActionsManager implements UserActionsManagerInterface {
@PersistenceContext(unitName = "itsadDB")
private EntityManager manager;
@Resource(name = "databaseManager")
private DB db;
...
@SuppressWarnings("unchecked")
@Override
@PreAuthorize("hasRole('ROLE_USER') && #username == authentication.name")
public String giveAnswer(String username, String courseCode, String missionName, String taskCode, String answer) {
...
List<Submission> submissions = getAllCorrectSubmissions(newSubmission);
List<Result> results = getAllCorrectResults(result);
if (submissions.size() > 0
|| results.size() > 0) throw new SessionAuthenticationException("foo");
manager.persist(newSubmission);
manager.persist(result);
submissions = getAllCorrectSubmissions(newSubmission);
results = getAllCorrectResults(result);
for (Submission s : submissions) manager.lock(s, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
for (Result r : results ) manager.lock(r, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
manager.flush();
...
}
@SuppressWarnings("unchecked")
private List<Submission> getAllCorrectSubmissions(Submission newSubmission) {
Query q = manager.createQuery("SELECT s FROM Submission AS s WHERE s.missionTask = ?1 AND s.course = ?2 AND s.user = ?3 AND s.correct = true");
q.setParameter(1, newSubmission.getMissionTask());
q.setParameter(2, newSubmission.getCourse());
q.setParameter(3, newSubmission.getUser());
return (List<Submission>) q.getResultList();
}
@SuppressWarnings("unchecked")
private List<Result> getAllCorrectResults(Result result) {
Query q = manager.createQuery("SELECT r FROM Result AS r WHERE r.missionTask = ?1 AND r.course = ?2 AND r.user = ?3");
q.setParameter(1, result.getMissionTask());
q.setParameter(2, result.getCourse());
q.setParameter(3, result.getUser());
return (List<Result>) q.getResultList();
}
...
}
According to the answer provided here, I am supposed to somehow use OPTIMISTIC_FORCE_INCREMENT, but it's not working. I suspect that the provided answer is erroneous, so I need a better one.
Edit:
Added more context-related code. Right now this code still has a race condition: when I make 10 simultaneous HTTP POST requests, approximately 5 rows get erroneously inserted, while the other 5 requests are rejected with HTTP error code 409 (Conflict). Correct code would guarantee that only 1 row gets inserted into the database no matter how many concurrent requests I make. Making the method synchronized is not a solution, since the race condition still manifests for some unknown reason (I tested it).
Unfortunately, after several days of research I was unable to find a short and simple solution to my problem. Since my time budget is not unlimited, I had to come up with a workaround. Call it a kludge if you may.
Since the whole HTTP request is a transaction, it will be rolled back at the sight of any conflict. I use this to my advantage by locking a special entity within the context of the whole HTTP request. Should multiple HTTP requests be received at the same time, all but one will result in some PersistenceException.
At the beginning of the transaction I check that no other correct answers have been submitted yet. The lock is already in effect during that check, so no race condition can happen, and it remains in effect until the answer is submitted. This essentially simulates a critical section around the SELECT+INSERT two-step sequence at the application level (in pure MySQL I would have used an INSERT ... SELECT ... WHERE NOT EXISTS construct).
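For comparison, here is a hedged sketch of that statement-level idea through JPA (table and column names are made up for illustration; MySQL syntax, and Hibernate's support for named parameters in native queries is assumed):
int inserted = manager.createNativeQuery(
"INSERT INTO submissions (user_id, task_id, answer, correct) " +
"SELECT :user, :task, :answer, TRUE FROM DUAL " +
"WHERE NOT EXISTS (SELECT 1 FROM submissions " +
"WHERE user_id = :user AND task_id = :task AND correct = TRUE)")
.setParameter("user", userId)
.setParameter("task", taskId)
.setParameter("answer", answer)
.executeUpdate(); // 0 means a similar row already existed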
This approach has some drawbacks. Whenever two students submit an answer at the same time, one of them will get an exception. This is somewhat bad for performance and bandwidth, because the student who received HTTP status 409 has to resubmit their answer.
To compensate for the latter, I automatically retry submitting the answer on the server side a couple of times, with randomly chosen intervals in between. The corresponding HTTP request controller code is below:
@Controller
@RequestMapping("/users")
public class UserActionsController {
@Autowired
private SessionRegistry sessionRegistry;
@Autowired
@Qualifier("authenticationManager")
private AuthenticationManager authenticationManager;
@Resource(name = "userActionsManager")
private UserActionsManagerInterface userManager;
@Resource(name = "databaseManager")
private DB db;
.
.
.
@RequestMapping(value = "/{username}/{courseCode}/missions/{missionName}/tasks/{taskCode}/submitAnswer", method = RequestMethod.POST)
public @ResponseBody
Map<String, Object> giveAnswer(@PathVariable String username,
@PathVariable String courseCode, @PathVariable String missionName,
@PathVariable String taskCode, @RequestParam("answer") String answer, HttpServletRequest request) {
init(request);
db.log("Submitting an answer to task `"+taskCode+"` of mission `"+missionName+
"` in course `"+courseCode+"` as student `"+username+"`.");
String str = null;
boolean conflict = true;
for (int i=0; i<10; i++) {
Random rand = new Random();
int ms = rand.nextInt(1000);
try {
str = userManager.giveAnswer(username, courseCode, missionName, taskCode, answer);
conflict = false;
break;
}
catch (EntityExistsException e) {throw new EntityExistsException();}
catch (PersistenceException e) {}
catch (UnexpectedRollbackException e) {}
try {
Thread.sleep(ms);
} catch(InterruptedException ex) {
Thread.currentThread().interrupt();
}
}
if (conflict) str = userManager.giveAnswer(username, courseCode, missionName, taskCode, answer);
if (str == null) db.log("Answer accepted: `"+answer+"`.");
else db.log("Answer rejected: `"+answer+"`.");
Map<String, Object> hm = new HashMap<String, Object>();
hm.put("success", str == null);
hm.put("message", str);
return hm;
}
}
If for some reason the controller is unable to commit the transaction 10 times in a row, it will try one more time without attempting to catch the possible exceptions. When an exception is thrown on the 11th try, it is processed by the global exception controller and the client receives HTTP status 409. The global exception controller is defined below.
@ControllerAdvice
public class GlobalExceptionController {
@Resource(name = "staticDatabaseManager")
private StaticDB db;
@ExceptionHandler(SessionAuthenticationException.class)
@ResponseStatus(value=HttpStatus.FORBIDDEN, reason="session has expired") //403
public ModelAndView expiredException(HttpServletRequest request, Exception e) {
ModelAndView mav = new ModelAndView("exception");
mav.addObject("name", e.getClass().getSimpleName());
mav.addObject("message", e.getMessage());
return mav;
}
@ExceptionHandler({UnexpectedRollbackException.class,
EntityExistsException.class,
OptimisticLockException.class,
PersistenceException.class})
@ResponseStatus(value=HttpStatus.CONFLICT, reason="conflicting requests") //409
public ModelAndView conflictException(HttpServletRequest request, Exception e) {
ModelAndView mav = new ModelAndView("exception");
mav.addObject("name", e.getClass().getSimpleName());
mav.addObject("message", e.getMessage());
synchronized (db) {
db.setUserInfo(request);
db.log("Conflicting "+request.getMethod()+" request to "+request.getRequestURI()+" ("+e.getClass().getSimpleName()+").", Log.LVL_SECURITY);
}
return mav;
}
//ResponseEntity<String> customHandler(Exception ex) {
// return new ResponseEntity<String>("Conflicting requests, try again.", HttpStatus.CONFLICT);
//}
}
Finally, the giveAnswer method itself uses a special entity with the primary key lock_addCorrectAnswer. I lock that special entity with the OPTIMISTIC_FORCE_INCREMENT flag, which ensures that no two transactions can execute the giveAnswer method at overlapping times. The respective code can be seen below:
@Component("userActionsManager")
@Transactional
public class UserActionsManager implements UserActionsManagerInterface {
@PersistenceContext(unitName = "itsadDB")
private EntityManager manager;
@Resource(name = "databaseManager")
private DB db;
.
.
.
@SuppressWarnings("unchecked")
@Override
@PreAuthorize("hasRole('ROLE_USER') && #username == authentication.name")
public String giveAnswer(String username, String courseCode, String missionName, String taskCode, String answer) {
.
.
.
if (!userCanGiveAnswer(user, course, missionTask)) {
error = "It is forbidden to submit an answer to this task.";
db.log(error, Log.LVL_MAJOR);
return error;
}
.
.
.
if (correctAnswer) {
.
.
.
addCorrectAnswer(newSubmission, result);
return null;
}
newSubmission = new Submission(user, course, missionTask, answer, false);
manager.persist(newSubmission);
return error;
}
private void addCorrectAnswer(Submission submission, Result result) {
String var = "lock_addCorrectAnswer";
Global global = manager.find(Global.class, var);
if (global == null) {
global = new Global(var, 0);
manager.persist(global);
manager.flush();
}
manager.lock(global, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
manager.persist(submission);
manager.persist(result);
manager.flush();
long submissions = getCorrectSubmissionCount(submission);
long results = getResultCount(result);
if (submissions > 1 || results > 1) throw new EntityExistsException();
}
private long getCorrectSubmissionCount(Submission newSubmission) {
Query q = manager.createQuery("SELECT count(s) FROM Submission AS s WHERE s.missionTask = ?1 AND s.course = ?2 AND s.user = ?3 AND s.correct = true");
q.setParameter(1, newSubmission.getMissionTask());
q.setParameter(2, newSubmission.getCourse());
q.setParameter(3, newSubmission.getUser());
return (Long) q.getSingleResult();
}
private long getResultCount(Result result) {
Query q = manager.createQuery("SELECT count(r) FROM Result AS r WHERE r.missionTask = ?1 AND r.course = ?2 AND r.user = ?3");
q.setParameter(1, result.getMissionTask());
q.setParameter(2, result.getCourse());
q.setParameter(3, result.getUser());
return (Long) q.getSingleResult();
}
}
It is important to note that the Global entity must have a @Version-annotated field for OPTIMISTIC_FORCE_INCREMENT to work (see the code below).
@Entity
@Table(name = "GLOBALS")
public class Global implements Serializable {
.
.
.
@Id
@Column(name = "NAME", length = 32)
private String key;
@Column(name = "INTVAL")
private int intVal;
@Column(name = "STRVAL", length = 4096)
private String strVal;
@Version
private Long version;
.
.
.
}
Such an approach can be optimized even further. Instead of using the same lock name lock_addCorrectAnswer for all giveAnswer calls, I could derive the lock name deterministically from the name of the submitting user. For example, if the student's username is Hyena, then the primary key of the lock entity would be lock_Hyena_addCorrectAnswer. That way multiple students could submit answers at the same time without receiving any conflicts, while a malicious user who spams the HTTP POST method for submitAnswer ten times in parallel would still be stopped by this locking mechanism.
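A sketch of that refinement (hypothetical: it assumes Submission exposes the username via its User); only the lock key changes, the rest of addCorrectAnswer stays exactly as above:
private void addCorrectAnswer(Submission submission, Result result) {
// per-user lock key, e.g. "lock_Hyena_addCorrectAnswer"; mind the 32-char NAME column
String var = "lock_" + submission.getUser().getUsername() + "_addCorrectAnswer";
Global global = manager.find(Global.class, var);
if (global == null) {
global = new Global(var, 0);
manager.persist(global);
manager.flush();
}
manager.lock(global, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
// ... persist, flush and duplicate checks exactly as in the version above ...
}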
I have a question about the Hibernate update operation.
Here is a bit of code:
Campaign campaign = campaignDAO.get(id);
campaign.setStatus(true);
campaignDAO.update(campaign);
If I already have all the data of the campaign object, is there any way to perform the update without performing the first select (campaignDAO.get(id))?
Thanks,
Alessio
HQL will definitely help you.
In order to maintain the separation of concerns, you can add a more specialized method to your DAO object:
public void updateStatusForId(long id, boolean status){
//provided you obtain a reference to your session object
session.createQuery("UPDATE Campaign SET status = :status WHERE id = :id")
.setParameter("status", status)
.setParameter("id", id)
.executeUpdate();
//flush your session
}
Then you can simply call this method from your business method. You can check the generated SQL statements in your app's logs by setting the show_sql Hibernate property to true.
You can use session.load(). It will not hit the database by itself, since load() just returns a lazy proxy. Here you can find its details and example code.
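For completeness, a minimal sketch of the plain detached-update alternative (my sketch, not from the linked example; the setter names are assumed): Hibernate issues a single UPDATE with no prior SELECT, provided select-before-update is off and every mapped field is populated, since a plain update() writes all columns.
Campaign campaign = new Campaign(); // detached instance, never loaded
campaign.setId(id);                 // the known primary key
campaign.setStatus(true);           // ...plus every other mapped field you already have
session.update(campaign);           // one UPDATE statement, no SELECT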
I wrote an extension to solve this issue in NHibernate.
How to use it:
First of all, you need to enable dynamic-update="true".
using (ISession session = sessionFactory.OpenSession())
{
Customer c1 = new Customer();
c1.CustomerID = c.CustomerID;
session.Mark(c1);
// c1.Name = DateTime.Now.ToString();
c1.Phone = DateTime.Now.ToString();
//dynamic update must be enabled for this to work
session.UpdateDirty(c1);
session.Flush();
}
UpdateExtension.cs
public static class UpdateExtension
{
static readonly Object NOTNULL = new Object();
public static void UpdateDirty<TEntity>(this ISession session, TEntity entity)
{
SessionImpl implementor = session as SessionImpl;
EntityEntry entry = implementor.PersistenceContext.GetEntry(entity);
if (entry == null)
{
throw new InvalidOperationException("No matching entity entry found; mark the instance with the Mark method first");
}
IEntityPersister persister = entry.Persister;
// If a column is non-nullable and the new entity doesn't intend to update it,
// the value in LoadedState should be the same as the value in the entity.
Object[] CurrentState = entry.Persister.GetPropertyValues(entity, EntityMode.Poco);
Object[] LoadedState = entry.LoadedState;
int[] dirtys = persister.FindDirty(CurrentState
, LoadedState
, entity
, (SessionImpl)session);
if (dirtys == null || dirtys.Length == 0)
{
return;
}
persister.Update(entry.Id
, CurrentState
, dirtys
, true
, LoadedState
, entry.Version
, entity
, entry.RowId
, (SessionImpl)session);
implementor.PersistenceContext.RemoveEntry(entity);
implementor.PersistenceContext.RemoveEntity(entry.EntityKey);
session.Lock(entity, LockMode.None);
// prevents implementor.PersistenceContext.EntityEntries.Count == 0
}
public static void Mark<TEntity>(this ISession session, TEntity entity)
{
session.Lock(entity, LockMode.None);
}
}
Here is the update SQL:
command 0:UPDATE Customers SET Phone = @p0 WHERE CustomerID = @p1;@p0 = '2014/12/26 0:12:56' [Type: String (4000)], @p1 = 1 [Type: Int32 (0)]
Only the Phone column is updated. Even though the Name property cannot be null, this works fine.
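For readers doing this with Hibernate in Java rather than NHibernate: the rough equivalent of dynamic-update="true" is the @DynamicUpdate annotation, sketched below. Note that for detached instances Hibernate cannot know which fields are dirty, so this mainly helps for managed entities (or when combined with @SelectBeforeUpdate, which reintroduces a SELECT).
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.DynamicUpdate;
// Sketch only: with @DynamicUpdate, the UPDATE statement contains just the
// columns that actually changed instead of every mapped column.
@Entity
@DynamicUpdate
public class Campaign {
@Id
private Long id;
private Boolean status;
// other fields, getters and setters omitted
}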