Tried to follow the high-low game pattern, but somehow DynamoDB isn't picking up any insertion.
I'm using a response interceptor that saves some session data into DynamoDB under a certain condition, as follows:
@Override
public void process(HandlerInput handlerInput, Optional<Response> optional) {
    var sessionAttributes = handlerInput.getAttributesManager().getSessionAttributes();
    var sessionState = (String) sessionAttributes.get(Constants.STATE);
    var oldState = (String) sessionAttributes.get(Constants.STATE_OLD);
    // Update persistent storage whenever there's a change
    if (!oldState.equals(sessionState)) {
        handlerInput.getAttributesManager().setPersistentAttributes(new HashMap<>() {{
            put(Constants.STATE, sessionState);
        }});
        sessionAttributes.put(Constants.STATE_OLD, sessionState);
        logger.info("Persistent state updated");
    }
    logger.info("SaveSessionInterceptor called");
}
And in the main stream handler:
private static Skill getSkill() {
    logger.info("Skill Stream Handler Initialized");
    return Skills.standard()
            .withSkillId(config.ALEXA_SKILL_ID)
            .addRequestHandlers(
                    // handlers
            )
            .addResponseInterceptor(new SaveSessionInterceptor())
            // Add persistent storage support
            .withAutoCreateTable(true)
            .withTableName("AudioAdventure")
            .build();
}
On AWS, the table has been created, but it just isn't picking up insertions through the persistent attributes.
Maybe I've been doing something wrong.
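For context, in the ASK SDK v2 for Java, setPersistentAttributes(...) only stages the attributes in memory; nothing is written to DynamoDB until savePersistentAttributes() is called on the AttributesManager. A minimal sketch of an interceptor that also flushes the change (same Constants as above, everything else assumed):
@Override
public void process(HandlerInput handlerInput, Optional<Response> optional) {
    var attributesManager = handlerInput.getAttributesManager();
    var sessionAttributes = attributesManager.getSessionAttributes();
    var sessionState = (String) sessionAttributes.get(Constants.STATE);
    var oldState = (String) sessionAttributes.get(Constants.STATE_OLD);

    if (sessionState != null && !sessionState.equals(oldState)) {
        Map<String, Object> persistent = new HashMap<>();
        persistent.put(Constants.STATE, sessionState);
        attributesManager.setPersistentAttributes(persistent);
        // Without this call the staged attributes never reach DynamoDB
        attributesManager.savePersistentAttributes();
        sessionAttributes.put(Constants.STATE_OLD, sessionState);
    }
}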
I am practicing Java and Spring Boot.
My idea is that when we send a JSON delete request, it should mark the record in the database as inactive instead of deleting the data.
For example, if I want to delete the student record with student ID 1, it should change that student's status to inactive instead of deleting the record.
In my Spring Boot controller I have a delete method.
The code is below:
Controller:
@DeleteMapping("v1/student/delete/{id}")
public ResponseEntity<String> deleteStudentDetails(@PathVariable("id") Integer getId) {
    studentService.deleteStudentdetails(getId);
    return new ResponseEntity<>("STUDENT RECORD HAS BEEN DELETED !!!", HttpStatus.OK);
}
studentService is the service class that the controller delegates to.
deleteStudentdetails is a method in that service class.
Service class method
public void deleteStudentdetails(Integer getId) {
    Optional<StudentDetails> studentIdDetails = studentRepo.findById(getId); // getting info from DB
    StudentDetails studentIsdetail = studentIdDetails.get();
    if (studentIsdetail.getActive() == false) {
        throw new RuntimeException("Student is Already inActive!!");
    }
    studentIsdetail.setActive(false); // Here I want to change the Active Status from True to false
}
Here I am changing the value in the database by looking up a student ID that already exists in the DB.
STUDENT_ID   ACTIVE   BOARD_ID
60           TRUE     STATEBOARD
116          TRUE     STATEBOARD   // here I want to change the status to FALSE
120          FALSE    STATEBOARD
You never save studentIsdetail to the database after modifying it.
So just add studentRepo.save(studentIsdetail); after calling studentIsdetail.setActive(false); and it should work:
public void deleteStudentdetails(Integer getId) {
    Optional<StudentDetails> studentIdDetails = studentRepo.findById(getId); // getting info from DB
    StudentDetails studentIsdetail = studentIdDetails.get();
    if (!studentIsdetail.getActive()) {
        throw new RuntimeException("Student is Already inActive!!");
    }
    studentIsdetail.setActive(false);
    studentRepo.save(studentIsdetail);
}
And I recommend that you check whether you actually got a result, to avoid exceptions:
public void deleteStudentdetails(Integer getId) {
    Optional<StudentDetails> studentIdDetails = studentRepo.findById(getId); // getting info from DB
    if (studentIdDetails.isEmpty()) {
        // handle it, e.g. throw an exception or just return
        return;
    }
    StudentDetails studentIsdetail = studentIdDetails.get();
    if (!studentIsdetail.getActive()) {
        throw new RuntimeException("Student is Already inActive!!");
    }
    studentIsdetail.setActive(false);
    studentRepo.save(studentIsdetail);
}
For this you can also use Optional#ifPresent:
public void deleteStudentdetails(Integer getId) {
    Optional<StudentDetails> studentIdDetails = studentRepo.findById(getId); // getting info from DB
    studentIdDetails.ifPresent(studentIsdetail -> {
        if (!studentIsdetail.getActive()) {
            throw new RuntimeException("Student is Already inActive!!");
        }
        studentIsdetail.setActive(false);
        studentRepo.save(studentIsdetail);
    });
}
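As a side note: if the service method runs inside a transaction, JPA dirty checking will flush the change at commit time without an explicit save call. A minimal sketch, assuming Spring Data JPA and the same repository and entity names as above:
import org.springframework.transaction.annotation.Transactional;

@Transactional
public void deleteStudentdetails(Integer getId) {
    StudentDetails studentIsdetail = studentRepo.findById(getId)
            .orElseThrow(() -> new RuntimeException("Student not found: " + getId));
    if (!studentIsdetail.getActive()) {
        throw new RuntimeException("Student is Already inActive!!");
    }
    // The entity is managed inside the transaction, so setting the flag is enough;
    // the UPDATE is flushed automatically when the transaction commits.
    studentIsdetail.setActive(false);
}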
I am creating a new state in the flow and then trying to consume that state by using its reference as an input. But every time the result shows the state as unconsumed, even though I provided the state reference in the transaction's input.
public SignedTransaction call() throws FlowException {
//------------------------------------------------------------------------------------------------------------
// STEP-1:
// FIRST FLOW MUST CREATE THE NEW STATE WHICH HAS NO INPUT ( THIS WILL CREATE NEW RECORD-ANCHOR WITH LINEARID )
//
//------------------------------------------------------------------------------------------------------------
// We retrieve the notary identity from the network map.
Party notary = getServiceHub().getNetworkMapCache().getNotaryIdentities().get(0);
// We create the transaction components.
AnchorState outputState = new AnchorState(ownerId, contentHash, description, classid, timestamp,
        expiry, getOurIdentity(), otherParty, new UniqueIdentifier());
//required signers
List<PublicKey> requiredSigners = Arrays.asList(getOurIdentity().getOwningKey(),otherParty.getOwningKey());
//send create command with required signer signatures as below
Command command = new Command<>(new AnchorStateContract.Commands.CreateRecAnchorCmd(), requiredSigners);
// We create a transaction builder and add the components.
TransactionBuilder txBuilder = new TransactionBuilder(notary)
.addOutputState(outputState, AnchorStateContract.ID)
.addCommand(command);
// Verifying the transaction.
txBuilder.verify(getServiceHub());
// Signing the transaction.
SignedTransaction signedTx = getServiceHub().signInitialTransaction(txBuilder);
// Creating a session with the other party.
FlowSession otherPartySession = initiateFlow(otherParty);
// Obtaining the counterparty's signature.
SignedTransaction fullySignedTx = subFlow(new CollectSignaturesFlow(
signedTx, Arrays.asList(otherPartySession), CollectSignaturesFlow.Companion.tracker()));
//notarized transaction
SignedTransaction notarizedTransaction = subFlow(new FinalityFlow(fullySignedTx, otherPartySession));
//------------------------------------------------------------------------------------------------------------
// STEP-2:
// SINCE NOW WE HAVE A NEW UNCONSUMED RECORD-ANCHOR SO WE MUST MAKE IT CONSUMED ( BY USING THE PREVIOUS OUTPUT AS AN INPUT)
//
//------------------------------------------------------------------------------------------------------------
StateAndRef oldStateref = getServiceHub().toStateAndRef(new StateRef(notarizedTransaction.getId(), 0));
Command storeCommand = new Command<>(new AnchorStateContract.Commands.ApproveRecAnchorCmd(), requiredSigners);
TransactionBuilder txBuilder2 = new TransactionBuilder(notary)
.addInputState(oldStateref)
.addOutputState(outputState, AnchorStateContract.ID)
.addCommand(storeCommand);
txBuilder2.verify(getServiceHub());
// signing
SignedTransaction signedTx2 = getServiceHub().signInitialTransaction(txBuilder2);
// Creating a session with the other party.
FlowSession otherPartySession2 = initiateFlow(otherParty);
// Finalising the transaction.
SignedTransaction fullySignedTx2 = subFlow(new CollectSignaturesFlow(
signedTx2, Arrays.asList(otherPartySession2), CollectSignaturesFlow.Companion.tracker()));
//notarized transaction
return subFlow(new FinalityFlow(fullySignedTx2, otherPartySession2));
}
In my flow initiator class I am first creating a new state for a hash, which I call AnchorState. This state comes from one of the participants, which then asks the other participant to sign. Afterwards the signed record is stored in the ledger, but its reference is used as an input for a new state change; I simply want to make this state consumed rather than unconsumed.
The responding flow class of participant B is as below
public SignedTransaction call() throws FlowException
{
//this class is used inside call function for the verification purposes before signed by this party
class SignTxFlow extends SignTransactionFlow
{
private SignTxFlow(FlowSession otherPartySession) {
super(otherPartySession);
}
@Override
protected void checkTransaction(SignedTransaction stx) {
requireThat(require -> {
ContractState output = stx.getTx().getOutputs().get(0).getData();
require.using("This must be an AnchorState transaction.", output instanceof AnchorState);
AnchorState state = (AnchorState) output;
require.using("The AnchorState's value should be more than 6 characters", state.getContentHash().length() > 6);
return null;
});
}
}
SecureHash expectedTxId = subFlow(new SignTxFlow(otherPartySession)).getId();
return subFlow(new ReceiveFinalityFlow(otherPartySession, expectedTxId));
}
This flow runs successfully and returns me a unique id for the transaction, but I tried everything and could not find out how to change the state from unconsumed to consumed.
AFTER FIX
I realized that the vault query in Corda returns unconsumed states by default, which makes it clear why I was not able to see the consumed state in the first place. One more issue I found was the lack of Corda resources for Java; I found many Kotlin-based answers for a transaction with "creation and consumption" in a single workflow, but converting them into Java required some effort.
Kotlin Based answer
Some differences I observed between the Java and Kotlin approaches:
1) When I tried to use the same session in my second transaction that was used in the first transaction, I got this error:
java.util.concurrent.ExecutionException: net.corda.core.flows.UnexpectedFlowEndException: Tried to access ended session SessionId(toLong=1984916257986245538) with empty buffer
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at net.corda.core.internal.concurrent.CordaFutureImpl.get(CordaFutureImpl.kt)
This means we have to create a new session for every new transaction, even when they are part of a single workflow.
2) As I understood from the Kotlin solution, we don't need to add an output to the transaction if we just want to consume the state. However, when I do not add an output state in the second transaction, I get the following error, which means that even to consume the state I must add the same output to the transaction; otherwise the error below comes up again.
java.util.concurrent.ExecutionException: net.corda.core.flows.UnexpectedFlowEndException: Counter-flow errored
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at net.corda.core.internal.concurrent.CordaFutureImpl.get(CordaFutureImpl.kt)
at com.etasjil.Client.testFlow(Client.java:92)
So it is clear that, unlike Kotlin, in Java we need to explicitly add the output state and a new session if we want to create and consume a state within the same workflow.
Note: since this is a new learning curve for me, kindly correct me if I made any mistake in the above. This answer could be useful for newcomers to Corda who want to code in Java rather than Kotlin.
State
@BelongsToContract(AnchorStateContract.class)
public class AnchorState implements LinearState {
public String ownerId,contentHash,description,classid,timestamp,expiry;
public Party initiatorParty, otherParty;
public UniqueIdentifier linearId;
@Override
public List<AbstractParty> getParticipants() {
return Arrays.asList(initiatorParty, otherParty);
}
public AnchorState() {
}
@ConstructorForDeserialization
public AnchorState(String ownerId, String contentHash, String description, String classid, String timestamp, String expiry, Party initiatorParty, Party otherParty, UniqueIdentifier linearId) {
this.ownerId = ownerId;
this.contentHash = contentHash;
this.description = description;
this.classid = classid;
this.timestamp = timestamp;
this.expiry = expiry;
this.initiatorParty = initiatorParty;
this.otherParty = otherParty;
this.linearId = linearId;
}
...
FlowTest case
...
...
@Test
public void test1() {
Future data = a.startFlow(new Initiator("Owner1", "1234567", "Description", "c1", Instant.now().toString(), Instant.MAX.toString(), b.getInfo().getLegalIdentities().get(0).getName().toString()));
network.runNetwork();
try {
System.out.println(data.get());
}catch (Exception e){
System.out.println(e.getMessage());
}
QueryCriteria.VaultQueryCriteria criteria1 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED);
Vault.Page<AnchorState> results1 = a.getServices().getVaultService().queryBy(AnchorState.class, criteria1);
System.out.println("--------------------- "+ results1.getStates().size());
QueryCriteria.VaultQueryCriteria criteria2 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.ALL);
Vault.Page<AnchorState> results2 = a.getServices().getVaultService().queryBy(AnchorState.class, criteria2);
System.out.println("--------------------- "+ results2.getStates().size());
QueryCriteria.VaultQueryCriteria criteria3 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.CONSUMED);
Vault.Page<AnchorState> results3 = b.getServices().getVaultService().queryBy(AnchorState.class, criteria3);
System.out.println("--------------------- "+ results3.getStates().size());
QueryCriteria.VaultQueryCriteria criteria4 = new QueryCriteria.VaultQueryCriteria(Vault.StateStatus.ALL);
Vault.Page<AnchorState> results4 = b.getServices().getVaultService().queryBy(AnchorState.class, criteria4);
System.out.println("--------------------- "+ results4.getStates().size());
}
I got 1, 2, 1, 2 as the outputs, which shows 1 consumed state on node a and 1 on node b, and 2 states in total on each node (1 consumed and 1 unconsumed).
public class Register {
@Autowired
private DataSource dataSource;
@Autowired
private DCNListener listener;
private OracleConnection oracleConnection = null;
private DatabaseChangeRegistration dcr = null;
private Statement statement = null;
private ResultSet rs = null;
@PostConstruct
public void init() {
this.register();
}
private void register() {
Properties props = new Properties();
props.put(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
props.setProperty(OracleConnection.DCN_IGNORE_DELETEOP, "true");
props.setProperty(OracleConnection.DCN_IGNORE_UPDATEOP, "true");
try {
oracleConnection = (OracleConnection) dataSource.getConnection();
dcr = oracleConnection.registerDatabaseChangeNotification(props);
statement = oracleConnection.createStatement();
((OracleStatement) statement).setDatabaseChangeRegistration(dcr);
rs = statement.executeQuery(listenerQuery);
while (rs.next()) {
}
dcr.addListener(listener);
String[] tableNames = dcr.getTables();
Arrays.stream(tableNames)
.forEach(i -> log.debug("Table {}" + " registered.", i));
} catch (SQLException e) {
e.printStackTrace();
close();
}
}
}
My Listener:
public class DCNListener implements DatabaseChangeListener {
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
TableChangeDescription[] tableChanges = databaseChangeEvent.getTableChangeDescription();
for (TableChangeDescription tableChange : tableChanges) {
RowChangeDescription[] rcds = tableChange.getRowChangeDescription();
for (RowChangeDescription rcd : rcds) {
RowOperation op = rcd.getRowOperation();
String rowId = rcd.getRowid().stringValue();
switch (op) {
case INSERT:
//process
break;
case UPDATE:
//do nothing
break;
case DELETE:
//do nothing
break;
default:
//do nothing
}
}
}
}
}
In my Spring Boot application, I have an Oracle DCN register class that listens for INSERTs into an event table in my database. I am listening for the insertion of new records.
In this event table, I have different types of events that my application supports, let's say EventA and EventB.
The application GUI allows you to upload these types of events in bulk, which translates into INSERTs into the Oracle table I am listening to.
For one of the event types, my application fails to capture the INSERTs only when 20 or more events are uploaded in bulk; for the other event type I do not experience this problem.
So let's say the user inserts fewer than 20 EventA records: my application captures the inserts. But if the number of EventA inserts reaches 20 or more, it does not capture them.
This is not the case for EventB, which works smoothly. I'd like to understand whether I'm missing anything in terms of registration, what I can look out for in the database, or what the issue could be here.
You should also check for the ALL_ROWS event:
EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
if(tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)){
// Invalidate the cache
}
Quote from the JavaDoc:
The ALL_ROWS event is sent when the table is completely invalidated and row level information isn't available. If the DCN_NOTIFY_ROWIDS option hasn't been turned on during registration, then all events will have this OPERATION_ALL_ROWS flag on. It can also happen in situations where too many rows have changed and it would be too expensive for the server to send the list of them.
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/jajdb/oracle/jdbc/dcn/TableChangeDescription.TableOperation.html#ALL_ROWS
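A minimal sketch of how the listener above might handle that case (reloadFromEventTable and processInsert are placeholders, not part of the original code):
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent databaseChangeEvent) {
    TableChangeDescription[] tableChanges = databaseChangeEvent.getTableChangeDescription();
    for (TableChangeDescription tableChange : tableChanges) {
        EnumSet<TableChangeDescription.TableOperation> tableOps = tableChange.getTableOperations();
        if (tableOps.contains(TableChangeDescription.TableOperation.ALL_ROWS)) {
            // Row-level details were dropped (e.g. too many rows changed at once),
            // so re-query the event table instead of relying on ROWIDs.
            reloadFromEventTable(); // placeholder
            continue;
        }
        for (RowChangeDescription rcd : tableChange.getRowChangeDescription()) {
            if (rcd.getRowOperation() == RowChangeDescription.RowOperation.INSERT) {
                processInsert(rcd.getRowid().stringValue()); // placeholder
            }
        }
    }
}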
The Play 2.5 Highlights page states:
Better control over WebSocket frames
The Play 2.5 WebSocket API gives you direct control over WebSocket frames. You can now send and receive binary, text, ping, pong and close frames. If you don’t want to worry about this level of detail, Play will still automatically convert your JSON or XML data into the right kind of frame.
However
https://www.playframework.com/documentation/2.5.x/JavaWebSockets has examples around LegacyWebSocket which is deprecated
What is the recommended API/pattern for Java WebSockets? Is using LegacyWebSocket the only option for Java WebSockets?
Are there any examples using new Message types ping/pong to implement a heartbeat?
The official documentation on this is disappointingly sparse. Perhaps in Play 2.6 we'll see an update. However, I will provide an example below of how to configure a chat WebSocket in Play 2.5, just to help out those in need.
Setup
AController.java
@Inject
private Materializer materializer;

private ActorRef chatSocketRouter;

@Inject
public AController(@Named("chatSocketRouter") ActorRef chatInjectedActor) {
    this.chatSocketRouter = chatInjectedActor;
}
// Make a chat websocket for a user
public WebSocket chatSocket() {
return WebSocket.Json.acceptOrResult(request -> {
String authToken = getAuthToken();
// Checking of token
if (authToken == null) {
return forbiddenResult("No [authToken] supplied.");
}
// Could we find the token in the database?
final AuthToken token = AuthToken.findByToken(authToken);
if (token == null) {
return forbiddenResult("Could not find [authToken] in DB. Login again.");
}
User user = token.getUser();
if (user == null) {
return forbiddenResult("You are not logged in to view this stream.");
}
Long userId = user.getId();
// Create a function to be run when we initialise a flow.
// A flow basically links actors together.
AbstractFunction1<ActorRef, Props> getWebSocketActor = new AbstractFunction1<ActorRef, Props>() {
@Override
public Props apply(ActorRef connectionProperties) {
// We use the ActorRef provided in the param above to make some properties.
// An ActorRef is a fancy word for thread reference.
// The WebSocketActor manages the web socket connection for one user.
// WebSocketActor.props() means "make one thread (from the WebSocketActor) and return the properties on how to reference it".
// The resulting Props basically state how to construct that thread.
Props properties = ChatSocketActor.props(connectionProperties, chatSocketRouter, userId);
// We can have many connections per user. So we need many ActorRefs (threads) per user. As you can see from the code below, we do exactly that. We have an object called
// chatSocketRouter which holds a Map of userIds -> connectionsThreads and we "tell"
// it a lightweight object (UserMessage) that is made up of this connecting user's ID and the connection.
// As stated above, Props are basically a way of describing an Actor, or dumbed-down, a thread.
// In this line, we are using the Props above to
// reference the ActorRef we've just created above
ActorRef anotherUserDevice = actorSystem.actorOf(properties);
// Create a lightweight object...
UserMessage routeThisUser = new UserMessage(userId, anotherUserDevice);
// ... to tell the thread that has our Map that we have a new connection
// from a user.
chatSocketRouter.tell(routeThisUser, ActorRef.noSender());
// We return the properties to the thread that will be managing this user's connection
return properties;
}
};
final Flow<JsonNode, JsonNode, ?> jsonNodeFlow =
ActorFlow.<JsonNode, JsonNode>actorRef(getWebSocketActor,
100,
OverflowStrategy.dropTail(),
actorSystem,
materializer).asJava();
final F.Either<Result, Flow<JsonNode, JsonNode, ?>> right = F.Either.Right(jsonNodeFlow);
return CompletableFuture.completedFuture(right);
});
}
// Return this whenever we want to reject a
// user from connecting to a websocket
private CompletionStage<F.Either<Result, Flow<JsonNode, JsonNode, ?>>> forbiddenResult(String msg) {
final Result forbidden = Results.forbidden(msg);
final F.Either<Result, Flow<JsonNode, JsonNode, ?>> left = F.Either.Left(forbidden);
return CompletableFuture.completedFuture(left);
}
ChatSocketActor.java
public class ChatSocketActor extends UntypedActor {
private final ActorRef out;
private final Long userId;
private ActorRef chatSocketRouter;
public ChatSocketActor(ActorRef out, ActorRef chatSocketRouter, Long userId) {
this.out = out;
this.userId = userId;
this.chatSocketRouter = chatSocketRouter;
}
public static Props props(ActorRef out, ActorRef chatSocketRouter, Long userId) {
return Props.create(ChatSocketActor.class, out, chatSocketRouter, userId);
}
// Add methods here handling each chat connection...
}
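As a purely hypothetical illustration (not part of the original answer), the connection actor's message handling could look something like this, forwarding incoming JSON frames to the client via the out reference:
// Hypothetical sketch of the elided handler methods
@Override
public void onReceive(Object message) throws Exception {
    if (message instanceof JsonNode) {
        // Push the frame to the connected client through the 'out' ActorRef
        out.tell(message, getSelf());
    } else {
        unhandled(message);
    }
}

@Override
public void postStop() throws Exception {
    // The socket has closed; the router could be told here to drop this connection
    super.postStop();
}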
ChatSocketRouter.java
public class ChatSocketRouter extends UntypedActor {
public ChatSocketRouter() {}
// Stores userIds to websockets
private final HashMap<Long, List<ActorRef>> senders = new HashMap<>();
private void addSender(Long userId, ActorRef actorRef){
if (senders.containsKey(userId)) {
final List<ActorRef> actors = senders.get(userId);
actors.add(actorRef);
senders.replace(userId, actors);
} else {
List<ActorRef> l = new ArrayList<>();
l.add(actorRef);
senders.put(userId, l);
}
}
private void removeSender(ActorRef actorRef){
for (List<ActorRef> refs : senders.values()) {
refs.remove(actorRef);
}
}
@Override
public void onReceive(Object message) throws Exception {
ActorRef sender = getSender();
// Handle messages sent to this 'router' here
if (message instanceof UserMessage) {
UserMessage userMessage = (UserMessage) message;
addSender(userMessage.userId, userMessage.actorRef);
// Watch sender so we can detect when they die.
getContext().watch(sender);
} else if (message instanceof Terminated) {
// One of our watched senders has died.
removeSender(sender);
} else {
unhandled(message);
}
}
}
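For the @Named("chatSocketRouter") injection above to resolve, the router actor also has to be bound in a Guice module. A minimal sketch, assuming Play's default Guice dependency injection (the module name is arbitrary):
import com.google.inject.AbstractModule;
import play.libs.akka.AkkaGuiceSupport;

public class ActorModule extends AbstractModule implements AkkaGuiceSupport {
    @Override
    protected void configure() {
        // Creates the ChatSocketRouter actor and binds it under the name
        // used by @Named("chatSocketRouter") in the controllers
        bindActor(ChatSocketRouter.class, "chatSocketRouter");
    }
}
Enable the module in application.conf via play.modules.enabled (adjust for your package).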
Example
Now, whenever you want to send a message to a client with a WebSocket connection, you can do something like:
ChatSenderController.java
private ActorRef chatSocketRouter;

@Inject
public ChatSenderController(@Named("chatSocketRouter") ActorRef chatInjectedActor) {
    this.chatSocketRouter = chatInjectedActor;
}
public void sendMessage(Long sendToId) {
    // E.g. send the chat router a message that says hi
    chatSocketRouter.tell(new Message(sendToId, "Hi"), ActorRef.noSender());
}
ChatSocketRouter.java
@Override
public void onReceive(Object message) throws Exception {
// ...
if (message instanceof Message) {
Message messageToSend = (Message) message;
// Loop through the list above and send the message to
// each connection. For example...
for (ActorRef wsConnection : senders.get(messageToSend.getSendToId())) {
// Send "Hi" to each of the other client's
// connected sessions
wsConnection.tell(messageToSend.getMessage(), getSelf());
}
}
// ...
}
Again, I wrote the above to help out those in need. After scouring the web I could not find a reasonable and simple example. There is an open issue on this exact topic, and there are some examples online, but none of them were easy to follow. Akka has some great documentation, but mixing it in with Play was a tough mental task.
Please help improve this answer if you see anything amiss.
I'm currently working with some legacy code whereby transactions are rolled manually.
Consider the following (example for illustrative purposes only, ignore syntax/design issues):
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;
private PlatformTransactionManager transactionManager;

private DefaultTransactionDefinition txDefRequired = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRED);
private DefaultTransactionDefinition txDefRequiresNew = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
void createPosts(Map<String, String> posts) {
    TransactionStatus status = transactionManager.getTransaction(txDefRequiresNew);
    try {
        // posts Map is made up of username:posttext key pairs
        for (Map.Entry<String, String> entry : posts.entrySet()) {
            // Construct online post
            createPost(entry.getKey(), entry.getValue());
        }
        transactionManager.commit(status);
    } finally {
        if (!status.isCompleted()) {
            transactionManager.rollback(status);
        }
    }
}
void createPost(String username, String text) {
    TransactionStatus status = transactionManager.getTransaction(txDefRequired);
    // Construct online post
    OnlinePost p = new OnlinePost(text);
    User u = resolveUser(username);
    p.setUser(u);
    transactionManager.commit(status); // flush the session
}
User resolveUser(String username) {
    TransactionStatus status = transactionManager.getTransaction(txDefRequired);
    User u = getUser(username);
    if (u == null) {
        createUser(username);
    }
    transactionManager.commit(status); // flush the session
    return u;
}
Would it be correct to say that in this particular flow, tx.commit() within the called methods, such as resolveUser(), will only serve to flush the session (flush-mode = auto) and return a new persistent entity with an artificial primary key generated and assigned to it so that you can use it later on in the same transaction?
No. A commit will invariably end the unit of work and push the updates to the database. A flush, on the other hand, does not commit the changes to the DB (as in making them visible to everyone); you could technically roll back a flushed change.
See https://stackoverflow.com/a/14626510/2231632 for a possible duplicate.
As for getting the artificial primary key, you could use Session.flush() for that. But don't commit the transaction if you want to use that key in the same transaction.
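A minimal sketch of that idea, assuming a Hibernate SessionFactory is available alongside the transaction manager and that OnlinePost has a generated id (the sessionFactory field and getId() accessor are assumptions, not from the original code):
TransactionStatus status = transactionManager.getTransaction(txDefRequired);
Session session = sessionFactory.getCurrentSession();

OnlinePost p = new OnlinePost("some text");
session.save(p);
// flush() pushes the INSERT so the generated primary key is assigned,
// but the unit of work stays open and could still be rolled back
session.flush();
Long generatedId = p.getId(); // usable later within the same transaction

// ... further work that needs generatedId ...
transactionManager.commit(status); // single commit at the end of the unit of work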