I have noticed an issue when using timeouts and Redis persistence with StateMachineFactory.
Above is my UML diagram for the state machine. I have added a state listener in my code, and the machine is persisted every time the listener fires:
StateMachine<String, String> stateMachine = factory.getStateMachine();
stateMachine.addStateListener(new CompositeStateMachineListener<String, String>() {
    @Override
    public void stateContext(StateContext<String, String> arg0) {
        String user = (String) arg0.getExtendedState().getVariables().get("imei");
        if (user == null) {
            return;
        }
        log.info(arg0.getStage().toString() + "**********" + stateMachine.getState());
        try {
            // Persist the machine under a key derived from the IMEI variable
            redisStateMachinePersister.persist(arg0.getStateMachine(), "testprefixSw:" + user);
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
    }
});
Note: ExitPointGQ points to an initial state called WAITFORCOMMAND of the parent machine.
Now take the scenario where I need to wait by sending the WAIT signal: the machine goes back to WaitForGenQueryRes, which is right. But by now the first timer has already started, and after 60 seconds it fires and exits through the exit point, so the listener persists the state as WAITFORCOMMAND, whereas it should be WaitForGenQueryRes because I looped back.
Please point out my mistake so I can fix this.
I'm using Akka 2.5.6 with Java 8 and I want to know the right way to shut down the ActorSystem. Part of the functionality of my code is to process some XML files and validate them; to achieve this I have created 3 actors:
Controller, Processor and Validator.
The Controller is responsible for initiating the process and sending the files, one by one, along with other information to the Processor. The Processor creates a digital signature of each file and sends the response to the Validator, which validates the status and sends an OK message back to the Controller. The Controller counts the number of validated files and compares it with the total number of files. Once the two counts are equal, I shut down the ActorSystem with the terminate() method.
The shutdown method is as follows:
private void endActors() {
    ActorSystem actorSystem = getContext().system();
    Future<Terminated> terminated = actorSystem.terminate();
    do {
        log.info("Waiting to finish ...");
        try {
            Thread.sleep(30000L);
        } catch (InterruptedException ex) {
            log.error("Error in Thread.");
        }
    } while (!terminated.isCompleted());
    log.info("Actors finished processing.");
}
The loop never ends because the future is never completed. I don't know if this is the right way; I hope you have understood me and can help me or give me some advice.
Try the following (the key here is the onComplete callback). I wrote a class along these lines to use in a setup and teardown for JUnit, to avoid issues from one test's actor system not fully terminating in its teardown before the next test creates a new one (that caused "port already in use" issues).
private static ActorSystem system = null;
private static Future<Terminated> terminatedFuture;

public static ActorSystem getFreshActorSystem() {
    tearDownActorSystem();
    // Block until the previous system has fully terminated
    while (system != null) {
        try {
            Thread.sleep(500L);
        } catch (InterruptedException e) {
            // ignore and keep waiting
        }
    }
    system = ActorSystem.create();
    return system;
}

public static void tearDownActorSystem() {
    if (system != null && !isInMiddleOfTerminating()) {
        terminatedFuture = system.terminate();
        terminatedFuture.onComplete(new OnComplete<Terminated>() {
            @Override
            public void onComplete(Throwable failure, Terminated success) throws Throwable {
                system = null;
                terminatedFuture = null;
            }
        }, system.dispatcher());
    }
}

private static boolean isInMiddleOfTerminating() {
    return terminatedFuture != null;
}
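For completeness, this is roughly how I hook it into a JUnit 4 test class. The helper class name ActorSystemTestHelper and the test itself are placeholders for illustration, not part of the code above:
import akka.actor.ActorSystem;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class MyActorTest {

    private ActorSystem system;

    @Before
    public void setUp() {
        // Blocks until any previous system is fully gone, then creates a fresh one
        system = ActorSystemTestHelper.getFreshActorSystem();
    }

    @After
    public void tearDown() {
        // Starts termination; the next getFreshActorSystem() call waits for it to finish
        ActorSystemTestHelper.tearDownActorSystem();
    }

    @Test
    public void freshSystemIsAvailable() {
        Assert.assertNotNull(system);
    }
}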
In my KafkaConsumer app I want to read a batch of messages with poll() and process them. But processing may fail. In that case I want to retry until I succeed, but only while the consumer still owns the partitions. I don't want to keep calling poll() because I don't want to read more data.
This is a code snippet:
consumer = new KafkaConsumer<>(consumerConfig);
try {
    consumer.subscribe(config.topics() /* Callback does not work as I do not call poll in between */);
    while (true) {
        ConsumerRecords<byte[], Value> values = consumer.poll(10000);
        while (/* I am still owner of partitions */) {
            try {
                process(values);
            } catch (Exception e) {
                log.error("I don't care, just retry while I own the partitions", e);
            }
        }
    }
} catch (WakeupException e) {
    // shutting down
} finally {
    consumer.close();
}
There is a callback that tells you when your consumer's partition assignments are about to be revoked. Keep processing messages unless you get an onPartitionsRevoked() event.
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/ConsumerRebalanceListener.html#onPartitionsRevoked(java.util.Collection)
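For illustration only (this is not from the post), subscribing with a ConsumerRebalanceListener could look roughly like the sketch below; consumer, config.topics(), process(...) and log come from the question, and the AtomicBoolean flag is the only new piece. Note the caveat the question already hints at: these callbacks are only invoked from inside poll(), so the flag cannot actually change while you retry without polling.
// Sketch: track partition ownership with a flag toggled by the rebalance callbacks.
final AtomicBoolean ownsPartitions = new AtomicBoolean(false);

consumer.subscribe(config.topics(), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // called from inside poll() just before a rebalance takes our partitions away
        ownsPartitions.set(false);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // called from inside poll() once partitions have been (re)assigned
        ownsPartitions.set(true);
    }
});

while (true) {
    ConsumerRecords<byte[], Value> values = consumer.poll(10000);
    while (ownsPartitions.get()) {
        try {
            process(values);
            break; // processed successfully, go back to poll()
        } catch (Exception e) {
            log.error("Retrying while we still own the partitions", e);
        }
    }
}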
What about simply calling assignment()?
http://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#assignment()
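Again just a sketch, not from the post: assignment() returns the set of TopicPartitions currently assigned to this consumer, so ownership of the current batch could be checked like this (values is the batch from the question's snippet). Keep in mind assignment() only reflects the consumer's local view and will not change without further poll() calls.
// Hypothetical ownership check built on assignment().
Set<TopicPartition> batchPartitions = values.partitions();
while (consumer.assignment().containsAll(batchPartitions)) {
    try {
        process(values);
        break; // success
    } catch (Exception e) {
        log.error("Retrying while the batch's partitions are still assigned to us", e);
    }
}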
I came to the conclusion that it is impossible to call poll() without reading messages with the current Kafka consumer (0.10.2.x). However, it is possible to reset the offset after a processing failure, so I rewind the offset as if the messages had never been read:
while (!stopped) {
    ConsumerRecords<byte[], Value> records = consumer.poll(timeout);
    try {
        process(records);
    } catch (Exception e) {
        rewind(records);
        // Ensure a delay after errors to let dependencies recover
        try {
            Thread.sleep(delay);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
and the rewind method is:
private void rewind(ConsumerRecords<byte[], Value> records) {
    records.partitions().forEach(partition -> {
        // Seek back to the first record of this batch so the next poll() re-reads it
        long offset = records.records(partition).get(0).offset();
        consumer.seek(partition, offset);
    });
}
This solves the initial problem.
I currently have a lot of code, so it will be difficult to break it all down into an SSCCE, but I may attempt to do so later if necessary.
Anyway, here is the gist: I have two processes communicating via RMI. It works. However, I want to be able to resume communication if the host process (JobViewer) exits and later comes back, all within the lifetime of the client process (Job).
Currently I have the bound name saved to a file every time a Job starts up, and the JobViewer opens this file on startup. That part works: the correct bound name is read. However, I get a NotBoundException every time I try to resume communication with a Job that I know for a fact is still running when the JobViewer restarts.
My JobViewer implements an interface that extends Remote with the following methods:
public void registerClient(String bindedName, JobStateSummary jobSummary) throws RemoteException, NotBoundException;
public void giveJobStateSummary(JobStateSummary jobSummary) throws RemoteException;
public void signalEndOfClient(JobStateSummary jobSummary) throws RemoteException;
And my Job also implements a different interface that extends Remote with the following methods:
public JobStateSummary getJobStateSummary() throws RemoteException;
public void killRemoteJob() throws RemoteException;
public void stopRemoteJob() throws RemoteException;
public void resumeRemoteJob() throws RemoteException;
How do I achieve this? Here is some of my current code that initializes the RMI, if it helps...
JobViewer side:
private Registry _registry;
// Set up RMI
_registry = LocateRegistry.createRegistry(2002);
_registry.rebind("JOBVIEWER_SERVER", this);
Job side:
private NiceRemoteJobMonitor _server;
Registry registry = LocateRegistry.getRegistry(hostName, port);
registry.rebind(_bindedClientName, this);
Remote remoteServer = registry.lookup(masterName);
_server = (NiceRemoteJobMonitor)remoteServer;
_server.registerClient(_bindedClientName, _jobStateSummary);
I get a NotBoundException every time I try to resume communication with a Job that I know for fact is still running when the JobViewer restarts.
That can only happen if the JobViewer didn't rebind itself when it started up. More usually you get a NoSuchObjectException when you use a stale stub, i.e. a stub whose remote object has exited. In that case you should reacquire the stub, i.e. redo the lookup().
Why is the client binding itself to a Registry at all? If you want to register a callback, just pass `this` to the registerClient() method instead of the bound name, and adjust its signature accordingly (using the client's remote interface as the parameter type). There is no need to have the server do a lookup against the client's Registry, and no need for a client Registry at all.
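A minimal sketch of that callback style, assuming the Job's remote interface is called NiceRemoteJob (a placeholder name; JobStateSummary and NiceRemoteJobMonitor are from the question, with the registerClient() signature adjusted as described):
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Adjusted server-side interface: take the client's stub, not a bound name.
interface NiceRemoteJobMonitor extends Remote {
    void registerClient(NiceRemoteJob client, JobStateSummary jobSummary) throws RemoteException;
    // ... the other methods from the question stay the same ...
}

// Client side: export the Job object and pass its stub in the registerClient() call.
// No client-side Registry is needed at all.
class JobRmiSetup {
    static NiceRemoteJobMonitor connect(NiceRemoteJob job, JobStateSummary summary,
                                        String hostName, int port, String masterName) throws Exception {
        NiceRemoteJob stub = (NiceRemoteJob) UnicastRemoteObject.exportObject(job, 0);
        Registry registry = LocateRegistry.getRegistry(hostName, port);
        NiceRemoteJobMonitor server = (NiceRemoteJobMonitor) registry.lookup(masterName);
        server.registerClient(stub, summary);
        return server;
    }
}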
My solution was to have the Job ping the JobViewer every so often:
while (true) {
    try {
        _server.ping();
        // If control reaches here we were able to successfully ping the job monitor.
    } catch (Exception e) {
        System.out.println("Job lost contact with the job monitor at " + new Date().toString() + " ...");
        // If control reaches here we were unable to ping the job monitor.
        // Now we will loop until it presumably comes back to life.
        boolean foundServer = false;
        while (!foundServer) {
            try {
                // Attempt to register again.
                Registry registry = LocateRegistry.getRegistry(_hostName, _port);
                registry.rebind(_bindedClientName, NiceSupervisor.this);
                Remote remoteServer = registry.lookup(_masterName);
                _server = (NiceRemoteJobMonitor) remoteServer;
                _server.registerClient(_bindedClientName, _jobStateSummary);
                // Ping the server for good measure.
                _server.ping();
                System.out.println("Job reconnected with the job monitor at " + new Date().toString() + " ...");
                // If control reaches here we were able to reconnect to the job monitor and ping it again.
                foundServer = true;
            } catch (Exception x) {
                System.out.println("Job still cannot contact the job monitor at " + new Date().toString() + " ...");
            }
            // Sleep for 1 minute before we try to locate the registry again.
            try {
                Thread.sleep(PING_WAIT_TIME);
            } catch (InterruptedException x) {
            }
        } // End of loop until we find the server again.
    }
    // Sleep for 1 minute after we ping the server before we try again.
    try {
        Thread.sleep(PING_WAIT_TIME);
    } catch (InterruptedException e) {
    }
} // End of endless loop that we never exit.
I am using Spring and JPA for a financial data handling project. I have to handle scheduling transactions for future dates. A Quartz cron job runs daily, executes all scheduled transactions, and persists them to the real table.
My problem is that when the trigger executes for a given time and one record fails for some reason, all the other records are not executed.
I need all the other transactions to execute, and only the failed transaction should be rolled back.
Is there a way to handle this?
The following code block gets all the scheduled jobs:
public void bankingPaymentSchedullerRun() {
    // get all pending jobs
    List<BankScheduler> bankSchedulers = bankSchedulerDao.findByStatus(ScheduleStatus.PENDING);
    Calendar currentDate = DateUtil.getFormatedCalenderDate(DateUtil.currentDate(), "yyyy-MM-dd");
    if (bankSchedulers != null) {
        for (BankScheduler bankScheduler : bankSchedulers) {
            LOGGER.info("bankingPaymentSchedullerRun " + bankScheduler.toString());
            // compare the from date and to date against the current date
            if ((bankScheduler.getFromDate().compareTo(currentDate) <= 0)
                    && (bankScheduler.getToDate().compareTo(currentDate) >= 0)) {
                if (bankScheduler.getSchedulerType().equals(SchedulerType.FUND_TRANSFER.toString())) {
                    scheduleFundTransfer(bankScheduler);
                } else {
                    scheduleUtilityPayment(bankScheduler);
                }
            }
        }
    }
}
Fund transfer execution code block
public PaymentResponse scheduleFundTransfer(final BankScheduler bankScheduler) {
    FundTransfer fundTransfer = new FundTransfer();
    fundTransfer.setBranchName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryBranchName());
    fundTransfer.setBeneficiaryBankName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryBankName());
    fundTransfer.setBeneficiaryType(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryType());
    fundTransfer.setBeneficiaryName(bankScheduler.getBankSchedulerFundTransfer().getBeneficiaryName());
    fundTransfer.setCurrency(bankScheduler.getBankSchedulerFundTransfer().getCurrency());
    fundTransfer.setNarration(bankScheduler.getDetail());
    fundTransfer.setThirdPartyType(bankScheduler.getBankSchedulerFundTransfer().getThirdPartyType());
    fundTransfer.setTransferedAmount(bankScheduler.getBankSchedulerFundTransfer().getAmount());
    fundTransfer.setUserAccountNumber(bankScheduler.getBankSchedulerFundTransfer().getAccountNumber());
    fundTransfer.setUserName(bankScheduler.getBankSchedulerFundTransfer().getUserName());
    FundTransfer fundTransferUpdate = null;
    // commit to the fund transfer real table
    try {
        fundTransferUpdate = fundTransferDao.create(fundTransfer);
    } catch (Exception e) {
        // the exception is only logged here; fundTransferUpdate stays null and the method continues
        LOGGER.error("Exception occur when call update fund transfer in fundTransfer()", e);
    }
    // get the currency decimal points
    BankCurrecyInfor currecyInfo =
            bankCurrecyInforDao.getDecimalpointsByCurrecy(fundTransfer.getUserAccount().getCurrencyCode());
    fundTransferUpdate.setDecimalAmt(String.valueOf(currecyInfo.getNoOfDecimal()));
    // call the bank back-end to update
    PaymentResponse response = accountServiceInvoker.fundTransfer(fundTransferUpdate);
    // send an SMS notification
    SmsCriteria smsCriteria = new SmsCriteria();
    smsCriteria.setBank_name("ABC");
    messageServiceInvoker.sentIbSms(smsCriteria);
    return response;
}
Yes, there is: use proper exception handling, so that when a single transaction fails the others will still (try to) execute.
Pseudocode
for (scheduled task from all scheduled tasks) {
    try {
        begin transaction
        do your stuff with JPA
        commit transaction
    } catch (Exception e) {
        rollback transaction, log error and stuff
    } finally {
        release resources
    }
}
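In Spring terms, a minimal sketch of that pattern could use a programmatic TransactionTemplate with REQUIRES_NEW propagation, so each scheduled record commits or rolls back on its own. The method and parameter names below are illustrative, and this only helps if the failure actually propagates out of scheduleFundTransfer() instead of being swallowed by its inner try/catch:
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

public void runScheduledPayments(List<BankScheduler> bankSchedulers,
                                 PlatformTransactionManager txManager) {
    TransactionTemplate txTemplate = new TransactionTemplate(txManager);
    // Each record gets its own transaction, independent of the others
    txTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);

    for (final BankScheduler bankScheduler : bankSchedulers) {
        try {
            txTemplate.execute(new TransactionCallbackWithoutResult() {
                @Override
                protected void doInTransactionWithoutResult(TransactionStatus status) {
                    scheduleFundTransfer(bankScheduler);
                }
            });
        } catch (Exception e) {
            // Only this record's transaction is rolled back; the loop moves on
            LOGGER.error("Scheduled payment failed, continuing with the next record", e);
        }
    }
}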
Does Smack function properly in Java EE? I am having issues with presence.
I get the credentials from the login form via the doPost method. I can successfully authenticate, and connection.getRoster() also works. Next I want to show only users who are online, but when I get the presence of a user, the Presence object holds the default value "unavailable" for all users, even when they are available!
The whole chat app works without flaw in a normal Java class, without any change.
String userName = request.getParameter("username");
String password = request.getParameter("password");
HttpSession session = request.getSession();
session.setAttribute("username", userName);
SmackAPIGtalkServlet gtalk = new SmackAPIGtalkServlet();
ConnectionConfiguration config = new ConnectionConfiguration(
        "talk.google.com", 5222, "gmail.com");
connection = new XMPPConnection(config);
config.setSASLAuthenticationEnabled(false);
try {
    connection.connect();
} catch (XMPPException e) {
    e.printStackTrace();
}
try {
    connection.login(userName, password);
} catch (XMPPException e) {
    e.printStackTrace();
}
System.out.println(connection.isAuthenticated());
boolean status = connection.isAuthenticated();
if (status) {
    gtalk.displayOnlineBuddyList();
    response.sendRedirect("Roster.jsp");
} else {
    response.sendRedirect("Failed.jsp");
}
}
public void displayOnlineBuddyList() {
    Roster roster = connection.getRoster();
    Collection<RosterEntry> entries = roster.getEntries();
    int count1 = 0;
    int count2 = 0;
    for (RosterEntry r : entries) {
        Presence presence = roster.getPresence(r.getUser());
        if (presence.getType() == Presence.Type.unavailable) {
            // System.out.println(r.getUser() + " is offline");
            count1++;
        } else {
            System.out.println(r.getUser() + " is online");
            count2++;
        }
    }
    roster.addRosterListener(new RosterListener() {
        // Ignored events
        public void entriesDeleted(Collection<String> addresses) {
        }

        public void entriesUpdated(Collection<String> addresses) {
        }

        public void presenceChanged(Presence presence) {
            System.out.println("Presence changed: " + presence.getFrom()
                    + " " + presence);
        }

        @Override
        public void entriesAdded(Collection<String> arg0) {
            // TODO Auto-generated method stub
        }
    });
}
I am stuck with this and not able to get the code working with servlets. Can anyone help me out?
Will Smack work inside of Java EE? Yes and no.
Smack will work inside of a web container, but since it creates its own threads it will NOT work inside of an EJB container. So whether it works depends on where you are running it.
To understand some of your issues, you have to understand that the lifecycle of your objects in a servlet is tied to the request/response cycle of each request. This is not the same as a standard Java app, where objects typically live as long as you need them to, since you control their lifecycle.
For example, in the code you have shown, you create the connection for each request (I assume, since not all of the code is shown). Registering listeners against that connection is therefore pointless, since the connection passes out of scope as soon as you leave the method and eventually gets garbage collected. You will have to maintain the connections outside of the scope of the servlet requests for this to work; otherwise you will be opening and closing connections for each request.
XMPP is completely asynchronous by nature, whereas servlet requests are synchronous. You have to put some effort into making them work together, so don't expect code that works in a standalone app to simply work in this environment.
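One minimal way to keep the connection alive across requests, sketched against the legacy Smack 3.x API used in the question, is to stash the authenticated connection in the HTTP session after login and look it up on later requests (the attribute name is just an illustrative choice; an application-scoped registry keyed by user would work similarly):
// After a successful login in doPost(): keep the connection for later requests.
HttpSession session = request.getSession();
session.setAttribute("xmppConnection", connection);

// In a later request (or another servlet/JSP): reuse the same long-lived connection,
// so roster listeners registered on it keep receiving presence updates.
XMPPConnection connection =
        (XMPPConnection) request.getSession().getAttribute("xmppConnection");
if (connection != null && connection.isAuthenticated()) {
    Roster roster = connection.getRoster();
    // read presence / register listeners against this connection here
}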
You have to implement the RosterListener interface and override its presenceChanged method; that is where you can get the presence of the users.
It works for me.
When you first get the roster from GTalk, all entries will have the status "unavailable".
But after some time their presence changes, and you can pick that up in the RosterListener's presenceChanged method, as long as you have implemented it.
And yes, it works well in Java EE and Android, as well as WAP.
Here is my code:
<%
    Roster rst = roster;
    rst.addRosterListener(new RosterListener() {
        public void entriesAdded(final Collection<String> args) {}
        public void entriesDeleted(final Collection<String> addresses) {}
        public void entriesUpdated(final Collection<String> addresses) {}
        public void presenceChanged(final Presence presence) {
            final Presence prsence1 = presence;
            prsenceChanged(prsence1);
            if (prsence1.isAvailable()) {
                System.out.println("Is Available: " + presence.isAvailable());
            }
        }
    });
%>
<%! void prsenceChanged(Presence presence) { if (null != presence) { %>
<script language="javascript">
    alert("hai");
</script>