Websocket breaks when the page is reloaded - java

We have a Java application that sends messages over a STOMP connection over WebSocket, using Spring Boot messaging support. The data should be sent to specific users once they connect and subscribe to the topic, but when we reload the page the websocket breaks and never sends any messages again.
We listen for the SessionSubscribeEvent here (so we can send an initial message after the subscription is made):
@Component
@AllArgsConstructor
public class TransactionSubscriptionListener implements ApplicationListener<SessionSubscribeEvent> {

    private static final String DESTINATION_HEADER = "simpDestination";

    private final RegionTransactionSender regionTransactionSender;

    @Override
    public void onApplicationEvent(SessionSubscribeEvent subscribeEvent) {
        Object simpDestination = subscribeEvent.getMessage().getHeaders().get(DESTINATION_HEADER);
        if (simpDestination == null) {
            return;
        }
        String destination = String.valueOf(simpDestination);
        if (destination.matches(RegionTransactionSender.REGEXP)) {
            regionTransactionSender.send();
        }
    }
}
Region transaction sender implementation:
@Component
@AllArgsConstructor
public class RegionTransactionSender {

    public static final String REGEXP =
            ApiVersionConstants.TRANSACTIONS_FOR_REGION_DESTINATION_WITH_SUBSCRIBER + "/\\S*";
    private static final String TOPIC_URL_PREFIX = ApiVersionConstants.TRANSACTIONS_FOR_REGION_DESTINATION + "/";

    private final SimpMessageSendingOperations sendingOperations;
    private final TransactionService transactionService;
    private final SimpUserRegistry simpUserRegistry;

    public void send() {
        Set<SimpUser> users = simpUserRegistry.getUsers();
        users.stream()
                .filter(SimpUser::hasSessions)
                .forEach(this::sendToSubscriptions);
    }

    private void sendToSubscriptions(SimpUser user) {
        user.getSessions().forEach(session -> session.getSubscriptions()
                .forEach(subscription -> sendToTopics(user, subscription)));
    }

    private void sendToTopics(final SimpUser user, final SimpSubscription subscription) {
        String destination = subscription.getDestination();
        if (destination.matches(REGEXP)) {
            Optional<String> regionOptional = WebsocketUtils.retrieveOrganizationRegionFromDestination(destination);
            regionOptional.ifPresent(region -> sendForRegionTopic(user, region));
        }
    }

    private void sendForRegionTopic(final SimpUser user, final String region) {
        Set<TransactionResponse> transactionsForRegion = transactionService
                .getTransactionsForRegion(AbstractWebsocketSender.TRANSACTIONS_COUNT, region);
        sendingOperations.convertAndSendToUser(user.getName(), TOPIC_URL_PREFIX + region, transactionsForRegion);
    }
}
The send() method is called later on but no messages are sent.
(Screenshot: messages visible in Chrome's network debugging tool.)
As you can see, our other websocket (systemBalanceSummary) works great. The difference is that for systemBalanceSummary we send messages to a non-user-specific destination.
It's also worth mentioning that when we access the website for the first time everything works fine.
Why does that websocket break when we reload the page?
EDIT
After some debugging we've found out that, even though the subscription event is fired, there are no users in the SimpUserRegistry, but we do not know what causes that.

I have found a solution for this.
First, you need to implement SimpUserRegistry yourself instead of using DefaultSimpUserRegistry. The reason is that DefaultSimpUserRegistry seems to add the user only after SessionConnectedEvent is triggered, and that event is not always fired. I changed it so the user is added after SessionConnectEvent instead.
This resolves the problem of not having users in the user registry after a reload. If this is not a problem for you, you can probably skip it.
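As an aside, if all you need is the session id of each connected user, and not a full SimpUserRegistry replacement, a small listener can track that as soon as SessionConnectEvent fires. This is only a sketch under that assumption; the class and method names are made up:
import java.security.Principal;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.context.ApplicationListener;
import org.springframework.messaging.simp.SimpMessageHeaderAccessor;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectEvent;

@Component
public class ConnectedSessionTracker implements ApplicationListener<SessionConnectEvent> {

    private final Map<String, String> sessionIdsByUser = new ConcurrentHashMap<>();

    @Override
    public void onApplicationEvent(SessionConnectEvent event) {
        SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(event.getMessage());
        Principal user = accessor.getUser();
        // register the session as soon as the CONNECT frame arrives, not on CONNECTED
        if (user != null && accessor.getSessionId() != null) {
            sessionIdsByUser.put(user.getName(), accessor.getSessionId());
        }
    }

    public Optional<String> sessionIdFor(String userName) {
        return Optional.ofNullable(sessionIdsByUser.get(userName));
    }
}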
After that I changed the usage of the convertAndSendToUser method. In the code provided in the question, the data is sent to the username; I changed it so the data is sent to the sessionId, and also added some headers. Here is the code for that:
private void sendForRegionTopic(final String region, final String sessionId) {
    Set<TransactionResponse> transactionsForRegion = transactionService
            .getTransactionsForRegion(AbstractWebsocketSender.TRANSACTIONS_COUNT, region);
    sendingOperations.convertAndSendToUser(sessionId,
            TOPIC_URL_PREFIX + region,
            transactionsForRegion,
            createHeaders(sessionId));
}

private MessageHeaders createHeaders(final String sessionId) {
    SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.create(SimpMessageType.MESSAGE);
    accessor.setSessionId(sessionId);
    accessor.setLeaveMutable(true);
    return accessor.getMessageHeaders();
}
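For completeness, here is a sketch of how the subscription listener from the question could pass the session id through to that method. The simpSessionId header is read via SimpMessageHeaderAccessor; the send(String sessionId) overload on RegionTransactionSender is an assumed addition, not code from the project:
@Override
public void onApplicationEvent(SessionSubscribeEvent subscribeEvent) {
    SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(subscribeEvent.getMessage());
    String destination = accessor.getDestination();
    String sessionId = accessor.getSessionId();
    // send(sessionId) is assumed to end up calling sendForRegionTopic(region, sessionId)
    if (destination != null && sessionId != null
            && destination.matches(RegionTransactionSender.REGEXP)) {
        regionTransactionSender.send(sessionId);
    }
}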


AWS SQS Listening vs Polling

I currently have an SQS listener implemented in a Spring Boot project running on Fargate.
It's possible that, under the hood, the SqsAsyncClient that appears to be a listener is actually polling, though.
Separately, as a PoC, I implemented a Lambda function trigger on a different queue. This would be invoked when there are items in the queue and would post to my service. This seems unnecessarily complex to me, but it removes a single point of failure if I were to only have one instance of the service.
I guess my major point of confusion is whether I am needlessly worrying about polling vs listening on an SQS queue and whether it matters.
Code for example purposes:
@Component
@Slf4j
@RequiredArgsConstructor
public class SqsListener {

    private final SqsAsyncClient sqsAsyncClient;
    private final Environment environment;
    private final SmsMessagingServiceImpl smsMessagingService;

    @PostConstruct
    public void continuousListener() {
        String queueUrl = environment.getProperty("aws.sqs.sms.queueUrl");
        Mono<ReceiveMessageResponse> responseMono = receiveMessage(queueUrl);
        Flux<Message> messages = getItems(responseMono);
        messages.subscribe(message -> disposeOfFlux(message, queueUrl));
    }

    protected Flux<Message> getItems(Mono<ReceiveMessageResponse> responseMono) {
        // repeat() re-subscribes after every response, so this is effectively an endless receive loop
        return responseMono.repeat().retry()
                .map(ReceiveMessageResponse::messages)
                .map(Flux::fromIterable)
                .flatMap(messageFlux -> messageFlux);
    }

    protected void disposeOfFlux(Message message, String queueUrl) {
        log.info("Inbound SMS Received from SQS with MessageId: {}", message.messageId());
        if (someConditionIsMet())
            deleteMessage(queueUrl, message);
    }

    protected Mono<ReceiveMessageResponse> receiveMessage(String queueUrl) {
        return Mono.fromFuture(() -> sqsAsyncClient.receiveMessage(
                ReceiveMessageRequest.builder()
                        .maxNumberOfMessages(5)
                        .messageAttributeNames("All")
                        .queueUrl(queueUrl)
                        .waitTimeSeconds(10)   // long polling: each receive call waits up to 10s for messages
                        .visibilityTimeout(30)
                        .build()));
    }

    protected void deleteMessage(String queueUrl, Message message) {
        sqsAsyncClient.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(message.receiptHandle())
                        .build())
                .thenAccept(deleteMessageResponse -> log.info("deleted message with handle {}", message.receiptHandle()));
    }
}
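One note on the polling-vs-listening worry: SQS has no push delivery to consumers, so every consumer ultimately polls, and the waitTimeSeconds(10) above already turns each receive call into a long poll. If the hand-rolled Reactor loop feels like too much code, a container-managed listener can run the polling loop for you. A minimal sketch, assuming the Spring Cloud AWS 3.x SQS starter (io.awspring.cloud) is on the classpath and a hypothetical aws.sqs.sms.queueName property:
import io.awspring.cloud.sqs.annotation.SqsListener;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class SmsQueueListener {

    // The container long-polls SQS behind the scenes and, with the default
    // acknowledgement mode, deletes the message when this method returns normally.
    @SqsListener("${aws.sqs.sms.queueName}")
    public void onInboundSms(String payload) {
        log.info("Inbound SMS received from SQS: {}", payload);
    }
}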

How to connect OSGi bundles (send an entity through bundles)?

I have a REST service, which contains three classes in one module (bundle):
User.java -> Entity
UserService.java -> REST service
UserValidator.java -> Special validator for the entity. The service sends the entity to this validator and gets a validation result (true or false):
User.java
@XmlRootElement(name = "User")
public class User {
    private long id;
    private String name;
    private String surname;
    private String patronymic;
    /* Getters and Setters */
}
UserService.java
public class UserServiceImpl implements UserService {

    private UserDAO userDbDao = new UserDatabaseDAO();

    @POST
    @Path("/users/")
    public Response addUser(User user) {
        UserValidator userValidator = new UserValidator(user);
        if (userValidator.isValid()) {
            User newUser = userDbDao.createUser(user);
            return Response.ok().type("application/xml").entity(newUser).build();
        } else {
            return Response.status(Response.Status.BAD_REQUEST).entity(userValidator.getErrorMessage()).build();
        }
    }
}
UserValidator.java
public class UserValidator {

    private static final int MAX_SIZE_NAME = 50;
    private static final int MIN_SIZE_NAME = 2;

    private User user;
    private BadUserResponse badUserResponse = new BadUserResponse();

    public UserValidator(User user) {
        this.user = user;
    }

    private boolean isNameValid(String name) {
        if (name == null) {
            badUserResponse.setNsp("Null in fields name/surname/patronymic");
            return false;
        }
        String tempName = name.trim();
        if (tempName.length() < MIN_SIZE_NAME || tempName.length() > MAX_SIZE_NAME) {
            badUserResponse.setNsp(String.format("Fields name/surname/patronymic too long or too short (Allowed length from %d to %d)", MIN_SIZE_NAME, MAX_SIZE_NAME));
            return false;
        }
        for (int i = 0; i < tempName.length(); i++) {
            if (!Character.isLetter(tempName.charAt(i))) {
                badUserResponse.setNsp("Fields name/surname/patronymic contains wrong symbols (Only letters allowed)");
                return false;
            }
        }
        return true;
    }

    public boolean isValid() {
        // non-short-circuit & so every field is validated
        return (isNameValid(user.getName()) &
                isNameValid(user.getSurname()) &
                isNameValid(user.getPatronymic()));
    }

    public BadUserResponse getErrorMessage() {
        return badUserResponse;
    }
}
BadUserResponse.java
@XmlRootElement(name = "baduserresponce")
public class BadUserResponse {

    private String nsp;

    public String getNsp() {
        return nsp;
    }

    public void setNsp(String nsp) {
        this.nsp = nsp;
    }
}
But now I need to split this into separate bundles, because, as you can see, they use each other's functionality. For example, UserService.java
just uses this: UserValidator userValidator = new UserValidator(user);
I need to connect these bundles somehow (OSGi service, ActiveMQ).
In my opinion it works something like this:
The UserService bundle gets the User entity from the REST method.
It puts all of the User fields (name, surname, patronymic) onto an ActiveMQ queue (because the UserValidator bundle doesn't know what a User entity is).
The UserValidator bundle gets the User's fields from the queue and validates them.
The UserValidator bundle puts the validation result (true/false) onto the queue.
The UserService bundle gets the validation result from the queue and sends the User to the DAO.
But this is just a concept. Am I wrong?
What's the best way to pass an entity between bundles, and how should I do this?
Your current way of simply instantiating the UserValidator via new is technically fine even if they live in different bundles. If your validator is only needed in this place and is simple, I would even leave it in the same bundle.
The other options can make sense to decouple your bundles. Using messaging allows you to avoid synchronous calls; it can also be used to send the data to a remote machine. JMS messaging is quite heavyweight, though: you need a broker and you depend on the API. In your case you also directly need the result of the validation, so you would be simulating a synchronous call with JMS. I would rather avoid this.
Using an OSGi service allows you to decouple from the implementation of the service. In this case it makes sense to create an interface for UserValidator. I would also put this interface into a separate bundle. You then need to register the service in the bundle that implements the validator and bind the service in the bundle that uses the validator. OSGi services are very lightweight and synchronous by default, so I think they would fit your problem well.
For registering and binding services, do not use the OSGi API directly. Instead, use Declarative Services with annotations; they take away most of the complexity of dealing with OSGi services.
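A minimal Declarative Services sketch of that wiring, shown as three separate source files from three bundles (the interface split, the boolean return instead of the stateful validator, and all class names are illustrative assumptions, not code from the question):
// API bundle: export only the contract.
public interface UserValidation {
    boolean isValid(User user);
}

// Validator bundle: register the implementation as an OSGi service.
import org.osgi.service.component.annotations.Component;

@Component(service = UserValidation.class)
public class UserValidationImpl implements UserValidation {
    @Override
    public boolean isValid(User user) {
        // reuse the name/surname/patronymic checks from the question here
        return user != null && user.getName() != null;
    }
}

// Service bundle: bind the service instead of calling new UserValidator(...).
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = UserService.class)
public class UserServiceImpl implements UserService {

    @Reference
    private UserValidation userValidation;

    // inside addUser(...): if (userValidation.isValid(user)) { ... }
}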
By the way, I am not sure how you do REST; I suggest having a look at the Aries JAX-RS Whiteboard.

Running a CompletableFuture from an Actor

I'm playing around with reactive patterns in a Java (8) Spring Boot (1.5.2.RELEASE) application with Akka (2.5.1). It's coming along nicely but now I'm stuck trying to run a CompletableFuture from an actor. To simulate this I have created a very simple service that returns a CompletableFuture. However, when I then try to return the result to the calling controller I get errors about dead-letters and no response is returned.
The error I am getting is:
[INFO] [05/05/2017 13:12:25.650] [akka-spring-demo-akka.actor.default-dispatcher-5] [akka://akka-spring-demo/deadLetters] Message [java.lang.String] from Actor[akka://akka-spring-demo/user/$a#-1561144664] to Actor[akka://akka-spring-demo/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
Here is my code. This is the controller calling the actor:
@Component
@Produces(MediaType.TEXT_PLAIN)
@Path("/")
public class AsyncController {

    @Autowired
    private ActorSystem system;

    private ActorRef getGreetingActorRef() {
        ActorRef greeter = system.actorOf(SPRING_EXTENSION_PROVIDER.get(system)
                .props("greetingActor"));
        return greeter;
    }

    @GET
    @Path("/foo")
    public void test(@Suspended AsyncResponse asyncResponse, @QueryParam("echo") String echo) {
        ask(getGreetingActorRef(), new Greet(echo), 1000)
                .thenApply((greet) -> asyncResponse.resume(Response.ok(greet).build()));
    }
}
Here is the service:
@Component
public class GreetingService {

    public CompletableFuture<String> greetAsync(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello, " + name);
    }
}
Then here is the actor receiving the call. At first I had this:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends AbstractActor {

    @Autowired
    private GreetingService greetingService;

    @Autowired
    private ActorSystem system;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Greet.class, this::onGreet)
                .build();
    }

    private void onGreet(Greet greet) {
        greetingService.greetAsync(greet.getMessage())
                .thenAccept((greetingResponse) -> getSender().tell(greetingResponse, getSelf()));
    }
}
This resulted in 2 calls being handled correctly but after that I would get dead-letter errors. Then I read here what was probably causing my problems:
http://doc.akka.io/docs/akka/2.5.1/java/actors.html
Warning
When using future callbacks, inside actors you need to carefully avoid closing over the containing actor’s reference, i.e. do not call methods or access mutable state on the enclosing actor from within the callback. This would break the actor encapsulation and may introduce synchronization bugs and race conditions because the callback will be scheduled concurrently to the enclosing actor. Unfortunately there is not yet a way to detect these illegal accesses at compile time. See also: Actors and shared mutable state
So I figured the idea is that you pipe the result to self() after which you can do getSender().tell(response, getSelf()).
So I altered my code to this:
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends AbstractActor {

    @Autowired
    private GreetingService greetingService;

    @Autowired
    private ActorSystem system;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Greet.class, this::onGreet)
                .match(String.class, this::onGreetingCompleted)
                .build();
    }

    private void onGreet(Greet greet) {
        pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSelf());
    }

    private void onGreetingCompleted(String greetingResponse) {
        getSender().tell(greetingResponse, getSelf());
    }
}
The onGreetingCompleted method is called with the response from the GreetingService, but at that point I again get the dead-letters error, so for some reason it can't send the response back to the calling controller.
Note that if I change the service to this:
@Component
public class GreetingService {

    public String greet(String name) {
        return "Hello, " + name;
    }
}
And the onGreet in the actor to:
private void onGreet(Greet greet) {
    getSender().tell(greetingService.greet(greet.getMessage()), getSelf());
}
Then everything works fine. So it would appear that I have my basic Java/Spring/Akka set up correctly, it's just when trying to call a CompletableFuture from my actor that the problems start.
Any help would be much appreciated, thanks!
The getSender method only reliably returns the sender's ref during the synchronous processing of the message.
In your first case, you have:
greetingService.greetAsync(greet.getMessage())
        .thenAccept((greetingResponse) -> getSender().tell(greetingResponse, getSelf()));
This means that getSender() is invoked asynchronously once the future completes, at which point it is no longer reliable. You can change it to:
ActorRef sender = getSender();
greetingService.greetAsync(greet.getMessage())
        .thenAccept((greetingResponse) -> sender.tell(greetingResponse, getSelf()));
In your second example, you have
pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSelf());
You are piping the response to getSelf(), i.e. your worker actor. The original sender never gets anything (thus the ask expires). You can fix that as follows:
pipe(greetingService.greetAsync(greet.getMessage()), system.dispatcher()).to(getSender());
In the third case, getSender() is executed synchronously during the processing of the message, so it works.
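Putting the two fixes together, a sketch of the corrected actor could look like this (same Spring wiring as in the question; the static import of PatternsCS.pipe is the only addition assumed here):
import static akka.pattern.PatternsCS.pipe;

@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
public class GreetingActor extends AbstractActor {

    @Autowired
    private GreetingService greetingService;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .match(Greet.class, this::onGreet)
                .build();
    }

    private void onGreet(Greet greet) {
        // getSender() is resolved synchronously here, and the future's result is
        // piped straight back to the original asker, so no extra String match is needed.
        pipe(greetingService.greetAsync(greet.getMessage()), getContext().dispatcher())
                .to(getSender());
    }
}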

Is there any way of creating a dynamic @ServerEndpoint address in Java?

For example, I have a room
public class Room {
    private int id;
    private Set<User> users;
}
So I want it to be the endpoint for my websocket application. But there may be a lot of rooms, and I want each of them to have its own URI (for example, rooms/1, rooms/2, etc.).
Evidently, the @ServerEndpoint annotation allows only constants. So, is there any way to do this?
Something like this:
@ServerEndpoint(value = "/rooms/{roomnumber}")
public class ....

    static Map<String, Session> openSessions = ...

    @OnOpen
    public void onConnectionOpen(final Session session, @PathParam("roomnumber") final String roomnumber,
            ...
        // store roomnumber in session
        session.getUserProperties().put("roomnumber", roomnumber);
        openSessions.put(String.valueOf(session.getId()), session);
To only send messages to specific roomnumbers/clients:
// check if session corresponds to the roomnumber
for (Map.Entry<String, Session> entry : openSessions.entrySet()) {
    Session s = entry.getValue();
    if (s.isOpen() && s.getUserProperties().get("roomnumber").equals(roomnumber_you_want_to_address)) {
        ...
And when a client disconnects:
@OnClose
public void onConnectionClose(Session session) {
    openSessions.remove(session.getId());
}
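To push a message to everyone in a given room, a small helper along the same lines could look like this (names are illustrative; it iterates the openSessions map kept above):
private static void sendToRoom(String roomNumber, String message) {
    for (Session s : openSessions.values()) {
        // match only open sessions that registered for this room in @OnOpen
        if (s.isOpen() && roomNumber.equals(s.getUserProperties().get("roomnumber"))) {
            // getAsyncRemote() avoids blocking the calling thread
            s.getAsyncRemote().sendText(message);
        }
    }
}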
You can use this per method to map requests with different path variables in the same controller:
@RequestMapping(value = "/endpoint/{endpointVariable}", method = RequestMethod.GET)
public ReturnDTO getReturnDTO(<params>) {
    // Here the variable, endpointVariable, will be accessible.
    // In my experience it's always been an integer, but I'm sure a string
    // would be possible; check with a debugger.
}
http://www.journaldev.com/3358/spring-mvc-requestmapping-annotation-example-with-controller-methods-headers-params-requestparam-pathvariable

Where should business logic (and what is that?) really live and how to do that with Spring?

I was just reading this article:
http://www.tutorialized.com/view/tutorial/Spring-MVC-Application-Architecture/11986
which I find great. It explains the layer architecture nicely, and I was glad that the architecture I'm working with is pretty much what he describes.
But there's one thing, that I don't seem to get:
First: what exactly is business logic, and what is it not? In the article he says (and he's not the only one) that business logic should go in the domain model. So an Account class should have an activate() method that knows how to activate an Account. In my understanding this would probably involve some persistence work. But the domain model should not have a dependency on DAOs; only the service layer should know about DAOs.
So, is business logic just what a domain entity can do with itself? For example, the activate() method would set the active property to true and set the dateActivated property to new Date(), and then it's the service's task to first call account.activate() and second dao.saveAccount(account)? And does whatever needs external dependencies go into a service? That's mostly what I have done until now.
public class AccountServiceImpl implements AccountService {

    private AccountDAO dao;
    private MailSender mailSender;

    public void activateAccount(Account account) {
        account.setActive(true);
        account.setDateActivated(new Date());
        dao.saveAccount(account);
        sendActivationEmail(account);
    }

    private void sendActivationEmail(Account account) {
        ...
    }
}
This is in contrast to what he says, I think, no?
What I also don't get is the example of how to have Spring wire domain objects like Account, which would be needed should Account send its e-mail on its own.
Given this code:
import org.springframework.mail.MailSender;
import org.springframework.mail.SimpleMailMessage;

public class Account {

    private String email;
    private MailSender mailSender;
    private boolean active = false;

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public void setMailSender(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    public void activate() {
        if (active) {
            throw new IllegalStateException("Already active");
        }
        active = true;
        sendActivationEmail();
    }

    private void sendActivationEmail() {
        SimpleMailMessage msg = new SimpleMailMessage();
        msg.setTo(email);
        msg.setSubject("Congrats!");
        msg.setText("You're the best.");
        mailSender.send(msg);
    }
}
If I use Hibernate, I could use the DependencyInjectionInterceptorFactoryBean in order to wire mailSender. If I use JDBC instead, would I really have to write the following cumbersome code? And also whenever I create a new Account instance in an MVC controller, for, let's say, populating a model?
BeanFactory beanFactory = new XmlBeanFactory(
        new ClassPathResource("chapter3.xml"));
Account account = new Account();
account.setEmail("email@example.com");
((AutowireCapableBeanFactory) beanFactory).applyBeanPropertyValues(
        account, "accountPrototype");
account.activate();
This is not reliable and very cumbersome, no? I'd have to ask myself where an object was created whenever I see an instance of Account. Plus, if I went with this approach, I wouldn't have a single appContext.xml to pass, but several: one for persistence, one for the service config. How would I do that? And wouldn't that create a completely new context every time such an instance is created, or am I missing something?
Is there no better solution to that?
Any help is greatly appreciated.
I think the send-activation-email action is not part of the business layer here; your domain logic is the account activation itself, and that piece of logic should live in the domain object named Account (the activate() method). Sending the activation email is part of the infrastructure or application layer.
The service is the object that handles the account activation request and connects the business layer with the others: it takes the given account, activates it, and then triggers the send-activation-email action of a MailSenderService or something like that.
Short sample:
public class AccountServiceImpl implements AccountService {

    private AccountDAO dao;
    private MailSenderService mailSender;

    public void activateAccount(AccountID accountID) {
        Account account = dao.findAccount(accountID);
        ....
        account.activate();
        dao.updateAccount(account);
        ....
        mailSender.sendActivationEmail(account);
    }
}
The next step that I can suggest is a complete separation of the business layer and the infrastructure layer. This can be achieved by introducing a business event: the service no longer has to perform the action of sending an email, it publishes an event notifying other layers about the account activation.
In Spring we have two tools for working with events: ApplicationEventPublisher and ApplicationListener.
A short example of a service that publishes domain events:
public class AccountActivationEvent extends ApplicationEvent {

    private Account account;

    AccountActivationEvent(Account account) {
        super(account); // ApplicationEvent requires an event source
        this.account = account;
    }

    public Account getActivatedAccount() {
        return account;
    }
}

public class AccountServiceImpl implements AccountService, ApplicationEventPublisherAware {

    private AccountDAO dao;
    private ApplicationEventPublisher epublisher;

    public void setApplicationEventPublisher(ApplicationEventPublisher epublisher) {
        this.epublisher = epublisher;
    }

    public void activateAccount(AccountID accountID) {
        Account account = dao.findAccount(accountID);
        ....
        account.activate();
        dao.updateAccount(account);
        ....
        epublisher.publishEvent(new AccountActivationEvent(account));
    }
}
And the domain event listener, on the infrastructure layer:
public class SendAccountActivationEmailEventListener
        implements ApplicationListener<AccountActivationEvent> {

    private MailSenderService mailSender;
    ....

    public final void onApplicationEvent(final AccountActivationEvent event) {
        Account account = event.getActivatedAccount();
        .... perform mail ...
        mailSender.sendEmail(email);
    }
}
Now you can add other notification types, logging, and other infrastructure concerns without changing or polluting your domain (business) layer.
You can learn more about Spring events in the documentation.
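As a small addition to the above: if you are on Spring 4.2 or later (an assumption; the linked article predates it), the listener does not even have to implement ApplicationListener, a plain annotated method is enough:
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class SendAccountActivationEmailListener {

    private final MailSenderService mailSender;

    public SendAccountActivationEmailListener(MailSenderService mailSender) {
        this.mailSender = mailSender;
    }

    // invoked by Spring whenever an AccountActivationEvent is published
    @EventListener
    public void onAccountActivated(AccountActivationEvent event) {
        mailSender.sendActivationEmail(event.getActivatedAccount());
    }
}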
