Multiple vertx instances vs actions in the same verticle - java

I have a UDP server created with Vert.x.
The purpose of the server: it listens for logs from another service, then, depending on the message, performs one of the following actions:
1) Save the message to the DB
2) Delete a message from the DB using the id from the message
3) Update a message in the DB
My code is:
@AllArgsConstructor
public final class UdpServerVerticle extends AbstractVerticle {

    private final Action action;

    @Override
    public void start() throws Exception {
        final DatagramSocket socket = this.vertx.createDatagramSocket(new DatagramSocketOptions());
        socket.listen(9000, "0.0.0.0", asyncRes -> {
            if (asyncRes.succeeded()) {
                socket.handler(packet -> {
                    final byte[] bytes = packet.data().getBytes(0, packet.data().length());
                    final String body = this.body(bytes);
                    this.action.choose(body);
                });
            } else {
                System.out.println("ERROR");
            }
        });
    }

    @SneakyThrows
    private String body(final byte[] bytes) {
        return new String(bytes, "UTF-8");
    }
}
Action class:
public final class DefaultAction implements Action {

    private final ServerEvent onConnect;
    private final ServerEvent onDisconnect;
    private final ServerEvent onMatchBegin;
    private final ServerEvent onMatchEnd;

    @Autowired
    public DefaultAction(@EventQualifier(event = EventTypes.CONNECT) final ServerEvent onConnect,
                         @EventQualifier(event = EventTypes.DISCONNECT) final ServerEvent onDisconnect,
                         @EventQualifier(event = EventTypes.MATCH_BEGIN) final ServerEvent onMatchBegin,
                         @EventQualifier(event = EventTypes.MATCH_END) final ServerEvent onMatchEnd) {
        this.onConnect = onConnect;
        this.onDisconnect = onDisconnect;
        this.onMatchBegin = onMatchBegin;
        this.onMatchEnd = onMatchEnd;
    }

    @Override
    public void choose(final String body) {
        if (this.disconnect(body)) {
            this.onDisconnect.make(body);
        } else if (this.connect(body)) {
            this.onConnect.make(body);
        } else if (this.gameBegin(body)) {
            this.onMatchBegin.make(body);
        } else if (this.gameOver(body)) {
            this.onMatchEnd.make(body);
        }
    }
I don't know Vert.x well (but I need to use it). What I do know is that Vert.x uses a single thread to handle all my messages. I want to replace the ServerEvent interface with
4 different verticles and pass my messages to the appropriate verticle over the event bus, because it's not good to block the main verticle's thread (the UDP server). Is it wise to make this replacement in my situation?

You misunderstand how Vert.x works. It's multithreaded, but it has thread affinity, which is not the same as being single-threaded.
As for your suggestion, that's the way to go: put each of your actions into a separate verticle and communicate with them over the EventBus.
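A minimal sketch of that layout (the "event.connect" address and the ConnectVerticle name are only illustrative, not from your code):
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

// One verticle per action; "event.connect" is an assumed address name.
class ConnectVerticle extends AbstractVerticle {
    @Override
    public void start() {
        vertx.eventBus().<String>consumer("event.connect", msg -> {
            final String body = msg.body();
            // save the message to the DB here; blocking DB work is fine
            // if this verticle is deployed as a worker
        });
    }
}

class Deployment {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Deploy the action verticle on a worker thread so DB calls never block an event loop.
        vertx.deployVerticle(new ConnectVerticle(), new DeploymentOptions().setWorker(true));
        // In UdpServerVerticle's packet handler, replace this.action.choose(body)
        // with a dispatch over the event bus, e.g.:
        // vertx.eventBus().send("event.connect", body);
    }
}
The same pattern repeats for disconnect, match-begin and match-end, each with its own address; the choose() logic then shrinks to picking which address to send to.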

Related

Server Sent Event from Spring Boot application on single server with multiple UI clients

I am new to Server-Sent Event concepts. Recently I added an SseEmitter to my existing Spring Boot application, based on ideas from the internet.
The in-memory SseEmitter repository class is as follows:
@Repository
public class InMemorySseEmitterRepository implements SseEmitterRepository {

    private static final Logger log = LoggerFactory.getLogger(InMemorySseEmitterRepository.class);

    private Map<String, SseEmitter> userEmitterMap = new ConcurrentHashMap<>();

    @Override
    public void addEmitter(String memberId, SseEmitter emitter) {
        userEmitterMap.put(memberId, emitter);
    }

    @Override
    public void remove(String memberId) {
        if (userEmitterMap != null && userEmitterMap.containsKey(memberId)) {
            log.info("Removing emitter for member: {}", memberId);
            userEmitterMap.remove(memberId);
        } else {
            log.info("No emitter to remove for member: {}", memberId);
        }
    }

    @Override
    public Optional<SseEmitter> get(String memberId) {
        return Optional.ofNullable(userEmitterMap.get(memberId));
    }
}
SseEmitter service class is as follows:
@Service
public class SseEmitterServiceImpl implements SseEmitterService {

    private static final Logger log = LoggerFactory.getLogger(SseEmitterServiceImpl.class);

    private final SseEmitterRepository repository;

    public SseEmitterServiceImpl(SseEmitterRepository repository) {
        this.repository = repository;
    }

    @Override
    public SseEmitter subscribe(String entityId, EventType eventType) {
        SseEmitter emitter = new SseEmitter(-1L); // so that it never times out
        // memberId combines entity and event type, so events can be sent per entity and per event type
        String memberId = entityId + "_" + eventType;
        emitter.onCompletion(() -> repository.remove(memberId));
        emitter.onTimeout(() -> repository.remove(memberId));
        emitter.onError(e -> {
            log.error("Create SseEmitter exception", e);
            repository.remove(memberId);
        });
        repository.addEmitter(memberId, emitter);
        return emitter;
    }
}
Associated resource class is as follows:
@RestController
@RequestMapping("/api")
public class SseEmitterResource {

    private static final Logger log = LoggerFactory.getLogger(SseEmitterResource.class);

    private final SseEmitterService emitterService;

    public SseEmitterResource(SseEmitterService emitterService) {
        this.emitterService = emitterService;
    }

    @GetMapping("/subscribe/{eventType}")
    public SseEmitter subscribeToEvents(
            @PathVariable EventType eventType,
            @RequestParam(required = false, defaultValue = "") String entityId) {
        log.info("Subscribing member with event type {} for entity id {}", eventType, entityId);
        return emitterService.subscribe(entityId, eventType);
    }
}
Actually, there can be multiple event types and multiple entity ids for which we should be able to send events from server to client.
The service class responsible for sending the notification is as follows:
@Service
public class SseNotificationServiceImpl implements SseNotificationService {

    private static final Logger log = LoggerFactory.getLogger(SseNotificationServiceImpl.class);

    private final SseEmitterRepository emitterRepository;
    private final DataMapper dataMapper;

    public SseNotificationServiceImpl(
            SseEmitterRepository emitterRepository,
            DataMapper dataMapper) {
        this.emitterRepository = emitterRepository;
        this.dataMapper = dataMapper;
    }

    @Override
    public void sendSseNotification(String memberId, MyDTO dto) {
        sendNotification(memberId, dto);
    }

    private void sendNotification(String memberId, MyDTO dto) {
        emitterRepository.get(memberId).ifPresentOrElse(sseEmitter -> {
            try {
                sseEmitter.send(dataMapper.map(dto));
            } catch (IOException e) {
                ...
                emitterRepository.remove(memberId);
            }
        }, () -> ...);
    }
}
Finally, the place where I send this Server-Sent Event notification is as follows:
sseNotificationService.sendSseNotification(...);
Now, my questions are as follows:
If there are multiple clients, say 6, for the Spring Boot application deployed on a single pod, will all of them receive notifications through the single SseEmitter instance shown above?
If not, could anyone please help me with the changes I need to make to the above code in order to send Server-Sent Event notifications to all clients? Thanks.
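For reference, a common pattern for notifying every connected client is to keep one emitter per client (which the repository above already does) and iterate over all of them when broadcasting. A minimal sketch, assuming a hypothetical getAll() accessor on the repository (it is not part of the code above):
// Hypothetical broadcast helper for SseNotificationServiceImpl; getAll() is an
// assumed repository method returning every registered emitter keyed by memberId.
public void broadcast(MyDTO dto) {
    for (Map.Entry<String, SseEmitter> entry : emitterRepository.getAll().entrySet()) {
        try {
            entry.getValue().send(dataMapper.map(dto));
        } catch (IOException e) {
            // the client has gone away; drop its emitter
            emitterRepository.remove(entry.getKey());
        }
    }
}
Note that each subscriber gets its own SseEmitter from subscribe(); a single emitter instance only ever reaches the one client whose subscription created it.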

How do I test code that uses ExecutorService without using timeout in Mockito?

I have a class.
public class FeedServiceImpl implements FeedService {

    private final Map<FeedType, FeedStrategy> strategyByType;
    private final ExecutorService executorService = Executors.newSingleThreadExecutor();

    public FeedServiceImpl(Map<FeedType, FeedStrategy> strategyByType) {
        if (strategyByType.isEmpty()) throw new IllegalArgumentException("strategyByType map is empty");
        this.strategyByType = strategyByType;
    }

    @Override
    public void feed(LocalDate feedDate, FeedType feedType, String uuid) {
        if (!strategyByType.containsKey(feedType)) {
            throw new IllegalArgumentException("Not supported feedType: " + feedType);
        }
        executorService.submit(() -> runFeed(feedType, feedDate, uuid));
    }

    private FeedTaskResult runFeed(FeedType feedType, LocalDate feedDate, String uuid) {
        return strategyByType.get(feedType).feed(feedDate, uuid);
    }
}
How can I verify with Mockito that strategyByType.get(feedType).feed(feedDate, uuid) was called when I call the feed method?
@RunWith(MockitoJUnitRunner.class)
public class FeedServiceImplTest {

    private LocalDate date = new LocalDate();
    private String uuid = "UUID";
    private FeedService service;
    private Map<FeedType, FeedStrategy> strategyByType;

    @Before
    public void setUp() {
        strategyByType = strategyByTypeFrom(TRADEHUB);
        service = new FeedServiceImpl(strategyByType);
    }

    private Map<FeedType, FeedStrategy> strategyByTypeFrom(FeedSource feedSource) {
        return bySource(feedSource).stream().collect(toMap(identity(), feedType -> mock(FeedStrategy.class)));
    }

    @Test
    public void feedTest() {
        service.feed(date, TH_CREDIT, uuid);
        verify(strategyByType.get(TH_CREDIT), timeout(100)).feed(date, uuid);
    }
}
This is my version, but I don't want to use Mockito's timeout method; in my opinion it's not a good solution. Please help!
When I test code that depends on executors, I usually try to use an executor implementation that runs the task immediately on the calling thread, in order to remove all the hassle of multithreading. Perhaps you can add another constructor to your FeedServiceImpl class that lets you supply your own executor? The Google Guava library has a MoreExecutors.newDirectExecutorService() method that returns such an ExecutorService. That would mean your test setup would change to something like:
service = new FeedServiceImpl(strategyByType, MoreExecutors.newDirectExecutorService());
The rest of the test code should then work as it is, and you can safely drop the timeout verification mode parameter.
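A sketch of what that constructor change might look like (the two-constructor split is an assumption about how you would wire it, not code from the question):
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FeedServiceImpl implements FeedService {

    private final Map<FeedType, FeedStrategy> strategyByType;
    private final ExecutorService executorService;

    // Production constructor keeps the original single-thread behaviour.
    public FeedServiceImpl(Map<FeedType, FeedStrategy> strategyByType) {
        this(strategyByType, Executors.newSingleThreadExecutor());
    }

    // Test-friendly constructor: the test can pass MoreExecutors.newDirectExecutorService().
    public FeedServiceImpl(Map<FeedType, FeedStrategy> strategyByType, ExecutorService executorService) {
        if (strategyByType.isEmpty()) throw new IllegalArgumentException("strategyByType map is empty");
        this.strategyByType = strategyByType;
        this.executorService = executorService;
    }

    // feed(...) and runFeed(...) stay exactly as in the question.
}
With the direct executor, submit() runs the task synchronously, so a plain verify(...) in the test succeeds without any timeout.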
A little improvement on @Per Huss's answer: so that you don't have to change your constructor, you can use the ReflectionTestUtils class, which is part of the spring-test package.
@Before
public void setUp() {
    strategyByType = strategyByTypeFrom(TRADEHUB);
    service = new FeedServiceImpl(strategyByType);
    ReflectionTestUtils.setField(service, "executorService", MoreExecutors.newDirectExecutorService());
}
Now, in the tests, your executorService will be the direct executor service returned by MoreExecutors.newDirectExecutorService() instead of Executors.newSingleThreadExecutor().
If you want to run a real FeedStrategy (rather than a mocked one, as suggested in another answer), then you'll need a way for the FeedStrategy to somehow indicate to the test thread when it is finished, for example by using a CountDownLatch or setting a flag that your test can wait on.
public class FeedStrategy {

    private boolean finished;

    public void feed(...) {
        ...
        this.finished = true;
    }

    public boolean isFinished() {
        return this.finished;
    }
}
Or perhaps the FeedStrategy has some side effect which you could wait on?
If any of the above are true then you could use Awaitility to implement a waiter. For example:
@Test
public void aTest() {
    // run your test

    // wait for completion
    await().atMost(100, MILLISECONDS).until(feedStrategyHasCompleted());

    // assert
    // ...
}

private Callable<Boolean> feedStrategyHasCompleted() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            // return true if your condition has been met
            return ...;
        }
    };
}
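If you prefer the CountDownLatch route mentioned above, a minimal sketch (it assumes FeedStrategy is a single-method interface so a lambda can stand in for it; that is an assumption, not something shown in the question):
@Test
public void feedTest_withLatch() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(1);

    // A test strategy that signals the latch once feed(...) has run.
    FeedStrategy strategy = (d, u) -> {
        latch.countDown();
        return null; // or a real FeedTaskResult if you need one
    };

    // date, uuid and TH_CREDIT as in the question's test class.
    FeedService service = new FeedServiceImpl(Collections.singletonMap(TH_CREDIT, strategy));
    service.feed(date, TH_CREDIT, uuid);

    // Fails the test if the background executor never ran the strategy.
    assertTrue("feed was not executed in time", latch.await(1, TimeUnit.SECONDS));
}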
You need to make the FeedStrategy mock visible to your test so that you can verify whether the FeedStrategy.feed method was invoked:
@RunWith(MockitoJUnitRunner.class)
public class FeedServiceImplTest {

    private LocalDate date = new LocalDate();
    private String uuid = "UUID";
    private FeedService service;
    private Map<FeedType, FeedStrategy> strategyByType;
    private FeedStrategy feedStrategyMock;

    @Before
    public void setUp() {
        feedStrategyMock = mock(FeedStrategy.class);
        strategyByType = strategyByTypeFrom(TRADEHUB);
        service = new FeedServiceImpl(strategyByType);
    }

    private Map<FeedType, FeedStrategy> strategyByTypeFrom(FeedSource feedSource) {
        return bySource(feedSource).stream().collect(toMap(identity(), feedType -> feedStrategyMock));
    }

    @Test
    public void feedTest() {
        service.feed(date, TH_CREDIT, uuid);
        verify(feedStrategyMock).feed(date, uuid);
    }
}

RxJava - Flowable.map not getting called after flatMap

I am having a weird issue with Flowable<T> vs Observable<T> in RxJava (io.reactivex.rxjava2, v2.0.8). I have code that looks like the snippet below, where the map(...) and subscribe(...) functions are not getting called/executed.
flowable.flatMap(...return new flowable...).map(...).subscribe(...)
Surprisingly, if I flip my code to use Observable<T> instead of Flowable<T>, map(...).subscribe(...) is called/executed as expected. I might be missing something simple; let me know what it could be.
Thank you
public class Application {

    public static void main(String[] args) {
        // Note: Working; get the list of databases
        DatabaseFlowable databases = new DatabaseFlowable(sourceClient);

        // Note: Working; for each database get the list of collections in it
        Flowable<Resource> resources = databases
            .flatMap(db -> {
                logger.info(" ==> found database {}", db.getString("name"));
                return new CollectionFlowable(sourceClient, db.getString("name"));
                // Note: Working; CollectionFlowable::subscribeActual works as well
            });

        resources
            .map(resource -> {
                // Note: Nothing in here gets executed
                logger.info(" ====> found resource {}", resource.toString());
                return resource;
            })
            .subscribe(m -> {
                // Note: Nothing in here gets executed
                logger.info(m.toString());
            });
    }
}

public class DatabaseFlowable extends Flowable<Document> {

    private final static Logger logger = LoggerFactory.getLogger(DatabaseFlowable.class);
    private final MongoClient client;

    public DatabaseFlowable(MongoClient client) {
        this.client = client;
    }

    @Override
    protected void subscribeActual(Subscriber<? super Document> subscriber) {
        ListDatabasesIterable<Document> cursor = client.listDatabases();
        MongoCursor<Document> iterator = cursor.iterator();
        while (iterator.hasNext()) {
            Document item = iterator.next();
            if (!item.isEmpty()) {
                String message = String.format(" found database name: %s, sizeOnDisk: %s",
                        item.getString("name"), item.get("sizeOnDisk"));
                logger.info(message);
                subscriber.onNext(item);
            }
        }
        subscriber.onComplete();
    }
}

public class CollectionFlowable extends Flowable<Resource> {

    private final static Logger logger = LoggerFactory.getLogger(CollectionFlowable.class);
    private final MongoClient client;
    private final String databaseName;

    public CollectionFlowable(MongoClient client, String databaseName) {
        this.databaseName = databaseName;
        this.client = client;
    }

    @Override
    protected void subscribeActual(Subscriber<? super Resource> subscriber) {
        MongoDatabase database = client.getDatabase(databaseName);
        ListCollectionsIterable<Document> cursor = database.listCollections();
        MongoCursor<Document> iterator = cursor.iterator();
        while (iterator.hasNext()) {
            Document item = iterator.next();
            if (!item.isEmpty()) {
                logger.info(" ... found collection: {}.{}", database.getName(), item.getString("name"));
                Resource resource = new Resource(databaseName,
                        item.getString("name"),
                        (Document) item.get("options"));
                subscriber.onNext(resource);
            }
        }
        subscriber.onComplete();
    }
}
That is because you are not correctly following the Flowable protocol. There is no call to subscriber.onSubscribe(...) and it does not observe the limit imposed by subscriber.request(...).
Since your implementation in no way observes backpressure, either use Flowable.create to create a buffered version that does or move your implementation to Observable which is not backpressured.
The reason for the behavior you see is that the downstream observer has not requested any items, so your calls to onNext are discarded.
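For illustration, a rough equivalent of DatabaseFlowable built with Flowable.create (a sketch only: it buffers everything, so RxJava handles the onSubscribe/request bookkeeping for you at the cost of memory; the class and method names are illustrative):
import com.mongodb.MongoClient;   // matches the 3.x driver used in the question; adjust for the newer client API
import io.reactivex.BackpressureStrategy;
import io.reactivex.Flowable;
import org.bson.Document;

// A buffered replacement for the hand-rolled subscribeActual.
final class Databases {
    static Flowable<Document> listAll(MongoClient client) {
        return Flowable.<Document>create(emitter -> {
            for (Document item : client.listDatabases()) {
                if (!item.isEmpty()) {
                    emitter.onNext(item);
                }
            }
            emitter.onComplete();
        }, BackpressureStrategy.BUFFER);
    }
}
The alternative is to keep your own subscribeActual but call subscriber.onSubscribe(...) with a Subscription that honours request(n) before emitting, which is considerably more work, or to switch to Observable as suggested.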

Encrypting communication in Eneter between single server and multiple clients, each using different key

With Eneter it is possible to set a custom serializer on DuplexTypedMessagesFactory, which can be used to encrypt communication between client and server.
DuplexTypedMessagesFactory sender_factory = new DuplexTypedMessagesFactory();
sender_factory.setSerializer(new EncryptingSerializer(client_private_key, server_public_key));
sender_ = sender_factory.createDuplexTypedMessageSender(MyResponse.class, MyRequest.class);
In my app I have a single server and multiple clients, and obviously each channel should be encrypted individually using its own keys (or the relevant RSA key pairs). Unfortunately, the serializer interface does not expose any identifier of the channel from which a message is coming:
public class EncryptingSerializer implements ISerializer {

    private RsaSerializer rsa_serializer_;

    public EncryptingSerializer(RSAPrivateKey priv, RSAPublicKey pub) {
        rsa_serializer_ = new RsaSerializer(pub, priv);
    }

    @Override
    public <T> Object serialize(T t, Class<T> aClass) throws Exception {
        return rsa_serializer_.serialize(t, aClass);
    }

    @Override
    public <T> T deserialize(Object o, Class<T> aClass) throws Exception {
        return rsa_serializer_.deserialize(o, aClass);
    }
}
So the above code is OK for the client side (it seems I can even delay initialization of rsa_serializer_ until after the connection is established, so that e.g. the server's public key is fetched during the authentication/pairing process). However, on the server side the same serializer object needs to serve multiple communication channels, and each channel needs a different rsa_serializer_ object.
Is there any way around this, or do I just need to push encryption up to a higher level (sending and receiving simple BlobRequest/BlobResponse objects whose content is encrypted before being passed to the framework and decrypted after being received from it)?
I ended up re-implementing DuplexTypedMessageReceiver on top of DuplexStringMessageReceiver, fetching and storing the appropriate serializer on demand from outside:
class EncryptingTypedMessageReceiver<TResponse, TRequest> : IDuplexTypedMessageReceiver<TResponse, TRequest> {
    public event EventHandler<TypedRequestReceivedEventArgs<TRequest>> MessageReceived;
    public event EventHandler<ResponseReceiverEventArgs> ResponseReceiverConnected;
    public event EventHandler<ResponseReceiverEventArgs> ResponseReceiverDisconnected;

    private IDuplexStringMessageReceiver string_receiver_;
    private Func<string, ISerializer> serializer_factory_;
    private Dictionary<string, ISerializer> serializers_by_receiver_id_ = new Dictionary<string, ISerializer>();

    public EncryptingTypedMessageReceiver(IDuplexStringMessageReceiver receiver, Func<string, ISerializer> serializer_factory) {
        serializer_factory_ = serializer_factory;
        string_receiver_ = receiver;
        string_receiver_.RequestReceived += OnRequestReceived;
        string_receiver_.ResponseReceiverConnected += OnResponseReceiverConnected;
        string_receiver_.ResponseReceiverDisconnected += OnResponseReceiverDisconnected;
    }

    private void OnRequestReceived(object sender, StringRequestReceivedEventArgs e) {
        TRequest typedRequest = serializers_by_receiver_id_[e.ResponseReceiverId].Deserialize<TRequest>(e.RequestMessage);
        var typedE = new TypedRequestReceivedEventArgs<TRequest>(e.ResponseReceiverId, e.SenderAddress, typedRequest);
        if (MessageReceived != null) {
            MessageReceived(sender, typedE);
        }
    }

    public void SendResponseMessage(string responseReceiverId, TResponse responseMessage) {
        string stringMessage = (string)serializers_by_receiver_id_[responseReceiverId].Serialize(responseMessage);
        string_receiver_.SendResponseMessage(responseReceiverId, stringMessage);
    }

    private void OnResponseReceiverConnected(object sender, ResponseReceiverEventArgs e) {
        if (ResponseReceiverConnected != null)
            ResponseReceiverConnected(sender, e);
        serializers_by_receiver_id_.Add(e.ResponseReceiverId, serializer_factory_(e.ResponseReceiverId));
    }

    private void OnResponseReceiverDisconnected(object sender, ResponseReceiverEventArgs e) {
        if (ResponseReceiverDisconnected != null)
            ResponseReceiverDisconnected(sender, e);
        serializers_by_receiver_id_.Remove(e.ResponseReceiverId);
    }

    public IDuplexInputChannel AttachedDuplexInputChannel {
        get {
            return string_receiver_.AttachedDuplexInputChannel;
        }
    }

    public bool IsDuplexInputChannelAttached {
        get {
            return string_receiver_.IsDuplexInputChannelAttached;
        }
    }

    public void AttachDuplexInputChannel(IDuplexInputChannel duplexInputChannel) {
        string_receiver_.AttachDuplexInputChannel(duplexInputChannel);
    }

    public void DetachDuplexInputChannel() {
        string_receiver_.DetachDuplexInputChannel();
    }
}

Determine When LoopJ has concluded all background connections

I'm trying to determine when LoopJ has finished all of its background HTTP calls, so that I can then display the results of an array that is populated based on the results of my onSuccess methods.
First off, I have a String[] of file names. I'm then looping through the array and creating LoopJ connections like this:
ArrayList<String> files_to_update = new ArrayList<String>(file_names.length);
AsyncHttpClient client = new AsyncHttpClient();

for (final String file_name : file_names) {
    client.get(BASE_URL + file_name, new AsyncHttpResponseHandler() {

        public void onStart() {
            Local_Last_Modified_Date = preferences.getString(file_name, "");
        }

        public void onSuccess(int statusCode, Header[] headers, byte[] response) {
            Server_Last_Modified_Date = headers[3].getValue();
        }

        @Override
        public void onFinish() {
            if (!Local_Last_Modified_Date.trim().equalsIgnoreCase(Server_Last_Modified_Date.trim())) {
                files_to_update.add(file_name);
            }
        }
    });
}
What I'm doing here is comparing two date strings: the first, Local_Last_Modified_Date, is pulled from a preference file, and the second is taken from the Last-Modified date in the headers; they are then compared in onFinish(). This determines whether the file needs to be updated because the server file is newer than the preference date. I know this is not the best way to compare dates, but it will work in the interim for what I'm trying to do.
The issue I'm having is determining when all of the background HTTP calls from LoopJ have completed, so that I can then display the contents of the ArrayList in a list dialog or whatever UI element I choose. I've tried looping through the ArrayList, but because the LoopJ HTTP connections run on background threads, the loop executes before all of the connections have completed and therefore shows an empty or only partially populated list.
Is there a conditional I can write to determine whether LoopJ has finished executing all of the connections, so that once it has, I can run my UI code?
The following code should address your problem:
Class file: UploadRunner.java
public class UploadRunner extends AsyncHttpResponseHandler implements Runnable {

    private final AsyncHttpClient client;
    private final ArrayList<String> filesList;
    private final int filesCount;
    private final Handler handler;

    private String baseURI;
    private boolean isFired;
    private int filesCounter;

    // Use in case you have no AHC ready beforehand.
    public UploadRunner(ArrayList<String> filesList, Handler handler) {
        this(new AsyncHttpClient(), filesList, handler);
    }

    public UploadRunner(
        AsyncHttpClient client,
        ArrayList<String> filesList,
        Handler handler
    ) {
        assert null != client;
        assert null != filesList;
        assert null != handler;

        this.client = client;
        this.filesList = filesList;
        this.handler = handler;
        this.baseURI = "";
        this.filesCount = filesList.size();
        this.filesCounter = 0;
    }

    public String getBaseURI() {
        return baseURI;
    }

    public void setBaseURI(String uri) {
        baseURI = uri;
    }

    @Override
    public void run() {
        // Request to download all files.
        for (final String file : filesList) {
            client.get(baseURI + file, this);
        }
    }

    @Override
    public void onSuccess(int statusCode, Header[] headers, byte[] response) {
        // This shouldn't happen really...
        if (isFired) {
            return;
        }

        // One more file downloaded.
        filesCounter++;

        // If all files downloaded, fire the callback.
        if (filesCounter >= filesCount) {
            isFired = true;
            handler.onFinish(getLastModificationDate(headers));
        }
    }

    private String getLastModificationDate(Header[] headers) {
        // Simple mechanism to get the date, but maybe a proper one
        // should be implemented.
        return headers[3].getValue();
    }

    public static interface Handler {
        public void onFinish(String lastModificationDate);
        // TODO: Add onError() maybe?
    }
}
In this case, you encapsulate the uploading mechanism in one place, plus expose just an interface for calling back a handler when all files are uploaded.
Typical use case:
// TODO: This typically should run in a different thread.
public class MyTask implements UploadRunner.Handler, Runnable {

    private static final String BASE_URI = "http://www.example.com/";

    private final AsyncHttpClient client = new AsyncHttpClient();
    private final ArrayList<String> filesList = new ArrayList<>();

    @Override
    public void run() {
        filesList.add("1.png");
        filesList.add("2.png");
        filesList.add("3.png");
        filesList.add("4.png");
        filesList.add("5.png");

        // Create a new runner.
        UploadRunner ur = new UploadRunner(client, filesList, this);

        // Set base URI.
        ur.setBaseURI(BASE_URI);

        // Spring the runner to life.
        ur.run();
    }

    @Override
    public void onFinish(String lastModificationDate) {
        // All files downloaded, and last modification date is supplied to us.
    }
}
