How can a polling mechanism be realized with RMI? - java

Following the design/architecture I created for a multiuser/network turn-based game with RMI server callbacks, I have tried to create a distributed animation in which my model (Ball) is a remote object that updates the clients via a callback mechanism from the server.
The current state of the code is:
The model remote object, which iterates over the client list and calls each client's update method:
public class BallImpl extends UnicastRemoteObject implements Ball, Runnable {

    private List<ICallback> clients = new ArrayList<ICallback>();
    protected static ServerServices chatServer;
    static ServerServices si;

    BallImpl() throws RemoteException {
        super();
    }
    ....
    public synchronized void move() throws RemoteException {
        loc.translate((int) changeInX, (int) changeInY);
    }

    public void start() throws RemoteException {
        if (!gameThread.isAlive() && !run) {
            gameThread.start();
        }
    }

    /** Start the ball bouncing. */
    // Run the game logic in its own thread.
    public void run() {
        while (true) {
            run = true;
            // Execute one game step
            try {
                updateClients();
            } catch (RemoteException e) {
                e.printStackTrace();
            }
            try {
                Thread.sleep(50);
            } catch (InterruptedException ex) {
            }
        }
    }

    public void updateClients() throws RemoteException {
        si = new ServerServicesImpl();
        List<ICallback> j = si.getClientNames();
        if (j != null) {
            System.out.println("in BallImpl, clients: " + j.size());
            for (ICallback aClient : j) {
                aClient.updateClients(this);
            }
        } else {
            System.err.println("Client list is empty");
        }
    }
}
The client, which implements the callback interface and provides the update method implementation:
public final class thenewBallWhatIwant implements Runnable, ICallback {
    .....
    @Override
    public void updateClients(final Ball ball) throws RemoteException {
        try {
            ball.move();
            try {
                Thread.sleep(50);
            } catch (Exception e) {
                System.exit(0);
            }
        } catch (Exception e) {
            System.out.println("Exception: " + e);
        }
    }
    .....
}
My general perception is that I am implementing a push mechanism with RMI, and in that scenario I need to implement polling instead.
If that is the case, how can I implement the polling mechanism with RMI?
Thanks for any feedback.
jibbylala

Polling is independent of the protocol you use to implement the client and server.
A client polls by looping endlessly. Inside the loop there's a request to the server for information. The server sends either the desired information or a "not ready" message back. The client does its thing and waits until the next request needs to be sent.
If you happen to choose RMI, it means an RMI client and server. But the polling mechanism is the same regardless.
Break the problem into pieces - it'll be easier to think about and solve that way.
Forget about polling to start. Can you write an RMI server, start it up, and create a separate client to make a single request? If you can do that, then you put it inside a loop with a sleep to implement the delay and you're done.
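If it helps, a bare-bones polling client could look something like this. This is only a sketch: the GameState interface, its registry name and getBallLocation() are placeholders for whatever remote interface you actually expose, not your real API.

import java.awt.Point;
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface -- substitute your own Ball/ServerServices interface.
interface GameState extends Remote {
    Point getBallLocation() throws RemoteException; // latest ball position
}

// Client that polls the server in a loop instead of waiting for callbacks.
public class PollingClient {
    public static void main(String[] args) throws Exception {
        GameState state = (GameState) Naming.lookup("rmi://localhost/GameState");
        while (true) {
            Point location = state.getBallLocation(); // one request per iteration
            // ... repaint the ball at 'location' ...
            Thread.sleep(50);                         // delay before the next poll
        }
    }
}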

I don't believe you can implement a callback via Java RMI. You need to either set up polling as you have suggested, or make your "clients" RMI servers so you can send messages to them directly.
How could you do this differently? I would suggest using JMS messaging to send command objects to the clients; this would handle all the distribution for you.
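For illustration only, publishing a command object to a JMS topic might look roughly like this; the topic name and the MoveBallCommand class are made up, and the ConnectionFactory would come from whichever broker you choose:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Session;
import javax.jms.Topic;
import java.io.Serializable;

// Hypothetical command object the clients know how to execute.
class MoveBallCommand implements Serializable {
    final int dx, dy;
    MoveBallCommand(int dx, int dy) { this.dx = dx; this.dy = dy; }
}

public class BallPublisher {
    // 'factory' comes from your JMS provider (ActiveMQ, etc.).
    public static void publish(ConnectionFactory factory, MoveBallCommand command) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("ball.updates");  // all clients subscribe to this topic
            MessageProducer producer = session.createProducer(topic);
            ObjectMessage message = session.createObjectMessage(command);
            producer.send(message);                             // the broker fans the message out
        } finally {
            connection.close();
        }
    }
}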

Related

How to design publish-subscribe pattern properly in grpc?

I'm trying to implement the pub/sub pattern using gRPC, but I'm a bit confused about how to do it properly.
My proto: rpc call (google.protobuf.Empty) returns (stream Data);
client:
try {
    asyncStub.call(Empty.getDefaultInstance(), new StreamObserver<Data>() {
        @Override
        public void onNext(Data value) {
            // process a data item
        }

        @Override
        public void onError(Throwable t) {
        }

        @Override
        public void onCompleted() {
        }
    });
} catch (StatusRuntimeException e) {
    LOG.warn("RPC failed: {}", e.getStatus());
}
Thread.currentThread().join();
server service:
public class Sender extends DataServiceGrpc.DataServiceImplBase implements Runnable {

    private final BlockingQueue<Data> queue;
    private final static HashSet<StreamObserver<Data>> observers = new LinkedHashSet<>();

    public Sender(BlockingQueue<Data> queue) {
        this.queue = queue;
    }

    @Override
    public void data(Empty request, StreamObserver<Data> responseObserver) {
        observers.add(responseObserver);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // wait for the next element
                Data data = queue.take();
                // send it to every registered observer
                observers.forEach(o -> o.onNext(data));
            } catch (InterruptedException e) {
                LOG.error("error: ", e);
                Thread.currentThread().interrupt();
            }
        }
    }
}
How to remove clients from the global observers properly? How to receive some sort of signal when the connection drops?
How to manage client-server reconnections? How to force the client to reconnect when the connection drops?
Thanks in advance!
In the implementation of your service:
@Override
public void data(Empty request, StreamObserver<Data> responseObserver) {
    observers.add(responseObserver);
}
You need to get the Context of the current request and listen for cancellation. For single-request, multi-response calls (a.k.a. server streaming) the gRPC generated code is simplified to pass in the request directly. This means that you don't have direct access to the underlying ServerCall.Listener, which is how you would normally listen for clients disconnecting and cancelling.
Instead, every gRPC call has a Context associated with it, which carries the cancellation and other request-scoped signals. For your case, you just need to listen for cancellation by adding your own listener, which then safely removes the response observer from your linked hash set.
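For example, something along these lines (a sketch; it assumes the observers set has been made thread-safe, e.g. a synchronized or concurrent set, since the cancellation listener may run on another thread):

import io.grpc.Context;

@Override
public void data(Empty request, StreamObserver<Data> responseObserver) {
    observers.add(responseObserver);

    // Remove the observer again when this call is cancelled (client disconnected, deadline exceeded, ...).
    Context.CancellationListener onCancel = cancelledContext -> observers.remove(responseObserver);
    Context.current().addListener(onCancel, Runnable::run); // Runnable::run = run the listener on the cancelling thread
}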
As for reconnects: gRPC clients will automatically reconnect if the connection is broken, but usually will not retry the RPC unless it is safe to do so. In the case of server-streaming RPCs, it is usually not safe to do so, so you'll need to retry the RPC on your client directly.
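A rough sketch of such a client-side retry, reusing the stub and message names from the snippets above (the ScheduledExecutorService and the one-second back-off are assumptions):

void subscribeWithRetry(DataServiceGrpc.DataServiceStub asyncStub,
                        ScheduledExecutorService scheduler) {
    asyncStub.call(Empty.getDefaultInstance(), new StreamObserver<Data>() {
        @Override
        public void onNext(Data value) {
            // process the data
        }

        @Override
        public void onError(Throwable t) {
            resubscribe();
        }

        @Override
        public void onCompleted() {
            resubscribe();
        }

        private void resubscribe() {
            // back off briefly, then open a fresh server-streaming call
            scheduler.schedule(() -> subscribeWithRetry(asyncStub, scheduler), 1, TimeUnit.SECONDS);
        }
    });
}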

Problems in achieving inter-thread communication

I am trying to learn threads in Java, and got the idea of implementing coin-phone functionality using threads.
I am able to write down the basic tasks. My flow chart is as below.
I have tried writing a class for checking the hook status:
public class Hook {

    static Logger log = Logger.getLogger(Hook.class.getName());
    OffTheHook offTheHook = new OffTheHook();

    void checkHook(Boolean hookStatus, String keyPressed) {
        log.debug("Hook Status " + hookStatus);
        if (hookStatus) {
            offTheHook.beforeDroppingCoin(hookStatus);
        } else {
            if (keyPressed != null) {
                DisplayMessages.displayMessage("FollowInstruction");
            } else {
                displayReadyMessage();
            }
        }
    }

    public static void displayReadyMessage() {
        DisplayMessages.displayMessage("ready");
    }
}
And a timer class:
public class TimerClass extends Thread {

    int timeInMilli;
    boolean status = false;

    public TimerClass(int timeInMilli) {
        this.timeInMilli = timeInMilli;
    }

    @Override
    public void run() {
        timer();
    }

    private void timer() {
        try {
            Thread.sleep(timeInMilli);
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
How do I make these classes communicate with each other (a small example will be enough)? Moreover, my requirement is that if the headset is put back on the hook the call must be cut. How should I write the code to monitor that status? Based on that status I need to make a decision. At the same time I need another thread that feeds in the status of the hook.
A small snippet of code that does similar functionality would be of great help.
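One minimal way to let the two sides communicate is a shared BlockingQueue of hook events: the input thread pushes hook changes, and the call-handling thread blocks on take(). All names below are illustrative, not from the classes above.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HookMonitor {
    // true = handset lifted, false = handset back on the hook
    private final BlockingQueue<Boolean> hookEvents = new LinkedBlockingQueue<>();

    // Called by whatever thread reads the hardware/keyboard input.
    public void hookChanged(boolean offHook) {
        hookEvents.add(offHook);
    }

    // The call-handling thread blocks here until the hook state changes.
    public void watchHook() throws InterruptedException {
        while (true) {
            boolean offHook = hookEvents.take();
            if (!offHook) {
                System.out.println("Hook down - ending call");
                // cut the call here
            }
        }
    }
}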

Java wait(), notifyAll() not working as expected

I have one server which accepts multiple client connections and performs the following operations:
Client 1 transmits one line of information to the server and waits for the server-side operation to complete.
Client 2 transmits one line of information to the server and waits for the server-side operation to complete.
When the server has received information from both clients, it performs a certain operation, notifies both clients and goes back into a wait state until both clients transmit their next line of information. But somehow, with the code I have written, it does not seem to work properly.
Server code snippet:
class ServerPattern extends Thread {

    @Override
    public void run() {
        try {
            while (moreData) {
                if (clientId == 1) {
                    synchronized (BaseStation.sourceAReadMonitor) {
                        BaseStation.sourceAReadComplete = false;
                        SourceARead.complete();
                        BaseStation.sourceAReadMonitor.notifyAll();
                    }
                } else if (clientId == 2) {
                    BaseStation.sourceBReadComplete = false;
                }
                //this.wait();
                synchronized (BaseStation.patternGenerationReadMonitor) {
                    BaseStation.patternGenerationReadMonitor.wait();
                }
            }
            newSock.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Client code snippet:
class sReadA extends Thread {

    public static void serverJobComplete() {
        System.out.println("Source A Server job complete, notifying thread");
        synchronized (BaseStation.sourceAReadMonitor) {
            BaseStation.sourceAReadMonitor.notifyAll();
        }
    }

    //public void readFile(){
    public void run() {
        try {
            while ((line = br.readLine()) != null) {
                synchronized (BaseStation.sourceAReadMonitor) {
                    if (BaseStation.patternGenerationComplete == true && BaseStation.sourceAReadComplete == false) {
                        BaseStation.sourceAReadComplete = true;
                        BaseStation.sourceAReadMonitor.wait();
                    } else if (BaseStation.sourceAReadComplete == true) {
                        synchronized (BaseStation.patternGenerationReadMonitor) {
                            BaseStation.patternGenerationReadMonitor.notifyAll();
                        }
                    }
                }
            }
            // TODO: Wait for server-side operation to complete, later iterate till end of file
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public class SourceARead {

    public static void complete() {
        System.out.println("Complete Called");
        sReadA.serverJobComplete();
    }

    public static void main(String args[]) {
        sReadA sAR = new sReadA(fName);
        sAR.start();
    }
}
Can you describe what problem you are facing with this?
You should use a server port for this requirement: on the server side, open a server-type port and have the clients connect to it. The java.net API can be useful for this.
You should not use wait/notify for this requirement, as they are thread-level controls and depend on the JVM implementation. All objects have their own thread wait list; once the currently running thread exits, waiting threads are notified.
If this is your requirement, then make sure that you are synchronizing on the correct object.
wait(), notify() and notifyAll() are for multiple threads running under the same JVM.
BaseStation.patternGenerationReadMonitor.notifyAll(); will notify threads waiting on that monitor object, not on BaseStation.
Hope this solves your problem.
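As a stripped-down illustration of "synchronize on the correct object" (names here are made up, not your BaseStation fields): both threads must use the same monitor, and the waiter must re-check a condition flag in a loop.

public class Handshake {
    private final Object monitor = new Object();
    private boolean dataReady = false;

    // Producer thread: publish the data, then wake up the waiting thread.
    void produce() {
        synchronized (monitor) {
            dataReady = true;
            monitor.notifyAll();
        }
    }

    // Consumer thread: wait on the SAME monitor and always re-check the condition in a loop.
    void consume() throws InterruptedException {
        synchronized (monitor) {
            while (!dataReady) {
                monitor.wait(); // releases the monitor until notified
            }
            dataReady = false;  // reset for the next round
        }
    }
}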

Sending multiple files in java callback?

How can I add to a sending queue? For example, I choose a file with JFileChooser and then send it to the client in a new thread, and then I want to choose another file and send it as well. What happens is that the files are sent simultaneously and the output on the client side is broken.
I'd like to be able to add to a "queue" of some sort, so that when the first file is sent, the server will start sending the next one.
A good approach for socket communication between server and client is to have one thread per client, and have this thread read from a java.util.concurrent.BlockingQueue. That interface is ideal (just like all the java.util.concurrent classes) for managing multithreaded concurrency.
The idea is that the server has a client thread like this:
class BroadCastThread extends Thread {

    LinkedBlockingQueue<SendFileTask> bcQueue = new LinkedBlockingQueue<>();

    @Override
    public void run() {
        while (true) {
            try {
                SendFileTask task = bcQueue.take();
                task.sendFile();
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }

    void addTask(SendFileTask rt) throws InterruptedException {
        bcQueue.put(rt);
    }
}

interface SendFileTask {
    void sendFile() throws Exception;
}
And use it by adding tasks to your thread object:
BroadCastThread bct = new BroadCastThread();
bct.start();

// With lambda
bct.addTask(() -> {
    // Send file code
});

// Without lambda
bct.addTask(new SendFileTask() {
    @Override
    public void sendFile() throws Exception {
        // Send file code
    }
});
You can even store the Socket information with the thread and pass it through the task interface, if you want the sendFile method to receive it as a parameter.
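That variation would look roughly like this (a sketch, assuming the thread is given its client's Socket in the constructor):

import java.net.Socket;
import java.util.concurrent.LinkedBlockingQueue;

interface SendFileTask {
    void sendFile(Socket socket) throws Exception; // the thread supplies its own socket
}

class BroadCastThread extends Thread {
    private final Socket clientSocket;
    private final LinkedBlockingQueue<SendFileTask> bcQueue = new LinkedBlockingQueue<>();

    BroadCastThread(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    void addTask(SendFileTask task) throws InterruptedException {
        bcQueue.put(task);
    }

    @Override
    public void run() {
        while (true) {
            try {
                // each queued task gets this client's socket as a parameter
                bcQueue.take().sendFile(clientSocket);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }
    }
}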

Synchronize on DataOutputStream

I have gone through so many tutorials on Synchronization now that my head is spinning. I have never truly understood it :(.
I have a Java server (MainServer) that, when a client connects, creates a new thread (ServerThread) with a DataOutputStream.
The client talks to the ServerThread and the ServerThread responds. Every now and then the MainServer will distribute a message to all clients using each ServerThread's DataOutputStream object.
I am quite certain that my issue is that every now and then both the MainServer and a ServerThread try to send something to the client at the same time. Therefore I need to lock on the DataOutputStream object. For the life of me I cannot get this concept any further; every example I read is confusing.
What is the correct way to handle this?
ServerThread's send to client method:
public void replyToOne(String reply) {
    try {
        commandOut.writeUTF(reply);
        commandOut.flush();
    } catch (IOException e) {
        logger.fatal("replyToOne", e);
    }
    logger.info(reply);
}
MainServer's distribute to all clients method:
public static void distribute(String broadcastMessage) {
    for (Map.Entry<String, Object[]> entry : AccountInfoList.entrySet()) {
        Object[] tmpObjArray = entry.getValue();
        DataOutputStream temporaryCOut = (DataOutputStream) tmpObjArray[INT_COMMAND_OUT]; // can be grabbed while thread is using it
        try {
            temporaryCOut.writeUTF(broadcastMessage);
            temporaryCOut.flush();
        } catch (IOException e) {
            logger.error("distribute: writeUTF", e);
        }
        logger.info(broadcastMessage);
    }
}
I am thinking I should have something like this in my ServerThread class.
public synchronized DataOutputStream getCommandOut(){
return commandOut;
}
Is it really that simple? I know this has likely been asked and answered, but I don't seem to be getting it still, without individual help.
If this were me.....
I would have a LinkedBlockingQueue on each client-side thread. Then, each time the client thread has a moment of idleness on the socket, it checks the queue. If there's a message to send from the queue, it sends it.
Then, the server, if it needs to, can just add items to that queue, and, when the connection has some space, it will be sent.
Add the queue, and have a method on the ServerThread something like:
public void addBroadcastMessage(MyData data) {
    broadcastQueue.add(data);
}
and then, on the socket side, have a loop that has a timeout-block on it, so that it breaks out of the socket if it is idle, and then just:
while (!broadcastQueue.isEmpty()) {
    MyData data = broadcastQueue.poll();
    // .... send the data ....
}
and you're done.
The LinkedBlockingQueue will manage the locking and synchronization for you.
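Put together, the socket-side loop might look roughly like this (just a sketch; MyData and broadcastQueue follow the snippets above, and the 100 ms timeout is arbitrary):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class ClientWriterLoop implements Runnable {
    // One queue per connected client; the server adds broadcasts to it.
    private final LinkedBlockingQueue<MyData> broadcastQueue = new LinkedBlockingQueue<>();

    public void addBroadcastMessage(MyData data) {
        broadcastQueue.add(data);
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // Wait briefly for a broadcast so the thread can also service its socket.
                MyData data = broadcastQueue.poll(100, TimeUnit.MILLISECONDS);
                if (data != null) {
                    // ... write 'data' to this client's output stream ...
                }
                // ... handle any pending client requests here ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}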
You are on the right track.
Every statement that modifies the DataOutputStream should be synchronized on that DataOutputStream, so that it is not accessed concurrently (and thus does not see any concurrent modification):
public void replyToOne(String reply) {
    try {
        synchronized (commandOut) { // writing block
            commandOut.writeUTF(reply);
            commandOut.flush();
        }
    } catch (IOException e) {
        logger.fatal("replyToOne", e);
    }
    logger.info(reply);
}
And:
public static void distribute(String broadcastMessage) {
    for (Map.Entry<String, Object[]> entry : AccountInfoList.entrySet()) {
        Object[] tmpObjArray = entry.getValue();
        DataOutputStream temporaryCOut = (DataOutputStream) tmpObjArray[INT_COMMAND_OUT]; // can be grabbed while thread is using it
        try {
            synchronized (temporaryCOut) { // writing block
                temporaryCOut.writeUTF(broadcastMessage);
                temporaryCOut.flush();
            }
        } catch (IOException e) {
            logger.error("distribute: writeUTF", e);
        }
        logger.info(broadcastMessage);
    }
}
Just putting in my 2 cents:
The way I implement servers is this:
Each server is a thread with one task only: listening for connections. Once it recognizes a connection, it spawns a new thread to handle the connection's input/output (I call this sub-class ClientHandler).
The server also keeps a list of all connected clients.
ClientHandlers are responsible for user-server interactions. From here, things are pretty simple:
Disclaimer: there are no try-catch blocks here! Add them yourself. Of course, you can use thread executors to limit the number of concurrent connections.
Server's run() method:
@Override
public void run() {
    isRunning = true;
    while (isRunning) {
        ClientHandler ch = new ClientHandler(serversocket.accept());
        clients.add(ch);
        ch.start();
    }
}
ClientHandler's ctor:
public ClientHandler(Socket client) {
    out = new ObjectOutputStream(client.getOutputStream());
    in = new ObjectInputStream(client.getInputStream());
}
ClientHandler's run() method:
@Override
public void run() {
    isConnected = true;
    while (isConnected) {
        handle(in.readObject());
    }
}
and handle() method:
private void handle(Object o) {
    // Your implementation
}
If you want a unified channel, say for output, then you'll have to synchronize it as instructed above to avoid unexpected results.
There are two simple ways to do this:
Wrap every call to the output in a synchronized(this) block.
Use a getter for the output (like you did) with the synchronized keyword.
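As a small sketch of keeping every write behind one lock rather than handing out the raw stream (the commandOut field follows the question's ServerThread; everything else is illustrative):

import java.io.DataOutputStream;
import java.io.IOException;

class ServerThread extends Thread {
    private final DataOutputStream commandOut;

    ServerThread(DataOutputStream commandOut) {
        this.commandOut = commandOut;
    }

    // Both this thread and MainServer.distribute() go through send(),
    // so only one writer touches the stream at a time.
    public synchronized void send(String message) throws IOException {
        commandOut.writeUTF(message);
        commandOut.flush();
    }
}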
