Design Pattern for Server Emulator - Java

I want to build a server socket emulator, and I want to apply some design patterns to it.
I have simplified my case study as follows:
My server socket always listens for client sockets. When a request message comes in from a client socket, the server emulator responds to the client through the socket.
The response is a response code: '00' means the request message was processed successfully, and any response code other than '00' indicates an error occurred while processing the request message.
The server has a UI. This UI contains response parameters to check, such as:
response code
timeout interval
When the server is about to respond to a client message:
the response code is taken from the response parameter entered in the UI
the timeout interval is checked; the handling thread sleeps for the interval entered in the UI before responding.
I have implemented this functionality, but I put it all in one class, and it feels wrong.
Can you suggest which classes / interfaces I should create to refactor my code?

Whether the code needs refactoring really depends on what task your server performs based on the client request. If it is something simple, then a single class may very well be the best design. If it is doing something more complicated, then you may want to move the various operations that can be performed into separate service classes. If your results are standard, you could create an object (maybe an enum?) to describe them.
This is the approach I have taken in one of my own applications. The server handles essentially only the I/O between itself and the client. When the client sends a message, the server parses it into a standard-format "operation" object. This object is then passed to a manager object, which finds an appropriate "request servicing" object. That object does the actual work. When it is finished, it generates a return object describing the status/results of the operation. The server then formats this appropriately and sends it across the wire to the client.
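A rough sketch of that structure, in case it helps (all names here are illustrative, not from any particular framework):

    import java.util.HashMap;
    import java.util.Map;

    // Describes the outcome of processing a request ('00' = success).
    enum ResponseCode {
        SUCCESS("00"),
        GENERAL_ERROR("01"),
        TIMEOUT("02");

        private final String code;
        ResponseCode(String code) { this.code = code; }
        String code() { return code; }
    }

    // A client request parsed into a standard format.
    class Operation {
        private final String type;
        private final String payload;
        Operation(String type, String payload) { this.type = type; this.payload = payload; }
        String getType() { return type; }
        String getPayload() { return payload; }
    }

    // One implementation per kind of work the server can do.
    interface RequestServicer {
        ResponseCode service(Operation operation);
    }

    // Finds the right servicer for a given operation and returns its result,
    // so the socket-handling class only has to deal with I/O.
    class ServicerManager {
        private final Map<String, RequestServicer> servicers = new HashMap<>();

        void register(String type, RequestServicer servicer) {
            servicers.put(type, servicer);
        }

        ResponseCode dispatch(Operation operation) {
            RequestServicer servicer = servicers.get(operation.getType());
            return servicer == null ? ResponseCode.GENERAL_ERROR : servicer.service(operation);
        }
    }

The configurable delay from the UI could then live in one servicer (or a decorator around it) rather than in the socket-handling class.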
Hopefully this can give you some ideas as to what might be appropriate for your application.

Related

Having a "worker" in Java

I have a REST API created in Java with the Spark framework, but right now a lot of work is being done on the request thread that is significantly slowing down requests.
I want to solve this by creating some kind of background worker/queue that will do all the needed work off the request thread. The response from the server contains data that the client will need (it's data that will be displayed). In these examples the client is a web browser.
Here's what the current cycle looks like:
API request from client to server
Server does blocking work; Response from server after several seconds/minutes
Client receives response. It has all the data it needs in the response
Here's what I would like:
API request from client to server
Server does work off-thread
Client receives response from server almost instantly, but it doesn't have the data it needs. This response will contain some ID (Integer or UUID), which can be used to check the progress of the work being done
Client regularly checks the status of the work being done, the response will contain a status (like a percentage or time estimate). Once the work is done, the response will also contain the data we need
What I dislike about this approach is that it will significantly complicate my API. If I want to get any data, I will have to make two requests. One to initiate the blocking work, and another to check the status (and get the result of the blocking work). Not only will the API become more complicated, but the backend will too.
Is this efficient, or is there a better way to implement what I want to accomplish?
Neither way is more efficient than the other, since the same amount of work is done in either case. In the first case the work is done on the request thread, the client does not know about progress, and the request takes as long as the task takes to run. The client waits for the reply.
In the second case you need to add complexity, but you get progress status and possibly other advantages depending on the task. The client polls for the reply.
You can use async processing to perform work on non-request threads, but that probably won't make any difference if most of your requests are long-running ones. So it's up to you to decide what you want; the client will have to wait the same amount of time either way.
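For illustration, a minimal sketch of the second approach, assuming the Spark framework from the question (the routes, in-memory job map and thread pool are my own choices, not a prescribed design):

    import static spark.Spark.get;
    import static spark.Spark.post;

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class JobApi {
        // In-progress and completed jobs, keyed by the ID handed back to the client.
        private static final Map<String, CompletableFuture<String>> jobs = new ConcurrentHashMap<>();
        private static final ExecutorService workers = Executors.newFixedThreadPool(4);

        public static void main(String[] args) {
            // Kick off the long-running work and return an ID immediately.
            post("/work", (req, res) -> {
                String id = UUID.randomUUID().toString();
                jobs.put(id, CompletableFuture.supplyAsync(JobApi::doExpensiveWork, workers));
                res.status(202);
                return id;
            });

            // Poll for status; return the result once the work is done.
            get("/work/:id", (req, res) -> {
                CompletableFuture<String> job = jobs.get(req.params(":id"));
                if (job == null) {
                    res.status(404);
                    return "unknown job";
                }
                return job.isDone() ? job.join() : "still running";
            });
        }

        private static String doExpensiveWork() {
            // Placeholder for the blocking work described in the question.
            return "result";
        }
    }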

Can a server send more than one response to a client's request?

I am learning sockets and the server/client model and I'm having a hard time understanding the concept. If a client sends a request, can the server send more than one response, or do we have to put everything in one response?
For a memory game program: when a client clicks a card, the action sends a request to the server in order to turn the card over in every player's program. If the second card does not match, the server tells the players to wait 2 seconds, turns the 2 cards back, and then assigns the turn to the next player. Can a server do this in multiple responses, or does it have to do it in a single response? Since no client requests some of those responses, I don't know whether this is achievable.
If you're talking about TCP connections, after the connection has been established the client and server are equivalent: both are free to send data as long and as much as they like and/or shut down their end of the connection.
Edit: After several passes I think I have understood what the second paragraph of your question is aiming for.
There is, of course, nothing which would stop the server from doing anything. What your server seems to do, most of the time, is block on an InputStream.read() operation. If you want the server to operate even when no network input happens, one solution might be to use a read timeout, or to check the input stream for readability before actually reading.
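Something along these lines, as a sketch only (the protocol strings and the shouldTurnCardsBack() stub are made up for illustration):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    class GameConnection {
        void serve(Socket client) throws IOException {
            // Wake up every 500 ms even if the client has sent nothing.
            client.setSoTimeout(500);
            BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);

            while (!client.isClosed()) {
                try {
                    String request = in.readLine();
                    if (request == null) break;            // client closed its end
                    out.println("CARD_TURNED " + request); // ordinary reply to a request
                } catch (SocketTimeoutException e) {
                    // No request arrived, but the server is still free to push
                    // unsolicited messages, e.g. after the 2-second pause.
                    if (shouldTurnCardsBack()) {
                        out.println("TURN_CARDS_BACK");
                    }
                }
            }
        }

        private boolean shouldTurnCardsBack() {
            return false; // stands in for the real game logic
        }
    }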
This is not a complete answer.
For one request, you get one response back.
Please read this information from Wikipedia for the basics:
"Request-response, also known as request-reply, is a message exchange pattern in which a requestor sends a request message to a replier system which receives and processes the request, ultimately returning a message in response. This is a simple, but powerful messaging pattern which allows two applications to have a two-way conversation with one another over a channel. This pattern is especially common in client-server architectures.1
For simplicity, this pattern is typically implemented in a purely synchronous fashion, as in web service calls over HTTP, which holds a connection open and waits until the response is delivered or the timeout period expires. However, request-response may also be implemented asynchronously, with a response being returned at some unknown later time. This is often referred to as "sync over async", or "sync/async", and is common in enterprise application integration (EAI) implementations where slow aggregations, time-intensive functions, or human workflow must be performed before a response can be constructed and delivered."

To queue or not to queue with Java mailing

The scenario is the sending of a password reset mail to the user from a web request (and possibly other mail related tasks in the future).
The arguments I bring to the table for queuing:
I believe web requests should be handled as fast as possible
Decoupling the send action from the request, more easily allows externalization of the mail system (if required in the future)
The arguments I recognize against queuing:
The user does not get feedback if something goes wrong during the sending of the message
What are more arguments in this discussion? And to those in favor of queuing, how would you implement the queue? Scheduled action? Infinite dequeuing task (with interval, of course)?
Thanks!
I would suggest you decouple the actual sending of mail from your app's business logic.
Do this asynchronously: use a queue, or at least a different thread, for sending such notifications.
Sending an email can be a time-consuming operation, even if you use your own internal mail server close to your app: an SMTP conversation consists of several requests/responses.
Do not treat sending a mail as a transactional action. When the target SMTP server replies with 250 OK as a response to the DATA command, it merely takes responsibility for the mail, nothing more. Delivery can still fail later if the next server in the chain is not able to deliver the mail (read about DSN, aka bounce).
Last but not least, think about failure modes. What if your business-critical functionality is slowed down or blocked by an auxiliary one (email notification)? Not good, I guess.
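A bare-bones sketch of that idea with a queue and a worker thread (the EmailTask and MailSender types are mine, just to show the decoupling):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    class EmailTask {
        final String to;
        final String subject;
        final String body;
        EmailTask(String to, String subject, String body) {
            this.to = to; this.subject = subject; this.body = body;
        }
    }

    interface MailSender {
        void send(EmailTask task) throws Exception; // e.g. a thin wrapper around JavaMail
    }

    class MailQueue {
        private final BlockingQueue<EmailTask> queue = new LinkedBlockingQueue<>();

        MailQueue(MailSender sender) {
            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        EmailTask task = queue.take(); // blocks until something is queued
                        sender.send(task);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    } catch (Exception e) {
                        // Mail is not transactional: log here and decide whether to retry.
                    }
                }
            }, "mail-worker");
            worker.setDaemon(true);
            worker.start();
        }

        // Called from the web request thread; returns immediately.
        void enqueue(EmailTask task) {
            queue.offer(task);
        }
    }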
You definitely don't want to do the send synchronously since the mail server may be slow.
Send a JMS message and use an MDB to send the email.
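A hedged sketch of that JMS/MDB approach; the queue name, resource lookups and the commented-out mail call are assumptions, not part of the answer (and each class would normally live in its own file as a public EJB):

    import javax.annotation.Resource;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.ejb.Stateless;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    // Web tier: drop the address on a queue and return at once.
    @Stateless
    class PasswordResetService {
        @Resource(lookup = "jms/ConnectionFactory")
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/MailQueue")
        private Queue mailQueue;

        void requestReset(String email) throws Exception {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(mailQueue).send(session.createTextMessage(email));
            } finally {
                connection.close();
            }
        }
    }

    // The MDB picks the message up on a container-managed thread and sends the mail.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/MailQueue")
    })
    class PasswordResetMailBean implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                String email = ((TextMessage) message).getText();
                // new MailService().sendPasswordReset(email); // hypothetical JavaMail helper
            } catch (Exception e) {
                // Log; the container may redeliver depending on configuration.
            }
        }
    }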
In a Java EE 6+ scenario you can use the @Asynchronous annotation on an EJB method. The method returns a Future<V>, so you can continue with processing and ask later whether the task has finished, while it executes in another thread.
So you can accept a lot of requests quickly, you decouple the send action from the request, and you can still get feedback.
http://docs.oracle.com/javaee/6/tutorial/doc/gkkqg.html
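A short sketch of such an @Asynchronous bean (the bean and method names are illustrative):

    import java.util.concurrent.Future;
    import javax.ejb.AsyncResult;
    import javax.ejb.Asynchronous;
    import javax.ejb.Stateless;

    @Stateless
    public class AsyncMailBean {

        // Runs on a container-managed thread; the caller gets a Future back immediately.
        @Asynchronous
        public Future<Boolean> sendPasswordReset(String email) {
            boolean sent = doSend(email);
            return new AsyncResult<>(sent);
        }

        private boolean doSend(String email) {
            // Placeholder for the actual SMTP conversation (e.g. via JavaMail).
            return true;
        }
    }

    // Caller side: fire the mail and check later if feedback is needed.
    //   Future<Boolean> result = asyncMailBean.sendPasswordReset(address);
    //   ...
    //   if (result.isDone() && !result.get()) { /* report the failure */ }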
You may think that requests should be serviced as fast as possible, but what about the user? What does he think?
The user needs his password reset. He doesn't care how long that takes. If he can't complete that request he can't do anything at all.
Don't queue.
I think you should go with a queue, because it helps performance and lets you check whether the password reset request arrived from the correct source.
You could use a Map for the queue implementation: use the email address as the key and a unique request reference as the value, and delete each entry after a set time period.
Also, for fast email sending you could create a simple thread class that sends the emails and start a new thread, passing some data arguments into it; the web container then manages the scheduling of these threads.

Handling Java Interrupts

I am making an application that will work much like a real-time chat. A user will be constantly writing in, let's say, a text area, and messages will be sent to other users. In the communications class I have set up a receiver: when a message from someone reaches the client, the receive method will be invoked and will get the message. What I can't understand is how the code will be executed. What happens if the receive method is invoked while the user is typing/sending a message? What do I need to do in order for this to work properly?
Hope the question is clear enough.
PS: I'm still in the design phase, that's why I haven't tested it to see what happens.
Also, at the moment I only use a second thread to receive messages, which calls the receive method.
There should not be a problem at all.
When a message from someone reaches the client, the receive method will be invoked and will get the message. What I can't understand is how the code will be executed.
You should have a Receiver class that encapsulates a socket (from which you receive data) and keeps a set of listeners (see the Observer pattern). A GUI can be one of the listeners. When a message is received via the socket, you notify all listeners by forwarding the received data. This way, you have a clean and nice way to notify the GUI about new message arrivals.
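A rough sketch of such a Receiver with listeners (the interface and class names are mine):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.Socket;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    interface MessageListener {
        void onMessage(String message);
    }

    class Receiver implements Runnable {
        private final Socket socket;
        private final List<MessageListener> listeners = new CopyOnWriteArrayList<>();

        Receiver(Socket socket) {
            this.socket = socket;
        }

        void addListener(MessageListener listener) {
            listeners.add(listener);
        }

        @Override
        public void run() {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Forward each incoming message to every listener (e.g. the GUI).
                    for (MessageListener listener : listeners) {
                        listener.onMessage(line);
                    }
                }
            } catch (IOException e) {
                // Connection closed or failed; notify listeners here if needed.
            }
        }
    }

    // Usage: new Thread(new Receiver(socket)).start();
    // A Swing GUI listener should push UI updates via SwingUtilities.invokeLater(...).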
What happens if, while the user is typing/sending a message, the receive method is invoked?
This depends on the type of IP protocol you are using, but in general you don't have to worry about this, although I suggest you protect your sockets using locking mechanisms.
What do I need to do in order for this to work properly ?
Here is a nice example that can give you some inspiration :)
EDIT: As for your question regarding execution flow, sending and receiving are two different and uncorrelated operations that can happen at the same time. This can be achieved by implementing send and receive operations in two different threads. Here is an article on socket communications and multithreading.
You should either do what traditional Java EE app servers have done, which is assign a separate thread for processing each incoming message, or try a Java NIO solution along the lines of Netty.
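As a sketch of the thread-per-connection style (a Netty pipeline would replace this accept loop entirely):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    class ChatServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) { // port is arbitrary
                while (true) {
                    Socket client = server.accept();
                    // One thread per client: each connection reads and handles
                    // messages independently of the others.
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            // Read messages from this client and dispatch them,
            // e.g. via the Receiver/listener approach sketched earlier.
        }
    }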

Communication between Servlets

I would like to write a method which handles the flow of communication on XMPP. The sequence of things I'd like to do is:
Send message.
Wait for response.
Process the response.
Since we could be waiting longer than 30s for the response (step 2) I'll be teeing up a task to take care of this. This task will need to send the message and then wait for a response on the XMPP servlet handling the incoming message. My question is: How do I wait in the task servlet thread for the response to arrive in the XMPP Servlet?
I'd normally use a listener pattern where the listener stores the message in a field of the Task object and then triggers a Semaphore to signal that a message has arrived, like this (see the sketch after these steps):
Install the listener in the XMPP servlet in a static field.
Send the message.
Wait for the semaphore. Meanwhile, in the XMPP servlet thread, the response arrives and calls the listener's callback method, which stores the message and releases the semaphore.
Get the message from the field and process it.
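Roughly like this, heavily simplified (the class and method names are just for illustration):

    import java.util.concurrent.Semaphore;
    import java.util.concurrent.TimeUnit;

    class XmppResponseWaiter {
        private final Semaphore arrived = new Semaphore(0);
        private volatile String response;

        // Called by the XMPP servlet thread when the response message comes in.
        void onResponse(String message) {
            this.response = message;
            arrived.release();
        }

        // Called by the task servlet thread after sending the request.
        String awaitResponse(long timeoutSeconds) throws InterruptedException {
            if (arrived.tryAcquire(timeoutSeconds, TimeUnit.SECONDS)) {
                return response;
            }
            return null; // timed out
        }
    }

    // The XMPP servlet holds a static reference so its thread can find the waiter:
    //   static volatile XmppResponseWaiter CURRENT_WAITER;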
I tried this and it worked fine on the development server. However, when I uploaded it to the cloud I found that I would install the listener on the XMPP servlet (step 1), but then a new instance of the servlet would be instantiated when the message came in, and there would no longer be a reference to the listener to call, even though the listener is a static field. My conclusion is that the XMPP servlet runs in a completely different VM, meaning the static field is not shared between that servlet and the task one. Is this correct?
In general, what is the best practice for communication between these servlets? How do I share data (normally I would have stored it in an object's field), and how do I signal from one to the other when events occur (normally I would have used a semaphore)?
Sorry about the long winded question. Tell me if it's not clear and I'll refine it a bit.
Reposting my answer to the same question you asked on the mailing list:
You can't [wait for a response in the sending process]. Instead, you should use an asynchronous pattern: send the message, and register a handler for incoming XMPP messages. That handler should match up the response to the corresponding request (stored in the datastore if necessary) and perform appropriate processing on it.
An App Engine app can be run on any number of machines; synchronization primitives designed for communication between threads will not work.
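To make the pattern concrete, here is a hedged sketch using the plain servlet API; the request-ID correlation and the PendingRequests datastore helper are assumptions, not App Engine specifics:

    import java.io.BufferedReader;
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Handler for incoming XMPP messages: correlate each response with the
    // request that triggered it, then continue processing.
    public class XmppResponseServlet extends HttpServlet {

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String body = readBody(req);

            // Assumption: a request ID was embedded in the outgoing message and is
            // echoed back, so it can be parsed out of the response body.
            String requestId = parseRequestId(body);

            PendingRequest original = PendingRequests.load(requestId);
            if (original != null) {
                processResponse(original, body); // the "appropriate processing"
                PendingRequests.delete(requestId);
            }
        }

        private String readBody(HttpServletRequest req) throws IOException {
            StringBuilder sb = new StringBuilder();
            BufferedReader reader = req.getReader();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }

        private String parseRequestId(String body) { return ""; }              // placeholder
        private void processResponse(PendingRequest original, String body) { } // placeholder
    }

    // Hypothetical persistence helpers standing in for datastore access.
    class PendingRequest {
        final String id;
        final String payload;
        PendingRequest(String id, String payload) { this.id = id; this.payload = payload; }
    }

    class PendingRequests {
        static PendingRequest load(String id) { return null; } // would read the stored request
        static void delete(String id) { }                      // would remove the stored entry
    }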
