I'm writing a client-server program in Java.
The basis is that the program presents EULAs and options to the user, and the user responds accordingly, moving through menus until he can get the server to provide the client with the requested item, for example a document or file.
My question is where I should handle the state of each individual client. Should each client maintain its own state, should the server create threads to maintain the state of each of its clients, or is there an even better approach?
What would be the simplest and/or most efficient method of approaching this problem?
I would assign each client an ID (for instance, a session ID) and track the state on the server. This would make it harder, I think, to game the system (under the principle that the less sensitive info there is on the client side, the better.)
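As a rough sketch of that idea (the ClientState fields and class names below are just placeholders), the server could hand out a random session ID on connect and keep all per-client state in a concurrent map:

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical per-client state, kept only on the server
class ClientState {
    String currentMenu = "EULA";   // where the client is in the menu flow
    boolean eulaAccepted = false;
}

class SessionRegistry {
    private final ConcurrentMap<String, ClientState> sessions = new ConcurrentHashMap<>();

    // Called when a client connects: generate an unguessable ID and register fresh state
    String openSession() {
        String id = UUID.randomUUID().toString();
        sessions.put(id, new ClientState());
        return id;   // send this back to the client; it's the only thing the client keeps
    }

    // Called on every request: look up the state for the ID the client presented
    ClientState stateFor(String id) {
        return sessions.get(id);   // null means unknown or expired session
    }
}

In a real system you would also expire idle sessions, but the point stands: the client only ever holds an opaque token, never the state itself.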
What kind of client/server protocol are you using? If you're using HTTP, you could use the built-in session capability provided by Java Servlets (assuming you're using those, too.)
Here's a tutorial:
http://docs.oracle.com/javaee/6/tutorial/doc/bnagm.html
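If you do go the servlet route, the container does most of the session work for you. A minimal sketch (the "currentMenu" attribute is just an example of server-side state):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class MenuServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The container creates the session (and its ID cookie) for you
        HttpSession session = req.getSession(true);

        // Server-side state: where this client currently is in the menu flow
        String menu = (String) session.getAttribute("currentMenu");
        if (menu == null) {
            menu = "EULA";
            session.setAttribute("currentMenu", menu);
        }
        resp.getWriter().println("Current menu: " + menu);
    }
}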
I need to secure the connection between my primary Java app and my MySQL server. Right now I have a class in my primary Java app with the info about my SQL server (login details: user, password, schema, etc.).
I tried obfuscating that class, but that didn't work. Then I heard something about calling an external Java app with the connection info, and retrieving that info securely.
How can I execute such a thing?
import java.io.IOException;
import java.io.InputStream;

// Launch the external jar that holds the connection info ("yourprogram.jar" is a placeholder).
// A .jar is not directly executable, so invoke it through java -jar.
// (Runtime.exec throws IOException, so call this from a method that handles it.)
Process pr = Runtime.getRuntime().exec(new String[] {"java", "-jar", "yourprogram.jar"});
InputStream out = pr.getInputStream();   // whatever the external app prints to stdout
InputStream err = pr.getErrorStream();   // its error output, useful for diagnostics
You can then use a file to pass your info to the jar application.
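For example, assuming the information is exchanged through a properties file (the exchange.properties name and keys below are made up for illustration), one side can write the file and the other side load it back:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

Properties info = new Properties();
info.setProperty("db.user", "app_user");        // placeholder values
info.setProperty("db.schema", "app_schema");

// One process writes the file...
try (FileOutputStream out = new FileOutputStream("exchange.properties")) {
    info.store(out, "connection info");
}

// ...and the other process reads it back
Properties loaded = new Properties();
try (FileInputStream in = new FileInputStream("exchange.properties")) {
    loaded.load(in);
}

Keep in mind that anything written to disk in plain text is only marginally more secure than hard-coding it, so treat this as a transport mechanism, not a protection mechanism.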
When dealing with a client/server style application, all the business logic, including the persistence layer, should be maintained on the server side.
That is, the client connects to some server process and makes requests. It should never care about how the data is managed or stored; it just cares about getting and manipulating the data. This also means that you centralise the business logic associated with that data, which means that, should it change, you are less likely to need to change the client.
This also means that all the access information for the database never leaves the domain of the server.
Now the question is, how do you achieve this. The answer will come down to exactly what it is you want to achieve and the means by which you want to achieve it, but I would also add that the client should authenticate with the server first, meaning the user must enter a user name and password in order to access the data (unless it's a publicly accessible API, in which case you probably don't care).
You could use
RMI. This would allow you to expose server objects that the client could interact with. This is good if you wish to send objects from the server to the client. It allows the client to interact with Java objects as if they were local objects.
From a coding point of view, this is a (relatively) simple solution, as you are dealing with Java objects. The problem, though, is that only Java clients (with the right libraries) will be able to access the server.
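A minimal sketch of the RMI route might look like this; the DocumentService name and its single method are purely illustrative:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface shared by client and server
interface DocumentService extends Remote {
    String fetchDocument(String name) throws RemoteException;
}

// Server-side implementation; all the business logic and database access stays here
class DocumentServiceImpl implements DocumentService {
    public String fetchDocument(String name) {
        return "contents of " + name;   // in reality, load from the database
    }

    public static void main(String[] args) throws Exception {
        DocumentService stub =
                (DocumentService) UnicastRemoteObject.exportObject(new DocumentServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("DocumentService", stub);   // clients look this name up and call it
    }
}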
You could use
Plain Sockets. This will allow you to connect to a service on the server and communicate with it.
You can even serialize objects between the client and server, allowing the application to deal with Java objects as well.
This is also a much more difficult approach, as you become responsible for dealing with the low level protocol and error handling (which RMI takes care of for you).
This approach does, however, provide you with the opportunity for other (non-Java) clients to connect to your server, so long as you define a plain wire protocol rather than just serializing Java objects ;).
This is a lot of work...
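To give a feel for that work, here is a bare-bones sketch of a socket server that exchanges serialized objects; the port, the one-thread-per-client model, and the echo behaviour are all arbitrary choices:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleObjectServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();   // real code would use a thread pool
            }
        }
    }

    private static void handle(Socket client) {
        try (ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(client.getInputStream())) {
            Object request = in.readObject();      // you define the protocol and its error handling
            out.writeObject("echo: " + request);
        } catch (IOException | ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

Everything RMI would have done for you (framing, dispatch, error handling, versioning) is now your problem, which is exactly why this option is more work.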
You could use
Some kind of web service (Servlets under Tomcat, for example, or even a full Java EE server) that would expose the list of available services/functions over simple HTTP requests and return something like a JSON or XML response, which the client would then need to parse.
This is, by far, the most open and probably the most common solution. It would take some work to get running, but it is far less involved than using something like sockets and is also the most flexible, as you wouldn't need to release new libraries each time you want to change or update a service.
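Purely as an illustration, a single service endpoint can be as small as the following; the URL pattern and the hand-built JSON are placeholders (a real service would use a JSON library):

import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/api/documents")
public class DocumentListServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // All database access happens here, behind the server; the client only ever sees JSON
        resp.setContentType("application/json");
        resp.getWriter().print("[{\"name\":\"eula.txt\"},{\"name\":\"manual.pdf\"}]");
    }
}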
Now, all of these allow you to provide secure connections over the wire through SSL; you just need to establish the correct connection from the client to the server, so you've got an added level of security.
Each hides the database access behind a server layer, adding additional protection to the database.
I am writing a simple multi-client <-> server system in CORBA.
I am stuck on the unique identification of the client. Is there a mechanism in CORBA, like some POA policy, that would allow a unique user ID to be generated by the server and carried along with all of that client's communication?
Basically, I have the system set up so that I handle this unique user ID manually: the client connects, the server generates a key which is sent to the client and stored on both ends. It's a setup you might employ in many environments. What I am asking is whether CORBA has its own mechanism for this that I can leverage.
CORBA doesn't have any inbuilt client ID mechanism that you can use, unfortunately. The main reason why CORBA never specified it is because it's difficult to define what a "client" really is: is it a process or a thread? Is it an entire tier or a single application instance? What about clients in the same process as the server? In addition, certain developers might want different behavior spanning any of those options.
Personally, I think that your approach of having the server dictate an ID for the client is fine, but keep in mind that it's basically a "session ID" approach, and that can be tough to scale horizontally. Make sure that you absolutely, positively need to ID your clients, because something as simple as client authentication via IIOP/TLS might do the trick just fine.
Scenario: the user logs in on the client software, which forms a persistent bidirectional connection with the server-side entity (server) that processes user-specified tasks. When the server-side entity, while processing the user's task, encounters an error or requires further user input, it will notify the client software and wait until the client decides what to do. The client software will take the new user-specified inputs and send them to the server side. The server side will continue where it last stopped with the new user-specified inputs. This feedback cycle will continue until processing is finished.
The progressively updated user inputs will all be stored on the server side and will be accessible and modifiable from the client software, so if a client deletes a specific input, that change will be immediately reflected on the server side. On the server side, an extra interface is probably required to route different users' clients to available hardware nodes (cloud) to support concurrent multi-user tasks running on the server side.
On the client side, I expect to use sockets to connect to the server...
Now for the server, I am a little lost because there seem to be many different Java servers like Jetty and Netty. I am also practicing caution in order not to reinvent any wheels here.
Is building a server the right approach, or should I build a webservice that will complete a specific task on demand?
I am also not just looking for a one-size-fits-all solution (wishful thinking, probably) but am open to any insights on my current situation.
Netty will provide a lot of what it sounds like you need for this, without making you reinvent a socket server. That said, I would make certain that you actually need bidirectional, real-time communication between the client and server. If you can rework the problem such that the client-server communications do not need to be real-time, then things like RESTful webservices become a possibility, and (in my experience) are much less complicated and error prone.
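If you do end up needing the persistent, bidirectional connection, a minimal Netty 4 server bootstrap might look roughly like this; the port, the string codecs, and the trivial handler are placeholders, not a recommendation for your actual task protocol:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class TaskServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // Text framing for brevity; a real task protocol would use its own codec
                        ch.pipeline().addLast(new StringDecoder(), new StringEncoder(),
                            new SimpleChannelInboundHandler<String>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                    // The server can also write to ctx later, unprompted,
                                    // which is what gives you the "push" half of the conversation
                                    ctx.writeAndFlush("got: " + msg + "\n");
                                }
                            });
                    }
                });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}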
I am developing a chat website using JSP/servlets. I will be hosting the website on Google App Engine. Now I have some doubts regarding whether to use server push or client pull technology.
1) If I use server push and I don't close the response of the servlet, will it cause the server to slow down? How many simultaneous connections can a typical Tomcat server handle if I keep the socket open for the entire chat session between two clients?
2) Would server push or client pull be better?
If you are using a servlet (prior to 3.0), then I guess you'll have to go with pull because of the programming model of servlets. However, there ARE advantages to using a push model: primarily, less wasted load on the server and lower latency. That's why there are technologies such as Comet. Servlet 3.0 also supports a push model. These are commonly used in Ajax-based apps.
In fact, I believe a push model is better suited for a chat app because of the faster response time (= better user experience) it can provide.
If you use an NIO-based implementation for the push model, you can support thousands or even more than 10k concurrent connections (obviously, your mileage may vary).
If you use a conventional IO-based implementation, it will likely be in the range of hundreds of concurrent connections (don't take this estimate too seriously, though; I'm just giving these numbers to convey a very, very rough feeling).
As for Tomcat, last time I checked, people were saying it wouldn't have good push-model support until version 7.0. But I'm not following the current status, so I'm not sure (sorry, perhaps somebody else can help you on this). If that is the case, you might want to check out the Comet support in Jetty.
Grizzly and Netty are also good NIO-based network frameworks, but if you want to use JSP and find that Tomcat is not sufficient, I guess Jetty would be the best bet.
edit: (some additional info)
In this "push models", it's not like the server opens a connection to the client. The connection will be kept alive, and the server will push messages as it sees fit.
Also, it's not like there are only "push" and "pull" models. You can have a hybrid, like long polling.
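To make the hybrid concrete, here is a rough long-polling sketch using the Servlet 3.0 async API; the URL pattern, the timeout, and the way messages are produced are all placeholder choices:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {
    // Clients currently waiting for a message; whoever produces a chat message drains this queue
    private static final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Detach the response from the request thread; it stays open until completed
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);   // give up after 30s and let the client poll again
        waiting.add(ctx);
    }

    // Called from wherever a new chat message is produced
    public static void broadcast(String message) throws IOException {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            ctx.getResponse().getWriter().println(message);
            ctx.complete();   // finish this poll; the client immediately issues the next one
        }
    }
}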
I don't know how you are thinking of achieving server push here. As far as I can see, the server needs a request to respond to over HTTP. So, when there is a request, the server will respond to that.
If I use server push and I don't close the response of the servlet, will it cause the server to slow down?
App Engine will not let you do that. You have to finish your response within thirty seconds, or it will be killed. The thirty seconds is also an edge case; most calculations they do (for quota and such) are based on a 75-millisecond response time.
How many simultaneous connections can a typical Tomcat server handle
Tomcat? I thought you were planning to use App Engine?
Pull. Always pull.
I know it's a manufacturing-oriented book but the advice from Lean Thinking (Womack & Jones) is invaluable in any context (roughly, from memory):
Start by defining value,
line up the activities that create value in the value-stream,
create flow across the value-stream,
let customers pull value from the value-stream,
compete against perfection rather than other organizations
If I misquoted them, I apologize. Anyway, all of those principles can easily be applied to the development of any software product just as they can to the production of any physical product, but the one that matters for you is pull.
Letting consumers of a service pull rather than pushing to them not only makes your programming model easier, it aligns activity with demand. You can still use queuing to level load over time, if you have to, just the way you could with push, but this way you have complete visibility into what, exactly, happens in any given transaction.
I don't quite get your first question but the answer is still pull.
The answer to your query depends on what underlying protocol you wish to use.
Since you have mentioned JSP/servlets, your app will be implemented over the HTTP protocol.
HTTP is a protocol over TCP. TCP is connection-oriented and remains alive until the connection is ended. However, HTTP connections are persistent only for the duration of a single request-response cycle; the TCP connection is broken after every request-response cycle. That should answer your doubt about how many socket connections a typical Tomcat server will be able to handle: the connections will not be persistent at all, they will only last the duration of an HTTP request-response cycle.
Given this basic idea, I would suggest you use a client pull strategy to implement your app.
Even with "server push" over HTTP, despite the name, it is always the web client that polls the server at regular intervals, which just gives the illusion of server push. The HTTP specification mandates that the client makes a request to which the server responds.
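Reduced to its simplest form, a client-pull loop is just the following (the URL and the five-second interval are arbitrary):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChatPoller {
    public static void main(String[] args) throws Exception {
        while (true) {
            HttpURLConnection con =
                    (HttpURLConnection) new URL("http://example.com/chat/messages").openConnection();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);   // new messages since the last poll
                }
            }
            Thread.sleep(5000);   // poll again in five seconds
        }
    }
}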
I have considerable experience in developing chat applications (both mobile and web).
Let me know if you need any assistance. I will be more than willing to help.
We have a string processing service (C++, uses stdin/stdout for input/output) that has different layouts. Each layout runs separately (eventually they will run on separate machines), and each layout takes time to load; that's why it must keep running after the first run.
I must implement a system with a client that will ask the master server to connect it to a relevant slave server, which actually runs the relevant layout service. The slave server will pass the data from the client to the service and, when finished, will become available on the master server for other clients.
The question is, what is the best way to go about implementing the servers? Should I keep an open connection between slave and master until the process is complete to notify the master that the connection is over, or keep some sort of variable in a synchronized function to check that?
Any other important input (or alternative designs) I have overlooked is also very welcome. Thanks!
Assuming you can't replace the C++ stuff, here is how I would do it off the top of my head.
I would set up one master server. That server would run a process that accepts requests (probably over HTTP, so it'd be a webservice), and I would have it read each request, parse out what it is, and then call the correct slave. Basically it acts as a proxy. Once it receives the response from the slave, it forwards it back to the caller. The simplicity here means that if you start getting more of one type of request, you can set up additional servers for that and round-robin requests to them.
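As an illustration of that proxy idea only, here is a sketch using the JDK's built-in com.sun.net.httpserver for brevity; the slave URLs, the port, and the /process path are entirely made up:

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Master: accepts a request, picks a slave, forwards the body, relays the answer
public class MasterProxy {
    private static final List<String> SLAVES = List.of("http://slave1:9000", "http://slave2:9000");
    private static final AtomicInteger next = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/process", exchange -> {
            // Round-robin choice; a real master would pick by layout and availability
            String slave = SLAVES.get(Math.floorMod(next.getAndIncrement(), SLAVES.size()));
            HttpURLConnection con = (HttpURLConnection) new URL(slave + "/process").openConnection();
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream(); InputStream in = exchange.getRequestBody()) {
                in.transferTo(out);                       // forward the client's request body
            }
            byte[] reply = con.getInputStream().readAllBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);                         // relay the slave's answer to the caller
            }
        });
        server.start();
    }
}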
The slaves would be webservices that open the C++ program and forward input and retrieve output. That's all it would do.
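The part where a slave talks to the long-running C++ service over stdin/stdout might be sketched like this; the executable name and the one-line-in, one-line-out protocol are assumptions about your service:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

// Wraps one already-loaded layout process and feeds it requests one at a time
public class LayoutWorker {
    private final Process process;
    private final BufferedWriter toProcess;
    private final BufferedReader fromProcess;

    public LayoutWorker(String layoutExecutable) throws IOException {
        // Start the C++ layout service once; it stays loaded between requests
        process = new ProcessBuilder(layoutExecutable).redirectErrorStream(true).start();
        toProcess = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
        fromProcess = new BufferedReader(new InputStreamReader(process.getInputStream()));
    }

    // One request-response exchange; synchronized because the process handles one job at a time
    public synchronized String submit(String input) throws IOException {
        toProcess.write(input);
        toProcess.newLine();
        toProcess.flush();
        return fromProcess.readLine();   // assumes the service answers with a single line
    }
}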
I wouldn't bother keeping open connections (except between the slave and the C++ program based on your description). Just using a web request for this stuff will keep the connection between the master and the slave open during the process, but it shouldn't be a problem. This way you don't need to worry about this detail.
Now if I were you I would seriously look at reimplementing the C++ code in Java or calling it via JNI or something. If you can avoid it, I think avoiding the Java wrapper around C++ thing would be a good design goal. The Java could do whatever expensive process it is during start up once, and then hold things ready in memory like the C++ code does.
I hope this helps.
Depending on your scalability needs, you may want to take a look at the Java NIO package. This will give you a starting point to build a scalable, non-blocking server implementation.
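As a starting point only, a skeletal non-blocking accept/read/echo loop with java.nio looks like this (the port and buffer size are arbitrary):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();   // block until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) {
                        client.close();            // peer disconnected
                    } else {
                        buf.flip();
                        client.write(buf);         // echo back whatever arrived
                    }
                }
            }
        }
    }
}

A single thread can service many connections this way, which is the property that makes the master/slave coordination above scale without a thread per client.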