I'm working on a project that involves several applications on several computers. The main application is a C++ socket server running on a CentOS server, and the client application is a Java program running on client PCs.
These will communicate back and forth using sockets. I have defined a set of commands and arguments that they will need to implement in order to support everything.
I've thought of several options, but I can't seem to find the perfect one.
Should the C++ and Java programs write their own classes/parsers to validate the messages?
Should I create an XML file (served over HTTP) that defines all of the communication messages? (That the server/client would parse and create actions for)
Or use some kind of third party library (Google Protocol Buffers?)
The point is that when the socket server sends a message X, the client must know what to do with it. The same applies in the other direction.
What would be the best way to implement this? Having the XML file would be nice, as the client/server could parse it and create methods/actions based on the data. But a clearer approach would be to create classes that do the parsing.
I always do this the binary way. First you must decide which underlying transport protocol to use; it could be UDP, TCP, or TLS/SSL. I would start with TCP, since it's very stable and easy to use.
A simple way to handle packets is to begin each packet with a number that specifies which kind of packet it is. Based on this number you dispatch the packet to a corresponding class that handles the data. This can be done easily in both C++ and Java. I think it's easier in C++, since there you can read an entire struct based on the first number, but in Java you generally have to read it primitive by primitive.
Remember that the standard over the Internet is to use big-endian (network byte order) values, but most machines today (Intel, AMD, and usually ARM) use little-endian byte order. So in C++ you will have to swap the bytes of all primitives before sending them, and swap the received values as well. In Java, DataInputStream and DataOutputStream read and write big-endian values by default.
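For illustration, here is a minimal Java sketch of the type-number-first framing described above, using DataOutputStream/DataInputStream (which are big-endian by default). The message type constants and payload layout are invented for the example, not part of any existing protocol.

```java
import java.io.*;

public class BinaryProtocolExample {
    // Hypothetical message type identifiers agreed on by client and server.
    static final int MSG_LOGIN = 1;
    static final int MSG_CHAT  = 2;

    // Write a chat message as [type][payload length][UTF-8 bytes].
    static void sendChat(DataOutputStream out, String text) throws IOException {
        byte[] payload = text.getBytes("UTF-8");
        out.writeInt(MSG_CHAT);          // written big-endian (network byte order)
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Read one message and dispatch on its type number.
    static void readMessage(DataInputStream in) throws IOException {
        int type = in.readInt();
        int length = in.readInt();
        byte[] payload = new byte[length];
        in.readFully(payload);
        switch (type) {
            case MSG_CHAT:
                System.out.println("chat: " + new String(payload, "UTF-8"));
                break;
            default:
                System.out.println("unknown message type " + type);
        }
    }
}
```

The C++ side would read the same framing, using ntohl/htonl (or equivalent) to handle the byte-order conversion.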
ICE by ZeroC is a cross-platform and cross-language library for TCP/IP communication between C++ and Java. I've used ICE to communicate between Linux/AIX/Solaris for C++/Java programs without problems. ICE uses a binary transfer protocol that does the big-endian/little-endian conversion for you. The downside of ICE is that you need to define the messages and calls using its own interface definition language (Slice).
I'm designing a chat application where the data will be stored in a MySQL database and will be manipulated by PHP scripts.
I want to have the possibility of developing several different clients. What are the best options for exposing the functionality of the PHP scripts to clients?
Thanks
(As I said in a comment above, this isn't an MVC pattern at all.)
Typically, what you're trying to achieve can be done by developing a web-service to expose certain features of your application running on your server (and storing data in your database). You would need to define message formats to be exchanged between your client and your service. This is typically based on JSON or XML syntax.
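As a rough illustration of what such an exchange could look like from a Java client, here is a sketch that POSTs a JSON-encoded chat message to a hypothetical PHP endpoint. The URL, the script name, and the JSON field names are all invented for the example and would depend on your actual service.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChatClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; the real URL and fields depend on your PHP scripts.
        URL url = new URL("http://example.com/chat/send_message.php");
        String json = "{\"room\": \"general\", \"user_id\": 42, \"text\": \"hello\"}";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes("UTF-8"));     // send the JSON body
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}
```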
Just a few more points:
-Data- To store the data, that is, the messages and user info, I selected MySQL because that's what's available on Apache.
MySQL isn't available on Apache (Httpd). Apache and MySQL have little to do with one another, besides the fact there are "LAMP" stacks that bundle them together. In principle, nothing prevents you from using another RDBMS (e.g. PostgreSQL, MS SQL, ...) or even NoSQL databases.
-Controller- To access and manipulate data I've chosen PHP because that's what's available on Apache.
Again, PHP is a popular choice to run on Apache Httpd, but PHP is far from the only choice (you can implement services in Python or Perl, for example).
-View(Client)- It's possible to develop different clients, as long as they can interact with the PHP scripts that have access to the database. For now, I'm using Java to build the client. It has the advantage of being usable either as an applet or as a standalone application that can be downloaded.
It's 2013; Java applets are a technology of the past. (Standalone Java applications or server-side Java are a different matter.)
I'll have cron jobs to select the latest messages from each of the chat rooms. These messages will be written to a file; each chat room will have its own file. To read the messages, the client has to request the corresponding file and present its content to the user. To send messages to a chat room, the client has to call the PHP script, passing information like the destination chat room, user id and so on. Insertions will be heavy on the database but reads will be a bit lighter.
This is a clear case of premature optimisation, or rather inadequate optimisation (cron jobs run at best every minute, which is not ideal for a chat room). A well designed database (e.g. with appropriate indexes) might have no problem handling chat room traffic. You might want to read a bit more about web services and databases before diving into this sort of detail.
I am attempting to contact everyone on a LAN to discover which devices are currently using an IP and running my service. Every device running the service will know which other devices are connected when they come online. I have basic networking experience (TCP/UDP), but I haven't done much with more complicated communication packages. I wanted to post what I have researched/tried so far and get some expert responses to limit my trial-and-error time on future potential solutions.
Requirements:
Currently using Java, but I require cross-language communication.
Must be done in an acceptable time frame (a couple of seconds max) and preferably reliably.
I would like to use similar techniques for both the broadcast and later communications to avoid the added complexity of multiple packages/technologies.
Currently I am planning on a heartbeat to known IPs to signal that they are still connected, but I may want to continuously broadcast to the LAN later.
I am interested in using cross-language RPC communication for this service, but this technique doesn't necessarily have to use that.
Later communication (non-broadcast) must be reliable.
Research and things attempted:
UDP - Worried about cross-language communication and the lack of reliable delivery, and it would add another way of communicating rather than using one solution like the ones below. I would prefer to avoid it if another, more complete solution can be found.
Apache Thrift - Currently I have tried to iterate through all potential IPs and connect to each one. This is far too slow, since the timeout is long for each attempted connection (when I call open). I have yet to find any broadcast option.
ZeroMQ - I have done very little testing with basic ZeroMQ, having only used a wrapper of it in the past. The pub/sub features seem useful for this scenario, but I am worried about subscribing to every IP in the LAN. I am also worried about what will happen when I attempt to subscribe to an IP that doesn't yet have a service running on it.
Do any of these recommendations seem like they will work better than the others given my requirements? Do you have any other suggestions of technologies which might work better?
Thanks.
What you specify is basically two separate problems: discovery/monitoring and a service provider. Since these two issues are somewhat orthogonal, I would use two different approaches to implement this.
Discovery/monitoring
Let each device continuously broadcast a (small) heartbeat/state message on the LAN over UDP on a predefined port. This heartbeat should contain the IP/port of the sending device, along with other interesting data, for example an address (URL) of the service(s) this device provides. Choose a compact message format if you need to keep bandwidth utilization down, for example Protocol Buffers (available in many languages), or JSON for readability. These messages should be published periodically, for example every 5 seconds.
Now, let each device listen for incoming messages on the broadcast address and keep an in-memory map [sender, last-recorded-time + other data] of all known devices. Iterate over the map, say, every second and remove senders that have been silent for x heartbeat intervals (e.g. 3 x 5 seconds). This way each node will know about all other responding nodes.
You do not have to know about any IPs in advance, you do not need any extra directory server, and you do not need to iterate over all possible IP addresses. Also, sending/receiving data over UDP is much simpler than over TCP and it does not require any connections. It also generates less overhead, meaning less bandwidth utilization.
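A minimal Java sketch of this heartbeat idea is shown below. The port number, sweep interval, and JSON payload are arbitrary choices for the example; a real implementation would also need the map sweep described above.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HeartbeatBroadcaster {
    public static void main(String[] args) throws Exception {
        final int PORT = 47808;                       // arbitrary predefined port
        InetAddress broadcast = InetAddress.getByName("255.255.255.255");

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            while (true) {
                // Small JSON heartbeat; in practice include your own IP/port and service URL.
                byte[] payload = "{\"service\":\"http://192.0.2.1:8080/api\"}".getBytes("UTF-8");
                socket.send(new DatagramPacket(payload, payload.length, broadcast, PORT));
                Thread.sleep(5000);                   // publish every 5 seconds
            }
        }
    }
}

// Listener side: record the last time each sender was heard from.
class HeartbeatListener {
    public static void main(String[] args) throws Exception {
        Map<String, Long> lastSeen = new ConcurrentHashMap<>();
        try (DatagramSocket socket = new DatagramSocket(47808)) {
            byte[] buf = new byte[1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                lastSeen.put(packet.getAddress().getHostAddress(), System.currentTimeMillis());
                // A separate timer would sweep this map and drop entries silent for 3 intervals.
            }
        }
    }
}
```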
Service Provider
I assume you would like some kind of request-response here. For this I would choose a simple REST-based API over HTTP, talking JSON. Switch out the JSON payload for Protocol Buffers if your payload is fairly large, but in most cases JSON would probably work just fine.
All-in-all this would give you a solid, performant, reliable, cross-platform and simple solution.
Take a look at the Zyre project in the ZeroMQ Guide (Chapter 8). It's a fairly complete local network discovery and messaging framework, developed step by step. You can definitely reuse the UDP broadcast and discovery, maybe the rest as well. There's a full Java implementation too, https://github.com/zeromq/zyre.
I would use JMS, as it is cross-platform (for the client at least). You still have to decide how you want to encode the data, and unless you have specific ideas I would use XML or JSON, as these are easy to read and check.
You can use ZeroMQ for greater performance and lower level access. Unless you know you need this, I suspect you don't.
You may benefit from the higher level features of JMS.
BTW: These services do service discovery implicitly. There is no particular need (except for monitoring) to know about IP addresses or whether services are up or down. Their design assumes you want to be protected from having to know these details.
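For what it's worth, a minimal JMS publish could look something like the sketch below. It assumes ActiveMQ as the JMS provider (that choice, the broker URL, and the topic name are all example assumptions; any JMS provider would do, only the connection-factory class would change).

```java
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsPublishSketch {
    public static void main(String[] args) throws Exception {
        // Assumes an ActiveMQ broker running locally on the default port.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("device.status");     // example topic name
        MessageProducer producer = session.createProducer(topic);

        // JSON payload as suggested above; the schema is up to you.
        TextMessage message = session.createTextMessage("{\"device\":\"node-1\",\"state\":\"up\"}");
        producer.send(message);

        connection.close();
    }
}
```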
What would be a good starting point for learning TCP socket programming in Java?
I have reasonably good experience in Java software development but am new to network/socket programming.
I am working on developing a caching proxy server, but I am not able to handle POST requests or 302/405 responses.
I referred to the code below:
http://blog.edendekker.me/a-java-proxy-server-with-caching-and-validation/
But I am unable to modify the code to handle URLs like www.gmail.com that return a 302 redirect or a 405 Method Not Allowed error, and I am also not able to read POST requests.
What would be a good starting point to read about handling these errors and handling POST requests?
Any reference links, example codes would be helpful.
My previous question on a similar topic:
Handle a POST request and write response to client socket
Thanks
It looks like your problems are more related to HTTP than to TCP as such. Do you want to implement a proxy server in order to learn the HTTP protocol? If not, there are several good proxies freely available, often including source code. If you just want to learn TCP socket programming, try something simpler, such as POP3. Also, if you want to do TCP in Java, be aware that there are two major ways to implement servers:
One thread per connection (a minimal sketch of this approach follows below)
One thread per application, shared between connections (Java NIO and NIO2)
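As a starting point, the classic thread-per-connection model can be sketched like this (a toy line-echo server; the port number is arbitrary and there is no HTTP handling here):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {   // arbitrary port
            while (true) {
                Socket client = server.accept();               // blocks until a client connects
                new Thread(() -> handle(client)).start();      // one thread per connection
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket socket = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), "UTF-8"));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);                  // echo each line back
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```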
Assuming you really want to tackle the HTTP proxy: HTTP is not trivial if you want to implement all of the functionality that e.g. browsers use, like caching, authentication, etc., plus the additional complexity incurred when implementing a proxy.
If you really want to bite the bullet, start with a lightweight subset of the HTTP protocol; for all the details, refer to RFC 2616. But be aware that RFC 2616, the HTTP 1.1 specification, refers to other RFCs that you might have to consult as well for specific areas such as authentication.
Update:
One other thing that might be easier in some cases is sniffing the communication between, say, a browser and an off-the-shelf proxy to quickly see how others do it.
I'm currently writing a simple client/server in Java using sockets. I want the server to make decisions based on different "commands" and/or serialized objects that are received from the client via the socket, and vice-versa.
Something like:
[Receive Command 'DoSomething' From Client]
[Call Method 'DoSomething' on the Server]
[Send result/status to Client]
etc...
Is there a convention for flow control like this using ordinary socket communication, perhaps with serialization? Should I be using RMI in Java instead?
I would recommend KryoNet for doing any RMI-type stuff without the overhead of RMI and the inflexibility it brings.
http://code.google.com/p/kryonet/
KryoNet makes the assumptions that it will only be used for client/server architectures and that KryoNet will be used on both sides of the network. Because KryoNet solves a specific problem, the KryoNet API can do so very elegantly.
The Apache MINA project is similar to KryoNet. MINA's API is lower level and a great deal more complicated. Even the simplest client/server will require a lot more code to be written. MINA also is not integrated with a robust serialization framework and doesn't intrinsically support RMI.
The Priobit project is a minimal layer over NIO. It provides TCP networking similar to KryoNet, but without the higher level features. Priobit requires all network communication to occur on a single thread.
The Java Game Networking project is a higher level library similar to KryoNet. JGN does not have as simple of an API.
There is not. If you create client/server communication with sockets, you'll have to define your own protocol and the rules that apply for that protocol.
RMI may ease this step by letting you invoke specific object methods remotely. The trade-off is the initial setup for the RMI server etc., which I've heard in recent years is not as hard as it used to be.
Here's an RMI tutorial you may find helpful.
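To give a flavour of the RMI route, here is a minimal sketch; the interface, method, and registry name are invented for the example and would map to your own "commands".

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface shared by client and server.
interface CommandService extends Remote {
    String doSomething(String argument) throws RemoteException;
}

public class RmiServerSketch implements CommandService {
    @Override
    public String doSomething(String argument) {
        return "did something with " + argument;
    }

    public static void main(String[] args) throws Exception {
        // Export the implementation and register it under a well-known name.
        CommandService stub =
                (CommandService) UnicastRemoteObject.exportObject(new RmiServerSketch(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);  // default RMI port
        registry.rebind("CommandService", stub);
        System.out.println("RMI server ready");
        // A client would then do:
        //   CommandService svc = (CommandService) LocateRegistry
        //           .getRegistry("server-host", 1099).lookup("CommandService");
        //   String result = svc.doSomething("hello");
    }
}
```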
I have not had much experience with web services or APIs.
We have a website built on an Oracle -> Sun App Server -> Java -> Struts2 stack. We have a requirement to provide an API for our system. This API will be used by other systems outside of our system. The API is just for a simple stored procedure (SP) that we have in our database. The other system does not want to connect to our DB to gain access to the SP, but instead wants an API exposed as a 'web service'.
Can the community please shed some light on how to go about this? Will the API be put on our web server? Is that how the other system will connect to it? And how does one go about creating a public API?
Some things you'll need to think about are:
SOAP vs REST (Why would one use REST instead of SOAP based services?)
How will you handle authentication?
Does it need to scale?
You might want to take a look at https://jersey.dev.java.net/.
It would also be helpful to look at how another company does it, check http://www.flickr.com/services/api/ for some ideas.
If you are using the Sun App Server, it should be fairly trivial to make an EJB exposed as a web service with the @WebService annotation, and then have that EJB call the stored procedure and return the data. The app server gives you tools to publish a WSDL, which is what they will use to know how to call your API.
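A rough sketch of that approach might look like the following. The class name, data-source JNDI name, and stored-procedure name are placeholders, error handling is omitted, and how the SP actually returns rows (e.g. via an Oracle ref cursor) will affect the JDBC calls.

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.sql.DataSource;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

@Stateless
@WebService  // the app server generates and publishes the WSDL for this endpoint
public class OrderLookupService {

    @Resource(mappedName = "jdbc/MyDataSource")  // placeholder JNDI name
    private DataSource dataSource;

    @WebMethod
    public List<String> findOrders(String customerId) throws Exception {
        List<String> result = new ArrayList<>();
        try (Connection conn = dataSource.getConnection();
             CallableStatement call = conn.prepareCall("{call FIND_ORDERS(?)}")) {  // placeholder SP
            call.setString(1, customerId);
            try (ResultSet rs = call.executeQuery()) {
                while (rs.next()) {
                    result.add(rs.getString(1));   // marshalled into the SOAP response
                }
            }
        }
        return result;
    }
}
```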
That being said, what sounds easy at 50,000 feet is a real pain once you deal with all the details. First, what about security? Second, are web services really required, or is there a better, more obvious communication mechanism, such as (at a minimum) REST, if not some simple servlet communication? And the hardest part: in exactly what format will you return this result set?
Anyway, you could be dealing with a bit of a political football here ("what, you don't know how to do web services, everyone knows that, etc.") so it can be a bit hard to probe the requirements. The good news is that publishing a web service is pretty trivial in the latest Java EE (much easier than consuming one). The bad news is that the details will be a killer. I have seen experienced web service developers spend hours on namespace issues, for example.
SOAP or REST (or something else) is one side of the coin and depends on what the clients want.
The other (more) important thing is the API design itself. Should it be stateless or stateful? Are clients co-located in the same VM (app server), remote on the same LAN, or even in a WAN?
As soon as the communication goes over the wire, it gets slow due to serialization. So you want API methods to obtain bigger (but not too big) chunks of data at a time.
Or in other words, your question can not really be answered without knowing a lot more about what you want and need to do.