I have a system I am implementing where a MATLAB server uses a socket to accept a TCP connection, and a Java client connects to that server.
My problem is that when the server accepts the client's connection, the client apparently manages to send its input before the server reaches the line of code that blocks it waiting to read the expected input from the client...
Assuming I do not know how long a wait would be safe in the general case, is there any way to solve this problem for all situations?
Could I use some sort of lock object shared between MATLAB and Java? Should I make the client always wait for some sort of confirmation from the server? And if so, how exactly can I guarantee that the server will start listening quickly enough after sending such a notification to the client?
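For example, something along these lines on the Java client side is what I have in mind - just a sketch, with a made-up port and a made-up "READY" token:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class HandshakeClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port where the MATLAB server is listening.
        try (Socket socket = new Socket("localhost", 30000);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {

            // Block until the server confirms it has reached its read call.
            String greeting = in.readLine();
            if (!"READY".equals(greeting)) {
                throw new IllegalStateException("Unexpected greeting: " + greeting);
            }

            // Only now send the actual payload, terminated by a newline.
            out.println("my request data");
        }
    }
}

(My understanding is that once the TCP connection is established, anything the client writes is buffered by the OS on the server side until the server actually reads it, so the exact moment the server calls its read should not matter - but I am not sure whether the MATLAB side behaves differently, which is why I am asking.)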
Thanks in advance!
By the way, if anyone knows of a simple way of getting the system time in MATLAB the way Java reports it (System.currentTimeMillis()), that would be useful for further testing. I know MATLAB has quite a few functions for accessing time, but I don't know whether any of them (or any combination of them) returns it in exactly the same form as Java.
There are easier ways to call Matlab from Java - JMI for example:
http://undocumentedmatlab.com/blog/jmi-java-to-matlab-interface/
Regarding the system time, run this in Matlab:
javaTime = java.lang.System.currentTimeMillis
Related
I'm new to Java, but I have experience with AS3. When I was using AS3, I used callbacks to notify both the server and the client whenever I sent a message from one to the other. But that was with a pre-built server called Player.IO (as I recall, the server was built with C#). There were listeners for messages on both the client and the server.
But as I started practicing sockets in Java, all the examples I have seen use BufferedReader and its readLine method to receive data from the server or client. The problem is that this method waits until it receives data, and this waiting is of course a big problem, especially for the client.
Is there any way to use callbacks or other methods for communicating between the server and client that doesn't cause either program to wait?
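To make the question concrete, this is roughly the shape I am hoping for - just a sketch, with a made-up MessageListener callback interface:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

public class SocketListener {

    // Made-up callback interface for incoming messages.
    public interface MessageListener {
        void onMessage(String message);
        void onDisconnect(IOException cause);
    }

    // Reads lines on a background thread and forwards each one to the callback,
    // so the rest of the program never blocks on readLine().
    public static void listen(Socket socket, MessageListener listener) {
        Thread reader = new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    listener.onMessage(line);
                }
                listener.onDisconnect(null);   // orderly close by the other side
            } catch (IOException e) {
                listener.onDisconnect(e);      // connection error
            }
        });
        reader.setDaemon(true);
        reader.start();
    }
}

Is something like this how it is normally done, or is there a more standard mechanism?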
I was wondering how, if it is possible, the task of creating/emulating a Java ServerSocket in C++ could be performed. I'm new to C++ but fairly proficient in Java. My server (written in Java) needs to receive data from both Java and C++ clients (data is transferred as JSON strings), but I'm not sure how to establish a connection with the NIO server from C++.
Thanks in advance for any help!
Start by reading the following man pages:
socket(2)
bind(2)
listen(2)
accept(2)
connect(2)
After you determine whether you need to create a server-side or a client-side socket, you can then implement it using the appropriate combination of these system calls.
A socket is a socket. Whether the other end is an application written in Java, C++, Perl, Ruby, or any other language makes no difference; all sockets are created the same way. The language does matter for the format of the data exchanged across the socket, but it looks like you have that covered.
My application uploads files to my client's server, but he wants a special "pause upload" feature. I can't simply close the connection, or even kill the process - the server needs to see the connection as lost, otherwise his server application deletes the unfinished file - so I have to simulate a "cable unplug" in code. Do you have any suggestions?
Thanks for your help, and sorry for my English :)
jirka
You can use Mockito to create a Socket mock for your unit tests, as suggested in this question: Testing Java Sockets
If you wrote both the client and the server, maybe a better option is to send some OOB data to tell the server that you are pausing (or a message telling the server not to delete the file on the next socket close). It is usually better to be explicit than to rely on the side effects of certain actions - in your case, on a socket closing without explicit connection termination.
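For instance, a minimal sketch of that explicit approach on the client side - the "PAUSE" message is made up, and it only helps if the server application is changed to understand it:

import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class UploadControl {

    // Sends a made-up "PAUSE" control line over the upload connection so the
    // server knows the transfer is paused rather than abandoned.
    public static void requestPause(Socket uploadSocket) throws Exception {
        PrintWriter out = new PrintWriter(
                new OutputStreamWriter(uploadSocket.getOutputStream(),
                        StandardCharsets.UTF_8), true);
        out.println("PAUSE");
        // Keep the socket open - closing it is exactly what triggers the
        // deletion you are trying to avoid.
        // (Socket.sendUrgentData(int) would be one way to send a one-byte OOB
        // signal instead, if the server can be made to look for it.)
    }
}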
I'm new to Java and RMI, but I'm trying to write my app in such a way that there are many clients connecting to a single server. So far, so good....
But when I close the server (simulating a crash or communication issue) my clients remain unaware until I make my next call to the server. It is a requirement that my clients continue to work without the server in an 'offline mode' and the sooner I know that I'm offline the better the user-experience will be.
Is there an active connection that remains open that the client can detect a problem with or something similar - or will I simply have to wait until the next call fails? I figured I could have a 'health-check' ping the server but it seemed like it might not be the best approach.
Thanks for any help
Actually, I'm just trying to learn more about RMI and CORBA myself, and I'm not as far along as you are. All I know is that those systems are also designed to be cheap to run, and as far as I know an active connection is an expensive thing to keep open.
I would suggest using a multicast address to which your server periodically sends some kind of "I'm still here" message, but without using TCP connections; UDP should be enough for that purpose and is more efficient.
I looked into this a bit when I was writing an RMI app (a university assignment), but I didn't come across any built-in functionality for testing whether a remote system is alive. I would just use a UDP heartbeat mechanism for this.
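A rough sketch of such a heartbeat - the group address, port, and timings are made up, and this runs alongside RMI rather than being part of it:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class Heartbeat {
    // Made-up group address, port, and intervals.
    private static final String GROUP = "230.0.0.1";
    private static final int PORT = 4446;

    // Runs on the server: multicast an "I'm still here" datagram every 2 seconds.
    static void serverLoop() throws Exception {
        InetAddress group = InetAddress.getByName(GROUP);
        byte[] payload = "ALIVE".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                socket.send(new DatagramPacket(payload, payload.length, group, PORT));
                Thread.sleep(2000);
            }
        }
    }

    // Runs on each client: if no heartbeat arrives for 5 seconds, assume offline.
    static void clientLoop() throws Exception {
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(InetAddress.getByName(GROUP));
            socket.setSoTimeout(5000);
            byte[] buf = new byte[64];
            while (true) {
                try {
                    socket.receive(new DatagramPacket(buf, buf.length));
                    // Heartbeat received: server still reachable.
                } catch (SocketTimeoutException e) {
                    System.out.println("No heartbeat - switching to offline mode");
                    return;
                }
            }
        }
    }
}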
(Untested.) You could have a separate thread repeatedly make an RMI call into the server that just does a "wait X seconds" and then returns; when the server is brought down, that in-flight call should fail, which tells the client that the server is gone.
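A rough, equally untested sketch of that idea - the Watchable interface and its ping() method are made up, and the server-side implementation of ping() would just sleep for a while and return:

import java.rmi.Remote;
import java.rmi.RemoteException;

public class ServerWatcher {

    // Hypothetical remote interface exposed by the server.
    public interface Watchable extends Remote {
        void ping() throws RemoteException;
    }

    // Keeps one long-running remote call outstanding at all times; if the
    // server dies, the in-flight call fails with a RemoteException and the
    // client finds out almost immediately.
    public static void watch(Watchable server, Runnable onOffline) {
        Thread watcher = new Thread(() -> {
            try {
                while (true) {
                    server.ping();   // blocks on the server for "X seconds", then returns
                }
            } catch (RemoteException e) {
                onOffline.run();
            }
        });
        watcher.setDaemon(true);
        watcher.start();
    }
}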
Well, I am developing a single-server, multiple-client program in Java. My question is: can I use a single stream for all the clients, or do I have to create a separate stream for each client?
Please help.
Thank you.
Typically you'd need a stream per client. In some cases you can get away with UDP and multicasting, but it doesn't sound like a great idea for a chat server.
Usually it's easy to get a stream per client with no extra work, because each client will connect to the server anyway, and a stream can easily be set up over that connection.
Yes, you can, but I think it would be harder.
If you're using java.net.ServerSocket then each client accepted through:
Socket client = server.accept();
will have its own stream, so you don't have to do anything else.
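For example, a minimal sketch of that usual pattern - one thread per accepted client, each with its own streams; the port number is arbitrary:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9000)) {   // arbitrary port
            while (true) {
                Socket client = server.accept();               // one socket per client
                new Thread(() -> handle(client)).start();      // one thread per client
            }
        }
    }

    private static void handle(Socket client) {
        // Each client gets its own input/output streams; nothing is shared.
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);
            }
        } catch (Exception e) {
            // Client disconnected or I/O error; just drop this client.
        }
    }
}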
Is there a real need for a single stream for all clients, or is it just something you think would help?
If it's the latter, it could cause more problems than it solves.
Can you do it?
Yes, as Jon Skeet said, you can use multicasting.
Should you do it?
That depends on what you are using the streams for.
For most client server applications, you will need a stream per client to maintain independent communications. Of course, there are applications where using multicasting is the right approach, such as live video streaming. In such a case, you would not want to overwhelm your network while streaming the same data to multiple clients. Of course, even in this case there will typically be a single control channel of some sort between each client and server.