I have a Java client-server application that uses CORBA. The application runs well over a wired connection, but when connected via WiFi the client runs very slowly. Does anybody have an idea why CORBA is so slow over WiFi?
Thanks in advance.
You haven't quantified what is slow and what is fast. There are a few things to look at. First, the design of your IDL interfaces: normally each invocation of an IDL operation results in a remote call that goes over the network, so when you want to retrieve 1M values, don't perform 1M operations; retrieve them in bigger chunks. Second, what is the payload of the invocation, i.e. the size of the data to transmit? If it is large and your WiFi link is slow, then it simply takes time to transmit the data; ZIOP (CORBA compression) lets CORBA compress your application data, which is something to look at. Last, is your network set up correctly? Do all the host names and IP addresses you use resolve correctly? If, for example, the DNS settings in your WiFi setup are not OK, reverse lookups can kill performance.
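To illustrate the chunking point, here is a minimal Java sketch; the `ValueStore` interface and its operations are invented stand-ins for whatever your IDL-generated stubs actually look like:

```java
public class ChunkedFetch {
    // Hypothetical remote interface: every call is a network round trip.
    interface ValueStore {
        int getValue(int index);                 // 1M values -> 1M round trips
        int[] getValues(int offset, int count);  // one round trip per chunk
    }

    // Fetch 'total' values in chunks of 'chunkSize': roughly total/chunkSize
    // round trips instead of 'total'.
    static int[] fetchAll(ValueStore store, int total, int chunkSize) {
        int[] result = new int[total];
        for (int offset = 0; offset < total; offset += chunkSize) {
            int count = Math.min(chunkSize, total - offset);
            System.arraycopy(store.getValues(offset, count), 0, result, offset, count);
        }
        return result;
    }
}
```

Over a high-latency WiFi link, cutting the number of round trips usually matters far more than cutting bytes.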
Check whether your CORBA implementation lets you enable logging; see what is happening, how much data is transmitted, whether you see errors, etc.
CORBA can be a very network-intensive protocol if developers design CORBA objects like ordinary C++/Java objects, because this causes many small interactions over the network. That makes it very susceptible to network latency, i.e. not the overall speed of the network but the time it takes to open a stream and send a single packet. Wireless networks can be very fast at sending large packets once a connection is established, but I suspect your wireless network is quite slow to route packets.
We are working on an Android project with the requirements below.
The application should be able to send data to all the devices in the WiFi LAN that are running our application.
Some payloads are expected to be of size >= 5MB.
The data shouldn't be lost, and if it is lost the client should know about the failure.
All the devices should be able to communicate with each other. No message is targeted at a specific device; instead, every message should reach all the devices in the network.
No internet, hence no remote server.
Research we have done:
UDP broadcasting - UDP doesn't guarantee message delivery, but delivery is a prime requirement in our case. Hence not an option.
TCP - TCP guarantees message delivery but requires the receiver's IP address to be known beforehand, and in our case we need to send the message to all the devices inside the LAN. Hence not a straight option.
Solutions we are looking into:
A hybrid approach - name one of the devices in the network as the server. Post all the messages to this local server. The server keeps an open socket to each of the devices (which have our application), and when there is a message from a device it routes the message to all the devices. The disadvantages of this approach are:
The server has multiple sockets open, one per device. But in our case we are expecting <= 5 devices in the LAN.
Server discovery requires continuous UDP broadcast.
We want all the data on all the devices, so if we introduce a new device into the LAN, that device needs to get all the data from the server.
So my question: have you ever worked on this kind of hybrid approach? Or can you suggest any other approaches?
Your hybrid approach is the way to go.
Cleanly split your problem into parts and solve them independently:
Discovery: Devices need to be able to discover the server, if there is any.
Select server: Decide which of your devices assumes the server role.
Server implementation: The server distributes all data to all devices and sends notifications as necessary. Push or pull with notifications does not matter.
Client implementation: Clients only talk to the server. The device which contains the server should also contain a normal client, potentially passing data to the server directly, but using the same abstract protocol.
You could use mDNS (aka Bonjour or zeroconf) for the discovery, but I would not recommend it. It often creates more problems than it solves, and it does not solve your 'I need one server' problem. I would suggest you handcraft a simple UDP broadcast protocol for the discovery, one which already tells you who the server is, if there is one.
Select server: one approach is to use network metadata which you have anyway, for example 'use the device with the highest IP address'. This often works better than fancy arbitration algorithms. Once you have established a server, new devices would use it rather than taking over the server role.
Use UDP broadcast for the discovery, with manual heuristic repeats. No fancy logic, just make your protocol resilient against repeated packets and repeat your packets. (Your WLAN router may repeat your packets without your knowledge anyway.)
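A minimal sketch of such a discovery exchange in Java (the port number and message strings are assumptions, not part of any standard):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class Discovery {
    static final int PORT = 47000; // arbitrary discovery port

    // Broadcast a discovery request, repeated a few times for resilience.
    static void announce() throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            byte[] data = "DISCOVER?".getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(data, data.length,
                    InetAddress.getByName("255.255.255.255"), PORT);
            for (int i = 0; i < 3; i++) { // manual repeats; receivers must tolerate duplicates
                socket.send(packet);
                Thread.sleep(250);
            }
        }
    }

    // The current server answers discovery requests so newcomers learn who it is.
    static void listen() throws Exception {
        try (DatagramSocket socket = new DatagramSocket(PORT)) {
            byte[] buf = new byte[512];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                String msg = new String(packet.getData(), 0, packet.getLength(),
                        StandardCharsets.UTF_8);
                if (msg.startsWith("DISCOVER?")) {
                    byte[] reply = "SERVER-HERE".getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(reply, reply.length,
                            packet.getAddress(), packet.getPort()));
                }
            }
        }
    }
}
```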
You may want to use two TCP connections per client, potentially to two different server ports, but that does not matter much: one control connection (always very responsive; no big amounts of data, just a few hundred bytes per message) and one data connection (for the bulk of the data, your >= 5 MB chunks). This is so that everything stays responsive; see the sketch below.
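A rough sketch of that split on the client side (the host name and ports are placeholders):

```java
import java.io.DataOutputStream;
import java.net.Socket;

public class DualLink {
    public static void main(String[] args) throws Exception {
        // One responsive control connection, one bulk data connection.
        try (Socket control = new Socket("server.local", 6000);
             Socket data = new Socket("server.local", 6001)) {
            control.setTcpNoDelay(true); // small messages, send immediately
            DataOutputStream ctl = new DataOutputStream(control.getOutputStream());
            ctl.writeUTF("HELLO");       // few-hundred-byte control messages
            // Stream the >= 5 MB payloads over 'data' so 'control' stays responsive.
        }
    }
}
```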
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fails for any reason, my app needs to keep trying to reconnect until I specify otherwise. Losing any messages is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other bits of the app are high-load.
I'm looking for some sort of socket controller/manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.) I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However I'm uncertain of the best strategies for ensuring as few packets are lost as possible, and assumed this would be a "solved" problem in one library or another.
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout-guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do...)
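As a purely illustrative sketch of that idea (both ends have to implement it; the framing and names here are invented):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

class ResumableReceiver {
    private long lastSeqSeen = -1; // would be persisted in a real system

    // On every (re)connect, tell the sender the last sequence number we
    // processed, so it can replay anything sent while the link was down.
    void handle(Socket socket) throws IOException {
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        DataInputStream in = new DataInputStream(socket.getInputStream());
        out.writeLong(lastSeqSeen); // "resume after this"
        while (true) {
            long seq = in.readLong();
            String msg = in.readUTF();
            if (seq == lastSeqSeen + 1) {
                process(msg);
                lastSeqSeen = seq;
            } // else: duplicate from the replay, ignore it
        }
    }

    void process(String msg) { /* parse and update the database */ }
}
```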
I have no control over the servers I connect to. So unless there's another way to adapt JMS to a generic TCP stream I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS end point on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have potential for message loss.
The system I am developing potentially has a very large number of clients (let's say one million) that need to periodically update a central server with some information. The clients are written in Java.
The specific use-case is that the server backend needs to have an up to date mapping of IP address to clients. But the client IPs are dynamic and subject to (effectively random) change.
The solution I have in mind requires the clients to ping the server to update their IP. The period ideally should be once every minute, but even 1 ping/10 mins is acceptable.
My questions, in sequence:
1. 1M pings per minute is over 16,000/sec. So first off I want to know which approaches can scale to handle such a load. This is to know the options available.
2. Assuming you have more than one solution in mind, which of these would be the most economical? Cost effectiveness is critically important. I don't have my own data center or a static, fat endpoint on the net, so the server application will need to run on some sort of provider, or ultimately in the cloud.
Notes:
I considered running the server from home using my own ISP-provided connection, but I am not sure about the performance issues, nor what my ISP would think about a constant stream of pings.
I can't see how the server can auto-discover these IP changes.
Erik, your problem is much simpler than it has been made to sound.
This problem has been around for a decade, maybe two. There is no need to re-invent the wheel here.
Why Polling/Pinging is a Bad Idea
The dynamic IPs provided by ISPs can have a variable lease time, but it will often be at least 24-72 hours. Pinging your server every 1-10 minutes would be a horrible waste of resources, making up to 4,320 useless HTTP requests per client in a 72-hour period (at one per minute). At roughly 300 bytes per request, those 4,320 wasted HTTP requests come to about 1.3 MB of wasted bandwidth per client; multiplied by your target of 1 million clients, that is ~1.3 TB of wasted bandwidth every 72 hours, or over 13 TB a month. And that's just the wasted bandwidth, not the other bandwidth you might need to run your app and provide useful info.
The clients need to be smarter than just pinging frequently. Rather, they should check on startup whether their IP address still matches the DNS entry, and then send a notification to the server only when the IP changes. This will cut your bandwidth and server processing requirements by a factor of thousands.
What you are describing is Dynamic DNS
What you are talking about is "Dynamic DNS" (both a descriptive name for the technology and also the name of one company that provides a SaaS solution).
Dynamic DNS is quite simply a DNS server that allows you to very rapidly change the mapping between a name and an IP address. Normally this is useful for devices using an ISP which only provides dynamic IPs. Whenever the IP changes for the router/server on a dynamic IP it will inform the Dynamic DNS server of the change.
The de facto standard protocol for dynamic DNS is well documented. Start here: DNS Update API; I think the specifics you are looking for are here: DynDNS Perform Update. Most commercial implementations out there stay very close to the same protocol, because router hardware usually has a built-in DynDNS client which everyone wants to use.
Most routers (even cheap ones) already have Dynamic DNS clients built into them. (You can write your own soft client, but the router is likely the most efficient location for this, as your clients are probably being NATed with a private IP - you can still do it, but at the cost of more bandwidth for public-IP discovery.)
A quick Google search for "dynamic DNS java client" brings up full source projects like this one: Java DynDNS client (untested, just illustrating the power of search).
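For a feel of the protocol, a bare-bones update call in Java might look like this (the endpoint follows the DynDNS-style API linked above; the hostname, credentials, and IP are placeholders, so check your provider's documentation):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class DynDnsUpdate {
    public static void main(String[] args) throws Exception {
        String host = "myhost.example.com"; // placeholder hostname
        String ip = "203.0.113.7";          // placeholder new IP
        URL url = new URL("https://members.dyndns.org/nic/update?hostname="
                + host + "&myip=" + ip);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String auth = Base64.getEncoder()
                .encodeToString("user:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("User-Agent", "example-client/1.0"); // most providers require one
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```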
Other Considerations for your System Architecture
Let's say the IP-to-client mapping issue gets resolved. You figured it all out and it works perfectly; you always know the IP for each client. Would you then have a nice, reliable system for transferring files to clients from mobile devices? I would say no.
Both mobiles and home computers can have multiple connection types: WiFi, cellular data, maybe wired. Each of these networks may have different security systems in place. So a connection from a cellular-data mobile to a WiFi laptop behind a home router is going to look very different from a WiFi mobile device connecting to a laptop on the same WiFi network.
You may have physical router firewalls to contend with. Home computers may also have Windows Firewall enabled, or Norton Internet Security, Symantec, AVG, ZoneAlarm, etc. Do you know the firewall considerations for all these potential clients?
Maybe you could use SIP as the protocol for that purpose? The Java SIP libraries have probably already solved your problem.
Nice app, by the way.
I would suggest you tweak your Java program to detect the IP change itself and only then hit the web service.
You can do it like this:
On your Java program's initiation, extract the IP of the machine and store it in a global variable or, better, in a property file.
Run a batch process/scheduler which checks your IP for changes every 30 seconds to 1 minute. The Java Quartz Scheduler will come in very handy here.
Invoke the web service in case of a change of IP.
This way you reduce the server's role, and thus traffic and connections. A sketch of this loop follows.
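A minimal sketch of that loop, using the JDK's ScheduledExecutorService in place of Quartz (the check interval and the notification call are placeholders):

```java
import java.net.InetAddress;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class IpChangeWatcher {
    private final AtomicReference<String> lastIp = new AtomicReference<>("");

    public void start() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::checkIp, 0, 60, TimeUnit.SECONDS);
    }

    private void checkIp() {
        try {
            String ip = InetAddress.getLocalHost().getHostAddress();
            if (!ip.equals(lastIp.getAndSet(ip))) {
                notifyServer(ip); // hit the web service only on a change
            }
        } catch (Exception e) {
            // transient lookup failure: keep the old value, retry next tick
        }
    }

    private void notifyServer(String ip) {
        // placeholder: POST the new IP to your update web service
        System.out.println("IP changed to " + ip);
    }
}
```

Note that InetAddress.getLocalHost() returns the machine's local address; behind NAT you would need the router's public IP instead (e.g. via dynamic DNS, as discussed above).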
You could create your own protocol on top of UDP, for example XML-based. Define 3 messages:
request - the client requests a challenge from the server
challenge - the server replies with a challenge (basically a random number)
response - the client sends the username and the hashed password + challenge back to the server
It's lightweight and not too traffic-heavy. You can load-balance it across multiple servers at any layer, or with a load balancer.
Any average PC could handle a million such hits per minute, provided you do the server side in C/C++ (I don't know about Java network performance).
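The hashing at the core of that exchange could look like this in Java (a sketch only; the choice of SHA-256 and hex encoding are assumptions, and the UDP/XML transport is omitted):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class ChallengeAuth {
    // Server side: issue a random challenge for each "request" message.
    static String newChallenge() {
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        return hex(nonce);
    }

    // Client side: response = hash(password + challenge), sent with the username.
    static String response(String password, String challenge) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        return hex(sha.digest((password + challenge)
                .getBytes(StandardCharsets.UTF_8)));
    }

    // Server side: recompute the expected response and compare.
    static boolean verify(String password, String challenge, String resp)
            throws Exception {
        return response(password, challenge).equals(resp);
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```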
Please have a look at how No-IP works. Your requirement is exactly the same as what it does.
Do I have the use case right? A community of users all want to receive pictures from each other? You don't want to host the images on the server but broadcast them directly to all the users?
There are two questions here. The first question is "how to know if my own WAN IP address has changed."
If you are not NATed then:
InetAddress.getLocalHost()
will tell you your IP address.
If you are NATed, then using dynamic DNS and resolving your own host name will work.
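Both cases in a small sketch ("myhost.example.com" is a placeholder for your own dynamic DNS name):

```java
import java.net.InetAddress;

public class WanIpCheck {
    public static void main(String[] args) throws Exception {
        // Not NATed: a local interface address is also the public address.
        System.out.println(InetAddress.getLocalHost().getHostAddress());

        // NATed: resolve your own dynamic DNS name to see the WAN address.
        System.out.println(
                InetAddress.getByName("myhost.example.com").getHostAddress());
    }
}
```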
The second question is something like "How to share pictures between hosts which come and go on the internet".
The possible solution space includes:
IP Multicast, probably with Forward Error Correction and Carouseling, e.g. FLUTE.
File swarming - e.g. BitTorrent.
A Publish/Subscribe message bus solution using Jabber, AMQP, JMS, STOMP or similar. Suitable implementations include RabbitMQ, ActiveMQ, etc. JMS Topics are a key concept here.
The solution should avoid the massive overheads of doing things at the IP level.
The point of my question is to ask if it is accepted to use both TCP and UDP to communicate between client and server.
I am making a real-time client-server game with parts of the communication that need to be guaranteed (logging in, etc.), but other parts where it is OK to lose packets (state updates, etc.). So I would like to use UDP for most of the data communication, but I do not want to have to implement my own framework to ensure that my control communication (logging in) is guaranteed.
So, would it be reasonable to initially use TCP to manage a connection, and then send data communication back and forth on a separate port?
You should absolutely do it that way (use TCP and UDP to accomplish different communication tasks.) And you don't even have to use two different ports. One will suffice. You can listen to the two different protocols on the same port.
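For instance (a minimal sketch; port 5000 is arbitrary), TCP and UDP have separate port namespaces, so both sockets can bind to the same number:

```java
import java.net.DatagramSocket;
import java.net.ServerSocket;

public class DualProtocol {
    public static void main(String[] args) throws Exception {
        // TCP and UDP port numbers are independent, so this does not clash.
        ServerSocket tcp = new ServerSocket(5000);     // control: login, etc.
        DatagramSocket udp = new DatagramSocket(5000); // state updates
        System.out.println("Listening on TCP and UDP port 5000");
        // accept()/receive() loops would run here, typically on separate threads
        tcp.close();
        udp.close();
    }
}
```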
It is quite reasonable and already used in mainstream. Even when browsing the Web, DNS operations are UDP-based and HTTP connections are TCP-based.
Keep in mind that you should either consider the two connection types to be completely independent or employ additional measures to properly handle any inter-dependencies. TCP connections can have timing issues at the OS and network levels and UDP connections have packet loss issues. You should take specific measures to avoid deadlocks and performance problems when the TCP part of your application stalls or a UDP packet is lost.
It is not only accepted but widely used. As a good example, BATS Exchange uses this approach in their market data distribution system to implement a recovery mechanism.
I have a serial hardware device that I'd like to share with multiple applications, that may reside on different machines within or spanning multiple networks. A key requirement is that the system must support bi-directional communication, such that clients/serial device can exist behind firewalls and/or on different networks and still talk to each other (send and receive) through a central server. Another requirement of the system is that the clients must be able to determine if the gateway/serial device is offline/online.
This serial device is capable of receiving and sending packets to a wireless network. The software that communicates with the serial device is written in Java, and I'd like to keep it a 100% Java solution, if possible.
I am currently looking at XMPP, using Jive Software's Openfire server and the Smack API. With this solution, packets coming off the serial device are delivered to clients via XMPP. Similarly, any client application may send packets to the serial device via the Smack API. Packets are just byte arrays, limited in size to around 100 bytes, so they can be converted to hex strings and sent as text in the body of a message. The system should be tolerant of the clients/serial device going offline, meaning they will automatically reconnect when they are available again, but packets will be discarded if the client is offline. The packets must be sent and received in near real time, so offline delivery is not desired. Security should be provided by the messaging system and its client API.
I'd like to hear of other possible solutions. I thought of using JMS but it seems a bit too heavyweight and I'm not sure it will support the requirement of knowing if clients and/or the gateway/serial device is offline.
Jini might fit the job. It works really well in distributed environments where multicast is available, but it also works with unicast, and it is quite fast. Not only does it provide remote services, but also remote events, and distributed transactions if you need them. A downside is that it only works with Java.
Where I work, Jini is used in an infrastructure with more than 1000 machines, with each machine providing remote services used to access the devices connected to the machine's serial ports.
You really need to provide a bit more detail... do the clients need guaranteed delivery? What about offline delivery? Is this part of a larger system? Do you need encryption? Security?
If you want the smallest footprint possible, then you should transmit data using ServerSocket, Socket, and serialization. But then you lose all of the advantages of the 3rd-party solutions you mentioned, which typically include reliability, delivery guarantees, security, management, etc.
I would personally use JMS, but that's because I'm familiar with it. There are a number of stand-alone servers that can be deployed out-of-the-box with virtually no configuration. They all provide for guaranteed delivery, some security, encryption, and a number of other easy-to-use features. Coding a JMS publisher or subscriber is pretty easy.
Update:
If you want the most ease in coding, then I would look at the third-party solutions. Looking at Smack/XMPP, the API seems to be a little easier than JMS for the functionality you asked for. You still have to set up and configure a server, etc.
The Smack API also has a lot of extra baggage that you don't need, but its concepts are a little more intuitive since they are all chat/IM concepts.
I would still look at OpenJMS or ActiveMQ. I think knowing JMS will be more valuable in the future than knowing XMPP. Take a look at their Getting Started documentation or the Sun tutorial to see how much coding is involved. In JMS parlance, you will want an administered "Topic" and a "Queue": the serial-port app will send messages to the Topic and receive them from the Queue. All of your clients will open a subscription to the Topic and send their outbound messages to the Queue. When you send messages, their delivery mode should be non-persistent.
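To give a feel for how much coding is involved, here is a minimal sketch of the serial-port side against the standard JMS 1.1 API (the JNDI names "serial.inbound" and "serial.outbound" are assumptions; they depend on how you configure the broker):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class SerialBridge {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext(); // assumes configured JNDI
        ConnectionFactory factory =
                (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Connection conn = factory.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Packets from the serial device go to a Topic all clients subscribe to...
        Topic inbound = (Topic) jndi.lookup("serial.inbound");
        MessageProducer producer = session.createProducer(inbound);
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // near real time, no offline delivery

        // ...and outbound packets from clients are consumed from a Queue.
        Queue outbound = (Queue) jndi.lookup("serial.outbound");
        MessageConsumer consumer = session.createConsumer(outbound);

        conn.start();
        producer.send(session.createTextMessage("7e0004")); // hex-encoded packet
        conn.close();
    }
}
```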
I ended up using XMPP via the Smack API. What led me to this decision was its native support for presence (is the client online or offline?) and robust connection handling (it automatically reconnects if the underlying connection breaks). Another benefit of XMPP is that it's compatible with Google Talk, so I don't need to set up a server. Thanks for the suggestions. In case anyone is interested, I have released the code on Google Code: http://code.google.com/p/xbee-xmpp/
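For reference, sending a packet as hex text through Smack might look roughly like this (a sketch against the Smack 3-era API; class and method names differ in later Smack versions, and the JIDs, credentials, and payload are placeholders):

```java
import org.jivesoftware.smack.Chat;
import org.jivesoftware.smack.XMPPConnection;

public class PacketSender {
    // Encode a raw serial packet (~100 bytes max) as a hex string.
    static String toHex(byte[] packet) {
        StringBuilder sb = new StringBuilder(packet.length * 2);
        for (byte b : packet) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        XMPPConnection conn = new XMPPConnection("gmail.com"); // Google Talk
        conn.connect();
        conn.login("gateway@example.com", "password");          // placeholder credentials
        Chat chat = conn.getChatManager().createChat(
                "client@example.com",                            // placeholder peer JID
                (c, m) -> { /* handle replies here */ });
        chat.sendMessage(toHex(new byte[]{0x7e, 0x00, 0x04}));   // example payload
        conn.disconnect();
    }
}
```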