UDP or Unix Socket for an efficient non-blocking server? [closed] - java

I am writing a server (Java-based) that will only be called by programs on the same host.
So, in terms of performance and reliability, should I use UDP or just a Unix socket?

UDP is not reliable. I'm picking nits, but there is no guarantee of either delivery order or delivery at all with UDP. Between two processes on the same host, this is probably never going to manifest as a problem, but it may be possible under extreme load scenarios.
There's also a lot more overhead in a UDP packet than there is in a Unix socket. Again, this is unlikely to be a practical problem except under the most extreme load, and you'd have a lot of other load-related problems before that was a concern, because the overhead for both is nominal in modern computing terms.
If you're really worried about performance and reliability, stick with Unix sockets.
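For what it's worth, newer JDKs (16+) can serve Unix domain sockets natively through NIO; at the time of this question you would have needed a JNI library such as junixsocket. A minimal blocking echo sketch (the socket path is a placeholder):

    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class UnixSocketEchoServer {
        public static void main(String[] args) throws Exception {
            Path path = Path.of("/tmp/myserver.sock");   // placeholder path
            Files.deleteIfExists(path);                  // clear a stale socket from a crash

            try (ServerSocketChannel server =
                         ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
                server.bind(UnixDomainSocketAddress.of(path));
                ByteBuffer buf = ByteBuffer.allocate(1024);
                while (true) {
                    try (SocketChannel client = server.accept()) {   // blocks
                        while (client.read(buf) != -1) {
                            buf.flip();
                            while (buf.hasRemaining()) {
                                client.write(buf);   // echo the bytes back
                            }
                            buf.clear();
                        }
                    }
                }
            }
        }
    }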
If you have any plans to distribute and load-balance it in the future, UDP will give you more flexibility, since it already works across multiple hosts.
Having said all that, none of this is a practical concern these days. Most services use TCP for even local communication, and then layer other services like ZeroMQ on top of that. You almost definitely should not be worrying about that level of performance. Use software that makes your code easier to write and maintain, and scale up the system in the unlikely event that you need to. It's easier and cheaper to throw new servers at problems than it is to spend man-hours re-engineering software that wasn't written to be flexible.
Also note that ZeroMQ (and other message-queueing systems) will pick the most efficient transport available. For example, ZeroMQ offers an ipc:// transport for local peers (implemented over Unix domain sockets on POSIX systems), and the same code will also scale up to thousands of hosts worldwide over the Internet if you need that, with essentially no changes beyond the connection string.
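As a sketch of that flexibility, assuming the JeroMQ binding (the endpoint strings are placeholders): moving this service from local-only to networked is just a different endpoint string.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class LocalReplyService {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket rep = ctx.createSocket(SocketType.REP);
                // Local transport today; swap in "tcp://*:5555" later
                // without touching the send/receive logic at all.
                rep.bind("ipc:///tmp/myservice");
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] request = rep.recv();   // blocks until a request arrives
                    rep.send("ack: " + new String(request, ZMQ.CHARSET));
                }
            }
        }
    }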
Never prematurely optimize.

A Unix socket will certainly spare you the encapsulation/decapsulation overhead of going through the TCP/IP stack. But how perceptible will that gain be? I think it depends on your performance and reliability requirements and on the load you expect this server to handle.

Related

Evented, Threaded, and Go Routines, why not used more? [closed]

The Evented and Threaded models are quite popular and are usually discussed a lot.
The 'Threaded' approach, where every I/O operation can block, is simpler. It's easier to write and debug synchronous code, and the fact of the matter is that most ecosystems provide blocking I/O libraries: just use a thread pool, configure it properly, and you're good to go.
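For illustration, a minimal sketch of that threaded model in Java (port and pool size are arbitrary): one blocking handler per connection, multiplexed over a fixed pool.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadedEchoServer {
        public static void main(String[] args) throws IOException {
            // One OS thread per in-flight connection, capped at 200.
            ExecutorService pool = Executors.newFixedThreadPool(200);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();   // blocks
                    pool.execute(() -> handle(client));
                }
            }
        }

        private static void handle(Socket client) {
            try (client) {
                // Plain blocking I/O: read a byte, echo it back.
                int b;
                while ((b = client.getInputStream().read()) != -1) {
                    client.getOutputStream().write(b);
                }
            } catch (IOException ignored) {
            }
        }
    }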
But.. it doesn't scale.
And then there's the 'Evented' approach, where there is one thread (or one per CPU) that never blocks and performs only CPU instructions. When I/O completes, it activates the appropriate computation, allowing better utilization of the CPU.
But.. it's harder to code, easier to create unreadable spaghetti code, and there aren't enough libraries for async I/O. And non-blocking and blocking I/O don't mix well: it's very problematic in ecosystems that weren't designed from the ground up to be non-blocking. In NodeJS all I/O has been non-blocking from the beginning (because JavaScript never had an I/O library to begin with). Good luck trying to implement the same in C++/Java. You can try your best, but it would take just one synchronous call to kill your performance.
And then came Go. I started looking into Go recently because I found its concurrency model interesting. Go gives you the ability to "get the best of both worlds": all I/O is blocking, you write synchronous code, but you enjoy full utilization of the CPU.
Go has an abstraction over threads called 'goroutines', which are basically user-level threads. The Go runtime (which gets compiled in with your program) is in charge of scheduling the different goroutines onto real OS threads (say, one per CPU). Whenever a goroutine performs a blocking system call, the runtime schedules another goroutine to run on one of the OS threads; it 'multiplexes' the goroutines onto the OS threads.
User-level threads aren't a new concept, but Go's approach is nice and simple, so I started wondering: why doesn't the JVM world use a similar abstraction? It's child's play compared to what usually happens under the hood there.
Then I found out it did: Sun's 1.2 JVM had 'green threads', which were user-level threads, but they were multiplexed onto only a single OS thread; Sun later moved to real OS threads to allow utilizing multi-core CPUs.
Why wasn't this relevant in the JVM world after 1.2? Am I failing to see the downsides of the Go approach? Or is there some concept that applies to Go but would not be implementable on the JVM?

Web App. Vs. Desktop App (Java Swing App) [closed]

I want to develop an application where the server pushes a lot of data to the client (20 kb every 20 milliseconds, which works out to roughly 1 Mbps). All of the data are double/float values.
I am trying to answer the question of whether there is something inherent to desktop apps (a Java Swing app) that makes them a better option for this use case than a web app where data is pushed over HTTP.
Is there something about a Java Swing app, and how data transfer takes place there from server to client, that makes it faster compared to a web app (Tomcat as the app server, JS on the client side)?
And how does the answer vary if the web server and application are on the same local network?
My vote is desktop, but I'm biased (when the only tool you have is a hammer...)
My first thought is threads and custom networking. You get the added benefit of push and pull protocols as you need them (yes, you can get this in a web environment too, but Java was designed for this; AJAX has been bent to fit this need).
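For instance, a minimal sketch of the desktop approach (host, port, and class name are invented): a background thread reads a pushed stream of doubles over a plain TCP socket and hands each value to the Swing event thread.

    import java.io.DataInputStream;
    import java.net.Socket;
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;

    public class MeterFeed implements Runnable {
        private final JLabel display;

        public MeterFeed(JLabel display) {
            this.display = display;
        }

        @Override
        public void run() {
            // Hypothetical host/port; the server pushes raw doubles continuously.
            try (Socket socket = new Socket("data-server.local", 9000);
                 DataInputStream in = new DataInputStream(socket.getInputStream())) {
                while (!Thread.currentThread().isInterrupted()) {
                    double value = in.readDouble();          // blocks until pushed
                    // Swing components must only be touched on the EDT.
                    SwingUtilities.invokeLater(
                            () -> display.setText(String.format("%.3f", value)));
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

You'd start it with something like new Thread(new MeterFeed(label)).start(), one feed per displayed meter.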
I'd also push a diverse and customisable UI toolkit; one might argue that you can achieve this using HTML, but I've personally found the Swing toolkit faster to get running and easier to maintain, IMHO.
The downside would have to be the need to install the app on each client machine and deal with updating it.
That's my general opinion anyway; hope it helps.
The other question is, what does the app need to do?
It is highly unlikely that the UI will be displaying 1000 meters all at once. The users will most likely be looking at a small number of meters at a time, and the UI only needs to be updated for the meters that are displayed on the screen. This should cut down on the load considerably. Assuming that the networking and cache/database components will be about the same for both the web and the desktop app, the real differentiator then becomes how fast the charts/graphs can be rendered, and how often or by how many people it will be used.
MadProgrammer's suggestion of prototyping makes sense. The test data gained from the prototypes would answer the performance question.
Web-based will be more useful/valuable because it can be used from any desktop, tablet, or smartphone. I am assuming that it is desirable to get the data in front of as many users as possible, anytime and anywhere. Also, I don't think the human eye can detect 20 ms updates; you could probably make the interval longer and users would not even notice. Movies run at about 25 frames per second, i.e., 40 ms/frame.
How many concurrent users are you anticipating? I don't think that should affect the solution, as both can be made scalable.

java vs php benchmark [closed]

I'm a PHP developer, but I recently had to write the same application twice, once in PHP and once in Java, for a class I'm taking at school. Out of curiosity I benchmarked the two and found that the Java version was 2 to 20 times slower than the PHP version when the database is accessed, and 1 to 10 times slower without DB access. I see two immediate possibilities:
I suck at java.
I can finally tell people to quit whining about php.
I posted my servlet code here. I don't want any nit-picky whining or minor improvements, but can someone see a horrible glaring performance issue in there? Or can anybody explain why Java feels like it has to suck?
I've always heard people say that Java is faster and more scalable than PHP (my teacher in particular is convinced of it), but the more requests are made, the slower Java gets. PHP doesn't seem to be affected by increased load; it remains constant.
In a mature Java web application, the servlet would make use of an existing JDBC connection pool. Establishing a new connection for every request is by far the greatest cost you pay in time.
Calling Class.forName for every attempt to get a connection will also cause an unnecessary slowdown.
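A minimal sketch of the pooled pattern, assuming a container-managed pool published in JNDI (the resource name jdbc/mydb is invented): the lookup happens once at class load, and each request just borrows a connection.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class PooledConnections {
        private static final DataSource POOL = lookupPool();

        private static DataSource lookupPool() {
            try {
                // Resolved once at class load, not per request.
                return (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/mydb");
            } catch (NamingException e) {
                throw new IllegalStateException("Pool not configured", e);
            }
        }

        public static Connection borrow() throws Exception {
            // Returns a pooled connection; close() just hands it back to the pool.
            return POOL.getConnection();
        }
    }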
JVM tuning could also be a factor. In an enterprise environment the JVM memory and possibly GC configurations would be adjusted and tuned to achieve a desirable balance between responsiveness and resource utilization.
As Stephen C points out, the JVM also has a concept of a sort of "warm up".
All that said, I have no idea how PHP compares to Java, and I feel both languages offer great solutions to distinct but overlapping sets of needs.
Based on not much info (where the best decisions are made), my guess is that the Class.forName("com.mysql.jdbc.Driver") call in getConnection() is the big time sink.
Creating a new String in importFile when the char[] can be passed to out.println is me nitpicking.
Your test seems to reflect initial overhead more than steady-state performance. Try doing the non-DB tests multiple times in a loop (so that each test would run the code multiple times) and look at the linear relationship between runtime and number of iterations. I suspect the incremental cost for Java is lower than that for PHP.
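A sketch of what that looks like in practice (the workload is a stand-in for your real code): warm the JIT first, then time several rounds and check that the ns/call figure levels off.

    public class SteadyState {
        public static void main(String[] args) {
            // Warm-up: give the JIT a chance to compile the hot path.
            for (int i = 0; i < 10_000; i++) work();

            // Timed rounds: the cost per call should now be roughly constant.
            for (int round = 0; round < 5; round++) {
                long start = System.nanoTime();
                for (int i = 0; i < 100_000; i++) work();
                long perCall = (System.nanoTime() - start) / 100_000;
                System.out.println("round " + round + ": " + perCall + " ns/call");
            }
        }

        private static double acc = 0;

        private static void work() {
            acc += Math.sqrt(acc + 1);   // stand-in for the real workload
        }
    }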

Java Memcached Client [closed]

Which is the best Java memcached client, and why?
As the author of spymemcached, I'm a bit biased, but I'd say it's mine for the following reasons:
Designed from scratch to be non-blocking everywhere possible.
When you ask for data, issue a set, etc., there's one tiny concurrent-queue insertion and you get back a Future to block on for the result (with some convenience methods for common cases like get).
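A minimal usage sketch against a local memcached (host and port are placeholders); both operations hand back Futures immediately:

    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;
    import net.spy.memcached.MemcachedClient;

    public class SpyExample {
        public static void main(String[] args) throws Exception {
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));

            // set() returns a Future immediately; the write happens asynchronously.
            client.set("greeting", 3600, "hello");

            // asyncGet() likewise queues the request and returns a Future to block on.
            Object value = client.asyncGet("greeting").get(5, TimeUnit.SECONDS);
            System.out.println(value);

            client.shutdown();
        }
    }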
Optimized Aggressively
You can read more on my optimizations page, but I do whole-application optimization.
I still do pretty well in micro-benchmarks, but to compare fairly against the other clients, you have to contrive unrealistic usage patterns (for example, waiting for the response on every set operation, or building locks around gets to keep them from doing packet optimization).
Tested Obsessively
I maintain a pretty rigorous test suite with coverage reports on every release.
Bugs still slip in, but they're usually pretty minor, and the client just keeps getting better. :)
Well Documented
The examples page provides a quick introduction, but the javadoc goes into tremendous detail.
Provides High-level Abstractions
I've got a Map interface to the cache as well as a functional CAS abstraction. Both the binary and text protocols support an incr-with-default mechanism (provided natively by the binary protocol, but rather tricky to do over text).
Keeps up with the Specs
I do a lot of work on the server itself, so I keep up with protocol changes.
I did the first binary-protocol server implementations (both a test server and in memcached itself), and this was the first production-ready client to support it, and it does so first-class.
I've also got support for several hash algorithms and node-distribution algorithms, all of which are well tested for every build. You can do a stock ketama consistent hash, or a derivative using FNV-1 (or even Java's native string hashing) if you want better performance.
I believe memcached java client is the best client.
Features
Binary protocol support: the fastest way to access the keys/values stored in the memcached server (see the sketch after this list).
UDP protocol support: you can set a key over TCP and get it over UDP. Some big corporations actually do exactly this.
Support for customized serialization and deserialization.
Connection pool built on NIO and direct buffers, which grows dynamically when the pool runs out of connections.
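A minimal sketch, assuming the xmemcached API (host and port are placeholders), of switching on the binary protocol:

    import net.rubyeye.xmemcached.MemcachedClient;
    import net.rubyeye.xmemcached.MemcachedClientBuilder;
    import net.rubyeye.xmemcached.XMemcachedClientBuilder;
    import net.rubyeye.xmemcached.command.BinaryCommandFactory;
    import net.rubyeye.xmemcached.utils.AddrUtil;

    public class XMemcachedExample {
        public static void main(String[] args) throws Exception {
            MemcachedClientBuilder builder =
                    new XMemcachedClientBuilder(AddrUtil.getAddresses("localhost:11211"));
            // Switch from the default text protocol to the binary protocol.
            builder.setCommandFactory(new BinaryCommandFactory());

            MemcachedClient client = builder.build();
            client.set("greeting", 3600, "hello");     // blocks until acknowledged
            System.out.println(client.get("greeting"));
            client.shutdown();
        }
    }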
Performance
Refer to the performance page for a benchmark of the existing popular memcached Java clients.
Deserializes while receiving the response.
Performance tuning applied to every line of the source code.
If these numbers are still valid, then ... http://xmemcached.googlecode.com/svn/trunk/benchmark/benchmark.html
As of about a year ago, when I had to use a memcached Java client, the spymemcached connector was described as an optimized API with more features. Since then there have been a number of new releases of the memcached client, so it may be worth checking out.
FWIW the spy client has worked perfectly for me.
I have been using SpyMemcached and have to agree that it is the best one available out there, with lots of newer improvements.
There is the memcached client for Java and spymemcached. Not much experience with either though.
Please try xmemcached; it is also NIO-based and has some powerful features.

Server architecture for a multiplayer game? [closed]

I'm planning to build a small multiplayer game that could run as a Java applet or a Flash file in the web browser. I haven't done any server programming before, so I'm wondering what sort of server architecture I should have.
It would be easy for me to create Perl/PHP files on the server, which the Java/Flash code contacts to update the player position, actions, etc. But I'm also considering whether I should get a dedicated web host, which OS to use, which database, and so on. The amount of bandwidth used and scalability are considerations as well.
Another option could be a cloud hosting system (as opposed to a dedicated server), so the provider would take care of adding machines as the game grows. As long as each server ran the core Perl/PHP files for updating the database, it should work fine.
Yet another option could be using Google app engine.
Any thoughts regarding the server architecture, the OS/database choice, and whether my method of using Perl/PHP/Python scripts for server-side programming is a good one will be appreciated!
You need to clarify more about the game, and think more about architecture rather than specific implementation details.
The main question is whether your game is going to be in real time, turn based, or long-delay based (e.g., email chess). Another question is whether or not you are going to be freezing the state for subsequent reloads.
I would highly recommend figuring out in advance whether or not all players in the same game are going to be hosted on the same server (e.g., 1000 matches of 4 players each, compared to 4 matches of 1000 players each). If possible, go with the first and put everyone who is in the same game on the same server. You will have a hard enough time synchronizing multiple clients to one server without also synchronizing players across multiple servers; otherwise, even defining consistency becomes problematic.
If possible, have each client communicate with the server, and have the server distribute updates to the clients. This way you have one "official" state, and you can do a variety of conflict resolutions, phantoms, etc. Peer-to-peer gives better performance in faster games (e.g., FPSs) but introduces tons of problems.
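A minimal sketch of that star topology (the port and the one-double-per-update protocol are invented for illustration): the server accepts clients, keeps their output streams, and fans every received update back out to everyone.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class StarServer {
        // One authoritative list of connections; additions/removals are rare.
        private static final List<DataOutputStream> clients = new CopyOnWriteArrayList<>();

        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(7777)) {
                while (true) {
                    Socket socket = server.accept();
                    DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                    clients.add(out);
                    new Thread(() -> pump(socket, out)).start();
                }
            }
        }

        private static void pump(Socket socket, DataOutputStream own) {
            try (DataInputStream in = new DataInputStream(socket.getInputStream())) {
                while (true) {
                    // Toy protocol: each update is a single player-position double.
                    double update = in.readDouble();
                    for (DataOutputStream out : clients) {
                        out.writeDouble(update);   // fan out the official state
                    }
                }
            } catch (IOException e) {
                clients.remove(own);               // drop disconnected clients
            }
        }
    }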
I cannot for the life of me see any convincing reason to do this in Perl or PHP. Your game is not web-based, so why write it in a web-oriented language? Use good old J2EE for the server and exchange data with your clients via XML and AJAX. If possible, run a real Java application on the clients rather than servlets. You can then benefit from JMS, which will take a huge load off your back by abstracting away a lot of the communication details for you.
For your server architecture, you might have a look at Three Rings' code. They have written a number of very scalable games in Java (both client- and server-side).
I would also discourage using PHP, and HTTP isn't the best idea either, as it is stateless and chatty. I worked for some time at a company developing a really massive multiplayer game. The back-end was a plain JVM (connected through Tomcat by multiple clients, one connection per client from mobiles). From that I know that the less data you transfer, the smaller the buffers you need on the server, which means more clients on one machine and slightly faster responses. Also consider security: HTTPS is quite expensive, especially if you need to transfer graphics and sounds. A binary protocol of your own, with a non-browser client container, would do best (a switchable protocol is a good choice for the development/debugging phase). It may sound complicated, but it isn't.
@Sarah nice hint, thanks too ;)
