This question already has answers here:
What's the easiest way to persist java objects?
(6 answers)
Closed 2 years ago.
I have a program that creates objects (recipes) whose data should be stored on my computer, in a way that lets me keep a couple thousand of them without eating up disk space. When looking at serialization I don't know which approach to take: I don't want to risk losing data in the future, which I've heard can be a problem, and I'd also like the storage to stay compact. Any suggestions help. Thanks
Serialization protocols tend to be either error prone or quite complicated, with you re-inventing either a database engine or a journalled file system; at least, if you want them not to cause permanent corruption when your app crashes or you trip over a power cable at the wrong time.
So why not just bite the bullet and use something like H2 (a database engine) together with something like JDBI (a library that makes it easy to talk to that database engine)?
No option for writing to disk carries any particular storage risk.
Once a write has completed, the file system (or the database) is responsible for keeping the data.
For greater data safety, you can keep duplicate copies.
And at the system level you can set up RAID-1 (or another RAID level), but you most likely don't need that for this task.
This question already has answers here:
Calling Python in Java?
(12 answers)
Closed 3 years ago.
I am working on a food application. It is an Android-based application. The scenario is that there is a text box in the application for users to enter comments, and I now want to apply NLP (semantic analysis) to these comments.
Please guide me on how I can pass the comments from Java to Python so that I can apply NLP to them.
There are two approaches that come to mind depending on the architecture that makes the most sense for you. They both have their pros and cons depending on your requirements so use your best judgement.
One approach (which it sounds like you're already considering) is starting a Python runtime from within Java. As @Leo Leontev mentioned, this approach has an answer you can find here. The pro of this approach is that you don't need any extra infrastructure. The cons are that you'll need to package a (potentially large) model with your app, running two runtimes at once is probably not great for performance or battery life, and your start-up time could take a hit while the model loads.
Another approach would be creating a separate Python web server that your app can make requests to as necessary. This could be a simple REST API with whatever endpoints you need. If you're making and hosting your own model, this can speed up your app since you can persist the model in memory rather than loading it every time a user starts your app. One pro to this approach is that it's extensible (you can always build more endpoints into your API including non-ML ones). If your model is non-generic and you want to protect it from being copied, this also has added security benefits since users won't have access to the model itself.
For most use-cases, I'd recommend the second approach.
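As a sketch of that second approach, the snippet below shows the request/response contract from the Java side. The endpoint name (`/analyze`), the JSON shape, and the keyword-based labelling are all invented for illustration; in a real setup the server would be your Python service running the NLP model, and the tiny in-process stub here only stands in for it so the example is self-contained and runnable.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SentimentClientDemo {

    // Stand-in for the Python web server; in reality this would be a
    // Flask/FastAPI app exposing the same POST /analyze endpoint.
    static HttpServer startStubServer() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/analyze", exchange -> {
            String comment = new String(exchange.getRequestBody().readAllBytes(),
                    StandardCharsets.UTF_8);
            // The real service would run an NLP model; the stub fakes a label.
            String label = comment.toLowerCase().contains("love") ? "positive" : "neutral";
            byte[] reply = ("{\"sentiment\":\"" + label + "\"}").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(reply);
            }
        });
        server.start();
        return server;
    }

    // What the Android app would do: POST the comment, read back the analysis.
    static String analyze(String baseUrl, String comment) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/analyze"))
                .POST(HttpRequest.BodyPublishers.ofString(comment))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startStubServer();
        String base = "http://localhost:" + server.getAddress().getPort();
        System.out.println(analyze(base, "I love this recipe!"));
        server.stop(0);
    }
}
```

Because the contract is just HTTP plus a small JSON body, the Python side can be swapped in later without touching the Java client.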
I'm planning to write a simple MMORPG in Java that will be easy to extend in the future. I know more or less how it should look, but I have some questions:
Which kind of data should the client have? I know that, for example, the server informs the client whether "that field" is free or not, but what about loading the map? The client contains sprites etc., but should it also have the map files, or should the server tell the client where the grass is and where the water is, etc.?
How should the server store data? Should players be represented as files in one folder, where the server has to find the right file, open it, read the data and send it back, for many players at once? Would a database server + database + SQL be a better idea?
Any ideas/knowledge about MMORPG structure?
Is Java a good choice for 2D MMORPG?
MMOs are not easy programs to develop. It sometimes takes experienced teams years to build one, and the questions you ask here don't seem to indicate that you are a very experienced programmer yet. Having said that, I would suggest taking a look here:
http://slick.ninjacave.com/
http://www.13thmonkey.org/~boris/jgame/
These resources might be good starting points and get you up to speed quickly, but I'd also suggest looking for a good tutorial on how to sync client/server data, and getting a bit more up to speed on programming in Java in general.
I developed browser games in the past. Usually it is a good idea to put "static" data (data that doesn't change very often, like map layouts) into the client, so that you don't need to resend it every time.
I would definitely prefer a database (SQL or NoSQL) to file-based storage. If you want to improve your coding skills and make them more marketable, then definitely go for a database.
If you really want to release a simple game, then I would go for HTML5 as the frontend. The graphics capabilities of Java are pretty limited, and almost nobody will download your game just to try it.
This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
What is object serialization?
I know the details of how the JVM serializes object graphs. What I am more interested in is the purpose of serialization: what need drove the specification of serialization, and what practical uses does it have these days? Is it used as a means to store data, or to send data across the network?
I would appreciate any relevant links if a full answer is not possible.
Thanks in advance.
Very simple. Suppose you have an object graph and want to store it in a file and then read it back. The luck of the Java programmer is that he or she does not have to implement the gory details of writing and reading the data field by field: if the whole graph consists of serializable objects, Java does this work for you.
The same applies when two applications exchange data.
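A minimal round trip looks like this; `Recipe` is a made-up example class, and a single `writeObject`/`readObject` pair handles the entire graph, including the nested ingredient list:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.List;

public class SerializationDemo {

    // Example class: implementing Serializable is the only requirement.
    static class Recipe implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final List<String> ingredients;
        Recipe(String name, List<String> ingredients) {
            this.name = name;
            this.ingredients = ingredients;
        }
    }

    static byte[] toBytes(Object obj) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);   // serializes the whole graph in one call
        }
        return bytes.toByteArray();
    }

    static Object fromBytes(byte[] data) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject(); // rebuilds the whole graph in one call
        }
    }

    public static void main(String[] args) throws Exception {
        Recipe original = new Recipe("Pancakes", List.of("flour", "milk", "eggs"));
        Recipe copy = (Recipe) fromBytes(toBytes(original));
        System.out.println(copy.name + " " + copy.ingredients);
    }
}
```

The same byte array could just as well be written to a file or sent over a socket, which is exactly the two uses the question asks about.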
Serialization is mainly used to send objects across the network. It's used in RMI, Spring's HttpInvoker, and other binary remote method invocation systems.
Using it for durable persistent storage is questionable: serialized data is impossible to query, is binary and thus hard to inspect with a simple text editor, and is hard to maintain when the classes change and their serialization format changes with them. So a more open format is often chosen instead (XML, JSON, etc.).
Yes and yes! You can use it to send objects across the network, cache them, save them to disk, whatever you like. It is used for things like session replication between clustered JVM instances. Much of the time it is used under the covers by libraries that you use as well.
I was thinking of building an app to serve audio content.
The first question I get is how to store it. Two obvious solutions that occur are:
Dump in database as BLOB
Dump in filesystem, and store path in DB
There was a similar question here, and the answer urged storing in the file system. I can think of at least one disadvantage of storing in files: I lose all the backup, recovery and other awesome features of databases.
Also I wanted to know how both solutions would fare in terms of scalability.
Does anyone know how flickr or youtube does it?
Or does anyone has even more creative(scalable :)) ideas?
Your file system should have backup and recovery procedures setup if this data is important. (The rest of the application is backed up right?). So you shouldn't use a database just for the backup and restore capability.
Storing the files outside of the database allows you to separate your database and file servers which will be a plus on the scalability side.
I would definitely go for the filesystem. Storing and delivering (large) files is exactly what it was made for.
Storing files in a file system also allows you to use Content Delivery Networks; outsourcing the storage may bring several benefits.
This is a classic question, and a classic argument, with good points on both sides. Scalability can be achieved with either solution. Distributed databases are usually easier to manage than distributed filesystems once you grow to the size where all your media no longer fits on a single server (though even that is open to debate); think MongoDB or other scalable NoSQL databases.
It boils down to what features you need. It is very hard to implement transactionality on a filesystem, so if it is a concern to you, you should use a database.
Backup and recovery of a filesystem is much easier to implement than a proper, consistent backup of a database. Also, if you lose a file on disk, it's just one file; if you lose part of a huge table, you lose all the files contained or referenced in that table (as the table becomes unreadable).
Of course, for small databases where you can shut down the DBMS and quickly copy all the DB files, none of the above applies, but that scenario is almost the same as keeping the data files on disk.
I think both ways are viable, but the backup issue is definitely there. Both solutions are scalable given the right design, though big files are probably better off in the file system.
I have a number of rather large binary files (fixed length records, the layout of which is described in another –textual– file). Data files can get as big as 6 GB. Layout files (cobol copybooks) are small in size, usually less than 5 KB.
All data files are concentrated in a GNU/Linux server (although they were generated in a mainframe).
I need to provide the testers with the means to edit those binary files. There is a free product called RecordEdit (http://record-editor.sourceforge.net/), but it has two severe drawbacks:
It forces the testers to download the huge files through SFTP, only to upload them once again every time a slight change has been made. Very inefficient.
It loads the entire file into working memory, rendering it useless for all but the relatively small data files.
What I have in mind is a client/server architecture based in Java:
The server would run a permanent process, listening for editing requests coming from the client. Such requests would include things like:
return the list of available files
lock a certain file for editing
modify this data in that record
return the n-th page of records
and so on…
The client could take any form (RCP-based on a desktop, which is my first candidate; ncurses on the same server; a web application in the middle…) as long as it is able to send requests to the server.
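The request handling described above could start as a simple text-command dispatcher. The command names and responses below are invented, just to show the shape; a real implementation would add framing, authentication, and per-user locking semantics:

```java
import java.util.Set;
import java.util.TreeSet;

public class EditProtocolDemo {
    private final Set<String> files = new TreeSet<>();
    private final Set<String> locked = new TreeSet<>();

    public EditProtocolDemo(Set<String> available) {
        files.addAll(available);
    }

    // Handles one request line, e.g. "LIST" or "LOCK payroll.dat".
    String handle(String request) {
        String[] parts = request.split(" ", 2);
        switch (parts[0]) {
            case "LIST":
                return String.join(",", files);
            case "LOCK":
                if (parts.length < 2) return "ERR missing argument";
                if (!files.contains(parts[1])) return "ERR no such file";
                return locked.add(parts[1]) ? "OK" : "ERR already locked";
            case "UNLOCK":
                if (parts.length < 2) return "ERR missing argument";
                return locked.remove(parts[1]) ? "OK" : "ERR not locked";
            default:
                return "ERR unknown command";
        }
    }

    public static void main(String[] args) {
        EditProtocolDemo server = new EditProtocolDemo(Set.of("payroll.dat", "claims.dat"));
        System.out.println(server.handle("LIST"));
        System.out.println(server.handle("LOCK payroll.dat"));
        System.out.println(server.handle("LOCK payroll.dat"));
    }
}
```

Keeping the protocol this explicit makes it easy to put any client (RCP, ncurses, web) in front of the same server process.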
I've been exploring NIO (because of its buffers) and MINA (because of protocol transparency) in order to implement the scheme. However, before any further advancement of this endeavor, I would like to collect your expert opinions.
Is mine a reasonable way to frame the problem?
Is it feasible to do it using the language and frameworks I'm thinking of? Is it convenient?
Do you know of any patterns, blue prints, success cases or open projects that resemble or have to do with what I'm trying to do?
As I see it, the tricky thing here is decoding the files on the server. Once you've written that, it should be pretty easy.
I would suggest that, whatever the thing you use client-side is, it should basically upload a 'diff' of the person's changes.
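With fixed-length records, applying such a diff server-side is cheap: a change maps to (record number, offset within the record, new bytes), so the server can seek and overwrite in place rather than rewriting a multi-gigabyte file. A minimal sketch, with an invented 20-byte record length:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecordPatcher {
    // Record length would come from the copybook; 20 is made up for the demo.
    static final int RECORD_LENGTH = 20;

    // Overwrites only the changed bytes of one record, in place.
    static void patch(Path file, long recordNumber, int offsetInRecord, byte[] newBytes)
            throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw")) {
            raf.seek(recordNumber * RECORD_LENGTH + offsetInRecord);
            raf.write(newBytes);   // only these bytes touch the disk
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("records", ".dat");
        // Two 20-byte records: one of A's, one of B's.
        Files.writeString(file, "A".repeat(20) + "B".repeat(20));
        // Patch record 1 at offset 5 with three new bytes.
        patch(file, 1, 5, "XYZ".getBytes());
        System.out.println(Files.readString(file));
    }
}
```

Because the write is proportional to the change, not to the file, this works the same for a 6 GB data file as for the toy file above.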
Might it make sense to make something that acts like a database (or use an existing database) for this data? Or is there just too much of it?
Depending on how many people need to do this, the quick-and-dirty solution is to run the program via X forwarding -- that eliminates a number of the issues.. as long as that server has quite a lot of RAM free.
Is mine a reasonable way to frame the problem?
IMO, yes.
Is it feasible to do it using the language and frameworks I'm thinking of?
I think so. But there are other alternatives. For example:
Put the records into a database, and access by a key consisting of a filename + a record number. Could be a full RDBMS, or a more lightweight solution.
Implement as a RESTful web service with a UI implemented in HTML + javascript.
Implement using a scalable distributed file-system.
Also, from your description there doesn't seem to be a pressing need to use a highly scalable / transport independent layer ... unless you need to support hundreds of simultaneous users.
Is it convenient?
Convenient for who? If you are talking about you the developer, it depends if you are already familiar with those frameworks.
Have you considered using a distributed file system like OpenAFS? That should be able to handle very large files. Then you can write a client-side app for editing the files as if they are local.