In the application I'm writing, the server will hold information about the users in an XML database. The admin user will be able to read/write information in those files too.
How can I deal with concurrent access to those files?
The only way users/admin can read/write those files is by sending a request to the server (sockets, TCP connection), so the server will have to handle this.
What can I do? I could synchronize the server methods, but I don't want to block USER A from accessing his files while the admin is writing to USER B's files.
My first suggestion is to use a database instead of files; databases handle locking already.
You should post an example of your file structure. If User A has his data in fileA.xml and User B has his in fileB.xml, it could be done by locking the given file and synchronizing on it, as sketched below.
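A minimal sketch of that idea in Java, assuming one XML file per user and that every read/write goes through the server (the class and file-naming scheme are only illustrations):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // One read/write lock per user file, so writing userB.xml never blocks reads of userA.xml.
    public class UserFileLocks {
        private final ConcurrentHashMap<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

        private ReadWriteLock lockFor(String userId) {
            return locks.computeIfAbsent(userId, id -> new ReentrantReadWriteLock());
        }

        public String read(String userId) throws Exception {
            ReadWriteLock lock = lockFor(userId);
            lock.readLock().lock();
            try {
                return new String(java.nio.file.Files.readAllBytes(
                        java.nio.file.Paths.get(userId + ".xml")));
            } finally {
                lock.readLock().unlock();
            }
        }

        public void write(String userId, String xml) throws Exception {
            ReadWriteLock lock = lockFor(userId);
            lock.writeLock().lock();
            try {
                java.nio.file.Files.write(
                        java.nio.file.Paths.get(userId + ".xml"), xml.getBytes());
            } finally {
                lock.writeLock().unlock();
            }
        }
    }

Multiple users can read and write their own files concurrently; only concurrent access to the same user's file is serialized.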
As Jes says, use a database.
MySQL supports XML: http://dev.mysql.com/tech-resources/articles/xml-in-mysql5.1-6.0.html
Most databases support XML, or you could simply use a VARCHAR that is long enough and put/get the data there. If that is your plan, then a NoSQL solution might also work; it is essentially a persistent HashMap that supports record locking, among other features.
It sounds like there is no conflict between users. What you could also do is have an area where the admins modify the files, and copy it daily to the location the users' data is read from.
I want to create some kind of grid of Java EE applications. To identify each device I would generate a UUID on first start, but what is a good practice for storing it?
I am using Java EE 7 (WildFly) as the platform. Is there perhaps a "native" Java EE way, or one specific to WildFly? I don't want to use a database (JPA) for a single UUID. A way that doesn't need further configuration (setting paths, datasources, properties) would be ideal.
Thanks in advance.
Your main problem is storing the device ID, and you don't want to use any kind of DB?
In my opinion, use a JSON or XML file for storage, together with an in-memory object that holds the device ID at runtime and is written out to the file whenever the application or the server shuts down. If that doesn't seem reliable enough, since the server or application can shut down for many reasons that you would have to handle, it is better to write the file at a regular interval.
The next time the application is restarted, load the JSON or XML back into the in-memory object, and keep the file up to date.
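As a minimal sketch of that approach, assuming WildFly's jboss.server.data.dir system property as the storage location (the file name device-id.properties is made up):

    import java.io.*;
    import java.nio.file.*;
    import java.util.Properties;
    import java.util.UUID;

    public class DeviceId {
        // jboss.server.data.dir points to WildFly's writable data directory;
        // fall back to the working directory if it is not set.
        private static final Path FILE = Paths.get(
                System.getProperty("jboss.server.data.dir", "."), "device-id.properties");

        public static synchronized String get() throws IOException {
            Properties props = new Properties();
            if (Files.exists(FILE)) {
                try (InputStream in = Files.newInputStream(FILE)) {
                    props.load(in);
                }
            }
            String id = props.getProperty("device.id");
            if (id == null) {                       // first start: generate and persist
                id = UUID.randomUUID().toString();
                props.setProperty("device.id", id);
                try (OutputStream out = Files.newOutputStream(FILE)) {
                    props.store(out, "generated on first start");
                }
            }
            return id;
        }
    }

This needs no extra configuration: the data directory already exists on every WildFly installation, and the file survives redeployments and restarts.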
I just wonder whether I can (or whether it is a good idea to) put an embedded database's files on a server computer and run my desktop app on a computer that has access to the server's folders, reading and inserting data in that database.
For example, I have one server machine and three computers accessing it. I want them to insert/update data in the server's database, which is installed in embedded style.
If I can't, what is an easier, free way of doing it?
EDIT: Actually that server is not a real server; it is just a computer the others can access.
It isn't a good idea to share an embedded database's files between different applications. For most embedded database implementations it is not even possible, because the embedded database engine needs exclusive access to the underlying data files. Furthermore, accessing the database files over a shared folder carries a performance penalty.
I know of only two databases that allow shared database file access: SQLite and MS Access. Java and MS Access are not a good combination; avoid it, and use it only if you are forced to. For SQLite, I don't know whether it performs well with different processes on the same machine, but over a shared folder I think it would work only in the simplest cases.
So if you have multiple client applications accessing the same database, you should install a database server. A database server is made for exactly such a scenario: it manages the server-local database files efficiently and can handle many clients at the same time. There are simple ones like Apache Derby or H2, which are Java-only implementations and very easy to use. If you need more performance you can go with MySQL or PostgreSQL, but these are more complex to administer.
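With H2, for example, that setup can be as small as this sketch (host name, port, and database path are placeholders):

    import org.h2.tools.Server;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class H2ServerExample {
        public static void main(String[] args) throws Exception {
            // On the "server" machine: start an H2 TCP server that serves its local database files.
            Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();

            // On each client machine: connect over TCP instead of opening the files directly.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:h2:tcp://server-host:9092/~/appdb", "sa", "")) {
                // ... run your inserts/updates here ...
            }

            server.stop();
        }
    }

The clients never touch the shared folder; all file access happens in the server process, which coordinates concurrent clients.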
The word "embedded" normally means running inside a given JVM. To access it from clients, as opposed to from other code running in the same JVM, an method of connecting will need to be supplied, such as a connection protocol + port. Well, by the time you do all that, you have in fact rolled your own server.
If you just want filesystem access, well normally databases lock the files they're using. And if they don't, you will anyway be missing all of the control and ACID constraints that a database normally gives you.
The H2 database can be run in different modes: embedded, in-memory, standalone, and mixed.
I think you are asking about the last one, "mixed" mode.
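A minimal sketch of mixed mode, where the first connection opens the database embedded and automatically starts a server for other processes (the file path is a placeholder):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class H2MixedModeExample {
        public static void main(String[] args) throws Exception {
            // AUTO_SERVER=TRUE: the first connection opens the database embedded and
            // automatically starts a TCP server, so later connections from other
            // processes (even on other machines) are transparently routed to it.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:h2:/data/appdb;AUTO_SERVER=TRUE", "sa", "")) {
                // ... use the connection ...
            }
        }
    }

All applications use the same JDBC URL; H2 decides internally which one owns the files and which ones connect over TCP.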
I'm working on a project that has two different parts. It's an e-voting system, so there's the part where voters vote, and there's the part where the admin can make changes like adding a new position, candidate, etc. I put these two parts in two different project folders called the client and the server. Each candidate record has the URL of their picture; the picture itself is stored on the server machine and should be displayed in the client depending on which candidate is selected. The problem I'm having is how to read the picture from the server into the client application. Any tips on the best location to store the files, so that I can pass just the server name as a parameter to the client and it is able to retrieve the file?
The application uses MySQL, and I'm so far assuming that the database server is the same as the application server.
Also, I was wondering about the possibility of storing the files in the database itself, and if so, how practical that would be in terms of speed.
Thanks.
A single point of information is helpful, so put the pictures in the database if possible. If you do it right, there is no more of a performance penalty than with any other client-server communication. If the client keeps running, you can cache the pictures.
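A minimal JDBC sketch of that idea, assuming a hypothetical candidate table with a BLOB column named photo:

    import java.io.FileInputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CandidatePhotos {
        // Store a picture for the given candidate (server/admin side).
        public static void savePhoto(Connection conn, int candidateId, String path) throws Exception {
            try (FileInputStream in = new FileInputStream(path);
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE candidate SET photo = ? WHERE id = ?")) {
                ps.setBinaryStream(1, in);
                ps.setInt(2, candidateId);
                ps.executeUpdate();
            }
        }

        // Read the picture back (client side); cache the result if the client keeps running.
        public static byte[] loadPhoto(Connection conn, int candidateId) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT photo FROM candidate WHERE id = ?")) {
                ps.setInt(1, candidateId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBytes("photo") : null;
                }
            }
        }
    }

The client then only needs the database server's host name; there is no separate file share or path to pass around.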
I haven't understood the two-folder thing. The server folder must be synchronized to the client? Why? Why don't you store things like a new position in the database as well?
You can use a MySQL database, but it's not designed for that and might be slow. You could instead use MongoDB with GridFS, or some kind of file repository like Apache Jackrabbit.
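If you go the GridFS route, a minimal sketch with the MongoDB Java driver could look like this (database, bucket, and file names are placeholders):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.gridfs.GridFSBucket;
    import com.mongodb.client.gridfs.GridFSBuckets;
    import org.bson.types.ObjectId;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    public class GridFsExample {
        public static void main(String[] args) throws Exception {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                GridFSBucket photos = GridFSBuckets.create(
                        client.getDatabase("evoting"), "candidatePhotos");

                // Upload a candidate picture; GridFS splits it into chunks internally.
                ObjectId id;
                try (FileInputStream in = new FileInputStream("candidate1.jpg")) {
                    id = photos.uploadFromStream("candidate1.jpg", in);
                }

                // Download it again on the client side.
                try (FileOutputStream out = new FileOutputStream("candidate1-copy.jpg")) {
                    photos.downloadToStream(id, out);
                }
            }
        }
    }
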
I'm using JDBC to connect to a PostgreSQL database. We are trying to block access to the database for the users themselves; instead they should be forced to use our frontend. We blocked access to all tables and exposed only procedures, which do all the work for users without giving them any opportunity to access the data directly. We also tried to block access to the pg_catalog schema, which would limit users to the procedures we created, but it seems that this access is needed for JDBC to call any procedure.
Anyway, the question is either how to use JDBC without access to pg_catalog, or how to authorize only connections made by the application, not by the user.
There is no foolproof way, but the simplest approach is to use a username and password for the connection that you do not give to your users. Store the password in an encrypted configuration file; of course, a smart person can still retrieve the encryption key from the application.
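A minimal sketch of that approach, assuming a properties file readable only by the application's OS account (the file location and keys are made up, and the actual decryption is left as a placeholder):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class AppDataSource {
        public static Connection open() throws Exception {
            // db.properties lives outside the distributed jar and is never handed to end users.
            Properties cfg = new Properties();
            try (InputStream in = Files.newInputStream(Paths.get("/etc/myapp/db.properties"))) {
                cfg.load(in);
            }
            String url = cfg.getProperty("jdbc.url");        // e.g. jdbc:postgresql://dbhost/appdb
            String user = cfg.getProperty("jdbc.user");      // the application-only role
            String password = decrypt(cfg.getProperty("jdbc.password.encrypted"));
            return DriverManager.getConnection(url, user, password);
        }

        private static String decrypt(String encrypted) {
            // Placeholder: decrypt with a key embedded in (or supplied to) the application.
            return encrypted;
        }
    }
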
For a really secure system it would probably be best to put a service in front of the database that handles all security and provides a high-level API for accessing the data, and let the clients connect to that service.
The DBMS is being presented with a Catch-22 situation:
When a user runs a specific JDBC program to access the database, let it do its stuff.
When a user runs any other JDBC program to access the database, do not let it do its stuff.
How can the DBMS tell the difference between the two programs? As far as it is concerned, they are both clients that are using the correct protocol to communicate with the DBMS, and have identified themselves as a legitimate user of the database.
To make it work, you have to find a non-subvertible way to distinguish between the two applications. That is not trivial - to say the least.
There are kludges, but there isn't a clean solution. It is a generic problem that any DBMS faces when the problem is presented as in the question.
Well, just don't give your users accounts on your PostgreSQL database, and create only one PostgreSQL account for your application.
I am designing an enterprise security server for our company. We own many different applications, most written in Java and a few written in PHP. I could provide a remote API that would give each application access to the server. I could also create 'agents' that each application could include; the agent would do all the work for the application while giving my server control over its sessions and thus its authentication/authorization. The issue is that it would probably be best to write the agent in Java, because 80% or more of our apps are in Java.
If I wrote the agent in Java, does anyone know whether there is a way this program could access the PHP session? If not, does anyone have a suggestion for a better way to go about this?
The session data is stored as a (PHP) serialized array in a temporary folder; the location is set in the php.ini file.
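Assuming the default files-based session handler, a Java agent could read the raw session file directly; a minimal sketch (the save path is only an example, use the session.save_path from your php.ini):

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PhpSessionReader {
        // session.save_path from php.ini; /var/lib/php/sessions is just an example.
        private static final Path SAVE_PATH = Paths.get("/var/lib/php/sessions");

        // PHP names each session file "sess_" + session id.
        public static String readRaw(String sessionId) throws Exception {
            Path file = SAVE_PATH.resolve("sess_" + sessionId);
            return new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws Exception {
            // The returned string still has to be run through a PHP [de]serializer,
            // e.g. one of the Java ports mentioned below.
            System.out.println(readRaw("abc123sessionid"));
        }
    }
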
But you can change both the format of the data and the place it is stored (e.g. to a database or shared memory or somewhere else) by writing your own handler.
A quick Google search suggests that several people have written [de]serializers in Java for PHP data, e.g. http://hurring.com/scott/code/java/serialize/
If you have problems with the built-in PHP serialize function, have a Google for WDDX (which, IIRC, comes as standard), which serializes data into XML.
You might want to think about how you keep the session data appearing to be active to PHP if you want the agent to continue independently of the web session.
C.
You can hook into PHP's session handling using session_set_save_handler() (an example of a simple but complete custom handler is included in the manual). You should be able to synchronize PHP's session management with a central Java server that way.
Your PHP application would receive a session ID through a cookie ($_COOKIE["SESSION_ID"] or whatever).
Your custom session_save_handler would, instead of maintaining a session store of its own, pass that session ID to your central Java-based security server, and get all the session data in return. Writing into a session from PHP would be routed the same way.
You could of course also go the other way and poll PHP's internal session data from the outside, but I don't quite understand what that would be for. If that is your plan, can you go into more detail?
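If you route PHP's session reads and writes to the central Java server as described above, the Java side could start out as small as this sketch (the HTTP endpoint, port, and in-memory store are assumptions for illustration, not an existing API):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ConcurrentHashMap;

    public class SessionServer {
        // sessionId -> raw session payload handed over by the PHP save handler
        private static final ConcurrentHashMap<String, String> sessions = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8088), 0);

            // Simplified: assumes the request always carries ?id=<sessionId>.
            // POST stores the body as the session data, GET returns it.
            server.createContext("/session", exchange -> {
                String id = exchange.getRequestURI().getQuery().replace("id=", "");
                byte[] response;
                if ("POST".equals(exchange.getRequestMethod())) {
                    String body = new String(exchange.getRequestBody().readAllBytes(),
                            StandardCharsets.UTF_8);
                    sessions.put(id, body);
                    response = "OK".getBytes(StandardCharsets.UTF_8);
                } else {
                    response = sessions.getOrDefault(id, "").getBytes(StandardCharsets.UTF_8);
                }
                exchange.sendResponseHeaders(200, response.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(response);
                }
            });
            server.start();
        }
    }

The PHP save handler's read/write callbacks would then simply issue GET/POST requests against this endpoint, and the Java agents in your other applications could query the same store.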