I have been tasked with writing an application in Java (it is not web hosted, but rather deployed to multiple platforms - this decision is not in my control).
I know Java well enough to do this, and enough MySQL to seem clever, which is probably dangerous. I am not an expert, though, which is why I'm here asking for design help.
Here are the requirements:
1) The Java application will require generic read access to certain tables in a database - this does not need access control
2) The Java application will need to be able to modify a few specific tables (not necessarily in the same database - but some are), but only rows whose accid matches the user's account id
3) Due to the nature of the infrastructure the users cannot access the database directly for the read data (though they could for the write data)
How do I give users write access to a table such that they can only insert/modify rows tied to their own account id?
If they have write access to a table, they could modify rows that aren't tied to their account id. And while I can make the app enforce the account id, users with write access could bypass the app and access the table directly. If instead I make a generic write account for the app, users could simply read the user/pass out of the Java code, which is just as bad.
The infrastructure will not allow me to set up a real client-server communication system (which could handle auth and processing). The best I can do is a background process that communicates only with internal systems, while this app will be external. So the most I can offer external users is direct access to a database.
I was thinking of something like having the app work on a separate replicated database and then having the background process transfer the data to the real one (this doesn't need to be real time, so that's fine), but that doesn't solve the inherent security issue with write access.
Is there a way to give users conditional write access to a table (conditional on their accid matching) wholly within MySQL's security features?
The read data is obviously easy: just open read access to the table. It's the write access I'm not sure about, and I'm just hoping I'm missing something.
I appreciate any comments and suggestions, even if the answer is that it's not possible.
Thanks.
I'm currently getting into socket programming and building a multi-threaded console application where I need to register/login users. The data needs to be saved locally, but I cannot seem to find the right structure for it.
Here are the ideas I thought about:
Simply saving the data to a .txt file. (This will be troublesome for searching and for authenticating logins.)
Using the Java Preferences API - but since the application is multi-threaded, I keep overwriting the data each time a new client connects to my server. Can I create a new node for each new user?
What do you guys think is the ideal structure for saving login credentials? (security isn't currently a concern for this application)
I would consider the H2 database engine.
To quote its home page: "Very fast, open source, JDBC API; embedded and server modes; in-memory databases; browser-based console application; small footprint: around 2 MB jar file size"
http://www.h2database.com
It really depends on what you want to do with the application. The best choice differs depending on your answers to the following questions:
Do you want/need to persist the databases?
Is there any other data which you need to store along with that?
Are you using plain Java or a framework like Spring?
Some options:
If you're just prototyping and you don't need any persistence: consider using in-memory storage for it. For simplicity in coding/dependencies, something like a ConcurrentMap can be completely sufficient. If you wrap it properly, you can exchange it later - and you don't add dependencies and complexity at an early stage.
If you're prototyping but you still need persistence, using properties files on top of the ConcurrentMaps can give you a quick win.
There might be more stages to this, depending on where you want to go with it; choosing a database at some point can be an option. Depending on your experience and needs, you can use a SQL or NoSQL database. Personally, I get faster results with NoSQL (MongoDB in my case) but prefer SQL in production for use cases like account management.
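The in-memory option above can be sketched with a plain ConcurrentMap; a minimal sketch in plain Java, storing passwords as-is since the question says security isn't currently a concern (the class and method names are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Thread-safe in-memory credential store; safe to call from many client threads.
public class UserStore {
    private final ConcurrentMap<String, String> users = new ConcurrentHashMap<>();

    // Returns false if the username is already taken (putIfAbsent is atomic,
    // so two threads racing to register the same name cannot both succeed).
    public boolean register(String username, String password) {
        return users.putIfAbsent(username, password) == null;
    }

    // Returns true only if the user exists and the password matches.
    public boolean login(String username, String password) {
        return password != null && password.equals(users.get(username));
    }
}
```

Because the map is hidden behind register/login, it can later be swapped for a file- or database-backed implementation without touching the calling code.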
I'm looking to make a web app that makes use of two datasets, given in CSV format, each 10 MB in size. I've chosen to use a Java dynamic web app with JSP that users can use to search and sort through the data provided in the CSVs.
From what I understand, the user/client sends a request to the server, and the server calls upon the Java classes in the backend, which hold the different sorting methods and the data from the CSVs that can be manipulated.
This data that sits in the backend is where I'm running into confusion. I know it's possible to load the data into a database and have that sitting on the server for me to call upon.
If I use a class that reads the CSV and loads the data into arrays, would this reading be done every time someone accesses the website (causing latency), or would the data already be loaded into arrays on the server?
Depending on the scope you use, it could be loaded into the application context and therefore only once (say, in a singleton class loaded at application startup).
But I wouldn't recommend this approach; I would recommend a properly designed database into which you can put your CSV data. This way you would have the database engine to help you organize your data, which gives you scalability and maintainability (although a proper design of your classes, say with a DAO pattern, could give you the same).
Organized data in a database also gives you more flexibility to search through your data using built-in SQL functions.
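The load-once, application-scope idea can be sketched with a cached loader; a minimal sketch in plain Java, where the Supplier stands in for reading the real CSV file (the class name and naive split-on-comma parsing are assumptions for illustration):

```java
import java.util.List;
import java.util.function.Supplier;
import java.util.stream.Collectors;

// Caches parsed CSV rows for the lifetime of the JVM, mimicking
// "load once at application startup" in a servlet container.
public class CsvCache {
    private static volatile List<String[]> rows;

    // The supplier is only invoked on the first call; later calls hit the cache.
    // Naive comma splitting: fine for simple files, not for quoted fields.
    public static List<String[]> load(Supplier<List<String>> lineSource) {
        if (rows == null) {
            synchronized (CsvCache.class) {
                if (rows == null) {
                    rows = lineSource.get().stream()
                            .map(line -> line.split(","))
                            .collect(Collectors.toList());
                }
            }
        }
        return rows;
    }
}
```

With this shape, each request handler calls load(...) and only the very first request pays the parsing cost; everyone else reads the shared in-memory rows.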
In order to make my case here are some advantages of a Database system over a file system:
No redundant data – Redundancy removed by data normalization
Data Consistency and Integrity – data normalization takes care of it too
Secure – Each user has a different set of access
Privacy – Limited access
Easy access to data
Easy recovery
Flexible
Concurrency - the database engine will allow you to read the data concurrently, or even write to it.
I'm not listing the disadvantages since I'm making my case :)
You can read from the CSV file to build your arrays, then add the arrays to session scope. The CSV file will only be read by the servlet that processes it; future requests will retrieve the data from the session.
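Outside a servlet container, the per-session caching described above can be approximated with a map keyed by session id; a minimal sketch where the session-id strings stand in for HttpSession attributes (which are assumed, not shown):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.stream.Collectors;

// Per-session cache of parsed CSV rows: parsing runs at most once per session,
// mirroring "read in the servlet, then retrieve from session scope".
public class SessionCsvCache {
    private final ConcurrentMap<String, List<String[]>> bySession = new ConcurrentHashMap<>();

    public List<String[]> rowsFor(String sessionId, List<String> csvLines) {
        // computeIfAbsent parses only on the first request for this session.
        return bySession.computeIfAbsent(sessionId,
                id -> csvLines.stream()
                        .map(l -> l.split(","))
                        .collect(Collectors.toList()));
    }
}
```

One design note: session scope keeps one copy of the data per user, so for read-only data that is the same for everyone, application scope (a single shared copy) is usually cheaper.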
General question here. I have a Spring Web MVC application that allows users to enter data one record at a time. Validation checks are run when adding/editing each individual record (database calls, client-side validation, etc.).
We want to provide users a way to bulk insert many records in a single load. Right now the obvious choice is importing an Excel spreadsheet. However, I feel like this will require a ton of redundant work, as we would have to reproduce in the spreadsheet all the same validation checks, dynamic string building, and preloaded drop-downs that we have already built into our application. Thus my question: is there a simple way to recreate this process via a web interface that imitates entering data into a spreadsheet (any tool or framework of sorts)? If this could be done on the front end, we would be able to utilize all the functionality we have already implemented.
Hope this isn't a poor question; I would just really like to avoid spreadsheets altogether.
I use http://handsontable.com - it is a JavaScript component.
You can get quite close to Excel-like behaviour in a browser. You can also copy/paste to and from Excel with it.
Summary:
I am trying to write a utility program that is based on the information contained in a separate file. The goal is that any information in the physical file can be retrieved quickly and can be updated quickly as well.
Details:
The file is a normal ANSI-encoded file that stores definitions of the physical quantities defined in the SI system. What I really want is to be able to read the definitions, and write changes to them, whenever required. I'll be using markers (like ":") to separate the headings and definitions, like:
Length:metre:m:"..length of path traveled by light in vacuum in
1/299792458th of a second"
and so on.
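A record in that layout can be split with String.split and a limit, so any ":" inside the quoted definition text is preserved; a minimal sketch (the four-field name:unit:symbol:definition layout is inferred from the example above):

```java
// Splits "name:unit:symbol:definition" into exactly four fields; the limit of 4
// keeps colons inside the quoted definition text intact.
public class DefinitionParser {
    public static String[] parse(String line) {
        return line.split(":", 4);
    }
}
```

This only covers parsing one record; it says nothing about updating the file in place, which is the harder part of the question.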
So in this case, is extending RandomAccessFile an option? Will it help me with quick retrieval and syncing of data, or should I use another approach?
I'd advise you to use an embedded ACID database like H2 if you want to:
Guarantee that you don't lose changes that you made
Have more than one program access the info
This is because coding up something that correctly does this using low level facilities like RandomAccessFile is quite hard. Storing persistent application state in embedded DBs is commonly done. H2 is probably the most popular among DBs implemented in pure Java.
On how to actually do this, see this: Embedding the Java h2 database programmatically
You'll probably want to look at an introduction to relational DBs and SQL if you aren't familiar with them.
I'm wondering how wordpress.com or Google Groups hosts multiple applications for different parties. For WordPress, I guess it creates a subdomain for each user and configures a virtual host in Apache for that installation. Of course, a database is installed for that user (or tables with a prefix). Does the WordPress application need to be copied? That way each blog would be independent and they wouldn't have to change anything in the blog application (I guess).
In Java, life is not easy. I think the multiple application instances have to be implemented programmatically. Almost every domain object needs an extra attribute; for example, a Post needs to be identified by a blog attribute.
This leaves the database design with more work to do. There are three possible solutions:
Add one more column. For example, the post table needs an added "blog_id". Posts from all blogs are stored in one table. This solution adds extra work to the SQL queries, since you have to add "where blog_id=1" to almost every query.
Table prefix, such as blog1_post.
New database. "blog1.post"
I would use Spring + Hibernate in this project.
What do you think I might miss?
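Option 1 (a shared table with a discriminator column) can be sketched in plain Java before any Hibernate wiring; the Post shape and field names are assumptions for illustration:

```java
import java.util.List;
import java.util.stream.Collectors;

// Option 1 in miniature: every domain object carries a blogId, and every
// query filters on it -- the in-memory analogue of "WHERE blog_id = ?".
public class MultiTenantPosts {
    public static final class Post {
        private final long blogId;
        private final String title;
        public Post(long blogId, String title) { this.blogId = blogId; this.title = title; }
        public long blogId() { return blogId; }
        public String title() { return title; }
    }

    // Returns only the posts belonging to the given blog.
    public static List<Post> postsForBlog(List<Post> all, long blogId) {
        return all.stream()
                .filter(p -> p.blogId() == blogId)
                .collect(Collectors.toList());
    }
}
```

With Hibernate, that repeated predicate can be centralized (for example with its filter mechanism) so it isn't hand-written in every query, which addresses the main drawback of this option.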
WordPress is probably running a separate installation for each blog, and using something like Puppet to roll out the codebase into production and to manage updates etc.