I need some guidance on retrieving data from a database.
The database is called the Drug Combination DataBase. So far I'm just using a small text file that contains a small portion of the data, but the complete database is available as a 14 MB SQL file. Can I load this efficiently at run-time in my Java application so I can look up a few entries? I've never used an SQL file to retrieve data in Java before, so I don't know what the best strategy is.
By the way, I'm creating an application that reads large graphs and another XML database, so memory usage is already fairly high.
The way to connect a Java program to a database is through JDBC. The file needs to be read in and saved to a database such as MySQL or PostgreSQL in order to be accessed. Check out this link for a good tutorial:
JDBC tutorial
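Once the SQL file has been imported into the server (for example with the mysql command-line client), the lookups themselves are only a few lines of JDBC. A minimal sketch, assuming a local MySQL instance, the MySQL driver on the classpath, and a hypothetical drug_combinations table:

    import java.sql.*;

    public class DcdbLookup {
        public static void main(String[] args) throws SQLException {
            // Connect to the imported database (adjust URL and credentials to your setup)
            String url = "jdbc:mysql://localhost:3306/dcdb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT * FROM drug_combinations WHERE drug_name = ?")) {
                ps.setString(1, "aspirin");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1)); // hypothetical column
                    }
                }
            }
        }
    }

Since only a few entries are looked up at a time, the database stays on disk and this keeps the memory footprint small, which matters given the graphs and XML already loaded.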
I'm new to Druid, and I'm currently working on a project where data is collected on a monthly or weekly basis and then used for analysis. At the moment I'm storing all the collected data in Postgres with a timestamp for each row. Now I've decided to go with a time-series database (Druid). I've gone through the Druid docs and learned how to load data into Druid through the Druid console (basically I exported data from Postgres into CSV and loaded that through the console) and through commands. Now, if I want to load and query data using Java, how can I do that?
As I'm not finding many resources on this, especially on how to load data (in the form of CSV) into Druid using Java, it would be very helpful if someone could guide me through it.
You can use the Druid client APIs
or use Druidry.
Examples are here.
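If you'd rather not pull in a client library, the broker also exposes Druid SQL over plain HTTP, so the JDK's own HTTP client is enough for querying. A rough sketch, assuming a broker on localhost:8082 and a hypothetical datasource name:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DruidSqlQuery {
        public static void main(String[] args) throws Exception {
            // Druid SQL is served by the broker at /druid/v2/sql/
            String body = "{\"query\": \"SELECT __time, metric FROM my_datasource LIMIT 10\"}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8082/druid/v2/sql/"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // JSON array of result rows
        }
    }

Loading the CSV works along the same lines: you POST a JSON ingestion task spec (the same spec the console generates for you) to the Overlord's /druid/indexer/v1/task endpoint.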
New to Oracle here, but I have now read about the various bulk insert options in Oracle. In essence, true bulk loading is done using the Direct Path loading mechanism via SQL*Loader. There are also APPEND hint options that use serial or parallel Direct Path loading. But each of these has the following limitations:
SQL*Loader works off a control file, which contains the path of the data file. In my case, there is no file.
The APPEND hint option for INSERT can only use the syntax INSERT INTO ... SELECT ... FROM. In my case, the source data is not in any table.
The source of my data is actually a Spark dataframe. I am looking for options to push this data in chunks to Oracle tables, but using the Direct Path loading option. For example, in Postgres, the PGConnection interface provides the getCopyAPI().copyIn functionality: you can create a huge serialized blob that can be sent over as one big chunk using a COPY tableName FROM STDIN command. I am unable to find any similar Java API for Oracle that works on in-memory records and can push data directly (without any INSERT statements).
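For reference, the Postgres path I'm describing looks roughly like this (just a sketch, with a placeholder table and CSV payload):

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class PgCopyExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/mydb", "user", "password")) {
                // Serialize the in-memory records as CSV and stream them in one COPY call
                String csv = "1,foo\n2,bar\n";
                CopyManager copyManager = conn.unwrap(PGConnection.class).getCopyAPI();
                long rows = copyManager.copyIn(
                        "COPY target_table FROM STDIN WITH (FORMAT csv)", new StringReader(csv));
                System.out.println("Copied " + rows + " rows");
            }
        }
    }

This is the kind of single-call, in-memory push I'm hoping to find an Oracle equivalent for.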
Any ideas on how to achieve this? Anyone done this before?
In general, how do folks using Oracle and Spark push data to Oracle from a dataframe in an optimized way?
Thanks in advance!
I use a licensed piece of software for data processing, Nuix. It creates an embedded Derby database to store info about the data it processes.
My question is: is it possible for me to access the database the program creates even if I do not run the program? I want to access the database from my own JVM application.
Please note: I have never used Derby before nor am I fluent in Java.
Yes, I have used RazorSQL to browse Nuix DBs. Point it at a store folder and it should display the tables. It's not the easiest schema to understand, but you should be able to find what you need.
Derby, like all databases, ultimately stores its data on your HDD or SSD. And like many others, it stores it in files. So any other program with access to those files can, in theory, access the data. You could shut down the other program and have your own program, or a Derby server daemon, access the files, using the same version of the Derby Java library.
But you will face one problem: you will not know the database schema. So it might be difficult to interpret the data you read.
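A minimal sketch of that approach, assuming the other program is shut down; the database path here is a placeholder, so point it at whatever directory Nuix actually writes:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class DerbyBrowser {
        public static void main(String[] args) throws Exception {
            // Embedded Derby: point the JDBC URL at the existing database directory
            String url = "jdbc:derby:/path/to/nuix/case/store";
            try (Connection conn = DriverManager.getConnection(url)) {
                // List the tables to get a first idea of the schema
                DatabaseMetaData meta = conn.getMetaData();
                try (ResultSet tables = meta.getTables(null, null, "%", new String[] {"TABLE"})) {
                    while (tables.next()) {
                        System.out.println(tables.getString("TABLE_SCHEM") + "."
                                + tables.getString("TABLE_NAME"));
                    }
                }
            }
        }
    }

Listing the tables through DatabaseMetaData like this is also a reasonable first step toward working out the schema.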
I want to create a temporary database in memory to read and store XML data from the API.
I have been doing this in C# and .NET by simply creating a structured DataSet/DataTable, reading the XML API data and storing it there. Then I use it for other work and dump it at the end.
The XML data structure is already known, so I would create the datatable structure and then read XML and save rows one by one.
I would like to achieve the same flexibility in Java too. Still a newbie in Java desktop application development.
Try using HSQLDB with this connection URL: jdbc:hsqldb:mem:testDb
This will ensure that your dataset is created in-memory.
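A small sketch of how that could look once the XML has been parsed; the table and columns are placeholders for your known XML structure:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class InMemoryStore {
        public static void main(String[] args) throws Exception {
            // The mem: protocol keeps the whole database in RAM; it disappears on exit
            try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testDb", "SA", "")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE api_data (id INTEGER PRIMARY KEY, name VARCHAR(100))");
                }
                try (PreparedStatement ps =
                             conn.prepareStatement("INSERT INTO api_data (id, name) VALUES (?, ?)")) {
                    ps.setInt(1, 1);            // values parsed from the XML would go here
                    ps.setString(2, "example");
                    ps.executeUpdate();
                }
            }
        }
    }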
You can either use the collections framework to store temporary data, or use SQLite as a local database if you really need one.
I'm currently working on a simple Java application that calculates and graphs the different types of profit for a company. A company can have many branches, and each branch can have many years, and each year can have up to 12 months.
The hierarchy looks as follows:
- company
  - branch
    - year
      - month
My intention was to keep the data storage as simple as possible for the user. The structure I had in mind was an XML file that stores everything to do with a single company: either a single XML file, or multiple XML files linked together with unique IDs.
Both of these options would also allow the user to easily transport the data, as opposed to using a database.
The problem with a database that is stopping me right now is that the user would have to set up a database by him/herself, which would be very difficult for them if they aren't the technical type.
What do you think I should go for: an XML file, a database, or something else?
It will be more complicated to use XML; XML is more of an interchange format, not a substitute for a DB.
You can use an embeddable database such as H2 or Apache Derby / JavaDB; in that case the user won't have to set up a database. The data will only be stored locally, though, so if that is ok for your application, you can consider it.
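A minimal sketch with H2 in embedded (file) mode, just to show there is nothing to install; the file and table names are only examples:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EmbeddedStore {
        public static void main(String[] args) throws Exception {
            // H2 creates ./companydata.mv.db next to the application if it doesn't exist yet
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./companydata", "sa", "");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS company (id INT PRIMARY KEY, name VARCHAR(100))");
                st.execute("INSERT INTO company (id, name) VALUES (1, 'Example Ltd')");
            }
        }
    }

Since the database ends up in a single file next to the application, the user can still copy it around much like an XML file.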
I would definitely go for the DB:
you have relational data, which is something DBs are very good at
you can query relational data much more easily than you can query XML
the CRUD operations (create, read, update, delete) are much easier in a DB than in XML
You can avoid the need for the user to install a DB engine by embedding SQLite with your app, for example.
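As a rough illustration of how the company/branch/year/month hierarchy could map onto tables, a sketch using the xerial sqlite-jdbc driver (schema and names are only a suggestion):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class SqliteSchema {
        public static void main(String[] args) throws Exception {
            // The database lives in a single file next to the application, so it is easy to move around
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:company.db");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS branch (id INTEGER PRIMARY KEY, name TEXT)");
                st.execute("CREATE TABLE IF NOT EXISTS month_profit ("
                        + "branch_id INTEGER REFERENCES branch(id), "
                        + "year INTEGER, month INTEGER, profit REAL, "
                        + "PRIMARY KEY (branch_id, year, month))");
            }
        }
    }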
If it's a single-user application and the amount of data is unlikely to exceed a couple of megabytes, then using an XML file for the persistent storage might well make sense in that it reduces the complexity of the package and its installation process. But you're limiting the scalability: is that wise?