I need to design a table in Oracle, and data will be uploaded via a Java/C# application from a CSV file with 50 fields (mapped to the table's columns). How should the table/DB be designed given the constraint below on importing data from the CSV?
The CSV may have new fields added to the existing 50.
In that case, instead of manually adding a column to the table and then loading the data, how can we design the table so that dynamic/new fields are handled smoothly and automatically?
EX:
CSV has S_ID, S_NAME, SUBJECT, MARK_VALUE fields in it
+------+---------+-------------+------------+
| S_ID | S_NAME | SUBJECT | MARK_VALUE |
+------+---------+-------------+------------+
| 1 | Stud | SUB_1 | 50 |
| 2 | Stud | SUB_2 | 60 |
| 3 | Stud | SUB_3 | 70 |
+------+---------+-------------+------------+
What if the CSV has a new field "RANK" (and similar further fields) added to it, and I need to store all new fields in the table?
Please suggest a DB design for this scenario.
A few approaches come to mind. One would be to keep metadata (record) information in one table (column name, data type, any constraints) and have a second, free-form table that holds the data. Use the metadata table while inserting data into the free-form table to maintain data integrity and the like.
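A minimal sketch of that metadata-driven idea, using an in-memory SQLite database via Python purely for illustration (all table, column, and function names here are hypothetical; in Oracle the DDL and types would differ). Each CSV value becomes a row in a key/value data table, and the metadata table registers which fields exist, so a brand-new field needs no ALTER TABLE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE field_meta (field_name TEXT PRIMARY KEY, data_type TEXT);
CREATE TABLE student_data (record_id INTEGER, field_name TEXT, field_value TEXT,
                           FOREIGN KEY (field_name) REFERENCES field_meta(field_name));
INSERT INTO field_meta VALUES ('S_ID','NUMBER'), ('S_NAME','TEXT'),
                              ('SUBJECT','TEXT'), ('MARK_VALUE','NUMBER');
""")

def load_row(record_id, row):
    """Insert one CSV row; register any previously unseen field first."""
    known = {r[0] for r in con.execute("SELECT field_name FROM field_meta")}
    for name, value in row.items():
        if name not in known:
            # New field discovered in the CSV: record it in the metadata table
            con.execute("INSERT INTO field_meta VALUES (?, 'TEXT')", (name,))
        con.execute("INSERT INTO student_data VALUES (?, ?, ?)",
                    (record_id, name, value))

# A new field RANK appears in the CSV -- no schema change needed
load_row(1, {"S_ID": "1", "S_NAME": "Stud", "SUBJECT": "SUB_1",
             "MARK_VALUE": "50", "RANK": "2"})
print(con.execute("SELECT field_value FROM student_data "
                  "WHERE record_id = 1 AND field_name = 'RANK'").fetchone()[0])
```

The trade-off of this key/value (entity-attribute-value) layout is that queries become pivots and type checking moves into application code, which is why the metadata table should carry the declared type and constraints.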
I will try to explain the problem to the best of my ability.
I have two tables in two different schemas, with a few columns in each, and I own only one of the schemas.
I need to update Table A in Schema 1 with a value from one of the fields of Table B in Schema 2.
Only a few rows in Table A need updating.
The problem is that when Table A is populated, Table B is not yet ready with its data.
I am trying to do this programmatically if possible.
Since the tables are in different schemas, and the update is small compared to Table A's size, what is the best way to do this?
SAMPLE DATA
Table A
orderNum | orderNumInternal | validity                | averageSales | type
1000     | 5636             | 2020-06-30 00:00:00.000 | NULL         | valid
Table B
orderNum | orderNumInternal | validity                | averageSales
1000     | 5636             | 2020-06-30 00:00:00.000 | 65
Here I need to update Table A with the averageSales value from Table B whenever the type in Table A is 'valid' and there is a match in Table B on the first three columns.
Table A is created overnight, and I don't have control over when the data becomes available in Table B.
Would this not simply be an UPDATE with a JOIN?
UPDATE A
SET averageSales = B.averageSales
FROM Schema1.TableA A
JOIN Schema2.TableB B
  ON A.orderNum = B.orderNum
 AND A.orderNumInternal = B.orderNumInternal
 AND A.validity = B.validity
WHERE A.type = 'valid'
  AND A.averageSales IS NULL; -- the IS NULL guard is optional; it avoids re-updating rows
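The UPDATE ... FROM syntax above is SQL Server-specific. A quick way to sanity-check the logic is an in-memory SQLite session (shown via Python purely for illustration; table names and sample rows follow the question), using the portable correlated-subquery form of the same update:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE TableA (orderNum INT, orderNumInternal INT, validity TEXT,
                     averageSales INT, type TEXT);
CREATE TABLE TableB (orderNum INT, orderNumInternal INT, validity TEXT,
                     averageSales INT);
INSERT INTO TableA VALUES (1000, 5636, '2020-06-30 00:00:00.000', NULL, 'valid');
INSERT INTO TableA VALUES (1001, 5637, '2020-06-30 00:00:00.000', NULL, 'invalid');
INSERT INTO TableB VALUES (1000, 5636, '2020-06-30 00:00:00.000', 65);
INSERT INTO TableB VALUES (1001, 5637, '2020-06-30 00:00:00.000', 99);
""")
# Match on the first three columns; only touch rows whose type is 'valid'.
# The EXISTS guard prevents setting averageSales to NULL when no match exists.
con.execute("""
UPDATE TableA
SET averageSales = (SELECT B.averageSales FROM TableB B
                    WHERE B.orderNum = TableA.orderNum
                      AND B.orderNumInternal = TableA.orderNumInternal
                      AND B.validity = TableA.validity)
WHERE type = 'valid'
  AND EXISTS (SELECT 1 FROM TableB B
              WHERE B.orderNum = TableA.orderNum
                AND B.orderNumInternal = TableA.orderNumInternal
                AND B.validity = TableA.validity)
""")
print(con.execute(
    "SELECT orderNum, averageSales FROM TableA ORDER BY orderNum").fetchall())
# [(1000, 65), (1001, None)] -- the 'invalid' row is left alone
```

Since Table B's data arrives later than Table A's, the update could be run from a scheduled job (e.g. SQL Server Agent) or triggered by whatever process finishes loading Table B.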
I am using Microsoft SQL Server with already stored data.
In one of my tables I can find data like:
+--------+------------+
| id | value |
+--------+------------+
| 1 | 12-34 |
| 2 | 5678 |
| 3 | 1-23-4 |
+--------+------------+
I realized that the VALUE column was not properly formatted when the data was inserted.
What I am trying to achieve is to get id by given value:
SELECT d.id FROM data d WHERE d.value = '1234';
Is there any way to normalize the data in that column on the fly, just before the comparison in the SELECT?
Should I create a new view with a cleaned-up column, or use a complicated regular expression (with the LIKE operator) to keep only the digits?
P.S. I manage database in Jakarta EE project using Hibernate.
P.S.2. I am not able to modify stored data.
One method is to use replace() before the comparison:
WHERE REPLACE(d.value, '-', '') = '1234'
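A quick demonstration of that predicate on the sample data, using an in-memory SQLite database via Python for illustration (SQL Server's REPLACE behaves the same way here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (id INTEGER, value TEXT);
INSERT INTO data VALUES (1, '12-34'), (2, '5678'), (3, '1-23-4');
""")
# REPLACE strips the dashes at query time; the stored data is untouched
rows = con.execute(
    "SELECT d.id FROM data d WHERE REPLACE(d.value, '-', '') = '1234'").fetchall()
print(rows)  # [(1,), (3,)]
```

Note the side effect visible in the output: distinct stored values ('12-34' and '1-23-4') collapse to the same normalized string, so this lookup can return more than one row. Also, wrapping the column in a function prevents index use, so for large tables a computed/persisted column or a view may be preferable.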
I have created a simple entity with Hibernate with a @Lob String field. Everything works fine in Java; however, I am not able to check the values directly in the DB with psql or pgAdmin.
Here is the definition from DB:
=> \d+ user_feedback
Table "public.user_feedback"
Column | Type | Modifiers | Storage | Stats target | Description
--------+--------+-----------+----------+--------------+-------------
id | bigint | not null | plain | |
body | text | | extended | |
Indexes:
"user_feedback_pkey" PRIMARY KEY, btree (id)
Has OIDs: no
And here is that I get from select:
=> select * from user_feedback;
id | body
----+-------
34 | 16512
35 | 16513
36 | 16514
(3 rows)
The actual "body" content is for all rows "normal" text, definitely not these numbers.
How to retrieve actual value of body column from psql?
This will store the content of LOB 16512 in file out.txt :
\lo_export 16512 out.txt
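Alternatively, on PostgreSQL 9.4 or later you can read a large object inline without exporting to a file, by casting the stored OID and decoding the bytes (shown against the sample table above):
=> select id, convert_from(lo_get(body::oid), 'UTF8') as body_text from user_feedback;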
Although using @Lob is usually not recommended here (database backup issues, ...). See store-strings-of-arbitrary-length-in-postgresql for alternatives.
Hibernate is storing the values as out-of-line objects in the pg_largeobject table, and storing the Object ID for the pg_largeobject entry in your table. See PostgreSQL manual - large objects.
It sounds like you expected inline byte array (bytea) storage instead. If so, you may want to map a byte[] field without a @Lob annotation, rather than a @Lob String. Note that this change will not be backward compatible: you'll have to export your data from the database, then drop the table and re-create it with Hibernate's new definition.
The selection of how to map your data is made by Hibernate, not PostgreSQL.
See related:
proper hibernate annotation for byte[]
How to store image into postgres database using hibernate
I'm trying to store a list/collection of data objects in HBase. For example, a User table where the userId is the row key, and a column family Contacts with a column Contacts:EmailIds, where EmailIds is a list of emails such as
{abcd@example.com, bpqrs@gmail.com, ...}
How do we model this in HBase, and how do we do it in Java or Python? I've tried pickling and unpickling the data in Python, but that is a solution I'd rather avoid due to performance issues.
You can model it in either of the following ways:
| userid | contacts |
| test   | c:email1=test@example.com; c:email2=te.st@example.com |
or
| userid | contacts |
| test   | c:test@example.com=1; c:te.st@example.com=2 |
This way you can use versioning, add/remove as many email addresses as you want, use filters, and it is really easy to iterate over these KV pairs in the client code.
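In Python, the first pattern (one column qualifier per email) boils down to building a dict of qualifier/value pairs. A small sketch with a hypothetical helper; no HBase connection is needed to build the payload, and with a client such as happybase the resulting dict could be passed to a put on the table:

```python
def emails_to_columns(emails, family=b"c"):
    """Build an HBase put payload: one column qualifier per email.

    The qualifier carries the index (email1, email2, ...); the cell
    value carries the address. Qualifier naming is an assumption here.
    """
    return {family + b":email%d" % i: e.encode("utf-8")
            for i, e in enumerate(emails, start=1)}

cols = emails_to_columns(["abcd@example.com", "bpqrs@gmail.com"])
print(cols[b"c:email1"])  # b'abcd@example.com'
# with happybase, for example: table.put(b"test", cols)
```

No serialization format (pickle, JSON) is involved: each email is its own cell, so individual addresses can be added, deleted, versioned, and filtered server-side.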
I have a project with a JTable called AchTable, which looks like this:
+-------+------+
| File | Type |
+-------+------+
| | |
| | |
| | |
+--------------+
And I have a MySQL table laid out the same way; I want to know how I can populate the JTable from it.
So what is the problem, creating a table or creating an SQL query?
Read the section from the Swing tutorial on How to Use Tables.
Read the tutorial on JDBC Database Access.
Put the two together and you've got your problem solved. That is, first create your query and obtain a ResultSet. Then use the ResultSet's metadata to get the column names. Then loop through the ResultSet and add rows of data to your table. You can use a DefaultTableModel for this.