Load database tables from MySQL - java

I am working on a simulation of a blood supply chain and created and imported some tables to manage the master data of various agent populations, such as blood processing centres, testing centres, hospitals and so on. These tables contain the name of each agent and its lat/lon coordinates.
These tables are all part of a MySQL database that I connected to AnyLogic through its database interface and, as I said, imported. So far so good. However, when I want to create the agent populations from the database entries and map the agents' parameters to the respective table columns, AnyLogic can't assign the name (VARCHAR in MySQL, String in the imported AnyLogic database) to the agent's name parameter of type String. Every other type works; only Strings are giving me trouble.
(Screenshots: Database in AnyLogic; Agent and parameter; Create population from database)
As a side note, when I copy all of the database contents into Excel and import the Excel sheet instead, it works just fine; AnyLogic only struggles with databases imported from MySQL, even though the resulting AnyLogic database looks exactly the same regardless of the import method.

Looks like a bug either in the population properties (e.g., the types are compatible, it just thinks they're not), or in the MySQL import (e.g., some special Unicode characters in that column cause the import to give it an odd HSQLDB type which can be set up but not then converted to String; the AnyLogic DB is a normal HSQLDB database). To rule out the former, try not setting the name parameter in the population properties, then read all the rows at model startup (use the Insert Database Query wizard to help you) and try to assign the name parameter there. (That may also give you a more useful exception/error message...)
(I can't easily set up a MySQL DB to confirm this. It would also be worth trying a minimal example model where the MySQL table has only that 'string' column, and then sending that to AnyLogic support if the problem persists.)
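For the startup test, the wizard-generated query code looks roughly like the sketch below. This is only a sketch: the table and column names (blood_centers, name, lat, lon), the population (bloodCenters) and the agent type (BloodCenter) are invented placeholders, and the exact accessors come from the Insert Database Query wizard for your actual table.

// "On startup" action of Main, assuming an initially empty population bloodCenters
// of agent type BloodCenter with parameters name (String), lat and lon (double).
// All identifiers below are assumptions for illustration.
List<Tuple> rows = selectFrom(blood_centers).list();
for (Tuple row : rows) {
    BloodCenter bc = add_bloodCenters();       // add one agent per database row
    bc.name = row.get(blood_centers.name);     // if the String conversion is really broken,
                                               // the exception thrown here should explain why
    bc.lat = row.get(blood_centers.lat);
    bc.lon = row.get(blood_centers.lon);
}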

Related

Get or Retrieve Generated PKs after a massive insert SQLLDR

I'll be direct about my situation. I'm working on a project that performs a "base load" procedure from an Excel (xlsx, xls) file. It has been developed in Java with JDBC drivers. Right now the project works: it takes an Excel file and, based on a configuration, performs the inserts into different tables. The problem is that it takes too long, which makes it inefficient (around 2 hours to insert 3,000 records into the DB). In the future this software will be inserting around 30k records, and it will be painfully slow. So I need to improve its efficiency, and I was thinking: instead of inserting from Java via JDBC drivers, I will generate control files and data files to be loaded into the DB using SQLLDR.
The point I'm facing right now: I need to insert this data into several tables, and these tables are related to each other. That means that if I insert a person into "Person_table", I will need the primary key generated by a database sequence to insert the address, phone, email, etc. into the other tables, and I do not know how to get the primary keys generated by the first insert via SQLLDR.
I'm not sure yet whether SQLLDR is the best way to do this, but I guess it is, because the DBMS is Oracle.
Can you guys point me toward how I could do what I described? Any suggestion is welcome and well received, even if it is not about doing this with SQLLDR.
I'm kind of stuck at this point right now, and I really appreciate any help you can give me.
SQL*Loader can't read native Excel files (at least, as far as I know). Therefore, you'll have to save the result as a CSV file.
As you need to manipulate foreign key constraints, consider switching to the external tables feature. Basically, the background is still SQL*Loader, but you can write (PL/)SQL against those files/tables (yes, a CSV file stored on a hard disk acts as if it were an Oracle table).
So, you'd "load" one table, populate the primary key values, then populate another (child) table, possibly into a "temporary" table (not necessarily a global temporary table) which doesn't have any constraints enabled, populate the foreign key values, and finally move the data into the "real" target table whose constraints now won't fail.
Possible drawback: the CSV files have to reside in a directory that is accessible to the database server, as you'll have to create a directory (an Oracle object) and grant the required privileges (usually read, write) to the user who will be using it. The directory is usually created on the server itself; if not, you'll have to use a UNC path while creating it.
Now you have something to read about/research; see if it makes sense to you.
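If it helps to see those moving parts together, here is a rough JDBC sketch of that flow. Everything in it is an assumption for illustration: the connection details, the directory path, the CSV file name, and the person_ext / person_table / phone_table / person_seq names; creating the directory also requires the corresponding privilege.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExternalTableLoad {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "loader", "secret");
             Statement st = con.createStatement()) {
            con.setAutoCommit(false);

            // The CSV exported from Excel must sit in a directory visible to the DB server
            st.execute("CREATE OR REPLACE DIRECTORY load_dir AS '/u01/app/load_files'");

            // External table: the CSV then behaves like a read-only Oracle table
            st.execute(
                "CREATE TABLE person_ext (first_name VARCHAR2(100), last_name VARCHAR2(100), phone VARCHAR2(30)) "
                + "ORGANIZATION EXTERNAL (TYPE ORACLE_LOADER DEFAULT DIRECTORY load_dir "
                + "ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',') "
                + "LOCATION ('persons.csv'))");

            // Load the parent table first, letting the sequence generate the primary keys
            st.execute(
                "INSERT INTO person_table (person_id, first_name, last_name) "
                + "SELECT person_seq.NEXTVAL, first_name, last_name FROM person_ext");

            // Then load the child table, looking the generated keys back up via the natural key
            st.execute(
                "INSERT INTO phone_table (person_id, phone) "
                + "SELECT p.person_id, e.phone FROM person_ext e "
                + "JOIN person_table p ON p.first_name = e.first_name AND p.last_name = e.last_name");

            con.commit();
        }
    }
}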

How to manipulate the production database data format using Spring?

The situation looks like this:
the Spring production application uses a table with a varchar column (MySQL),
I need to change the column to a binary blob,
and encrypt the existing data (using Java, not the database).
The two known steps of this process are:
I need to update the Entity, changing the annotations (from varchar to blob)
I need to run the migration changing the column format (using SQL with Flyway)
All of this will be done when I stop the application, replace the application jar (with the new version containing the changes) and run it again.
The problem:
I need to take the old data from the column while it is still a varchar, encrypt it with Java, and after the migration store it again in the blob column. (The new data entered after the change is not a problem, since it will be encrypted automatically; the problem is the old data.)
What approach should I use to deal with this update? What would the correct general steps be?
This is more of a general Spring question: how do you deal with the situation when you need to change a structure that already exists in production and manipulate the old data to fit the new format?
For example, in PHP I use a terminal script integrated with the application and run in the application environment (with the "artisan" command in Laravel); there I can easily set up the proper order of actions: read the old data and remember it, change the database structure, transform the old data, and insert it into the new structure. It's all in one script and one transaction. I don't know how to do this in Spring.
I've found an answer: Flyway's Java-based migrations, https://flywaydb.org/documentation/migrations#java-based-migrations
Java-based migrations are a great fit for all changes that can not easily be expressed using SQL. These would typically be things like BLOB & CLOB changes, Advanced bulk data changes (Recalculations, advanced format changes, …)
It looks like this:
package db.migration;

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Example of a Spring Jdbc migration.
 */
public class V1_2__Another_user implements SpringJdbcMigration {
    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        jdbcTemplate.execute("INSERT INTO test_user (name) VALUES ('Obelix')");
    }
}
Database manipulation can be interleaved here with Java code and data manipulation.
I suspect that Liquibase has similar functionality.
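For the concrete case above, such a migration could look roughly like the sketch below. The table and column names (user_secret, id, secret, secret_blob) and the encrypt() helper are placeholders, it assumes a preceding SQL migration already added the nullable blob column, and note that newer Flyway versions use BaseJavaMigration instead of the SpringJdbcMigration interface quoted above.

package db.migration;

import java.util.List;
import java.util.Map;

import org.flywaydb.core.api.migration.spring.SpringJdbcMigration;
import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Sketch: encrypt the legacy varchar values with Java and store them in the new blob column.
 */
public class V1_3__Encrypt_legacy_secrets implements SpringJdbcMigration {

    public void migrate(JdbcTemplate jdbcTemplate) throws Exception {
        // Read the old plain-text values while they are still accessible
        List<Map<String, Object>> rows =
                jdbcTemplate.queryForList("SELECT id, secret FROM user_secret");

        for (Map<String, Object> row : rows) {
            Number id = (Number) row.get("id");
            byte[] encrypted = encrypt((String) row.get("secret"));   // your Java-side encryption
            jdbcTemplate.update(
                    "UPDATE user_secret SET secret_blob = ? WHERE id = ?", encrypted, id);
        }
        // A later SQL migration can then drop the old varchar column
    }

    private byte[] encrypt(String plain) {
        // Placeholder: call your real encryption code here
        return plain.getBytes();
    }
}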

Appropriate way to pass a dataset to Java from Oracle PL/SQL

I need to pass datasets from Oracle to Java through JDBC.
How should I organize this so that everything works well and it is convenient for both the Java developers and the PL/SQL developers to maintain the code when, for example, table column types change?
I see these variants:
Pass a sys_refcursor via a stored procedure, and in Java expect that there will be certain fields with certain data types.
Pass a strong ref cursor and in Java do the same as in item 1, but with the type description kept in the PL/SQL package.
Pass a SQL "table of" type declared at the schema level. If I understand correctly, in Java it can somehow be mapped onto an object. The problem is that in these types it is impossible to declare fields with the column type (Column_Name%TYPE).
Keep a "table of object / record" type in the PL/SQL package and use JPublisher to work with it; JPublisher apparently converts it into a SQL type. It is not entirely clear to me how this is implemented, and what needs to be done in the same case when a column's data type changes.
Use a pipelined function instead of a cursor (does this even make sense for such a task?).
What should I choose? Or maybe something else, not from this list?
P.S. Sorry for bad English.
I'm not sure that I've understood your question right, but I think you are confused.
The variants you described are ways to execute a Java package on the server side (for example, when you have a database with application servers and want to execute a Java package on it against the database's data).
But if you are thinking about JDBC, then I guess you want to build a Java app that works with the database. In that case you don't have to use a sys_refcursor or types like "table of object / record". JDBC provides the capability to work with datasets using plain SQL: you just connect to the database as a user (via JDBC) and execute a SQL query. After that you can read any data from the result set.
Examples:
Connection example via JDBC
Execute select after connection
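For instance, a plain JDBC read looks roughly like this; the connection URL, credentials and the classic emp sample table are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchEmployees {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1";   // placeholder connection details
        try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT empno, ename, sal FROM emp WHERE deptno = ?")) {
            ps.setInt(1, 10);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    int empNo = rs.getInt("empno");
                    String name = rs.getString("ename");
                    double salary = rs.getDouble("sal");
                    System.out.printf("%d %s %.2f%n", empNo, name, salary);
                }
            }
        }
        // If the PL/SQL side must expose a procedure returning a SYS_REFCURSOR instead,
        // the same loop works on the ResultSet obtained through a CallableStatement:
        //   CallableStatement cs = con.prepareCall("{call emp_pkg.get_emps(?, ?)}");
        //   cs.setInt(1, 10);
        //   cs.registerOutParameter(2, oracle.jdbc.OracleTypes.CURSOR);
        //   cs.execute();
        //   ResultSet cur = (ResultSet) cs.getObject(2);
    }
}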
So the answer to your question depends on your goals.

How to tell initial data load to insert only the values which are not there in target db?

I have some large data in one table and a smaller amount of data in another table. Is there any way to run a GoldenGate initial load so that the data that is the same in both tables won't be changed, and the rest of the data gets transferred from one table to the other?
Initial loads are typically for when you are setting up the replication environment; however, you can do this on single tables as well. Everything in the Oracle database is driven by System Change Numbers / Commit Sequence Numbers (SCN/CSN).
By using the SCN/CSN, you can identify what the starting point in the table should be and start CDC from there. Anything prior to that SCN/CSN will not get captured and would require you to move that data manually in some fashion. That can be done by using Oracle Data Pump (Export/Import).
Oracle GoldenGate also provides a parameter called SQLPREDICATE that allows you to use a "where" clause against a table. This is handy with initial-load extracts because you would do something like TABLE <schema>.<table>, SQLPREDICATE "AS OF SCN <scn>". The data up to that point would then be captured and moved to the target side for a replicat to apply into the table. You can reference that here:
https://www.dbasolved.com/2018/05/loading-tables-with-oracle-goldengate-and-rest-apis/
Official Oracle Doc: https://docs.oracle.com/en/middleware/goldengate/core/19.1/admin/loading-data-file-replicat-ma-19.1.html
On the replicat side, you would use HANDLECOLLISIONS to kick out any duplicates. Then, once the load is complete, remove it from the parameter file.
Lots of details, but I'm sure this is a good starting point for you.
That would require programming in Java:
1) First, read your source database.
2) Decide which data has to be added to which table, based on the data that was read.
3) Execute the insert/update queries to submit that data to the target tables.
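A minimal JDBC sketch of those three steps, assuming made-up connection details and a customers table with an id primary key and a name column:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class CopyMissingRows {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection("jdbc:oracle:thin:@//srchost:1521/SRC", "app", "pw");
             Connection tgt = DriverManager.getConnection("jdbc:oracle:thin:@//tgthost:1521/TGT", "app", "pw");
             Statement read = src.createStatement();
             ResultSet rs = read.executeQuery("SELECT id, name FROM customers");
             PreparedStatement exists = tgt.prepareStatement("SELECT COUNT(*) FROM customers WHERE id = ?");
             PreparedStatement insert = tgt.prepareStatement("INSERT INTO customers (id, name) VALUES (?, ?)")) {
            while (rs.next()) {
                long id = rs.getLong("id");
                exists.setLong(1, id);
                try (ResultSet c = exists.executeQuery()) {
                    c.next();
                    if (c.getInt(1) == 0) {          // only copy rows missing on the target
                        insert.setLong(1, id);
                        insert.setString(2, rs.getString("name"));
                        insert.executeUpdate();
                    }
                }
            }
        }
    }
}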
If you want to run an initial load using GoldenGate:
The target tables should be empty.
Data: Make certain that the target tables are empty. Otherwise, there may be duplicate-row errors or conflicts between existing rows and rows that are being loaded. (Link to Oracle Documentation)
If they are not empty, you have to handle the conflicts. For instance, if a row you are inserting already exists in the target table (INSERTROWEXISTS), you could discard it, if that's what you want to do. (Link to Oracle Documentation)

Populating a MySQL database with values

I have a locally installed MySQL server on my laptop, and I want to use the information in it for a unit test, so I want to create a script to generate all the data automatically. I'm using MySQL Workbench, which already generates the tables (from the model). Is it possible to use it, or another tool, to create a script that populates them with data?
EDIT: I see now that I wasn't clear. I do have meaningful data for the unit test. When I said "generate all the data automatically", I meant that the tool should take the meaningful data I have in my local DB today and create a script that generates the same data in other developers' DBs.
The most useful unit tests are those that reflect data you expect or have seen in practice. Pumping your schema full of random bits is no substitute for carefully crafted test data. As @McWafflestix suggested, mysqldump is a useful tool, but if you want something simpler, consider using LOAD DATA INFILE, which populates a table from a CSV file.
Some other things to think about:
Test with a database in a known state. Wrap all your database interaction unit tests in transactions that always roll back.
Use dbunit to achieve the same end.
Update
If you're in a Java environment, dbUnit is a good solution:
You can import and export data in an XML format through its APIs, which solves the issue of moving the data from your computer to other members of your team.
It's designed to restore database state: it snapshots the database before the tests execute and restores it at the end, so the tests are side-effect free (i.e., they don't permanently change data).
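A minimal sketch of both sides of that with dbUnit's API; the JDBC URL, credentials and file name are placeholders:

import java.io.File;
import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbUnitSnapshot {
    public static void main(String[] args) throws Exception {
        Connection jdbc = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "root", "secret");   // placeholder details
        IDatabaseConnection connection = new DatabaseConnection(jdbc);

        // Export: snapshot the current, meaningful data to an XML file you can share/commit
        IDataSet snapshot = connection.createDataSet();
        FlatXmlDataSet.write(snapshot, new FileOutputStream("dataset.xml"));

        // Import: on another developer's machine, load the same data before running the tests
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("dataset.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);

        connection.close();
    }
}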
You can populate the table with defaults (if they are defined):
CREATE TABLE #t(c1 int DEFAULT 0, c2 varchar(10) DEFAULT '-')
GO
-- This inserts 50 rows into the table (GO 50 repeats the batch 50 times)
INSERT INTO #t DEFAULT VALUES
GO 50
SELECT * FROM #t
DROP TABLE #t
