Access data from a BIRT dataset within a Java class - java

I am relatively new to BIRT in Eclipse. I created a Java project and a report file. The report has an Oracle DB connection, and I am working with one table. I want to access BIRT data sets from Java so I can manipulate the data rather than using only JavaScript. My idea is to hook a Java class into the data set's onFetch() event, but I'm unsure whether that is the right approach.

What you want are "computed columns". For a data set, you can add computed columns in addition to the columns fetched from your data source. In the computation expression you can use the fetched columns as input, so the resulting data set contains both the fetched columns and the computed ones.
For example, a computed column with the expression row["PK"] + 1 adds 1 to the fetched value of the column PK.


Load database tables from MySQL

I am working on a simulation of a blood supply chain and created and imported some tables to manage the masterdata of various agent populations like blood processing centres, testing centres, hospitals and so on. These tables contain the name of said agent and the lat/lon coordinates.
These tables are all part of a MySQL database that I connected to AnyLogic through its interface and, as I said, imported. So far so good. However, when I want to create the agent populations for each database entry and assign the agents' parameters from the respective fields of the table, AnyLogic can't assign the name (VARCHAR in MySQL, String in the imported AnyLogic database) to the agent's name parameter of type String. Every other type works; only Strings are giving me trouble.
(Screenshots: Database in AnyLogic; Agent and parameter; Create population from database)
As a side note, when I copy all of the database contents into Excel and import the Excel sheet, it works just fine. It only struggles with databases imported from MySQL, even though the database in AnyLogic looks exactly the same regardless of the import method.
Looks like a bug either in the population properties (e.g., the types are compatible, it just thinks they're not), or in the MySQL import (e.g., some special Unicode characters in that column cause the import to give it a weird HSQLDB type which can be set up but not then converted to String; the AnyLogic DB is a normal HSQLDB database). To rule out the former, try not setting the name parameter in the population properties, then read all the rows at model startup (use the Insert Database Query wizard to help you) and try to assign the name parameter then. (That may also give you a more useful exception/error message...)
(I can't easily set up a MySQL DB to confirm this. It would be worth also trying with a minimal example model with the MySQL table only having that 'string' column, and then sending that to AnyLogic support if the problem persists.)

Is it possible to create a Spring Batch job that reads and writes specific CSV columns?

I have a working implementation of a user uploading a CSV, and selecting specific columns to persist in the db (these will be used for math operations/transformed later on). This same operation also creates a "dump" table using those column names.
This CSV has no specific layout or format - meaning that I do not have information about their column order or column names other than what they have specified. They will also always have a bunch of additional columns that I won't need (1000+) - which means I would only like to get the very specific columns that the user selected. The CSV, however, will always be comma-delimited and with minimal quoting mode. Obviously nothing is static here, as the selected columns and their layout will vary from CSV to CSV.
Is it possible to use the columns specified by the already created functionality and use them to read the CSV and insert them in a table? If so where do I start? And what additional information would I need (column position/index, etc)?
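Yes, this is possible. The core of it is to resolve the user-selected column names against the CSV header to get their positions, then keep only those fields from each record. Here is a plain-Java sketch of that lookup (class and column names are made up for illustration; in Spring Batch, the resolved indices would typically feed a FlatFileItemReader whose DelimitedLineTokenizer is configured with the selected fields):

```java
// Sketch: resolve user-selected column names to header positions, then
// project each CSV record down to just those columns. Uses a naive split,
// which is fine for comma-delimited input with minimal quoting as long as
// values contain no embedded commas.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvColumnSelector {

    // Map each selected column name to its position in the header row.
    public static Map<String, Integer> resolveIndices(String headerLine, List<String> selected) {
        List<String> header = Arrays.asList(headerLine.split(",", -1));
        Map<String, Integer> indices = new LinkedHashMap<>();
        for (String name : selected) {
            int i = header.indexOf(name);
            if (i < 0) throw new IllegalArgumentException("Column not found: " + name);
            indices.put(name, i);
        }
        return indices;
    }

    // Keep only the selected fields from one data line.
    public static List<String> project(String line, Map<String, Integer> indices) {
        String[] fields = line.split(",", -1);
        List<String> values = new ArrayList<>();
        for (int i : indices.values()) values.add(fields[i]);
        return values;
    }

    public static void main(String[] args) {
        Map<String, Integer> idx = resolveIndices("id,junk,price,more_junk,qty",
                Arrays.asList("price", "qty"));
        System.out.println(project("1,x,9.99,y,3", idx)); // [9.99, 3]
    }
}
```

Since nothing is static, you would build the reader per uploaded file: peek at the header first, compute the indices for that user's selection, and only then construct the step that reads the rest of the file and writes into the dump table.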

Get the database fields programmatically in Java

I have a jrxml file (sample shown below) which has the database query embedded in it. This query returns different columns against different databases. Since the columns vary, I plan to programmatically load the jrxml file, read the fields returned by the embedded query, and then place them on the report.
I have two questions:
How do I get the field names returned from the query embedded in the jrxml?
How do I iterate through those fields so that they can be placed on the jrxml?
Any sample code would be appreciated.
Please note my preference is to use the Jasper APIs only.
How do you intend to query the database if you do not know the column names? The only case I can think of is that you are always going to select all the columns.
What I think you need is a parameterized query which will allow you to pass column names as parameters. See this page on using report parameters.
If you really want to always select all the table columns, then before filling the report you will have to retrieve the table metadata and pass the column names to the report as parameters. If you are using JDBC then you simply need to call java.sql.Connection.getMetaData() and query the MetaData object for column names. However, hardcoding SELECT * is potentially dangerous because your result sets will keep growing as new columns are added to the table.
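To make the metadata step concrete, here is a sketch of collecting column names from JDBC metadata. In a real application the ResultSetMetaData would come from stmt.executeQuery(sql).getMetaData() (or from DatabaseMetaData.getColumns for a table); since no database is available in this sketch, a dynamic proxy stands in for the metadata object:

```java
// Sketch: pull column names out of JDBC metadata so they can be passed to
// the report as parameters. The helper works on any ResultSetMetaData; the
// proxy below is only a stand-in test double for a real database.
import java.lang.reflect.Proxy;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ReportColumns {

    public static List<String> columnNames(ResultSetMetaData md) throws SQLException {
        List<String> names = new ArrayList<>();
        for (int i = 1; i <= md.getColumnCount(); i++) { // JDBC indices are 1-based
            names.add(md.getColumnLabel(i));
        }
        return names;
    }

    // Test double: a ResultSetMetaData describing the given column labels.
    public static ResultSetMetaData fakeMetaData(String... labels) {
        return (ResultSetMetaData) Proxy.newProxyInstance(
                ReportColumns.class.getClassLoader(),
                new Class<?>[]{ResultSetMetaData.class},
                (proxy, method, args) -> {
                    switch (method.getName()) {
                        case "getColumnCount": return labels.length;
                        case "getColumnLabel": return labels[(Integer) args[0] - 1];
                        default: throw new UnsupportedOperationException(method.getName());
                    }
                });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(columnNames(fakeMetaData("ID", "NAME", "CREATED_AT")));
    }
}
```

On the Jasper side, if I remember the API correctly, JRXmlLoader.load(...) gives you a JasperDesign whose getFields() lists the declared fields, which you can iterate to place text elements programmatically; check that against the JasperReports version you are using.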

How to tell initial data load to insert only the values which are not there in target db?

I have a large amount of data in one table and a small amount in another. Is there any way to run a GoldenGate initial load so that the data already present in both tables is left unchanged and only the rest of the data gets transferred from one table to the other?
Initial loads are typically for when you are setting up the replication environment; however, you can do this for single tables as well. Everything in the Oracle database is driven by System Change Numbers (SCN), which GoldenGate calls Commit Sequence Numbers (CSN).
By using the SCN/CSN, you can identify what the starting point in the table should be and start CDC from there. Anything prior to that SCN/CSN will not get captured and would require you to manually move that data in some fashion. That can be done by using Oracle Data Pump (Export/Import).
Oracle GoldenGate also provides a parameter called SQLPREDICATE that allows you to apply a "where"-style clause against a table. This is handy with initial-load extracts because you would do something like TABLE <schema>.<table>, SQLPREDICATE "AS OF SCN <scn>". Data up to that point would then be captured and moved to the target side for a replicat to apply into a table. You can reference that here:
https://www.dbasolved.com/2018/05/loading-tables-with-oracle-goldengate-and-rest-apis/
Official Oracle Doc: https://docs.oracle.com/en/middleware/goldengate/core/19.1/admin/loading-data-file-replicat-ma-19.1.html
On the replicat side, you would use HANDLECOLLISIONS to kick out any duplicates. Then, once the load is complete, remove it from the parameter file.
Lots of details, but I'm sure this is a good starting point for you.
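As a rough sketch of how those pieces fit together, the initial-load extract and replicat parameter files could look something like the following. Everything here is hypothetical (group names, host, schema/table, and the SCN are placeholders), and the exact parameter spellings should be checked against the GoldenGate reference for your version:

```
-- Hypothetical initial-load extract parameter file
EXTRACT initld
USERIDALIAS ggadmin
RMTHOST target.example.com, MGRPORT 7809
RMTFILE ./dirdat/il, MEGABYTES 500
TABLE hr.employees, SQLPREDICATE "AS OF SCN 1234567";

-- Hypothetical replicat applying the load; HANDLECOLLISIONS absorbs
-- duplicate-row errors. Remove it once the initial load completes.
REPLICAT initrp
USERIDALIAS ggadmin
HANDLECOLLISIONS
MAP hr.employees, TARGET hr.employees;
```

After the load, CDC extract/replicat started from the same SCN picks up everything from that point forward, which is what keeps the two tables from diverging.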
That would require programming in Java.
1) First, read your database.
2) Decide which data has to be added to which table, based on the data that was read.
3) Execute update/insert queries to submit the data to the tables.
If you want to run an initial load using GoldenGate, the target tables should be empty. From the Oracle documentation:
Make certain that the target tables are empty. Otherwise, there may be duplicate-row errors or conflicts between existing rows and rows that are being loaded.
If they are not empty, you have to handle conflicts. For instance, if a row you are inserting already exists in the target table (the INSERTROWEXISTS conflict), you could discard it, if that's what you want to do. See the Oracle documentation on conflict resolution for the details.

Can a Derby database contain rows with different numbers of columns?

That is, I want to dynamically create extra columns for specific users, if required, from the JSP pages of my web app. Is this possible, or is there another way of achieving the same thing?
Short answer is no. Every row in a table must have the same number of columns.
If there is no applicable value for a column one typically inserts NULL (SQL NULL which is different from Java null). Alternatively you could change your data model and put the optional values in a different table, and use a join when you want to read the optional columns.
Finally, you could also represent the optional info in a Java object and serialize that into a Blob which you store in your table, but I would caution you against this approach since it prevents you from querying on the values in the Blob, and you get an upgrade problem if the format of the Blob object changes.
Hibernate allows you to dynamically create database tables, provided you know the rows and columns.
If you don't want to add that and still want to keep it simple, use scriptlet ("<% %>") blocks in the JSP containing Java code to create or modify the table.
Instead of scriptlets, you could use Struts / JSF / Spring tags for cleaner code.
