I am doing a Java project connected to Documentum and I need to retrieve data from an object table. The thing is, when I retrieve from one table I get the answers in at most 2 seconds for each of the following tables with the following DQLs:
SELECT * FROM cosec_general
and
SELECT * FROM dm_dbo.cosec_general_view
However, once I join those two tables and retrieve the result, it takes 5 minutes.
Is there any way that I can make it faster?
Here is the DQL that I use to join them and get the columns that I need:
SELECT dm_dbo.cosec_general_view.name, dm_dbo.cosec_general_view.comp_id,
dm_dbo.cosec_general_view.bg_name, dm_dbo.cosec_general_view.incorporation_date,
dm_dbo.cosec_general_view.status, dm_dbo.cosec_general_view.country_name,
cosec_general.acl_domain, cosec_general.acl_name
FROM dm_dbo.cosec_general_view, cosec_general
There is no condition on the fields you are trying to join, so the query produces a Cartesian product of the two tables (which is why it takes minutes).
Add a WHERE clause containing the join condition, like
WHERE dm_dbo.cosec_general_view.field_1=cosec_general.field_2
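Applied to the query above, a sketch might look like this; the comp_id = comp_id condition is only a guess for illustration, so use whichever field actually relates the view to the object type:
SELECT dm_dbo.cosec_general_view.name, dm_dbo.cosec_general_view.comp_id,
dm_dbo.cosec_general_view.bg_name, dm_dbo.cosec_general_view.incorporation_date,
dm_dbo.cosec_general_view.status, dm_dbo.cosec_general_view.country_name,
cosec_general.acl_domain, cosec_general.acl_name
FROM dm_dbo.cosec_general_view, cosec_general
WHERE dm_dbo.cosec_general_view.comp_id = cosec_general.comp_id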
You are using the wrong approach. In the query
SELECT * FROM cosec_general
the asterisk (*) means "return everything". Once the information is loaded into memory, manipulating the objects there should take only milliseconds.
I am currently trying to push some data from Java into BigQuery, in order to be able to make use of it only for the next query and then to get rid of it.
The data consists of 3 columns which exist in the table I want to query. Therefore, by creating a temporary table containing this data, I could do a left join and get the query results I need.
This process will happen on a scheduled basis, having different sets of data.
Can you please tell me if that can be done?
Many thanks!
Using the jobs.query API, you can specify a destinationTable as part of configuration.query. Does that help? You can control the table expiration time using the tables.update API and setting expirationTime.
Alternatively, you can "inline" the table as part of the query that you want to run using a WITH clause in standard SQL rather than writing to a temporary table.
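For the WITH-clause route, a minimal sketch in BigQuery standard SQL; the dataset/table name and the three columns col1, col2, col3 are placeholders for illustration:
WITH temp_data AS (
  -- the three columns pushed from Java, inlined as literals (placeholder values)
  SELECT 'a' AS col1, 1 AS col2, DATE '2021-01-01' AS col3
  UNION ALL
  SELECT 'b', 2, DATE '2021-01-02'
)
SELECT t.*
FROM temp_data AS d
LEFT JOIN my_dataset.my_table AS t  -- placeholder table
  ON t.col1 = d.col1 AND t.col2 = d.col2 AND t.col3 = d.col3;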
We want to programmatically copy all records from one table to another periodically.
Right now I use SELECT * FROM users LIMIT 2 OFFSET <offset> to fetch records.
The table records look like this:
user_1
user_2
user_3
user_4
user_5
user_6
When I fetched the first page (user_1, user_2), the record "user_2" was deleted from the source table.
Now the second page I fetch is (user_4, user_5) and the third page is (user_6).
As a result, the record "user_3" is missing from the destination table.
The real source table may have 1,000,000 records. How can I solve this problem effectively?
First you should have a unique index on the source table and use it in an ORDER BY clause to make sure that the order of the rows is consistent over time. Then do not use offsets, but start after the last element fetched.
Something like:
SELECT * FROM users ORDER BY id LIMIT 2;
for the first time, and then
SELECT * FROM users WHERE id > last_received_id ORDER BY id LIMIT 2;
for the next ones.
This will be immune to asynchronous deletions.
If you have no unique index but do have a non-unique one on your table, you can still apply the above solution with a non-strict comparison operator. You will consistently re-fetch the last rows of the previous page; that would certainly break with LIMIT 2, but it can work for reasonable page sizes, with the duplicates around the page boundary filtered out in the application.
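A sketch of that non-strict variant, assuming the non-unique index is on a hypothetical created_at column:
-- created_at is a placeholder for the non-uniquely indexed column
SELECT * FROM users WHERE created_at >= last_received_created_at ORDER BY created_at LIMIT 100;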
If you have no index at all - which is known to cause various other problems - the only reliable way is to run one single big SELECT and use an SQL cursor to page through it.
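A sketch of that cursor approach, in PostgreSQL-flavoured syntax for illustration (the exact cursor syntax differs per engine, and the cursor name and fetch size are arbitrary):
BEGIN;
DECLARE users_cur CURSOR FOR SELECT * FROM users;
-- repeat the FETCH until it returns no rows, copying each batch to the destination
FETCH FORWARD 1000 FROM users_cur;
CLOSE users_cur;
COMMIT;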
I have 2 DBs, Database A and Database B.
What I want to achieve:
Build records from Database A and insert them into Database B
Process those records in my Java app
What I'm currently doing:
I use two separate queries:
For (1) I use INSERT INTO ... SELECT ...
For (2) I perform another SELECT.
My solution works but it isn't optimal, since I'm reading the records from Database A twice (instead of just once).
Is there a way to execute the INSERT INTO ... SELECT ... and get the inner select result as a ResultSet?
I know I could perform only a SELECT and then insert the records in a batch, but that's a bit cumbersome and I want to find out if there's a cleaner solution.
Your "cleaner" solution looks more cumbersome than a simple read and write operation.
Since you have to manipulate the data that goes into Database B, you can simply do this:
Read Data from A to your app
Process data
Write data to B from your app
Then you have a single read and a single write, and it stays simple.
You cannot get the result of INSERT INTO ... SELECT as a ResultSet, because it is an INSERT statement.
Sadly, I do not think that this is possible. What you are trying to achieve are two distinct operations, i.e. an INSERT and a SELECT. However you cut it, you are still going to have to do at least one INSERT and one SELECT.
Use this across the two databases:
INSERT INTO Database2.table_name (field1, field2, field3)
SELECT field1, field2, field3 FROM Database1.table_name;
Both tables must have the same field names (table_name here is a placeholder).
I'm dealing with a problem here. A Java web application calls a stored procedure in Oracle which has as OUT parameters some varchars and a parameter whose type is a ref cursor returning a record type (both explicitly defined).
The content of the ref cursor is gathered using a complex query which, I guess, runs in O(n) with the number of records in a table.
The idea is to paginate the result on the server because getting all the data causes a long delay (500 records take about 40-50 seconds due to the calculation and the join resolution). I've already rebuilt the query using row_number()
open out_rcvar for
SELECT *
FROM ( select a, b, c,..., row_number() over (order by f, g) rn
from t1, t2,...
where some_conditions
) where rn between initial_row and final_row
order by rn;
in order to avoid the offset-limit approach (and its equivalent in Oracle). But here's the catch: the user wants a pagination menu like
[first || <<5previous || 1 2 3 4 5 || next5>> || last ]
and knowing the total number of rows implies counting (hence, "querying") the whole result set and taking the whole 50 seconds. What approach could I use here?
Thanks in advance for your help.
EDIT: The long query should not be set up as a materialized view because the data in the records is required to be up to date as it is requested (the web app does some operations with the data and needs to know if the selected item is "available" or "sold").
You could do something like:
SELECT *
FROM ( select count(*) over () total_rows, a, b, c,..., row_number() over (order by f, g) rn
from t1, t2,...
where some_conditions
) where rn between initial_row and final_row
order by rn;
This is probably inefficient given your description, but if you find some quicker way to calculate the total rows, you could stick it in the inner select and return it with every row. It's not great, but it works and it's a single select (as opposed to having one query for the total row count and a second one for the actual rows).
What is the performance if you do not select any columns but just a count to determine the rows? Is that acceptable?
And use that as a guide to build the pagination.
Otherwise, without knowing the count, we have no way to build the page numbers (1, 2, 3, 4, 5).
The other option is to not show the number of pages, but just show next and previous.
Just my thoughts.
Perhaps you might consider creating a temporary table. You could store your results there and then use some paging mechanism. This way the expensive computation is done once; after that you only select the data, which will be pretty fast.
There is one catch in this approach: you have to ensure that you do not break the session, since temporary tables are private and exist only for your session. Take a look at this link.
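A sketch of that approach in Oracle, using a global temporary table whose rows live only for the session; the column names and types are placeholders based on the query above:
-- created once as DDL; rows are private to the session and vanish when it ends
CREATE GLOBAL TEMPORARY TABLE tmp_page_results (
  a  VARCHAR2(100),
  b  VARCHAR2(100),
  c  NUMBER,
  rn NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO tmp_page_results (a, b, c, rn)
SELECT a, b, c, row_number() over (order by f, g)
FROM t1, t2
WHERE some_conditions;

SELECT * FROM tmp_page_results
WHERE rn BETWEEN initial_row AND final_row
ORDER BY rn;
The total for the pagination menu then comes from a cheap SELECT COUNT(*) FROM tmp_page_results, so the expensive join runs only once per session.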
I have been asked in an interview to write an SQL query which fetches the three records with the highest value in some column of a table. I had written a query which fetched all the records with the highest value, but didn't get how exactly I can get only the first three of those.
Could you help me with this?
Thanks.
SELECT TOP 3 * FROM Table ORDER BY FieldName DESC
From here, but might be a little out of date:
Postgresql:
SELECT * FROM Table ORDER BY FieldName DESC LIMIT 3
MS SQL Server:
SELECT TOP 3 * FROM Table ORDER BY FieldName DESC
mySQL:
SELECT * FROM Table ORDER BY FieldName DESC LIMIT 3
Select Top 3....
Depending on the database engine, either
select top 3 * from table order by column desc
or
select * from table order by column desc limit 3
The syntax for TOP 3 varies widely from database to database.
Unfortunately, you need to use those constructs for the best performance.
Libraries like Hibernate help here, because they can translate a common API into the various SQL dialects.
Since you are asking about Java, it is also possible to just SELECT everything from the database (with an ORDER BY) and fetch only the first three rows. Depending on how the query is executed this might be good enough (especially if no sorting has to happen on the database side thanks to appropriate indexes, for example when you sort by primary key fields).
But in general, you want to go with an SQL solution.
In Oracle you can also use WHERE ROWNUM < 4...
Also, MySQL has a LIMIT keyword (I think).
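For reference, the Oracle ROWNUM variant needs a subquery so the ORDER BY is applied before ROWNUM filters the rows; a sketch with placeholder names (my_table, my_column):
SELECT *
FROM (SELECT * FROM my_table ORDER BY my_column DESC)
WHERE ROWNUM < 4;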