Data validation through Selenium scripts between reports and a backend database - Java

Problem statement: I have a report that is viewable from an online portal, and its data is populated from the data mart by various stored procedures.
I want to validate the report's data on the online screen against SQL queries that I have developed for testing. The problem is that the report has about 20 different fields and 2 or 3 sections, and each section and field is populated by its own query or stored procedure.
The major challenge I am facing is that I can get the data from the online screen easily, but I am not sure how to get the data from the backend for the validations.
I tried writing a macro for this, and it returned the results, but formatting those results into the shape of the report is becoming a cumbersome job. And this needs to be done for around 40+ reports.
Any ideas for tackling this kind of situation would really help me out.
Thanks in advance.
To generalize, think of this as a report-testing scenario: we view the reports on an online screen and validate their data against the backend using custom queries developed by the testing team from the business logic (not the developers' queries), so that an independent validation can be carried out.
This whole testing portion would run as part of the automated regression suite being developed for the portal using Selenium and Java.

Your problem statement indicates that you are attempting to verify a specific report's data, as opposed to the display of that data in the GUI. For this problem I would recommend that you eliminate the GUI from the test and use the product's API to retrieve the report content. From there, you could store the results in a SQLite table, for example, then write code to compare the table contents with the results of your comparison queries.
This approach eliminates the need to process the GUI content, letting you focus on the task at hand: verifying the report content.
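A minimal sketch of that comparison, assuming the report content has already been staged into a SQLite file and a tester-owned query runs against the data mart (all connection strings, table names, and queries below are hypothetical):

import java.sql.*;
import java.util.*;

public class ReportDataComparator {

    // Fetch every row of a query as a pipe-joined string, sorted, so two
    // result sets can be compared independently of row order.
    static List<String> fetchRows(Connection conn, String sql) throws SQLException {
        List<String> rows = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    row.append(rs.getString(i)).append('|');
                }
                rows.add(row.toString());
            }
        }
        Collections.sort(rows);
        return rows;
    }

    public static void main(String[] args) throws SQLException {
        // Hypothetical connections: a SQLite file holding the staged report
        // content, and the data mart the testing team queries directly.
        try (Connection reportDb = DriverManager.getConnection("jdbc:sqlite:report_staging.db");
             Connection martDb = DriverManager.getConnection(
                     "jdbc:mysql://mart-host:3306/mart", "test_user", "secret")) {

            List<String> reported = fetchRows(reportDb,
                    "SELECT account_id, balance FROM report_section_1");
            List<String> expected = fetchRows(martDb,
                    "SELECT account_id, balance FROM accounts WHERE active = 1");

            System.out.println(reported.equals(expected)
                    ? "Section 1 matches."
                    : "Mismatch: " + reported.size() + " vs " + expected.size() + " rows");
        }
    }
}

The same fetch-and-sort helper can be reused across the 40+ reports; only the pair of queries changes per section.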
By the way, if your GUI does a lot of additional processing of the report data (e.g. filtering, sorting), you'll need a different set of test cases to verify that functionality. It's important to conceptually separate that from the data content.

Related

Tosca Angular table steering

I am a tester in a Scrum team trying to automate our regression test set.
Our front end is developed in Angular (with a Java back end) and we use the Tosca testsuite to automate our tests. The problem I am encountering is as follows:
With Tosca you can scan the application, and all the fields, attributes, divs, and so on are shown to the user. The moment I scan one of our data tables, I see it for what it is: a table. However, every field/button/icon/etc. is scanned as a separate object. The table has one body, but the individual rows are not found, meaning that the rows within the table are not identified.
This makes it impossible for me to perform an automated search on a table, because the rows, and therefore the columns, are not identified; only the header is.
Has anyone ever encountered this issue with a test tool, or found a solution for how to fix it in the Angular front-end code?
This is a common scenario when the application under test is built with UI libraries, where complex controls (e.g. table, combobox) are not rendered as a single HTML tag (<TABLE> for a table or <SELECT> for a combobox). Instead you will find a bunch of other HTML tags (<DIV>, <SPAN>, <TABLE>, <UL> and what not!).
If I understood correctly, there are two ways to automate this scenario:
1. You mentioned that you are able to find a <TABLE> tag (the header). There is a good chance that each row in the table is itself a <TABLE>, which is why you cannot see all the contents in a single one (you can cross-check this in the Content View section of the XScan window). If you just need a single row for verification (I am just assuming!), you can select any one of them and use ConstraintIndex to get to the right row's data. You can also look for a parent control (basically another <TABLE>) that clubs together all the child tables. This parent table might show all the data in one place, and Table Verification will work with this control. Please remember that this is just a workaround and might not fit your scenario.
2. You can write a custom control to handle this. A custom control is a way for the user to define what a control looks like. Once you implement this, Tosca will be able to recognize the table as a single control containing all the data. For more information, check the Tosca API reference.

XML or Database for Auto form generation

So I'm creating a program that auto-generates forms for data entry. The form is created by a user (it's a simple table setup with the ability to merge cells). Some of the cells contain text views, others contain text inputs (all based on how the user draws it).
This form is then sent to another application that draws it back out. I was wondering what the best method is to represent the form. I thought I could either use XML, or use a database that would basically function as a grid, where row 1 column 1 in the database matches the form's cell at row 1 column 1, and so on (kind of an odd way to use a database).
The form creation program is written in C++ and the form regeneration program in Java.
Is there an even better way to do this?
Thanks,
I have been thinking about the same thing, because I am also building dynamic forms for my framework, so I will share some thoughts with you. You could use a database to add new forms, e.g. a record in one table that specifies the form and records in another that specify its fields and field types; or create one table per form and create or alter a table each time (sounds messy); or create a folder with a bunch of XML files that describe the structure of your forms.
When it comes to a database:
Your application is tied to a specific database product, like SQL Server 2008, MySQL, or Oracle.
Your application causes network traffic; not that bad, but it happens every time you need to create or use a form.
You need a panel that creates those forms using the database; if it's web-based, it can be accessed even from your mobile.
When it comes to XML:
Your application is free from database version restrictions.
You need the impersonating user to have the right to create files in a specific directory in your framework.
You don't need a panel (even though you could create one), because XML files are human-readable. So you can write one while eating your dinner, serve it to your system, and voilà, you have your form generated.
These are my thoughts for now.
What about the methods that will be used in the form? Will those also be dynamic? How do you specify what calls what? You need to take this into account as well.
I think that XML is the much better choice here. Using a database as a grid could be more of a headache than it's worth: you would have to deal with all the problems of having a database without really getting any of its benefits. The industry tends to go with XML more often than not as well (XBRL being one example).
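To make the XML option concrete, here is a minimal sketch of what a form definition and the Java-side loader could look like. The schema, element names, and attributes are all hypothetical; the parsing uses the standard DOM API:

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class FormLoader {
    // Hypothetical schema: a grid of <cell> elements with row/column
    // coordinates, an optional span, and a type (label or input).
    static final String SAMPLE_FORM =
        "<form name='customer-entry' rows='2' cols='2'>"
        + "<cell row='0' col='0' type='label'>Name</cell>"
        + "<cell row='0' col='1' type='input' field='name'/>"
        + "<cell row='1' col='0' colspan='2' type='label'>Merged cell</cell>"
        + "</form>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        SAMPLE_FORM.getBytes(StandardCharsets.UTF_8)));
        NodeList cells = doc.getElementsByTagName("cell");
        for (int i = 0; i < cells.getLength(); i++) {
            Element cell = (Element) cells.item(i);
            // The Java application rebuilds the grid from these coordinates.
            System.out.printf("(%s,%s) %s: %s%n",
                    cell.getAttribute("row"), cell.getAttribute("col"),
                    cell.getAttribute("type"), cell.getTextContent());
        }
    }
}

The C++ side would only need to write the same element structure, which keeps the two programs decoupled from any shared database.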

Is it a good idea to process a large amount of data directly in the database?

I have a database with a lot of web pages stored in it.
I will need to process all the data I have, so I have two options: retrieve the data into the program, or process it directly in the database with some functions I will create.
What I want to know is:
Is doing some of the processing in the database, rather than in the application, a good idea?
When is this recommended and when not?
Are there pros and cons?
Is it possible to extend the language with new features (external APIs/libraries)?
I tried retrieving the content into the application (it worked), but it was too slow and dirty. My concern was that I can't do in the database what I can do in Java, but I don't know if this is true.
Just one example: I have a table called Token. At the moment it has 180,000 rows, but this will increase to over 10 million. I need to do some processing to determine whether a word between two tokens classified as `Proper Name` is part of a name or not.
I will need to process all the data. In this case, is doing it directly in the database better than retrieving it into the application?
My concern was that I can't do in the database what I can do in Java, but I don't know if this is true.

No, that is not a correct assumption. There are valid circumstances for processing data in the database. For example, if it involves calling a lot of disparate SQL statements that can be combined into a stored procedure, then you should do the processing in the stored procedure and call the stored proc from your Java application. This way you avoid making several network trips to the database server.
I do not know what you are processing, though. Are you parsing XML data stored in your database? Then perhaps you should use XQuery; a lot of modern databases support it.
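As a sketch of that pattern, calling a stored procedure from Java through JDBC looks roughly like this (the connection string, procedure name, and parameters are hypothetical):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class TokenProcessor {
    public static void main(String[] args) throws Exception {
        // Hypothetical procedure: classify_tokens takes a batch size
        // and reports back how many rows it processed server-side.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/corpus", "user", "password");
             CallableStatement cs = conn.prepareCall("{call classify_tokens(?, ?)}")) {
            cs.setInt(1, 10_000);                  // IN: batch size
            cs.registerOutParameter(2, Types.INTEGER); // OUT: rows processed
            cs.execute();
            System.out.println("Processed " + cs.getInt(2) + " rows in the database");
        }
    }
}

One round trip does all the work on the server, instead of streaming 10 million rows over the network.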
Just one example: I have a table called Token. At the moment it has 180,000 rows, but this will increase to over 10 million. I need to do some processing to determine whether a word between two tokens classified as `Proper Name` is part of a name or not.

Is there some indicator in the data that tells you it's a proper name? Fetching 10 million rows (highly susceptible to an OutOfMemoryError) and then iterating over them is not a good idea. If there are attributes of the data that can go into a WHERE clause to limit the number of rows fetched, that is the way to go in my opinion. You will certainly need to run EXPLAIN on your SQL and check that the correct indices are in place, along with the index cluster ratio and the type of index; all of that makes a difference. Now, if you can't fully eliminate all the "improper names" in SQL, you should still get rid of as many as you can there and process the rest in your application. I am assuming this is a batch application, right? If it is a web application, then you definitely want a batch application to stage the data before the web application queries it.
I hope my explanation makes sense. Please let me know if you have questions.
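If some rows do have to come into the application, streaming the result set instead of loading it whole keeps memory flat. A minimal sketch with JDBC, where the connection string, table, and query are hypothetical (with MySQL Connector/J, a fetch size of Integer.MIN_VALUE on a forward-only, read-only result set enables row-by-row streaming):

import java.sql.*;

public class TokenStreamer {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/corpus", "user", "password");
             Statement st = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            // MySQL Connector/J streams rows one at a time with this hint,
            // so 10 million rows never sit in the heap at once.
            st.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, word, tag FROM Token WHERE tag = 'Proper Name'")) {
                while (rs.next()) {
                    process(rs.getLong("id"), rs.getString("word"));
                }
            }
        }
    }

    static void process(long id, String word) {
        // application-side classification of each token goes here
    }
}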
Directly interacting with the database for every single thing is tedious and hurts performance. There are several ways to get around this: you can use indexing, caching, or a tool such as Hibernate, which can keep data cached in memory so that you don't need to query the database for every operation. There are also popular indexing tools such as Lucene that could solve the problem of hitting the database every time.

Create graphs and charts in Java Swing application using MySql ResultSet data

I have heard of JFreeChart, but are there any general steps for using data returned from an SQL query to create graphs and charts?
I have an application with a menu option "Analytics"; this JFrame window uses a complicated query to retrieve data using business logic, but I then want to display this data in a more visual way (rather than a long JTable result). How can I filter my data and create a graph for the user to analyze?
Check java2s.com/Code/Java/Chart/CatalogChart.htm for a lot of examples.
Generally, you fill your own dataset based on your ResultSet. But if your query already returns results close to what you want to load into your dataset, you can just use the JDBCCategoryDataset from JFreeChart.
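A minimal sketch of that shortcut, with a hypothetical connection string and query (JDBCCategoryDataset treats the first column of the result as the category and the remaining numeric columns as series values):

import java.sql.Connection;
import java.sql.DriverManager;
import javax.swing.JFrame;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartPanel;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.jdbc.JDBCCategoryDataset;

public class SalesChart {
    public static void main(String[] args) throws Exception {
        // Hypothetical database and query; any GROUP BY summary works well,
        // since it already has the category-plus-value shape the dataset wants.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/shop", "user", "password");
        JDBCCategoryDataset dataset = new JDBCCategoryDataset(conn);
        dataset.executeQuery("SELECT region, SUM(amount) FROM sales GROUP BY region");

        JFreeChart chart = ChartFactory.createBarChart(
                "Sales by region", "Region", "Total", dataset,
                PlotOrientation.VERTICAL, true, true, false);

        // Embed the chart in the existing Swing window via ChartPanel.
        JFrame frame = new JFrame("Analytics");
        frame.setContentPane(new ChartPanel(chart));
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}

Filtering is then just a matter of changing the WHERE clause of the query before re-executing it against the dataset.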
Since you are searching... I think there is no better tool for designing and generating reports and graphs from your database than iReport. It offers a very consistent GUI and gives you all the support necessary to start creating business graphs from your database data.
The produced report is dynamic and can be embedded into any application. Reports can be exported as PDF, XML, XHTML, .doc, .odt and more, and you can pass variables to the report at run time, such as from/to dates, code IDs, etc.
iReport is built on top of JasperReports. It is free and open source.
Let me also make clear that I am not in any way affiliated with iReport! :-)

Populating a MySQL database with values

I have a locally installed MySQL server on my laptop, and I want to use the information in it for a unit test, so I want to create a script to generate all the data automatically. I'm using MySQL Workbench, which already generates the tables (from the model). Is it possible to use it, or another tool, to create a script that populates them with data automatically?
EDIT: I see now that I wasn't clear. I do have meaningful data for the unit test. When I said "generate all the data automatically", I meant the tool should take the meaningful data I have in my local DB today and create a script to generate the same data in other developers' DBs.
The most useful unit tests are those that reflect data you expect or have seen in practice. Pumping your schema full of random bits is no substitute for carefully crafted test data. As @McWafflestix suggested, mysqldump is a useful tool, but if you want something simpler, consider using LOAD DATA INFILE, which populates a table from a CSV file.
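For example, issuing that statement through JDBC might look like the sketch below. The file name, table, and connection details are hypothetical, and I am assuming the allowLoadLocalInfile=true connection property of MySQL Connector/J, which the LOCAL variant requires:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TestDataLoader {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb?allowLoadLocalInfile=true",
                "user", "password");
             Statement st = conn.createStatement()) {
            // Load the crafted test rows from a CSV checked into the repo.
            st.execute("LOAD DATA LOCAL INFILE 'testdata/users.csv' "
                     + "INTO TABLE users "
                     + "FIELDS TERMINATED BY ',' "
                     + "LINES TERMINATED BY '\\n' "
                     + "IGNORE 1 LINES");  // skip the CSV header row
        }
    }
}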
Some other things to think about:
Test with a database in a known state. Wrap all your database-interaction unit tests in transactions that always roll back.
Use dbUnit to achieve the same end.
Update
If you're in a Java environment, dbUnit is a good solution:
You can import and export data in an XML format through its APIs, which solves the issue of moving data from your computer to other members of your team.
It's designed to restore database state: it snapshots the database before tests are executed and restores it at the end, so tests are free of side effects (i.e. they don't permanently change data).
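A minimal sketch of that export/import round trip with dbUnit's FlatXmlDataSet classes (the connection details and file name are hypothetical):

import java.io.File;
import java.io.FileOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class DbSnapshot {
    public static void main(String[] args) throws Exception {
        Connection jdbc = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password");
        IDatabaseConnection conn = new DatabaseConnection(jdbc);

        // Export: snapshot the whole database to an XML file
        // that can be shared with the rest of the team.
        IDataSet snapshot = conn.createDataSet();
        FlatXmlDataSet.write(snapshot, new FileOutputStream("snapshot.xml"));

        // Import: on another developer's machine, wipe the tables and
        // reload exactly that data before the tests run.
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("snapshot.xml"));
        DatabaseOperation.CLEAN_INSERT.execute(conn, dataSet);
    }
}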
You can populate the table with defaults (if they are defined). This example is SQL Server T-SQL (GO batching and #temp tables are SQL Server specific):
CREATE TABLE #t (c1 int DEFAULT 0, c2 varchar(10) DEFAULT '-')
GO
-- GO 50 repeats the batch below 50 times, inserting 50 rows of defaults
INSERT INTO #t DEFAULT VALUES
GO 50
SELECT * FROM #t
DROP TABLE #t
