Mocking BigQuery for integration tests - Java

While other interfaces are relatively easy to mock in my Java integration tests, I couldn't find a proper way of mocking BigQuery.
One possibility is to mock the layer I wrote on top of BigQuery itself, but I would prefer mocking BigQuery in a more natural way. I'm looking for a limited, lightweight implementation which allows defining the table contents and supports queries using the standard API.
Is there such a library? If not, what alternative approaches are recommended?

In unit testing it is perfectly fine to mock all external dependencies, and as long as you are using interfaces to abstract access to the BigQuery client, mocking should not be an issue.
With integration testing I would rather have all my third-party dependencies tested to the extent the application needs them.
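For the unit-test case, a minimal sketch of such an abstraction, mocked with Mockito; the TableReader interface and all names here are made up for illustration:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import java.util.List;
import org.junit.jupiter.api.Test;

// Hypothetical abstraction over the BigQuery client (not a real library type)
interface TableReader {
    List<String> readColumn(String table, String column);
}

// Code under test depends only on the interface, never on BigQuery directly
class ReportService {
    private final TableReader reader;
    ReportService(TableReader reader) { this.reader = reader; }
    long countRows(String table, String column) {
        return reader.readColumn(table, column).size();
    }
}

class ReportServiceTest {
    @Test
    void countsRowsFromMockedTable() {
        TableReader reader = mock(TableReader.class);
        when(reader.readColumn("events", "id")).thenReturn(List.of("1", "2", "3"));
        assertEquals(3, new ReportService(reader).countRows("events", "id"));
    }
}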
For instance, one case would be an ETL that streams data from external sources to BigQuery; here an integration test needs to verify that all the data is in BigQuery as expected, which means the verification stage has to account for repeated and nested messages as required.
Another case would be an application that runs some business SQL: you would have to populate BigQuery with test data before the application runs, and the application then needs to publish the SQL output, as a view, a new table, or a data stream, for verification.
There are already some libraries taking care of integration testing with datastores, including BigQuery/NoSQL/SQL.
They provide an easy solution for the cases described above, with full support for SQL, dynamic macros/predicates, etc.:
Dsunit (Go)
JDsunit (Java)
Endly (language agnostic)
See more on how to use Endly for ETL and BigQuery testing.
If a datastore integration-test library is not an option for you and you are looking to test just the BigQuery client, the good news is that the client uses REST, so with a network sniffer you can easily record what is being sent back and forth and then use it in a replayer. To redirect BigQuery traffic from the
public endpoints to your replayer, you would use an HTTP proxy in Java.
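One hedged sketch of the replay side, using WireMock as the stand-in server; WireMock and the client-side host override are my choices, the answer itself only prescribes REST recording plus a proxy:

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class BigQueryReplayExample {
    public static void main(String[] args) {
        // Stand-in server that replays previously recorded BigQuery responses
        WireMockServer server = new WireMockServer(8089);
        server.start();
        server.stubFor(post(urlPathMatching("/bigquery/v2/projects/.*/queries"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"kind\":\"bigquery#queryResponse\",\"jobComplete\":true}")));

        // Then point the real client at the replayer instead of the public endpoint,
        // e.g. BigQueryOptions.newBuilder().setHost("http://localhost:8089") ...
        server.stop();
    }
}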

Related

Spring Integration Test Naming Conventions

I have a REST controller for creating and getting employees. To test these endpoints I have two classes. In one I mock the service layer and do status-code checks such as:
Mockito.when(employeeService.createEmployee(any(Employee.class))).then(returnsFirstArg());
this.employeeClient.createEmployee(this.testEmployee)
.andExpect(status().isCreated());
In the other I start up a docker postgres database and don't mock anything. These tests involve multiple endpoints e.g. "If I create an employee and then get all the employees, the created employee should be returned".
Both these classes create an application context - the first uses the WebMvcTest annotation, the second uses SpringBootTest. For this reason I see them both as integration tests.
So then what do I call these tests? I'm currently using EmployeeControllerTest and EmployeeControllerIT (running the IT with the failsafe plugin), but the name of the first is a little misleading because it's not really a unit test.
I was thinking of using EmployeeControllerMockedIT for the first, but I'm not convinced that's the right way to go.
If you're mocking responses, I'd say it's a unit test, as you're testing the module of code in isolation from the network and third-party systems like a database.
If you're running a test against (real) third-party code/systems you don't control, it's an integration (or functional) test.
Functional tests will invoke your service as if your system is a black-box where we don't know the internal details (vs. white-box where we know, and have access to change, the internals and invoke code as if we're making a method call rather than sending an HTTP request).
Functional tests may run against real/fake servers, but there will typically be a running application waiting for requests. The point here is that the responses are not "mocked": the request is made for real, from your code, to the third-party code/system, and that system is expected to respond as the real server would.
We do not fake responding in functional tests but arbitrary responses may be provided from the real/fake server to simulate the data we expect to receive. It could be we write a fake HTTP service to "mock" a real API. It will provide the exact same type of responses over the network, and be invoked over the network. Or we could just spin up a dockerised database and send queries to it and receive responses from it. The database will be a real database-server, rather than a mocked response we pre-specified in code.
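For the dockerised-database case, a minimal sketch with Testcontainers; the library and image tag are my choices, any way of starting a disposable container works:

import org.testcontainers.containers.PostgreSQLContainer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class DockerisedDbExample {
    public static void main(String[] args) throws Exception {
        // Spin up a throwaway PostgreSQL server in Docker
        try (PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15")) {
            postgres.start();
            // Real queries against a real server: no mocked responses anywhere
            try (Connection conn = DriverManager.getConnection(
                    postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
                 ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("DB answered: " + rs.getInt(1));
            }
        }
    }
}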
Functional tests (of a web service) will begin from outside your application and ensure requests are answered over the network layer. i.e. you'll be making an HTTP request to invoke the service.
Integration tests of a web service will invoke the service at an application level (not over the network layer), so you may call a Controller class directly or invoke some code that is supposed to connect/integrate with a database from within the application. Integration tests will test that your code is integrated correctly with third-party code/a library/a database. Your tests should alert you if the 3rd-party code (code/system you do not control) changes in such a way that your software would break.

Can I mock external systems requests/replies using Camel?

We are starting design of a system in Java that will have to integrate with a number of existing external systems. To support testing in a DEV environment where those external systems do not exist, I was wondering if Camel would provide a config-based approach to support mocking those external systems, recording the data in each request and returning the expected response. For example, each test scenario has a defined sequence of the expected interaction with each external system:
where VAL_X are the individual fields of each request/response. From a testing standpoint, I was looking for a config-based approach in my DEV environment to specify that instead of actually calling REQUEST_A1 on SYS_A, I instead append the data in that request to a file and unmarshal the values from another file to create the response object. With this approach, I would be able to build up a set of test scenarios with expected results and automate my test suite. Note that I'm not talking about writing a unit test to test my interface - I want to deploy my application in the DEV environment (with an alternate configuration) that allows me to then interact with my application, and this alternate configuration records the request data to a file and unmarshals the previously-created expected results from a file to confirm that my deployed application operates properly. I know I could write alternate implementations of each of those external systems that provide this functionality, but I was hoping that there would be a way to leverage the built-in capabilities of Camel to allow this approach generically. Does anyone have suggestions or a recommendation of another approach?
For a given interface "OrderService":
1. Functional implementation "OrderServiceImpl"
2. Mock/Test implementation "OrderServiceMockImpl"
   a. Have OrderServiceMockImpl load and use data from a text/CSV file
3. Use Blueprint to specify which class to use as the implementation backing the interface.
   a. Use Config Admin (aka Compendium) to wire in configuration dynamically
4. The jar/bundle would contain both implementation classes.
This will allow you to swap implementations at runtime, and you can readily switch between the mock and functional implementation as needed.
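A rough sketch of the two implementations; the method and the one-value-per-line file format are my invention, only the class names come from the answer:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// The interface both implementations back
interface OrderService {
    List<String> listOrderIds();
}

// Functional implementation would call the real external system
class OrderServiceImpl implements OrderService {
    public List<String> listOrderIds() {
        throw new UnsupportedOperationException("call the real external system here");
    }
}

// Mock implementation replays canned data from a text file
class OrderServiceMockImpl implements OrderService {
    private final Path dataFile;
    OrderServiceMockImpl(Path dataFile) { this.dataFile = dataFile; }
    public List<String> listOrderIds() {
        try {
            return Files.readAllLines(dataFile); // one order id per line
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }
}

Blueprint (or Config Admin) then decides which of the two classes is wired in behind the interface at deployment time.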

Mocking a database for system testing

I've read the following posts:
Is there a way to run MySQL in-memory for JUnit test cases?
Stubbing / mocking a database in .Net
SQL server stub for java
They seem to address unit/component level testing (or have no answers), but I'm doing system testing of an application which has few test hooks. I have a RESTful web service backed by a database with JPA. I'm using NUnit to run tests against the API, but those tests often need complex data setup and teardown. To reduce the cost of doing this within a test via API calls, I would like to create (ideally in memory) databases which can be connected to via a DB provider using a connection string. The idea would be to have a test resource management service which builds databases of specific types, allowing a test to re-point the SUT to a new database with the expected data when it starts - one which can simply be dropped on teardown.
Is there a way, using Oracle or MSSQL, to create a database in memory (could be something as simple as a C# DataSet) which the web server can talk to as if it were a production database? Quick/cheap creation and disposal would be as good as in memory, to be honest.
I feel like this is a question that should have an answer already, but I can't find it, or don't understand enough to know that I've found it.

Integration test for Application that interacts with web

How do I write an integration test for an application that interacts with a website?
More specifically, I have an application that interacts with the Flickr website. During the OAuth authorization process the Flickr website displays the verifier code, which the user has to copy and paste into my application. How do I automate this process so that I can test the application automatically? I am using Swing for the GUI.
Writing automation that depends on external services can be tricky. For something like this, I would advise you to set up a mock service, or some other way of using canned responses.
I've had success doing this a couple of ways:
Writing an external mock service, using something like bottle.py. This has the advantage of requiring little to no modification to your existing codebase, but obviously requires a bit of work to ensure that this external process is managed correctly as part of your test suite, especially if you are running tests in a CI environment.
Using dependency injection, you can write mock network components, and swap the real network components for your mock components for testing. I recommend this approach, but it will require a bit of modification to your codebase.
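A minimal sketch of the dependency-injection approach, with hypothetical types standing in for the Flickr-facing code; none of these names come from the real Flickr API:

// What the application uses to talk to Flickr
interface FlickrGateway {
    String requestVerifierUrl();
    String exchangeVerifier(String verifierCode);
}

// Test double returning canned responses, no network involved
class FakeFlickrGateway implements FlickrGateway {
    public String requestVerifierUrl() { return "http://localhost/fake-auth"; }
    public String exchangeVerifier(String verifierCode) { return "canned-access-token"; }
}

// The Swing app receives the gateway via its constructor, so tests inject the fake
class PhotoUploader {
    private final FlickrGateway flickr;
    PhotoUploader(FlickrGateway flickr) { this.flickr = flickr; }
    String authorize(String verifierCode) { return flickr.exchangeVerifier(verifierCode); }
}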

How best to enable a web service consumer to integration test a transactional web service?

I want to allow consumers of a web services layer (web services are written in Java) to create automated integration tests to validate that the version of the web services layer that the consumers will use will still work for them (i.e. the web services are on a different release lifecycle than the consumers, and their APIs or behavior might change -- they shouldn't change without notifying the consumer, but the point of this automated test is to validate that they haven't changed).
What would I do if the web service actually executes a transaction (updates database tables)? Is there a common practice for how to handle this without having to put logic into the web service itself to know it's in a test and roll back the transaction once finished (basically baking the capability to deal with testing into the web service)? Or is that the recommended way to do it?
The consumers are created by one development team at our company and the web services are created by a separate team. The tests would run in an integration environment (the integration environment is one environment behind the test environment used by QA functional testers, which is in turn one environment behind the prod environment).
The best approach to this sort of thing is dependency injection.
Put your database-handling code in a service or services that are injected into the webservice, and create mock versions that are used in your testing environment and do not actually update a database, or to which you add the capability to reset under test control.
This way your tests can exercise the real webservices but not a real database, and the tests can be more easily made repeatable.
Dependency injection in Java can be done using (among others) Spring or Guice. Guice is a bit more lightweight.
It may be sensible in your situation to have the injection decision made during application startup based on a system property as you note.
If some tests need to actually update a database to be valid, then your testing version of the database handling will need to use a real database, but should also provide a way accessible from tests to reset your database to a known (possibly empty) state so that tests can be repeated.
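A minimal sketch of that startup-time decision with Guice, using a hypothetical test.mode system property (the property name and all service names are made up):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;

// The database-handling service behind an interface
interface OrderStore {
    void save(String order);
}

class JdbcOrderStore implements OrderStore {
    public void save(String order) { /* real database write here */ }
}

class InMemoryOrderStore implements OrderStore {
    private final java.util.List<String> saved = new java.util.ArrayList<>();
    public void save(String order) { saved.add(order); }
    void reset() { saved.clear(); } // known state restorable under test control
}

public class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // Hypothetical switch: start the app with -Dtest.mode=true to get the mock
        if (Boolean.getBoolean("test.mode")) {
            bind(OrderStore.class).to(InMemoryOrderStore.class);
        } else {
            bind(OrderStore.class).to(JdbcOrderStore.class);
        }
    }

    public static void main(String[] args) {
        OrderStore store = Guice.createInjector(new AppModule()).getInstance(OrderStore.class);
        store.save("ORDER-123");
    }
}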
My choice would be to host the web services layer in production and in pre-production. Your customers can test against pre-production, but won't get billed for their transactions.
Obviously, this requires you to update production and pre-production at the same time.
Let the web services run unchanged and update whatever they need to in the database.
Your integration tests should check that the correct database records have been written/updated after each test step.
We use a soapUI testbed to accomplish this.
You can write your post-test assertion scripts in Groovy and Java, which can easily connect to the db using JDBC and check records.
People get worried about using the actual database - I wouldn't get hung up on this, it's actually a GOOD thing and makes for a really accurate testbed.
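A minimal sketch of such a post-test assertion in plain Java/JDBC; the connection string, table, and expected values are all made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PostTestDbCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://testdb:5432/app", "tester", "secret")) {
            // Verify the test step actually wrote the record we expect
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT status FROM orders WHERE order_id = ?");
            ps.setString(1, "ORDER-123");
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next() || !"CONFIRMED".equals(rs.getString("status"))) {
                    throw new AssertionError("expected ORDER-123 to be CONFIRMED");
                }
            }
        }
    }
}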
Regarding the "state" of the db, you can approach this in a number of ways:
Restore the db to a known state before the tests run
Get the tests to clean up after themselves
Let the db fill up as more tests run and clean it out occasionally
We've taken the last approach for now but may change in future if it becomes problematic.
I actually don't mind filling the db up with records as it makes it even more like a real customer database. It also is really useful when investigating test failures.
E.g. CXF allows you to change the transport layer, so you can just change the configuration and use the local transport. You then have two objects, client and server, without any network activity, which is great for testing (un)marshalling. Everything else should be separated so that the business logic is not aware of web services and can be tested like any other class.
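A rough sketch of that CXF local-transport wiring, using the simple frontend; the service names are invented, and it assumes the cxf-rt-transports-local module on the classpath:

import org.apache.cxf.frontend.ClientProxyFactoryBean;
import org.apache.cxf.frontend.ServerFactoryBean;

public class LocalTransportExample {
    // Invented service contract
    public interface GreetingService {
        String greet(String name);
    }

    public static class GreetingServiceImpl implements GreetingService {
        public String greet(String name) { return "Hello " + name; }
    }

    public static void main(String[] args) {
        // Publish the service on CXF's in-JVM "local" transport: no sockets, but the
        // message still passes through the full (un)marshalling pipeline
        ServerFactoryBean server = new ServerFactoryBean();
        server.setServiceClass(GreetingService.class);
        server.setServiceBean(new GreetingServiceImpl());
        server.setAddress("local://greeting");
        server.create();

        // Client proxy bound to the same local address
        ClientProxyFactoryBean client = new ClientProxyFactoryBean();
        client.setServiceClass(GreetingService.class);
        client.setAddress("local://greeting");
        GreetingService proxy = (GreetingService) client.create();
        System.out.println(proxy.greet("world")); // prints "Hello world"
    }
}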
