We are starting design of a system in Java that will have to integrate with a number of existing external systems. To support testing in a DEV environment where those external systems do not exist, I was wondering if Camel would provide a config-based approach to support mocking those external systems, recording the data in each request and returning the expected response. For example, each test scenario has a defined sequence of the expected interaction with each external system:
where VAL_X are the individual fields of each request/response.

From a testing standpoint, I was looking for a config-based approach in my DEV environment to specify that, instead of actually calling REQUEST_A1 on SYS_A, I append the data in that request to a file and unmarshal the values from another file to create the response object. With this approach, I would be able to build up a set of test scenarios with expected results and automate my test suite.

Note that I'm not talking about writing a unit test to test my interface. I want to deploy my application in the DEV environment (with an alternate configuration) that allows me to then interact with my application, where this alternate configuration records the request data to a file and unmarshals the previously-created expected results from a file, so that I can confirm that my deployed application operates properly.

I know I could write alternate implementations of each of those external systems to provide this functionality, but I was hoping there would be a way to leverage the built-in capabilities of Camel to support this approach generically. Does anyone have suggestions or a recommendation of another approach?
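For illustration, here is roughly the kind of thing I'm hoping Camel can do (a rough sketch only; the endpoint URI, file names, and component options are placeholders I made up, not a tested route):

    import org.apache.camel.builder.RouteBuilder;

    public class DevMockRoutes extends RouteBuilder {
        @Override
        public void configure() {
            // Intercept anything bound for the real SYS_A endpoint:
            // record the request body to a file, skip the real call,
            // and replace the body with a canned response file.
            interceptSendToEndpoint("http://sys-a.example/*")
                .skipSendToOriginal()
                .to("file:target/recorded?fileName=sys-a-requests.txt&fileExist=Append")
                .pollEnrich("file:src/test/data?fileName=sys-a-response.xml&noop=true");
        }
    }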
For a given interface "OrderService", create:

1. A functional implementation, "OrderServiceImpl".
2. A mock/test implementation, "OrderServiceMockImpl".
   a. Have OrderServiceMockImpl load and use data from a text/CSV file.
3. Use Blueprint to specify which class to use as the implementation backing the interface.
   a. Use Config Admin (aka the compendium) to wire in the configuration dynamically.
4. Package both implementation classes in the same jar/bundle.

This allows you to swap implementations at runtime, so you can readily switch between the mock and the functional implementation as needed.
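A minimal sketch of the mock side (the OrderService method, file names, and CSV format below are illustrative assumptions):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.Map;
    import java.util.stream.Collectors;

    interface OrderService {
        String lookupStatus(String orderId);
    }

    // Records every request to a file and answers from canned CSV data
    // ("orderId,status" per line), so a deployed DEV instance can be
    // exercised end-to-end without the real external system.
    class OrderServiceMockImpl implements OrderService {
        private final Path requestLog = Path.of("data/requests.log");
        private final Map<String, String> cannedResponses;

        OrderServiceMockImpl() throws IOException {
            cannedResponses = Files.readAllLines(Path.of("data/responses.csv")).stream()
                    .map(line -> line.split(",", 2))
                    .collect(Collectors.toMap(f -> f[0], f -> f[1]));
        }

        @Override
        public String lookupStatus(String orderId) {
            try {
                // Record the request so the test run can be verified later.
                Files.writeString(requestLog, orderId + System.lineSeparator(),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return cannedResponses.getOrDefault(orderId, "UNKNOWN");
        }
    }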
I'm developing an API which will connect with several endpoints. The URI for each endpoint is something like this:
rest/services/General/directory1/MapServer/export
rest/services/General/directory2/MapServer/export
rest/services/General/directory3/MapServer/export
rest/services/General/directory4/MapServer/export
and so on...
I don't know if it's possible, but would like to have something like this instead:
rest/services/General/${value}/MapServer/export
and then in my code just call the endpoint above, injecting the specific directory that I want into ${value}.

Is it possible? I don't know what I'm missing, as I googled but couldn't find anything related.
Cheers
You can do so by means of Spring Cloud Config.
Spring Cloud Config provides server and client-side support for externalized configuration in a distributed system. With the Config Server you have a central place to manage external properties for applications across all environments. The concepts on both client and server map identically to the Spring Environment and PropertySource abstractions, so they fit very well with Spring applications, but can be used with any application running in any language. As an application moves through the deployment pipeline from dev to test and into production you can manage the configuration between those environments and be certain that applications have everything they need to run when they migrate. The default implementation of the server storage backend uses git so it easily supports labelled versions of configuration environments, as well as being accessible to a wide range of tooling for managing the content. It is easy to add alternative implementations and plug them in with Spring configuration.
For more details, kindly go through the Spring documentation here.
I solved the problem using the simplest approach possible: in my code I used the String.replace() method. I was looking for something more robust, but as it was taking a lot of time to sort out how to do it, I decided to go with String.replace().
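For illustration, the approach boils down to something like this (a minimal sketch; the directory name is a placeholder):

    // Template with a placeholder token for the directory segment.
    String template = "rest/services/General/${value}/MapServer/export";

    // Substitute the directory I actually want to call.
    String endpoint = template.replace("${value}", "directory2");
    // -> "rest/services/General/directory2/MapServer/export"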
While other interfaces are relatively easy to mock in my Java integration tests, I couldn't find a proper way of mocking BigQuery.

One possibility is to mock the layer I wrote on top of BigQuery itself, but I prefer mocking BigQuery in a more natural way. I'm looking for a limited, lightweight implementation which allows defining the table contents and supports queries using the standard API.
Is there such a library? If not, what alternative approaches are recommended?
In unit testing it is perfectly fine to mock all external dependencies, and as long as you are using interfaces to abstract out access to the BigQuery client, mocking should not be an issue.
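For instance, a minimal sketch of that kind of abstraction (the BigQueryGateway interface and the Mockito-based test below are illustrative assumptions, not part of any library):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.util.List;

    // Hypothetical seam between application code and the BigQuery client.
    interface BigQueryGateway {
        List<String> query(String sql);
    }

    class ReportServiceTest {
        void reportsUseQueryResults() {
            BigQueryGateway gateway = mock(BigQueryGateway.class);
            when(gateway.query("SELECT name FROM dataset.users"))
                    .thenReturn(List.of("alice", "bob"));

            // Pass the mock to the code under test instead of the real
            // client, then assert on the service output as usual.
        }
    }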
With integration testing I would rather have all my third-party dependencies tested to the extent the application needs them.

For instance, one case would be an ETL that streams data from external sources to BigQuery; here an integration test needs to verify that all data is in BigQuery as expected, which means the verification stage needs to take repeated and nested messages into account as required.

Another case would be an application that runs some business SQL; here you would have to populate BigQuery with some test data before the application runs, and then the application needs to publish the SQL output as a view, a new table, or a stream of data for verification.
There are already some libraries taking care of integration testing with datastores, including BigQuery, NoSQL and SQL. They provide an easy solution for the cases described above and full support for SQL, dynamic macros/predicates, etc.:
Dsunit (go-lang)
JDsunit (java)
Endly (language agnostic)
See more on how to use Endly for ETL and BigQuery testing.
If a datastore integration test library is not an option for you and you are looking to test just the BigQuery client, the good news is that the client uses REST, so with a network sniffer you can easily record what is being sent back and forth and then use it in a replayer. In order to redirect BigQuery from the public endpoints to your replayer, you would use an HTTP proxy in Java.
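A minimal sketch of the redirection side, assuming the google-cloud-bigquery client (the setHost option and the port are assumptions to check against your client version):

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;

    public class ReplayerClientFactory {
        // Build a client that talks to a local replayer/proxy instead of
        // the public BigQuery endpoint.
        static BigQuery localClient() {
            return BigQueryOptions.newBuilder()
                    .setProjectId("test-project")
                    .setHost("http://localhost:8089") // local replayer
                    .build()
                    .getService();
        }
    }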
I have a service that calls out to a third-party endpoint using java.net.URLConnection. As part of an integration test that uses this service I would like to use a fake endpoint of my own construction.
I have made a Spring MVC Controller that simulates the behaviour of the endpoint I require. (I know this endpoint works as expected, as I included it in my web app's servlet config and hit it from a browser once started.)
I am having trouble figuring out how I can get this fake endpoint available for my integration test.
Is there some feature of Spring-Test that would help me here?
Do I somehow need to start up a servlet at the beginning of my test?
Are there any other solutions entirely?
It's a bad idea to use a Spring MVC controller as a fake endpoint. There is no way to simply have the controller available for the integration test, and starting a servlet with just that controller alongside whatever you are testing requires a lot of configuration.
It is much better to use a mocking framework like MockServer (http://www.mock-server.com/) to create your fake endpoint. MockServer should be powerful enough to cover even complex responses from the fake endpoint, with relatively little setup.
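For example, a minimal MockServer sketch (port, path, and body are placeholders; check the API against your MockServer version):

    import static org.mockserver.model.HttpRequest.request;
    import static org.mockserver.model.HttpResponse.response;

    import org.mockserver.integration.ClientAndServer;

    public class FakeEndpointExample {
        public static void main(String[] args) {
            // Start MockServer and stub the third-party endpoint.
            ClientAndServer server = ClientAndServer.startClientAndServer(1080);
            server.when(request()
                            .withMethod("GET")
                            .withPath("/third-party/resource"))
                    .respond(response()
                            .withStatusCode(200)
                            .withBody("{\"status\":\"ok\"}"));

            // Point the service under test at http://localhost:1080,
            // run the integration test, then shut the server down.
            server.stop();
        }
    }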
Check out Spring MVC Test that was added to Spring in version 3.2.
Here are some tutorials: 1, 2, 3
First I think we should get the terminology right. There are two general groups of "fake" objects in testing (simplified): mocks, which return predefined answers to predefined inputs, and stubs, which are simplified versions of the objects the SUT (system under test) communicates with. While a mock basically does nothing more than provide a response, a stub might use a live algorithm, but not store its results in a database or send them to customers via email, for example. I am no expert in testing, but those two fake objects are rather to be used in unit tests and, depending on their scope, in acceptance tests.
So your SUT communicates with a remote system during integration tests. In my book this is the perfect time to actually test how your software integrates with other systems, so your software should be tested against a test version of the remote system. In case this is not possible (they might not have a test system) you are conceptually in some sort of trouble. You can shape your stub or mock only in the way you expect it to work, very much like the part of the software you have written to communicate with that remote service. This leaves out some important things you want to test with integration tests:

- Was the client side implemented correctly, so that it will work with the live server?
- Do we have to develop workarounds for implementation errors on the server side?
- To what degree will the communication with the remote system affect our software's performance?
- Do our authentication credentials work? Does the authentication mechanism work?
- What are the technical and conceptual implications of this communication relationship that no one has thought of so far? (Believe me, the latter will happen more often than you might expect!)
Generally speaking: if you do integration tests against a mock or a stub, you test against your own understanding of how to implement the client and the server side of the communication; you do not test how your client works with the actual remote server, or at least the next best thing to that, a test system. I can tell you from experience: never make assumptions about how a remote system should behave - test it. Even when talking of a JMS server: test it!
In case you are working for a company, testing against a provided test system is even more important: if your software works against a test system and you can prove it (Selenium is a good helper here, as well as good logging, believe it or not) and your software does not work with the live version, you have a situation which I call "instablame": it is immediately obvious that it is not your fault the software isn't working. I myself hate fingerpointing to the bone, but most suits tend to ask "Whose fault was it?" even before "Can we fix that immediately?" and way before "How can we solve that problem?". And there is a special group of suits called lawyers, you know ... ;)
That being said: if you absolutely have to use those stubs during your integration tests, I would create a separate project for them (let's say "MyProject-IT-Stubs") and build and run the latest version of MyProject-IT-Stubs before I run the IT of my main project. When using Maven, you could create MyProject-IT-Stubs with war packaging, declare it as a dependency, and fire up a Jetty for this war in the pre-integration-test phase. Then your integration tests run, successfully or not, and you can tear down the Jetty in the post-integration-test phase.

The IMHO best way to organize your project with Maven would be to have a project with three modules: MyProject, MyProject-IT-Stubs and MyProject-IT (the latter declaring dependencies on MyProject and MyProject-IT-Stubs). This keeps your projects nice and tidy and the stubs do not pollute your main project. You might want to think about organizing MyProject-IT-Stubs into modules as well, one for each remote system you have to talk to. As soon as you have test access, you can simply deactivate the corresponding module in MyProject-IT-Stubs.
I am sure corresponding options exist for InsertYourBuildToolHere.
I have metadata that can be used to describe several hundred new web services and would like to dynamically create WSDL files from within my own Java class. I see many ways to do this when you have Java methods you want to expose as web services. Unfortunately that approach does not work for me, as I have a single runtime method that can service many different operations and services. It's dynamic and, as such, does not have static classes that can be bound via map.xml. My plan is to generate WSDL files that will allow incoming SOAP envelopes to be received via HTTP POST, then recognized, transformed, and handled by my existing method.
This is to allow web service access to a 20-year-old proprietary, dynamically callable back end. I am certain that the metadata for each service can easily be presented to the outside world as web services and operations.
I could always write a custom builder by appending text to a StringBuilder, but that is the least desirable choice. It would be far more reliable if there were an API I could use that would take in the essential items and attributes and, when complete, validate and render a properly formed WSDL file.

I would like this to be generic and not require proprietary add-on classes from others, like what I might find in WebSphere.
In such a case I would consider implementing the web service using the Provider API (a standard part of JAX-WS).
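A minimal sketch of a Provider-based endpoint (the class name, response payload, and published URL are placeholders):

    import java.io.StringReader;
    import javax.xml.transform.Source;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.ws.Endpoint;
    import javax.xml.ws.Provider;
    import javax.xml.ws.Service;
    import javax.xml.ws.ServiceMode;
    import javax.xml.ws.WebServiceProvider;

    @WebServiceProvider
    @ServiceMode(value = Service.Mode.PAYLOAD)
    public class DynamicProvider implements Provider<Source> {
        @Override
        public Source invoke(Source request) {
            // Inspect the incoming payload, dispatch to the single runtime
            // method, and build the response from the service metadata.
            String response = "<ns:response xmlns:ns=\"urn:example\">ok</ns:response>";
            return new StreamSource(new StringReader(response));
        }

        public static void main(String[] args) {
            Endpoint.publish("http://localhost:8080/dynamic", new DynamicProvider());
        }
    }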
In the end we built a WSDL generator using the .NET 4 System.Xml.Schema and System.Web.Services.Description namespaces. The generated WSDL is used in both Java and .NET to build client and server interface classes. It took a while but we have most of the services up and running, and they are totally platform agnostic.
I prefer to use Spring Web Services, which can take an XSD with reasonable defaults and turn it into a WSDL.
See sws:dynamic-wsdl in http://docs.spring.io/spring-ws/site/reference/html/server.html#server-automatic-wsdl-exposure
Our product is built on a client-server architecture, with the server implemented in Java (we are using POJOs with the Spring framework). We have two API levels on the server:
the external API, which uses REST web services - useful for external clients and integrations with other servers.
the internal API, which uses pure Java classes - useful for the actual code inside (as the business logic often invokes an API call) and for integration with plugins developed inside our company and deployed as parts of our product. The external REST API also uses the internal API.
We implemented permission checking (using Spring Security) in the internal API because we wanted to control access at the lowest API level.
But here comes the problem: there are some operations defined on the API level that are regarded as forbidden for a currently logged user, but which should be performed smoothly by the server itself. For example, deleting some entity could be forbidden for the user, but the server might want to delete this entity as a side effect of some other operation performed by the user and we want this to be allowed.
So what is the best approach for allowing the server to perform an operation (in some kind of super-user mode) that might be forbidden for the actual logged-in user?
As I see it we have several options, each of which has its pros and cons:
1. Implement permission checking at the external API level (REST) - bad, because plugins would bypass the permission checks.
2. Turn off permission checking for the current thread after the request has been granted - too dangerous; we might allow too many server actions that should be forbidden.
3. Explicitly ask the internal API level to perform the operation in a privileged mode (just like PrivilegedAction in the Java security framework; see the sketch after this list) - too verbose.
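For illustration, such a privileged-mode call might look roughly like this (a sketch using Spring Security's SecurityContextHolder; all names here are made up):

    import java.util.function.Supplier;
    import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
    import org.springframework.security.core.authority.AuthorityUtils;
    import org.springframework.security.core.context.SecurityContext;
    import org.springframework.security.core.context.SecurityContextHolder;

    public final class SystemPrivileges {
        // Run the given action with a temporary "system" authentication,
        // restoring the caller's security context afterwards.
        public static <T> T runAsSystem(Supplier<T> action) {
            SecurityContext original = SecurityContextHolder.getContext();
            try {
                SecurityContext elevated = SecurityContextHolder.createEmptyContext();
                elevated.setAuthentication(new UsernamePasswordAuthenticationToken(
                        "system", "N/A",
                        AuthorityUtils.createAuthorityList("ROLE_SYSTEM")));
                SecurityContextHolder.setContext(elevated);
                return action.get();
            } finally {
                SecurityContextHolder.setContext(original);
            }
        }
    }

    // Usage: SystemPrivileges.runAsSystem(() -> entityService.delete(entity));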
As none of the above approaches is ideal, I wonder what is the best-practice approach for this problem?
Thanks.
Security is applied at the bounds of a module. If I understand you correctly, your system applies security on two levels of abstraction of (roughly) the same API. It sounds complex, as you have to make a double security check across the whole of both APIs.

Consider migrating the REST-needed methods from the internal API to the external one, and deleting the security stuff from the internal API:
- the external API will manage security for external clients (at the boundaries of your app)
- the internal API will be strictly reserved for internal app and plugin use (and you can happily hack on it, as no external clients are bound to it)
Do you really need to control the plugins' permissions to your application logic? Is there a good reason for it? Plugins are developed by your company, after all. Maybe a formal document explaining to plugin developers what should not be done, or a safety test suite validating the plugin (e.g. asserting the plugin does not call "this" method), will do the job just as well.

If you still need to consider these plugins as "untrusted", add the methods they need to your external API (on your app boundary) and create a specific security profile for each use: "restProfile", "clientProfile" & "pluginProfile". Each will have specific rights on your external API methods.
It sounds like you need two levels of internal API, one exposed to plugins and one not.
The best way of enabling that would be using OSGi (or Spring Modules). It allows you to explicitly state which packages and classes can be accessed by other modules (i.e. the REST modules and plugin modules). Those would be the exposed level of your new internal API, and you would use Spring Security to further restrict access selectively. The internal packages and classes would contain the methods which do all the low-level stuff (like deleting entities), and you wouldn't be able to call them directly. Some of the exposed API would just duplicate the internal API with a security check, but that would be OK.
The problem with the best way is that Spring Modules strikes me as still a bit too immature even to put into a new webapp project. There's no way I'd want to shoehorn it into an old project.
You could probably achieve something similar using Spring Security and AspectJ, but it strikes me that the performance overhead would be prohibitive.
One solution that would be quite cool, if you could re-architect your system, would be to take tasks requiring security elevation offline, or rather make them asynchronous. Using Quartz and/or Apache Camel (or a proper ESB) you could make the "delete my account" method create an offline task that can, at a future date, be executed as an atomic unit of work with admin privileges. That means you can cleanly do your security checks for the user requesting account deletion in a completely separate thread from where the deletion actually takes place. This would have the advantage of making the web thread more responsive, although you'd still want to do some things immediately to preserve the illusion that the requested action had been completed.
If you're using Spring, you may as well utilize it fully. Spring offers AOP, which allows you to use interceptors to perform these cross-system checks and, in the event of an unauthorized action, prevent the action.
You can read more about this in Spring's online documentation here.
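For example, a minimal sketch of such an interceptor (an @Aspect with a made-up pointcut, package, and role name):

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.security.access.AccessDeniedException;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.context.SecurityContextHolder;

    @Aspect
    public class DeletePermissionAspect {
        // Intercept every delete* method in the service layer and verify
        // that the current user holds the required role before proceeding.
        @Before("execution(* com.example.service..*.delete*(..))")
        public void checkDeletePermission() {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            boolean allowed = auth != null && auth.getAuthorities().stream()
                    .anyMatch(a -> "ROLE_DELETE".equals(a.getAuthority()));
            if (!allowed) {
                throw new AccessDeniedException("delete not permitted for current user");
            }
        }
    }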
Hope this helps...
Yuval =8-)