I have recently started out with Spring and am unsure how to approach this issue. I have a Spring Boot application which makes calls to remote REST APIs, for example an AddressService class with a getAddress(String user) method that makes an HTTP call and returns a JSON response. I would like to set up Spring profiles for development purposes: local, dev, uat, prod.
When the program is running with the local profile, I would like to "mock" these external API calls with an expected JSON response, so I can just test logic, but when it is run in any of the other profiles I would like to make the actual calls. How can I go about doing this? From what I read, there are many ways people approach this: WireMock, RestTemplate, Mockito, etc. I'm confused about which way to go.
Any advice would be greatly appreciated. Thanks.
WireMock and Mockito are for unit tests, where you mock the real request. There is an example here:
How do I mock a REST template exchange?
When you need a running implementation with a mock, I think the easiest way is to have an interface:
public interface AdressAdapter {
    List<Adress> getAddress(String name);
}
And two different implementations depending on the profile.
#Profile("local")
public class DummyAdress implements AdressAdapter{
#Override
public List<Adress> getAddress(String name) {
//Mock here something
return null;
}
}
The ! means NOT the local profile in this case:
#Profile("!local")
public class RealAdress implements AdressAdapter{
#Override
public List<Adress> getAddress(String name) {
//Make Restcall
return null;
}
}
What you could do is use different application.properties files depending on your profile. That way, you just change the URL to a mock server for your local profile.
So what you have to do is:
Create another properties file in your resources folder named application-local.properties.
Change the URL of the desired service there (see the sketch after this list).
Start your application with the VM option -Dspring.profiles.active=local.
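As a minimal sketch, application-local.properties could then point the service at a locally running mock server; the property key addressservice.base-url is hypothetical and should match whatever key your client code actually reads:
# application-local.properties (hypothetical key, assuming the mock server runs on port 8089)
addressservice.base-url=http://localhost:8089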
Here is a link that describes well what you want to achieve.
For your mock server, you could use WireMock, Mountebank, Postman, ... which can be started separately and mock specific endpoints to return what you want.
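For instance, a minimal WireMock stub for the address endpoint could look like the sketch below; the path /address/john and the JSON body are made up for illustration:
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

// Start a standalone mock server on port 8089 and stub one endpoint
WireMockServer server = new WireMockServer(8089);
server.start();
server.stubFor(get(urlEqualTo("/address/john"))
        .willReturn(aResponse()
                .withHeader("Content-Type", "application/json")
                .withBody("{\"street\":\"Main St\"}")));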
I've recently been working with microservices developed as Spring Boot applications (v2.2), and in my company we're using Keycloak as the authorization server.
We chose it because we need complex policies, roles and groups, and we also need User-Managed Access (UMA) to share resources between users.
We configured Keycloak with a single realm and many clients (one client per microservice).
Now, I understand that I need to explicitly define Resources within Keycloak and this is fine, but the question is: do I really need to duplicate all of them in my microservice's property file?
All the documentation, examples and tutorials end up with the same thing, that is something like:
keycloak.policy-enforcer-config.enforcement-mode=PERMISSIVE
keycloak.policy-enforcer-config.paths[0].name=Car Resource
keycloak.policy-enforcer-config.paths[0].path=/cars/create
keycloak.policy-enforcer-config.paths[0].scopes[0]=car:create
keycloak.policy-enforcer-config.paths[1].path=/cars/{id}
keycloak.policy-enforcer-config.paths[1].methods[0].method=GET
keycloak.policy-enforcer-config.paths[1].methods[0].scopes[0]=car:view-detail
keycloak.policy-enforcer-config.paths[1].methods[1].method=DELETE
keycloak.policy-enforcer-config.paths[1].methods[1].scopes[0]=car:delete
(this second example fits our case better because it also uses different authorization scopes per HTTP method).
In real life each microservice we're developing has dozens of endpoints, and defining them one by one seems to me a waste of time and a weakness in the code's robustness: if we change an endpoint, we need to reconfigure it in both Keycloak and the application properties.
Is there a way to use some kind of annotation at Controller level? Something like the following pseudo-code:
@RestController
@RequestMapping("/foo")
public class MyController {

    @GetMapping
    @KeycloakPolicy(scope = "foo:view")
    public ResponseEntity<String> foo() {
        ...
    }

    @PostMapping
    @KeycloakPolicy(scope = "bar:create")
    public ResponseEntity<String> bar() {
        ...
    }
}
In the end, I developed my own project that provides auto-configuration capabilities to a Spring Boot project that needs to work as a resource server.
The project is released under the MIT license and it's available on my GitHub:
keycloak-resource-autoconf
I'm still quite new to microservices and have a few basic architectural questions that I can't get solved right now.
I'm using the Quarkus framework with standard extensions like quarkus-resteasy and quarkus-rest-client for the implementation.
The scenario:
I have an example "Persistence" service, in a dedicated Maven project, that I want to populate with data from outside via a REST call.
#Path("/api/persistence")
#Products(MediaType.APPLICATION_JSON)
public class Persistence{
#Inject
EntityManager entityManager;
#POST
#Transactional
public Response create(PostDto postDto) {
Post post = toPostMapper.toResource(postDto);
entityManager.persist(post);
return Response.ok(postDto).status(201).build();
}
}
At the same time I would like to have a microservice DataGenerator which generates the corresponding data and passes it to the Persistence Service.
My problem: API sharing
Both services were created as Maven projects.
According to the tutorials I found, the correct way would be to declare an interface (here called PersistenceApi) in the DataGenerator project like this:
#Path("/api/persistence")
#Products(MediaType.APPLICATION_JSON)
#RegisterRestClient
public interface PersistenceApi {
#POST
#Transactional
public Response create(PostDto post) ;
}
This interface is then integrated into the DataGenerator service via @Inject, which leads to the following example service:
@RequestScoped
@Path("/api/datagenerator")
@Produces("application/json")
@Consumes("application/json")
public class DataGenerator {

    @Inject
    @RestClient
    PersistenceApi persistenceApi;

    @POST
    public void getPostExamplePostToPersistence() {
        PostDto post = new PostDto();
        post.setTitle("Find me in db in persistence-service");
        persistenceApi.create(post);
    }
}
I have the PersistenceService running locally on port 8181 and have added the following entries to the application.properties of the DataGenerator project so that the service can be found:
furnace.collection.item.service.PersistenceApi/mp-rest/url=http://localhost:8181
furnace.collection.item.service.PersistenceApi/mp-rest/scope=javax.inject.Singleton
I find it "wrong" to declare the interface in my DataGenerator, because at this point I don't notice when the api provided by the Persistence service changes. Accordingly one could come up with the idea to position the interface in the Persistence service, which is then implemented by my concrete Persistence implementation and leads to the following code.
#Path("/api/persistence")
#Products(MediaType.APPLICATION_JSON)
#RegisterRestClient
public class PersistenceApiImpl implements PersistenceApi {
#Inject
EntityManager entityManager;
#POST
#Transactional
public Response create(PostDto fruit) {
Post post = toPostMapper.toResource(fruit);
entityManager.persist(post);
return Response.ok(fruit).status(201).build();
}
}
In order to use it in my DataGenerator project, I would have to include the Persistence project as a dependency, which sounds like a "monolith with extra steps" to me and therefore feels wrong in terms of "separation of concerns".
I have tried the following approach:
I created another Maven project called PersistenceApi which only contains the corresponding PersistenceApi interface. This PersistenceApi project was then included as a dependency in both the "Persistence" and "DataGenerator" projects. In the "Persistence" project I implement the interface as in the example above, and in the "DataGenerator" project I try to inject the interface via @Inject.
Unfortunately this does not work. When I build the service, I get an UnsatisfiedResolutionException telling me that the PersistenceApi dependency, which I want to inject via @Inject in the DataGenerator service, cannot be resolved.
Now my questions:
I don't see what I'm missing here. Could you help me?
Is this kind of API sharing with dedicated API projects a viable way, or is the "monolith with extra steps" approach really the way to go?
Thank you in advance.
That's a common problem with microservices. As in the book "Microservices: Grundlagen flexibler Softwarearchitekturen" by Eberhard Wolff (I saw that you are German too), I follow the idea that microservices should have the same coupling as the teams developing them and as the organization you're developing them for (have a look at Conway's law). Therefore, services of mostly independent teams should be developed independently, and an API change in one service should not affect another at the time of the update.
If you develop both services in your team, then I think you can couple them the way you are doing it, because you don't have to work together with other teams and there will be no huge overhead. Note that you will be forced to release both services together. If that is always OK for you, then save your time and do it your way; if not, have a look at API versioning:
I use API versioning so the old API is still reachable under "v1/" and the new one under "v2/". This way the team behind the other microservice has enough time to update their service.
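A minimal sketch of what that can look like with JAX-RS resources; the paths and class names are made up for illustration:
// Old contract stays reachable while consumers migrate
@Path("/v1/api/persistence")
public class PersistenceResourceV1 {
    // old endpoints
}

// New, possibly incompatible contract lives side by side with the old one
@Path("/v2/api/persistence")
public class PersistenceResourceV2 {
    // new endpoints
}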
Have a look at Domain-Driven Design for different ways of integrating bounded contexts (= services) and their coupling consequences. Without API versioning you are forced into a partnership and you need to release together. Maybe you prefer Customer/Supplier or even Conformist.
To test compatibility between both services, have a look at consumer-driven contracts and Pact. You can also generate OpenAPI files and track their changes, but that will only help to notify people about changes.
I was developing a RESTful web service with Spring MVC 4 and Spring Data JPA. I have about 100+ APIs for the frontend to pull data from. What I want to know is how to test all of my APIs automatically with random data.
The APIs look like:
@RestController
@Api(tags = "index")
@RequestMapping("/index")
public class IndexController {

    @Autowired
    private IndexService indexService;

    @RequestMapping(value = "/data", method = RequestMethod.GET)
    @ApiOperation(value = "today's data", notes = "today's data", consumes = "application/json", produces = "application/json")
    public Object getTodayData() {
        return indexService.getTodayData();
    }

    @RequestMapping(value = "/chartData", method = RequestMethod.GET)
    @ApiOperation(value = "charts data", notes = "charts data", consumes = "application/json", produces = "application/json")
    public Object getLast7Data() {
        return indexService.getLast7Data();
    }
}
If I test them with Postman one by one, it wastes a lot of time. While developing, we should make sure ourselves that the service is OK.
I have got a solution, but it doesn't satisfy me. Here it is:
Scan the controllers of the specified package, then use reflection to read each class-level annotation, which gives the value of @RequestMapping("/index").
Iterate through the methods of each class and read the method annotations the same way, building the full URL.
Create random data for the request, execute the request, and log the response.
Could anyone provide a solution for this? I'd very much appreciate your help.
I see that you are using Swagger in your API; you can use it to generate client code (https://github.com/swagger-api/swagger-codegen) for automated testing.
Since you are using the Spring framework, you can try the following:
Use a Spring integration test for testing the API. It spawns an instance of your service and tests against it.
Use RestAssured & JUnit to hit the API and assert the response.
Use RequestMappingHandlerMapping.getHandlerMethods(), which you can simply get with Spring injection, e.g. via @Autowired. This will give you a map of RequestMappingInfo -> HandlerMethod, which contains all the information you need, as sketched below.
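A minimal sketch of that last idea; the surrounding test class and the random-data generation are left out, and the method name listEndpoints is made up:
@Autowired
private RequestMappingHandlerMapping handlerMapping;

public void listEndpoints() {
    // Each entry maps route metadata (URL patterns, HTTP methods) to the controller method
    for (Map.Entry<RequestMappingInfo, HandlerMethod> entry
            : handlerMapping.getHandlerMethods().entrySet()) {
        RequestMappingInfo info = entry.getKey();
        Set<String> patterns = info.getPatternsCondition().getPatterns();
        Set<RequestMethod> methods = info.getMethodsCondition().getMethods();
        // Build and execute a request with random data for each pattern/method pair here
        System.out.println(methods + " " + patterns + " -> " + entry.getValue().getMethod().getName());
    }
}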
You can run the tests as regular JUnit tests, without the need for Postman etc., using Spring's integration testing support:
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextHierarchy({
    @ContextConfiguration(name = "root", locations = "classpath:applicationContext.xml"),
    @ContextConfiguration(name = "web", locations = "classpath:xxx-servlet.xml")
})
public class YourTest extends AbstractTransactionalJUnit4SpringContextTests {...}
In this test, use an @Autowired WebApplicationContext and pass it to MockMvcBuilders.webAppContextSetup(webApplicationContext) to create a MockMvc instance. It allows you to submit HTTP requests to Spring's MockMvc infrastructure via an easy interface.
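A minimal sketch of such a test method, assuming the usual static imports from MockMvcRequestBuilders and MockMvcResultMatchers; the /index/data path comes from the controller shown in the question:
@Autowired
private WebApplicationContext webApplicationContext;

private MockMvc mockMvc;

@Before
public void setUp() {
    mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
}

@Test
public void todayDataReturnsOk() throws Exception {
    // get() and status() are the static imports mentioned above
    mockMvc.perform(get("/index/data").accept(MediaType.APPLICATION_JSON))
           .andExpect(status().isOk());
}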
Note that Spring's MockMvc framework will not run any real app server such as Tomcat. But this might be exactly what you need, since it is much faster. By default, the Spring integration testing framework will only initialize your Spring application context once for all tests with the same Spring configuration (use @DirtiesContext on a test class or method to signal that a new Spring app context is required after a specific test).
If you feel you need to run an actual app server such as Tomcat within your tests, check Maven plugins such as tomcat7-maven-plugin.
What's the right way to read configuration in Dropwizard from something like a database or a REST call? I have a use case where I cannot have a YAML file with some values, and must retrieve settings/config at startup time from a preconfigured URL via REST calls.
Is it right to just invoke these REST calls in the get methods of the ApplicationConfiguration class?
Similar to my answer here, you implement the ConfigurationSourceProvider interface however you wish, and configure your Dropwizard application to use it in your Application class:
@Override
public void initialize(Bootstrap<MyConfiguration> bootstrap) {
    bootstrap.setConfigurationSourceProvider(new MyDatabaseConfigurationSourceProvider());
}
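A minimal sketch of such a provider, assuming the configuration is served as YAML from a remote endpoint; the class name and the URL are made up for illustration:
import io.dropwizard.configuration.ConfigurationSourceProvider;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class MyDatabaseConfigurationSourceProvider implements ConfigurationSourceProvider {
    @Override
    public InputStream open(String path) throws IOException {
        // Ignore the local path argument and fetch the YAML from a preconfigured URL instead
        return new URL("http://config-server/myapp/config.yml").openStream();
    }
}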
By default, the InputStream you return is read as YAML and mapped to the Configuration object. You can override this default implementation via:
bootstrap.setConfigurationFactoryFactory(new MyDatabaseConfigurationFactoryFactory<>());
Then you have your FactoryFactory :) that returns a Factory which reads the InputStream and returns your Configuration.
public T build(ConfigurationSourceProvider provider, String path) throws IOException {
    // Decode.onWhateverFormatYouWish is a placeholder for your own parsing logic
    return Decode.onWhateverFormatYouWish(provider.open(path));
}
Elaborating a bit further on Nathan's reply, you might want to consider using the UrlConfigurationSourceProvider, which also ships with Dropwizard and allows you to retrieve the configuration from a URL.
Something like:
@Override
public void initialize(Bootstrap<MyRestApplicationConfiguration> bootstrap) {
    bootstrap.setConfigurationSourceProvider(new UrlConfigurationSourceProvider());
}
I have the route below. In a unit test, since I don't have the FTP server available, I'd like to use Camel's test support to send an invalid message to "ftp://hostname/input" and verify that it fails and is routed to "ftp://hostname/error".
I've gone through the documentation, which mainly talks about using the "mock:" endpoint, but I am not sure how to use it in this scenario.
public class MyRoute extends RouteBuilder {

    @Override
    public void configure() {
        onException(EdiOrderParsingException.class)
            .handled(true)
            .to("ftp://hostname/error");

        from("ftp://hostname/input")
            .bean(new OrderEdiTocXml())
            .convertBodyTo(String.class)
            .convertBodyTo(Document.class)
            .choice()
                .when(xpath("/cXML/Response/Status/@text='OK'"))
                    .to("ftp://hostname/valid")
                .otherwise()
                    .to("ftp://hostname/invalid");
    }
}
As Ben says, you can either set up an FTP server and use the real components. The FTP server can be embedded, or you can set up an FTP server in-house. The latter is more like integration testing, where you may have a dedicated test environment.
Camel is very flexible in its test kit, and if you want to build a unit test that does not use the real FTP component, you can replace it before the test. For example, you can replace the input endpoint of a route with a direct endpoint to make it easier to send a message to the route. Then you can use an interceptor to intercept the sending to the FTP endpoints and detour the message.
The advice-with part of the test kit offers these capabilities: http://camel.apache.org/advicewith.html. It is also discussed in chapter 6 of the Camel in Action book; section 6.3 talks about simulating errors.
In your example you could do something like:
public void testSendError() throws Exception {
    // first advice the route to replace the input, and catch sending to FTP servers
    context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            replaceFromWith("direct:input");
            // intercept valid messages
            interceptSendToEndpoint("ftp://hostname/valid")
                .skipSendToOriginalEndpoint()
                .to("mock:valid");
            // intercept invalid messages
            interceptSendToEndpoint("ftp://hostname/invalid")
                .skipSendToOriginalEndpoint()
                .to("mock:invalid");
        }
    });

    // we must manually start when we are done with all the advice-with
    context.start();

    // setup expectations on the mocks
    getMockEndpoint("mock:invalid").expectedMessageCount(1);
    getMockEndpoint("mock:valid").expectedMessageCount(0);

    // send the invalid message to the route
    template.sendBody("direct:input", "Some invalid content here");

    // assert that the test was okay
    assertMockEndpointsSatisfied();
}
From Camel 2.10 onwards we make intercepting and mocking a bit easier when using advice-with. We are also introducing a stub component: http://camel.apache.org/stub
Have a look at MockFtpServer!
<dependency>
<groupId>org.mockftpserver</groupId>
<artifactId>MockFtpServer</artifactId>
<version>2.2</version>
<scope>test</scope>
</dependency>
With this one you can simulate all sorts of behaviors like permission problems, etc:
Example:
fakeFtpServer = new FakeFtpServer();
fakeFtpServer.setServerControlPort(FTPPORT);
FileSystem fileSystem = new UnixFakeFileSystem();
fileSystem.add(new DirectoryEntry(FTPDIRECTORY));
fakeFtpServer.setFileSystem(fileSystem);
fakeFtpServer.addUserAccount(new UserAccount(USERNAME, PASSWORD, FTPDIRECTORY));
...
assertTrue("Expected file to be transferred", fakeFtpServer.getFileSystem().exists(FTPDIRECTORY + "/" + FILENAME));
Take a look at this unit test and those in the same directory... they'll show you how to stand up a local FTP server for testing and how to use CamelTestSupport to validate scenarios against it, etc.
example unit test...
https://svn.apache.org/repos/asf/camel/trunk/components/camel-ftp/src/test/java/org/apache/camel/component/file/remote/FromFileToFtpTest.java
which extends this test support class...
https://svn.apache.org/repos/asf/camel/trunk/components/camel-ftp/src/test/java/org/apache/camel/component/file/remote/FtpsServerTestSupport.java
In our project we do not create a mock FTP server to test the route; instead we use properties, so the FTP endpoints can be replaced by the file Camel component for local development and unit testing.
Your code would look like this:
public class MyRoute extends RouteBuilder {

    @Override
    public void configure() {
        onException(EdiOrderParsingException.class)
            .handled(true)
            .to("{{myroute.error}}");

        from("{{myroute.input.endpoint}}")
            .bean(new OrderEdiTocXml())
            .convertBodyTo(String.class)
            .convertBodyTo(Document.class)
            .choice()
                .when(xpath("/cXML/Response/Status/@text='OK'"))
                    .to("{{myroute.valid.endpoint}}")
                .otherwise()
                    .to("{{myroute.invalid.endpoint}}");
    }
}
Locally and for system tests we use file endpoints declared in the property file:
myroute.input.endpoint=file:/home/user/myproject/input
myroute.valid.endpoint=file:/home/user/myproject/valid
myroute.invalid.endpoint=file:/home/user/myproject/invalid
myroute.error=file:/home/user/myproject/error
Or, in a JUnit CamelTestSupport test, you can use the useOverridePropertiesWithPropertiesComponent method to set the properties you want to override.
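A minimal sketch of that override, assuming a CamelTestSupport-based test class; the class name and the direct/mock endpoint choices are illustrative:
public class MyRouteTest extends CamelTestSupport {

    @Override
    protected Properties useOverridePropertiesWithPropertiesComponent() {
        // Replace the FTP endpoints with test-friendly ones for this test only
        Properties props = new Properties();
        props.put("myroute.input.endpoint", "direct:input");
        props.put("myroute.valid.endpoint", "mock:valid");
        props.put("myroute.invalid.endpoint", "mock:invalid");
        props.put("myroute.error", "mock:error");
        return props;
    }
}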
As an alternative you can also use a "direct" endpoint instead, but then you may miss some file options that could otherwise be covered by the unit test.
We then test the FTP connection only against the real system, by setting the properties like this:
myroute.input.endpoint=ftp://hostname/input
myroute.valid.endpoint=ftp://hostname/valid
myroute.invalid.endpoint=ftp://hostname/invalid
myroute.error=ftp://hostname/error
With this you can also have a different configuration for, e.g., the production server, distinct from the integration test environment.
Example properties for the production environment:
myroute.input.endpoint=ftp://hostname-prod/input
myroute.valid.endpoint=ftp://hostname-prod/valid
myroute.invalid.endpoint=ftp://hostname-prod/invalid
myroute.error=ftp://hostname-prod/error
In my opinion it is totally acceptable to use file endpoints to simplify the JUnit code; it tests the route only and not the connection.
Testing the connection is more of an integration test and should be executed on a real server connected to the real external system (in your case FTP servers, but it can be other endpoints/systems as well).
By using properties you can also configure different URLs per environment (for example, we have three testing environments and one production environment, all with different endpoints).