MockServer fails to read request: unknown message format (Java)

I need to test a REST client. For that purpose I'm using org.mockserver.integration.ClientAndServer.
I start my server, create an expectation, mock my client, and run it. But when the server receives the request, I see this in the logs:
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - received binary request:
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - unknown message format
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
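Decoding the hex shows this is the HTTP/2 client connection preface ("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n") followed by a SETTINGS frame; a quick sketch to verify this (hex string shortened to the printable prefix of the dump above):
String hex = "505249202a20485454502f322e300d0a0d0a534d0d0a0d0a";
StringBuilder sb = new StringBuilder();
for (int i = 0; i < hex.length(); i += 2) {
    // each pair of hex digits is one ASCII byte
    sb.append((char) Integer.parseInt(hex.substring(i, i + 2), 16));
}
System.out.println(sb); // prints the HTTP/2 preface: PRI * HTTP/2.0 ... SM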
This is my test:
@RunWith(PowerMockRunner.class)
@PrepareForTest({NrfClient.class, SnefProperties.class})
@PowerMockIgnore({"javax.net.ssl.*"})
@TestPropertySource(locations = "classpath:test.properties")
public class NrfConnectionTest {
private String finalPath;
private UUID uuid = UUID.fromString("8d92d4ac-be0e-4016-8b2c-eff2607798e4");
private ClientAndServer mockServer;
@Before
public void startMockServer() {
mockServer = startClientAndServer(8888);
}
@After
public void stopServer() {
mockServer.stop();
}
@Test
public void NrfRegisterTest() throws Exception {
//create some expectation
new MockServerClient("127.0.0.1", 8888)
.when(HttpRequest.request()
.withMethod("PUT")
.withPath("/nnrf-nfm/v1/nf-instances/8d92d4ac-be0e-4016-8b2c-eff2607798e4"))
.respond(HttpResponse.response().withStatusCode(201));
//long preparations and mocking the NrfClient (client that actually make request)
//NrfClient is singleton, so had to mock a lot of methods.
PropertiesConfiguration config = new PropertiesConfiguration();
config.setAutoSave(false);
File file = new File("test.properties");
if (!file.exists()) {
String absolutePath = file.getAbsolutePath();
finalPath = absolutePath.substring(0, absolutePath.length() - "test.properties".length()) + "src\\test\\resources\\test.properties";
file = new File(finalPath);
}
try {
config.load(file);
config.setFile(file);
} catch(ConfigurationException e) {
LogUtils.warn(NrfConnectionTest.class, "Failed to load properties from file " + "classpath:test.properties", e);
}
SnefProperties spyProperties = PowerMockito.spy(SnefProperties.getInstance());
PowerMockito.doReturn(finalPath).when(spyProperties, "getPropertiesFilePath");
PowerMockito.doReturn(config).when(spyProperties, "getProperties");
PowerMockito.doReturn(config).when(spyProperties, "getLastUpdatedProperties");
NrfConfig nrfConfig = getNrfConfig();
NrfClient nrfClient = PowerMockito.spy(NrfClient.getInstance());
SnefAddressInfo snefAddressInfo = new SnefAddressInfo("127.0.0.1", "8080");
PowerMockito.doReturn(nrfConfig).when(nrfClient, "loadConfiguration", snefAddressInfo);
PowerMockito.doReturn(uuid).when(nrfClient, "getUuid");
Whitebox.setInternalState(SnefProperties.class, "instance", spyProperties);
nrfClient.initialize(snefAddressInfo);
//here the client makes request
nrfClient.run();
}
private NrfConfig getNrfConfig() {
NrfConfig nrfConfig = new NrfConfig();
nrfConfig.setNrfDirectConnection(true);
nrfConfig.setNrfAddress("127.0.0.1:8888");
nrfConfig.setSnefNrfService(State.ENABLED);
nrfConfig.setSmpIp("127.0.0.1");
nrfConfig.setSmpPort("8080");
return nrfConfig;
}
}
It looks like I'm missing some server configuration, or using it the wrong way.
Or maybe the reason is in PowerMock: could it be that MockServer is incompatible with PowerMock or PowerMockRunner?

Related

Mixed up Test configuration when using @ResourceArg

TL;DR: When running tests with different @ResourceArgs, the configuration of different tests gets thrown around and overrides others, breaking tests meant to run with specific configurations.
So, I have a service that has tests that run in different configuration setups. The main difference at the moment is the service can either manage its own authentication or get it from an external source (Keycloak).
I first control this using test profiles, which seem to work fine. Unfortunately, in order to support both cases, the ResourceLifecycleManager I have set up supports setting up a Keycloak instance and returns config values that break the config for self authentication. (This is due primarily to the fact that I have not found out how to get the lifecycle manager to determine on its own what profile or config is currently running. If I could do this, I think I would be much better off than using @ResourceArg, so I would love to know if I missed something here.)
To remedy this shortcoming, I have attempted to use @ResourceArg to convey to the lifecycle manager when to set up for external auth. However, I have noticed some really odd execution timings, and the config that ends up at my test/service isn't what I intend based on the test class's annotations; it is obvious the lifecycle manager has set up for external auth.
Additionally, it should be noted that I have my tests ordered such that the profiles and configs shouldn't be running out of order: all the tests that don't care run first, then the 'normal' tests with self auth, then the tests with the external auth profile. I can see this working appropriately when I run in IntelliJ, and I can tell time is being taken to start up the new service instance between the test profiles.
Looking at the logs when I throw a breakpoint in places, some odd things are obvious:
With a breakpoint on a failing test (before the externally-configured tests run):
The start() method of my TestResourceLifecycleManager has been called twice
The first run ran with Keycloak starting, which would override/break the config
(though the startup time I would expect Keycloak to take isn't happening; a little confused here)
The second run is correct, not starting Keycloak
The profile config is what is expected, except for what the Keycloak setup would override
With a breakpoint on an externally-configured test (after all self-configured tests run):
The start() method has now been called 4 times; it appears that things were started in the same order as before, again for the new run of the app
There could be some weirdness in how IntelliJ/Gradle shows logs, but I am interpreting this as:
Quarkus initializing the two instances of the lifecycle manager when starting the app for some reason, with one's config overriding the other's, causing my woes.
The lifecycle manager is working as expected; it appropriately starts / doesn't start Keycloak when configured either way.
At this point I can't tell if I'm doing something wrong, or if there's a bug.
Test class example for self-auth test (same annotations for all tests in this (test) profile):
@Slf4j
@QuarkusTest
@QuarkusTestResource(TestResourceLifecycleManager.class)
@TestHTTPEndpoint(Auth.class)
class AuthTest extends RunningServerTest {
Test class example for external auth test (same annotations for all tests in this (externalAuth) profile):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(value = TestResourceLifecycleManager.class, initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
ExternalAuthTestProfile extends this, providing the appropriate profile name:
public class NonDefaultTestProfile implements QuarkusTestProfile {
private final String testProfile;
private final Map<String, String> overrides = new HashMap<>();
protected NonDefaultTestProfile(String testProfile) {
this.testProfile = testProfile;
}
protected NonDefaultTestProfile(String testProfile, Map<String, String> configOverrides) {
this(testProfile);
this.overrides.putAll(configOverrides);
}
@Override
public Map<String, String> getConfigOverrides() {
return new HashMap<>(this.overrides);
}
@Override
public String getConfigProfile() {
return testProfile;
}
@Override
public List<TestResourceEntry> testResources() {
return QuarkusTestProfile.super.testResources();
}
}
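ExternalAuthTestProfile itself is not shown; a minimal sketch of what it presumably looks like (the "externalAuth" profile name is assumed from the annotations above):
public class ExternalAuthTestProfile extends NonDefaultTestProfile {
    public ExternalAuthTestProfile() {
        // profile name assumed from the (externalAuth) profile referenced above
        super("externalAuth");
    }
}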
Lifecycle manager:
@Slf4j
public class TestResourceLifecycleManager implements QuarkusTestResourceLifecycleManager {
public static final String EXTERNAL_AUTH_ARG = "externalAuth";
private static volatile MongodExecutable MONGO_EXE = null;
private static volatile KeycloakContainer KEYCLOAK_CONTAINER = null;
private boolean externalAuth = false;
public synchronized Map<String, String> startKeycloakTestServer() {
if(!this.externalAuth){
log.info("No need for keycloak.");
return Map.of();
}
if (KEYCLOAK_CONTAINER != null) {
log.info("Keycloak already started.");
} else {
KEYCLOAK_CONTAINER = new KeycloakContainer()
// .withEnv("hello","world")
.withRealmImportFile("keycloak-realm.json");
KEYCLOAK_CONTAINER.start();
log.info(
"Test keycloak started at endpoint: {}\tAdmin creds: {}:{}",
KEYCLOAK_CONTAINER.getAuthServerUrl(),
KEYCLOAK_CONTAINER.getAdminUsername(),
KEYCLOAK_CONTAINER.getAdminPassword()
);
}
String clientId;
String clientSecret;
String publicKey = "";
try (
Keycloak keycloak = KeycloakBuilder.builder()
.serverUrl(KEYCLOAK_CONTAINER.getAuthServerUrl())
.realm("master")
.grantType(OAuth2Constants.PASSWORD)
.clientId("admin-cli")
.username(KEYCLOAK_CONTAINER.getAdminUsername())
.password(KEYCLOAK_CONTAINER.getAdminPassword())
.build();
) {
RealmResource appsRealmResource = keycloak.realms().realm("apps");
ClientRepresentation qmClientResource = appsRealmResource.clients().findByClientId("quartermaster").get(0);
clientSecret = qmClientResource.getSecret();
log.info("Got client id \"{}\" with secret: {}", "quartermaster", clientSecret);
//get private key
for (KeysMetadataRepresentation.KeyMetadataRepresentation curKey : appsRealmResource.keys().getKeyMetadata().getKeys()) {
if (!SIG.equals(curKey.getUse())) {
continue;
}
if (!"RSA".equals(curKey.getType())) {
continue;
}
String publicKeyTemp = curKey.getPublicKey();
if (publicKeyTemp == null || publicKeyTemp.isBlank()) {
continue;
}
publicKey = publicKeyTemp;
log.info("Found a relevant key for public key use: {} / {}", curKey.getKid(), publicKey);
}
}
// write public key
// = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString() + "/security/testKeycloakPublicKey.pem");
File publicKeyFile;
try {
publicKeyFile = File.createTempFile("oqmTestKeycloakPublicKey",".pem");
// publicKeyFile = new File(TestResourceLifecycleManager.class.getResource("/").toURI().toString().replace("/classes/java/", "/resources/") + "/security/testKeycloakPublicKey.pem");
log.info("path of public key: {}", publicKeyFile);
// if(publicKeyFile.createNewFile()){
// log.info("created new public key file");
//
// } else {
// log.info("Public file already exists");
// }
try (
FileOutputStream os = new FileOutputStream(
publicKeyFile
);
) {
IOUtils.write(publicKey, os, UTF_8);
} catch (IOException e) {
log.error("Failed to write out public key of keycloak: ", e);
throw new IllegalStateException("Failed to write out public key of keycloak.", e);
}
} catch (IOException e) {
log.error("Failed to create public key file: ", e);
throw new IllegalStateException("Failed to create public key file", e);
}
String keycloakUrl = KEYCLOAK_CONTAINER.getAuthServerUrl().replace("/auth", "");
return Map.of(
"test.keycloak.url", keycloakUrl,
"test.keycloak.authUrl", KEYCLOAK_CONTAINER.getAuthServerUrl(),
"test.keycloak.adminName", KEYCLOAK_CONTAINER.getAdminUsername(),
"test.keycloak.adminPass", KEYCLOAK_CONTAINER.getAdminPassword(),
//TODO:: add config for server to talk to
"service.externalAuth.url", keycloakUrl,
"mp.jwt.verify.publickey.location", publicKeyFile.getAbsolutePath()
);
}
public static synchronized void startMongoTestServer() throws IOException {
if (MONGO_EXE != null) {
log.info("Flapdoodle Mongo already started.");
return;
}
Version.Main version = Version.Main.V4_0;
int port = 27018;
log.info("Starting Flapdoodle Test Mongo {} on port {}", version, port);
IMongodConfig config = new MongodConfigBuilder()
.version(version)
.net(new Net(port, Network.localhostIsIPv6()))
.build();
try {
MONGO_EXE = MongodStarter.getDefaultInstance().prepare(config);
MongodProcess process = MONGO_EXE.start();
if (!process.isProcessRunning()) {
throw new IOException();
}
} catch (Throwable e) {
log.error("FAILED to start test mongo server: ", e);
MONGO_EXE = null;
throw e;
}
}
public static synchronized void stopMongoTestServer() {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
MONGO_EXE.stop();
MONGO_EXE = null;
}
public synchronized static void cleanMongo() throws IOException {
if (MONGO_EXE == null) {
log.warn("Mongo was not started.");
return;
}
log.info("Cleaning Mongo of all entries.");
}
@Override
public void init(Map<String, String> initArgs) {
this.externalAuth = Boolean.parseBoolean(initArgs.getOrDefault(EXTERNAL_AUTH_ARG, Boolean.toString(this.externalAuth)));
}
@Override
public Map<String, String> start() {
log.info("STARTING test lifecycle resources.");
Map<String, String> configOverride = new HashMap<>();
try {
startMongoTestServer();
} catch (IOException e) {
log.error("Unable to start Flapdoodle Mongo server");
}
configOverride.putAll(startKeycloakTestServer());
return configOverride;
}
@Override
public void stop() {
log.info("STOPPING test lifecycle resources.");
stopMongoTestServer();
}
}
The app can be found here: https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/open-qm-base-station
The tests are currently failing in the ways I am describing, so feel free to look around.
Note that to run this, you will need to run ./gradlew build publishToMavenLocal in https://github.com/Epic-Breakfast-Productions/OpenQuarterMaster/tree/main/software/libs/open-qm-core to install a dependency locally.
Github issue also tracking this: https://github.com/quarkusio/quarkus/issues/22025
Any use of @QuarkusTestResource() without restrictToAnnotatedClass set to true means that the QuarkusTestResourceLifecycleManager will be applied to all tests, no matter where the annotation is placed.
Hopefully restrictToAnnotatedClass will solve the problem.
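For example, on the externally-configured test class from the question (restrictToAnnotatedClass is an attribute of @QuarkusTestResource):
@Slf4j
@QuarkusTest
@TestProfile(ExternalAuthTestProfile.class)
@QuarkusTestResource(
        value = TestResourceLifecycleManager.class,
        restrictToAnnotatedClass = true,
        initArgs = @ResourceArg(name = TestResourceLifecycleManager.EXTERNAL_AUTH_ARG, value = "true"))
@TestHTTPEndpoint(Auth.class)
class AuthExternalTest extends RunningServerTest {
    // the manager (and its Keycloak config overrides) now applies to this
    // class only, instead of leaking into the self-auth tests
}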

How to integrate a MessageHandler into the SFTP scenario based on SftpRemoteFileTemplate?

I have implemented a service for getting a file from, putting a file to, and removing a file from an SFTP server, based on the SftpRemoteFileTemplate from Spring Integration.
Here sftpGetPayload gets a file from the SFTP server and delivers its content.
This is my code so far:
public String sftpGetPayload(String sessionId,
String host, int port, String user, String password,
String remoteDir, String remoteFilename, boolean remoteRemove) {
LOG.info("sftpGetPayload sessionId={}", sessionId);
LOG.debug("sftpGetPayLoad host={}, port={}, user={}", host, port, user);
LOG.debug("sftpGetPayload remoteDir={}, remoteFilename={}, remoteRemove={}",
remoteDir, remoteFilename, remoteRemove);
final AtomicReference<String> refPayload = new AtomicReference<>();
SftpRemoteFileTemplate template = getSftpRemoteFileTemplate(host, port,
user, password, remoteDir, remoteFilename);
template.get(remoteDir + "/" + remoteFilename,
is -> refPayload.set(getAsString(is)));
LOG.info("sftpGetToFile {} read.", remoteDir + "/" + remoteFilename);
deleteRemoteFile(template, remoteDir, remoteFilename, remoteRemove);
return refPayload.get();
}
private SftpRemoteFileTemplate getSftpRemoteFileTemplate(String host, int port,
String user, String password, String remoteDir, String remoteFilename) {
SftpRemoteFileTemplate template =
new SftpRemoteFileTemplate(sftpSessionFactory(host, port, user, password));
template.setFileNameExpression(
new LiteralExpression(remoteDir + "/" + remoteFilename));
template.setRemoteDirectoryExpression(new LiteralExpression(remoteDir));
//template.afterPropertiesSet();
return template;
}
private void deleteRemoteFile(SftpRemoteFileTemplate template,
String remoteDir, String remoteFilename, boolean remoteRemove) {
LOG.debug("deleteRemoteFile remoteRemove={}", remoteRemove);
if (remoteRemove) {
template.remove(remoteDir + "/" + remoteFilename);
LOG.info("sftpGetToFile {} removed.", remoteDir + "/" + remoteFilename);
}
}
All those GET actions are active actions, meaning the file to get is assumed to already be there. I would like to have a kind of polling process which calls my payload-consuming method as soon as a file is received on the SFTP server.
I have found another implementation based on Spring beans, configured with the Spring Integration DSL, which declares an SftpSessionFactory, an SftpInboundFileSynchronizer, an SftpMessageSource, and a MessageHandler, which together poll an SFTP site for the reception of a file and automatically initiate a message handler for further processing.
This code is as follows:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(myHost);
factory.setPort(myPort);
factory.setUser(myUser);
factory.setPassword(myPassword);
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<LsEntry>(factory);
}
@Bean
public SftpInboundFileSynchronizer sftpInboundFileSynchronizer() {
SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sftpSessionFactory());
fileSynchronizer.setDeleteRemoteFiles(false);
fileSynchronizer.setRemoteDirectory(myRemotePath);
fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter(myFileFilter));
return fileSynchronizer;
}
@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "5000"))
public MessageSource<File> sftpMessageSource() {
SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
sftpInboundFileSynchronizer());
source.setLocalDirectory(myLocalDirectory);
source.setAutoCreateLocalDirectory(true);
source.setLocalFilter(new AcceptOnceFileListFilter<File>());
return source;
}
@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
return new MessageHandler() {
@Override
public void handleMessage(Message<?> message) throws MessagingException {
System.out.println(message.getPayload());
}
};
}
How can I include this @Poller/MessageHandler/@ServiceActivator concept in my implementation above? Or is there a way to implement this feature in the template-based implementation?
The scenario could be the following:
I have a Spring Boot application with several classes that represent tasks. Some of those tasks are called automatically via the Spring @Scheduled annotation and a CRON specification, others are not.
@Scheduled(cron = "${task.to.start.automatically.frequency}")
public void runAsTask() {
...
}
The first task will start per its @Scheduled specification, get a file from the SFTP server, and process it. It will do that with its own channel (host1, port1, user1, password1, remoteDir1, remoteFile1).
The second task will also be run by the scheduler and generate something to put on the SFTP server. It will do that with its own channel (host2, port2, user2, password2, remoteDir2, remoteFile2). Very likely host2 = host1 and port2 = port1, but it is not a must.
The third task will also be run by the scheduler and generate something to put on the SFTP server. It will do that with the same channel as task1, but this task is a producer (not a consumer like task1) and will write a different file than task1 (host1, port1, user1, password1, remoteDir3, remoteFile3).
Task four has no @Scheduled annotation because it should notice when the file it has to process has been received from a third party and is hence available on its channel (host4, port4, user4, password4, remoteDir4, remoteFile4), and then get its content and process it.
I have read the whole Integration documentation, but it is hard to translate to this use case: from the XML configuration schemas to Java configuration with annotations, and from the rather static Spring bean approach to a merely dynamic approach at runtime.
I understood that I should use an IntegrationFlow to register the artifacts: an inbound adapter for task1, an outbound adapter for task2, an outbound adapter for task3 with the same session factory as task1 (registered elsewhere), and, last but not least, an inbound adapter with the poller feature for task4.
Or should all of them be gateways with their command feature? Or should I register an SftpRemoteFileTemplate?
To define the channel I have:
public class TransferChannel {
private String host;
private int port;
private String user;
private String password;
/* getters, setters, hash, equals, and toString */
}
To have all SFTP settings together, I have:
public class TransferContext {
private boolean enabled;
private TransferChannel channel;
private String remoteDir;
private String remoteFilename;
private boolean remoteRemove;
private String remoteFilenameFilter;
private String localDir;
/* getters, setters, hash, equals, and toString */
}
As the heart of the SFTP processing, each job will have a kind of DynamicSftpAdapter injected:
@Autowired
DynamicSftpAdapter sftp;

@Scheduled(cron = "${task.to.start.automatically.frequency}")
public void runAsTask() {
...
sftp.connect("Task1", context);
File f = sftp.getFile("Task1", "remoteDir", "remoteFile");
/* process file content */
sftp.removeFile("Task1", "remoteDir", "remoteFile");
sftp.disconnect("Task1", context);
}
The DynamicSftpAdapter is not much more than a fragment yet:
@Component
public class DynamicSftpAdapter {
private static final Logger LOG = LoggerFactory.getLogger(DynamicSftpAdapter.class);
@Autowired
private IntegrationFlowContext flowContext;
@Autowired
private ApplicationContext appContext;
private final Map<TransferChannel, IntegrationFlowRegistration> registrations = new HashMap<>();
private final Map<String, TransferContext> sessions = new ConcurrentHashMap<>();
@Override
public void connect(String sessionId, TransferContext context) {
if (this.registrations.containsKey(context.getChannel())) {
LOG.debug("connect, channel exists for {}", sessionId);
}
else {
// register the required SFTP Outbound Adapter
TransferChannel channel = context.getChannel();
IntegrationFlow flow = f -> f.handle(Sftp.outboundAdapter(cashedSftpSessionFactory(
channel.getHost(), channel.getPort(),
channel.getUser(), channel.getPassword())));
this.registrations.put(channel, flowContext.registration(flow).register());
this.sessions.put(sessionId, context);
LOG.info("sftp session {} for {} started", sessionId, context);
}
}
private DefaultSftpSessionFactory sftpSessionFactory(String host, int port, String user, String password) {
LOG.debug("sftpSessionFactory");
DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
factory.setHost(host);
factory.setPort(port);
factory.setUser(user);
factory.setPassword(password);
factory.setAllowUnknownKeys(true);
return factory;
}
private CachingSessionFactory<LsEntry> cashedSftpSessionFactory(String host, int port, String user, String password) {
LOG.debug("cashedSftpSessionFactory");
CachingSessionFactory<LsEntry> cashedSessionFactory =
new CachingSessionFactory<LsEntry>(
sftpSessionFactory(host, port, user, password));
return cashedSessionFactory;
}
@Override
public void sftpGetFile(String sessionId, String remoteDir, String remoteFilename) {
TransferContext context = sessions.get(sessionId);
if (context == null)
throw new IllegalStateException("Session not established, sessionId " + sessionId);
IntegrationFlowRegistration register = registrations.get(context.getChannel());
if (register != null) {
try {
LOG.debug("sftpGetFile get file {}", remoteDir + "/" + remoteFilename);
register.getMessagingTemplate().send(
MessageBuilder.withPayload(msg)
.setHeader(...).build());
}
catch (Exception e) {
appContext.getBean(context, DefaultSftpSessionFactory.class)
.close();
}
}
}
@Override
public void disconnect(String sessionId, TransferContext context) {
IntegrationFlowRegistration registration = this.registrations.remove(context.getChannel());
if (registration != null) {
registration.destroy();
}
LOG.info("sftp session for {} finished", context);
}
}
I did not get how to initiate an SFTP command. I also did not get this: when using an OutboundGateway I have to specify the SFTP command (like GET) up front, so would the whole SFTP handling then be in one method, specifying the outbound gateway factory, getting an instance with get(), and probably calling the message's .get() in some way?
Obviously I need help.
First of all, if you already use Spring Integration channel adapters, there is probably no reason to use such a low-level API as RemoteFileTemplate directly.
Secondly, there is a technical discrepancy: the SftpInboundFileSynchronizingMessageSource will produce a local file, a whole copy of the remote file. So, when we came to your SftpRemoteFileTemplate logic downstream, it would not work well, since we would already have just a local file (java.io.File), not an entity representing the remote file.
Even if your logic in sftpGetPayload() doesn't look complicated and custom enough to require such a separate method, it is better to have an SftpRemoteFileTemplate as a singleton and share it between different components when you work against the same SFTP server. It is just a stateless, straightforward Spring template-pattern implementation.
If you still insist on using your method from the mentioned integration flow, you should consider having a POJO method call for that @ServiceActivator(inputChannel = "sftpChannel"). See more in the docs: https://docs.spring.io/spring-integration/docs/current/reference/html/configuration.html#annotations
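For example, a minimal sketch of such a POJO (the class and method names here are illustrative, not from the post):
@Component
public class SftpFileHandler {

    // The inbound channel adapter delivers the synchronized local copy,
    // so the payload here is a java.io.File.
    @ServiceActivator(inputChannel = "sftpChannel")
    public void handle(File file) {
        System.out.println("Received: " + file.getAbsolutePath());
    }
}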
You may also find the SFTP Outbound Gateway a useful component for your use case. It has implementations of some common scenarios: https://docs.spring.io/spring-integration/docs/current/reference/html/sftp.html#sftp-outbound-gateway
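For example, a minimal sketch of a GET through the Java DSL (assuming a SessionFactory&lt;ChannelSftp.LsEntry&gt; bean like the one shown earlier; the flow name and local directory are illustrative):
@Bean
public IntegrationFlow sftpGetFlow(SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory) {
    return f -> f
            // GET the remote file whose path arrives as the message payload
            .handle(Sftp.outboundGateway(sftpSessionFactory,
                            AbstractRemoteFileOutboundGateway.Command.GET,
                            "payload")
                    .localDirectory(new File("target/sftp-downloads")))
            // downstream receives the fetched java.io.File
            .handle(m -> System.out.println("Downloaded: " + m.getPayload()));
}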

How to write Spock integration tests with a standalone Tomcat runner?

Our project is not currently using the Spring framework, so it is tested with a standalone Tomcat runner.
However, since integration tests such as @SpringBootTest are not possible, Tomcat is started in advance and the HTTP API tests are carried out using Spock.
Is there a way to make this work like @SpringBootTest?
TomcatRunner
private Tomcat tomcat = null;
private int port = 8080;
private String contextPath = null;
private String docBase = null;
private Context rootContext = null;
public Tomcat8Launcher(){
init();
}
public Tomcat8Launcher(int port, String contextPath, String docBase){
this.port = port;
this.contextPath = contextPath;
this.docBase = docBase;
init();
}
private void init(){
tomcat = new Tomcat();
tomcat.setPort(port);
tomcat.enableNaming();
if(contextPath == null){
contextPath = "";
}
if(docBase == null){
File base = new File(System.getProperty("java.io.tmpdir"));
docBase = base.getAbsolutePath();
}
rootContext = tomcat.addContext(contextPath, docBase);
}
public void addServlet(String servletName, String uri, HttpServlet servlet){
Tomcat.addServlet(this.rootContext, servletName, servlet);
rootContext.addServletMapping(uri, servletName);
}
public void addListenerServlet(ServletContextListener listener){
rootContext.addApplicationListener(listener.getClass().getName());
}
public void startServer() throws LifecycleException {
tomcat.start();
tomcat.getServer().await();
}
public void stopServer() throws LifecycleException {
tomcat.stop();
}
public static void main(String[] args) throws Exception {
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
System.setProperty(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "org.apache.naming.java.javaURLContextFactory");
System.setProperty(javax.naming.Context.URL_PKG_PREFIXES, "org.apache.naming");
Tomcat8Launcher tomcatServer = new Tomcat8Launcher();
tomcatServer.addListenerServlet(new ConfigInitBaseServlet());
tomcatServer.addServlet("restServlet", "/rest/*", new RestServlet());
tomcatServer.addServlet("jsonServlet", "/json/*", new JsonServlet());
tomcatServer.startServer();
}
Spock API Test example
class apiTest extends Specification {
//static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
static final String testURL = "http://localhost:8080/api/"
@Shared
def restClient
def setupSpec() {
// tomcat.main()
restClient = new RESTClient(testURL)
}
def 'findAll user'() {
when:
def response = restClient.get([path: 'user/all'])
then:
with(response){
status == 200
contentType == "application/json"
}
}
}
The test will not work if the comment markers are removed from the lines below.
// static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
This line is at the top of the API test.
// tomcat.main()
This line is in the API test's setupSpec() method.
I don't know why, but with these lines enabled, only logs are recorded after Tomcat starts and the test methods are not executed.
Is there a way to fix this?
I would suggest creating a Spock extension to encapsulate everything you need. See Writing Custom Extensions in the Spock docs, as well as the built-in extensions, for inspiration.
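A minimal sketch of a global extension (Spock 1.1+ IGlobalExtension API, written in Java to match the launcher; register it via a META-INF/services/org.spockframework.runtime.extension.IGlobalExtension file; the daemon thread is my assumption, needed because startServer() blocks in tomcat.getServer().await()):
import org.apache.catalina.LifecycleException;
import org.spockframework.runtime.extension.IGlobalExtension;
import org.spockframework.runtime.model.SpecInfo;

public class TomcatGlobalExtension implements IGlobalExtension {

    private Tomcat8Launcher tomcat;

    @Override
    public void start() {
        tomcat = new Tomcat8Launcher();
        // startServer() blocks in tomcat.getServer().await(), so run it on a
        // daemon thread; otherwise no spec ever executes.
        Thread server = new Thread(() -> {
            try {
                tomcat.startServer();
            } catch (LifecycleException e) {
                throw new IllegalStateException("Tomcat failed to start", e);
            }
        });
        server.setDaemon(true);
        server.start();
        // a production version would also wait until the port accepts connections
    }

    @Override
    public void visitSpec(SpecInfo spec) {
        // no per-spec customization needed here
    }

    @Override
    public void stop() {
        try {
            tomcat.stopServer();
        } catch (LifecycleException ignored) {
            // best-effort shutdown
        }
    }
}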

upwork-api returns 503 IOException

I created an app for getting info from upwork.com. I use the Java lib and Upwork OAuth 1.0. The problem is that local requests to the API work fine, but when I deploy to Google Cloud, my code does not work; I get {"error":{"code":"503","message":"Exception: IOException"}}.
I create an UpworkAuthClient to return the OAuthClient, which is then used for requests in JobClient.
public void run() {
UpworkAuthClient upworkClient = new UpworkAuthClient();
upworkClient.setTokenWithSecret("USER TOKEN", "USER SECRET");
OAuthClient client = upworkClient.getOAuthClient();
//set query
JobQuery jobQuery = new JobQuery();
jobQuery.setQuery("query");
List<JobQuery> jobQueries = new ArrayList<>();
jobQueries.add(jobQuery);
// Get request of job
JobClient jobClient = new JobClient(client, jobQuery);
List<Job> result = jobClient.getJob();
}
public class UpworkAuthClient {
public static final String CONSUMERKEY = "UPWORK KEY";
public static final String CONSUMERSECRET = "UPWORK SECRET";
public static final String OAUTH_CALLBACK = "https://my-app.com/main";
OAuthClient client ;
public UpworkAuthClient() {
Properties keys = new Properties();
keys.setProperty("consumerKey", CONSUMERKEY);
keys.setProperty("consumerSecret", CONSUMERSECRET);
Config config = new Config(keys);
client = new OAuthClient(config);
}
public void setTokenWithSecret (String token, String secret){
client.setTokenWithSecret(token, secret);
}
public OAuthClient getOAuthClient() {
return client;
}
public String getAuthorizationUrl() {
return this.client.getAuthorizationUrl(OAUTH_CALLBACK);
}
}
public class JobClient {
private JobQuery jobQuery;
private Search jobs;
public JobClient(OAuthClient oAuthClient, JobQuery jobQuery) {
jobs = new Search(oAuthClient);
this.jobQuery = jobQuery;
}
public List<Job> getJob() throws JSONException {
JSONObject job = jobs.find(jobQuery.getQueryParam());
List<Job> jobList = parseResponse(job);
return jobList;
}
}
The local dev server works fine; I get results on my local machine, but not in the Cloud.
I will be glad of any ideas, thanks!
{"error":{"code":"503","message":"Exception: IOException"}}
This doesn't seem like a response returned by the Upwork API. Could you please provide the full response, including the returned headers? Then we can take a more precise look into it.

ElasticSearch in-memory for testing

I would like to write some integration tests with ElasticSearch. For testing I would like to run an in-memory ES.
I found some information in the documentation, but no example of how to write that kind of test: Elasticsearch Reference [1.6] » Testing » Java Testing Framework » Integration tests
I also found the following article, but it's out of date: Easy JUnit testing with Elastic Search
I'm looking for an example of how to start and run ES in-memory and access it over the REST API.
Based on the second link you provided, I created this abstract test class:
@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractElasticsearchTest {
private static final String HTTP_PORT = "9205";
private static final String HTTP_TRANSPORT_PORT = "9305";
private static final String ES_WORKING_DIR = "target/es";
private static Node node;
@BeforeClass
public static void startElasticsearch() throws Exception {
removeOldDataDir(ES_WORKING_DIR + "/monkeys.elasticsearch");
Settings settings = Settings.builder()
.put("path.home", ES_WORKING_DIR)
.put("path.conf", ES_WORKING_DIR)
.put("path.data", ES_WORKING_DIR)
.put("path.work", ES_WORKING_DIR)
.put("path.logs", ES_WORKING_DIR)
.put("http.port", HTTP_PORT)
.put("transport.tcp.port", HTTP_TRANSPORT_PORT)
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("discovery.zen.ping.multicast.enabled", "false")
.build();
node = nodeBuilder().settings(settings).clusterName("monkeys.elasticsearch").client(false).node();
node.start();
}
@AfterClass
public static void stopElasticsearch() {
node.close();
}
private static void removeOldDataDir(String datadir) throws Exception {
File dataDir = new File(datadir);
if (dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir);
}
}
}
In the production code, I configured an Elasticsearch client as follows. The integration test extends the above defined abstract class and configures property elasticsearch.port as 9305 and elasticsearch.host as localhost.
@Configuration
public class ElasticsearchConfiguration {
@Bean(destroyMethod = "close")
public Client elasticsearchClient(@Value("${elasticsearch.clusterName}") String clusterName,
@Value("${elasticsearch.host}") String elasticsearchClusterHost,
@Value("${elasticsearch.port}") Integer elasticsearchClusterPort) throws UnknownHostException {
Settings settings = Settings.settingsBuilder().put("cluster.name", clusterName).build();
InetSocketTransportAddress transportAddress = new InetSocketTransportAddress(InetAddress.getByName(elasticsearchClusterHost), elasticsearchClusterPort);
return TransportClient.builder().settings(settings).build().addTransportAddress(transportAddress);
}
}
That's it. The integration test will run the production code which is configured to connect to the node started in the AbstractElasticsearchTest.startElasticsearch().
In case you want to use the elasticsearch REST api, use port 9205. E.g. with Apache HttpComponents:
HttpClient httpClient = HttpClients.createDefault();
HttpPut httpPut = new HttpPut("http://localhost:9205/_template/" + templateName);
httpPut.setEntity(new FileEntity(new File("template.json")));
httpClient.execute(httpPut);
Here is my implementation
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
/**
*
* @author Raghu Nair
*/
public final class ElasticSearchInMemory {
private static Client client = null;
private static File tempDir = null;
private static Node elasticSearchNode = null;
public static Client getClient() {
return client;
}
public static void setUp() throws Exception {
tempDir = File.createTempFile("elasticsearch-temp", Long.toString(System.nanoTime()));
tempDir.delete();
tempDir.mkdir();
System.out.println("writing to: " + tempDir);
String clusterName = UUID.randomUUID().toString();
elasticSearchNode = NodeBuilder
.nodeBuilder()
.local(false)
.clusterName(clusterName)
.settings(
ImmutableSettings.settingsBuilder()
.put("script.disable_dynamic", "false")
.put("gateway.type", "local")
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("path.data", new File(tempDir, "data").getAbsolutePath())
.put("path.logs", new File(tempDir, "logs").getAbsolutePath())
.put("path.work", new File(tempDir, "work").getAbsolutePath())
).node();
elasticSearchNode.start();
client = elasticSearchNode.client();
}
public static void tearDown() throws Exception {
if (client != null) {
client.close();
}
if (elasticSearchNode != null) {
elasticSearchNode.stop();
elasticSearchNode.close();
}
if (tempDir != null) {
removeDirectory(tempDir);
}
}
public static void removeDirectory(File dir) throws IOException {
if (dir.isDirectory()) {
File[] files = dir.listFiles();
if (files != null && files.length > 0) {
for (File aFile : files) {
removeDirectory(aFile);
}
}
}
Files.delete(dir.toPath());
}
}
You can start ES on your local machine with:
Settings settings = Settings.settingsBuilder()
.put("path.home", ".")
.build();
NodeBuilder.nodeBuilder().settings(settings).node();
When ES started, access it over REST like:
http://localhost:9200/_cat/health?v
As of 2016, embedded Elasticsearch is no longer supported.
As per a response from one of the developers in 2017, you can use the following approaches:
Use the Gradle tools elasticsearch already has. You can read some information about this here: https://github.com/elastic/elasticsearch/issues/21119
Use the Maven plugin: https://github.com/alexcojocaru/elasticsearch-maven-plugin
Use Ant scripts like http://david.pilato.fr/blog/2016/10/18/elasticsearch-real-integration-tests-updated-for-ga
Using Docker: https://www.testcontainers.org/modules/elasticsearch (see the sketch after this list)
Using Docker from maven: https://github.com/dadoonet/fscrawler/blob/e15dddf72b1ed094dad279d492e4e0314f73683f/pom.xml#L241-L289
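For the Testcontainers route, a minimal sketch (assumes the org.testcontainers:elasticsearch and Elasticsearch low-level REST client dependencies; the image tag is illustrative):
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;
import org.testcontainers.elasticsearch.ElasticsearchContainer;

public class EsContainerSmokeTest {
    public static void main(String[] args) throws Exception {
        // spins up a disposable ES node in Docker; stopped automatically on close
        try (ElasticsearchContainer es =
                new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.17.9")) {
            es.start();
            try (RestClient client = RestClient.builder(
                    HttpHost.create(es.getHttpHostAddress())).build()) {
                Response response = client.performRequest(new Request("GET", "/_cat/health?v"));
                System.out.println(EntityUtils.toString(response.getEntity()));
            }
        }
    }
}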
