ElasticSearch in-memory for testing - java

I would like to write some integration tests with Elasticsearch. For testing I would like to run an in-memory ES instance.
I found some information in the documentation, but no example of how to write this kind of test: Elasticsearch Reference [1.6] » Testing » Java Testing Framework » integration tests
I also found the following article, but it's out of date: Easy JUnit testing with Elastic Search
I'm looking for an example of how to start and run ES in-memory and access it over the REST API.

Based on the second link you provided, I created this abstract test class:
@RunWith(SpringJUnit4ClassRunner.class)
public abstract class AbstractElasticsearchTest {
private static final String HTTP_PORT = "9205";
private static final String HTTP_TRANSPORT_PORT = "9305";
private static final String ES_WORKING_DIR = "target/es";
private static final String CLUSTER_NAME = "monkeys.elasticsearch";
private static Node node;
@BeforeClass
public static void startElasticsearch() throws Exception {
removeOldDataDir(ES_WORKING_DIR + "/" + CLUSTER_NAME);
Settings settings = Settings.settingsBuilder()
.put("path.home", ES_WORKING_DIR)
.put("path.conf", ES_WORKING_DIR)
.put("path.data", ES_WORKING_DIR)
.put("path.work", ES_WORKING_DIR)
.put("path.logs", ES_WORKING_DIR)
.put("http.port", HTTP_PORT)
.put("transport.tcp.port", HTTP_TRANSPORT_PORT)
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("discovery.zen.ping.multicast.enabled", "false")
.build();
node = nodeBuilder().settings(settings).clusterName(CLUSTER_NAME).client(false).node();
node.start();
}
@AfterClass
public static void stopElasticsearch() {
node.close();
}
private static void removeOldDataDir(String datadir) throws Exception {
File dataDir = new File(datadir);
if (dataDir.exists()) {
FileSystemUtils.deleteRecursively(dataDir);
}
}
}
In the production code, I configured an Elasticsearch client as follows. The integration test extends the abstract class defined above and sets the property elasticsearch.port to 9305 and elasticsearch.host to localhost.
@Configuration
public class ElasticsearchConfiguration {
@Bean(destroyMethod = "close")
public Client elasticsearchClient(@Value("${elasticsearch.clusterName}") String clusterName,
@Value("${elasticsearch.host}") String elasticsearchClusterHost,
@Value("${elasticsearch.port}") Integer elasticsearchClusterPort) throws UnknownHostException {
Settings settings = Settings.settingsBuilder().put("cluster.name", clusterName).build();
InetSocketTransportAddress transportAddress = new InetSocketTransportAddress(InetAddress.getByName(elasticsearchClusterHost), elasticsearchClusterPort);
return TransportClient.builder().settings(settings).build().addTransportAddress(transportAddress);
}
}
That's it. The integration test will run the production code which is configured to connect to the node started in the AbstractElasticsearchTest.startElasticsearch().
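For completeness, here is a minimal sketch of what such an integration test could look like. The class and property names are hypothetical, and depending on your setup you may also need a PropertySourcesPlaceholderConfigurer bean for the @Value placeholders to resolve:
import static org.junit.Assert.assertNotNull;
import org.elasticsearch.client.Client;
import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestPropertySource;

@ContextConfiguration(classes = ElasticsearchConfiguration.class)
@TestPropertySource(properties = {
        "elasticsearch.clusterName=monkeys.elasticsearch",
        "elasticsearch.host=localhost",
        "elasticsearch.port=9305"
})
public class ProductIndexerIT extends AbstractElasticsearchTest {

    @Autowired
    private Client client; // the TransportClient bean from ElasticsearchConfiguration

    @Test
    public void clusterIsReachable() {
        // Smoke test: the client can talk to the embedded node started in @BeforeClass
        assertNotNull(client.admin().cluster().prepareHealth().get().getClusterName());
    }
}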
In case you want to use the elasticsearch REST api, use port 9205. E.g. with Apache HttpComponents:
HttpClient httpClient = HttpClients.createDefault();
HttpPut httpPut = new HttpPut("http://localhost:9205/_template/" + templateName);
httpPut.setEntity(new FileEntity(new File("template.json")));
httpClient.execute(httpPut);

Here is my implementation:
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.UUID;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;
/**
*
* @author Raghu Nair
*/
public final class ElasticSearchInMemory {
private static Client client = null;
private static File tempDir = null;
private static Node elasticSearchNode = null;
public static Client getClient() {
return client;
}
public static void setUp() throws Exception {
tempDir = File.createTempFile("elasticsearch-temp", Long.toString(System.nanoTime()));
tempDir.delete();
tempDir.mkdir();
System.out.println("writing to: " + tempDir);
String clusterName = UUID.randomUUID().toString();
elasticSearchNode = NodeBuilder
.nodeBuilder()
.local(false)
.clusterName(clusterName)
.settings(
ImmutableSettings.settingsBuilder()
.put("script.disable_dynamic", "false")
.put("gateway.type", "local")
.put("index.number_of_shards", "1")
.put("index.number_of_replicas", "0")
.put("path.data", new File(tempDir, "data").getAbsolutePath())
.put("path.logs", new File(tempDir, "logs").getAbsolutePath())
.put("path.work", new File(tempDir, "work").getAbsolutePath())
).node();
elasticSearchNode.start();
client = elasticSearchNode.client();
}
public static void tearDown() throws Exception {
if (client != null) {
client.close();
}
if (elasticSearchNode != null) {
elasticSearchNode.stop();
elasticSearchNode.close();
}
if (tempDir != null) {
removeDirectory(tempDir);
}
}
public static void removeDirectory(File dir) throws IOException {
if (dir.isDirectory()) {
File[] files = dir.listFiles();
if (files != null && files.length > 0) {
for (File aFile : files) {
removeDirectory(aFile);
}
}
}
Files.delete(dir.toPath());
}
}

You can start ES locally with:
Settings settings = Settings.settingsBuilder()
.put("path.home", ".")
.build();
NodeBuilder.nodeBuilder().settings(settings).node();
Once ES has started, access it over REST, e.g.:
http://localhost:9200/_cat/health?v
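For example, a quick health check from Java (a minimal sketch using only the JDK's HttpURLConnection; adjust the port if you changed http.port):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsHealthCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/_cat/health?v");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints the cluster health table
            }
        }
        conn.disconnect();
    }
}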

As of 2016, embedded Elasticsearch is no longer supported.
As per a response from one of the developers in 2017, you can use the following approaches:
Use the Gradle tools Elasticsearch already has. You can read some information about this here: https://github.com/elastic/elasticsearch/issues/21119
Use the Maven plugin: https://github.com/alexcojocaru/elasticsearch-maven-plugin
Use Ant scripts like http://david.pilato.fr/blog/2016/10/18/elasticsearch-real-integration-tests-updated-for-ga
Use Docker via Testcontainers (see the sketch after this list): https://www.testcontainers.org/modules/elasticsearch
Use Docker from Maven: https://github.com/dadoonet/fscrawler/blob/e15dddf72b1ed094dad279d492e4e0314f73683f/pom.xml#L241-L289
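A minimal sketch of the Testcontainers approach (JUnit 4 style; the image tag is just an example, pick a version that matches your client):
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.elasticsearch.ElasticsearchContainer;

public class ElasticsearchContainerIT {

    // Starts a throwaway Elasticsearch in Docker for the duration of the test class
    @ClassRule
    public static final ElasticsearchContainer ES =
            new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.9.3");

    @Test
    public void clusterIsUp() throws Exception {
        // The container maps port 9200 to a random host port; build the REST URL from it
        String healthUrl = "http://" + ES.getHttpHostAddress() + "/_cat/health";
        java.net.HttpURLConnection conn =
                (java.net.HttpURLConnection) new java.net.URL(healthUrl).openConnection();
        if (conn.getResponseCode() != 200) {
            throw new AssertionError("Cluster not healthy, HTTP " + conn.getResponseCode());
        }
    }
}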


Mockserver fails to read request: unknown message format

I need to test a REST client. For that purpose I'm using org.mockserver.integration.ClientAndServer.
I start my server, create an expectation, then mock my client and run it. But when the server receives the request, I see this in the logs:
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - received binary request:
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
14:00:13.511 [MockServer-EventLog0] INFO org.mockserver.log.MockServerEventLog - unknown message format
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00000604000000000000040100000000000408000000000000ff0001
This is my test:
@RunWith(PowerMockRunner.class)
@PrepareForTest({NrfClient.class, SnefProperties.class})
@PowerMockIgnore({"javax.net.ssl.*"})
@TestPropertySource(locations = "classpath:test.properties")
public class NrfConnectionTest {
private String finalPath;
private UUID uuid = UUID.fromString("8d92d4ac-be0e-4016-8b2c-eff2607798e4");
private ClientAndServer mockServer;
@Before
public void startMockServer() {
mockServer = startClientAndServer(8888);
}
@After
public void stopServer() {
mockServer.stop();
}
@Test
public void NrfRegisterTest() throws Exception {
//create some expectation
new MockServerClient("127.0.0.1", 8888)
.when(HttpRequest.request()
.withMethod("PUT")
.withPath("/nnrf-nfm/v1/nf-instances/8d92d4ac-be0e-4016-8b2c-eff2607798e4"))
.respond(HttpResponse.response().withStatusCode(201));
//long preparations and mocking the NrfClient (client that actually make request)
//NrfClient is singleton, so had to mock a lot of methods.
PropertiesConfiguration config = new PropertiesConfiguration();
config.setAutoSave(false);
File file = new File("test.properties");
if (!file.exists()) {
String absolutePath = file.getAbsolutePath();
finalPath = absolutePath.substring(0, absolutePath.length() - "test.properties".length()) + "src\\test\\resources\\test.properties";
file = new File(finalPath);
}
try {
config.load(file);
config.setFile(file);
} catch(ConfigurationException e) {
LogUtils.warn(NrfConnectionTest.class, "Failed to load properties from file " + "classpath:test.properties", e);
}
SnefProperties spyProperties = PowerMockito.spy(SnefProperties.getInstance());
PowerMockito.doReturn(finalPath).when(spyProperties, "getPropertiesFilePath");
PowerMockito.doReturn(config).when(spyProperties, "getProperties");
PowerMockito.doReturn(config).when(spyProperties, "getLastUpdatedProperties");
NrfConfig nrfConfig = getNrfConfig();
NrfClient nrfClient = PowerMockito.spy(NrfClient.getInstance());
SnefAddressInfo snefAddressInfo = new SnefAddressInfo("127.0.0.1", "8080");
PowerMockito.doReturn(nrfConfig).when(nrfClient, "loadConfiguration", snefAddressInfo);
PowerMockito.doReturn(uuid).when(nrfClient, "getUuid");
Whitebox.setInternalState(SnefProperties.class, "instance", spyProperties);
nrfClient.initialize(snefAddressInfo);
//here the client makes request
nrfClient.run();
}
private NrfConfig getNrfConfig() {
NrfConfig nrfConfig = new NrfConfig();
nrfConfig.setNrfDirectConnection(true);
nrfConfig.setNrfAddress("127.0.0.1:8888");
nrfConfig.setSnefNrfService(State.ENABLED);
nrfConfig.setSmpIp("127.0.0.1");
nrfConfig.setSmpPort("8080");
return nrfConfig;
}
}
It looks like I'm missing some server configuration, or using it the wrong way.
Or maybe the reason is PowerMock: could it be that MockServer is incompatible with PowerMock or PowerMockRunner?

Using kinesis async client AWS SDK 2 with Java

I am trying to use the KinesisAsyncClient as described in https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-kinesis-src-main-java-com-example-kinesis-KinesisStreamRxJavaEx.java.html
I am on macOS and I have configured the following dependencies for the async HTTP client:
'software.amazon.awssdk:netty-nio-client:2.16.101'
'software.amazon.awssdk:kinesis:2.16.99'
When I run the example, the async client fails with:
Caused by: java.lang.NoClassDefFoundError: io/netty/internal/tcnative/SSLPrivateKeyMethod
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:119)
at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:49)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.get(SdkChannelPoolMap.java:44)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.createRequestContext(NettyNioAsyncHttpClient.java:140)
at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.execute(NettyNioAsyncHttpClient.java:121)
at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder$NonManagedSdkAsyncHttpClient.execute(SdkDefaultClientBuilder.java:463)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.doExecuteHttpRequest(MakeAsyncHttpRequestStage.java:219)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.executeHttpRequest(MakeAsyncHttpRequestStage.java:191)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$execute$1(MakeAsyncHttpRequestStage.java:100)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:753)
at java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:731)
at java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2108)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:96)
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:61)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:55)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.attemptExecute(AsyncRetryableStage.java:110)
In the same example folder, the sync client works well and connects to Kinesis on AWS. Does anyone know what can be done to fix this?
I just tested this code:
package com.example.kinesis.asny;
import java.util.concurrent.CompletableFuture;
import io.reactivex.Flowable;
import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;
import software.amazon.awssdk.services.kinesis.model.StartingPosition;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardEvent;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardRequest;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardResponseHandler;
import software.amazon.awssdk.utils.AttributeMap;
public class KinesisStreamRxJavaEx {
private static final String CONSUMER_ARN = "arn:aws:kinesis:us-east-1:814548xxxxxx:stream/LamDataStream/consumer/MyConsumer:162645xxxx";
public static void main(String[] args) {
KinesisAsyncClient client = KinesisAsyncClient.create();
SubscribeToShardRequest request = SubscribeToShardRequest.builder()
.consumerARN(CONSUMER_ARN)
.shardId("shardId-000000000000")
.startingPosition(StartingPosition.builder().type(ShardIteratorType.LATEST).build())
.build();
responseHandlerBuilder_RxJava(client, request).join();
System.out.println("Done");
client.close();
}
/**
* Uses RxJava via the onEventStream lifecycle method. This gives you full access to the publisher, which can be used
* to create an Rx Flowable.
*/
private static CompletableFuture<Void> responseHandlerBuilder_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.event_stream]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.onEventStream(p -> Flowable.fromPublisher(p)
.ofType(SubscribeToShardEvent.class)
.flatMapIterable(SubscribeToShardEvent::records)
.limit(1000)
.buffer(25)
.subscribe(e -> System.out.println("Record batch = " + e)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.event_stream]
return client.subscribeToShard(request, responseHandler);
}
/**
* Because a Flowable is also a publisher, the publisherTransformer method integrates nicely with RxJava. Notice that
* you must adapt to an SdkPublisher.
*/
private static CompletableFuture<Void> responseHandlerBuilder_OnEventStream_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
// snippet-start:[kinesis.java2.stream_rx_example.publish_transform]
SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
.builder()
.onError(t -> System.err.println("Error during stream - " + t.getMessage()))
.publisherTransformer(p -> SdkPublisher.adapt(Flowable.fromPublisher(p).limit(100)))
.build();
// snippet-end:[kinesis.java2.stream_rx_example.publish_transform]
return client.subscribeToShard(request, responseHandler);
}
}
It completed successfully.
Make sure that you specify a valid consumer ARN value; otherwise the code does not work.
You can get a valid consumer ARN using this code:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerRequest;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerResponse;
public class RegisterStreamConsumer {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" ListShards <streamName>\n\n" +
"Where:\n" +
" streamName - The Amazon Kinesis data stream (for example, StockTradeStream)\n\n" +
"Example:\n" +
" ListShards StockTradeStream\n";
// if (args.length != 1) {
// System.out.println(USAGE);
// System.exit(1);
// }
String streamARN = "arn:aws:kinesis:us-east-1:8145480xxxxx:stream/LamDataStream" ; //args[0];
Region region = Region.US_EAST_1;
KinesisClient kinesisClient = KinesisClient.builder()
.region(region)
.build();
String arnValue = regConsumer(kinesisClient, streamARN);
System.out.println(arnValue);
kinesisClient.close();
}
public static String regConsumer(KinesisClient kinesisClient, String streamARN) {
try {
RegisterStreamConsumerRequest regCon = RegisterStreamConsumerRequest.builder()
.consumerName("MyConsumer")
.streamARN(streamARN)
.build();
RegisterStreamConsumerResponse resp = kinesisClient.registerStreamConsumer(regCon);
return resp.consumer().consumerARN();
} catch (KinesisException e) {
System.err.println(e.getMessage());
System.exit(1);
}
return "";
}
}
This worked for me after adding a couple of dependencies for netty:
implementation 'io.netty:netty-tcnative:2.0.40.Final'
implementation 'io.netty:netty-tcnative-boringssl-static:2.0.40.Final'
I followed https://docs.hazelcast.com/imdg/4.2/security/integrating-openssl.html to understand what was going on and why netty-nio was having issues.

How to run Spock integration tests with a standalone Tomcat runner?

Our project is not currently using the Spring framework.
Therefore, it is tested against a standalone Tomcat runner.
However, since integration-style tests such as @SpringBootTest are not possible, Tomcat is started in advance and the HTTP API is tested using Spock.
Is there a way to run this like @SpringBootTest?
TomcatRunner
public class Tomcat8Launcher {
private Tomcat tomcat = null;
private int port = 8080;
private String contextPath = null;
private String docBase = null;
private Context rootContext = null;
public Tomcat8Launcher(){
init();
}
public Tomcat8Launcher(int port, String contextPath, String docBase){
this.port = port;
this.contextPath = contextPath;
this.docBase = docBase;
init();
}
private void init(){
tomcat = new Tomcat();
tomcat.setPort(port);
tomcat.enableNaming();
if(contextPath == null){
contextPath = "";
}
if(docBase == null){
File base = new File(System.getProperty("java.io.tmpdir"));
docBase = base.getAbsolutePath();
}
rootContext = tomcat.addContext(contextPath, docBase);
}
public void addServlet(String servletName, String uri, HttpServlet servlet){
Tomcat.addServlet(this.rootContext, servletName, servlet);
rootContext.addServletMapping(uri, servletName);
}
public void addListenerServlet(ServletContextListener listener){
rootContext.addApplicationListener(listener.getClass().getName());
}
public void startServer() throws LifecycleException {
tomcat.start();
// Note: getServer().await() blocks the calling thread until the server is shut down
tomcat.getServer().await();
}
public void stopServer() throws LifecycleException {
tomcat.stop();
}
public static void main(String[] args) throws Exception {
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
System.setProperty(javax.naming.Context.INITIAL_CONTEXT_FACTORY, "org.apache.naming.java.javaURLContextFactory");
System.setProperty(javax.naming.Context.URL_PKG_PREFIXES, "org.apache.naming");
Tomcat8Launcher tomcatServer = new Tomcat8Launcher();
tomcatServer.addListenerServlet(new ConfigInitBaseServlet());
tomcatServer.addServlet("restServlet", "/rest/*", new RestServlet());
tomcatServer.addServlet("jsonServlet", "/json/*", new JsonServlet());
tomcatServer.startServer();
}
}
Spock API Test example
class apiTest extends Specification {
//static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
static final String testURL = "http://localhost:8080/api/"
@Shared
def restClient
def setupSpec() {
// tomcat.main()
restClient = new RESTClient(testURL)
}
def 'findAll user'() {
when:
def response = restClient.get([path: 'user/all'])
then:
with(response){
status == 200
contentType == "application/json"
}
}
}
The test does not work unless the commented-out lines below stay commented out.
// static final Tomcat8Launcher tomcat = new Tomcat8Launcher()
This line is at the top of the API test class.
// tomcat.main()
This line is in the API test's setupSpec() method.
I don't know why, but once Tomcat has started, only its logs are recorded and the test method is never executed.
Is there a way to fix this?
I would suggest creating a Spock extension to encapsulate everything you need. See the chapter on writing custom extensions in the Spock docs, as well as the built-in extensions, for inspiration.
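However you wire it in (a custom extension, or simply setupSpec()/cleanupSpec() in a shared base Specification), the launcher must not block the test thread; startServer() in the question blocks inside await(). A minimal Java sketch of that idea, reusing the Tomcat8Launcher from the question (the helper class name is made up):
import org.apache.catalina.LifecycleException;

// Hypothetical helper: starts the embedded Tomcat on a background thread so the
// test thread stays free to run the feature methods, and stops it afterwards.
public class EmbeddedTomcatSupport {

    private final Tomcat8Launcher launcher = new Tomcat8Launcher();
    private Thread serverThread;

    public void before() {
        // Register servlets/listeners on the launcher here before starting it
        serverThread = new Thread(() -> {
            try {
                launcher.startServer(); // blocks inside await(), which is fine on this thread
            } catch (LifecycleException e) {
                throw new IllegalStateException("Embedded Tomcat failed to start", e);
            }
        });
        serverThread.setDaemon(true);
        serverThread.start();
        // In real code, poll a known URL here until the server responds before returning
    }

    public void after() throws LifecycleException {
        launcher.stopServer();
    }
}
A Spock extension (or setupSpec()/cleanupSpec()) can then call before() and after() around the specification.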

How to use network proxy when connecting to Microsoft Azure Media Services

When I run Microsoft Azure Media Services code written in Java locally, it works; but when I deploy the same code to the dev environment, I am unable to access Azure and it throws java.net.HostNotFoundException.
What is the best approach to use a network proxy to connect to Azure?
Below is the code I am using, with Java and the azure-java-sdk:
import java.io.*;
import java.security.NoSuchAlgorithmException;
import java.util.EnumSet;
import com.microsoft.windowsazure.Configuration;
import com.microsoft.windowsazure.exception.ServiceException;
import com.microsoft.windowsazure.services.media.MediaConfiguration;
import com.microsoft.windowsazure.services.media.MediaContract;
import com.microsoft.windowsazure.services.media.MediaService;
import com.microsoft.windowsazure.services.media.WritableBlobContainerContract;
import com.microsoft.windowsazure.services.media.models.AccessPolicy;
import com.microsoft.windowsazure.services.media.models.AccessPolicyInfo;
import com.microsoft.windowsazure.services.media.models.AccessPolicyPermission;
import com.microsoft.windowsazure.services.media.models.Asset;
import com.microsoft.windowsazure.services.media.models.AssetFile;
import com.microsoft.windowsazure.services.media.models.AssetFileInfo;
import com.microsoft.windowsazure.services.media.models.AssetInfo;
import com.microsoft.windowsazure.services.media.models.Job;
import com.microsoft.windowsazure.services.media.models.JobInfo;
import com.microsoft.windowsazure.services.media.models.JobState;
import com.microsoft.windowsazure.services.media.models.ListResult;
import com.microsoft.windowsazure.services.media.models.Locator;
import com.microsoft.windowsazure.services.media.models.LocatorInfo;
import com.microsoft.windowsazure.services.media.models.LocatorType;
import com.microsoft.windowsazure.services.media.models.MediaProcessor;
import com.microsoft.windowsazure.services.media.models.MediaProcessorInfo;
import com.microsoft.windowsazure.services.media.models.Task;
public class HelloMediaServices
{
// Media Services account credentials configuration
private static String mediaServiceUri = "https://media.windows.net/API/";
private static String oAuthUri = "https://wamsprodglobal001acs.accesscontrol.windows.net/v2/OAuth2-13";
private static String clientId = "account name";
private static String clientSecret = "account key";
private static String scope = "urn:WindowsAzureMediaServices";
private static MediaContract mediaService;
// Encoder configuration
private static String preferedEncoder = "Media Encoder Standard";
private static String encodingPreset = "H264 Multiple Bitrate 720p";
public static void main(String[] args)
{
try {
// Set up the MediaContract object to call into the Media Services account
Configuration configuration = MediaConfiguration.configureWithOAuthAuthentication(
mediaServiceUri, oAuthUri, clientId, clientSecret, scope);
mediaService = MediaService.create(configuration);
// Upload a local file to an Asset
AssetInfo uploadAsset = uploadFileAndCreateAsset("BigBuckBunny.mp4");
System.out.println("Uploaded Asset Id: " + uploadAsset.getId());
// Transform the Asset
AssetInfo encodedAsset = encode(uploadAsset);
System.out.println("Encoded Asset Id: " + encodedAsset.getId());
// Create the Streaming Origin Locator
String url = getStreamingOriginLocator(encodedAsset);
System.out.println("Origin Locator URL: " + url);
System.out.println("Sample completed!");
} catch (ServiceException se) {
System.out.println("ServiceException encountered.");
System.out.println(se.toString());
} catch (Exception e) {
System.out.println("Exception encountered.");
System.out.println(e.toString());
}
}
private static AssetInfo uploadFileAndCreateAsset(String fileName)
throws ServiceException, FileNotFoundException, NoSuchAlgorithmException {
WritableBlobContainerContract uploader;
AssetInfo resultAsset;
AccessPolicyInfo uploadAccessPolicy;
LocatorInfo uploadLocator = null;
// Create an Asset
resultAsset = mediaService.create(Asset.create().setName(fileName).setAlternateId("altId"));
System.out.println("Created Asset " + fileName);
// Create an AccessPolicy that provides Write access for 15 minutes
uploadAccessPolicy = mediaService
.create(AccessPolicy.create("uploadAccessPolicy", 15.0, EnumSet.of(AccessPolicyPermission.WRITE)));
// Create a Locator using the AccessPolicy and Asset
uploadLocator = mediaService
.create(Locator.create(uploadAccessPolicy.getId(), resultAsset.getId(), LocatorType.SAS));
// Create the Blob Writer using the Locator
uploader = mediaService.createBlobWriter(uploadLocator);
File file = new File("BigBuckBunny.mp4");
// The local file that will be uploaded to your Media Services account
InputStream input = new FileInputStream(file);
System.out.println("Uploading " + fileName);
// Upload the local file to the asset
uploader.createBlockBlob(fileName, input);
// Inform Media Services about the uploaded files
mediaService.action(AssetFile.createFileInfos(resultAsset.getId()));
System.out.println("Uploaded Asset File " + fileName);
mediaService.delete(Locator.delete(uploadLocator.getId()));
mediaService.delete(AccessPolicy.delete(uploadAccessPolicy.getId()));
return resultAsset;
}
// Create a Job that contains a Task to transform the Asset
private static AssetInfo encode(AssetInfo assetToEncode)
throws ServiceException, InterruptedException {
// Retrieve the list of Media Processors that match the name
ListResult<MediaProcessorInfo> mediaProcessors = mediaService
.list(MediaProcessor.list().set("$filter", String.format("Name eq '%s'", preferedEncoder)));
// Use the latest version of the Media Processor
MediaProcessorInfo mediaProcessor = null;
for (MediaProcessorInfo info : mediaProcessors) {
if (null == mediaProcessor || info.getVersion().compareTo(mediaProcessor.getVersion()) > 0) {
mediaProcessor = info;
}
}
System.out.println("Using Media Processor: " + mediaProcessor.getName() + " " + mediaProcessor.getVersion());
// Create a task with the specified Media Processor
String outputAssetName = String.format("%s as %s", assetToEncode.getName(), encodingPreset);
String taskXml = "<taskBody><inputAsset>JobInputAsset(0)</inputAsset>"
+ "<outputAsset assetCreationOptions=\"0\"" // AssetCreationOptions.None
+ " assetName=\"" + outputAssetName + "\">JobOutputAsset(0)</outputAsset></taskBody>";
Task.CreateBatchOperation task = Task.create(mediaProcessor.getId(), taskXml)
.setConfiguration(encodingPreset).setName("Encoding");
// Create the Job; this automatically schedules and runs it.
Job.Creator jobCreator = Job.create()
.setName(String.format("Encoding %s to %s", assetToEncode.getName(), encodingPreset))
.addInputMediaAsset(assetToEncode.getId()).setPriority(2).addTaskCreator(task);
JobInfo job = mediaService.create(jobCreator);
String jobId = job.getId();
System.out.println("Created Job with Id: " + jobId);
// Check to see if the Job has completed
checkJobStatus(jobId);
// Done with the Job
// Retrieve the output Asset
ListResult<AssetInfo> outputAssets = mediaService.list(Asset.list(job.getOutputAssetsLink()));
return outputAssets.get(0);
}
public static String getStreamingOriginLocator(AssetInfo asset) throws ServiceException {
// Get the .ISM AssetFile
ListResult<AssetFileInfo> assetFiles = mediaService.list(AssetFile.list(asset.getAssetFilesLink()));
AssetFileInfo streamingAssetFile = null;
for (AssetFileInfo file : assetFiles) {
if (file.getName().toLowerCase().endsWith(".ism")) {
streamingAssetFile = file;
break;
}
}
AccessPolicyInfo originAccessPolicy;
LocatorInfo originLocator = null;
// Create a 30-day readonly AccessPolicy
double durationInMinutes = 60 * 24 * 30;
originAccessPolicy = mediaService.create(
AccessPolicy.create("Streaming policy", durationInMinutes, EnumSet.of(AccessPolicyPermission.READ)));
// Create a Locator using the AccessPolicy and Asset
originLocator = mediaService
.create(Locator.create(originAccessPolicy.getId(), asset.getId(), LocatorType.OnDemandOrigin));
// Create a Smooth Streaming base URL
return originLocator.getPath() + streamingAssetFile.getName() + "/manifest";
}
private static void checkJobStatus(String jobId) throws InterruptedException, ServiceException {
boolean done = false;
JobState jobState = null;
while (!done) {
// Sleep for 5 seconds
Thread.sleep(5000);
// Query the updated Job state
jobState = mediaService.get(Job.get(jobId)).getState();
System.out.println("Job state: " + jobState);
if (jobState == JobState.Finished || jobState == JobState.Canceled || jobState == JobState.Error) {
done = true;
}
}
}
}
I verified the following code, which works through the Fiddler proxy. Thanks to the "how to Capture https with fiddler, in java" post, which gave me hints:
System.setProperty("http.proxyHost", "127.0.0.1");
System.setProperty("https.proxyHost", "127.0.0.1");
System.setProperty("http.proxyPort", "8888");
System.setProperty("https.proxyPort", "8888");
System.setProperty("javax.net.ssl.trustStore", "C:\\Program Files\\Java\\jdk1.8.0_102\\bin\\FiddlerKeyStore");
System.setProperty("javax.net.ssl.trustStorePassword", "mypassword");
For others who face the same issue: you can connect to Azure Media Services through a network proxy by using the code below.
// Set up the MediaContract object to call into the Media Services account
Configuration configuration = MediaConfiguration.configureWithOAuthAuthentication(
mediaServiceUri, oAuthUri, clientId, clientSecret, scope);
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_HOST, "Hostvalue");
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_PORT, "Portvalue");
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_SCHEME, "http");
MediaContract mediaService = MediaService.create(configuration);
Now use the mediaService to perform other operations.
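For example, a minimal sketch of listing the assets in the account through the proxied client (assuming the no-argument Asset.list() overload of this older SDK):
// Sketch: list the assets in the Media Services account via the proxy-configured client
ListResult<AssetInfo> assets = mediaService.list(Asset.list());
for (AssetInfo asset : assets) {
    System.out.println(asset.getName() + " -> " + asset.getId());
}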

Java file encoding magic

Strange things are happening in the Java kingdom...
Long story short: I use the Java API v3 to connect to QuickBooks and fetch data from there (services, for example).
Everything goes fine except when a service contains Russian symbols (or probably any non-Latin symbols).
Here is the Java code that does it (I know it's far from perfect):
package com.mde.test;
import static com.intuit.ipp.query.GenerateQuery.$;
import static com.intuit.ipp.query.GenerateQuery.select;
import java.util.LinkedList;
import java.util.List;
import com.intuit.ipp.core.Context;
import com.intuit.ipp.core.ServiceType;
import com.intuit.ipp.data.Item;
import com.intuit.ipp.exception.FMSException;
import com.intuit.ipp.query.GenerateQuery;
import com.intuit.ipp.security.OAuthAuthorizer;
import com.intuit.ipp.services.DataService;
import com.intuit.ipp.util.Config;
public class TestEncoding {
public static final String QBO_BASE_URL_SANDBOX = "https://sandbox-quickbooks.api.intuit.com/v3/company";
private static String consumerKey = "consumerkeycode";
private static String consumerSecret = "consumersecretcode";
private static String accessToken = "accesstokencode";
private static String accessTokenSecret = "accesstokensecretcode";
private static String appToken = "apptokencode";
private static String companyId = "companyidcode";
private static OAuthAuthorizer oauth = new OAuthAuthorizer(consumerKey, consumerSecret, accessToken, accessTokenSecret);
private static final int PAGING_STEP = 500;
public static void main(String[] args) throws FMSException {
List<Item> res = findAllServices(getDataService());
System.out.println(res.get(1).getName());
}
public static List<Item> findAllServices(DataService service) throws FMSException {
Item item = GenerateQuery.createQueryEntity(Item.class);
List<Item> res = new LinkedList<>();
for (int skip = 0; ; skip += PAGING_STEP) {
String query = select($(item)).skip(skip).take(PAGING_STEP).generate();
List<Item> items = (List<Item>)service.executeQuery(query).getEntities();
if (items.size() > 0)
res.addAll(items);
else
break;
}
System.out.println("All services fetched");
return res;
}
public static DataService getDataService() throws FMSException {
Context context = getContext();
if (context == null) {
System.out.println("Context is null, something wrong, dataService also will null.");
return null;
}
return getDataService(context);
}
private static Context getContext() {
try {
return new Context(oauth, appToken, ServiceType.QBO, companyId);
} catch (FMSException e) {
System.out.println("Context is not loaded");
return null;
}
}
protected static DataService getDataService(Context context) throws FMSException {
DataService service = new DataService(context);
Config.setProperty(Config.BASE_URL_QBO, QBO_BASE_URL_SANDBOX);
return new DataService(context);
}
}
This file is saved in UTF-8. And it prints something like
All services fetched
Сэрвыс, отнюдь
But! When I save this file as UTF-8 with BOM... I get the correct data!
All services fetched
Сэрвыс, отнюдь
Can anybody explain what is happening? :)
// I use Eclipse to run the code
You are fetching data from a system that doesn't share the same byte ordering as you, so when you save the file with a BOM, it adds enough information to the file that future programs will read it using the remote system's byte ordering.
When you save it without a BOM, the file is written in the remote system's byte ordering without any indication of the stored byte order, so when you read it back you read it with the local system's (different) byte order. This jumbles up the bytes within the multi-byte characters, making the output appear as nonsense.
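To see the mechanism in isolation, here is a small sketch (assuming the mismatch is between UTF-8 and a legacy single-byte charset such as windows-1251) showing how the same bytes turn into nonsense when decoded with the wrong charset:
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetMismatchDemo {
    public static void main(String[] args) {
        String original = "Сэрвыс, отнюдь";
        // Encode the string as UTF-8 bytes (two bytes per Cyrillic letter)
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        // Decode those bytes with the right and the wrong charset
        String decodedRight = new String(utf8Bytes, StandardCharsets.UTF_8);
        String decodedWrong = new String(utf8Bytes, Charset.forName("windows-1251"));
        System.out.println("Decoded as UTF-8:        " + decodedRight);  // Сэрвыс, отнюдь
        System.out.println("Decoded as windows-1251: " + decodedWrong);  // mojibake
    }
}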
