I used a screenshot plugin for iOS when I developed an iPhone/iPad app. I am now creating an Android version of the app and am trying to implement the Android version of the plugin.
The Java part of my plugin looks like this:
package org.apache.cordova;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.cordova.api.Plugin;
import org.apache.cordova.api.PluginResult;
import org.json.JSONArray;
import android.graphics.Bitmap;
import android.os.Environment;
import android.view.View;
public class Screenshot extends Plugin {
private PluginResult result = null;
@Override
public PluginResult execute(String action, JSONArray args, String callbackId) {
// starting on ICS, some WebView methods
// can only be called on UI threads
super.cordova.getActivity().runOnUiThread(new Runnable() {
public void run() {
View view = webView.getRootView();
view.setDrawingCacheEnabled(true);
Bitmap bitmap = Bitmap.createBitmap(view.getDrawingCache());
view.setDrawingCacheEnabled(false);
try {
File folder = new File(Environment.getExternalStorageDirectory(), "Pictures");
if (!folder.exists()) {
folder.mkdirs();
}
File f = new File(folder, "screenshot_" + System.currentTimeMillis() + ".png");
FileOutputStream fos = new FileOutputStream(f);
bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
fos.close(); // flush and release the file handle
result = new PluginResult(PluginResult.Status.OK);
} catch (IOException e) {
result = new PluginResult(PluginResult.Status.IO_EXCEPTION, e.getMessage());
}
}
});
// wait for the UI thread to finish
while (this.result == null) {
try {
Thread.sleep(100);
} catch (InterruptedException e) {
// ignore the interruption, since we have to wait
// for the UI thread to finish
}
}
return this.result;
}
}
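As a side note on the waiting loop above: a common alternative (a sketch of my own, not part of the original plugin) is to block on a CountDownLatch instead of polling with Thread.sleep. The helper class below is hypothetical and only illustrates the synchronization pattern:
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;
import android.app.Activity;
public class UiThreadHelper {
    // Runs 'task' on the UI thread and blocks the calling thread until it finishes.
    public static <T> T runOnUiAndWait(Activity activity, final Callable<T> task)
            throws InterruptedException {
        final AtomicReference<T> result = new AtomicReference<T>();
        final CountDownLatch latch = new CountDownLatch(1);
        activity.runOnUiThread(new Runnable() {
            public void run() {
                try {
                    result.set(task.call());
                } catch (Exception e) {
                    // leave result as null; the caller can treat null as failure
                } finally {
                    latch.countDown(); // always release the waiting thread
                }
            }
        });
        latch.await(); // no polling, no arbitrary sleep interval
        return result.get();
    }
}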
My Screenshot.js looks like this:
(function() {
/* Get local ref to global PhoneGap/Cordova/cordova object for exec function.
- This increases the compatibility of the plugin. */
var cordovaRef = window.PhoneGap || window.Cordova || window.cordova; // old to new fallbacks
/**
* This class exposes the ability to take a Screenshot to JavaScript
*/
function Screenshot() { }
/**
* Save the screenshot to the user's Photo Library
*/
Screenshot.prototype.saveScreenshot = function() {
cordovaRef.exec(null, null, "Screenshot", "saveScreenshot", []);
};
if (!window.plugins) {
window.plugins = {};
}
if (!window.plugins.screenshot) {
window.plugins.screenshot = new Screenshot();
}
})(); /* End of Temporary Scope. */
Now I try to call my Screenshot.js function using this code:
function takeScreenShot() {
cordovaRef.exec("Screenshot.saveScreenshot");
}
However, all I get is JSON errors. I know that somewhere it is trying to convert a Java string to JSON, but I just can't figure out where to change it. At least, I think that is what is wrong...
My errors look like this:
ERROR: org.json.JSONException: Value undefined of type java.lang.String cannot be converted to JSONArray.
Error: Status=8 Message=JSON error
file:///android_asset/www/cordova-2.0.0.js: Line 938 : Error: Status=8 Message=JSON error
Error: Status=8 Message=JSON error at file:///android_asset/www/cordova-2.0.0.js:938
Can anyone guide me on where I'm going wrong, please?
Question resolved by:
Phonegap Screenshot plugin in Cordova 2.0.0
Answer provided by Simon MacDonald.
(In short: cordova.exec expects a success callback, a failure callback, the service name, the action name, and an args array. Calling cordovaRef.exec("Screenshot.saveScreenshot") leaves the args undefined, which is exactly what the JSONException above complains about; calling the wrapper window.plugins.screenshot.saveScreenshot() from Screenshot.js uses the correct signature.)
Recently I got involved in the Storlet project, which is middleware for the OpenStack Swift project. I do not intend to talk about Storlets in depth, but in short, a storlet runs Java code on objects (files) stored in Swift object storage. Files are read by the storlet and handed to the Java application in the form of an InputStream, which means we have no direct access to the files.
Here is sample code for a storlet that gets an image as an InputStream and makes a thumbnail of it.
import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.io.InputStream;
import java.io.OutputStream;
import org.openstack.storlet.common.IStorlet;
import org.openstack.storlet.common.StorletException;
import org.openstack.storlet.common.StorletInputStream;
import org.openstack.storlet.common.StorletLogger;
import org.openstack.storlet.common.StorletObjectOutputStream;
import org.openstack.storlet.common.StorletContainerHandle;
import org.openstack.storlet.common.StorletOutputStream;
import org.openstack.storlet.common.StorletUtils;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
public class ThumbnailStorlet implements IStorlet {
@Override
public void invoke(ArrayList<StorletInputStream> inputStreams,
ArrayList<StorletOutputStream> outputStreams,
Map<String, String> parameters, StorletLogger log)
throws StorletException {
log.emitLog("ThumbnailStorlet Invoked");
/*
* Get input stuff
*/
HashMap<String, String> object_md;
StorletInputStream storletInputStream = inputStreams.get(0);
InputStream thumbnailInputStream = storletInputStream.getStream();
object_md = storletInputStream.getMetadata();
/*
* Get output stuff
*/
StorletObjectOutputStream storletObjectOutputStream = (StorletObjectOutputStream)outputStreams.get(0);
OutputStream thumbnailOutputStream = storletObjectOutputStream.getStream();
/*
* Set the output metadata
*/
log.emitLog("Setting metadata");
storletObjectOutputStream.setMetadata(object_md);
/*
* Read Input to BufferedImage
*/
log.emitLog("Reading Input");
BufferedImage img = null;
try {
img = ImageIO.read(thumbnailInputStream);
} catch (Exception e) {
log.emitLog("Failed to read input stream to buffered image");
throw new StorletException("Failed to read input stream to buffered image " + e.getMessage());
} finally {
try {
thumbnailInputStream.close();
} catch (IOException e) {
log.emitLog("Failed to close input stream");
}
}
/*
* Convert
*/
log.emitLog("Converting");
int newH = img.getHeight()/8;
int newW = img.getWidth()/8;
int type = img.getTransparency() == Transparency.OPAQUE ? BufferedImage.TYPE_INT_RGB : BufferedImage.TYPE_INT_ARGB;
BufferedImage thumbnailImage = new BufferedImage(newW, newH, type);
Graphics2D g = thumbnailImage.createGraphics();
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.drawImage(img, 0, 0, newW, newH, null);
g.dispose();
/*
* Write
*/
log.emitLog("Writing Output");
try {
ImageIO.write(thumbnailImage, "PNG" , thumbnailOutputStream);
} catch (Exception e) {
log.emitLog("Failed to write image to out stream");
throw new StorletException("Failed to write image to out stream " + e.getMessage());
} finally {
try {
thumbnailOutputStream.close();
} catch (IOException e) {
// ignore failure to close the output stream
}
}
log.emitLog("Done");
}
}
Now...
I want to use some external application that operates on images, such as GDAL, inside my Java code; assume code like the code above (not doing exactly the same thing as above). GDAL has some CLI commands, for example this command:
gdal_translate -of JPEG -co QUALITY=50 input.tif output.jpg
The input.tif is already stored in my object storage, and the storlet can read it and hand it to me as an InputStream. I also have some practice running external processes inside Java with ProcessBuilder, but imagine that I receive input.tif as an InputStream, not a file.
Next, I don't want to write the InputStream back to the local storage where my application runs, both because of lack of storage (the objects may be very large, more than 2 GB) and because it degrades performance.
Is there any way in Java to pass an InputStream to an external process as a file argument without storing it on disk?
I am running my code in an Ubuntu Docker container.
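For illustration only, here is a minimal sketch of the ProcessBuilder pattern the question refers to: pumping an InputStream into a child process's stdin so that nothing touches the local disk. The class name is made up, it assumes Java 9+ for InputStream.transferTo, and it only helps with tools that can read their input from stdin, which a plain file argument like gdal_translate's src_dataset is not:
import java.io.InputStream;
import java.io.OutputStream;
public class PipeToProcess {
    // Starts the given command and streams 'in' into its stdin; returns the exit code.
    public static int pipe(InputStream in, String... command) throws Exception {
        Process p = new ProcessBuilder(command)
                .redirectErrorStream(true) // merge stderr into stdout
                .start();
        try (OutputStream stdin = p.getOutputStream()) {
            in.transferTo(stdin); // stream the bytes; nothing is written to local disk
        }
        try (InputStream out = p.getInputStream()) {
            out.transferTo(System.out); // drain output so the child cannot block on a full pipe
        }
        return p.waitFor();
    }
}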
I don't think you can do that; a File needs to be stored in a file system (either local or remote) to be read. You could try to pass a ByteArrayInputStream to the reading process, but the GDAL process would have to support that type of input:
https://gdal.org/programs/gdal_translate.html#cmdoption-gdal_translate-arg-src_dataset
As per the GDAL documentation, it does not seem possible:
<src_dataset>
The source dataset name. It can be either file name, URL of data source or subdataset name for multi-dataset files.
I am trying to run the Java streaming speech recognition example here: https://cloud.google.com/speech-to-text/docs/streaming-recognize#speech-streaming-mic-recognize-java
I created a new Gradle project in Eclipse, added compile 'com.google.cloud:google-cloud-speech:1.1.0' and compile 'com.google.cloud:google-cloud-bigquery:1.70.0' to the dependencies, and then copied the example code from the link into the main class. Nothing from that second dependency is used in the example code as far as I can see, but I need it in there, otherwise I get an error like this: Error: Could not find or load main class com.google.cloud.bigquery.benchmark.Benchmark
When I run with both dependencies added, I immediately get the error in the title (needs a path to queries.json) and the app exits. What is the queries.json file, and how can I provide the application a path to it to get the example project to run? The Google API is set up with the proper environment variables on my system, and API calls are configured to be allowed from the IP of the machine I am working on.
Here is the entire class script (only script in the project):
import com.google.api.gax.rpc.ClientStream;
import com.google.api.gax.rpc.ResponseObserver;
import com.google.api.gax.rpc.StreamController;
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.cloud.speech.v1.StreamingRecognizeResponse;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.DataLine.Info;
import javax.sound.sampled.TargetDataLine;
public class GoogleSpeechRecognition {
/** Performs microphone streaming speech recognition with a duration of 1 minute. */
public static void streamingMicRecognize() throws Exception {
ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
try (SpeechClient client = SpeechClient.create()) {
responseObserver =
new ResponseObserver<StreamingRecognizeResponse>() {
ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();
public void onStart(StreamController controller) {}
public void onResponse(StreamingRecognizeResponse response) {
responses.add(response);
}
public void onComplete() {
for (StreamingRecognizeResponse response : responses) {
StreamingRecognitionResult result = response.getResultsList().get(0);
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcript : %s\n", alternative.getTranscript());
}
}
public void onError(Throwable t) {
System.out.println(t);
}
};
ClientStream<StreamingRecognizeRequest> clientStream =
client.streamingRecognizeCallable().splitCall(responseObserver);
RecognitionConfig recognitionConfig =
RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
.setLanguageCode("en-US")
.setSampleRateHertz(16000)
.build();
StreamingRecognitionConfig streamingRecognitionConfig =
StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig).build();
StreamingRecognizeRequest request =
StreamingRecognizeRequest.newBuilder()
.setStreamingConfig(streamingRecognitionConfig)
.build(); // The first request in a streaming call has to be a config
clientStream.send(request);
// SampleRate:16000Hz, SampleSizeInBits: 16, Number of channels: 1, Signed: true,
// bigEndian: false
AudioFormat audioFormat = new AudioFormat(16000, 16, 1, true, false);
DataLine.Info targetInfo =
new Info(
TargetDataLine.class,
audioFormat); // Set the system information to read from the microphone audio stream
if (!AudioSystem.isLineSupported(targetInfo)) {
System.out.println("Microphone not supported");
System.exit(0);
}
// Target data line captures the audio stream the microphone produces.
TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
targetDataLine.open(audioFormat);
targetDataLine.start();
System.out.println("Start speaking");
long startTime = System.currentTimeMillis();
// Audio Input Stream
AudioInputStream audio = new AudioInputStream(targetDataLine);
while (true) {
long estimatedTime = System.currentTimeMillis() - startTime;
byte[] data = new byte[6400];
audio.read(data);
if (estimatedTime > 60000) { // 60 seconds
System.out.println("Stop speaking.");
targetDataLine.stop();
targetDataLine.close();
break;
}
request =
StreamingRecognizeRequest.newBuilder()
.setAudioContent(ByteString.copyFrom(data))
.build();
clientStream.send(request);
}
} catch (Exception e) {
System.out.println(e);
}
responseObserver.onComplete();
}
}
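Note that the class as pasted has no main method, so there is nothing for a run configuration to launch by default; a minimal entry point (my addition, not from the original post) could look like this:
public class RunRecognition {
    public static void main(String[] args) throws Exception {
        GoogleSpeechRecognition.streamingMicRecognize(); // the example method above
    }
}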
OK, so for me it turned out to be something simple, and my fault. The default run configuration was incorrectly set to one of the Google classes instead of the test class containing the example code. This is what caused both the BigQuery error and the queries.json error. Just correct the main class to your test class with the example code and it works. Also, you don't need to include compile 'com.google.cloud:google-cloud-bigquery:1.70.0' in your Gradle dependencies at all; the error that complained about needing it was caused by the incorrect main-class setting in the run configuration.
I am new to Java and am using Karate for API automation. I need help integrating TestRail with Karate. I want to tag each scenario with its test case ID (from TestRail), and I want to push the result 'after the scenario'.
Can someone guide me on this? Code snippets would be much appreciated. Thank you!
I spent a lot of effort on this.
Here is how I implemented it; maybe you can follow the same approach.
First of all, download the APIClient.java and APIException.java files from the link below.
TestRail API on GitHub
Then you need to add these files to the following path in your project.
For example: YourProjectFolder/src/main/java/testrails/
In your karate-config.js file, after each test, you can send your case tags, test results and error messages to the BaseTest.java file, which I will talk about shortly.
karate-config.js file
function fn() {
var config = {
baseUrl: 'http://111.111.1.111:11111',
};
karate.configure('afterScenario', () => {
try{
const BaseTestClass = Java.type('features.BaseTest');
BaseTestClass.sendScenarioResults(karate.scenario.failed,
karate.scenario.tags, karate.info.errorMessage);
}catch(error) {
console.log(error)
}
});
return config;
}
Please don't forget to give a tag to the scenario in the feature file.
For example, @1111
Feature: ExampleFeature
Background:
* def conf = call read('../karate-config.js')
* url conf.baseUrl
@1111
Scenario: Example
Next, create a runner file named BaseTest.java
BaseTest.java file
package features;
import com.intuit.karate.junit5.Karate;
import net.minidev.json.JSONObject;
import org.junit.jupiter.api.BeforeAll;
import testrails.APIClient;
import testrails.APIException;
import java.io.IOException;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
public class BaseTest {
private static APIClient client = null;
private static String runID = null;
@BeforeAll
public static void beforeClass() throws Exception {
String fileName = System.getProperty("karate.options");
//Login to API
client = new APIClient("Your host, for example https://yourcompanyname.testrail.io/");
client.setUser("user.name@companyname.com");
client.setPassword("password");
//Create Test Run
Map data = new HashMap();
data.put("suite_id", "Write Your Project SuitId(Only number)");
data.put("name", "Api Test Run");
data.put("description", "Karate Architect Regression Running");
JSONObject c = (JSONObject) client.sendPost("add_run/" + TESTRAIL_PROJECT_ID, data);
runID = c.getAsString("id");
}
//Send Scenario Result to Testrail
public static void sendScenarioResults(boolean failed, List<String> tags, String errorMessage) {
try {
Map data = new HashMap();
data.put("status_id", failed ? 5 : 1);
data.put("comment", errorMessage);
client.sendPost("add_result_for_case/" + runID + "/" + tags.get(0),
data);
} catch (IOException e) {
e.printStackTrace();
} catch (APIException e) {
e.printStackTrace();
}
}
@Karate.Test
Karate ExampleFeatureRun() {
return Karate.run("ExampleFeatureRun").relativeTo(getClass());
}
}
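As a possible extension (not part of the original answer), the test run opened in beforeClass can also be closed once all features have finished, via TestRail's close_run endpoint. A sketch that assumes the same client and runID fields as in BaseTest above:
// Requires: import org.junit.jupiter.api.AfterAll;
@AfterAll
public static void afterClass() throws Exception {
    client.sendPost("close_run/" + runID, new HashMap());
}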
Please look at 'hooks' documented here: https://github.com/intuit/karate#hooks
And there is an example with code over here: https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/hooks/hooks.feature
I'm sorry I can't help you with how to push data to testrail, but it may be as simple as an HTTP request. And guess what Karate is famous for :)
Note that values of tags can be accessed within a test, here is the doc for karate.tagValues (with link to example): https://github.com/intuit/karate#the-karate-object
Note that you need to be on the 0.7.0 version, right now 0.7.0.RC8 is available.
Edit - also see: https://stackoverflow.com/a/54527955/143475
I have images of codes that I want to decode. How can I use zxing so that I specify the image location and get the decoded text back, and in case the decoding fails (it will for some images, that's the project), it gives me an error.
How can I set up zxing on my Windows machine? I downloaded the jar file, but I don't know where to start. I understand I'll have to write code to read the image and supply it to the library's reader method, but a guide on how to do that would be very helpful.
I was able to do it. I downloaded the source and added the following code. A bit rustic, but it gets the work done.
import com.google.zxing.NotFoundException;
import com.google.zxing.ChecksumException;
import com.google.zxing.FormatException;
import com.google.zxing.BarcodeFormat;
import com.google.zxing.DecodeHintType;
import com.google.zxing.Reader;
import com.google.zxing.BinaryBitmap;
import com.google.zxing.Result;
import com.google.zxing.LuminanceSource;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
import java.io.IOException;
import java.util.*;
import com.google.zxing.qrcode.QRCodeReader;
class qr
{
public static void main(String[] args)
{
Reader xReader = new QRCodeReader();
BufferedImage dest = null;
try
{
dest = ImageIO.read(new File(args[0]));
}
catch(IOException e)
{
System.out.println("Cannot load input image");
return; // bail out here, otherwise dest is null and decoding below throws a NullPointerException
}
LuminanceSource source = new BufferedImageLuminanceSource(dest);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
Vector<BarcodeFormat> barcodeFormats = new Vector<BarcodeFormat>();
barcodeFormats.add(BarcodeFormat.QR_CODE);
HashMap<DecodeHintType, Object> decodeHints = new HashMap<DecodeHintType, Object>(3);
decodeHints.put(DecodeHintType.POSSIBLE_FORMATS, barcodeFormats);
decodeHints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
Result result = null;
try
{
result = xReader.decode(bitmap, decodeHints);
System.out.println("Code Decoded");
String text = result.getText();
System.out.println(text);
}
catch(NotFoundException e)
{
System.out.println("Decoding Failed");
}
catch(ChecksumException e)
{
System.out.println("Checksum error");
}
catch(FormatException e)
{
System.out.println("Wrong format");
}
}
}
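If the images may contain barcode formats other than QR codes, one small variation (my sketch, not part of the original answer) is to swap the QRCodeReader for zxing's MultiFormatReader, which tries every built-in format:
import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.Result;
import java.util.Map;
class AnyFormatDecoder {
    // Reuses a BinaryBitmap and decode hints built exactly as in the code above.
    // Note: drop the POSSIBLE_FORMATS hint from the map (or add more formats to it),
    // otherwise the reader is still restricted to QR codes.
    static String decodeAny(BinaryBitmap bitmap, Map<DecodeHintType, Object> hints)
            throws NotFoundException {
        Result result = new MultiFormatReader().decode(bitmap, hints);
        return result.getText();
    }
}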
The project includes a class called CommandLineRunner which you can simply call from the command line. You can also look at its source to see how it works and reuse it.
There is nothing to install or set up. It's a library. Typically you don't download the jar but declare it as a dependency in your Maven-based project.
If you just want to send an image to decode, use http://zxing.org/w/decode.jspx
I created an Eclipse plugin that hooks into the save action to create a minified JavaScript file with the Google Closure Compiler. See the files below.
That worked up to Eclipse 3.7.2. Unfortunately, in Eclipse 4.2.1 it now seems that this sometimes creates an endless loop. The job "Compile .min.js" (line 64 in ResourceChangedListener.java) seems to be the cause. The result is that the workspace starts to build over and over again. I guess this is because the job creates or changes a file, triggering the workspace build again, which again triggers the job, which triggers the build, and so on.
But I cannot figure out how to prevent this.
// Activator.java
package closure_compiler_save;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.ui.plugin.AbstractUIPlugin;
import org.osgi.framework.BundleContext;
/**
* The activator class controls the plug-in life cycle
*/
public class Activator extends AbstractUIPlugin {
// The plug-in ID
public static final String PLUGIN_ID = "closure-compiler-save"; //$NON-NLS-1$
// The shared instance
private static Activator plugin;
/**
* The constructor
*/
public Activator() {
}
@Override
public void start(BundleContext context) throws Exception {
super.start(context);
Activator.plugin = this;
ResourceChangedListener listener = new ResourceChangedListener();
ResourcesPlugin.getWorkspace().addResourceChangeListener(listener);
}
@Override
public void stop(BundleContext context) throws Exception {
Activator.plugin = null;
super.stop(context);
}
/**
* Returns the shared instance
*
@return the shared instance
*/
public static Activator getDefault() {
return plugin;
}
}
// ResourceChangedListener.java
package closure_compiler_save;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.IResourceChangeEvent;
import org.eclipse.core.resources.IResourceChangeListener;
import org.eclipse.core.resources.IResourceDelta;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IPath;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;
public class ResourceChangedListener implements IResourceChangeListener {
public void resourceChanged(IResourceChangeEvent event) {
if (event.getType() != IResourceChangeEvent.POST_CHANGE)
return;
IResourceDelta delta = event.getDelta();
try {
processDelta(delta);
} catch (CoreException e) {
e.printStackTrace();
}
}
// find out which files were just changed
private void processDelta(IResourceDelta delta) throws CoreException {
IResourceDelta[] kids = delta.getAffectedChildren();
for (IResourceDelta delta2 : kids) {
if (delta2.getAffectedChildren().length == 0) {
if (delta.getKind() != IResourceDelta.CHANGED)
return;
IResource res = delta2.getResource();
if (res.getType() == IResource.FILE && "js".equalsIgnoreCase(res.getFileExtension())) {
if (res.getName().contains("min"))
return;
compile(res);
}
}
processDelta(delta2);
}
}
private void compile(final IResource res) throws CoreException {
final IPath fullPath = res.getFullPath();
final IPath fullLocation = res.getLocation();
final String fileName = fullPath.lastSegment().toString();
final String outputFilename = fileName.substring(0, fileName.lastIndexOf(".")).concat(".min.js");
final String outputPath = fullPath.removeFirstSegments(1).removeLastSegments(1).toString();
final IProject project = res.getProject();
final IFile newFile = project.getFile(outputPath.concat("/".concat(outputFilename)));
Job compileJob = new Job("Compile .min.js") {
public IStatus run(IProgressMonitor monitor) {
byte[] bytes = null;
try {
bytes = CallCompiler.compile(fullLocation.toString(), CallCompiler.SIMPLE_OPTIMIZATION).getBytes();
InputStream source = new ByteArrayInputStream(bytes);
if (!newFile.exists()) {
newFile.create(source, IResource.NONE, null);
} else {
newFile.setContents(source, IResource.NONE, null);
}
} catch (IOException e) {
e.printStackTrace();
} catch (CoreException e) {
e.printStackTrace();
}
return Status.OK_STATUS;
}
};
compileJob.setRule(newFile.getProject()); // this will ensure that no two jobs are writing simultaneously on the same file
compileJob.schedule();
}
}
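One thing that may help with the rebuild loop, though it is an assumption on my part rather than something from the original post: mark the generated file as derived, so that builders, and this listener, can recognize generated output and skip it. Sketched against the code above:
// In processDelta(), skip resources generated by this (or any) builder:
if (res.isDerived()) {
    return;
}
// And in the compile job, after writing newFile, flag it as derived
// (inside the existing try block; CoreException is already handled there):
newFile.setDerived(true, null); // IResource.setDerived(boolean, IProgressMonitor), since Eclipse 3.6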
After I set up a blank Eclipse Classic environment, started a new Eclipse plugin project there, and recreated all the files, it partly works again.
In this environment, starting a debug session, I can save .js files and the .min.js files are created automatically.
So far so good!
But when I install the plugin into my real development Eclipse environment, the automatic compilation on save does not work.
At least one step further!
Step 2:
There were some files not included in the build that are obviously needed, like the manifest. No idea why they were deselected.
Anyway, it seems that just setting up a blank Eclipse 4 Classic and going through the Eclipse plugin wizard fixed my original problem. I would still love to know what the actual problem was...