I use IntelliJ IDEA and have included the Google Cloud Speech API, but it doesn't find the SpeechClient class (https://image.prntscr.com/image/3S9bjQWgRdGGB1olHihxkA.png) (https://image.prntscr.com/image/RVspqW2-QuqD2mytN3V8Qw.png). Why? I don't know why the Java code can't find this class. Is the example code from the Google documentation below working? It's their sample code.
https://cloud.google.com/speech-to-text/docs/reference/libraries#client-libraries-usage-java
https://image.prntscr.com/image/fFVm7P7SRheGYWCqTvdDAQ.png
package de.****.test;
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
public class main {
/**
* Demonstrates using the Speech API to transcribe an audio file.
*/
public static void main(String... args) throws Exception {
// Instantiates a client
try (SpeechClient speechClient = SpeechClient.create()) {
// The path to the audio file to transcribe
String fileName = "./resources/audio.raw";
// Reads the audio file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString audioBytes = ByteString.copyFrom(data);
// Builds the sync recognize request
RecognitionConfig config = RecognitionConfig.newBuilder()
.setEncoding(AudioEncoding.LINEAR16)
.setSampleRateHertz(16000)
.setLanguageCode("en-US")
.build();
RecognitionAudio audio = RecognitionAudio.newBuilder()
.setContent(audioBytes)
.build();
// Performs speech recognition on the audio file
RecognizeResponse response = speechClient.recognize(config, audio);
List<SpeechRecognitionResult> results = response.getResultsList();
for (SpeechRecognitionResult result : results) {
// There can be several alternative transcripts for a given chunk of speech. Just use the
// first (most likely) one here.
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcription: %s%n", alternative.getTranscript());
}
}
}
}
How do I declare the application credentials? I have my .json file, which is the key.
package shyam;
// Imports the Google Cloud client library
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Feature.Type;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
public class App {
public static void main(String[] args) throws Exception {
// Initialize client that will be used to send requests. This client only needs to be created
// once, and can be reused for multiple requests. After completing all of your requests, call
// the "close" method on the client to safely clean up any remaining background resources.
try (ImageAnnotatorClient vision = ImageAnnotatorClient.create()) {
// The path to the image file to annotate
String fileName = "./resources/wakeupcat.jpg";
// Reads the image file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString imgBytes = ByteString.copyFrom(data);
// Builds the image annotation request
List<AnnotateImageRequest> requests = new ArrayList<>();
Image img = Image.newBuilder().setContent(imgBytes).build();
Feature feat = Feature.newBuilder().setType(Type.LABEL_DETECTION).build();
AnnotateImageRequest request =
AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
requests.add(request);
// Performs label detection on the image file
BatchAnnotateImagesResponse response = vision.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = response.getResponsesList();
for (AnnotateImageResponse res : responses) {
if (res.hasError()) {
System.out.format("Error: %s%n", res.getError().getMessage());
return;
}
// for (EntityAnnotation annotation : res.getLabelAnnotationsList()) {
// annotation
// .getAllFields()
// .forEach((k, v) -> System.out.format("%s : %s%n", k, v.toString()));
// }
}
}
}
}
I'm getting the error
Application default credentials are not available
I have already set it in my cmd using set GOOGLE_APPLICATION_CREDENTIALS='key_path'. I have also initialized my Google Cloud account in the CLI. Hope someone can help me. Thank you.
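One thing worth checking first: cmd stores quotes as part of the value, so set GOOGLE_APPLICATION_CREDENTIALS='key_path' makes the quotes part of the path; try setting it without any quotes. Alternatively, the Java client libraries let you load the key file explicitly instead of relying on the environment variable. A minimal sketch, assuming your key is at key.json (a placeholder path) and reusing the Vision client from the code above:

import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageAnnotatorSettings;
import java.io.FileInputStream;

public class ExplicitCredentials {
    public static void main(String[] args) throws Exception {
        // Load the service-account key directly, bypassing GOOGLE_APPLICATION_CREDENTIALS
        GoogleCredentials credentials =
                GoogleCredentials.fromStream(new FileInputStream("key.json"));
        ImageAnnotatorSettings settings =
                ImageAnnotatorSettings.newBuilder()
                        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                        .build();
        // Pass the settings when creating the client; the rest of the request code is unchanged
        try (ImageAnnotatorClient vision = ImageAnnotatorClient.create(settings)) {
            // ... same request-building code as in the snippet above ...
        }
    }
}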
Here is my use case:
I want users to be able to upload video content from their devices directly to my YouTube channel or a dedicated YouTube account without leaving my app. First of all, is this possible? I would appreciate code samples. Thanks.
The YouTube Data API documentation shows how to upload a video using an HTTP request, and code snippets are also available in Java (Kotlin is pretty similar).
To see how to upload a video, go to https://developers.google.com/youtube/v3/code_samples/code_snippets, select Videos under Resource and insert under Method.
If you then click on Show code and select JAVA at the top, you will get this example code:
/**
* Sample Java code for youtube.videos.insert
* See instructions for running these code samples locally:
* https://developers.google.com/explorer-help/guides/code_samples#java
*/
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;
import com.google.api.client.http.InputStreamContent;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.youtube.YouTube;
import com.google.api.services.youtube.model.Video;
import com.google.api.services.youtube.model.VideoSnippet;
import com.google.api.services.youtube.model.VideoStatus;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
import java.util.Collection;
public class ApiExample {
// You need to set this value for your code to compile.
// For example: ... DEVELOPER_KEY = "YOUR ACTUAL KEY";
private static final String DEVELOPER_KEY = "YOUR_API_KEY";
private static final String APPLICATION_NAME = "API code samples";
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
/**
* Build and return an authorized API client service.
*
* @return an authorized API client service
* @throws GeneralSecurityException, IOException
*/
public static YouTube getService() throws GeneralSecurityException, IOException {
final NetHttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
return new YouTube.Builder(httpTransport, JSON_FACTORY, null)
.setApplicationName(APPLICATION_NAME)
.build();
}
/**
* Call function to create API service object. Define and
* execute API request. Print API response.
*
* @throws GeneralSecurityException, IOException, GoogleJsonResponseException
*/
public static void main(String[] args)
throws GeneralSecurityException, IOException, GoogleJsonResponseException {
YouTube youtubeService = getService();
// Define the Video object, which will be uploaded as the request body.
Video video = new Video();
// Add the snippet object property to the Video object.
VideoSnippet snippet = new VideoSnippet();
snippet.setCategoryId("22");
snippet.setDescription("Description of uploaded video.");
snippet.setTitle("Test video upload.");
video.setSnippet(snippet);
// Add the status object property to the Video object.
VideoStatus status = new VideoStatus();
status.setPrivacyStatus("private");
video.setStatus(status);
// TODO: For this request to work, you must replace "YOUR_FILE"
// with a pointer to the actual file you are uploading.
// The maximum file size for this operation is 137438953472.
File mediaFile = new File("YOUR_FILE");
InputStreamContent mediaContent =
new InputStreamContent("application/octet-stream",
new BufferedInputStream(new FileInputStream(mediaFile)));
mediaContent.setLength(mediaFile.length());
// Define and execute the API request
YouTube.Videos.Insert request = youtubeService.videos()
.insert("snippet,status", video, mediaContent);
Video response = request.setKey(DEVELOPER_KEY).execute();
System.out.println(response);
}
}
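One more point that matters for uploads from user devices: the generated insert request exposes the client library's MediaHttpUploader, so you can switch from a single direct upload to the resumable protocol and report progress. A small, hedged addition to the sample above (untested; it assumes the extra import com.google.api.client.googleapis.media.MediaHttpUploader and slots in just before execute()):

YouTube.Videos.Insert request = youtubeService.videos()
        .insert("snippet,status", video, mediaContent);
// Use the resumable upload protocol instead of one direct upload, and log state changes
MediaHttpUploader uploader = request.getMediaHttpUploader();
uploader.setDirectUploadEnabled(false);
uploader.setProgressListener(u -> System.out.println("Upload state: " + u.getUploadState()));
Video response = request.setKey(DEVELOPER_KEY).execute();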
You can also look at other example Java code for uploading a video to YouTube on GitHub.
Note that you will need to make a few changes to this:
On Android, you don't have a main method. You will need to (1) put that code wherever you need it, and you might want to (2) run it asynchronously, as sketched below.
When using Kotlin, you will also (3) need to adjust the syntax accordingly.
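For (2), here is a minimal sketch of running the upload off the calling thread with a plain ExecutorService. uploadVideo() is a hypothetical helper that would wrap the body of the sample's main method above; on Android you would use your preferred threading mechanism instead:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncUploadSketch {
    // Hypothetical helper: move the upload logic from the sample's main method in here.
    static void uploadVideo() throws Exception {
        // build the YouTube service, create the Video object, execute the insert request
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(() -> {
            try {
                uploadVideo();
            } catch (Exception e) {
                e.printStackTrace(); // surface upload failures instead of swallowing them
            }
        });
        executor.shutdown(); // accept no new tasks; the queued upload still runs to completion
    }
}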
I'm trying this: https://cloud.google.com/speech-to-text/docs/reference/libraries#client-libraries-install-java
But the import com.google.cloud.speech.v1.SpeechClient; shows an error. The rest of the classes under the Cloud Speech API import just fine.
I have created the GCP service account and downloaded the JSON file for my project, and I even set my Google credentials to that JSON file using PowerShell.
// Imports the Google Cloud client library
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
public class QuickstartSample {
/**
* Demonstrates using the Speech API to transcribe an audio file.
*/
public static void main(String... args) throws Exception {
// Instantiates a client
try (SpeechClient speechClient = SpeechClient.create()) {
// The path to the audio file to transcribe
String fileName = "./resources/audio.raw";
// Reads the audio file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString audioBytes = ByteString.copyFrom(data);
// Builds the sync recognize request
RecognitionConfig config = RecognitionConfig.newBuilder()
.setEncoding(AudioEncoding.LINEAR16)
.setSampleRateHertz(16000)
.setLanguageCode("en-US")
.build();
RecognitionAudio audio = RecognitionAudio.newBuilder()
.setContent(audioBytes)
.build();
// Performs speech recognition on the audio file
RecognizeResponse response = speechClient.recognize(config, audio);
List<SpeechRecognitionResult> results = response.getResultsList();
for (SpeechRecognitionResult result : results) {
// There can be several alternative transcripts for a given chunk of speech. Just use the
// first (most likely) one here.
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcription: %s%n", alternative.getTranscript());
}
}
}
}
I have around 100 files in my Dropbox account, and I am trying to create a shareable link for each of them using the Dropbox API.
I tried using
DbxClient = new DbxClient(config, accessToken);
client.createShareableUrl(path);
but got an error on DbxClient: cannot find symbol / class not found.
import com.dropbox.core.DbxRequestConfig;
import com.dropbox.core.v2.*;
import static com.dropbox.core.v2.files.AlphaGetMetadataError.path;
import com.dropbox.core.v2.files.FileMetadata;
import com.dropbox.core.v2.files.ListFolderResult;
import com.dropbox.core.v2.files.Metadata;
import com.dropbox.core.v2.sharing.RequestedVisibility;
import com.dropbox.core.v2.sharing.SharedLinkMetadata;
import com.dropbox.core.v2.sharing.SharedLinkSettings;
import com.dropbox.core.v2.users.FullAccount;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
public class DBX {
static boolean doYouWantMeToUpload = false;
private static final String ACCESS_TOKEN = "My access token here I removed it";
public static void main(String args[]) throws DbxException, FileNotFoundException, IOException {
// Create Dropbox client
DbxRequestConfig config = DbxRequestConfig.newBuilder("dropbox/java-tutorial").build();
DbxClientV2 client = new DbxClientV2(config, ACCESS_TOKEN);
// Get current account info
FullAccount account = client.users().getCurrentAccount();
System.out.println(account.getName().getDisplayName());
if(doYouWantMeToUpload == true){
// Get files and folder metadata from Dropbox root directory
ListFolderResult result = client.files().listFolder("");
while (true) {
for (Metadata metadata : result.getEntries()) {
System.out.println(metadata.getPathLower());
}
if (!result.getHasMore()) {
break;
}
result = client.files().listFolderContinue(result.getCursor());
}
// Upload "test.txt" to Dropbox
try (InputStream in = new FileInputStream("test.txt")) {
FileMetadata metadata = client.files().uploadBuilder("/test.txt")
.uploadAndFinish(in);
}
// Get shareable link for a file
DbxClient = new DbxClient(config, ACCESS_TOKEN);
client.createShareableUrl(test.txt);
}
}
}
I want to get a shareable link for every file in my Dropbox.
I followed the instructions from the Dropbox GitHub repository.
You're attempting to use the old createShareableUrl, which is from Dropbox API v1; API v1 is now retired.
You should instead use Dropbox API v2, via DbxClientV2, like you do for the other calls in your code.
Specifically, to create a shared link, you should use createSharedLinkWithSettings. That would look something like:
DbxClientV2 client = new DbxClientV2(config, ACCESS_TOKEN);
client.sharing().createSharedLinkWithSettings(path);
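Since you want links for all of your files, you can combine that call with the listFolder loop you already have. Below is a minimal, untested sketch assuming the same config and ACCESS_TOKEN as in your code (the class name ShareAll is just a placeholder). Note that createSharedLinkWithSettings throws if a shared link already exists for a path, so the catch block simply logs and moves on:

import com.dropbox.core.DbxException;
import com.dropbox.core.DbxRequestConfig;
import com.dropbox.core.v2.DbxClientV2;
import com.dropbox.core.v2.files.FileMetadata;
import com.dropbox.core.v2.files.ListFolderResult;
import com.dropbox.core.v2.files.Metadata;
import com.dropbox.core.v2.sharing.SharedLinkMetadata;

public class ShareAll {
    private static final String ACCESS_TOKEN = "your access token";

    public static void main(String[] args) throws DbxException {
        DbxRequestConfig config = DbxRequestConfig.newBuilder("dropbox/java-tutorial").build();
        DbxClientV2 client = new DbxClientV2(config, ACCESS_TOKEN);
        ListFolderResult result = client.files().listFolder("");
        while (true) {
            for (Metadata metadata : result.getEntries()) {
                // Only files get shared links in this sketch; folders are skipped
                if (metadata instanceof FileMetadata) {
                    try {
                        SharedLinkMetadata link =
                                client.sharing().createSharedLinkWithSettings(metadata.getPathLower());
                        System.out.println(metadata.getPathLower() + " -> " + link.getUrl());
                    } catch (DbxException e) {
                        // e.g. a shared link already exists for this file
                        System.out.println("Skipping " + metadata.getPathLower() + ": " + e.getMessage());
                    }
                }
            }
            if (!result.getHasMore()) {
                break;
            }
            result = client.files().listFolderContinue(result.getCursor());
        }
    }
}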
I am trying to run the Java streaming speech recognition example here: https://cloud.google.com/speech-to-text/docs/streaming-recognize#speech-streaming-mic-recognize-java
I created a new Gradle project in Eclipse, added compile 'com.google.cloud:google-cloud-speech:1.1.0' and compile 'com.google.cloud:google-cloud-bigquery:1.70.0' to the dependencies, and then copied the example code from the link into the main class. Nothing from that second dependency is used in the example code as far as I can see, but I need it in there; otherwise I get an error like this: Error: Could not find or load main class com.google.cloud.bigquery.benchmark.Benchmark
When I run with both dependencies added, I immediately get the error in the title (need path to queries.json) and the app exits. What is the queries.json file, and how can I provide the application a path to it so the example project runs? The Google API is set up with the proper environment variables on my system, and API calls are allowed from the IP of the machine I am working on.
Here is the entire class script (only script in the project):
import com.google.api.gax.rpc.ClientStream;
import com.google.api.gax.rpc.ResponseObserver;
import com.google.api.gax.rpc.StreamController;
import com.google.cloud.speech.v1.RecognitionAudio;
import com.google.cloud.speech.v1.RecognitionConfig;
import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
import com.google.cloud.speech.v1.RecognizeResponse;
import com.google.cloud.speech.v1.SpeechClient;
import com.google.cloud.speech.v1.SpeechRecognitionAlternative;
import com.google.cloud.speech.v1.SpeechRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognitionConfig;
import com.google.cloud.speech.v1.StreamingRecognitionResult;
import com.google.cloud.speech.v1.StreamingRecognizeRequest;
import com.google.cloud.speech.v1.StreamingRecognizeResponse;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.DataLine.Info;
import javax.sound.sampled.TargetDataLine;
public class GoogleSpeechRecognition {
/** Performs microphone streaming speech recognition with a duration of 1 minute. */
public static void streamingMicRecognize() throws Exception {
ResponseObserver<StreamingRecognizeResponse> responseObserver = null;
try (SpeechClient client = SpeechClient.create()) {
responseObserver =
new ResponseObserver<StreamingRecognizeResponse>() {
ArrayList<StreamingRecognizeResponse> responses = new ArrayList<>();
public void onStart(StreamController controller) {}
public void onResponse(StreamingRecognizeResponse response) {
responses.add(response);
}
public void onComplete() {
for (StreamingRecognizeResponse response : responses) {
StreamingRecognitionResult result = response.getResultsList().get(0);
SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
System.out.printf("Transcript : %s\n", alternative.getTranscript());
}
}
public void onError(Throwable t) {
System.out.println(t);
}
};
ClientStream<StreamingRecognizeRequest> clientStream =
client.streamingRecognizeCallable().splitCall(responseObserver);
RecognitionConfig recognitionConfig =
RecognitionConfig.newBuilder()
.setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
.setLanguageCode("en-US")
.setSampleRateHertz(16000)
.build();
StreamingRecognitionConfig streamingRecognitionConfig =
StreamingRecognitionConfig.newBuilder().setConfig(recognitionConfig).build();
StreamingRecognizeRequest request =
StreamingRecognizeRequest.newBuilder()
.setStreamingConfig(streamingRecognitionConfig)
.build(); // The first request in a streaming call has to be a config
clientStream.send(request);
// SampleRate:16000Hz, SampleSizeInBits: 16, Number of channels: 1, Signed: true,
// bigEndian: false
AudioFormat audioFormat = new AudioFormat(16000, 16, 1, true, false);
DataLine.Info targetInfo =
new Info(
TargetDataLine.class,
audioFormat); // Set the system information to read from the microphone audio stream
if (!AudioSystem.isLineSupported(targetInfo)) {
System.out.println("Microphone not supported");
System.exit(0);
}
// Target data line captures the audio stream the microphone produces.
TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(targetInfo);
targetDataLine.open(audioFormat);
targetDataLine.start();
System.out.println("Start speaking");
long startTime = System.currentTimeMillis();
// Audio Input Stream
AudioInputStream audio = new AudioInputStream(targetDataLine);
while (true) {
long estimatedTime = System.currentTimeMillis() - startTime;
byte[] data = new byte[6400];
audio.read(data);
if (estimatedTime > 60000) { // 60 seconds
System.out.println("Stop speaking.");
targetDataLine.stop();
targetDataLine.close();
break;
}
request =
StreamingRecognizeRequest.newBuilder()
.setAudioContent(ByteString.copyFrom(data))
.build();
clientStream.send(request);
}
} catch (Exception e) {
System.out.println(e);
}
responseObserver.onComplete();
}
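// Entry point (not shown in the question's snippet): call the sample so the class can be run directly.
public static void main(String[] args) throws Exception {
    streamingMicRecognize();
}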
}
OK, so for me it turned out to be something simple, and it was my fault. The default run configuration was incorrectly set to one of the Google classes instead of the test class containing the example code. This is what caused both the BigQuery error and the queries.json error. Just set the main class in the run configuration to your test class with the example code and it works. Also, you don't need to include compile 'com.google.cloud:google-cloud-bigquery:1.70.0' in your Gradle dependencies at all; the error that complained about needing it was caused by the incorrect main class setting in the run configuration.