Can someone tell me why I get the following error: getDevTools() is undefined for the type ChromeDriver?
I want to test the network traffic in the Chrome browser.
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeDriverService;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.github.bonigarcia.wdm.WebDriverManager;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import net.lightbody.bmp.core.har.HarEntry;
import net.lightbody.bmp.proxy.CaptureType;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v96.network.Network;
ChromeDriverService service = new ChromeDriverService.Builder()
.usingDriverExecutable(file)
.usingAnyFreePort().build();
ChromeOptions options = new ChromeOptions().addArguments("--incognito");
ChromeDriver driver = new ChromeDriver(service, options);
DevTools chromeDevTools = ((ChromeDriver) driver).getDevTools();
chromeDevTools.createSession();
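For context, this is the kind of listener-based network capture I am aiming for once the session is created. The sketch below is my own, and it assumes a Selenium 4.x selenium-java dependency (getDevTools() only exists on ChromeDriver from Selenium 4 onward); the URL and the listener body are just placeholders.
import java.util.Optional;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v96.network.Network;
public class NetworkCaptureSketch {
    public static void main(String[] args) {
        // Plain ChromeDriver here for brevity; the service/options setup above still applies.
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        // Enable the Network domain; the three Optionals are buffer-size hints.
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
        // Print the URL of every request the page issues.
        devTools.addListener(Network.requestWillBeSent(),
                request -> System.out.println(request.getRequest().getUrl()));
        driver.get("https://example.com");
        driver.quit();
    }
}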
Thanks
I have the following Spring Boot Controller:
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import com.att.aft.dme2.internal.javaxwsrs.PUT;
import com.att.bttw.model.HelloWorld;
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;
@Api
@Path("/service")
@Produces({ MediaType.APPLICATION_JSON })
public interface RestService {

    @PUT
    @Path("/datarouter/{filename}")
    @Consumes("application/gzip")
    @Produces(MediaType.APPLICATION_JSON)
    Response testEndpoint(@Context HttpServletRequest request, @PathParam("filename") String filename, InputStream inputStream);
}
The implementation of this method is simply:
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.core.Response;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.client.RestTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import com.att.bttw.model.ApplicationResults;
import com.att.bttw.model.CountsDataModel;
import com.att.bttw.model.CountsDataResults;
import com.att.bttw.model.DisplayFieldRecord;
import com.att.bttw.model.DisplayRecord;
import com.att.bttw.model.WatermarkEventObject;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
@Override
public Response testEndpoint(HttpServletRequest request, String filename, InputStream inputStream) {
    System.out.println("File name consumed: " + filename);
    return null;
}
When I try to hit this endpoint, I get the following error:
"Following issues have been detected: Response testEndpoint(javax.servlet.http.HttpServletRequest, java.lang.String, java.io.InputStream), can not have an entity parameter. Try to move the parameter to the corresponding resource method".
When I remove the InputStream and HttpServletRequest parameters, the endpoint works just fine; I just don't understand the error message I'm getting.
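For context, here is a self-contained sketch of what the endpoint is ultimately meant to do, written as a plain resource class rather than an interface/implementation pair. The GZIPInputStream handling and the ok() response are my own assumptions, not the real code; the InputStream is the single unannotated "entity" parameter, and every other parameter carries @Context or @PathParam.
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
@Path("/service")
public class DataRouterResource {
    @PUT
    @Path("/datarouter/{filename}")
    @Consumes("application/gzip")
    @Produces(MediaType.APPLICATION_JSON)
    public Response testEndpoint(@Context HttpServletRequest request,
                                 @PathParam("filename") String filename,
                                 InputStream inputStream) throws IOException {
        try (GZIPInputStream gzip = new GZIPInputStream(inputStream)) {
            // Placeholder: consume or persist the decompressed bytes here.
            System.out.println("File name consumed: " + filename);
        }
        return Response.ok().build();
    }
}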
I am trying to upload a file to S3 and am getting an error like:
Target exception: java.lang.NoSuchMethodError:
com.amazonaws.SDKGlobalConfiguration.isInRegionOptimizedModeEnabled()Z
The code is:
String accessKey = "accesskey";
String secretKey = "mysecretkey";
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3 conn = new AmazonS3Client(credentials);
The target exception is thrown on the line AmazonS3 conn = new AmazonS3Client(credentials);.
In total I have imported all the Java packages shown below, but I am still getting the same error.
import java.io.File;
import java.io.IOException;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.util.List;
import com.amazonaws.auth.*;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.util.StringUtils;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.SDKGlobalConfiguration;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.*;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.AmazonS3Builder;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.ClientConfigurationFactory;
import com.amazonaws.annotation.NotThreadSafe;
import com.amazonaws.annotation.SdkTestInternalApi;
import com.amazonaws.client.AwsSyncClientParams;
import com.amazonaws.internal.SdkFunction;
import com.amazonaws.regions.AwsRegionProvider;
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.AmazonS3Client;
import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.PutObjectRequest;
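For reference, this is the same client construction written against the builder API instead of the deprecated AmazonS3Client constructor. It is only a sketch of what I am attempting, assuming a single aws-java-sdk-s3 1.11.x dependency on the classpath; the region value is a placeholder.
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
public class S3ClientSketch {
    public static void main(String[] args) {
        BasicAWSCredentials credentials = new BasicAWSCredentials("accesskey", "mysecretkey");
        // Builder-based construction; replace the region with the one the bucket lives in.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withRegion(Regions.US_EAST_1)
                .build();
        System.out.println("Buckets visible: " + s3.listBuckets().size());
    }
}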
I am writing a MapReduce project.
I want to send an array from the mapper to the reducer.
But it has an error and I can't fix it.
I import these classes:
import java.io.DataInput;
import java.io.DataOutput;
import java.io.EOFException;
import java.io.IOException;
import java.net.Socket;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.conf.Configured;
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.util.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.util.Iterator;
import hadoop.DENCLUE;
//import javafx.scene.text.Text;
import sun.security.krb5.Config;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.Constants;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
//import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.omg.CORBA.PUBLIC_MEMBER;
import com.sun.org.apache.bcel.internal.generic.NEW;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.mapreduce.Mapper.Context;
import org.apache.hadoop.mapreduce.task.JobContextImpl;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.util.*;
import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;
This is my Map class:
public static class Mapn extends MapReduceBase implements Mapper<LongWritable, Text, Text, Text> {
    @SuppressWarnings("rawtypes")
    Context con;

    @SuppressWarnings("unchecked")
    public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        String line = value.toString();
        String[] words = line.split(",");
        for (String word : words) {
            Text outputKey = new Text(word.toUpperCase().trim());
            try {
                con.write(outputKey, words);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
This is the job:
public static void main(String[] args) throws Exception {
    Configuration c = new Configuration();
    String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
    Path input = new Path(files[0]);
    Path output = new Path(files[1]);
    Job j = new Job(c, "wnt");
    j.setJarByClass(projectmr.class);
    j.setMapperClass(Mapn.class);
    j.setReducerClass(Reduce.class);
    j.setOutputKeyClass(Text.class);
    j.setOutputValueClass(Text.class);
    FileInputFormat.addInputPaths(j, input);
    FileOutputFormat.setOutputPath(j, output);
    System.exit(j.waitForCompletion(true) ? 0 : 1);
and this is the error I get:
Exception in thread "main" java.lang.RuntimeException: class hadoop.projectmr$Mapn not org.apache.hadoop.mapreduce.Mapper
at org.apache.hadoop.conf.Configuration.setClass(Configuration.java:1969)
at org.apache.hadoop.mapreduce.Job.setMapperClass(Job.java:891)
at hadoop.projectmr.main(projectmr.java:191)
This is the old Hadoop 1 API:
import org.apache.hadoop.mapred.*;
You should instead be importing the classes from
org.apache.hadoop.mapreduce.*;
As the error says:
not org.apache.hadoop.mapreduce.Mapper
So, basically, you don't need MapReduceBase, and Mapper is now a class, not an interface.
So your mapper would now look like
public static class MyMapper extends Mapper<Kin, Vin, Kout, Vout>
Look at the WordCount code
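For example, the posted Mapn logic rewritten against the new API might look like the sketch below. Joining the split words back into one Text value is my own choice here, since a raw String[] is not a Writable and cannot be emitted as the map output value; swap in an ArrayWritable subclass if you really need the array shape.
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
// New-API mapper: Mapper is a class, so extend it and override map().
public class Mapn extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] words = value.toString().split(",");
        // Re-join the tokens into a single Text value to send to the reducer.
        Text joined = new Text(String.join(",", words));
        for (String word : words) {
            context.write(new Text(word.toUpperCase().trim()), joined);
        }
    }
}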
I am trying to create a new Excel file using the code below. I have added all the necessary JAR files, as you can see from the import lines, but I am still seeing the errors "WritableWorkbook cannot be resolved to a type" and "Workbook cannot be resolved to a type".
package myPackage;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.WebElement;
import java.util.List;
import java.io.File;
import java.io.OutputStream;
import org.apache.poi.xssf.usermodel.XSSFCell;
import org.apache.poi.xssf.usermodel.XSSFRow;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.apache.poi.ss.usermodel.Workbook;
public class myClass {
    public static void main(String[] args) {
        File eFile = new File("C:\\Users\\bondrea\\AutomationRelatedFiles\\TrialExcel.xlsx");
        WritableWorkbook FinalExcel = Workbook.createWorkbook(eFile);
    }
}
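If I understand the libraries correctly, WritableWorkbook and Workbook.createWorkbook(...) belong to the separate jxl (JExcelApi) library rather than Apache POI, so the POI imports above would not resolve them. A minimal sketch of creating the same .xlsx with the POI classes already on the import list, assuming a recent poi-ooxml jar, would be:
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.poi.xssf.usermodel.XSSFSheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
public class myClass {
    public static void main(String[] args) throws IOException {
        File eFile = new File("C:\\Users\\bondrea\\AutomationRelatedFiles\\TrialExcel.xlsx");
        // Build the workbook in memory, add a sheet, then write it out to disk.
        try (XSSFWorkbook workbook = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream(eFile)) {
            XSSFSheet sheet = workbook.createSheet("Sheet1");
            sheet.createRow(0).createCell(0).setCellValue("hello");
            workbook.write(out);
        }
    }
}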