I have an issue here that I'm hoping to resolve. When I call the Cloud Translate service with source and target languages, I encounter the following error:
java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
at com.google.cloud.translate.TranslateImpl.optionMap(TranslateImpl.java:131)
at com.google.cloud.translate.TranslateImpl.access$000(TranslateImpl.java:40)
at com.google.cloud.translate.TranslateImpl$4.call(TranslateImpl.java:113)
at com.google.cloud.translate.TranslateImpl$4.call(TranslateImpl.java:110)
This is what I'm doing:
protected Translate getTranslationServiceClient() throws IOException {
if (translationServiceClient == null) {
synchronized (this) {
if (translationServiceClient == null) {
try (InputStream is = new FileInputStream(new File(getCredentialFilePath()))) {
final GoogleCredentials myCredentials = GoogleCredentials.fromStream(is);
translationServiceClient = TranslateOptions.newBuilder().setCredentials(myCredentials).build().getService();
} catch (IOException ioe) {
throw new NuxeoException(ioe);
}
}
}
}
return translationServiceClient;
}
public TranslationResponse translateText(String text, String sourceLanguage, String targetLanguage) throws IOException {
    Translation response = translationService.translate(text, TranslateOption.sourceLanguage(sourceLanguage), TranslateOption.targetLanguage(targetLanguage));
    //System.out.println(response.getTranslatedText());
    GoogleTranslationResponse gtr = new GoogleTranslationResponse(response);
    return gtr;
}
The error points to the Cloud Translate library's TranslateImpl optionMap method and throws the NoSuchMethodError on checkArgument. Am I passing the TranslateOptions incorrectly? Here is the library's optionMap method:
private Map<TranslateRpc.Option, ?> optionMap(Option... options) {
Map<TranslateRpc.Option, Object> optionMap = Maps.newEnumMap(TranslateRpc.Option.class);
for (Option option : options) {
Object prev = optionMap.put(option.getRpcOption(), option.getValue());
checkArgument(prev == null, "Duplicate option %s", option);
}
return optionMap;
}
In an effort to get any kind of response from the API, I've tried calling the service without passing any options, or with just the targetLanguage. Without any options, I don't get any errors and my text is translated into English, as expected. If I just add TranslateOption.targetLanguage("es"), I still get the NoSuchMethodError.
I had this exact same error. The problem was an ancient version of Google Guava being pulled in by some other dependency. I found this by running mvn dependency:tree. I had to exclude the ancient version of Guava like this:
<exclusions>
<exclusion>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
</exclusion>
</exclusions>
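For context, the exclusion goes inside whichever dependency mvn dependency:tree shows dragging in the old Guava. The coordinates below are only placeholders for that offending dependency, not real ones:
<dependency>
    <groupId>some.group</groupId>
    <artifactId>library-that-pulls-in-old-guava</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
        </exclusion>
    </exclusions>
</dependency>
You can then declare a recent Guava version yourself as a direct dependency, so the google-cloud-translate client finds the checkArgument overload it expects.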
I have a class that scans a column from a DynamoDB table using the AWS SDK for Java (main method taken out for simplicity):
public class fetchCmdbColumn {
public static List<String> CMDB(String tableName, String tableColumn) throws Exception {
DynamoDbClient client = DynamoDbClient.builder()
.region(Region.EU_WEST_1)
.build();
List<String> ListValues = new ArrayList<>();
try {
ScanRequest scanRequest = ScanRequest.builder()
.tableName(tableName)
.build();
ScanResponse response = client.scan(scanRequest);
for (Map<String, AttributeValue> item : response.items()){
Set<String> keys = item.keySet();
for (String key : keys) {
if (tableColumn.equals(key)) { // use equals() for string comparison, not ==
    ListValues.add(item.get(key).s());
}
}
}
//To check what is being returned, comment out below
// System.out.println(ListValues);
} catch (DynamoDbException e){
e.printStackTrace();
System.exit(1);
}
client.close();
return ListValues;
}
}
I also have a JUnit test class created for that class:
public class fetchCMDBTest {
// Define the data members required for the test
private static String tableName = "";
private static String tableColumn = "";
@BeforeAll
public static void setUp() throws IOException {
// Run tests on Real AWS Resources
try (InputStream input = fetchCMDBTest.class.getClassLoader().getResourceAsStream("config.properties")) {
Properties prop = new Properties();
if (input == null) {
System.out.println("Sorry, unable to find config.properties");
return;
}
//load a properties file from class path, inside static method
prop.load(input);
// Populate the data members required for all tests
tableName = prop.getProperty("environment_list");
tableColumn = prop.getProperty("env_name");
} catch (IOException ex) {
ex.printStackTrace();
}
}
@Test
void fetchCMDBtable() throws Exception{
try {
fetchCmdbColumn.CMDB(tableName, tableColumn);
System.out.println("Test 1 passed");
} catch (Exception e) {
System.out.println("Test 1 failed!");
e.printStackTrace();
}
}
}
When I run the test using mvn test, I get the error:
software.amazon.awssdk.core.exception.SdkClientException: Multiple HTTP implementations were found on the classpath ,
even though I have only declared the client builder once in the class.
What am I missing?
I run the unit tests from the IntelliJ IDE. I find using the IDE works better than the command line. Once I set up the config.properties file that contains the values for the tests and run them, all tests pass.
In fact - we test all Java V2 code examples in this manner to ensure they all work.
I also tested all DynamoDB examples from the command line using mvn test. All passed.
Amend your test to build a single instance of the DynamoDB client and then as your first test, make sure it was created successfully. See if this works for you. Once you get this working, add more tests!
public class DynamoDBTest {
private static DynamoDbClient ddb;
@BeforeAll
public static void setUp() throws IOException {
// Run tests on Real AWS Resources
Region region = Region.US_WEST_2;
ddb = DynamoDbClient.builder().region(region).build();
try (InputStream input = DynamoDBTest.class.getClassLoader().getResourceAsStream("config.properties")) {
Properties prop = new Properties();
if (input == null) {
System.out.println("Sorry, unable to find config.properties");
return;
}
//load a properties file from class path, inside static method
prop.load(input);
} catch (IOException ex) {
ex.printStackTrace();
}
}
@Test
@Order(1)
public void whenInitializingAWSService_thenNotNull() {
assertNotNull(ddb);
System.out.println("Test 1 passed");
}
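Once that first check passes, a follow-up test could exercise the scan itself. This is just a sketch: it assumes you also keep a tableName field populated from config.properties in setUp(), the way your own test class does.
@Test
@Order(2)
public void whenScanningTable_thenItemsReturned() {
    // tableName is assumed to be loaded from config.properties in setUp()
    ScanResponse response = ddb.scan(ScanRequest.builder()
            .tableName(tableName)
            .build());
    assertNotNull(response.items());
    System.out.println("Test 2 passed");
}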
Turns out my pom file contained other clients, so I had to remove the likes of:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>s3</artifactId>
<exclusions>
<exclusion>
<groupId>software.amazon.awssdk</groupId>
<artifactId>netty-nio-client</artifactId>
</exclusion>
<exclusion>
<groupId>software.amazon.awssdk</groupId>
<artifactId>apache-client</artifactId>
</exclusion>
</exclusions>
</dependency>
and replaced them with:
<dependency>
<groupId>software.amazon.awssdk</groupId>
<artifactId>aws-crt-client</artifactId>
<version>2.14.13-PREVIEW</version>
</dependency>
as mentioned in https://aws.amazon.com/blogs/developer/introducing-aws-common-runtime-http-client-in-the-aws-sdk-for-java-2-x/
As a complement to the other answers: for me, only option 4 from the referenced blog post worked.
Option 4: Change the default HTTP client using a system property in Java code.
I defined it in the setUp() method of my integration test using JUnit 5.
@BeforeAll
public static void setUp() {
System.setProperty(
SdkSystemSetting.SYNC_HTTP_SERVICE_IMPL.property(),
"software.amazon.awssdk.http.apache.ApacheSdkHttpService");
}
And because I am using Gradle:
implementation ("software.amazon.awssdk:s3:${awssdk2Version}") {
exclude group: 'software.amazon.awssdk', module: 'netty-nio-client'
exclude group: 'software.amazon.awssdk', module: 'apache-client'
}
implementation "software.amazon.awssdk:aws-crt-client:2.17.71-PREVIEW"
I am writing a servlet program which aims to accept both XML and JSON. My request in JSON is this:
{"Symbol":["OLM","ASC"]}
and it is working well.
protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
PrintWriter out = response.getWriter();
Connection connection = null;
BufferedReader reader1 = request.getReader();
StringBuffer jb = new StringBuffer();
String line = null;
while ((line = reader1.readLine()) != null) {
jb.append(line);
}
String str = jb.toString();
JSONObject obj2 = null;
try {
obj2 = new JSONObject(str);
} catch (JSONException e1) {
e1.printStackTrace();
}
JSONArray array = null;
try {
array = (JSONArray) obj2.get("Symbol");
} catch (JSONException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
I know that it is working for JSON because I am converting the obtained string (in my case str) to a JSONObject, but if I want to accept XML as well and obtain Symbol from it, how should I change this code?
Thanks in advance
I am updating my question:
ObjectMapper objectMapper = new ObjectMapper();
if(request.getHeader("content-type")=="application/json") {
System.out.println("json ");
Symbol symbolContainerFromJson = objectMapper.readValue(request.getReader(), Symbol.class);
System.out.println(symbolContainerFromJson.getSymbolName());
}
else if (request.getHeader("content-type")=="application/xml") {
System.out.println("xml");
Symbol symbolContainerFromXml = new XmlMapper().readValue(request.getReader(), Symbol.class);
System.out.println(symbolContainerFromXml.getSymbolName());
}
But it is not entering either branch. Kindly help.
The most robust way to deserialize is by creating the data structure in the back-end and let a framework like Jackson do the heavy lifting.
So we first create our object representation that we expect in either XML or JSON. No magic, it's just a POJO. I add a JsonProperty annotation because you expect Symbol upper-case and I hate upper-case fields in Java.
public class SymbolContainer {
@JsonProperty("Symbol")
private List<String> symbol;
public List<String> getSymbol() {
return symbol;
}
}
Then I use a Jackson Object/Xml mapper to transform the content from the request body to an in-memory object.
if(contentTypeIsJson(request.getHeader("content-type"))) {
SymbolContainer symbolContainerFromJson = new ObjectMapper().readValue("{\"Symbol\":[\"OLM\",\"ASC\"]}", SymbolContainer.class);
System.out.println(symbolContainerFromJson.getSymbol()); // [OLM, ASC]
} else if (contentTypeIsXml(request.getHeader("content-type"))) {
SymbolContainer symbolContainerFromXml = new XmlMapper().readValue("<root>\n" +
" <Symbol>\n" +
" <element>OLM</element>\n" +
" <element>ASC</element>\n" +
" </Symbol>\n" +
"</root>", SymbolContainer.class);
System.out.println(symbolContainerFromXml.getSymbol()); // [OLM, ASC]
}
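The contentTypeIsJson/contentTypeIsXml helpers are not from any library; something like the sketch below would do. Note the use of contains instead of ==, which is also why your original header checks never matched: == compares object references, not string contents.
private static boolean contentTypeIsJson(String contentType) {
    return contentType != null && contentType.toLowerCase().contains("application/json");
}

private static boolean contentTypeIsXml(String contentType) {
    return contentType != null && contentType.toLowerCase().contains("application/xml");
}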
All of the above uses only these three Jackson dependencies:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>2.9.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.9.0</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-xml</artifactId>
<version>2.9.5</version>
</dependency>
Note that these ObjectMappers can be configured. If you are only interested in part of the request (e.g. Symbol) and want to ignore the rest of the passed object, it is best to configure your ObjectMapper like so, which lets you ignore unmapped fields:
new ObjectMapper().configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
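In the real servlet you would read from the request body rather than the hard-coded strings I used above for illustration. A minimal sketch, reusing the reader from your doPost:
SymbolContainer symbols = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
        .readValue(request.getReader(), SymbolContainer.class);
System.out.println(symbols.getSymbol());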
When I'm running my test class, the later tests are using the mocks from the previous ones. I use JMockit with Maven. I've read that they might be running on the same JVM branch? If this is the case, can someone explain how I run them on different branches? If it's not, can anyone explain why the re-use of mocks is occurring (and thus breaking tests)?
public class ServiceUploadTest {
private String filePath = "src/test/resources/AudioTestFile.mp3";
private ServiceUpload serviceUpload = new ServiceUpload();
@Test
@DisplayName("TestConversionOfMp4ToMp3")
void testConversionOfMp4ToMp3() {
new MockUp<Encoder>() {
@Mock
public void encode(MultimediaObject multimediaObject, File target, EncodingAttributes attributes) throws IllegalArgumentException, InputFormatException, EncoderException {
}
};
assertEquals("src/test/resources/Audio.mp3", serviceUpload.convertToMp3(filePath));
}
@Test
@DisplayName("Test cutting loop when length is over 5000000")
void testLongCuttingLoop() throws IOException {
InputStream inputStream = new FileInputStream("/Users/hywelgriffiths/Documents/IntellijProjects/sipho/transcriptionSoftware/audio.transcribe.front/src/test/java/resources/base64.txt");
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
String base64 = bufferedReader.readLine();
ServiceUpload serviceUpload = new ServiceUpload();
new MockUp<ProviderUpload>() {
@Mock
public String executeUploadHttp(String mp3Base64, String jobName, String tag, String email) {
return null;
}
};
assertNull(serviceUpload.cuttingLoop(base64, "JOBNAME", null));
}
@Test
@DisplayName("Test cutting loop when length is under 5000000")
void testShortCuttingLoop() throws IOException {
ServiceUpload serviceUpload = new ServiceUpload();
new MockUp<ProviderUpload>() {
@Mock
public String executeUploadHttp(String mp3Base64, String jobName, String tag, String email) {
return null;
}
};
assertNull(serviceUpload.cuttingLoop("SHORTBASE64", "JOBNAME", null));
}
@Test
@DisplayName("Test convertToBase64AndSend")
void testConvertToBase64AndSend(){
ServiceUpload serviceUpload = new ServiceUpload();
File file = new File ("src/test/java/resources/fakeMp4.txt");
String jobName = "JOBNAME";
new MockUp<ServiceUpload>() {
@Mock
public String convertToMp3(String mp4File) {
return "src/test/java/resources/fakeMp4.txt";
}
};
assertNull("\"complete\"", serviceUpload.convertToBase64AndSend(jobName, file, null, false));
}
@Test
@DisplayName("Test convertToBase64andSendCatchBlock")
void testConvertToBase64AndSendCatch(){
ServiceUpload serviceUpload = new ServiceUpload();
File file = new File ("src/test/java/resources/fakeMp4.txt");
String jobName = "JOBNAME";
new MockUp<ServiceUpload>() {
@Mock
public String convertToMp3(String mp4File) throws Exception {
throw new Exception("Forced Exception");
}
};
assertEquals("\"complete\"", serviceUpload.convertToBase64AndSend(jobName, file, null, false));
}
@Test
@DisplayName("Test convertToMp3 catch block")
void testConvertToMp3CatchBlock() {
new MockUp<ServiceUpload>() {
@Mock
public String createMp3(String mp4file) throws Exception {
throw new Exception("Forced Exception");
}
};
assertNull(serviceUpload.convertToMp3(filePath));
}
}
NOTE:
It turns out it was my dependencies in the POM (thanks Jeff). I was using:
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>RELEASE</version>
</dependency>
and changed it to:
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-params</artifactId>
<version>5.3.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-engine</artifactId>
<version>5.3.1</version>
<scope>test</scope>
</dependency>
You've got something subtle going on, and I'd check your assumptions before you pull your hair out. First, confirm that the MockUp is truly leaking between tests (it shouldn't be). An easy way to do that would be to add a System.out.println in each MockUp (and maybe in setup/teardown); then, as each test runs, you would see printlns that are not expected. If you don't, then JMockit is behaving as one would expect.
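A minimal sketch of that diagnostic, reusing the Encoder MockUp from your first test (the println is the only addition):
new MockUp<Encoder>() {
    @Mock
    public void encode(MultimediaObject multimediaObject, File target, EncodingAttributes attributes) {
        // if this prints while any other test is running, the MockUp really is leaking
        System.out.println(">>> Encoder mock invoked");
    }
};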
Assuming your theory is sound, I'd take a look at the pom, specifically the surefire settings (it would be nice if you posted them). I'm guessing your comment about 'branches' is really about the forking/threading/test-parallelization that surefire does. You may have something glitchy there, and it can be tricky to get it tuned properly.
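For reference, these are the surefire settings I would check first. The values below are only an illustration of forcing everything into a single, non-reused fork, not necessarily what your pom should end up with:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <forkCount>1</forkCount>
        <reuseForks>false</reuseForks>
        <!-- also look at any parallel / threadCount configuration you may have added -->
    </configuration>
</plugin>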
I think you missed the annotation at the top of the test class; see this hint.
I'm trying to upload a file with graphql-java. I looked at a solution to this topic: How to upload files with graphql-java?
I'm using graphql-java version 11.0, graphql-spring-boot-starter version 5.0.2, and graphql-java-kickstart version 7.5.0.
public class PartDeserializer extends JsonDeserializer<Part> {
    @Override
public Part deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JsonProcessingException {
return null;
}
@Bean
public ObjectMapper objectMapper() {
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
SimpleModule module = new SimpleModule();
module.addDeserializer(Part.class, new PartDeserializer());
objectMapper.registerModule(module);
return objectMapper;
}
}
@Configuration
public class GraphqlConfig {
    @Bean
public GraphQLScalarType uploadScalarDefine() {
return ApolloScalars.Upload;
}
}
public Boolean testMultiFilesUpload(List<Part> parts, DataFetchingEnvironment env) {
// get file parts from DataFetchingEnvironment, the parts parameter is not use
List<Part> attachmentParts = env.getArgument("files");
int i = 1;
for (Part part : attachmentParts) {
String uploadName = "copy" + i;
try {
part.write("your path:" + uploadName);
} catch (IOException e) {
e.printStackTrace();
}
i++;
}
return true;
}
scalar Upload
testMultiFilesUpload(files: [Upload!]!): Boolean
My multipart form-data in Postman looks like this:
operations: { "query": "mutation($files: [Upload!]!) {testMultiFilesUpload(files:$files)}", "variables": {"files": [null,null] } }
map: { "file0": ["variables.files.0"], "file1": ["variables.files.1"] }
file0: 0.jpeg
file1: 1.jpeg
This is the server response:
INFO 11663 --- [0.1-1100-exec-7] g.servlet.AbstractGraphQLHttpServlet : Bad POST multipart request: no part named "graphql" or "query"
What am I doing wrong?
You can try these dependencies:
<properties>
<graphql-java.version>13.0</graphql-java.version>
<graphql-java-kickstart-springboot.version>5.10.0</graphql-java-kickstart-springboot.version>
<graphql-java-kickstart-tools.version>5.6.1</graphql-java-kickstart-tools.version>
<graphql-java-kickstart-servlet.version>8.0.0</graphql-java-kickstart-servlet.version>
</properties>
<dependency>
<groupId>com.graphql-java-kickstart</groupId>
<artifactId>graphql-spring-boot-starter</artifactId>
<version>5.9.0</version>
</dependency>
<dependency>
<groupId>com.graphql-java-kickstart</groupId>
<artifactId>graphql-java-tools</artifactId>
<version>5.6.1</version>
</dependency>
<dependency>
<groupId>com.graphql-java-kickstart</groupId>
<artifactId>graphiql-spring-boot-starter</artifactId>
<version>5.6.0</version>
</dependency>
But there is a problem with GraphQL file upload: we can't delete the temp file generated by GraphQL, because it is still being used by GraphQL and the file stream is never closed.
I suggest you use Apollo
https://github.com/apollographql/apollo-android
It has RxJava integration, Retrofit, subscriptions, and support for AutoValue. This will make your work easier, as there are no straightforward ways of building queries and parsing responses for GraphQL.
I am trying to run a PDI transformation involving a database (any database, but NoSQL ones are preferred) from Java.
I've tried using MongoDB and Cassandra and got missing plugins. I've already asked here: Running PDI Kettle on Java - Mongodb Step Missing Plugins, but no one has replied yet.
I've tried switching to a SQL database using PostgreSQL too, but it still doesn't work. From the research I did, I think it was because I didn't connect to the database from Java properly, yet I haven't found any tutorial or direction that works for me. I've tried following the directions from this blog: http://ameethpaatil.blogspot.co.id/2010/11/pentaho-data-integration-java-maven.html but still got some problems about the repository (because I don't have one and it seems to be required).
The transformations run fine from Spoon. They only fail when I run them from Java.
Can anyone help me run a PDI transformation involving a database? Where did I go wrong?
Has anyone ever succeeded in running a PDI transformation involving either a NoSQL or SQL database? Which DB did you use?
I'm sorry if I asked too many questions; I am so desperate. Any kind of information will be very much appreciated. Thank you.
Executing PDI jobs from Java is pretty straightforward. You just need to import all the necessary jar files (for the databases) and then call the Kettle classes. The best way is obviously to use Maven to control the dependencies. In the Maven pom.xml file, just declare the database drivers.
A sample Maven file would be something like below, assuming you are using Pentaho v5.0.0GA and PostgreSQL as the database:
<dependencies>
<!-- Pentaho Kettle Core dependencies development -->
<dependency>
<groupId>pentaho-kettle</groupId>
<artifactId>kettle-core</artifactId>
<version>5.0.0.1</version>
</dependency>
<dependency>
<groupId>pentaho-kettle</groupId>
<artifactId>kettle-dbdialog</artifactId>
<version>5.0.0.1</version>
</dependency>
<dependency>
<groupId>pentaho-kettle</groupId>
<artifactId>kettle-engine</artifactId>
<version>5.0.0.1</version>
</dependency>
<dependency>
<groupId>pentaho-kettle</groupId>
<artifactId>kettle-ui-swt</artifactId>
<version>5.0.0.1</version>
</dependency>
<dependency>
<groupId>pentaho-kettle</groupId>
<artifactId>kettle5-log4j-plugin</artifactId>
<version>5.0.0.1</version>
</dependency>
<!-- The database dependency files. Use it if your kettle file involves database connectivity. -->
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.1-902.jdbc4</version>
</dependency>
</dependencies>
You can check my blog for more. It works for database connections.
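With those dependencies in place, the invocation itself is only a few lines. A minimal sketch (the .ktr path is a placeholder; a fuller example with JNDI handling appears further down in this thread):
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunTransformation {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();                                // bootstrap the Kettle engine
        TransMeta transMeta = new TransMeta("path/to/your.ktr"); // placeholder path to the transformation
        Trans trans = new Trans(transMeta);
        trans.execute(null);                                     // no runtime arguments
        trans.waitUntilFinished();
        if (trans.getErrors() > 0) {
            System.out.println("Error executing transformation");
        }
    }
}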
Hope this helps :)
I had the same problem in an application using the Pentaho libraries. I resolved the problem with this code:
The singleton to init Kettle:
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.exception.KettleException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* Initializes the Kettle environment variable configuration
*
* @author Marcos Souza
* @version 1.0
*
*/
public class AtomInitKettle {
private static final Logger LOGGER = LoggerFactory.getLogger(AtomInitKettle.class);
private AtomInitKettle() throws KettleException {
try {
LOGGER.info("Iniciando kettle");
KettleJNDI.protectSystemProperty();
KettleEnvironment.init();
LOGGER.info("Kettle iniciado com sucesso");
} catch (Exception e) {
LOGGER.error("Message: {} Cause {} ", e.getMessage(), e.getCause());
}
}
}
And the code that saved me:
import java.io.File;
import java.util.Properties;
import org.pentaho.di.core.Const;
import org.pentaho.di.core.exception.KettleException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class KettleJNDI {
private static final Logger LOGGER = LoggerFactory.getLogger(KettleJNDI.class);
public static final String SYS_PROP_IC = "java.naming.factory.initial";
private static boolean init = false;
private KettleJNDI() {
}
public static void initJNDI() throws KettleException {
String path = Const.JNDI_DIRECTORY;
LOGGER.info("Kettle Const.JNDI_DIRECTORY= {}", path);
if (path == null || path.equals("")) {
try {
File file = new File("simple-jndi");
path = file.getCanonicalPath();
} catch (Exception e) {
throw new KettleException("Error initializing JNDI", e);
}
Const.JNDI_DIRECTORY = path;
LOGGER.info("Kettle null > Const.JNDI_DIRECTORY= {}", path);
}
System.setProperty("java.naming.factory.initial", "org.osjava.sj.SimpleContextFactory");
System.setProperty("org.osjava.sj.root", path);
System.setProperty("org.osjava.sj.delimiter", "/");
}
public static void protectSystemProperty() {
if (init) {
return;
}
System.setProperties(new ProtectionProperties(SYS_PROP_IC, System.getProperties()));
if (LOGGER.isInfoEnabled()) {
LOGGER.info("Kettle System Property Protector: System.properties replaced by custom properies handler");
}
init = true;
}
public static class ProtectionProperties extends Properties {
private static final long serialVersionUID = 1L;
private final String protectedKey;
public ProtectionProperties(String protectedKey, Properties prprts) {
super(prprts);
if (protectedKey == null) {
throw new IllegalArgumentException("Properties protection was provided a null key");
}
this.protectedKey = protectedKey;
}
@Override
public synchronized Object setProperty(String key, String value) {
// We forbid changes in general, but do it silent ...
if (protectedKey.equals(key)) {
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("Kettle System Property Protector: Protected change to '" + key + "' with value '" + value + "'");
}
return super.getProperty(protectedKey);
}
return super.setProperty(key, value);
}
}
}
I think your problem is with the database connection. You can configure it in the transformation and do not need to use JNDI.
public class DatabaseMetaStep {
private static final Logger LOGGER = LoggerFactory.getLogger(DatabaseMetaStep.class);
/**
* Adds the database access configuration
*
* @return the configured DatabaseMeta
*/
public static DatabaseMeta createDatabaseMeta() {
DatabaseMeta databaseMeta = new DatabaseMeta();
LOGGER.info("Carregando informacoes de acesso");
databaseMeta.setHostname("localhost");
databaseMeta.setName("stepName");
databaseMeta.setUsername("user");
databaseMeta.setPassword("password");
databaseMeta.setDBPort("port");
databaseMeta.setDBName("database");
databaseMeta.setDatabaseType("MonetDB"); // sql, MySql ...
databaseMeta.setAccessType(DatabaseMeta.TYPE_ACCESS_NATIVE);
return databaseMeta;
}
}
Then you need to set the databaseMeta on the TransMeta:
DatabaseMeta databaseMeta = DatabaseMetaStep.createDatabaseMeta();
TransMeta transMeta = new TransMeta();
transMeta.setUsingUniqueConnections(true);
transMeta.setName("ransmetaNeame");
List<DatabaseMeta> databases = new ArrayList<>();
databases.add(databaseMeta);
transMeta.setDatabases(databases);
I tried your code with a "transformation without JNDI" and it works!
But I needed to add this repository to my pom.xml:
<repositories>
<repository>
<id>pentaho-releases</id>
<url>http://repository.pentaho.org/artifactory/repo/</url>
</repository>
</repositories>
Also, when I try with a datasource I get this error: Cannot instantiate class: org.osjava.sj.SimpleContextFactory [Root exception is java.lang.ClassNotFoundException: org.osjava.sj.SimpleContextFactory]
Complete log here:
https://gist.github.com/eb15f8545e3382351e20.git
[FIX]: Add this dependency:
<dependency>
<groupId>pentaho</groupId>
<artifactId>simple-jndi</artifactId>
<version>1.0.1</version>
</dependency>
After that a new error occurs:
transformation_with_jndi - Dispatching started for transformation [transformation_with_jndi]
Table input.0 - ERROR (version 5.0.0.1.19046, build 1 from 2013-09-11_13-51-13 by buildguy) : An error occurred, processing will be stopped:
Table input.0 - Error occured while trying to connect to the database
Table input.0 - java.io.File parameter must be a directory. [D:\opt\workspace-eclipse\invoke-ktr-jndi\simple-jndi]
Complete log : https://gist.github.com/jrichardsz/9d74c7263f3567ac4b45
[EXPLANATION] This is because in
KettleEnvironment.init();
https://github.com/jrichardsz/pentaho-pdi-spoon-usefull-templates/blob/master/running-etl-transformation-using-java/researching-pentaho-classes/KettleEnvironment.java
there is an initialization:
if (simpleJndi) {
JndiUtil.initJNDI();
}
And in JndiUtil:
String path = Const.JNDI_DIRECTORY;
if ((path == null) || (path.equals("")))
https://github.com/jrichardsz/pentaho-pdi-spoon-usefull-templates/blob/master/running-etl-transformation-using-java/researching-pentaho-classes/JndiUtil.java
And in the Const class:
public static String JNDI_DIRECTORY = NVL(System.getProperty("KETTLE_JNDI_ROOT"), System.getProperty("org.osjava.sj.root"));
https://github.com/jrichardsz/pentaho-pdi-spoon-usefull-templates/blob/master/running-etl-transformation-using-java/researching-pentaho-classes/Const.java
So we need to set the KETTLE_JNDI_ROOT variable.
[FIX] A small change in your example: just add this
System.setProperty("KETTLE_JNDI_ROOT", jdbcPropertiesPath);
before
KettleEnvironment.init();
A complete example based on your code:
import java.io.File;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.exception.KettleException;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;
public class ExecuteSimpleTransformationWithJndiDatasource {
public static void main(String[] args) {
String resourcesPath = (new File(".").getAbsolutePath())+"\\src\\main\\resources";
String ktr_path = resourcesPath+"\\transformation_with_jndi.ktr";
//KETTLE_JNDI_ROOT could be the simple-jndi folder in your pdi or spoon home.
//in this example, is the resources folder
String jdbcPropertiesPath = resourcesPath;
try {
/**
* Initialize the Kettle Environment
*/
System.setProperty("KETTLE_JNDI_ROOT", jdbcPropertiesPath);
KettleEnvironment.init();
/**
* Create a trans object to properly assign the ktr metadata.
*
* @filedb: The ktr file path to be executed.
*
*/
TransMeta metadata = new TransMeta(ktr_path);
Trans trans = new Trans(metadata);
// Execute the transformation
trans.execute(null);
trans.waitUntilFinished();
// checking for errors
if (trans.getErrors() > 0) {
System.out.println("Erroruting Transformation");
}
} catch (KettleException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
For a complete example check my github channel:
https://github.com/jrichardsz/pentaho-pdi-spoon-usefull-templates/tree/master/running-etl-transformation-using-java/invoke-transformation-from-java-jndi/src/main/resources