I'm trying to call a stored procedure on my iSeries System (RPG program) but I'm not able to activate the corresponding menu under "tools"->"Java Generator".
The RPG program that I want to call (we'll name it RPG00) has 5 input parameters and 1 output value.
I performed the following operations:
1. Created an "external object" (type: stored procedure) named "RPG00"
2. Created a method, also named "RPG00", in the external object above and set its "external name" property to "RPG00"
3. Created a Web Panel with a variable &test (type: external object RPG00) and called it with the right parameters
4. Changed the following settings in the iSeries data store configuration:
"access technology to set" --> JDBC
"list of external stored procedures" --> RPG00
At this point, if I try to build the KB, it always ends in error. In the project folder I can't find the "crtjdccalls.java" file or the corresponding "class" file that stores the instructions for the stored procedure.
What's going wrong? Any idea? Any suggestion?
The appropriate element in the "Java generator" menu never appears!!
My Configuration:
Gx Ev2 U5
Environment: Web\Java
DB: iSeries 6.1
I think you forgot to set the data store (JDBC) property 'Library list' to the name of the library in which the RPG program RPG00 is found.
Check this and do a Rebuild All.
Regards, Luis.
Thanks to the GeneXus development team I found a solution!
The problem is related to the way parameters are passed to the stored procedure.
REMEMBER:
You can't use SDT elements as input parameters.
You can't use literal values as input parameters.
YOU CAN ONLY USE VARIABLES!
E.g., given SDT.value1 and SDT.value2:
&variable1 = SDT.value1
&variable2 = SDT.value2
&RPG00.RPG00(SDT.value1, SDT.value2, etc.) --> ERROR
&RPG00.RPG00(&variable1, XXX, etc.) --> ERROR (where XXX is, for example, an integer literal)
&RPG00.RPG00(&variable1, &variable2, etc.) --> ONLY VARIABLES WORK FINE!
Hope this helps someone else.
Related
I have defined a data table and associated objects in Liferay 6, but when I run the code it says the table doesn't exist, and it's true. The code runs fine when I create the table by hand by copy-pasting the CREATE TABLE statement from the model implementation, but I expected the table to be created on deployment.
The user has all the privileges needed to create it.
What am I missing?
I faced the same problem, and #urvish is correct: you have to change the build number in the
service.properties file.
Problem:
When multiple developers work on a portlet that uses Service Builder, it
will throw an exception like "Build namespace has build number which is
newer than ...". When a developer commits the service.properties file and it is
deployed on another developer's machine, this exception is
thrown.
Best practice: to avoid this kind of error, follow these steps:
1. Create a service-ext.properties file at the same location as service.properties
2. Add build.number={a value higher than, or the same as, the one in the exception}
3. Deploy the portlet again
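For example, the override file could look like this (the 9999 value is just an illustrative high build number; use the value from your exception or higher):

```properties
# service-ext.properties -- placed in the same location as service.properties.
# Overrides the build number so deployment on another developer's machine
# does not fail the "newer build number" check.
build.number=9999
```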
Check the value of build.namespace in the service.properties file and the result of:
select buildNumber from servicecomponent where buildNamespace = <<build.namespace from service.properties>>
The buildNumber returned by the query must be lower than the value of the build.number property in service.properties. If it is not, just set build.number to 9999.
Sometimes, due to this mismatch, changes are not applied to the database.
I am forced to use Pentaho Report Designer 5.4 and have the following problem: every time I try to make changes to the report's Database Connection, the designer adds a parameter "ce" without any value inside the Options section, even if I previously deleted it (please see the images below). When I open the .prpt file and look inside sql-ds.xml, the connection URL looks as follows: "jdbc:sqlserver://192.168.1.194:1433;databaseName=statdb;integratedSecurity=false;ce"
The problem is that my report was not loading at all. I looked into the report log and found the following lines:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection string contains a badly formed name or value.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:190) ~[sqljdbc4.jar!/:na]
at com.microsoft.sqlserver.jdbc.Util.parseUrl(Util.java:445) ~[sqljdbc4.jar!/:na]
at com.microsoft.sqlserver.jdbc.SQLServerDriver.parseAndMergeProperties(SQLServerDriver.java:1026) ~[sqljdbc4.jar!/:na]
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1008) ~[sqljdbc4.jar!/:na]
I assume that appending parameter "ce" is causing the problem.
I have 2 questions:
1. Why is Pentaho Report Designer adding a parameter named "ce" to the report's Database Connection?
2. Is the connection URL in the right format? Is the parameter "ce" appended correctly?
The "ce" entry is the 'instance name', defined in this fashion. In your case the option is blank because, as I see in the UI, the Instance Name field is also blank.
This is most probably a bug: 'ce' is automatically added when the connection is edited. If you don't want the database editor to tweak your connection string this way, use a 'Generic database' connection. You will have to specify the full Java driver class name and the JDBC connection string, and you have to have the JDBC driver on the classpath, but this works around most of these 'edit connection' issues.
A full guide can be found under "jdbc sqlserver driver class".
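For reference, a Generic database connection for the setup in the question would be configured with something like the following (the server address and database name come from the question; the field labels may vary by dialog version):

```
Custom connection URL:    jdbc:sqlserver://192.168.1.194:1433;databaseName=statdb;integratedSecurity=false
Custom driver class name: com.microsoft.sqlserver.jdbc.SQLServerDriver
```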
Run this query to get your instance name, then assign that value to the ce property in the Options tab and there you go :)
SELECT HOST_NAME() AS HostName,
       SUSER_NAME() AS LoggedInUser,
       SERVERPROPERTY('MachineName') AS [ServerName],
       SERVERPROPERTY('ServerName') AS [ServerInstanceName],
       SERVERPROPERTY('InstanceName') AS [Instance],
       SERVERPROPERTY('Edition') AS [Edition],
       SERVERPROPERTY('ProductVersion') AS [ProductVersion],
       LEFT(@@VERSION, CHARINDEX('-', @@VERSION) - 2) AS VersionName
Ok, I am reposting this question because it really drives me crazy.
I have enterprise.wsdl downloaded from Salesforce and generated into some jars.
I added those jars to my Android project's build path in Eclipse.
Here is my code:
ConnectorConfig config = new ConnectorConfig();
config.setAuthEndpoint(authEndPoint);
config.setUsername(userID);
config.setPassword(password + securityToken);
config.setCompression(true);
con = new EnterpriseConnection(config);
con.setSessionHeader(UserPreference.getSessionID(mContext));
String sql = "SELECT something FROM myNameSpace__myCustomObject__c";
con.query(sql);
but it returns this error:
[InvalidSObjectFault [ApiQueryFault [ApiFault
exceptionCode='INVALID_TYPE' exceptionMessage='sObject type 'abc__c'
is not supported.'] row='-1' column='-1' ]]
I am pretty sure that my userID has been assigned a profile with read and edit access to that custom object.
My code can also query standard objects.
Can anyone advise me what could be wrong?
From what I know, there are three reasons it may give this error:
1. User permissions, which you said are set up correctly.
2. Is the custom object deployed to the org where you are trying to establish the connection?
3. Check whether the enterprise WSDL contains the name of the custom object you are trying to query.
Hope it helps.
I currently have a job that works like this:
tPrejob-->tOracleConnection1--->tOracleConnection2--->tSetGlobalVar1(timestamp)--->tRunjob(runs prejob to gather file from FTP)
Then there is a tPostjob that is supposed to rename the processed file on the FTP server.
tPostjob--->tFTPRename
It should be renaming the file with "File Processed On " + ((String)globalMap.get("timestamp")) + "This is where I would put the original file name"
If I put a standard filename into the Filemask then it runs correctly; however, if I try to make the filemask dynamic by passing the filename into it through globalMap.get, I get the error:
"Exception in component tFTPRename_1 java.lang.NullPointerException"
I've tried several methods for passing the file name into the tFTPRename component, but none are working.
I'm currently capturing the file name in the subjob and outputting it to a txt file, then using tFileInputFullRow on the main job to create that variable. I tried passing this into a tSetGlobalVar and then adding it into the filemask as ((String)globalMap.get("FileName")), but had no luck.
I also tried several methods on the internet, but none of them worked, so I wasn't sure if it was me or if it has something to do with tFTPRename capabilities.
Main Job:
PreJob:
tFTPRename Component:
tFileInputFullRow:
It sounds like you're using the globalMap wrong at some point, which would certainly explain the NullPointerException: the globalMap variable doesn't appear to have been set.
Typically the tSetGlobalVar component is for putting static or run-time-generated variables into the globalMap, and I don't think you can pass row data into it that it will then push to the globalMap directly. Your datetime stamp is a good use of the component, but for the file name you'll need to either use a tFlowToIterate component or a tJava(Row) component to force the data into the globalMap, using something like:
globalMap.put("fileName", inputrow.fileName);
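Outside of Talend, the put/get pattern is just a shared Map<String, Object>. This self-contained sketch (the file name value is hypothetical) shows the pattern, and why an unset or misspelled key leads to the NullPointerException in the question:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalMapSketch {
    public static void main(String[] args) {
        // Talend's globalMap is essentially a shared Map<String, Object>
        Map<String, Object> globalMap = new HashMap<>();

        // tJava(Row) side: push the row's file name into the map
        globalMap.put("fileName", "data_20150101.csv"); // hypothetical inputrow.fileName

        // tFTPRename side: build the new remote name from the map
        String newName = "File Processed On 20150101 "
                + (String) globalMap.get("fileName");
        System.out.println(newName);

        // A key that was never put (or differs in case) returns null;
        // calling any method on that null is the NullPointerException.
        System.out.println(globalMap.get("FileName") == null); // note the capital F
    }
}
```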
Looking at your previous question then you should have the name of the file from the FTP in the job you are calling in your pre job. Typically here you would be able to then run that as part of the main flow into a tBufferOutput component and then read the data directly into the parent job (simply connect a main flow connector from the tRunJob component to the next component you want to process the data flow and don't forget to give the tRunJob component the same schema as your child job's tBufferOutput).
However, you have a complication here in that you have already used the buffer to capture all of the iterables from the tFTPList component so you're right in the fact that you need to go to a temporary flat file or database to push the state back to the parent job.
From here, you should be able to read the flat file or database table that contains the file name in your parent job; for ease, you can just connect this to a tFlowToIterate component, which will then store that data in the globalMap (you should have 1 row and 1 column of data here, so it's a single variable).
Here's a basic example of running some hard coded data in a tFixedFlowInput to a tFlowToIterate to get it into the globalMap and then retrieve it again with another tFixedFlowInput component:
Once the data is in the tFlowToIterate component then you can easily call it with globalMap.get(rowName.schemaColumnName) or by hitting ctrl+space and selecting it under the tFlowToIterate component:
There is an H2 database file in my src directory (Java, Eclipse): h2test.db
The problem:
starting the h2.jar from the command line (and thus the H2 browser interface on port 8082), I created 2 tables, 'test1' and 'test2', in h2test.db and put some data in them;
when trying to access them from Java code (JDBC), it throws a "table not found" exception. A "SHOW TABLES" from the Java code returns a result set with 0 rows.
Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc.), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' is then accessible from the Java code).
I'm inexperienced with embedded databases and believe I'm doing something fundamentally wrong here. My assumption was that I'm accessing the same file once from the Java app and once from the H2 console-browser interface. I cannot seem to understand it; what am I doing wrong here?
EDIT: as requested, adding some code:
Java code:
Class.forName("org.h2.Driver");
String url = "jdbc:h2:" + "db/h2test.db";
String user = "aeter";
String password = "aeter";
Connection conn = DriverManager.getConnection(url, user, password);
PreparedStatement ps2 = conn.prepareStatement("Show tables;");
ResultSet rs = ps2.executeQuery();
This resultset has 0 rows (no tables), instead of showing me the 2 tables.
H2 Console-browser interface settings:
Settings: Generic h2(embedded)
driver class: org.h2.Driver
JDBC URL: jdbc:h2:../../workspace/project_name/src/db/h2test.db
user name: aeter
password: aeter
EDIT2: I copied the database to a new folder. Now the db file in the new folder is shown with the 'newtest' table (from the java code) and with the 'test1' and 'test2' tables (from the console-browser h2 interface) - exactly the same way the older db file was shown. So the problem persists with the copy of the db file.
For embedded mode, you'll need to check the path. For example, use a path relative to your home directory:
"jdbc:h2:file:~/db/h2test.db"
To be sure, use a full path:
"jdbc:h2:file:/users/aeter/db/h2test.db"
For convenience, append ;IFEXISTS=TRUE to avoid creating spurious database files.
See Connecting to a Database using JDBC for more.
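The root cause here is that the Java app and the H2 console resolve the same relative path against different working directories, so each opens its own database file. A stdlib-only sketch to print which file a relative URL such as "jdbc:h2:db/h2test.db" would actually hit (path taken from the question):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class H2PathCheck {
    public static void main(String[] args) {
        // A relative H2 URL like "jdbc:h2:db/h2test.db" resolves against
        // the JVM's current working directory, not the source folder.
        Path relative = Paths.get("db/h2test.db");
        System.out.println("JVM working dir:    " + System.getProperty("user.dir"));
        System.out.println("File actually used: " + relative.toAbsolutePath());
    }
}
```

If the Eclipse launch and the H2 console print different absolute paths here, you are looking at two different databases; pointing both at the same full path (or a ~/ path) fixes it.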
H2 Server URLs are relative to the -baseDir specified as a parameter to main().
Also, there can be a problem if you use special parameters in your JDBC URL: the database file name can differ between cases.
In my case, I had two URLs:
jdbc:h2:~/XXX;MVCC=FALSE;MV_STORE=FALSE
jdbc:h2:~/XXX
The first case created an XXX.h2.db file, the second one XXX.mv.db, so beware.
You can also use this:
"jdbc:h2:file:db/h2test.db"
Then Java looks for the db folder relative to the project folder:
->projectName // project folder
-->src // src folder
-->db // your database folder goes here
-->....
If you are using Hibernate, try this in the hibernate.cfg.xml file:
<property name="connection.url">jdbc:h2:file:db/h2test</property>
without the *.db extension at the end.