I am trying to read a table from an SAP system and I am always getting this error:
Exception in thread "main" com.sap.conn.jco.JCoRuntimeException: (127)
JCO_ERROR_FIELD_NOT_FOUND: Field EMPLOYEE is not a member of INPUT
at com.sap.conn.jco.rt.AbstractMetaData.indexOf(AbstractMetaData.java:404)
at com.sap.conn.jco.rt.AbstractRecord.setValue(AbstractRecord.java:4074)
at testConf.StepServer.main(StepServer.java:50)
And here is my code:
public static void main(String[] args) {
    // This will create a file called mySAPSystem.jcoDestination
    System.out.println("executing");
    String DESTINATION_NAME1 = "mySAPSystem";
    Properties connectProperties = new Properties();
    connectProperties.setProperty(DestinationDataProvider.JCO_ASHOST, "xxx.xxx.x.xxx");
    connectProperties.setProperty(DestinationDataProvider.JCO_SYSNR, "xx");
    connectProperties.setProperty(DestinationDataProvider.JCO_CLIENT, "xxx");
    connectProperties.setProperty(DestinationDataProvider.JCO_USER, "username");
    connectProperties.setProperty(DestinationDataProvider.JCO_PASSWD, "test");
    connectProperties.setProperty(DestinationDataProvider.JCO_LANG, "en");
    createDestinationDataFile(DESTINATION_NAME1, connectProperties);

    // This will use that destination file to connect to SAP
    try {
        JCoDestination destination = JCoDestinationManager.getDestination("mySAPSystem");
        System.out.println("Attributes:");
        System.out.println(destination.getAttributes());
        System.out.println();
        destination.ping();
    } catch (JCoException e) {
        e.printStackTrace();
    }

    try {
        // here starts the problem
        JCoDestination destination = JCoDestinationManager.getDestination(DESTINATION_NAME1);
        JCoFunction function = destination.getRepository().getFunction("RFC_READ_TABLE");
        JCoParameterList listParam = function.getImportParameterList();
        listParam.setValue("EMPLOYEE", "EMPLOYEE"); // I found this in an example and I don't understand exactly what I should put there
        // I was thinking maybe it is the column name, but I am not sure
        function.execute(destination);
        JCoTable table = function.getTableParameterList().getTable("ZEMPLOYEES"); // name of my table in SAP
        System.out.println(table);
    } catch (JCoException e) {
        System.out.println(e.toString());
        return;
    }
}
The error is clear when it says JCO_ERROR_FIELD_NOT_FOUND: Field EMPLOYEE is not a member of INPUT, but EMPLOYEE is a field in my table.
The documentation doesn't help too much, it only says:
Sets the object as the value for the named field.
Parameters:
value - the value to set for the field
name - the name of the field to set
Which, in my opinion, I have already done.
Should I make any additional modification in SAP in order to read this new table from Java? All I have done is create a new table following this tutorial (Create a simple table in SAP).
Maybe someone with more experience can tell me how should I configure this sample code in order to work.
General use of RFC_READ_TABLE
I have never used JCo, but as far as I know its interface is very similar to NCo, the .NET connector. This is basically NCo code with some guesswork added to it, but it should work.
// get the table parameter FIELDS that should be in the parameter list
// the parameter table has several fields, only the field FIELDNAME has to be set before calling the function module
JCoTable inputTableParam = function.getTableParameterList().getTable("FIELDS");
// add a row to the FIELDS table parameter
inputTableParam.appendRow();
// set values for the new row
inputTableParam.setValue("FIELDNAME", "EMPLOYEE");
// just for fun, add another field to retrieve
inputTableParam.appendRow();
inputTableParam.setValue("FIELDNAME", "SURNAME");
// now we have to set the non-table parameters
JCoParameterList inputParamList = function.getImportParameterList();
// parameter QUERY_TABLE, defines which table to query
inputParamList.setValue("QUERY_TABLE", "ZEMPLOYEES");
// parameter DELIMITER - we get a single string as the return value, the field values within that string are delimited by this character
inputParamList.setValue("DELIMITER", ";");
// execute the function
function.execute(destination);
// the parameter table DATA contains the rows
JCoTable table = function.getTableParameterList().getTable("DATA");
In the end, your variable table will hold a table object with a single field called WA. That field contains the contents of the fields you selected in the input parameter table FIELDS. You can iterate over table and get the values row by row.
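To make that concrete, here is a minimal, untested sketch of reading the rows back (standard JCoTable API, splitting on the ";" delimiter set above):
JCoTable data = function.getTableParameterList().getTable("DATA");
for (int i = 0; i < data.getNumRows(); i++) {
    data.setRow(i); // position the internal row pointer
    String[] fieldValues = data.getString("WA").split(";");
    System.out.println(String.join(" | ", fieldValues));
}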
Queries with RFC_READ_TABLE
RFC_READ_TABLE doesn't really allow queries, it only allows you to define WHERE clauses. The TABLE parameter OPTIONS has a single field TEXT, 72 characters wide, that can only take ABAP compliant WHERE clauses.
To extend the example above, we'll add a WHERE clause to only select entries from table ZEMPLOYEES with SURNAME = 'SMITH' and FORNAME = 'JOHN'.
JCoTable optionsTableParam = function.getTableParameterList().getTable("OPTIONS");
// add a row to the OPTIONS table parameter
optionsTableParam.appendRow();
optionsTableParam.setValue("TEXT", "SURNAME EQ 'SMITH' AND FORNAME EQ 'JOHN'");
The field TEXT is only 72 characters long, so if you want to add a longer clause, you have to break your conditions into several rows manually. RFC_READ_TABLE is a bit crude and limited.
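For example, a longer condition could be split like this (a sketch; RFC_READ_TABLE concatenates the TEXT fields of the OPTIONS rows in order, so just avoid splitting inside a quoted literal):
optionsTableParam.appendRow();
optionsTableParam.setValue("TEXT", "SURNAME EQ 'SMITH'");
optionsTableParam.appendRow();
optionsTableParam.setValue("TEXT", "AND FORNAME EQ 'JOHN'");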
Complex joins between tables can be achieved by creating a view within the SAP system (transaction SE11) and then querying that view with RFC_READ_TABLE.
If you want to call function modules from JCo, it would be very helpful if you made yourself familiar with the basic function module properties. You can look at a function module definition in transaction SE37. There you can see the IMPORT, EXPORT, CHANGING and TABLE parameters. The parameters you have to fill and the parameters that contain the results depend on the function module you call - RFC_READ_TABLE has different ones from, say, BAPI_DELIVERY_GETLIST.
Here is the documentation for JCoFunction and one of the differences between JCo and NCo, JCo has individual functions to get and set the different parameter types: https://help.hana.ondemand.com/javadoc/com/sap/conn/jco/JCoFunction.html
You are trying to call the function RFC_READ_TABLE and to pass a value to its parameter named "EMPLOYEE". This is NOT a parameter of RFC_READ_TABLE, hence the error.
RFC_READ_TABLE has 3 important input parameters:
QUERY_TABLE: the name of the database table you want to query
OPTIONS: the WHERE clause (you may pass an empty value)
FIELDS: the list of columns from the database table you want to query
RFC_READ_TABLE has 1 return parameter:
DATA: the contents of the table
See this example: https://vishalmasih.wordpress.com/2014/10/31/sap-jco-searching-for-a-user-in-the-usr04-table/
Related
I'm developing a MySQL database project using JDBC. It uses parent/child tables linked with foreign keys.
TL;DR: I want to be able to get the AUTO_INCREMENT id of a table before an INSERT statement. I am already aware of the getGeneratedKeys() method in JDBC to do this following an insert, but my application requires the ID before insertion. Maybe there's a better solution to the problem for this particular application? Details below:
In a part of this application, the user can create a new item via a form or console input to enter details - some of these details are in the form of "sub-items" within the new item.
These inputs are stored in Java objects so that each row of the table corresponds to one of these objects - here are some examples:
MainItem
- id (int)
- a bunch of other details...
MainItemTitle
- mainItemId (int)
- languageId (int)
- title (String)
ItemReference
- itemId (int) <-- this references MainItem id
- referenceId (int) <-- this references another MainItem id that is linked to the first
So essentially each Java object represents a row in the relevant table of the MySQL database.
When I store the values from the input into the objects, I use a dummy id like so:
private static final int DUMMY_ID = 0;
...
MainItem item = new MainItem(DUMMY_ID, ...);
// I read each of the titles and initialise them using the same dummy id - e.g.
MainItemTitle title = new MainItemTitle(DUMMY_ID, 2, "Here is a title");
// I am having trouble with initialising ItemReference so I will explain this later
Once the user inputs are read, they are stored in a "holder" class:
class MainItemValuesHolder {
MainItem item;
ArrayList<MainItemTitle> titles;
ArrayList<ItemReference> references;
// These get initialised and have getters and setters, omitted here for brevity's sake
}
...
MainItemValuesHolder values = new MainItemValuesHolder();
values.setMainItem(mainItem);
values.addTitle(englishTitle);
values.addTitle(germanTitle);
// etc...
In the final layer of the application (in another class where the values holder was passed as an argument), the data from the "holder" class is read and inserted into the database:
// First insert the main item, belonging to the parent table
MainItem mainItem = values.getMainItem();
String insertStatement = mainItem.asInsertStatement(true); // true, ignore IDs
// this is an oversimplification of what actually happens, but basically constructs the SQL statement while *ignoring the ID*, because...
int actualId = DbConnection.insert(insertStatement);
// updates the database and returns the AUTO_INCREMENT id using the JDBC getGeneratedKeys() method
// Then do inserts on sub-items belonging to child tables
ArrayList<MainItemTitle> titles = values.getTitles();
for (MainItemTitle dummyTitle : titles) {
MainItemTitle actualTitle = dummyTitle.replaceForeignKey(actualId);
String insertStatement = actualTitle.asInsertStatement(false); // false, use the IDs in the object
DbConnection.insert(insertStatement);
}
Now, the issue is using this procedure for ItemReference. Because it links two MainItems, using the (or multiple) dummy IDs to construct the objects beforehand destroys these relationships.
The most obvious solution seems to be to get the AUTO_INCREMENT ID beforehand so that I don't need to use dummy IDs.
I suppose the other solution is inserting the data as soon as it is input, but I would prefer to keep different functions of the application in separate classes - so one class is responsible for one action. Moreover, by inserting as soon as data is input, then if the user chooses to cancel before completing entering all data for the "main item", titles, references, etc., the now invalid data would need to be deleted.
In conclusion, how would I be able to get AUTO_INCREMENT before insertion? Is there a better solution for this particular application?
You cannot get the value before the insert, because you cannot know what other actions may be taken on the table in the meantime. Also, AUTO_INCREMENT may not be incrementing by one; you may have set it that way, but the setting could be changed.
You could use a temporary table to store the data with keys under your control. I would suggest using a UUID rather than an id so you can assume it will always be unique. Then your other classes can copy data into the live tables; you can still link the data using the UUIDs to find related data in your temporary table(s), but write it in the order that makes sense to the database (so the 'root' record first to get its key, and then use that where required).
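A minimal sketch of that idea in plain JDBC (the table and column names are made up for illustration; conn is an open Connection): link the in-memory objects by client-generated UUIDs while the user is still entering data, then insert the parents first and map each UUID to the real generated key.
import java.sql.*;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// insert a parent row and return its AUTO_INCREMENT id via getGeneratedKeys()
static int insertMainItem(Connection conn, String name) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO main_item (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
        ps.setString(1, name);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getInt(1); // the real AUTO_INCREMENT id
        }
    }
}

// while collecting input, reference items by UUID instead of a dummy int id:
String itemA = UUID.randomUUID().toString();
String itemB = UUID.randomUUID().toString();
// e.g. an ItemReference(itemA, itemB) is stored with UUIDs, not dummy ints

// at write time, insert parents first and remember uuid -> generated id:
Map<String, Integer> idByUuid = new HashMap<>();
idByUuid.put(itemA, insertMainItem(conn, "first item"));
idByUuid.put(itemB, insertMainItem(conn, "second item"));
// now resolve the reference rows with idByUuid.get(itemA) / idByUuid.get(itemB)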
Since HBase supports a flexible schema, and in my use case the qualifier is a dynamic value that is only available under some logic (if a condition is true, add the column; otherwise skip it), we expected the Put to execute fine even without any columns added to it.
But we end up getting this error:
java.lang.IllegalArgumentException: No columns to insert
at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:1500)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.validatePut(BufferedMutatorImpl.java:152)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:127)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1028)
Put p = new Put(Bytes.toBytes("rowkey"));
if (condition1) {
    p.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("Q1"), Bytes.toBytes("value"));
}
table.put(p);
All HBase data has to be associated with a column family, even if nothing else is populated. In your case, if !condition1 then you simply shouldn't write anything:
if (condition1) {
    Put p = new Put(Bytes.toBytes("rowkey"));
    p.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("Q1"), Bytes.toBytes("value"));
    table.put(p);
}
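If you would rather keep the Put construction unconditional, another option (a sketch; Put inherits isEmpty() from Mutation, which reports whether any cells were added) is to guard the write itself:
Put p = new Put(Bytes.toBytes("rowkey"));
if (condition1) {
    p.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("Q1"), Bytes.toBytes("value"));
}
if (!p.isEmpty()) { // skip the write when no columns were added
    table.put(p);
}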
I have a Spring Batch project running in Spring Boot that is working perfectly fine. For my reader I'm using JdbcPagingItemReader with a MySqlPagingQueryProvider.
@Bean
public ItemReader<Person> reader(DataSource dataSource) {
    MySqlPagingQueryProvider provider = new MySqlPagingQueryProvider()
    provider.setSelectClause(ScoringConstants.SCORING_SELECT_STATEMENT)
    provider.setFromClause(ScoringConstants.SCORING_FROM_CLAUSE)
    provider.setSortKeys("p.id": Order.ASCENDING)

    JdbcPagingItemReader<Person> reader = new JdbcPagingItemReader<Person>()
    reader.setRowMapper(new PersonRowMapper())
    reader.setDataSource(dataSource)
    reader.setQueryProvider(provider)
    // Setting these caused the exception
    reader.setParameterValues(
        startDate: new Date() - 31,
        endDate: new Date()
    )
    reader.afterPropertiesSet()
    return reader
}
However, when I modified my query with some named parameters to replace previously hard-coded date values and set these parameter values on the reader as shown above, I get the following exception on the second page read (the first page works fine because the _id parameter hasn't yet been used by the paging query provider):
org.springframework.dao.InvalidDataAccessApiUsageException: No value supplied for the SQL parameter '_id': No value registered for key '_id'
at org.springframework.jdbc.core.namedparam.NamedParameterUtils.buildValueArray(NamedParameterUtils.java:336)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.getPreparedStatementCreator(NamedParameterJdbcTemplate.java:374)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:192)
at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.query(NamedParameterJdbcTemplate.java:199)
at org.springframework.batch.item.database.JdbcPagingItemReader.doReadPage(JdbcPagingItemReader.java:218)
at org.springframework.batch.item.database.AbstractPagingItemReader.doRead(AbstractPagingItemReader.java:108)
Here is an example of the SQL, which has no WHERE clause by default. One does get created automatically when the second page is read:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p
On the second page, the sql is modified to the following, however it seems that the named parameter for _id didn't get supplied:
select *, (select id from family f where date_created between :startDate and :endDate and f.creator_id = p.id) from person p WHERE id > :_id
I'm wondering if I simply can't use the MySqlPagingQueryProvider sort keys together with additional named parameters set in JdbcPagingItemReader. If not, what is the best alternative to solving this problem? I need to be able to supply parameters to the query and also page it (vs. using the cursor). Thank you!
I solved this problem with some intense debugging. It turns out that MySqlPagingQueryProvider utilizes a method getSortKeysWithoutAliases() when it builds up the SQL query to run for the first page and for subsequent pages. It therefore appends and (p.id > :_id) instead of and (p.id > :_p.id). Later on, when the second page sort values are created and stored in JdbcPagingItemReader's startAfterValues field it will use the original "p.id" String specified and eventually put into the named parameter map the pair ("_p.id",10). However, when the reader tries to fill in _id in the query, it doesn't exist because the reader used the non-alias removed key.
Long story short, I had to remove the alias reference when defining my sort keys.
provider.setSortKeys("p.id": Order.ASCENDING)
had to change to the following in order for everything to work nicely together:
provider.setSortKeys("id": Order.ASCENDING)
I had the same issue and found another possible solution.
My table T has a primary key field INTERNAL_ID.
The query in JdbcPagingItemReader was like this:
SELECT INTERNAL_ID, ... FROM T WHERE ... ORDER BY INTERNAL_ID ASC
So, the key is: under some conditions the query returned no results, and then the error above (No value supplied for...) was raised.
The solution is:
Check in a Spring Batch decider element whether there are rows.
If there are, continue with the chunk: reader-processor-writer.
If there are none, go to another step.
Please note that these are two different scenarios:
At the beginning, there are rows. You get them by paging and eventually there are no more rows. This causes no problem and the decider trick is not required.
At the beginning, there are no rows. Then this error is raised, and the decider solves it.
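A minimal sketch of such a decider (the count query and status names are illustrative, not from the original job):
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.job.flow.FlowExecutionStatus;
import org.springframework.batch.core.job.flow.JobExecutionDecider;
import org.springframework.jdbc.core.JdbcTemplate;

public class RowsExistDecider implements JobExecutionDecider {
    private final JdbcTemplate jdbcTemplate;

    public RowsExistDecider(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public FlowExecutionStatus decide(JobExecution jobExecution, StepExecution stepExecution) {
        Integer count = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM T", Integer.class);
        // route the flow to the chunk step only when there is something to read
        return (count != null && count > 0)
                ? new FlowExecutionStatus("ROWS_FOUND")
                : new FlowExecutionStatus("NO_ROWS");
    }
}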
Hope this helps.
I am attempting to create a test database (based off of my production db) at runtime, but rather than have to maintain an exact duplicate test db, I'd like to copy the entire data structure of my production db at runtime and then, when I close the test database, drop the entire database.
I assume I will be using statements such as:
CREATE DATABASE test -- create the test db
CREATE TABLE test.sampleTable LIKE production.sampleTable -- create each table
And when I am finished with the test db, calling a close method will run something like:
DROP DATABASE test -- delete the database and all its tables
But how do I go about automatically finding all the tables within the production database without having to write them out manually? The idea is that I can manipulate my production db without having to be concerned with maintaining an identical structure within the test db.
Would a stored procedure be necessary in this case? Some sample code on how to achieve something like this would be appreciated.
If the database driver you are using supports it, you can use DatabaseMetaData#getTables to get the list of tables for a schema. You can get access to DatabaseMetaData from Connection#getMetaData.
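A rough, untested sketch of that approach (needs java.sql.* and java.util.*; for MySQL the catalog argument is the database name):
DatabaseMetaData meta = connection.getMetaData();
List<String> tables = new ArrayList<>();
try (ResultSet rs = meta.getTables("production", null, "%", new String[] { "TABLE" })) {
    while (rs.next()) {
        tables.add(rs.getString("TABLE_NAME"));
    }
}
try (Statement stmt = connection.createStatement()) {
    for (String t : tables) {
        stmt.executeUpdate("CREATE TABLE test." + t + " LIKE production." + t);
    }
}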
In your scripting language, you call "SHOW TABLES" on the database you want to copy. Reading that result set a row at a time, your program puts the name of the table into a variable (let's call it $tablename) and can generate the sql: "CREATE TABLE test.$tablename LIKE production.$tablename". Iterate through the result set and you're done.
(You won't get foreign key constraints that way, but maybe you don't need those. If you do, you can run "SHOW CREATE TABLE $tablename" and parse the results to pick out the constraints.)
I don't have a code snippet for Java, but here is one for Perl that you can treat as pseudo-code:
$ref = $dbh->selectall_arrayref("SHOW TABLES");
unless (defined($ref)) {
    print "Nothing found\n";
} else {
    foreach my $row_ref (@{$ref}) {
        push(@tables, $row_ref->[0]);
    }
}
The foreach statement iterates over the result set in an array reference returned by the database interface library. The push statement puts the first element of the current row of the result set into an array variable @tables. You'd be using the database library appropriate for your language of choice.
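For the record, a rough Java equivalent of that Perl sketch (SHOW TABLES is MySQL-specific; conn is an open Connection and connection handling is omitted):
List<String> tables = new ArrayList<>();
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
    while (rs.next()) {
        tables.add(rs.getString(1)); // first column holds the table name
    }
}
// then run CREATE TABLE test.<name> LIKE production.<name> for each entry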
I would use mysqldump: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
It will produce a file containing all the SQL commands needed to replicate the prod database.
The solution was as follows:
private static final String SQL_CREATE_TEST_DB = "CREATE DATABASE test";
private static final String SQL_PROD_TABLES = "SHOW TABLES IN production";

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.execute(SQL_CREATE_TEST_DB);
SqlRowSet result = jdbcTemplate.queryForRowSet(SQL_PROD_TABLES);
while (result.next()) {
    String tableName = result.getString(result.getMetaData().getColumnName(1)); // retrieves the table name from column 1
    jdbcTemplate.execute("CREATE TABLE test." + tableName + " LIKE production." + tableName); // create a new table in test based on the production structure
}
This is using Spring to simplify the database connection etc, but the real magic is in the SQL statements. As mentioned by D Mac, this will not copy foreign key constraints, but that can be achieved by running another SQL statement and parsing the results.
Using the GeoTools WFS-T plugin, I have created a new row, and after a commit, I have a FeatureId whos .getId() returns an ugly string that looks something like this:
newmy_database:my_table.9223372036854775807
Aside from the fact that the word "new" at the beginning of "my_database" is a surprise, the number in no way reflects the primary key of the new row (which in this case is "23"). Fair enough, I thought this may be some internal numbering system. However, now I want a foreign key in another table to get the primary key of the new row in this one, and I'm not sure how to get the value from this FID. Some places suggest that you can use an FID in a query like this:
Filter filter = filterFactory.id(Collections.singleton(fid));
Query query = new Query(tableName, filter);
SimpleFeatureCollection features = simpleFeatureSource.getFeatures(query);
But this fails at parsing the FID, at the underscore of all places! That underscore was there when the row was created (I had to pass "my_database:my_table" as the table to add the row to).
I'm sure that either there is something wrong with the id, or I'm using it incorrectly somehow. Can anyone shed any light?
It appears as if a couple of things are going wrong - and perhaps a bug report is needed.
The FeatureId with "new" at the beginning is a temporary id; that should be replaced with the real result once commit has been called.
There are a number of ways to be aware of this:
1) You can listen for a BatchFeatureEvent; this offers the information on "temp id" -> "wfs id"
2) Internally this information is parsed from the transaction result returned by your WFS. The result is saved in the WFSTransactionState for you to access. (This predates BatchFeatureEvent.)
Transaction transaction = new DefaultTransaction("insert");
try {
    SimpleFeatureStore featureStore =
            (SimpleFeatureStore) wfs.getFeatureSource(typeName);
    featureStore.setTransaction(transaction);
    featureStore.addFeatures(DataUtilities.collection(feature));
    transaction.commit();

    // get the final feature id
    WFSTransactionState wfsts = (WFSTransactionState) transaction.getState(wfs);
    // in this example there is only one fid; get it
    String result = wfsts.getFids(typeName)[0];
} finally {
    transaction.close();
}
I have updated the documentation with the above example:
http://docs.geotools.org/latest/userguide/library/data/wfs.html