Spring JDBC template query - java

I am facing an issue while passing a list of values to an SQL query which is placed in a properties file. Is there any way that I can dynamically generate the placeholders according to the number of values received? If so, will the static query still work? Please advise.
DAO Code:
public List<DestinationDTO> fetchEmsStatistics(DestinationDTO destinationDTO)
//throws Exception
{
    LOG.info("start of -- MonitorDAOImpl.fetchEmsStatistics()");
    List<DestinationDTO> destinationDTOList = new ArrayList<DestinationDTO>();
    String str = destinationDTO.getDestinationNames();
    String dest[] = str.split(",");
    /* "example1" and "example2" are hardcoded; instead I want to pass the values of the
       dest array, with the number of placeholders in the query matching its length */
    try {
        destinationDTOList = this.jdbcTemplate.query(
                this.fetchEmsStatisticsQuery,
                new Object[] {
                        "%" + destinationDTO.getDestinationTypeId() + "%",
                        destinationDTO.getDestinationSourceId(),
                        "example1", "example2",
                        destinationDTO.getStartTime() + ":00",
                        destinationDTO.getEndTime() + ":00" },
                new DestinationDTORowMapper());
        System.out.println("finally " + destinationDTOList.size());
    } catch (Exception e) {
        e.printStackTrace();
    }
    LOG.info("end of -- MonitorDAOImpl.fetchEmsStatistics()");
    return destinationDTOList;
}
Query:
Select
DESTINATION
,MIN_IC
, MAX_IC
,MAX_OC
,MAX_IC-MIN_IC as PROCESSEDMSGS
from (
select
DESTINATION
,min(IN_MSG_COUNT) MIN_IC
,max(IN_MSG_COUNT) MAX_IC
, max(OUT_MSG_COUNT) MAX_OC
from
EMS_MONITOR C
where
C.DESTINATION_TYPE_ID like ? and
C.SOURCE_ID=? and
C.DESTINATION IN (?,?) and
C.RCVD_DATE >=? and C.RCVD_DATE <=?
group by
DESTINATION
)
Desired query format (example):
Select
DESTINATION
,MIN_IC
, MAX_IC
,MAX_OC
,MAX_IC-MIN_IC as PROCESSEDMSGS
from (
select
DESTINATION
,min(IN_MSG_COUNT) MIN_IC
,max(IN_MSG_COUNT) MAX_IC
, max(OUT_MSG_COUNT) MAX_OC
from
EMS_MONITOR C
where
C.DESTINATION_TYPE_ID like '%1%' and
C.SOURCE_ID=1 and
C.DESTINATION in ('M.COM.CAT.AVAIL.STORE.Q.FCC','M.COM.CAT.ELIG.BOPS.STORE.Q.FCC') and
C.RCVD_DATE >='2014-09-16 17:00:00' and
C.RCVD_DATE <='2014-09-16 20:01:00'
group by
DESTINATION
)

Considering your extra comment, try something like this.
In the properties file:
myquery=SELECT * FROM something WHERE destination IN (%IN_CLAUSE%)
In code you can get the string and do:
String q = originalQuery.replace("%IN_CLAUSE%", str);
then check the q value.
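Building on that idea, a minimal sketch that generates one ? per destination at runtime (it reuses str/dest and the other arguments from the DAO above, and assumes the query in the properties file contains the %IN_CLAUSE% token in place of the hardcoded (?,?)):

// one "?" per value in dest, joined with commas, spliced into the query template
String placeholders = String.join(",", Collections.nCopies(dest.length, "?"));
String sql = this.fetchEmsStatisticsQuery.replace("%IN_CLAUSE%", placeholders);

// build the argument list in the same order as the placeholders appear in the query
List<Object> args = new ArrayList<Object>();
args.add("%" + destinationDTO.getDestinationTypeId() + "%");
args.add(destinationDTO.getDestinationSourceId());
args.addAll(Arrays.asList(dest));                 // one value per generated "?"
args.add(destinationDTO.getStartTime() + ":00");
args.add(destinationDTO.getEndTime() + ":00");

destinationDTOList = this.jdbcTemplate.query(sql, args.toArray(), new DestinationDTORowMapper());

Alternatively, Spring's NamedParameterJdbcTemplate expands a collection bound to a named parameter inside an IN clause automatically, which avoids building the placeholder string by hand.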

Why not try to use a subselect instead of sending the parameters in the IN clause?
I know that my answer may not help, but it is a different approach.

Related

Java EE + Oracle: how to use GTT(global temporary table) to avoid possible long(1000+) IN clause?

I have a query for an Oracle database, built with the CriteriaBuilder of Hibernate. It has an IN clause which already takes about 800+ params.
The team says this may surpass 1000 and hit the hard upper limit of Oracle itself, which only allows 1000 parameters in an IN clause. We need to optimize that.
select ih from ItemHistory as ih
where ih.number=:param0
and
ih.companyId in (
select c.id from Company as c
where (
( c.organizationId in (:param1) )
or
( c.organizationId like :param2 )
) and (
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param818)
)
)
order by ih.eventDate desc
So, two solutions I can think of:
The easy one: right now the list from :param3 to :param818 is below 1000, but since in the future we may hit 1000, we can split the list when its size > 1000 into another IN clause, so it becomes:
c.organizationId in (:param3, :param4, :param5, :param6, :param7, :param8, :param9, :param10, ..... :param1002) or c.organizationId in (:param1003, ...)
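For solution 1, a rough sketch of that splitting with the Criteria API could look like the following (builder and orgIdColumn are the ones from the method shown further below; the chunk size and variable names are only illustrative):

// split the organization ids into chunks of at most 1000 and OR the IN predicates
// together, so that no single IN list exceeds Oracle's limit
List<String> ids = new ArrayList<String>(query.getCompanyIds());
List<Predicate> inChunks = new ArrayList<Predicate>();
for (int from = 0; from < ids.size(); from += 1000) {
    List<String> chunk = ids.subList(from, Math.min(from + 1000, ids.size()));
    inChunks.add(orgIdColumn.in(chunk));
}
Predicate orgIdInChunks = builder.or(inChunks.toArray(new Predicate[0]));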
Both the original code and solution 1 are not very efficient. Although they can fetch 40K records in 25 seconds, we should use a GTT (Global Temporary Table), according to what I can find on AskTom and other sites from professional DBAs. But I can only find SQL examples, not Java code.
What I can imagine is:
createNativeQuery("create global temporary table GTT_COMPANIES if not exist (companyId varchar(32)) ON COMMIT DELETE ROWS;"); and execute(Do we need index here?)
createNativeQuery("insert into GTT_COMPANIES (list)"); query.bind("1", query.getCompanyIds()); and execute(can we bind a list and insert it?)
use CriteriaQuery to select from this table(but I doubt, as CriteriaQueryBuilder will require type safe meta model class to be generated beforehand, and here we don't have the entity; this is an ad-hoc table and no model entity is mapped to it)
and, do we need to create GTT even the list size is < 1000? As often it is big, 700~800.
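Purely as a sketch of what those imagined steps could look like with plain native queries (the GTT would normally be created once as part of the schema, the table and column names here are made up, and this is not a tested solution):

// assumed to exist in the schema, created once by a DBA:
//   CREATE GLOBAL TEMPORARY TABLE GTT_COMPANIES (COMPANY_ID VARCHAR2(32)) ON COMMIT DELETE ROWS

// within one transaction: fill the GTT row by row (or with JDBC batching) ...
Query insertIntoGtt = entityManager.createNativeQuery(
        "INSERT INTO GTT_COMPANIES (COMPANY_ID) VALUES (?1)");
for (String companyId : query.getCompanyIds()) {
    insertIntoGtt.setParameter(1, companyId).executeUpdate();
}

// ... then reference it as a subselect instead of the long IN list
@SuppressWarnings("unchecked")
List<ItemHistory> result = entityManager.createNativeQuery(
        "SELECT ih.* FROM ITEM_HISTORY ih"
        + " WHERE ih.COMPANY_ID IN (SELECT COMPANY_ID FROM GTT_COMPANIES)"
        + " ORDER BY ih.EVENT_DATE DESC", ItemHistory.class).getResultList();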
So, any suggestion? Someone got a working example of Hibernate CriteriaQuery + Oracle GTT?
The whole method is like this:
public List<ItemHistory> findByIdTypePermissionAndOrganizationIds(final Query<String> query, final ItemIdType idType) throws DataLookupException {
String id = query.getObjectId();
String type = idType.name();
Set<String> companyIds = query.getCompanyIds();
Set<String> allowedOrgIds = query.getAllowedOrganizationIds();
Set<String> excludedOrgIds = query.getExcludedOrganizationIds();
// if no orgs are allowed, we should return empty list
if (CollectionUtils.isEmpty(allowedOrgIds)) {
return Collections.emptyList();
}
try {
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<ItemHistory> criteriaQuery = builder.createQuery(ItemHistory.class);
// Subquery<String> subQueryCompanyIds = filterByPermissionAndOrgIdsInSubquery(query, builder, criteriaQuery);
Subquery<String> subQueryCompanyIds = criteriaQuery.subquery(String.class);
Root<Company> companies = subQueryCompanyIds.from(Company.class);
companies.alias(COMPANY_ALIAS);
Path<String> orgIdColumn = companies.get(Company_.organizationId);
/* 1. get permission based restrictions */
// select COMPANY_ID where (ORG_ID in ... or like ...) and (ORG_ID not in ... and not like ...)
// actually query.getExcludedOrganizationIds() can also be very long list(1000+), but let's do it later
Predicate permissionPredicate = getCompanyIdRangeByPermission(
builder, query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds(), orgIdColumn
);
/* 2. get org id based restrictions, which was done on top of permission restrictions */
// ... and where (ORG_ID in ... or like ...)
// process companyIds with and without "*" by adding different predicates, like (xxx%, yyy%) vs in (xxx, yyy)
// here, query.getCompanyIds() could be very long, may be 1000+
Predicate orgIdPredicate = groupByWildcardsAndCombine(builder, query.getCompanyIds(), orgIdColumn, false);
/* 3. Join two predicates with AND, because originally filtering is done twice, 2nd is done on basis of 1st */
Predicate subqueryWhere = CriteriaQueryUtils.joinWith(builder, true, permissionPredicate, orgIdPredicate); // join predicates with AND
subQueryCompanyIds.select(companies.get(Company_.id)); // id -> COMPANY_ID
if (subqueryWhere != null) {
subQueryCompanyIds.where(subqueryWhere);
} else {
LOGGER.warn("Cannot build subquery of org id and permission. " +
"Org ids: {}, allowed companies: {}, excluded companies: {}",
query.getCompanyIds(), query.getAllowedOrganizationIds(), query.getExcludedOrganizationIds());
}
Root<ItemHistory> itemHistory = criteriaQuery.from(ItemHistory.class);
itemHistory.alias(ITEM_HISTORY_ALIAS);
criteriaQuery.select(itemHistory)
.where(builder.and(
builder.equal(getColumnByIdType(itemHistory, idType), id),
builder.in(itemHistory.get(ItemHistory_.companyId)).value(subQueryCompanyIds)
))
.orderBy(builder.desc(itemHistory.get(ItemHistory_.eventDate)));
TypedQuery<ItemHistory> finalQuery = entityManager.createQuery(criteriaQuery);
LOGGER.trace(LOG_MESSAGE_FINAL_QUERY, finalQuery.unwrap(org.hibernate.Query.class).getQueryString());
return finalQuery.setMaxResults(MAX_LIST_FETCH_SIZE).getResultList();
} catch (NoResultException e) {
LOGGER.info("No item history events found by permission and org ids with {}={}", type, id);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_NO_RESULT);
} catch (Exception e) {
LOGGER.error("Error when fetching item history events by permission and org ids with {}={}", type, id, e);
throw new DataLookupException(ErrorCode.DATA_LOOKUP_ERROR,
"Error when fetching item history events by permission and org ids with " + type + "=" + id);
}
}

My Customer data is being truncated when added to my List [duplicate]

I am running data.bat file with the following lines:
Rem This batch file will populate tables
cd\program files\Microsoft SQL Server\MSSQL
osql -U sa -P Password -d MyBusiness -i c:\data.sql
The contents of the data.sql file is:
insert Customers
(CustomerID, CompanyName, Phone)
Values('101','Southwinds','19126602729')
There are 8 more similar lines for adding records.
When I run this with start > run > cmd > c:\data.bat, I get this error message:
1>2>3>4>5>....<1 row affected>
Msg 8152, Level 16, State 4, Server SP1001, Line 1
string or binary data would be truncated.
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
Also, I am a newbie obviously, but what do Level #, and state # mean, and how do I look up error messages such as the one above: 8152?
From @gmmastros's answer
Whenever you see the message....
string or binary data would be truncated
Think to yourself... The field is NOT big enough to hold my data.
Check the table structure for the customers table. I think you'll find that the length of one or more fields is NOT big enough to hold the data you are trying to insert. For example, if the Phone field is a varchar(8) field, and you try to put 11 characters in to it, you will get this error.
I had this issue although data length was shorter than the field length.
It turned out that the problem was having another log table (for audit trail), filled by a trigger on the main table, where the column size also had to be changed.
In one of the INSERT statements you are attempting to insert a too long string into a string (varchar or nvarchar) column.
If it's not obvious which INSERT is the offender by a mere look at the script, you could count the <1 row affected> lines that occur before the error message. The obtained number plus one gives you the statement number. In your case it seems to be the second INSERT that produces the error.
Just want to contribute additional information: I had the same issue, and it was because the field wasn't big enough for the incoming data; this thread helped me solve it (the top answer clarifies it all).
BUT it is very important to know the possible reasons that may cause it.
In my case I was creating the table with a field like this:
Select '' as Period, * Into #NewTable From Transactions
Therefore the field "Period" had a length of zero, causing the insert operations to fail. I changed it to 'XXXXXX', which is the length of the incoming data, and it now works properly (because the field now has a length of 6).
I hope this helps anyone with the same issue :)
Some of your data cannot fit into your database column because the column is too small. It is not easy to find which field is at fault. If you use C# and Linq2Sql, you can list the fields which would be truncated:
First, create a helper class:
public class SqlTruncationExceptionWithDetails : ArgumentOutOfRangeException
{
public SqlTruncationExceptionWithDetails(System.Data.SqlClient.SqlException inner, DataContext context)
: base(inner.Message + " " + GetSqlTruncationExceptionWithDetailsString(context))
{
}
/// <summary>
/// Part of the code is from the following link
/// http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
/// </summary>
/// <param name="context"></param>
/// <returns></returns>
static string GetSqlTruncationExceptionWithDetailsString(DataContext context)
{
StringBuilder sb = new StringBuilder();
foreach (object update in context.GetChangeSet().Updates)
{
FindLongStrings(update, sb);
}
foreach (object insert in context.GetChangeSet().Inserts)
{
FindLongStrings(insert, sb);
}
return sb.ToString();
}
public static void FindLongStrings(object testObject, StringBuilder sb)
{
foreach (var propInfo in testObject.GetType().GetProperties())
{
foreach (System.Data.Linq.Mapping.ColumnAttribute attribute in propInfo.GetCustomAttributes(typeof(System.Data.Linq.Mapping.ColumnAttribute), true))
{
if (attribute.DbType.ToLower().Contains("varchar"))
{
string dbType = attribute.DbType.ToLower();
int numberStartIndex = dbType.IndexOf("varchar(") + 8;
int numberEndIndex = dbType.IndexOf(")", numberStartIndex);
string lengthString = dbType.Substring(numberStartIndex, (numberEndIndex - numberStartIndex));
int maxLength = 0;
int.TryParse(lengthString, out maxLength);
string currentValue = (string)propInfo.GetValue(testObject, null);
if (!string.IsNullOrEmpty(currentValue) && maxLength != 0 && currentValue.Length > maxLength)
{
//string is too long
sb.AppendLine(testObject.GetType().Name + "." + propInfo.Name + " " + currentValue + " Max: " + maxLength);
}
}
}
}
}
}
Then prepare the wrapper for SubmitChanges:
public static class DataContextExtensions
{
public static void SubmitChangesWithDetailException(this DataContext dataContext)
{
//http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
try
{
//this can fail on data truncation
dataContext.SubmitChanges();
}
catch (SqlException sqlException) //when (sqlException.Message == "String or binary data would be truncated.")
{
if (sqlException.Message == "String or binary data would be truncated.") //only for EN windows - if you are running different window language, invoke the sqlException.getMessage on thread with EN culture
throw new SqlTruncationExceptionWithDetails(sqlException, dataContext);
else
throw;
}
}
}
Prepare global exception handler and log truncation details:
protected void Application_Error(object sender, EventArgs e)
{
Exception ex = Server.GetLastError();
string message = ex.Message;
//TODO - log to file
}
Finally use the code:
Datamodel.SubmitChangesWithDetailException();
Another situation in which you can get this error is the following:
I had the same error and the reason was that in an INSERT statement that received data from an UNION, the order of the columns was different from the original table. If you change the order in #table3 to a, b, c, you will fix the error.
select a, b, c into #table1
from #table0
insert into #table1
select a, b, c from #table2
union
select a, c, b from #table3
On SQL Server you can use SET ANSI_WARNINGS OFF like this:
using (SqlConnection conn = new SqlConnection("Data Source=XRAYGOAT\\SQLEXPRESS;Initial Catalog='Healthy Care';Integrated Security=True"))
{
conn.Open();
using (var trans = conn.BeginTransaction())
{
try
{
using (var cmd = new SqlCommand("", conn, trans))
{
cmd.CommandText = "SET ANSI_WARNINGS OFF";
cmd.ExecuteNonQuery();
cmd.CommandText = "YOUR INSERT HERE";
cmd.ExecuteNonQuery();
cmd.Parameters.Clear();
cmd.CommandText = "SET ANSI_WARNINGS ON";
cmd.ExecuteNonQuery();
trans.Commit();
}
}
catch (Exception)
{
trans.Rollback();
}
}
conn.Close();
}
I had the same issue. The length of my column was too short.
What you can do is either increase the length or shorten the text you want to put in the database.
I also had this problem occurring on the web application surface.
Eventually I found out that the same error message comes from the SQL UPDATE statement on a specific table.
Finally I figured out that the column definitions in the related history table(s) did not match the original table's column lengths for nvarchar types in some specific cases.
I had the same problem, even after increasing the size of the problematic columns in the table.
tl;dr: The length of the matching columns in corresponding Table Types may also need to be increased.
In my case, the error was coming from the Data Export service in Microsoft Dynamics CRM, which allows CRM data to be synced to an SQL Server DB or Azure SQL DB.
After a lengthy investigation, I concluded that the Data Export service must be using Table-Valued Parameters:
You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.
As you can see in the documentation above, Table Types are used to create the data ingestion procedure:
CREATE TYPE LocationTableType AS TABLE (...);
CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP LocationTableType READONLY
Unfortunately, there is no way to alter a Table Type, so it has to be dropped & recreated entirely. Since my table has over 300 fields (😱), I created a query to facilitate the creation of the corresponding Table Type based on the table's columns definition (just replace [table_name] with your table's name):
SELECT 'CREATE TYPE [table_name]Type AS TABLE (' + STRING_AGG(CAST(field AS VARCHAR(max)), ',' + CHAR(10)) + ');' AS create_type
FROM (
SELECT TOP 5000 COLUMN_NAME + ' ' + DATA_TYPE
+ IIF(CHARACTER_MAXIMUM_LENGTH IS NULL, '', CONCAT('(', IIF(CHARACTER_MAXIMUM_LENGTH = -1, 'max', CONCAT(CHARACTER_MAXIMUM_LENGTH,'')), ')'))
+ IIF(DATA_TYPE = 'decimal', CONCAT('(', NUMERIC_PRECISION, ',', NUMERIC_SCALE, ')'), '')
AS field
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '[table_name]'
ORDER BY ORDINAL_POSITION) AS T;
After updating the Table Type, the Data Export service started functioning properly once again! :)
When I tried to execute my stored procedure I had the same problem, because the size of the column that I needed to add some data to was shorter than the data I wanted to add.
You can increase the size of the column data type or reduce the length of your data.
A 2016/2017 update will show you the bad value and column.
A new trace flag will swap the old error for a new 2628 error and will print out the column and offending value. Trace flag 460 is available in the latest cumulative update for 2016 and 2017:
https://support.microsoft.com/en-sg/help/4468101/optional-replacement-for-string-or-binary-data-would-be-truncated
Just make sure that after you've installed the CU you enable the trace flag, either globally/permanently on the server or with DBCC TRACEON:
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
Another situation in which this error may occur is in SQL Server Management Studio. If you have "text" or "ntext" fields in your table, it can happen no matter what kind of field you are updating (for example a bit or integer).
It seems that the Studio does not load entire "ntext" fields and also updates ALL fields instead of just the modified one.
To solve the problem, exclude the "text" or "ntext" fields from the query in Management Studio.
This error comes only when one of your field values is longer than the field length specified in the SQL Server database table structure.
To overcome this issue you have to either reduce the length of the field value or increase the length of the database table field.
If someone is encountering this error in a C# application, I have created a simple way of finding offending fields by:
Getting the column width of all the columns of a table where we're trying to make this insert/ update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/ update.
Assumptions/ Limitations:
The column names of the table in the database match the C# entity fields. For example, if you have a column named SourceData in the database, you need to have your entity with the same column name:
public class SomeTable
{
// Other fields
public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
var columnSizes = new Dictionary<string, int>();
using (var connection = new SqlConnection(connectionString))
{
// Connect to the database then retrieve the schema information.
connection.Open();
// You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
// You can use four restrictions for Column, so you should create a 4 members array.
String[] columnRestrictions = new String[4];
// For the array, 0-member represents Catalog; 1-member represents Schema;
// 2-member represents Table Name; 3-member represents Column Name.
// Now we specify the Table_Name and Column_Name of the columns what we want to get schema information.
columnRestrictions[2] = tableName;
DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);
foreach (DataRow row in allColumnsSchemaTable.Rows)
{
var columnName = row.Field<string>("COLUMN_NAME");
//var dataType = row.Field<string>("DATA_TYPE");
var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");
// I'm only capturing columns whose Datatype is "varchar" or "char", i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
if(characterMaxLength != null)
{
columnSizes.Add(columnName, characterMaxLength.Value);
}
}
connection.Close();
}
return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
var tableName = typeof(T).Name;
Dictionary<string, string> longFields = new Dictionary<string, string>();
var objectProperties = GetProperties(entity);
//var fieldNames = objectProperties.Select(p => p.Name).ToList();
var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);
foreach (var dbColumn in actualDatabaseColumnSizes)
{
var maxLengthOfThisColumn = dbColumn.Value;
var currentValueOfThisField = objectProperties.Where(f => f.Name == dbColumn.Key).First()?.GetValue(entity, null)?.ToString();
if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
{
longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
}
}
return longFields;
}
public static List<PropertyInfo> GetProperties<T>(T entity)
{
//The DeclaredOnly flag makes sure you only get properties of the object, not from the classes it derives from.
var properties = entity.GetType()
.GetProperties(System.Reflection.BindingFlags.Public
| System.Reflection.BindingFlags.Instance
| System.Reflection.BindingFlags.DeclaredOnly)
.ToList();
return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
[Key]
public long TicketID { get; set; }
public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
public DbSet<SomeTable> SomeTables { get; set; }
}
This table in the DB has the SourceData field defined as varchar(16).
Now we'll try to insert value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
using (var context = new SomeDbContext(connectionString))
{
var someTableEntity = new SomeTable()
{
SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
};
context.SomeTables.Add(someTableEntity);
try
{
context.SaveChanges();
}
catch (Exception ex)
{
if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
{
var badFieldsReport = "";
List<string> badFields = new List<string>();
// YOU GOT YOUR FIELDS RIGHT HERE:
var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
foreach (var longField in longFields)
{
badFields.Add(longField.Key);
badFieldsReport += longField.Value + "\n";
}
}
else
throw;
}
}
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
Kevin Pope's comment under the accepted answer was what I needed.
The problem, in my case, was that I had triggers defined on my table that would insert update/insert transactions into an audit table, but the audit table had a data type mismatch where a column with VARCHAR(MAX) in the original table was stored as VARCHAR(1) in the audit table, so my triggers were failing when I would insert anything greater than VARCHAR(1) in the original table column and I would get this error message.
I used a different tactic: some fields are allocated 8K in places where only about 50-100 characters are ever used.
declare @NVPN_list as table (
nvpn varchar(50)
,nvpn_revision varchar(5)
,nvpn_iteration INT
,mpn_lifecycle varchar(30)
,mfr varchar(100)
,mpn varchar(50)
,mpn_revision varchar(5)
,mpn_iteration INT
-- ...
)
INSERT INTO @NVPN_list
SELECT left(nvpn ,50) as nvpn
,left(nvpn_revision ,10) as nvpn_revision
,nvpn_iteration
,left(mpn_lifecycle ,30)
,left(mfr ,100)
,left(mpn ,50)
,left(mpn_revision ,5)
,mpn_iteration
,left(mfr_order_num ,50)
FROM [DASHBOARD].[dbo].[mpnAttributes] (NOLOCK) mpna
I wanted speed, since I have 1M total records, and load 28K of them.
This error may occur because the field size is smaller than the data you entered.
For example, if you have the data type nvarchar(7) and your value is 'aaaaddddf', then the error shown is:
string or binary data would be truncated
You simply can't beat SQL Server on this.
You can insert into a new table like this:
select foo, bar
into tmp_new_table_to_dispose_later
from my_table
and compare the table definition with the real table you want to insert the data into.
Sometimes it's helpful, sometimes it's not.
If you try inserting into the final/real table from that temporary table it may just work (because data conversion works differently than in SSMS, for example).
Another alternative is to insert the data in chunks: instead of inserting everything at once, you insert with TOP 1000 and repeat the process until you find a chunk with an error. At least you have better visibility on what's not fitting into the table.

how to handle string list when it is returned by any BAPI using jco3.jar?

I have a BAPI function to be called, which takes a string as input and returns a string list as output. I am using jco3.jar in my Java code but am not able to find any built-in method which takes care of a string list (String[]) as an output parameter, though we have ByteArray/CharArray instead.
function.getExportParameterList().getString("I_DOCNUM"); // this works if the return parameter "I_DOCNUM" is of type String only, but it does not work for a string list.
Please help me. Thanks in advance.
Java Code
JCoDestination destination = JCoDestinationManager.getDestination("mySAPSystem");
System.out.println("Attributes:");
System.out.println(destination.getAttributes());
System.out.println(destination.getRepository());
destination.ping();
JCoFunction function = destination.getRepository().getFunction("INBOUND_IDOCS_FOR_TID");
if(function == null)
throw new RuntimeException("INBOUND_IDOCS_FOR_TID not found in SAP.");
function.getImportParameterList().setValue("TID", "0A80351B1927589833E57997");
try
{
function.execute(destination);
}
catch(AbapException e)
{
System.out.println(e.toString());
return;
}
System.out.println("STFC_CONNECTION finished:");
System.out.println(" Echo: " + function.getExportParameterList().getString("I_DOCNUM"));
Function Module:-
INBOUND_IDOCS_FOR_TID.
*"----------------------------------------------------------------------
*"*"Lokale Schnittstelle:
*" IMPORTING
*" VALUE(TID) TYPE EDIDS-TID
*" CHANGING
*" VALUE(I_DOCNUM) TYPE IDOC_TT
*" EXCEPTIONS
*" NO_IDOC_FOUND
*"----------------------------------------------------------------------
data: wa_docnum like edidc-docnum.
select docnum from edids into wa_docnum
where ( status eq '50'
or status eq '56' )
and tid eq tid.
append wa_docnum to i_docnum.
endselect.
if sy-subrc ne 0.
raise no_idoc_found.
endif.
ENDFUNCTION.
IDOC_TT is a table type. So you can access this parameter with
JCoTable tabIDocnums = function.getChangingParameterList().getTable("I_DOCNUM");
Then loop through the rows of the table and access the single field value of each row with:
String strIDocNumber = tabIDocnums.getString("EDI_DOCNUM");
or a little bit more performant via field index:
String strIDocNumber = tabIDocnums.getString(0);
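A small loop over that table, collecting the IDoc numbers, might look like this (a sketch using the standard JCoTable row cursor):

JCoTable tabIDocnums = function.getChangingParameterList().getTable("I_DOCNUM");
List<String> idocNumbers = new ArrayList<String>();
for (int i = 0; i < tabIDocnums.getNumRows(); i++) {
    tabIDocnums.setRow(i);                                 // position the table cursor on row i
    idocNumbers.add(tabIDocnums.getString("EDI_DOCNUM"));  // read the row's single field
}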

Mule-Creating dynamic where condition for sql query through DB connector

I need to create a dynamic query wherein the WHERE condition will change based on the request coming in to Mule. The request will always be a GET with query parameters. Here goes an example:
http://localhost:8084/basePath?name=balwant&age=26 , OR
http://localhost:8084/basePath?name=balwant&age=26&gender=M
Likewise it will be dynamic. Now I need a way by which I can create a query where the WHERE condition will be added based on the query parameters in the request.
Haven't tested this, but it would be something like the following. Check if the inbound property is there and build up the query programmatically:
SELECT * FROM USERS WHERE NAME = '#[message.inboundProperties.name]' #[message.inboundProperties.gender !=null ? ' AND GENDER=' + message.inboundProperties.gender] #[message.inboundProperties.age !=null ? ' AND AGE=' + message.inboundProperties.age]
I had in mind using custom transformers, so I used a Java transformer for this.
The logic looks something like this:
public class QueryBuilder extends AbstractMessageTransformer {

    @Override
    public Object transformMessage(MuleMessage message, String outputEncoding)
            throws TransformerException {
        System.out.println("Query Params : "
                + message.getInboundProperty("http.query.params").getClass().getName());
        Map<?, ?> map = message.getInboundProperty("http.query.params");
        System.out.println("Map keys : " + map.keySet());
        String where = "";
        for (Map.Entry<?, ?> entry : map.entrySet()) {
            System.out.println(entry.getKey() + "/" + entry.getValue());
            where = where + " " + entry.getKey() + "=" + "'" + entry.getValue() + "'" + " and";
        }
        // drop the trailing " and"
        String whereCondition = where.substring(0, where.lastIndexOf(" "));
        System.out.println("Where condition is : " + whereCondition);
        return whereCondition;
    }
}
Now this returns the payload, which is of string type.
In the DB connector, select Query type as Dynamic and add #[payload] after the WHERE keyword, so the dynamic query becomes something like SELECT * FROM USERS WHERE #[payload].
Cheers
The above query works if the values are available in the message inbound properties. But if you want to build your SQL query with request query param values, then you need to use something like below (the query param values are available under the message inbound property http.query.params from Mule 3.6.0 onward):
SELECT * FROM USERS WHERE NAME = '#[message.inboundProperties.'http.query.params'.name]' #[message.inboundProperties.'http.query.params'.gender !=null ? ' AND GENDER=' + message.inboundProperties.'http.query.params'.gender] #[message.inboundProperties.'http.query.params'.age !=null ? ' AND AGE=' + message.inboundProperties.'http.query.params'.age]

Java regex to remove SQL comments from a string

Hope someone can help me out with this one !
I have a sql file that looks like this:
CREATE TABLE IF NOT EXISTS users(
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
firstname VARCHAR(30) NOT NULL,
lastname VARCHAR(30) NOT NULL,
PRIMARY KEY (id),
CONSTRAINT UNIQUE (firstname,lastname)
)
ENGINE=InnoDB
;
INSERT IGNORE INTO users (firstname,lastname) VALUES ('x','y');
/*
INSERT IGNORE INTO users (firstname,lastname) VALUES ('a','b');
*/
I have built a web application that initializes a MySQL database at startup with this function:
public static void initDatabase(ConnectionPool pool, File sqlFile){
    Connection con = null;
    Statement st = null;
    String mySb = null;
    try {
        con = pool.getConnection();
        mySb = IOUtils.copyToString(sqlFile);
        // We use ";" as a delimiter for each request, then we are sure to have well formed statements
        String[] inst = mySb.split(";");
        st = con.createStatement();
        for (int i = 0; i < inst.length; i++) {
            // we ensure that there are no spaces before or after the request string
            // in order not to execute empty statements
            if (!inst[i].trim().isEmpty()) {
                st.executeUpdate(inst[i]);
            }
        }
        st.close();
    } catch (IOException e) {
        throw new RuntimeException(e);
    } catch (SQLException e) {
        throw new RuntimeException(e);
    } finally {
        SQLUtils.safeClose(st);
        pool.close(con);
    }
}
(This function was found on the web. Author, please forgive me for not citing your name, I lost it !!)
It works perfectly as long as there are no SQL comment blocks.
The copyToString() function basically does what it says.
What I would like now is to build a regex that will remove block comments from the string. I only have block comments /* */ in the file, no --.
What I have tried so far:
mySb = mySb.replaceAll("/\\*.*\\*/", "");
Unfortunately, I'm not very good at regex...
I run into the problem that the matched string looks something like "/* comment */ real statement /* another comment */", and so on (the greedy .* swallows everything between the first /* and the last */).
Try
mySb = mySb.replaceAll("/\\*.*?\\*/", "");
(notice the ? which stands for "lazy").
EDIT: To cover multiline comments, use this approach:
Pattern commentPattern = Pattern.compile("/\\*.*?\\*/", Pattern.DOTALL);
mySb = commentPattern.matcher(mySb).replaceAll("");
Hope this works for you.
You need to use a reluctant quantifier like this:
public class Main {
public static void main(String[] args) {
String s = "The matched string look something like /* comment */ real statement /* another comment*/";
System.err.println(s.replaceAll("/\\*.*?\\*/", ""));
}
}
Try the following approach:
String s = "/* comment */ select * from XYZ; /* comment */";
System.out.println(s.replaceAll("/\\*.*?\\*/", ""));
Outputs:
select * from XYZ;
The .*? stands for laziness instead of greediness (by default .* matches the largest string possible, i.e. it is greedy, so you have to configure it to be non-greedy using the ? in the .*? expression).
It won't work 100% of the time, though:
the comments can be part of a valid string literal specified in the SQL, and in that case they need to be kept...
I am still researching a solution... it seems to be complicated.
So far I have:
\G(?:[^']*?|'(?:[^']|'')*?'(?!'))*?\/\*.*?\*\/
but it matches everything while I need to match the comment only... and I just found out it could fail when preceded by a single-line comment... damn
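One common workaround for the string-literal problem, sketched below, is to match either a quoted literal or a block comment in a single alternation and keep the literal while dropping the comment (this does not attempt to handle -- line comments, which the file above reportedly does not contain):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SqlCommentStripper {
    // group 1 captures a single-quoted SQL string (with '' escapes); the other
    // alternative matches a block comment; keep the literal, drop the comment
    private static final Pattern LITERAL_OR_COMMENT =
            Pattern.compile("('(?:[^']|'')*')|/\\*.*?\\*/", Pattern.DOTALL);

    public static String stripBlockComments(String sql) {
        Matcher m = LITERAL_OR_COMMENT.matcher(sql);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String keep = (m.group(1) != null) ? m.group(1) : "";
            m.appendReplacement(sb, Matcher.quoteReplacement(keep));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}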
