I'm using
jdbcTemplate to make JDBC connections to a MySQL DB
prepared statements to protect myself as much as possible from SQL injection attacks
I need to accept requests from the user to sort the data on any of a dozen different columns, so I tried the following statement:
jdbcTemplate.query("SELECT * FROM TABLE1 ORDER BY ? ?", colName, sortOrder);
Of course this doesn't work, because the variable bindings aren't supposed to specify column names, just parameter values for expressions in the query.
So... how are people solving this issue? Just doing the sort in Java code seems like an easy solution, but since I'm getting a variable string for the column to sort on, and a variable telling me the sort order, that's an ugly number of comparator conditions to cover. This seems like it should be a common problem with a common pattern to solve it...
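(As an aside: if you did fall back to sorting in Java, the comparator explosion can be avoided with a map from column name to Comparator. A minimal sketch, in which the Row record and its two columns are hypothetical stand-ins, not the real schema:)

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SortDemo {
    // Hypothetical row type standing in for one result-set row.
    record Row(String name, int year) {}

    // One comparator per sortable column; reversed() covers the sort order,
    // so there is no combinatorial explosion of conditions.
    private static final Map<String, Comparator<Row>> COMPARATORS = new HashMap<>();
    static {
        COMPARATORS.put("name", Comparator.comparing(Row::name));
        COMPARATORS.put("year", Comparator.comparingInt(Row::year));
    }

    static List<Row> sort(List<Row> rows, String col, boolean descending) {
        // Unknown column names fall back to a default instead of failing.
        Comparator<Row> c = COMPARATORS.getOrDefault(col, COMPARATORS.get("name"));
        List<Row> copy = new ArrayList<>(rows);
        copy.sort(descending ? c.reversed() : c);
        return copy;
    }
}
```

One map entry per sortable column replaces the nested if/else over column name and order.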
Placeholders (?) can only be used for parameter values, not for column names or sort order directions. So the standard way to do this, as pointed out e.g. here, is to use String#format() or something similar to append your column name and order value to your query.
Another option is to use Spring Data JPA, where you can pass your method an argument of type Sort, which carries all the information the database needs for sorting.
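For the String#format() route, the key is that only values from a fixed whitelist ever reach the SQL string. A minimal sketch; the table and column names here are assumptions for illustration, not the asker's real schema:

```java
import java.util.Set;

public class OrderByBuilder {
    // Whitelist of sortable columns; unknown input falls back to a default.
    private static final Set<String> SORTABLE = Set.of("fullName", "year", "likes");

    static String orderedQuery(String column, String direction) {
        String safeColumn = SORTABLE.contains(column) ? column : "fullName";
        String safeDirection = "DESC".equalsIgnoreCase(direction) ? "DESC" : "ASC";
        // Only whitelisted identifiers are ever concatenated into the SQL;
        // all actual values still go through ? placeholders as usual.
        return String.format("SELECT * FROM TABLE1 ORDER BY %s %s",
                safeColumn, safeDirection);
    }
}
```

The resulting string can then be handed to jdbcTemplate.query(...) with the remaining value parameters bound via ? as before.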
I would just concatenate the column name and the order to the SQL query, but only after
verifying that the column name and order are valid in this context, and
sanitizing them to counter any SQL injection attempt.
I feel this is more efficient than fetching the results into the application layer and sorting them there.
My suggestion is to map keys to columns. It's a safe solution.
At the beginning, we initialize our map in the simplest possible way. For convenience, I overrode the get(Object key) method to return the default column ("fullName") on a failed lookup. This protects against an SQLException.
static Map<String, String> sortCol;
static {
    sortCol = new HashMap<String, String>() {
        { // Enter all data for the mapping
            put("name", "fullName");
            put("rok", "year");
            put("rate", "likes");
            put("count-rate", "countRate");
        }

        /**
         * @param key key to look up a column name
         * @return the column name, otherwise the default "fullName"
         */
        @Override
        public String get(Object key) {
            String col = super.get(key);
            return null == col ? "fullName" : col;
        }
    };
}
Here is a simple example of use.
String sqlQuery = "SELECT fullName, year, likes, countRate, country ... " +
    "FROM blaBla ... " +
    "WHERE blaBla ... " +
    "ORDER BY " + sortCol.get(keySort) + "\n"; // keySort can have the value "name", "count-rate", etc.
By the way, you should never reveal the real names of columns in user-facing interfaces such as REST or SOAP; for an attacker, that is a great help.
I want to filter results by a specific value in the aggregated array in the query.
Here is a little description of the problem.
A Section belongs to a Garden, a Garden belongs to a District, and a District belongs to a Province.
Users have multiple sections; those sections belong to their gardens, the gardens to their districts, and the districts to a province.
I want to get the user ids that have the value 2 in the district array.
I tried to use the any operator, but it doesn't work properly (syntax error).
Any help would be appreciated.
P.S.: This is possible to write using plain SQL.
rs = dslContext.select(
field("user_id"),
field("gardens_array"),
field("province_array"),
field("district_array"))
.from(table(select(
arrayAggDistinct(field("garden")).as("gardens_array"),
arrayAggDistinct(field("province")).as("province_array"),
arrayAggDistinct(field("distict")).as("district_array"))
.from(table("lst.user"))
.leftJoin(table(select(
field("section.user_id").as("user_id"),
field("garden.garden").as("garden"),
field("garden.province").as("province"),
field("garden.distict").as("distict"))
.from(table("lst.section"))
.leftJoin("lst.garden")
.on(field("section.garden").eq(field("garden.garden")))
.leftJoin("lst.district")
.on(field("district.district").eq(field("garden.district")))).as("lo"))
.on(field("user.user_id").eq(field("lo.user_id")))
.groupBy(field("user.user_id"))).as("joined_table"))
.where(val(2).equal(DSL.any("district_array")))
.fetch()
.intoResultSet();
Your code is calling DSL.any(T...), which corresponds to the expression any(?) in PostgreSQL, where the bind value is a String[] in your case. But you don't want "district_array" to be a bind value; you want it to be a column reference. So either assign your arrayAggDistinct() expression to a local variable and reuse that, or reuse your field("district_array") expression or replicate it:
val(2).equal(DSL.any(field("district_array", Integer[].class)))
Notice that it's usually a good idea to be explicit about data types (e.g. Integer[].class) when working with the plain SQL templating API, or even better, use the code generator.
I'm using the InfluxDB Java client to communicate with my InfluxDB instance from a Java app.
I'm trying to accomplish something that doesn't appear to be documented anywhere inside the project. For a particular measurement, I need to list its latest (timestamp-wise) field key(s) and their values without knowing what they are called!
Hence let's say I have a measurement called FizzBuzz:
> SHOW measurements
name: measurements
name
----
FizzBuzz
I believe ORDER BY ASC is InfluxDB's default, meaning the last/latest measurement data is always the last to come back in the query. So I think I'm looking to run something like this from Java:
SELECT * FROM FizzBuzz ORDER BY DESC LIMIT 1;
However my app will not know what the field key(s) on that returned result will be, so I need a way (via the API) to inspect the returned record and obtain the names of any field keys (so I can save them as strings) and their respective values (which I can safely assume can be cast to BigDecimals).
My best attempt thus far:
Query query = new Query("SELECT * FROM FizzBuzz ORDER BY DESC LIMIT 1", "my-app-db");
QueryResult queryResult = connection.query(query);
for(Result result : queryResult.getResults()) {
for(Series series : result.getSeries()) {
List<String> cols = series.getColumns();
// But how to tell what's a field vs. what's a tag?!
}
}
However this doesn't allow me to discern which columns are fields vs. which ones are just tags...any ideas?
According to the documentation:
must_not — The clause (query) must not appear in the matching documents.
I have query like this:
// searching for URI which contains smart and doesn't contain vip.vs.csin.cz
BoolQueryBuilder builder = QueryBuilders.boolQuery();
builder.must(QueryBuilders.termQuery(URI, "smart"));
builder.mustNot(QueryBuilders.termQuery(URI, "vip.vs.csin.cz"));
There are two URIs in my Elasticsearch repository:
1)
/smart-int-vip.vs.csin.cz:5080/smart/api/runtime/case/SC0000000000558648/record/generate/4327/by/SMOBVA002/as/true?espisRecordForm=ANALOG&accountNumber=2318031033/0800
2)
/smart/api/runtime/case/SC0000000000558648/record/generate/4327/by/SMOBVA002/as/true?espisRecordForm=ANALOG&accountNumber=2318031033/0800
When I execute the query via ElasticsearchTemplate
elasticsearchTemplate.getClient().search(searchRequest);
I get back 0 records. When I execute the same query without the mustNot clause, I get back 2 records.
In kibana I can write:
uri: "smart" NOT uri: "vip.vs.csin.cz"
And get 1 record as expected.
I was expecting the same behaviour from the Java Elasticsearch client. How can I filter out records which contain "vip.vs.csin.cz" from Java, and why did it filter the second record even though it doesn't contain anything from the mustNot clause I specified?
Edit: here's my mapping
@Document(indexName = "audit-2018", type = "audit")
public class Trace {
    @Id
    private String id;
    @Field(type = FieldType.Text)
    private String uri;
    // more columns, getters & setters
}
The Java code you've provided shows a bool query using the must and must_not clauses, in which you are doing term queries. The thing about term queries is that they are subject to the analyzer on your fields. The standard analyzer for text fields (which is the data type of your uri field) removes all punctuation (in other words, the dots in your value) and splits the value up: vip.vs.csin.cz becomes vip vs csin cz. The text field type should be reserved for full-text search only; in your case I would go for the keyword field type. The reason your Kibana query works as expected is that it is not actually a term query, but a query_string query containing a Lucene query: uri: "smart" NOT uri: "vip.vs.csin.cz".
So you have a couple of options to fix your problem. You could change your term queries to match_phrase queries, which retain the order of your tokenized terms and would probably net the correct result. An alternative is to do a query_string query instead of a term query in your Java code, since you have already determined that this gives you the correct result.
My proposed solution, however, would be to reindex with uri being of field type keyword, since this field type does not tokenize your field values into multiple terms. This will save you headaches in the future, since you know your queries are matching your field values exactly "as is".
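To see why the term query can't match, the standard analyzer's behaviour on the uri value can be approximated (roughly; this is not the real Lucene analyzer) with a plain split on punctuation:

```java
import java.util.Arrays;
import java.util.List;

public class AnalyzerDemo {
    // Rough stand-in for the standard analyzer: lowercase, then split on
    // anything that is not a word character, so dots become token breaks.
    static List<String> analyze(String value) {
        return Arrays.asList(value.toLowerCase().split("\\W+"));
    }
}
```

analyze("vip.vs.csin.cz") yields the tokens [vip, vs, csin, cz], so the inverted index of a text field never contains the single term vip.vs.csin.cz, and an unanalyzed term query for it has nothing to match against.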
I have to work with a POJO "Order" that has 8 fields, each of which is a column in the "order" table. The DB schema is denormalized (and, worse, deemed final and unchangeable), so now I have to write a search module that can execute a search with any combination of those 8 fields.
Are there any established approaches for this? Right now I take the input in a new POJO and go through eight IF statements looking for values that are not NULL. Each time I find such a value, I add it to the WHERE condition in my SELECT statement.
Is this the best I can hope for? Or is it arguably better to select on some minimum of criteria and then iterate over the returned collection in memory, keeping only the entries that match the remaining criteria? I can provide pseudocode if that would be useful. Working with Java 1.7, JSF 2.2 and MySQL.
Each time I find such a value I add it to the WHERE condition in my SELECT statement.
This is a prime target for SQL injection attacks!
Would something like the following work with MySQL?
SELECT *
FROM SomeTable
WHERE (@param1 IS NULL OR SomeTable.SomeColumn1 = @param1) AND
      (@param2 IS NULL OR SomeTable.SomeColumn2 = @param2) AND
      (@param3 IS NULL OR SomeTable.SomeColumn3 = @param3) AND
      /* .... */
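If you'd rather keep the query building in Java, the eight IF statements are fine as long as each criterion appends a ? placeholder and the value is bound separately, so user input never lands in the SQL string. A sketch with hypothetical column names:

```java
import java.util.List;

public class SearchQueryBuilder {
    // Builds e.g. "SELECT * FROM `order` WHERE status = ? AND customer = ?"
    // plus the matching parameter list; values never enter the SQL text.
    // Column names must come from code, never from user input.
    static String build(List<String> columns, List<Object> values,
                        List<Object> paramsOut) {
        StringBuilder sql = new StringBuilder("SELECT * FROM `order`");
        String glue = " WHERE ";
        for (int i = 0; i < columns.size(); i++) {
            if (values.get(i) != null) {            // skip unused criteria
                sql.append(glue).append(columns.get(i)).append(" = ?");
                paramsOut.add(values.get(i));       // bound via PreparedStatement
                glue = " AND ";
            }
        }
        return sql.toString();
    }
}
```

The resulting SQL and parameter list can then be passed to a PreparedStatement or to JdbcTemplate, which binds the values safely.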
Using com.netflix.astyanax, I add entries for a given row as follows:
final ColumnListMutation<String> columnList = m.withRow(columnFamily, key);
columnList.putEmptyColumn(columnName);
Later I retrieve all my columns with:
final OperationResult<ColumnList<String>> operationResult = keyspace
.prepareQuery(columnFamily).getKey(key).execute();
operationResult.getResult().getColumnNames();
The following correctly returns all the columns I have added, but the columns are not ordered according to when they were entered in the database. Since each column has a timestamp associated with it, there ought to be a way to do exactly this, but I don't see it. Is there?
Note: if there isn't, I can always change the code above to:
columnList.putColumn(ip, new Date());
and then retrieve the column values and order them accordingly, but that seems cumbersome, inefficient, and silly, since each column already has a timestamp.
I know from PlayOrm that if you do column slices, it returns those in order. In fact, PlayOrm uses that to enable S-SQL in partitions, and it basically batches the column slicing, which comes back in order or reverse order depending on how it is requested. You may want to do a column slice from 0 to Long.MAX_VALUE.
I am not sure about getting the row though; I haven't tried that.
Oh, and PlayOrm is just a mapping layer on top of Astyanax, though not really relational and more NoSQL-ish really, as demonstrated by its patterns page:
http://buffalosw.com/wiki/Patterns-Page/
Cassandra will never order your columns in "insertion order".
Columns are always ordered lowest first. How Cassandra interprets your column names also matters; you define the interpretation with the comparator you set when defining your column family.
From what you gave, it looks like you use string timestamp values. If you simply serialize your timestamps as e.g. "123141" and "231", be aware that with a UTF8Type comparator, "231" > "123141".
A better approach: use time-based UUIDs as column names, as many examples for time-series data in Cassandra propose. Then you can use the UUIDType comparator.
CREATE COLUMN FAMILY timeseries_data
WITH comparator = UUIDType
AND key_validation_class=UTF8Type;
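The UTF8Type pitfall described above is ordinary lexicographic string comparison, which is easy to demonstrate in plain Java:

```java
public class ComparatorPitfall {
    public static void main(String[] args) {
        // Lexicographically, "231" sorts after "123141" because '2' > '1',
        // even though 231 is numerically far smaller than 123141.
        System.out.println("231".compareTo("123141") > 0); // true
        System.out.println(231L < 123141L);                // true
    }
}
```

Zero-padding timestamps to a fixed width would also restore the intended ordering under a string comparator, but time-based UUID column names avoid the issue entirely.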