public class Foo {
    private long id;
    private String name;
    private boolean isBar;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public boolean isBar() {
        return isBar;
    }

    public void setBar(boolean isBar) {
        this.isBar = isBar;
    }
}
@Component
public class FooDAO {

    private final JdbcTemplate jdbcTemplate;

    public FooDAO(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public List<Foo> findAll() {
        return jdbcTemplate.query("SELECT * FROM foo", new BeanPropertyRowMapper<>(Foo.class));
    }
}
When I set up a custom FooRowMapper and manually call setBar(rs.getBoolean("is_bar")), Foo.isBar is properly set to true when the db value is 1, but not when I use BeanPropertyRowMapper instead of the custom row mapper.
According to this, BeanPropertyRowMapper should properly convert 1 to true, so why isn't it doing so in my case?
p.s. I already figured out why but thought I'd post it in case it's helpful to anybody. I'm sure it won't take long for someone else to figure it out and post the answer.
I knew this:
Column values are mapped based on matching the column name as obtained from result set meta-data to public setters for the corresponding properties. The names are matched either directly or by transforming a name separating the parts with underscores to the same name using "camel" case.
But I got thrown off because my Foo.isBar property was the correct camel-case equivalent of my db field name (is_bar); however, my public setter was incorrectly named setBar, when it should have been setIsBar.
After googling I was also thrown off by others wanting to use BeanPropertyRowMapper to convert database values of Y/N to boolean values.
I had also assumed BeanPropertyRowMapper was actively setting the value to false, even though it wasn't: the property was never set at all, and false simply remained the default value of the boolean primitive.
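In other words, the fix was just renaming the setter so it matches the property name that BeanPropertyRowMapper derives from the is_bar column:

// was: public void setBar(boolean isBar) {...}
public void setIsBar(boolean isBar) {
    this.isBar = isBar;
}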
Another solution, if for whatever reason setBar rather than setIsBar were actually desired, would be to use a field alias in the SQL select statement, as the docs suggest:
To facilitate mapping between columns and fields that don't have matching names, try using column aliases in the SQL statement like "select fname as first_name from customer".
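Applied to the FooDAO above, the alias approach would look something like this (a sketch that keeps the original setBar setter and aliases the column to match the bar property):

public List<Foo> findAll() {
    // "is_bar AS bar" maps the column onto the "bar" property, i.e. the setBar setter
    return jdbcTemplate.query(
            "SELECT id, name, is_bar AS bar FROM foo",
            new BeanPropertyRowMapper<>(Foo.class));
}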
Below is our entity class
@Entity(defaultKeyspace = CASSANDRA_KEYSPACE)
@CqlName(CASSANDRA_TABLE)
public static class Scientist implements Serializable {

    @CqlName("person_name")
    public String name;

    @Computed("writetime(person_name)")
    @CqlName("name_ts")
    public Long nameTs;

    @CqlName("person_id")
    @PartitionKey
    public Integer id;

    public Scientist() {}

    public Scientist(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return id + ":" + name;
    }

    @Override
    public boolean equals(@Nullable Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        Scientist scientist = (Scientist) o;
        return id.equals(scientist.id) && Objects.equal(name, scientist.name);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(name, id);
    }
}

@Dao
public interface ScientistDao {
    @GetEntity
    MappedAsyncPagingIterable<Scientist> map(AsyncResultSet resultSet);

    @Delete
    CompletionStage<Void> deleteAsync(Scientist entity);

    @Insert
    CompletionStage<Void> saveAsync(Scientist entity);
}
The problem faced is that when the computed fields (in the above case, writetime(person_name)) are not selected as part of the query, the mapping fails.
In the 3.x driver, mapped fields that are not present in the ResultSet were ignored (link).
In the 4.x driver, for each entity field, the database table or UDT must contain a column with the corresponding name (link).
Please suggest a possible solution/workaround where this computed field can be part of the query on an as-needed basis and the mapping succeeds without throwing IllegalArgumentException.
Edit:
scientist table schema
CREATE TABLE beam_ks.scientist (person_id int PRIMARY KEY, person_name text);
Below is the query I tried:
select person_id,writetime(person_name) as name_ts from beam_ks.scientist where person_id=10
Mapping of the result set with @GetEntity fails with the error below:
Caused by: java.lang.IllegalArgumentException: person_name is not a column in this row
    at com.datastax.oss.driver.internal.core.cql.DefaultRow.firstIndexOf(DefaultRow.java:110)
    at com.datastax.oss.driver.api.core.data.GettableByName.get(GettableByName.java:144)
    at org.apache.beam.sdk.io.cassandra.CassandraIOTest_ScientistHelper__MapperGenerated.get(CassandraIOTest_ScientistHelper__MapperGenerated.java:89)
get method in CassandraIOTest_ScientistHelper__MapperGenerated:
@Override
public CassandraIOTest.Scientist get(GettableByName source) {
    CassandraIOTest.Scientist returnValue = new CassandraIOTest.Scientist();
    Integer propertyValue = source.get("person_id", Integer.class);
    returnValue.setId(propertyValue);
    String propertyValue1 = source.get("person_name", String.class);
    returnValue.setName(propertyValue1);
    return returnValue;
}
Also, the documentation does not specify whether to add getter and setter methods for computed values, so they were removed from the entity class.
When using @GetEntity methods, it is your responsibility to provide a result set object that is 100% compatible with the entity definition.
Here your Scientist entity contains two regular fields: person_id (integer) and person_name (text). Therefore your result set must contain (at least) two columns with these names and types.
But you said you provided the following query: select person_id,writetime(person_name) as name_ts from beam_ks.scientist where person_id=10.
This query does not contain the required columns. You should change your query to the one below, or something similar:
select person_id, person_name from beam_ks.scientist where person_id=10
Note that @GetEntity methods do not recognize computed values, only regular ones. It is not necessary to include writetime(person_name) as name_ts; it won't be mapped anyway.
I'm making my own Java class called SelectRequestBuilder in order to easily create SQL requests. There is a function addColumnToSelect which must take the column name as a parameter. The issue is that I want to make sure the column name specified by the user is in the table he wants to select information from.
So I thought that the type of the column_name parameter should be an enum, like so:
public enum USER_COLUMN {
    ID("id"),
    USERNAME("username"),
    PASSWORD("password");

    private final String name;

    USER_COLUMN(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return name;
    }
}
Then, in my function I could get the column name and I would be sure that the column name passed as a parameter is a valid one.
Yet, I got stuck when I wanted to extend this approach beyond the users table to every table. What I mean is that my SelectRequestBuilder must also be able to select values from another table, genders for example.
The reason this causes trouble is that my function can no longer take a column_name parameter of type USER_COLUMN, because that type is only for the users table.
In the end, my solution would be something like this:
private void addColumnToSelect(USER_COLUMN col) {
    addColumnToSelect(col.toString());
}

private void addColumnToSelect(GENDER_COLUMN col) {
    addColumnToSelect(col.toString());
}

private void addColumnToSelect(ROLE_COLUMN col) { // Role is another table
    addColumnToSelect(col.toString());
}

private void addColumnToSelect(String col_name) {...}
But this solution is not satisfying, in the sense that I must create another function for every table in the database. This is why I'm asking this question; I'd like your help finding a more satisfying solution! :)
Enums can implement interfaces; you can use that to your advantage:
interface DatabaseColumn {
    String columnName();
}

enum UserColumns implements DatabaseColumn {
    ID("id"),
    USERNAME("username"),
    PASSWORD("password");

    private final String name;

    UserColumns(String name) {
        this.name = name;
    }

    @Override
    public String columnName() {
        return name;
    }
}
Then other enums could implement the same interface, and your signature would become
private void addColumnToSelect(DatabaseColumn col) {
    addColumnToSelect(col.columnName());
}
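Other enums then implement the same interface. For example, a hypothetical enum for the genders table mentioned in the question (the column names here are illustrative):

enum GenderColumns implements DatabaseColumn {
    ID("id"),
    LABEL("label"); // hypothetical column name

    private final String name;

    GenderColumns(String name) {
        this.name = name;
    }

    @Override
    public String columnName() {
        return name;
    }
}

The single addColumnToSelect(DatabaseColumn) overload now accepts UserColumns.USERNAME, GenderColumns.LABEL, or any future table's enum, with no per-table overloads needed.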
I'm creating a simple REST application with dropwizard using JDBI. The next step is to integrate a new resource that has a one-to-many relationship with another one. So far I haven't been able to figure out how to create a method in my DAO that retrieves a single object holding a list of objects from another table.
The POJO representations would be something like this:
User POJO:
public class User {
    private int id;
    private String name;

    public User(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}
Account POJO:
public class Account {
    private int id;
    private String name;
    private List<User> users;

    public Account(int id, String name, List<User> users) {
        this.id = id;
        this.name = name;
        this.users = users;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public List<User> getUsers() {
        return users;
    }

    public void setUsers(List<User> users) {
        this.users = users;
    }
}
The DAO should look something like this:

public interface AccountDAO {
    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public Account getAccountById(@Bind("id") int id);
}
But when the method has a single object as its return value (Account instead of List<Account>), there seems to be no way to access more than one row of the resultSet in the Mapper class. The closest solution I could find is described at https://groups.google.com/d/msg/jdbi/4e4EP-gVwEQ/02CRStgYGtgJ, but that one also only returns a Set with a single object, which does not seem very elegant. (And it can't be properly used by the resource classes.)
There seems to be a way using a Folder2 in the fluent API. But I don't know how to integrate that properly with dropwizard and I'd rather stick to JDBI's SQL object API as recommended in the dropwizard documentation.
Is there really no way to get a one-to-many mapping using the SQL object API in JDBI? That is such a basic use case for a database that I think I must be missing something.
All help is greatly appreciated,
Tilman
OK, after a lot of searching, I see two ways of dealing with this:
The first option is to retrieve an object for each table and merge them in the Java code in the resource (i.e. do the join in the code instead of having it done by the database).
This would result in something like
@GET
@Path("/{accountId}")
public Response getAccount(@PathParam("accountId") Integer accountId) {
    Account account = accountDao.getAccount(accountId);
    account.setUsers(userDao.getUsersForAccount(accountId));
    return Response.ok(account).build();
}
This is feasible for smaller join operations but seems not very elegant to me, as this is something the database is supposed to do. However, I decided to take this path as my application is rather small and I did not want to write a lot of mapper code.
The second option is to write a mapper, that retrieves the result of the join query and maps it to the object like this:
public class AccountMapper implements ResultSetMapper<Account> {
    private Account account;

    // this mapping method will get called for every row in the result set
    public Account map(int index, ResultSet rs, StatementContext ctx) throws SQLException {
        // for the first row of the result set, we create the wrapper object
        if (index == 0) {
            account = new Account(rs.getInt("id"), rs.getString("name"), new LinkedList<User>());
        }
        // ...and with every line we add one of the joined users
        User user = new User(rs.getInt("u_id"), rs.getString("u_name"));
        if (user.getId() > 0) {
            account.getUsers().add(user);
        }
        return account;
    }
}
The DAO interface will then have a method like this:
public interface AccountDAO {
    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    public List<Account> getAccountById(@Bind("id") int id);
}
Note: Your abstract DAO class will quietly compile if you use a non-collection return type, e.g. public Account getAccountById(...);. However, your mapper will only receive a result set with a single row even if the SQL query would have found multiple rows, which your mapper will happily turn into a single account with a single user. JDBI seems to impose a LIMIT 1 for SELECT queries that have a non-collection return type. It is possible to put concrete methods in your DAO if you declare it as an abstract class, so one option is to wrap up the logic with a public/protected method pair, like so:
public abstract class AccountDAO {
    @Mapper(AccountMapper.class)
    @SqlQuery("SELECT Account.id, Account.name, User.id as u_id, User.name as u_name FROM Account LEFT JOIN User ON User.accountId = Account.id WHERE Account.id = :id")
    protected abstract List<Account> _getAccountById(@Bind("id") int id);

    public Account getAccountById(int id) {
        List<Account> accountList = _getAccountById(id);
        if (accountList == null || accountList.size() < 1) {
            // Log it or report error if needed
            return null;
        }
        // The mapper will have given a reference to the same value for every entry in the list
        return accountList.get(accountList.size() - 1);
    }
}
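Usage from a resource class is then straightforward (the wiring below is illustrative; dbi.onDemand is the usual way to obtain a SQL-object DAO instance in JDBI v2):

AccountDAO accountDao = dbi.onDemand(AccountDAO.class);
Account account = accountDao.getAccountById(accountId); // null if the account does not exist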
This still seems a little cumbersome and low-level to me, as there are usually a lot of joins in working with relational data. I would love to see a better way or having JDBI supporting an abstract operation for this with the SQL object API.
In JDBI v3, you can use @UseRowReducer to achieve this. The row reducer is called on every row of the joined result, which you can "accumulate" into a single object. A simple implementation in your case would look like:
public class AccountUserReducer implements LinkedHashMapRowReducer<Integer, Account> {

    @Override
    public void accumulate(final Map<Integer, Account> map, final RowView rowView) {
        final Account account = map.computeIfAbsent(rowView.getColumn("a_id", Integer.class),
                id -> rowView.getRow(Account.class));
        if (rowView.getColumn("u_id", Integer.class) != null) {
            account.addUser(rowView.getRow(User.class));
        }
    }
}
You can now apply this reducer on a query that returns the join:
@RegisterBeanMapper(value = Account.class, prefix = "a")
@RegisterBeanMapper(value = User.class, prefix = "u")
@SqlQuery("SELECT a.id a_id, a.name a_name, u.id u_id, u.name u_name FROM " +
        "Account a LEFT JOIN User u ON u.accountId = a.id WHERE " +
        "a.id = :id")
@UseRowReducer(AccountUserReducer.class)
Account getAccount(@Bind("id") int id);
Note that your User and Account row/bean mappers can remain unchanged; they simply know how to map an individual row of the user and account tables respectively. Your Account class will need a method addUser() that is called each time the row reducer is called.
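For reference, a minimal sketch of what that addUser() method on Account could look like (assuming the users list from the POJO earlier in this thread):

public void addUser(User user) {
    if (users == null) {
        users = new ArrayList<>(); // guard in case the mapper left the list null
    }
    users.add(user);
}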
I have a small library which can be very useful for maintaining one-to-many and one-to-one relationships.
It also provides more features for the default mappers.
https://github.com/Manikandan-K/jdbi-folder
There's an old google groups post where Brian McAllistair (one of the JDBI authors) does this by mapping each joined row to an interim object, then folding the rows into the target object.
See the discussion here. There's test code here.
Personally, this seems a little unsatisfying, since it means writing an extra DBO object and mapper for the interim structure. Still, I think this answer should be included for completeness!
I want to return all rows with only a subset of fields populated, i.e. the equivalent of the following console command, but using Spring's MongoTemplate class:
Console Command
db.person.find(null,{name:1})
MongoTemplate
mongoTemplate.find(new Query(...), Person.class)
Info on projection (subset) queries can be found in the MongoDB manual.
Query q = new Query();
q.fields().include("name");
mongoTemplate.find(q, Person.class);
mongoTemplate.getCollection(COLLECTION).find(null, new BasicDBObject(FIELD, 1))
You can use:
mongoTemplate.findDistinct(String field, Class<?> entityClass, Class<T> resultClass);
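For example, to fetch just the distinct values of a single field (a sketch assuming the Person class from the question):

// returns the distinct values of the name field, not full Person objects
List<String> names = mongoTemplate.findDistinct("name", Person.class, String.class);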
If the goal is to populate the standard domain object with just the subset of fields, using q.fields().include() as described in another answer is the way to go. However, I often find having the full object undesirable (a partially-populated object could easily mislead future developers reading the code), and I'd rather have an object with just the subset of fields I'm retrieving. In this case, creating and retrieving a projection object with just that subset works well.
Projection class
#Document("person") // Must be the same collection name used by Person
public class PersonNameOnly {
private String name;
public String getName() { return name; }
public void setName(String name) { this.name = name; }
}
MongoTemplate query
mongoTemplate.find(new Query(...), PersonNameOnly.class);
If you want to use the same projection object for multiple types, you can omit the @Document declaration with the collection name from the projection object, and specify the collection name in the MongoTemplate query.
Projection class
public class NameOnly {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
MongoTemplate query
mongoTemplate.find(new Query(...), NameOnly.class, "person");
This is my enum:
package enums;

public enum SessionType {
    SESSION_NORMAL(12), SESSION_PERFECT(5), SESSION_SOLO(1);

    private int value;

    private SessionType(int value) {
        this.setValue(value);
    }

    public void setValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public String toString() {
        return this.name();
    }
}
I've got a model class Session with an attribute type:
@Required
@Enumerated(EnumType.STRING)
@Column(name = "type")
private SessionType type;
And I would like to do a query like this:
Session.find("type.value = 1");
Regards.
You can't access the value inside the enum via a SQL query, but you could just use the ordinal value of the enumeration to store it in the database with the annotation:
@Enumerated(EnumType.ORDINAL)
That would store 0, 1 or 2 right now (ordinals are zero-based), but you can either remap your values to match the ordinals (so instead of 12, 5, 1 you use 0, 1, 2) or simply add some extra entries to the enumeration until the ordinals land on the values you want (if it's so important to the rest of the system that the values are 12, 5, 1).
By default the enum name is stored in the DB, unless you have some wrapper or something that saves the actual value.
Therefore your query should be something like the following:
Session.find("type='SESSION_SOLO'");