I'm working on some web services, which I'm implementing with Hibernate. The issue is that I need to access the names of the entity tables (say, the one declared in @Table(name = "Table_A")).
The code is generated, so I cannot alter the entity classes themselves to give me what I need.
This is what I was doing so far:
public static
<T extends Entity<K>, K extends Serializable>
String getTableName(Class<T> objClass) {
    Table table = objClass.getAnnotation(Table.class);
    return (table != null) ? table.name() : null;
}
However, after some research I discovered that reflection is not the best choice performance-wise, and since this method will be called a lot I'm looking for another way to go about it.
I'm trying to follow the advice given here:
Performance of calling Method/Field.getAnnotation(Class) several times vs. Pre-caching this data in a Map.
This is what I came up with:
public final class EntityUtils {

    // Note: a plain HashMap is not thread-safe if this is called from multiple threads.
    private static final HashMap<String, String> entityCache = new HashMap<String, String>();

    private EntityUtils() {
    }

    public static
    <T extends Entity<K>, K extends Serializable>
    String getTableName_New(Class<T> objClass) {
        String tableName = entityCache.get(objClass.getCanonicalName());
        if (tableName == null) {
            Table table = objClass.getAnnotation(Table.class);
            if (table != null) {
                tableName = table.name();
                entityCache.put(objClass.getCanonicalName(), tableName);
            }
        }
        return tableName;
    }
}
However, I'm not sure about this. Is it a good idea to cache reflection data in a static map? Is there any alternative way to accomplish this?
Ideally, I would use a Guava cache with weak keys; that way you're not keeping any strong references to the Class objects, in case you use some advanced ClassLoader magic.
LoadingCache<Class<?>, String> tableNames = CacheBuilder.newBuilder()
        .weakKeys()
        .build(new CacheLoader<Class<?>, String>() {
            @Override
            public String load(Class<?> key) {
                Table table = key.getAnnotation(Table.class);
                return table == null ? key.getSimpleName() : table.name();
            }
        });
Usage:
String tableName = tableNames.getUnchecked(yourClass); // get() also works, but it throws a checked ExecutionException
See: Caches Explained
On the other hand, annotations are already cached by the JDK itself, so your previous approach is probably fine.
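If you do stay with a plain static map, here is a minimal thread-safe sketch (assuming the JPA @Table annotation and no exotic ClassLoader requirements; the class name and the fallback to getSimpleName are my own choices):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.persistence.Table;

public final class EntityTableNames {

    // Keyed by the Class object itself. Note this keeps a strong reference to
    // each class, which only matters if you reload classes dynamically.
    private static final ConcurrentMap<Class<?>, String> CACHE =
            new ConcurrentHashMap<Class<?>, String>();

    private EntityTableNames() {}

    public static String getTableName(Class<?> entityClass) {
        String tableName = CACHE.get(entityClass);
        if (tableName == null) {
            Table table = entityClass.getAnnotation(Table.class);
            tableName = (table != null) ? table.name() : entityClass.getSimpleName();
            CACHE.put(entityClass, tableName); // a racy duplicate put is harmless here
        }
        return tableName;
    }
}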
Related
I am working on a Java application with Spring and a JDBC connection. I did not write the application myself and I am fairly new to some of the frameworks involved and their implications.
I traced the path of how SQL statements are passed around, but I am stuck at a point where I cannot find any details on a method that is called and the class it belongs to.
The class it belongs to is imported via import de.dit.icr.frontend.ProxyBean;
Basically, all queries are contained in a single "query.xml" file that is passed into a "ProxyBean" object; the bean is then fed the parameter map, and a "getStatement()" method is called that returns the prepared query string.
What I would like to do is split query.xml into individual SQL files (one per query) and implement a new method to replace proxyBean.getStatement(), one that would still take the parameter map and the query name and prepare the statement.
In order to do that, I could use your insight on two points:
- Where does this ProxyBean class come from? From an external library, and if so, which one?
- Which method or library could I use to build an SQL statement string from an SQL file and a parameter map?
Thanks a lot for your help!
Here is a simplified view of the code:
import de.dit.icr.frontend.ProxyBean;
import de.dit.icr.util.Url;
import de.dit.itsales.dbi.util.DBUtil;
import de.dit.itsales.dbi.util.Log;
public class IASAdapterImpl implements IASAdapter {

    private static final String[] QUERY_FILES = {"query.xml"};
    private static final String RELATIVE_PATH_TO_QUERY_FILE = "queries";

    public IASAdapterImpl() {
        init();
    }

    private void init() {
        String key = "queries";
        String pathToQueryFile = DBUtil.getInstance().getConfigDir() + RELATIVE_PATH_TO_QUERY_FILE;
        Url.register(key, pathToQueryFile);
        createProxyBean();
    }

    public synchronized String getQuery(String queryName, Map<String, String> queryVariables, boolean resolveNames) {
        ProxyBean proxyBean = createProxyBean();
        setParameter(queryVariables, proxyBean);
        String temp = proxyBean.getStatement(queryName, resolveNames);
        return temp;
    }

    private ProxyBean createProxyBean() {
        ProxyBean bean = new ProxyBean();
        for (int i = 0; i < QUERY_FILES.length; i++) {
            bean.setQuerySet(QUERY_FILES[i]);
        }
        return bean;
    }

    private void setParameter(Map<String, String> map, ProxyBean bean) {
        if (map == null || map.isEmpty()) {
            return;
        }
        for (Map.Entry<String, String> entry : map.entrySet()) {
            String key = entry.getKey();
            bean.set(key, entry.getValue());
        }
    }
}
Sample of query.xml:
<query name = "alle-fonds"><![CDATA[
select fondsnummer,
spokid,
mfnummer,
fondsname,
bewertungsdatum,
letztebewertung,
hauptfonds,
performancegraphrelevant
from rep.v_alle_fonds where anwender_login = '${loginname}'
and sprache = lower('${sprache}')
order by mfnummer
]]></query>
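For what it's worth, here is a minimal, hypothetical sketch of the replacement described above: one .sql file per query and a method that substitutes ${name} placeholders from the parameter map. The directory layout, file naming and placeholder syntax are assumptions based on the query.xml sample, and plain string substitution keeps the same SQL-injection exposure as the original, so real bind parameters (e.g. Spring's NamedParameterJdbcTemplate) would be safer where possible.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;

public class SqlFileQueryLoader {

    private final String queryDir; // e.g. DBUtil.getInstance().getConfigDir() + "queries"

    public SqlFileQueryLoader(String queryDir) {
        this.queryDir = queryDir;
    }

    // Loads "<queryDir>/<queryName>.sql" and replaces ${key} placeholders,
    // mirroring the ${loginname}/${sprache} style used in query.xml.
    public String getStatement(String queryName, Map<String, String> params) throws IOException {
        byte[] raw = Files.readAllBytes(Paths.get(queryDir, queryName + ".sql"));
        String sql = new String(raw, StandardCharsets.UTF_8);
        if (params != null) {
            for (Map.Entry<String, String> entry : params.entrySet()) {
                sql = sql.replace("${" + entry.getKey() + "}", entry.getValue());
            }
        }
        return sql;
    }
}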
I am working on measuring my application metrics using the class below, in which I increment and decrement metrics.
public class AppMetrics {
private final AtomicLongMap<String> metricCounter = AtomicLongMap.create();
private static class Holder {
private static final AppMetrics INSTANCE = new AppMetrics();
}
public static AppMetrics getInstance() {
return Holder.INSTANCE;
}
private AppMetrics() {}
public void increment(String name) {
metricCounter.getAndIncrement(name);
}
public AtomicLongMap<String> getMetricCounter() {
return metricCounter;
}
}
I am calling the increment method of the AppMetrics class from multithreaded code, passing the metric name, to increment the metrics.
Problem Statement:
Now I want to have a metricCounter for each clientId, which is a String. The same clientId can arrive multiple times, and sometimes it will be a new clientId, so I need to look up the metricCounter map for that clientId and increment the metrics on that particular map (and that is the part I am not sure how to do).
What is the right way to do that, keeping in mind it has to be thread safe and perform atomic operations? I was thinking of using a map like this instead:
private final Map<String, AtomicLongMap<String>> clientIdMetricCounterHolder = Maps.newConcurrentMap();
Is this the right way? If so, how can I populate this map so that the clientId is its key and its value is the counter map for that client's metrics?
I am on Java 7.
If you use a map then you'll need to synchronize the creation of new AtomicLongMap instances yourself. I would recommend using a LoadingCache instead. You might not end up using any of the actual "caching" features, but the "loading" feature is extremely helpful, as it will synchronize the creation of AtomicLongMap instances for you, e.g.:
LoadingCache<String, AtomicLongMap<String>> clientIdMetricCounterCache =
CacheBuilder.newBuilder().build(new CacheLoader<String, AtomicLongMap<String>>() {
@Override
public AtomicLongMap<String> load(String key) throws Exception {
return AtomicLongMap.create();
}
});
Now you can safely start updating metric counts for any client without worrying about whether the client is new or not, e.g.
clientIdMetricCounterCache.get(clientId).incrementAndGet(metricName);
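For illustration, here is a sketch of how the existing AppMetrics singleton might be adapted to that approach; the two-argument increment method and the field name are my assumptions about how you would want to call it:
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.AtomicLongMap;

public class AppMetrics {

    // One AtomicLongMap of metric counters per clientId; the loader creates the
    // per-client map atomically the first time a clientId is seen.
    private final LoadingCache<String, AtomicLongMap<String>> clientCounters =
            CacheBuilder.newBuilder().build(new CacheLoader<String, AtomicLongMap<String>>() {
                @Override
                public AtomicLongMap<String> load(String clientId) {
                    return AtomicLongMap.create();
                }
            });

    private static class Holder {
        private static final AppMetrics INSTANCE = new AppMetrics();
    }

    public static AppMetrics getInstance() {
        return Holder.INSTANCE;
    }

    private AppMetrics() {}

    public void increment(String clientId, String metricName) {
        clientCounters.getUnchecked(clientId).getAndIncrement(metricName);
    }

    public AtomicLongMap<String> getMetricCounter(String clientId) {
        return clientCounters.getUnchecked(clientId);
    }
}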
A Map<String, Map<String, T>> is just a Map<Pair<String, String>, T> in disguise. Create a MultiKey class:
class MultiKey {
public String clientId;
public String name;
// be sure to add hashCode and equals
}
Then just use an AtomicLongMap<MultiKey>.
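For completeness, a fuller version of that key class with the equals/hashCode it needs (written for Java 7, which already provides java.util.Objects):
import java.util.Objects;

final class MultiKey {

    private final String clientId;
    private final String name;

    MultiKey(String clientId, String name) {
        this.clientId = clientId;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MultiKey)) return false;
        MultiKey other = (MultiKey) o;
        return Objects.equals(clientId, other.clientId) && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(clientId, name);
    }
}
Incrementing then becomes metrics.getAndIncrement(new MultiKey(clientId, metricName)); on a single shared AtomicLongMap<MultiKey>.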
Edited:
Provided the set of metrics is well defined, it wouldn't be too hard to use this data structure to view metrics for one client:
Set<String> possibleMetrics = // all the possible values for "name"
Map<String, Long> getMetricsForClient(String client) {
    return Maps.asMap(possibleMetrics, m -> metrics.get(new MultiKey(client, m)));
}
The returned map will be a live unmodifiable view. It might be a bit more verbose if you're using an older Java version, but it's still possible.
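Since the question mentions Java 7, here is roughly the same view without the lambda, using Guava's Function (this assumes metrics is the shared AtomicLongMap<MultiKey> and that com.google.common.base.Function and com.google.common.collect.Maps are imported):
Map<String, Long> getMetricsForClient(final String client) {
    return Maps.asMap(possibleMetrics, new Function<String, Long>() {
        @Override
        public Long apply(String metricName) {
            return metrics.get(new MultiKey(client, metricName));
        }
    });
}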
I'm using Hibernate and QueryDSL along with PostgreSQL in a Spring application, and I'm facing some performance issues with my filtered lists. Using the StringPath class, I'm calling either startsWithIgnoreCase, endsWithIgnoreCase or containsIgnoreCase.
It appears the generated query has the following where clause:
WHERE lower(person.firstname) LIKE ? ESCAPE '!'
Because of the lower(), the query does not take advantage of the Postgres indexes. On a dev database, queries take up to 1 second instead of 10 ms with the ILIKE keyword.
Is there a way to get a Predicate using Postgres' ILIKE, as Ops doesn't seem to provide it?
Thanks
I've got exactly the same issue: lower(column) throws off the Postgres statistics, the query gets planned inefficiently, and ilike solves the problem. I couldn't work out which parts of the OP's answer were relevant to the solution, so I reinvented the same approach, just a bit shorter.
Introduce a new dialect with a my_ilike function and its implementation:
public class ExtendedPostgresDialect extends org.hibernate.dialect.PostgreSQL9Dialect {
public ExtendedPostgresDialect() {
super();
registerFunction("my_ilike", new SQLFunctionTemplate(BooleanType.INSTANCE, "(?1 ilike ?2)"));
}
}
Specify this dialect to be used by Hibernate (I use Java config):
Properties props = new Properties();
props.setProperty("hibernate.dialect", "com.example.ExtendedPostgresDialect");
factory.setJpaProperties(props);
That's it, now you can use it:
BooleanTemplate.create("function('my_ilike', {0}, {%1%})", stringPath, value).isTrue();
I had to update this:
We found a way to create the needed Postgres operators by registering a SQL function using ilike, in our custom Hibernate Dialect.
Example with ilike :
//Postgres Constants Operators
public class PostgresOperators {
private static final String NS = PostgresOperators.class.getName();
public static final Operator<Boolean> ILIKE = new OperatorImpl<>(NS, "ILIKE");
}
//Custom JPQLTemplates
public class PostgresTemplates extends HQLTemplates {
public static final PostgresTemplates DEFAULT = new PostgresTemplates();
public PostgresTemplates() {
super();
add(PostgresOperators.ILIKE, "my_ilike({0},{1})");
}
}
Specify the JPQLTemplates when creating the JPAQuery:
new JPAQuery(entityManager, PostgresTemplates.DEFAULT);
Now it gets tricky: we couldn't use ilike directly, because there is an issue with an "ilike" keyword already being registered, so we made an ilike function and registered it in a custom Hibernate dialect for Spring.
Our application.yml specifies:
# See http://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html
spring.jpa.properties.hibernate.dialect: com.example.customDialect.config.database.ExtendedPostgresDialect
Then
public class ExtendedPostgresDialect extends org.hibernate.dialect.PostgreSQL82Dialect {
public ExtendedPostgresDialect() {
super();
registerFunction("my_ilike", new PostgreSQLIlikeFunction());
}
}
We tried to use registerKeyword("ilike"), but that didn't work, so we stayed with our function and the following implementation.
public class PostgreSQLIlikeFunction implements SQLFunction {
@Override
public Type getReturnType(Type columnType, Mapping mapping)
throws QueryException {
return new BooleanType();
}
@SuppressWarnings("unchecked")
@Override
public String render(Type firstArgumentType, List args, SessionFactoryImplementor factory) throws QueryException {
if (args.size() != 2) {
throw new IllegalArgumentException(
"The function must be passed 2 arguments");
}
String str1 = (String) args.get(0);
String str2 = (String) args.get(1);
return str1 + " ilike " + str2;
}
@Override
public boolean hasArguments() {
return true;
}
@Override
public boolean hasParenthesesIfNoArguments() {
return false;
}
}
That's pretty much it; now we can use ILIKE the following way:
BooleanOperation.create(PostgresOperators.ILIKE, expression1, expression2).isTrue()
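If it helps, the operator can also be wrapped in a small convenience method that mimics containsIgnoreCase by adding the wildcards itself; the helper name and the use of Expressions.constant for the pattern are illustrative, not part of the original setup:
// Hypothetical convenience wrapper around the custom ILIKE operator shown above.
public static BooleanExpression iContains(StringPath path, String value) {
    return BooleanOperation.create(PostgresOperators.ILIKE,
            path, Expressions.constant("%" + value + "%")).isTrue();
}
It can then be used in a where clause like any other predicate, e.g. query.where(iContains(person.firstname, searchTerm)).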
I'm in the process of creating a front end for a database-driven application and could do with some advice. I have the following basic entities in my database:
aspect
aspect_value
As you can imagine, there can be many aspect values for each aspect, so that when a user records an aspect they can select more than one value per aspect... simple.
I've created a POJO entity to model each aspect, and my question is this: using Spring and JdbcTemplate, how would I create the desired composite relationship using org.springframework.jdbc.core.RowMapper, i.e. each aspect object containing one or more aspect value objects? For that matter, I would appreciate it if you could let me know whether this is the best way to do it. I'm keen at some point to delve deeper into ORM, but I've been put off so far by the number of issues I've encountered, which has slowed down my development and led to the decision to use JdbcTemplate instead.
Thanks
You can use a RowMapper if you are storing the aspect_values in your aspect object as objects. Each call to the RowMapper returns an object, so you'll end up with a collection of aspect_values. If you need to build an aspect object (or objects) with values contained in the aspect_value table, then a ResultSetExtractor is the better choice.
Here are my examples, as promised. I have to type these in by hand because our development network is internal-only, so any typos are transcription errors and not errors in the code. These are abbreviated versions of inner classes in my DAO:
This maps a single row in the ResultSet to an object:
public List<MessageSummary> getMessages(Object[] params)
{
// mList is filled with objects created in MessageRowMapper,
// so the length of the list is equal to the number of rows in the ResultSet
List<MessageSummary> mList = jdbcTemplate.query(sqlStr, new MessageRowMapper(),
params);
return mList;
}
private final class MessageRowMapper implements RowMapper<MessageSummary>
{
@Override
public MessageSummary mapRow(ResultSet rs, int i) throws SQLException
{
MessageSummary ms = new MessageSummary();
ms.setId(rs.getInt("id"));
ms.setMessage(rs.getString("message"));
return ms;
}
}
ResultSetExtractor works on the same idea except you map the entire set yourself instead of just converting a row into an object. This is useful when your object has attributes from multiple rows.
public Map<Integer, List<String>> getResults(Object[] params)
{
Map<Integer, List<String>> result = jdbcTemplate.query(sqlStr, new ResultExtractor(),
params);
return result;
}
private final class ResultExtractor implements ResultSetExtractor<Map<Integer, List<String>>>
{
@Override
public Map<Integer, List<String>> extractData(ResultSet rs)
throws SQLException, DataAccessException
{
Map<Integer, List<String>> resultMap = new HashMap<Integer, List<String>>();
while (rs.next())
{
int id = rs.getInt("id");
List<String> nameList = resultMap.get(id);
if (nameList == null)
{
nameList = new ArrayList<String>();
resultMap.put(id, nameList);
}
nameList.add(rs.getString("name"));
}
return resultMap;
}
}
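Applied to the aspect/aspect_value case from the question, a ResultSetExtractor over a join could look roughly like this; the Aspect/AspectValue classes and the column names are assumptions about your schema:
private final class AspectExtractor implements ResultSetExtractor<List<Aspect>>
{
    @Override
    public List<Aspect> extractData(ResultSet rs) throws SQLException, DataAccessException
    {
        // Assumes a query joining aspect to aspect_value on aspect_id.
        Map<Integer, Aspect> aspects = new LinkedHashMap<Integer, Aspect>();
        while (rs.next())
        {
            int aspectId = rs.getInt("aspect_id");
            Aspect aspect = aspects.get(aspectId);
            if (aspect == null)
            {
                aspect = new Aspect(aspectId, rs.getString("aspect_name"));
                aspects.put(aspectId, aspect);
            }
            aspect.getValues().add(new AspectValue(rs.getInt("value_id"), rs.getString("value")));
        }
        return new ArrayList<Aspect>(aspects.values());
    }
}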
The RowMapper interface provides a method
Object mapRow(ResultSet rs, int rowNum) throws SQLException
You implement this method in a class and provide code to populate your entity object with the values held in the current row of the ResultSet rs. To obtain the result set itself from the database, you can use this JdbcTemplate.query overload:
<T> List<T> query(String sql, RowMapper<T> rowMapper)
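For example, a minimal usage sketch (the Aspect entity and the column names are placeholders):
List<Aspect> aspects = jdbcTemplate.query(
        "select id, name from aspect",
        new RowMapper<Aspect>() {
            @Override
            public Aspect mapRow(ResultSet rs, int rowNum) throws SQLException {
                return new Aspect(rs.getInt("id"), rs.getString("name"));
            }
        });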
Which ORM supports a domain model of immutable types?
I would like to write classes like the following (or the Scala equivalent):
class A {
private final C c; //not mutable
A(B b) {
//init c
}
A doSomething(B b) {
// build a new A
}
}
The ORM has to initialize the object through the constructor, so it is possible to check invariants in the constructor. A default constructor plus field/setter access for initialization is not sufficient and complicates the class's implementation.
Working with collections should be supported. If a collection is changed, it should create a copy from the user's perspective (rendering the old collection state stale, though user code can still work on, or at least read, it), much like persistent data structures work.
Some words about the motivation: suppose you have an FP-style domain object model and you want to persist it to a database. How do you do that? You want to do as much as you can in a pure functional style until the evil side effects come in. If your domain object model is not immutable you cannot, for example, share the objects between threads; you have to copy, cache or use locks. So unless your ORM supports immutable types, you're constrained in your choice of solution.
UPDATE: I created a project focused on solving this problem called JIRM:
https://github.com/agentgt/jirm
I just found this question after implementing my own solution using Spring JDBC and Jackson's ObjectMapper. Basically I just needed some bare-minimum SQL <-> immutable object mapping.
In short, I just use Spring's RowMapper and Jackson's ObjectMapper to map objects back and forth from the database. I use JPA annotations just for metadata (like the column name, etc.). If people are interested I will clean it up and put it on GitHub (right now it's only in my startup's private repo).
Here is a rough idea of how it works. Here is an example bean (notice how all the fields are final):
//skip imports for brevity
public class TestBean {
@Id
private final String stringProp;
private final long longProp;
@Column(name="timets")
private final Calendar timeTS;
@JsonCreator
public TestBean(
#JsonProperty("stringProp") String stringProp,
#JsonProperty("longProp") long longProp,
#JsonProperty("timeTS") Calendar timeTS ) {
super();
this.stringProp = stringProp;
this.longProp = longProp;
this.timeTS = timeTS;
}
public String getStringProp() {
return stringProp;
}
public long getLongProp() {
return longProp;
}
public Calendar getTimeTS() {
return timeTS;
}
}
Here is what the RowMapper looks like (notice it mainly delegates to Spring's ColumnMapRowMapper and then uses Jackson's ObjectMapper):
public class SqlObjectRowMapper<T> implements RowMapper<T> {
private final SqlObjectDefinition<T> definition;
private final ColumnMapRowMapper mapRowMapper;
private final ObjectMapper objectMapper;
public SqlObjectRowMapper(SqlObjectDefinition<T> definition, ObjectMapper objectMapper) {
super();
this.definition = definition;
this.mapRowMapper = new SqlObjectMapRowMapper(definition);
this.objectMapper = objectMapper;
}
public SqlObjectRowMapper(Class<T> k) {
this(SqlObjectDefinition.fromClass(k), new ObjectMapper());
}
@Override
public T mapRow(ResultSet rs, int rowNum) throws SQLException {
Map<String, Object> m = mapRowMapper.mapRow(rs, rowNum);
return objectMapper.convertValue(m, definition.getObjectType());
}
}
Now I just took Spring's JdbcTemplate and gave it a fluent wrapper. Here are some examples:
@Before
public void setUp() throws Exception {
dao = new SqlObjectDao<TestBean>(new JdbcTemplate(ds), TestBean.class);
}
@Test
public void testAll() throws Exception {
TestBean t = new TestBean(IdUtils.generateRandomUUIDString(), 2L, Calendar.getInstance());
dao.insert(t);
List<TestBean> list = dao.queryForListByFilter("stringProp", "hello");
List<TestBean> otherList = dao.select().where("stringProp", "hello").forList();
assertEquals(list, otherList);
long count = dao.select().forCount();
assertTrue(count > 0);
TestBean newT = new TestBean(t.getStringProp(), 50, Calendar.getInstance());
dao.update(newT);
TestBean reloaded = dao.reload(newT);
assertTrue(reloaded != newT);
assertTrue(reloaded.getStringProp().equals(newT.getStringProp()));
assertNotNull(list);
}
@Test
public void testAdding() throws Exception {
//This will do a UPDATE test_bean SET longProp = longProp + 100
int i = dao.update().add("longProp", 100).update();
assertTrue(i > 0);
}
@Test
public void testRowMapper() throws Exception {
List<Crap> craps = dao.query("select string_prop as name from test_bean limit ?", Crap.class, 2);
System.out.println(craps.get(0).getName());
craps = dao.query("select string_prop as name from test_bean limit ?")
.with(2)
.forList(Crap.class);
Crap c = dao.query("select string_prop as name from test_bean limit ?")
.with(1)
.forObject(Crap.class);
Optional<Crap> absent
= dao.query("select string_prop as name from test_bean where string_prop = ? limit ?")
.with("never")
.with(1)
.forOptional(Crap.class);
assertTrue(! absent.isPresent());
}
public static class Crap {
private final String name;
@JsonCreator
public Crap(@JsonProperty("name") String name) {
super();
this.name = name;
}
public String getName() {
return name;
}
}
Notice in the above how easy it is to map any query onto immutable POJOs; that is, you don't need a 1-to-1 mapping of entity to table. Also notice the use of Guava's Optional (last query, scroll down). I really hate how ORMs either throw exceptions or return null.
Let me know if you like it and I'll spend the time putting it on GitHub (only tested with PostgreSQL). Otherwise, with the info above you can easily implement your own using Spring JDBC. I'm starting to really dig it because immutable objects are easier to understand and reason about.
Hibernate has the @Immutable annotation.
And here is a guide.
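A minimal illustration of the annotation (the entity and fields are made up); note that @Immutable only tells Hibernate the entity must never be updated, and Hibernate still instantiates and populates it itself, so on its own it does not give you constructor-initialized final fields:
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Immutable;

// Illustrative only: Hibernate treats this entity as read-only.
@Entity
@Immutable
public class Country {

    @Id
    private Long id;

    private String name;

    protected Country() {} // still required by JPA/Hibernate

    public Long getId() { return id; }
    public String getName() { return name; }
}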
Though not a real ORM, MyBatis may be able to do this. I haven't tried it myself, though.
http://mybatis.org/java.html
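As an untested sketch of what that could look like: MyBatis annotation mappers can populate objects through a constructor via @ConstructorArgs (the mapper, entity and column names here are made up):
import org.apache.ibatis.annotations.Arg;
import org.apache.ibatis.annotations.ConstructorArgs;
import org.apache.ibatis.annotations.Select;

public interface ArtistMapper {

    // MyBatis builds the (immutable) Artist by calling its constructor with the
    // listed columns instead of going through a no-arg constructor and setters.
    @Select("select id, name from artist where id = #{id}")
    @ConstructorArgs({
        @Arg(column = "id", javaType = long.class),
        @Arg(column = "name", javaType = String.class)
    })
    Artist findById(long id);
}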
AFAIK, there are no ORMs for .NET supporting this feature exactly as you wish. But you can take a look at BLToolkit and LINQ to SQL: both provide update-by-comparison semantics and always return new objects on materialization. That's nearly what you need, but I'm not sure about collections there.
By the way, why do you need this feature? I'm aware of pure functional languages and the benefits of purely immutable objects (e.g. complete thread safety). But in the case of an ORM, everything you do with such objects is ultimately transformed into a sequence of SQL commands anyway, so I suspect the benefits of such objects largely evaporate here.
You can do this with Ebean and OpenJPA (and I think you can do this with Hibernate, but I'm not sure). The ORM (Ebean/OpenJPA) will generate a default constructor (assuming the bean doesn't have one) and actually set the values of the 'final' fields. This sounds a bit odd, but final fields are not always strictly final, per se.
SORM is a new Scala ORM which does exactly what you want. The code below will explain it better than any words:
// Declare a model:
case class Artist ( name : String, genres : Set[Genre] )
case class Genre ( name : String )
// Initialize SORM, automatically generating schema:
import sorm._
object Db extends Instance (
entities = Set() + Entity[Artist]() + Entity[Genre](),
url = "jdbc:h2:mem:test"
)
// Store values in the db:
val metal = Db.save( Genre("Metal") )
val rock = Db.save( Genre("Rock") )
Db.save( Artist("Metallica", Set() + metal + rock) )
Db.save( Artist("Dire Straits", Set() + rock) )
// Retrieve values from the db:
val metallica = Db.query[Artist].whereEqual("name", "Metallica").fetchOne() // Option[Artist]
val rockArtists = Db.query[Artist].whereEqual("genres.name", "Rock").fetch() // Stream[Artist]