For an MORPG hack'n'slash game I am currently using Neo4j with a pattern like this:
I have a Neo4j connector class that creates the connection and implements the Singleton pattern. This instance is used by every xxxMapper class, which calls Neo4jConnector.getInstance().query(String query) and gets back an iterator over the query result.
At the moment I'm wondering about performance: the game will issue a ton of queries per second (around 5 per player per second). I don't know which pattern to use, whether I should keep my Singleton system or switch to something else, such as a pool of Neo4jConnector instances, or another option I haven't considered yet.
Here is the connector class :
public class Neo4jConnector{
private String urlRest;
private String url = "http://localhost:7474";
protected QueryEngine<?> engine;
protected static Neo4jConnector INSTANCE = new Neo4jConnector();
private Neo4jConnector(){
urlRest = url+"/db/data";
final RestAPI graphDb = new RestAPIFacade(urlRest);
engine = new RestCypherQueryEngine(graphDb);
}
public static Neo4jConnector getInstance(){
if (INSTANCE == null)
{
INSTANCE = new Neo4jConnector();
}
return INSTANCE;
}
@SuppressWarnings("unchecked")
public Iterator<Map<String, Object>> query(String query){
QueryResult<Map<String, Object>> row = (QueryResult<Map<String, Object>>) engine.query(query, Collections.EMPTY_MAP);
return row.iterator();
}
}
and an example call of this class :
Iterator<Map<String, Object>> iterator = Neo4jConnector.getInstance().query("optional Match(u:User{username:'"+username+"'}) return u.password as password, u.id as id");
Neo4j's embedded GraphDatabaseService is thread-safe and does not need to be pooled.
I would not recommend RestGraphDatabase and friends, because it is slow and outdated.
Just use parameters instead of literal strings and don't use optional match to start a query.
If you look for faster access look into the JDBC driver (which will be updated soonish).
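For example, a parameterized variant of the query method above might look like the sketch below. This is not the answerer's exact code, just an illustration against the same RestCypherQueryEngine API used in the question (the original already calls engine.query(query, Collections.EMPTY_MAP), so a two-argument call with a real parameter map is assumed to be available; the {username} placeholder is the parameter syntax of that Cypher generation):
@SuppressWarnings("unchecked")
public Iterator<Map<String, Object>> query(String query, Map<String, Object> params) {
    // same as query(String) above, but passes real parameters instead of an empty map
    QueryResult<Map<String, Object>> result =
            (QueryResult<Map<String, Object>>) engine.query(query, params);
    return result.iterator();
}

// Caller side: no string concatenation, no injection risk, and the query text stays constant
Map<String, Object> params = new HashMap<String, Object>();
params.put("username", username);
Iterator<Map<String, Object>> it = Neo4jConnector.getInstance().query(
        "MATCH (u:User {username: {username}}) RETURN u.password AS password, u.id AS id",
        params);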
My Flink program should do a Cassandra lookup for each input record and, based on the results, do some further processing.
But I'm currently stuck at reading data from Cassandra. This is the code snippet I've come up with so far.
ClusterBuilder secureCassandraSinkClusterBuilder = new ClusterBuilder() {
@Override
protected Cluster buildCluster(Cluster.Builder builder) {
return builder.addContactPoints(props.getCassandraClusterUrlAll().split(","))
.withPort(props.getCassandraPort())
.withAuthProvider(new DseGSSAPIAuthProvider("HTTP"))
.withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM))
.build();
}
};
for (int i=1; i<5; i++) {
CassandraInputFormat<Tuple2<String, String>> cassandraInputFormat =
new CassandraInputFormat<>("select * from test where id=hello" + i, secureCassandraSinkClusterBuilder);
cassandraInputFormat.configure(null);
cassandraInputFormat.open(null);
Tuple2<String, String> out = new Tuple2<>();
cassandraInputFormat.nextRecord(out);
System.out.println(out);
}
But the issue with this is that it takes nearly 10 seconds for each lookup; in other words, this for loop takes 50 seconds to execute.
How do I speed up this operation? Alternatively, is there any other way of looking up Cassandra in Flink?
I came up with a solution that is fairly fast at querying Cassandra with streaming data. Would be of use to someone with the same issue.
Firstly, Cassandra can be queried with as little code as,
Session session = secureCassandraSinkClusterBuilder.getCluster().connect();
ResultSet resultSet = session.execute("SELECT * FROM TABLE");
But the problem with this is that creating a Session is a very expensive operation and something that should be done once per keyspace. You create the Session once and reuse it for all read queries.
Now, since Session is not Java-serializable, it cannot be passed as an argument to Flink operators like Map or ProcessFunction. There are a few ways of solving this: you can use a RichFunction and initialize the Session in its open() method, or use a Singleton. I will use the second solution.
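For completeness, a rough sketch of the first option is shown below. It assumes a RichMapFunction whose transient Session is created per task in open(); the class name and the String-query input type are illustrative, not part of the original code:
public class CassandraLookupFunction extends RichMapFunction<String, ResultSet> {
    private final ClusterBuilder clusterBuilder;  // Serializable, shipped with the job graph
    private transient Session session;            // not Serializable, so created in open()

    public CassandraLookupFunction(ClusterBuilder clusterBuilder) {
        this.clusterBuilder = clusterBuilder;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        session = clusterBuilder.getCluster().connect();
    }

    @Override
    public ResultSet map(String query) throws Exception {
        return session.execute(query);
    }

    @Override
    public void close() throws Exception {
        if (session != null) {
            session.close();
        }
    }
}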
For the Singleton approach, make a Singleton class as follows, where we create the Session.
public class CassandraSessionSingleton {
private static CassandraSessionSingleton cassandraSessionSingleton = null;
public Session session;
private CassandraSessionSingleton(ClusterBuilder clusterBuilder) {
Cluster cluster = clusterBuilder.getCluster();
session = cluster.connect();
}
public static CassandraSessionSingleton getInstance(ClusterBuilder clusterBuilder) {
if (cassandraSessionSingleton == null)
cassandraSessionSingleton = new CassandraSessionSingleton(clusterBuilder);
return cassandraSessionSingleton;
}
}
You can then make use of this session for all future queries. Here I'm using the ProcessFunction to make queries as an example.
public class SomeProcessFunction extends ProcessFunction<Object, ResultSet> {
    private final ClusterBuilder secureCassandraSinkClusterBuilder;

    // Constructor
    public SomeProcessFunction(ClusterBuilder secureCassandraSinkClusterBuilder) {
        this.secureCassandraSinkClusterBuilder = secureCassandraSinkClusterBuilder;
    }

    @Override
    public void processElement(Object obj, Context ctx, Collector<ResultSet> out) throws Exception {
        ResultSet resultSet =
                CassandraLookUp.cassandraLookUp("SELECT * FROM TEST", secureCassandraSinkClusterBuilder);
        out.collect(resultSet);
    }
}
Note that you can pass ClusterBuilder to ProcessFunction as it is Serializable. Now for the cassandraLookUp method where we execute the query.
public class CassandraLookUp {
public static ResultSet cassandraLookUp(String query, ClusterBuilder clusterBuilder) {
CassandraSessionSingleton cassandraSessionSingleton = CassandraSessionSingleton.getInstance(clusterBuilder);
Session session = cassandraSessionSingleton.session;
ResultSet resultSet = session.execute(query);
return resultSet;
}
}
The singleton object is created only the first time the query is run; after that, the same object is reused, so there is no delay in the lookup.
I am working on measuring my application metrics using the class below, in which I increment and decrement metrics.
public class AppMetrics {
private final AtomicLongMap<String> metricCounter = AtomicLongMap.create();
private static class Holder {
private static final AppMetrics INSTANCE = new AppMetrics();
}
public static AppMetrics getInstance() {
return Holder.INSTANCE;
}
private AppMetrics() {}
public void increment(String name) {
metricCounter.getAndIncrement(name);
}
public AtomicLongMap<String> getMetricCounter() {
return metricCounter;
}
}
I am calling increment method of AppMetrics class from multithreaded code to increment the metrics by passing the metric name.
Problem Statement:
Now I want to have a metricCounter for each clientId, which is a String. We can get the same clientId multiple times, and sometimes it will be a new clientId, so I need to somehow get the metricCounter map for that clientId and increment metrics on that particular map (which is the part I am not sure how to do).
What is the right way to do that, keeping in mind it has to be thread-safe and perform atomic operations? I was thinking of using a map like this instead:
private final Map<String, AtomicLongMap<String>> clientIdMetricCounterHolder = Maps.newConcurrentMap();
Is this the right way? If yes, how can I populate this map, with the clientId as its key and the counter map for each metric as its value?
I am on Java 7.
If you use a map then you'll need to synchronize on the creation of new AtomicLongMap instances. I would recommend using a LoadingCache instead. You might not end up using any of the actual "caching" features, but the "loading" feature is extremely helpful, as it will synchronize the creation of AtomicLongMap instances for you, e.g.:
LoadingCache<String, AtomicLongMap<String>> clientIdMetricCounterCache =
CacheBuilder.newBuilder().build(new CacheLoader<String, AtomicLongMap<String>>() {
@Override
public AtomicLongMap<String> load(String key) throws Exception {
return AtomicLongMap.create();
}
});
Now you can safely start updating metric counts for any client without worrying about whether the client is new or not, e.g.
clientIdMetricCounterCache.get(clientId).incrementAndGet(metricName);
A Map<String, Map<String, T>> is just a Map<Pair<String, String>, T> in disguise. Create a MultiKey class:
class MultiKey {
public String clientId;
public String name;
// be sure to add hashCode and equals
}
Then just use an AtomicLongMap<MultiKey>.
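A hedged sketch of what that could look like (the constructor and final fields are assumptions added here; java.util.Objects.hash needs Java 7, which is what the question targets):
class MultiKey {
    public final String clientId;
    public final String name;

    MultiKey(String clientId, String name) {
        this.clientId = clientId;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MultiKey)) return false;
        MultiKey other = (MultiKey) o;
        return clientId.equals(other.clientId) && name.equals(other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(clientId, name); // java.util.Objects, available since Java 7
    }
}

// Usage: one map for all clients and metrics
AtomicLongMap<MultiKey> metrics = AtomicLongMap.create();
metrics.incrementAndGet(new MultiKey(clientId, metricName));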
Edited:
Provided the set of metrics is well defined, it wouldn't be too hard to use this data structure to view metrics for one client:
Set<String> possibleMetrics = // all the possible values for "name"
Map<String, Long> getMetricsForClient(String client) {
return Maps.asMap(possibleMetrics, m -> metrics.get(new MultiKey(client, m)));
}
The returned map will be a live unmodifiable view. It might be a bit more verbose if you're using an older Java version, but it's still possible.
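On Java 7, for example, the same live view can be built with a Guava Function instead of a lambda, roughly like this (assuming metrics is the AtomicLongMap<MultiKey> from above):
Map<String, Long> getMetricsForClient(final String client) {
    return Maps.asMap(possibleMetrics, new Function<String, Long>() {
        @Override
        public Long apply(String m) {
            return metrics.get(new MultiKey(client, m));
        }
    });
}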
I am developing a Java application where I need to implement a cache service to serve requests. The requirement is:
1) One or more threads come to fetch some data, and if the data is null in the cache, then only one thread goes to the DB to load the data into the cache.
2) Once done, all subsequent threads will be served from the cache.
So the implementation for this looks like:
public List<Tag> getCachedTags() throws Exception
{
// get data from cache
List<Tag> tags = (List<Tag>) CacheUtil.get(Config.tagCache,Config.tagCacheKey);
if(tags == null) // if data is null
{
// one thread will go to DB and others wait here
synchronized(Config.tagCacheLock)
{
// first thread get this null and go to db, subsequent threads returns from here.
tags = (List<Tag>) CacheUtil.get(Config.tagCache,Config.tagCacheKey);
if(tags == null)
{
tags = iTagService.getTags(null);
CacheUtil.put(Config.tagCache, Config.tagCacheKey, tags);
}
}
}
return tags;
}
Now, is this the correct approach? And since I am locking on a static String, won't that be a class-level lock? Please suggest a better approach.
If you want to globally synchronize, just use a custom object for this purpose:
private static final Object lock = new Object();
Do not use a String constant, as they are interned: a string constant with the same content declared in a completely different part of your program will be the same String object. In general, avoid locking on static fields. It is better to instantiate your class and declare the lock as non-static. Currently you may use it as a singleton (with some method like Cache.getInstance(), as in the sketch below), but later, when you realize that you have to support several independent caches, you will need less refactoring to achieve this.
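A rough sketch of that shape, reusing the iTagService call from the question (the class name Cache and the TagService parameter type are just placeholders for whatever the question's iTagService actually is):
public class Cache {
    private static final Cache INSTANCE = new Cache();
    private final Object lock = new Object();   // per-instance lock instead of a shared String
    private volatile List<Tag> tags;

    public static Cache getInstance() {
        return INSTANCE;
    }

    public List<Tag> getCachedTags(TagService tagService) throws Exception {
        List<Tag> result = tags;
        if (result == null) {
            synchronized (lock) {
                result = tags;
                if (result == null) {
                    result = tagService.getTags(null); // only the first caller hits the DB
                    tags = result;
                }
            }
        }
        return result;
    }
}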
In Java 8, the preferred way to fetch the object once is ConcurrentHashMap.computeIfAbsent, like this:
private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
public List<Tag> getCachedTags() throws Exception {
List<Tag> tags = (List<Tag>)cache.computeIfAbsent(Config.tagCacheKey,
k -> iTagService.getTags(null));
return tags;
}
This is simple and robust. In previous Java versions you can use AtomicReference to wrap the objects:
private final ConcurrentHashMap<String, AtomicReference<Object>> cache =
new ConcurrentHashMap<>();
public List<Tag> getCachedTags() throws Exception {
AtomicReference<Object> ref = cache.get(key);
if(ref == null) {
ref = new AtomicReference<>();
AtomicReference<Object> oldRef = cache.putIfAbsent(key, ref);
if(oldRef != null) {
ref = oldRef;
}
synchronized(ref) {
if(ref.get() == null) {
ref.set(iTagService.getTags(null));
}
}
}
return (List<Tag>)ref.get();
}
Every time I load certain values from the database, a HashMap is populated with keys and values from the database. How do I make this HashMap available to all the other classes without having to load the values into the HashMap repeatedly each time it is called?
This is the class containing the method where the HashMap is loaded:
public class Codes {
List<CODES> list = null;
private CodesDAO codesDAO = new CodesDAO(); //DAO Class
public HashMap <MultiKey,String> fetchCodes(){
MultiKey multiKey;
HashMap <MultiKey,String> map = new HashMap<MultiKey,String>();
list = codesDAO.fetchGuiCodes(); // fetches codes from DB
for (CODES gui : list) {
multiKey = new MultiKey(gui.getCode(), gui.getKEY());
map.put(multiKey,gui.getDESC());
}
return map;
}
}
You can save your map in a static field, and initialize it in a static block. This way it is done only once:
public class Codes {
private static Map<MultiKey, String> codes;
static {
CodesDAO codesDAO = new CodesDAO(); // DAO Class
HashMap<MultiKey, String> map = new HashMap<MultiKey, String>();
List<CODES> list = codesDAO.fetchGuiCodes();// fetches codes from DB
for (CODES gui : list) {
MultiKey multiKey = new MultiKey(gui.getCode(), gui.getKEY());
map.put(multiKey, gui.getDESC());
}
codes = Collections.unmodifiableMap(map);
}
public static Map<MultiKey, String> fetchCodes() {
return codes;
}
}
Then you can retrieve the codes with:
Codes.fetchCodes();
If static fields are not an option, you could lazily initialise as follows:
private HashMap<MultiKey, String> map = null;
public HashMap<MultiKey, String> fetchCodes() {
if (map == null) {
map = new HashMap<MultiKey, String>();
list = codesDAO.fetchGuiCodes();// fetches codes from DB
for (CODES gui : list) {
MultiKey multiKey = new MultiKey(gui.getCode(), gui.getKEY());
map.put(multiKey, gui.getDESC());
}
}
return map;
}
Note: this is not thread-safe, but could be with some additional synchronization.
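For example, the simplest such synchronization is to make the lazy initialization itself synchronized, as in this sketch built on the question's Codes class (every call then takes the lock, which is usually acceptable for a lookup table like this):
public synchronized HashMap<MultiKey, String> fetchCodes() {
    if (map == null) {
        map = new HashMap<MultiKey, String>();
        for (CODES gui : codesDAO.fetchGuiCodes()) { // the DB is hit only on the first call
            map.put(new MultiKey(gui.getCode(), gui.getKEY()), gui.getDESC());
        }
    }
    return map;
}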
Maybe load the data only once? Use memoization from Guava (I would):
Suppliers.memoize(//Implementation of Supplier<T>)
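A hedged sketch of what that could look like with the Codes class from the question (Guava's Suppliers.memoize runs the wrapped supplier once, caches the result, and is thread-safe):
private static final Supplier<HashMap<MultiKey, String>> CODES =
        Suppliers.memoize(new Supplier<HashMap<MultiKey, String>>() {
            @Override
            public HashMap<MultiKey, String> get() {
                return new Codes().fetchCodes(); // the DB is hit only on the first get()
            }
        });

// Anywhere in the application:
HashMap<MultiKey, String> codes = CODES.get();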
If you use Spring, you could simply declare a bean (singleton) and implement the InitializingBean interface.
You would be forced to implement a method called afterPropertiesSet() and load your Map there.
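A sketch of that Spring variant (CodesHolder is a made-up name; declared as a bean it is a singleton by default, so the load happens exactly once at startup):
public class CodesHolder implements InitializingBean {
    private Map<MultiKey, String> codes;

    @Override
    public void afterPropertiesSet() throws Exception {
        codes = new Codes().fetchCodes(); // loaded once, when Spring creates the bean
    }

    public Map<MultiKey, String> getCodes() {
        return codes;
    }
}
Other beans can then simply have the CodesHolder injected and call getCodes().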
If you don't use Spring, you could initialize your map at startup like you did and put it in the ServletContext; that scope is available to all sessions.
This is all good for read-only data. If you need to update it, be careful, because this will not be thread-safe; you will have to make it thread-safe yourself.
Hope it helps.
I'm not sure how the OP designed his Java EE application and whether any 3rd party frameworks are being used, but in a properly designed standard Java EE application using EJB, CDI, JPA, transactions and all of them, the DB is normally not available in a static context. The answers which suggest initializing it statically are in that case severely misleading and broken.
The canonical approach is to just create one instance holding the preinitialized data and reuse it throughout the application's lifetime. With the current Java EE standards, this can be achieved by creating and initializing the bean once during the application's startup and storing it in the application scope. For example, an application scoped CDI bean:
@Named
@ApplicationScoped
public class Data {
private List<Code> codes;
@EJB
private DataService service;
@PostConstruct
public void init() {
codes = Collections.unmodifiableList(service.getAllCodes());
}
public List<Code> getCodes() {
return codes;
}
}
This is then available by #{data.codes} anywhere else in the application.
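Or, from any other managed bean, it can simply be injected; a minimal usage sketch (SomeBackingBean is a made-up name):
public class SomeBackingBean {
    @Inject
    private Data data; // the application scoped bean from above

    public void doSomething() {
        List<Code> codes = data.getCodes(); // same preloaded list, no extra DB hits
    }
}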
Which ORM supports a domain model of immutable types?
I would like to write classes like the following (or the Scala equivalent):
class A {
private final C c; //not mutable
A(B b) {
//init c
}
A doSomething(B b) {
// build a new A
}
}
The ORM has to initialize the object through the constructor, so it is possible to check invariants in the constructor. A default constructor plus field/setter access for initialization is not sufficient and complicates the class's implementation.
Working with collections should be supported. If a collection is changed, it should create a copy from the user's perspective (rendering the old collection state stale, though user code can still work on, or at least read, it), much like persistent data structures work.
Some words about the motivation: suppose you have an FP-style domain object model and you want to persist it to a database. How do you do that? You want to do as much as you can in a pure functional style until the evil side effects come in. If your domain object model is not immutable, you cannot, for example, share the objects between threads; you have to copy, cache, or use locks. So unless your ORM supports immutable types, you are constrained in your choice of solution.
UPDATE: I created a project focused on solving this problem called JIRM:
https://github.com/agentgt/jirm
I just found this question after implementing my own using Spring JDBC and Jackson Object Mapper. Basically I just needed some bare minimum SQL <-> immutable object mapping.
In short, I just use Spring's RowMapper and Jackson's ObjectMapper to map objects back and forth from the database. I use JPA annotations just for metadata (like column names, etc.). If people are interested I will clean it up and put it on GitHub (right now it's only in my startup's private repo).
Here is a rough idea of how it works. Here is an example bean (notice how all the fields are final):
//skip imports for brevity
public class TestBean {
@Id
private final String stringProp;
private final long longProp;
@Column(name="timets")
private final Calendar timeTS;
@JsonCreator
public TestBean(
@JsonProperty("stringProp") String stringProp,
@JsonProperty("longProp") long longProp,
@JsonProperty("timeTS") Calendar timeTS ) {
super();
this.stringProp = stringProp;
this.longProp = longProp;
this.timeTS = timeTS;
}
public String getStringProp() {
return stringProp;
}
public long getLongProp() {
return longProp;
}
public Calendar getTimeTS() {
return timeTS;
}
}
Here is what the RowMapper looks like (notice it mainly delegates to Spring's ColumnMapRowMapper and then uses Jackson's ObjectMapper):
public class SqlObjectRowMapper<T> implements RowMapper<T> {
private final SqlObjectDefinition<T> definition;
private final ColumnMapRowMapper mapRowMapper;
private final ObjectMapper objectMapper;
public SqlObjectRowMapper(SqlObjectDefinition<T> definition, ObjectMapper objectMapper) {
super();
this.definition = definition;
this.mapRowMapper = new SqlObjectMapRowMapper(definition);
this.objectMapper = objectMapper;
}
public SqlObjectRowMapper(Class<T> k) {
this(SqlObjectDefinition.fromClass(k), new ObjectMapper());
}
@Override
public T mapRow(ResultSet rs, int rowNum) throws SQLException {
Map<String, Object> m = mapRowMapper.mapRow(rs, rowNum);
return objectMapper.convertValue(m, definition.getObjectType());
}
}
Now I just took Spring's JdbcTemplate and gave it a fluent wrapper. Here are some examples:
@Before
public void setUp() throws Exception {
dao = new SqlObjectDao<TestBean>(new JdbcTemplate(ds), TestBean.class);
}
@Test
public void testAll() throws Exception {
TestBean t = new TestBean(IdUtils.generateRandomUUIDString(), 2L, Calendar.getInstance());
dao.insert(t);
List<TestBean> list = dao.queryForListByFilter("stringProp", "hello");
List<TestBean> otherList = dao.select().where("stringProp", "hello").forList();
assertEquals(list, otherList);
long count = dao.select().forCount();
assertTrue(count > 0);
TestBean newT = new TestBean(t.getStringProp(), 50, Calendar.getInstance());
dao.update(newT);
TestBean reloaded = dao.reload(newT);
assertTrue(reloaded != newT);
assertTrue(reloaded.getStringProp().equals(newT.getStringProp()));
assertNotNull(list);
}
@Test
public void testAdding() throws Exception {
//This will do a UPDATE test_bean SET longProp = longProp + 100
int i = dao.update().add("longProp", 100).update();
assertTrue(i > 0);
}
@Test
public void testRowMapper() throws Exception {
List<Crap> craps = dao.query("select string_prop as name from test_bean limit ?", Crap.class, 2);
System.out.println(craps.get(0).getName());
craps = dao.query("select string_prop as name from test_bean limit ?")
.with(2)
.forList(Crap.class);
Crap c = dao.query("select string_prop as name from test_bean limit ?")
.with(1)
.forObject(Crap.class);
Optional<Crap> absent
= dao.query("select string_prop as name from test_bean where string_prop = ? limit ?")
.with("never")
.with(1)
.forOptional(Crap.class);
assertTrue(! absent.isPresent());
}
public static class Crap {
private final String name;
@JsonCreator
public Crap(@JsonProperty ("name") String name) {
super();
this.name = name;
}
public String getName() {
return name;
}
}
Notice in the above how easy it is to map any query onto immutable POJOs. That is, you don't need a 1-to-1 mapping of entity to table. Also notice the use of Guava's Optional (in the last query above). I really dislike how ORMs either throw exceptions or return null.
Let me know if you like it and I'll spend the time putting it on GitHub (only tested with PostgreSQL). Otherwise, with the info above you can easily implement your own using Spring JDBC. I'm starting to really dig it because immutable objects are easier to understand and reason about.
Hibernate has the @Immutable annotation.
And here is a guide.
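As a rough sketch of how the annotation is applied (Country is a made-up entity; note that @Immutable makes Hibernate treat loaded instances as read-only and skip dirty checking and updates, but on its own it does not give you constructor-initialized final fields, so it only partially answers the question):
@Entity
@Immutable // org.hibernate.annotations.Immutable: instances are never dirty-checked or updated
public class Country {

    @Id
    private Long id;

    private String name;

    protected Country() {
        // a no-arg constructor is still required by JPA/Hibernate
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}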
Though not a real ORM, MyBatis may be able to do this. I haven't tried it, though.
http://mybatis.org/java.html
AFAIK, there are no ORMs for .NET supporting this feature exactly as you wish. But you can take a look at BLToolkit and LINQ to SQL - both provide update-by-comparison semantics and always return new objects on materialization. That's nearly what you need, but I'm not sure about collections there.
Btw, why do you need this feature? I'm aware of pure functional languages and the benefits of purely immutable objects (e.g. complete thread safety). But in the case of an ORM, everything you do with such objects is ultimately transformed into a sequence of SQL commands anyway, so I admit the benefits of using such objects are vaporous here.
You can do this with Ebean and OpenJPA (and I think you can do it with Hibernate, but I'm not sure). The ORM (Ebean/OpenJPA) will generate a default constructor (assuming the bean doesn't have one) and actually set the values of the 'final' fields. This sounds a bit odd, but final fields are not always strictly final per se.
SORM is a new Scala ORM which does exactly what you want. The code below will explain it better than any words:
// Declare a model:
case class Artist ( name : String, genres : Set[Genre] )
case class Genre ( name : String )
// Initialize SORM, automatically generating schema:
import sorm._
object Db extends Instance (
entities = Set() + Entity[Artist]() + Entity[Genre](),
url = "jdbc:h2:mem:test"
)
// Store values in the db:
val metal = Db.save( Genre("Metal") )
val rock = Db.save( Genre("Rock") )
Db.save( Artist("Metallica", Set() + metal + rock) )
Db.save( Artist("Dire Straits", Set() + rock) )
// Retrieve values from the db:
val metallica = Db.query[Artist].whereEqual("name", "Metallica").fetchOne() // Option[Artist]
val rockArtists = Db.query[Artist].whereEqual("genres.name", "Rock").fetch() // Stream[Artist]