I am new to Akka and want to create a CRUD application using any RDBMS. I have gone through the akka-persistence documentation, which is quite confusing for me, and it lacks any implementation code. Can anybody point me to relevant links or a GitHub repository that would help me understand it?
So far I have only created a simple hello-world application.
I suggest using Slick. There are several supported RDBMS connectors, such as PostgreSQL, SQL Server (MSSQL), MySQL, Oracle, and DB2. In my case I had to connect to an Oracle 12 database.
I added the dependency (in my case via Gradle), which can be found in the docs:
"com.lightbend.akka:akka-stream-alpakka-slick_2.11:<version>",
After that I configured Slick as shown below.
slick-oracle {
  profile = "slick.jdbc.OracleProfile$"
  db {
    dataSourceClass = "slick.jdbc.DriverDataSource"
    connectionPool = disabled
    properties {
      driver = "oracle.jdbc.OracleDriver"
      url = "jdbc:oracle:thin:@//localhost:1521/xe"
      user = user
      password = password
    }
  }
}
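With that configuration in place, you can run queries through the Alpakka Slick connector. Here is a minimal sketch using the Java DSL; the users table, its columns, and the row mapping are placeholders for illustration:

import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.alpakka.slick.javadsl.Slick;
import akka.stream.alpakka.slick.javadsl.SlickSession;
import akka.stream.javadsl.Sink;

ActorSystem system = ActorSystem.create();
ActorMaterializer materializer = ActorMaterializer.create(system);

// Reads the "slick-oracle" block shown above from application.conf
SlickSession session = SlickSession.forConfig("slick-oracle");
system.registerOnTermination(session::close);

Slick.source(session, "SELECT id, name FROM users",
        row -> row.nextInt() + ": " + row.nextString()) // map each result row to a String
    .runWith(Sink.foreach(System.out::println), materializer);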
Akka Persistence allows you to persist the internal state of stateful actors in your system. It's rather convoluted to use it for CRUD applications; instead you should look at an ORM solution such as Slick. You can use akka-http or spray (deprecated) to create a REST API that exposes CRUD endpoints.
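For the REST side, a route with akka-http's Java DSL could look roughly like this (a sketch only; the handlers are placeholders and the available directives depend on your akka-http version):

import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;

class UserRoutes extends AllDirectives {
    Route createRoute() {
        return pathPrefix("users", () ->
            concat(
                get(() -> complete("TODO: return all users")),  // read
                post(() -> complete("TODO: create a user"))     // create
            ));
    }
}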
My question: I have MongoDB installed on an Ubuntu server and it has multiple databases. I want to connect to that MongoDB instance, get all the databases (the name of each database and the collections of every database), and perform search operations on all of them.
Note: I do not know the names of the databases in advance.
It won't work out of the box. If you include the Spring Boot MongoDB starter, it will look for the property 'spring.data.mongodb.uri' to connect to a single database, and if it does not find it, it will try to connect to 'localhost:27017'. This single database will then be used for all Spring Data repositories automatically.
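For example, a single fixed database would be configured like this in application.properties (the URI is just an illustration):

spring.data.mongodb.uri=mongodb://localhost:27017/mydatabase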
You can add extra MongoClient beans for different databases, but you'll need some work to connect different Spring Data Repositories to those different beans.
And if you want to work with a dynamic set of databases, i.e. when you don't know which databases or how many, you can't work with fixed MongoClients as spring beans anyway. You'll need some sort of factory that creates multiple MongoClient beans based on the number of databases, and multiple SearchRepository beans connected to each of those MongoClient beans.
It depends on the complexity of your project, but for this use case I would not use Spring Data at all and would stick to the MongoDB Java client API:
Map<String, MongoClient> clientsPerDatabase = new HashMap<>();

// Setup
MongoClient mongoClient = new MongoClient(/* default database */);
MongoCursor<String> dbsCursor = mongoClient.listDatabaseNames().iterator();
while (dbsCursor.hasNext()) {
    String databaseName = dbsCursor.next();
    if (!"admin".equals(databaseName)) {
        clientsPerDatabase.put(databaseName, new MongoClient(... + databaseName));
    }
}

// Query
for (MongoClient client : clientsPerDatabase.values()) {
    ...
}
I'd configure the Spring Boot application with admin access to the database and then use native queries to retrieve info on all existing databases, schemas, etc.
I'm currently developing a web application using Spring MVC (without Maven).
What I need is to create a distributed transaction between two local databases, so that the code will update both of them in (theoretically) a two-phase commit.
Now, since I'm doing this for a school project, I'm in a simple environment which only needs to take a row from a table in one DB and put it into a table in the other DB, of course atomically (theoretically, such a transaction should be distributed because I'm using two different databases and not only one).
My question is: how can I deploy a Spring bean that first connects to both MySQL databases and then performs that distributed transaction? Should I use some external library, or can I achieve it all with the Spring Framework alone? In either case, could you please link me to an example or a guide for doing this?
Thank you in advance for your help :)
Spring has an interface, PlatformTransactionManager, which is an abstraction, and it has many implementations like DataSourceTransactionManager, HibernateTransactionManager, etc.
Since you are using distributed transactions, you need to use JtaTransactionManager.
These transaction managers provided by Spring are wrappers around the implementations provided by other frameworks. In the case of JTA, you would use either an application server or a standalone JTA implementation like Atomikos.
The steps are the following:
1. Configure the transaction manager in Spring, backed by the application server or a standalone JTA implementation.
2. Enable transaction management in Spring.
3. Put the @Transactional annotation on your method (see the sketch below).
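A rough sketch of the standalone setup with Atomikos (the bean names and the service class are illustrative; the two MySQL DataSources would have to be XA-capable, e.g. based on MysqlXADataSource, and are omitted here):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.jta.JtaTransactionManager;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;

@Configuration
@EnableTransactionManagement
public class JtaConfig {

    // the Atomikos JTA implementation
    @Bean(initMethod = "init", destroyMethod = "close")
    public UserTransactionManager atomikosTransactionManager() {
        UserTransactionManager utm = new UserTransactionManager();
        utm.setForceShutdown(false);
        return utm;
    }

    // Spring's wrapper around the JTA implementation
    @Bean
    public JtaTransactionManager transactionManager() throws Exception {
        UserTransactionImp userTransaction = new UserTransactionImp();
        userTransaction.setTransactionTimeout(300);
        return new JtaTransactionManager(userTransaction, atomikosTransactionManager());
    }
}

// In a separate file: any method annotated with @Transactional now spans both XA data sources.
@Service
class MoveRowService {
    @Transactional
    public void moveRow(long id) {
        // read the row from database 1 and insert it into database 2
    }
}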
Have a look at the following links:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html
http://www.byteslounge.com/tutorials/spring-jta-multiple-resource-transactions-in-tomcat-with-atomikos-example
I am developing a Spring Boot application that uses Spring Data JPA and will need to connect to many different databases, e.g. PostgreSQL, MySQL, MS-SQL, MongoDB.
I need to create all datasources at runtime, i.e. the user chooses these settings via the GUI in the running application:
- driver (one of a list),
- source,
- port,
- username,
- password.
After that, he writes native SQL against the chosen database and gets the results.
I have read a lot about this on Stack Overflow and the Spring forums (e.g. AbstractRoutingDataSource), but all of these tutorials show how to create datasources from XML configuration or a static definition in a Java bean. Is it possible to create many datasources at runtime? How do I manage transactions and how do I create many sessionFactories? Is it possible to use the @Transactional annotation? What is the best method to do this? Can someone explain to me how to do this step by step?
Hope it's not too late for an answer ;)
I developed a module which can be easily integrated into any Spring project. It uses a meta-datasource to hold the tenant-datasource connection details.
For the tenant-datasource, an AbstractRoutingDataSource is used.
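The core idea behind AbstractRoutingDataSource looks roughly like this (a sketch, not the linked module's actual code; TenantContext is a hypothetical ThreadLocal holder for the current tenant key):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // whatever is returned here selects the target DataSource
        // registered via setTargetDataSources(...)
        return TenantContext.getCurrentTenant();
    }
}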
Here you find my core implementation using the AbstractRoutingDataSource.
https://github.com/Dactabird/multitenancy
Here is an example showing how to integrate it: https://github.com/Dactabird/multitenancy-sample
In this example I'm using an embedded H2 DB, but of course you can use whatever you want.
Feel free to modify it for your purposes, or ask if you have any questions!
Has anyone used GraniteDS successfully with a plain Java client and lazy-loading (a real Java client or a Java server application calling another server)?
Is any special client-side initialization needed? (The docs say nothing about this, so we assumed there is none and simply took the example code.)
Based on the docs (3.0.M2), we have created a Spring backend and a Java client which works for simple POJOs but fails when Hibernate-loaded POJOs need to be returned (both the RemoteService and Tide versions fail with the same deserialization exception).
Currently, we don't have a client-side GraniteDS configuration file, only this code:
String baseURL = "http://localhost:8080/WebApp_Development_Client_Maven/";
URI uri = new URI(baseURL + "graniteamf/amf.txt");
Transport tr = new ApacheAsyncTransport();
tr.start();
AMFRemotingChannel ch = new AMFRemotingChannel(tr, "graniteamf", uri);
RemoteService srv = new RemoteService(ch, "userService");
List users = (List)srv.newInvocation("listUsers").invoke().get().getData();
The de-serialization exception:
Caused by: java.lang.RuntimeException: The ActionScript3 class bound to limes.core.model.security.User (ie: [RemoteClass(alias="limes.core.model.security.User")]) implements flash.utils.IExternalizable but this Java class neither implements java.io.Externalizable nor is in the scope of a configured externalizer (please fix your granite-config.xml)
at org.granite.messaging.amf.io.AMF3Deserializer.readAMF3Object(AMF3Deserializer.java:500)
at org.granite.messaging.amf.io.AMF3Deserializer.readObject(AMF3Deserializer.java:130)
at org.granite.messaging.amf.io.AMF3Deserializer.readObject(AMF3Deserializer.java:92)
... 36 more
Context:
We have a client-server Java/Swing application that was originally designed for intranet use (it utilizes Hibernate 3 as the ORM). It also works over the internet, but the PostgreSQL database connection very often breaks, which makes the client unreliable (random freezes due to a lost/broken DB connection). This seems to be impossible to solve correctly (the easy measures like manual re-connection are already implemented).
We need to deploy the app over the internet, and since the complex logic is already refactored into service classes, we would like to leave the GUI mostly unchanged and access the service classes remotely. We are moving the persistence layer and the service classes into a Spring backend and would like to use GraniteDS, because transparent lazy-loading is heavily utilized in the application and it would be very hard to replace it with DTO usage and/or initializers.
I have not found plain-Java client examples, only a JavaFX example app which is so heavily tied to JavaFX that it seems very hard to transform into a plain-Java client (even trying it out is slightly problematic on Linux since it has no Webstart config included).
It seems that lazy-loading doesn't work in this version of GraniteDS (3.0.0.M2) with the plain-Java client.
https://groups.google.com/forum/#!topic/graniteds/KDWNY31lG0I
In theory, it works in the JavaFX environment but it was implemented in a way that plain-Java clients cannot use transparent lazy loading.
Also, GraniteDS doesn't support lazy-loading on single entities, only on collections, which makes it unsuitable for projects using such relations. Personally, I think this is a glaring omission, especially since they often refer to their lazy-loading support as "full".
Unfortunately, the documentation doesn't say anything about the lazy-loading limitations and also fails to make the distinction between the capabilities of GraniteDS with JavaFX and plain Java.
I have an application - more like a utility - that sits in a corner and updates two different databases periodically.
It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.
Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.
The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.
What is the 'best' way to put transactionality around the database updates?
I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.
The best way to distribute transactions over more than one database is: Don't.
Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese).
Imagine: after the first phase has told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost?
The best way is to copy the data in a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single source transaction.
After you have copied the data, process it locally.
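A rough JDBC sketch of that incremental copy (the table and column names are made up, sourceDs/targetDs are assumed to be preconfigured DataSources, java.sql imports omitted, and the enclosing method declares throws SQLException):

try (Connection src = sourceDs.getConnection();
     Connection dst = targetDs.getConnection()) {
    dst.setAutoCommit(false);

    // find out what the local copy already has
    long maxId;
    try (Statement s = dst.createStatement();
         ResultSet rs = s.executeQuery("SELECT COALESCE(MAX(id), 0) FROM orders_copy")) {
        rs.next();
        maxId = rs.getLong(1);
    }

    // fetch only the newer rows from the source and insert them locally
    try (PreparedStatement select = src.prepareStatement(
             "SELECT id, payload FROM orders WHERE id > ? ORDER BY id");
         PreparedStatement insert = dst.prepareStatement(
             "INSERT INTO orders_copy (id, payload) VALUES (?, ?)")) {
        select.setLong(1, maxId);
        try (ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                insert.setLong(1, rs.getLong("id"));
                insert.setString(2, rs.getString("payload"));
                insert.addBatch();
            }
        }
        insert.executeBatch();
    }

    dst.commit(); // only the target transaction matters; the source is read-only
}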
Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then, when you want to execute a transaction:
try {
    TransactionTemplate tt = new TransactionTemplate(txManager);
    tt.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            updateDb1();
            updateDb2();
        }
    });
} catch (TransactionException ex) {
    // handle
}
For more examples and information, have a look at this:
XA transactions using Spring
When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?
If the former, then if you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of JavaEE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software (via Spring, if you so wish) talks to the coordinator.
If, however, you just mean two databases within the same DB server, then vanilla JDBC transactions should work just fine, just perform your operations against both databases within a single transaction.
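In that case a plain JDBC transaction is enough; a minimal sketch (the schema and table names are made up, dataSource is assumed to be configured already, and the enclosing method declares throws SQLException):

try (Connection con = dataSource.getConnection()) {
    con.setAutoCommit(false);
    try (Statement st = con.createStatement()) {
        // both statements hit the same server, just different schemas
        st.executeUpdate("INSERT INTO schema_a.archive (id, payload) SELECT id, payload FROM schema_b.pending WHERE id = 42");
        st.executeUpdate("DELETE FROM schema_b.pending WHERE id = 42");
        con.commit();
    } catch (SQLException e) {
        con.rollback();
        throw e;
    }
}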
In that case you would need a transaction monitor (a server supporting the XA protocol) and you must make sure your databases support XA as well. Most (all?) Java EE servers come with a transaction monitor built in. If your code is not running in a Java EE server, there are a bunch of standalone alternatives: Atomikos, Bitronix, etc.
You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which supports transactions spanning multiple databases. This could be a better alternative to XA.
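A minimal sketch of wiring it up (assuming two DataSourceTransactionManagers, one per database, already exist as beans):

import org.springframework.context.annotation.Bean;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Bean
public PlatformTransactionManager transactionManager(DataSourceTransactionManager tm1,
                                                     DataSourceTransactionManager tm2) {
    // transactions are started in the given order and committed in reverse order;
    // this is best-effort chaining, not a true two-phase commit
    return new ChainedTransactionManager(tm1, tm2);
}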