Use Spring Data to connect to Google Cloud Datastore - Java

How can I use Spring Data to connect to Google Cloud Datastore? I currently use com.google.api.services.datastore.DatastoreV1,
but my lead manager wants to use Spring Data with Datastore. How can I do that?
For example, to insert an Entity I currently use:
public void insert(Entity entity) {
    Datastore datastore = this.datastoreFactory.getInstance();
    CommitRequest request = CommitRequest.newBuilder()
            .setMode(CommitRequest.Mode.NON_TRANSACTIONAL)
            .setMutation(Mutation.newBuilder().addInsertAutoId(entity))
            .build();
    try {
        CommitResponse response = datastore.commit(request);
    } catch (DatastoreException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
And my factory's getInstance() looks like this:
@Override
@SuppressWarnings("deprecation")
public Datastore getInstance() {
    if (datastore != null)
        return datastore;
    try {
        // Set up the connection to Google Cloud Datastore and infer
        // credentials from the environment. The environment variables
        // DATASTORE_SERVICE_ACCOUNT and DATASTORE_PRIVATE_KEY_FILE must be set.
        datastore = DatastoreFactory.get().create(
                DatastoreHelper.getOptionsfromEnv().dataset(Constant.ProjectId)
                        .build());
    } catch (GeneralSecurityException exception) {
        System.err.println("Security error connecting to the datastore: "
                + exception.getMessage());
        return null;
    } catch (IOException exception) {
        System.err.println("I/O error connecting to the datastore: "
                + exception.getMessage());
        return null;
    }
    return datastore;
}
Any help will be appreciated.

To use Spring Data with a specific storage technology you need to implement a set of interfaces from Spring Data Commons. Take a look at the GCP Spanner Spring Data implementation as an example: https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-data-spanner
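Note, added for context: only the Spanner module existed when this was asked, but Spring Cloud GCP has since shipped a Datastore module (spring-cloud-gcp-data-datastore). Assuming that module's documented shape, with Client and ClientRepository as made-up illustrative names, the end result looks roughly like this:

// Sketch assuming the spring-cloud-gcp-data-datastore module;
// Client/ClientRepository are illustrative names, not from the question.
import org.springframework.cloud.gcp.data.datastore.core.mapping.Entity;
import org.springframework.cloud.gcp.data.datastore.repository.DatastoreRepository;
import org.springframework.data.annotation.Id;

@Entity(name = "clients")
public class Client {
    @Id
    Long id;
    String name;
}

public interface ClientRepository extends DatastoreRepository<Client, Long> {
    // save(), findById(), delete() etc. are inherited from Spring Data
}

With something like that in place, the hand-rolled insert(Entity) above collapses to a repository.save(client) call.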

Related

Spring R2DBC Postgres Row Level Security

I'm trying to implement Postgres Row Level Security in my app, which uses R2DBC.
I found an AWS post that implements this, but with a non-reactive approach.
I'm having trouble converting it to a reactive approach since I can't find a class equivalent to AbstractRoutingDataSource:
public class TenantAwareDataSource extends AbstractRoutingDataSource {

    private static final Logger LOGGER = LoggerFactory.getLogger(TenantAwareDataSource.class);

    @Override
    protected Object determineCurrentLookupKey() {
        Object key = null;
        // Pull the currently authenticated tenant from the security context
        // of the HTTP request and use it as the key in the map that points
        // to the connection pool (data source) for each tenant.
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        try {
            if (!(authentication instanceof AnonymousAuthenticationToken)) {
                Tenant currentTenant = (Tenant) authentication.getPrincipal();
                key = currentTenant.getId();
            }
        } catch (Exception e) {
            LOGGER.error("Failed to get current tenant for data source lookup", e);
            throw new RuntimeException(e);
        }
        return key;
    }

    @Override
    public Connection getConnection() throws SQLException {
        // Every time the app asks the data source for a connection,
        // set the PostgreSQL session variable to the current tenant
        // to enforce data isolation.
        Connection connection = super.getConnection();
        try (Statement sql = connection.createStatement()) {
            LOGGER.info("Setting PostgreSQL session variable app.current_tenant = '{}' on {}", determineCurrentLookupKey().toString(), this);
            sql.execute("SET SESSION app.current_tenant = '" + determineCurrentLookupKey().toString() + "'");
        } catch (Exception e) {
            LOGGER.error("Failed to execute: SET SESSION app.current_tenant = '{}'", determineCurrentLookupKey().toString(), e);
        }
        return connection;
    }

    @Override
    public String toString() {
        return determineTargetDataSource().toString();
    }
}
What would be the R2DBC equivalent of AbstractRoutingDataSource?
Thanks
Full source code here.
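Spring's reactive stack does ship a direct counterpart: AbstractRoutingConnectionFactory in spring-r2dbc. A minimal sketch, assuming the Tenant principal type from the question and reactive Spring Security (in the reactive stack the security context travels in the Reactor context, so ReactiveSecurityContextHolder replaces SecurityContextHolder):

import org.springframework.r2dbc.connection.lookup.AbstractRoutingConnectionFactory;
import org.springframework.security.core.context.ReactiveSecurityContextHolder;
import reactor.core.publisher.Mono;

public class TenantAwareConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        // The lookup key is resolved reactively, per subscription, from the
        // security context propagated through the Reactor context.
        return ReactiveSecurityContextHolder.getContext()
                .map(ctx -> ctx.getAuthentication().getPrincipal())
                .filter(principal -> principal instanceof Tenant)
                .map(principal -> ((Tenant) principal).getId());
    }
}

Target factories are registered via setTargetConnectionFactories(...), mirroring AbstractRoutingDataSource. There is no getConnection() to override, so the SET SESSION step would have to run as a statement after the routed ConnectionFactory emits a connection.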

Aggregate Kafka processed data

I have a Kafka consumer that performs some migrations. It's a pretty simple flow:
@KafkaListener(topics = "client-migration-blah", groupId = "migration-group",
        containerFactory = "kafkaListnerContainerFactory")
public void consume(ConsumerRecord<String, Object> payload) {
    Client client = null; // declared outside the try so the catch block can use it
    try {
        client = (Client) payload.value();
        if (migrationClient.clientExists(client)) {
            updateClient(ClientEvent.UPDATE, client);
        } else {
            migrationClient.importClient(ClientEvent.CREATE, client);
        }
    } catch (Exception ex) {
        log.error("yada yada yada");
        sendToDLQ(ClientEvent.ERROR, client);
    }
}
I need a breakdown of the three use cases: create, update, and error (DLQ). Short of building a streaming solution to collect these aggregates, what would be a simple way to gather these events and extract the breakdown?
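One lightweight option (my suggestion, not something from the question) is to count the three outcomes with Micrometer, which Spring Boot auto-configures, and read the totals from the /actuator/metrics endpoint. A sketch, where meterRegistry is the auto-configured MeterRegistry bean and "migration.events" is a made-up metric name:

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

private final MeterRegistry meterRegistry;

private void recordEvent(ClientEvent eventType) {
    // One counter per outcome, distinguished by the "type" tag.
    Counter.builder("migration.events")
            .tag("type", eventType.name()) // CREATE, UPDATE or ERROR
            .register(meterRegistry)
            .increment();
}

Calling recordEvent(...) at the three branch points of consume() gives you a running breakdown without any streaming infrastructure.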

How to create an Openfire plugin for CRUD

I'm very new to Openfire and this is my first time using Java. I got stuck trying to develop a plugin for CRUD. Could you give me a sample of how to build a plugin with CRUD ability? Thanks for your help.
You can start from this answer: Mapping Openfire Custom plugin with aSmack Client,
and follow the official tutorial along with the first 3 points of that answer.
About CRUD:
Let's assume you want to audit all your messages as XML in your database, so you'll implement a PacketInterceptor to keep the scenario simple.
Your plugin class will look like:
public class MyCustomPlugin implements Plugin, PacketInterceptor { /* ... */ }
In the method initializePlugin you'll have an invocation like:
public void initializePlugin(PluginManager manager, File pluginDirectory) {
    InterceptorManager.getInstance().addInterceptor(this);
}
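For completeness (an addition of mine, not part of the original answer): deregister the interceptor in destroyPlugin so it doesn't outlive a plugin reload:

public void destroyPlugin() {
    // Without this, the interceptor keeps firing after the plugin is unloaded.
    InterceptorManager.getInstance().removeInterceptor(this);
}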
and in the method interceptPacket something like this:
@Override
public void interceptPacket(Packet packet, Session session,
        boolean incoming, boolean processed) throws PacketRejectedException {
    if (!processed) {
        boolean done = doMyCRUDAction(packet);
        if (!done) {
            // do something if an error occurred
        }
    }
}
Now let's write to the database:
private static final String AUDIT_CHAT =
        "INSERT INTO MYTABLE(MESSAGEASXML) VALUES (?)";

private boolean doMyCRUDAction(Packet packet) {
    boolean isAudited = false;
    if (packet instanceof Message) {
        Message message = (Message) packet.createCopy();
        Connection con = null;
        PreparedStatement statement = null;
        try {
            con = DbConnectionManager.getConnection();
            statement = con.prepareStatement(AUDIT_CHAT);
            statement.setString(1, message.toString());
            statement.executeUpdate(); // an INSERT is an update, not a query
            isAudited = true;
        } catch (SQLException e) {
            Log.error(e.getMessage(), e);
        } catch (Exception ex) {
            Log.error(ex.getMessage(), ex);
        } finally {
            DbConnectionManager.closeConnection(statement, con);
        }
    }
    return isAudited;
}
Please keep in mind this is a reduced snippet of working code, so there may be some syntax left to fix.
If your CRUD must follow an explicit IQ request, you'll have to extend an IQHandler, create a custom IQ, and send it to the client in the handleIQ(IQ packet) method. You can check the Openfire source code for detailed and more complex implementations; a minimal sketch follows.
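A minimal sketch of such a handler. The "custom:crud" namespace and the class name are made-up illustrative choices, not Openfire conventions:

import org.jivesoftware.openfire.IQHandlerInfo;
import org.jivesoftware.openfire.auth.UnauthorizedException;
import org.jivesoftware.openfire.handler.IQHandler;
import org.xmpp.packet.IQ;

public class CrudIQHandler extends IQHandler {

    private final IQHandlerInfo info = new IQHandlerInfo("query", "custom:crud");

    public CrudIQHandler() {
        super("Custom CRUD IQ handler");
    }

    @Override
    public IQ handleIQ(IQ packet) throws UnauthorizedException {
        // Inspect packet.getChildElement() to decide which CRUD action to run,
        // then answer the client with a result IQ.
        IQ reply = IQ.createResultIQ(packet);
        return reply;
    }

    @Override
    public IQHandlerInfo getInfo() {
        return info;
    }
}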

MongoDB + Morphia, how to avoid a lot of "xxx connections now open"

My MongoDB console shows a lot of "xxx connections now open" messages and I can't understand why. I created a Datastore factory and I'm using @Inject; why doesn't it close the connection? I'm using .getDB().requestDone() too...
Where I get my DS:
public class DSFactory {

    Morphia morphia = new Morphia();
    Datastore ds = null;

    public DSFactory() {
        morphia.map(User.class);
        try {
            this.ds = morphia.createDatastore(new MongoClient("localhost"),
                    "userDB");
        } catch (UnknownHostException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    public Datastore getInstance() {
        return ds;
    }
}
My DAO:
public class UserDAO {

    @Inject DSFactory dsFactory;

    public void newUser(User user) {
        dsFactory.getInstance().save(user);
        dsFactory.getInstance().getDB().requestDone();
    }
}
Every new user I add opens a new connection in MongoDB, so after 30 new users I end up with "30 or more connections open"; it only drops to a low number or 0 when I close Eclipse.
I assume you inject UserDAO somewhere; when that happens the container will try to create a new DSFactory. Each UserDAO will create its own DSFactory, with each factory holding its own Morphia datastore.
You probably want only one Morphia datastore at a time, so you could make the factory a @Singleton so that each DAO gets the same DSFactory.
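A minimal sketch of that fix, assuming a JSR-330 container is managing DSFactory (the annotation is javax.inject.Singleton):

import javax.inject.Singleton;

@Singleton
public class DSFactory {

    private Datastore ds;

    public DSFactory() {
        Morphia morphia = new Morphia();
        morphia.map(User.class);
        try {
            // One MongoClient (and one connection pool) for the whole app.
            this.ds = morphia.createDatastore(new MongoClient("localhost"), "userDB");
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
    }

    public Datastore getInstance() {
        return ds;
    }
}

With the factory scoped as a singleton, every injected UserDAO shares the same MongoClient connection pool instead of opening a fresh one per DAO.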

Java DataSource, how to dispose of it

I'm working on a webapp where I manually create my DataSource (see also my other question for why: How to use Spring to manage connection to multiple databases), because I need to connect to different databases (dev, prod, qa, test).
I have solved choosing and switching between databases. But if a user logs out of my app and then wants to connect to another database, he is still connected to the same datasource, because at runtime myDs is not null. How can I properly dispose of this DataSource when the user logs out? I don't want the user to create the datasource every time he queries the database.
private DataSource createDataSource(Environment e) {
    OracleDataSource ds = null;
    String url = null;
    try {
        if (myDs != null) {
            logger.info("myDs connection: " + etmetaDs.getConnection().getMetaData().getURL());
            url = myDs.getConnection().getMetaData().getURL();
        }
    } catch (SQLException exc) {
        // TODO Auto-generated catch block
        exc.printStackTrace();
    }
    if (myDs == null) {
        try {
            ds = new OracleDataSource();
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
        ds.setDriverType("oracle.jdbc.OracleDriver");
        ds.setURL(e.getUrl());
        try {
            Cryptographer c = new Cryptographer();
            ds.setUser(c.decrypt(e.getUsername()));
            ds.setPassword(c.decrypt(e.getPassword()));
        } catch (CryptographyException ex) {
            logger.error("Failed to connect to my environment [" + e.getName() + "]");
            ex.printStackTrace();
            return null;
        }
        logger.info("Connecting to my environment [" + e.getName() + "]");
        myDs = ds;
    } else if (url.equals(e.getUrl())) {
        // same database requested: keep the existing datasource
    } else {
        // different database requested
    }
    return myDs;
}
If you read Reza's answer to your other question, you can see how to create multiple DataSources.
I think the problem here is not the DataSource but the way you store information in your code. I suppose your etmetaDs is shared by all your users, so disposing of it when a user logs out (= setting it to null) is not a good option.
What you have to do is maintain the connection status for each user. When a user logs off, you reset his status so that he obtains a new connection the next time he connects.
Update: There are many ways to achieve this. I give an example here of what I have in mind, but you'll have to adapt it to your needs. Suppose you have a UserData object that holds information:
public class UserData {
    String id;
    String name;
    String database;
}
You may have a dropdown in your application with the database names (dev, test, ...) and an empty first item. When the user selects a database, you get the connection with createDataSource(): if the DataSource already exists you return it, otherwise you create a new one. When a user disconnects (or when he logs on), you set his database to "" to force him to select a database from the dropdown again. There is no need to reset the datasource.
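A minimal sketch of that idea (the map, the method names, and a setDatabase setter on UserData are my assumptions for illustration): keep one DataSource per environment, shared by all users, and reset only the user's selection on logout.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

// One DataSource per environment name, shared by all users.
private final Map<String, DataSource> dataSources = new ConcurrentHashMap<>();

public DataSource dataSourceFor(Environment e) {
    // Create the DataSource lazily, once per environment, then reuse it.
    return dataSources.computeIfAbsent(e.getName(), name -> createDataSource(e));
}

public void onLogout(UserData user) {
    // Only the user's selection is cleared; the shared DataSource stays open.
    user.setDatabase("");
}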
