I'm trying to implement Postgres Row Level Security on my app that uses R2DBC.
I found this AWS post that implements this but uses a non-reactive approach.
I'm having problems converting this to a reactive approach since I can't find a class equivalent to the AbstractRoutingDataSource:
public class TenantAwareDataSource extends AbstractRoutingDataSource {

    private static final Logger LOGGER = LoggerFactory.getLogger(TenantAwareDataSource.class);

    @Override
    protected Object determineCurrentLookupKey() {
        Object key = null;
        // Pull the currently authenticated tenant from the security context
        // of the HTTP request and use it as the key in the map that points
        // to the connection pool (data source) for each tenant.
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        try {
            if (!(authentication instanceof AnonymousAuthenticationToken)) {
                Tenant currentTenant = (Tenant) authentication.getPrincipal();
                key = currentTenant.getId();
            }
        } catch (Exception e) {
            LOGGER.error("Failed to get current tenant for data source lookup", e);
            throw new RuntimeException(e);
        }
        return key;
    }

    @Override
    public Connection getConnection() throws SQLException {
        // Every time the app asks the data source for a connection
        // set the PostgreSQL session variable to the current tenant
        // to enforce data isolation.
        Connection connection = super.getConnection();
        try (Statement sql = connection.createStatement()) {
            LOGGER.info("Setting PostgreSQL session variable app.current_tenant = '{}' on {}", determineCurrentLookupKey().toString(), this);
            sql.execute("SET SESSION app.current_tenant = '" + determineCurrentLookupKey().toString() + "'");
        } catch (Exception e) {
            LOGGER.error("Failed to execute: SET SESSION app.current_tenant = '{}'", determineCurrentLookupKey().toString(), e);
        }
        return connection;
    }

    @Override
    public String toString() {
        return determineTargetDataSource().toString();
    }
}
What would be the R2DBC equivalent of AbstractRoutingDataSource?
Thanks
Full source code here.
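One possible direction, offered only as a rough sketch rather than a confirmed answer: recent Spring versions ship AbstractRoutingConnectionFactory (in org.springframework.r2dbc.connection.lookup; older spring-data-r2dbc releases had it under a different package), whose determineCurrentLookupKey() returns a Mono<Object> resolved from the Reactor context instead of a thread-local. Assuming the same Tenant principal and Spring Security's ReactiveSecurityContextHolder, the routing part might look roughly like this (untested):

import org.springframework.r2dbc.connection.lookup.AbstractRoutingConnectionFactory;
import org.springframework.security.authentication.AnonymousAuthenticationToken;
import org.springframework.security.core.context.ReactiveSecurityContextHolder;
import org.springframework.security.core.context.SecurityContext;
import reactor.core.publisher.Mono;

public class TenantAwareConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        // Reactive counterpart of the JDBC lookup above: the tenant is read
        // from the security context carried in the Reactor subscriber context,
        // not from SecurityContextHolder's thread-local.
        return ReactiveSecurityContextHolder.getContext()
                .map(SecurityContext::getAuthentication)
                .filter(auth -> !(auth instanceof AnonymousAuthenticationToken))
                .map(auth -> ((Tenant) auth.getPrincipal()).getId());
    }
}

The SET SESSION app.current_tenant step has no blocking getConnection() to hook into, so it would presumably have to run reactively against each Connection as it is obtained (for example by wrapping the routed factory's create()); whether that is the idiomatic R2DBC approach is exactly the open question here.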
Related
I am dealing with high traffic in my Spring Boot project and my goal is to serve clients as fast as possible. I have more than 500 requests per second, and on each REST endpoint call I need to connect to my schema and gather information from multiple tables. Should I create a new connection for each endpoint call, or open and close one before each DB query?
I wrote a JDBC connection class, but I am not sure it is a good approach. Maybe you can give me your opinion.
JDBC Connection Class
@PropertySource({"classpath:application.properties"})
@Configuration
public class FraudJDBConfiguration {

    private final Logger LOGGER = LogManager.getLogger(FraudJDBConfiguration.class);

    private final Environment env;

    @Autowired
    public FraudJDBConfiguration(Environment env) {
        this.env = env;
    }

    @Bean
    public Connection getFraudConnection() {
        // Step 1: Loading or registering Oracle JDBC driver class
        String connectionClass = env.getProperty("fraud.db.driver-class-name");
        try {
            Class.forName(connectionClass);
        } catch (ClassNotFoundException cnfex) {
            LOGGER.error(cnfex.getMessage());
            throw new RuntimeException("JDBC driver class not found");
        }
        // Step 2: Opening database connection
        try {
            String environmentType = env.getProperty("environment");
            if (environmentType == null) {
                LOGGER.error("Invalid environment type (TEST - UAT - LIVE)");
                throw new RuntimeException("Invalid environment type (TEST - UAT - LIVE)");
            } else {
                String connectionString = null;
                String username = null;
                String password = null;
                switch (environmentType.toLowerCase()) {
                    case "dev":
                        connectionString = env.getProperty(/*someurl*/);
                        username = env.getProperty(/*someusername*/);
                        password = env.getProperty(/*somepassword*/);
                        break;
                    case "tst":
                        connectionString = env.getProperty(/*someurl*/);
                        username = env.getProperty(/*someusername*/);
                        password = env.getProperty(/*somepassword*/);
                        break;
                    case "liv":
                        connectionString = env.getProperty(/*someurl*/);
                        username = env.getProperty(/*someusername*/);
                        password = env.getProperty(/*somepassword*/);
                        break;
                    case "uat":
                        connectionString = env.getProperty(/*someurl*/);
                        username = env.getProperty(/*someusername*/);
                        password = env.getProperty(/*somepassword*/);
                        break;
                }
                // Step 2.A: Create and get connection using DriverManager class
                if (connectionString == null) {
                    LOGGER.error("Connection string not found for the fraud schema");
                    throw new RuntimeException("Connection string not found for the fraud schema");
                }
                return DriverManager.getConnection(connectionString, username, password);
            }
        } catch (SQLException e) {
            LOGGER.error(e.getMessage());
        }
        return null;
    }
}
DAO
@Component
public interface FraudCommTransactionsDao {
    Long count();
}
DAO IMPL
@Service
public class FraudCommTransactionsDaoImpl implements FraudCommTransactionsDao {

    private final FraudJDBConfiguration fraudJDBConfiguration;

    @Autowired
    public FraudCommTransactionsDaoImpl(FraudJDBConfiguration fraudJDBConfiguration) {
        this.fraudJDBConfiguration = fraudJDBConfiguration;
    }

    @Override
    public Long count() {
        try (Connection connection = fraudJDBConfiguration.getFraudConnection()) {
            Statement stmt = connection.createStatement();
            ResultSet rs = stmt.executeQuery(/*some query*/);
            if (rs.next()) {
                return rs.getLong("transaction_id");
            } else {
                return 0L;
            }
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
        return null;
    }
}
No, establishing a new physical connection to a database server is costly. It involves multiple steps: user authorization, establishing session defaults, allocating memory on both client and server, etc. This overhead should not be added to every single request.
It's a common practice to create a connection pool to share the physical connections between application threads. This introduces the concept of logical connections: a Connection object created with DriverManager.getConnection() is a physical connection, while DataSource.getConnection() on a pooling DataSource returns a logical connection, which is a proxy over a pooled physical one.
There are multiple database connection pooling libraries for Java that you can use, e.g. HikariCP. Don't write your own; this is not simple.
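For illustration, here is a minimal sketch of what that looks like with HikariCP; the URL, credentials, pool size and query below are placeholders, not values from the question:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FraudDataSourceConfig {

    // One pool for the whole application, created once at startup and shared
    // by every request thread.
    public static DataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/FRAUD"); // placeholder URL
        config.setUsername("fraud_user");                            // placeholder credentials
        config.setPassword("secret");
        config.setMaximumPoolSize(20);                               // tune for your load
        return new HikariDataSource(config);
    }

    // Each call borrows a logical connection and returns it to the pool on
    // close(); the physical connection stays open inside the pool.
    public static long countTransactions(DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) AS c FROM transactions")) { // placeholder query
            return rs.next() ? rs.getLong("c") : 0L;
        }
    }
}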
Getting data fast and delivering it to the client is possible the simplest way: use the application.properties file. You can configure the database connection for your datasource there.
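In Spring Boot terms that means setting spring.datasource.url, spring.datasource.username and spring.datasource.password in application.properties and injecting the auto-configured DataSource (a HikariCP pool by default in Spring Boot 2+) or a JdbcTemplate, instead of opening connections by hand. A sketch of what the DAO from the question could then look like; the query and table name are placeholders:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class FraudCommTransactionsDaoImpl implements FraudCommTransactionsDao {

    private final JdbcTemplate jdbcTemplate;

    // Spring Boot builds the JdbcTemplate on top of the pooled DataSource
    // declared in application.properties.
    public FraudCommTransactionsDaoImpl(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public Long count() {
        // Borrows a pooled connection for the duration of the query and
        // returns it to the pool automatically.
        return jdbcTemplate.queryForObject("SELECT COUNT(*) FROM transactions", Long.class); // placeholder query
    }
}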
Hello, I have a problem with my JMS code when I try to send over 1000 messages to an MDB. The following code:
@Stateless(mappedName = "RequestProcessingQueue")
public class RequestProcessingQueue {

    private static final Logger logger = Logger.getLogger(RequestProcessingQueue.class);

    @Resource(mappedName = "jmsRequestsFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jmsRequestsDestination")
    private Queue queue;

    public void add(String participant, String password, List<Long> documents) throws JmsAppException {
        try {
            logger.debug("requests to process " + documents);
            Connection connecton = connectionFactory.createConnection();
            connecton.start();
            Session session = connecton.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = (QueueSender) session.createProducer(queue);
            Message msg = msg = session.createMessage();
            msg.setStringProperty("participant", participant);
            msg.setStringProperty("password", password);
            for (Long id : documents) {
                msg.setLongProperty("request", id);
                sender.send(msg);
            }
            sender.close();
            session.close();
            connecton.close();
        } catch (JMSException e) {
            throw new JmsAppException(e);
        } catch (Throwable e) {
            throw new JmsAppException("Fatal error occured while sending request to be processed", e);
        }
    }
}
throws
MQJMSRA_DS4001: JMSServiceException on send message:sendMessage: Sending message failed. Connection ID: 2979509408914231552 com.sun.messaging.jms.ra.DirectSession._sendMessage(DirectSession.java:1844) / sendMessage: Sending message failed. Connection ID: 2979509408914231552 com.sun.messaging.jmq.jmsserver.service.imq.IMQDirectService.sendMessage(IMQDirectService.java:1955) / transaction failed: [B4303]: The maximum number of messages [1 000] that the producer can process in a single transaction (TID=2979509408914244096) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property. com.sun.messaging.jmq.jmsserver.data.handlers.DataHandler.routeMessage(DataHandler.java:467)'}
at jms.example.RequestProcessingQueue.add(RequestProcessingQueue.java:48)
I do not understand why, because when I create the session I pass false as the first parameter, indicating that the session is in non-transacted mode.
Your code does not work because the basic JMS API was designed to work in any environment, not just from within an EJB container. Runtime environment programming restrictions and behaviour are described in the EJB specifications and JavaDoc, in particular javax.jms.Connection.createSession(boolean transacted, int acknowledgeMode).
Your code can be simplified (assuming you're using at least Java 7) to:
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public void add(String participant, String password, List<Long> documents) throws JmsAppException {
    try (Connection connection = connectionFactory.createConnection();
         Session session = connection.createSession();
         // connection.start() is not required for sending
         MessageProducer sender = session.createProducer(queue)) {
        logger.debug("requests to process " + documents);
        for (Long id : documents) {
            Message msg = session.createMessage();
            msg.setStringProperty("participant", participant);
            msg.setStringProperty("password", password);
            msg.setLongProperty("request", id);
            sender.send(msg);
        }
    } catch (JMSException e) {
        throw new JmsAppException(e);
    }
    // Don't catch Throwable because it hides bugs
}
Remember that EJB methods are automatically associated with a transaction unless you specify otherwise. Additionally, be sure to check the javadoc for javax.jms.Connection.createSession() and associated methods, particularly the sections describing behaviour in different runtime environments.
How can I use Spring Data to connect to Google Cloud Datastore? Currently I use com.google.api.services.datastore.DatastoreV1.
But my lead manager wants to use Spring Data with Datastore; how can I do that?
For example, to insert an Entity I currently use:
public void insert(Entity entity) {
    Datastore datastore = this.datastoreFactory.getInstance();
    CommitRequest request =
        CommitRequest.newBuilder().setMode(CommitRequest.Mode.NON_TRANSACTIONAL)
            .setMutation(Mutation.newBuilder().addInsertAutoId(entity)).build();
    try {
        CommitResponse response = datastore.commit(request);
    } catch (DatastoreException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

@Override
@SuppressWarnings("deprecation")
public Datastore getInstance() {
    if (datastore != null)
        return datastore;
    try {
        // Setup the connection to Google Cloud Datastore and infer
        // credentials from the environment.
        // The environment variables DATASTORE_SERVICE_ACCOUNT and
        // DATASTORE_PRIVATE_KEY_FILE must be set.
        datastore = DatastoreFactory.get().create(
            DatastoreHelper.getOptionsfromEnv().dataset(Constant.ProjectId)
                .build());
    } catch (GeneralSecurityException exception) {
        System.err.println("Security error connecting to the datastore: "
            + exception.getMessage());
        return null;
    } catch (IOException exception) {
        System.err.println("I/O error connecting to the datastore: "
            + exception.getMessage());
        return null;
    }
    return datastore;
}
Any help will be appreciated.
To use Spring Data with a specific store, you need to implement a set of interfaces from Spring Data Commons. Take a look at the GCP Spanner Spring Data implementation as an example (https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-data-spanner).
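To give a feel for the programming model such a module provides, here is a hypothetical sketch built on Spring Data Commons' CrudRepository; the entity and repository are invented for illustration and are not part of the Datastore client shown in the question:

import java.util.List;
import org.springframework.data.repository.CrudRepository;

// Hypothetical domain type used only for illustration.
class Book {
    Long id;
    String author;
    String title;
}

// The store-specific Spring Data module supplies the implementation at
// runtime; application code only declares the interface.
interface BookRepository extends CrudRepository<Book, Long> {

    // Derived query: the method name is parsed into a store query.
    List<Book> findByAuthor(String author);
}

Inserts then become bookRepository.save(book) rather than hand-built CommitRequests.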
I have a support website on which I would like to show some stats gathered from another Java app via JMX. We have noticed the support app sometimes cannot get the stats after the other app has been restarted. I guess this is because the support app has opened a JMX connection to the other app and keeps hold of it; then every time you go to the page that displays the JMX stats, it tries to gather them using that connection and fails.
My question is: is it better to have a single JMX connection and try to work out when we should reconnect it?
Or, each time we load the page with JMX stats on it, should we create a new JMX connection and then close it once we have the values we need?
As far as I know,
JMX connections are RMI connector objects and can therefore be held open in the client app, combined with a heartbeat approach to reconnect.
This way we avoid the overhead of re-establishing RMI connections, which are not lightweight.
Refer to: javax.management.remote.rmi.RMIConnector
We didn't end up using a heartbeat, but after reading Girish's answer we came up with the following:
public class JmxMetricsRetriever {

    // Logger field assumed; it was not shown in the original snippet.
    private static final Logger logger = LoggerFactory.getLogger(JmxMetricsRetriever.class);

    private final JMXServiceURL jmxUrl;
    private final Map<String, Object> env;
    private MBeanServerConnection connection;

    private JmxMetricsRetriever(JMXServiceURL jmxUrl, Map<String, Object> env) {
        this.jmxUrl = jmxUrl;
        this.env = env;
        reconnect();
    }

    public synchronized Object getAttributeValue(String jmxObjectName, String attributeName) {
        try {
            if (connection == null) {
                reconnect();
            }
            try {
                return getAttributeValuePrivate(jmxObjectName, attributeName);
            } catch (ConnectException exc) {
                // This is to reconnect after the server has been restarted.
                reconnect();
                return getAttributeValuePrivate(jmxObjectName, attributeName);
            }
        } catch (MalformedObjectNameException |
                 AttributeNotFoundException |
                 MBeanException |
                 ReflectionException |
                 InstanceNotFoundException |
                 IOException ex) {
            throw new RuntimeException(ex);
        }
    }

    private synchronized Object getAttributeValuePrivate(String jmxObjectName, String attributeName) throws MalformedObjectNameException, MBeanException, AttributeNotFoundException, InstanceNotFoundException, ReflectionException, IOException {
        ObjectName replication = new ObjectName(jmxObjectName);
        return connection.getAttribute(replication, attributeName);
    }

    private synchronized void reconnect() {
        logger.info(String.format("Reconnecting to [%s] via JMX", jmxUrl.toString()));
        try {
            JMXConnector jmxConnector = JMXConnectorFactory.connect(jmxUrl, env);
            this.connection = jmxConnector.getMBeanServerConnection();
            jmxConnector.connect();
        } catch (IOException e) {
            // Log something but don't throw an exception, otherwise our app will fail to start.
        }
    }

    public static JmxMetricsRetriever build(String url, String port, String user, String password) {
        try {
            JMXServiceURL jmxUrl = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://" + url + ":" + port + "/jmxrmi");
            Map<String, Object> env = new HashMap<>();
            env.put(JMXConnector.CREDENTIALS, new String[]{user, password});
            return new JmxMetricsRetriever(jmxUrl, env);
        } catch (MalformedURLException ex) {
            throw new RuntimeException(ex);
        }
    }
}
When we start our app we try to create a JMX connection and hold on to it. Every time we get a JMX attribute we check that the connection has been created (it might not have been if the server we are connecting to was not up when we started our service), then try to retrieve the attribute. If that fails, we try to reconnect and get the attribute value again. We could not find a better way to test whether a JMX connection was still usable, so we had to catch the exception.
I'm working on a webapp where I manually create my DataSource (also see my other question for why: How to use Spring to manage connection to multiple databases), because I need to connect to other databases (dev, prod, qa, test).
I have solved choosing and switching between databases. But when a user logs out of my app and then wants to connect to another database, he is still connected to the same DataSource, because at runtime myDs is not null. How can I properly dispose of this DataSource when the user logs out? I don't want the user to create the DataSource every time he queries the database.
private DataSource createDataSource(Environment e) {
    OracleDataSource ds = null;
    String url = null;
    try {
        if (myDs != null) {
            logger.info("myDs connection: " + etmetaDs.getConnection().getMetaData().getURL());
            url = myDs.getConnection().getMetaData().getURL();
        }
    } catch (SQLException exc) {
        // TODO Auto-generated catch block
        exc.printStackTrace();
    }
    if (myDs == null) {
        try {
            ds = new OracleDataSource();
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
        ds.setDriverType("oracle.jdbc.OracleDriver");
        ds.setURL(e.getUrl());
        try {
            Cryptographer c = new Cryptographer();
            ds.setUser(c.decrypt(e.getUsername()));
            ds.setPassword(c.decrypt(e.getPassword()));
        } catch (CryptographyException ex) {
            logger.error("Failed to connect to my environment [" + e.getName() + "]");
            ex.printStackTrace();
            return null;
        }
        logger.info("Connecting to my environment [" + e.getName() + "]");
        myDs = ds;
    } else if (url.equals(e.getUrl())) {
    } else {
    }
    return myDs;
}
If you read Reza's answer to your other question, you can see how to create multiple DataSources.
I think the problem here is not the DataSource but the way you store information in your code. I suppose that your etmetaDs is shared by all your users, so disposing of it when a user logs out (= setting it to null) is not the right option.
What you have to do is maintain the status of the connection for each user. When a user logs off, you reset their status so that they obtain a new connection the next time they connect.
Update: There are many ways to achieve this. I give here an example of what I have in mind, but you have to adapt it to your needs. Suppose that you have a UserData object that holds information:
public class UserData {
    String id;
    String name;
    String database;
}
You may have in your application a dropdown with the names of the databases (dev, test, ...) with an empty first item. When the user selects a database, you get the connection with createDataSource(): if a DataSource already exists for it you return it, otherwise you create a new one. When the user disconnects (or when the user logs on), you set the database to "" to force them to select the database from the dropdown again. There is no need to reset the DataSource.
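A small sketch of that idea; the class and the environmentFor() helper are hypothetical, standing in for the existing createDataSource(Environment) from the question. The point is that the DataSources are cached per database name and shared, while only the user's selection is reset on logout:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class UserDataSourceSelector {

    // Shared cache: one DataSource per database name, built lazily and reused
    // by every user who selects that database.
    private final Map<String, DataSource> dataSources = new ConcurrentHashMap<>();

    public DataSource dataSourceFor(UserData user) {
        return dataSources.computeIfAbsent(user.database,
                name -> createDataSource(environmentFor(name)));
    }

    public void onLogout(UserData user) {
        // Only the user's selection is cleared; the cached DataSource itself
        // stays available for other users.
        user.database = "";
    }

    // Hypothetical stand-ins for the createDataSource(...) and environment
    // lookup that already exist in the question's code.
    private DataSource createDataSource(Environment e) { return null; }
    private Environment environmentFor(String name) { return null; }
}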