I have created a router class and marked it as a @Bean in a @Configuration class. One thing I am not very sure about is how frequently Camel will be making a database call to evaluate the select. As soon as there is a new entry in the database, Camel should retrieve and process it.
public class SQLRouteBuilderForNewUserProcessing extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("sql:" +
                "select id from users where status=" + Status.NEW.ordinal() +
                "?" +
                "consumer.onConsume=update users set status = " + Status.PROCESSING.ordinal() +
                " where id = :#id")
            .bean(UserDataTranslator.class, "transformToUserData")
            .to("log:uk.co.infogen.users?level=INFO");
    }
}
By default, the SQL consumer polls the database every 500 ms. You can configure this with consumer.delay:
from("sql:select ... &consumer.delay=5000")
.to(...)
See the documentation of the SQL component:

consumer.delay (long, default: 500)
Camel 2.11: SQL consumer only: Delay in milliseconds between each poll.

From http://camel.apache.org/sql-component.html
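One detail that is easy to get wrong when the endpoint URI is assembled by string concatenation (as in the route above): the first option after the query is introduced with ? and every subsequent one with &. A minimal sketch of assembling such a URI in plain Java; the status values 0 and 1 stand in for the Status enum ordinals:

```java
public class SqlUriSketch {
    public static String buildUri() {
        // First option after the query uses '?', later options use '&'.
        return "sql:select id from users where status = 0"
                + "?consumer.delay=5000"
                + "&consumer.onConsume=update users set status = 1 where id = :#id";
    }

    public static void main(String[] args) {
        System.out.println(buildUri());
    }
}
```

With both options present, the consumer polls every 5 seconds and marks each consumed row as processing.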
I am developing a program that, based on a configuration file, allows different types of databases (e.g., YAML, MySQL, SQLite, and others to be added in the future) to be used to store data.
Currently it is all running on the main thread but I would like to start delegating to secondary threads so as not to block the execution of the program.
For supported databases that use a connection, I use HikariCP so that the process is not slowed down too much by opening a new connection every time.
The main problem is the multitude of available databases. For example, for some databases it might be sufficient to store the query string in a queue and have an executor check it every X seconds; if it is not empty it executes all the queries. For others, however, it is not, because perhaps they require other operations (e.g., YAML files that use a key-value system with a map).
What I can't work out is something "universal" that doesn't cause problems with the order of queries (I cannot just create a thread and execute it, because then one fetch thread might execute before another insertion thread and the data might not be up to date) and that can return data on completion (in the case of get functions).
I currently have an abstract Database class that contains all the get() and set(...) methods for the various data to be stored. Some methods need to be executed synchronously (must be blocking) while others can and should be executed asynchronously.
Example:
public abstract class Database {
public abstract boolean hasPlayedBefore(@Nonnull final UUID uuid);
}
public final class YAMLDatabase extends Database {
@Override
public boolean hasPlayedBefore(@Nonnull final UUID uuid) { return getFile(uuid).exists(); }
}
public final class MySQLDatabase extends Database {
@Override
public boolean hasPlayedBefore(@Nonnull final UUID uuid) {
    try (
        final Connection conn = getConnection(); // Get a connection from the pool
        final PreparedStatement statement = conn.prepareStatement("SELECT * FROM " + TABLE_NAME + " WHERE UUID = ?")
    ) {
        statement.setString(1, uuid.toString()); // bind the UUID instead of concatenating it into the SQL
        try (final ResultSet result = statement.executeQuery()) {
            return result.isBeforeFirst();
        }
} catch (final SQLException e) {
// Notifies the error
Util.sendMessage("Database error: " + e.getMessage() + ".");
writeLog(e, uuid, "attempt to check whether the user is new or has played before");
}
return true;
}
}
// Simple example class that uses the database
public final class Usage {
private final Database db;
public Usage(@Nonnull final Database db) { this.db = db; }
public User getUser(@Nonnull final UUID uuid) {
if(db.hasPlayedBefore(uuid))
return db.getUser(uuid); // Sync query
else {
// Set default starting balance
final User user = new User(uuid, startingBalance);
db.setBalance(uuid, startingBalance); // Example of sync query that I would like to be async
return user;
}
}
}
Any advice? I am already somewhat familiar with Future, CompletableFuture and Callback.
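Since ordering is the main constraint, one common approach (a sketch, not tied to any particular backend; the class and method names are made up) is to funnel all operations for a given database through a single-threaded executor: submission order is then execution order, and reads submitted to the same executor observe every write queued before them. CompletableFuture provides the completion callback for the get functions:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one single-threaded executor per database keeps queries in
// submission order; CompletableFuture returns results on completion.
public class OrderedDbExecutor {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    // Stand-in for the real backend (YAML file, MySQL, SQLite, ...).
    private final Map<String, Integer> fakeStore = new ConcurrentHashMap<>();

    // Fire-and-forget write; still ordered relative to later reads.
    public CompletableFuture<Void> setBalance(String uuid, int balance) {
        return CompletableFuture.runAsync(() -> fakeStore.put(uuid, balance), worker);
    }

    // Read goes through the same queue, so it sees all earlier writes.
    public CompletableFuture<Integer> getBalance(String uuid) {
        return CompletableFuture.supplyAsync(() -> fakeStore.getOrDefault(uuid, 0), worker);
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```

A method that must stay blocking is then just getBalance(uuid).join(); asynchronous callers chain thenAccept(...) instead. Backends with extra bookkeeping (e.g. the YAML key-value map) simply run that bookkeeping inside the submitted task, so it stays on the same ordered queue.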
I have these routes:
@Override
public void configure() throws Exception {
String overviewRoute = this.routingProperties.getReportingRoute(OverviewtRouteConstants.OVERVIEW);
this.from(overviewRoute).routeId(overviewRoute).threads(1, 100).choice()
.when(this.simple(BODY_GRAPH_NAME + GraphConstants.OVERVIEW_OPEN_LANE + "'"))
.to(this.routingProperties.getReportingRoute(OVERVIEW_OPENLANES_TO))
.when(this.simple(BODY_GRAPH_NAME + GraphConstants.OVERVIEW_BELT_DOWNTIME + "'"))
.to(this.routingProperties.getReportingRoute(OVERVIEW_BELTDOWNTIME_TO))
.when(this.simple(BODY_GRAPH_NAME + GraphConstants.OVERVIEW_LUGGAGE_THROUGHPUT + "'"))
.to(this.routingProperties.getReportingRoute(OVERVIEW_LUGGAGETHROUGHPUT_TO))
.when(this.simple(BODY_GRAPH_NAME + GraphConstants.OVERVIEW_LANE_UTILIZATION + "'"))
.to(this.routingProperties.getReportingRoute(OVERVIEW_LUGGAGETHROUGHPUT_TO))
.when(this.simple(BODY_GRAPH_NAME + GraphConstants.OVERVIEW_LUGGAGE_SCANNED + "'"))
.to(this.routingProperties.getReportingRoute(OVERVIEW_LUGGAGESCANNED_TO));
}
Rest service endpoint:
import javax.ws.rs.core.Response;
import org.springframework.stereotype.Service;
@Service(SERVICE_NAME)
public class OverviewServicesImpl extends BaseServices implements OverviewServices {
@Override
public Response overview(OverviewSearchDTO dto) {
return this.executeRouting(OverviewtRouteConstants.OVERVIEW, dto);
}
}
Context :
The main route overviewRoute is called from a REST WS endpoint. The other routes are called according to the when clauses.
My front end calls the main route multiple times in parallel.
What I see:
All routes defined in "choice" clause are called sequentially (The next route is called once the previous one has finished).
What I want:
I want the route selected in the choice clause to be invoked as soon as a WS call comes in, not only once the previous call has finished.
What I have tried:
Apache seda
Spring Scope: @Scope(BeanDefinition.SCOPE_PROTOTYPE)
It sounds like all the .when clauses are returning true, so the exchange is following all the choices. I am not sure the expression inside your .when clauses is an actual comparison. Am I missing how you are comparing parts of the message to route in the Content-Based Router?
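To see why the predicates matter, here is a plain-Java sketch of content-based routing (all names are made up, this is not Camel API): a choice evaluates its predicates in order and routes to the first match, so each when must be a real comparison against the message; a predicate that is effectively always true captures every message.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical stand-in for Camel's choice(): first matching predicate wins.
public class ChoiceSketch {
    public static String route(String graphName,
                               Map<String, Predicate<String>> choices) {
        for (Map.Entry<String, Predicate<String>> e : choices.entrySet()) {
            if (e.getValue().test(graphName)) {
                return e.getKey(); // route to the first matching endpoint
            }
        }
        return "unrouted";
    }

    public static Map<String, Predicate<String>> choices() {
        Map<String, Predicate<String>> m = new LinkedHashMap<>();
        m.put("toOpenLanes", g -> g.equals("openLane"));        // real comparison
        m.put("toBeltDowntime", g -> g.equals("beltDowntime")); // real comparison
        return m;
    }
}
```

In Camel terms, the equivalent predicate would be a simple expression comparing the body, not a string that always evaluates truthy.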
I have an issue with the connection pool being exhausted when querying a database using Spring Boot and NamedParameterJdbcTemplate.
This is how it is supposed to work:
Get a request from outside, with some parameters about how the house is supposed to be built.
Based on the parameters received, use some REST endpoints and two DAOs to gather data.
Use data to create House object and return it.
Receive more requests...
How it works as of now:
Request is good.
Data from REST endpoints: OK. Data with DAO: OK only for the first 50 requests with two DAOs, after that NOT OK. When one of the DAOs is disabled, no connections are blocked.
After 50 houses are built OK, the rest will take forever to finish and have no windows in the end.
This makes it unusable for more requests, as they will simply time out.
I get this exception when I call endpoint more than 50 times (max pool size):
com.atomikos.jdbc.AtomikosSQLException: Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the DataSourceBean.
And it will stay like this until I restart the app. It seems like there is something off about my DAO or configuration, but I haven't been able to figure out what, despite searching all day. If anyone can help, I will be thankful.
Extra info:
No other exceptions are thrown that I am aware of.
All the other data is retrieved correctly.
Send help.
UPDATE:
I did some more experiments:
This app uses another DAO that I didn't mention before because I forgot.
It works almost the same, only it connects to a different database, so it has a separate configuration. It also takes advantage of NamedParameterJdbcTemplate, and @Qualifier is used to select the correct one.
Now, what I discovered is that disabling one or the other DAO will not eat the connections anymore. So the question is: why can't they coexist in peace?
This is the dao.
@Component
public class WindowsDao {
private static final String PARAM_1 = "param";
private final String SELECT_ALL = ""
+ " SELECT "
+ " STUFF "
+ " FROM TABLE "
+ " WHERE "
+ " THING =:" + PARAM_1
+ " WITH UR";
@Autowired
private NamedParameterJdbcTemplate myTemplate;
public Optional<List<Window>> getWindows(
        final WindowsCode windowCode) {
    final MapSqlParameterSource queryParameters = new MapSqlParameterSource()
            .addValue(PARAM_1, windowCode.getValue());
    final Optional<List<Window>> windows;
try {
windows = Optional.of(myTemplate.query(
SELECT_ALL,
queryParameters,
new WindowsRowMapper()));
}
catch (final EmptyResultDataAccessException e) {
LOG.warn("No results were found.");
return Optional.empty();
}
return windows;
}
}
DAO is called from this service:
@Service
@Transactional
public class WindowsService {
@Autowired
private WindowsDao windowsDao;
public Optional<List<Stuff>> getWindows(
final WindowCode windowCode) {
final Optional<List<Window>> windows = windowsDao.getWindows(
        windowCode);
return windows;
}
}
Which is called from this service:
@Service
@Transactional
public class AssembleHouseService {
// some things
@Autowired
private WindowsService windowsService;
public House buildHouse(final SomeParams params) {
// This service will fetch parts of the house
HouseBuilder builder = House.builder();
// call other services and then...
builder.windows(windowsService.getWindows(args).orElse(/*something*/));
//and then some more things...
}
}
This is what I use to configure the datasource:
myDb:
driver: db2
schema: STUFF
unique-resource-name: STUFF
database-name: STUFF1
server-name: myServer
port: 12312
username: hello
password: world
driver-type: 4
min-pool-size: 2
max-pool-size: 50
RowMapper:
public class WindowsRowMapper implements RowMapper<Window> {
@Override
public Window mapRow(final ResultSet rs, final int rowNum)
        throws SQLException {
    return new Window(
            rs.getString("WSIZE"),
            rs.getString("DESCRIPTION"),
            rs.getString("COLOR"));
}
}
If you have two read-only DAOs in the same transaction, then you may have hit a known bug in the Atomikos open-source edition that manifests itself only in this particular scenario.
It's been fixed in the commercial edition but not (yet) in the open source.
Hope that helps
Just posting here for those who look for a workaround:
If you can't change to a different version of Atomikos (or just ditch it), what worked for me was adding
Propagation.REQUIRES_NEW
to services that used those different data sources, so it would be:
#Service
#Transactional(propagation = Propagation.REQUIRES_NEW)
It seems like putting these two read operations into separate transactions makes Atomikos close the transaction and release the connection properly.
I have a transactional class in my project with following 2 methods:
@Repository(value = "usersDao")
@Transactional(propagation = Propagation.REQUIRED)
public class UsersDaoImpl implements UsersDao {
@Autowired
SessionFactory sessionFactory;
/* some methods here... */
@Override
@Transactional(propagation = Propagation.REQUIRES_NEW, readOnly = false, rollbackFor = {Exception.class})
public void pay(Users payer, int money) throws Exception {
if( payer.getMoney() < money ) {
throw new Exception("");
}
payer.setMoney(payer.getMoney()-money);
this.sessionFactory.getCurrentSession().update(payer);
}
@Override
@Transactional(readOnly = false, rollbackFor = {Exception.class})
public void makeTransfer(Users from, Users to, int money) throws Exception {
System.out.println("Attempting to make a transfer from " + from.getName() + " to " + to.getName() + "... sending "+ money +"$");
to.setMoney(to.getMoney()+money);
if( from.getMoney() < 10 ) {
throw new Exception("");
}
pay(from, 10);
if( from.getMoney() < money ) {
throw new Exception("");
}
from.setMoney(from.getMoney()-money);
this.sessionFactory.getCurrentSession().update(from);
this.sessionFactory.getCurrentSession().update(to);
}
}
The assumption is that when somebody makes a transfer, they must pay a $10 tax. Let's say there are two users who both have $100 and I want to make a transfer (User1 -> User2) of $95. First, in makeTransfer, I check whether User1 is able to pay the tax. He is, so I move forward and check whether he has $95 left for the transfer. He doesn't, so the transaction is rolled back. The problem is that in the end they both have $100. Why? For the method pay I set Propagation.REQUIRES_NEW, which means it should execute in a separate transaction. So why is it also rolled back? The tax payment should actually be saved to the database, and only the transfer should be rolled back. The whole point of this exercise, for me, is understanding propagations. I understand them theoretically but can't manage to build a real example of how a propagation change affects my project. If this example is confusing, I'd love to see another one.
What M. Deinum said.
Furthermore, according to the Spring documentation:
Consider the use of AspectJ mode (see the mode attribute in the table below) if you expect self-invocations to be wrapped with transactions as well. In this case, there will not be a proxy in the first place; instead, the target class will be weaved (that is, its byte code will be modified) in order to turn @Transactional into runtime behavior on any kind of method.
To use AspectJ, write
<tx:annotation-driven transaction-manager="transactionManager" mode="aspectj"/>
instead of
<tx:annotation-driven transaction-manager="transactionManager" />
Source:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/transaction.html
I solved my problem. Thanks, guys, for pointing out AspectJ, but that wasn't the end. I did deeper research, and it turned out that @Transactional Spring beans are backed by an aspect proxy, which is responsible (as far as I get it) for managing transactions (creating, rolling back, etc.) for @Transactional methods. But it fails on self-reference, because a self-call bypasses the proxy, so no new transaction is created. A simple way I used to get this working is calling the function pay(..) not via this, but via the bean from the Spring container, like this:
UsersDaoImpl self = (UsersDaoImpl)this.context.getBean("usersDao");
self.pay(from, 10);
Now, as it refers to the bean, it goes through the proxy so it creates a new transaction.
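The proxy-bypass behaviour itself has nothing to do with Spring specifically; it can be demonstrated with a plain JDK dynamic proxy (a toy sketch with a made-up Payments interface): calls that enter through the proxy hit the invocation handler, which is where Spring's transaction interceptor would run, while a method calling a sibling method via this stays inside the target object and never reaches the handler.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
    interface Payments {
        void makeTransfer();
        void pay();
    }

    static class PaymentsImpl implements Payments {
        public void makeTransfer() {
            pay(); // self-invocation: goes straight to this.pay(), not the proxy
        }
        public void pay() { }
    }

    // Records every method name that actually passes through the proxy,
    // i.e. the spot where Spring would open a (new) transaction.
    static final List<String> intercepted = new ArrayList<>();

    public static Payments proxied(Payments target) {
        InvocationHandler h = (proxy, method, args) -> {
            intercepted.add(method.getName());
            return method.invoke(target, args);
        };
        return (Payments) Proxy.newProxyInstance(
                Payments.class.getClassLoader(),
                new Class<?>[] { Payments.class }, h);
    }

    public static void main(String[] args) {
        Payments p = proxied(new PaymentsImpl());
        p.makeTransfer();
        // Only "makeTransfer" was intercepted; the inner pay() bypassed the proxy.
        System.out.println(intercepted);
    }
}
```

Fetching the bean from the container, as above, re-enters through the proxy, which is exactly why the new transaction is then created.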
Just for learning purposes, I used these two lines to detect whether the two transactions are the same object or not:
TransactionStatus status = TransactionAspectSupport.currentTransactionStatus();
System.out.println("Current transaction: " + status.toString());
I have an application that publishes events to RabbitMQ and a consumer which consumes the events. My question is: is there a way to write a unit test to test the functionality of this consumer?
Just to add to this, the consumer works in a hierarchical structure, i.e., if an order event is posted, the suborders in it are extracted and their corresponding events are posted to a queue; when the suborders get consumed, the lineItems in each one are also posted to a queue, and lastly the details for each lineItem are posted too.
It looks like an easy-to-use solution for testing RabbitMQ-related development is still out of reach.
See this discussion and this discussion from SpringFramework forum. They are either using Mockito (for unit-tests) or a real RabbitMQ instance (integration tests) for their own testing.
Also, see this post where the author uses a real RabbitMQ but with some facilities to make it more 'test-friendly'. However, as of now, the solution is valid only for Mac users!
Extending the previous answer, here's a very quick implementation for the unit-test route (using Mockito). The starting point is one of RabbitMQ's own tutorials for Java.
Receiver class (message handler)
public class LocalConsumer extends DefaultConsumer {
private Channel channel;
private Logger log;
// manual dependency injection
public LocalConsumer(Channel channel, Logger logger) {
super(channel);
this.log = logger;
}
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body)
throws IOException {
String message = new String(body, "UTF-8");
// insert here whatever logic is needed, using whatever service you injected - in this case it is a simple logger.
log.print(" [x] Received and processed '" + message + "'");
}
}
Test class
public class LocalConsumerTest {
Logger mockLogger = mock(Logger.class);
Channel mockChannel = mock(Channel.class);
String mockConsumerTag = "mockConsumerTag";
LocalConsumer _sut = new LocalConsumer(mockChannel, mockLogger);
@Test
public void shouldPrintOutTheMessage () throws java.io.IOException {
// arrange
String message = "Test";
// act
_sut.handleDelivery(mockConsumerTag, null, new AMQP.BasicProperties(), message.getBytes() );
// assert
String expected = " [x] Received and processed '" + message + "'";
verify(mockLogger).print(eq(expected));
}
}
Consumer
// ...
// this is where you inject the system you'll mock in the tests.
Consumer consumer = new LocalConsumer(channel, _log);
boolean autoAck = false;
channel.basicConsume(queueName, autoAck, consumer);
// ...