Currently I am trying to improve my skills with Spring Boot applications, and I wanted to know whether it is possible for a Spring Boot application to insert into a MySQL database every 10 minutes (or some other interval) while the application is deployed on a server (I am using Elastic Beanstalk). If so, how would I do this, and would I need additional tools to accomplish it?
You can use the @Scheduled annotation.
Here is a pretty nice example using cron, fixedRate, and fixedDelay.
Just be mindful that if you are using dynamic schedules, for example reading the expression from configuration:
@Scheduled(cron = "${my.dynamic.schedule}")
public void myScheduledMethod() {
    // do some tasks here
}
you may also need to introduce logic to ensure that all instances are not running at the same time, performing the same task, to avoid redundant behavior.
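To tie this back to the original question: outside of Spring's scheduler, the same 10-minute cadence can be produced with a plain ScheduledExecutorService. The sketch below is illustrative (the PeriodicInserter name is mine, and the task body stands in for whatever JDBC insert you need):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicInserter {

    /**
     * Runs task immediately and then once every periodMillis.
     * The caller is responsible for shutting the executor down.
     */
    public static ScheduledExecutorService every(long periodMillis, Runnable task) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(task, 0, periodMillis, TimeUnit.MILLISECONDS);
        return scheduler;
    }
}
```

Usage would be something like PeriodicInserter.every(TimeUnit.MINUTES.toMillis(10), () -> insertIntoMySql()), where insertIntoMySql is a placeholder for your JDBC code. If you stay with Spring, @Scheduled needs nothing beyond @EnableScheduling on a configuration class, and no extra tooling on Elastic Beanstalk.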
Context of my question:
I use a proprietary database (the target database) and I cannot reveal its name (you likely would not know it even if I did).
Here, I usually need to update the records using Java (the number of records varies from 20,000 to 40,000).
Each update transaction takes one or two seconds against this DB, so the total execution time runs into hours. No batch-execution functions are available in this database's API. Because of this, I am thinking of using Java's multi-threading: instead of processing all the records in a single process, I want to create a thread for every 100 records. Java can run these threads in parallel.
But I want to know how the DB processes these threads when they share the same connection. I could find out by running a trial program and comparing time intervals, but I feel the result may be deceiving to some extent. I know you don't have much information about the database; you can answer this question assuming the DB is MS SQL Server or MySQL.
Please also suggest any other Java feature I could use to make this program execute faster, if not multi-threading.
It is not recommended to use a single connection with multiple threads; you can read about the pitfalls of doing so here.
If you really need to use a single connection with multiple threads, then I would suggest making sure the threads start and stop within a transaction. If one of them fails, you have to roll back the changes. So: first get the record count, split it into cursor ranges, and for each range start a thread that executes the updates on that range. One thing to watch for is not to close the connection after each partition finishes individually, but only once the whole transaction is complete and the DB has committed.
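The count-then-partition step described above can be sketched in plain Java. RangePartitioner is an illustrative name, and the worker threads and shared-connection handling are deliberately left out:

```java
import java.util.ArrayList;
import java.util.List;

public class RangePartitioner {

    /** Splits [0, total) into consecutive [start, end) chunks of at most chunkSize rows. */
    public static List<int[]> partition(int total, int chunkSize) {
        List<int[]> ranges = new ArrayList<>();
        for (int start = 0; start < total; start += chunkSize) {
            ranges.add(new int[] { start, Math.min(start + chunkSize, total) });
        }
        return ranges;
    }
}
```

Each range would then be submitted to an ExecutorService worker; if at all possible, give each worker its own connection rather than sharing one.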
If you have an option to use Spring Framework, check out Spring Batch.
Spring Batch provides reusable functions that are essential in processing large volumes of records, including logging/tracing, transaction management, job processing statistics, job restart, skip, and resource management. It also provides more advanced technical services and features that will enable extremely high-volume and high performance batch jobs through optimization and partitioning techniques. Simple as well as complex, high-volume batch jobs can leverage the framework in a highly scalable manner to process significant volumes of information.
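For intuition, Spring Batch's chunk-oriented processing boils down to a loop like the hand-rolled sketch below. This is not the framework's API, just the core idea: write N items, then commit, so a failure only loses the current chunk:

```java
import java.util.List;
import java.util.function.Consumer;

public class ChunkProcessor {

    /**
     * Processes items in commit-sized chunks, invoking commitAction after
     * each chunk. Returns the number of commits performed.
     */
    public static <T> int process(List<T> items, int chunkSize,
                                  Consumer<T> writer, Runnable commitAction) {
        int commits = 0;
        for (int i = 0; i < items.size(); i += chunkSize) {
            for (T item : items.subList(i, Math.min(i + chunkSize, items.size()))) {
                writer.accept(item);
            }
            commitAction.run();
            commits++;
        }
        return commits;
    }
}
```

In Spring Batch itself you would configure a chunk-oriented step with a reader, processor, and writer instead of writing this loop yourself.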
Hope this helps.
I have a Spring Boot application that receives an API instruction and then begins streaming in a file, hash totaling the file and then streaming it out somewhere else. In the real world this could take one second or it could take hours.
I'd like to add that using Postman and curl we have fully tested this app and it works as per its design.
We need to cover this with JUnit.
We are using JUnit 5. I am trying to run a test where the API is called on a very small file (probably a few seconds to process in total). However, the Spring Boot application shuts down too quickly, meaning that the test never actually completes.
The inbound/outbound streams are both performed by @Async methods, which I don't think helps, as these run on separate threads.
I also whole-heartedly believe that this kind of processing should not be tested with JUnit. But we have a coverage target to hit. This is IST testing.
My question is...
Does anyone know of a way to keep the Spring Boot Application running for a longer time, within the JUnit?
Just long enough to see the file come out the other side.
I've not used any Mock Frameworks at this point in time. I'm open to this idea but some direction would be appreciated if this is a viable option.
You'll need to introduce some sort of blocking/polling to wait for the asynchronous task to complete before allowing the @Test method to complete.
Awaitility provides good support for testing scenarios like that.
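Under the hood this amounts to polling with a deadline. Awaitility's actual API is along the lines of await().atMost(Duration.ofSeconds(30)).until(() -> outputFileExists()), but the idea can be hand-rolled as a sketch (WaitFor is an illustrative name; Awaitility adds timeouts, poll intervals, and much better failure reporting):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {

    /** Polls condition every 25 ms until it is true or timeoutMillis elapses. */
    public static boolean until(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(25);
        }
        return condition.getAsBoolean();
    }
}
```

In the test, the condition would be "the file has appeared on the other side", which keeps the Spring Boot context alive until the @Async work finishes or the timeout fires.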
I have a Java application that can save and retrieve data from an Apache Derby database using JDBC. I would like to update the view of every user when changes are made in the database.
I tried using a for-loop that polls the database every few seconds, but that uses loads of processor time, as expected. I've also heard about TimerTask and ScheduledExecutorService. I'm not sure how they work, but I imagine they are a better alternative to the for-loop. However, they would also have to poll the database, which I find less ideal than having the database notify of changes.
I've read about database triggers, which I think might be the best solution. However, all the examples I find for Apache Derby only seem to trigger other changes in the database, not in the Java application.
Is it possible to use a trigger to execute a method in the Java application? If so, how? Or is there another approach to solving the problem that I don't know of?
I'm trying to build a mini web application for reminders, and I am using Quartz Scheduler to handle launching the reminder events. I understand that the tasks (Jobs) and schedulers (Schedulers) can be configured from a database with JDBC, but I have searched and cannot find an example showing what information I should put in the tables and what Java code I should run to start the scheduled tasks. If someone has an example, or anything that serves this purpose, I would be grateful.
You have understood wrong. You can use any JobStore (including the JDBCJobStore) to store your jobs/triggers/etc., but creating them manually in the database is a bad idea™.
Depending on how you are using Quartz, you can set it up either using Spring or using the Fluent syntax (which I believe is the preferred method these days).
Further reading: http://quartz-scheduler.org/documentation/quartz-2.1.x/tutorials/tutorial-lesson-09
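With the Fluent syntax, a recurring reminder job looks roughly like the sketch below. ReminderJob is a hypothetical class of yours implementing org.quartz.Job, and the names, group, and interval are illustrative; when a JDBCJobStore is configured in quartz.properties, scheduleJob persists the job and trigger into the Quartz tables for you, so you never write those rows by hand:

```java
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ReminderSetup {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        // ReminderJob is your own class implementing org.quartz.Job.
        JobDetail job = JobBuilder.newJob(ReminderJob.class)
                .withIdentity("reminder", "reminders")
                .usingJobData("message", "Drink water")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("reminderTrigger", "reminders")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(30)
                        .repeatForever())
                .build();

        // With a JDBCJobStore configured, this is what populates the QRTZ_* tables.
        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}
```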
I have a small web application configured with Guice, Jersey, and EclipseLink, and I run this application on Jetty (8.0.0.M1) during development. There are about 10 (small) JPA-managed classes (entities and embeddables), and about 20 classes total.
The initial startup takes 15 seconds, plus 5 seconds for the first request. It seems like JPA does its work on the first request, since I have the table generation strategy "create" enabled and see some JPA output from Maven on the first request.
A reload takes about 10 seconds and the first request after reloading takes about 3 to 4 seconds.
You may think that the startup time is not so bad, but I'm wondering if I could speed up startup so development flows more fluently, like with Django. Any ideas for startup tuning?
I'm afraid that if you are not prepared to remove the table creation strategy, you will have to tolerate such loading times. In essence, every time you start your application, it will drop/create/verify the tables and issue the correct DDL statements to make them match the entities in your package.
Assuming that you're done defining your entities and you are working on some business-logic code, you can create the database once, and just re-use your initial setup.
I imagine you are using Jetty for rapid application development (RAD) and you want to see and test out any changes as quickly as possible. If there is no actual "persistent" requirement on your RAD environment's database, you could try moving to an in-memory DB engine. DB engines like HSQLDB allow you to spin up new tables (and other structures) very rapidly compared to actual production-quality DB engines. This would normally require an ORM, because HSQLDB's SQL is very different from most other databases', but it sounds like you are already using JPA, so this shouldn't be difficult.
The only alternative I see is using a database whose schema is already created appropriately and not dropping it every time.
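If you go the in-memory route with EclipseLink, the relevant persistence.xml fragment would look roughly like this (the unit name, HSQLDB URL, and credentials are illustrative; the eclipselink.ddl-generation property is what creates the tables in the empty in-memory database on each startup):

```xml
<persistence-unit name="dev" transaction-type="RESOURCE_LOCAL">
  <properties>
    <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbc.JDBCDriver"/>
    <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:mem:devdb"/>
    <property name="javax.persistence.jdbc.user" value="sa"/>
    <property name="javax.persistence.jdbc.password" value=""/>
    <!-- Generate tables from the entities on startup; cheap against an in-memory DB -->
    <property name="eclipselink.ddl-generation" value="create-tables"/>
  </properties>
</persistence-unit>
```

Your production configuration would then point at the real database with DDL generation switched off.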