I have an employee management application backed by a MySQL database.
In my application, I have add/edit/delete/view functionality.
Whenever I run any of it, one query is fired against the database; for example, adding an employee fires an INSERT query.
I want to do something on my database so that I can see how many queries have been fired to date.
I don't want to make any changes to my Java code.
You can use SHOW STATUS:
SHOW GLOBAL STATUS LIKE 'Questions';
As documented under Server Status Variables:
The status variables have the following meanings.
[ deletia ]
Questions
The number of statements executed by the server. This includes only statements sent to the server by clients and not statements executed within stored programs, unlike the Queries variable. This variable does not count COM_PING, COM_STATISTICS, COM_STMT_PREPARE, COM_STMT_CLOSE, or COM_STMT_RESET commands.
Beware that:
the statistics are reset when FLUSH STATUS is issued.
the SHOW STATUS command is itself a statement and will increment the Questions counter.
these statistics are server-wide and therefore will include other databases on the same server (if any exist)—a feature request for per-database statistics has been open since January 2006; in the meantime one can obtain per-table statistics from google-mysql-tools/UserTableMonitoring.
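With those caveats in mind, here is a minimal JDBC sketch (connection details are placeholders) that snapshots Questions around a workload and prints the difference; the comment notes the self-counting adjustment from the second caveat:
import java.sql.*;

public class QueryCounter {
    // Reads the global 'Questions' counter; URL/credentials are placeholders.
    static long questions(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SHOW GLOBAL STATUS LIKE 'Questions'")) {
            rs.next();
            return rs.getLong("Value"); // result columns: Variable_name, Value
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/employees", "user", "password")) {
            long before = questions(conn);
            // ... exercise add/edit/delete/view here ...
            long after = questions(conn);
            // the second SHOW STATUS counts itself, hence the -1 adjustment
            System.out.println("statements fired: " + (after - before - 1));
        }
    }
}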
You should execute the queries mentioned below:
To get the SELECT query count, execute SHOW GLOBAL STATUS LIKE 'Com_select';
To get the UPDATE query count, execute SHOW GLOBAL STATUS LIKE 'Com_update';
To get the DELETE query count, execute SHOW GLOBAL STATUS LIKE 'Com_delete';
To get the INSERT query count, execute SHOW GLOBAL STATUS LIKE 'Com_insert';
You can also analyze the general log or route your application via a MySQL proxy to get all queries executed on a server.
If you don't want to modify your code, then you can trace this on the database with triggers. The restriction is that triggers can only fire on INSERT/UPDATE/DELETE, so they can't be used to count reads (SELECTs).
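A minimal sketch of that idea, sent over JDBC to keep everything in Java (the employee table follows the question; the query_counts table and trigger name are hypothetical). Note that an AFTER ... FOR EACH ROW trigger counts affected rows, not statements:
try (Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost/employees", "admin", "password");
     Statement st = conn.createStatement()) {
    // bookkeeping table that the triggers will increment
    st.execute("CREATE TABLE IF NOT EXISTS query_counts (" +
               "op VARCHAR(10) PRIMARY KEY, cnt BIGINT NOT NULL DEFAULT 0)");
    st.execute("INSERT IGNORE INTO query_counts (op) VALUES ('insert')");
    // repeat analogously for UPDATE and DELETE
    st.execute("CREATE TRIGGER employee_ins_count AFTER INSERT ON employee " +
               "FOR EACH ROW UPDATE query_counts SET cnt = cnt + 1 WHERE op = 'insert'");
}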
Maybe it's too "enterprise" and too "production" for your question.
When you use munin (http://munin-monitoring.org/) (other monitoring tools have similar extensions), you can use its MySQL monitoring plugins, which show you how many requests (split into Insert/Update/Loaddata/...) you are firing.
With these tools, you see the usage and the load you are producing.
Especially when data changes and may cause more accesses/load (missing indices, more queries because of big m:n tables, ...), you will notice it.
It's extremely handy, and you can do the check during your break. No typing, nothing; just check the graphs.
I think that the most exact method, which needs no modifications to the database or application in order to operate, would be to configure your database management system to log all events.
You are left with a log file, which is a text file that can be analyzed on demand.
Here is The General Query Log manual page that will get you started.
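As a sketch, the general query log can also be toggled at runtime from any client with sufficient privileges (MySQL 5.1+; credentials are placeholders), so no server restart is needed:
try (Connection conn = DriverManager.getConnection(
        "jdbc:mysql://localhost/employees", "admin", "password");
     Statement st = conn.createStatement()) {
    st.execute("SET GLOBAL general_log = 'ON'");  // file location: general_log_file variable
    // ... exercise the application, then inspect the log file ...
    st.execute("SET GLOBAL general_log = 'OFF'"); // don't leave it on in production
}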
Related
I'm just looking for high-level advice when dealing with an issue with a multi-threaded application.
Here's how it works:
The application takes in Alerts, which are then processed in different threads to make Reports. On occasion, two Alerts result in the same Report, but that is not desired.
It is a Spring application, written in Java, using a MySQL DB.
I altered my code to run a SELECT SQL query before saving a Report, which checks whether a similar report is already there. If one exists, the Report is not generated. However, if two Alerts come in at the same time, the SELECT for Report #2 runs before Report #1 has been saved.
I thought about putting in a sleep() with a random wait time of 1-10 seconds, but it would still cause an issue whenever two threads were assigned the same random sleep time.
I'm pretty new to multi-threading, so does anyone have any ideas, or resources to point me in the right direction?
Thanks a lot!!
Assuming you have code that looks something like this:
Report report = getReport(...); // calls the DB to get a record to see if it already exists
if (report == null) {
insertReport(...); // add a record to DB which might have already been added by another thread
}
then to avoid collisions across threads (or JVMs) combine the SELECT and INSERT. For example:
insertReportIfNotAlreadyExists(...);
which uses a single statement structured as:
INSERT INTO REPORTS (...)
SELECT ...
FROM DUAL
WHERE NOT EXISTS (...)
with the NOT EXISTS clause SELECTing for the record to make sure it doesn't already exist. (A plain INSERT ... VALUES does not accept a WHERE clause, which is why the values are supplied through a SELECT.) For a fully robust guard, also put a unique constraint on the identifying columns and treat a duplicate-key error as "already exists".
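A minimal JDBC sketch of that combined statement (the REPORTS table is from the answer; the alert_key and body columns are hypothetical):
// Inserts the report only if no report with the same key exists yet.
// Returns true if a row was inserted.
boolean insertReportIfNotAlreadyExists(Connection conn, long alertKey, String body)
        throws SQLException {
    String sql =
        "INSERT INTO REPORTS (alert_key, body) " +
        "SELECT ?, ? FROM DUAL " +
        "WHERE NOT EXISTS (SELECT 1 FROM REPORTS WHERE alert_key = ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setLong(1, alertKey);
        ps.setString(2, body);
        ps.setLong(3, alertKey);
        return ps.executeUpdate() == 1; // 0 means the report already existed
    }
}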
I am currently in the design and trial stage of a Java application to track targets at my work. I have created all the GUIs, applied functionality for opening and closing windows, and created a MySQL database with appropriate tables, including a username and password form which is connected and working with MySQL.
I have made 2 applications: one for operators (DB input), the other for managers and display (DB output).
My question is: using NetBeans, can I submit user data (this will be the job serial) from the first app into MySQL and then recall the results for display purposes in the second, on an hour-by-hour basis?
I don't see why it wouldn't be possible, but I cannot find tutorials for this and do not want to waste time on trial and error only to find out that it isn't possible.
Yes, it is very possible.
In order to submit user data, have that app connect to the database and use SQL statements like INSERT INTO table_name .... If you want users to have a login, you can also assign that in the connection. I recommend you look up how to use PreparedStatements; with them, you take user input and put it into the database safely.
In order to recall the results, you would also use SQL statements, but this time you select from the table, like SELECT * FROM table_name. The * means select every column; however, you can also specify columns. The results go into a variable (in Java it's called a ResultSet), and you loop through it to get each piece of data.
The hour-by-hour basis would just be a method or class that executes the SELECT statement on a timer, as in the sketch below.
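A minimal sketch tying the three pieces together (table, column, and credential names are hypothetical):
import java.sql.*;
import java.util.concurrent.*;

public class TargetTracker {
    static final String URL = "jdbc:mysql://localhost/targets_db"; // hypothetical

    // Operator app: submit a job serial with a PreparedStatement.
    static void submitJobSerial(String jobSerial) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "operator", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO jobs (job_serial) VALUES (?)")) {
            ps.setString(1, jobSerial);
            ps.executeUpdate();
        }
    }

    // Manager app: re-read the table every hour and display the rows.
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try (Connection conn = DriverManager.getConnection(URL, "manager", "secret");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM jobs")) {
                while (rs.next()) {
                    System.out.println(rs.getString("job_serial"));
                }
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}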
If you are coding in Java, here's a decent tutorial: http://www.homeandlearn.co.uk/java/java_and_databases.html
Recently my team ran into a situation in which some records in our shared test database disappear for no clear reason. Because it's a shared database (which is utilized by many teams), we can't track down whether it's a programming mistake or someone running a bad SQL script.
So I'm looking for a way to be notified (at the database level) when a row of a specific table A gets deleted. I have looked at Postgres TRIGGERs, but they failed to give me the specific SQL that caused the deletion.
Is there any way I can log the SQL statement that caused the deletion of rows in table A?
You could use something like this.
It allows you to create special triggers for PostgreSQL tables that log all the changes to the chosen tables.
These triggers can log the query that caused the change (via current_query()).
Using this as a base, you can add more fields/information to the log.
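A hedged sketch of the idea, sent over JDBC to stay in Java (the audit_log table, function, and trigger names are hypothetical; table_a and its id column stand in for your table A):
try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost/testdb", "admin", "secret");
     Statement st = conn.createStatement()) {
    // audit table recording who deleted what, and the full statement that did it
    st.execute("CREATE TABLE IF NOT EXISTS audit_log (" +
               "deleted_id BIGINT, deleted_at TIMESTAMPTZ DEFAULT now(), " +
               "by_user TEXT DEFAULT current_user, query TEXT)");
    st.execute("CREATE OR REPLACE FUNCTION log_table_a_delete() RETURNS trigger AS $$ " +
               "BEGIN " +
               "  INSERT INTO audit_log (deleted_id, query) " +
               "  VALUES (OLD.id, current_query()); " +
               "  RETURN OLD; " +
               "END $$ LANGUAGE plpgsql");
    st.execute("CREATE TRIGGER table_a_delete_audit AFTER DELETE ON table_a " +
               "FOR EACH ROW EXECUTE PROCEDURE log_table_a_delete()");
}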
You would do this in the actual Postgres config file:
http://www.postgresql.org/docs/9.0/static/runtime-config-logging.html
log_statement (enum)
Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). ddl logs all data definition statements, such as CREATE, ALTER, and DROP statements. mod logs all ddl statements, plus data-modifying statements such as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included (with any embedded single-quote marks doubled).
The default is none. Only superusers can change this setting.
Since deletions are data-modifying statements, you want either mod or all to be the selection (ddl alone only covers CREATE/ALTER/DROP). This is what you need to alter:
In your data/postgresql.conf file, change the log_statement setting to 'mod' (or 'all'). Further, the following may also need to be validated:
1) make sure the log_destination variable is set
2) make sure you turn on the logging_collector
3) also make sure that pg_log actually exists relative to your data directory, and that the postgres user can write to it.
taken from here
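Putting those settings together, a minimal postgresql.conf sketch might look like this (values are illustrative assumptions; check the defaults of your build):
# postgresql.conf (excerpt)
log_statement = 'mod'        # logs INSERT/UPDATE/DELETE/TRUNCATE plus DDL
logging_collector = on       # capture stderr output into log files
log_destination = 'stderr'
log_directory = 'pg_log'     # relative to the data directory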
I have a Java web app (WAR deployed to Tomcat) that keeps a cache (Map<Long,Widget>) in memory. I have a Postgres database that contains a widgets table:
widget_id | widget_name  | widget_value
(INT)     | (VARCHAR 50) | (INT)
To O/R map between Widget POJOs and widgets table records, I am using MyBatis. I would like to implement a solution whereby the Java cache (the Map) is updated in real-time whenever a value in the widgets table changes. I could have a polling component that checks the table every, say, 30 seconds, but polling just doesn't feel like the right solution here. So here's what I'm proposing:
Write a Postgres trigger that calls a stored procedure (run_cache_updater())
The procedure in turns runs a shell script (run_cache_updater.sh)
The script base-64 encodes the changed widgets record and then cURLs the encoded record to an HTTP URL
The Java WAR has a servlet listening on the cURLed URL and handles any HttpServletRequests sent to it. It base-64 decodes the record and somehow transforms it into a Widget POJO.
The cache (Map<Long,Widget>) is updated with the correct key/value.
This solution feels awkward, and so I am first wondering how any Java/Postgres gurus out there would handle such a situation. Is polling the better/simpler choice here (am I just being stubborn?) Is there another/better/more standard solution I am overlooking?
If not, and this solution is the standard way of pushing changed records from Postgres to the application layer, then I'm choking on how to write the trigger, stored procedure, and shell script so that the entire widgets record gets passed into the cURL statement. Thanks in advance for any help here.
I can't speak to MyBatis, but I can tell you that PostgreSQL has a publish/subscribe system baked in, which would let you do this with much less hackery.
First, set up a trigger on widgets that runs on every insert, update, and delete operation. Have it extract the primary key and NOTIFY widgets_changed, id. (Well, from PL/pgSQL, you'd probably want PERFORM pg_notify(...).) PostgreSQL will broadcast your notification if and when that transaction commits, making both the notification and the corresponding data changes visible to other connections.
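A hedged sketch of that trigger, sent over JDBC to stay in Java (the channel and column names follow the question; the function and trigger names are assumptions):
try (Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://localhost/widgetdb", "admin", "secret");
     Statement st = conn.createStatement()) {
    st.execute("CREATE OR REPLACE FUNCTION notify_widgets_changed() RETURNS trigger AS $$ " +
               "BEGIN " +
               "  IF TG_OP = 'DELETE' THEN " +
               "    PERFORM pg_notify('widgets_changed', OLD.widget_id::text); " +
               "  ELSE " +
               "    PERFORM pg_notify('widgets_changed', NEW.widget_id::text); " +
               "  END IF; " +
               "  RETURN NULL; " + // AFTER trigger, so the return value is ignored
               "END $$ LANGUAGE plpgsql");
    st.execute("CREATE TRIGGER widgets_notify AFTER INSERT OR UPDATE OR DELETE " +
               "ON widgets FOR EACH ROW EXECUTE PROCEDURE notify_widgets_changed()");
}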
In the client, you'd want to run a thread dedicated to keeping this map up-to-date. It would connect to PostgreSQL, LISTEN widgets_changed to start queueing notifications, SELECT * FROM widgets to populate the map, and wait for notifications to arrive. (Checking for notifications apparently involves polling the JDBC driver, which sucks, but not as bad as you might think. See PgNotificationPoller for a concrete implementation.) Once you see a notification, look up the indicated record and update your map. Note that it's important to LISTEN before the initial SELECT *, since records could be changed between SELECT * and LISTEN.
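A minimal sketch of such an updater thread, assuming the pgjdbc driver and the question's Widget POJO (connection details and poll interval are placeholders):
import java.sql.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class WidgetCacheUpdater implements Runnable {
    final Map<Long, Widget> cache = new ConcurrentHashMap<>(); // the question's Map<Long,Widget>

    public void run() {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/widgetdb", "app", "secret")) {
            try (Statement st = conn.createStatement()) {
                st.execute("LISTEN widgets_changed"); // LISTEN *before* the initial load
                try (ResultSet rs = st.executeQuery(
                        "SELECT widget_id, widget_name, widget_value FROM widgets")) {
                    while (rs.next()) {
                        cache.put(rs.getLong(1),
                                  new Widget(rs.getLong(1), rs.getString(2), rs.getInt(3)));
                    }
                }
            }
            PGConnection pg = conn.unwrap(PGConnection.class);
            try (Statement poll = conn.createStatement()) {
                while (!Thread.currentThread().isInterrupted()) {
                    poll.executeQuery("SELECT 1").close(); // nudge the driver to read the socket
                    PGNotification[] ns = pg.getNotifications();
                    if (ns != null) {
                        for (PGNotification n : ns) {
                            refresh(conn, Long.parseLong(n.getParameter())); // payload = primary key
                        }
                    }
                    Thread.sleep(500);
                }
            }
        } catch (SQLException | InterruptedException e) {
            Thread.currentThread().interrupt(); // production code should reconnect and re-LISTEN
        }
    }

    void refresh(Connection conn, long id) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT widget_name, widget_value FROM widgets WHERE widget_id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) cache.put(id, new Widget(id, rs.getString(1), rs.getInt(2)));
                else cache.remove(id); // the row was deleted
            }
        }
    }
}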
This approach doesn't require PostgreSQL to know anything about your application. All it has to do is send notifications; your application does the rest. There's no shell scripts, no HTTP, and no callbacks, letting you reconfigure/redeploy your application without also having to reconfigure the database. It's just a database, and it can be backed up, restored, replicated, etc. with no extra complications. Similarly, your application has no extra complexities: all it needs is a connection to PostgreSQL, which you already have.
I am a java programmer and I want to know how many database calls/trips are done by my application. We use Oracle as our relational database.
With Oracle, I got to know about a way to alter session settings and generate trace files. Below are the statements to be fired:
ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
After the trace files are generated, they can be read using the TkProf utility. But this approach cannot be used because:
my application uses the Hibernate and Spring frameworks, and hence the application does not have a handle to the session.
even if we get the trace files, I need to know whether the set of queries was fired in one go (in a batch) or separately. I am not sure whether TkProf output can help with that.
Does anyone have any better suggestions?
In TkProf, you can basically read the number of round trips as the number of "calls" (although there are exceptions where fewer round trips are required; e.g. parse/execute/fetch of a single-row select is, theoretically, possible in a single round trip, the so-called "exact fetch" feature of Oracle). As an estimate, however, the TkProf figures are good enough.
If you trace wait events, you should directly see the 'SQL*Net message to/from client' wait events in the raw trace, but I think TkProf does not show them (not sure, give it a try).
Another way is to look into the session statistics:
select value
from v$mystat ms, v$statname sn
where ms.value > 0
and ms.statistic#=sn.statistic#
and sn.name IN ('SQL*Net roundtrips to/from client')
However, if you do that in your app, you will slow it down, and the figures you receive will include the round trips for that SELECT itself.
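A hedged sketch of using that view from JDBC, snapshotting before and after a block of work (connection details are placeholders; as noted, the snapshots add a few round trips of their own):
import java.sql.*;

public class RoundTripMeter {
    // Reads the current session's client round-trip count from v$mystat.
    static long roundTrips(Connection conn) throws SQLException {
        String sql =
            "SELECT ms.value FROM v$mystat ms JOIN v$statname sn " +
            "ON ms.statistic# = sn.statistic# " +
            "WHERE sn.name = 'SQL*Net roundtrips to/from client'";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger")) {
            long before = roundTrips(conn);
            // ... run the workload under test on the same connection ...
            System.out.println("round trips: " + (roundTrips(conn) - before));
        }
    }
}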
I wrote a few articles about round-trip optimization:
http://blog.fatalmind.com/2009/12/22/latency-security-vs-performance/
http://blog.fatalmind.com/2010/01/29/oracle-jdbc-prefetch-portability/
Firstly, use a dedicated database (or timeframe) for this test, so it doesn't get easily confused with other sessions.
Secondly, look at the view v$session to identify the session(s) for hibernate. The USERNAME, OSUSER, TERMINAL, MACHINE should make this obvious. The SID and SERIAL# columns uniquely identify the session. Actually the SID is unique at any time. The SERIAL# is only needed if you have sessions disconnecting and reconnecting.
Thirdly, use v$sesstat (filtered on the SID you identified from v$session) and v$statname (as shown by Markus) to pull out the number of round trips. You can take a snapshot before the test, run the test, then look at the values again and determine the work done, as sketched below.
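A minimal sketch of that lookup from JDBC (requires SELECT grants on the v$ views; the SID comes from your v$session query):
import java.sql.*;

public class SessionRoundTrips {
    // Reads round trips for another session identified by SID.
    static long roundTripsFor(Connection conn, int sid) throws SQLException {
        String sql =
            "SELECT ss.value FROM v$sesstat ss JOIN v$statname sn " +
            "ON ss.statistic# = sn.statistic# " +
            "WHERE sn.name = 'SQL*Net roundtrips to/from client' AND ss.sid = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, sid);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : -1; // -1: session no longer exists
            }
        }
    }
}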
That said, I'm not sure it is a particularly useful measure in itself. TkProf will be more detailed and is much more focused on time (which is a more useful measure).
Best would be to get a dedicated event 10046 level 12 trace file of the running session. There you will find all the information in detail; that means you can see how many fetches the application does per executed command and the related wait events/elapsed time. The result can be analyzed using tools from Oracle like TkProf or the Oracle Trace Analyzer, or third-party tools like [QueryAdvisor][1].
By the way, you can ask your DBA to define a database trigger that activates Oracle trace files automatically after login, as sketched below. So capturing the file should not be a problem.
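A hedged sketch of such a logon trigger, created over JDBC by a suitably privileged account (the trigger name and APP_USER are assumptions):
try (Connection conn = DriverManager.getConnection(
        "jdbc:oracle:thin:@//localhost:1521/ORCL", "dba_user", "secret");
     Statement st = conn.createStatement()) {
    st.execute(
        "CREATE OR REPLACE TRIGGER trace_app_logon AFTER LOGON ON DATABASE " +
        "BEGIN " +
        "  IF USER = 'APP_USER' THEN " + // only trace the application's sessions
        "    EXECUTE IMMEDIATE " +
        "      'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12'''; " +
        "  END IF; " +
        "END;");
}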
R.U.
[1]: http://www.queryadvisor.com/ "TKPROF Oracle tracefile analysis with QueryAdvisor"