I have been struggling to connect to an H2 database from a Spring Boot app using the following connection string, as described in the Database URL Overview section:
spring.datasource.url=jdbc:h2:tcp://localhost:9092/~/test-db
I also tried many different combinations for the tcp (server mode) connection, but I still get errors such as "Connection is broken: "java.net.SocketTimeoutException: connect timed out: localhost:9092"" when running the Spring Boot app.
import java.sql.SQLException;
import org.h2.tools.Server;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {
    // code omitted

    @Bean(initMethod = "start", destroyMethod = "stop")
    public Server h2Server() throws SQLException {
        return Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092");
    }
}
So, how can I fix this problem and connect to the H2 database in server mode?
You seem to be a little confused.
H2 can run in two different 'modes'.
Local mode
Local mode means H2 'just works', and you access this mode with the file: thing in the JDBC connect URL. The JDBC driver itself does all the database work, as in, it opens files, writes data, it does it all. There is no 'database server' at all. Or, if you prefer, the JDBC driver is its own server though it opens no ports.
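For example, an embedded (local mode) URL in application.properties would look something like this (reusing the test-db name from your question):
spring.datasource.url=jdbc:h2:file:~/test-db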
Server mode
In this case you need a (separate) JVM and separately fire up H2 in server mode and then you can use the same library (still h2.jar) to serve as a JDBC server. In this mode, the two things are completely separate - if you want, you can run h2.jar on one machine to be the server, and run the same h2.jar on a completely different machine just to connect to the other H2 machine. The database server machine does the bulk of the work, with the 'client' H2 just being the JDBC driver. H2 is no different than e.g. mysql or postgres in such a mode: You have one 'app' / JVM process that runs as a database engine, allowing multiple different processes, even coming from completely different machines halfway around the world if you want to, to connect to it.
You access this mode with the tcp: thing in the JDBC string.
If you really want, you can run this mode and still have it all on a single machine, even a single JVM, but why would you want to? Whatever made you think this would 'solve lock errors' wouldn't actually be fixed by running all this stuff on a single JVM. There are only two options:
You're mis-analysing the problem.
You really do have multiple separate JVM processes (either one machine with 2 java processes in the activity monitor / ps auxww output / task manager, or 2+ machines) all trying to connect to a single database in which case you certainly do need this, yes.
How to do server mode right
You most likely want a separate JVM that starts first and that hosts the H2 database; it needs to be running before the 'client' JVMs (the ones that will connect to it) start. Catalina is not the 'server' you are looking for; it is org.h2.tools.Server, and if it says 'not found' you need to fix your Maven imports. This needs to be a separate JVM (you COULD write code that goes: oh, hey, there isn't a separate JVM running with the H2 server, so I'll start it in-process right here right now, but that means that process needs to stay in the air forever, which is just weird. Hence, you want a separate JVM process for this).
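As a minimal sketch (the class name is a placeholder; it only assumes h2.jar is on the classpath), that separate server JVM could be as small as this:

import java.sql.SQLException;
import org.h2.tools.Server;

// Hypothetical standalone launcher: run this in its own JVM, before any client starts.
public class H2ServerLauncher {
    public static void main(String[] args) throws SQLException {
        // Start an H2 TCP server on port 9092 that other JVMs can connect to.
        Server.createTcpServer("-tcp", "-tcpAllowOthers", "-tcpPort", "9092").start();
        System.out.println("H2 TCP server listening on port 9092");
        // Without the -daemon flag the server thread should keep this JVM alive until you stop it.
    }
}

The same thing can be done without writing any code at all by running org.h2.tools.Server from the command line with those flags.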
You haven't explained what you're doing. But, let's say what you're doing is this:
I have a CI script that fires up multiple separate JVMs, some in parallel even, which runs a bunch of integration and unit tests in parallel.
Even though they run in parallel (perhaps even intentionally so), you want to run all of this off of a single DB. This is usually a really bad idea (you want tests to be isolated, so that running them on their own behaves identically. You don't want a test to fail in a way that can only be reproduced if you run the same batch of 18 separate tests using the same run code, where one unrelated test fails in a specific fashion, whilst it's Tuesday, a full moon, and Beethoven is playing in your music player, and it's warmer than 24º in the room, affecting the CPU's throttling, of course. Which is exactly what tends to happen if you try to re-use resources in multiple tests!) – still, you somehow really want this.
... then edit the CI script to first launch a JVM that hosts an H2 server; once that's up and running, presumably run a process that fills this database with test data; once that's done, run all tests in parallel; and once those are all done, shut down the JVM and delete the DB file.
Exactly how to do the third part is a separate question - if you need help with that, ask a new question and name the relevant tool(s) you are using to run this stuff, paste the config files, etc.
My use case is a bunch of isolated calls that at some point interact with redis.
The problem is I'm seeing a super long wait time for acquiring a connection, having tried both predis and credis on my LAN environment. Over 1-3 client threads, the time it takes for my PHP scripts to connect to redis and select a database ranges from 18ms to 700ms!
Normally, I'd use a connection pool or cache a connection and use it across all my threads, but I don't think this can be done in PHP over different scripts.
Is there anything I can do to speed this up?
Apparently, Predis needs the persistent flag set (https://github.com/nrk/predis/wiki/Connection-Parameters) and also FPM, which was frustrating to set up on both Windows and Linux, not to mention testing before switching to FPM on our live setup.
I've switched to Phpredis (https://github.com/phpredis/phpredis), which is a PHP module/extension, and all is good now. The connection times have dropped dramatically using $redis->pconnect() and are consistent across multiple scripts/threads.
Caveat: it IS a little different from Predis in terms of error handling (it fails when instantiating the object rather than when running the first call, and it returns false instead of null for nonexistent values - ???), so watch out for that if switching from Predis.
I'm trying to test out MariaDB, Galera, and Corosync/Pacemaker to understand clustering with high-availability using CentOS 7 servers. The cluster size I am using for testing is 3 servers to prevent quorum issues for the most part. My tests and application are written in Java.
I have the clustering part down; it works in an active-active configuration and runs just fine. I also have HA set up using Pacemaker and corosync. I've done many tests with it, failing the slaves or bringing them back up during the run. I've also written to all three at run-time, regardless of whether it was the master connection or one of the child connections. When I try to test the master going down during run-time (to simulate a power outage, server crash, or whatever in the data center), the application immediately stops running. I get a java.net.SocketException error and the application closes, with the other two connections being shut down successfully. I've used both the kill and stop commands in the terminal to test and see if it'll work (just on the off chance it would).
JDBC URL String
Below is the part of the code that connects the application to the cluster. It connects correctly and does work until I cause the first master to go down; the other two going down does not affect this.
public void connections() {
    try {
        bigConnec = DriverManager.getConnection(
            "jdbc:mariadb:sequential:failover:loadbalance://"
            + "10.32.18.90,10.32.18.91,10.32.18.92/" + DB + "?autoReconnect=true&failOverReadOnly=false"
            + "&retriesAllDown=120",
            "root", "PASS");
        bigConnec.setAutoCommit(false);
    } catch (SQLException e) {
        System.err.println("Unable to connect to any one of the three servers! \n" + e);
        System.exit(1);
    }
    ...
}
There are three other connections made to each individual server so I can pull information more easily from them; that is what the ellipses indicate. The servers will exchange which is the "primary" node but the application will not connect to the next node in the list.
I feel like the issue is in the way I have my URL set up, because everything works outside of the cluster. Also, nothing happens during testing when I shut down the child nodes; the most that happens is I'll get a warning that the URL lost connection to either or both of them. Is there a way to configure the URL to allow automatic failover to the next available node in the string, or do I have to go about it some other way, such as individual connection URLs and Connection objects (or an array of Connection objects), black magic and pixie dust that really only SysAdmins know, or some other approach I have yet to try?
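To make the 'array of Connection objects' idea concrete, this is roughly what I have in mind (hosts from the URL above; DB, the class name, and the method name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical manual failover: try each node in turn and use the first one that answers.
public class NodeFailover {
    private static final String DB = "mydb"; // placeholder for the real schema name
    private static final String[] URLS = {
        "jdbc:mariadb://10.32.18.90/" + DB,
        "jdbc:mariadb://10.32.18.91/" + DB,
        "jdbc:mariadb://10.32.18.92/" + DB
    };

    public static Connection firstAvailable(String user, String pass) throws SQLException {
        SQLException last = null;
        for (String url : URLS) {
            try {
                return DriverManager.getConnection(url, user, pass);
            } catch (SQLException e) {
                last = e; // remember the failure and try the next node
            }
        }
        throw last; // no node accepted the connection
    }
}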
What I Have Tried
How to make MaxScale High Available with Corosync/Pacemaker (MariaDB Article)
Failover and High availability with MariaDB Connector/J (MariaDB Documentation)
What is the right MariaDB Galera jdbc URL properties for loadbalance (Stack Overflow)
HA Proxy Configuration with MariaDB Galera cluster (Stack Overflow)
Configuring Server Failover (MySQL Documentation)
Advanced Load-balancing and Failover Configuration (MySQL Documentation)
TL;DR
Problem: Java application is not automatically failing over despite being flagged for failover and sequential support in the JDBC URL (see JDBC URL String). Everything works with corosync and Pacemaker but I cannot get the Java application to transfer to the next available node to act as the primary connection when the current one goes down.
Question: Is the issue in the URL? As a follow-up: if so, would it be better to use three separate connections and use the first valid one, or is there something I can do to let the application automatically roll over to the next available connection in the current URL?
Software/Equipment
MariaDB 10.1.24
corosync 2.4.0
Pacemaker 1.1.15
CentOS 7
Java 8 / Eclipse Neon.3 / Eclipse 4.6.3
MariaDB Connector/J 2.0.1
If there is any more information you need, please do tell me in the comments and I'll update this as soon as I can!
I have a very short Java application that just opens a connection to a remote MySQL database, reads some data, prints it, and exits. The most time-consuming part of the application is the database connection.
Currently I have only a single thread, and my only concern is to save the time of opening the connection.
I thought of several ways to make it faster, but it turned out they do not help:
Connection Pooling - doesn't help because the pool lives only during a single run of the application. When the application is terminated, the pool is gone, and when I re-run the application, I have to re-open all the connections in the pool.
mysql-proxy - connects only to the local server: mysql-proxy for a remote MySQL server
TCP/IP server - I thought of holding a local TCP/IP server that will keep a persistent open connection and send it to a TCP/IP client on request. However, Connection objects cannot be serialized, so I have no way to pass the Connection object from client to server.
Any other option?
Generally, connecting to a DB is one of the most time-consuming operations. If the application is to be started and stopped repeatedly, there is little that you can do.
Using connection pooling in a web server, and having your app call that web server (talking to it using JSON, for example), might be an option.
You said you have a very short application, so your third option might work if you put the database logic into your "option 3" TCP/IP server and just forward the results to your connecting client. This is a typical application server pattern.
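A very rough sketch of that pattern (the host, credentials, port, and one-query-per-line "protocol" are all invented for illustration) could look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical long-running process: keeps one DB connection open and answers
// one-line queries from short-lived local clients, so they never pay the connect cost.
public class QueryForwarder {
    public static void main(String[] args) throws Exception {
        Connection db = DriverManager.getConnection(
                "jdbc:mysql://remote-db-host:3306/somedb", "user", "pass"); // opened once, reused
        try (ServerSocket server = new ServerSocket(4040)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                     Statement st = db.createStatement()) {
                    String sql = in.readLine(); // naive protocol: one query per line
                    if (sql == null) {
                        continue;
                    }
                    try (ResultSet rs = st.executeQuery(sql)) {
                        while (rs.next()) {
                            out.println(rs.getString(1)); // forward the first column only
                        }
                    }
                }
            }
        }
    }
}

The client side then becomes a plain socket write/read, which is much cheaper than opening a new remote MySQL connection on every run.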
Another thing to consider is the network lookup overhead (https://stackoverflow.com/q/3641155/1055715), which Marc B mentioned in his comment.
It turns out the best solution is to use mysql-proxy with a script that handles connection pooling (a combination of my first two options). I found one such script here:
http://forge.mysql.com/tools/tool.php?id=151
It was probably written for an older version of mysql-proxy, so I had to fix it (if anyone needs the fixed version - write me).
It works like a charm - I run the exact same application as before, the only change is in the connection string: instead of connecting to "qa-srv:3308" (the remote server) I connect to "127.0.0.1:4040" (the proxy server).
First of all, we are running a Java web application on WAS 5.1. Behind that, we use an Oracle database. The problem we're facing is really simple, but after a couple of hours of Google searching, I decided to ask you.
We have an application that is running on WAS. When we start the server, WAS sets up its DataSource so that it points to the database. Everything works fine, except when the DBAs have to reboot the database server. When they do, the data source is no longer valid and we have to manually restart all the servers; we are currently trying to correct that, if possible. We need to find a way to do it because we have three pre-production environments for our application, and there are two servers associated with each: one for the application and the other for a report-generator web service. So, when the DBAs want to reboot the server (and they usually don't tell us!) we have to reboot six servers. I was wondering if, in Java, there was a way to reset the data source so that we don't need to restart the servers.
For your information, WebSphere is v5.1 and Oracle is 9i, with Java 1.4.2.17.
We also use RAD:
Version: 6.0.1
Build id: 20050725_1800
You should configure your application server to always test the connection before leasing it out to a client. I'm not that familiar with WebSphere, but in WebLogic you can set a JDBC SQL statement such as select 1 from dual, and the container removes stale connections from the connection pool.
Here is a link on how to do it in WebSphere:
http://www-01.ibm.com/support/docview.wss?uid=swg21439688
Based on what I read in your note, you should be getting a StaleConnectionException, as WAS has stale handles (in its pool) because the DB has been restarted.
The Data Source configuration can be configured to purge the entire pool once a stale connection is detected. The default policy is to purge the individual connection.
Adopting this would save you from having to restart your WAS servers.
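On the application side, a complementary (sketch-only) pattern is to catch WebSphere's StaleConnectionException and retry once, so the pool purge and the retry together hide the DB restart from users; the DAO class below is hypothetical:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import com.ibm.websphere.ce.cm.StaleConnectionException;

public class ResilientDao {
    private DataSource dataSource;

    public ResilientDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Retry once if the pooled handle turns out to be stale after a DB restart.
    public void doWork() throws SQLException {
        for (int attempt = 0; attempt < 2; attempt++) {
            Connection con = null;
            try {
                con = dataSource.getConnection();
                // ... real JDBC work goes here ...
                return;
            } catch (StaleConnectionException e) {
                // Stale handle: with purge policy EntirePool the container discards the
                // pool, so the next getConnection() should hand back a fresh connection.
            } finally {
                if (con != null) {
                    try { con.close(); } catch (SQLException ignore) { }
                }
            }
        }
        throw new SQLException("Could not obtain a valid connection after retrying");
    }
}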
There are a number of resources in this space
http://www-01.ibm.com/support/docview.wss?uid=swg21063645
HTH
Manglu
We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help to gather state information, it's often easier to use a remote debugger. Setting this up is easy -- one need only add this to his command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Redhat Enterprise 4). How can we enable secure debugging? Any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So, KarlP's suggestion of binding to loopback and just using an SSH tunnel to a local developer box should work, provided SSH is set up properly on the servers.
However, it seems that JDK1.4x does not allow an interface to be specified for the debug port. So, we can either block access to the debug port somewhere in the network or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.)?
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebugger exception as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging of a production system will always result in a restart afterward.
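That servlet would be almost trivial; minus the security in front of it, something along these lines (class name hypothetical, using the same TurnOnDebuggerException) is all it takes:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical trigger servlet: a POST to it fires the onthrow= exception, starting the agent.
public class TurnOnDebuggerServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            throw new TurnOnDebuggerException();
        } catch (TurnOnDebuggerException expected) {
            // the JDWP agent is now listening on the configured port
        }
        resp.getWriter().println("Debugger enabled");
    }
}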
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain ol' Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
Replace your "main" class with DebuggerMainWrapper in your start-up scripts
Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added
Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure when it's on. On the upside, I don't think the debugger allows for multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file notification method because it allowed us to piggyback off of our existing Unix authentication/authorization by specifying the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieved the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
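For anyone who would rather not write it from scratch, here is a stripped-down sketch of those three classes (the -D property names debugger.trigger.file and real.main.class are invented, and error handling is simplified):

import java.io.File;
import java.lang.reflect.Method;

// (1) The trigger exception - never thrown anywhere else in the application.
class TurnOnDebuggerException extends Exception { }

// (2) Background thread that watches for a trigger file and, once it appears,
// throws (and immediately catches) the trigger exception so the JDWP agent starts.
class DebuggerPoller extends Thread {
    private final File triggerFile;

    DebuggerPoller(File triggerFile) {
        this.triggerFile = triggerFile;
        setDaemon(true);
    }

    public void run() {
        while (true) {
            if (triggerFile.exists()) {
                try {
                    throw new TurnOnDebuggerException();
                } catch (TurnOnDebuggerException expected) {
                    // the onthrow= agent is now listening; nothing more to do
                }
                return;
            }
            try {
                Thread.sleep(5000); // poll every N seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}

// (3) Wrapper main class: starts the poller, then reflectively calls the real main().
public class DebuggerMainWrapper {
    public static void main(String[] args) throws Exception {
        new DebuggerPoller(new File(System.getProperty("debugger.trigger.file"))).start();
        Class realMain = Class.forName(System.getProperty("real.main.class"));
        Method main = realMain.getMethod("main", new Class[] { String[].class });
        main.invoke(null, new Object[] { args });
    }
}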
If you use SSH you can allow tunneling and tunnel a port to your local host. No development required, all done using sshd, ssh and/or putty.
The debug socket on your java server can be set up on the local interface 127.0.0.1.
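For example (host and port are placeholders), on the 1.5+ JVMs you could bind the agent to loopback, open the tunnel from a developer box, and then point the IDE's remote debugger at localhost:9001:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=127.0.0.1:9001
ssh -L 9001:127.0.0.1:9001 youruser@production-host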
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
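For reference, on the 1.5 JVMs remote JMX over RMI is typically switched on with flags along these lines (the port and password-file path are placeholders); a client such as jconsole can then connect to it:
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password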
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
Put the production machine behind a firewall that blocks access to the debugging port(s).
Run a proxy process on the host itself that connects to the port, and encrypts the input and output from the socket.
Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have this connect to the server proxy. Communication between them would be encrypted.
Connect your debugger to the proxy client.