Trouble with MySQL - Java

I'm an intermediate Java programmer currently working with Java and MySQL to create an app. I'm using XAMPP and the phpMyAdmin that comes with it.
The server is on 127.0.0.1, without any routers, Wi-Fi systems or network. My app is also on 127.0.0.1.
Every time I try to connect to MySQL using Java, this message is displayed:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
But the MySQL server is running fine. When I log into phpMyAdmin, these errors are shown:
#2002 - Only one usage of each socket address (protocol/network address/port) is normally permitted.
— The server is not responding (or the local server's socket is not correctly configured).
mysqli_real_connect(): (HY000/2002): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Connection for controluser as defined in your configuration failed.
mysqli_real_connect(): (HY000/2002): Only one usage of each socket address (protocol/network address/port) is normally permitted.
Retry to connect
Warning in .\libraries\dbi\DBIMysqli.php#629
mysqli_real_escape_string() expects parameter 1 to be mysqli, boolean given
Backtrace
.\libraries\dbi\DBIMysqli.php#629: mysqli_real_escape_string(boolean false, string 'root')
.\libraries\DatabaseInterface.php#2670: PMA\libraries\dbi\DBIMysqli->escapeString(boolean false, string 'root')
.\libraries\Menu.php#142: PMA\libraries\DatabaseInterface->escapeString(string 'root')
.\libraries\Menu.php#110: PMA\libraries\Menu->_getAllowedTabs(string 'server')
.\libraries\Menu.php#83: PMA\libraries\Menu->_getMenu()
.\libraries\Response.php#316: PMA\libraries\Menu->getHash()
.\libraries\Response.php#441: PMA\libraries\Response->_ajaxResponse()
PMA\libraries\Response::response()
Warning in .\libraries\dbi\DBIMysqli.php#629
mysqli_real_escape_string() expects parameter 1 to be mysqli, boolean given
Backtrace
.\libraries\dbi\DBIMysqli.php#629: mysqli_real_escape_string(boolean false, string 'root')
.\libraries\DatabaseInterface.php#2670: PMA\libraries\dbi\DBIMysqli->escapeString(boolean false, string 'root')
.\libraries\Menu.php#142: PMA\libraries\DatabaseInterface->escapeString(string 'root')
.\libraries\Menu.php#110: PMA\libraries\Menu->_getAllowedTabs(string 'server')
.\libraries\Menu.php#71: PMA\libraries\Menu->_getMenu()
.\libraries\Response.php#327: PMA\libraries\Menu->getDisplay()
.\libraries\Response.php#441: PMA\libraries\Response->_ajaxResponse()
PMA\libraries\Response::response()
The MySQL my.ini file:
[mysqld]
port= 3306
socket = "E:/xampp/mysql/mysql.sock"
basedir = "E:/xampp/mysql"
tmpdir = "E:/xampp/tmp"
datadir = "E:/xampp/mysql/data"
pid_file = "mysql.pid"
# enable-named-pipe
key_buffer = 16M
max_allowed_packet = 1M
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
log_error = "mysql_error.log"
plugin_dir = "E:/xampp/mysql/lib/plugin/"
innodb_data_home_dir = "E:/xampp/mysql/data"
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = "E:/xampp/mysql/data"
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[myisamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout
And here's how I connect from Java:
try {
    Class.forName("java.sql.DriverManager");
    Connection conn = (Connection) DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/", "root", "mypassword");
    Statement stmt = (Statement) conn.createStatement();
} catch (Exception e) { /* handling */ }
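The `#2002 - Only one usage of each socket address` error shown above is Windows' way of saying that TCP port 3306 is already taken, usually by a second mysqld instance. As a quick sanity check (a hypothetical helper, not part of the original post), you can test from Java whether the port can still be bound:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortCheck {
    // Returns true if we can bind the port ourselves,
    // i.e. nothing else currently holds it.
    static boolean isPortFree(int port) {
        try (ServerSocket s = new ServerSocket(port)) {
            return true;
        } catch (IOException e) {
            return false; // something (another mysqld?) is already bound
        }
    }

    public static void main(String[] args) {
        System.out.println("3306 free: " + isPortFree(3306));
    }
}
```

If this prints `false` while XAMPP's MySQL is stopped, some other process owns port 3306 and phpMyAdmin's mysqld cannot start, which would explain both errors.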

Related

JDBC connection between Docker containers (docker-compose)

I'm trying to connect a web application running on Tomcat 8 to an Oracle database. Both of them run as Docker containers:
docker-compose.yml:
version: "3"
services:
  appweb:
    build: ./app
    image: "servlet-search-app:0.1"
    ports:
      - "8888:8080"
    links:
      - appdb
    environment:
      - DATA_SOURCE_NAME="jdbc:oracle:thin:@appdb:1521/XE"
  appdb:
    build: ./db
    image: "servlet-search-db:0.1"
    ports:
      - "49160:22"
      - "1521:1521"
      - "8889:8080"
Dockerfile of my Oracle DB image (build: ./db):
FROM wnameless/oracle-xe-11g
ADD createUser.sql /docker-entrypoint-initdb.d/
ENV ORACLE_ALLOW_REMOTE=true
Dockerfile of the Tomcat image (build: ./app):
FROM tomcat:8.0.20-jre8
COPY servlet.war /usr/local/tomcat/webapps/
COPY ojdbc14-1.0.jar /usr/local/tomcat/lib/
So the app starts up as expected but throws an exception when trying to connect to the database:
java.lang.IllegalStateException: java.sql.SQLException: Io exception: Invalid connection string format, a valid format is: "host:port:sid"
org.se.lab.ui.ControllerServlet.createConnection(ControllerServlet.java:115)
org.se.lab.ui.ControllerServlet.handleSearch(ControllerServlet.java:78)
org.se.lab.ui.ControllerServlet.doPost(ControllerServlet.java:53)
org.se.lab.ui.ControllerServlet.doGet(ControllerServlet.java:38)
javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
Now the issue seems obvious; however, when I fix the DATA_SOURCE_NAME string to:
DATA_SOURCE_NAME="jdbc:oracle:thin:@appdb:1521:XE"
I get the following exception:
java.lang.IllegalStateException: java.sql.SQLException: Listener refused the connection with the following error:
ORA-12505, TNS:listener does not currently know of SID given in connect descriptor
The Connection descriptor used by the client was:
appdb:1521:XE"
org.se.lab.ui.ControllerServlet.createConnection(ControllerServlet.java:115)
org.se.lab.ui.ControllerServlet.handleSearch(ControllerServlet.java:78)
org.se.lab.ui.ControllerServlet.doPost(ControllerServlet.java:53)
org.se.lab.ui.ControllerServlet.doGet(ControllerServlet.java:38)
javax.servlet.http.HttpServlet.service(HttpServlet.java:618)
javax.servlet.http.HttpServlet.service(HttpServlet.java:725)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
Now I tried to find out which one of them should actually work, so I started only the DB container:
docker build -t dbtest .
docker run -it -d --rm -p 1521:1521 --name dbtest dbtest
docker inspect dbtest | grep IPAddress
>> "IPAddress": "172.17.0.4"
Next, I try to connect with sqlplus:
sqlplus system/oracle@172.17.0.4:1521/XE  # works
sqlplus system/oracle@172.17.0.4:1521:XE  # ERROR: ORA-12545: Connect failed because target host or object does not exist
So what's the problem? Due to the link in the docker-compose file, the Tomcat container can resolve "appdb" to the container's IP.
Here's the code which should establish the connection:
protected Connection createConnection() {
    String datasource = System.getenv("DATA_SOURCE_NAME");
    try {
        // debug
        InetAddress address = null;
        try {
            address = InetAddress.getByName("appdb");
            System.out.println(address);                  // resolves to appdb/10.0.0.2
            System.out.println(address.getHostAddress()); // resolves to 10.0.0.2
        } catch (UnknownHostException e) {
            e.printStackTrace();
        }
        Class.forName("oracle.jdbc.driver.OracleDriver");
        return DriverManager.getConnection(datasource, "system", "oracle");
    } catch (SQLException | ClassNotFoundException e) {
        throw new IllegalStateException(e);
    }
}
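As background to the `:XE` vs `/XE` confusion (general thin-driver knowledge, not specific to this setup): the `host:port:SID` form addresses a SID, while the `//host:port/service` form addresses a service name. A small illustrative helper (class and method names are mine):

```java
public class OracleUrl {
    // SID form: jdbc:oracle:thin:@host:port:SID
    static String sidUrl(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }

    // Service-name form: jdbc:oracle:thin:@//host:port/service
    static String serviceUrl(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }

    public static void main(String[] args) {
        System.out.println(sidUrl("appdb", 1521, "XE"));     // jdbc:oracle:thin:@appdb:1521:XE
        System.out.println(serviceUrl("appdb", 1521, "XE")); // jdbc:oracle:thin:@//appdb:1521/XE
    }
}
```

Which one the listener accepts depends on how it is registered, which is exactly what the accepted answer below uncovers.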
Lastly, here's the tnsnames.ora file:
cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File:
XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = fcffb044d69d)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )
Thanks!
The Oracle default listener resolved the configured host to the wrong IP address:
vim $ORACLE_HOME/network/admin/listener.ora:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/xe)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC_FOR_XE))
      (ADDRESS = (PROTOCOL = TCP)(HOST = f4c4a3638c11)(PORT = 1521))
    )
  )
DEFAULT_SERVICE_LISTENER = (XE)
The HOST value is the Docker container id. If we look at /etc/hosts, it is set up correctly for the service link from the docker-compose file:
10.0.0.5 f4c4a3638c11
It also resolves correctly from the Tomcat container:
ping f4c4a3638c11
PING f4c4a3638c11 (10.0.0.5): 56 data bytes
...
If I try to connect with the IP address of the other interface, which is the Docker interface from the host system, the connection from the web application to the database works:
String datasource = "jdbc:oracle:thin:@172.17.0.4:1521:XE";
So the solution is to configure the listener to listen on the correct IP address:
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.0.5)(PORT = 1521))
Now this connection string works:
jdbc:oracle:thin:@appdb:1521:XE
I will report this behavior to wnameless/oracle-xe-11g as a bug.
Sorry, this is not a definitive answer. Let's treat it as a long comment :)
Your setup is quite complex for me to recreate; however, your error message is intriguing:
The Connection descriptor used by the client was:
appdb:1521:XE"
...
It looks like the environment value was chopped to appdb:1521:XE. How about trying to hard-code:
String datasource = "jdbc:oracle:thin:@appdb:1521/XE";
If that works, then you probably need to somehow escape your DATA_SOURCE_NAME environment variable in Docker.
I could be completely wrong, but I think it is worth a try.
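The trailing quote in the logged descriptor (`appdb:1521:XE"`) hints that the quotes in the compose file's environment entry became part of the value. As a sketch (the class name and quote-stripping workaround are mine, not from the post), you could print the raw value inside the container and defensively trim a surrounding quote pair before handing the string to the driver:

```java
public class EnvQuoteCheck {
    // Removes one pair of surrounding double quotes, if present.
    static String stripQuotes(String s) {
        if (s != null && s.length() >= 2 && s.startsWith("\"") && s.endsWith("\"")) {
            return s.substring(1, s.length() - 1);
        }
        return s;
    }

    public static void main(String[] args) {
        String raw = System.getenv("DATA_SOURCE_NAME");
        System.out.println("raw:   [" + raw + "]"); // brackets expose stray quotes
        System.out.println("clean: [" + stripQuotes(raw) + "]");
    }
}
```

The cleaner fix, if the quoting really is the culprit, would be to drop the quotes from the compose entry itself rather than strip them in code.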

Issue connecting to Cassandra via Spark in Java

I have a server with Docker and have created 3 Cassandra nodes, 2 Spark worker nodes and one Spark master node.
Now I want to connect to Spark from my laptop through a Java application.
My Java code is:
public SparkTestPanel(String id, User user) {
    super(id);
    form = new Form("form");
    form.setOutputMarkupId(true);
    this.add(form);
    SparkConf conf = new SparkConf(true);
    conf.setAppName("Spark Test");
    conf.setMaster("spark://172.11.100.156:9050");
    conf.set("spark.cassandra.connection.host", "cassandra-0");
    conf.set("spark.cassandra.connection.port", "9042");
    conf.set("spark.cassandra.auth.username", "cassandra");
    conf.set("spark.cassandra.auth.password", "cassandra");
    JavaSparkContext sc = null;
    try {
        sc = new JavaSparkContext(conf);
        CassandraTableScanJavaRDD<com.datastax.spark.connector.japi.CassandraRow> cassandraTable =
                javaFunctions(sc).cassandraTable("test", "test_table");
        List<com.datastax.spark.connector.japi.CassandraRow> collect = cassandraTable.collect();
        for (com.datastax.spark.connector.japi.CassandraRow cassandraRow : collect) {
            Logger.getLogger(SparkTestPanel.class).error(cassandraRow.toString());
        }
    } finally {
        if (sc != null) { // guard against an NPE when the context fails to start
            sc.stop();
        }
    }
}
I know that the application connects to the Spark master, because I can see my app in the Spark web UI, but on the line:
CassandraTableScanJavaRDD<com.datastax.spark.connector.japi.CassandraRow> cassandraTable = javaFunctions(sc).cassandraTable("test", "test_table");
I get the error below:
2017-08-17 12:14:31,906 ERROR CassandraConnectorConf:72 - Unknown host 'cassandra-0'
java.net.UnknownHostException: cassandra-0: nodename nor servname provided, or not known
...
And another error:
Caused by: java.lang.IllegalArgumentException: Cannot build a cluster without contact points
at com.datastax.driver.core.Cluster.checkNotEmpty(Cluster.java:119)
at com.datastax.driver.core.Cluster.<init>(Cluster.java:112)
at com.datastax.driver.core.Cluster.buildFrom(Cluster.java:178)
at com.datastax.driver.core.Cluster$Builder.build(Cluster.java:1335)
at com.datastax.spark.connector.cql.DefaultConnectionFactory$.createCluster(CassandraConnectionFactory.scala:131)
at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:159)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$8.apply(CassandraConnector.scala:154)
at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:32)
at com.datastax.spark.connector.cql.RefCountedCache.syncAcquire(RefCountedCache.scala:69)
at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:57)
at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:79)
at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:122)
at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:330)
at com.datastax.spark.connector.cql.Schema$.tableFromCassandra(Schema.scala:350)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.tableDef(CassandraTableRowReaderProvider.scala:50)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef$lzycompute(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.tableDef(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:137)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:62)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:262)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
Then I went to the server ("172.11.100.156"), entered the spark-master container, pinged cassandra-0 and saw:
root#708d210056af:/# ping cassandra-0
PING cassandra-0 (21.1.0.21): 56 data bytes
64 bytes from 21.1.0.21: icmp_seq=0 ttl=64 time=0.554 ms
64 bytes from 21.1.0.21: icmp_seq=1 ttl=64 time=0.117 ms
64 bytes from 21.1.0.21: icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from 21.1.0.21: icmp_seq=3 ttl=64 time=0.093 ms
What is happening in my application that causes this error?
Can anyone help?
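The UnknownHostException suggests it is the laptop, not the cluster, that fails to resolve `cassandra-0`: that name exists only in Docker's internal DNS on the server, which is why the ping works from inside the spark-master container. As a quick sanity check (a hypothetical helper, not from the post), the same lookup the connector performs can be reproduced directly:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostResolveCheck {
    // Mirrors the hostname lookup the Spark-Cassandra connector does
    // before it can build its list of contact points.
    static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false; // same failure reported as "Unknown host 'cassandra-0'"
        }
    }

    public static void main(String[] args) {
        System.out.println("cassandra-0 resolves: " + resolves("cassandra-0"));
        System.out.println("localhost resolves:   " + resolves("localhost"));
    }
}
```

Run on the laptop, `cassandra-0` would likely print `false`; pointing `spark.cassandra.connection.host` at an address the laptop can resolve (or adding a hosts entry) is the kind of change this check would validate.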

Why does an optional flume channel cause a non-optional flume channel to have problems?

I have what seems to be a simple Flume configuration that is giving me a lot of problems. Let me first describe the problem and then I'll list the configuration files.
I have 3 servers: Server1, Server2, Server3.
Server1:
Netcat source / Syslogtcp source (I tested this on both netcat with no acks and syslogtcp)
2 memory channels
2 Avro sinks (one per channel)
Replicating selector with second memory channel optional
Server2,3:
Avro source
memory channel
Kafka sink
In my simulation, Server2 is simulating "production" and thus cannot experience any data loss whereas Server3 is simulating "development" and data loss is fine.
My assumption is that using 2 channels and 2 sinks will decouple the two servers from each other, and if Server3 goes down, it won't affect Server2 (especially with the optional configuration option!). However, this is not the case. When I run my simulations and terminate Server3 with CTRL-C, I experience a slowdown on Server2, and the output to the Kafka sink from Server2 slows to a crawl. When I resume the Flume agent on Server3, everything goes back to normal.
I didn't expect this behavior. What I expected was that because I have two channels and two sinks, if one channel and/or sink goes down, the other channel and/or sink shouldn't have a problem. Is this a limitation of Flume? Is this a limitation of my sources, sinks, or channels? Is there a way to make Flume behave this way, using one agent with multiple channels and sinks that are decoupled from each other? I really don't want to run multiple Flume agents on one machine, one for each "environment" (production and development). Attached are my config files so you can see what I did in a more technical way:
SERVER1 (FIRST TIER AGENT)
#Describe the top level configuration
agent.sources = mySource
agent.channels = defaultChannel1 defaultChannel2
agent.sinks = mySink1 mySink2
#Describe/configure the source
agent.sources.mySource.type = netcat
agent.sources.mySource.port = 6666
agent.sources.mySource.bind = 0.0.0.0
agent.sources.mySource.max-line-length = 150000
agent.sources.mySource.ack-every-event = false
#agent.sources.mySource.type = syslogtcp
#agent.sources.mySource.host = 0.0.0.0
#agent.sources.mySource.port = 7103
#agent.sources.mySource.eventSize = 150000
agent.sources.mySource.channels = defaultChannel1 defaultChannel2
agent.sources.mySource.selector.type = replicating
agent.sources.mySource.selector.optional = defaultChannel2
#Describe/configure the channel
agent.channels.defaultChannel1.type = memory
agent.channels.defaultChannel1.capacity = 5000
agent.channels.defaultChannel1.transactionCapacity = 200
agent.channels.defaultChannel2.type = memory
agent.channels.defaultChannel2.capacity = 5000
agent.channels.defaultChannel2.transactionCapacity = 200
#Avro Sink
agent.sinks.mySink1.channel = defaultChannel1
agent.sinks.mySink1.type = avro
agent.sinks.mySink1.hostname = Server2
agent.sinks.mySink1.port = 6666
agent.sinks.mySink2.channel = defaultChannel2
agent.sinks.mySink2.type = avro
agent.sinks.mySink2.hostname = Server3
agent.sinks.mySink2.port = 6666
SERVER2 "PROD" FLUME AGENT
#Describe the top level configuration
agent.sources = mySource
agent.channels = defaultChannel
agent.sinks = mySink
#Describe/configure the source
agent.sources.mySource.type = avro
agent.sources.mySource.port = 6666
agent.sources.mySource.bind = 0.0.0.0
agent.sources.mySource.max-line-length = 150000
agent.sources.mySource.channels = defaultChannel
#Describe/configure the interceptor
agent.sources.mySource.interceptors = myInterceptor
agent.sources.mySource.interceptors.myInterceptor.type = myInterceptor$Builder
#Describe/configure the channel
agent.channels.defaultChannel.type = memory
agent.channels.defaultChannel.capacity = 5000
agent.channels.defaultChannel.transactionCapacity = 200
#Describe/configure the sink
agent.sinks.mySink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.mySink.topic = Server2-topic
agent.sinks.mySink.brokerList = broker1:9092, broker2:9092
agent.sinks.mySink.requiredAcks = -1
agent.sinks.mySink.batchSize = 100
agent.sinks.mySink.channel = defaultChannel
SERVER3 "DEV" FLUME AGENT
#Describe the top level configuration
agent.sources = mySource
agent.channels = defaultChannel
agent.sinks = mySink
#Describe/configure the source
agent.sources.mySource.type = avro
agent.sources.mySource.port = 6666
agent.sources.mySource.bind = 0.0.0.0
agent.sources.mySource.max-line-length = 150000
agent.sources.mySource.channels = defaultChannel
#Describe/configure the interceptor
agent.sources.mySource.interceptors = myInterceptor
agent.sources.mySource.interceptors.myInterceptor.type = myInterceptor$Builder
#Describe/configure the channel
agent.channels.defaultChannel.type = memory
agent.channels.defaultChannel.capacity = 5000
agent.channels.defaultChannel.transactionCapacity = 200
#Describe/configure the sink
agent.sinks.mySink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.mySink.topic = Server3-topic
agent.sinks.mySink.brokerList = broker1:9092, broker2:9092
agent.sinks.mySink.requiredAcks = -1
agent.sinks.mySink.batchSize = 100
agent.sinks.mySink.channel = defaultChannel
Thanks for your help!
I would look at tweaking these configuration parameters, as the issue has to do with the memory channel:
agent.channels.defaultChannel.capacity = 5000
agent.channels.defaultChannel.transactionCapacity = 200
Possibly try doubling them first, then perform the test again; you should see improvements:
agent.channels.defaultChannel.capacity = 10000
agent.channels.defaultChannel.transactionCapacity = 400
It would also be good to observe the JVMs of the Apache Flume instances during your tests.

Get the number of open connections in MongoDB using Java

My program requires a large number of connections to be open (MongoDB). I get the error:
Too many connections open, can't open anymore
after 819 connections. I already know we can increase this limit, but that's not what I have in mind. I'm thinking of closing the MongoClient object and then creating a new one after 800 connections.
My thinking is that with a new MongoClient object, all the connections will be closed, and when I create it again, connections will be opened again up to 800, thus avoiding the error. (Let me know if this approach is totally wrong or won't give the required results.)
For this I need to know the number of currently open connections. Is there any way to get this information using Java?
You can get connection information by using the db.serverStatus() command. It has a connections subdocument which contains the total/available connections information.
For more information:
Documentation of server status
Details of connections block
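Since the question asks for Java but no live server is assumed here, the sketch below (class and method names are mine) only illustrates where the numbers live in the serverStatus result: a connections subdocument with current, available, and totalCreated fields, represented with plain Maps standing in for the driver's BSON document:

```java
import java.util.Map;

public class ConnStats {
    // Extracts the "current" connection count from a serverStatus-shaped document.
    static int currentConnections(Map<String, Object> serverStatus) {
        @SuppressWarnings("unchecked")
        Map<String, Object> conns = (Map<String, Object>) serverStatus.get("connections");
        return (Integer) conns.get("current");
    }

    public static void main(String[] args) {
        // Fabricated example values, shaped like a real serverStatus reply.
        Map<String, Object> status = Map.of(
                "connections", Map.of("current", 12, "available", 807, "totalCreated", 819));
        System.out.println(currentConnections(status)); // prints 12
    }
}
```

With the real Java driver, the document passed in would come from running the serverStatus command against the admin database, as the Scala answer below does.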
Check the number of MongoDB connections using the MongoDB Scala driver:
Create a MongoDB client:
import org.mongodb.scala._
import scala.collection.JavaConverters._
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.util.{Failure, Success, Try}

// To connect directly to the default server localhost on port 27017
val mongodbClient: MongoClient = MongoClient()
// Use a connection string
val mongodbClient: MongoClient = MongoClient("mongodb://localhost")
// or provide custom MongoClientSettings
val settings: MongoClientSettings = MongoClientSettings.builder()
  .applyToClusterSettings(b => b.hosts(List(new ServerAddress("localhost")).asJava))
  .build()
val mongodbClient: MongoClient = MongoClient(settings)
Call getNoOfMongodbConnection by passing the mongodbClient:
val result = getNoOfMongodbConnection(mongodbClient)
Method to get the number of connections (current, available and total):
def getNoOfMongodbConnection(mongodbClient: MongoClient) = {
  val adminDatabase = mongodbClient.getDatabase("admin")
  val serverStatus = adminDatabase.runCommand(Document("serverStatus" -> 1)).toFuture()
  Try {
    Await.result(serverStatus, 10.seconds)
  } match {
    case Success(x) =>
      val connection = x.get("connections")
      logger.info("Number of MongoDB connections: " + connection)
      connection
    case Failure(ex) =>
      logger.error("Got error while getting the number of MongoDB connections", ex)
      None
  }
}

"names.default_domain = world" not being set when looking up tnsnames.ora

I have a requirement for my Java application to connect to an Oracle database (10g or 11g) using the Oracle wallet for the DB credentials.
My issue is that when trying to connect to the database, I get:
java.sql.SQLException: could not resolve the connect identifier "mydb".
sqlnet.ora looks like the following:
AUTOMATIC_IPC = OFF
TRACE_LEVEL_CLIENT = OFF
names.directory_path = (TNSNAMES)
names.default_domain = world
names.default_zone = world
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <wallet location C:....>)
    )
  )
SQLNET.WALLET_OVERRIDE = TRUE
SSL_CLIENT_AUTHENTICATION = FALSE
SSL_VERSION = 0
And my tnsnames.ora entry looks like:
mydb.WORLD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(Host = <HOST>)(Port = <PORT>))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = <DB_SERVICES>)
      (INSTANCE_NAME = mydb)
    )
  )
and my wallet entry is set up as follows:
mkstore -wrl C:\oracle\wallet -createCredential mydb username "password"
The Java application connection string is as follows:
jdbc:oracle:thin:/@mydb
Now, the connection works if the .world suffix is removed from the entry in tnsnames.ora. But from my understanding, that is what names.default_domain = world should append during the tnsnames.ora lookup. Is that correct? If so, why isn't it being applied?
I can connect fine with SQL*Plus from the command prompt using sqlplus /@mydb
Thanks
