Hibernate database connection error in Maven Dropwizard - java

I have been trying out a Dropwizard project using Maven. I can run hello-world programs. When I tried to run a database program using Hibernate, I got an error like the one below when running the program using java -jar.
default configuration has the following errors:
* database.driverClass may not be null (was null)
* database.url may not be null (was null)
* database.user may not be null (was null)
* template may not be empty (was null)
This is my hello-world.yml file:
template: Hello, %s!
defaultName: Stranger
database:
  driverClass: com.mysql.jdbc.Driver
  user: root
  password:
  url: jdbc:mysql://localhost/test
Thanks in advance!

As shown in several examples throughout the Dropwizard repository, you need to pass your configuration file on the command line so that the database properties are pulled from it.
So have you tried running your application with the .yml file as a command line argument? For example, to set up your database:
java -jar your-artifact.jar db migrate your-configuration-file.yml
Then run your application as follows:
java -jar your-artifact.jar server your-configuration-file.yml
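For context, those validation errors come from Dropwizard validating a default-constructed configuration object: when no YAML file is passed on the command line, fields such as driverClass stay null. A sketch of what the configuration class presumably looks like, modeled on the Dropwizard getting-started example (class and field names are illustrative):
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import io.dropwizard.db.DataSourceFactory;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import org.hibernate.validator.constraints.NotEmpty;

public class HelloWorldConfiguration extends Configuration {

    @NotEmpty                  // source of "template may not be empty (was null)"
    private String template;

    @NotEmpty
    private String defaultName = "Stranger";

    @Valid
    @NotNull                   // the nested fields produce the "database.* may not be null" errors
    private DataSourceFactory database = new DataSourceFactory();

    @JsonProperty
    public String getTemplate() { return template; }

    @JsonProperty
    public void setTemplate(String template) { this.template = template; }

    @JsonProperty("database")
    public DataSourceFactory getDataSourceFactory() { return database; }

    @JsonProperty("database")
    public void setDataSourceFactory(DataSourceFactory factory) { this.database = factory; }
}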

I was having the same error, but a different solution fixed it. You're supposed to pass in your yml file as an argument when you run an io.dropwizard.Application in server mode. For example
public static void main(String[] args) throws Exception {
    new ApiApplication().run(new String[] {"server", "db.yaml"});
}
Adding the "db.yaml" argument to my array of strings fixed it. Or if you're passing your args from the termina/console, you'd want to add "yourconfig.yaml" or "yourconfig.yml" as the 2nd argument.
Source of the solution I found

Related

WebLogic: add a new custom authentication-providers via WLST throws a ClassNotFoundException

I am trying to add a new custom authentication provider with a WLST online-mode script, but I get a ClassNotFoundException even though I can see my provider in the WL console.
This is the situation:
I have a JAR file, it contains a custom WebLogic authentication-provider.
The JAR is copied under the user_projects/domains/$DOMAIN_NAME/lib/ directory.
I can see the custom auth provider in the WL console; it appears in the list under Home > Security Realms > myrealm > Providers > New > Type.
I can add this custom provider by hand via WL Console.
But I need to automate this step, so I have created a WLST script for it. The relevant part of the script is this:
# add a new authentication provider with name of MyCustomAuthProvider
cd('/SecurityConfiguration/' + _domainName + '/Realms/myrealm')
cmo.createAuthenticationProvider('MyCustomAuthProvider', 'aa.bb.cc.MyCustomAuthProvider')
cd('/SecurityConfiguration/' + _domainName + '/Realms/myrealm/AuthenticationProviders/MyCustomAuthProvider')
cmo.setControlFlag('OPTIONAL')
# reorder authentication providers
...
But this WLST throws the following exception:
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: aa.bb.cc.MyCustomAuthProvider
So I double-checked whether WL sees my custom auth provider:
wls:/offline> connect('weblogic', 'weblogic12', 't3://localhost:7001')
cd('/SecurityConfiguration/myDomain/Realms/myrealm')
ls()
The list I got is exactly the same as I expected: my class is on the list. This is the reason why I can add it using the web console.
This is the value of the AuthenticationProviderTypes:
java.lang.String[com.bea.security.saml2.providers.SAML2IdentityAsserter,
aa.bb.cc.MyCustomAuthProvider,
weblogic.security.providers.authentication.ActiveDirectoryAuthenticator,
weblogic.security.providers.authentication.CustomDBMSAuthenticator,
weblogic.security.providers.authentication.DefaultAuthenticator,
weblogic.security.providers.authentication.DefaultIdentityAsserter,
weblogic.security.providers.authentication.IPlanetAuthenticator,
weblogic.security.providers.authentication.LDAPAuthenticator,
weblogic.security.providers.authentication.LDAPX509IdentityAsserter,
weblogic.security.providers.authentication.NegotiateIdentityAsserter,
weblogic.security.providers.authentication.NovellAuthenticator,
weblogic.security.providers.authentication.OpenLDAPAuthenticator,
weblogic.security.providers.authentication.OracleIdentityCloudIntegrator,
weblogic.security.providers.authentication.OracleInternetDirectoryAuthenticator,
weblogic.security.providers.authentication.OracleUnifiedDirectoryAuthenticator,
weblogic.security.providers.authentication.OracleVirtualDirectoryAuthenticator,
weblogic.security.providers.authentication.ReadOnlySQLAuthenticator,
weblogic.security.providers.authentication.SQLAuthenticator,
weblogic.security.providers.authentication.VirtualUserAuthenticator,
weblogic.security.providers.saml.SAMLAuthenticator,
weblogic.security.providers.saml.SAMLIdentityAsserterV2]
Everything looks perfect. So why does WLST throw a ClassNotFoundException while trying to create the provider?
This is crazy.
I have googled this, but I only found the same issues, without a solution:
here
and here
What did I miss?
At some point Oracle changed from using CLASSPATH to WLST_EXT_CLASSPATH to set the classpath for WLST. Oracle doesn't seem to have done a great job of documenting that this is the right environment variable to use, though. I found it by digging through the various shell scripts that wlst.sh calls; this document for 12c refers to it, but it seems to be the only place it's mentioned.
I've tested this using 14.1.1 and a custom provider in the DOMAIN/lib/mbeantypes directory, and it works (i.e. I can use WLST to configure a custom security provider as long as I set WLST_EXT_CLASSPATH first), but I don't have 12c to test whether it works there.
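For example, something like the following before launching the script (the JAR path and script name are illustrative, and the wlst.sh location can vary between installations):
export WLST_EXT_CLASSPATH=/path/to/MyCustomAuthProvider.jar
$ORACLE_HOME/oracle_common/common/bin/wlst.sh add_auth_provider.py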
I added my JAR to the WLST classpath, but this did not help.
I changed the CLASSPATH variable, because wlst.sh executes a java command in the background, so this standard variable should be picked up. It did not work.
I added the -cp JVM param manually to the java command that starts WLST. It did not work.
The only workaround that worked for me is the following:
for WL console: copy the JAR that contains the custom authentication provider under $ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/lib/ directory
for WLST: copy the JAR to $ORACLE_HOME/wlserver/server/lib/mbeantypes/
The second copy solved the ClassNotFoundException thrown by WLST.
If you know a better, more standard way, please let me know.

Start Dropwizard with config.yaml from resources

I have a Dropwizard question. I use Dropwizard with SBT (which works pretty well).
To run my application, I package it with:
$ sbt clean assembly
And then run the application with:
$ java -jar APPLICATION.jar server
The problem is that with this command Dropwizard doesn't load my config file (config.yaml), which is located in the resources.
According to the Dropwizard docs, I always have to pass the config file as a parameter:
$ java -jar APPLICATION.jar server config.yaml
This works fine and loads the application, but is there any possibility to tell Dropwizard to load the config.yaml file directly? The configuration in my config.yaml is static and always the same. Settings like the database, which change from server stage to server stage, are provided as environment variables, which I load with EnvironmentVariableSubstitutor.
Thanks
Use class ResourceConfigurationSourceProvider:
@Override
public void initialize(final Bootstrap<ExampleConfiguration> bootstrap) {
    bootstrap.setConfigurationSourceProvider(new ResourceConfigurationSourceProvider());
    // The rest of initialize...
}
And then invoke the application like:
java -jar APPLICATION.jar server /resource-config.yaml
(note the initial /)
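If you also rely on EnvironmentVariableSubstitutor, as the question mentions, the two providers can be combined; a minimal sketch (all three classes live in io.dropwizard.configuration):
@Override
public void initialize(final Bootstrap<ExampleConfiguration> bootstrap) {
    // Read the config from the classpath, then substitute ${ENV_VAR} placeholders in it
    bootstrap.setConfigurationSourceProvider(
            new SubstitutingSourceProvider(
                    new ResourceConfigurationSourceProvider(),
                    new EnvironmentVariableSubstitutor(false))); // false = don't fail on missing variables
}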
While this answer is very late, I just thought I'd put this here. There is a dirty little hack to make it work so that you don't have to provide config.yaml in your application arguments.
Basically, you can pass a new String[] args to the run() method of the Dropwizard application.
public class ApplicationServer extends Application<Config> {
    public static void main(String[] args) throws Exception {
        String[] appArgs = new String[2];
        appArgs[0] = args[0]; // This will be the usual server argument
        appArgs[1] = "config.yaml";
        new ApplicationServer().run(appArgs);
    }

    @Override
    public void run(Config configuration, Environment environment) {
        // Configure your resources and other application related things
    }
}
I used this little trick to specify which config file I wanted to run with. So instead of specifying config.yaml, I would give my second argument as DEV/UAT/STAGE/PROD and pass on the appropriate config file to the run method.
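A sketch of that environment-name variant (the file-name scheme is illustrative):
public static void main(String[] args) throws Exception {
    // e.g. invoked as: java -jar app.jar server UAT
    String env = args.length > 1 ? args[1] : "DEV";
    String configFile = "config-" + env.toLowerCase() + ".yaml";
    new ApplicationServer().run(new String[] {args[0], configFile});
}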
Also interesting to have a look at:
earlye/dropwizard-multi-config
<dependency>
    <groupId>com.thenewentity</groupId>
    <artifactId>dropwizard-multi-config</artifactId>
    <version>{version}</version>
</dependency>
It allows overriding and merging multiple config files passed on the java command line, like:
java -jar sample.jar server -- sample.yaml override.yaml
Here you pass (1) sample.yaml as the primary configuration (e.g. containing default values) and (2) override.yaml as the override. The effective config results from merging both in order of appearance: the defaults from (1) are merged with and overridden by (2).

Running tests on travis using mysql

I've been trying for the last week or so to make integration tests work on Travis for a school project. I've debugged a fair bit of the project, but now I'm blocked and need external help.
To give a bit of context: so far, I've debugged the Java project so that the tests can be launched from Eclipse or from Maven on the command line. I've worked on the Travis file so that a database is created, the database scripts run, and the Java tests launch. However, the tests fail on Travis because of a "table missing" error in the database.
This is a link to our repo.
This is the .travis.yml file:
language: java
jdk:
  - oraclejdk8
services:
  - mysql
before_script:
  - mysql -e 'DROP DATABASE IF EXISTS koalatest'
  - mysql -e 'CREATE DATABASE IF NOT EXISTS koalatest;'
  - mysql -u root --default-character-set=utf8 koalatest < backend/koalacal-backend/koalacal.sql
script: cd backend && cd koalacal-backend && mvn test -X
after_success:
  - bash <(curl -s https://codecov.io/bash)
The java project that is being built and run by maven is located under rootfolder -> backend -> koalacal-backend.
Here is a link to the error log maven produces on travis.
This line seems to be the source of the error:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'koalatest.Calendar' doesn't exist
I have two hypotheses:
1. The SQL script that creates all the tables is not being run properly by Travis.
To test this hypothesis, I changed the name of the script called by Travis. As expected, I got an error saying that Travis can't find the file. So at least I know that this line of code causes Travis to look up an SQL file:
- mysql -u root --default-character-set=utf8 koalatest < backend/koalacal-backend/koalacal.sql
That being said, I have no idea whether the file is run properly against the database.
For the sake of putting all relevant information in this post, here is a link to the database script.
2. The tests can't connect properly to the database.
Here is the config file that contains the info about which database to connect to:
TestInstance=true
user=root
password=
serverName=localhost
databaseName=koalacal
portNumber=3306
testUser=root
testPassword=
testServerName=127.0.0.1
testDatabaseName=koalatest
testPortNumber=3306
If the parameter TestInstance is set to true, the tests use testUser, testPassword, testServerName, testDatabaseName and testPortNumber to connect to the relevant database.
I believe the connection information currently in the config file matches how the Travis documentation says we need to connect to a MySQL database. I tried to change testUser to something invalid (like root3) and got error messages as expected.
Maybe somehow the tests can't connect to the database and don't produce a related error message, but I doubt it.
Can someone look at my problem and see if I've missed something obvious (or not)? I don't know what else to try, and I don't want to be blocked for one more week on a technical issue.
For anyone who googles travis mysql and has a similar error to the one I had: I solved my problem.
The error was caused by a case-sensitivity issue. The Java code tried to connect to tables like 'Calendar' and 'Event' while the SQL script created the tables 'calendar' and 'event'.
It took a long time to troubleshoot this because the case difference didn't pose any problem on my machine; Maven ran the tests locally without any issue. It's only on the Travis servers that the case sensitivity of the table names started to matter: on Linux, MySQL table names are case-sensitive by default, whereas on Windows and macOS they are not.
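For anyone checking their own setup, the behaviour is governed by MySQL's lower_case_table_names variable (0, the Linux default, means table names are case-sensitive on disk):
mysql -e "SHOW VARIABLES LIKE 'lower_case_table_names';"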

How to use Sqoop in Java Program?

I know how to use Sqoop through the command line.
But I don't know how to call a Sqoop command from a Java program.
Can anyone give some example code?
You can run Sqoop from inside your Java code by including the sqoop jar in your classpath and calling the Sqoop.runTool() method. You have to create the required parameters to Sqoop programmatically as if they were on the command line (e.g. --connect etc.).
Please pay attention to the following:
Make sure that the sqoop tool name (e.g. import/export etc.) is the first parameter.
Pay attention to classpath ordering - The execution might fail because sqoop requires version X of a library and you use a different version. Ensure that the libraries that sqoop requires are not overshadowed by your own dependencies. I've encountered such a problem with commons-io (sqoop requires v1.4) and had a NoSuchMethod exception since I was using commons-io v1.2.
Each argument needs to be on a separate array element. For example, "--connect jdbc:mysql:..." should be passed as two separate elements in the array, not one.
The Sqoop parser knows how to accept double-quoted parameters, so use double quotes if you need to (I suggest always). The only exception is the --fields-terminated-by parameter, which expects a single char, so don't double-quote it.
I'd suggest splitting the command-line-arguments creation logic and the actual execution so your logic can be tested properly without actually running the tool.
It would be better to use the --hadoop-home parameter, in order to prevent dependency on the environment.
The advantage of Sqoop.runTool() as opposed to Sqoop.main() is the fact that runTool() returns the exit code of the execution.
Hope that helps.
final int ret = Sqoop.runTool(new String[] { ... });
if (ret != 0) {
    throw new RuntimeException("Sqoop failed - return code " + Integer.toString(ret));
}
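To illustrate the points above, the argument array might be filled in like this (a sketch; the connection details are illustrative):
final int ret = Sqoop.runTool(new String[] {
        "import",                                        // the tool name must be the first element
        "--connect", "jdbc:mysql://localhost:3306/test", // each flag and its value are separate elements
        "--username", "root",
        "--table", "user_logs"
});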
RL
Find below sample code for using Sqoop in a Java program to import data from MySQL to HDFS/HBase. Make sure you have the sqoop jar in your classpath:
SqoopOptions options = new SqoopOptions();
options.setConnectString("jdbc:mysql://HOSTNAME:PORT/DATABASE_NAME");
//options.setTableName("TABLE_NAME");
//options.setWhereClause("id>10"); // this where clause works when importing whole table, ie when setTableName() is used
options.setUsername("USERNAME");
options.setPassword("PASSWORD");
//options.setDirectMode(true); // Make sure the direct mode is off when importing data to HBase
options.setNumMappers(8); // Default value is 4
options.setSqlQuery("SELECT * FROM user_logs WHERE $CONDITIONS limit 10");
options.setSplitByCol("log_id");
// HBase options
options.setHBaseTable("HBASE_TABLE_NAME");
options.setHBaseColFamily("colFamily");
options.setCreateHBaseTable(true); // Create HBase table, if it does not exist
options.setHBaseRowKeyColumn("log_id");
int ret = new ImportTool().run(options);
As suggested by Harel, we can use the output of the run() method for error handling. Hoping this helps.
There is a trick which worked out pretty well for me: via SSH, you can execute the Sqoop command directly. All you have to use is an SSH Java library.
Your Java code stays independent of Sqoop itself; you just need to include any SSH library and have Sqoop installed on the remote system you want to perform the import on. You then connect to the system via SSH and execute the commands which export data from MySQL to Hive.
You have to follow these steps:
Download the sshxcute Java library: https://code.google.com/p/sshxcute/
and add it to the build path of your Java project, which contains the following Java code:
import net.neoremind.sshxcute.core.SSHExec;
import net.neoremind.sshxcute.core.ConnBean;
import net.neoremind.sshxcute.task.CustomTask;
import net.neoremind.sshxcute.task.impl.ExecCommand;

public class TestSSH {
    public static void main(String args[]) throws Exception {
        // Initialize a ConnBean object; the parameter list is IP, username, password
        ConnBean cb = new ConnBean("192.168.56.102", "root", "hadoop");
        // Pass the ConnBean instance to the static method getInstance(ConnBean) to retrieve a singleton SSHExec instance
        SSHExec ssh = SSHExec.getInstance(cb);
        // Connect to the server
        ssh.connect();
        CustomTask sampleTask1 = new ExecCommand("echo $SSH_CLIENT"); // Print the client IP from which you connected to the SSH server on the Hortonworks Sandbox
        System.out.println(ssh.exec(sampleTask1));
        CustomTask sampleTask2 = new ExecCommand("sqoop import --connect jdbc:mysql://192.168.56.101:3316/mysql_db_name --username=mysql_user --password=mysql_pwd --table mysql_table_name --hive-import -m 1 -- --schema default");
        ssh.exec(sampleTask2);
        ssh.disconnect();
    }
}
If you know the location of the executable and the command line arguments, you can use a ProcessBuilder; this can then be run as a separate Process that Java can monitor for completion and return code.
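A minimal sketch of that approach, assuming sqoop is installed on the local machine (the path and arguments are illustrative):
import java.io.IOException;

public class SqoopProcessRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "/usr/bin/sqoop", "import",
                "--connect", "jdbc:mysql://localhost:3306/test",
                "--username", "root",
                "--table", "user_logs");
        pb.inheritIO();                   // forward sqoop's stdout/stderr to this process
        Process process = pb.start();
        int exitCode = process.waitFor(); // block until sqoop completes
        if (exitCode != 0) {
            throw new RuntimeException("Sqoop failed - exit code " + exitCode);
        }
    }
}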
Please follow the code given by Vikas; it worked for me. Include these jar files in the classpath and import these packages:
import com.cloudera.sqoop.SqoopOptions;
import com.cloudera.sqoop.tool.ImportTool;
Referenced libraries:
sqoop-1.4.4.jar  /sqoop
ojdbc6.jar  /sqoop/lib (for Oracle)
commons-logging-1.1.1.jar  hadoop/lib
hadoop-core-1.2.1.jar  /hadoop
commons-cli-1.2.jar  hadoop/lib
commons-io-2.1.jar  hadoop/lib
commons-configuration-1.6.jar  hadoop/lib
commons-lang-2.4.jar  hadoop/lib
jackson-core-asl-1.8.8.jar  hadoop/lib
jackson-mapper-asl-1.8.8.jar  hadoop/lib
commons-httpclient-3.0.1.jar  hadoop/lib
JRE system library:
1. resources.jar  jdk/jre/lib
2. rt.jar  jdk/jre/lib
3. jsse.jar  jdk/jre/lib
4. jce.jar  jdk/jre/lib
5. charsets.jar  jdk/jre/lib
6. jfr.jar  jdk/jre/lib
7. dnsns.jar  jdk/jre/lib/ext
8. sunec.jar  jdk/jre/lib/ext
9. zipfs.jar  jdk/jre/lib/ext
10. sunpkcs11.jar  jdk/jre/lib/ext
11. localedata.jar  jdk/jre/lib/ext
12. sunjce_provider.jar  jdk/jre/lib/ext
Sometimes you get an error if your Eclipse project is using JDK 1.6 and the libraries you add are built with JDK 1.7; in this case, configure the JRE when creating the project in Eclipse.
Vikas, if I want to put the imported files into Hive, should I use options.parameter("--hive-import")?

Why is DB2 Type 4 JDBC Driver looking for native library db2jcct2?

I thought the Type 4 JDBC driver was pure Java and wouldn't require native libraries.
When I put db2jcc4.jar in the WEB-INF/lib directory of my Tomcat app, packaged as a .war file, I get the following error when attempting to use the app:
Got SQLException: com.ibm.db2.jcc.am.SqlException: [jcc][10389][12245][4.12.55] Failure in loading native library db2jcct2, java.lang.UnsatisfiedLinkError
The relevant application code is as follows; the exception is thrown by the last line in the listing:
import com.ibm.db2.jcc.DB2SimpleDataSource;
// ...
DB2SimpleDataSource main_db2_data_source = new DB2SimpleDataSource();
main_db2_data_source.setUser(main_database_user);
main_db2_data_source.setPassword(main_database_password);
main_db2_data_source.setServerName(main_database_host);
try {
    Integer main_database_port_integer = Integer.parseInt(main_database_port);
    main_db2_data_source.setPortNumber(main_database_port_integer);
} catch (NumberFormatException exception) {
    throw new WebException("...");
}
Connection main_connection = null;
try {
    main_connection = main_db2_data_source.getConnection();
I suspect the problem is that you haven't told it to use the type 4 driver - the same jar file contains both type 4 and type 2 drivers, I believe.
Try:
main_db2_data_source.setDriverType(4);
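For context, db2jcc4.jar contains both the type 2 and type 4 implementations, and without an explicit driver type the data source may attempt the native (type 2) path. A minimal sketch of forcing the pure-Java path (server details are illustrative):
DB2SimpleDataSource ds = new DB2SimpleDataSource();
ds.setDriverType(4);                      // pure-Java (type 4); no db2jcct2 native library needed
ds.setServerName("db2host.example.com");
ds.setPortNumber(50000);
ds.setDatabaseName("SAMPLE");             // type 4 connections also require the database name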
The DB2 driver needs another jar that includes the license.
This license controls the connection type. If you are going to use DB2 Connect to connect to a mainframe or an iSeries, you should use the corresponding license. If you are going to connect to a Linux, UNIX or Windows server, the license is included when you get the Data Server Client for JDBC.
Also try this:
Go to Configure Build Path --> Libraries
--> JRE System Libraries
--> Native Library Location: set this to %DB2HOME%/BIN
(which is where db2jcct2.dll is saved)
Recently I faced this issue when connecting to DB2 from a GlassFish server. I followed the steps below and resolved it.
Step 1) I checked the DB2 details in the domain.xml file. There I saw only username, password, databaseName, serverName and portNumber, but no driverType, i.e. nothing saying whether the driver type is 2 or 4.
Step 2) To add the driver type, I logged into the GlassFish server admin console:
Resources --> JDBC --> Connection Pools --> our pool name --> add extra property
Here I added drivertype as 4.
That solved my problem.
Thanks,
Ramaiah Pillala.
