I use the following code to create a DB connection:
public final static String driver = "org.apache.derby.jdbc.ClientDriver";
public final static String connectionURL = "jdbc:derby:projectDB;create=true;user=user1;password=psssword";

private Connection conn;

public CreateConnectionDOA(String driver, String connectionURL) throws ClassNotFoundException, SQLException
{
    Class.forName(driver);
    conn = DriverManager.getConnection(connectionURL);
    conn.setAutoCommit(false);
}
The project was created as a NetBeans Platform Application module. When I run the project through NetBeans Platform 7.4, it works properly. But when I create an installer using NetBeans and run it, the project opens but also throws an exception:
"ERROR 42Y07: Schema 'projectDB' does not exist
Try using the full path to your DB in the URL:
public final static String connectionURL =
"jdbc:derby:d:/myproject/projectDB;create=true;user=user1;password=psssword";
The full path works because your relative path was probably wrong; with a correct relative path it should work too. Keep in mind that the current directory is your project directory when you run from the IDE, but it is different for the installed application; write the relative path accordingly (../dataBase works as expected, if necessary) and it will work.
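If you prefer not to hard-code a drive letter, here is a minimal sketch that resolves an absolute path at runtime instead of relying on the working directory (placing projectDB under user.home is an assumption for illustration):

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;

public class DerbyConnect {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.ClientDriver");
        // Build an absolute path at runtime; the working directory differs
        // between running inside the IDE and running the installed app.
        File dbDir = new File(System.getProperty("user.home"), "projectDB");
        String url = "jdbc:derby:" + dbDir.getAbsolutePath()
                + ";create=true;user=user1;password=psssword";
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.setAutoCommit(false);
            // ... use the connection ...
        }
    }
}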
Related
I use GeoIP2 to determine the country by IP. During development and testing of the code I have no problems, but when I run the compiled archive I get a java.io.FileNotFoundException. I understand that this is because the path to the file is absolute, and inside the archive it changes. Question: how do I change my code so that I can access the file even from inside the archive?
public static String getCountryByIp(String ip) throws Exception {
    File database = new File(URLDecoder.decode(
            GeoUtils.class.getResource("/GeoLite2-Country.mmdb").getFile(), "UTF-8"));
    DatabaseReader dbReader = new DatabaseReader.Builder(database).build();
    InetAddress ipAddress = InetAddress.getByName(ip);
    CountryResponse response = dbReader.country(ipAddress);
    return response.getCountry().getName();
}
The file is packaged inside the archive like this:
test.war/
test.war/WEB-INF/classes
You can try this:
InputStream is = this.getClass().getClassLoader().getResourceAsStream("GeoLite2-Country.mmdb");
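A minimal sketch of the whole method rewritten around the stream (this assumes the GeoIP2 DatabaseReader.Builder overload that accepts an InputStream, and that GeoLite2-Country.mmdb sits at the classpath root):

public static String getCountryByIp(String ip) throws Exception {
    // Load the database from the classpath so it also works inside a jar/war.
    try (InputStream is = GeoUtils.class.getClassLoader()
            .getResourceAsStream("GeoLite2-Country.mmdb")) {
        if (is == null) {
            throw new IllegalStateException("GeoLite2-Country.mmdb not found on classpath");
        }
        DatabaseReader dbReader = new DatabaseReader.Builder(is).build();
        InetAddress ipAddress = InetAddress.getByName(ip);
        CountryResponse response = dbReader.country(ipAddress);
        return response.getCountry().getName();
    }
}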
I have a simple question; somehow I can't see where my problem is.
I've got a CSV file in my C:/Temp folder. I would like to connect to the CSV to read some data (different rows, depending on specific row values, ...).
So I downloaded the csvjdbc-1.0-28.jar file and added it to the build path.
I wrote the code as shown below but always get the error:
"java.sql.SQLException: No suitable driver found for"
I have seen that other people also had problems with this, but I could not figure out the cause of my issue. I know it has something to do with the Connection conn. Do I need some additional JDBC settings, or how can I add the path for the connection?
Thanks in advance!
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.relique.jdbc.csv.CsvDriver;

public class Main_Class {

    public static void main(String[] args) {
        try {
            try {
                Class.forName("org.relique.jdbc.csv.CsvDriver");
                Connection conn = DriverManager
                        .getConnection("c:\\temp\\Spieltage_log.txt");
                Statement stmt = conn.createStatement();
                ResultSet results = stmt
                        .executeQuery("select * from Offensiver_Zweikampf");
                boolean append = true;
                CsvDriver.writeToCsv(results, System.out, append);
                conn.close();
                System.out.println(results);
            } catch (SQLException e) {
                e.printStackTrace();
            }
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        // JFrame fenster = new Main_Menue();
    }
}
According to the example at http://csvjdbc.sourceforge.net/:
// Create a connection. The first command line parameter is
// the directory containing the .csv files.
// A single connection is thread-safe for use by several threads.
Connection conn = DriverManager.getConnection("jdbc:relique:csv:" + directoryName);
In your case it should be -
Properties props = new Properties();
props.put("fileExtension", ".txt");
Connection conn = DriverManager.getConnection("jdbc:relique:csv:C:\\temp", props);
Also, since you've put the content in a .txt file, you need to set a custom fileExtension property to '.txt', as shown above.
Your ResultSet object can then query the file using the syntax below:
ResultSet results = stmt.executeQuery("select * from Spieltage_log");
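Putting the pieces together, a minimal corrected sketch of the whole program (file and table names as in the question) would be:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import org.relique.jdbc.csv.CsvDriver;

public class Main_Class {

    public static void main(String[] args) {
        try {
            Class.forName("org.relique.jdbc.csv.CsvDriver");

            // Tell the driver to treat .txt files as CSV tables.
            Properties props = new Properties();
            props.put("fileExtension", ".txt");

            // The URL names the directory containing the files, not a single file.
            try (Connection conn = DriverManager
                    .getConnection("jdbc:relique:csv:C:\\temp", props);
                 Statement stmt = conn.createStatement();
                 ResultSet results = stmt
                         .executeQuery("select * from Spieltage_log")) {
                CsvDriver.writeToCsv(results, System.out, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}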
The URL string passed to DriverManager.getConnection() needs to start with the driver's JDBC URL prefix:
Connection conn = DriverManager
.getConnection("jdbc:relique:csv:c:\\temp");
Besides, you need to pass the directory containing the CSV file, not the file itself. See the answer from Sachin, who has meanwhile posted detailed instructions.
Sorry, my fault: I had saved the file as an .xlsx file, not a .csv file. It is solved now and it works! Thanks a lot guys, I appreciate your help.
I've tried loading data into Hive from the command line, and it works fine that way. Now I want to load the data through Java. I've written code for this and I'm able to create tables and databases and insert values into them, but the LOAD command is not working.
private static String driverName = "org.apache.hive.jdbc.HiveDriver";
private static String databaseURL = "jdbc:hive2://server_name:10001/test";
private static String userName = "<hadoop_user>";
private static String password = "<password>";
private static Connection con = null;
private final static Logger log =
private static String dbName = "db_name",
        tableName = "table_name",
        path = "";
private void loadData(String path, String tableName) {
    // create statement
    Statement stmt;
    try {
        stmt = con.createStatement();
        String sql = "LOAD DATA LOCAL INPATH 'file:/" + path
                + "' OVERWRITE INTO TABLE " + tableName;
        System.out.println("Executing: " + sql);
        stmt.execute(sql);
        con.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
It gives this issue:
Caused by: org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAccessControlException: Permission denied: Principal [name=hadoop, type=USER] does not have following privileges for operation LOAD [[SELECT, INSERT, DELETE, OBJECT OWNERSHIP] on Object [type=LOCAL_URI, name=file:/D:/DTCC/Pig/Dummy_data_Main.tsv]]
at org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLAuthorizationUtils.assertNoDeniedPermissions(SQLAuthorizationUtils.java:414)
at org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizationValidator.checkPrivileges(SQLStdHiveAuthorizationValidator.java:96)
at org.apache.hadoop.hive.ql.security.authorization.plugin.HiveAuthorizerImpl.checkPrivileges(HiveAuthorizerImpl.java:85)
at org.apache.hadoop.hive.ql.Driver.doAuthorizationV2(Driver.java:725)
at org.apache.hadoop.hive.ql.Driver.doAuthorization(Driver.java:518)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:455)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:303)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1067)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1061)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:100)
... 15 more
What I tried:
1) I gave all permissions to the hadoop user on the HDFS path of the table.
2) I granted all privileges on the table (SELECT, INSERT, DELETE).
Please help me resolve this issue.
Make sure of the following:
If you have Kerberos security set up, don't forget to run kinit.
The user "hadoop" should have write access to the folder (the Hive table location). For any HDFS path whose permissions you are changing, a simple "chmod" will not work; you need to run "hdfs dfs -setfacl -R -m user::rwx <path>".
Also, make sure this table location has the same parent directory as the other tables you are able to create successfully. (Sometimes an admin restricts creating tables in other locations.)
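Once those permissions are in place, a minimal sketch of the full connect-and-load flow in Java looks like this (connection details and the file path mirror the question; the placeholders are yours to fill):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveLoad {

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Connect as the user that has privileges on the table and the local URI.
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://server_name:10001/test", "hadoop", "<password>");
             Statement stmt = con.createStatement()) {
            String sql = "LOAD DATA LOCAL INPATH 'file:/D:/DTCC/Pig/Dummy_data_Main.tsv' "
                    + "OVERWRITE INTO TABLE table_name";
            stmt.execute(sql);
        }
    }
}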
I am new to Spring. I am trying a simple dynamic web application that gets data from a database via Impala and shows it on the front end.
This is the connector class:
private static final String IMPALAD_HOST = "host";
private static final String IMPALAD_JDBC_PORT = "port";
private static final String CONNECTION_URL = "jdbc:hive2://" + IMPALAD_HOST + ':'
        + IMPALAD_JDBC_PORT + "/;auth=noSasl";
private static final String JDBC_DRIVER_NAME = "org.apache.hive.jdbc.HiveDriver";

public Connection getConnection() throws ClassNotFoundException {
    Connection con = null;
    try {
        Class.forName(JDBC_DRIVER_NAME);
        con = DriverManager.getConnection(CONNECTION_URL, "", "");
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return con;
}
The Hive connector jar is added to the Java build path in Eclipse. getConnection() works if I call it from a main method of a Java class, but it throws a Hive driver not found exception if I call it from a JSP page:
java.lang.ClassNotFoundException: org.apache.hive.jdbc.HiveDriver
You do not have hive-jdbc.jar in your web application archive, i.e. the WAR file; it is being missed while packaging the application. You should place it in the WEB-INF/lib directory. Please also ensure that you add it to the deployment assembly of the Eclipse project.
It works when you run the main class because hive-jdbc.jar is configured in the build path; that is a different mechanism from the web application's runtime classpath.
Note: ClassNotFoundException shouldn't be thrown unless you are going to handle it. You should have all the required jars on the classpath in your application package at runtime.
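If you build with Maven, a dependency along these lines would be packaged into WEB-INF/lib automatically (the version here is illustrative; use the one matching your cluster):

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>1.2.1</version>
    <!-- default (compile) scope, so the jar ends up in WEB-INF/lib -->
</dependency>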
You are using the wrong driver class.
Use org.apache.hadoop.hive.jdbc.HiveDriver instead.
I am new to Hadoop, MapR and Pivotal. I have written Java code that writes into Pivotal, but I am facing an issue when writing into MapR.
public class HadoopFileSystemManager {

    private String url;

    public void writeFile(String filePath, String data) throws IOException, URISyntaxException {
        Path fPath = new Path(filePath);
        url = "hdfs://" + ip + ":" + "8020";  // ip is assumed to be defined elsewhere in the class
        FileSystem fs = FileSystem.get(new URI(url), new Configuration());
        System.out.println(fs.getWorkingDirectory());
        FSDataOutputStream writeStream = fs.create(fPath);
        writeStream.writeChars(data);
        writeStream.close();
    }
}
This code works fine with Pivotal but fails with MapR. For MapR I am using port 7222.
I am getting the following error:
"An existing connection was forcibly closed by the remote host"
Please let me know whether I am using the right port, or whether anything in the code needs to change specifically for MapR. I have stopped iptables.
Any info is much appreciated. Thanks.
Try this code. But make sure you have the MapR client set up on the node from which you are running the test.
public class HadoopFileSystemManager {

    private String url;

    public void writeFile(String filePath, String data) throws IOException, URISyntaxException {
        System.setProperty("java.library.path", "/opt/mapr/lib");
        Path fPath = new Path(filePath);
        url = "hdfs://" + ip + ":" + "8020";  // ip is assumed to be defined elsewhere in the class
        FileSystem fs = FileSystem.get(new URI(url), new Configuration());
        System.out.println(fs.getWorkingDirectory());
        FSDataOutputStream writeStream = fs.create(fPath);
        writeStream.writeChars(data);
        writeStream.close();
    }
}
Add the following to the classpath:
/opt/mapr/hadoop/hadoop-0.20.2/conf:/opt/mapr/hadoop/hadoop-0.20.2/lib/hadoop-0.20.2-dev-core.jar:/opt/mapr/hadoop/hadoop-0.20.2/lib/maprfs-0.1.jar:.:/opt/mapr/hadoop/hadoop-0.20.2/lib/commons-logging-1.0.4.jar:/opt/mapr/hadoop/hadoop-0.20.2/lib/zookeeper-3.3.2.jar
The statement System.setProperty("java.library.path", "/opt/mapr/lib") in the code above can be removed; the same setting can be supplied with -Djava.library.path when you run your program from a terminal after building.
/opt/mapr may not be your path to the MapR files. If that's the case, replace the path accordingly wherever applicable.
After comment:
If you are using Maven to build your project, try using the following in the pom.xml, with scope provided. MapR is compatible with the normal Apache Hadoop distribution, so you can build against the ordinary Hadoop artifacts; then, when you run your program, you supply the MapR jars on the classpath.
<dependency>
    <groupId>hadoop</groupId>
    <artifactId>hadoop</artifactId>
    <version>0.20.2</version>
    <scope>provided</scope>
</dependency>