I want to put my query in a SQL file, then read the query from the file and do the binding with createStatement.
Doing h.createStatement("SOME LONG QUERY WITH A BUNCH OF JOINS AND WHERES THAT IS HARD TO READ IN JAVA") is not very legible.
What's the best way, other than using File to open and read the file?
Jdbi provides the ClasspathSqlLocator class to read files on the classpath.
For example, this returns the content of the file query.sql which is inside the folder jdbiTest on the classpath:
String query = ClasspathSqlLocator.findSqlOnClasspath("jdbiTest.query");
Link to the documentation: http://jdbi.org/#_classpathsqllocator
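For the binding part, here is a minimal sketch, assuming Jdbi 3, that query.sql contains a named :departmentId parameter, and that it selects a single string column; the JDBC URL and parameter value are placeholders:
import java.util.List;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.core.locator.ClasspathSqlLocator;

public class QueryFromFile {
    public static void main(String[] args) {
        // placeholder JDBC URL; use your real data source
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:test");
        // load jdbiTest/query.sql from the classpath
        String sql = ClasspathSqlLocator.findSqlOnClasspath("jdbiTest.query");
        // bind the (assumed) named parameter :departmentId and run the query
        List<String> names = jdbi.withHandle(handle ->
                handle.createQuery(sql)
                      .bind("departmentId", 42)
                      .mapTo(String.class)
                      .list());
        System.out.println(names);
    }
}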
I created a Hive table after setting the following properties at the Hive command prompt:
SET mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec;
SET hive.exec.compress.output=true;
SET mapreduce.output.fileoutputformat.compress=true;
Create table statement:
create external table dept_comp1(id bigint,code string,name string) LOCATION '/users/JOBDATA/comp' ;
insert overwrite table dept_comp select * from src__1;
Now I go to this location /users/JOBDATA/comp and find a file named 000000_0.deflate
I am not sure whether this is the compressed file; when I download it, it's unreadable. If it is, why does it not have an .lzo extension?
If it is not, where can I find the .lzo file?
Lastly, how can I decompress it using Java?
Thanks
You can use SnappyCodec compression if your intention is to save disk space on HDFS. Some compressed formats, such as .bz2, are splittable, and you enable compression by setting Hive properties like these:
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET mapred.output.compression.type=BLOCK;
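If you just want to read that .deflate output in Java (the .deflate extension suggests the output was written with the default deflate codec rather than LZO), here is a sketch using Hadoop's CompressionCodecFactory, which picks the codec from the file extension. The input path and output file are placeholders, and the Hadoop client libraries are assumed to be on the classpath:
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class DeflateReader {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path input = new Path("/users/JOBDATA/comp/000000_0.deflate"); // placeholder path
        FileSystem fs = input.getFileSystem(conf);
        // the factory resolves the codec (DeflateCodec) from the ".deflate" extension
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(input);
        try (InputStream in = codec.createInputStream(fs.open(input));
             OutputStream out = new FileOutputStream("decompressed.txt")) {
            IOUtils.copyBytes(in, out, conf, false);
        }
    }
}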
I am creating a web application which will allow the upload of shapefiles for use later on in the program. I want to be able to read an uploaded shapefile into memory and extract some information from it without doing any explicit writing to the disk. The framework I am using (the Play Framework) automatically writes a temporary file to the disk when a file is uploaded, but it nicely handles the creation and deletion of said file for me. This file does not have any extension, however, so the traditional means of reading a shapefile via GeoTools, like this
public void readInShpAndDoStuff(File the_upload) throws IOException {
    Map<String, Serializable> map = new HashMap<>();
    map.put("url", the_upload.toURI().toURL());
    DataStore dataStore = DataStoreFinder.getDataStore(map);
}
fails with an exception which states
NAME_OF_TMP_FILE_HERE is not one of the files types that is known to be associated with a shapefile
After looking at the source of GeoTools I see that the file type is checked by looking at the file extension, and since this is a temp file it has none. (Running file FILENAME shows that the OS recognizes this file as a shapefile.)
So at long last my question is: is there a way to read in the shapefile without specifying the URL? Some function or constructor which takes a File object as the argument and doesn't rely on a path? Or is it too much trouble, and I should just save a copy on the disk? The latter option is not preferable, since this will likely be operating on a VM server at some point and I don't want to deal with file-system-specific stuff.
Thanks in advance for any help!
I can't see how this is going to work for you: a shapefile (despite its name) is a group of 3 (or more) files which share a basename and have the extensions .shp, .shx, .dbf (and usually .prj, .sbn, .sbx, .fix, .qix, etc.).
Is there some way to make Play keep the extension on the temp file name?
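If you can recover the original filename (with its extension) from the upload metadata, one workaround is to copy the temp file to a path that keeps that name. A rough sketch; originalFilename here is assumed to come from your framework's upload metadata, and the sibling files still have to be dealt with:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

Path tmp = the_upload.toPath();                                            // Play's extension-less temp file
Path target = Files.createTempDirectory("shp").resolve(originalFilename);  // e.g. "parcels.shp"
Files.copy(tmp, target, StandardCopyOption.REPLACE_EXISTING);
// GeoTools can now resolve the type from the .shp extension, but it still needs
// the matching .shx and .dbf files copied alongside under the same basename.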
I'm getting an InputStream and file metadata from the client, and saving them in my SQL table. This table also holds the full file path and a unique uid.
I want to be able to pass a uid and get a "handler" to the file, but I can't figure out whether I need to return an OutputStream, an InputStream, or a File.
Which one should be returned?
I want this handler for the client for the following reasons:
The user will pass it to another function
The user will decide to convert stream to a file and copy it to some local path
Also, when returning an OutputStream, is it enough to do the following:
OutputStream out = new FileOutputStream(PATH_TO_MY_FILE);
return out;
Am I returning an empty stream? Does out contain all file data?
I thought maybe the best way would be to return a File:
File f = new File(PATH_TO_MY_FILE);
return f;
Edit:
My metadata holds the file name and file type. When I get the InputStream I save it in my folder and set the path in the SQL table to: folderPath + "/" + fileName + "." + fileType
When the user runs the function get(fileUid), I want to retrieve the full path (using a SQL query) and return the file (handler).
Can you please advise?
Thanks
The user will decide to convert stream to a file and copy it to some local path
This tells us that what you need to give them is an InputStream (or Reader), since they'll be reading from it.
Your code will be reading from your database or whatever, presumably via the InputStream you get back from ResultSet#getBinaryStream or similar. You might give that directly to the caller, or you may prefer to have your code in the middle, perhaps working through a memory buffer.
Re your comment below:
I'm saving the file at some DB folder...
Databases don't have folders; file systems have folders. It sounds like the file isn't stored in your database table, just the path to it. If so, use FileInputStream with the path to get an InputStream for it, which you can return to the caller.
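A sketch of what get(fileUid) might look like; the dataSource field, table name, and column names below are placeholders for whatever your schema actually uses:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public InputStream get(String fileUid) throws SQLException, FileNotFoundException {
    String path;
    try (Connection con = dataSource.getConnection(); // dataSource: placeholder for your connection source
         PreparedStatement ps = con.prepareStatement("select full_path from files where uid = ?")) {
        ps.setString(1, fileUid);
        try (ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                throw new FileNotFoundException("no file stored for uid " + fileUid);
            }
            path = rs.getString("full_path");
        }
    }
    // the caller reads from this stream, or copies it to a local file
    return new FileInputStream(path);
}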
I'm using SymmetricDS for file synchronization between client and server nodes. I want to fetch the file sync target path, which is different for each client node, from my database or from a file.
I researched this and found out that we can use a BeanShell script to change parameters like targetBaseDir, targetFileName, targetRelativeDir, etc. inside before_copy_script or after_copy_script.
Please see http://www.symmetricds.org/doc/3.5/html/configuration.html#filesync-beanshell
I have the targetRelativeDir path for each node in one of my database tables, and I have to fetch it and assign it to the targetRelativeDir parameter using BeanShell.
Please give me some direction on how to accomplish this.
Your BSH will look similar to the following.
String nodeId = engine.getNodeService().findIdentityNodeId();
targetRelativeDir = engine.getSqlTemplate().queryForString(
        "select targetRelativeDir from myTable where target_node = ?", new Object[] { nodeId });
I assume this will work but I have not tested it.
I would like to build a web application with the H2 database engine. However, even after reading this tutorial, I still don't know how to back up the data while the database is running:
http://www.h2database.com/html/tutorial.html#upgrade_backup_restore
Does H2 store its data in files somewhere in the file system? Can I just back up those files?
H2 is stored on the file system, but it would be better to use the backup tools that you reference, because the file format can change between versions of H2. If you upgrade H2, it may no longer be able to read the files it created in a previous version. Also, if you copy the files it uses, I would recommend shutting the database down first; otherwise, the copied files may be unreadable by H2.
The location of the database files depends on the JDBC URL you specify. For example, jdbc:h2:~/test typically stores the database as test.mv.db (or test.h2.db in older versions) in your home directory. See the FAQ:
http://www.h2database.com/html/faq.html
As per the tutorial you linked, it is not recommended to back up the database by copying the files while it is running. Here is the right way to back up the database while it is running (Scala code, but it can easily be converted to Java) (Source):
val connection:java.sql.Connection = ??? // get a database connection
connection.prepareStatement("BACKUP TO 'myFile.zip'").executeUpdate
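A Java equivalent of the same idea; the JDBC URL and credentials below are placeholders, and exception handling is omitted:
import java.sql.Connection;
import java.sql.DriverManager;

// placeholder URL and credentials; BACKUP TO is H2's online backup command
Connection connection = DriverManager.getConnection("jdbc:h2:./data/mydb", "sa", "");
connection.prepareStatement("BACKUP TO 'myFile.zip'").executeUpdate();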
Thanks to Jus12 for the nice answer. I adapted it for JpaRepositories in Spring Data and would like to share it here, as I couldn't find a similar answer online:
@Modifying
@Transactional
@Query(value = "BACKUP TO ?1", nativeQuery = true)
int backupDB(String path);
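Calling it from a service is then a one-liner; the repository name and target path here are placeholders:
int result = myRepository.backupDB("backup/mydb.zip");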
try {
    Class.forName("org.h2.Driver");
    // open the embedded database and run H2's online BACKUP command
    try (Connection con = DriverManager.getConnection("jdbc:h2:./Dbfolder/dbname", "username", "password")) {
        con.prepareStatement("BACKUP TO 'backup.zip'").executeUpdate();
    }
} catch (Exception ex) {
    JOptionPane.showMessageDialog(null, ex.toString());
}