List attached devices on Ubuntu in Java

I'm a little stumped. I'm currently trying to list all of the attached devices on my system in Linux through a small Java app (similar to GParted) I'm working on. My end goal is to get the path to each device so I can format it in my application and perform other actions such as labelling, partitioning, etc.
I currently have the following code returning the "system root", which on Windows gets the appropriate drives (e.g. "C:/ D:/ ...") but on Linux returns "/", since that is its technical root. I was hoping to get the paths to the devices (e.g. "/dev/sda /dev/sdb ...") in an array.
What I'm using now
import java.io.File;

class ListAttachedDevices {
    public static void main(String[] args) {
        File[] paths = File.listRoots();
        for (File path : paths) {
            System.out.println(path);
        }
    }
}
Any help or guidance would be much appreciated; I'm relatively new to SO, and I hope this is enough information to cover everything.
Thank you in advance for any help/criticism!
EDIT:
Using part of Phillip's suggestion, I have updated my code to the following. The only problem I am having now is detecting whether the selected file is related to the Linux install (not safe to perform actions on) or an attached drive (safe to perform actions on).
import java.io.File;
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.FileSystems;
import java.util.ArrayList;
import javax.swing.filechooser.FileSystemView;

class ListAttachedDevices {
    public static void main(String[] args) throws IOException {
        ArrayList<File> dev = new ArrayList<File>();
        for (FileStore store : FileSystems.getDefault().getFileStores()) {
            String text = store.toString();
            int position = text.indexOf("(");
            if (text.substring(position, position + 5).equals("(/dev")) {
                if (text.substring(position, position + 7).equals("(/dev/s")) {
                    String drivePath = text.substring(position + 1, text.length() - 1);
                    File drive = new File(drivePath);
                    dev.add(drive);
                    FileSystemView fsv = FileSystemView.getFileSystemView();
                    System.out.println("is (" + drive.getAbsolutePath() + ") root: "
                            + fsv.isFileSystemRoot(drive));
                }
            }
        }
    }
}
EDIT 2:
Disregard the previous edit; I did not realize it fails to detect drives that are not already formatted.

Following Elliott Frisch's suggestion to use /proc/partitions, I've come up with the following answer. (Be warned: this also lists bootable/system drives.)
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;

class ListAttachedDevices {
    public static void main(String[] args) throws IOException {
        ArrayList<File> drives = new ArrayList<File>();
        BufferedReader br = new BufferedReader(new FileReader("/proc/partitions"));
        try {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.contains("sd")) {
                    int position = line.indexOf("sd");
                    String drivePath = "/dev/" + line.substring(position);
                    File drive = new File(drivePath);
                    drives.add(drive);
                    System.out.println(drive.getAbsolutePath());
                }
            }
        } catch (IOException e) {
            Logger.getLogger(ListAttachedDevices.class.getName()).log(Level.SEVERE, null, e);
        } finally {
            br.close();
        }
    }
}
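One possible refinement (a sketch, not tested against real hardware): /proc/partitions lists both whole disks (sda) and their partitions (sda1), so a regex that rejects trailing digits keeps only the disks. The parser below works on the file's text rather than the file itself, so it can be exercised with made-up sample data; the regex only covers sdX-style names and would need extending for nvme/mmcblk devices.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

class DiskFilter {
    // Whole SCSI/SATA disks look like "sd" + letters (sda, sdb);
    // partitions carry a trailing digit (sda1) and are rejected.
    private static final Pattern WHOLE_DISK = Pattern.compile("sd[a-z]+");

    /** Extracts "/dev/..." paths for whole disks from /proc/partitions text. */
    static List<String> wholeDisks(String procPartitions) {
        List<String> disks = new ArrayList<>();
        for (String line : procPartitions.split("\n")) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length < 4) continue;   // skip the header and blank lines
            String name = cols[3];           // the last column is the device name
            if (WHOLE_DISK.matcher(name).matches()) {
                disks.add("/dev/" + name);
            }
        }
        return disks;
    }

    public static void main(String[] args) {
        String sample = "major minor  #blocks  name\n\n"
                + "   8        0  488386584 sda\n"
                + "   8        1     524288 sda1\n"
                + "   8       16    7864320 sdb\n";
        System.out.println(wholeDisks(sample));  // [/dev/sda, /dev/sdb]
    }
}
```

On the open question of system vs. attached drives: one common heuristic is to check whether /sys/block/&lt;name&gt;/removable contains "1", though that flags removability rather than "safe to format".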

Related

Paths.get doesn't find file on Windows 10 (Java)

I'm following a course on Udemy. I'm learning about Paths, but I can't get Paths.get to work.
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Main {
    public static void main(String[] args) {
        Path filePath = Paths.get("C:\\OutThere.txt");
        printFile(filePath);
    }

    private static void printFile(Path path) {
        try (BufferedReader fileReader = Files.newBufferedReader(path)) {
            String line;
            while ((line = fileReader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
            e.printStackTrace();
        }
    }
}
The file exists, the name is correct, and it's on the C drive. What am I doing wrong?
java.nio.file.NoSuchFileException: C:\OutThere.txt
at com.bennydelathouwer.Main.main(Main.java:16)
It's bad practice to hard-code "/" or "\"; use:
File.separator
Also: are you sure you have the proper privileges to read this file?
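For what it's worth, Paths.get also accepts the path in separate components, which sidesteps separator questions entirely. A minimal sketch (the file name is just an example):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {
    public static void main(String[] args) {
        // Passing name components separately lets NIO insert the platform's
        // separator, instead of hard-coding "\\" or "/" in a string literal.
        Path p = Paths.get("C:", "OutThere.txt");
        System.out.println(p);
    }
}
```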

How to use multiple CSV files in mapreduce

First, I will explain what I am trying to do. I pass one CSV file to the MapReduce job as input, and I read a second CSV file inside the mapper class. The problem is that the code in the mapper class (shown below) does not work properly. I want to combine the two CSV files so I can use several columns from each.
For example, file 1 has BibNum (user account), checkoutdatetime (book checkout date/time), and itemtype, and file 2 has BibNum, Title, Itemtype, and so on. I want to find out which books are likely to be borrowed in the coming month. I would really appreciate any help on how to combine the two CSV files. If you have any doubts about my code, just let me know and I will try to clarify it.
Path p = new Path("hdfs://0.0.0.0:8020/user/training/Inventory_Sample");
FileSystem fs = FileSystem.get(conf);
BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(p)));
try {
    String BibNum = "Test";
    //System.out.print("test");
    while (br.readLine() != null) {
        //System.out.print("test");
        if (!br.readLine().startsWith("BibNumber")) {
            String subject[] = br.readLine().split(",");
            BibNum = subject[0];
        }
    }
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Date;
import java.util.HashMap;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class StubMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text outkey = new Text();
    //private MinMaxCountTuple outTuple = new MinMaxCountTuple();
    //String csvFile = "hdfs://user/training/Inventory_Sample";

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        //conf.addResource("/etc/hadoop/conf/core-site.xml");
        //conf.addResource("/etc/hadoop/conf/hdfs-site.xml");
        Path p = new Path("hdfs://0.0.0.0:8020/user/training/Inventory_Sample");
        FileSystem fs = FileSystem.get(conf);
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(p)));
        try {
            String BibNum = "Test";
            while (br.readLine() != null) {
                if (!br.readLine().startsWith("BibNumber")) {
                    String subject[] = br.readLine().split(",");
                    BibNum = subject[0];
                }
            }
            if (value.toString().startsWith("BibNumber")) {
                return;
            }
            String data[] = value.toString().split(",");
            String BookType = data[2];
            String DateTime = data[5];
            SimpleDateFormat frmt = new SimpleDateFormat("MM/dd/yyyy hh:mm:ss a");
            Date creationDate = frmt.parse(DateTime);
            frmt.applyPattern("dd-MM-yyyy");
            String dateTime = frmt.format(creationDate);
            //outkey.set(BookType + " " + dateTime);
            outkey.set(BibNum + " " + BookType + " " + dateTime);
            context.write(outkey, new IntWritable(1));
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            br.close();
        }
    }
}
You are reading a CSV file directly in the mapper code. If you open a file by path inside the mapper, that file has to be available on every node where a map task runs; normally you would use the Distributed Cache so the file is shipped with the job to each node.
There is a way to combine the files, but not in the mapper. You can try the following approach:
1) Write two separate mappers for the two different files.
2) Send only the required fields from each mapper to the reducer.
3) Combine the results in the reducer (joining on the specific key you want).
You can check out MultipleInputs examples for more.
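The reduce-side join described above can be illustrated outside Hadoop. This is a toy sketch in plain Java, not MapReduce API code: each "mapper" tags its emitted value with the source file, the "shuffle" groups values by BibNumber, and the "reducer" then sees both sides of the join together. The sample records are invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

class JoinSketch {
    /** Groups tagged (key, value) pairs by key, mimicking the shuffle phase. */
    static Map<String, List<String>> shuffle(List<String[]> tagged) {
        Map<String, List<String>> grouped = new TreeMap<>();
        for (String[] kv : tagged) {
            grouped.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<String[]> tagged = new ArrayList<>();
        // "Mapper 1" over the checkout file: emit (BibNum, "CHK:" + itemtype)
        tagged.add(new String[]{"123", "CHK:acbk"});
        // "Mapper 2" over the inventory file: emit (BibNum, "INV:" + title)
        tagged.add(new String[]{"123", "INV:Some Title"});
        // "Reducer": all values for one BibNum arrive together and can be joined.
        for (Map.Entry<String, List<String>> e : shuffle(tagged).entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```

In real Hadoop code, the tagging would happen in two Mapper classes wired up via MultipleInputs, and the grouping is done for you by the framework.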

BioNLP stanford - tokenization

I'm trying to tokenize a biomedical text, so I decided to use http://nlp.stanford.edu/software/eventparser.shtml. I used the stand-alone program RunBioNLPTokenizer, which does what I want.
Now I want to create my own program that uses the Stanford libraries, so I read the code of RunBioNLPTokenizer, shown below.
package edu.stanford.nlp.ie.machinereading.domains.bionlp;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ie.machinereading.GenericDataSetReader;
import edu.stanford.nlp.ie.machinereading.msteventextractor.DataSet;
import edu.stanford.nlp.ie.machinereading.msteventextractor.EpigeneticsDataSet;
import edu.stanford.nlp.ie.machinereading.msteventextractor.GENIA11DataSet;
import edu.stanford.nlp.ie.machinereading.msteventextractor.InfectiousDiseasesDataSet;
import edu.stanford.nlp.io.IOUtils;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.util.StringUtils;

/**
 * Standalone program to run our BioNLP tokenizer and save its output
 */
public class RunBioNLPTokenizer extends GenericDataSetReader {

    public static void main(String[] args) throws IOException {
        Properties props = StringUtils.argsToProperties(args);
        String basePath = props.getProperty("base.directory", "/u/nlp/data/bioNLP/2011/originals/");

        DataSet dataset = new GENIA11DataSet();
        dataset.getFilesystemInformation().setTokenizer("stanford");
        runTokenizerForDirectory(dataset, basePath + "genia/training");
        runTokenizerForDirectory(dataset, basePath + "genia/development");
        runTokenizerForDirectory(dataset, basePath + "genia/testing");

        dataset = new EpigeneticsDataSet();
        dataset.getFilesystemInformation().setTokenizer("stanford");
        runTokenizerForDirectory(dataset, basePath + "epi/training");
        runTokenizerForDirectory(dataset, basePath + "epi/development");
        runTokenizerForDirectory(dataset, basePath + "epi/testing");

        dataset = new InfectiousDiseasesDataSet();
        dataset.getFilesystemInformation().setTokenizer("stanford");
        runTokenizerForDirectory(dataset, basePath + "infect/training");
        runTokenizerForDirectory(dataset, basePath + "infect/development");
        runTokenizerForDirectory(dataset, basePath + "infect/testing");
    }

    private static void runTokenizerForDirectory(DataSet dataset, String path) throws IOException {
        System.out.println("Input directory: " + path);
        BioNLPFormatReader reader = new BioNLPFormatReader();
        for (File rawFile : reader.getRawFiles(path)) {
            System.out.println("Input filename: " + rawFile.getName());
            String rawText = IOUtils.slurpFile(rawFile);
            String docId = rawFile.getName().replace("." + BioNLPFormatReader.TEXT_EXTENSION, "");
            String parentPath = rawFile.getParent();
            runTokenizer(dataset.getFilesystemInformation().getTokenizedFilename(parentPath, docId), rawText);
        }
    }

    private static void runTokenizer(String tokenizedFilename, String text) {
        System.out.println("Tokenized filename: " + tokenizedFilename);
        Collection<String> sentences = BioNLPFormatReader.splitSentences(text);
        PrintStream os = null;
        try {
            os = new PrintStream(new FileOutputStream(tokenizedFilename));
        } catch (IOException e) {
            System.err.println("ERROR: cannot save online tokenization to " + tokenizedFilename);
            e.printStackTrace();
            System.exit(1);
        }
        for (String sentence : sentences) {
            BioNLPFormatReader.BioNLPTokenizer tokenizer = new BioNLPFormatReader.BioNLPTokenizer(sentence);
            List<CoreLabel> tokens = tokenizer.tokenize();
            for (CoreLabel l : tokens) {
                os.print(l.word() + " ");
            }
            os.println();
        }
        os.close();
    }
}
I wrote the code below. I managed to split the text into sentences, but I can't use BioNLPTokenizer the way it is used in RunBioNLPTokenizer.
public static void main(String[] args) throws Exception {
    Collection<String> c = BioNLPFormatReader.splitSentences("..");
    for (String sentence : c) {
        System.out.println(sentence);
        BioNLPFormatReader.BioNLPTokenizer x = BioNLPFormatReader.BioNLPTokenizer(sentence);
    }
}
I got this error:
Exception in thread "main" java.lang.RuntimeException: Uncompilable source code - edu.stanford.nlp.ie.machinereading.domains.bionlp.BioNLPFormatReader.BioNLPTokenizer has protected access in edu.stanford.nlp.ie.machinereading.domains.bionlp.BioNLPFormatReader
My question is: how can I tokenize a biomedical sentence with the Stanford libraries without using RunBioNLPTokenizer?
Unfortunately, we made BioNLPTokenizer a protected inner class, so you'd need to edit the source and change its access to public.
Note that BioNLPTokenizer may not be the most general-purpose biomedical sentence tokenizer -- I would spot-check the output to make sure it is reasonable. We developed it heavily against the BioNLP 2009/2011 shared tasks.

Trying to copy files in specified path with specified extension and replace them with new extension

I have most of it down, but when I try to make the copy, no copy is made.
It finds the files in the specified directory like it is supposed to, and I think the copy function executes, but there aren't any new files in the specified directory. Any help is appreciated. I made a printf function that isn't shown here. Thanks!
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Scanner;
import org.apache.commons.io.FilenameUtils;
import static java.nio.file.StandardCopyOption.*;

public class Stuff {
    static String path, oldExtn, newExtn;
    static Boolean delOrig = false;

    private static void getPathStuff() {
        printf("Please enter the desired path\n");
        Scanner in = new Scanner(System.in);
        path = in.next();
        printf("Now enter the file extension to replace\n");
        oldExtn = in.next();
        printf("Now enter the file extension to replace with\n");
        newExtn = in.next();
        in.close();
    }

    public static void main(String[] args) {
        getPathStuff();
        File folder = new File(path);
        printf("folder = %s\n", folder.getPath());
        for (final File fileEntry : folder.listFiles()) {
            if (fileEntry.getName().endsWith(oldExtn)) {
                printf(fileEntry.getName() + "\n");
                File newFile = new File(FilenameUtils.getBaseName(fileEntry.getName() + newExtn));
                try {
                    printf("fileEntry = %s\n", fileEntry.toPath().toString());
                    Files.copy(fileEntry.toPath(), newFile.toPath(), REPLACE_EXISTING);
                } catch (IOException e) {
                    System.err.printf("Exception");
                }
            }
        }
    }
}
The problem is that the new file is created without a full path (only a file name), so your new file is created, just not where you expect it.
You can see that it'll work if you replace:
File newFile = new File(FilenameUtils.getBaseName(fileEntry.getName() + newExtn));
with:
File newFile = new File(fileEntry.getAbsolutePath()
        .substring(0, fileEntry.getAbsolutePath().lastIndexOf(".") + 1) + newExtn);
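An alternative sketch using NIO throughout (the helper name withExtension is mine, not from the question): Path.resolveSibling builds the target next to the source file, which keeps the directory intact without the substring arithmetic.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class ChangeExtensionDemo {
    /** Returns a path in the same directory as file, with the extension swapped. */
    static Path withExtension(Path file, String newExtn) {
        String name = file.getFileName().toString();
        int dot = name.lastIndexOf('.');
        String base = (dot == -1) ? name : name.substring(0, dot);
        // resolveSibling keeps the parent directory of the original file
        return file.resolveSibling(base + "." + newExtn);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("demo", ".txt");  // stand-in for fileEntry
        Path dst = withExtension(src, "bak");
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        System.out.println(Files.exists(dst));  // true
    }
}
```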

Read M3U(playlist) file in java

Does anyone know of a Java library that would allow me to read M3U files to get the file name and its absolute path as an array ... ?
Clarification: I would like my Java program to be able to parse a Winamp playlist file (.m3u) to get the name + path of the MP3 files in the playlist.
A quick google search yields Lizzy, which seems to do what you want.
Try my Java M3U parser. Usage:
try {
    M3U_Parser mpt = new M3U_Parser();
    M3UHolder m3hodler = mpt.parseFile(new File("12397709.m3u"));
    for (int n = 0; n < m3hodler.getSize(); n++) {
        System.out.println(m3hodler.getName(n));
        System.out.println(m3hodler.getUrl(n));
    }
} catch (Exception e) {
    e.printStackTrace();
}
The project is posted here
M3U is a regular text file that can be read line by line. Just remember that lines starting with # are comments. Here is a class I made for this very purpose. It assumes you only want files on the computer; if you want files on websites, you will have to make some minor edits.
/*
 * Written by Edward John Sheehan III
 * Used with Permission
 */
package AppPackage;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

/**
 * @author Edward John Sheehan III
 */
public class PlayList {

    List<String> mp3;
    int next;

    public PlayList(File f) {
        mp3 = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader(f))) {
            String line;
            while ((line = br.readLine()) != null) {
                addMP3(line);
            }
        } catch (IOException ex) {
            System.out.println("error in reading the file");
        }
        next = -1;
    }

    private void addMP3(String line) {
        if (line == null) return;
        if (!Character.isUpperCase(line.charAt(0))) return;       // expect a drive letter
        if (line.indexOf(":\\") != 1) return;                     // e.g. "C:\..."
        if (line.indexOf(".mp3", line.length() - 4) == -1) return; // keep only .mp3 entries
        mp3.add(line);
    }

    public String getNext() {
        next++;
        if (mp3.size() <= next) next = 0;
        return mp3.get(next);
    }
}
Then just call:
PlayList playlist = new PlayList(file);
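If the playlist is in extended M3U format, the track's display name lives on the #EXTINF line just before the path, so name and path can be collected together. A sketch assuming the common "#EXTINF:&lt;seconds&gt;,&lt;title&gt;" layout (the class name and sample playlist are mine):

```java
import java.util.ArrayList;
import java.util.List;

class ExtM3uParser {
    /** Returns {title, path} pairs parsed from extended-M3U text. */
    static List<String[]> entries(String m3uText) {
        List<String[]> result = new ArrayList<>();
        String pendingTitle = "";
        for (String line : m3uText.split("\n")) {
            line = line.trim();
            if (line.isEmpty()) continue;
            if (line.startsWith("#EXTINF:")) {
                // "#EXTINF:210,Artist - Song" -> title is everything after the first comma
                int comma = line.indexOf(',');
                pendingTitle = (comma == -1) ? "" : line.substring(comma + 1);
            } else if (!line.startsWith("#")) {
                // A non-comment line is a path; pair it with the preceding title.
                result.add(new String[]{pendingTitle, line});
                pendingTitle = "";
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String sample = "#EXTM3U\n#EXTINF:210,Artist - Song\nC:\\Music\\song.mp3\n";
        for (String[] e : entries(sample)) {
            System.out.println(e[0] + " | " + e[1]);
        }
    }
}
```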
