I am trying to sort a nested HashMap,
HashMap<Integer, HashMap<Integer, String>> myMap = new HashMap<>(), by a specific value in the inner HashMap.
The program reads a delimited file that contains the following values:
000001014|A||Harvey|T|Dent|05/27/1991|0902|000001014|05/27/1991|01/01/3000|
000001388|A||Tony|K|Stark|09/19/1992|0054|000001388|09/19/1992|01/01/3000|
000001395|A||Steve|C|Rogers|10/26/1992|7402|000001395|10/26/1992|01/01/3000|
000001396|A||Peter|P|Parker|11/02/1992|1002|000001396|11/02/1992|01/01/3000|
000011148|I||Drax|T|Destroyer|02/17/1992|7005|000011148|02/17/1992|01/01/3000|
000011141|A||Rocket|M|Raccoon|02/10/1992|7170|000011141|02/10/1992|01/01/3000|
000001404|A||Natasha||Romanoff|12/28/1992|7240|00001404|12/28/1992|01/01/3000|
000001442|A||Bruce|T|Banner|10/06/1993|7012|000001442|10/06/1993|01/01/3000|
000001450|A||Scott|L|Lang|11/29/1993|0002|000001450|11/29/1993|01/01/3000|
000001486|A||Thor|J|Odinson|07/04/1994|0002|000001486|07/04/1994|01/01/3000|
I chose a nested HashMap so that each line in the file has its own key and each element in each line has a key. For example, myMap.get(0).get(7) returns 0902, myMap.get(1).get(7) returns 0054, and myMap.get(2).get(7) returns 7402. But sorting the HashMap by the nested HashMap's value has been a real humdinger. So what I am trying to accomplish is to sort the whole HashMap by the 7th element in the inner map.
Should I sort myMap the old-fashioned way, using nested loops and a binary or insertion sort? How do I tackle this problem?
private static Path directory() {
    File home = FileSystemView.getFileSystemView().getHomeDirectory();
    String path = home.getAbsolutePath();
    Path dir;
    // For reference, the directory is:
    // C:\Users\PC_USER_NAME\Desktop\Work\Person\Employees.txt
    if (getProperty("os.name").startsWith("Windows")) { // specify your directory on Windows
        dir = Paths.get(path + File.separator + "Work" + File.separator + "Person");
    } else { // specify your directory on Mac
        dir = Paths.get(File.separator + "Users" + File.separator + getProperty("user.name")
                + File.separator + "Desktop" + File.separator + "Work" + File.separator + "Person");
    }
    return dir;
}
private static void readFile() {
    HashMap<Integer, HashMap<Integer, String>> myMap = new HashMap<>();
    HashMap<Integer, String> inner = new HashMap<>();
    BufferedReader reader;
    String line;
    int count = 0;
    try {
        File dir = new File(directory().toString());
        File[] files = dir.listFiles((File pathname) ->
                pathname.getName().startsWith("Employees"));
        File lastModifiedFile = files[0];
        for (File file : files) {
            if (lastModifiedFile.lastModified() < file.lastModified()) {
                lastModifiedFile = file;
            }
        }
        reader = new BufferedReader(new FileReader(lastModifiedFile));
        // Skips the header.
        reader.readLine();
        while ((line = reader.readLine()) != null) {
            String[] keyValue = line.split("\\|");
            for (int i = 0; i < keyValue.length; i++) {
                inner.put(i, keyValue[i]);
            }
            myMap.put(count, inner);
            count++;
            inner = new HashMap<>();
        }
        reader.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    sort(myMap);
}
private static void sort(HashMap<Integer, HashMap<Integer, String>> myMap) {
    Set<Entry<Integer, HashMap<Integer, String>>> sorted = myMap.entrySet();
    for (Entry<Integer, HashMap<Integer, String>> entry : sorted) {
        System.out.println(entry.getKey() + " ==> " + entry.getValue().get(7));
    }
    // Not including this method's code for brevity's sake
    writeFile();
}
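For reference, the map itself cannot be sorted, but its entries can be copied into a list and the list sorted by the inner map's value at index 7. A minimal sketch, with made-up stand-in data where only index 7 is populated:

```java
import java.util.*;

public class SortByInnerValue {
    public static void main(String[] args) {
        // Made-up stand-ins for three parsed lines; only key 7 is filled in
        Map<Integer, Map<Integer, String>> myMap = new HashMap<>();
        myMap.put(0, Map.of(7, "0902"));
        myMap.put(1, Map.of(7, "0054"));
        myMap.put(2, Map.of(7, "7402"));

        // Copy the entries into a list and sort the list by the inner value
        List<Map.Entry<Integer, Map<Integer, String>>> entries =
                new ArrayList<>(myMap.entrySet());
        entries.sort(Comparator.comparing(e -> e.getValue().get(7)));

        for (Map.Entry<Integer, Map<Integer, String>> e : entries) {
            System.out.println(e.getKey() + " ==> " + e.getValue().get(7));
        }
    }
}
```

Note the comparison is lexicographic on strings, which works here because the field is zero-padded to a fixed width.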
First of all, what does it mean to sort a HashMap? If, as I guess, you mean to print sorted values, then you won't be sorting the Map itself but some collection of its values.
Second, why do you want to keep such data in a Map? It sounds like a really bad idea, and you just spotted the first argument why.
In my view you should create some kind of Row class, like
public class Row {
private List<String> items; // for the '|'-split values in a row; maybe it should even be a String[]?
...
}
and keep your whole file as a List<Row>. Then you can create your own Comparator, or even make Row implement Comparable:
public class Row implements Comparable<Row> {
    private List<String> items = new ArrayList<>();
    ...
    @Override
    public int compareTo(Row that) {
        return this.items.get(7).compareTo(that.items.get(7));
    }
}
Now you can easily sort the file using the Collections.sort() utility.
Please notice that implementing Comparator allows you to create many versions (like SortBy6thComparator, SortBy7thComparator, SortBy8thComparator...). You then just need to use the other version of the sort method:
public static <T> void sort(List<T> list, Comparator<? super T> c)
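Putting the pieces together, a minimal sketch of the Row approach with a Comparator. The sample lines are truncated copies of the question's data, and the printed fields are chosen for illustration; index 7 is the column the question sorts on:

```java
import java.util.*;

public class Row {
    private final List<String> items;

    public Row(String line) {
        // split on '|' just like the question does
        this.items = Arrays.asList(line.split("\\|"));
    }

    public String get(int i) {
        return items.get(i);
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row("000001014|A||Harvey|T|Dent|05/27/1991|0902"));
        rows.add(new Row("000001388|A||Tony|K|Stark|09/19/1992|0054"));
        rows.add(new Row("000001395|A||Steve|C|Rogers|10/26/1992|7402"));

        // Sort by the 7th field; swap in another index for other comparators
        rows.sort(Comparator.comparing(r -> r.get(7)));

        for (Row r : rows) {
            System.out.println(r.get(7) + " " + r.get(3));
        }
    }
}
```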
I am reading a txt file and storing the data in a Hashtable, but I can't get the correct output. Part of the txt file is shown in the attached image; this is part of my data.
I want to store columns 1 and 2 as the key (a String) in the Hashtable, and columns 3 and 4 as the value (an ArrayList).
My code below:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"));
    br.readLine();
    ArrayList<String[]> value = new ArrayList<String[]>();
    String[] probDes = new String[2];
    String key = "";
    //read file line by line
    String line = null;
    while ((line = br.readLine()) != null && !line.equals(";;")) {
        //System.out.println("line ="+line);
        String source;
        String action;
        //split by tab
        String[] splited = line.split("\\t");
        source = splited[0];
        action = splited[1];
        key = source + "," + action;
        probDes[0] = splited[2];
        probDes[1] = splited[3];
        value.add(probDes);
        hashTableForWorld.put(key, value);
        System.out.println("hash table is like this:" + hashTableForWorld);
    }
    br.close();
    return hashTableForWorld;
}
The output looks like one very long line.
I think maybe the Hashtable is broken, but I don't know why. Thank you for reading my problem.
The first thing we need to establish is that you have a really obvious XY-Problem, in that "what you need to do" and "how you're trying to solve it" are completely at odds with each other.
So let's go back to the original problem and try to work out what we need first.
As best as I can determine, source and action are connected, in that they represent queryable "keys" to your data structure, and probability, destination, and reward are queryable "outcomes" in your data structure. So we'll start by creating objects to represent those two concepts:
public class SourceAction implements Comparable<SourceAction> {
    public final String source;
    public final String action;

    public SourceAction() {
        this("", "");
    }

    public SourceAction(String source, String action) {
        this.source = source;
        this.action = action;
    }

    @Override
    public int compareTo(SourceAction sa) {
        int comp = source.compareTo(sa.source);
        if (comp != 0) return comp;
        return action.compareTo(sa.action);
    }

    public boolean equals(SourceAction sa) {
        return source.equals(sa.source) && action.equals(sa.action);
    }

    @Override
    public String toString() {
        return source + ',' + action;
    }
}
public class Outcome {
    public String probability; // you can use double if you've written code to parse the probability
    public String destination;
    public String reward;      // you can use double if you've written code to parse the reward

    public Outcome() {
        this("", "", "");
    }

    public Outcome(String probability, String destination, String reward) {
        this.probability = probability;
        this.destination = destination;
        this.reward = reward;
    }

    public boolean equals(Outcome o) {
        return probability.equals(o.probability) && destination.equals(o.destination) && reward.equals(o.reward);
    }

    @Override
    public String toString() {
        return probability + ',' + destination + ',' + reward;
    }
}
So then, given these objects, what sort of Data Structure can properly encapsulate the relationship between these objects, given that a SourceAction seems to have a One-To-Many relationship to Outcome objects? My suggestion is that a Map<SourceAction, List<Outcome>> represents this relationship.
private Map<SourceAction, List<Outcome>> readData() throws Exception
It is possible to use a Hash Table (in this case, HashMap) to contain these objects, but I'm trying to keep the code as simple as possible, so we're going to stick to the more generic interface.
Then, we can reuse the logic you used in your original code to insert values into this data structure, with a few tweaks.
private Map<SourceAction, List<Outcome>> readData() {
    // I'm using a TreeMap because that makes the implementation simpler. If you absolutely
    // need to use a HashMap, then make sure you implement hashCode() for SourceAction.
    Map<SourceAction, List<Outcome>> dataStructure = new TreeMap<>();
    // We're using a try-with-resources block to eliminate the later call to close the reader
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine(); // Skip the first line because it's just a header
        // read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            // split by tab
            String[] split = line.split("\\t");
            SourceAction sourceAction = new SourceAction(split[0], split[1]);
            Outcome outcome = new Outcome(split[2], split[3], split[4]);
            if (dataStructure.containsKey(sourceAction)) {
                // Entry already found; we're just going to add this outcome to the already
                // existing list.
                dataStructure.get(sourceAction).add(outcome);
            } else {
                List<Outcome> outcomes = new ArrayList<>();
                outcomes.add(outcome);
                dataStructure.put(sourceAction, outcomes);
            }
        }
    } catch (IOException e) {
        // Do whatever, or rethrow the exception
    }
    return dataStructure;
}
Then, if you want to query for all the outcomes associated with a given source + action, you need only construct a SourceAction object and query the Map for it.
Map<SourceAction, List<Outcome>> actionMap = readData();
List<Outcome> outcomes = actionMap.get(new SourceAction("(1,1)", "Up"));
assert(outcomes != null);
assert(outcomes.size() == 3);
assert(outcomes.get(0).equals(new Outcome("0.8", "(1,2)", "-0.04")));
assert(outcomes.get(1).equals(new Outcome("0.1", "(2,1)", "-0.04")));
assert(outcomes.get(2).equals(new Outcome("0.1", "(1,1)", "-0.04")));
This should yield the functionality you need for your problem.
You should change your logic for adding to your hashtable to check for the key you create. If the key exists, then grab your array list of arrays that it maps to and add your array to it. Currently you will overwrite the data.
Try this
if (hashTableForWorld.containsKey(key)) {
    value = hashTableForWorld.get(key);
    value.add(probDes);
    hashTableForWorld.put(key, value);
} else {
    value = new ArrayList<String[]>();
    value.add(probDes);
    hashTableForWorld.put(key, value);
}
Then to print the contents try something like this
for (Map.Entry<String, ArrayList<String[]>> entry : hashTableForWorld.entrySet()) {
    String key = entry.getKey();
    ArrayList<String[]> value = entry.getValue();
    System.out.println("Key: " + key + " Value: ");
    for (int i = 0; i < value.size(); i++) {
        System.out.print("Array " + i + ": ");
        for (String val : value.get(i))
            System.out.print(val + " :: ");
        System.out.println();
    }
}
Hashtable and ArrayList (and other collections) do not make copies of keys and values, so every value you store is the very same probDes array you allocate once at the beginning. (It is normal that the String[] prints in a cryptic form; you would have to pretty-print it yourself, but you can still see that it is the same cryptic thing every time.)
What is certain is that you should allocate a new probDes for each element inside the loop.
Based on your data you could work with a plain array as the value, in my opinion; there is no real use for the ArrayList.
And the same applies to value: it has to be allocated separately upon encountering a new key:
private Hashtable<String, ArrayList<String[]>> readData() throws Exception {
    try (BufferedReader br = new BufferedReader(new FileReader("MyGridWorld.txt"))) {
        br.readLine();
        Hashtable<String, ArrayList<String[]>> hashTableForWorld = new Hashtable<>();
        //read file line by line
        String line = null;
        while ((line = br.readLine()) != null && !line.equals(";;")) {
            //System.out.println("line ="+line);
            String source;
            String action;
            //split by tab
            String[] split = line.split("\\t");
            source = split[0];
            action = split[1];
            String key = source + "," + action;
            String[] probDesRew = new String[3];
            probDesRew[0] = split[2];
            probDesRew[1] = split[3];
            probDesRew[2] = split[4];
            ArrayList<String[]> value = hashTableForWorld.get(key);
            if (value == null) {
                value = new ArrayList<>();
                hashTableForWorld.put(key, value);
            }
            value.add(probDesRew);
        }
        return hashTableForWorld;
    }
}
Besides relocating the variables to their place of actual usage, the return value is also created locally, and the reader is wrapped into a try-with-resources construct, which ensures that it gets closed even if an exception occurs (see the official tutorial).
I'm implementing external merge sort using Java.
Given a file, I split it into smaller ones, then sort the smaller portions, and finally merge the sorted (smaller) files.
The last step is what I'm having trouble with.
I have a list of files, and at each step I want to take the minimum value among the first rows of those files and then remove that line.
So it is supposed to be something like this:
public static void mergeSortedFiles(List<File> sorted, File output) throws IOException {
    BufferedWriter wf = new BufferedWriter(new FileWriter(output));
    String curLine = "";
    while (!sorted.isEmpty()) {
        curLine = findMinLine(sorted);
        wf.write(curLine);
    }
}

public static String findMinLine(List<File> sorted) throws IOException {
    List<BufferedReader> brs = new ArrayList<>();
    for (int i = 0; i < sorted.size(); i++) {
        brs.add(new BufferedReader(new FileReader(sorted.get(i))));
    }
    List<String> lines = new ArrayList<>();
    for (BufferedReader br : brs) {
        lines.add(br.readLine());
    }
    Collections.sort(lines);
    return lines.get(0);
}
I'm not sure how to update the files. Can anyone help with that?
Thanks for helping!
You can create a Comparable wrapper around each file and then place the wrappers in a heap (for example a PriorityQueue).
// Functional interface implied by the usage further down (line -> line)
public interface Deserializer<T> {
    T deserialize(String line);
}

public class ComparableFile<T extends Comparable<T>> implements Comparable<ComparableFile<T>> {
    private final Deserializer<T> deserializer;
    private final Iterator<String> lines;
    private T buffered;

    public ComparableFile(File file, Deserializer<T> deserializer) {
        this.deserializer = deserializer;
        try {
            this.lines = Files.newBufferedReader(file.toPath()).lines().iterator();
        } catch (IOException e) {
            // deal with it differently if you want, I'm just providing a working example
            // and wanted to use the constructor in a lambda function
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public int compareTo(ComparableFile<T> that) {
        T mine = peek();
        T theirs = that.peek();
        if (mine == null) return theirs == null ? 0 : -1;
        if (theirs == null) return 1;
        return mine.compareTo(theirs);
    }

    public T pop() {
        T tmp = peek();
        if (tmp != null) {
            buffered = null;
            return tmp;
        }
        throw new NoSuchElementException();
    }

    public boolean isEmpty() {
        return peek() == null;
    }

    private T peek() {
        if (buffered != null) return buffered;
        if (!lines.hasNext()) return null;
        return buffered = deserializer.deserialize(lines.next());
    }
}
Then, you can merge them this way:
public class MergeFiles<T extends Comparable<T>> {
    private final PriorityQueue<ComparableFile<T>> files;

    public MergeFiles(List<File> files, Deserializer<T> deserializer) {
        this.files = new PriorityQueue<>(files.stream()
                .map(file -> new ComparableFile<>(file, deserializer))
                .filter(comparableFile -> !comparableFile.isEmpty())
                .collect(toList()));
    }

    public Iterator<T> getSortedElements() {
        return new Iterator<T>() {
            @Override
            public boolean hasNext() {
                return !files.isEmpty();
            }

            @Override
            public T next() {
                if (!hasNext()) throw new NoSuchElementException();
                ComparableFile<T> head = files.poll();
                T next = head.pop();
                if (!head.isEmpty()) files.add(head);
                return next;
            }
        };
    }
}
And here's some code to demonstrate it works:
public static void main(String[] args) throws IOException {
    List<File> files = Arrays.asList(
            newTempFile(Arrays.asList("hello", "world")),
            newTempFile(Arrays.asList("english", "java", "programming")),
            newTempFile(Arrays.asList("american", "scala", "stackoverflow"))
    );
    Iterator<String> sortedElements = new MergeFiles<>(files, line -> line).getSortedElements();
    while (sortedElements.hasNext()) {
        System.out.println(sortedElements.next());
    }
}

private static File newTempFile(List<String> words) throws IOException {
    File tempFile = File.createTempFile("sorted-", ".txt");
    Files.write(tempFile.toPath(), words);
    tempFile.deleteOnExit();
    return tempFile;
}
Output:
american
english
hello
java
programming
scala
stackoverflow
world
So what you want to do is swap two lines in a text file? You can do it with a RandomAccessFile, but this will be horribly slow, since every time you swap two lines you have to wait for the next IO burst.
So I highly recommend using the following code, which performs the merge on the heap:
List<String> lines1 = Files.readAllLines(yourFile1);
List<String> lines2 = Files.readAllLines(yourFile2);
// merge the two already-sorted line lists
List<String> merged = new ArrayList<>();
int i = 0, j = 0;
while (i < lines1.size() && j < lines2.size()) {
    if (lines1.get(i).compareTo(lines2.get(j)) <= 0) merged.add(lines1.get(i++));
    else merged.add(lines2.get(j++));
}
merged.addAll(lines1.subList(i, lines1.size()));
merged.addAll(lines2.subList(j, lines2.size()));
FileWriter writer = new FileWriter(yourOutputFile);
for (String str : merged) {
    writer.write(str + System.lineSeparator());
}
writer.close();
The standard merge technique between a fixed number of files (say, 2) is :
have a variable for the value of the ordering key of the current record of each file (for java, make that variable Comparable).
start the process by reading the first record of each file (and fill in the corresponding variable)
loop (until end-of-file on both) through a code block that says essentially
if (key_1.compareTo(key_2) == 0) { process both files ; then read both files}
else if (key_1.compareTo(key_2) == -1) { process file 1 ; then read file 1}
else { process file 2 ; then read file 2}
Note how this code does essentially nothing more than determine the file with the lowest key, and process that.
If your number of files is variable, then your number of key variables is variable too, and "determining the file with the lowest current key" cannot be done as above. Instead, have as many current-key-value objects as there are files and store them all in a TreeSet. The first element of the TreeSet will then be the lowest current key value across all the files. If you maintain a link between each key value and its file number, you just process that file: delete the just-processed key value from the TreeSet, read a new record from that file, and add its key value to the TreeSet.
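The TreeSet variant described above can be sketched with (current key, file index) pairs; in-memory iterators stand in for the sorted input files, and the file index both breaks ties and links a key back to its source:

```java
import java.util.*;

public class KWayMerge {
    public static void main(String[] args) {
        // Each iterator stands in for one already-sorted input file
        List<Iterator<String>> files = new ArrayList<>();
        files.add(Arrays.asList("apple", "melon").iterator());
        files.add(Arrays.asList("banana", "pear").iterator());
        files.add(Arrays.asList("cherry").iterator());

        // TreeSet keeps the lowest current key first; the file index breaks
        // ties so equal keys from different files both survive set semantics
        TreeSet<Map.Entry<String, Integer>> heads = new TreeSet<>(
                Map.Entry.<String, Integer>comparingByKey()
                        .thenComparing(Map.Entry.comparingByValue()));
        for (int i = 0; i < files.size(); i++) {
            if (files.get(i).hasNext()) {
                heads.add(new AbstractMap.SimpleEntry<>(files.get(i).next(), i));
            }
        }

        List<String> merged = new ArrayList<>();
        while (!heads.isEmpty()) {
            // take the lowest key, then refill from the same file
            Map.Entry<String, Integer> lowest = heads.pollFirst();
            merged.add(lowest.getKey());
            int i = lowest.getValue();
            if (files.get(i).hasNext()) {
                heads.add(new AbstractMap.SimpleEntry<>(files.get(i).next(), i));
            }
        }
        System.out.println(merged); // [apple, banana, cherry, melon, pear]
    }
}
```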
I want to combine these two text files
Driver details text file:
AB11; Angela
AB22; Beatrice
Journeys text file:
AB22,Edinburgh ,6
AB11,Thunderdome,1
AB11,Station,5
And I want my output to be only the names and where the person has been. It should look like this:
Angela
Thunderdome
Station
Beatrice
Edinburgh
Here is my code. I'm not sure what I'm doing wrong, but I'm not getting the right output.
ArrayList<String> names = new ArrayList<String>();
TreeSet<String> destinations = new TreeSet<String>();

public TaxiReader() {
    BufferedReader brName = null;
    BufferedReader brDest = null;
    try {
        // Have the buffered readers start to read the text files
        brName = new BufferedReader(new FileReader("taxi_details.txt"));
        brDest = new BufferedReader(new FileReader("2017_journeys.txt"));
        String line = brName.readLine();
        String lines = brDest.readLine();
        while (line != null && lines != null) {
            // The input lines are split on the characters the text files use to separate fields
            String name[] = line.split(";");
            String destination[] = lines.split(",");
            // Add names and destinations to the different arraylists
            String x = new String(name[1]);
            //names.add(x);
            String y = new String(destination[1]);
            destinations.add(y);
            // add arraylists to treemap
            TreeMap<String, TreeSet<String>> taxiDetails = new TreeMap<String, TreeSet<String>>();
            taxiDetails.put(x, destinations);
            System.out.println(taxiDetails);
            // Reads the next line of the text files
            line = brName.readLine();
            lines = brDest.readLine();
        }
        // Catch blocks exist here to catch every potential error
    } catch (FileNotFoundException ex) {
        ex.printStackTrace();
    } catch (IOException ex) {
        ex.printStackTrace();
        // Finally block exists to close the files and handle any potential exceptions
    } finally {
        try {
            if (brName != null)
                brName.close();
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}

public static void main(String[] args) {
    TaxiReader reader = new TaxiReader();
}
You are reading the 2 files in parallel; I don't think that's going to work too well. Try reading one file at a time.
Also, you might want to rethink your data structures.
The first file relates a key "AB11" to a value "Angela". A map is better than an arraylist:
Map<String, String> names = new HashMap<String, String>();
// note: the first file is ';'-separated
String key = line.split(";")[0].trim();   // "AB11"
String value = line.split(";")[1].trim(); // "Angela"
names.put(key, value);
names.get("AB11"); // "Angela"
Similarly, the second file relates a key "AB11" to multiple values "Thunderdome", "Station". You could also use a map for this:
Map<String, List<String>> destinations = new HashMap<String, List<String>>();
String key = line.split(",")[0];   // "AB11"
String value = line.split(",")[1]; // "Station"
if (destinations.get(key) == null) {
    List<String> values = new LinkedList<String>();
    values.add(value);
    destinations.put(key, values);
} else {
    // we already have a destination value stored for this key
    // add a new destination to the list
    List<String> values = destinations.get(key);
    values.add(value);
}
To get the output you want:
// for each entry in the names map
for (Map.Entry<String, String> entry : names.entrySet()) {
    String key = entry.getKey();
    String name = entry.getValue();
    // print the name
    System.out.println(name);
    // use the key to retrieve the list of destinations for this name
    List<String> values = destinations.get(key);
    for (String destination : values) {
        // print each destination with a small indentation
        System.out.println("  " + destination);
    }
}
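The fragments above can be combined into a runnable sketch. In-memory lists stand in for the two files (the sample lines are copied from the question), and a TreeMap is used for the names so the drivers come out in a deterministic order:

```java
import java.util.*;

public class TaxiReport {
    public static void main(String[] args) {
        // In-memory stand-ins for the two files from the question
        List<String> driverLines = Arrays.asList("AB11; Angela", "AB22; Beatrice");
        List<String> journeyLines = Arrays.asList(
                "AB22,Edinburgh ,6", "AB11,Thunderdome,1", "AB11,Station,5");

        // id -> name (the driver file is ';'-separated)
        Map<String, String> names = new TreeMap<>();
        for (String line : driverLines) {
            String[] parts = line.split(";");
            names.put(parts[0].trim(), parts[1].trim());
        }

        // id -> list of destinations (the journey file is ','-separated)
        Map<String, List<String>> destinations = new HashMap<>();
        for (String line : journeyLines) {
            String[] parts = line.split(",");
            destinations.computeIfAbsent(parts[0].trim(), k -> new ArrayList<>())
                    .add(parts[1].trim());
        }

        for (Map.Entry<String, String> entry : names.entrySet()) {
            System.out.println(entry.getValue());
            for (String dest : destinations.getOrDefault(entry.getKey(),
                    Collections.emptyList())) {
                System.out.println("  " + dest);
            }
        }
    }
}
```

This prints Angela with Thunderdome and Station, then Beatrice with Edinburgh, matching the output the question asks for.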
public class CompareCSV {
    public static void main(String args[]) throws FileNotFoundException, IOException {
        String path = "C:\\csv\\";
        String file1 = "file1.csv";
        String file2 = "file2.csv";
        String file3 = "file3.csv";
        ArrayList<String> al1 = new ArrayList<String>();
        ArrayList<String> al2 = new ArrayList<String>();
        BufferedReader CSVFile1 = new BufferedReader(new FileReader("/C:/Users/bida0916/Desktop/macro.csv"));
        String dataRow1 = CSVFile1.readLine();
        while (dataRow1 != null) {
            String[] dataArray1 = dataRow1.split(",");
            for (String item1 : dataArray1) {
                al1.add(item1);
            }
            dataRow1 = CSVFile1.readLine();
        }
        CSVFile1.close();
        BufferedReader CSVFile2 = new BufferedReader(new FileReader("C:/Users/bida0916/Desktop/Deprecated.csv"));
        String dataRow2 = CSVFile2.readLine();
        while (dataRow2 != null) {
            String[] dataArray2 = dataRow2.split(",");
            for (String item2 : dataArray2) {
                al2.add(item2);
            }
            dataRow2 = CSVFile2.readLine();
        }
        CSVFile2.close();
        for (String bs : al2) {
            al1.remove(bs);
        }
        int size = al1.size();
        System.out.println(size);
        try {
            FileWriter writer = new FileWriter("C:/Users/bida0916/Desktop/NewMacro.csv");
            while (size != 0) {
                size--;
                writer.append("" + al1.get(size));
                writer.append('\n');
            }
            writer.flush();
            writer.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I want to compare two CSV files in Java and remove from one file the complete details of every row that appears in the other, by comparing the first column of both files. Currently I am getting a CSV file with only one column, with all the details jumbled up.
You are adding all values of all columns to a single list; that's why you get the mess in your output:
ArrayList<String> al1 = new ArrayList<String>();
//...
String[] dataArray1 = dataRow1.split(",");
for (String item1 : dataArray1) {
    al1.add(item1);
}
Add the complete string array from your file to your list, then you can access your data in a structured way:
List<String[]> al1 = new ArrayList<>();
//...
String[] dataArray1 = dataRow1.split(",");
al1.add(dataArray1);
But for removal of rows I'd recommend using Maps for faster access, where the key is the element on which you decide which row to delete, and the value is the full row from your CSV file:
Map<String, String> al1 = new HashMap<>(); // or LinkedHashMap if row order is relevant
//...
String[] dataArray1 = dataRow1.split(",");
al1.put(dataArray1[0], dataRow1);
But be aware that if two rows in a file contain the same value in the first column, only one will be preserved. If that's possible, you might need to adapt the solution to store the data in a Map<String, Set<String>> or Map<String, List<String>>.
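For that duplicate-key case, the Map<String, List<String>> variant can be sketched with computeIfAbsent, which creates the list on the first sight of a key (the sample rows here are made up):

```java
import java.util.*;

public class CsvByKey {
    public static void main(String[] args) {
        // Made-up rows; the first column repeats for "A"
        List<String> rows = Arrays.asList("A,1,x", "B,2,y", "A,3,z");

        // first column -> all full rows that share it
        Map<String, List<String>> byKey = new HashMap<>();
        for (String row : rows) {
            String key = row.split(",")[0];
            byKey.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        }

        System.out.println(byKey.get("A")); // both "A" rows are kept
    }
}
```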
At this point I'd also recommend extracting the file reading into a separate method, which you can reuse to read both input files and reduce duplicated code:
Map<String, String> al1 = readInputCsvFile(file1);
Map<String, String> al2 = readInputCsvFile(file2);
For the deletion of the lines which shall be removed, iterate over the key set of one of the maps and remove the entry from the other:
for (String key : al2.keySet()) {
    al1.remove(key);
}
And for writing your output file, just write the row read from the original file as stored in the 'value' of your map.
for (String dataRow : al1.values()) {
    writer.append(dataRow);
    writer.append('\n');
}
EDIT
If you need to perform operations based on other data columns you should rather store the 'split-array' in the map instead of the full-line string read from the file. Then you have all data columns separately available:
Map<String, String[]> al2 = new HashMap<>();
//...
String[] dataArray2 = dataRow2.split(",");
al2.put(dataArray2[0], dataArray2);
You might then, e.g. add a condition for deleting:
for (Entry<String, String[]> entry : al2.entrySet()) {
    String[] data = entry.getValue();
    if ("delete".equals(data[17])) {
        al1.remove(entry.getKey());
    }
}
For writing your output file you have to rebuild the csv-format.
I'd recommend to use Apache commons-lang StringUtils for that task:
for (String[] data : al1.values()) {
    writer.append(StringUtils.join(data, ","));
    writer.append('\n');
}
I have a code snippet that is not sorting correctly. I need to sort a HashMap by keys using a TreeMap, then write the result out to a text file. I have read about sorting and found that a TreeMap can sort a HashMap by keys, but I am not sure whether I am using it incorrectly. Can someone please take a look at the code snippet and advise whether this is incorrect?
public void compareDICOMSets() throws IOException {
    FileWriter fs;
    BufferedWriter bw;
    fs = new FileWriter("dicomTagResults.txt");
    bw = new BufferedWriter(fs);
    Map<String, String> sortedMap = new TreeMap<String, String>();
    for (Entry<String, String> entry : dicomFile.entrySet()) {
        String s = entry.toString().substring(0, Math.min(entry.toString().length(), 11));
        if (dicomTagList.containsKey(entry.getKey())) {
            sortedMap.put(s, entry.getValue());
            Result.put(s, entry.getValue());
            bw.write(s + entry.getValue() + "\r\n");
        }
    }
    bw.close();
    menu.showMenu();
}
}
UPDATE:
This is what I get for results when I do a println:
(0008,0080)
(0008,0092)
(0008,1010)
(0010,4000)
(0010,0010)
(0010,1030)
(0008,103E)
(0008,2111)
(0008,1050)
(0012,0040)
(0008,0094)
(0010,1001)
I am looking to sort this numerically. I added String s to trim the key down to just the tags, as it was displaying a whole string of unnecessary stuff.
You should first order your results, and then print them.
For Java 8:
Map<String, String> Result = ...;
// This orders your Result map by key, using String natural order
Map<String, String> ordered = new TreeMap<>(Result);
// Now write the results
BufferedWriter bw = new BufferedWriter(new FileWriter("dicomTagResults.txt"));
ordered.forEach((k, v) -> {
    try {
        bw.write(k + v + "\r\n");
    } catch (IOException e) {
        // bw.write throws a checked IOException, which a lambda must handle
        throw new UncheckedIOException(e);
    }
});
bw.close();
For pre Java 8:
Map<String, String> Result = ...;
// This orders your Result map by key, using String natural order
Map<String, String> ordered = new TreeMap<>(Result);
// Now write the results
BufferedWriter bw = new BufferedWriter(new FileWriter("dicomTagResults.txt"));
for (Map.Entry<String, String> entry : ordered.entrySet()) {
    String k = entry.getKey();
    String v = entry.getValue();
    bw.write(k + v + "\r\n");
}
bw.close();