Java Bytecode - ASM - Get Label Offset - java

I am trying to get the offsets of all labels in a method.
I tried using the following code:
private static ArrayList<Integer> GetLabelOffsets(MethodNode methodNode) {
    ArrayList<Integer> labelOffsets = new ArrayList<>();
    for (AbstractInsnNode instruction : methodNode.instructions.toArray()) {
        if (instruction instanceof JumpInsnNode) {
            JumpInsnNode jumpInsnNode = (JumpInsnNode) instruction;
            labelOffsets.add(jumpInsnNode.label.getLabel().getOffset());
        }
    }
    return labelOffsets;
}
However, the getOffset() method throws an exception:
java.lang.IllegalStateException: Label offset position has not been resolved yet
How can I resolve these offset positions? Or what is the proper way to achieve this?
Edit
The MethodNode is an org.objectweb.asm.tree.MethodNode Object from the Java ASM library
Added more code at request:
public static HashMap<String, ClassNode> ParseJar(JarFile jar) {
    HashMap<String, ClassNode> classes = new HashMap<>();
    try {
        Enumeration<?> enumeration = jar.entries();
        while (enumeration.hasMoreElements()) {
            JarEntry entry = (JarEntry) enumeration.nextElement();
            if (entry.getName().endsWith(".class")) {
                ClassReader classReader = new ClassReader(jar.getInputStream(entry));
                ClassNode classNode = new ClassNode();
                classReader.accept(classNode, ClassReader.SKIP_DEBUG | ClassReader.SKIP_FRAMES);
                classes.put(classNode.name, classNode);
            }
        }
        jar.close();
        return classes;
    } catch (Exception ex) {
        return null;
    }
}
public static void main(String[] args) throws IOException {
    JarFile jar = new JarFile(fileName); // fileName: path to the jar being analyzed
    HashMap<String, ClassNode> classes = JarUtils.ParseJar(jar);
    for (ClassNode classNode : classes.values()) {
        for (MethodNode methodNode : classNode.methods) {
            ArrayList<Integer> offsets = GetLabelOffsets(methodNode);
            // do more stuff with offsets
        }
    }
}

From the documentation of getOffset():
This method is intended for Attribute sub classes, and is normally not needed by class generators or adapters.
Since this offset is defined in terms of bytes, it wouldn’t be very helpful when processing a list of instructions, especially as ASM abstracts the different forms of instructions that may have different lengths in byte code.
The general idea is that the instruction list can still be changed, so a Label represents a logical position; the byte offset is only calculated when the method's resulting bytecode is written and the actual instruction sizes are known.
Within the instruction list, there should be a corresponding LabelNode referencing the same Label as the instruction.
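If a logical position is enough for your purposes, the index of the jump target within the instruction list is well defined without ever writing the class; a minimal sketch of that approach (the method name is mine):
private static List<Integer> getJumpTargetIndices(MethodNode methodNode) {
    List<Integer> targetIndices = new ArrayList<>();
    for (AbstractInsnNode insn : methodNode.instructions.toArray()) {
        if (insn instanceof JumpInsnNode) {
            // indexOf() returns the position of the target LabelNode
            // within this method's instruction list
            targetIndices.add(methodNode.instructions.indexOf(((JumpInsnNode) insn).label));
        }
    }
    return targetIndices;
}
If you really need byte offsets, they become available once the tree has been written, e.g. after classNode.accept(new ClassWriter(0)): the writer resolves every Label it visits, and the offsets then refer to the newly written bytecode.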

Related

JVM throws java.lang.OutOfMemoryError: heap space (File processing)

I wrote a file duplication processor which gets the MD5 hash of each file, adds it to a hashmap, then takes all of the files with the same hash and adds them to a hashmap called dupeList. But while scanning large directories such as C:\Program Files\ it throws the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.nio.file.Files.read(Unknown Source)
at java.nio.file.Files.readAllBytes(Unknown Source)
at com.embah.FileDupe.Utils.FileUtils.getMD5Hash(FileUtils.java:14)
at com.embah.FileDupe.FileDupe.getDuplicateFiles(FileDupe.java:43)
at com.embah.FileDupe.FileDupe.getDuplicateFiles(FileDupe.java:68)
at ImgHandler.main(ImgHandler.java:14)
I'm sure it's due to the fact it handles so many files, but I'm not sure of a better way to handle it. I'm trying to get this working so I can sift through all my kids' baby pictures and remove duplicates before I put them on my external hard drive for long-term storage. Thanks everyone for the help!
My code
public class FileUtils {
    public static String getMD5Hash(String path) {
        try {
            byte[] bytes = Files.readAllBytes(Paths.get(path)); //LINE STACK THROWS ERROR
            byte[] hash = MessageDigest.getInstance("MD5").digest(bytes);
            bytes = null;
            String hexHash = DatatypeConverter.printHexBinary(hash);
            hash = null;
            return hexHash;
        } catch (Exception e) {
            System.out.println("Having problem with file: " + path);
            return null;
        }
    }
}
public class FileDupe {
    public static Map<String, List<String>> getDuplicateFiles(String dirs) {
        Map<String, List<String>> allEntrys = new HashMap<>(); //<hash, file loc>
        Map<String, List<String>> dupeEntrys = new HashMap<>();
        File fileDir = new File(dirs);
        if (fileDir.isDirectory()) {
            ArrayList<File> nestedFiles = getNestedFiles(fileDir.listFiles());
            File[] fileList = new File[nestedFiles.size()];
            fileList = nestedFiles.toArray(fileList);
            for (File file : fileList) {
                String path = file.getAbsolutePath();
                String hash = "";
                if ((hash = FileUtils.getMD5Hash(path)) == null)
                    continue;
                if (!allEntrys.containsValue(path))
                    put(allEntrys, hash, path);
            }
            fileList = null;
        }
        allEntrys.forEach((hash, locs) -> {
            if (locs.size() > 1) {
                dupeEntrys.put(hash, locs);
            }
        });
        allEntrys = null;
        return dupeEntrys;
    }
    public static Map<String, List<String>> getDuplicateFiles(String... dirs) {
        ArrayList<Map<String, List<String>>> maps = new ArrayList<Map<String, List<String>>>();
        Map<String, List<String>> dupeMap = new HashMap<>();
        for (String dir : dirs) { //Get all dupe files
            maps.add(getDuplicateFiles(dir));
        }
        for (Map<String, List<String>> map : maps) { //iterate thru each map, and add all items not in the dupemap to it
            dupeMap.putAll(map);
        }
        return dupeMap;
    }
    protected static ArrayList<File> getNestedFiles(File[] fileDir) {
        ArrayList<File> files = new ArrayList<File>();
        return getNestedFiles(fileDir, files);
    }
    protected static ArrayList<File> getNestedFiles(File[] fileDir, ArrayList<File> allFiles) {
        for (File file : fileDir) {
            if (file.isDirectory()) {
                getNestedFiles(file.listFiles(), allFiles);
            } else {
                allFiles.add(file);
            }
        }
        return allFiles;
    }
    protected static <KEY, VALUE> void put(Map<KEY, List<VALUE>> map, KEY key, VALUE value) {
        map.compute(key, (s, strings) -> strings == null ? new ArrayList<>() : strings).add(value);
    }
}
public class ImgHandler {
    private static Scanner s = new Scanner(System.in);
    public static void main(String[] args) {
        System.out.print("Please enter locations to scan for duplicates\nSeparate locations via semi-colon(;)\nLocations: ");
        String[] locList = s.nextLine().split(";");
        Map<String, List<String>> dupes = FileDupe.getDuplicateFiles(locList);
        System.out.println(dupes.size() + " dupes detected!");
        dupes.forEach((hash, locs) -> {
            System.out.println("Hash: " + hash);
            locs.forEach((loc) -> System.out.println("\tLocation: " + loc));
        });
    }
}
Reading the entire file into a byte array not only requires sufficient heap space, it's also limited to file sizes up to Integer.MAX_VALUE in principle (the practical limit for the HotSpot JVM is even a few bytes smaller).
The best solution is not to load the data into the heap memory at all:
public static String getMD5Hash(String path) {
    MessageDigest md;
    try { md = MessageDigest.getInstance("MD5"); }
    catch (NoSuchAlgorithmException ex) {
        System.out.println("FileUtils.getMD5Hash(): " + ex);
        return null; // TODO better error handling
    }
    try (FileChannel fch = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
        for (long pos = 0, rem = fch.size(), chunk; rem > pos; pos += chunk) {
            chunk = Math.min(Integer.MAX_VALUE, rem - pos);
            md.update(fch.map(FileChannel.MapMode.READ_ONLY, pos, chunk));
        }
    } catch (IOException e) {
        System.out.println("Having problem with file: " + path);
        return null; // TODO better error handling
    }
    return String.format("%032X", new BigInteger(1, md.digest()));
}
If the underlying MessageDigest implementation is a pure Java implementation, it will transfer data from the direct buffer to the heap, but that’s outside your responsibility then (and it will be a reasonable trade-off between consumed heap memory and performance).
The method above will handle files beyond the 2GiB size without problems.
Whatever implementation FileUtils has, it is trying to read in whole files to calculate their hashes. This is not necessary: the calculation is possible by reading the content in smaller chunks. In fact it is somewhat bad design to require the whole file, instead of simply reading in the chunks that are needed (64 bytes?). So maybe you need to use a better library.
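For illustration, hashing in chunks needs nothing beyond the JDK; a minimal sketch using java.security.DigestInputStream (the method name is mine, imports from java.io, java.nio.file and java.security assumed, DatatypeConverter as in your code):
public static String md5Of(String path) throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    try (InputStream in = new DigestInputStream(
            new BufferedInputStream(Files.newInputStream(Paths.get(path))), md)) {
        byte[] buf = new byte[8192];
        // every byte read through the stream is fed into the digest,
        // so the whole file never has to fit in memory
        while (in.read(buf) != -1) { }
    }
    return DatatypeConverter.printHexBinary(md.digest());
}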
You have several options:
Don't read all of the bytes at once; use a BufferedInputStream and read a bounded chunk at a time rather than the whole file.
try (BufferedInputStream fileInputStream = new BufferedInputStream(
        Files.newInputStream(Paths.get("your_file_here"), StandardOpenOption.READ))) {
    byte[] buf = new byte[2048];
    int len;
    // read() may return fewer than 2048 bytes even before the end of the
    // file, so loop until it returns -1 and pass along the actual length
    while ((len = fileInputStream.read(buf)) != -1) {
        doSomethingWithBytes(buf, len); // add only the len bytes just read to your calculation
    }
} catch (IOException ex) {
    ex.printStackTrace();
}
Use C/C++ for such a thing (though this is unsafe, because you will handle the memory yourself).
Consider using Guava:
private final static HashFunction HASH_FUNCTION = Hashing.goodFastHash(32);
//somewhere later
final HashCode hash = Files.asByteSource(file).hash(HASH_FUNCTION);
Guava will buffer the reading of the file for you.
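One caveat: per Guava's documentation, goodFastHash is not guaranteed to produce stable values across JVM runs, so if the hashes are ever persisted or compared between runs, pick a fixed algorithm instead:
private final static HashFunction HASH_FUNCTION = Hashing.sha256(); // stable across runs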
I had this Java heap space error on my Windows machine and I spent weeks searching online for a solution. I tried increasing my -Xmx value but with no success. I even tried running my Spring Boot app with a parameter to increase the heap size at run time, with a command like the one below:
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Xms2048m -Xmx4096m"
But still no success, until I figured out I was running a 32-bit JDK, which has a limited maximum heap size. I uninstalled the 32-bit JDK and installed the 64-bit one, which solved the issue for me. I hope this helps someone with a similar issue.
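For reference, a quick way to check which JVM you are running (sun.arch.data.model is HotSpot-specific, so treat it as a hint rather than a guarantee):
// prints "32" or "64" on HotSpot JVMs
System.out.println(System.getProperty("sun.arch.data.model"));
System.out.println(System.getProperty("os.arch")); // e.g. "x86" vs "amd64"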

Implementing save/open with RichTextFX?

Here is my code:
private void save(File file) {
    StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle> doc = textarea.getDocument();
    // Use the Codec to save the document in a binary format
    textarea.getStyleCodecs().ifPresent(codecs -> {
        Codec<StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle>> codec
                = ReadOnlyStyledDocument.codec(codecs._1, codecs._2, textarea.getSegOps());
        try {
            FileOutputStream fos = new FileOutputStream(file);
            DataOutputStream dos = new DataOutputStream(fos);
            codec.encode(dos, doc);
            fos.close();
        } catch (IOException fnfe) {
            fnfe.printStackTrace();
        }
    });
}
I am trying to implement the save/loading from the demo on the RichTextFX GitHub.
I am getting errors in the following lines:
StyledDocument<ParStyle, Either<StyledText<TextStyle>, LinkedImage<TextStyle>>, TextStyle> doc = textarea.getDocument();
error: incompatible types:
StyledDocument<Collection<String>,StyledText<Collection<String>>,Collection<String>>
cannot be converted to
StyledDocument<ParStyle,Either<StyledText<TextStyle>,LinkedImage<TextStyle>>,TextStyle>
and
= ReadOnlyStyledDocument.codec(codecs._1, codecs._2, textarea.getSegOps());
error: incompatible types: inferred type does not conform to equality
constraint(s) inferred: ParStyle
equality constraints(s): ParStyle,Collection<String>
I have added all the required .java files and imported them into my main code. I thought it would be relatively trivial to implement this demo but it has been nothing but headaches.
If this cannot be resolved, does anyone know an alternative way to save the text with formatting from RichTextFX?
Thank you
This question is quite old, but since I ran into the same problem I figured a solution might be useful to others as well.
In the demo whose code you are using, ParStyle and TextStyle (custom types) define how the style information is stored.
The error messages you get pretty much just tell you that your way of storing the style information (in your case, in a String) is not compatible with the way it is done in the demo.
If you want to store the style in a String, which I did as well, you need to implement some way of serializing and deserializing the information yourself.
You can do that, for example (I used an InlineCssTextArea), in the following way:
public class SerializeManager {
    public static final String PAR_REGEX = "#!par!#";
    public static final String PAR_CONTENT_REGEX = "#!pcr!#";
    public static final String SEG_REGEX = "#!seg!#";
    public static final String SEG_CONTENT_REGEX = "#!scr!#";

    public static String serialized(InlineCssTextArea textArea) {
        StringBuilder builder = new StringBuilder();
        textArea.getDocument().getParagraphs().forEach(par -> {
            builder.append(par.getParagraphStyle());
            builder.append(PAR_CONTENT_REGEX);
            par.getStyledSegments().forEach(seg -> builder
                    .append(
                            seg.getSegment()
                                    .replaceAll(PAR_REGEX, "")
                                    .replaceAll(PAR_CONTENT_REGEX, "")
                                    .replaceAll(SEG_REGEX, "")
                                    .replaceAll(SEG_CONTENT_REGEX, "")
                    )
                    .append(SEG_CONTENT_REGEX)
                    .append(seg.getStyle())
                    .append(SEG_REGEX)
            );
            builder.append(PAR_REGEX);
        });
        String textAreaSerialized = builder.toString();
        return textAreaSerialized;
    }

    public static InlineCssTextArea fromSerialized(String string) {
        InlineCssTextArea textArea = new InlineCssTextArea();
        ReadOnlyStyledDocumentBuilder<String, String, String> builder = new ReadOnlyStyledDocumentBuilder<>(
                SegmentOps.styledTextOps(),
                ""
        );
        if (string.contains(PAR_REGEX)) {
            String[] parsSerialized = string.split(PAR_REGEX);
            for (int i = 0; i < parsSerialized.length; i++) {
                String par = parsSerialized[i];
                String[] parContent = par.split(PAR_CONTENT_REGEX);
                String parStyle = parContent[0];
                List<String> segments = new ArrayList<>();
                StyleSpansBuilder<String> spansBuilder = new StyleSpansBuilder<>();
                String styleSegments = parContent[1];
                Arrays.stream(styleSegments.split(SEG_REGEX)).forEach(seg -> {
                    String[] segContent = seg.split(SEG_CONTENT_REGEX);
                    segments.add(segContent[0]);
                    if (segContent.length > 1) {
                        spansBuilder.add(segContent[1], segContent[0].length());
                    } else {
                        spansBuilder.add("", segContent[0].length());
                    }
                });
                StyleSpans<String> spans = spansBuilder.create();
                builder.addParagraph(segments, spans, parStyle);
            }
            textArea.append(builder.build());
        }
        return textArea;
    }
}
You can then take the serialized InlineCssTextArea, write the resulting String to a file, and load and deserialize it.
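A hypothetical round trip with this class, just to make the save/load concrete (the file name and UTF-8 choice are mine):
// save
Files.write(Paths.get("doc.rtfx"),
        SerializeManager.serialized(textArea).getBytes(StandardCharsets.UTF_8));
// load
InlineCssTextArea restored = SerializeManager.fromSerialized(
        new String(Files.readAllBytes(Paths.get("doc.rtfx")), StandardCharsets.UTF_8));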
As you can see in the SerializeManager code above, I made up some strings to serve as delimiters (used as regexes), which are stripped from the text during serialization (we don't want our serializer to be injectable, do we ;)).
You can change these to whatever you like; just note that they will be removed wherever they occur in the text of the TextArea, so they should be something users won't miss.
Also note that this solution serializes the style of the text, the text itself and the paragraph style, BUT not inserted images or parameters of the TextArea (such as width and height); just the text content of the TextArea with its style.
This issue on GitHub really helped me, by the way.

Determining the Efferent coupling between objects (CBO Metric) using the parsed byte-code generated by BCEL

I have built a program which takes a provided ".class" file and parses it using BCEL, and I have learnt how to calculate the LCOM4 value. Now I would like to know how to calculate the CBO (coupling between objects) value of the class file. I've scoured the web trying to find a proper tutorial about it, but without success so far (I've read the whole javadoc regarding BCEL as well, and a similar question on Stack Overflow has been removed). So I would like some help with this issue, such as detailed tutorials or code snippets that would help me understand how to do it.
OK, here you must compute the CBO of the classes within a whole set of classes. The set can be the content of a directory, of a jar file, or all the classes in a classpath.
I would fill a Map<String,Set<String>> with the class name as the key, and the classes it refers to:
private void addClassReferees(File file, Map<String, Set<String>> refMap)
        throws IOException {
    try (InputStream in = new FileInputStream(file)) {
        ClassParser parser = new ClassParser(in, file.getName());
        JavaClass clazz = parser.parse();
        String className = clazz.getClassName();
        Set<String> referees = new HashSet<>();
        ConstantPoolGen cp = new ConstantPoolGen(clazz.getConstantPool());
        for (Method method : clazz.getMethods()) {
            Code code = method.getCode();
            if (code == null) {
                continue; // abstract and native methods have no code attribute
            }
            InstructionList instrs = new InstructionList(code.getCode());
            for (InstructionHandle ih : instrs) {
                Instruction instr = ih.getInstruction();
                if (instr instanceof FieldOrMethod) {
                    FieldOrMethod ref = (FieldOrMethod) instr;
                    String cn = ref.getClassName(cp);
                    if (!cn.equals(className)) {
                        referees.add(cn);
                    }
                }
            }
        }
        refMap.put(className, referees);
    }
}
When you've added all the classes in the map, you need to filter the referees of each class to limit them to the set of classes considered, and add the backward links:
Set<String> classes = new TreeSet<>(refMap.keySet());
for (String className : classes) {
    Set<String> others = refMap.get(className);
    others.retainAll(classes);
    for (String other : others) {
        refMap.get(other).add(className);
    }
}
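Once the map is complete and symmetric, the CBO value of a class is simply the size of its set; a minimal readout sketch (assuming the refMap built above):
refMap.forEach((name, coupled) ->
        System.out.println(name + " CBO = " + coupled.size()));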

Transposing arrays

I am using the following code to read in a CSV file:
String next[] = {};
List<String[]> dataArray = new ArrayList<String[]>();
try {
    CSVReader reader = new CSVReader(new InputStreamReader(getAssets().open("inputFile.csv")));
    for (;;) {
        next = reader.readNext();
        if (next != null) {
            dataArray.add(next);
        } else {
            break;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
This turns a CSV file into the array 'dataArray'. My application is for a dictionary type app - the input data's first column is a list of words, and the second column is the definitions of those words. Here is an example of the array loaded in:
Term 1, Definition 1
Term 2, Definition 2
Term 3, Definition 3
In order to access one of the strings in the array, I use the following code:
dataArray.get(rowNumber)[columnNumber]
However, I need to be able to generate a list of all the terms, so that they can be displayed for the dictionary application. As I understand it, accessing the columns by themselves is a much more lengthy process than accessing the rows (I come from a MATLAB background, where this would be simple).
It seems that in order to have ready access to any row of my input data, I would be better off transposing the data and reading it in that way; i.e.:
Term 1, Term 2, Term3
Definition 1, Definition 2, Definition 3
Of course, I could just provide a CSV file that is transposed in the first place - but Excel and OO Calc don't allow more than 256 columns, and my dictionary contains around 2000 terms.
Any of the following solutions would be welcomed:
A way to transpose an array once it has been read in
An alteration to the code posted above, such that it reads in data in the 'transposed' way
A simple way to read an entire column of an array as a whole
You would probably be better served by using a Map data structure (e.g. HashMap):
String next[] = {};
HashMap<String, String> dataMap = new HashMap<String, String>();
try {
    CSVReader reader = new CSVReader(new InputStreamReader(getAssets().open("inputFile.csv")));
    for (;;) {
        next = reader.readNext();
        if (next != null) {
            dataMap.put(next[0], next[1]);
        } else {
            break;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Then you can access the first column by
dataMap.keySet();
and the second column by
dataMap.values();
Note one assumption here: that the first column of your input data is all unique values (that is, there are not repeated values in the "Term" column).
To be able to access the keys (terms) as an array, you can simply do as follows:
String[] terms = new String[dataMap.keySet().size()];
terms = dataMap.keySet().toArray(terms);
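One caveat to the above: a plain HashMap keeps the terms in no predictable order. If the displayed list should preserve the file's order (or be sorted alphabetically), swapping in another Map implementation is a one-line change:
Map<String, String> dataMap = new LinkedHashMap<>(); // keeps insertion (file) order
// or: Map<String, String> dataMap = new TreeMap<>(); // keeps terms sorted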
If each row has two values, where the first one is the term and the second one is the definition, you could build a Map of it like this (Btw, this while loop does the exact same thing as your for loop):
String next[] = {};
Map<String, String> dataMap = new HashMap<String, String>();
try {
    CSVReader reader = new CSVReader(new InputStreamReader(getAssets().open("inputFile.csv")));
    while ((next = reader.readNext()) != null) {
        dataMap.put(next[0], next[1]);
    }
} catch (IOException e) {
    e.printStackTrace();
}
Then you can get the definition from a term via:
String definition = dataMap.get(term);
or all definitions like this:
for (String term : dataMap.keySet()) {
    String definition = dataMap.get(term);
}
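Iterating over entrySet() gives you both values at once and avoids the extra lookup per key:
for (Map.Entry<String, String> entry : dataMap.entrySet()) {
    String term = entry.getKey();
    String definition = entry.getValue();
}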

removeEldestEntry overriding

How can I override the removeEldestEntry method so that the eldest entry is saved to a file? And how can I limit the size of that file, like I limited the size of the LinkedHashMap? Here is my code:
import java.util.*;
public class level1 {
    private static final int max_cache = 50;
    private Map cache = new LinkedHashMap(max_cache, .75F, true) {
        protected boolean removeEldestEntry(Map.Entry eldest) {
            return size() > max_cache;
        }
    };
    public level1() {
        for (int i = 1; i < 52; i++) {
            String string = String.valueOf(i);
            cache.put(string, string);
            System.out.println("\rCache size = " + cache.size() +
                    "\tRecent value = " + i + " \tLast value = " +
                    cache.get(string) + "\tValues in cache=" +
                    cache.values());
        }
    }
}
I tried to use a FileOutputStream:
private Map cache = new LinkedHashMap(max_cache, .75F, true) {
    protected boolean removeEldestEntry(Map.Entry eldest) throws IOException {
        boolean removed = super.removeEldestEntry(eldest);
        if (removed) {
            FileOutputStream fos = new FileOutputStream("t.tmp");
            ObjectOutputStream oos = new ObjectOutputStream(fos);
            oos.writeObject(eldest.getValue());
            oos.close();
        }
        return removed;
    }
};
But I got an error:
Error(15,27): removeEldestEntry(java.util.Map.Entry) in cannot override removeEldestEntry(java.util.Map.Entry) in java.util.LinkedHashMap; overridden method does not throw java.io.IOException
Without the IOException in the signature, the compiler asks me to handle IOException and FileNotFoundException.
Maybe another way exists? Please show me example code; I am new to Java and just trying to understand the basic principles of two-level caching. Thanks.
You first need to make sure your method properly overrides the parent. You can make some small changes to the signature, such as throwing only a more specific checked exception that is a subclass of a checked exception declared in the parent. In this case the parent declares no checked exceptions, so you cannot refine that further and may not throw any checked exceptions at all. You will have to handle the IOException locally; there are several ways to do that, such as converting it to a RuntimeException of some kind and/or logging it.
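(Since Java 8 there is also java.io.UncheckedIOException, which exists precisely for this wrap-and-rethrow case; a minimal sketch, where writeEldest is a hypothetical helper holding your file-writing code:)
try {
    writeEldest(eldest); // hypothetical helper that does the actual file I/O
} catch (IOException e) {
    throw new UncheckedIOException(e);
}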
If you are concerned about the file size, you probably do not want to keep just the last removed entry but many of them - so you should open the file for append.
You need to return true from the method to actually remove the eldest and you need to decide if the element should be removed.
When working with files you should use try/finally to ensure that you close the resource even if there is an exception. This can get a little ugly - sometimes it's nice to have a utility method to do the close so you don't need the extra try/catch.
Generally you should also use some buffering for file I/O, which greatly improves performance; in this case, wrap the file stream in a java.io.BufferedOutputStream and provide that to the ObjectOutputStream.
Here is something that may do what you want:
private static final int MAX_ENTRIES_ALLOWED = 100;
private static final long MAX_FILE_SIZE = 1L * 1024 * 1024; // 1 MB

protected boolean removeEldestEntry(Map.Entry eldest) {
    if (size() <= MAX_ENTRIES_ALLOWED) {
        return false;
    }
    File objFile = new File("t.tmp");
    if (objFile.length() > MAX_FILE_SIZE) {
        // Do something here to manage the file size, such as renaming the file.
        // You won't be able to easily remove an object from the file without a more
        // advanced file structure since you are writing arbitrary sized serialized
        // objects. You would need to do some kind of tagging of each entry or include
        // a record length before each one. Then you would have to scan and rebuild
        // a new file. You cannot easily just delete bytes earlier in the file without
        // even more advanced structures (like having an index, fixed size records and
        // free space lists, or even a database).
    }
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(objFile, true); // Open for append
        ObjectOutputStream oos = new ObjectOutputStream(new BufferedOutputStream(fos));
        oos.writeObject(eldest.getValue());
        oos.close(); // Close the object stream to flush remaining generated data (if any).
        return true;
    } catch (IOException e) {
        // Log error here or....
        throw new RuntimeException(e.getMessage(), e); // Convert to RuntimeException
    } finally {
        if (fos != null) {
            try {
                fos.close();
            } catch (IOException e2) {
                // Log failure - no need to throw though
            }
        }
    }
}
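On Java 7 and later, the same close-even-on-exception guarantee can be written more compactly with try-with-resources; a sketch of just the writing part, under the same append-mode assumption as above:
try (ObjectOutputStream oos = new ObjectOutputStream(
        new BufferedOutputStream(new FileOutputStream("t.tmp", true)))) {
    oos.writeObject(eldest.getValue());
    return true;
} catch (IOException e) {
    throw new RuntimeException(e.getMessage(), e); // same conversion as above
}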
You can't broaden the throws clause when overriding a method, so you need to handle the exception inside the overridden method instead of throwing it.
This contains a good explanation of how to use try and catch: http://download.oracle.com/javase/tutorial/essential/exceptions/try.html
