I have written Java code which sends a mail when the whole system's RAM usage goes above 95%.
I want to write Java code to test this scenario. I have written a few attempts (recursive allocation, etc.), but those crash the JVM, not the system.
Any help, please?
An unorthodox solution, but it works anyway.
Note: I used a Windows 8.1 host machine.
I ran into this myself a while back, when looking at how the JVM can access host system information. I used this code to get details about the host system:
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Dumps every public getter of the platform OperatingSystemMXBean via reflection;
// on Oracle/OpenJDK this includes host-level values such as total and free physical memory.
private static void printUsage() {
    OperatingSystemMXBean operatingSystemMXBean = ManagementFactory.getOperatingSystemMXBean();
    for (Method method : operatingSystemMXBean.getClass().getDeclaredMethods()) {
        method.setAccessible(true);
        if (method.getName().startsWith("get")
                && Modifier.isPublic(method.getModifiers())) {
            Object value;
            try {
                value = method.invoke(operatingSystemMXBean);
            } catch (Exception e) {
                value = e;
            }
            System.out.println(method.getName() + " = " + value);
        }
    }
}
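If you only need the physical-memory numbers (which is what the >95% check in the question relies on), here is a minimal sketch without reflection. It assumes an Oracle/OpenJDK runtime where the bean can be cast to com.sun.management.OperatingSystemMXBean; the class name is just for illustration:

import java.lang.management.ManagementFactory;

public class SystemMemoryCheck {
    public static void main(String[] args) {
        // JDK-specific subinterface that exposes host-level memory counters.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long total = os.getTotalPhysicalMemorySize();
        long free = os.getFreePhysicalMemorySize();
        double usedPercent = 100.0 * (total - free) / total;
        System.out.printf("System RAM used: %.1f%%%n", usedPercent);
        // The mail alert from the question would fire here when usedPercent > 95.
    }
}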
Note: this isn't the "real" way to do it, but I did it by opening the default browser (in most cases that's Google Chrome). I had 8 GB of RAM in those days, so opening a few tabs with random YouTube and other links pushed memory usage up to 90% in no time, because Chrome eats RAM (no offence to the Chrome people!). Doing that, I was able to run the test you are trying to set up. :-)
To open the default browser, take a look at this thread; it covers several different ways to do it.
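For reference, a minimal sketch of one common way to open the default browser from Java (the URL is only an example):

import java.awt.Desktop;
import java.net.URI;

public class OpenBrowser {
    public static void main(String[] args) throws Exception {
        // Works on desktop JVMs where the Desktop API and the BROWSE action are supported.
        if (Desktop.isDesktopSupported() && Desktop.getDesktop().isSupported(Desktop.Action.BROWSE)) {
            Desktop.getDesktop().browse(new URI("https://www.youtube.com"));
        }
    }
}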
If you are doing this on Android, read about ProGuard rules for memory management and try using an external library that takes up a lot of memory (for example a dummy face-recognition library), or simply access some NDK features; check this link for more.
I need another set of eyes on this.
I've written out hundreds of gigabytes with this exact code, with no modifications, locally on Mac OS X.
With the code 100% unchanged, just deployed to an AWS instance running Ubuntu, it runs into out-of-memory issues (heap space).
Here's the code that's being run, streaming from MyBatis to a CSV file on disk:
File directory = new File(feedDirectory);
File file;
try {
    file = File.createTempFile(("feed-" + providerCode + "-"), ".csv", directory);
} catch (IOException e) {
    throw new RuntimeException("Unable to create file to write feed to disk: " + e.getMessage(), e);
}

String filePath = file.getAbsolutePath();
log.info(String.format("File name for %s feed is %s", providerCode, filePath));

// output file
try (FileOutputStream out = new FileOutputStream(file)) {
    streamData(out, providerCode, startDate, endDate);
} catch (IOException e) {
    throw new RuntimeException("Unable to write feed to file: " + e.getMessage());
}

public void streamData(OutputStream outputStream, String providerCode, Date startDate, Date endDate) throws IOException {
    try (CSVPrinter printer = CsvUtil.openPrinter(outputStream)) {
        StreamingHandler<FStay> handler = stayPrintingHandler(printer);
        warehouse.doForAllStaysByProvider(providerCode, startDate, endDate, handler);
    }
}

private StreamingHandler<FStay> stayPrintingHandler(CSVPrinter printer) {
    StreamingHandler<FStay> handler = new StreamingHandler<>();
    handler.setHandler((stay) -> {
        try {
            EXPORTER.writeStay(printer, stay);
        } catch (IOException e) {
            log.error("Issue with writing output: " + e.getMessage(), e);
        }
    });
    return handler;
}

// The EXPORTER method
import org.apache.commons.csv.CSVPrinter;

public void writeStay(CSVPrinter printer, FStay stay) throws IOException {
    List<Object> list = asList(stay);
    printer.printRecord(list);
}

List<Object> asList(FStay stay) {
    List<Object> list = new ArrayList<>(46);
    list.add(stay.getUid());
    list.add(stay.getProviderCode());
    //....
    return list;
}
Here's a graph of the JVM heap space (using jvisualvm) when I run this locally. I've run this consistently with Java 8 (jdk1.8.0_51 and 1.8.0_112) locally and have gotten great results. I've even written out a terabyte of data.
^ In the above, the max heap space is set to 4 gigs, and the most it ever increases to is 1.5 gigs, before going back down to around 500 MB, while streaming data to the CSV file as it's supposed to.
However, when I run this on Ubuntu with JDK 1.8.0_111, the exact same operation will not complete; it runs out of heap space (java.lang.OutOfMemoryError: Java heap space).
I've upped the Xmx value from 8 gigs to 16 and then 25 gigs, and it still runs out of heap space. Meanwhile, the total size of the file is only 10 gigs, which really perplexes me.
Here's what the JVisualVm graph looks like on the Ubuntu box:
I've no doubt it's the exact same code running in both environments, with the same operation being performed in each (the same database server providing the same data).
The only differences I can think of at this point are:
Operating system - Ubuntu vs Mac OS X
Hosted VM in AWS vs hard metal laptop
Network speed is faster in AWS between database and Ubuntu server
JDK version is 1.8.0_111 in Ubuntu, tried 1.8.0_51 and 1.8.0_112 locally
Can anyone help shed any light on this problem?
Update
I've tried replacing all the try-with-resources statements with explicit flush/close statements, with no luck.
What's more, I tried to force a garbage collection on the Ubuntu box as soon as I started to see the data come in, and it had no effect; something is definitely stopping the heap from being collected on the Ubuntu machine, while running the exact same code on OS X lets me write the full enchilada again, no problem.
Update 2
In addition to the differences in the environments above, the only other difference I can think of is whether the connection between the servers in AWS is so fast that the data streams in faster than it can be flushed to disk... but that still doesn't explain the issue: I only have 10 gigs of data in total, and it blows up a JVM with 20 gigs of heap space.
Is there any likelihood of there being a bug at the Ubuntu/Java level for this?
Update 3
I tried replacing the CSVPrinter output with an entirely separate library (OpenCSV's CSVWriter in lieu of Apache's CSV library), and the same result occurs.
As soon as this code starts receiving data from the database, the heap starts blowing up and the garbage collector fails to reclaim any memory... but only on Ubuntu. On OS X, everything is reclaimed immediately and the heap never grows.
I've also tried flushing the stream after every write, but had no luck with that either.
Update 4
I got the heap dump to print out, and according to it I should be looking at the database driver, specifically the InboundDataHandler in Amazon's Redshift driver.
I'm using MyBatis with a custom result handler. I tried setting the result handler to effectively do nothing when it gets a result (new ResultHandler<>() { // method overridden to do literally nothing }), and I know I'm not holding on to any references there.
Since it's the InboundDataHandler defined by AWS/Redshift, it makes me think the problem may be lower than the MyBatis level. It's either:
An error in the SqlSessionFactory I'm setting up
A bug in the Redshift driver that only pops up in Ubuntu / AWS
A bug in the result handler I have overridden
Here's the heap dump screenshot:
Here's where I'm setting up my SqlSessionFactoryBean:
@Bean
public javax.sql.DataSource redshiftDataSource() throws ClassNotFoundException {
    log.info("Got to datasource config");
    // Dynamically load driver at runtime.
    Class.forName(dataWarehouseDriver);
    DataSource dataSource = new DataSource();
    dataSource.setURL(dataWarehouseUrl);
    dataSource.setUserID(dataWarehouseUsername);
    dataSource.setPassword(dataWarehousePassword);
    return dataSource;
}

@Bean
public SqlSessionFactoryBean sqlSessionFactory() throws ClassNotFoundException {
    SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(redshiftDataSource());
    return factoryBean;
}
Here's the myBatis code I'm running as a test to verify that it's not me holding on to records in my ResultHandler:
warehouse.doForAllStaysByProvider(providerCode, startDate, endDate, new ResultHandler<FStay>() {
    @Override
    public void handleResult(ResultContext<? extends FStay> resultContext) {
        // do nothing
    }
});
Is there a way I can force the SQL connection to not hang on to records, or something? I'll reiterate that on my local machine there is no issue with this memory leak; it only surfaces when running the code in the hosted AWS environment. And in both cases, the database driver and server are the same.
Update 6
I think it's finally fixed. Thanks to all who pointed me in the direction of the heap dump. That helped narrow it down to the offending class in a huge way.
After that, I did some research on the AWS Redshift driver, and its documentation explicitly says that clients should specify a limit for any operations on large data sets. So I found out how to do that in my MyBatis configuration:
<select id="doForAllStaysByProvider" fetchSize="1000" resultMap="FStayResultMap">
select distinct
f_stay.uid,
And this did the trick.
Mind you, this isn't necessary even when handling much larger data sets downloaded remotely from AWS (database in AWS, code executing on a laptop at home), and it shouldn't be necessary at all, since I'm overriding the MyBatis ResultHandler<>, which handles each row individually and never holds on to any objects.
Yet something funky happens with the AWS Redshift JDBC driver only when it's run in AWS (database in AWS, code executing on an AWS instance), which causes this InboundDataHandler to never release its resources unless a fetchSize is specified.
Here's the heap of the server running now, getting much further than it ever has before in AWS, with the heap space never moving above 500 MB; after I hit 'Force GC' in jvisualvm, it shows the 'used' heap at less than 100 MB:
Thanks again in a huge way to all those who helped guide this!
Finally figured out a solution.
The heap dump was the biggest aid: it indicated that the InboundDataHandler class of Amazon's Redshift/Postgres JDBC driver was the prime culprit.
The code to set up the SqlSession appeared legit, so traveling over to Amazon's documentation landed this gem:
To avoid client-side out-of-memory errors when retrieving large data
sets using JDBC, you can enable your client to fetch data in batches
by setting the JDBC fetch size parameter.
We hadn't run into this before, as we stream results with custom ResultHandlers in MyBatis... but there seems to be something different when the AWS Redshift JDBC driver is running on AWS itself vs outside AWS connecting in.
Taking the guidance from the documentation, we added a 'fetchSize' to our MyBatis select query:
<select id="doForAllStaysByProvider" fetchSize="1000" resultMap="FStayResultMap">
select distinct
f_stay.uid,
And voila! Everything worked swimmingly. This is the only change we made and the heap never went above a couple hundred MBs.
You can see in one of the graphs above where the heap goes off the charts: as soon as data starts being received on Amazon, the heap marches right up linearly and never reclaims an ounce of space once it starts.
My guess is the Redshift JDBC driver is doing something different when it's in Amazon's environment for some kind of optimization... that's all I can think of to explain the behavior.
Clearly Amazon knows what's going on since they documented it up front. I may not know the full 'why' of what's happening, but at least everything is resolved in what appears to be a satisfactory way.
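As an aside, if adding fetchSize to every <select> gets tedious, MyBatis 3.3+ also exposes a global default fetch size. Here is a hedged sketch building on the factory bean from the question; it assumes the mybatis-spring SqlSessionFactoryBean and a MyBatis version that has Configuration.setDefaultFetchSize:

@Bean
public SqlSessionFactory sqlSessionFactory() throws Exception {
    SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(redshiftDataSource());
    SqlSessionFactory factory = factoryBean.getObject();
    // Equivalent to fetchSize="1000" on each <select>: a global hint passed to the JDBC driver.
    factory.getConfiguration().setDefaultFetchSize(1000);
    return factory;
}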
Thanks to all those who helped.
This is a very simple program I have written that uses the jPBC library.
It compiles without any errors, but takes an unusually long time to show the output, or in fact doesn't show the output at all. (Who in this era has the patience to wait nearly half an hour for such a tiny program?) I am using a system with an i7 processor, but it still happens.
Could anyone tell what might be wrong with this code?
import it.unisa.dia.gas.jpbc.*;
import it.unisa.dia.gas.plaf.jpbc.pairing.PairingFactory;
import it.unisa.dia.gas.plaf.jpbc.pairing.parameters.*;
import it.unisa.dia.gas.jpbc.PairingParametersGenerator;
import it.unisa.dia.gas.jpbc.PairingParameters;
import it.unisa.dia.gas.plaf.jpbc.pairing.a1.TypeA1CurveGenerator;

public class PairingDemo {
    public static void main(String[] args) {
        try {
            int rBits = 160;
            int qBits = 512;
            PairingParametersGenerator pg = new TypeA1CurveGenerator(rBits, qBits);
            PairingParameters params = pg.generate();
            Pairing pair = PairingFactory.getPairing("D:\\JPBCLib\\params\\curves\\a1.Properties");
            Field Zr = pair.getZr();
            int degree = pair.getDegree();
            System.out.println("Degree of the pairing : " + degree);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
There are three issues that you're dealing with here:
Generating the pairing parameters takes some time, but this only has to be done once for a system that you're building. You should store the generated pairing parameters for later use (see the sketch after this list).
Since you're not using pg or params, you can remove that code. Instead, you're reading precomputed parameters from a file.
jPBC is a complete and pure Java implementation of PBC. It is fully portable and therefore quite slow. jPBC has the option of using the PBCWrapper library, a wrapper around libpbc, which would give you the performance of the native library. I wasn't able to make it work on Windows, but Linux should not be an issue (make sure to check the JNI version or load your own).
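A hedged sketch of the first point: generate the Type A1 parameters once (using the same generator call as in the question), save them to a properties file, and load that file on later runs. The file name is just an example, and it assumes the generated parameters serialize to text via toString() the same way the bundled curve files are stored:

import it.unisa.dia.gas.jpbc.Pairing;
import it.unisa.dia.gas.jpbc.PairingParameters;
import it.unisa.dia.gas.jpbc.PairingParametersGenerator;
import it.unisa.dia.gas.plaf.jpbc.pairing.PairingFactory;
import it.unisa.dia.gas.plaf.jpbc.pairing.a1.TypeA1CurveGenerator;
import java.io.PrintWriter;

public class GenerateParamsOnce {
    public static void main(String[] args) throws Exception {
        // Slow step: do this once, offline; it can take a very long time.
        PairingParametersGenerator pg = new TypeA1CurveGenerator(160, 512);
        PairingParameters params = pg.generate();
        try (PrintWriter out = new PrintWriter("a1_160_512.properties")) {
            out.print(params.toString());
        }

        // Fast step: every later run just loads the stored file.
        Pairing pairing = PairingFactory.getPairing("a1_160_512.properties");
        System.out.println("Degree: " + pairing.getDegree());
    }
}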
First of all, sorry for my poor English.
I want to know how Windows performs the automatic Java update check behind the user interface.
The UI just reacts based on our input, as described at http://java.com/en/download/help/java_update.xml#howto .
But how does Windows check for updates programmatically?
I wrote a small program in Java:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

public class JavaLatestVersion {
    public static void main(String[] args) {
        // Reads the latest JRE version string published by java.com, e.g. "1.8.0_51".
        try (BufferedReader br = new BufferedReader(new InputStreamReader(new URL(
                "http://java.com/applet/JreCurrentVersion2.txt").openStream()))) {
            String fullVersion = br.readLine();
            System.out.println("fullVersion : " + fullVersion);

            String version = fullVersion.split("_")[0];
            String revision = fullVersion.split("_")[1];
            System.out.println("Version " + version + " revision " + revision);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
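As a separate, purely illustrative sketch (not part of the program above), the fetched string could be compared with the locally installed runtime via the standard java.version system property; the comparison here is naive string equality:

public class VersionCompare {
    public static void main(String[] args) {
        // "java.version" is a standard system property, e.g. "1.8.0_51".
        String installed = System.getProperty("java.version");
        String latest = "1.8.0_66"; // value read from the text file above (example)
        if (!installed.equals(latest)) {
            System.out.println("Installed " + installed + ", latest published " + latest);
        }
    }
}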
My questions:
1. Is the above program a reliable way to get the latest Java version? Is there any other standard way to get the latest Java version (not the one installed on the computer)?
2. Does Windows use the same way to determine the latest Java version?
3. Does Windows use this link for updates: http://java.com/applet/JreCurrentVersion2.txt ?
Does anyone know the secret behind how Windows checks for the latest Java updates?
Thanks in advance.
Checking for Java updates is done by the Java Auto Updater. It is an ordinary application (which is run when Windows starts up).
Yes, it is a reliable way to get the latest version of Java (the updater can not only update Java, it can also update itself). But pay attention to firewall/group-policy settings, which can prevent the updater from accessing the Web.
Windows itself doesn't update Java.
Only debugging the Java Auto Updater can help determine which URL it uses.
Unfortunately, the Java Auto Updater has only a graphical interface and hides all its work behind the scenes, so finding the "secret code" is not easy. Moreover, in many cases reverse-engineering non-open-source software is illegal from a license point of view.
The URL you provided above doesn't work well: it says 8.0_51, but the latest version of Java on the Downloads page is 8u65 / 8u66.
It seems the latest available version (as plain text) can be determined only by fetching the http://www.oracle.com/technetwork/java/javase/downloads/index.html web page, then parsing it, handling cases where the page has moved to another location, etc.
I've actually got a Windows/Java question. I've got a plugged-in device which I want to access via Java. Normally you can access e.g. a USB stick via its drive letter... but this tablet is displayed by Windows as a "Portable Device", which means the path is something like "Computer\Archos 5S" and there is no drive letter.
I want to access a file on this device via Java, but I am not able to figure out the correct path to it. There is a similar question out there, but without a productive answer. Is there another way to access this device via Java?
Actually, I've not solved this problem... I am still not able to access such a device via Java.
At the moment I am trying to access a Windows shell folder in Java.
A shell folder like: "Shell:::{35786D3C-B075-49b9-88DD-029876E11C01}"
Is this possible with Java?
Recently I came across the sun.awt class "ShellFolder"... is this the feature I'm looking for?
Thanks for your help
Ripei
Here is the solution to the above problem, using the JMTP library from https://code.google.com/p/jmtp/.
Here is my code:
package jmtp;

import be.derycke.pieter.com.COMException;
import be.derycke.pieter.com.Guid;
import java.io.*;
import java.math.BigInteger;
import jmtp.PortableDevice;
import jmtp.*;

public class Jmtp {

    public static void main(String[] args) {
        PortableDeviceManager manager = new PortableDeviceManager();
        PortableDevice device = manager.getDevices()[0];

        // Connect to my mp3-player
        device.open();
        System.out.println(device.getModel());
        System.out.println("---------------");

        // Iterate over deviceObjects
        for (PortableDeviceObject object : device.getRootObjects()) {
            // If the object is a storage object
            if (object instanceof PortableDeviceStorageObject) {
                PortableDeviceStorageObject storage = (PortableDeviceStorageObject) object;
                for (PortableDeviceObject o2 : storage.getChildObjects()) {
                    // BigInteger bigInteger1 = new BigInteger("123456789");
                    // File file = new File("c:/JavaAppletSigningGuide.pdf");
                    // try {
                    //     storage.addAudioObject(file, "jj", "jj", bigInteger1);
                    // } catch (Exception e) {
                    //     //System.out.println("Exception e = " + e);
                    // }
                    System.out.println(o2.getOriginalFileName());
                }
            }
        }
        manager.getDevices()[0].close();
    }
}
Do not forget to add the jmtp.dll files (which come with the jmtp download) as a native library. For more info, see my answer on Including Native Library in Netbeans.
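If the DLLs aren't found at runtime, a quick check is to print where the JVM looks for native libraries and make sure the folder containing jmtp.dll is on that path (you can set it with -Djava.library.path when launching); a tiny sketch:

public class LibraryPathCheck {
    public static void main(String[] args) {
        // Prints the directories the JVM searches for native libraries such as jmtp.dll.
        System.out.println(System.getProperty("java.library.path"));
    }
}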
As on *nix systems, all devices (including drives) have paths that are part of a common root. This is normally hidden from users because they use drive letters, which are aliases for these fundamental paths, but you can also use full device paths by prefixing the path with "\\.\".
For instance, on my machine D: translates to "\Device\HarddiskVolume1" and can be accessed by passing "\\.\HarddiskVolume1" to CreateFile.
So the path to your device is probably "\\.\Archos 5s".
You can always download and install the Windows Mobile Developer Power Toys (http://www.microsoft.com/download/en/details.aspx?id=10601) and copy files to and from the device using the command-line utility cecopy, which you can run from any programming language. There are other options there too, but it is mostly targeted at .NET.
I'm building a mobile app with J2ME, and I've found that the data I write into a RecordStore can be accessed while the program is still running, but it is lost after quitting and restarting. No exception is thrown; the data is simply lost.
UPDATE: Thanks everyone for your suggestions. I'm using NetBeans on Windows 7. I'm not sure if it is using the WTK version I have previously installed or another one it has installed somewhere else. I've checked my WTK folder for the files Pavel wrote about, but couldn't find them. Now I'm testing the features requiring persistence on my phone and everything else in the emulator, but it would of course be much better to be able to test everything in the emulator.
private RecordStore recordStore = null;

public MyMIDlet() {
    readStuff(); // output: nothing found in recordStore :(
    saveStuff();
    readStuff(); // output: stuff
}

private void readStuff() {
    try {
        recordStore = RecordStore.openRecordStore(REC_STORE, true);
        int n = recordStore.getNumRecords();
        String stuff;
        if (n == 0) {
            stuff = "nothing found in recordStore :(";
        } else {
            stuff = new String(recordStore.getRecord(1));
        }
        System.out.println(stuff);
    } catch (Exception e) {
        System.out.println("Exception occured in readStuff: " + e.getMessage());
    } finally {
        if (recordStore != null) {
            try {
                recordStore.closeRecordStore();
            } catch (Exception e) {
                // ignore
            }
        }
    }
}

private void saveStuff() {
    try {
        recordStore = RecordStore.openRecordStore(REC_STORE, true);
        int n = recordStore.getNumRecords();
        byte[] stuff = "stuff".getBytes();
        recordStore.addRecord(stuff, 0, stuff.length);
    } catch (Exception e) {
        System.out.println("Exception occured in saveStuff: " + e.getMessage());
    } finally {
        if (recordStore != null) {
            try {
                recordStore.closeRecordStore();
            } catch (Exception e) {
                // ignore
            }
        }
    }
}
If you use the Sun WTK, it creates a file named "in.use" in its "appdb" folder:
C:\WTK25\appdb\DefaultColorPhone\in.use
If you close your emulator in an unusual way (killing the process, for example), it will not delete this file, and the next time you run the emulator it will create a temporary folder for storing data:
C:\WTK25\appdb\temp.DefaultColorPhone1
When it starts this way, it prints in the console: "Running with storage root temp.DefaultColorPhone1".
I fixed it by adding to my .bat file a line that deletes the "in.use" file each time the emulator runs. But you should be careful when running several emulators at once.
I experienced the same problem myself. I did, however, discover that NetBeans (or whatever) deletes the deployed program files after execution. These files are located in the C:\Documents and Settings\MyUser\javame-sdk\3.0\work\0\appdb folder (it might be different on Vista/Win7), and I guess the number in the path refers to the emulator you are currently using. Anyway, in this folder look for something named like your RecordStore, e.g. "00000002_PSC_onfig.db", which is my suite configuration record store named PSConfig. By copying this to e.g. "Copy of 00000002_PSC_onfig.db" it will not be deleted. After NetBeans has cleaned up, just copy it back to its original name.
The next time you hit Run in NetBeans your record store will be there. It's a pain, but at least it gives you the possibility to use the emulator to debug your RMS handling.
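If you have to do that copy often, here is a hedged desktop-side helper that automates the same back-up/restore (the path and file name are the examples from this answer; adjust them to your machine):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class RmsBackup {
    // Example locations taken from this answer; change them to match your setup.
    private static final Path DB = Paths.get(
            "C:/Documents and Settings/MyUser/javame-sdk/3.0/work/0/appdb/00000002_PSC_onfig.db");
    private static final Path BACKUP = DB.resolveSibling("Copy of 00000002_PSC_onfig.db");

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && "restore".equals(args[0])) {
            // Put the record store back before the next emulator run.
            Files.copy(BACKUP, DB, StandardCopyOption.REPLACE_EXISTING);
        } else {
            // Save the record store before NetBeans cleans it up.
            Files.copy(DB, BACKUP, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}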
This question has been around for a while, but I stumbled upon it whilst looking for an answer to the same problem with the emulator; in my case it was when using the Java ME 3 SDK. It is possible that the solution I found might also fix this problem.
Using:
emulator -Xdescriptor:/path/to/app.jad
will according to the docs: "Install a MIDlet, run it, and uninstall it after it finishes."
To persist an installation (and its data) you should use:
emulator -Xjam:install=<JAD-file-URL>
The JAD file URL can either be a web address or 'file:///path/to/app.jad' if you want to install from your local file system. This installation command will display an application storage number which you can then use to launch the emulator and run the previously installed app by calling:
emulator -Xjam:run=<application-storage-number>
See the docs for further command line options.
I could finally get it to work on a real handset. It seems that, as Martin Clayton suggested, the emulator reset erased the data. I'm still looking for a way to enable persistence in the emulator though.
If you are using Windows Vista, there can be (and almost certainly are) permission issues. I am not sure how to resolve this, but you might want to check that the user running the emulator has write access to the emulator store, in appdb/$phone/*.db.
To fix the storage problem you need to set the "Storage size" option in the NetBeans platform manager:
Go to Project Properties;
go to Platform;
Manage Emulators;
select the Sun Java Wireless Toolkit;
go to Tools and Extensions;
open Preferences;
Storage;
set a size for the "Storage size" option.
This works for me.
I experienced the same issue on Ubuntu Linux, working with WTK 2.5.2 and NetBeans 8.0.2. I later figured out it was caused by shutting down my laptop without closing the emulator. It also happens if you run a second emulator without shutting down the first.
My solution is based on the best answer here: just shut down all emulators and delete the in.use file located in
~/j2mewtk/2.5.2/appdb/DefaultColorPhone