Fatal error when closing device with alcCloseDevice - Java

I have added sound to my application using OpenAL. It seems to work fine until I close the application and try to clean up all the sound-related objects. Basically I have a cleanup method that looks like this:
public void cleanup(){
    // looping through sources and deleting them like this:
    alSourceStop(id);
    alDeleteSources(id);
    // ids of sources and buffers are not the same; they live in different classes
    // looping through buffers and deleting them like this:
    alDeleteBuffers(id);
    // destroying context
    alcDestroyContext(context);
    // closing device
    alcCloseDevice(device);
}
When I comment alcCloseDevice out, I get a message like:
AL lib: (EE) alc_cleanup: 1 device not closed
If I leave it in place, I get:
A fatal error has been detected by the Java Runtime Environment ... Failed to write core dump ... and so on
I'm using LWJGL 3.1.0 on Windows 7 64-bit, and all OpenGL- and OpenAL-related work is managed by a single thread.
My setup looks like this:
device = alcOpenDevice((ByteBuffer)null);
ALCCapabilities caps = ALC.createCapabilities(device);
context = alcCreateContext(device, (IntBuffer)null);
alcMakeContextCurrent(context);
AL.createCapabilities(caps);
The device and context are created without a problem.
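For what it's worth, the alc* half of the setup can be verified with alcGetError, which tracks errors per device (a small helper sketch, assuming LWJGL 3's static ALC10 imports):

import static org.lwjgl.openal.ALC10.*;

// Minimal helper sketch: ALC errors are tracked per device, unlike alGetError.
private static void checkAlcError(long device, String where) {
    int err = alcGetError(device);
    if (err != ALC_NO_ERROR) {
        System.err.println("ALC error 0x" + Integer.toHexString(err) + " after " + where);
    }
}

e.g. call checkAlcError(device, "alcCreateContext") right after each alc* call.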
I create a buffer like this:
id = alGenBuffers();
try (STBVorbisInfo info = STBVorbisInfo.malloc()) {
    ShortBuffer buffer = /* decoding ogg here without problem */
    alBufferData(id, info.channels() == 1 ? AL_FORMAT_MONO16 : AL_FORMAT_STEREO16, buffer, info.sample_rate());
}
I also set up a source and a listener, but I don't believe they have any impact here: even without creating any source or listener, closing the device results in the error.

Call, parse, and output alGetError() after each of your OpenAL calls in the close method. This may shed light on what is failing.
Also try detaching all buffers from each source before deleting your buffers: alSourcei(sourceID, AL_BUFFER, 0); (0 is AL_NONE; LWJGL's int-typed binding won't accept null).
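Putting those two suggestions together, here is a minimal sketch of a cleanup order (assumptions: sourceIds/bufferIds stand in for however you actually track your handles, and the alcMakeContextCurrent(NULL)/ALC.destroy() steps are LWJGL 3 specifics worth trying, since destroying a context that is still current is a classic cause of native crashes):

import static org.lwjgl.openal.AL10.*;
import static org.lwjgl.openal.ALC10.*;
import static org.lwjgl.system.MemoryUtil.NULL;
import org.lwjgl.openal.ALC;

public void cleanup() {
    for (int sourceId : sourceIds) {          // sourceIds: however you track your sources
        alSourceStop(sourceId);
        alSourcei(sourceId, AL_BUFFER, 0);    // detach buffers before deleting the source
        alDeleteSources(sourceId);
        checkAlError("deleting source " + sourceId);
    }
    for (int bufferId : bufferIds) {          // bufferIds: however you track your buffers
        alDeleteBuffers(bufferId);
        checkAlError("deleting buffer " + bufferId);
    }
    alcMakeContextCurrent(NULL);              // never destroy a context that is still current
    alcDestroyContext(context);
    alcCloseDevice(device);
    ALC.destroy();                            // release LWJGL's ALC function table
}

private void checkAlError(String action) {
    int err = alGetError();
    if (err != AL_NO_ERROR) {
        System.err.println("OpenAL error 0x" + Integer.toHexString(err) + " while " + action);
    }
}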

Recreating Linux traceroute for an Android app

I am extremely new to Android app development and Stack Overflow. I am trying to recreate traceroute in an Android app, since Android devices do not come with traceroute by default. I've encountered a couple of Stack Overflow posts discussing solutions to this, but I have still run into challenges.
Traceroute on android - the top post on this thread links an Android Studio project that implements traceroute using ping. If I understand the algorithm correctly, it continually pings the destination IP, incrementing the time-to-live field to obtain information about the intermediary routers. I've tried to recreate this behavior, but for certain values of TTL the ping stalls and doesn't retrieve any router information. I'm not really sure why this happens. Here's a quick demo function I spun up... at some point in the loop the pings stall.
public static void smallTracerouteDemoShowingThatTheProgramStallsAtCertainTTLs() {
    try {
        String host = "google.com";
        int maxTTL = 20;
        for (int i = 1; i < maxTTL; i++) {
            // Create a process that executes the ping command with a TTL of i
            Process p = Runtime.getRuntime().exec("ping -c 1 -t " + i + " " + host);
            // Wrap the information returned by ping in a buffered reader
            BufferedReader br = new BufferedReader(new InputStreamReader(p.getInputStream()));
            // Collect the ping output into a string
            StringBuilder dataReturnedByPing = new StringBuilder();
            for (String line; (line = br.readLine()) != null; ) {
                dataReturnedByPing.append("\n").append(line);
            }
            // Print out information about each TTL
            System.out.println("TTL = " + i + " out of " + maxTTL);
            System.out.println(dataReturnedByPing);
            System.out.println("========================================");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
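As an aside on the stall itself: whatever the root cause, it can help to bound each ping so a single unresponsive hop cannot hang the whole loop. A sketch, with two assumptions to verify: the platform ping must support the iputils-style -w <seconds> deadline flag, and Process.waitFor(long, TimeUnit) requires API 26+ on Android:

// Give ping its own deadline (-w 2) and give the process a hard timeout as well.
Process p = Runtime.getRuntime().exec("ping -c 1 -w 2 -t " + i + " " + host);
if (!p.waitFor(3, TimeUnit.SECONDS)) { // java.util.concurrent.TimeUnit; API 26+
    p.destroy(); // give up on this hop and move on to the next TTL
}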
how to run traceroute command through your application? - The solution on this thread suggests using BusyBox. I haven't used BusyBox yet, but it seems I would have to embed it into my app to get things to work. From what I've researched, BusyBox provides numerous Linux commands through one executable. I'm a bit hesitant to explore this option because I really only need the traceroute command. In addition, I know that Android targets a few different CPU architectures, and I'm not sure one executable will support them all.
I've also run into a github repository that takes another approach to running traceroute:
https://github.com/wangjing53406/traceroute-for-android - In this repository the author embeds the traceroute source code into the project and uses the NDK to build it along with the rest of his app. I really like this approach because it feels the most "correct": it builds the actual traceroute instead of a Java-based reimplementation, so you can't end up in a situation where the Java version gives you one thing and the real traceroute gives you another. When I open this project to experiment with it, though, my build fails. The top line says:
org.gradle.initialization.ReportedException: org.gradle.internal.exceptions.LocationAwareException: A problem occurred configuring root project 'traceroute-for-android-master'.
Any help on why this happens or ways to troubleshoot it would be fantastic.
For reference, the minimum SDK I am targeting is API 21 and I am running on Android Studio 3.3.0.
So, at this point I'm stumped. If you were trying to make an app that would let you execute traceroute commands, how would you do it? I really like the NDK approach because it guarantees you're getting true traceroute behavior. If you have any guides to getting that set up for my Android version/SDK, I would appreciate it if you would post them. If you'd take another approach, I'd like to hear about it as well.
Thank you in advance.

Does JCODEC Support MPEG-TS or MPEG-PS

I am trying to pick out frames (video and metadata) from MPEG, MPEG-TS, and MPEG-PS files and live streams (network / UDP / RTP streams). I was looking into using JCODEC for this, and I started off by trying to use the FrameGrab / FrameGrab8Bit classes, but ran into an error saying those formats are "temporarily unsupported". I looked into going back through some commits to see if I could just use older code, but it looks like both of those files have had those formats "temporarily unsupported" since 2013 and 2015, respectively.
I then tried to plug things back into the FrameGrab8Bit class by adding the code below...
public static FrameGrab8Bit createFrameGrab8Bit(SeekableByteChannel in) throws IOException, JCodecException {
    ...
    SeekableDemuxerTrack videoTrack = null;
    ...
    case MPEG_PS:
        MPSDemuxer psd = new MPSDemuxer(in);
        List<MPEGDemuxerTrack> tracks = psd.getVideoTracks();
        videoTrack = (SeekableDemuxerTrack) tracks.get(0);
        break;
    case MPEG_TS:
        in.setPosition(0);
        MTSDemuxer tsd = new MTSDemuxer(in);
        ReadableByteChannel program = tsd.getProgram(481);
        MPSDemuxer ptsd = new MPSDemuxer(program);
        List<MPEGDemuxerTrack> tstracks = ptsd.getVideoTracks();
        MPEGDemuxerTrack muxtrack = tstracks.get(0);
        videoTrack = (SeekableDemuxerTrack) tstracks.get(0);
        break;
    ...
but I ran into a packet header assertion failure in the MTSDemuxer.java class in the parsePacket function:
public static MTSPacket parsePacket(ByteBuffer buffer) {
    int marker = buffer.get() & 0xff;
    Assert.assertEquals(0x47, marker);
    ...
I found that when I reset the position of the seekable byte channel (i.e. in.setPosition(0)), the code makes it past the assert, but then fails at videoTrack = (SeekableDemuxerTrack) tstracks.get(0) (tstracks.get(0) cannot be cast to a SeekableDemuxerTrack).
Am I wasting my time? Are these formats supported somewhere in the library and I'm just not able to find them?
Also, after poking around in the code and writing quick test applications, it seems all you get out of the demuxers are video frames. Is there no way to get the metadata frames associated with the video frames?
For reference, I am using the test files from: http://samples.ffmpeg.org/MPEG2/mpegts-klv/
In case anyone in the future has this question too: I got a response from a developer on the project's GitHub page:
Yeah, MPEG TS is not supported to the extent MP4 is. You can't really seek in TS streams (unless you index the entire stream before hand).
I also asked about how to implement the feature. I thought that it could be done by reworking the MTSDemuxer class to be built off of the SeekableDemuxerTrack so that things would be compatible with the FrameGrab8Bit class, and got the following response:
So it doesn't look like there's much sense to implement TS demuxer on top of SeekableDemuxerTrack. We haven't given much attention to TS demuxer actually, so any input is very welcome.
I think this (building the MTSDemuxer class off of the SeekableDemuxerTrack interface) would work for files (since you have everything already there). But without fully fleshing out that thought, I could not say for sure (it definitely makes sense that this solution would not work for a live MPEG-TS / PS connection).

Java code runs out of memory on AWS but not Mac OS X

I need another set of eyes on this.
I've written out a zip file running to hundreds of gigabytes with this exact code, with no modifications, locally on Mac OS X.
With the code 100% unchanged, just deployed to an AWS instance running Ubuntu, the same code runs into out-of-memory issues (heap space).
Here's the code that's being run, streaming MyBatis to a CSV file on disk:
File directory = new File(feedDirectory);
File file;
try {
    file = File.createTempFile(("feed-" + providerCode + "-"), ".csv", directory);
} catch (IOException e) {
    throw new RuntimeException("Unable to create file to write feed to disk: " + e.getMessage(), e);
}
String filePath = file.getAbsolutePath();
log.info(String.format("File name for %s feed is %s", providerCode, filePath));
// output file
try (FileOutputStream out = new FileOutputStream(file)) {
    streamData(out, providerCode, startDate, endDate);
} catch (IOException e) {
    throw new RuntimeException("Unable to write feed to file: " + e.getMessage());
}

public void streamData(OutputStream outputStream, String providerCode, Date startDate, Date endDate) throws IOException {
    try (CSVPrinter printer = CsvUtil.openPrinter(outputStream)) {
        StreamingHandler<FStay> handler = stayPrintingHandler(printer);
        warehouse.doForAllStaysByProvider(providerCode, startDate, endDate, handler);
    }
}

private StreamingHandler<FStay> stayPrintingHandler(CSVPrinter printer) {
    StreamingHandler<FStay> handler = new StreamingHandler<>();
    handler.setHandler((stay) -> {
        try {
            EXPORTER.writeStay(printer, stay);
        } catch (IOException e) {
            log.error("Issue with writing output: " + e.getMessage(), e);
        }
    });
    return handler;
}

// The EXPORTER method
import org.apache.commons.csv.CSVPrinter;

public void writeStay(CSVPrinter printer, FStay stay) throws IOException {
    List<Object> list = asList(stay);
    printer.printRecord(list);
}

List<Object> asList(FStay stay) {
    List<Object> list = new ArrayList<>(46);
    list.add(stay.getUid());
    list.add(stay.getProviderCode());
    //....
    return list;
}
Here's a graph of the JVM heap space (using jvisualvm) when I run this locally. I've run this consistently with Java 8 (jdk1.8.0_51 and 1.8.0_112) locally and have gotten great results; I've even written out a terabyte of data.
^ In the above, the max heap space is set to 4 gigs, and the most it ever increases to is 1.5 gigs, before going back down to around 500 MB, while streaming data to the CSV file as it's supposed to.
However, when I run this on Ubuntu with JDK 1.8.0_111, the exact same operation will not complete and runs out of heap space (java.lang.OutOfMemoryError: Java heap space).
I've upped the Xmx value from 8 gigs to 16 to 25 gigs and still run out of heap space. Meanwhile... the file is only 10 gigs in total... which really perplexes me.
Here's what the JVisualVm graph looks like on the Ubuntu box:
I've no doubt it's the exact same code running in both environments, with the same operation being performed in each (same database server providing the same data)
The only differences I can think of at this point are:
Operating system - Ubuntu vs Mac OS X
Hosted VM in AWS vs hard metal laptop
Network speed is faster in AWS between database and Ubuntu server
JDK version is 1.8.0_111 in Ubuntu, tried 1.8.0_51 and 1.8.0_112 locally
Can anyone help shed any light on this problem?
Update
I've tried replacing all the 'try-with-resources' statements with explicit flush/close statements and no luck.
What's more, I tried to force a garbage collection on the Ubuntu box as soon as I started to see the data come in, and it had no effect; something is definitely stopping the heap from being collected on the Ubuntu machine, while running the exact same code on OS X lets me write the full enchilada again with no problem.
Update 2
In addition to the differences in the environments above, the only other difference I can think of is if the connection between the servers in AWS is so fast that it streams the data faster than it can flush the data to disk... but that still doesn't explain the issue where I only have 10 gigs of data total, and it blows up a JVM with 20 Gigs of heap space.
Is there any likelihood of there being a bug at the Ubuntu/Java level for this?
Update 3
Tried replacing the CSVPrinter with an entirely separate library (OpenCSV's CSVWriter in lieu of Apache Commons CSV), and the same result occurs.
As soon as this code starts receiving data from the database, the heap starts blowing up and the garbage collector fails to reclaim any memory... but only on Ubuntu. On OS X, everything is reclaimed immediately and the heap never grows.
I've also tried flushing the stream after every write, but had no luck with that either.
Update 4
Got the heap dump to print out, and according to it I should be looking at the database driver, specifically the InboundDataHandler in Amazon's Redshift driver.
I'm using MyBatis with a custom result handler. I tried setting the result handler to effectively do nothing when it gets a result (new ResultHandler<>() { // method overridden to do literally nothing }), and I know I'm not holding on to any references there.
Since it's the InboundDataHandler defined by AWS/Redshift... it makes me think the issue may be below the MyBatis level, either:
Error in the SqlSessionFactory I'm setting up
Bug in the Redshift driver that only pops up in Ubuntu / AWS
Bug in the result handler I have overridden
Here's the heap dump screenshot:
Here's where I'm setting up my SqlSessionFactoryBean:
@Bean
public javax.sql.DataSource redshiftDataSource() throws ClassNotFoundException {
    log.info("Got to datasource config");
    // Dynamically load driver at runtime.
    Class.forName(dataWarehouseDriver);
    DataSource dataSource = new DataSource();
    dataSource.setURL(dataWarehouseUrl);
    dataSource.setUserID(dataWarehouseUsername);
    dataSource.setPassword(dataWarehousePassword);
    return dataSource;
}

@Bean
public SqlSessionFactoryBean sqlSessionFactory() throws ClassNotFoundException {
    SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(redshiftDataSource());
    return factoryBean;
}
Here's the MyBatis code I'm running as a test to verify that it's not me holding on to records in my ResultHandler:
warehouse.doForAllStaysByProvider(providerCode, startDate, endDate, new ResultHandler<FStay>() {
    @Override
    public void handleResult(ResultContext<? extends FStay> resultContext) {
        // do nothing
    }
});
Is there a way I can force the SQL connection to not hang on to records, or something? I'll reiterate that on my local machine there is no issue with this memory leak; it only surfaces when running the code in the hosted AWS environment. And in both cases, the database driver and server are the same.
Update 6
I think it's finally fixed. Thanks to all who pointed me in the direction of the heap dump. That helped narrow it down to the offending class in a huge way.
After that, I did some research on the AWS Redshift driver, and its documentation explicitly says that clients should specify a limit for any operations on large data sets. So I found out how to do that in my MyBatis configuration:
<select id="doForAllStaysByProvider" fetchSize="1000" resultMap="FStayResultMap">
select distinct
f_stay.uid,
And this did the trick.
Mind you, this isn't necessary even when handling much larger data sets downloaded remotely from AWS (database in AWS, code executing on a laptop at home), and it shouldn't be necessary at all, since I'm overriding the MyBatis ResultHandler<>, which handles each row individually and never holds on to any objects.
Yet something funky happens with the AWS Redshift JDBC driver, only when it's run in AWS (database in AWS, code executing in an AWS instance), that causes the InboundDataHandler to never release its resources unless a fetchSize is specified.
Here's the heap of the server running now, getting much further than it ever has before in AWS, with the heap space never moving above 500 MB; after I hit 'Force GC' in jvisualvm, it shows the used heap at less than 100 MB:
Thanks again in a huge way to all those who helped guide this!
Finally figured out a solution.
The heap dump was the biggest aid: it indicated that the InboundDataHandler class of Amazon's Redshift/Postgres JDBC driver was the prime culprit.
The code to set up the SqlSession appeared legit, so traveling over to Amazon's documentation landed this gem:
To avoid client-side out-of-memory errors when retrieving large data sets using JDBC, you can enable your client to fetch data in batches by setting the JDBC fetch size parameter.
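For anyone who wants the same hint outside MyBatis, it maps to plain JDBC's Statement.setFetchSize. A sketch (the setAutoCommit(false) detail is how PostgreSQL-family drivers typically gate cursor-based fetching, so verify it against the Redshift driver version you use; the query here is a stand-in):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

try (Connection conn = dataSource.getConnection()) {
    conn.setAutoCommit(false);        // PG-family drivers often require this for batched fetching
    try (PreparedStatement ps = conn.prepareStatement("select distinct f_stay.uid from f_stay")) {
        ps.setFetchSize(1000);        // stream rows in batches instead of buffering the whole result set
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // handle one row at a time, as the ResultHandler does
            }
        }
    }
}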
We hadn't run into this before, as we stream results with custom ResultHandlers in MyBatis... but there seems to be something different when the AWS Redshift JDBC driver is running on AWS itself vs outside AWS connecting in.
Taking the guidance from the documentation, we added a 'fetchSize' to our MyBatis select query:
<select id="doForAllStaysByProvider" fetchSize="1000" resultMap="FStayResultMap">
select distinct
f_stay.uid,
And voila! Everything worked swimmingly. This is the only change we made, and the heap never went above a couple hundred MBs.
You can see in one of the graphs above where the heap goes off the charts: as soon as data started to be received on Amazon, the heap marched right up linearly and never reclaimed an ounce of space once it started.
My guess is the Redshift JDBC driver is doing something different when it's in Amazon's environment for some kind of optimization... that's all I can think of to explain the behavior.
Clearly Amazon knows what's going on since they documented it up front. I may not know the full 'why' of what's happening, but at least everything is resolved in what appears to be a satisfactory way.
Thanks to all those who helped.

Running Calabash XML From Code

I downloaded Calabash XML a couple of days back and got it working easily enough from the command prompt. I then tried to run it from Java code and noticed there is no API (the Calabash main method is massive, with calls to code everywhere). Getting it working was very messy: I had to copy huge chunks of the main method into a wrapper class and divert System.out into a ByteArrayOutputStream (and eventually into a String), i.e.
...
...
ByteArrayOutputStream baos = new ByteArrayOutputStream(); // declare at top
...
WritableDocument wd = null;
if (uri != null) {
    URI furi = new URI(uri);
    String filename = furi.getPath();
    FileOutputStream outfile = new FileOutputStream(filename);
    wd = new WritableDocument(runtime, filename, serial, outfile);
} else {
    wd = new WritableDocument(runtime, uri, serial, baos); // new "baos" parameter
}
The performance seems really, really slow, e.g. I ran a simple filter 1000 times:
<p:filter>
    <p:with-option name="select" select="'/result/meta-data/neighbors/document/title'" />
</p:filter>
On average each run took 17 ms, which doesn't seem like much, but my Spring REST controllers, with calls to MongoDB, encryption, etc., take 3-4 ms on average.
Has anyone encountered this when running Calabash from code? Is there something I can do to speed things up?
For example, this is being called each time:
XProcRuntime runtime = new XProcRuntime(config);
Can this be created once and reused? Any help is appreciated, as I don't want to have to pay to use Calamet, but I really want to get XProc working from code with acceptable performance.
For examples on how you could integrate XMLCalabash in a framework, I can mention Servlex by Florent Georges. You'd have to browse the code to find the relevant bit, but last time I looked it shouldn't be too hard to find:
http://servlex.net/
XMLCalabash wasn't built for speed, unfortunately. I am sure that if you run a profiler and find some hotspots, Norm Walsh would be interested to hear about them.
An alternative is to look into QuiXProc, which is derived from XMLCalabash:
https://code.google.com/p/quixproc/
I am also very sure that if you can send Norm a patch that improves the main class for better integration, he'd be interested to see it. In fact, the code should be on GitHub: just fork it, fix it, and make a pull request.
HTH!
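On the "can this be created once and reused?" question: hoisting the runtime (and the loaded pipeline) out of the per-request path is exactly what embedders like Servlex do. Below is a rough sketch of that caching idea; the class and method names follow the XML Calabash 1.x sources (some versions take an Input wrapper in load() rather than a URI string), so double-check against your version:

import com.xmlcalabash.core.XProcConfiguration;
import com.xmlcalabash.core.XProcRuntime;
import com.xmlcalabash.io.ReadablePipe;
import com.xmlcalabash.runtime.XPipeline;
import net.sf.saxon.s9api.XdmNode;

// Build the expensive objects once, at startup:
XProcConfiguration config = new XProcConfiguration();
XProcRuntime runtime = new XProcRuntime(config);
XPipeline pipeline = runtime.load("filter.xpl"); // "filter.xpl" is a hypothetical pipeline file

// Then, per request, only feed inputs and run:
pipeline.reset();                     // clear any state left over from the previous run
pipeline.writeTo("source", inputDoc); // inputDoc: a Saxon XdmNode parsed elsewhere
pipeline.run();
ReadablePipe result = pipeline.readFrom("result");
while (result.moreDocuments()) {
    XdmNode out = result.read();
    // serialize or process each output document
}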

RXTXComm: incorrect Eclipse behaviour when reading a byte[] from a stream

I'm creating an application that reads from and writes to a COM port using the RXTXComm library. When I read one byte at a time from the stream, everything goes fine:
while ((data = in.read()) > -1)
Then I tried to read into a byte[] and put a breakpoint on this line:
int g = in.read(buffer, off, len);
When the debugger reaches this place and I resume, a new window appears with the message described below:
Class File Editor
Source not found
----------------------
The JAR file c:\pro\RXTXcom.jar has no source attachment.
You can attach the source by clicking Attach Source below:
What is the problem? This is not an exception, because I can't catch it in a try-catch block. What is this? I didn't ask for "trace in" and I don't need the source.
It appears that your IDE is telling you it is trying to display a line from the RXTXcom library, but it has no source code to use to do so. I would expect this if I were using Eclipse, had a binary-only copy of the library, had exception checking turned on in the debugger, and the library threw an exception.
I don't recognize "resume debug - new window", so I don't know what effect that might have.
Eclipse has a "step out" function in its debugger, allowing you to step through the next return statement; that might help you get to a level for which you do have source.
I doubt this message has much to do with your actual 1 byte vs. byte array reading problem.
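As a side note on the byte[] read itself: InputStream.read(buffer, off, len) may return fewer bytes than requested, so the usual pattern is to loop on its return value (process() below is a hypothetical stand-in for whatever you do with the data):

byte[] buffer = new byte[256];
int n;
// read() returns how many bytes were actually placed in the buffer, or -1 at end of stream
while ((n = in.read(buffer, 0, buffer.length)) > -1) {
    process(buffer, n); // only the first n bytes of the buffer are valid this iteration
}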
