How to enable sidetone/microphone pass-thru programmatically - java

For my current project, a low-latency communication simulator, I'm implementing a native library in C++ that I'll be accessing via JNA. There's a requirement to enable sidetone while transmitting in order to mimic the hardware the simulator is based on.
Of course, Java Sound is proving unable to achieve near-zero latency (the best we can get is ~120 ms), and in order for transmissions to remain comprehensible, the latency on sidetone needs to be near-zero. Fortunately, it seems that in Windows there's a setting to listen to the USB headset's microphone which produces perfect sidetone:
Audio Properties -> Playback -> Headset Earphone -> Properties -> Levels
An example of what I mean can be seen here.
(Note that this is different from the 'listen to this device' feature which produces a pretty bad delay)
I've been working with the MSDN examples for the Core Audio APIs and am able to query devices and get their channels, volume levels, mute settings, etc., but the microphone-level mute/unmute doesn't seem to be accessible even from the Core Audio APIs.
My question is this: is there a way to programmatically interface with a USB headset's microphone level/mute setting?
Our simulators are standardized so we don't have to worry about supporting a wide range of headsets (2 at the moment).

The key to solving this problem was to walk the device topology tree backwards until I found the part responsible for setting the sidetone mute attribute. So in my C++ project I had several methods working together to determine where I was in the topology tree while looking for a SuperMix part.
SuperMix seems to be a common name for sidetone and is at least used by the two headsets we support. The tree is identical for both headsets; your mileage may vary. This is what the output may look like from the aforementioned WalkTreeBackwardsFromPart example (see this answer):
Part Name: SuperMix
Part Name: Volume
Part Name: Mute
Here's my modified version of WalkTreeBackwardsFromPart. For all intents and purposes, it simply checks whether the part we're currently looking at is the SuperMix and whether a direct child of that part is a Volume node. The child check prevents an incorrect assignment: I found that for our headsets there would often be two nodes called SuperMix, and the only difference was that the one we wanted had a Volume child.
HRESULT Sidetone::WalkTreeBackwardsFromPart(IPart *part) {
    HRESULT hr;
    if (wcscmp(this->getPartName(part), L"SuperMix") == 0 && this->treePeek(part, L"Volume")) {
        this->superMix = part;
        // sizeof() cannot recover the length of a heap-allocated array, so
        // getChildParts() reports the child count through an out parameter.
        UINT nSuperMixChildren = 0;
        IPart** superMixChildren = this->getChildParts(part, &nSuperMixChildren);
        for (UINT i = 0; i < nSuperMixChildren; i++) {
            if (wcscmp(this->getPartName(superMixChildren[i]), L"Volume") == 0) {
                this->volumeNode = this->getIPartAsIAudioVolumeLevel(superMixChildren[i]);
                if (this->volumeNode != NULL) {
                    UINT nVolumeNodeChildren = 0;
                    IPart** volumeNodeChildren = this->getChildParts(superMixChildren[i], &nVolumeNodeChildren);
                    for (UINT j = 0; j < nVolumeNodeChildren; j++) {
                        if (wcscmp(this->getPartName(volumeNodeChildren[j]), L"Mute") == 0) {
                            this->muteNode = this->getIPartAsIAudioMute(volumeNodeChildren[j]);
                            break;
                        }
                    }
                    delete[] volumeNodeChildren;
                }
                break;
            }
        }
        delete[] superMixChildren;
        this->superMixFound = true;
        return S_OK;
    } else if (!this->superMixFound) {
        IPartsList *pIncomingParts = NULL;
        hr = part->EnumPartsIncoming(&pIncomingParts);
        if (E_NOTFOUND == hr) {
            // Not an error... we've just reached the end of the path.
            return S_OK;
        }
        if (FAILED(hr)) {
            printf("%S - Couldn't enum incoming parts: hr = 0x%08x\n", this->MSGIDENTIFIER, hr);
            return hr;
        }
        UINT nParts = 0;
        hr = pIncomingParts->GetCount(&nParts);
        if (FAILED(hr)) {
            printf("%S - Couldn't get count of incoming parts: hr = 0x%08x\n", this->MSGIDENTIFIER, hr);
            pIncomingParts->Release();
            return hr;
        }
        // Walk the tree on each incoming part recursively.
        for (UINT n = 0; n < nParts; n++) {
            IPart *pIncomingPart = NULL;
            hr = pIncomingParts->GetPart(n, &pIncomingPart);
            if (FAILED(hr)) {
                printf("%S - Couldn't get part #%u (0-based) of %u (1-based): hr = 0x%08x\n", this->MSGIDENTIFIER, n, nParts, hr);
                pIncomingParts->Release();
                return hr;
            }
            hr = WalkTreeBackwardsFromPart(pIncomingPart);
            if (FAILED(hr)) {
                printf("%S - Couldn't walk tree on part #%u (0-based) of %u (1-based): hr = 0x%08x\n", this->MSGIDENTIFIER, n, nParts, hr);
                pIncomingPart->Release();
                pIncomingParts->Release();
                return hr;
            }
            pIncomingPart->Release();
        }
        pIncomingParts->Release();
    }
    return S_OK;
}
Sidetone::superMixFound is a boolean member used to break out of the recursion quickly and stop us from walking the device topology tree any further than necessary (wasted time).
Sidetone::getPartName() is a simple reusable method that returns the part's name as a wide-character string.
Sidetone::treePeek() returns true if the children of the specified part contain a part with the name given as the second parameter.
Sidetone::getChildParts() returns an array of pointers, one for each child of a given part.
After figuring this out, it was just a matter of exposing a setMute method from dllmain.cpp and calling it via JNA whenever we needed to activate or deactivate sidetone, i.e. at the beginning and end of any transmission.
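Stripped of the COM plumbing, the search logic amounts to a depth-first, name-driven walk. Here's a plain-Java sketch with a hypothetical Part class standing in for IPart (this is an illustration, not the Core Audio API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for IPart: just a named node with children.
class Part {
    final String name;
    final List<Part> children = new ArrayList<>();
    Part(String name) { this.name = name; }
    Part add(Part child) { children.add(child); return this; }
}

public class SidetoneWalk {
    Part volumeNode, muteNode;

    // Depth-first search for a SuperMix part that has a Volume child,
    // then grab the Mute node underneath that Volume.
    boolean walk(Part part) {
        if (part.name.equals("SuperMix")) {
            for (Part v : part.children) {
                if (v.name.equals("Volume")) {
                    volumeNode = v;
                    for (Part m : v.children) {
                        if (m.name.equals("Mute")) { muteNode = m; }
                    }
                    return true; // stop walking: sidetone path found
                }
            }
        }
        for (Part child : part.children) {
            if (walk(child)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // A decoy SuperMix without a Volume child, then the real one.
        Part root = new Part("root")
            .add(new Part("SuperMix"))
            .add(new Part("SuperMix")
                .add(new Part("Volume")
                    .add(new Part("Mute"))));
        SidetoneWalk w = new SidetoneWalk();
        System.out.println(w.walk(root) && w.muteNode != null); // prints true
    }
}
```

The decoy node in main mirrors the two-SuperMix situation described above: only the SuperMix with a Volume child is accepted.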


How to express the reasoning for Weka instance classification?

Background:
If I open the Weka Explorer GUI, train a J48 tree, and test it using the NSL-KDD training and testing datasets, a pruned tree is produced. The Weka Explorer GUI expresses the algorithm's reasoning for classifying something as an anomaly or not in terms of tests such as src_bytes <= 28.
Screenshot of Weka Explorer GUI showing pruned tree
Question:
Referring to the pruned-tree example produced by the Weka Explorer GUI, how can I programmatically have Weka express the reasoning for each instance classification in Java?
i.e. Instance A was classified as an anomaly as src_bytes < 28 &&
dst_host_srv_count < 88 && dst_bytes < 3 etc.
So far, I've been able to:
Train and test a J48 tree on the NSL-KDD dataset.
Output a description of the J48 tree within Java.
Return the J48 tree as an if-then statement.
But I simply have no idea how, while iterating through each instance during the testing phase, to express the reasoning for each classification, short of manually outputting the J48 tree as an if-then statement each time and adding numerous printlns recording when each branch was triggered (which I'd really rather not do, as that would dramatically increase the long-term maintenance burden).
Additional Screenshots:
Screenshot of the 'description of the J48 tree within Java'
Screenshot of the 'J48 tree as an if-then statement'
Code:
public class Junction_Tree {
    String train_path = "KDDTrain+.arff";
    String test_path = "KDDTest+.arff";
    double accuracy;
    double recall;
    double precision;
    int correctPredictions;
    int incorrectPredictions;
    int numAnomaliesDetected;
    int numNetworkRecords;

    public void run() {
        try {
            Instances train = DataSource.read(train_path);
            Instances test = DataSource.read(test_path);
            train.setClassIndex(train.numAttributes() - 1);
            test.setClassIndex(test.numAttributes() - 1);
            if (!train.equalHeaders(test))
                throw new IllegalArgumentException("datasets are not compatible..");

            Remove rm = new Remove();
            rm.setAttributeIndices("1");
            J48 j48 = new J48();
            j48.setUnpruned(true);
            FilteredClassifier fc = new FilteredClassifier();
            fc.setFilter(rm);
            fc.setClassifier(j48);
            fc.buildClassifier(train);

            numAnomaliesDetected = 0;
            numNetworkRecords = 0;
            int n_ana_p = 0;
            int ana_p = 0;
            correctPredictions = 0;
            incorrectPredictions = 0;
            for (int i = 0; i < test.numInstances(); i++) {
                double pred = fc.classifyInstance(test.instance(i));
                String a = "anomaly";
                String actual = test.classAttribute().value((int) test.instance(i).classValue());
                String predicted = test.classAttribute().value((int) pred);
                if (actual.equalsIgnoreCase(a))
                    numAnomaliesDetected++;
                if (actual.equalsIgnoreCase(predicted))
                    correctPredictions++;
                else
                    incorrectPredictions++;
                if (actual.equalsIgnoreCase(a) && predicted.equalsIgnoreCase(a))
                    ana_p++;
                if (!actual.equalsIgnoreCase(a) && predicted.equalsIgnoreCase(a))
                    n_ana_p++;
                numNetworkRecords++;
            }
            // Use floating-point arithmetic so the percentages aren't truncated.
            accuracy = correctPredictions * 100.0 / (correctPredictions + incorrectPredictions);
            recall = ana_p * 100.0 / numAnomaliesDetected;
            precision = ana_p * 100.0 / (ana_p + n_ana_p);
            System.out.println("\n\naccuracy: " + accuracy + ", Correct Predictions: " + correctPredictions
                    + ", Incorrect Predictions: " + incorrectPredictions);
            // toSource() takes the name of the generated class as a String.
            writeFile(j48.toSource("J48_ifThen"));
            writeFile(j48.toString());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Junction_Tree JT1 = new Junction_Tree();
        JT1.run();
    }
}
I have never used it myself, but according to the WEKA documentation, the J48 class includes a getMembershipValues method. This method should return an array that indicates the node membership of an instance. One of the few mentions of this method appears to be in this thread on the WEKA forums.
Beyond that, I can't find any information on possible alternatives to the approach you mentioned.
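If getMembershipValues doesn't pan out, the underlying idea is straightforward to implement on any tree you control. This is a generic illustration, not Weka's API: a toy decision node that records every test it evaluates on the way to a leaf (attribute names borrowed from the question for flavor).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Generic illustration, NOT Weka's API: an internal node tests
// "attribute <= threshold"; a leaf carries a class label.
class Node {
    String attribute; double threshold;  // set on internal nodes
    Node left, right;                    // left = test true, right = test false
    String label;                        // non-null only at a leaf

    static Node leaf(String label) { Node n = new Node(); n.label = label; return n; }
    static Node test(String attr, double thr, Node l, Node r) {
        Node n = new Node(); n.attribute = attr; n.threshold = thr; n.left = l; n.right = r; return n;
    }

    // Classify an instance, appending each decision taken to 'trace'.
    String classify(Map<String, Double> instance, List<String> trace) {
        if (label != null) return label;
        boolean goLeft = instance.get(attribute) <= threshold;
        trace.add(attribute + (goLeft ? " <= " : " > ") + threshold);
        return (goLeft ? left : right).classify(instance, trace);
    }
}

public class TraceDemo {
    public static void main(String[] args) {
        Node tree = Node.test("src_bytes", 28,
                Node.test("dst_bytes", 3, Node.leaf("anomaly"), Node.leaf("normal")),
                Node.leaf("normal"));
        List<String> trace = new ArrayList<>();
        String cls = tree.classify(Map.of("src_bytes", 10.0, "dst_bytes", 1.0), trace);
        System.out.println(cls + " because " + String.join(" && ", trace));
        // prints: anomaly because src_bytes <= 28.0 && dst_bytes <= 3.0
    }
}
```

The trick is simply threading a trace list through the recursive classify call; applying the same pattern to a parsed copy of the J48 tree would give per-instance reasoning without any printlns baked into the generated code.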

Problems building A* algorithm

I am trying to build an app implementing A*, and I am having trouble with the logic. The method here takes in four ints (startX/Y, goalX/Y) and then, using the A* algorithm, builds an ArrayList and returns it so the main method can iterate through it and display the path A* builds. But what I get is a jumpy path that eventually builds a very thick path to the goal node. Can anybody pinpoint where my mistake is?
Note: open and closed are priority queues and Tile implements comparable.
public ArrayList<Tile> findPath(int sX, int sY, int gX, int gY)
{
    ArrayList<Tile> path = new ArrayList<Tile>();
    open.offer(gameMap[sX][sY]);
    Tile currentNode = gameMap[sX][sY];
    Tile goalNode = gameMap[gX][gY];
    int cX;
    int cY;
    while (open.size() > 0) {
        currentNode = open.poll();
        closed.offer(currentNode);
        path.add(currentNode);
        cX = currentNode.getX();
        cY = currentNode.getY();
        if (currentNode == goalNode) {
            break;
        }
        if ((cX > 0 && cX < gameMap.length - 1) && (cY > 0 && cY < gameMap.length - 1)) {
            for (int i = -1; i < 2; i++) {
                for (int j = 1; j > -2; j--) {
                    if (i == 0 && j == 0) {
                        continue;
                    }
                    // y index uses cY (the original code mistakenly used cX here)
                    if ((gameMap[cX + i][cY + j].type != 1) && !closed.contains(gameMap[cX + i][cY + j])) {
                        if (!open.contains(gameMap[cX + i][cY + j])) {
                            open.offer(gameMap[cX + i][cY + j]);
                            gameMap[cX + i][cY + j].parent = currentNode;
                        }
                    }
                }
            }
        }
    }
    // while (currentNode != gameMap[sX][sY]) {
    //     path.add(currentNode);
    //     currentNode = currentNode.parent;
    // }
    return path;
}
First off, I don't think your closed set needs to be a priority queue. It's just a set of nodes that have been looked at.
You seem to be missing the core part of how A* works, which is why I think this path finder is not working too well for you.
Here's the main idea:
Have a heuristic function that guesses how far away the destination is. Ideally, that function will be admissible, meaning that it will never overestimate the distance.
For tile grids, this can be done using Manhattan distance (x difference + y difference); since that is the minimum possible number of moves, it will always be admissible.
Whenever you take a tile out of your open list and add it to the closed set, you need to update the known distances of the neighboring tiles, keeping the lowest known value for each. Since you have the known value for the tile you are putting in the closed set, you offer each neighbor that value plus 1.
By updating these values, the open list may shift order (which is why a priority queue is a good choice here). The heuristic value will probably remain the same, but the known value will get more refined.
Once you reach the destination, you will have a set of closed nodes that all have a known distance. You then backtrack from the destination, looking at each neighbor that is also in the closed set and choosing the one with the lowest known distance.
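For reference, the Manhattan heuristic described above is a one-liner. Note it is admissible for 4-way movement; with the 8-way movement in your neighbor loops you would want Chebyshev distance instead:

```java
public class Heuristic {
    // Admissible heuristic on a 4-connected tile grid: the true shortest
    // path can never be shorter than the Manhattan distance.
    static int manhattan(int x1, int y1, int x2, int y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    public static void main(String[] args) {
        System.out.println(manhattan(0, 0, 3, 4)); // prints 7
    }
}
```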
In terms of how to implement this, you may want to consider having your Tile class be wrapped in another class called SearchTile or something like that. It could look like this:
// You may not want to use public variables, depending on your needs
public class SearchTile implements Comparable<SearchTile> {
    public final Tile tile;
    // These need to be updated
    public int knownDistance = 0;
    public int heuristicDistance = 0;

    public SearchTile(final Tile tile) {
        this.tile = tile;
    }

    @Override
    public int compareTo(final SearchTile other) {
        // Order by f = known + heuristic (G + H).
        return Integer.compare(knownDistance + heuristicDistance,
                other.knownDistance + other.heuristicDistance);
    }
}
The cool thing about A* is that in the ideal case, it should go straight to the destination. In cases with walls, it will take the best guess and as long as the heuristic is admissible, it will come up with the optimal solution.
I haven't gone through every detail of your implementation, but it comes to mind that the way you insert nodes into OPEN might be a source of trouble:
if (!open.contains(gameMap[cX + i][cY + j])) {
    open.offer(gameMap[cX + i][cY + j]);
    gameMap[cX + i][cY + j].parent = currentNode;
}
Your goal here is to avoid repeated elements in your OPEN list, but it might happen that you have to replace an element because you have found a way to reach it at a lower cost. In that case you need to remove the node already in OPEN and reinsert it with the lower cost (and thus a higher priority). If you do not allow this, you may generate suboptimal paths, as seems to be happening in your case.
Additionally, some of the algorithm's logic is missing. You should store, for each node you create, the accumulated cost from the start, G, and the estimated cost to the goal, H. The OPEN list is ordered according to the value of G+H, which I didn't see being done in your code. Anyway, I recommend you take a look at an existing implementation of A*, like the one in the Hipster4j library, for more details on how this works.
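To make those pieces concrete, here is a compact, self-contained sketch of A* on a 4-connected grid (deliberately not using your Tile classes): the open list is ordered by G+H, a cheaper route to a node simply re-inserts it, and stale queue entries are skipped when polled.

```java
import java.util.*;

// Compact A* on a 4-connected grid: 0 = free, 1 = wall.
public class AStar {
    static List<int[]> findPath(int[][] map, int sx, int sy, int gx, int gy) {
        int n = map.length, m = map[0].length;
        int[][] g = new int[n][m];                       // best known cost from start
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        int[][] parent = new int[n][m];                  // encoded as px * m + py
        for (int[] row : parent) Arrays.fill(row, -1);

        // Queue entries are {x, y, f}; stale ones are skipped when polled.
        PriorityQueue<int[]> open = new PriorityQueue<>(Comparator.comparingInt(a -> a[2]));
        g[sx][sy] = 0;
        open.add(new int[]{sx, sy, Math.abs(sx - gx) + Math.abs(sy - gy)});
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int x = cur[0], y = cur[1];
            if (cur[2] > g[x][y] + Math.abs(x - gx) + Math.abs(y - gy)) continue; // stale
            if (x == gx && y == gy) break;
            for (int[] d : dirs) {
                int nx = x + d[0], ny = y + d[1];
                if (nx < 0 || ny < 0 || nx >= n || ny >= m || map[nx][ny] == 1) continue;
                if (g[x][y] + 1 < g[nx][ny]) {           // found a cheaper route
                    g[nx][ny] = g[x][y] + 1;
                    parent[nx][ny] = x * m + y;
                    open.add(new int[]{nx, ny, g[nx][ny] + Math.abs(nx - gx) + Math.abs(ny - gy)});
                }
            }
        }
        // Backtrack from the goal through the parent links.
        List<int[]> path = new ArrayList<>();
        if (g[gx][gy] == Integer.MAX_VALUE) return path; // unreachable
        for (int x = gx, y = gy; x != -1; ) {
            path.add(0, new int[]{x, y});
            int p = parent[x][y];
            if (p == -1) break;
            x = p / m; y = p % m;
        }
        return path;
    }

    public static void main(String[] args) {
        int[][] map = {
            {0, 1, 0},
            {0, 1, 0},
            {0, 0, 0},
        };
        List<int[]> path = findPath(map, 0, 0, 0, 2);
        System.out.println(path.size()); // the wall forces a 7-tile detour
    }
}
```

Note the path is rebuilt only once, by backtracking from the goal, instead of appending every expanded node as in the question's code; that is what keeps the returned path thin.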
Hope my answer helped!

Round robin java implementation

I was asked to do a multithreaded simulator of a specific algorithm.
One of the tasks was to compare the regular scheduling results with round robin results.
When I was looking for information about the round-robin scheduling method, I found very general explanations and some code examples in which I couldn't see any relation to actually scheduling threads.
For example, this code (found here on Stack Overflow):
public static void RR3(int numProcess, int[] cpuBurst, int[] arrivalTime) {
    int quantum = 3, time = 0, temp;
    int completionTime = 0;
    LinkedList<Integer> process = new LinkedList<>();
    for (int i = 0; i < numProcess; i++) {
        process.add(i, cpuBurst[i]);
    }
    while (!process.isEmpty()) {
        for (int j = 0; j < quantum; j++) {
            System.out.println(process.getFirst());
            if (process.peek() == 0) {
                completionTime = completionTime + time;
                process.remove();
            } else {
                temp = process.pop();
                process.push(temp - 1);
                time++;
            }
        }
        process.addLast(process.getFirst());
        process.removeFirst();
    }
    double act = (double) completionTime / numProcess;
    System.out.println("-----------------RR3-----------------");
    System.out.println(" Act = " + act + "ms");
}
I don't see anything but integers representing the number of processes, the time for each, etc. How do I actually manage their behavior? I don't see any call for a process to run or stop.
You already noticed that this is an abstraction: there is no real work performed. Instead, the work is just "imitated" by a set of Integers that represent the amount of work remaining.
The question of how to run or stop the processes is somewhat hidden in the algorithm itself: the LinkedList stores the "active" processes. They are all started at the beginning. In each turn, they receive a short time slot in which they can do some of their work. When all their work is done, they are removed from the list.
In the simplest form, when the Integer values are replaced by real tasks, you could replace the line
if(process.peek() == 0 ){ ... }
with something like
Task task = process.peek();
if (task.isFinished()) { ... }
Otherwise (in the else case), when there is work to be done, you could replace the lines
temp = process.pop();
process.push(temp - 1);
with something like
Task task = process.peek();
task.doALittleBitOfWork();
The code that you posted was originally part of a question, so one has to assume that there's still something wrong with it, but maybe it is sufficient to get the basic idea.
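As a minimal end-to-end sketch of that idea, with a hypothetical Task class and a quantum of one unit for brevity:

```java
import java.util.LinkedList;

// Hypothetical Task: it knows how much work it has left.
class Task {
    final String name;
    int remaining;                              // work units left
    Task(String name, int remaining) { this.name = name; this.remaining = remaining; }
    boolean isFinished() { return remaining == 0; }
    void doALittleBitOfWork() { remaining--; }  // one quantum's worth
}

public class RoundRobin {
    public static void main(String[] args) {
        LinkedList<Task> queue = new LinkedList<>();
        queue.add(new Task("A", 3));
        queue.add(new Task("B", 1));
        queue.add(new Task("C", 2));

        // Each turn: give the task at the head one unit of work, then
        // either retire it or rotate it to the back of the queue.
        while (!queue.isEmpty()) {
            Task t = queue.removeFirst();
            t.doALittleBitOfWork();
            if (t.isFinished()) {
                System.out.println(t.name + " finished");
            } else {
                queue.addLast(t);               // back of the line
            }
        }
    }
}
```

With real threads, doALittleBitOfWork() would instead resume the task for one quantum; the queue rotation is the round-robin part, and it is identical either way.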

Extracting Values used for Normalization in Weka Multilayer Perceptron

I have a machine-learning scheme in which I use the Java classes from Weka to implement machine learning in a MATLAB script. I then upload the model for the classifier to a database, since I need to perform the classification on a different machine in a different language (Objective-C). The evaluation of the network was fairly straightforward to program, but I need the values that WEKA used to normalize the data set before training so I can use them when evaluating the network later. Does anyone know how to get the normalization factors that WEKA uses when training a Multilayer Perceptron network? I would prefer the answer to be in Java.
After some digging through the WEKA source code and documentation, this is what I've come up with. Even though there is a filter in WEKA called Normalize, the Multilayer Perceptron doesn't use it; instead it uses a bit of code internally that looks like this:
// (excerpt from the WEKA source; min, max and value are declared elsewhere)
m_attributeRanges = new double[inst.numAttributes()];
m_attributeBases = new double[inst.numAttributes()];
for (int noa = 0; noa < inst.numAttributes(); noa++) {
    min = Double.POSITIVE_INFINITY;
    max = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < inst.numInstances(); i++) {
        if (!inst.instance(i).isMissing(noa)) {
            value = inst.instance(i).value(noa);
            if (value < min) {
                min = value;
            }
            if (value > max) {
                max = value;
            }
        }
    }
    m_attributeRanges[noa] = (max - min) / 2;
    m_attributeBases[noa] = (max + min) / 2;
    if (noa != inst.classIndex() && m_normalizeAttributes) {
        for (int i = 0; i < inst.numInstances(); i++) {
            if (m_attributeRanges[noa] != 0) {
                inst.instance(i).setValue(noa, (inst.instance(i).value(noa)
                        - m_attributeBases[noa]) / m_attributeRanges[noa]);
            } else {
                inst.instance(i).setValue(noa, inst.instance(i).value(noa)
                        - m_attributeBases[noa]);
            }
        }
    }
}
So the only values I should need to transmit to the other system are the min and the max for each attribute. Luckily for me, there turned out to be methods on the filter weka.filters.unsupervised.attribute.Normalize that return double arrays of the mins and maxes for a processed dataset. All I had to do then was tell the Multilayer Perceptron not to normalize my data automatically, process the data separately with the filter, and extract the mins and maxes to send to the database along with the weights and everything else.
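For illustration, the same range/base computation can be replicated outside WEKA on a plain double[][] (rows = instances, columns = attributes). This sketch deliberately ignores the missing-value and class-index handling of the real code:

```java
// Self-contained replication of WEKA's internal range/base computation.
public class NormStats {
    // Returns {ranges, bases}: range = (max - min) / 2, base = (max + min) / 2.
    static double[][] rangesAndBases(double[][] data) {
        int nAttrs = data[0].length;
        double[] ranges = new double[nAttrs], bases = new double[nAttrs];
        for (int a = 0; a < nAttrs; a++) {
            double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
            for (double[] row : data) {
                min = Math.min(min, row[a]);
                max = Math.max(max, row[a]);
            }
            ranges[a] = (max - min) / 2;    // half the spread
            bases[a] = (max + min) / 2;     // midpoint
        }
        return new double[][]{ranges, bases};
    }

    // Normalized value: (v - base) / range, mapping [min, max] onto [-1, 1].
    static double normalize(double v, double range, double base) {
        return range != 0 ? (v - base) / range : v - base;
    }

    public static void main(String[] args) {
        double[][] data = {{0, 10}, {4, 30}, {2, 20}};
        double[][] rb = rangesAndBases(data);
        System.out.println(normalize(4, rb[0][0], rb[1][0])); // prints 1.0
    }
}
```

Given the stored min and max (or equivalently range and base) per attribute, the Objective-C side only needs the one-line normalize formula to preprocess inputs identically to training.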

How do I know that my neural network is being trained correctly

I've written an Adaline neural network. Everything I have compiles, so I know there isn't a syntax problem with what I've written, but how do I know that I have the algorithm correct? When I try training the network, my computer just says the application is running, and it keeps going; after about two minutes I stopped it.
Does training normally take this long (I have 10 parameters and 669 observations)?
Do I just need to let it run longer?
Here is my train method:
public void trainNetwork()
{
    int good = 0;
    // train until all patterns are good.
    while (good < trainingData.size())
    {
        for (int i = 0; i < trainingData.size(); i++)
        {
            this.setInputNodeValues(trainingData.get(i));
            adalineNode.run();
            if (nodeList.get(nodeList.size() - 1).getValue(Constants.NODE_VALUE) != adalineNode.getValue(Constants.NODE_VALUE))
            {
                adalineNode.learn();
            }
            else
            {
                good++;
            }
        }
    }
}
And here is my learn method:
public void learn()
{
    Double nodeValue = value.get(Constants.NODE_VALUE);
    double nodeError = nodeValue * -2.0;
    error.put(Constants.NODE_ERROR, nodeError);
    BaseLink link;
    int count = inLinks.size();
    double delta;
    for (int i = 0; i < count; i++)
    {
        link = inLinks.get(i);
        Double learningRate = value.get(Constants.LEARNING_RATE);
        Double value = inLinks.get(i).getInValue(Constants.NODE_VALUE);
        delta = learningRate * value * nodeError;
        inLinks.get(i).updateWeight(delta);
    }
}
And here is my run method:
public void run()
{
    double total = 0;
    // find out how many input links there are
    int count = inLinks.size();
    for (int i = 0; i < count - 1; i++)
    {
        // grab a specific link in sequence
        BaseLink specificInLink = inLinks.get(i);
        Double weightedValue = specificInLink.weightedInValue(Constants.NODE_VALUE);
        total += weightedValue;
    }
    this.setValue(Constants.NODE_VALUE, this.transferFunction(total));
}
These functions are part of a library that I'm writing. I have the entire thing on GitHub here. Now that everything is written, I just don't know how I should go about testing whether the training method is written correctly.
I asked a similar question a few months ago.
Ten parameters with 669 observations is not a large data set. So there is probably an issue with your algorithm. There are two things you can do that will make debugging your algorithm much easier:
Print the sum of squared errors at the end of each iteration. This will help you determine if the algorithm is converging (at all), stuck at a local minimum, or just very slowly converging.
Test your code on a simple data set. Pick something easy, like a two-dimensional input that you know is linearly separable. Will your algorithm learn a simple AND function of two inputs? If so, will it learn an XOR function (2 inputs, 2 hidden nodes, 2 outputs)?
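Both suggestions can be combined in one tiny experiment: a single delta-rule (Adaline-style) unit learning the AND function, printing the sum of squared errors each epoch so you can watch it converge. This is a generic sketch, not the asker's library:

```java
// A single delta-rule unit learning AND, with per-epoch SSE printed.
public class AdalineAnd {
    // Train on AND (in +-1 coding) and return the learned weights.
    static double[] train(int epochs, double lr) {
        double[][] x = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}}; // bias + 2 inputs
        double[] target = {-1, -1, -1, 1};
        double[] w = new double[3];
        for (int epoch = 0; epoch < epochs; epoch++) {
            double sse = 0;
            for (int p = 0; p < x.length; p++) {
                double net = 0;
                for (int i = 0; i < 3; i++) net += w[i] * x[p][i];
                double err = target[p] - net;   // Adaline trains on the raw net value
                sse += err * err;
                for (int i = 0; i < 3; i++) w[i] += lr * err * x[p][i];
            }
            System.out.println("epoch " + epoch + "  SSE = " + sse); // watch it shrink
        }
        return w;
    }

    public static void main(String[] args) {
        double[] w = train(50, 0.1);
        // Threshold the net output to classify; all four patterns should be right.
        double[][] x = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
        int[] want = {-1, -1, -1, 1};
        for (int p = 0; p < 4; p++) {
            double net = w[0] * x[p][0] + w[1] * x[p][1] + w[2] * x[p][2];
            System.out.println((net >= 0 ? 1 : -1) + " (want " + want[p] + ")");
        }
    }
}
```

If your library's learn/run methods are correct, wiring them into a harness like this should show the SSE dropping steadily; a flat or oscillating SSE points at the update rule or the stopping condition.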
You should add debug/test messages to watch whether the weights are saturating and converging. It is likely that good never reaches trainingData.size().
Based on Double nodeValue = value.get(Constants.NODE_VALUE); I assume NODE_VALUE is of type Double? If that's the case, then the check nodeList.get(nodeList.size()-1).getValue(Constants.NODE_VALUE) != adalineNode.getValue(Constants.NODE_VALUE) may never converge exactly, since it compares double values influenced by many other parameters, and your convergence test relies on it. Typically, when training a neural network, you stop when the error is within an acceptable limit, not on a strict equality like the one you are checking.
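A minimal sketch of that stopping rule, with a hypothetical tolerance value:

```java
// Compare doubles with a tolerance rather than strict equality when
// deciding a training pattern is "good".
public class Converged {
    static final double EPS = 1e-3;  // acceptable error limit (tune for your data)

    static boolean closeEnough(double predicted, double target) {
        return Math.abs(predicted - target) < EPS;
    }

    public static void main(String[] args) {
        System.out.println(closeEnough(0.9996, 1.0)); // true: within tolerance
        System.out.println(closeEnough(0.98, 1.0));   // false: still too far off
    }
}
```

In the trainNetwork loop, the != comparison would become !closeEnough(...), and good would also need to be reset at the start of each pass so the loop exits only when a full epoch produces no corrections.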
Hope this helps
