I'm messing around with Auctionator (a WoW addon for the auction house). My application is still in development, but out of curiosity I want to know the name of this format.
D:\Blizzard\World of Warcraft\WTF\Account\54621418#1\SavedVariables\Auctionator.lua
AUCTIONATOR_PRICE_DATABASE = {
  ["__dbversion"] = 4,
  ["Ragnaros_Horde"] = {
    ["Kraken's Eye of Agility"] = {
      ["mr"] = 6019998,
      ["cc"] = 3,
      ["H2935"] = 6019998,
      ["id"] = "153708:0:0:0:0",
      ["sc"] = 1,
    },
    ["Tidespray Linen Pants of the Harmonious"] = {
      ["mr"] = 2930810,
      ["sc"] = 1,
      ["id"] = "154689:0:0:0:1715",
      ["L2926"] = 2930810,
      ["H2926"] = 19698294,
      ["cc"] = 4,
    },
  },
}
I ended up parsing the file with lots of indexOf(..) and Patterns and Matchers because I couldn't find this format documented anywhere. Here's a screenshot of the application if you want to see it.
A .lua file is a source code file written in Lua, a lightweight scripting language designed for embedding in and extending applications (the Lua interpreter itself is written in ANSI C).
Your file is simply Lua source code: a table literal assigned to a global variable, which is how WoW serializes addon SavedVariables. There is no special name for the format beyond that.
For more, have a look at https://en.wikipedia.org/wiki/Lua_(programming_language)
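If you'd rather not hand-parse it with indexOf and regex, one option is to evaluate the file with an embedded Lua interpreter and walk the resulting table. Below is a minimal sketch using the LuaJ library (my assumption; any Lua binding for the JVM would do). The file name and the realm/item keys are taken from your sample above:

import org.luaj.vm2.Globals;
import org.luaj.vm2.LuaTable;
import org.luaj.vm2.LuaValue;
import org.luaj.vm2.Varargs;
import org.luaj.vm2.lib.jse.JsePlatform;

public class AuctionatorReader {
    public static void main(String[] args) {
        // Run the SavedVariables file in a sandboxed Lua environment.
        Globals globals = JsePlatform.standardGlobals();
        globals.loadfile("Auctionator.lua").call();

        // The file is just a global table assignment, so the data is now a global.
        LuaTable db = globals.get("AUCTIONATOR_PRICE_DATABASE").checktable();
        LuaTable realm = db.get("Ragnaros_Horde").checktable();

        // Iterate over the item names and read the fields we care about.
        LuaValue key = LuaValue.NIL;
        while (true) {
            Varargs entry = realm.next(key);
            key = entry.arg1();
            if (key.isnil()) break;
            LuaValue item = entry.arg(2);
            if (item.istable()) {
                System.out.println(key.tojstring() + " -> mr=" + item.get("mr").tojstring());
            }
        }
    }
}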
As part of the upcoming 2023 new year I wanted to try to move my development environment to vim or neovim. I have gone through a bit of setup already and have Go and JS/TS set up and apparently working just fine: autocomplete, linting, and import management.
Trying to get lsp-zero and Java working, though, is turning out to be a nightmare (because of course Java would be the problem child). When I opened a Java file, lsp-zero was baller and asked to install jdtls, which appears to have worked, and voila... nothing. I just have code highlighting. No autocomplete or import management.
I added the following to test:
-- configure an individual server
lsp.configure('jdtls', {
  flags = {
    debounce_text_changes = 150,
  },
  on_attach = function(client, bufnr)
    print('lsp server (jdtls) attached')
  end
})

lsp.configure('gopls', {
  flags = {
    debounce_text_changes = 150,
  },
  on_attach = function(client, bufnr)
    print('lsp server (gopls) attached')
  end
})
Java is not picking up the LSP server.
Go picks it up just fine.
Does anyone know of additional config that is needed? I am not seeing anything specifically called out.
--- Config edit ---
I updated the config to call the Windows version of the scripts. I also added a data path and root_dir. The LSP still never attaches.
require'lspconfig'.jdtls.setup{
  cmd = {
    'jdtls-win.cmd',
    "-configuration",
    "C:\\Users\\Coury\\AppData\\Local\\nvim-data\\mason\\packages\\jdtls\\config_win",
    "-jar",
    "C:\\Users\\Coury\\AppData\\Local\\nvim-data\\mason\\packages\\jdtls\\plugins\\org.eclipse.equinox.launcher_1.6.400.v20210924-0641.jar",
    "-data",
    "C:\\Users\\Coury\\Documents\\Code\\interviews\\truleo\\app",
  },
  single_file_support = true,
  root_dir = function()
    return "C:\\Users\\Coury\\Documents\\Code\\interviews\\truleo\\app"
  end,
  flags = {
    debounce_text_changes = 150,
  },
  on_attach = function(client, bufnr)
    print('lsp server (jdtls) attached')
  end
}
First, add your Java path to your .bashrc and retry the installation using Mason.nvim.
If that doesn't work, do the following:
Install eclipse.jdt.ls by following its installation instructions.
Add the plugin:
vim-plug: Plug 'mfussenegger/nvim-jdtls'
packer.nvim: use 'mfussenegger/nvim-jdtls'
To solve this you'd have to create your personal jdtls config file in your plugins directory, like so:
-- Java.lua
local config = {
cmd = {
--
"java", -- Or the absolute path '/path/to/java11_or_newer/bin/java'
"-Declipse.application=org.eclipse.jdt.ls.core.id1",
"-Dosgi.bundles.defaultStartLevel=4",
"-Declipse.product=org.eclipse.jdt.ls.core.product",
"-Dlog.protocol=true",
"-Dlog.level=ALL",
"-Xms1g",
"--add-modules=ALL-SYSTEM",
"--add-opens",
"java.base/java.util=ALL-UNNAMED",
"--add-opens",
"java.base/java.lang=ALL-UNNAMED",
--
"-jar",
"/path/to/jdtls_install_location/plugins/org.eclipse.equinox.launcher_VERSION_NUMBER.jar",
"-configuration", "/path/to/jdtls_install_location/config_SYSTEM",
"-data", "/Users/YOUR_MACHINE_NAME/local/share/nvim/java"
},
settings = {
java = {
signatureHelp = {enabled = true},
import = {enabled = true},
rename = {enabled = true}
}
},
init_options = {
bundles = {}
}
}
Source the new config and open any java file.
I recommend using mfussenegger/nvim-jdtls to run and configure the language server.
It's simply a matter of setting up an ftplugin for Java that calls jdtls.start_or_attach(jdtls_config) whenever a Java file/repo is opened. This starts the language server and attaches it to your buffer, which you can verify with :LspInfo.
ftplugin/java.lua:
local jdtls_config = require("myconfig.lsp.jdtls").setup()
local pkg_status, jdtls = pcall(require,"jdtls")
if not pkg_status then
vim.notify("unable to load nvim-jdtls", "error")
return
end
jdtls.start_or_attach(jdtls_config)
And the corresponding config using jdtls (installed via Mason). You may want to provide your own capabilities and on_attach functions, but otherwise it should give you a good nudge in the right direction.
myconfig/lsp/jdtls.lua:
local opts = {
cmd = {},
settings = {
java = {
signatureHelp = { enabled = true },
completion = {
favoriteStaticMembers = {},
filteredTypes = {
-- "com.sun.*",
-- "io.micrometer.shaded.*",
-- "java.awt.*",
-- "jdk.*",
-- "sun.*",
},
},
sources = {
organizeImports = {
starThreshold = 9999,
staticStarThreshold = 9999,
},
},
codeGeneration = {
toString = {
template = "${object.className}{${member.name()}=${member.value}, ${otherMembers}}",
},
useBlocks = true,
},
configuration = {
runtimes = {
{
name = "JavaSE-1.8",
path = "/Library/Java/JavaVirtualMachines/amazon-corretto-8.jdk/Contents/Home",
default = true,
},
{
name = "JavaSE-17",
path = "/Library/Java/JavaVirtualMachines/jdk-17.jdk/Contents/Home",
},
{
name = "JavaSE-19",
path = "/Library/Java/JavaVirtualMachines/jdk-19.jdk/Contents/Home",
},
},
},
},
},
}
local function setup()
local pkg_status, jdtls = pcall(require,"jdtls")
if not pkg_status then
vim.notify("unable to load nvim-jdtls", "error")
return {}
end
-- local jdtls_path = vim.fn.stdpath("data") .. "/mason/packages/jdtls"
local jdtls_bin = vim.fn.stdpath("data") .. "/mason/bin/jdtls"
local root_markers = { ".gradle", "gradlew", ".git" }
local root_dir = jdtls.setup.find_root(root_markers)
local home = os.getenv("HOME")
local project_name = vim.fn.fnamemodify(root_dir, ":p:h:t")
local workspace_dir = home .. "/.cache/jdtls/workspace/" .. project_name
opts.cmd = {
jdtls_bin,
"-data",
workspace_dir,
}
local on_attach = function(client, bufnr)
jdtls.setup.add_commands() -- important to ensure you can update configs when build is updated
-- if you setup DAP according to https://github.com/mfussenegger/nvim-jdtls#nvim-dap-configuration you can uncomment below
-- jdtls.setup_dap({ hotcodereplace = "auto" })
-- jdtls.dap.setup_dap_main_class_configs()
-- you may want to also run your generic on_attach() function used by your LSP config
end
opts.on_attach = on_attach
opts.capabilities = vim.lsp.protocol.make_client_capabilities()
return opts
end
return { setup = setup }
These examples are yanked from my personal neovim config (jdtls config). Hope this helps get you rolling.
Also make sure you have JDK 17+ available for jdtls (I launch Neovim with JAVA_HOME set to my JDK 17 install).
(Your code can still be compiled by and run on JDK 8 -- I work on Gradle projects built with JDK 8 with no problems using this config.)
I have developed a TensorFlow model with Python on Linux, based on the tutorial here: http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/. I trained and saved the model using tf.train.Saver, and I am able to deploy the model in the Linux environment and perform prediction successfully. Now I need to be able to load this saved model in Java on Windows. Through extensive research online I have read that this does not work with tf.train.Saver and that I have to change my code to the SavedModel export used by TensorFlow Serving. I therefore followed the tutorial at https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py and changed my code. However, I now have an error with tf.FixedLenFeature where it is asking me to use FixedLenSequenceFeature. Here is the complete error message:
ValueError: First dimension of shape for feature x unknown. Consider using FixedLenSequenceFeature.
which is happening here:
feature_configs = {'x': tf.FixedLenFeature(shape=[None, img_size,img_size,num_channels], dtype=tf.float32),}
I am not sure this is the right path to go down, since I have a batch of images of size [batch_size, 128, 128, 3] and should not be using the sequence feature. It would be great if someone could clear this up for me and answer these questions:
1- Do I have to change my code from tf.train.Saver to the SavedModel/Serving export to be able to load the saved model and deploy it in Java?
2- If the answer to the above question is yes, how can I feed the data correctly and solve the aforementioned error?
3- Is there any example of how to deploy a model that was saved this way?
Here is my training code that throws the error:
import dataset
import tensorflow as tf
import time
from datetime import timedelta
import math
import random
import numpy as np
import os
#Adding Seed so that random initialization is consistent
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
batch_size = 32
#Prepare input data
classes = ['class1','class2','class3']
num_classes = len(classes)
# 20% of the data will automatically be used for validation
validation_size = 0.2
img_size = 128
num_channels = 3
train_path='/home/user1/Downloads/Expression/Augmented/Data/Train'
# We shall load all the training and validation images and labels into memory using openCV and use that during training
data = dataset.read_train_sets(train_path, img_size, classes, validation_size=validation_size)
print("Complete reading input data. Will Now print a snippet of it")
print("Number of files in Training-set:\t\t{}".format(len(data.train.labels)))
print("Number of files in Validation-set:\t{}".format(len(data.valid.labels)))
session = tf.Session()
serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {'x': tf.FixedLenFeature(shape=[None, img_size,img_size,num_channels], dtype=tf.float32),}
tf_example = tf.parse_example(serialized_tf_example, feature_configs)
x = tf.identity(tf_example['x'], name='x') # use tf.identity() to assign name
# x = tf.placeholder(tf.float32, shape=[None, img_size,img_size,num_channels], name='x')
## labels
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
##Network graph params
filter_size_conv1 = 3
num_filters_conv1 = 32
filter_size_conv2 = 3
num_filters_conv2 = 32
filter_size_conv3 = 3
num_filters_conv3 = 64
fc_layer_size = 128
def create_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.05))

def create_biases(size):
    return tf.Variable(tf.constant(0.05, shape=[size]))
def create_convolutional_layer(input,
                               num_input_channels,
                               conv_filter_size,
                               num_filters):
    ## We shall define the weights that will be trained using create_weights function.
    weights = create_weights(shape=[conv_filter_size, conv_filter_size, num_input_channels, num_filters])
    ## We create biases using the create_biases function. These are also trained.
    biases = create_biases(num_filters)
    ## Creating the convolutional layer
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, 1, 1, 1],
                         padding='SAME')
    layer += biases
    ## We shall be using max-pooling.
    layer = tf.nn.max_pool(value=layer,
                           ksize=[1, 2, 2, 1],
                           strides=[1, 2, 2, 1],
                           padding='SAME')
    ## Output of pooling is fed to Relu which is the activation function for us.
    layer = tf.nn.relu(layer)
    return layer
def create_flatten_layer(layer):
    # We know that the shape of the layer will be [batch_size img_size img_size num_channels]
    # But let's get it from the previous layer.
    layer_shape = layer.get_shape()
    ## Number of features will be img_height * img_width * num_channels. But we shall calculate it in place of hard-coding it.
    num_features = layer_shape[1:4].num_elements()
    ## Now, we Flatten the layer so we shall have to reshape to num_features
    layer = tf.reshape(layer, [-1, num_features])
    return layer
def create_fc_layer(input,
                    num_inputs,
                    num_outputs,
                    use_relu=True):
    # Let's define trainable weights and biases.
    weights = create_weights(shape=[num_inputs, num_outputs])
    biases = create_biases(num_outputs)
    # Fully connected layer takes input x and produces wx+b. Since these are matrices, we use the matmul function in Tensorflow
    layer = tf.matmul(input, weights) + biases
    if use_relu:
        layer = tf.nn.relu(layer)
    return layer
layer_conv1 = create_convolutional_layer(input=x,
num_input_channels=num_channels,
conv_filter_size=filter_size_conv1,
num_filters=num_filters_conv1)
layer_conv2 = create_convolutional_layer(input=layer_conv1,
num_input_channels=num_filters_conv1,
conv_filter_size=filter_size_conv2,
num_filters=num_filters_conv2)
layer_conv3= create_convolutional_layer(input=layer_conv2,
num_input_channels=num_filters_conv2,
conv_filter_size=filter_size_conv3,
num_filters=num_filters_conv3)
layer_flat = create_flatten_layer(layer_conv3)
layer_fc1 = create_fc_layer(input=layer_flat,
num_inputs=layer_flat.get_shape()[1:4].num_elements(),
num_outputs=fc_layer_size,
use_relu=True)
layer_fc2 = create_fc_layer(input=layer_fc1,
num_inputs=fc_layer_size,
num_outputs=num_classes,
use_relu=False)
y_pred = tf.nn.softmax(layer_fc2,name='y_pred')
y_pred_cls = tf.argmax(y_pred, dimension=1)
values, indices = tf.nn.top_k(y_pred, 3)
table = tf.contrib.lookup.index_to_string_table_from_tensor(
tf.constant([str(i) for i in xrange(3)]))
prediction_classes = table.lookup(tf.to_int64(indices))
session.run(tf.global_variables_initializer())
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
session.run(tf.global_variables_initializer())
def show_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
    acc = session.run(accuracy, feed_dict=feed_dict_train)
    val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
    msg = "Training Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
    print(msg.format(epoch + 1, acc, val_acc, val_loss))
total_iterations = 0
# saver = tf.train.Saver()
def train(num_iteration):
    global total_iterations
    for i in range(total_iterations,
                   total_iterations + num_iteration):
        x_batch, y_true_batch, _, cls_batch = data.train.next_batch(batch_size)
        x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(batch_size)
        feed_dict_tr = {x: x_batch,
                        y_true: y_true_batch}
        feed_dict_val = {x: x_valid_batch,
                         y_true: y_valid_batch}
        session.run(optimizer, feed_dict=feed_dict_tr)
        if i % int(data.train.num_examples/batch_size) == 0:
            print(i)
            val_loss = session.run(cost, feed_dict=feed_dict_val)
            epoch = int(i / int(data.train.num_examples/batch_size))
            show_progress(epoch, feed_dict_tr, feed_dict_val, val_loss)
            print("Saving the model Now!")
            # saver.save(session, save_path_full, global_step=i)
    total_iterations += num_iteration
train(num_iteration=10000)#3000
# Export model
# WARNING(break-tutorial-inline-code): The following code snippet is
# in-lined in tutorials, please update tutorial documents accordingly
# whenever code changes.
export_path_base = './SavedModel/'
export_path = os.path.join(
tf.compat.as_bytes(export_path_base),
tf.compat.as_bytes(str(1)))
print 'Exporting trained model to', export_path
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
# Build the signature_def_map.
classification_inputs = tf.saved_model.utils.build_tensor_info(
serialized_tf_example)
classification_outputs_classes = tf.saved_model.utils.build_tensor_info(
prediction_classes)
classification_outputs_scores = tf.saved_model.utils.build_tensor_info(values)
classification_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={
tf.saved_model.signature_constants.CLASSIFY_INPUTS:
classification_inputs
},
outputs={
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
classification_outputs_classes,
tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
classification_outputs_scores
},
method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))
tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y_pred)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'images': tensor_info_x},
outputs={'scores': tensor_info_y},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
builder.add_meta_graph_and_variables(
session, [tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict_images':
prediction_signature,
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
classification_signature,
},
legacy_init_op=legacy_init_op)
builder.save()
print 'Done exporting!'
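For completeness, the kind of Java code I am hoping to end up with looks roughly like this (an untested sketch on my part: the export path and the tensor names 'x' and 'y_pred' come from the script above, the 'serve' tag is what SavedModelBuilder writes, and the dummy input would be replaced with real image data):

import java.util.Arrays;
import java.util.List;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Tensor;

public class Predictor {
    public static void main(String[] args) {
        // Load the exported directory, e.g. ./SavedModel/1, with the "serve" tag.
        try (SavedModelBundle bundle = SavedModelBundle.load("./SavedModel/1", "serve")) {
            // One dummy 128x128x3 image; in practice this would hold real pixel data.
            float[][][][] batch = new float[1][128][128][3];
            try (Tensor<?> input = Tensor.create(batch)) {
                List<Tensor<?>> outputs = bundle.session().runner()
                        .feed("x", input)     // tensor named by tf.identity(..., name='x')
                        .fetch("y_pred")      // tensor named by tf.nn.softmax(..., name='y_pred')
                        .run();
                float[][] scores = new float[1][3];
                outputs.get(0).copyTo(scores);
                System.out.println("class scores: " + Arrays.toString(scores[0]));
            }
        }
    }
}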
OK, I'll try to keep this short.
First let me explain exactly what I am trying to get. If you open Windows Explorer and go to a network drive, there is a DFS tab there (DFS must be enabled via the servers on the network, so it may not be present).
In that tab there is a list called the "Referral List"... I want what is in that box. I believe this is the DFS or UNC path; you can correct me, it will help me.
What I have is \\domainname.com\something$\BUS\blah\myDriveHome, but this is tied to something else in that box that contains the actual server the share is sitting on, and that server share is what I need to run a compliance check against.
I cannot use an exe that is not packaged with Windows 7, nor any other exe, as we cannot distribute exes.
So what have I done? A VERY thorough search for things like DFS/UNC paths from the command line, PowerShell, and the registry, and no go. The command line "net use" only returns the linked path and not the server, so that is useless.
I only ever post a question when I hit a wall that is taking up too much programming time.
If anyone has any info I would be grateful.
Thanks
I was able to steal the C# code in this answer here and make some modifications so it works with .Net 2.0, and use it within PowerShell:
$dfsCode = @'
using System;
using System.Runtime.InteropServices;
public static class Dfs
{
private enum NetDfsInfoLevel
{
DfsInfo1 = 1,
DfsInfo2 = 2,
DfsInfo3 = 3,
DfsInfo4 = 4,
DfsInfo5 = 5,
DfsInfo6 = 6,
DfsInfo7 = 7,
DfsInfo8 = 8,
DfsInfo9 = 9,
DfsInfo50 = 50,
DfsInfo100 = 100,
DfsInfo150 = 150,
}
[DllImport("netapi32.dll", SetLastError = true)]
private static extern int NetApiBufferFree(IntPtr buffer);
[DllImport("Netapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
private static extern int NetDfsGetInfo(
[MarshalAs(UnmanagedType.LPWStr)] string DfsEntryPath, // DFS entry path for the volume
[MarshalAs(UnmanagedType.LPWStr)] string ServerName, // This parameter is currently ignored and should be NULL
[MarshalAs(UnmanagedType.LPWStr)] string ShareName, // This parameter is currently ignored and should be NULL.
NetDfsInfoLevel Level, // Level of information requested
out IntPtr Buffer // API allocates and returns buffer with requested info
);
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
private struct DFS_INFO_3
{
[MarshalAs(UnmanagedType.LPWStr)]
public string EntryPath;
[MarshalAs(UnmanagedType.LPWStr)]
public string Comment;
public int State;
public int NumberOfStorages;
public IntPtr Storage;
}
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
private struct DFS_STORAGE_INFO
{
public int State;
[MarshalAs(UnmanagedType.LPWStr)]
public string ServerName;
[MarshalAs(UnmanagedType.LPWStr)]
public string ShareName;
}
private static T GetStruct<T>(IntPtr buffer, int offset) where T : struct
{
T r = new T();
r = (T) Marshal.PtrToStructure((IntPtr)((long)buffer + offset * Marshal.SizeOf(r)), typeof(T));
return r;
}
public static string GetDfsInfo(string server)
{
string rval = null;
IntPtr b;
int r = NetDfsGetInfo(server, null, null, NetDfsInfoLevel.DfsInfo3, out b);
if(r != 0)
{
NetApiBufferFree(b);
// return passed string if not DFS
return rval;
}
DFS_INFO_3 sRes = GetStruct<DFS_INFO_3>(b,0);
if(sRes.NumberOfStorages > 0)
{
DFS_STORAGE_INFO sResInfo = GetStruct<DFS_STORAGE_INFO>(sRes.Storage,0);
rval = string.Concat(@"\\", sResInfo.ServerName, @"\", sResInfo.ShareName, @"\");
}
NetApiBufferFree(b);
return rval;
}
}
'@
Add-Type -TypeDefinition $dfsCode
[Dfs]::GetDfsInfo('\\ad.domain.com\Share')
This code will work with PowerShell 2.0 which is included with Windows 7.
I went another direction, using PsExec and DFSUtil to find the DFS info via the remote PC. It returns a lot of info, but I filtered it in PowerShell after reading the file and matching the UNC. I would post the how-to, but I had to do some major adapting on my end from the info on a few other sites about DFSUtil, what to look for, and PsExec. I will note this for PsExec:
cmd.exe /s /c C:\Temp\psexec.exe 2> $null
That "2> $null" will save you some headaches and keep your script from crashing if the return comes back on the error channel. You will need to run it in the PS console without that redirect to actually see the error, but when you have a script like mine performing 50+ system checks you don't want the whole thing to halt for just one error.
I am writing a program where I have a changelist (CL), and from it I need to access the previous revision of each file in that CL.
How can I get it?
The code I have written so far is:
IChangelist cl = server.getChangelist(clId);
List<IFileSpec> files = cl.getFiles(true);
for(int i = 0; i < files.size() ; i++) {
IFileSpec fileSpec=files.get(i);
}
Revision specifiers can help you here (see 'p4 help revisions').
In particular, the previous revision of each of those files is the file as of the previous changelist.
So, since clId is the changelist you care about, compute clPrev = (clId - 1), and then look for 'file@clPrev'.
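A rough sketch of how that could look with p4java, building on the snippet in the question. The getDepotFiles call (the equivalent of 'p4 files') and the surrounding method are assumptions about your setup; the key point is the '@clPrev' revision specifier appended to each depot path:

import java.util.List;
import com.perforce.p4java.core.IChangelist;
import com.perforce.p4java.core.file.FileSpecBuilder;
import com.perforce.p4java.core.file.IFileSpec;
import com.perforce.p4java.server.IOptionsServer;

public class PreviousRevisions {
    // Prints, for every file in changelist clId, the revision it had as of changelist clId - 1.
    static void printPreviousRevisions(IOptionsServer server, int clId) throws Exception {
        IChangelist cl = server.getChangelist(clId);
        int clPrev = clId - 1; // "as of" the previous changelist
        for (IFileSpec fileSpec : cl.getFiles(true)) {
            // Build a spec like //depot/path/file@<clPrev> and resolve it on the server.
            String specAtPrev = fileSpec.getDepotPathString() + "@" + clPrev;
            List<IFileSpec> prev = server.getDepotFiles(
                    FileSpecBuilder.makeFileSpecList(specAtPrev), false);
            if (!prev.isEmpty() && prev.get(0).getEndRevision() > 0) {
                System.out.println(specAtPrev + " is revision #" + prev.get(0).getEndRevision());
            }
        }
    }
}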
I am a beginner with Java and the Weka tool. I want to use the LogitBoost algorithm with DecisionStump as the weak learner in my Java code, but I don't know how to do this. I create a vector with six features (without the label feature) and I want to feed it into LogitBoost to get a label and the probability of that assignment. Labels are 1 or -1, and the train/test data is in an ARFF file. This is my code, but the algorithm always returns 0!
Thanks
double candidate_similarity(ha_nodes ha, WeightMatrix[][] wm, LogitBoost lgb, ArrayList<Attribute> atts) {
    double clslable = 0;
    try {
        // newdata is an Instances object loaded from an ARFF file with some labeled data
        lgb.buildClassifier(newdata);
        Evaluation eval = new Evaluation(newdata);
        eval.crossValidateModel(lgb, newdata, 10, new Random(1));

        feature_vector[0] = IP_sim(Main.a_new.dip, ha.candidate.dip_cand);
        feature_vector[1] = IP_sim(Main.a_new.sip, ha.candidate.sip_cand);
        feature_vector[2] = IP_s_d_sim(Main.a_new.sip, ha);
        feature_vector[3] = Dport_sim(Main.a_new.dport, ha);
        freq_weight(Main.a_new.Atype, ha, freq_avg, weight_avg, wm);
        feature_vector[4] = weight_avg;
        feature_vector[5] = freq_avg;

        double[] values = new double[] {
            feature_vector[0], feature_vector[1], feature_vector[2],
            feature_vector[3], feature_vector[4], feature_vector[5]
        };
        DenseInstance newInst = new DenseInstance(1.0, values);
        Instances dataUnlabeled = new Instances("TestInstances", atts, 0);
        dataUnlabeled.add(newInst);
        dataUnlabeled.setClassIndex(dataUnlabeled.numAttributes() - 1);
        // classify the instance we just added
        clslable = lgb.classifyInstance(dataUnlabeled.firstInstance());
    } catch (Exception ex) {
        // Logger.getLogger(Module2.class.getName()).log(Level.SEVERE, null, ex);
    }
    return clslable;
}
Where did this newdata come from? You need to load the file properly to get a correct classification. Use this class to load features from the file:
http://weka.sourceforge.net/doc/weka/core/converters/ArffLoader.html
I'm not posting example code because I use Weka with MATLAB, so I don't have examples in Java.
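That said, the loading step is short enough to sketch in Java; something along these lines (the file name train.arff and the class attribute being the last column are assumptions, so adjust them for your data):

import java.io.File;
import java.util.Arrays;
import weka.classifiers.meta.LogitBoost;
import weka.core.Instances;
import weka.core.converters.ArffLoader;

public class LoadAndTrain {
    public static void main(String[] args) throws Exception {
        // Load the training data from the ARFF file.
        ArffLoader loader = new ArffLoader();
        loader.setFile(new File("train.arff"));
        Instances newdata = loader.getDataSet();
        // Tell Weka which attribute is the class label (here: the last one).
        newdata.setClassIndex(newdata.numAttributes() - 1);

        LogitBoost lgb = new LogitBoost();   // DecisionStump is the default weak learner
        lgb.buildClassifier(newdata);

        // Classify the first instance and print the predicted class index and distribution.
        double label = lgb.classifyInstance(newdata.firstInstance());
        double[] dist = lgb.distributionForInstance(newdata.firstInstance());
        System.out.println("label index: " + label + ", distribution: " + Arrays.toString(dist));
    }
}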