I'm new to Weka. I want to add my double[] arrays to my Weka Instances object dataRaw, but I have no idea how to do it.
This is my code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;
import weka.core.DenseInstance;
import weka.core.Instances;
public class SVMTest
{
private Connection connect;
public SVMTest() throws Exception
{
try
{
String jdbcDriver ="org.gjt.mm.mysql.Driver";
String jdbcURL = "jdbc:mysql://localhost:3306/xign?";
Class.forName("com.mysql.jdbc.Driver");
connect = DriverManager
.getConnection("jdbc:mysql://localhost:3306/myDB?"
+ "user=" + "root" + "&password=" +
"xxx###111");
} catch (ClassNotFoundException ex)
{
Logger.getLogger(SVMTest.class.getName()).log(Level.SEVERE, null, ex);
}
}
public ArrayList<Double[]> loadValues(String generatedString) throws SQLException
{
ArrayList<Double[]> pictures = new ArrayList<>();
PreparedStatement ps = null;
ResultSet rs = null;
Double picture[] = new Double[3];
try
{
ps = connect.prepareStatement("SELECT X, Y, Z FROM myDB.Sensor WHERE key = ?");
ps.setString(1, generatedString);
rs = ps.executeQuery();
while(rs.next())
{
picture[0] = (rs.getDouble("X") * 100000);
picture[1] = (rs.getDouble("Y") * 100000);
picture[2] = (rs.getDouble("Z") * 100000);
pictures.add(picture);
picture = new Double[3];
}
}
catch (SQLException ex)
{
Logger.getLogger(SVMTest.class.getName()).log(Level.SEVERE, null, ex);
}
finally
{
if(rs != null )
try{ rs.close(); }
catch(SQLException ex) { ex.printStackTrace(); }
if(ps != null)
try{ ps.close(); }
catch(SQLException ex) { ex.printStackTrace(); }
}
return pictures;
}
public double [] toRawArray(Double[] array)
{
double[] out = new double[array.length];
for(int i = 0; i < array.length; i++)
{
out[i] = array[i];
}
return out;
}
public static void main(String[] args) throws Exception
{
SVMTest svm = new SVMTest();
ArrayList<Double[]> myValues = svm.loadValues("123456ASDF");
//at this point I want to add ArrayList<Double[]> myValues to
//weka Instances to classify the data but I don't really have
//an idea
Instances dataRaw = new Instances(?????); <--Error
for(Double[] a : myValues)
{
DenseInstance myDense = new DenseInstance(1.0, toRawArray(a));
dataRaw.add((Instance)myDense.dataset());
}
}
}
Double[] a looks like this:
for(Double[] a : alValues)
{
for(Double b : a))
{
System.out.print("[" + b + "]");
}
System.out.println();
}
//Output:
//[-1198.54][8534.44][4293.29]
//[-994.13][8812.43][3534.66]
//[-818.84][9026.96][2915.99]
//[-670.76][9186.82][2436.73]
A basic explanation:
First, to classify you need a model, and to get a model you need to train an algorithm on data that has attributes and a class index.
Attributes describe the "type of data": if you have employee data, then name, designation, age, salary, etc. are the attributes, or in simple terms the column names in a CSV file.
An attribute can be numeric (integer or real) or nominal (a plain string).
The class index is the index of the attribute/column that you want the algorithm to predict/classify based on the training instances. For example, you can predict salary using age and designation.
After the model is built, you can do classification (prediction) with it by sending data in the same format, meaning instances created with the same attributes and class index.
You need to be sure which algorithm you want to run and which attribute/column index you want to predict.
[Note: some algorithms work only with numeric data, others only with nominal data, and some work with both, so pick an algorithm that matches your type of data. There are other things to check before selecting an algorithm, but the type of data is the most basic one.]
I suggest you read up on machine learning and Weka before attempting to run an algorithm.
Here is sample code you can try; I am assuming your class index is z:
ArrayList<Attribute> attributes = new ArrayList<Attribute>();
attributes.add(new Attribute("x"));
attributes.add(new Attribute("y"));
attributes.add(new Attribute("z"));
Instances dataRaw = new Instances("TestInstances", attributes , 0);
dataRaw.setClassIndex(dataRaw.numAttributes() - 1); // Assuming z (z on lastindex) as classindex
for (Double[] a : myValues) {
    double[] values = new double[a.length];
    for (int i = 0; i < a.length; i++) {
        values[i] = a[i]; // unbox: DenseInstance needs a primitive double[]
    }
    dataRaw.add(new DenseInstance(1.0, values));
}
// Then train or build the algorithm/model on instances (dataRaw) created above.
MultilayerPerceptron mlp = new MultilayerPerceptron(); // Sample algorithm; read up on neural networks before using this, or replace it with an appropriate algorithm.
mlp.buildClassifier(dataRaw);
// Create a test instance. You may be able to create a test instance without a
// class value, but cross-check that in Weka.
double[] values = new double[]{-818.84, 9186.82, 2436.73}; // sample values
DenseInstance testInstance = new DenseInstance(1.0, values);
testInstance.setDataset(dataRaw); // To associate with instances object
// now you can classify
double classify = mlp.classifyInstance(testInstance);
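One note on reading the result (my assumption, based on the standard Weka API): with a numeric class attribute like z, classifyInstance returns the predicted value directly; if z were nominal, the returned value would be an index into the class attribute's labels:
// only needed for a nominal class attribute: map the returned index back to its label
String label = dataRaw.classAttribute().value((int) classify);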
For more information: How to use weka programmatically
I only get the first row from the table from a JDBC call.
Below is my code:
import java.util.*;
import java.sql.*;
public class TrainManagementSystem {
public ArrayList <Train> viewTrain (String coachType, String source, String destination){
Connection myCon = null;
PreparedStatement myStat = null;
ResultSet myRes = null;
ArrayList<Train> result = new ArrayList<>();
try{
myCon = DB.getConnection();
myStat = myCon.prepareStatement("select * from train where source = ? and destination = ? and ? > 0");
myStat.setString(1, source);
myStat.setString(2, destination);
myStat.setString(3, coachType);
myRes = myStat.executeQuery();
while(myRes.next()){
int train_number = myRes.getInt("train_number");
String train_name = myRes.getString("train_name");
String source1 = myRes.getString("source");
String destination1 = myRes.getString("destination");
int ac1 = myRes.getInt("ac1");
int ac2 = myRes.getInt("ac2");
int ac3 = myRes.getInt("ac3");
int sleeper = myRes.getInt("sleeper");
int seater = myRes.getInt("seater");
System.out.println(myRes.getString("train_name"));
Train train = new Train(train_number, train_name, source1, destination1, ac1, ac2, ac3, sleeper, seater);
result.add(train);
return result;
}
}catch(Exception e){
System.out.println(e);
}
return result;
}
}
I only get the first row in the result set; the query is not retrieving the second row.
I have attached my entries in the table.
Why is it not going to the next row? When I try removing coachType and only search for trains between Howrah and Dehradun, it gives me Dehradun Mail, whereas the result should include Doon Express as well.
Because you have a return result; inside your while loop.
Move it down, outside of the loop, as shown below:
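For clarity, the end of viewTrain would then look like this (the rest of the loop body stays unchanged):
while(myRes.next()){
    // ... read the columns and build the Train object as before ...
    result.add(train);
}
return result; // moved out of the loop, so every matching row is collected first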
My problem: I have a model engine that takes a list of parameter configurations and evaluates a double value corresponding to the metric associated with each configuration. I have six parameters, and each of them can vary according to a list. I want to find by brute force the best parameter configuration, i.e. the combination that produces the highest value of the output metric. Since I'm learning Spark, I realized that with the cartesian product operation I can easily generate the combinations and split the RDD to be processed in parallel. So I came up with this driver program:
public static void main(String[] args) {
String scriptName = "model.mry";
String scriptStr = null;
try {
scriptStr = new String(Files.readAllBytes(Paths.get(scriptName)));
} catch (IOException ex) {
Logger.getLogger(BruteForceDriver.class.getName()).log(Level.SEVERE, null, ex);
System.exit(1);
}
final String script = scriptStr;
SparkConf conf = new SparkConf()
.setAppName("wordCount")
.setSparkHome("/home/danilo/bin/spark-2.2.0-bin-hadoop2.7")
.setJars(new String[]{"/home/danilo/NetBeansProjects/SparkHello1/target/SparkHello1-1.0.jar",
"/home/danilo/.m2/repository/org/modcs/mercury/4.7/mercury-4.7.jar"})
.setMaster("spark://danilo-desktop:7077");
String baseDir = "/home/danilo/NetBeansProjects/SimulationOptimization/workspace/";
JavaSparkContext sc = new JavaSparkContext(conf);
final int NUM_SERVICES = 6;
final int QTD = 3;
JavaRDD<Service>[] providers = new JavaRDD[NUM_SERVICES];
for (int i = 1; i <= NUM_SERVICES; i++) {
providers[i - 1] = sc.textFile(baseDir + "provider"
+ i
+ ".mat")
.filter((t1) -> !t1.contains("#") && !t1.trim().isEmpty())
.map(Service.createParser("" + i))
.zipWithIndex().filter((t1) -> {
return t1._2 < QTD;
}).keys();
}
JavaPairRDD c = null;
JavaRDD<Service> p = providers[0];
for (int i = 1; i < NUM_SERVICES; i++) {
if (c == null) {
c = p.cartesian(providers[i]);
} else {
c = c.cartesian(providers[i]);
}
}
JavaRDD<List<Service>> cartesian = c.map(new FlattenTuple<>());
final Broadcast<ModelEvaluator> model = sc.broadcast(new ModelEvaluator(script));
JavaPairRDD<Double, List<Service>> results = cartesian.mapToPair(
(t) -> {
try {
double val = model.value().evaluateModel(t);
System.out.println(val);
return new Tuple2<>(val, t);
} catch (Exception ex) {
return null;
}
}
);
results.sortByKey().collect().forEach((t) -> {
System.out.println(t._1 + ", " + t._2);
});
sc.close();
}
The "QTD" variable allows me to control the size the interval which each parameter will vary. For QTD = 3, I'll have 3^6 = 729 combinations. The problem is that it is taking so long to compute all those combinations. I wrote a implementations using only normal Java threads, and the runtime is about 40 seconds. Using my Spark driver program, the runtime more than 6 minutes. Why is my Spark program so slow compared to the plain Java multi-thread program?
Edit:
I put:
results = results.cache();
before sorting the results and now the runtime is 2.5 minutes.
Edit 2:
I created an RDD with the cartesian product of the parameters by hand instead of using the operation provided by the framework. Now my runtime is 1'25''. It makes sense now, since there is some overhead to start the driver and ship the jars to the workers.
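For reference, a minimal sketch of what building the cartesian product by hand could look like (my assumption: each provider RDD holds only QTD elements, so it is safe to collect them to the driver):
List<List<Service>> combos = new ArrayList<>();
combos.add(new ArrayList<>());                       // start with one empty combination
for (JavaRDD<Service> provider : providers) {
    List<Service> options = provider.collect();      // tiny list (QTD elements)
    List<List<Service>> next = new ArrayList<>();
    for (List<Service> partial : combos) {
        for (Service option : options) {
            List<Service> extended = new ArrayList<>(partial);
            extended.add(option);
            next.add(extended);
        }
    }
    combos = next;
}
JavaRDD<List<Service>> cartesian = sc.parallelize(combos); // one RDD, no chained cartesian()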
Hi, I am writing a web server, hosted locally, that will receive latitude and longitude in the URL/URI from an Android device; these will be used as search criteria in an SQL SELECT query to retrieve the 5 closest train stations.
I have made the code work with hard-coded longitude and latitude, but now I need to add the functionality of passing them dynamically from the Android device using POST/GET. Unfortunately I have never used GET/POST, so I don't know where to start.
Below is my code from all classes in the web server. As said, it all works hard-coded, but it now needs to accept input from an Android device and return the same expected results. Thanks.
public class WebServer {
static String jArray = "";
public static void main(String[] args) {
try{
HttpServer server = HttpServer.create(new InetSocketAddress(8080),0);
server.createContext("/",new HttpHandler(){
public void handle(HttpExchange he) throws IOException{
try {
jArray = sqlConnector.train(jArray);
} catch (Throwable e) {
e.printStackTrace();
}
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(he.getResponseBody()));
System.out.println("Processing Request");
he.sendResponseHeaders(200, 0);
String output = "<html><head></head><body><p>" + jArray + "</p></body></html>";
bw.write(output);
bw.close();
}
});
server.start();
System.out.println("Started up . . .");
}
catch (IOException ioe){
System.err.println("problems Starting Webserver: " + ioe);
}
}
}
public class sqlConnector {
public static String train(String jArray) throws Exception{
PreparedStatement s = null;
try
{
Connection c = DriverManager.getConnection("jdbc:sqlite:C:/Users/Colin/trainstations.db");
s = c.prepareStatement("SELECT Latitude, Longitude, StationName,( 3959 * acos(cos(radians(53.4355)) * cos(radians(Latitude)) * cos(radians(Longitude) - radians(-3.0508)) + sin(radians(53.4355)) * sin(radians(Latitude )))) AS distance FROM stations ORDER BY distance ASC LIMIT 0,5;");
ResultSet rs = s.executeQuery();
jArray = jsonConverter.convertResultSetIntoJSON(rs, jArray);
}
catch (SQLException se)
{
se.printStackTrace();
}
return jArray;
}
}
public class jsonConverter {
public static String convertResultSetIntoJSON(ResultSet rs, String jArray) throws Exception {
JSONArray jsonArray = new JSONArray();
while (rs.next()) {
int total_rows = rs.getMetaData().getColumnCount();
JSONObject obj = new JSONObject();
for (int i = 0; i < total_rows; i++) {
String columnName = rs.getMetaData().getColumnLabel(i + 1).toString();
Object columnValue = rs.getObject(i + 1);
obj.put(columnName, columnValue);
}
jsonArray.put(obj);
}
jArray = jsonArray.toString();
return jArray;
}
}
I am currently connected to another web server that hosts the same data and is fully functional; after the port number, its URL format is as follows:
"/stations?lat=" + lat + "&lng=" + lng
where lat and lng are my variables obtained via GPS.
The process would be like this:
1) Parse the parameters from the query string: he.getRequestURI().getQuery()
/**
* returns the url parameters in a map
* @param query
* @return map
*/
public static Map<String, String> queryToMap(String query){
Map<String, String> result = new HashMap<String, String>();
for (String param : query.split("&")) {
String pair[] = param.split("=");
if (pair.length>1) {
result.put(pair[0], pair[1]);
}else{
result.put(pair[0], "");
}
}
return result;
}
2) Pass these parameters into the select method (note: long is a reserved word in Java, so the parameter is named lng):
public static String train(String jArray, double lat, double lng) throws Exception{
3) Use it in your statement
s = c.prepareStatement("SELECT Latitude, Longitude, StationName,
( 3959 * acos(cos(radians(?)) * cos(radians(Latitude))
* cos(radians(Longitude) - radians(?)) + sin(radians(?))
* sin(radians(Latitude ))))
AS distance FROM stations ORDER BY distance ASC LIMIT 0,5;");
s.setDouble(1, lat);
s.setDouble(2, lng);
s.setDouble(3, lat);
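Putting steps 1 to 3 together inside handle(...) might look roughly like this (a sketch, assuming the modified train(...) signature above):
Map<String, String> params = queryToMap(he.getRequestURI().getQuery());
double lat = Double.parseDouble(params.get("lat"));
double lng = Double.parseDouble(params.get("lng"));
jArray = sqlConnector.train(jArray, lat, lng);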
And finally
4) Fix your code:
use a database connection pool instead of connecting to the database for every call
use try-with-resources to automatically close ResultSet, PreparedStatement and Connection (even in case of errors); see the sketch below
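A minimal try-with-resources sketch of the train(...) method (connection pooling left out; the query is the parameterized one from step 3):
public static String train(String jArray, double lat, double lng) throws Exception {
    String sql = "SELECT Latitude, Longitude, StationName,"
            + " (3959 * acos(cos(radians(?)) * cos(radians(Latitude))"
            + " * cos(radians(Longitude) - radians(?)) + sin(radians(?))"
            + " * sin(radians(Latitude)))) AS distance"
            + " FROM stations ORDER BY distance ASC LIMIT 0,5";
    // resources declared here are closed automatically, even if an exception is thrown
    try (Connection c = DriverManager.getConnection("jdbc:sqlite:C:/Users/Colin/trainstations.db");
         PreparedStatement s = c.prepareStatement(sql)) {
        s.setDouble(1, lat);
        s.setDouble(2, lng);
        s.setDouble(3, lat);
        try (ResultSet rs = s.executeQuery()) {
            return jsonConverter.convertResultSetIntoJSON(rs, jArray);
        }
    }
}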
Hope that helps as a rough guideline :-)
How do I load values from an Oracle DB into a JComboBox to make it easier for the user to choose from? I have tried this:
Connection dbcon = null;
try {
Class.forName("oracle.jdbc.driver.OracleDriver");
dbcon = DriverManager.getConnection(
    "jdbc:oracle:thin:@localhost:1521:XE", "USERNAME", "PASSWORD");
Statement st = dbcon.createStatement();
String combo = "Select EMP_IDNUM from employees";
ResultSet res = st.executeQuery(combo);
while(res.next()){
String ids = res.getString("EMP_IDNUM");
String [] str = new String[]{ids};
cboId = new JComboBox(str);
}
} catch (Exception d) {
System.out.println(d);
}
This is only getting me the first value into the JComboBox cboId. What is the best way to load the entire field (EMP_IDNUM) into the JComboBox?
String [] str = new String[]{ids};
This means your String array holds only the single value you just loaded with String ids = res.getString("EMP_IDNUM");, and the JComboBox is recreated on every iteration. Collect all the values first, then build the combo box once:
ArrayList<String> idList = new ArrayList<>();
while(res.next()){
    idList.add(res.getString("EMP_IDNUM"));
}
String[] str = idList.toArray(new String[0]);
JComboBox jcb = new JComboBox(str);
Instead of an array you can also use a Vector to create the JComboBox.
you can use this code:
Vector v = new Vector();
while(res.next()){
String ids = res.getString("EMP_IDNUM");
v.add(ids);
}
JComboBox jcb = new JComboBox(v);
In this code you create a Vector with your strings and then directly invoke the JComboBox(Vector items) constructor of JComboBox.
There are three important areas:
a) close all JDBC objects in the finally block, because these objects are not closed for you automatically and may never be GC'ed
try {
} catch (Exception d) {
System.out.println(d);
} finally {
try {
st.close();
res.close();
dbcon.close();
} catch (Exception d) {
//not important
}
}
b) don't create any GUI objects inside try - catch - finally; prepare them beforehand,
meaning something like cboId = new JComboBox(); is created up front
c) put all the data from JDBC into the ComboBoxModel, which you also prepare beforehand; see the sketch below
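A minimal sketch of (c), assuming cboId already exists (javax.swing.DefaultComboBoxModel is needed):
DefaultComboBoxModel<String> model = new DefaultComboBoxModel<>();
while(res.next()){
    model.addElement(res.getString("EMP_IDNUM"));
}
cboId.setModel(model); // the JComboBox itself was created beforehand; only the model is swapped in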
I have a collection of raw text in a database table, and I need to replace some words in this collection using a set of substitutions.
I put all the terms to be replaced and their substitutes in a text file, like this:
min=admin
lelet=lambat
lemot=lambat
nii=nih
ntu=itu
and so on.
I have successfully initialized a File and a Scanner to read the collection of terms and their substitutes.
I loop over the whole dataset and save the raw text in a string.
In the same loop,
I loop over the term collection, save each row in a string named 'pattern', and split the pattern into two strings named 'term' and 'replacer'.
In this inner loop I create a new string whose value is the dataset string modified by replaceAll(term, replacer).
End of the loop over the term collection.
Then I insert the new string into another table in the database.
End of the loop over the dataset.
I do it manually, as below:
replaceAll("min","admin")
and it works, but it is really something to code it manually for almost 2000 terms to be replaced.
Has anyone ever faced this kind of problem?
I really need help now, I'm desperate :(
package sentimenrepo;
import javax.swing.*;
import java.sql.*;
import java.io.*;
//import java.util.HashMap;
import java.util.Scanner;
//import java.util.Map;
/**
*
* #author herman
*/
public class synonimReplaceV2 extends SwingWorker {
protected Object doInBackground() throws Exception {
new skripsisentimen.sentimenttwitter().setVisible(true);
Integer row = 0;
File synonimV2 = new File("synV2/catatan_kata_sinonim.txt");
String newTweet = "";
DB db = new DB();
Connection conn = db.dbConnect("jdbc:mysql://localhost:3306/tweet", "root", "");
try{
Statement select = conn.createStatement();
select.executeQuery("select * from synonimtweet");
ResultSet RS = select.getResultSet();
Scanner scSynV2 = new Scanner(synonimV2);
while(RS.next()){
row++;
String no = RS.getString("no");
String tweet = " "+ RS.getString("tweet");
String published = RS.getString("published");
String label = RS.getString("label");
clean2 cleanv2 = new clean2();
newTweet = cleanv2.cleanTweet(tweet);
try{
Statement insert = conn.createStatement();
insert.executeUpdate("INSERT INTO synonimtweet_v2(no,tweet,published,label) values('"
+no+"','"+newTweet+"','"+published+"','"+label+"')");
String current = skripsisentimen.sentimenttwitter.txtAreaResult.getText();
skripsisentimen.sentimenttwitter.txtAreaResult.setText(current+"\n"+row+"original : "+tweet+"\n"+newTweet+"\n______________________\n");
skripsisentimen.sentimenttwitter.lblStat.setText(row+" tweet read");
skripsisentimen.sentimenttwitter.txtAreaResult.setCaretPosition(skripsisentimen.sentimenttwitter.txtAreaResult.getText().length() - 1);
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
}
}
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
}
return row;
}
class clean2{
public clean2(){}
public String cleanTweet(String tweet){
File synonimV2 = new File("synV2/catatan_kata_sinonim.txt");
String pattern = "";
String term = "";
String replacer = "";
String newTweet="";
try{
Scanner scSynV2 = new Scanner(synonimV2);
while(scSynV2.hasNext()){
pattern = scSynV2.next();
term = pattern.split("=")[0];
replacer = pattern.split("=")[1];
newTweet = tweet.replace(term, replacer);
}
}catch(Exception e){
e.printStackTrace();
}
System.out.println(newTweet+"\n"+tweet);
return newTweet;
}
}
}
Update
I've just realized that the code actually works, but only for the first row in the database; the second row and the following ones stay untouched. Here is the newest code I've built:
public class synonimReplaceV2 extends SwingWorker {
protected Object doInBackground() throws Exception {
new skripsisentimen.sentimenttwitter().setVisible(true);
Integer row = 0;
String newTweet = "";
DB db = new DB();
Connection conn = db.dbConnect("jdbc:mysql://localhost:3306/tweet", "root", "");
try{
Statement select = conn.createStatement();
select.executeQuery("select * from synonimtweet limit 2,10");
ResultSet RS = select.getResultSet();
FileReader readSyn = new FileReader("synV2/catatan_kata_sinonim.txt");
BufferedReader buffSyn = new BufferedReader(readSyn);
while(RS.next()){
row++;
String no = RS.getString("no");
String tweet = " "+ RS.getString("tweet");
String published = RS.getString("published");
String label = RS.getString("label");
String pattern = "";
while((pattern=buffSyn.readLine())!=null){
String patternTerm = pattern.split("=")[0];
String patternSubs = pattern.split("=")[1];
tweet = tweet.replaceAll("\\s"+patternTerm, patternSubs);
}
try{
Statement insert = conn.createStatement();
insert.executeUpdate("INSERT INTO synonimtweet_v2(no,tweet,published,label) values('"
+no+"','"+tweet+"','"+published+"','"+label+"')");
String current = skripsisentimen.sentimenttwitter.txtAreaResult.getText();
skripsisentimen.sentimenttwitter.txtAreaResult.setText(current+"\n"+row+"original : "+tweet+"\n"+newTweet+"\n______________________\n");
skripsisentimen.sentimenttwitter.lblStat.setText(row+" tweet read");
skripsisentimen.sentimenttwitter.txtAreaResult.setCaretPosition(skripsisentimen.sentimenttwitter.txtAreaResult.getText().length() - 1);
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
}
}
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
// System.out.println(e.getMessage());
}
Thread.sleep(100);
return row;
}
}
Opening the synonym file and iterating over 2,000 lines for every row in your ResultSet is a bit wasteful.
Load your synonyms into an in-memory Map once, keyed by unique misspelt term, then do a lookup on the map for every row in your result set, and replace as necessary.
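A minimal sketch of that idea, assuming the term=replacement file format shown in the question (uses java.util.Map/HashMap and java.io.BufferedReader/FileReader):
Map<String, String> synonyms = new HashMap<>();
try (BufferedReader reader = new BufferedReader(new FileReader("synV2/catatan_kata_sinonim.txt"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        String[] parts = line.split("=", 2);
        if (parts.length == 2) {
            synonyms.put(parts[0], parts[1]); // e.g. "min" -> "admin"
        }
    }
}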
Let us use both solutions to build a single solution for you:
First, you create a HashMap with all your keys:
public static HashMap<String, String> getMap() {
//your version would read from the file
HashMap<String,String> myMap=new HashMap<String,String>();
myMap.put("min", "admin");
myMap.put("lelet", "lambat");
myMap.put("lemot", "lambat");
myMap.put("nii", "nih");
myMap.put("ntu", "itu");
return(myMap);
}
Second, you create a pattern that contains all the keys in your hashmap:
public static String getPattern(HashMap<String,String> mapReplacement) {
String pattern="";
for (String s : mapReplacement.keySet()) {
if (!pattern.isEmpty()) {
pattern=pattern+"|";
}
pattern=pattern+s;
}
return(pattern);
}
Next, you can create a cleanTweet method that uses both structures you created:
public static String cleanTweet(String tweet, Pattern pattern,HashMap<String, String> myMap) {
String newTweet=tweet;
Matcher matcher = pattern.matcher(newTweet);
while (matcher.find()) {
String key=matcher.group();
String replacement=myMap.get(key);
if (replacement!=null) {
newTweet=newTweet.replace(key, replacement );
}
}
return(newTweet);
}
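A possible way to wire the three pieces together (my assumption of how the Pattern gets built; java.util.regex.Pattern and Matcher are required):
HashMap<String, String> myMap = getMap();
Pattern pattern = Pattern.compile(getPattern(myMap));
String cleaned = cleanTweet(" min itu lelet nii", pattern, myMap);
System.out.println(cleaned); // expected: " admin itu lambat nih"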
This might require some tweaking to perfect (I only tested a few cases), but the point is that you iterate a single time over your keys and then iterate only over your tweets.
I hope it helps.
I didn't try, but it seems to me that you've almost got it - just replace this line:
newTweet = tweet.replace(term, replacer);
with this:
tweet = tweet.replaceAll(term, replacer);
As you're not using newTweet any more, return tweet:
return tweet;
You should also delete the newTweet declaration.
Also, you shouldn't use Scanner to read the lines; use a FileReader (wrapped in a BufferedReader) instead.
Thanks folks,
I've found out why the code was not working:
the txt file containing the terms and their substitutes has to be reopened each time the program reads a row from the database.
The code looks like this:
public class synonimReplaceV2 extends SwingWorker {
protected Object doInBackground() throws Exception {
new skripsisentimen.sentimenttwitter().setVisible(true);
Integer row = 0;
String newTweet = "";
DB db = new DB();
Connection conn = db.dbConnect("jdbc:mysql://localhost:3306/tweet", "root", "");
try{
Statement select = conn.createStatement();
select.executeQuery("select * from synonimtweet limit 2,10");
ResultSet RS = select.getResultSet();
while(RS.next()){
row++;
FileReader readSyn = new FileReader("synV2/catatan_kata_sinonim.txt");
BufferedReader buffSyn = new BufferedReader(readSyn);
String no = RS.getString("no");
String tweet = " "+ RS.getString("tweet");
String published = RS.getString("published");
String label = RS.getString("label");
String pattern = "";
while((pattern=buffSyn.readLine())!=null){
String patternTerm = pattern.split("=")[0];
String patternSubs = pattern.split("=")[1];
tweet = tweet.replaceAll("\\s"+patternTerm, patternSubs);
}
try{
Statement insert = conn.createStatement();
insert.executeUpdate("INSERT INTO synonimtweet_v2(no,tweet,published,label) values('"
+no+"','"+tweet+"','"+published+"','"+label+"')");
String current = skripsisentimen.sentimenttwitter.txtAreaResult.getText();
skripsisentimen.sentimenttwitter.txtAreaResult.setText(current+"\n"+row+"original : "+tweet+"\n"+newTweet+"\n______________________\n");
skripsisentimen.sentimenttwitter.lblStat.setText(row+" tweet read");
skripsisentimen.sentimenttwitter.txtAreaResult.setCaretPosition(skripsisentimen.sentimenttwitter.txtAreaResult.getText().length() - 1);
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
}
}
}catch(Exception e){
skripsisentimen.sentimenttwitter.lblStat.setText(e.getMessage());
// System.out.println(e.getMessage());
}
Thread.sleep(100);
return row;
}
}
But I actually want to apply the code rlinden posted above, and I can't figure out how to call the cleanTweet function.
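A hedged sketch of how that call could look inside the ResultSet loop (assuming rlinden's getMap(), getPattern() and cleanTweet() are available, with getMap() changed to read the synonym file, and java.util.regex.Pattern imported); the map and the Pattern are built once, before the loop:
HashMap<String, String> synonyms = getMap();               // load once, before while(RS.next())
Pattern pattern = Pattern.compile(getPattern(synonyms));   // compile the alternation once
while(RS.next()){
    String tweet = " " + RS.getString("tweet");
    tweet = cleanTweet(tweet, pattern, synonyms);          // replace all known terms
    // ... insert the cleaned tweet into synonimtweet_v2 as before ...
}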