I have a CSV file, basically a list of cities with some codes.
In my app, users type their city of birth, a list of matching cities is suggested, and when one is chosen its code is used for other things.
Can I just move the .csv file into an Android Studio folder and use it as a database made with SQLite?
If not, should I create the SQLite database in Android Studio (a DatabaseManager class with SQLiteOpenHelper and some queries, if I understood correctly) and then copy the .csv into it? How exactly do I "copy" it?
EDIT: Sorry, I realized that my CSV file has too many columns and that it would be ugly and tiring to add them manually. So I used DB Browser for SQLite, and now I have a .db file. Can I just put it in a specific database folder and query it from my app?
Can I just move the .csv file into an Android Studio folder and use it as a database made with SQLite?
No.
An SQLite database, i.e. the file itself, has to be formatted so that the SQLite routines can access the data it contains; e.g. the first 16 bytes of the file MUST BE SQLite format 3\000 and so on, as per the Database File Format documentation.
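For illustration, a minimal check (the method name here is just for the example) that a file really starts with that 16-byte header before you try to open it as a database:
// Requires: java.io.File, java.io.FileInputStream, java.nio.charset.StandardCharsets, java.util.Arrays.
// Returns true if the file starts with "SQLite format 3" followed by a 0x00 byte.
static boolean looksLikeSqlite3(File f) throws IOException {
    byte[] header = new byte[16];
    try (FileInputStream in = new FileInputStream(f)) {
        if (in.read(header) != 16) {
            return false;
        }
    }
    byte[] expected = "SQLite format 3\u0000".getBytes(StandardCharsets.US_ASCII);
    return Arrays.equals(header, expected);
}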
If not, should I create the SQLite database in Android Studio (a DatabaseManager class with SQLiteOpenHelper and some queries) and then copy the .csv?
You have various options e.g. :-
You could copy the csv file into an appropriate location so that it will be part of the package (e.g. the assets folder) and then have a routine to generate the appropriate rows in the appropriate table(s). This would require creating the database within the App.
You could simply hard code the inserts within the App. Again this would require creating the database within the App.
You could use an SQLite tool to create a pre-populated database, place it in the assets folder (assets/databases if using SQLiteAssetHelper) and have the App copy the database out of the assets folder on first run. There is no need for a csv file in this case.
Example of option 1
As an example that is close to option 1 (albeit that the data isn't stored in the database), the following code extracts data from a csv file in the assets folder.
This option is used in this case as the file changes on an annual basis, so changing the file and then distributing the App applies the changes.
The file looks like :-
# This file contains annual figures
# 5 figures are required for each year and are comma separated
# 1) The year to which the figures are relevant
# 2) The annualised MTAWE (Male Total Average Weekly Earnings)
# 3) The annual Parenting Payment Single (used to determine fixed assessment)
# 4) The fixed assessment annual rate
# 5) The Child Support Minimum Annual Rate
# Lines starting with # are comments and are ignored
2006,50648,13040,1040,320
2007,52073,13315,1102,330
2008,54756,13980,1122,339
2009,56425,13980,1178,356
2010,58854,14615,1193,360
2011,61781,15909,1226,370
2012,64865,16679,1269,383
2013,67137,17256,1294,391
2014,70569,18197,1322,399
2015,70829,18728,1352,408
2016,71256,19011,1373,414
2017,72462,19201,1390,420
2018,73606,19568,1416,427
It is stored in the assets folder of the App as annual_changes.txt. The following code is used to obtain the values (which could easily be added to a table) :-
// Loads the annual figures from the csv file in assets into mFormulaValues and mYears
private void BuildFormulaValues() {
    mFormulaValues = new ArrayList<>();
    mYears = new ArrayList<>();
    StringBuilder errors = new StringBuilder();
    try {
        InputStream is = getAssets().open(formula_values_file);
        BufferedReader bf = new BufferedReader(new InputStreamReader(is));
        String line;
        while ((line = bf.readLine()) != null) {
            // Ignore comment lines
            if (line.startsWith("#")) {
                continue;
            }
            String[] values = line.split(",");
            if (values.length == 5) {
                try {
                    mFormulaValues.add(
                            new FormulaValues(
                                    this,
                                    Long.parseLong(values[0]),
                                    Long.parseLong(values[1]),
                                    Long.parseLong(values[2]),
                                    Long.parseLong(values[3]),
                                    Long.parseLong(values[4])
                            )
                    );
                } catch (NumberFormatException e) {
                    if (errors.length() > 0) {
                        errors.append("\n");
                    }
                    errors.append(
                            this.getResources().getString(
                                    R.string.invalid_formula_value_notnumeric)
                    );
                    continue;
                }
                mYears.add(values[0]);
            } else {
                // Record a line that doesn't have the expected 5 values
                if (errors.length() > 0) {
                    errors.append("\n");
                }
                errors.append(
                        getResources().getString(
                                R.string.invalid_formula_value_line)
                );
            }
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
    if (errors.length() > 0) {
        String emsg = "Note CS CareCalculations may be inaccurate due to the following issues:-\n\n"
                + errors.toString();
        Toast.makeText(
                this,
                emsg,
                Toast.LENGTH_SHORT
        ).show();
    }
}
Try this for adding the .csv info to your DB:
FileReader file = new FileReader(fileName);
BufferedReader buffer = new BufferedReader(file);
String line;
String tableName = "TABLE_NAME";
String columns = "_id, name, dt1, dt2, dt3";
String str1 = "INSERT INTO " + tableName + " (" + columns + ") values(";
String str2 = ");";
db.beginTransaction();
try {
    while ((line = buffer.readLine()) != null) {
        StringBuilder sb = new StringBuilder(str1);
        String[] str = line.split(",");
        // Quote every value and separate the five values with commas
        sb.append("'").append(str[0]).append("',");
        sb.append("'").append(str[1]).append("',");
        sb.append("'").append(str[2]).append("',");
        sb.append("'").append(str[3]).append("',");
        sb.append("'").append(str[4]).append("'");
        sb.append(str2);
        db.execSQL(sb.toString());
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
buffer.close();
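Regarding the EDIT (a ready-made .db created with DB Browser for SQLite), that is essentially option 3 above: bundle the file in the assets folder and copy it into the App's database directory on first run, then open it like any other SQLite database. A rough sketch of that copy, assuming the file is called cities.db and sits under assets/databases (both names are just examples):
// Requires: android.content.Context and java.io.* — copies assets/databases/cities.db
// to the App's database path the first time the App runs.
private void copyDatabaseFromAssets(Context context) throws IOException {
    File dbFile = context.getDatabasePath("cities.db");
    if (dbFile.exists()) {
        return;   // already copied on a previous run
    }
    dbFile.getParentFile().mkdirs();
    try (InputStream in = context.getAssets().open("databases/cities.db");
         OutputStream out = new FileOutputStream(dbFile)) {
        byte[] buf = new byte[8192];
        int length;
        while ((length = in.read(buf)) > 0) {
            out.write(buf, 0, length);
        }
    }
}
The copied file can then be opened with SQLiteDatabase.openDatabase(dbFile.getPath(), null, SQLiteDatabase.OPEN_READONLY) or wrapped in an SQLiteOpenHelper.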
I have a problem reading back data that was previously recorded by my Java frontend into a MySQL database. I checked both MySQL and NetBeans and the encoding is UTF-8, but I still have this kind of problem.
Any tips?
I'm on a Mac with NetBeans 8.2.
My application shows the data with garbled characters, while MySQL itself shows the data with no issues (screenshots omitted).
The question is not precise enough, so here are some points you might try.
Add these statements in your Java frontend application, after the database connection is established and before you INSERT any data:
SET character_set_connection="utf8"
SET character_set_client="utf8"
SET character_set_database="utf8"
SET character_set_results="utf8"
SET character_set_server="utf8"
SET character_set_system="utf8"
You probably won't need them all; feel free to experiment to see which ones do the trick.
You may also log into a MySQL console and see actual settings by issuing a command:
mysql> show variables like '%character_set%';
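If the frontend uses MySQL Connector/J (an assumption on my part), the character set can also be requested directly in the JDBC URL, which usually makes the per-session SET statements unnecessary. A small sketch, with host, database and credentials as placeholders:
// Requires: java.sql.* — asks Connector/J to talk UTF-8 to the server.
String url = "jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF-8";
try (Connection con = DriverManager.getConnection(url, "user", "password");
     Statement st = con.createStatement()) {
    st.execute("SET NAMES utf8");   // optional: aligns the session character_set_* variables
}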
OK, I resolved it.
Basically it was a problem with the column type in MySQL: the column was a BLOB. I had already tried changing it to LONGTEXT, but even with the whole database in UTF-8, changing only the column type was not enough.
I had to change the collation as well, on both the database and the table column.
Thanks for your support!
Alex
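For reference, the kind of change described above could look roughly like this, assuming an open JDBC Connection con and using the DomandePianificazione table and Domanda column from the code below; the database name mydb is a placeholder, and utf8mb4 is used here although plain utf8 works the same way:
// Requires: java.sql.Statement — converts the database default and the affected
// table/column to a UTF-8 character set and collation.
try (Statement st = con.createStatement()) {
    st.executeUpdate("ALTER DATABASE mydb CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
    st.executeUpdate("ALTER TABLE DomandePianificazione"
            + " CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
    st.executeUpdate("ALTER TABLE DomandePianificazione"
            + " MODIFY Domanda LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
}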
This is the code that generates the JTextArea and all the data:
private void popolaPianificazione() {
    String tipo = "Pianificazione";
    String sql = "SELECT * FROM DomandePianificazione";
    ResultSet res = null;
    try {
        res = MysqlStuff.richiediDatiSQL(sql);
        if (res != null) {
            res.last();
            if (res.getRow() != 0) {
                res.beforeFirst();
                while (res.next()) {
                    final String contatore = res.getString("id");
                    int conta = Integer.parseInt(contatore);
                    JPanel temp = new javax.swing.JPanel(new MigLayout("fill", "grow"));
                    temp.setBorder(javax.swing.BorderFactory.createTitledBorder("DOMANDA " + "[" + conta + "]"));
                    String domande = res.getString("Domanda");
                    // String.replace returns a new String, so the result must be assigned
                    domande = domande.replace("รจ", "p");
                    javax.swing.border.Border border = BorderFactory.createEtchedBorder();
                    JTextArea domanda = new javax.swing.JTextArea(domande, 2, 2);
                    domanda.setBorder(border);
                    domanda.setBackground(colore);
                    domanda.setSize(400, 100);
                    domanda.setFont(font);
                    domanda.setMinimumSize(new Dimension(400, 100));
                    domanda.setLineWrap(true);
                    domanda.setWrapStyleWord(true);
                    domanda.setOpaque(false);
                    domanda.setEditable(false);
                    JCheckBox rispostaC = new javax.swing.JCheckBox("Si/No");
                    JCheckBox rispostaCom = new javax.swing.JCheckBox("A completamento");
                    String rispostaCheck = res.getString("rispostaCheck");
                    String rispostaCompleta = res.getString("rispostaCompleta");
                    if (!"no".equals(rispostaCheck)) {
                        rispostaC.setSelected(true);
                    } else {
                        rispostaCom.setSelected(true);
                    }
                    JButton edit = new javax.swing.JButton("Modifica la domanda");
                    ButtonGroup buttonGroup1 = new javax.swing.ButtonGroup();
                    buttonGroup1.add(rispostaC);
                    buttonGroup1.add(rispostaCom);
                    rispostaC.setEnabled(false);
                    rispostaC.setRolloverEnabled(false);
                    rispostaCom.setEnabled(false);
                    rispostaCom.setRolloverEnabled(false);
                    temp.add(edit, "wrap");
                    edit.addActionListener(new ActionListener() {
                        @Override
                        public void actionPerformed(ActionEvent e) {
                            if ("Salva le modifiche".equals(edit.getLabel())) {
                                System.out.println("Sto salvando...");
                                String pannello = "DomandePianificazione";
                                try {
                                    SalvaDomanda(tipo, contatore, domanda, rispostaC, rispostaCom, pannello);
                                    PanelPianificazione.revalidate();
                                    PanelPianificazione.repaint();
                                } catch (SQLException ex) {
                                    Logger.getLogger(ManageQuestionario.class.getName()).log(Level.SEVERE, null, ex);
                                }
SKIP
And this is the code that sends the data to MySQL:
public static void inviaDatiSQL(String sql, String stat) throws SQLException, ClassNotFoundException {
    UP = connetti();
    System.out.println("INVIO dati a DB: \n" + sql);
    PreparedStatement test = UP.prepareStatement(sql);
    test.setString(1, stat);
    test.executeUpdate();
    test.close();
    System.out.println("Finito !");
}
I'm collecting a bunch of sensor data in a Service and storing it in an SQL table, and when the user clicks a button I take all of that data and save it to a CSV file, but I keep getting "Window is full: requested allocation XXX" errors in logcat.
From a bit of googling I think this might be due to high RAM usage on my Nexus 5X?
When the user clicks the save button, the code to begin the process looks like this:
File subjectFile = new File(subjectDataDir, subNum + ".csv");
try {
    dbHelper.exportSubjectData(subjectFile, subNum);
} catch (SQLException | IOException e) {
    mainActivity.logger.e(getActivity(), TAG, "exportSubjectData error", e);
}
Then in my DBHelper, the exportSubjectData method looks like this:
public void exportSubjectData(File outputFile, String subNum) throws IOException, SQLException {
    csvWrite = new CSVWriter(new FileWriter(outputFile));
    curCSV = db.rawQuery("SELECT * FROM " + DATA_TABLE_NAME + " WHERE id = " + subNum, null);
    csvWrite.writeNext(curCSV.getColumnNames());
    while (curCSV.moveToNext()) {
        String[] arrStr = {curCSV.getString(0), curCSV.getString(1), curCSV.getString(2),
                curCSV.getString(3), curCSV.getString(4), curCSV.getString(5),
                curCSV.getString(6), curCSV.getString(7), curCSV.getString(8),
                curCSV.getString(9), curCSV.getString(10)};
        csvWrite.writeNext(arrStr);
    }
    csvWrite.close();
    curCSV.close();
}
Firstly, is this type of problem normally caused by RAM usage?
Assuming that my problem is high RAM usage in that section of code, is there a more efficient way to do this without consuming so much memory? The table it's trying to write to CSV has over 300,000 rows and 10 columns.
I guess you are using opencsv. What you can try is calling csvWrite.flush() after every x calls of csvWrite.writeNext(arrStr). That should write the data from memory out to disk.
You will have to experiment to find the best value for x.
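A small sketch of that idea, reusing the exportSubjectData method from the question with opencsv's CSVWriter (the batch size of 1000 is just a starting point to experiment with):
public void exportSubjectData(File outputFile, String subNum) throws IOException, SQLException {
    CSVWriter csvWrite = new CSVWriter(new FileWriter(outputFile));
    Cursor curCSV = db.rawQuery("SELECT * FROM " + DATA_TABLE_NAME + " WHERE id = " + subNum, null);
    try {
        csvWrite.writeNext(curCSV.getColumnNames());
        int rows = 0;
        while (curCSV.moveToNext()) {
            String[] arrStr = new String[curCSV.getColumnCount()];
            for (int i = 0; i < arrStr.length; i++) {
                arrStr[i] = curCSV.getString(i);
            }
            csvWrite.writeNext(arrStr);
            if (++rows % 1000 == 0) {
                csvWrite.flush();   // push buffered rows out to the file
            }
        }
    } finally {
        csvWrite.close();   // close() flushes whatever is still buffered
        curCSV.close();
    }
}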
This is my first time trying to read and write to a VSAM file. What I did was:
Created a Map for the File using VSE Navigator
Added the Java beans VSE Connector library to my eclipse Java project
Used the code shown below to write to and read from the KSDS file.
Reading the file is not a problem, but writing to it only works if I go on the mainframe and close the file before running my Java program, and then it locks the file for about an hour: you cannot open the file on the mainframe or do anything with it.
Can anybody help with this problem? Is there a special setting that I need to set up for the file on the mainframe? Why do I first need to close the file in CICS to be able to write to it? And why does it lock the file after writing to it?
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.*;

public class testVSAM {
    public static void main(String argv[]) {
        Integer test = Integer.valueOf(2893);
        String vsamCatalog = "VSESP.USER.CATALOG";
        String FlightCluster = "FLIGHT.ORDERING.FLIGHTS";
        String FlightMapName = "FLIGHT.TEST2.MAP";
        try {
            String ipAddr = "10.1.1.1";
            String userID = "USER1";
            String password = "PASSWORD";
            java.sql.Connection jdbcCon;
            // Load and register the VSAM JDBC driver
            java.sql.Driver jdbcDriver = (java.sql.Driver) Class.forName(
                    "com.ibm.vse.jdbc.VsamJdbcDriver").newInstance();
            // Build the URL to use to connect
            String url = "jdbc:vsam:" + ipAddr;
            // Assign properties for the driver
            java.util.Properties prop = new java.util.Properties();
            prop.put("port", test);
            prop.put("user", userID);
            prop.put("password", password);
            // Connect to the driver
            jdbcCon = DriverManager.getConnection(url, prop);
            try {
                java.sql.PreparedStatement pstmt = jdbcCon.prepareStatement(
                        "INSERT INTO " + vsamCatalog + "\\" + FlightCluster + "\\" + FlightMapName +
                        " (RS_SERIAL1,RS_SERIAL2,RS_QTY1,RS_QTY2,RS_UPDATE,RS_UPTIME,RS_EMPNO,RS_PRINTFLAG," +
                        "RS_PART_S,RS_PART_IN_A_P,RS_FILLER)" + " VALUES(?,?,?,?,?,?,?,?,?,?,?)");
                //pstmt.setString(1, "12345678901234567890123003");
                pstmt.setString(1, "1234567890");
                pstmt.setString(2, "1234567890123");
                pstmt.setInt(3, 00);
                pstmt.setInt(4, 003);
                pstmt.setString(5, "151209");
                pstmt.setString(6, "094435");
                pstmt.setString(7, "09932");
                pstmt.setString(8, "P");
                pstmt.setString(9, "Y");
                pstmt.setString(10, "Y");
                pstmt.setString(11, " ");
                // Execute the query
                int num = pstmt.executeUpdate();
                System.out.println(num);
                pstmt.close();
            } catch (SQLException t) {
                System.out.println(t.toString());
            }
            try {
                // Get a statement
                java.sql.Statement stmt = jdbcCon.createStatement();
                // Execute the query ...
                java.sql.ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM " + vsamCatalog + "\\" + FlightCluster + "\\" + FlightMapName);
                while (rs.next()) {
                    System.out.println(rs.getString("RS_SERIAL1") + " " + rs.getString("RS_SERIAL2") + " "
                            + rs.getString("RS_UPTIME") + " " + rs.getString("RS_UPDATE"));
                }
                rs.close();
                stmt.close();
            } catch (SQLException t) {
                // Don't swallow the exception silently
                t.printStackTrace();
            }
        } catch (Exception e) {
            // do something appropriate with the exception, *at least*:
            e.printStackTrace();
        }
    }
}
Note: the OS is z/VSE
The short answer to your original question is that KSDS VSAM is not a DBMS.
As you have discovered, you can define the VSAM file such that you can update it both from batch and from CICS, but as @BillWoodger points out, you must serialize your updates yourself.
Another approach would be to do all updates from the CICS region, and have your Java application send a REST or SOAP or MQ message to CICS to request its updates. This does require there be a CICS program to catch the requests from the Java application and perform the updates.
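As a very rough illustration of that approach, the Java side could look something like the sketch below. The URL, the JSON payload and the existence of a CICS-hosted HTTP service are all assumptions for the example; the CICS program that receives the request and applies the VSAM update still has to be written:
// Requires: java.net.URL, java.net.HttpURLConnection, java.io.OutputStream,
// java.nio.charset.StandardCharsets — posts one update request to a hypothetical CICS service.
URL url = new URL("http://10.1.1.1:8080/flights/update");   // placeholder host/port/path
HttpURLConnection con = (HttpURLConnection) url.openConnection();
con.setRequestMethod("POST");
con.setRequestProperty("Content-Type", "application/json");
con.setDoOutput(true);
String payload = "{\"serial1\":\"1234567890\",\"qty1\":0,\"empno\":\"09932\"}";   // example fields
try (OutputStream out = con.getOutputStream()) {
    out.write(payload.getBytes(StandardCharsets.UTF_8));
}
int rc = con.getResponseCode();   // the CICS program decides how success/failure is reported
con.disconnect();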
The IBM mainframe under z/VSE has different partitions that run different jobs. For example, partition F7 runs CICS, partition F8 runs batch jobs, etc.
When you define a new VSAM file you have to set the SHAREOPTIONS of the file. When I defined the file I set SHAREOPTIONS (2 3). The 2 means that only one partition can write to the file.
So when the batch program (in a different partition from the CICS partition) that was called from Java tried to write to the file, it could not do so unless I first closed the file in CICS.
To fix it I redefined the file with SHAREOPTIONS (4 3). The 4 means that multiple partitions of the mainframe can write to it, which fixed the problem.
Below is part of the definition job where you set the SHAREOPTIONS:
* $$ JOB JNM=DEFFI,CLASS=9,DISP=D,PRI=9
* $$ LST CLASS=X,DISP=H,PRI=2,REMOTE=0,USER=JAVI
// JOB DEFFI
// EXEC IDCAMS,SIZE=AUTO
DEFINE CLUSTER -
( -
NAME (FLIGHT.ORDERING.FLIGHTS) -
RECORDS (2000 1000) -
INDEXED -
KEYS (26 0) -
RECORDSIZE (128 128) -
SHAREOPTIONS (4 3) -
VOLUMES (SYSWKE) -
) -
.
.
.
I have been working on a web crawler for some time now. The idea is simple: I have an SQL table containing a list of websites, and many threads that fetch the first website from the table, delete it, and then crawl it (in a heap-like manner).
The code is a bit too long, so I'm going to cut some parts of it:
while (true) {
    if (!stopped) {
        System.gc();
        Statement stmt;
        String scanned = "scanned";
        if (!scan) scanned = "crawled";
        Connection connection = null;
        try {
            connection = Utils.getConnection();
        } catch (Exception e1) {
            // connection is still null if getConnection() failed, so just log the error
            e1.printStackTrace();
        }
        String name;
        stmt = connection.createStatement();
        ResultSet rs = null;
        boolean next;
        do {
            rs = stmt.executeQuery("select url from websites where " + scanned + " = -1");
            next = rs.next();
        } while (next && Utils.inBlackList(rs.getString(1)));
        if (next) {
            name = rs.getString(1);
            stmt.executeUpdate("UPDATE websites SET " + scanned + " = 1 where url = '" + Utils.stripDomainName(name) + "'");
            String backup_name = name;
            name = Utils.checkUrl(name);
            System.out.println(scanned + " of the website : " + name + " just started by the Thread : " + num);
            // And here is the important part, I think
            CrawlConfig config = new CrawlConfig();
            String ts = Utils.getTime();
            SecureRandom random = new SecureRandom();
            String SessionId = new BigInteger(130, random).toString(32);
            String crawlStorageFolder = "tmp/temp_storageadmin" + SessionId;
            config.setCrawlStorageFolder(crawlStorageFolder);
            config.setPolitenessDelay(Main.POLITENESS_DELAY);
            config.setMaxDepthOfCrawling(Main.MAX_DEPTH_OF_CRAWLING);
            config.setMaxPagesToFetch(Main.MAX_PAGES_TO_FETCH);
            config.setResumableCrawling(Main.RESUMABLE_CRAWLING);
            int numberOfCrawlers = Main.NUMBER_OF_CRAWLERS;
            PageFetcher pageFetcher = new PageFetcher(config);
            RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
            RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
            try {
                controller = new CrawlerController(config, pageFetcher, robotstxtServer);
                controller.addSeed(name);
                controller.setSeeed(name);
                controller.setTimestamp(ts);
                controller.setSessiiid("admin" + num + scan);
                //Main.crawls.addCrawl("admin"+num+scan, new Crawl(name,"admin"+num+scan,ts));
                stmt.executeUpdate("DELETE FROM tempCrawl WHERE SessionID = '" + "admin" + num + scan + "'");
                if (!scan) {
                    // Main.crawls.getCrawl("admin"+num+scan).setCrawl(true);
                    stmt.executeUpdate("INSERT INTO tempCrawl (SessionID, url, ts, done, crawledpages, misspelled, english, proper, scan, crawl )"
                            + " VALUES ( '" + "admin" + num + scan + "', '" + name + "', '" + ts + "', false, 0, 0, true, false, " + false + " , " + true + " )");
                } else {
                    //Main.crawls.getCrawl("admin"+num+scan).setScan(true);
                    stmt.executeUpdate("INSERT INTO tempCrawl (SessionID, url, ts, done, crawledpages, misspelled, english, proper, scan, crawl )"
                            + " VALUES ( '" + "admin" + num + scan + "', '" + name + "', '" + ts + "', false, 0, 0, true, false, " + true + " , " + false + " )");
                }
                connection.close();
                controller.start_auto(Crawler.class, numberOfCrawlers, false, scan, num);
            } catch (Exception e) {
                rs.close();
                connection.close();
                e.printStackTrace();
            }
        } else {
            rs.close();
            connection.close();
        }
        //CrawlerController.start_auto(scan, num);
        if (stopping) {
            stopped = true;
            stopping = false;
        }
    }
}
// closing of an outer try block that is in one of the parts cut above
} catch (Exception e) {
    e.printStackTrace();
}
As you can see, each time around the loop I create a CrawlController, crawl a website, and so on.
The problem is that the JVM heap keeps growing considerably in size. After profiling the application with the YourKit Java profiler, I located the memory leak at the following line of code:
(YourKit profiling screenshot)
This is the exact line where the memory leak starts; this env variable seems to take up too much space and keeps growing after each operation, even though the operations are independent.
Environment env = new Environment(envHome, envConfig);
I don't really know what this variable does or how I could fix it. One more thing: I did alter the CrawlController source code, in case that is relevant.
I assume that you are using crawler4j as the crawling framework.
Every time you create a crawl controller you instantiate a new frontier, which is shared between the crawler threads to manage the queue of URLs to crawl. Moreover, a so-called 'docIdServer' is created, which has the responsibility of tracking whether an incoming URL (i.e. website) has already been processed in this crawl.
This frontier and the docIdServer are based on an in-memory database, and the environment is responsible for its caching, locking, logging and transactions. For that reason, this variable will grow over time.
If you set resumable crawling to true, the database will operate in file mode, where it will grow more slowly.
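A rough sketch of how that can be combined with deleting the per-crawl storage folder once a crawl has finished, so the environment files from previous crawls do not pile up on disk (the stock crawler4j API is assumed here rather than the modified CrawlerController from the question):
// Requires: java.io.File plus the crawler4j classes already used in the question.
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorageFolder);
config.setResumableCrawling(true);   // file-backed frontier/docIdServer instead of a purely in-memory one

// ... run the crawl as before ...

// When this crawl is done, remove its temporary storage folder.
deleteRecursively(new File(crawlStorageFolder));

static void deleteRecursively(File dir) {
    File[] children = dir.listFiles();
    if (children != null) {
        for (File child : children) {
            deleteRecursively(child);
        }
    }
    dir.delete();
}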