Why does this env object keep growing in size? - java

I have been working on a web crawler for some time now. The idea is simple: I have a SQL table containing a list of websites, and many threads that fetch the first website from the table, delete it, and then crawl it (in a heap-like manner).
The code is a bit too long, so I'll trim some parts of it:
while(true){
if(!stopped){
System.gc();
Statement stmt;
String scanned = "scanned";
if (!scan)scanned = "crawled";
Connection connection = null;
try {
connection = Utils.getConnection();
} catch (Exception e1) {
// note: do not call connection.close() here; the connection is still null if getConnection() failed
e1.printStackTrace();
}
String name;
stmt = connection.createStatement();
ResultSet rs = null;
boolean next;
do {
rs = stmt.executeQuery("select url from websites where "+scanned+" = -1");
next = rs.next();
} while (next && Utils.inBlackList(rs.getString(1)));
if(next){
name = rs.getString(1);
stmt.executeUpdate("UPDATE websites SET "+scanned+" = 1 where url = '"+Utils.stripDomainName(name)+"'");
String backup_name = name;
name = Utils.checkUrl(name);
System.out.println(scanned + " of the website : " + name +" just started by the Thread : " + num);
// And here is the important part, I think
CrawlConfig config = new CrawlConfig();
String ts = Utils.getTime();
SecureRandom random = new SecureRandom();
String SessionId = new BigInteger(130, random).toString(32);
String crawlStorageFolder = "tmp/temp_storageadmin"+SessionId;
config.setCrawlStorageFolder(crawlStorageFolder);
config.setPolitenessDelay(Main.POLITENESS_DELAY);
config.setMaxDepthOfCrawling(Main.MAX_DEPTH_OF_CRAWLING);
config.setMaxPagesToFetch(Main.MAX_PAGES_TO_FETCH);
config.setResumableCrawling(Main.RESUMABLE_CRAWLING);
int numberOfCrawlers = Main.NUMBER_OF_CRAWLERS;
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
try {
controller = new CrawlerController(config, pageFetcher, robotstxtServer);
controller.addSeed(name);
controller.setSeeed(name);
controller.setTimestamp(ts);
controller.setSessiiid("admin"+num+scan);
//Main.crawls.addCrawl("admin"+num+scan, new Crawl(name,"admin"+num+scan,ts));
stmt.executeUpdate("DELETE FROM tempCrawl WHERE SessionID = '"+"admin"+num+scan+"'");
if (!scan){
// Main.crawls.getCrawl("admin"+num+scan).setCrawl(true);
stmt.executeUpdate("INSERT INTO tempCrawl (SessionID, url, ts, done, crawledpages, misspelled, english, proper, scan, crawl )"
+ " VALUES ( '"+"admin"+num+scan+"', '"+name+"', '"+ts+"', false, 0, 0, true, false, "+false+" , "+true+" )");
}else{
//Main.crawls.getCrawl("admin"+num+scan).setScan(true);
stmt.executeUpdate("INSERT INTO tempCrawl (SessionID, url, ts, done, crawledpages, misspelled, english, proper, scan, crawl )"
+ " VALUES ( '"+"admin"+num+scan+"', '"+name+"', '"+ts+"', false, 0, 0, true, false, "+true+" , "+false+" )");
}
connection.close();
controller.start_auto(Crawler.class, numberOfCrawlers, false, scan,num);
} catch(Exception e){
rs.close();
connection.close();
e.printStackTrace();
}
}else{
rs.close();
connection.close();
}
//CrawlerController.start_auto(scan, num);
if (stopping){
stopped = true;
stopping = false;
}
}}
} catch (Exception e) {
e.printStackTrace();
}
As you can see, each time through the loop I create a CrawlerController, crawl a website, and so on.
The problem is that the JVM heap keeps growing considerably in size. After profiling the application with the YourKit Java profiler, I located the memory leak in the following lines of code:
[YourKit profiling screenshot]
This is the exact line where the memory leak starts; this env variable seems to take up too much space and keeps growing after each operation, even though the operations are independent:
Environment env = new Environment(envHome, envConfig);
I don't really know what this variable does or how I could fix it. One more thing: I did alter the CrawlController source code, which I thought might be relevant.

Assuming that you are using crawler4j as your crawling framework:
Every time you create a crawl controller, you instantiate a new frontier, which is shared between the crawler threads to manage the queue of URLs to crawl. Moreover, a so-called 'docIdServer' is created, which has the responsibility of tracking whether an incoming URL (i.e. website) has already been processed in this crawl.
This frontier and the docIdServer are based on an in-memory database, in which the environment is responsible for caching, locking, logging, and transactions. For that reason, this variable will grow over time.
If you set resumable crawling to true, the database will operate in file mode, and it will grow more slowly there.
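Independent of that, the loop in the question creates a fresh tmp/temp_storage... folder on every iteration and never removes it. A stdlib-only sketch (not crawler4j API) of cleaning such a folder up once its controller has finished; the .jdb file name is just an illustration of what the underlying Berkeley DB JE environment leaves behind:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class CrawlStorageCleanup {
    // Delete a per-crawl storage folder, deepest entries first,
    // so directories are already empty when their turn comes.
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(dir)) {
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path storage = Files.createTempDirectory("temp_storageadmin");
        Files.createFile(storage.resolve("00000000.jdb")); // stand-in for a BDB-JE log file
        deleteRecursively(storage);
        System.out.println(Files.exists(storage)); // prints "false"
    }
}
```

Calling something like this after controller.start_auto(...) returns would at least keep the on-disk storage from accumulating across iterations.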

How to scan and delete millions of rows in HBase

What Happened
All the data from last month was corrupted due to a bug in the system, so we have to delete and re-enter these records manually. Basically, I want to delete all the rows inserted during a certain period of time. However, I found it difficult to scan and delete millions of rows in HBase.
Possible Solutions
I found two ways to bulk delete:
The first one is to set a TTL, so that all the outdated records would be deleted automatically by the system. But I want to keep the records inserted before last month, so this solution does not work for me.
The second option is to write a client using the Java API:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
    Table table = null;
    Connection connection = null;
    try {
        Scan scan = new Scan();
        scan.setTimeRange(minTime, maxTime);
        connection = HBaseOperator.getHbaseConnection();
        table = connection.getTable(TableName.valueOf(tableName));
        ResultScanner rs = table.getScanner(scan);
        List<Delete> list = getDeleteList(rs);
        if (list.size() > 0) {
            table.delete(list);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (null != table) {
            try {
                table.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        if (connection != null) {
            try {
                connection.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

private static List<Delete> getDeleteList(ResultScanner rs) {
    List<Delete> list = new ArrayList<>();
    try {
        for (Result r : rs) {
            Delete d = new Delete(r.getRow());
            list.add(d);
        }
    } finally {
        rs.close();
    }
    return list;
}
But in this approach, all the records are stored in the ResultScanner rs, so the heap usage would be huge. And if the program crashes, it has to start from the beginning.
So, is there a better way to achieve the goal?
I don't know how many 'millions' you are dealing with in your table, but the simplest thing is not to try to put them all into a List at once, but to do it in more manageable steps using the .next(n) method. Something like this:
Result[] rows;
while ((rows = rs.next(numRows)).length > 0) {
    for (Result row : rows) {
        Delete del = new Delete(row.getRow());
        ...
    }
}
This way, you can control how many rows are returned from the server in a single RPC through the numRows parameter. Make sure it's large enough not to make too many round-trips to the server, but at the same time not so large that it kills your heap. You can also use the BufferedMutator to operate on multiple Deletes at once.
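The bounded-memory idea behind this can be sketched with plain collections standing in for the HBase scanner and table, so it runs without a cluster (all names here are illustrative, not HBase API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class BatchedDeleteSketch {
    // Drain a row source in fixed-size batches, "deleting" each batch before
    // fetching the next one, so at most batchSize rows are held in memory.
    static int deleteInBatches(Iterator<String> rows, int batchSize, List<List<String>> deletedBatches) {
        List<String> batch = new ArrayList<>(batchSize);
        int total = 0;
        while (rows.hasNext()) {
            batch.add(rows.next());
            if (batch.size() == batchSize) {
                deletedBatches.add(new ArrayList<>(batch)); // stands in for table.delete(batch)
                total += batch.size();
                batch.clear();
            }
        }
        if (!batch.isEmpty()) { // flush the final partial batch
            deletedBatches.add(new ArrayList<>(batch));
            total += batch.size();
        }
        return total;
    }
}
```

With the real client, the iterator is the ResultScanner and the flush is table.delete(list) (or a BufferedMutator, which does this buffering for you).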
Hope this helps.
I would suggest two improvements:
Use BufferedMutator to batch your deletes; it does exactly what you need – it keeps an internal buffer of mutations and flushes it to HBase when the buffer fills up, so you do not have to worry about keeping, sizing, and flushing your own list.
Improve your scan:
Use KeyOnlyFilter – since you do not need the values, there is no need to retrieve them.
Use scan.setCacheBlocks(false) – since you are doing a full-table scan, caching all the blocks on the region server does not make much sense.
Tune scan.setCaching(N) and scan.setBatch(N) – N will depend on the size of your keys; you should keep a balance between caching more and the memory it requires, but since you only transfer keys, N can probably be quite large.
Here's an updated version of your code:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
    try (Connection connection = HBaseOperator.getHbaseConnection();
         final Table table = connection.getTable(TableName.valueOf(tableName));
         final BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf(tableName))) {
        Scan scan = new Scan();
        scan.setTimeRange(minTime, maxTime);
        scan.setFilter(new KeyOnlyFilter());
        scan.setCaching(1000);
        scan.setBatch(1000);
        scan.setCacheBlocks(false);
        try (ResultScanner rs = table.getScanner(scan)) {
            for (Result result : rs) {
                mutator.mutate(new Delete(result.getRow()));
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Note the use of try-with-resources – if you omit it, make sure to .close() the mutator, rs, table, and connection.

How to change initial password when programmatically creating a Neo4j DB in Java

I need to programmatically create a Neo4j DB and then change the initial password. Creating the DB works, but I don't know how to change the password.
String graphDBFilePath = Config.getNeo4jFilesPath_Absolute() + "/" + schemaName; // This is an absolute path. Do not let Neo4j see it.
GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( new File(graphDBFilePath) ); // See http://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/GraphDatabaseService.html
// This transaction works
try (Transaction tx = graphDb.beginTx()) {
Node myNode = graphDb.createNode();
myNode.setProperty( "name", "my node");
tx.success();
}
// This transaction throws an error
Transaction tx = graphDb.beginTx();
try {
graphDb.execute("CALL dbms.changePassword(\'Spoon_XL\')"); // "Invalid attempt to change the password" error
tx.success();
} catch (Exception ex) {
Log.logError("createNewGraphDB() Change Initial Password: " + ex.getLocalizedMessage());
} finally {tx.close(); graphDb.shutdown();}
This may be version-dependent (and I don't know what version you are running), but I think the procedure is actually dbms.security.changePassword:
graphDb.execute("CALL dbms.security.changePassword('Spoon_XL')");
Hope this helps,
Tom

Problems with characters when getting data from Mysql

I have a problem getting data previously recorded by my Java frontend application in a MySQL database. I checked both MySQL and NetBeans, and the encoding is UTF-8, but I still have this kind of problem.
Any tips?
I'm on Mac with NetBeans 8.2.
My application shows the data like this: [screenshot]
MySQL shows the data with no issues: [screenshot]
The question is not precise enough, so here are some things you might try.
Add these statements in your Java frontend application after the database is connected and before you INSERT any data:
SET character_set_connection="utf8"
SET character_set_client="utf8"
SET character_set_database="utf8"
SET character_set_results="utf8"
SET character_set_server="utf8"
SET character_set_system="utf8"
Probably you won't need them all; feel free to experiment which ones do the trick.
You may also log into a MySQL console and see actual settings by issuing a command:
mysql> show variables like '%character_set%';
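To see why these variables matter, here is a self-contained demo of the usual failure mode: UTF-8 bytes being decoded as Latin-1 somewhere between the driver and the UI (the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "\u00e8"; // the letter "è"
        // UTF-8 encodes "è" as two bytes (0xC3, 0xA8); if the client then decodes
        // those bytes as Latin-1, each byte becomes its own character:
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8);
        String garbled = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // prints "Ã¨"
    }
}
```

If your JTextArea shows "Ã¨" where "è" should be, some layer is decoding UTF-8 bytes with the wrong charset, which is exactly what the character_set settings above control.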
OK, I resolved it.
Basically, it was a problem with the column type in MySQL, which was BLOB. I had already tried changing it to LONGTEXT, but even though the whole database was in UTF-8, changing only the column type wasn't enough!
I had to change the collation of both the database and the table column.
Thanks for your support!
Alex
This is the code that generates the JTextArea and all the data:
private void popolaPianificazione(){
String tipo="Pianificazione";
String sql = "SELECT * FROM DomandePianificazione";
ResultSet res = null;
try {
res = MysqlStuff.richiediDatiSQL(sql);
if(res != null){
res.last();
if(res.getRow() != 0){
res.beforeFirst();
while(res.next()){
final String contatore = res.getString("id");
int conta = Integer.parseInt(contatore);
JPanel temp = new javax.swing.JPanel(new MigLayout("fill","grow"));
temp.setBorder(javax.swing.BorderFactory.createTitledBorder("DOMANDA "+"["+conta+"]"));
String domande = res.getString("Domanda");
domande = domande.replace("è", "p"); // String.replace returns a new string; the result must be assigned
javax.swing.border.Border border = BorderFactory.createEtchedBorder();
JTextArea domanda = new javax.swing.JTextArea(domande,2,2);
domanda.setBorder(border);
domanda.setBackground(colore);
domanda.setSize(400, 100);
domanda.setFont(font);
domanda.setMinimumSize(new Dimension(400,100));
domanda.setLineWrap(true);
domanda.setWrapStyleWord(true);
domanda.setOpaque(false);
domanda.setEditable(false);
JCheckBox rispostaC = new javax.swing.JCheckBox("Si/No");
JCheckBox rispostaCom = new javax.swing.JCheckBox("A completamento");
String rispostaCheck = res.getString("rispostaCheck");
String rispostaCompleta = res.getString("rispostaCompleta");
if (!"no".equals(rispostaCheck)){
rispostaC.setSelected(true);
}
else{
rispostaCom.setSelected(true);
}
JButton edit = new javax.swing.JButton("Modifica la domanda");
ButtonGroup buttonGroup1 = new javax.swing.ButtonGroup();
buttonGroup1.add(rispostaC);
buttonGroup1.add(rispostaCom);
rispostaC.setEnabled(false);
rispostaC.setRolloverEnabled(false);
rispostaCom.setEnabled(false);
rispostaCom.setRolloverEnabled(false);
temp.add(edit,"wrap");
edit.addActionListener(new ActionListener(){
@Override
public void actionPerformed(ActionEvent e) {
if ("Salva le modifiche".equals(edit.getLabel())){
System.out.println("Sto salvando...");
String pannello = "DomandePianificazione";
try {
SalvaDomanda(tipo,contatore,domanda,rispostaC,rispostaCom,pannello);
PanelPianificazione.revalidate();
PanelPianificazione.repaint();
} catch (SQLException ex) {
Logger.getLogger(ManageQuestionario.class.getName()).log(Level.SEVERE, null, ex);
}
SKIP
And this is the code that sends data to MySQL:
public static void inviaDatiSQL(String sql, String stat) throws SQLException, ClassNotFoundException{
UP = connetti();
System.out.println("INVIO dati a DB: \n" + sql);
PreparedStatement test = UP.prepareStatement(sql);
test.setString(1, stat);
test.executeUpdate();
test.close(); // close the statement when done
System.out.println("Finito !");
}

Apache Commons - NNTP - "Article To List" - AWT

I am currently using Apache Commons Net to develop my own NNTP reader. Using the available tutorial, I was able to use some of their code to get articles back.
The Code I am using from NNTP Section -
System.out.println("Retrieving articles between [" + lowArticleNumber + "] and [" + highArticleNumber + "]");
Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);
System.out.println("Building message thread tree...");
Threader threader = new Threader();
Article root = (Article)threader.thread(articles);
Article.printThread(root, 0);
I need to take the articles and turn them into a List type so I can send them to AWT, using something like this:
List x = (List) b.GetGroupList(dog);
f.add(CreateList(x));
My entire code base for this section is:
public void GetThreadList(String Search) throws SocketException, IOException {
String hostname = USE_NET_HOST;
String newsgroup = Search;
NNTPClient client = new NNTPClient();
client.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out), true));
client.connect(hostname);
if (!client.authenticate(USER_NAME, PASS_WORD)) { // authenticate once and check the result
System.out.println("Authentication failed for user " + USER_NAME + "!");
System.exit(1);
}
String fmt[] = client.listOverviewFmt();
if (fmt != null) {
System.out.println("LIST OVERVIEW.FMT:");
for(String s : fmt) {
System.out.println(s);
}
} else {
System.out.println("Failed to get OVERVIEW.FMT");
}
NewsgroupInfo group = new NewsgroupInfo();
client.selectNewsgroup(newsgroup, group);
long lowArticleNumber = group.getFirstArticleLong();
long highArticleNumber = lowArticleNumber + 5000;
System.out.println("Retrieving articles between [" + lowArticleNumber + "] and [" + highArticleNumber + "]");
Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);
System.out.println("Building message thread tree...");
Threader threader = new Threader();
Article root = (Article)threader.thread(articles);
Article.printThread(root, 0);
try {
if (client.isConnected()) {
client.disconnect();
}
}
catch (IOException e) {
System.err.println("Error disconnecting from server.");
e.printStackTrace();
}
}
and -
public void CreateFrame() throws SocketException, IOException {
// Make a new program view
Frame f = new Frame("NNTP Reader");
// Pick my layout
f.setLayout(new GridLayout());
// Set the size
f.setSize(H_SIZE, V_SIZE);
// Make it resizable
f.setResizable(true);
//Create the menubar
f.setMenuBar(CreateMenu());
// Create the lists
UseNetController b = new UseNetController(NEWS_SERVER_CREDS);
String dog = "*";
List x = (List) b.GetGroupList(dog);
f.add(CreateList(x));
//f.add(CreateList(y));
// Add Listeners
f = CreateListeners(f);
// Show the program
f.setVisible(true);
}
I just want to take my list of returned news articles and send them to the display in AWT. Can anyone explain to me how to turn those Articles into a list?
Welcome to the DIY newsreader club. I'm not sure if you are trying to get a list of newsgroups on the server, or articles. You already have your Articles in an Iterable collection. Iterate through it, appending what you want in the list from each article. You probably aren't going to want to display the whole article body in a list view; more likely the message ID, subject, author, or date (or a combination as a string). For example, for a List of just subjects:
...
Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);
Iterator<Article> it = articles.iterator();
while(it.hasNext()) {
Article thisone = it.next();
MyList.add(thisone.getSubject());
//MyList should have been declared up there somewhere ^^^ and
//your GetThreadList method must include List in the declaration
}
return MyList;
...
My strategy has been to retrieve the articles via an iterator into an SQLite database, with the body, subject, references, etc. stored in fields. Then you can create a list sorted just how you want, with a link by primary key to retrieve what you need for individual articles as you display them. Another strategy would be an array of message IDs or article numbers, fetching each one individually from the news server as required. Have fun, particularly when you are coding for Android and want to display a list of threaded messages in the correct sequence with suitable indents and markers ;). In fact, you can learn a lot by looking at the open-source Groundhog newsreader project (to which I am eternally grateful):
http://bazaar.launchpad.net/~juanjux/groundhog/trunk/files/head:/GroundhogReader/src/com/almarsoft/GroundhogReader

reading huge data from database and writing into xml Java

I have a huge amount of data, billions of records, in tables. What is the best way to read it in plain Java and write it to an XML file?
Thanks
If by best you mean fastest, I would consider using native database tools to dump the data, as this will be far faster than using JDBC.
Java (+ Hibernate?) will slow the process down unnecessarily. It is easier to write a sqlplus script and spool formatted fields into your XML file.
In Toad you can right-click a table and click Export to XML. In the commercial version I think you can export all tables, but I'm not sure.
Another possibility (working with any DB that has a JDBC driver) would be to use Apache Cocoon. There are actually two ways: XSP (alone, or together with ESQL). Both technologies are really quick to develop with.
XSP alone example. Think of XSP as a little like JSP, but generating XML instead of HTML, from a DB for instance.
<?xml version="1.0"?>
<xsp:page language="java" xmlns:xsp="http://apache.org/xsp"
xmlns:esql="http://apache.org/cocoon/SQL/v2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://apache.org/cocoon/SQL/v2 xsd/esql.xsd"
space="strip">
<xsp:structure>
<xsp:include>java.sql.Connection</xsp:include>
<xsp:include>java.sql.DriverManager</xsp:include>
<xsp:include>java.sql.PreparedStatement</xsp:include>
<xsp:include>java.sql.SQLException</xsp:include>
<xsp:include>java.sql.ResultSet</xsp:include>
</xsp:structure>
<xsp:logic><![CDATA[
private static final String connectionString =
"jdbc:mysql://localhost/mandarin?user=mandarin&password=mandarin" ;
private Connection conn = null ;
private PreparedStatement pstmt = null ;
private void openDatabase() {
try {
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
conn = DriverManager.getConnection (connectionString);
pstmt = conn.prepareStatement(
"select " +
" count(*) as cardinality " +
" from " +
" unihan50 u " +
" where " +
" unicode_id >= ? and " +
" unicode_id <= ? " ) ;
} catch (SQLException e) {
e.printStackTrace();
}
}
private int getRangeCardinality ( int lowerBound, int upperBound ) {
int cnt = 0 ;
try {
cnt = 2 ;
pstmt.setInt ( 1, lowerBound ) ;
pstmt.setInt ( 2, upperBound ) ;
boolean sts = pstmt.execute () ;
if ( sts ) {
ResultSet rs = pstmt.getResultSet();
if (rs != null && rs.next() ) {
cnt = rs.getInt ( "cardinality" ) ;
}
}
} catch (SQLException e) {
e.printStackTrace();
}
return cnt ;
}
private void closeDatabase() {
try {
pstmt.close () ;
} catch (SQLException e) {
e.printStackTrace();
}
try {
conn.close () ;
} catch (SQLException e) {
e.printStackTrace();
}
}
]]>
</xsp:logic>
<ranges>
<xsp:logic><![CDATA[
openDatabase() ;
for ( int i = 0; i < 16 ; i++ ) {
int from = i * 0x1000 ;
int to = i * 0x1000 + 0x0fff ;
]]>
<range>
<from>0x<xsp:expr>Integer.toString(from, 16)</xsp:expr></from>
<to>0x<xsp:expr>Integer.toString(to, 16)</xsp:expr></to>
<count><xsp:expr>getRangeCardinality ( from, to )</xsp:expr></count>
</range>
}
closeDatabase () ;
</xsp:logic>
</ranges>
</xsp:page>
XSP is even more straightforward when coupled with ESQL. Here is a sample:
<?xml version="1.0" encoding="UTF-8"?>
<xsp:page language="java" xmlns:xsp="http://apache.org/xsp"
xmlns:esql="http://apache.org/cocoon/SQL/v2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsp-request="http://apache.org/xsp/request/2.0"
xsi:schemaLocation="http://apache.org/cocoon/SQL/v2 xsd/esql.xsd"
space="strip">
<keys>
<esql:connection>
<esql:pool>mandarinMySQL</esql:pool>
<esql:execute-query>
<esql:query><![CDATA[
select
unicode_id,
kMandarin,
...
from
unihan50_unified
where
add_strokes = 0
order by
radical
]]>
</esql:query>
<esql:results>
<esql:row-results><key><esql:get-columns /></key></esql:row-results>
</esql:results>
</esql:execute-query>
</esql:connection>
</keys>
</xsp:page>
I would use the database's built-in facilities (e.g. XML PATH) to get the data already converted into XML format.
Then there are two ways to write the file:
1. If you must have a Java (JDBC) interface to retrieve the data (due to business requirements), then simply read this data and write it to a file (no XML parser involved, unless you need to verify the output).
2. If you have no Java restriction, simply write a stored procedure that dumps the XML data to a file.
Update to comment:
Workflow for the fastest retrieval:
Create a stored procedure that retrieves the data and dumps it into a file.
Call this SP through Java (as you said you need to).
The SP can either return the file name, or take the file name as a parameter, so you can dynamically manage the output location.
I have not used Oracle for a very long time, but I hope this link can help you kickstart.
If the DB is Oracle, then you can simply use JDBC with a SQLX query. This will generate your result set directly as XML fragments on the server, much faster than if you did it yourself on the client side. SQLX has been available since 8.1.7 as project Aurora, and since 9i as standard in XML DB.
Here is a simple example.
select XMLelement ("Process",
XMLelement( "number", p.p_ka_id, '.', p_id ),
XMLElement( "name", p.p_name ),
XMLElement ( "processGroup", pg.pg_name ) )
from
PMP_processes p,
PMP_process_groups pg
where
condition ;
In addition to XMLElement, SQLX has XMLAttributes, XMLForest, XMLAgg, and more, which let you build any resulting tree.
Use StAX to write the xml, not DOM.
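A minimal sketch of the StAX approach, writing to a StringWriter here for brevity (in a real run each row would come from the JDBC ResultSet and the writer would wrap a FileWriter; the element and class names are illustrative):

```java
import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class StaxRowWriter {
    // Stream rows out one element at a time; nothing but the current row is
    // held in memory, which is the point of StAX over DOM for huge tables.
    static String writeRows(String[][] rows) throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
        w.writeStartDocument();
        w.writeStartElement("rows");
        for (String[] row : rows) {
            w.writeStartElement("row");
            w.writeStartElement("id");
            w.writeCharacters(row[0]);
            w.writeEndElement();
            w.writeStartElement("name");
            w.writeCharacters(row[1]);
            w.writeEndElement();
            w.writeEndElement();
        }
        w.writeEndElement();
        w.writeEndDocument();
        w.close();
        return out.toString();
    }
}
```

Unlike DOM, this never materializes the document, so memory stays flat no matter how many rows you stream.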
You can query the database, retrieve all the data into a ResultSet, and use the following code to start off a root element:
DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder();
Document document = documentBuilder.newDocument();
Element Element_root = document.createElement("rootElement");
Thereafter you can add as many child elements as needed using:
Element Element_childnode = document.createElement("childnode");//create child node
Element_childnode.appendChild(document.createTextNode("Enter the value of text here"));//add data to child node
Element_root.appendChild(Element_childnode);//close the child node
Do not forget to append the root node to the document at the end WITHOUT FAIL.
Use this to close the root:
document.appendChild(Element_root);
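The steps above build the tree in memory but never actually write it out. A sketch of the final serialization step with the JDK's Transformer (this is an illustration, not the asker's code; point the StreamResult at a FileWriter to produce the file):

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DomXmlWriter {
    // Build a tiny Document the same way as above, then serialize it to text.
    static String buildAndSerialize() throws Exception {
        Document document = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = document.createElement("rootElement");
        Element child = document.createElement("childnode");
        child.appendChild(document.createTextNode("value"));
        root.appendChild(child);
        document.appendChild(root); // attach the root last, as described above

        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        StringWriter out = new StringWriter(); // use a FileWriter here for real file output
        t.transform(new DOMSource(document), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildAndSerialize());
    }
}
```

Keep in mind that DOM holds the entire document in memory, which is exactly why the other answer recommends StAX for data this large.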
At the end, if you have an XSD, validate your XML against it; searching for online schema validation will give good results, e.g. http://tools.decisionsoft.com/schemaValidate/
NOTE: TIME!!! It will take time when the data is huge.
But I think this is one of the easiest ways of doing it. Considering the amount of data, one should run the program during downtime, when there is less traffic.
Hope this helps. Good luck, Gauls.
public class someclassname{
    public static String somemethodname(){
        String sql;
        sql = "SELECT * from yourdatabase.yourtable";
        return sql;
    }
    public static String anothermethodname(){
        /* this is another method which is used to execute another query simultaneously */
        String sql;
        sql = "SELECT * from yourdatabase.yourtable2";
        return sql;
    }
    private static void saveasxml(String sql, String targetFile) throws SQLException, XMLStreamException, IOException{
        int i, count;
        FileOutputStream fos;
        try{
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection("jdbc:mysql://yourdomain:yourport/yourdatabase", "username", "password");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(sql);
            ResultSetMetaData rsmd = rs.getMetaData();
            count = rsmd.getColumnCount();
            XMLOutputFactory outputFactory = XMLOutputFactory.newFactory();
            fos = new FileOutputStream(targetFile);
            XMLStreamWriter writer = outputFactory.createXMLStreamWriter(fos);
            writer.writeStartDocument();
            writer.writeCharacters("\n");
            writer.writeStartElement("maintag"); // XML element names cannot contain spaces
            writer.writeCharacters("\n");
            while(rs.next()){
                writer.writeCharacters("\t");
                writer.writeStartElement("row"); // one element per result-set row
                writer.writeCharacters("\n\t");
                for(i = 1; i < count + 1; i++){
                    writer.writeCharacters("\t");
                    writer.writeStartElement("Field" + i);
                    writer.writeCharacters(rs.getString(i));
                    writer.writeEndElement();
                    writer.writeCharacters("\n\t");
                }
                writer.writeEndElement();
                writer.writeCharacters("\n");
            }
            writer.writeEndElement();
            writer.writeEndDocument();
            writer.close();
            fos.close();
        }catch(ClassNotFoundException | SQLException e){
            e.printStackTrace(); // do not swallow exceptions silently
        }
    }
    public static void main(String args[]) throws Exception{
        saveasxml(somemethodname(), "file location-path");
        saveasxml(anothermethodname(), "file location path");
    }
}
Thanks all for replying. So far I have managed to get a solution based on using threads and multiple selects instead of one single complex SQL join (I hate complex SQL; life should be simple :) ), so I didn't waste too much time writing them; I am using a new thread for each select statement.
Any better solution in POJO, probably using Spring, is also fine.
Thanks,
gauls
