Not able to update page numbers in the TOC of a document using docx4j - Java

While generating the table of contents, I am not able to update the page numbers.
It throws the exception "Encountered broken bookmarks; not configured to remediate."
I used the code below to update the TOC:
TocGenerator tocGenerator = new TocGenerator(wordMLPackage);
tocGenerator.generateToc(0, " TOC \\o \"1-3\" \\h \\z \\u ", false);
tocGenerator.updateToc(false);

The message is coming from here:
/**
 * Calculate page numbers
 *
 * @return
 * @throws TocException
 */
private Map<String, Integer> getPageNumbersMap() throws TocException {

    // @since 6.1, check bookmarks are ok first
    // what to do if not ok?
    // - default behaviour is to fail
    // - but can be configured to remediate:
    boolean remediate = Docx4jProperties.getProperty("docx4j.toc.BookmarksIntegrity.remediate", false);

    BookmarksIntegrity bm = new BookmarksIntegrity();
    StringWriter sw = new StringWriter();
    bm.setWriter(sw);
    BookmarksStatus result = null;
    try {
        // Checks are performed on all bookmarks, not just those with
        // a name of the form "_Toc*". We don't check for missing _Toc bookmarks.
        result = bm.check(wordMLPackage.getMainDocumentPart(), remediate);
    } catch (Exception e) { /* won't happen */ }
    if (result == BookmarksStatus.BROKEN) {
        throw new TocException("Encountered broken bookmarks; not configured to remediate. \n" + sw.toString());
    }

    if (Docx4J.pdfViaFO()) {
        return getPageNumbersMapViaFOP();
    } else {
        // recommended
        return getPageNumbersMapViaService();
    }
}
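If broken bookmarks are expected in your documents, you can opt in to remediation via the property the check reads. A minimal sketch (the property key comes from the code above; the setProperty call is an assumption on my part, so verify it against your docx4j version):
// Ask BookmarksIntegrity to repair broken bookmarks instead of throwing TocException
// (setProperty overload assumed; alternatively set the key in docx4j.properties)
Docx4jProperties.setProperty("docx4j.toc.BookmarksIntegrity.remediate", true);
tocGenerator.updateToc(false);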
Please note that unless you are an existing licensee of Plutext's PDF Converter service, you won't be able to use getPageNumbersMapViaService().
For a possible alternative approach, please see https://www.docx4java.org/blog/2020/03/documents4j-for-toc-update/ which builds upon https://www.docx4java.org/blog/2020/03/documents4j-for-pdf-output/

Related

Java unique code generation fails when calling the recursive function

We have to implement unique code generation in Java. The concept: when we generate a code, the system checks whether that code has already been generated; if it has, the system creates a new code and checks again. But this logic fails in some cases and we are not able to identify what the issue is.
Here is the code to create the unique code
public Integer createCode() { // signature inferred from the recursive call and the returned value
    Integer code = null;
    try {
        int max = 999999;
        int min = 100000;
        code = (int) Math.round(Math.random() * (max - min + 1) + min);
        PreOrders preObj = null;
        preObj = WebServiceDao.getInstance().preOrderObj(code.toString());
        if (preObj != null) {
            createCode();
        }
    } catch (Exception e) {
        exceptionCaught();
        e.printStackTrace();
        log.error("Exception in method createCode() - " + e.toString());
    }
    return code;
}
The function preOrderObj calls a function to check whether the code exists in the database and, if it exists, returns the object. We are using Hibernate to map the database functions, with MySQL on the backend.
Here is the function preOrderObj
public PreOrders preOrderObj(String code) { // signature inferred from the call preOrderObj(code.toString())
    PreOrders preOrderObj = null;
    List<PreOrders> preOrderList = null;
    SessionFactory sessionFactory =
            (SessionFactory) ServletActionContext.getServletContext().getAttribute(HibernateListener.KEY_NAME);
    Session Hibernatesession = sessionFactory.openSession();
    try {
        Hibernatesession.beginTransaction();
        preOrderList = Hibernatesession.createCriteria(PreOrders.class).add(Restrictions.eq("code", code)).list(); // removed .add(Restrictions.eq("status", true))
        if (!preOrderList.isEmpty()) {
            preOrderObj = (PreOrders) preOrderList.iterator().next();
        }
        Hibernatesession.getTransaction().commit();
        Hibernatesession.flush();
    } catch (Exception e) {
        Hibernatesession.getTransaction().rollback();
        log.debug("This is my debug message.");
        log.info("This is my info message.");
        log.warn("This is my warn message.");
        log.error("This is my error message.");
        log.fatal("Fatal error " + e.getStackTrace().toString());
    } finally {
        Hibernatesession.close();
    }
    return preOrderObj;
}
Please guide us to identify the issue.
In the createCode method, when the randomly generated code already exists in the database, you call createCode again. However, the return value of the recursive call is never assigned to the code variable, so the colliding code is still returned and causes the error.
To fix the problem, update the method as follows:
...
if (preObj != null) {
    //createCode();
    code = createCode();
}
...
so that code is updated with the result of the recursive call.
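Alternatively, a loop avoids unbounded recursion entirely. A sketch under the same assumptions as the code above (the WebServiceDao lookup and the min/max range):
public Integer createCode() {
    int max = 999999;
    int min = 100000;
    Integer code;
    do {
        // draw candidates until one is free in the database
        code = (int) Math.round(Math.random() * (max - min + 1) + min);
    } while (WebServiceDao.getInstance().preOrderObj(code.toString()) != null);
    return code;
}
Note that either version still has a race window between the uniqueness check and the insert; a unique constraint on the column (or the auto-increment suggested below) closes it.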
By the way, using a random number to generate a unique value and testing uniqueness with a query is a bit fragile. You may want to try an auto-increment column if you just need a unique value.

Migration from dcm4che2 to dcm4che3

I have used the below-mentioned APIs of dcm4che2 from this repository http://www.dcm4che.org/maven2/dcm4che/ in my Java project.
dcm4che-core-2.0.29.jar
org.dcm4che2.data.DicomObject
org.dcm4che2.io.StopTagInputHandler
org.dcm4che2.data.BasicDicomObject
org.dcm4che2.data.UIDDictionary
org.dcm4che2.data.DicomElement
org.dcm4che2.data.SimpleDcmElement
org.dcm4che2.net.service.StorageCommitmentService
org.dcm4che2.util.CloseUtils
dcm4che-net-2.0.29.jar
org.dcm4che2.net.CommandUtils
org.dcm4che2.net.ConfigurationException
org.dcm4che2.net.NetworkApplicationEntity
org.dcm4che2.net.NetworkConnection
org.dcm4che2.net.NewThreadExecutor
org.dcm4che3.net.service.StorageService
org.dcm4che3.net.service.VerificationService
Currently I want to migrate to dcm4che3, but the APIs listed above are not found in dcm4che3, which I downloaded from this repository http://sourceforge.net/projects/dcm4che/files/dcm4che3/
Could you please guide me on an alternate approach?
As you have already observed, the BasicDicomObject is history -- alongside quite a few others.
The new "Dicom object" is Attributes -- an object is a collection of attributes.
Therefore, you create Attributes, populate them with the tags you need for RQ-behaviour (C-FIND, etc) and what you get in return is another Attributes object from which you pull the tags you want.
In my opinion, dcm4che 2.x was vague on the subject of dealing with individual value representations. dcm4che 3.x is quite a bit clearer.
The migration demands a rewrite of your code regarding how you query and how you treat individual tags. On the other hand, dcm4che 3.x makes the new code less convoluted.
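For instance, here is a minimal illustration of the model (the tag/VR choices are mine, for illustration only):
// Build a small Attributes instance and read a value back
Attributes attrs = new Attributes();
attrs.setString(Tag.PatientName, VR.PN, "DOE^JOHN");
attrs.setString(Tag.PatientID, VR.LO, "12345");
String patientName = attrs.getString(Tag.PatientName); // "DOE^JOHN"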
On request, I have added the initial setup of a connection to some service class provider (SCP):
// Based on org.dcm4che:dcm4che-core:5.25.0 and org.dcm4che:dcm4che-net:5.25.0
import org.dcm4che3.data.*;
import org.dcm4che3.net.*;
import org.dcm4che3.net.pdu.AAssociateRQ;
import org.dcm4che3.net.pdu.PresentationContext;
import org.dcm4che3.net.pdu.RoleSelection;
import org.dcm4che3.net.pdu.UserIdentityRQ;
// Client side representation of the connection. As a client, I will
// not be listening for incoming traffic (but I could choose to do so
// if I need to transfer data via MOVE)
Connection local = new Connection();
local.setHostname("client.on.network.com");
local.setPort(Connection.NOT_LISTENING);

// Remote side representation of the connection
Connection remote = new Connection();
remote.setHostname("pacs.on.network.com");
remote.setPort(4100);
remote.setTlsProtocols(local.getTlsProtocols());
remote.setTlsCipherSuites(local.getTlsCipherSuites());

// Calling application entity
ApplicationEntity ae = new ApplicationEntity("MeAsAServiceClassUser".toUpperCase());
ae.setAETitle("MeAsAServiceClassUser");
ae.addConnection(local); // on which we may not be listening
ae.setAssociationInitiator(true);
ae.setAssociationAcceptor(false);

// Device
Device device = new Device("MeAsAServiceClassUser".toLowerCase());
device.addConnection(local);
device.addApplicationEntity(ae);

// Configure association
AAssociateRQ rq = new AAssociateRQ();
rq.setCallingAET("MeAsAServiceClassUser");
rq.setCalledAET("NameThatIdentifiesTheProvider"); // e.g. "GEPACS"
rq.setImplVersionName("MY-SCU-1.0"); // Max 16 chars

// Credentials (if appropriate)
String username = "username";
String passcode = "so secret";
if (null != username && username.length() > 0 && null != passcode && passcode.length() > 0) {
    rq.setUserIdentityRQ(UserIdentityRQ.usernamePasscode(username, passcode.toCharArray(), true));
}
Example, pinging the PACS (using the setup above):
String[] TRANSFER_SYNTAX_CHAIN = {
    UID.ExplicitVRLittleEndian,
    UID.ImplicitVRLittleEndian
};

// Define transfer capabilities for verification SOP class
ae.addTransferCapability(
    new TransferCapability(null,
        /* SOP Class */ UID.Verification,
        /* Role */ TransferCapability.Role.SCU,
        /* Transfer syntax */ TRANSFER_SYNTAX_CHAIN)
);

// Set up presentation context
rq.addPresentationContext(
    new PresentationContext(
        rq.getNumberOfPresentationContexts() * 2 + 1,
        /* abstract syntax */ UID.Verification,
        /* transfer syntax */ TRANSFER_SYNTAX_CHAIN
    )
);
rq.addRoleSelection(new RoleSelection(UID.Verification, /* is SCU? */ true, /* is SCP? */ false));

try {
    // 1) Open a connection to the SCP
    Association as = ae.connect(local, remote, rq);

    // 2) PING!
    DimseRSP rsp = as.cecho();
    rsp.next(); // Consume reply, which may fail

    // Still here? Success!

    // 3) Close the connection to the SCP
    if (as.isReadyForDataTransfer()) {
        as.waitForOutstandingRSP();
        as.release();
    }
} catch (Throwable ignore) {
    // Failure
}
Another example, retrieving studies from a PACS given accession numbers; setting up the query and handling the result:
String modality = null; // e.g. "OT"
String accessionNumber = "1234567890";
//--------------------------------------------------------
// HERE follows setup of a query, using an Attributes object
//--------------------------------------------------------
Attributes query = new Attributes();

// Indicate character set
{
    int tag = Tag.SpecificCharacterSet;
    VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
    query.setString(tag, vr, "ISO_IR 100");
}

// Study level query
{
    int tag = Tag.QueryRetrieveLevel;
    VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
    query.setString(tag, vr, "STUDY");
}

// Accession number
{
    int tag = Tag.AccessionNumber;
    VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
    query.setString(tag, vr, accessionNumber);
}

// Optionally filter on modality in study if 'modality' is provided,
// otherwise retrieve modality
{
    int tag = Tag.ModalitiesInStudy;
    VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
    if (null != modality && modality.length() > 0) {
        query.setString(tag, vr, modality);
    } else {
        query.setNull(tag, vr);
    }
}

// We are interested in study instance UID
{
    int tag = Tag.StudyInstanceUID;
    VR vr = ElementDictionary.vrOf(tag, query.getPrivateCreator(tag));
    query.setNull(tag, vr);
}
// Do the actual query, needing an ApplicationEntity (ae),
// a local (local) and remote (remote) Connection, and
// an AAssociateRQ (rq) set up earlier.
try {
    // 1) Open a connection to the SCP
    Association as = ae.connect(local, remote, rq);

    // 2) Query
    int priority = 0x0002; // low, for the sake of demo :)
    as.cfind(UID.StudyRootQueryRetrieveInformationModelFind, priority, query, null,
        new DimseRSPHandler(as.nextMessageID()) {
            @Override
            public void onDimseRSP(Association assoc, Attributes cmd,
                                   Attributes response) {
                super.onDimseRSP(assoc, cmd, response);

                int status = cmd.getInt(Tag.Status, -1);
                if (Status.isPending(status)) {
                    //--------------------------------------------------------
                    // HERE follows handling of the response, which
                    // is just another Attributes object
                    //--------------------------------------------------------
                    String studyInstanceUID = response.getString(Tag.StudyInstanceUID);
                    // etc...
                }
            }
        });

    // 3) Close the connection to the SCP
    if (as.isReadyForDataTransfer()) {
        as.waitForOutstandingRSP();
        as.release();
    }
} catch (Exception e) {
    // Failure
}
More on this at https://github.com/FrodeRanders/dicom-tools

What's the correct and efficient way to delete a versioned document in FileNet P8 4.5 or higher?

I want to delete documents for which a specific property has been set in the current version. If this property has been set all versions of that document need to be removed.
My current implementation, which searches for IsCurrentVersion = TRUE and foo = 'bar', has the problem that only the current version gets removed, not the older ones. So I assume that I need to delete the complete VersionSeries?
Until now I have used
doc.delete();
doc.save(RefreshMode.NO_REFRESH);
for each document I find. How can I retrieve all documents from the series and have them deleted too? And is it more efficient if I add this to a batch?
You should call the delete() method on the VersionSeries instance (http://www-304.ibm.com/support/knowledgecenter/SSNW2F_5.2.0/com.ibm.p8.ce.dev.java.doc/com/filenet/api/core/VersionSeries.html):
VersionSeries vs = doc.get_VersionSeries();
vs.delete();
vs.save(RefreshMode.NO_REFRESH);
Quote from the docs:
Caution: The delete and moveContent methods impact all document versions in the version series. That is, all document versions are deleted, and the content of all document versions are moved.
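As for batching: queuing the deletions and sending them in one round trip should be more efficient than saving each series individually. A hedged sketch (assuming the standard com.filenet.api.core.UpdatingBatch API; check the exact add(...) signature for your P8 release):
UpdatingBatch batch = UpdatingBatch.createUpdatingBatchInstance(
        os.get_Domain(), RefreshMode.NO_REFRESH);
for (Document d : docsToDelete) { // docsToDelete: the results of your property query
    VersionSeries vs = d.get_VersionSeries();
    vs.delete();         // marks the pending delete action
    batch.add(vs, null); // queue it instead of calling vs.save(...)
}
batch.updateBatch();     // one server round trip executes all queued deletes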
Method for deleting all the versions of a document from FileNet:
public void deleteDocumentFromCE(String filenetDocGUID) throws Exception
{
    System.out.println("In deleteDocumentFromCE() method");
    System.out.println("Input Parameter filenetDocGUID is : " + filenetDocGUID);
    Document document = null;
    UserContext uc = null;
    ObjectStore os = null;
    Subject subject = null;
    VersionSeries vs = null;
    try
    {
        if (filenetDocGUID != null)
        {
            getCESession(); // This method gets the CE session and sets it in the ceSessionData private class variable
            os = ceSessionData.getObjectStore();
            System.out.println("ObjectStore fetched from CESession static reference is : " + os.get_Name());
            subject = ceSessionData.getSubject();
            System.out.println("Subject fetched from CESession static reference.");
            uc = UserContext.get();
            uc.pushSubject(subject);
            if (os != null)
            {
                document = Factory.Document.fetchInstance(os, filenetDocGUID, null);
                vs = document.get_VersionSeries();
                vs.delete();
                vs.save(RefreshMode.NO_REFRESH);
                System.out.println("All Document Versions deleted : " + filenetDocGUID);
            }
            else
            {
                System.out.println("Error :: Object Store is not available.");
            }
        }
    }
    catch (Exception e)
    {
        System.out.println("Exception in deleteDocumentFromCE() Method : " + e.getMessage());
        // pass the error to the calling method
        throw new Exception("System Error occurred while deleting the document in CE.. " + e.getMessage());
    }
    finally
    {
        if (uc != null)
            uc.popSubject();
    }
    System.out.println("End of deleteDocumentFromCE() method");
}

accumulo - batchscanner: one result per range

So my general question is "Is it possible to have an Accumulo BatchScanner only pull back the first result per Range I give it?"
Now some details about my use case as there may be a better way to approach this anyway. I have data that represent messages from different systems. There can be different types of messages. My users want to be able to ask the system questions, such as "give me the most recent message of a certain type as of a certain time for all these systems".
My table layout looks like this
rowid: system_name, family: message_type, qualifier: masked_timestamp, value: message_text
The idea is that the user gives me a list of systems they care about, the type of message, and a certain timestamp. I used masked timestamp so that the table sorts most recent first. That way when I scan for a timestamp, the first result is the most recent prior to that time. I am using a BatchScanner because I have multiple systems I am searching for per query. Can I make the BatchScanner only fetch the first result for each Range? I can't specify a specific key because the most recent may not match the datetime given by the user.
Currently, I am using the BatchScanner and ignoring all but the first result per Key. It works right now, but it seems like a waste to pull back all the data for a specific system/type over the network when I only care about the first result per system/type.
EDIT
My attempt using the FirstEntryInRowIterator
@Test
public void testFirstEntryIterator() throws Exception
{
    Connector connector = new MockInstance("inst").getConnector("user", new PasswordToken("password"));
    connector.tableOperations().create("testing");

    BatchWriter writer = writer(connector, "testing");
    writer.addMutation(mutation("row", "fam", "qual1", "val1"));
    writer.addMutation(mutation("row", "fam", "qual2", "val2"));
    writer.addMutation(mutation("row", "fam", "qual3", "val3"));
    writer.close();

    Scanner scanner = connector.createScanner("testing", new Authorizations());
    scanner.addScanIterator(new IteratorSetting(50, FirstEntryInRowIterator.class));

    Key begin = new Key("row", "fam", "qual2");
    scanner.setRange(new Range(begin, begin.followingKey(PartialKey.ROW_COLFAM_COLQUAL)));

    int numResults = 0;
    for (Map.Entry<Key, Value> entry : scanner)
    {
        Assert.assertEquals("qual2", entry.getKey().getColumnQualifier().toString());
        numResults++;
    }
    Assert.assertEquals(1, numResults);
}
My goal is that the returned entry will be ("row", "fam", "qual2", "val2"), but I get 0 results. It almost seems like the iterator is being applied before the Range, maybe? I haven't dug into this yet.
This sounds like a good use case for using one of Accumulo's SortedKeyValueIterators, specifically the FirstEntryInRowIterator (contained in the accumulo-core artifact).
Create an IteratorSetting with the FirstEntryInRowIterator and add it to your BatchScanner. This will return the first Key/Value in each system_name row and then stop, avoiding the overhead of your client ignoring all other results.
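A minimal sketch of wiring that up (the table name, ranges collection, and iterator priority are illustrative):
// Attach the iterator so each row yields at most its first entry
BatchScanner bs = connector.createBatchScanner("messages", new Authorizations(), 4);
bs.setRanges(ranges); // one Range per system the user asked about
bs.addScanIterator(new IteratorSetting(50, "firstEntry", FirstEntryInRowIterator.class));
for (Map.Entry<Key, Value> entry : bs) {
    // at most one entry per row comes back over the network
}
bs.close();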
A quick modification of the FirstEntryInRowIterator might get you what you want:
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.accumulo.core.iterators;

import java.io.IOException;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.data.ByteSequence;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.PartialKey;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class FirstEntryInRangeIterator extends SkippingIterator implements OptionDescriber {

    // options
    static final String NUM_SCANS_STRING_NAME = "scansBeforeSeek";

    // iterator predecessor seek options to pass through
    private Range latestRange;
    private Collection<ByteSequence> latestColumnFamilies;
    private boolean latestInclusive;

    // private fields
    private Text lastRowFound;
    private int numscans;
    private boolean finished = true;

    /**
     * convenience method to set the option to optimize the frequency of scans vs. seeks
     */
    public static void setNumScansBeforeSeek(IteratorSetting cfg, int num) {
        cfg.addOption(NUM_SCANS_STRING_NAME, Integer.toString(num));
    }

    // this must be public for OptionDescriber
    public FirstEntryInRangeIterator() {
        super();
    }

    public FirstEntryInRangeIterator(FirstEntryInRangeIterator other, IteratorEnvironment env) {
        super();
        setSource(other.getSource().deepCopy(env));
    }

    @Override
    public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
        return new FirstEntryInRangeIterator(this, env);
    }

    @Override
    public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
        super.init(source, options, env);
        String o = options.get(NUM_SCANS_STRING_NAME);
        numscans = o == null ? 10 : Integer.parseInt(o);
    }

    // this is only ever called immediately after getting "next" entry
    @Override
    protected void consume() throws IOException {
        if (finished || lastRowFound == null)
            return;
        int count = 0;
        while (getSource().hasTop() && lastRowFound.equals(getSource().getTopKey().getRow())) {
            // try to efficiently jump to the next matching key
            if (count < numscans) {
                ++count;
                getSource().next(); // scan
            } else {
                // too many scans, just seek
                count = 0;
                // determine where to seek to, but don't go beyond the user-specified range
                Key nextKey = getSource().getTopKey().followingKey(PartialKey.ROW);
                if (!latestRange.afterEndKey(nextKey))
                    getSource().seek(new Range(nextKey, true, latestRange.getEndKey(), latestRange.isEndKeyInclusive()), latestColumnFamilies, latestInclusive);
                else {
                    finished = true;
                    break;
                }
            }
        }
        lastRowFound = getSource().hasTop() ? getSource().getTopKey().getRow(lastRowFound) : null;
    }

    @Override
    public boolean hasTop() {
        return !finished && getSource().hasTop();
    }

    @Override
    public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
        // save parameters for future internal seeks
        latestRange = range;
        latestColumnFamilies = columnFamilies;
        latestInclusive = inclusive;
        lastRowFound = null;

        super.seek(range, columnFamilies, inclusive);
        finished = false;

        if (getSource().hasTop()) {
            lastRowFound = getSource().getTopKey().getRow();
            if (range.beforeStartKey(getSource().getTopKey()))
                consume();
        }
    }

    @Override
    public IteratorOptions describeOptions() {
        String name = "firstEntry";
        String desc = "Only allows iteration over the first entry per range";
        HashMap<String,String> namedOptions = new HashMap<String,String>();
        namedOptions.put(NUM_SCANS_STRING_NAME, "Number of scans to try before seeking [10]");
        return new IteratorOptions(name, desc, namedOptions, null);
    }

    @Override
    public boolean validateOptions(Map<String,String> options) {
        try {
            String o = options.get(NUM_SCANS_STRING_NAME);
            if (o != null)
                Integer.parseInt(o);
        } catch (Exception e) {
            throw new IllegalArgumentException("bad integer " + NUM_SCANS_STRING_NAME + ":" + options.get(NUM_SCANS_STRING_NAME), e);
        }
        return true;
    }
}

Apache Commons - NNTP - "Article To List" - AWT

I am currently using Apache Commons Net to develop my own NNTP reader. Using the available tutorial, I was able to reuse some of their code to get articles back.
The code I am using from the NNTP section:
System.out.println("Retrieving articles between [" + lowArticleNumber + "] and [" + highArticleNumber + "]");
Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);
System.out.println("Building message thread tree...");
Threader threader = new Threader();
Article root = (Article)threader.thread(articles);
Article.printThread(root, 0);
I need to take the articles and turn them into a List type so I can send them to AWT, using something like this:
List x = (List) b.GetGroupList(dog);
f.add(CreateList(x));
My entire code base for this section is:
public void GetThreadList(String Search) throws SocketException, IOException {
    String hostname = USE_NET_HOST;
    String newsgroup = Search;

    NNTPClient client = new NNTPClient();
    client.addProtocolCommandListener(new PrintCommandListener(new PrintWriter(System.out), true));
    client.connect(hostname);

    // note: authenticate once and check the result (the original code authenticated twice)
    if (!client.authenticate(USER_NAME, PASS_WORD)) {
        System.out.println("Authentication failed for user " + USER_NAME + "!");
        System.exit(1);
    }

    String[] fmt = client.listOverviewFmt();
    if (fmt != null) {
        System.out.println("LIST OVERVIEW.FMT:");
        for (String s : fmt) {
            System.out.println(s);
        }
    } else {
        System.out.println("Failed to get OVERVIEW.FMT");
    }

    NewsgroupInfo group = new NewsgroupInfo();
    client.selectNewsgroup(newsgroup, group);

    long lowArticleNumber = group.getFirstArticleLong();
    long highArticleNumber = lowArticleNumber + 5000;

    System.out.println("Retrieving articles between [" + lowArticleNumber + "] and [" + highArticleNumber + "]");
    Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);

    System.out.println("Building message thread tree...");
    Threader threader = new Threader();
    Article root = (Article) threader.thread(articles);
    Article.printThread(root, 0);

    try {
        if (client.isConnected()) {
            client.disconnect();
        }
    } catch (IOException e) {
        System.err.println("Error disconnecting from server.");
        e.printStackTrace();
    }
}
and -
public void CreateFrame() throws SocketException, IOException {
    // Make a new program view
    Frame f = new Frame("NNTP Reader");
    // Pick my layout
    f.setLayout(new GridLayout());
    // Set the size
    f.setSize(H_SIZE, V_SIZE);
    // Make it resizable
    f.setResizable(true);
    // Create the menubar
    f.setMenuBar(CreateMenu());
    // Create the lists
    UseNetController b = new UseNetController(NEWS_SERVER_CREDS);
    String dog = "*";
    List x = (List) b.GetGroupList(dog);
    f.add(CreateList(x));
    //f.add(CreateList(y));
    // Add Listeners
    f = CreateListeners(f);
    // Show the program
    f.setVisible(true);
}
I just want to take my list of returned news articles and send them to the display in AWT. Can anyone explain to me how to turn those Articles into a list?
Welcome to the DIY newsreader club. I'm not sure if you are trying to get a list of newsgroups on the server or a list of articles. You already have your Articles in an Iterable collection; iterate through it, appending what you want from each article to the list. You probably aren't going to want to display the whole article body in a list view; more likely the message id, subject, author or date (or a combination, as a string). For example, for a list of just subjects:
...
Iterable<Article> articles = client.iterateArticleInfo(lowArticleNumber, highArticleNumber);
Iterator<Article> it = articles.iterator();
while (it.hasNext()) {
    Article thisone = it.next();
    MyList.add(thisone.getSubject());
    // MyList should have been declared up there somewhere ^^^ and
    // your GetThreadList method must include List in the declaration
}
return MyList;
...
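To get those subjects onto the screen, here is a sketch under the assumptions of the question's CreateFrame() code (f is the Frame; subjects is the java.util.List of strings built above). Note that java.awt.List and java.util.List clash, so qualify one of them:
// Build an AWT list widget from the gathered subjects and add it to the frame
java.awt.List listView = new java.awt.List();
for (String subject : subjects) {
    listView.add(subject); // java.awt.List.add(String) appends an item
}
f.add(listView);
f.validate(); // re-lay out the frame after adding the component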
My strategy has been to retrieve the articles via an iterator into an SQLite database, with the body, subject, references etc. stored in fields. Then you can create a list sorted just how you want, with a link by primary key to retrieve what you need for individual articles as you display them. Another strategy would be an array of message_ids or article numbers, fetching each one individually from the news server as required. Have fun - particularly when you are coding for Android and want to display a list of threaded messages in the correct sequence with suitable indents and markers ;). In fact, you can learn a lot by looking at the open source Groundhog newsreader project (to which I am eternally grateful):
http://bazaar.launchpad.net/~juanjux/groundhog/trunk/files/head:/GroundhogReader/src/com/almarsoft/GroundhogReader
