How to fix duplicate elements in a repeated protocol-buffer field? - java

I want to load some data using protocol buffers (JSON was way too slow on Android), but somehow my repeated field called company contains 6 copies of every element, although I am not storing any duplicates.
How do I know that it shouldn't contain duplicates?
I set a counter for every object I am saving, and it matched the expected length.
This is my schema:
syntax = "proto3";
[...]
message CompanyProtoRepository {
    // THIS FIELD CONTAINS DUPLICATES!
    repeated CompanyProto company = 1;
}
How I store my data:
public void writeToFile(String fileName) {
    CompanyProtos.CompanyProtoRepository repo = loadRepository();
    // try-with-resources ensures the stream is flushed and closed
    try (OutputStream outputStream = mContext.openFileOutput(fileName, Context.MODE_PRIVATE)) {
        repo.writeTo(outputStream);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
private CompanyProtos.CompanyProtoRepository loadRepository() {
    CompanyLoaderService jsonLoader = new JsonCompanyLoaderService(mContext.getResources());
    CompanyProtos.CompanyProtoRepository.Builder repo = CompanyProtos.CompanyProtoRepository.newBuilder();
    int counter = 0; // Will be 175, which is correct (every company exactly once)
    // Will contain every id exactly once -> correct!
    HashMap<Integer, Integer> map = new HashMap<>();
    for (Company company : jsonLoader.getCompanies()) {
        counter++;
        if (!map.containsKey(company.getId()))
            map.put(company.getId(), 1);
        else
            map.put(company.getId(), map.get(company.getId()) + 1);
        CompanyProtos.CompanyProto proto = toProto(company);
        if (!repo.getCompanyList().contains(proto))
            repo.addCompany(proto);
    }
    return repo.build();
}
And this is how I load my data:
private List<Company> loadCompanies() {
    try {
        InputStream inputStream = mContext.getResources().openRawResource(R.raw.company_buffers);
        CompanyProtos.CompanyProtoRepository repo =
                CompanyProtos.CompanyProtoRepository.parseFrom(inputStream);
        ArrayList<Company> list = new ArrayList<>();
        for (CompanyProtos.CompanyProto companyProto : repo.getCompanyList()) {
            list.add(fromProto(companyProto));
        }
        // This list contains every company 6 times!
        return list;
    } catch (Exception ex) {
        // surface parse failures instead of silently swallowing them
        throw new RuntimeException("failed to parse company buffer", ex);
    }
}
Of course I expected each company to appear only once, since I verified that I store each company exactly once in my CompanyProtoRepository, not 6 times.

Oh my f**** goodness.
I just spent hours and hours trying to fix that bug.
It turns out I was reading from an old, corrupt data set, not the file I was actually writing to.
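For anyone else who lands here: a quick way to catch this is to read back the exact file you just wrote, instead of a bundled res/raw resource. A minimal sketch (loadCompaniesFromWrittenFile is a hypothetical helper; fromProto is from the code above):
private List<Company> loadCompaniesFromWrittenFile(String fileName) {
    // Read from internal storage, i.e. the same file writeToFile() created,
    // not a res/raw resource that may hold a stale data set.
    try (InputStream inputStream = mContext.openFileInput(fileName)) {
        CompanyProtos.CompanyProtoRepository repo =
                CompanyProtos.CompanyProtoRepository.parseFrom(inputStream);
        List<Company> list = new ArrayList<>();
        for (CompanyProtos.CompanyProto proto : repo.getCompanyList()) {
            list.add(fromProto(proto));
        }
        return list;
    } catch (Exception ex) {
        throw new RuntimeException("failed to read " + fileName, ex);
    }
}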


Atomic increment via operate command using Java client (5.1.5.1) fails

I am trying to do atomic operations on a single bin. However, I am noticing strange behaviour where, at random, the record is not getting incremented/decremented: the operate command returns null in the response, with no exception either. Sometimes it works and sometimes it doesn't.
I have checked with both persistent and in-memory storage. It's a clustered environment of 3 nodes.
In some cases it gave a hot-key error, which I was able to resolve by setting the property transaction-pending-limit = 0.
Aerospike client:
<dependency>
    <groupId>com.aerospike</groupId>
    <artifactId>aerospike-client</artifactId>
    <version>5.1.5.1</version>
</dependency>
Operate command:
public long incrementAndGetBinValue(AerospikeClient aerospikeClient, String key, Bin bin, long value, Key asKey) {
    Record record = null; // must be initialized: the second catch block falls through
    try {
        record = aerospikeClient.operate(aerospikeClient.writePolicyDefault, asKey, Operation.add(bin), Operation.get(bin.name));
        logger.info("Aerospike record {} ", record);
    } catch (AerospikeException exception) {
        logger.error("Aerospike Exception while incrementing bin {}, key {} by value {} error: {}", bin.name, key, value, ExceptionUtils.getStackTrace(exception));
        throw new NexusException(ErrorCodes.AEROSPIKE_OPERATION_ERROR, "aerospike exception operation error");
    } catch (Exception exception) {
        logger.error("Exception while incrementing bin {}, key {} by value {} error: {}", bin.name, key, value, ExceptionUtils.getStackTrace(exception));
    }
    if (record == null) {
        logger.error("incrementAndGetBinValue : No record returned for key {}, bin {}", key, bin.name);
        throw new NexusException(ErrorCodes.AEROSPIKE_NO_RECORD_FOUND, "aerospike no record returned");
    }
    return record.getLong(bin.name);
}
Policy:
ClientPolicy policy = new ClientPolicy();
policy.timeout = 5000;

WritePolicy writePolicy = new WritePolicy();
writePolicy.socketTimeout = 60000;
writePolicy.maxRetries = 2;
writePolicy.sendKey = true;
writePolicy.expiration = -1; // never expire record
writePolicy.respondAllOps = true;
writePolicy.durableDelete = true;
policy.writePolicyDefault = writePolicy;

Policy readPolicy = new Policy();
readPolicy.maxRetries = 2;
readPolicy.socketTimeout = 5000;
policy.readPolicyDefault = readPolicy;
return policy;
1 - You are passing the long value but not using it.
2 - It has to be used in the Bin constructor when passing to Operation.add().
For example:
Bin bin1 = new Bin("name", "John Doe");
Bin bin2 = new Bin("age", 32);
Bin bin3 = new Bin("greeting", "Hello World!");

// Write a record
client.put(null, key, bin1, bin2, bin3);
// This creates the record with bin2 = 32

import com.aerospike.client.Operation;
Bin addTobin2 = new Bin("age", 4);
client.operate(null, key, Operation.add(addTobin2));
Here I expect the put() call to create the bin with value 32.
Then, the operate() call with Operation.add(addTobin2) will take the value 4 and add it to 32. Net value expected: 36.
// Read the record
Record record = client.get(null, key);
System.out.println(record);
Record values are:
(gen:6),(exp:394484896),(bins:(name:John Doe),(age:36),(greeting:Hello World!))
Upsert Code Demo:
client.delete(null, key);
// Write a record
import com.aerospike.client.Operation;
Bin bin2 = new Bin("test", 2);

client.operate(null, key, Operation.add(bin2));
Record record = client.get(null, key);
System.out.println(record);

client.operate(null, key, Operation.add(bin2));
record = client.get(null, key);
System.out.println(record);

client.operate(null, key, Operation.add(bin2));
record = client.get(null, key);
System.out.println(record);
(gen:1),(exp:394490665),(bins:(test:2))
(gen:2),(exp:394490665),(bins:(test:4))
(gen:3),(exp:394490666),(bins:(test:6))
Multiple Ops done Atomically / code example:
client.delete(null, key);
// Write a record
import com.aerospike.client.Operation;
Bin bin2 = new Bin("test", 2);
System.out.println(bin2);
Record record = client.operate(null, key,
        Operation.add(bin2),
        Operation.get(bin2.name),
        Operation.add(bin2),
        Operation.get(bin2.name),
        Operation.add(bin2),
        Operation.get(bin2.name)
);
List<?> retList = (ArrayList<?>)record.getList(bin2.name);
System.out.println(retList.get(0));
System.out.println(retList.get(1));
System.out.println(retList.get(2));
client.operate(null,key,Operation.add(bin2));
record = client.get(null, key);
System.out.println(record);
client.operate(null,key,Operation.add(bin2));
record = client.get(null, key);
System.out.println(record);
Output:
test:2
2
4
6
(gen:2),(exp:394501613),(bins:(test:8))
(gen:3),(exp:394501613),(bins:(test:10))
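Putting both points together, a corrected version of the method from the question might look like the sketch below. It is only a sketch: NexusException, ErrorCodes, and logger come from the question's own codebase, and the signature takes the bin name rather than a pre-built Bin, since the Bin must now carry the increment value.
public long incrementAndGetBinValue(AerospikeClient aerospikeClient, String key, String binName, long value, Key asKey) {
    // Wrap the increment amount in a Bin so that Operation.add() actually applies it
    Bin increment = new Bin(binName, value);
    Record record;
    try {
        record = aerospikeClient.operate(aerospikeClient.writePolicyDefault, asKey,
                Operation.add(increment), Operation.get(binName));
    } catch (AerospikeException exception) {
        logger.error("Aerospike exception while incrementing bin {}, key {} by {}", binName, key, value, exception);
        throw new NexusException(ErrorCodes.AEROSPIKE_OPERATION_ERROR, "aerospike exception operation error");
    }
    if (record == null) {
        throw new NexusException(ErrorCodes.AEROSPIKE_NO_RECORD_FOUND, "aerospike no record returned");
    }
    // Caveat: with respondAllOps = true (as in the question's write policy), the result
    // for a bin touched by several operations comes back as a List, as the demo above
    // shows; in that case read the last element of record.getList(binName) instead.
    return record.getLong(binName);
}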

Compare Two CSV Files and Fetch Data

I have two CSV files: a master CSV file with around 500,000 records and a daily CSV file with 50,000 records.
The daily CSV file is missing a few columns, which have to be fetched from the master CSV file.
For example
DailyCSV File
id,name,city,zip,occupation
1,Jhon,Florida,50069,Accountant
MasterCSV File
id,name,city,zip,occupation,company,exp,salary
1, Jhon, Florida, 50069, Accountant, AuditFirm, 3, $5000
What I have to do is read both files, match the records by ID, and if the ID is present in the master file, fetch company, exp, and salary and write them to a new CSV file.
How can I achieve this?
What I have done currently:
while (true) {
    line = bstream.readLine();
    lineMaster = bstreamMaster.readLine();
    if (line == null || lineMaster == null) {
        break;
    } else {
        while (lineMaster != null)
            readlineSplit = line.split(",(?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        String splitId = readlineSplit[4];
        String[] readLineSplitMaster = lineMaster.split(",(?=([^\"]*\"[^\"]*\")*[^\"]*$)", -1);
        String SplitIDMaster = readLineSplitMaster[13];
        System.out.println(splitId + "|" + SplitIDMaster);
        //System.out.println(splitId.equalsIgnoreCase(SplitIDMaster));
        if (splitId.equalsIgnoreCase(SplitIDMaster)) {
            String writeLine = readlineSplit[0] + "," + readlineSplit[1] + "," + readlineSplit[2] + "," + readlineSplit[3] + "," + readlineSplit[4] + "," + readlineSplit[5] + "," + readLineSplitMaster[15] + "," + readLineSplitMaster[16] + "," + readLineSplitMaster[17];
            System.out.println(writeLine);
            pstream.print(writeLine + "\r\n");
        }
    }
}
pstream.close();
fout.flush();
bstream.close();
bstreamMaster.close();
First of all, your current parsing approach will be painfully slow. Use a dedicated CSV parsing library to speed things up. With uniVocity-parsers you can process your 500K records in less than a second. This is how you can use it to solve your problem.
First let's define a few utility methods to read/write your files:
// opens the file for reading (using UTF-8 encoding)
private static Reader newReader(String pathToFile) {
    try {
        return new InputStreamReader(new FileInputStream(new File(pathToFile)), "UTF-8");
    } catch (Exception e) {
        throw new IllegalArgumentException("Unable to open file for reading at " + pathToFile, e);
    }
}

// creates a file for writing (using UTF-8 encoding)
private static Writer newWriter(String pathToFile) {
    try {
        return new OutputStreamWriter(new FileOutputStream(new File(pathToFile)), "UTF-8");
    } catch (Exception e) {
        throw new IllegalArgumentException("Unable to open file for writing at " + pathToFile, e);
    }
}
Then, we can start reading your daily CSV file, and generate a Map:
public static void main(String... args) {
    // First we parse the daily update file.
    CsvParserSettings settings = new CsvParserSettings();
    // here we tell the parser to read the CSV headers
    settings.setHeaderExtractionEnabled(true);
    // and to select ONLY the following columns.
    // This ensures rows with a fixed size will be returned in case some records
    // come with fewer or more columns than anticipated.
    settings.selectFields("id", "name", "city", "zip", "occupation");
    CsvParser parser = new CsvParser(settings);

    // Here we parse all data into a list.
    List<String[]> dailyRecords = parser.parseAll(newReader("/path/to/daily.csv"));
    // And convert them to a map. IDs are the keys.
    Map<String, String[]> mapOfDailyRecords = toMap(dailyRecords);
    ... // we'll get back here in a second.
This is the code to generate a Map from the list of daily records:
/* Converts a list of records to a map. Uses the element at index 0 as the key. */
private static Map<String, String[]> toMap(List<String[]> records) {
    HashMap<String, String[]> map = new HashMap<String, String[]>();
    for (String[] row : records) {
        // column 0 will always have an ID.
        map.put(row[0], row);
    }
    return map;
}
With the map of records, we can process your master file and generate the list of updates:
private static List<Object[]> processMasterFile(final Map<String, String[]> mapOfDailyRecords) {
    // we'll put the updated data here
    final List<Object[]> output = new ArrayList<Object[]>();

    // configures the parser to process only the columns you are interested in.
    CsvParserSettings settings = new CsvParserSettings();
    settings.setHeaderExtractionEnabled(true);
    settings.selectFields("id", "company", "exp", "salary");

    // All parsed rows will be submitted to the following RowProcessor. This way the bigger
    // master file won't have all its rows stored in memory.
    settings.setRowProcessor(new AbstractRowProcessor() {
        @Override
        public void rowProcessed(String[] row, ParsingContext context) {
            // Incoming rows from MASTER will have the ID at index 0.
            // If the daily update map contains the ID, we'll get the daily row
            String[] dailyData = mapOfDailyRecords.get(row[0]);
            if (dailyData != null) {
                // We got a match. Let's join the data from the daily row with the master row.
                Object[] mergedRow = new Object[8];
                for (int i = 0; i < dailyData.length; i++) {
                    mergedRow[i] = dailyData[i];
                }
                for (int i = 1; i < row.length; i++) { // starts from 1 to skip the ID at index 0
                    mergedRow[i + dailyData.length - 1] = row[i];
                }
                output.add(mergedRow);
            }
        }
    });

    CsvParser parser = new CsvParser(settings);
    // the parse() method will submit all rows to the RowProcessor defined above.
    parser.parse(newReader("/path/to/master.csv"));
    return output;
}
Finally, we can get the merged data and write everything to another file:
    ... // getting back to the main method here
    // Now we process the master data and get a list of updates
    List<Object[]> updatedData = processMasterFile(mapOfDailyRecords);

    // And write the updated data to another file
    CsvWriterSettings writerSettings = new CsvWriterSettings();
    writerSettings.setHeaders("id", "name", "city", "zip", "occupation", "company", "exp", "salary");
    writerSettings.setHeaderWritingEnabled(true);

    CsvWriter writer = new CsvWriter(newWriter("/path/to/updates.csv"), writerSettings);
    // Here we write everything, and get the job done.
    writer.writeRowsAndClose(updatedData);
}
This should work like a charm. Hope it helps.
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
I would approach the problem step by step.
First, parse/read the master CSV file and keep its contents in a HashMap, where the key is each record's unique 'id'; for the value, you can store the fields in another map or simply create a Java class to hold the information.
Example of hash:
{
    '1' : { 'name': 'Jhon',
            'City': 'Florida',
            'zip' : 50069,
            ....
          }
}
Next, read your daily CSV file. For each row, read the 'id' and check whether the key exists in the HashMap you created earlier.
If it exists, access the information you need from the HashMap and write it to a new CSV file.
Also, you might want to consider using a third-party CSV parser to make this task easier.
If you use Maven, you can follow this example I found on the net; otherwise just search for an Apache CSV parser example:
http://examples.javacodegeeks.com/core-java/apache/commons/csv-commons/writeread-csv-files-with-apache-commons-csv-example/
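If you'd rather see that approach in code, below is a minimal sketch using Apache Commons CSV (the file paths, the class name CsvJoin, and the use of commons-csv itself are assumptions; the column names are taken from the question). Master rows are indexed by id in a HashMap, then the daily file is streamed and joined:
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVPrinter;
import org.apache.commons.csv.CSVRecord;

public class CsvJoin {
    public static void main(String[] args) throws IOException {
        // 1. Load the master file into a map keyed by id
        Map<String, CSVRecord> master = new HashMap<>();
        try (Reader in = new FileReader("master.csv");
             CSVParser parser = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in)) {
            for (CSVRecord record : parser) {
                master.put(record.get("id").trim(), record);
            }
        }
        // 2. Stream the daily file, join on id, and write the merged rows
        try (Reader in = new FileReader("daily.csv");
             CSVParser daily = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in);
             CSVPrinter out = new CSVPrinter(new FileWriter("merged.csv"),
                     CSVFormat.DEFAULT.withHeader("id", "name", "city", "zip",
                             "occupation", "company", "exp", "salary"))) {
            for (CSVRecord row : daily) {
                CSVRecord m = master.get(row.get("id").trim());
                if (m != null) { // id found in master: append the extra columns
                    out.printRecord(row.get("id"), row.get("name"), row.get("city"),
                            row.get("zip"), row.get("occupation"), m.get("company").trim(),
                            m.get("exp").trim(), m.get("salary").trim());
                }
            }
        }
    }
}
Note the .trim() calls: the sample master row in the question has spaces after the commas, so the joined fields are trimmed before writing.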

Grabbing tagged Instagram photos in real time

I'm trying to download photos posted with a specific tag in real time. I found the real-time API pretty useless, so I'm using a long-polling strategy. Below is pseudocode, with comments describing the subtle bugs in it:
newMediaCount = getMediaCount();
delta = newMediaCount - mediaCount;
if (delta > 0) {
    // Bug 1: if mediaCount has changed by now, realDelta > delta, so realDelta - delta photos
    // won't be grabbed; and on the next poll, if mediaCount didn't change again,
    // realDelta - delta photos would be duplicated instead...
    // Bug 2: if a photo is posted from a private account, the last photo will be duplicated,
    // because the counter changes but nothing is added to the recent feed
    recentMedia = getRecentMedia(delta);
    // persist recentMedia
    mediaCount = newMediaCount;
}
The second issue can be addressed with a Set of some sort, I guess. But the first really bothers me. I've moved the two calls to the Instagram API as close together as possible, but is this enough?
Edit
As Amir suggested, I've rewritten the code to use min/max_tag_ids. But it still skips photos. I couldn't find a better way to test this than saving images to disk for some time and comparing the result to instagram.com/explore/tags/.
public class LousyInstagramApiTest {
    @Test
    public void testFeedContinuity() throws Exception {
        Instagram instagram = new Instagram(Settings.getClientId());
        final String TAG_NAME = "portrait";
        String id = instagram.getRecentMediaTags(TAG_NAME).getPagination().getMinTagId();
        HashtagEndpoint endpoint = new HashtagEndpoint(instagram, TAG_NAME, id);

        for (int i = 0; i < 10; i++) {
            Thread.sleep(3000);
            endpoint.recentFeed().forEach(d -> {
                try {
                    URL url = new URL(d.getImages().getLowResolution().getImageUrl());
                    BufferedImage img = ImageIO.read(url);
                    ImageIO.write(img, "png", new File("D:\\tmp\\" + d.getId() + ".png"));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
    }
}
class HashtagEndpoint {
    private final Instagram instagram;
    private final String hashtag;
    private String minTagId;

    public HashtagEndpoint(Instagram instagram, String hashtag, String minTagId) {
        this.instagram = instagram;
        this.hashtag = hashtag;
        this.minTagId = minTagId;
    }

    public List<MediaFeedData> recentFeed() throws InstagramException {
        TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, minTagId, null);
        List<MediaFeedData> dataList = feed.getData();
        if (dataList.size() == 0) return Collections.emptyList();

        String maxTagId = feed.getPagination().getNextMaxTagId();
        if (maxTagId != null && maxTagId.compareTo(minTagId) > 0) dataList.addAll(paginateFeed(maxTagId));
        Collections.reverse(dataList);
        // dataList.removeIf(d -> d.getId().compareTo(minTagId) < 0);

        minTagId = feed.getPagination().getMinTagId();
        return dataList;
    }

    private Collection<? extends MediaFeedData> paginateFeed(String maxTagId) throws InstagramException {
        System.out.println("pagination required");
        List<MediaFeedData> dataList = new ArrayList<>();
        do {
            TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, null, maxTagId);
            maxTagId = feed.getPagination().getNextMaxTagId();
            dataList.addAll(feed.getData());
        } while (maxTagId.compareTo(minTagId) > 0);
        return dataList;
    }
}
When you use the Tag endpoints to get the recent media for a desired tag, the response includes a min_tag_id in its pagination info, which is tied to the most recently tagged media at the time of your call. Since the API also accepts a min_tag_id parameter, you can pass the number from your last query to receive only the media tagged after that query.
So, with whatever polling mechanism you have, you just call the API with the last received min_tag_id to get the new recent media, if any.
You will also need to pass a large count parameter and follow the pagination of the response to receive all the data without losing anything when the tagging rate is faster than your polling.
Update:
Based on your updated code:
public List<MediaFeedData> recentFeed() throws InstagramException {
    TagMediaFeed feed = instagram.getRecentMediaTags(hashtag, minTagId, null, 100000);
    List<MediaFeedData> dataList = feed.getData();
    if (dataList.size() == 0) return Collections.emptyList();

    // follow the pagination
    MediaFeed recentMediaNextPage = instagram.getRecentMediaNextPage(feed.getPagination());
    while (recentMediaNextPage.getPagination() != null) {
        dataList.addAll(recentMediaNextPage.getData());
        recentMediaNextPage = instagram.getRecentMediaNextPage(recentMediaNextPage.getPagination());
    }
    Collections.reverse(dataList);

    minTagId = feed.getPagination().getMinTagId();
    return dataList;
}

LensKit: LensKit demo is not reading my data file

When I run the LensKit demo program I get this error:
[main] ERROR org.grouplens.lenskit.data.dao.DelimitedTextRatingCursor - C:\Users\sean\Desktop\ml-100k\u - Copy.data:4: invalid input, skipping line
I reworked the ML-100k data set so that it only holds these lines, although I don't see how this would affect it:
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244
Here is the code I am using as well:
public class HelloLenskit implements Runnable {
    public static void main(String[] args) {
        HelloLenskit hello = new HelloLenskit(args);
        try {
            hello.run();
        } catch (RuntimeException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }

    private String delimiter = "\t";
    private File inputFile = new File("C:\\Users\\sean\\Desktop\\ml-100k\\u - Copy.data");
    private List<Long> users;

    public HelloLenskit(String[] args) {
        int nextArg = 0;
        boolean done = false;
        while (!done && nextArg < args.length) {
            String arg = args[nextArg];
            if (arg.equals("-e")) {
                delimiter = args[nextArg + 1];
                nextArg += 2;
            } else if (arg.startsWith("-")) {
                throw new RuntimeException("unknown option: " + arg);
            } else {
                inputFile = new File(arg);
                nextArg += 1;
                done = true;
            }
        }
        users = new ArrayList<Long>(args.length - nextArg);
        for (; nextArg < args.length; nextArg++) {
            users.add(Long.parseLong(args[nextArg]));
        }
    }

    public void run() {
        // We first need to configure the data access.
        // We will use a simple delimited file; you can use something else like
        // a database (see JDBCRatingDAO).
        EventDAO base = new SimpleFileRatingDAO(inputFile, "\t");
        // Reading directly from CSV files is slow, so we'll cache it in memory.
        // You can use SoftFactory here to allow ratings to be expunged and re-read
        // as memory limits demand. If you're using a database, just use it directly.
        EventDAO dao = new EventCollectionDAO(Cursors.makeList(base.streamEvents()));

        // Second step is to create the LensKit configuration...
        LenskitConfiguration config = new LenskitConfiguration();
        // ... configure the data source
        config.bind(EventDAO.class).to(dao);
        // ... and configure the item scorer. The bind and set methods
        // are what you use to do that. Here, we want an item-item scorer.
        config.bind(ItemScorer.class)
              .to(ItemItemScorer.class);

        // let's use personalized mean rating as the baseline/fallback predictor.
        // 2-step process:
        // First, use the user mean rating as the baseline scorer
        config.bind(BaselineScorer.class, ItemScorer.class)
              .to(UserMeanItemScorer.class);
        // Second, use the item mean rating as the base for user means
        config.bind(UserMeanBaseline.class, ItemScorer.class)
              .to(ItemMeanRatingItemScorer.class);
        // and normalize ratings by baseline prior to computing similarities
        config.bind(UserVectorNormalizer.class)
              .to(BaselineSubtractingUserVectorNormalizer.class);

        // There are more parameters, roles, and components that can be set. See the
        // JavaDoc for each recommender algorithm for more information.

        // Now that we have a factory, build a recommender from the configuration
        // and data source. This will compute the similarity matrix and return a recommender
        // that uses it.
        Recommender rec = null;
        try {
            rec = LenskitRecommender.build(config);
        } catch (RecommenderBuildException e) {
            throw new RuntimeException("recommender build failed", e);
        }

        // we want to recommend items
        ItemRecommender irec = rec.getItemRecommender();
        assert irec != null; // not null because we configured one

        // for users
        for (long user : users) {
            // get 10 recommendations for the user
            List<ScoredId> recs = irec.recommend(user, 10);
            System.out.format("Recommendations for %d:\n", user);
            for (ScoredId item : recs) {
                System.out.format("\t%d\n", item.getId());
            }
        }
    }
}
I am really lost on this one and would appreciate any help. Thanks for your time.
The last line of your input file contains only one field (244). Each input file line needs to contain 3 or 4 fields.
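For reference (assuming the standard ML-100k layout, which is what SimpleFileRatingDAO is parsing here with a tab delimiter), each line must look like:
user_id<TAB>item_id<TAB>rating<TAB>timestamp
with the timestamp optional, so the trailing line holding only 244 has to be completed with an item ID and rating or deleted.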

Fetching articles from Liferay portal

Our goal is to fetch some of the content from a Liferay Portal via SOAP services using Java. We are successfully loading articles right now with JournalArticleServiceSoap. The problem is that the method requires both a group ID and an entry ID, and what we want is to fetch all of the articles from a particular group. Hence, we are trying to get the IDs first using AssetEntryServiceSoap, but it fails.
AssetEntryServiceSoapServiceLocator aesssLocator = new AssetEntryServiceSoapServiceLocator();
com.liferay.client.soap.portlet.asset.service.http.AssetEntryServiceSoap assetEntryServiceSoap = null;
URL url = null;
try {
    url = new URL("http://127.0.0.1:8080/tunnel-web/secure/axis/Portlet_Asset_AssetEntryService");
} catch (MalformedURLException e) {
    e.printStackTrace();
}
try {
    assetEntryServiceSoap = aesssLocator.getPortlet_Asset_AssetEntryService(url);
} catch (ServiceException e) {
    e.printStackTrace();
}
if (assetEntryServiceSoap == null) {
    return;
}

Portlet_Asset_AssetEntryServiceSoapBindingStub assetEntryServiceSoapBindingStub =
        (Portlet_Asset_AssetEntryServiceSoapBindingStub) assetEntryServiceSoap;
assetEntryServiceSoapBindingStub.setUsername("bruno@7cogs.com");
assetEntryServiceSoapBindingStub.setPassword("bruno");

AssetEntrySoap[] entries;
AssetEntryQuery query = new AssetEntryQuery();
try {
    int count = assetEntryServiceSoap.getEntriesCount(query);
    System.out.println("Entries count: " + Integer.toString(count));
    entries = assetEntryServiceSoap.getEntries(query);
    if (entries != null) {
        System.out.println(Integer.toString(entries.length));
    }
    for (AssetEntrySoap aes : assetEntryServiceSoap.getEntries(query)) {
        System.out.println(aes.getEntryId());
    }
} catch (RemoteException e1) {
    e1.printStackTrace();
}
Although getEntriesCount() returns a positive value like 83, getEntries() always returns an empty array. I'm very new to Liferay Portal, but this looks really weird to me.
By the way, we are obviously not looking for performance here; the key is just to fetch some specific content from the portal remotely. If you know any working solution, your help would be much appreciated.
Normally an AssetEntryQuery would have a little more information in it, for example:
AssetEntryQuery assetEntryQuery = new AssetEntryQuery();
assetEntryQuery.setClassNameIds(new long[] { ClassNameLocalServiceUtil.getClassNameId("com.liferay.portlet.journal.model.JournalArticle") });
assetEntryQuery.setGroupIds(new long[] { groupId });
This would return all AssetEntries for the groupId you specify that are also JournalArticles.
Try this and see. As you say, the count method returns a positive number, so it might not make a difference, but give it a go! :)
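If it helps, here is a sketch of how that query could be wired into the SOAP code from the question. Note that ClassNameLocalServiceUtil is a server-side utility that is not available to a remote SOAP client, so the class-name ID for JournalArticle is hard-coded below as a placeholder you would need to look up in your own portal (for example in the ClassName_ table):
// Hypothetical IDs: look these up in your own portal
long journalArticleClassNameId = 10109L; // class-name ID of JournalArticle
long groupId = 10180L;                   // the group (site) you want articles from

AssetEntryQuery query = new AssetEntryQuery();
query.setClassNameIds(new long[] { journalArticleClassNameId });
query.setGroupIds(new long[] { groupId });

for (AssetEntrySoap entry : assetEntryServiceSoap.getEntries(query)) {
    // For a JournalArticle asset, classPK is the article's resource primary key,
    // which you can then feed to JournalArticleServiceSoap
    System.out.println(entry.getEntryId() + " -> classPK " + entry.getClassPK());
}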
