I've been trying to use Hadoop to send N lines at a time to a single map call. I don't need the lines to be pre-split in any particular way.
I've tried using NLineInputFormat, however that sends N lines of text from the data to each mapper one line at a time [giving up after the Nth line].
I have tried setting the option below, and it still takes the N lines of input and sends them one line at a time to each map:
job.setInt("mapred.line.input.format.linespermap", 10);
I've found a mailing list recommending that I override LineRecordReader::next, however that is not so simple, since the internal data members are all private.
I've just checked the source for NLineInputFormat and it hard-codes LineReader, so overriding will not help.
Also, btw, I'm using Hadoop 0.18 for compatibility with Amazon Elastic MapReduce.
You have to implement your own input format. Then you also have the possibility to define your own record reader.
Unfortunately you have to define a getSplits() method. In my opinion this will be harder than implementing the record reader: that method has to implement the logic to chunk the input data.
See the following excerpt from "Hadoop: The Definitive Guide" (a great book I would always recommend!):
Here’s the interface:
public interface InputFormat<K, V> {
    InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
    RecordReader<K, V> getRecordReader(InputSplit split,
                                       JobConf job,
                                       Reporter reporter) throws IOException;
}
The JobClient calls the getSplits() method, passing the desired number of map tasks as the numSplits argument. This number is treated as a hint, as InputFormat implementations are free to return a different number of splits to the number specified in numSplits. Having calculated the splits, the client sends them to the jobtracker, which uses their storage locations to schedule map tasks to process them on the tasktrackers.
On a tasktracker, the map task passes the split to the getRecordReader() method on InputFormat to obtain a RecordReader for that split. A RecordReader is little more than an iterator over records, and the map task uses one to generate record key-value pairs, which it passes to the map function. A code snippet (based on the code in MapRunner) illustrates the idea:
K key = reader.createKey();
V value = reader.createValue();
while (reader.next(key, value)) {
    mapper.map(key, value, output, reporter);
}
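To make that concrete for the 0.18-era (org.apache.hadoop.mapred) API the question uses, here is a minimal, hedged sketch. NLinesInputFormat and NLinesRecordReader are hypothetical names, and the record reader is something you would still have to write; extending FileInputFormat lets you inherit its default byte-range getSplits() if you don't strictly need splits aligned to exactly N lines:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

public class NLinesInputFormat extends FileInputFormat<LongWritable, Text> {
    @Override
    public RecordReader<LongWritable, Text> getRecordReader(InputSplit split, JobConf job, Reporter reporter) throws IOException {
        reporter.setStatus(split.toString());
        // NLinesRecordReader is the hypothetical reader that concatenates N lines per record.
        return new NLinesRecordReader(job, (FileSplit) split);
    }
}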
I solved this problem recently by simply creating my own InputFormat that extends NLineInputFormat and implements a custom MultiLineRecordReader instead of the default LineRecordReader.
I chose to extend NLineInputFormat because I wanted to have the same guarantee of having exactly N line(s) per split.
This record reader is taken almost as is from http://bigdatacircus.com/2012/08/01/wordcount-with-custom-record-reader-of-textinputformat/
The only things I modified are the maxLineLength property, which now uses the new API, and the NLINESTOPROCESS value, which is read via NLineInputFormat's getNumLinesPerSplit() instead of being hardcoded (for more flexibility).
Here is the result:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.util.LineReader;

public class MultiLineInputFormat extends NLineInputFormat {

    @Override
    public RecordReader<LongWritable, Text> createRecordReader(InputSplit genericSplit, TaskAttemptContext context) {
        context.setStatus(genericSplit.toString());
        return new MultiLineRecordReader();
    }

    public static class MultiLineRecordReader extends RecordReader<LongWritable, Text> {
        private int NLINESTOPROCESS;
        private LineReader in;
        private LongWritable key;
        private Text value = new Text();
        private long start = 0;
        private long end = 0;
        private long pos = 0;
        private int maxLineLength;

        @Override
        public void close() throws IOException {
            if (in != null) {
                in.close();
            }
        }

        @Override
        public LongWritable getCurrentKey() throws IOException, InterruptedException {
            return key;
        }

        @Override
        public Text getCurrentValue() throws IOException, InterruptedException {
            return value;
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            if (start == end) {
                return 0.0f;
            } else {
                return Math.min(1.0f, (pos - start) / (float) (end - start));
            }
        }

        @Override
        public void initialize(InputSplit genericSplit, TaskAttemptContext context) throws IOException, InterruptedException {
            NLINESTOPROCESS = getNumLinesPerSplit(context);
            FileSplit split = (FileSplit) genericSplit;
            final Path file = split.getPath();
            Configuration conf = context.getConfiguration();
            this.maxLineLength = conf.getInt("mapreduce.input.linerecordreader.line.maxlength", Integer.MAX_VALUE);
            FileSystem fs = file.getFileSystem(conf);
            start = split.getStart();
            end = start + split.getLength();
            boolean skipFirstLine = false;
            FSDataInputStream filein = fs.open(split.getPath());
            if (start != 0) {
                // Not at the beginning of the file: back up one byte and skip the
                // (partial) first line, which belongs to the previous split.
                skipFirstLine = true;
                --start;
                filein.seek(start);
            }
            in = new LineReader(filein, conf);
            if (skipFirstLine) {
                start += in.readLine(new Text(), 0, (int) Math.min((long) Integer.MAX_VALUE, end - start));
            }
            this.pos = start;
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            if (key == null) {
                key = new LongWritable();
            }
            key.set(pos);
            if (value == null) {
                value = new Text();
            }
            value.clear();
            final Text endline = new Text("\n");
            int newSize = 0;
            // Concatenate up to NLINESTOPROCESS lines into a single value.
            for (int i = 0; i < NLINESTOPROCESS; i++) {
                Text v = new Text();
                // The inner loop handles lines longer than maxLineLength,
                // which readLine() delivers in chunks.
                while (pos < end) {
                    newSize = in.readLine(v, maxLineLength, Math.max((int) Math.min(Integer.MAX_VALUE, end - pos), maxLineLength));
                    value.append(v.getBytes(), 0, v.getLength());
                    value.append(endline.getBytes(), 0, endline.getLength());
                    if (newSize == 0) {
                        break;
                    }
                    pos += newSize;
                    if (newSize < maxLineLength) {
                        break;
                    }
                }
            }
            if (newSize == 0) {
                // Nothing more to read in this split.
                key = null;
                value = null;
                return false;
            } else {
                return true;
            }
        }
    }
}
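For completeness, here is a hypothetical driver wiring for this input format (new API; the job name is illustrative - adjust to however you construct your Job):

Configuration conf = new Configuration();
Job job = new Job(conf, "multiline example"); // Job.getInstance(conf, ...) on newer Hadoop versions
job.setInputFormatClass(MultiLineInputFormat.class);
NLineInputFormat.setNumLinesPerSplit(job, 10); // each split - and thus each map() call's value - covers 10 lines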
I think that in your case you can follow the delegation pattern and implement a wrapper around LineRecordReader that overrides the necessary methods, i.e. next() (or nextKeyValue() in the new API), to set the value to a concatenation of N lines rather than one line.
I googled an exemplary implementation of a ParagraphRecordReader that uses LineRecordReader to read input data line by line (and concatenate it) until encountering either EOF or a blank line. It then returns a key-value pair where the value is a paragraph (instead of one line). Moreover, the ParagraphInputFormat for this ParagraphRecordReader is as simple as the standard TextInputFormat.
You can find the necessary links to this implementation and a couple of words about it in the following post: http://hadoop-mapreduce.blogspot.com/2011/03/little-more-complicated-recordreaders.html.
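To illustrate the delegation idea, here is a rough, untested sketch (new API; the class name and the fixed line count n are illustrative - a real version would read n from the configuration):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class NLineConcatRecordReader extends RecordReader<LongWritable, Text> {
    private final LineRecordReader delegate = new LineRecordReader();
    private final int n; // how many lines to concatenate per record
    private LongWritable key;
    private Text value;

    public NLineConcatRecordReader(int n) { this.n = n; }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
        delegate.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!delegate.nextKeyValue()) return false;              // no more lines in this split
        key = new LongWritable(delegate.getCurrentKey().get());  // offset of the first line
        value = new Text(delegate.getCurrentValue());
        for (int i = 1; i < n && delegate.nextKeyValue(); i++) {
            value.append("\n".getBytes(), 0, 1);
            Text line = delegate.getCurrentValue();
            value.append(line.getBytes(), 0, line.getLength());
        }
        return true;
    }

    @Override public LongWritable getCurrentKey() { return key; }
    @Override public Text getCurrentValue() { return value; }
    @Override public float getProgress() throws IOException { return delegate.getProgress(); }
    @Override public void close() throws IOException { delegate.close(); }
}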
Best
I am programming a Study in MotiveWave, a program used for (day)trading. The study is its own class. (info about MotiveWave's SDK found here: https://www.motivewave.com/sdk/javadoc/overview-summary.html)
public class L_V2 extends com.motivewave.platform.sdk.study.Study
My study uses 2 different timeframes: the 1-hour and the 4-hour bars. Each is calculated in a different function; in other words, each uses a different data series, as shown in the code below.
I have two values, calculated on the 4-hour timeframe, called 'ma9' and 'ma11', that I would like to use in an 'if'-statement on the 1-hour timeframe.
This is the code for the 4-hour timeframe. It simply calculates 2 moving averages:
@Override
protected void calculateValues(DataContext ctx)
{
    int maPeriodTF2 = getSettings().getInteger(MA_PERIOD_TF2);
    int ma2PeriodTF2 = getSettings().getInteger(MA2_PERIOD_TF2);
    //Object maInput = getSettings().getInput(MA_INPUT, Enums.BarInput.CLOSE);
    BarSize barSizeTF2 = getSettings().getBarSize(MA_BARSIZE_TF2);
    DataSeries series2 = ctx.getDataSeries(barSizeTF2);
    StudyHeader header = getHeader();
    boolean updates = getSettings().isBarUpdates() || (header != null && header.requiresBarUpdates());
    // Calculate Moving Average for the Secondary Data Series
    for (int i = 1; i < series2.size(); i++) {
        if (series2.isComplete(i)) continue;
        if (!updates && !series2.isBarComplete(i)) continue;
        // MA TF2
        Double ma9 = series2.ma(getSettings().getMAMethod(MA_METHOD_TF2), i, maPeriodTF2, getSettings().getInput(MA_INPUT_TF2));
        Double ma11 = series2.ma(getSettings().getMAMethod(MA2_METHOD_TF2), i, ma2PeriodTF2, getSettings().getInput(MA2_INPUT_TF2));
        series2.setDouble(i, Values.MA9_H4, ma9);
        series2.setDouble(i, Values.MA11_H4, ma11);
    }
    // Invoke the parent method to run the "calculate" method below for the primary (chart) data series
    super.calculateValues(ctx);
}
I would now like to use those 2 values, 'ma9' and 'ma11', in another function, on the 1-hour timeframe:
@Override
protected void calculate(int index, DataContext ctx)
{
    DataSeries series = ctx.getDataSeries();
    if (ma9 < ma11 /* && other conditions */) {
        ctx.signal(index, Signals.YOU_SHOULD_BUY, "This would be my buying signal", series.getClose(index));
    }
}
How can I export ma9 and ma11 so they become 'global' and I can re-use them in this other function?
Basically, the idea is to store the values somewhere, or just pass them along appropriately after they have been computed.
There is a Java pattern based on a singleton that allows you to store/retrieve values inside a class (using a collection: a hash map). Any value can be added and retrieved in any class through predefined (key, value) pairs, using the construction Singleton.getInstance() with the standard map operations (put, get).
Maybe this example could be useful.
import java.util.Hashtable;

class Singleton extends Hashtable<String, Object> {
    private static final long serialVersionUID = 1L;
    private static Singleton one_instance = null;

    private Singleton() {
    }

    public static Singleton getInstance() {
        one_instance = (one_instance == null) ? new Singleton() : one_instance;
        return one_instance;
    }
}
import java.util.Random;

public class Reuse {
    public static void main(String[] args) {
        Reuse r = new Reuse();
        Compute c = r.new Compute();
        Singleton.getInstance().put("r1", c.getRandom());
        Singleton.getInstance().put("r2", c.getRandom());
        Singleton.getInstance().put("n", c.getName());
        System.out.println(Singleton.getInstance().get("r1")); // prints random_number_1
        System.out.println(Singleton.getInstance().get("r2")); // prints random_number_2
        System.out.println(Singleton.getInstance().get("n"));  // prints name (value for key n)
    }

    class Compute {
        public Double getRandom() {
            return new Random().nextDouble();
        }

        public String getName() {
            return "name";
        }
    }
}
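Applied to your study, the idea would look roughly like the sketch below (hedged: the key names are illustrative, and if several instances of the study run at once they would share these keys, so in practice you would want to namespace them per study instance):

// In calculateValues(DataContext ctx), after ma9 and ma11 are computed:
Singleton.getInstance().put("ma9", ma9);
Singleton.getInstance().put("ma11", ma11);

// In calculate(int index, DataContext ctx):
Double ma9 = (Double) Singleton.getInstance().get("ma9");
Double ma11 = (Double) Singleton.getInstance().get("ma11");
if (ma9 != null && ma11 != null && ma9 < ma11 /* && other conditions */) {
    // ctx.signal(...);
}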
I have a MapReduce job that outputs an IntWritable as the key and a Point (an object I created that implements Writable) as the value from the map function. Then in the reduce function I use a for-each loop to go through the Iterable of Points to create a list:
@Override
public void reduce(IntWritable key, Iterable<Point> points, Context context) throws IOException, InterruptedException {
    List<Point> pointList = new ArrayList<>();
    for (Point point : points) {
        pointList.add(point);
    }
    context.write(key, pointList);
}
The problem is that this list ends up the correct size, but every Point in it is exactly the same. The fields in my Point class are not static, and I have printed each point individually in the loop to ensure the points are unique (which they are). Furthermore, I have created a separate class that just creates a couple of points and adds them to a list, and this works fine, which implies that MapReduce does something I am not aware of.
Any help with fixing this would be greatly appreciated.
UPDATE:
Code for Mapper class:
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
private IntWritable firstChar = new IntWritable();
private Point point = new Point();

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line, " ");
    while (tokenizer.hasMoreTokens()) {
        String atts = tokenizer.nextToken();
        String cut = atts.substring(1, atts.length() - 1);
        String[] nums = cut.split(",");
        point.set(Double.parseDouble(nums[0]), Double.parseDouble(nums[1]), Double.parseDouble(nums[2]), Double.parseDouble(nums[3]));
        context.write(one, point);
    }
}
Point class:
public class Point implements Writable {
    public Double att1;
    public Double att2;
    public Double att3;
    public Double att4;

    public Point() {
    }

    public void set(Double att1, Double att2, Double att3, Double att4) {
        this.att1 = att1;
        this.att2 = att2;
        this.att3 = att3;
        this.att4 = att4;
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeDouble(att1);
        dataOutput.writeDouble(att2);
        dataOutput.writeDouble(att3);
        dataOutput.writeDouble(att4);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        this.att1 = dataInput.readDouble();
        this.att2 = dataInput.readDouble();
        this.att3 = dataInput.readDouble();
        this.att4 = dataInput.readDouble();
    }

    @Override
    public String toString() {
        return "{" + att1 + ", " + att2 + ", " + att3 + ", " + att4 + "}";
    }
}
The problem is in your reducer: you don't want to store all the points in memory. They may be big, and Hadoop solves that for you (even though in an awkward way).
When looping through the given Iterable<Point>, each Point instance is re-used, so only one instance is kept around at a given time.
That means when you call points.next(), these two things will happen:
1. The Point instance is re-used and set with the next point's data.
2. The same works with the Key instance.
In your case you will find in the List just one instance of Point, inserted multiple times and set with the data from the last Point.
You shouldn't keep references to Writable instances in your reducer; you should clone them instead.
You can read more about this problem here: https://cornercases.wordpress.com/2011/08/18/hadoop-object-reuse-pitfall-all-my-reducer-values-are-the-same/
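A minimal sketch of the fix, assuming the rest of your reducer stays the same: copy each re-used instance before storing it. WritableUtils.clone (from org.apache.hadoop.io) serializes and deserializes the Writable into a fresh object:

import org.apache.hadoop.io.WritableUtils;

@Override
public void reduce(IntWritable key, Iterable<Point> points, Context context) throws IOException, InterruptedException {
    List<Point> pointList = new ArrayList<>();
    for (Point point : points) {
        // clone() makes a fresh copy, so the list no longer holds N references to one re-used instance
        pointList.add(WritableUtils.clone(point, context.getConfiguration()));
    }
    context.write(key, pointList);
}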
I am trying to make a new command for the first time and was following this tutorial, which is slightly old but I believe will still work. After finishing it I tried running my mod; everything ran fine, but my command did not exist. Here is my code:
public class MainRegistry {
    @EventHandler
    public void serverStart(FMLServerStartingEvent event) {
        MinecraftServer server = MinecraftServer.getServer();
        ICommandManager command = server.getCommandManager();
        ServerCommandManager manager = (ServerCommandManager) command;
        manager.registerCommand(new FireBall5());
    }
}
And my actual CommandBase class:
public class FireBall5 extends CommandBase {
    @Override
    public String getCommandName() {
        return "fireball 5";
    }

    @Override
    public String getCommandUsage(ICommandSender var1) {
        return "Shoots fireball with explosive power 5";
    }

    @Override
    public void processCommand(ICommandSender icommandsender, String[] var2) {
        if (icommandsender instanceof EntityPlayer) {
            EntityPlayer player = (EntityPlayer) icommandsender;
            World par2World = player.worldObj;
            if (!par2World.isRemote)
                par2World.spawnEntityInWorld(new PlayerFireBall(par2World, 5.0f));
        }
    }
}
It calls an entity, PlayerFireBall, which I created myself; it is simply a fireball with increased explosion power.
Commands cannot contain whitespace. To implement your command, change it as follows:
@Override
public String getCommandName() {
    return "fireball"; // Remove the argument - leave the command name only.
}
The argument has to be read like this instead:
@Override
public void processCommand(ICommandSender sender, String[] var2) {
    if (sender instanceof EntityPlayer) {
        final EntityPlayer player = (EntityPlayer) sender;
        final World par2World = player.worldObj;
        float power = 5.0f; // Write the default value here!

        // The "default" method: silently keep the default if the argument is missing or invalid.
        if (var2.length > 0) try {
            power = Float.parseFloat(var2[0]); // Parse the first argument.
        } catch (NumberFormatException ex) {}

        // The "validation" method (an alternative to the above): reject bad input explicitly.
        if (var2.length == 0) {
            sender.sendMessage("You forgot to specify the fireball power.");
            return;
        }
        if (!var2[0].matches("\\d{2}")) { // Asserts this argument is two digits
            sender.sendMessage("Incorrect.");
            return;
        }
        power = Float.parseFloat(var2[0]);

        if (!par2World.isRemote)
            par2World.spawnEntityInWorld(new PlayerFireBall(par2World, power));
    }
}
Read more:
Reading arguments as Integer for a Bounty in a Bukkit plugin
See @Unihedron's answer for the fix for the actual problem with this code. This answer simply cleans up that code even more.
CommandBase, from which you inherit, actually has several static methods that make parsing numbers and such from arguments much safer.
The ones you might want to use are:
CommandBase.parseDouble(ICommandSender, String) - Parses the given string and returns a double safely
CommandBase.parseDoubleWithMin(ICommandSender, String, int min) - Same as above, but with a required minimum value
CommandBase.parseDoubleBounded(ICommandSender, String, int min, int max) - Same as above, but with an upper limit as well
All these have an integer counterpart as well.
Also, not useful for your context, but maybe for future use is this:
CommandBase.parseBoolean(ICommandSender, String) - Parses the given string and returns a boolean safely
Look through the CommandBase class for many more useful static methods.
So for example, rather than this:
if (var2.length > 0) try {
    power = Float.parseFloat(var2[0]); // Parse the first argument.
} catch (NumberFormatException ex) {}
Try this:
if (var2.length > 0) {
    // Bounded because you don't want a power less than 0 - the minimum could really be anything.
    power = (float) CommandBase.parseDoubleWithMin(sender, var2[0], 0);
}
Minecraft will automatically tell the player if there is something wrong with their input, and the parsed value is safely returned to you.
Good luck with your mod and have fun!
Update: It was a static buried deep in some code where it was used for just a couple of instructions. Thank you all for the suggestions.
We are not using one HashMap across threads (yes, that is bad for many reasons). Each thread has its own HashMap.
We have a class that extends Thread. In Thread.run() we create a HashMap, set a key/value pair in it, and pass that HashMap to a method. That method retrieves the value from the HashMap, inserts it into a string, and returns the string.
Sometimes the returned string has a different value (still in Thread.run()). This only occurs on hardware with 3+ physical cores, and it has only happened twice (before we added logging to help us find exactly what is going on, of course).
Any idea why this would occur?
Update: here's the full code. ProcessTxt is what pulls the value from the HashMap and puts it in the string.
import java.io.*;
import java.util.HashMap;

import junit.framework.TestCase;
import net.windward.datasource.dom4j.Dom4jDataSource;
import net.windward.xmlreport.ProcessReport;
import net.windward.xmlreport.ProcessTxt;

/**
 * Test calling from multiple threads.
 */
public class TestThreads extends TestCase {

    private static String path = ".";

    // JUnit stuff
    public TestThreads(String name) {
        super(name);
    }

    // Get logging going - called before any tests run
    protected void setUp() throws Exception {
        ProcessReport.init();
    }

    // this is not necessary - called after any tests are run
    protected void tearDown() {
    }

    private static final int NUM_THREADS = 100;

    private boolean hadWithVarError = false;

    /**
     * Test that each thread has unique variables.
     */
    public void testRunReportsWithVariables() throws Exception {
        // run NUM_THREADS threads
        ReportThreadWithVariables[] th = new ReportThreadWithVariables[NUM_THREADS];
        for (int ind = 0; ind < NUM_THREADS; ind++) {
            th[ind] = new ReportThreadWithVariables(this, ind);
            th[ind].setName("Run " + ind);
        }
        for (int ind = 0; ind < NUM_THREADS; ind++)
            th[ind].start();
        boolean allDone = false;
        while (!allDone) {
            Thread.sleep(100);
            allDone = true;
            for (int ind = 0; ind < NUM_THREADS; ind++)
                if (th[ind].isAlive())
                    allDone = false;
        }
        assertTrue(!hadWithVarError);
    }

    public static class ReportThreadWithVariables extends Thread {
        private TestThreads obj;
        private int num;

        public ReportThreadWithVariables(TestThreads tt, int num) {
            obj = tt;
            this.num = num;
        }

        public void run() {
            try {
                System.out.println("starting " + num);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                ProcessTxt pt = new ProcessTxt(new FileInputStream(new File(path, "Thread_Test.docx")), out);
                pt.processSetup();
                // don't use order1.xml, but need a datasource.
                Dom4jDataSource datasource = new Dom4jDataSource(new FileInputStream(new File(path, "order1.xml")));
                HashMap map = new HashMap();
                map.put("num", new Integer(num));
                datasource.setMap(map);
                pt.processData(datasource, "");
                pt.processComplete();
                String result = out.toString().trim();
                System.out.println("complete " + num + ", result = " + result);
                String expected = "Number: " + num;
                if (!result.equals(expected))
                    obj.hadWithVarError = true;
                assertEquals(expected, result);
            } catch (Throwable e) {
                obj.hadWithVarError = true;
                e.printStackTrace();
            }
        }
    }
}
Given the lack of code, and based solely on what has been written, I am going to hypothesize that something is static. That is, somewhere along the line a static member is being written to/read from.
num is not mutable and the other variables (the string, the map) are local, so ReportThreadWithVariables looks thread-safe. It seems to me that the problem is in the calls to external objects rather than in what you posted.
Are the classes you use documented as thread-safe?
For example, the javadoc of the processData method states that it should not be called multiple times for the same datasource, which you seem to be doing (same file name).
ps: (not related) you could use a CountDownLatch instead of the while loop.
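For reference, a sketch of that last suggestion - the assumption here is that each thread gets the latch passed in and counts down in a finally block, so a crashed thread cannot hang the test:

import java.util.concurrent.CountDownLatch;

final CountDownLatch done = new CountDownLatch(NUM_THREADS);

// pass 'done' into each ReportThreadWithVariables, and at the end of run():
//     finally { done.countDown(); }

// in testRunReportsWithVariables(), instead of the sleep/poll loop:
done.await();
assertTrue(!hadWithVarError);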
I'm writing a SAX parser in Java to parse a 2.5GB XML file of wikipedia articles. Is there a way to monitor the progress of the parsing in Java?
Thanks to EJP's suggestion of ProgressMonitorInputStream, in the end I extended FilterInputStream so that a ChangeListener can be used to monitor the current read location in terms of bytes.
With this you have finer control, for example showing multiple progress bars for parallel reading of big XML files, which is exactly what I did.
So, a simplified version of the monitorable stream:
/**
 * A class that monitors the read progress of an input stream.
 *
 * @author Hermia Yeung "Sheepy"
 * @since 2012-04-05 18:42
 */
public class MonitoredInputStream extends FilterInputStream {
    private volatile long mark = 0;
    private volatile long lastTriggeredLocation = 0;
    private volatile long location = 0;
    private final int threshold;
    private final List<ChangeListener> listeners = new ArrayList<>(4);

    /**
     * Creates a MonitoredInputStream over an underlying input stream.
     * @param in Underlying input stream, should be non-null because of no public setter
     * @param threshold Minimum position change (in bytes) to trigger a change event.
     */
    public MonitoredInputStream(InputStream in, int threshold) {
        super(in);
        this.threshold = threshold;
    }

    /**
     * Creates a MonitoredInputStream over an underlying input stream.
     * The default threshold is 16KB; a smaller threshold may hurt performance on larger streams.
     * @param in Underlying input stream, should be non-null because of no public setter
     */
    public MonitoredInputStream(InputStream in) {
        super(in);
        this.threshold = 1024 * 16;
    }

    public void addChangeListener(ChangeListener l) { if (!listeners.contains(l)) listeners.add(l); }
    public void removeChangeListener(ChangeListener l) { listeners.remove(l); }
    public long getProgress() { return location; }

    protected void triggerChanged(final long location) {
        if (threshold > 0 && Math.abs(location - lastTriggeredLocation) < threshold) return;
        lastTriggeredLocation = location;
        if (listeners.size() <= 0) return;
        try {
            final ChangeEvent evt = new ChangeEvent(this);
            for (ChangeListener l : listeners) l.stateChanged(evt);
        } catch (ConcurrentModificationException e) {
            triggerChanged(location); // List changed? Let's re-try.
        }
    }

    @Override public int read() throws IOException {
        final int i = super.read();
        if (i != -1) triggerChanged(location++);
        return i;
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        final int i = super.read(b, off, len);
        if (i > 0) triggerChanged(location += i);
        return i;
    }

    @Override public long skip(long n) throws IOException {
        final long i = super.skip(n);
        if (i > 0) triggerChanged(location += i);
        return i;
    }

    @Override public void mark(int readlimit) {
        super.mark(readlimit);
        mark = location;
    }

    @Override public void reset() throws IOException {
        super.reset();
        if (location != mark) triggerChanged(location = mark);
    }
}
It doesn't know - or care - how big the underlying stream is, so you need to get it some other way, such as from the file itself.
So, here goes the simplified sample usage:
try (
    MonitoredInputStream mis = new MonitoredInputStream(new FileInputStream(file), 65536 * 4)
) {
    // Setup max progress and listener to monitor read progress
    progressBar.setMaxProgress((int) file.length()); // Swing thread or before display please
    mis.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) {
        SwingUtilities.invokeLater(new Runnable() { @Override public void run() {
            progressBar.setProgress((int) mis.getProgress()); // Promise me you WILL use MVC instead of this anonymous class mess!
        }});
    }});
    // Start parsing. Listener would call Swing event thread to do the update.
    SAXParserFactory.newInstance().newSAXParser().parse(mis, this);
} catch (IOException | ParserConfigurationException | SAXException e) {
    e.printStackTrace();
} finally {
    progressBar.setVisible(false); // Again please call this in swing event thread
}
In my case the progress rises nicely from left to right without abnormal jumps. Adjust the threshold for the best balance between performance and responsiveness: too small and the read time can more than double on slow devices; too big and the progress will not be smooth.
Hope it helps. Feel free to edit if you find mistakes or typos, or vote up to send me some encouragement! :D
Use a javax.swing.ProgressMonitorInputStream.
You can get an estimate of the current line/column in your file by overriding the setDocumentLocator method of org.xml.sax.helpers.DefaultHandler (or the older HandlerBase). This method is called with an object from which you can get an approximation of the current line/column when needed.
Edit: To the best of my knowledge, there is no standard way to get the absolute position. However, I am sure some SAX implementations do offer this kind of information.
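A small sketch of that approach, with illustrative names - the reporting interval is arbitrary, and note that Locator.getLineNumber() may return -1 if the parser cannot determine the position:

import org.xml.sax.Attributes;
import org.xml.sax.Locator;
import org.xml.sax.helpers.DefaultHandler;

public class ProgressHandler extends DefaultHandler {
    private Locator locator;
    private int elementCount = 0;

    @Override
    public void setDocumentLocator(Locator locator) {
        this.locator = locator; // the parser calls this before startDocument()
    }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) {
        if (++elementCount % 10000 == 0 && locator != null) {
            System.out.println("around line " + locator.getLineNumber()
                    + ", column " + locator.getColumnNumber());
        }
    }
}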
Assuming you know how many articles you have, can't you just keep a counter in the handler? E.g.
public void startElement (String uri, String localName,
                          String qName, Attributes attributes)
        throws SAXException {
    if (qName.equals("article")) {
        counter++;
    }
    ...
}
(I don't know whether you are parsing "article", it's just an example)
If you don't know the number of articles in advance, you will need to count them first. Then you can print the status as the number of tags read over the total number of tags, say every 100 tags (counter % 100 == 0).
Or even have another thread monitor the progress. In this case you might want to synchronize access to the counter, though it's not strictly necessary given that it doesn't need to be perfectly accurate.
My 2 cents
I'd use the input stream position. Make your own trivial stream class that delegates to/inherits from the "real" one and keeps track of bytes read. As you say, getting the total file size is easy. I wouldn't worry about buffering, lookahead, etc. - for large files like these it's chickenfeed. On the other hand, I'd cap the reported position at 99%.
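A tiny, hypothetical sketch of that idea - a delegating stream that counts bytes, with the reported progress capped at 99% (buffering and lookahead mean the stream usually hits EOF before the parser is actually finished):

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class CountingInputStream extends FilterInputStream {
    private long bytesRead = 0;

    CountingInputStream(InputStream in) { super(in); }

    @Override public int read() throws IOException {
        int b = super.read();
        if (b != -1) bytesRead++;
        return b;
    }

    @Override public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) bytesRead += n;
        return n;
    }

    /** Progress in percent against a known total size, capped at 99%. */
    int percentOf(long totalBytes) {
        return (int) Math.min(99, bytesRead * 100 / totalBytes);
    }
}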