<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments for Todd&#039;s blog</title>
	<atom:link href="https://toddlittleweb.com/wordpress/comments/feed/" rel="self" type="application/rss+xml" />
	<link>https://toddlittleweb.com/wordpress</link>
	<description>Just another WordPress site</description>
	<lastBuildDate>Wed, 29 May 2019 07:22:34 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0.1</generator>
	<item>
		<title>Comment on The Testing Diamond and the Pyramid by Testing Rails Apps: Optimize for Business Value &#8211; The Lien Startup</title>
		<link>https://toddlittleweb.com/wordpress/2014/06/23/the-testing-diamond-and-the-pyramid-2/#comment-62123</link>
		<dc:creator><![CDATA[Testing Rails Apps: Optimize for Business Value &#8211; The Lien Startup]]></dc:creator>
		<pubDate>Wed, 29 May 2019 07:22:34 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=83#comment-62123</guid>
		<description><![CDATA[[&#8230;] Testing Diamond emphasizes acceptance tests that go through the main interfaces that your application exposes. In [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] Testing Diamond emphasizes acceptance tests that go through the main interfaces that your application exposes. In [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on The Cost of Delay and the Cost of Crap by Patrick McMonagle</title>
		<link>https://toddlittleweb.com/wordpress/2014/08/15/the-cost-of-delay-and-the-cost-of-crap/#comment-52931</link>
		<dc:creator><![CDATA[Patrick McMonagle]]></dc:creator>
		<pubDate>Fri, 04 Aug 2017 16:05:08 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=172#comment-52931</guid>
		<description><![CDATA[&quot;Value Lost from Delay&quot; does have a cognitive bump that had me reading the term backwards. Try &quot;delay&quot; as a verb, not a noun?

I can offer &quot;Delayed Value Lost&quot; or &quot;Delayed Value Shrinkage.&quot;]]></description>
		<content:encoded><![CDATA[<p>&#8220;Value Lost from Delay&#8221; does have a cognitive bump that had me reading the term backwards. Try &#8220;delay&#8221; as a verb, not a noun?</p>
<p>I can offer &#8220;Delayed Value Lost&#8221; or &#8220;Delayed Value Shrinkage.&#8221;</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on About Todd by Andy Silber</title>
		<link>https://toddlittleweb.com/wordpress/sample-page/#comment-52804</link>
		<dc:creator><![CDATA[Andy Silber]]></dc:creator>
		<pubDate>Mon, 24 Jul 2017 18:03:58 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?page_id=2#comment-52804</guid>
		<description><![CDATA[Todd,

I just read your article &quot;Adaptive Agility - Managing Complexity and Uncertainty&quot; and found you had come to the same conclusion I&#039;ve come to in the hardware product development world. My book &quot;Adaptive Product Management: Leading Complex and Uncertain Projects&quot; (http://a.co/2wRFxjV) makes the same point about considering complexity and uncertainty and picking a management style that works for that project. 

Cheers,

Andy Silber]]></description>
		<content:encoded><![CDATA[<p>Todd,</p>
<p>I just read your article &#8220;Adaptive Agility &#8211; Managing Complexity and Uncertainty&#8221; and found you had come to the same conclusion I&#8217;ve come to in the hardware product development world. My book &#8220;Adaptive Product Management: Leading Complex and Uncertain Projects&#8221; (<a href="http://a.co/2wRFxjV" rel="nofollow">http://a.co/2wRFxjV</a>) makes the same point about considering complexity and uncertainty and picking a management style that works for that project. </p>
<p>Cheers,</p>
<p>Andy Silber</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Sebastian Kübeck</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-51482</link>
		<dc:creator><![CDATA[Sebastian Kübeck]]></dc:creator>
		<pubDate>Fri, 21 Apr 2017 07:05:21 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-51482</guid>
		<description><![CDATA[Todd,

thank you very much for the detailed explanations! The problem I have with linear burnup is that it is a point estimate that doesn&#039;t tell me anything about the uncertainty. It produces a value and I don&#039;t know what to do with it.

We had teams that started slow, the linear burnup indicated a catastrophe and in the end, the projects turned out to be remarkably successful.
We had other teams that started quick and got in trouble close to the end.
Our approach was to try to get the most critical stuff done as early as possible and relied on the people&#039;s judgment (which is far from being perfect of course).

Currently, I&#039;m doing a little research on Monte Carlo estimation (see https://sebastiankuebeck.wordpress.com/2017/03/15/planning-with-uncertainty-part-2/) and I am testing the method by applying it to several open source projects. I&#039;ll publish the results as soon as I&#039;m done.
The nice thing about it is that it doesn&#039;t require assumptions about the distribution and that it produces a distribution instead of a point estimate. The downside is that it requires a reasonable amount of historic data.

To the effort of story point estimation: The effort of story point estimation depends on the team/company culture. We used &lt;a href=&quot;http://campey.blogspot.co.at/2010/09/magic-estimation.html&quot; rel=&quot;nofollow&quot;&gt;Magic Estimation&lt;/a&gt; to estimate backlogs with 100+ stories in an hour or so. I have also seen teams who spend a whole day per sprint discussing estimates without getting better results.

&lt;blockquote&gt;
In addition to looking at the P90/P10 ratios we also compared the resulting curves visually (they are virtually identical) and also used Q-Q plots.
&lt;/blockquote&gt;

Well, if the result is as overwhelming as you wrote then there is no need to verify the method by using a different method. The terrible things that have been done with and to statistics (see e.g. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124) made me a little bit paranoid. Sorry for that. ;-)]]></description>
		<content:encoded><![CDATA[<p>Todd,</p>
<p>thank you very much for the detailed explanations! The problem I have with linear burnup is that it is a point estimate that doesn&#8217;t tell me anything about the uncertainty. It produces a value and I don&#8217;t know what to do with it.</p>
<p>We had teams that started slow, the linear burnup indicated a catastrophe and in the end, the projects turned out to be remarkably successful.<br />
We had other teams that started quick and got in trouble close to the end.<br />
Our approach was to try to get the most critical stuff done as early as possible and relied on the people&#8217;s judgment (which is far from being perfect of course).</p>
<p>Currently, I&#8217;m doing a little research on Monte Carlo estimation (see <a href="https://sebastiankuebeck.wordpress.com/2017/03/15/planning-with-uncertainty-part-2/" rel="nofollow">https://sebastiankuebeck.wordpress.com/2017/03/15/planning-with-uncertainty-part-2/</a>) and I am testing the method by applying it to several open source projects. I&#8217;ll publish the results as soon as I&#8217;m done.<br />
The nice thing about it is that it doesn&#8217;t require assumptions about the distribution and that it produces a distribution instead of a point estimate. The downside is that it requires a reasonable amount of historic data.</p>
<p>To the effort of story point estimation: The effort of story point estimation depends on the team/company culture. We used <a href="http://campey.blogspot.co.at/2010/09/magic-estimation.html" rel="nofollow">Magic Estimation</a> to estimate backlogs with 100+ stories in an hour or so. I have also seen teams who spend a whole day per sprint discussing estimates without getting better results.</p>
<blockquote><p>
In addition to looking at the P90/P10 ratios we also compared the resulting curves visually (they are virtually identical) and also used Q-Q plots.
</p></blockquote>
<p>Well, if the result is as overwhelming as you wrote then there is no need to verify the method by using a different method. The terrible things that have been done with and to statistics (see e.g. <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124" rel="nofollow">http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124</a>) made me a little bit paranoid. Sorry for that. <img src="https://toddlittleweb.com/wordpress/wp-includes/images/smilies/icon_wink.gif" alt=";-)" class="wp-smiley" /></p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Todd</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-51373</link>
		<dc:creator><![CDATA[Todd]]></dc:creator>
		<pubDate>Thu, 13 Apr 2017 16:21:17 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-51373</guid>
		<description><![CDATA[Sebastian,
Thanks for the great post.  What I have found is that the use of the linear burnup extrapolation is a good, but not great tool for predictions.  For this data using the burnup gives an &lt;a href=&quot;https://www.stickyminds.com/article/becoming-better-estimator&quot; rel=&quot;nofollow&quot;&gt;EQF (Estimation Quality Factor)&lt;/a&gt; median of 6.0 for both story point and throughput extrapolation. This compares well with industry data from DeMarco and Lister reporting a median of 3.8.  They judged a median EQF of 5.0 as pretty good.  I think this improvement in EQF is primarily gained by de-biasing the estimates.  That is the one thing that burnups do well, while we frequently see biases with estimates from humans.  Burnup charts do not, however, reduce uncertainty unless the underlying data shows such a reduction.  
You are correct that this study does not show that story points are bad in general. What we see is that for most conditions, there is little difference between using story points and using throughput.  The only condition we studied that showed a significant difference is when the backlog contains stories with a wide distribution of story points. There may be other reasons why story points could be bad or good.  
But I would disagree with your contention that counting stories would not reduce estimation time. There is a big difference between quantifying story points for each story and simply identifying which stories are so big that they need to be split.  
As for the P90/P10 ratio, I did not post here all the rationale for why we used it.  The primary reason that we used it is that it is essentially a means of stating the variance for distributions which are lognormal-ish (lognormal or Weibull).  For such skewed distributions, we state the variance as a ratio rather than as a plus/minus which would be proper for a normal symmetric distribution.  As we reported, we found the distributions to be close to either lognormal or Weibull.  The nice thing about the P90/P10 ratio is that it uniquely describes the shape factor of either a lognormal or a Weibull distribution.  For a lognormal distribution, the log of the P90/P10 ratio is proportional to the variance of the log of the distribution.  For Weibull it is easy to derive the shape factor from the ratio.  In addition to looking at the P90/P10 ratios we also compared the resulting curves visually (they are virtually identical) and also used Q-Q plots.
Lastly, you indicated that you found that burnup chart extrapolations did not produce reliable predictions.  I’m curious what approaches you have used that are better.]]></description>
		<content:encoded><![CDATA[<p>Sebastian,<br />
Thanks for the great post.  What I have found is that the use of the linear burnup extrapolation is a good, but not great tool for predictions.  For this data using the burnup gives an <a href="https://www.stickyminds.com/article/becoming-better-estimator" rel="nofollow">EQF (Estimation Quality Factor)</a> median of 6.0 for both story point and throughput extrapolation. This compares well with industry data from DeMarco and Lister reporting a median of 3.8.  They judged a median EQF of 5.0 as pretty good.  I think this improvement in EQF is primarily gained by de-biasing the estimates.  That is the one thing that burnups do well, while we frequently see biases with estimates from humans.  Burnup charts do not, however, reduce uncertainty unless the underlying data shows such a reduction.<br />
You are correct that this study does not show that story points are bad in general. What we see is that for most conditions, there is little difference between using story points and using throughput.  The only condition we studied that showed a significant difference is when the backlog contains stories with a wide distribution of story points. There may be other reasons why story points could be bad or good.<br />
But I would disagree with your contention that counting stories would not reduce estimation time. There is a big difference between quantifying story points for each story and simply identifying which stories are so big that they need to be split.<br />
As for the P90/P10 ratio, I did not post here all the rationale for why we used it.  The primary reason that we used it is that it is essentially a means of stating the variance for distributions which are lognormal-ish (lognormal or Weibull).  For such skewed distributions, we state the variance as a ratio rather than as a plus/minus which would be proper for a normal symmetric distribution.  As we reported, we found the distributions to be close to either lognormal or Weibull.  The nice thing about the P90/P10 ratio is that it uniquely describes the shape factor of either a lognormal or a Weibull distribution.  For a lognormal distribution, the log of the P90/P10 ratio is proportional to the variance of the log of the distribution.  For Weibull it is easy to derive the shape factor from the ratio.  In addition to looking at the P90/P10 ratios we also compared the resulting curves visually (they are virtually identical) and also used Q-Q plots.<br />
Lastly, you indicated that you found that burnup chart extrapolations did not produce reliable predictions.  I’m curious what approaches you have used that are better.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Sebastian Kübeck</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-51263</link>
		<dc:creator><![CDATA[Sebastian Kübeck]]></dc:creator>
		<pubDate>Thu, 06 Apr 2017 14:55:55 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-51263</guid>
		<description><![CDATA[Hello Todd,

first of all, thank you for taking the time to create this study! 
If I got this right, you conclude that linear extrapolation in the burn up chart doesn&#039;t generate reliable predictions. This confirms my personal experience. 
However, the study does not provide evidence that story point estimation is bad in general,  nor does it suggest an alternative that creates better results.
Just counting stories doesn&#039;t really reduce estimation effort as you still have to estimate whether a story is a nut or a 10 tons super nut from outer space that has to be split into parts.
Another thing I noticed is that you only used the P90/P10 method without proving in any way that the assumptions for using this method are really met. I&#039;d personally use a few methods just to be sure that the choice of method doesn&#039;t influence the result.]]></description>
		<content:encoded><![CDATA[<p>Hello Todd,</p>
<p>first of all, thank you for taking the time to create this study!<br />
If I got this right, you conclude that linear extrapolation in the burn up chart doesn&#8217;t generate reliable predictions. This confirms my personal experience.<br />
However, the study does not provide evidence that story point estimation is bad in general,  nor does it suggest an alternative that creates better results.<br />
Just counting stories doesn&#8217;t really reduce estimation effort as you still have to estimate whether a story is a nut or a 10 tons super nut from outer space that has to be split into parts.<br />
Another thing I noticed is that you only used the P90/P10 method without proving in any way that the assumptions for using this method are really met. I&#8217;d personally use a few methods just to be sure that the choice of method doesn&#8217;t influence the result.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Dealing with the Quality of Estimates - KBP Media -</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-47016</link>
		<dc:creator><![CDATA[Dealing with the Quality of Estimates - KBP Media -]]></dc:creator>
		<pubDate>Fri, 16 Sep 2016 03:10:11 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-47016</guid>
		<description><![CDATA[[&#8230;] That’s why James Grenning suggested planning poker and its variants. That’s why the #noestimates folks suggest that counting stories may be an appropriate alternative to spending a lot of time even doing planning poker. That’s also one of the implied findings from a study that Todd Little describes in his post To Estimate or #NoEstimates that is the question. [&#8230;]]]></description>
		<content:encoded><![CDATA[<p>[&#8230;] That’s why James Grenning suggested planning poker and its variants. That’s why the #noestimates folks suggest that counting stories may be an appropriate alternative to spending a lot of time even doing planning poker. That’s also one of the implied findings from a study that Todd Little describes in his post To Estimate or #NoEstimates that is the question. [&#8230;]</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Todd</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-45778</link>
		<dc:creator><![CDATA[Todd]]></dc:creator>
		<pubDate>Mon, 25 Apr 2016 17:16:13 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-45778</guid>
		<description><![CDATA[Glen,

Thanks for the question.  We aren’t normalizing to compare SP across projects, but rather to compare the overall projects against each other.  We normalize SP by dividing by the total SP delivered in the project.   We do a similar normalization for time.  This ensures that the burnup chart for each project starts at (0,0) and eventually concludes at (1,1).  Using this approach allows us to compare projects on the same scale.]]></description>
		<content:encoded><![CDATA[<p>Glen,</p>
<p>Thanks for the question.  We aren’t normalizing to compare SP across projects, but rather to compare the overall projects against each other.  We normalize SP by dividing by the total SP delivered in the project.   We do a similar normalization for time.  This ensures that the burnup chart for each project starts at (0,0) and eventually concludes at (1,1).  Using this approach allows us to compare projects on the same scale.</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on To Estimate or #NoEstimates, that is the Question by Glen ALLEMAN</title>
		<link>https://toddlittleweb.com/wordpress/2016/03/14/to-estimate-or-noestimates-that-is-the-question-2/#comment-45771</link>
		<dc:creator><![CDATA[Glen ALLEMAN]]></dc:creator>
		<pubDate>Sun, 24 Apr 2016 16:56:49 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=186#comment-45771</guid>
		<description><![CDATA[A question on &quot;normalized.&quot; Are the SPs &quot;normalized&quot; between each project. Since SPs are ordinal measures. Is a SP in one project worth the same as a SP in another project? 
Hours and dollars are cardinal measures so comparing hours and dollars between projects can be done. How is this done with SPs?]]></description>
		<content:encoded><![CDATA[<p>A question on &#8220;normalized.&#8221; Are the SPs &#8220;normalized&#8221; between each project. Since SPs are ordinal measures. Is a SP in one project worth the same as a SP in another project?<br />
Hours and dollars are cardinal measures so comparing hours and dollars between projects can be done. How is this done with SPs?</p>
]]></content:encoded>
	</item>
	<item>
		<title>Comment on Collaborating with Non-Collaborators by failed fat lady</title>
		<link>https://toddlittleweb.com/wordpress/2011/04/26/collaborating-with-non-collaborators/#comment-15</link>
		<dc:creator><![CDATA[failed fat lady]]></dc:creator>
		<pubDate>Sun, 19 Jun 2011 23:43:09 +0000</pubDate>
		<guid isPermaLink="false">http://toddlittleweb.com/wordpress/?p=9#comment-15</guid>
		<description><![CDATA[Nice Blog with Excellent information]]></description>
		<content:encoded><![CDATA[<p>Nice Blog with Excellent information</p>
]]></content:encoded>
	</item>
</channel>
</rss>
