Hacker News

Software does not exist in a vacuum. You are probably right in thinking that story points should not matter to you, as a developer; but they do matter to external stakeholders.

Forecasting and management of expectations is necessary because software often has external real-world dependencies: available funding for a first release, marketing materials, hardware that can't be released without software, trade shows, developer conferences, yearly retail release parties, OEM partners, cyclic stable releases for enterprise customers who won't push software into production without extensive pre-testing, graphics adapters that can't be released without drivers, rockets that won't launch, cars that won't drive, etc.

All of these things require some degree of forecasting and appropriately evolving management of expectations. Here's where we stand. Here's what we can commit to. Here's what we might deliver, but are willing to defer to a future release. Here are the (low-value) items that were under consideration that we will definitely defer to a future release. Here are the features you will need to drop to get feature B in this release.

The purpose of story points is to help provide forecasting and management of expectations (with appropriately limited commitments) to stakeholders, on the understanding that forecasts are approximate.

Calibrated burndown of story points is pretty much the only basis on which forecasting and management of expectations can be done in an agile process. The key is to make sure that stakeholders understand the difference between forecasting and commitment, and to make sure your development team is appropriately protected by a healthy development process.
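A minimal sketch of what that calibrated burndown forecast can look like in practice (the function name and all numbers are hypothetical, not from the thread): divide the remaining backlog by observed per-sprint velocity, and report a range rather than a single date, since the whole point of a forecast, as opposed to a commitment, is the spread.

```python
import math

def forecast_sprints(remaining_points, velocities):
    """Project sprints remaining from recent per-sprint velocity.

    Returns an expected value plus an optimistic/pessimistic range,
    based on the fastest and slowest observed sprints.
    """
    avg = sum(velocities) / len(velocities)
    return {
        "expected": math.ceil(remaining_points / avg),
        "optimistic": math.ceil(remaining_points / max(velocities)),
        "pessimistic": math.ceil(remaining_points / min(velocities)),
    }

# Hypothetical team: 120 points left, last five sprints burned 18-26 points.
print(forecast_sprints(120, [20, 24, 18, 26, 22]))
# {'expected': 6, 'optimistic': 5, 'pessimistic': 7}
```

Presenting the range to stakeholders, rather than the single expected number, is one way to keep the forecast-vs-commitment distinction visible.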

As for the author's claim that you get better forecasts by just counting stories instead of summing story points... color me skeptical. I do get that it prevents some obvious abuses of process, while enabling others that are just as bad. If somebody is using story points as a developer performance metric (which they shouldn't), there's nothing that prevents them from using completed stories as a developer performance metric (which they shouldn't). The corresponding abuse of process to combat that metric would be to hyper-decompose stories.
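For what it's worth, the two forecasts are mechanically the same calculation, just in different units. A toy comparison on hypothetical sprint history (all numbers invented for illustration):

```python
import math

# Hypothetical history: (stories completed, points completed) per sprint.
history = [(9, 21), (11, 24), (8, 19), (10, 23)]

def sprints_left(remaining, per_sprint_throughput):
    """Sprints to burn `remaining` units at the average observed rate."""
    avg = sum(per_sprint_throughput) / len(per_sprint_throughput)
    return math.ceil(remaining / avg)

# The same backlog measured both ways: 38 stories, 87 points.
by_count = sprints_left(38, [s for s, _ in history])
by_points = sprints_left(87, [p for _, p in history])
print(by_count, by_points)  # 4 4
```

When story sizes average out over a backlog, the two methods converge, which is presumably why counting can work at all; the difference lies in which abuses each metric invites, as noted above.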



> Calibrated burndown of story points is pretty much the only basis on which forecasting and management of expectations can be done in an agile process.

Do you mean in an Agile process?

Sorry, I have a visceral reaction to this, having seen teams try to accomplish large things by breaking them down into story-level tasks and summing up the estimates, and then watched the slow-motion train wreck as gaps emerge, requirements evolve, learnings accumulate, and everyone throws their hands in the air and points to the long paper trail of tickets that proves they did their job.

Scrum and story points are a reasonable way to drive local incremental improvements when you have no greater ambitions than local optimizations of an established system, but they have a low ceiling for achieving anything ambitious in a large system. At best you'll get solid utilization of engineering resources when you have a steady backlog of small requests; at worst you'll redirect engineers' attention away from the essential details that make or break difficult projects and toward administrative overhead and ticket shuffling. I understand why this might be the way to go in a low-trust environment, but it's really no way to live for an experienced and talented cross-functional team.


I would love to hear your view of handling ambitious changes in a large system.

(In the middle of this scenario. Mix of experienced/talented and low trust environment.)


> The corresponding abuse of process to combat that metric would be to hyper-decompose stories.

You're right about that, but that's also one of the benefits of the approach. Inflating a point estimate is easy, and there's no real audit trail for why the value is what it is.

On the other hand, if a team creates tasks like "Commit the code" or "save the file" it's pretty easy to identify as fluff.


'save the file' is not fluff.


There was a part of the article that discussed that you create tasks, not stories.

You can group the tasks into stories or milestones or iterations or epics.

In general, he’s saying you should always keep breaking down a task until it’s a 1, since 1s are easy to estimate.
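If every leaf task really is a 1, the estimate collapses to a leaf count. A minimal sketch (the epic and task names are made up for illustration):

```python
def count_leaves(task):
    """Estimate = number of leaf tasks, since each leaf is a 1-pointer."""
    subtasks = task.get("subtasks", [])
    if not subtasks:
        return 1
    return sum(count_leaves(t) for t in subtasks)

# Hypothetical epic, broken down until every leaf is trivially small.
epic = {"name": "checkout", "subtasks": [
    {"name": "cart page"},
    {"name": "payment", "subtasks": [
        {"name": "call API"},
        {"name": "retry logic"},
    ]},
]}
print(count_leaves(epic))  # 3
```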

The key part that may be missed is the “mob programming” or “pair programming” aspect where all the engineers on a team sit together and work through a story, milestone, or epic to come up with a list of 1-point tasks.

Obviously this can't always be done, so the only effective end solution is maximum pair/mob programming, unless all tasks in an iteration are accounted for and broken down into easily understandable and estimable bits of work.

There is at least some truth to the notion that if you use mob programming, estimating becomes pointless.


> The key part that may be missed is the “mob programming” or “pair programming” aspect where all the engineers on a team sit together and work through a story, milestone, or epic to come up with a list of 1-point tasks.

My issue with this has always been that once an issue is straightforward enough to estimate as a 1-point task, you could’ve implemented the task already during the estimation process. The unknown effort is almost never in the writing-code part, but in figuring out the complexities around business rules & externalities.

But this doesn’t fix the process, it just moves the variability of effort & time into a different part of the process.


I have actually always had the same feeling about 1 pointers, but in this case you're talking about a collection of small tasks that make up a larger feature.

The larger feature probably couldn't have been implemented during the estimation process, but a single isolated small task could have.



