Discussion Forum > optimum list size and Reinertsen's Principles of Flow
Seraphim:
As I remember being taught during an Army Instructor's Course back in the 60s, "nothing is obvious until it's been explained". I think that applies to the insight you're describing. Once it's been explained it seems so obvious that you wonder how you failed to spot it before.
It gives a very easy method of working out when you should start dismissing tasks on your list. In fact, once you've worked out what the optimal number of tasks is, you could keep your list at that number simply by having a rule that every time you add a new task you have to delete an old task. In which case you won't need the old dismissal rules at all.
As far as I can judge at first sight, this rule would work for any "long list" system. I want to try it out, but it'll have to wait till tomorrow as I am at an event most of the day today.
August 1, 2021 at 9:25 |
Mark Forster
Seraphim:
One thing which will be interesting to find out is how much the number of tasks on the list affects the throughput. A time management list doesn't have limited physical space available in the way that a road does. To take an extreme example, one could do 50 tasks a day with only 50 tasks on the list, but still be able to maintain the rate of 50 tasks a day with 5,000 tasks on the list.
So I expect the effect on throughput of too many tasks will be more psychological than material - that familiar feeling of being oppressed by the size of the work still to be done.
How is that going to work out? I have no idea at present. But I suspect a better metric would be how long on average individual tasks remain on the list.
August 1, 2021 at 9:43 |
Mark Forster
Seraphim:
Let me run this one past you.
I've taken the stats at http://markforster.squarespace.com/blog/2015/6/5/fvp-statistics.html
Assuming that the period was exactly seven days, Occupancy / Average Processing Rate works out at 65 / 53.7 = 1.2.
So it takes 1.2 days to process 65 tasks on average.
If we took the desired average wait time as 1 day, then the optimal number of tasks on the list would be 54.
The number of tasks could be maintained at 54 tasks by the means I have just outlined, i.e. every time a new task is added another task must be removed. To make that even easier the task to be removed could be the first live task on the list.
Re-entered tasks of course do not affect the overall total, so you could re-enter as much as you liked without having to delete any tasks.
This is based on NQ-FVP. Other systems may differ, but I wouldn't imagine by much for the same user. I would expect that 54 tasks would remain optimal for me whether I was using Simple Scanning, AF 1 or any other long-list system.
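If anyone wants to run the same calculation on their own figures, here is a minimal sketch of the arithmetic above (the numbers are the ones quoted from the FVP statistics; the Python is purely for illustration):

```python
# Little's Law: average wait time W = L / Lambda
# L      = average number of tasks on the list (occupancy)
# Lambda = average processing rate (tasks completed per day)

occupancy = 65.0            # average tasks on the list (from the FVP statistics)
processing_rate = 53.7      # average tasks processed per day

wait_time_days = occupancy / processing_rate
print(f"Average wait per task: {wait_time_days:.1f} days")   # about 1.2 days

# Working backwards from a desired average wait time of 1 day:
desired_wait_days = 1.0
optimal_list_size = desired_wait_days * processing_rate      # L = W * Lambda
print(f"Optimal list size: {optimal_list_size:.0f} tasks")   # about 54 tasks
```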
August 1, 2021 at 17:11 |
Mark Forster
This is interesting. I am writing my tasks in a notebook right now. I would think that it would only be practical to keep track of the number of tasks if this were done electronically on a computer; one could have a file/folder of the tasks done today, and the program would display a total of the number of lines/tasks. It does not seem very practical though if it is analog, as the number would keep on changing, and it would take too long to count. (Am I missing something?) I am not clear whether the total number of tasks on the list includes these tasks done today or not. Or once these are done, they are not counted? So the tasks that are counted are those that are not started, or not finished? Also, is the word "tasks" being used in distinction from "projects"?
August 1, 2021 at 18:43 |
Mark H.
Mark:
<< As I remember being taught during an Army Instructor's Course back in the 60s, "nothing is obvious until it's been explained". I think that applies to the insight you're describing. Once it's been explained it seems so obvious that you wonder how you failed to spot it before. >>
That's a great saying! The Reinertsen book is full of obvious things like that. :)
<< In fact when you've worked out what the optimal number of tasks is you could keep your list at it by simply having a rule that every time you add a new task you have to delete an old task. In which case you won't need the old dismissal rules at all. >>
That's a good point.
<< As far as I can judge at first sight, this rule would work for any "long list" system. >>
Yes, I was thinking the same thing. I started with AF1 since that is the one I've been using most recently, and probably the one that's familiar to the widest audience here. But Paul MacNeal's post on the Random method got me wondering how it might apply there.
<< I expect the effect on throughput of too many tasks will be more psychological than material - that familiar feeling of being oppressed by the size of the work still to be done. >>
Yes, I believe you are correct. The thing to watch is how the numbers change from one day to the next. The combination of increase/decrease in total tasks versus increase/decrease in tasks completed per day tells you which side of the throughput peak you are on -- the dangerous side (prone to congestive collapse) or the stable side (where the flow tends to be self-correcting).
Basically, if total tasks and number of completed tasks increase or decrease in tandem, then you are stable. But if one is increasing and one is decreasing, it is unstable, and if it's the total tasks increasing while the completion rate decreases, that's the really dangerous situation.
<< But I suspect a better metric would be how long on average individual tasks remain on the list. >>
Interesting you should mention that. Reinertsen often refers to Little's Law, which demonstrates a simple, robust relationship between these three:
Lambda = average completion rate (tasks per day)
L = average number of tasks in the system
W = average wait time (average time each task has been in the system)
The relationship is simple: W = L / Lambda
http://en.wikipedia.org/wiki/Little%27s_law
This relates to increasing and decreasing completion rate (Lambda) and task count (L) as follows:
-- If L and Lambda are both increasing or both decreasing, you'd expect W to be about the same.
-- If L is up and Lambda is down, then W will go sharply up. This is a sign of Congestive Collapse.
-- If L is down and Lambda is up, then W will go sharply down.
It's almost always good to reduce W. It always means you are moving away from the Congestive Collapse zone and towards the stability zone. It means throughput is improving faster than the list is growing. Either your list is shrinking (which provides more focus and less scanning overhead), or your throughput is improving (meaning you have found a way to be more efficient).
(I say "almost always good" and not "always good", because you can also reduce wait time significantly by only having 1 or 2 tasks on your list. This can work just fine (cf. No List) but is probably not what we want for a long-list system, where we expect the list to also do incubation and sorting.)
On the other hand, if W is increasing, you could apply the same new dismissal rule -- dismiss a number of old pages so that W is flat or down.
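To make that concrete, here is a minimal sketch of Little's Law and the three cases above (the day-to-day figures are invented, purely for illustration):

```python
def average_wait(L, completion_rate):
    """Little's Law: W = L / Lambda, the average days a task spends on the list."""
    return L / completion_rate

def day_over_day_reading(L_yesterday, rate_yesterday, L_today, rate_today):
    """Rough classification of the day-over-day trend, following the cases above."""
    L_up = L_today > L_yesterday
    rate_up = rate_today > rate_yesterday
    if L_up == rate_up:
        return "stable: L and Lambda moving in tandem, W roughly flat"
    if L_up and not rate_up:
        return "danger: L up, Lambda down, W rising sharply (congestive collapse zone)"
    return "improving: L down, Lambda up, W falling sharply"

# Invented figures, purely for illustration
print(average_wait(65, 53.7))                 # ~1.21 days
print(day_over_day_reading(60, 50, 66, 45))   # danger case
print(day_over_day_reading(60, 50, 55, 56))   # improving case
```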
August 1, 2021 at 19:00 |
Seraphim
I have read the article at http://blog.danslimmon.com/2015/06/05/kanban-highway-the-least-popular-mario-kart-course/
As I understand it, the article makes the case for counting not the tickets being processed, but the total number of tickets as a whole, so that would include also the tickets waiting to be started.
August 1, 2021 at 19:01 |
Mark H.
Mark:
<< it takes 1.2 days to process 65 tasks on average. >>
This can be useful information, but it actually doesn't tell you where you are on the congestion curve, or whether you are moving towards peak throughput or away from it. In addition to a static target, you also need to see how the statistics change from one day to the next.
<< If we took the desired average wait time as 1 day, then the optimal number of tasks on the list would be 54. >>
If you are already on the stable part of the curve, then reducing your wait time from 1.2 days to 1.0 days is actually going to reduce your throughput. Maybe that's good, maybe not -- depends on whether you are looking for more throughput or more focus, I suppose.
<< The number of tasks could be maintained at 54 tasks by the means I have just outlined, i.e. every time a new task is added another task must be removed. To make that even easier the task to be removed could be the first live task on the list. >>
I think it might be more practical to do the balancing once a day, by dismissing enough of the oldest tasks to bring the total to the right number. For me, both new task generation and task completion come in bursts, and represent a different mindset and stage in my workday. I think it could be disruptive to try to manage the task count right in the middle of a brainstorm where I am adding a bunch of new tasks.
Reinertsen actually discusses a similar concept in his section on "The Principle of Periodic Resynchronization", and gives the example of buses in a bus line. If buses simply work their way through their route, you will inevitably have buses bunching up into pairs -- the front bus picks up and drops off the most people, and the rear bus deals with any extras. And thus the front bus goes slower and slower, and the rear bus goes faster and faster -- with the result that they arrive in pairs. To prevent this, you need to recalibrate the departure times, so that at least at critical junctures, the bus does not leave before the appointed time. You could do this recalibration at every bus stop (which is like your approach of adding a new task only when you eliminate an old task) or only at critical junctures (which is like my approach of rebalancing once a day). It's really an economic decision which way would work better, and it will be different for different bus lines (or long list users). :)
August 1, 2021 at 19:26 |
Seraphim
Mark H.:
<< I would think that it would only be practical to keep track of the number of tasks if this were done electronically on computer >>
In practice, the task counting only takes a few minutes. And I'm guessing you'd only need to do it for several days when you start up, to establish the right number of overall active pages to keep an optimum workflow. Then just dismiss old pages when they go over that level, and recalibrate from time to time with the counting exercise.
<< I am not clear whether the total number of tasks on the list include these tasks done today or not. Or once these are done, they are not counted? >>
I would count tasks that are completed the same day they are entered as tasks completed today. They would be included in the total number of tasks completed that day.
<< So the tasks that are counted are those that are not started, or not finished? >>
The two things to count would be:
-- number of tasks completed each day
-- number of active tasks on the list at the end of each day
<< Also, is the word "tasks" being used in distinction from "projects"? >>
I am using the word "task" to refer to an entry on a line of the notebook in one of Mark's long-list systems.
I am using the phrase "active task" to refer to any of those entries that still appears on the list, not completed and not dismissed.
August 1, 2021 at 19:58 |
Seraphim
Mark H.:
<< As I understand it, the article makes the case for counting not the tickets being processed, but the total number of tickets as a whole, so that would include also the tickets waiting to be started. >>
Yes, I would count every active task on the list, whether or not you have already started work on it.
August 1, 2021 at 20:00 |
Seraphim
Earlier I wrote:
<< I'm guessing you'd only need to do it for several days when you start up, to establish the right number of overall active pages to keep an optimum workflow. Then just dismiss old pages when they go over that level, and recalibrate from time to time with the counting exercise. >>
It occurs to me it might be simplest to count the number of active **pages**, not active tasks -- and count the **pages** completed, not tasks completed. This would be a lot easier to count, and maybe it would also aggregate the variability in task size, task completion rate, etc. Aggregation of variability makes it easier to predict and control. It would also give stronger signals -- it might be difficult to decide if an increase from 50 to 55 tasks is a meaningful increase, whereas it seems more obvious that an increase from (say) 10 active pages to 12 active pages in one day would be meaningful.
August 1, 2021 at 20:04 |
Seraphim
Seraphim:
<< This can be useful information, but it actually doesn't tell you about where you are at on the congestion curve, or whether you are moving towards peak throughput or away from it. In addition to a static target, you also need to see how the statistics change from one day to the next. >>
No, I don't agree. As I said above, task lists are not like roads because the number of tasks does not affect the throughput. Fifty tasks take the same time whether they are part of a list of 50 tasks or a list of 5,000 tasks.
So the useful question is not what the throughput is, but how long it takes to process the number of tasks on the list.
<< If you are already on the stable part of the curve, then reducing your wait time from 1.2 days to 1.0 days is actually going to reduce your throughput. >>
No, it's not. You will still do the same number of tasks for the reason I have just stated.
<< Maybe that's good, maybe not -- depends on whether you are looking for more throughput or more focus, I suppose. >>
No, the throughput will not change. What will change is how long it takes to get through the tasks on the list. And that depends on the length of the list.
<< I think it might be more practical to do the balancing once a day, by dismissing enough of the oldest tasks to bring the total to the right number. For me, both new task generation and task completion come in bursts, and represent a different mindset and stage in my workday. >>
I agree that there are numerous ways of dismissing tasks, but one way in which they differ is in how much they require a different mindset. Automatically dismissing the first task on the list hardly requires any mindset at all - let alone a different one.
<< If busses simply work their way through their route, you will inevitably have busses bunching up into pairs -- the front bus picks up and drops off the most people, and the rear bus deals with any extras.>>
In what way is buses bunching up into pairs analogous to doing single tasks one after the other? It's the buses' passengers which are causing the bunching. Tasks don't have passengers, and since they are not moving along a road they don't bunch up either.
<< You could do this recalibration at every bus stop (which is like your approach of adding a new task only when you eliminate an old task) or recalibration only at critical junctures (which is like my approach of rebalancing once a day) >>
Yes, you could do either, but the advantage of doing it every time you eliminate a task is that it only takes a second and does not require a different mindset, while doing it at the end of the day means that you are working sub-optimally for most of the day. So to give an example, if you have a 54 task list, which you regard as optimal, and you add 20 new tasks during the day, you are working with a 74 task list. Then at the end of the day you have to get into "Delete" mindset and spend quite a bit of time deleting tasks. And all the while you've been working with a sub-optimal list.
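For anyone who wants to see it spelled out, here is a minimal sketch of the "one new task in, one old task out" rule I'm describing, assuming the list is already at its optimal size (the Python is purely illustrative -- the actual system is just a notebook):

```python
from collections import deque

OPTIMAL_SIZE = 54

# The list, with the first (oldest) live task at the left; assume it starts at the optimal size.
task_list = deque(f"task {n}" for n in range(1, OPTIMAL_SIZE + 1))

def add_task(new_task):
    """Add a new task; if the list is at its optimal size, delete the first live task."""
    if len(task_list) >= OPTIMAL_SIZE:
        task_list.popleft()              # the first live task is deleted
    task_list.append(new_task)

def re_enter_task(task):
    """Re-entering a task (cross out, rewrite at the end) leaves the total unchanged."""
    task_list.remove(task)
    task_list.append(task)

add_task("new task")
re_enter_task("task 10")
print(len(task_list))                    # still 54
```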
August 1, 2021 at 20:21 |
Mark Forster
Seraphim:
<< It occurs to me it might be simplest to count the number of active **pages**, not active tasks -- and count the **pages** completed, not tasks completed. This would be a lot easier to count, and maybe it would also aggregate the variability in task size, task completion rate, etc. Aggregation of variability makes it easier to predict and control. It would also give stronger signals -- it might be difficult to decide if an increase from 50 to 55 tasks is a meaningful increase, whereas it seems more obvious that an increase from (say) 10 active pages to 12 active pages in one day would be meaningful. >>
I'm not sure what action you envisage taking at the end of the day. If you have 12 active pages instead of 10, do you delete two active pages completely, or what? Does it matter which pages you delete? The oldest? The ones with the fewest active tasks?
In any case if you do it the way I'm suggesting you don't need to decide whether it's a "meaningful increase" or not.
August 1, 2021 at 20:25 |
Mark Forster
<< task lists are not like roads because the number of tasks does not affect the throughput. Fifty tasks takes the same time whether it's part of a list of 50 tasks or 5,000 tasks. >>
This seems to be the foundation of your argument, so I'd like to make sure I understand what you are saying.
I agreed with your statement that task lists are different than roads, in that the former are "psychological" and the latter are "physical". But I would not take it so far as to say the number of tasks does not affect the throughput.
I can think of many ways that having a larger list reduces throughput.
-- Scanning time -- it takes much longer to scan a list with 5000 items than a list with 50 items, and to find the items that "stand out", even in a system like FVP where overall scanning is much less than a system like AF1 or Simple Scanning.
-- Standing out -- when there are 5000 items on a list but you only action about 50 every day, it's very difficult to maintain an intuitive sense of the items on the list, and this weakens the "standing out" effect.
-- Oppressiveness reduces engagement, which reduces flow, which reduces throughput. Cf. "the list tends to expand excessively with the result that eventually the size of the list becomes oppressive"
-- Dealing with urgent items -- even with something like FVP, which handles urgency very well, if there are too many non-urgent tasks interspersed with the urgent ones, it's more difficult to find the urgent ones, and easier to miss them. This creates more drag and misalignment and reduces throughput.
Are you saying these are minimal effects and have no material impact on throughput?
Hmm, also, maybe we should check we mean the same thing by the word "throughput". I am using the word to mean "number of tasks completed per day".
August 1, 2021 at 23:06 |
Seraphim
Mark Forster wrote:
<< I'm not sure what action you envisage taking at the end of the day. If you have 12 active pages instead of 10 do you delete two active pages completely, or what? Does it matter which pages you delete? The oldest? the ones with the least active tasks? >>
I envision it working like this. Over a few days, I would do the queuing analysis I described at the top of the thread to determine the optimal number of pages to keep me in the stable zone with high throughput. Let's say that turns out to be 10 active pages.
OK, now I have established my baseline of 10 active pages.
Subsequently, at the end of every day, I'd count how many active pages I currently have. If it is more than 10, I'd dismiss the oldest pages. For example, if I found myself with 12 active pages, I'd dismiss the two oldest pages.
I'd do this once at the end of each day, and it would take less than a minute.
This seems easier than either of our task-based approaches -- mine of dismissing tasks at the end of the day to reduce the number of tasks, or yours of dismissing one task for every new task you enter.
Later, if I ever found myself facing "congestive collapse", or urgent items getting neglected, or similar problems with cycling through the list fast enough, I could do another round of analysis and perhaps reduce the target number of active pages.
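Here is a minimal sketch of that end-of-day step (the names are mine, purely for illustration), assuming the active pages are listed oldest first:

```python
TARGET_ACTIVE_PAGES = 10

def end_of_day_dismissal(active_pages):
    """Dismiss the oldest pages until the count is back at the target.
    `active_pages` is a list of page numbers, oldest first."""
    excess = max(0, len(active_pages) - TARGET_ACTIVE_PAGES)
    return active_pages[excess:], active_pages[:excess]   # (remaining, dismissed)

pages = list(range(1, 13))                     # 12 active pages
remaining, dismissed = end_of_day_dismissal(pages)
print(dismissed)        # [1, 2] -- the two oldest pages are dismissed
print(len(remaining))   # back to 10
```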
August 1, 2021 at 23:21 |
Seraphim
Seraphim:
<< I agreed with your statement that task lists are different than roads, in that the former are "psychological" and the latter are "physical". But I would not take it so far as to say the number of tasks does not affect the throughput. >>
I should have been more specific. What I meant was that if the tasks are done in the order they are on the list, then how long the list is makes no difference to throughput. This is quite different from roads, where vehicle throughput is affected by a whole range of factors even if the vehicles remain in the same order.
<< Hmm, also, maybe we should check we mean the same thing by the word "throughput". I am using the word to mean "number of tasks completed per day". >>
Yes, we do mean the same thing.
August 2, 2021 at 11:01 |
Mark Forster
Seraphim:
<< Subsequently, at the end of every day, I'd count how many active pages I currently have. If it is more than 10, I'd dismiss the oldest pages. For example, if I found myself with 12 active pages, I'd dismiss the two oldest pages. >>
Any system which could get you to keep your list down to ten active pages has my vote!
August 2, 2021 at 11:12 |
Mark Forster
Seraphim:
Btw do you have average speed cameras in the States?
In the UK they are becoming increasingly common, especially for road works on major roads.
For road works the speed limit is usually 50 mph. The cameras time the traffic over a distance, usually several miles, and anyone whose average speed is significantly over the limit gets a ticket.
They are not sneaky but well-signed with prominent cameras because the whole point is that people should know they are being timed.
The result is traffic moving at almost exactly the same speed and therefore remaining in much the same order.
They enable a much higher throughput of vehicles and have almost entirely eliminated traffic jams around road works on major roads.
Similar systems (such as variable speed limits) which use conventional traffic cameras are nothing like as effective in avoiding hold-ups.
I'm sure there are some time management implications from this, but I haven't identified them yet!
August 2, 2021 at 11:45 |
Mark Forster
I have many divergent thoughts on this topic. One, the comparison of cars to tasks is flawed, not completely so, but slightly, in that cars represent parallel actions and task execution by an individual is sequential. It would be more like operating a toy highway system where the child moves each car one by one. In that context it becomes clear that the most efficient way to move cars is to move individual cars a great distance, and to reduce the amount of time spent switching cars.
I also feel that while counting active tasks is accurate, counting task activations is not an accurate measure of progress. Take filing a tax return, for example. If you do that in 15 separate sessions, you will have produced as much [with respect to this task] as your neighbor who did it all in one go. So the question is not how often a task is activated, but how many tasks are carried through to completion over time. This is affected by the Speed of work, and Delay between tasks. It's also affected by how much time you spend [e.g.] watching YouTube, which, while you might count it as a task, really doesn't help get the taxes done or anything else.
There is also a question of Elapsed time for an individual task. Some tasks are better done quickly because the positive outcome only begins when the task is done.
So it is often better to keep a list of tasks short, so the time taken to find the next task will be short, so your familiarity with working the task is high and you can operate efficiently, so you execute the task frequently and get it to completion fast. (And incidentally get it off the list, keeping the list short, even if completed tasks generate a multitude of additional tasks.)
August 2, 2021 at 14:48 |
Alan Baljeu
Mark:
<< Btw do you have average speed cameras in the States? >>
We had these several years ago in central Phoenix, and yes, they had a remarkable impact on improving the flow of traffic. Unfortunately, most of these traffic camera systems are tied in one way or another to state or municipal revenue. As a result, state legislators and city councilmembers can't resist optimizing the systems to increase revenue, even at the expense of public safety. For example, introducing a very visible but still unexpected reduction in the speed limit somewhere along the camera-controlled route; or reducing the yellow-light timing specifically to generate more tickets, even though this increases rear-end collisions and drivers speeding through intersections. Ridiculous but true.
All the traffic camera manufacturers receive revenue kickbacks from the local government authorities -- sometimes legally (it's a key element of their business model), sometimes illegally. Many former executives from these companies have done time in prison as a result. For example: http://www.azcentral.com/story/money/business/2016/10/20/ex-redflex-traffic-camera-ceo-karen-finley-sentenced-prison-bribery/92459320/
As a result, the traffic camera systems have been challenged again and again in courts across the US, and many states have banned their use altogether.
Maybe cadence-based methods, such as Agile sprints, or the Pomodoro technique, could be regarded as methods to keep the work moving at a consistent pace. Reinertsen describes many cadence-based principles for improving the flow of work -- such as the Agile practice of Daily Standup Meetings to reduce disruptive ad-hoc meetings throughout the week -- or monthly project review meetings instead of trying to schedule (and re-schedule) project review meetings at major milestones -- and shows how they improve flow. None of these is exactly like the speed-regulating cameras but they do seem to be tapping into the same underlying principle.
August 2, 2021 at 15:13 |
Seraphim
On the concurrent thread, "New addition to the random method", Mark Forster wrote:
<< the whole point of the Randomizer is to take the human element out of the equation >>
http://markforster.squarespace.com/forum/post/2784775#post2784824
I am hoping that my new method for dismissal might work well for the same reason. Rather than dismissing pages because nothing stands out -- which tends to generate psychological pressure to take action on every page -- I decided to try skipping the standard AF1 dismissal rule altogether, and rely only on the end-of-day automatic dismissal of the oldest pages to bring the total active pages down to the target level.
Surprisingly, this brought an immediate sense of relief. There is no sense of pressure at all. I hadn't even realized how strong that pressure was -- the pressure to take some action in order to keep the page alive.
There is also a sense of relief that stems from the belief that this new auto-dismissal rule will help keep my list to a workable size and prevent congestive collapse. I think the congestive collapse phenomenon is precisely what has caused all my other long-list methods to fail eventually. Some methods survived for quite a long time -- NQ-FVP for nearly a year, and Serial No-List for over two years -- before they suffered that fate -- but I think the cause of death was always the same.
Maybe the pressure will come when I start auto-dismissing pages, and I am then forced to decide what to do with those dismissed pages. But for now, the general flow and engagement has already improved noticeably.
August 2, 2021 at 15:31 |
Seraphim
Hi Seraphim:
Thanks for this interesting discussion. I recently tried out FVP, then AF1, then Volantas' KeepFocus for trials of 3 weeks each (though I ended AF1 after 2 weeks, once it was clear I was not sticking to the algorithm). Amusingly, my average tasks actioned per day rounded to 21 tasks for each method, so I guess my throughput is method-independent? I would recommend KeepFocus for having the least "list processing" time, though.
I didn't keep daily stats on "number of active tasks remaining in the list". Definitely list length slowed me down, which KF solved by severely restricting what went on the Active list.
How are you counting actioned but not completed tasks for throughput purposes? Many of my tasks are related to projects which can take weeks/months (e.g. read this book, finish this course), so to gain some sense of accomplishment I keep track of each time I "action" one. With this highway/Kanban mindset it does seem wrong to count trucks still "on the highway" towards throughput. I could formally divide them into tasks (e.g. read chapter 1) and not count each individual task until it's done, but not all projects divide so easily.
August 2, 2021 at 16:28 |
Virix
Alan Baljeu:
<< Filing a tax return for example. If you do that in 15 separate sessions, you will have produced as much [with respect to this task] as your neighbor who did it all in one go. >>
But you will do a lot more than another neighbour who keeps putting it off because they are paralyzed by their resistance to the task as a whole. You may also do better in the long run than the neighbour you mention, who makes a lot of mistakes as a result of doing it in one huge session. Both of these may end up being penalized. Whereas you, who took it in manageable chunks over a period of time, produced accurate work with little or no stress.
<< It’s also affected by how much time you spend [e.g.] watching Youtube, which while you might count it as a task, really doesn’t help get the taxes done nor anything else. >>
My experience of this (which may well be different from yours) is that if I put a specific thing on YouTube to watch on my task list, e.g. "Watch latest Funny Cat Video", I spend a lot less time watching YouTube than I would if I just drifted into it (in which case I'll probably also watch "Funny Dog Videos", "People are Amazing", "Ultimate Fails Compilation", etc, etc.)
And that actually does help get the taxes done and everything else too.
August 2, 2021 at 16:30 |
Mark Forster
Seraphim:
<< Unfortunately, most of these traffic camera systems are tied in one way or another to state or municipal revenue.>>
Which only goes to show that the context in which a system operates is as important as the system itself.
August 2, 2021 at 16:35 |
Mark Forster
Seraphim:
<< Surprisingly, this brought an immediate sense of relief. There is no sense of pressure at all. I hadn't even realized how strong that pressure was -- the pressure to take some action in order to keep the page alive. >>
Have you dismissed your first page yet? If you have, how did it feel?
<< There is also a sense of relief that stems from the belief that this new auto-dismissal rule will help keep my list to a workable size and prevent congestive collapse. >>
Well, I'm hoping for you to write soon "There is a sense of relief that stems from the fact that this new auto-dismissal rule has kept my list a workable size."
August 2, 2021 at 16:43 |
Mark Forster
Virix:
<< How are you counting actioned but not completed tasks for throughput purposes? >>
With most of Mark's long-list systems, these tasks would be crossed out and re-entered. Each crossed-out task adds to the throughput count, whereas the total WIP count stays flat.
Since one's "little and often" habits are probably similar from day to day, and the Congestive Collapse curve can only be analyzed by comparing one day to the next, I am guessing this would not create any real distortion.
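To make the counting concrete, here is a minimal sketch of how the two measures diverge when tasks are worked "little and often". The log format and the numbers are my own invention for illustration, not part of any of Mark's systems:

# Hypothetical one-day log. "actioned" means the task was crossed out today;
# "reentered" means it was crossed out and rewritten at the end of the list.
day_log = [
    {"task": "Do tax return",  "actioned": True,  "reentered": True},
    {"task": "Email plumber",  "actioned": True,  "reentered": False},
    {"task": "Read chapter 3", "actioned": True,  "reentered": True},
    {"task": "Tidy desk",      "actioned": False, "reentered": False},
]

active_at_start_of_day = 40  # assumed size of the list this morning
new_tasks_added_today = 3    # assumed new entries written during the day

# Throughput counts every crossing-out, whether or not the task is finished.
throughput = sum(1 for entry in day_log if entry["actioned"])

# WIP only falls when a task is crossed out and NOT re-entered.
completed = sum(1 for entry in day_log
                if entry["actioned"] and not entry["reentered"])
wip_at_end_of_day = active_at_start_of_day + new_tasks_added_today - completed

print("Throughput today:", throughput)             # 3
print("Active tasks tonight:", wip_at_end_of_day)  # 40 + 3 - 1 = 42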
August 3, 2021 at 2:42 |
Seraphim
Alan Baljeu:
<< cars represent parallel actions and task execution by an individual is sequential >>
The original post isn't about the analogy between vehicles and tasks. It's about the math and formal logic of queuing systems, which applies equally well to vehicles on a highway, WIP in a factory, customers in a store, or tasks in a time management system.
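The piece of that math which transfers most directly is Little's Law: average items in the system = throughput rate x average time in the system. A quick sketch, with numbers that are made up purely for illustration:

# Little's Law: L = lambda * W
#   L      = average number of items in the system (tasks on the list)
#   lambda = throughput rate in steady state (tasks crossed off per day)
#   W      = average time an item spends in the system (days on the list)

throughput_per_day = 20.0  # hypothetical completion rate
avg_days_on_list = 3.5     # hypothetical average age of a task when crossed off

avg_tasks_on_list = throughput_per_day * avg_days_on_list
print(avg_tasks_on_list)   # 70.0 -- the steady-state list size those rates imply

# Turned around: to keep the average wait under 2 days at 20 tasks per day,
# the list needs to average no more than about 40 tasks.
print(throughput_per_day * 2)  # 40.0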
<< I also feel that while counting active tasks is accurate, counting task activations is not an accurate measure of progress. >>
The main thing for tracking where you are on the Congestive Collapse curve is to compare one day to the next. If the degree to which you use "little and often" is fairly consistent, then it should make no difference.
<< It’s also affected by how much time you spend [e.g.] watching Youtube, which while you might count it as a task, really doesn’t help get the taxes done nor anything else. >>
If you watch a short YouTube video now and then, it probably won't have any negative impact on your flow, or push you toward Congestive Collapse. But if you spend hour after hour watching YouTube while the unattended WIP continues to grow, well, that's exactly what Congestive Collapse describes! Throughput is down, WIP is up, and the proximate cause appears to be the long YouTube sessions. I guess my new dismissal rule won't help that problem much, except by making it visible that lots of valuable tasks are getting dismissed for the sake of YouTube.
August 3, 2021 at 3:11 |
Seraphim
http://www.amazon.com/Principles-Product-Development-Flow-Generation-ebook/dp/B00K7OWG7O
I started thinking about how to apply his principles to time management systems such as AF1. There are an incredible number of valuable insights -- on one hand demonstrating the brilliance of Mark's systems and the many ways in which they optimize flow, and on the other hand shedding new light on the dynamics of what causes those systems to break down.
Here is one of the most powerful insights I've found so far -- it is deceptively simple. It is based on the idea of achieving optimum flow of traffic on a highway. In short: there actually is an optimum number of vehicles to be on the highway. In AF1 terms: there actually is an optimum number of tasks to have on your list.
The idea is to count how many vehicles per hour are passing along the highway. This is the basic measure of flow. (Flow and throughput are equivalent here.)
Imagine what would happen if there is only one vehicle zooming down the highway, and not another vehicle anywhere in sight. The speed of the vehicle will be very high, but the overall throughput in vehicles per hour will be very low (just that one vehicle).
Then add another vehicle. The average vehicle speed is still very high -- though probably a tiny bit slower, as the drivers want to ensure they don't collide. The throughput goes up, but is still low.
Then keep adding more and more vehicles. As you add vehicles, the average speed decreases, but the throughput continues to increase -- at least for a while.
Eventually, as you keep adding more vehicles, the average speed continues to decrease, but in addition, you start to develop congestion. At this point, the throughput starts to peak, and then to decrease. If you keep adding more and more vehicles, the congestion gets worse and worse, till the throughput drops very low again.
This shows that there is an optimum level -- in terms of the number of vehicles on the highway -- at which throughput is maximized.
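If you want to see the shape of that curve without Reinertsen's chart, the classic Greenshields traffic model reproduces it: assume speed falls linearly as density rises, and throughput (density times speed) then peaks at exactly half the jam density. A small sketch, with speeds and densities chosen arbitrarily for illustration:

# Greenshields' linear speed-density model (an illustrative stand-in for
# Reinertsen's chart, not taken from his book):
#   speed(k)      = free_speed * (1 - k / jam_density)
#   throughput(k) = k * speed(k)
# Throughput is zero at k = 0 (empty road) and at k = jam_density (gridlock),
# and peaks exactly halfway between.

free_speed = 100.0   # assumed free-flow speed when a vehicle is alone on the road
jam_density = 200.0  # assumed density at which traffic stops completely

def speed(density):
    return free_speed * (1 - density / jam_density)

def throughput(density):
    return density * speed(density)

for k in range(0, 201, 25):
    print(f"density {k:3d}  speed {speed(k):6.1f}  throughput {throughput(k):7.1f}")
# The printed throughput rises, peaks at density 100 (= jam_density / 2),
# then falls back to zero -- the optimum number of vehicles described above.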
Adding vehicles always reduces the average speed per vehicle.
But if the number of vehicles is below the level that gives optimum throughput, adding vehicles will increase throughput. And if the number of vehicles is above the level that gives optimum throughput, *reducing* vehicles increases throughput.
Also, the dynamics of the situation on each side of that optimum level are very different.
If you are on the side where there are a few too many vehicles, you can very quickly get into a situation called "Congestive Collapse". As you add more vehicles, you create a self-reinforcing feedback loop: the drop in throughput leaves a lot of vehicles on the road while new vehicles continue to arrive, so the number of vehicles increases, multiplying the effect even further -- and you suddenly have a nasty traffic jam.
But if you are on the other side of the optimum level, where there are too few vehicles, and then you add more vehicles, the throughput actually increases. This situation is much more stable.
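A tiny day-by-day simulation shows why the congested side is so unstable. Here I assume (my own toy model, not Reinertsen's) that completions per day follow a humped curve like the one above, while new work arrives at a steady rate just below the peak:

# Toy simulation of congestive collapse. Completions per day rise with list
# size up to an optimum, then fall off as the oversized list slows you down.

OPTIMUM = 50       # assumed list size at which daily throughput peaks
PEAK_RATE = 25     # assumed maximum tasks completed per day, at the optimum

def completions(list_size):
    """Humped throughput curve: zero at 0 and at 2*OPTIMUM, peak at OPTIMUM."""
    x = max(0.0, min(list_size, 2 * OPTIMUM))
    return PEAK_RATE * (1 - ((x - OPTIMUM) / OPTIMUM) ** 2)

arrivals_per_day = 24  # steady inflow of new tasks, just under the peak rate
list_size = 65         # start a little past the optimum, on the congested side

for day in range(1, 11):
    done = completions(list_size)
    list_size = list_size + arrivals_per_day - done
    print(f"day {day:2d}: completed {done:5.1f}, list size {list_size:6.1f}")
# Because the list starts on the congested side, each day's shortfall makes the
# list bigger, which cuts the next day's completions further -- the
# self-reinforcing loop described above.

Run the same loop starting from a list size of 35 and it settles toward a steady list of about 40 instead -- that's the stable side of the curve.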
Translating all this into terms of tasks: the highway is your list, and each vehicle is a task driving towards completion. The throughput is the average number of tasks completed every day, and the vehicle density is the number of tasks on the list.
The same flow principles that apply to the vehicles on the highway also apply to the tasks on your list. This means there is an optimum average number of tasks to have on your list at which your daily throughput will be maximized. It will depend on your average number of incoming and outgoing tasks, your average completion rate, etc. But there is an *actual optimum number*.
To me, that insight alone was worth going through this thought exercise. But it gets better. It's actually quite easy to tell if you are above the optimum number of tasks, or below, and quite simple to take corrective action. If you measure your daily throughput (number of tasks completed each day) and your daily total list size (how many active tasks are on the list at the end of the day), and compare these measurements to the previous day, you can easily determine what side of the optimum you are on:
Tasks completed per day is UP, and total tasks on the list is DOWN: You are in unstable territory, but moving in the right direction. Keep going like this and you will soon reach optimum throughput.
Tasks completed per day is UP, and total tasks on the list is UP: You are in stable territory, and moving toward higher throughput, so you might be feeling on top of the world. But be careful, if you keep adding tasks, you may not be able to keep up your completion rate, and you can cross over into unstable territory and face congestive collapse very quickly.
Tasks completed per day is DOWN, and total tasks on the list is DOWN: You are in stable territory, and your throughput is dropping, but don't worry about it. You are moving into even more stable territory, with lower WIP, and each remaining task will naturally flow through the list faster.
Tasks completed per day is DOWN, and total tasks on the list is UP: DANGER -- you are in unstable territory and moving deeper into chaos. Congestive collapse is imminent (if you aren't seeing it already). To fix the situation, you can dismiss your oldest pages to bring down the number of total tasks below where it was yesterday.
It's that final dismissal rule that can keep you at the optimum level of tasks at all times. I am going to give it a try tomorrow -- basically just follow AF1 as usual, with this additional end-of-day dismissal rule. I am guessing it will have the effect of activating the official dismissal rule more often. It should also make the whole system a lot more sustainable, since it should keep the list at the optimal size for maximum throughput. But I am guessing this approach might fail if the forced dismissal is too violent or disruptive.
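For anyone who wants to automate the end-of-day comparison, here is a minimal sketch of the four-way check described above. The function and its wording are mine; the dismissing itself still happens by hand on the paper list:

# Minimal end-of-day check for the four cases above.

def daily_check(completed_today, list_size_today,
                completed_yesterday, list_size_yesterday):
    throughput_up = completed_today > completed_yesterday
    list_up = list_size_today > list_size_yesterday

    if throughput_up and not list_up:
        return "Unstable side, but moving toward the optimum -- keep going."
    if throughput_up and list_up:
        return "Stable side, throughput rising -- but watch the growing list."
    if not throughput_up and not list_up:
        return "Stable side, shrinking list -- no action needed."
    return ("DANGER: throughput down, list up -- dismiss oldest pages until "
            "the list is smaller than it was yesterday.")

# Example: 18 tasks done today vs 21 yesterday; list grew from 62 to 70 tasks.
print(daily_check(completed_today=18, list_size_today=70,
                  completed_yesterday=21, list_size_yesterday=62))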
Side Note: It's interesting that Mark usually reports these very same metrics (tasks completed per day, and tasks remaining on the list each day) when he is doing his experiments.
There's an interesting article that includes the chart from Reinertsen's book showing the relationship between vehicle density, speed, and throughput. Reinertsen's chart is very confusing -- the units and axes are all wonky -- but if you only look at the actual curves in the chart, and ignore the axes, it should make sense. The article spends a paragraph complaining about "this Hindenburg of charts", and then goes on to explain the concepts very clearly.
http://blog.danslimmon.com/2015/06/05/kanban-highway-the-least-popular-mario-kart-course/