Discussion Forum > Insights from computer science on how to manage your time more effectively

Not sure if this video has been shared before on this site, but I found it very interesting:

How to manage your time more effectively (according to machines) - Brian Christian
http://www.youtube.com/watch?v=iDbdXTMnOmE

The two key ideas I think in the video are:

1. As the number of tasks increases, prioritizing them can take so much time that it gets in the way of getting things done. One solution is to have rough priority "buckets." Another is to just go in chronological or random order.

2. Focusing (blocking out interruptions) and being responsive (responding to interruptions) are in tension. One solution is to "batch" interruptions and deal with them together periodically, taking into account how long the interruptions can wait.
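To make point 1 concrete, here's a toy sketch (all task names and numbers are made up) contrasting the selection strategies the video describes: full prioritization costs a scan of the whole list on every pick, rough buckets pay a one-off sorting cost, and chronological or random order costs almost nothing per pick.

```python
import random

# 1,000 toy tasks, each with a made-up priority score.
tasks = [{"name": f"task {i}", "priority": random.random()} for i in range(1000)]

# Full prioritization: scan every task before every single pick -- O(n) per pick.
def pick_best(tasks):
    return max(tasks, key=lambda t: t["priority"])

# Rough buckets: one coarse split, then just take the first "high" task.
def pick_from_buckets(tasks, threshold=0.5):
    high = [t for t in tasks if t["priority"] >= threshold]
    return (high or tasks)[0]

# No prioritizing at all: chronological (FIFO) or random order -- O(1) per pick.
def pick_fifo(tasks):
    return tasks[0]

def pick_random(tasks):
    return random.choice(tasks)
```

The point of the sketch is only the cost difference: as the list grows, `pick_best` gets slower on every pick, while the other strategies stay cheap.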

I understand the ideas are sourced from a book called Algorithms to Live By, by Brian Christian and Tom Griffiths. I haven't read that book yet.

I find point 1 particularly interesting. It perhaps explains the problem some people have with full-on GTD: organizing your tasks seems sensible at first but ends up taking more and more of your time as the number of your tasks increase.

I think it may also explain the difficulties some experience with some of the long-list systems here: when the list gets really long, the systems run into problems.

I haven't thought too much about how those points would affect the design of a long list system though.

The connection in the video to computers/computer science reminds me of Aaron Hsu's discussion before describing task selection in the language of computers:

http://markforster.squarespace.com/forum/post/2783485#post2783519

I'd be interested in hearing any thoughts.
January 15, 2022 at 3:47 | Unregistered CommenterCharles
If anyone wants to dig into these ideas at a deeper level, I encourage you to watch an Operating Systems lecture series or the like. A lot of the development of operating systems revolves around how to manage doing more things than the CPU has time to do "right now".

The results they find there tend to apply to human work as well, with caveats, the most important being that there isn't one right answer. Real-time systems have to take a different strategy than general-purpose operating systems, which in turn take a different approach than high-performance computing environments. To give you a rough idea of the three styles:

Real-time systems tend to require responsiveness above everything else. This generally means they need regulated, "controlled cleanup", with everything "timeboxed" into small units and guaranteed interruption points between them. That way the system can always process new inputs at least every X units of time, for whatever our X timebox is. Computers have the concept of garbage collection, and real-time systems are generally very careful about how they do cleanup, with tight controls on how much can get into the system and how much is "left lying around" for later, because deferred cleanup can suddenly pile up into a set of things that takes longer than your X timebox to clean up, ruining your responsiveness (in programming language theory this is the garbage collection spike, or lag, or overhead). In constrained embedded real-time systems, cooperative multitasking might be the only way to go, but it requires every single task to give up control at just the right moment.
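A minimal sketch of that timebox-plus-bounded-cleanup idea (the numbers and names are invented purely for illustration): each slice does a capped amount of work, leaves a little residue behind, and spends a reserved part of the slice cleaning up, so cleanup can never blow the timebox.

```python
from collections import deque

TIMEBOX = 5          # max work units per slice before we must yield
CLEANUP_BUDGET = 2   # part of every slice reserved for cleanup

def run_slice(work_queue, garbage):
    """Process at most TIMEBOX - CLEANUP_BUDGET inputs, then do bounded cleanup.

    Because cleanup is capped per slice, the slice never overruns its timebox,
    which is what guarantees responsiveness to new inputs."""
    done = 0
    while work_queue and done < TIMEBOX - CLEANUP_BUDGET:
        work_queue.popleft()          # handle one input
        garbage.append("leftover")    # each unit of work leaves a little residue
        done += 1
    for _ in range(min(CLEANUP_BUDGET, len(garbage))):
        garbage.pop()                 # bounded cleanup, never more than the budget
    return done
```

With `TIMEBOX = 5` and `CLEANUP_BUDGET = 2`, each slice handles at most 3 inputs and clears at most 2 pieces of residue, so residue can only grow slowly and predictably.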

For very high-performance systems, you're dealing with a massive amount of data whose processing is expensive, difficult, and time-consuming. Here, getting through the data and maximizing throughput matter most, because context switching between tasks can bring your system to a standstill. In these systems you often have a single queue that people register their "batch" in; they wait in the queue until it's their turn, and then they have uncontested access to the system. You might have to wait a while to get going, but when you do, you have the complete power of the system to yourself.
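That single-queue, run-to-completion style can be sketched in a few lines (the batch names are made up):

```python
from collections import deque

class BatchQueue:
    """One FIFO queue; each batch gets uncontested access until it finishes."""

    def __init__(self):
        self.queue = deque()
        self.log = []

    def submit(self, name, units):
        self.queue.append((name, units))  # register your batch and wait your turn

    def run_all(self):
        while self.queue:
            name, units = self.queue.popleft()
            # No context switching: the batch runs to completion before the
            # next one starts, so throughput is maximized.
            self.log.append((name, units))
        return self.log
```

The human analogue would be a strict appointment book: nothing interrupts the current block, and everything else simply waits its turn.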

The hardest one, but the one most people use every day, is the general-purpose operating system. Here, being able to work on lots of different tasks without becoming useless is the key. It's okay to have a little lag, but you can't have it all the time. These systems have evolved a lot over time, and most today use pre-emptive multitasking: something external interrupts the running process at appropriate intervals to decide whether to switch to something else. The challenge is in how often these interrupts happen, how to manage them, and how to select which process to switch to, or whether to keep going on the current one.
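Pre-emptive round-robin scheduling, the simplest version of this idea, can be sketched like so (task names and the quantum are invented): a timer fires every `quantum` units and forcibly moves the current task to the back of the queue.

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: dict of name -> remaining work units.

    A 'timer interrupt' fires every `quantum` units; whatever is running is
    pre-empted and sent to the back of the ready queue."""
    ready = deque(tasks.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        schedule.append((name, ran))          # this task ran for `ran` units
        if remaining > ran:
            ready.append((name, remaining - ran))  # pre-empted, rejoins the queue
    return schedule
```

So `round_robin({"a": 3, "b": 1}, quantum=2)` produces `[("a", 2), ("b", 1), ("a", 1)]`: no task waits long, but long tasks get chopped into slices.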

http://en.wikipedia.org/wiki/Preemption_(computing)
http://en.wikipedia.org/wiki/Cooperative_multitasking
http://en.wikipedia.org/wiki/Process_management_(computing)

There are also lots of ramifications for how we organize our tasks and our work. For example, one optimization made by the Big Bag of Pages (BIBOP) memory-management scheme was to sort data into different types, but only a few sets of big bags, which allowed review of that data to happen quickly and reduced the cost of going through your data and cleaning things up. GTD takes advantage of the same thing, by trying to create a few sets of relevant contexts and lists, with a process that clearly indicates which task goes where. The benefit is that during the day you can review much faster, and you can save a lot of review for a later date (the weekly review), which helps reduce your in-the-moment processing overhead.
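The grouping idea reduces to a few lines: classify each item once at entry, so that later review only scans the one bag that's relevant. (The tasks and the classifier below are made up for illustration.)

```python
from collections import defaultdict

def file_tasks(tasks, classify):
    """Sort each task into a coarse 'bag' once, at entry, so later review
    only has to scan the bag that's relevant right now."""
    bags = defaultdict(list)
    for task in tasks:
        bags[classify(task)].append(task)
    return bags

# A GTD-style split into just two contexts.
contexts = file_tasks(
    ["email boss", "buy milk", "fix bug"],
    classify=lambda t: "work" if t in ("email boss", "fix bug") else "home",
)
```

The up-front cost of classifying is paid once per task; the saving is on every subsequent review pass.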

On the other hand, there are cases in operating systems where this algorithm won't help, because you don't know ahead of time how you can "group" things. You can also look at managing your tasks as managing inputs and outputs, and there's a whole field of research on that too:

http://en.wikipedia.org/wiki/I/O_scheduling

Simple scanning is very similar to the C-SCAN algorithm.
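For anyone curious about the comparison: C-SCAN serves disk requests in one direction only, and on reaching the end jumps back to the start and sweeps again, much like scanning down a list and then returning to the top. A rough sketch (positions are arbitrary example numbers):

```python
def c_scan(positions, head):
    """Serve requests in one direction only: everything at or past the current
    head position in sorted order, then wrap to the start and sweep again."""
    ahead = sorted(p for p in positions if p >= head)
    behind = sorted(p for p in positions if p < head)
    return ahead + behind
```

For example, `c_scan([10, 95, 20, 70], head=50)` gives `[70, 95, 10, 20]`: the sweep finishes everything ahead of the head before wrapping, so no request is ever passed over twice in a row.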
January 15, 2022 at 4:31 | Registered CommenterAaron Hsu
Charles:

I read "Algorithms to Live By" when it first came out - I can't remember how long ago - and it is definitely a worthwhile read. I was interested that the video mentioned selecting at random as better than spending too long prioritizing.

But one does have to remember that computers are not humans. And what applies to computers does not necessarily apply to humans. Computers are a tool, designed by humans to help humans - like a knife. Just because a knife is a useful implement which needs regular sharpening, it doesn't mean that humans need regular sharpening.

For prioritising purposes, there is one very significant difference between a computer and a human. A computer does not experience resistance. So any prioritizing method for humans which does not take resistance into consideration is bound to fail.

There are many other differences of course, but that is the most significant in my opinion.
January 15, 2022 at 10:47 | Registered CommenterMark Forster
Aaron Hsu:

The fact that humans have designed amazingly efficient operating systems for computers and yet no one, including me, has yet designed the perfect operating system for humans shows that the similarities between human prioritizing and computer prioritizing only go so far and no further.
January 15, 2022 at 10:57 | Registered CommenterMark Forster
There's definitely a limit to the computer analogy. Not only do computers not face resistance, they also don't have nearly the same difficulties with large numbers of tasks. They can multi-task, literally. You can throw many thousands of tasks at a computer and rely on the context switching happening fast enough that every task is likely to get enough CPU time.

I think things get a little more interesting when you look at resource-constrained situations, such as embedded devices with very small memories, or very large HPC systems with even bigger problems to crunch over, or how people managed systems when processors were much, much slower and memory much more limited. Even today, there's interesting work on what happens when a modern system gets overloaded.

However, it's also the case that humans have much less clear goals, with much less clear paths, and a much less clear sense of why. Anytime you have fuzziness, computers tend to struggle until we start taking pages from human neuroscience.
January 16, 2022 at 6:04 | Registered CommenterAaron Hsu
It so happens that my lifetime almost exactly matches the lifetime of the electronic computer. The first electronic computer was made in 1942, and I was born in 1943. So I have lived and worked both in a completely computerless environment and today's omnipresence.

It took a long time before it really started to impact everyday life, as opposed to business life. I was an early adopter of the home PC and a little bit later the mobile phone. I was a relatively late adopter of the smart phone. The interesting question to me is not how much difference it made to my working environment, but how much difference it made to me as a person. There's no doubt in my mind that it made a lot, but if I were transported back to the working environment of my 20s or 30s, would I tackle things differently?

I don't know the answer to that.
January 16, 2022 at 10:11 | Registered CommenterMark Forster
<< They can multi-task, literally. You can throw many thousands of tasks at a computer and rely on the context switching happening fast enough that every task is likely to get enough CPU time. >>

And even that has its equivalent in human task processing. I have always advised that the best way to get major tasks done is to work on them "little and often".
January 16, 2022 at 10:25 | Registered CommenterMark Forster
Aaron, it took me multiple reads, but I think I'm beginning to understand a little of your first comment. Apologies for butchering this into layman terms, but I'm imagining real-time systems to be somewhat like a daycare teacher needing to attend to lots of children's requests pretty much immediately, high-performance systems like a CEO or some high-level executive having a full schedule of appointments, and the general operating system like a typical work-from-home office worker juggling work, family, hobbies, etc. Pre-emption would be sort of a timer going off for you to decide to keep going or check your list for something else to do. I'm starting to see what you mean now that different circumstances call for different approaches.

Regarding the challenges of transferring lessons from one field to the other, I guess other differences could include:
- Computer inputs/tasks could have explicit meta-data (and thus easily use that for task selection) whereas a lot of the "meta-data" of our tasks (e.g. importance of its completion to us) is more hidden/subjective and fuzzy, as Aaron mentions
- Computers can faithfully follow very complicated algorithms to select a task, whereas for humans it becomes harder to fully follow an algorithm the more complicated it gets.
- For humans it matters that they like/enjoy the algorithm to help ensure they stick with it

I'm still considering applications though...

For example, with the Big Bag of Pages idea, would it be possible to get the benefits of grouped tasks while avoiding the high overhead of GTD by just having, say, 3 groups of tasks and vague delineations between them? It could be, say, high, mid, and low priority. Or high, mid, and low energy. Tasks are entered directly into appropriate lists based on your gut feeling (and you can move them around afterwards), so if you already know from the outset that a new input is low priority, it won't get in the way of you selecting high priority tasks. Each list could be processed with Simple Scanning. You could then choose to just focus on high priority tasks or low priority ones (if, say, you're tired). But I guess this goes against the idea of letting priority organically emerge rather than consciously deciding it. Maybe the best way to find out how well this works is to try it...
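The three-list idea sketched above could look something like this (a toy sketch; the task names, the lists, and the "stands out" test are all invented):

```python
# Three vaguely delineated lists; tasks are filed by gut feeling at entry.
lists = {
    "high": ["finish report"],
    "mid": ["reply to emails", "tidy desk"],
    "low": ["watch that talk"],
}

def enter(task, gut_feeling):
    """File a new task straight into a list based on gut feeling."""
    lists[gut_feeling].append(task)

def simple_scan(which, stands_out):
    """One pass down a single list, acting on whatever 'stands out'."""
    done = [t for t in lists[which] if stands_out(t)]
    lists[which] = [t for t in lists[which] if not stands_out(t)]
    return done
```

So a new low-priority input never gets between you and the high list, and when tired you can run `simple_scan("low", ...)` without ever seeing the rest.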

I haven't looked through all the I/O scheduling disciplines (and I didn't understand most of the ones I did check out...), but could each discipline basically be another way to process long lists, or are most not really transferable?

I also wonder, Aaron, with your background in computer science and having gotten into Mark's systems/ideas, if you've thought of any new ways of processing long lists that seem promising to you?
January 16, 2022 at 11:08 | Unregistered CommenterCharles
Charles:

I think you've captured the gist of things perfectly fine, and that's probably plenty good enough for application into your own life.

As a point on priority, general-purpose operating systems often do have the concept of high, mid, and low priority, but something really key about that is that almost all of the tasks (well over 99%) are in the mid priority. Effective operation of that priority system depends on having most of the tasks in the mid category. Moreover, priority doesn't have anything to do with importance in this case, but responsiveness. So high priority tasks are able to "choke out" mid and low priority tasks, and low priority tasks can never choke out mid and high priority tasks.

In that case, if you have only one or two high priority tasks, and only a few low priority tasks, then you're in good shape. You *always* take care of the high priority ones, and the low priority ones are *specifically* tasks that you are happy to have run almost never. I imagine a good use of such an approach could be having your "one big thing" as a high priority task that you always take care of, and then everything else in a mid-priority group, and then maybe a list of Youtube videos or the like to watch, which you can do or not do without consequence.
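The "choke out" behaviour described above is just strict priority selection; a minimal sketch (task names are made up):

```python
def pick_next(tasks):
    """Strict priority: any waiting high task 'chokes out' mid and low;
    low only ever runs when nothing else is waiting at all."""
    for level in ("high", "mid", "low"):
        if tasks[level]:
            return tasks[level].pop(0)
    return None
```

With `{"high": ["one big thing"], "mid": ["emails", "report"], "low": ["youtube"]}`, the high task is always taken first, and "youtube" is only ever reached once both other lists are empty, which is exactly why the low list should hold only things you're happy to run almost never.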

I personally think context is the more appropriate grouping mechanism (this is also more similar to the BiBoP model), but I think the mistake a lot of people make with contexts is misunderstanding their use. They're there as a convenience to help you ignore what you don't want to see at certain times, to make it easier to focus. It's a little like a closed list (not quite, though, since you don't have the shrinking effect). People often make too many contexts that they don't need. I think Mark's AutoFocus recommendation of a Home and Work list is an excellent use of the Context idea, and personally I've only found the Work and Home contexts really useful for active tasks.

As for finding new algorithms, I think Mark has been remarkably thorough in his review of different approaches. And as is often the case in C.S., the simplest processes often win out over seemingly more optimal ones because of their lower overheads.

I think for me, the thing that I'm leveraging the most at the moment is actually a networking concept. With an operating system, you can in theory spawn as many processes to do work as you want, and each active process will do work until it is put away and another one is brought onto the active queue. However, each process has a level of active overhead involved in terms of memory and context that has to exist in order to have that process in the execution queue.

When you are writing a network application, a naive way to do it is to take every message/input that is received and spawn a process to deal with that message immediately. If the number of messages is relatively low, then the operating system can easily context switch between them and manage it. However, if the inputs get too high, then the processes bog down because of the overhead of managing all of those processes. In high throughput systems, you generally don't spawn new processes, but instead you have a fixed set of handlers that are tuned to the capacity of your system. When a message is received, it is queued up to be handled on an event queue, then, when a given handler finishes responding to one input, it goes and finds another input off of the event queue and continues working. Each handler just keeps grabbing events whenever the handler has finished its last one. This results in many fewer active processes running and the whole system processing events much faster.
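A small sketch of the fixed-handler pattern, using threads rather than processes for simplicity (the handler count and the "handling" itself are invented): a bounded set of workers pull events off one queue until it drains, instead of spawning a new worker per message.

```python
import queue
import threading

def run_handlers(messages, num_handlers=3):
    """A fixed set of handlers pulling from one event queue, instead of
    spawning a fresh worker per incoming message."""
    events = queue.Queue()
    for m in messages:
        events.put(m)

    results = []
    lock = threading.Lock()

    def handler():
        while True:
            try:
                msg = events.get_nowait()      # grab the next waiting event
            except queue.Empty:
                return                         # queue drained, handler retires
            with lock:
                results.append(msg.upper())    # "handle" the message

    workers = [threading.Thread(target=handler) for _ in range(num_handlers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results
```

However many messages arrive, there are never more than `num_handlers` active workers; the backlog sits cheaply in the queue rather than expensively in live processes, which is the WIP-limit idea in miniature.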

I find the idea of limiting your work-in-progress to be a powerful analogue in personal time management. This is where I think AutoFocus has a distinct advantage, as its dismissal process is very good at keeping your work in progress low if you allow it to do its job. Similarly, that's why Ivy Lee works well for me: it lets me limit my work in progress. In the past, and for many people, I think this is something they struggle with in GTD. Technically, GTD includes a WIP limit concept, but it's so "intuitive" that people miss it, and they run the system into the ground. Rather than looking at their current huge range of obligations, which they clearly have nowhere near enough time to finish, and putting a lot of stuff on their backburner (Someday/Maybe) or simply getting rid of it, they try to keep it all on their active projects list, and then just get overwhelmed.

I think this is where Mark's systems are maybe a little weaker compared to something like Personal Kanban, which makes this concept really explicit. This is why, IMO, you have so many people talk about their long lists collapsing when they get too big. But, as Mark has pointed out, you have to be willing to embrace dismissal without the negative energy that a lot of people have around it. Some things aren't going to be worked on "immediately" and that's okay. I find that the more I embrace this idea, the more stuff I actually do. A lot of the time, my backlogs of work disappear when I go from being vague about my WIP to being crisp about it. All of a sudden, before I know it, my list has shrunk down and things are just done, and all those things I was worried I wouldn't get to are now the only things sitting on my list, ready to be done.

I think a lot of this is universal, but you just have to figure out what systems best let you implement the principles effectively in your life.
January 17, 2022 at 6:44 | Registered CommenterAaron Hsu