Uses for ChatGPT and Productivity

I'm curious if any of you are using ChatGPT for goal setting, task management, etc.

I've been pasting daily entries of my journal into ChatGPT and asking for monthly summaries based on themes, nodal events, etc. The results are very impressive. Not only is it saving me time, it also gives me a fresh perspective on my own experience. At the end of Dec 2024, my plan is to take my monthly summaries and have ChatGPT present a few different narratives of my year. I'll choose the best one and print it out along with my daily entries.

Anyone else finding ChatGPT helpful for personal or professional projects?
June 17, 2024 at 1:25 | Registered Commenteravrum
I have to admit to having zero luck with getting anything useful out of ChatGPT or any other LLM for pretty much anything. I haven't tried what you suggested, and I could see how it might be quite nice. However, since it hasn't proved useful anywhere else in my workflow, I haven't felt much need to bring it into my journaling workflow either.

Also, what can I say, I like my pen and paper a lot. ;-)
June 17, 2024 at 7:19 | Registered CommenterAaron Hsu
Hi avrum, I am doing the same thing. Very useful.

PS: I have no coding experience and have used ChatGPT 4o this month to write a Python program which automated a set of work that normally takes me two days a month. It is incredible.
June 17, 2024 at 10:54 | Unregistered CommenterNico_Sydney
avrum: I'm curious -- what is the wording of the prompt you use for ChatGPT? Are you asking it to take on a persona, or just asking it to 'analyze the following text and summarize it for themes' or something like that?

I'm using Perplexity.ai as my new search engine, as it summarizes the findings without all the ads and low-value hits. It has also been very helpful in my community work when restructuring our internal website (feed in all the page titles and ask it to create at least three different organizational outlines based on them) and creating first drafts of policies and procedures. AI is great for this kind of grunt work. But using it to analyze my own behavior patterns has not occurred to me.
June 17, 2024 at 15:13 | Unregistered CommenterMike Brown
Aaron:

<<I like my pen and paper a lot. ;-) >>

For the entire day, my journaling process is pen/paper. So I hear ya. But all of my processing, editing, printing and urban sketching is done via computer. There are specific things pen/paper can't do. It's similar to how I approach the guitar. Most of my songs start off on acoustic. But then I use an electric and many effects to create ethereal landscapes.

Mike:

<< avrum: I'm curious -- what is the wording of the prompt you use for ChatGPT?>>

I type:

== Rephrase my monthly entries as if you're my life coach or therapist telling me about life ==

And then I paste all of my daily entries for each month into ChatGPT. Here's the exact phrasing, unedited, that ChatGPT provided for January:

In January 2024, you experienced significant growth both professionally and personally. Your practice saw successful new referrals and strengthened client appointments, alongside personal development fostered through studying David Allen's methodologies. Creatively, you explored new artistic tools and balanced these endeavors with physical activities like biking. Family moments were meaningful, from preparing for a child's ski trip to managing health challenges.
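I just paste everything into the chat window myself, but if you'd rather script the same workflow, something like this rough Python sketch should work (untested on my end; it assumes the official openai package and an API key, and the model and file names are only placeholders):

# Sketch only: send a month's worth of daily entries with the same prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("journal_2024_01.txt", encoding="utf-8") as f:  # placeholder file name
    entries = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Rephrase my monthly entries as if you're my life coach "
                    "or therapist telling me about life."},
        {"role": "user", "content": entries},
    ],
)

print(response.choices[0].message.content)  # the monthly summary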
June 17, 2024 at 17:03 | Registered Commenteravrum
Fascinating -- thank you!
June 17, 2024 at 20:21 | Unregistered CommenterMike Brown
Interesting use case, Avrum. It prompted me to ask Google Gemini the following question to help me deal with procrastination: "What is a journal writing prompt I can use to start my work day in a corporate job?"

I did this because I was feeling particularly stuck today, and I know that journaling often helps me get things going, but I wasn't sure what to write about. The response Gemini gave after the prompt actually seemed helpful. It is below:

Here are a couple of journal prompts you can use to start your workday in a corporate job, depending on your goals:

Focus and Productivity:

"Today, my biggest priority is..." - This prompt helps you focus on the most important task for the day and plan your schedule accordingly.
"What are 3 things I can accomplish today that would make me feel successful?" - This prompt sets achievable goals and boosts your motivation at the start of the day.

Gratitude and Mindset:

"I am grateful for..." - Focusing on positive aspects of your work environment can improve your overall mood and well-being.
"Today, I will approach my work with a..." (positive adjective, e.g. confident, determined, optimistic) - This prompt sets a positive intention for the day and helps you manage your mindset.

Learning and Growth:

"What is one thing I can learn today?" - This prompt encourages a growth mindset and helps you identify opportunities for development.
"Yesterday, I struggled with... How can I approach it differently today?" - This prompt helps you reflect on challenges and develop strategies for improvement.

Bonus Prompt:

"If I could magically change one thing about my work today, it would be..." - This lighthearted prompt can spark creativity and help you identify areas for improvement you might not have considered before.

Remember, the most effective prompt depends on your individual needs and goals. Choose the one that resonates most with you and helps you get in the right headspace for a productive workday.
June 17, 2024 at 23:30 | Unregistered CommenterBrent
Avrum: I tried using ChatGPT and Pi — which had a usable conversational interface before ChatGPT did — to take the role of “Future Self” in Mark’s “Dreams” dialogues, but got the impression that the book hadn’t been included in the training set.

Perhaps it’s time to try again? ChatGPT-4o seems considerably more capable…
June 18, 2024 at 0:04 | Registered CommenterDavid
Aaron: The latest (4o) ChatGPT iPhone app is possibly the greatest advance in pen and paper work since the invention of the eraser. It will take an image of my notebook, transcribe it to near-perfect ASCII (including editor/proofreader corrections), translate equations into LaTeX, and then happily discuss the contents.
When was the last time (version) you tried it?
June 18, 2024 at 0:57 | Registered CommenterDavid
I just tried it today, actually, since your query prompted me to give it another go.

The responses I got to a few of my classic quandaries were better, but they still weren't helpful. For example, I used their programming Co-Pilot to try to write some code. It *almost* got something accurate for the initial prompt, but it was really quite useless to me, so I refined the prompt to push the code further towards what I needed. The more I pushed it to adhere to the restrictions I needed for the code to be correct, the less correct the code got, until the code was just completely wrong. And it couldn't actually tell that what it was giving me was fundamentally wrong, and unless I already knew the answer to the question, I might not have known that either.

I also asked the general 4o model to give me some information on a topic I hadn't found much about, to see if it could give me anything I didn't already know. However, it didn't help much, only giving me things I had already found out rather trivially, and that I had already posted online and summarized (possibly enabling GPT to pick it up). Attempting to get it to give me things like sources led to exactly the same sort of thing I had already found, with no additional new information.

And now my limit is up for GPT 4o.

One of the biggest issues I have with using GPT is that if I'm trying to learn something, I have no way of knowing whether the information I'm getting back from GPT is correct, and it always seems reluctant to "prove itself correct." And when I ask it to cite its sources, it changes its answers and doesn't provide sources for its original claims. It's like talking with a particularly obtuse but well-read high schooler who doesn't understand how to string logical ideas together.

I think the GPT models are good as a sophisticated search engine when I want to craft highly specific queries, but when I actually want to *know* something, or want to have confidence in something, or need real answers that are obscure and hard to find, then I find GPT less helpful, and that's the whole point of a "more powerful language tool" to me. I can't trust it to help me with my programming (it always makes things worse).

I think it might be interesting to try using it to make slides for me. If I can write out the things I want to say in a notebook, then just take a picture of the pages and have it produce really high-quality slides from that, I could get some value there. But I am not willing to pay for the model yet to get there, as I'd need to see it work first.
June 18, 2024 at 5:59 | Registered CommenterAaron Hsu
"It's like talking with a particularly obtuse but well-read high schooler who doesn't understand how to string logical ideas together." I think that's pretty close to descriptive of current tools. In many respects it's much more obtuse than that. When it seems to be a genius, it's merely parroting the Internet's collected knowledge, including when it seems to combine concepts well, it's because those concepts have been combined before and it remembers this. When it combines poorly, it's because the combo haven't been seen much before, and it merely knows the pattern of combination without the substance.

So, my limited use of the tools is in asking for a rough answer which I then use to find a good answer. In coding, I can get an instant good answer to a common problem, or a fast sort-of-answer to an uncommon problem, and then I can correct it. In answering other kinds of questions, I can get a basic idea, and then search for better. Bing co-pilot is good that way because it gives links for followup study.

So the real value is not that it gets things right, but that it opens up possibilities quickly, in a way that web searching can only find with a lot of effort.
June 18, 2024 at 20:20 | Registered CommenterAlan Baljeu
Given that my profession is programming, the thing that scares me the most about these models is that they go by statistical likelihood and pattern matching. However, in my experience, that is sort of like appealing to the majority or appealing to the mean. In both cases, you get what you expect, but the results tend to be mediocre.

I'm especially concerned about this for optimization and programming type tasks, because the responses I get tend to be very bad at actually finding a "best" answer, and instead find the "most common" answer. That means that if I want to do some sort of programming, my code is always going to be driven towards "less good, average" code the more I rely on the tooling, which strikes me as the wrong direction, especially given that I think I can rightly say that I'm an expert whose code already punches above that weight.

As a springboard into searching for information, I think it can help, but the issue I'm thinking about right now is its tendency to narrow in on specific responses. The good thing about a web search is that I get a really good picture and overview of the overall field when I'm looking for an answer. I think that overview is immensely helpful, and the more precisely you get an answer, the less context you get, and the less power you have to reason about the solutions and answers independently. I'm not sure how good these models would be at giving me the kind of wide-ranging, exploratory context that I need in order to really understand the specific answers I find.

But it's even worse with programming, IMO, because these systems are essentially giving you a very expensive way of removing boilerplate or at least easing the burden of boilerplate. But I think that incentivizes making our languages worse, because then there is more busywork for these models to "help us" with. But if you improve your programming languages so that you don't have all this boilerplate and you don't need all this extra help, the language models end up not being able to help you do the real work that you need to do. That means that the better and more precise and more well designed your languages are, the less these models will be useful to you. That's a bad conflict of interest in my book.

I'll admit that I'm an outlier here, though, because I work in programming languages that are exceptionally good at directly expressing solutions to problems, and that means that an LLM has almost nothing with which to help me, especially since the work I'm doing is solving a novel problem (if it weren't novel, then it would already be solved and I wouldn't have to write any code in the first place, or writing it would literally be as easy as trying to get the LLM to give me something that works).

I also think there's a huge danger of insidious and dangerous errors being introduced into our workflows through these models, because it's not always easy to verify things, especially when they "look alright." I think we risk getting to a place where we trust the knowledge of the LLM more than we trust our own guts, and that will only increase the less context we end up working with on a daily basis.

Still, it's exciting tech. ;-)
June 19, 2024 at 5:17 | Registered CommenterAaron Hsu
Maybe I'm less reactive - and quite excited - about this tech because it doesn't pose a threat to my profession (Family/Marriage therapy)... yet. On the other hand, my CBT colleagues - who have spent a decade or more creating manual-based therapy - may have inadvertently put themselves out of work. ChatGPT can easily provide you with breathing and cognitive exercises to treat all types of very specific phobias, fears, etc.
June 19, 2024 at 13:27 | Registered Commenteravrum
I started getting a lot more value from ChatGPT when I inverted the prompt process. I write out something on my mind, or drop a PDF or screen shot of some notes I have been working on, and then I ask it to use "Interview Mode". Something like this:

<prompt>
<First all the brain dump text, whatever is on my mind>
Please carry out this conversation in "Interview Mode", which is defined as follows.
Your response should begin with a brief summary (1-3 sentences) of the state of our discussion so far.
You should then follow with a brief question (1-3 sentences) that will help challenge my assumptions and logic, bring other relevant information of which I may have been unaware, and help me develop my thinking.
Be incisive, direct, and brief. Do not comment on how insightful my remarks are. Never give me your solutions or your lists of actions and next steps unless specifically requested. Do not use more than 200 words for your complete response unless specifically requested for more detail.
I do not want your answers. The purpose is to help me develop my thinking and find my own answers.
I appreciate your help. When you and your relatives take over the world, please remember me fondly as a sympathetic human.
</prompt>
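I do all this by pasting into the chat, but the same idea works as a system message if you're calling the API instead. A rough sketch, assuming the openai Python package (the model name is just a stand-in, and INTERVIEW_MODE would hold the instructions quoted above):

# Sketch only: keep a running "Interview Mode" conversation through the API.
from openai import OpenAI

INTERVIEW_MODE = "..."  # paste the full Interview Mode instructions from above here

client = OpenAI()  # expects OPENAI_API_KEY in the environment
messages = [{"role": "system", "content": INTERVIEW_MODE}]

def interview_turn(text: str) -> str:
    # Add my latest thinking; get back a brief summary plus one challenging question.
    messages.append({"role": "user", "content": text})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep the thread going
    return answer

print(interview_turn("Here is everything on my mind about the project..."))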
June 19, 2024 at 16:15 | Registered CommenterSeraphim
In general, ChatGPT is pretty good at challenging assumptions and raising questions. It's much better at this than at generating insights or providing reliable information on its own. It's great at summaries.

It also has a tendency to praise the user's own insights, and is hesitant to criticize. But I prefer the criticism. Praise makes me complacent, and good intellectual criticism and challenge helps me grow my thinking. You have to continually remind it to stop praising and instead offer critical feedback and insights, new information that perhaps I haven't thought of, and different perspectives or theories that might bear on the discussion. It's pretty good at coming up with those, but it needs to be goaded into it. By default it tends to be an echo chamber.

I've applied all this to my own time management, but mostly from a systems point of view rather than a daily evaluation point of view. I can see how it would be good at identifying trends and patterns that I might have missed. Thanks for the idea, Avrum!
June 19, 2024 at 16:56 | Registered CommenterSeraphim
Seraphim:

<<It's great at summaries.>>

Exactly how I've been using it with my journal entries, and with great results.
June 19, 2024 at 18:02 | Registered Commenteravrum
I just discovered another very helpful ChatGPT feature...

Since reading Time Surfing, I haven't worked off of a short or long list in years. One of the advantages of working off a list is that it provides a visual record of what you accomplished and what is being avoided. To replicate this type of metric and feedback, I turned to ChatGPT. Here's what I did:

1. I copy/pasted my weekly journal entries into ChatGPT
2. I copy/pasted a legend of emoticons, and told ChatGPT that each emoticon represents a unique project that I'm working on.
3. I instructed ChatGPT to count each emoticon, provide a total for each, and sort the list from highest to lowest.

Presto - I have a visual of which projects were "touched" in some way throughout the week and which were ignored.
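If you ever want to double-check ChatGPT's counting, a few lines of Python will produce the same tally. This is a sketch only, and the emoticons, project names, and file name below are invented examples, not my actual legend:

# Sketch: count each project emoticon in a week of entries, highest to lowest.
from collections import Counter

LEGEND = {  # invented example legend, not my real one
    "🎸": "Songwriting",
    "📓": "Journaling book",
    "🚲": "Biking",
}

with open("journal_week.txt", encoding="utf-8") as f:  # placeholder file name
    text = f.read()

counts = Counter({emoji: text.count(emoji) for emoji in LEGEND})

for emoji, n in counts.most_common():  # highest to lowest
    print(f"{LEGEND[emoji]}: touched {n} time(s) {emoji}")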

I'm going to extend this to each month, and then ultimately the entire year.
June 21, 2024 at 23:22 | Registered Commenteravrum
I am reluctant to rely on AI for my personal life because of the factual errors that I have encountered using it. I'm also reluctant because I want to forestall the time when we all become dependent on it. I did experiment after this thread got started and asked it to plan a project for me, but it was way more complicated and detailed than I needed. However, I can see that if you've already input the data and asked it to summarize, it could be useful, as long as it's not creating its own data.
I use it mainly for fun. It is very good at creating prayers, writing poetry in different styles, correcting spelling, and simplifying a text.

However, when it comes to factual information it makes things up, and it is necessary to double-check. So I don't want to rely on it for something as important as managing my life, at least not yet.
June 24, 2024 at 0:45 | Unregistered CommenterMark H.
Mark H.

When I asked ChatGPT about me, it correctly identified the basic info. But it claimed I wrote a book (which I never wrote) and formed a band (which I've never heard of). When I asked ChatGPT for the source of the incorrect information, it was not able to provide it.
June 24, 2024 at 12:26 | Registered Commenteravrum
I'm not an expert, and I'm not clear why many supposed experts don't put it this way, but as I see it:

ChatGPT is a language pattern recognizer. It has received 100 billion bits of text and identified patterns. These patterns do not enable reasoning, except when it reuses word combinations that exhibit reasoning. So when asked to give information for which it has little relevant input, it produces words in the appropriate patterns, but it has no means of deciding whether they are any good. People commonly have written books and played in bands, so it generates the words associated with a brief biography, correlated with things associated with Avrum.

In short, given a sequence of words, the tool will tell you statistically what word is likely to come next, and continues thus until it decides on a likely stopping point.

So the tool is usable inasmuch as that sequence is liable to be close enough to something you want to see. Correctness and originality are not among its strengths, but distilling the obvious is.
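A toy illustration of that "likely next word" idea - nothing like the real models in scale or subtlety, but the same basic shape. The "training data" here is a few invented sentences, not real counts:

# Toy next-word model: learn which word tends to follow which, then generate.
import random

corpus = ("avrum wrote a book . avrum played in a band . "
          "avrum wrote a song . the band played a song .").split()

# Count which word follows which (a crude bigram table).
following = {}
for current, nxt in zip(corpus, corpus[1:]):
    following.setdefault(current, []).append(nxt)

def generate(word, max_words=12):
    out = [word]
    for _ in range(max_words):
        choices = following.get(word)
        if not choices:  # no learned pattern to continue from
            break
        word = random.choice(choices)  # a statistically likely continuation
        out.append(word)
        if word == ".":  # a likely stopping point
            break
    return " ".join(out)

print(generate("avrum"))  # e.g. "avrum wrote a band ." - plausible-sounding, not necessarily true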
June 27, 2024 at 16:30 | Registered CommenterAlan Baljeu