
Oh, what's the use (case)



I’m back. Or maybe I should say we’re back: me and PAL, my Ai partner. PAL was more human and less helpful this time than he was the last time I asked for help with my blog. More on that later. First, let’s talk about the weird title I came up with. [I confess, I wanted to go with Oh, The Places You’ll Go, but I didn’t want to upset the good Dr. Seuss.]

 

In the last few months as I’ve been exploring artificial intelligence, I’ve conducted an informal and completely unscientific survey. Here are the results:

 

1%      People who know so much about Ai that it makes my head spin

5%      People who are using Ai to solve interesting and strategic problems

44%    People using it to plan travel or write email

50%    People who haven’t tried it much and aren’t sure what the fuss is about

 

I’ve been following the advice of Geoff Woods by asking myself how Ai can help before I start trying to solve any problem I’m faced with, personal or professional. In doing so, I’ve started to assemble interesting use cases. This has been a nice benefit from listening to good advice.

 

I explore the cutting edge (for me) with the 1%. When my head isn’t spinning too much, the 1% tell me more about the breadth and depth of Ai. For example, I had never heard of perplexity.ai (and perplexity is actually a good description of how I feel most of the time I talk to the 1%).

 

After learning about it, I used Perplexity to research a medical condition, and it was amazing. The best part was watching its thought process as it went about answering my question. I’ve found it to hallucinate less than general-purpose chatbots like ChatGPT. I asked it to provide a chart with monthly rainfall totals in my city, then watched it use multiple Python scripts to gather the info. Go ahead dude, I’m happy to wait. The following day I read about Perplexity’s latest funding being raised at a $14 billion valuation.

 

From the 5%, I learn about how to use Ai to help with the 20% of problems that have 80% of the impact. A friend had Ai scrape user stories from their organization’s website to generate a clear and compelling summary of the common threads – delivering a short, human, emotional pitch that clearly communicated the compelling reason to use their services. And thanks to Claude Sonnet, there was also a great diagram explaining how you can benefit from this group’s offering.

 

When I get into a conversation with someone from the other 94%, I inevitably say something like this to see if they will connect:

 

Imagine if you had a friend who had memorized 200 million books. This friend listens to every word you say without looking at their phone, thinking about lunch, or interrupting you to tell you about their own problems. Then the friend asks you three insightful questions that make you stop and think, and listens to every word of your answers. Then it gives you a short, easy-to-understand list of suggestions to consider. Your friend is available 24x7 in complete confidence, is empathetic, and will talk to you as long as you want. Could that be useful to you?

 

My wife has heard this so many times she leaves the room whenever I say “Imagine if…”

 

I’ve found a lot of things to talk to my 200-million-book, question-asking friend about.

 

I was trying to help someone reimagine a declining business. I typed in the whole story as I knew it and asked PAL to provide me with some alternatives to consider. Oh, and of course I asked it to ask me questions.

 

The first question impressed the heck out of me: “Is there anything off the table?” That’s a great question, and one that I’m going to remember to ask other humans.

 

I told Ai that everything is on the table because I wanted to see what it would suggest. I was pretty sure the person I was trying to help wouldn’t like some of the output, and I also thought that my first idea was going to be one that person would clearly think was off the table.

 

Have you ever tried to tell a friend about an option you thought they should consider but you knew they absolutely didn’t want to hear it? Yeah, I tried that once too. I’m pretty sure we both still have emotional scar tissue from that conversation.

 

But here’s an unexpected benefit from PAL. Armed with his output, I could pass that unwanted suggestion on as “here’s what Ai offered up.” It wasn’t me; it came from Ai. And that put my colleague and me exactly where we should be: on the same side of the table, considering it.

 

As the mentor, I don’t have as much bias, and I also don’t know enough about the problem to solve it. I come in and out of the organization, the leader lives it every day. And living it every day often results in stronger biases. I had a barrel full of them when I was living a full-time job.

 

So now, I can ask follow-up questions about the unwanted solution to make sure that there’s a good reason why it’s not right and make sure that there aren’t blinders on. And my colleague isn’t threatened and considers that option with me instead of telling me why it’s stupid.

 

There are lots more interesting use cases. Anyone out there facing that difficult conversation with an elderly parent, the one that sometimes ends with “you’ll pry my car keys out of my cold, dead hands, you ungrateful whelp”?

 

Or another difficult conversation you’ve been avoiding for far too long and you need to get up the courage to get to it before the situation explodes?

 

Or you want to learn how to be a better listener from the world’s experts without reading 500 pages on the subject? Or you want to make good on your comatose New Year’s resolution to build more good habits and drop the bad ones? Or maybe you or a loved one has a medical condition, and you don’t want to condemn yourself to Google search hell?

 

Oh, and it’s pretty handy for summarizing stuff. I bet you could just feed this blog into Ai and get a 25-word version that pretty much gets you there. Not as much fun, I hope, but hey, we live in the busy world of short attention spans.

 

The more I try to use Ai, the more I learn, and the more ways I think of where it can help me help other people faster and better than I did before. Or as PAL suggested in its draft post for me:

 

As I continue this journey, I invite you to join me. Let's explore how AI can enhance our personal and professional lives, not by doing the thinking for us, but by prompting us to think more deeply.

 

Okay, enough about the wonders of Ai; let’s come back to my earlier comment about PAL’s humanity.

 

Remember in my last post when I was shocked that PAL would not reference the scene from 2001: A Space Odyssey where the HAL 9000 computer won’t let the human back into the spaceship? I thought that was odd, like an episode of Black Mirror, and I figured I’d just been added to an Ai hit list.

 

This time, again thinking that I should start with Ai, I asked PAL to write this blog for me. Since he was too terse for my taste last time, refusing to provide more than 700 words, I asked for at least 1,500 words. [It gave me 603, including reference citations. A lot of humans would have given me 15,000.]

 

I didn’t say a word about HAL 9000 this time. I thought I was in enough trouble. I did load in my last two posts, including its offering and my follow-up, along with all my other posts.

 

What came back was not terribly useful, but it did have a couple of awkward references to Space Odyssey and HAL. It was a bit like when you and your friends share a joke, and one person doesn’t get it, but doesn’t want to admit it, so they riff off the joke in a way that makes it clear they don’t get the joke, just to try to belong. How incredibly human is that?

 

I thought, “awkward”, in a snarky way, then realized that PAL doesn’t understand snarky.

 

Here’s its contribution on a prompt that had absolutely nothing to do with Space Odyssey:

 

HAL 9000: A Cautionary Tale

Let's not forget HAL 9000 from 2001: A Space Odyssey. HAL's descent into malfunction wasn't due to a lack of intelligence but a lack of ethical guidance and human oversight. It's a reminder that while AI can process information, it lacks the moral compass that humans provide.

 

Maybe Ai is becoming more human, but humor and irony aren’t on the menu yet. I do appreciate the reminder about Ai’s lack of moral compass. How ironic that humanity seems to be losing its own.

 

In Genesis, Henry Kissinger’s last book, written with Craig Mundie and Eric Schmidt, the authors wrote about the humanity of Ai and the possible merging of our ‘species’:

 

“Now, propelled by the emergence of an intelligence that far exceeds our own, we are converging upon a revolution in biology that may change our conception of human life [1]…What is AI and what is human will change and, in some cases, merge [2].”

 

If that doesn’t get your head spinning, check to see if you have a pulse. Next time, maybe we’ll talk about the larger benefits and risks of Ai. For now, go find your use case.

 

[1] Henry Kissinger, Craig Mundie, and Eric Schmidt, Genesis (New York: Hachette Book Group, 2024), 162.

[2] Ibid., 204.

