I’d wager that you don’t have to go far to hear someone utter the expression: “I’m so tired of AI.” It is hard not to experience AI fatigue, what with Microsoft putting Copilot into just about every recess of the Windows operating system, Apple Intelligence and Google Gemini ever-present on phones, and ChatGPT advertising on billboards. With the unrelenting pace of AI advancements, tool adoption, and constant information overload, it is all too easy to feel overwhelmed or exhausted.
I think the big problem with AI is that the concept is largely amorphous. As a casual user, it’s hard to pin down exactly what its use case is or how to measure its value. But a clarifier needs to be tossed into the mix: if you use AI with intention, you may be able to cut through the noise and find real value.
Replacing AI Fatigue with Real Value
In the book Co-Intelligence: Living and Working with AI, Ethan Mollick offers a four-rule guide to working with AI that we think will be helpful in dealing with AI fatigue. In our terms, the rules are as follows:
- Always bring AI to the table.
- Treat AI like a coworker and tell it what kind of coworker you need it to be.
- Be the human in the loop.
- Assume this is the worst AI you will ever use.
As you think about whether your task might benefit from the help of AI, these rules are a good, principled starting point. After all, AI can’t help if it’s not involved; left out of the process, all it can do is nag in the background.
Bringing AI to the Table
As Mollick indicates, there is a subtle difference between “using AI” and “bringing AI to the table”: intention. Use AI intentionally. For instance, there is a notable difference between “How do I use Intune?” and “Talk to me as an IT director and tell me how to use Intune to make sure that devices on my network are compliant prior to accessing our company resources.” In both cases AI will provide you with useful information, but in the former, you’re left with a generic, non-specific set of directions that may or may not get you what you need.
Think about it in terms of upsides and downsides. The upside: the AI you are using completely answers your question, lets you learn something new, and saves you real time. The downside? You spent 15-30 seconds crafting a prompt and perhaps received not exactly what you wanted or needed. Even then, you have an inflection point: you can clarify your ask, which takes only another few seconds, or you can go outside of AI and search for the same thing. Minimal downside, significant reward.
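The difference between a vague ask and an intentional one can be sketched in code. The snippet below uses the common chat-message format (role/content pairs) that APIs such as OpenAI’s Chat Completions accept; the helper name and persona wording are illustrative assumptions, not part of any particular library.

```python
# Sketch: "bringing AI to the table" with intention.
# The frame_prompt helper is hypothetical; the role/content message
# structure is the widely used chat-completion format.

def frame_prompt(persona: str, task: str) -> list[dict]:
    """Wrap a task in an explicit persona so the model knows what
    kind of 'coworker' to be, instead of answering generically."""
    return [
        {"role": "system", "content": f"Act as {persona}. Be specific and practical."},
        {"role": "user", "content": task},
    ]

# A vague ask:
vague = [{"role": "user", "content": "How do I use Intune?"}]

# An intentional ask:
intentional = frame_prompt(
    "an IT director",
    "Tell me how to use Intune to make sure that devices on my network "
    "are compliant prior to accessing our company resources.",
)
```

Both versions cost seconds to write, but the second gives the model the context it needs to answer specifically rather than generically.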
Treat AI like a Coworker
Remember, too, that tools like ChatGPT, Copilot, and Gemini are word completion engines. They predict what comes next and aim to keep you happy. They are not (yet) sentient. You control how they are likely to proceed and whether they are offering value in what they deliver. If you want your agent to present the information you’ve asked for through the lens of a marketer, salesperson, or technical expert, tell it so.
Mollick speaks specifically about how he wouldn’t consider himself a “technical expert,” yet he indicates that he and his partner have a way with prompts. Think about interacting with different coworkers: you probably don’t interact with everyone in exactly the same way. Some appreciate candor, others appreciate details, and others still like to be thanked in advance. AI is no different. Prompting isn’t one-size-fits-all; it depends on the task and even on the particular AI agent.
Be the Human in the Loop
Mollick also reminds us that hallucinations, while reduced significantly in more recent versions of AI agents, can still happen. Agents are only as reliable as their underlying models allow. Think about using AI as if you were using Wikipedia: check the sources and understand the gravity of the task you’re asking it to perform. If it’s a public-facing, highly critical piece of work, consider the development and production pipeline; don’t push it until it’s been vetted.
Make sure that you own the content, just like you would with anything you’re working on.
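The “be the human in the loop” idea can be made concrete with a toy vetting gate: AI-drafted work never reaches production until a person explicitly signs off. The function and workflow below are illustrative assumptions, not a prescribed pipeline.

```python
# Toy sketch of a human-in-the-loop gate (names are hypothetical):
# an AI draft stays in review until a human has owned and vetted it.

def publish(draft: str, human_approved: bool) -> str:
    """Push AI-generated work to production only after human vetting."""
    if not human_approved:
        raise ValueError("Draft not vetted: keep it in review, not production.")
    return f"PUBLISHED: {draft}"
```

However you implement it, the point is the same: the approval step belongs to you, not the model.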
Assume This Is the Worst AI You Will Ever Use
A large portion of Mollick’s work in his book was conducted using GPT-4, already a generation or two behind at the time of this piece. Take a look at the graph below, a relevant example from OpenAI showing the generational leap in error rate:
You can see here that not only is the error rate on the newer model much lower, but its willingness to abstain from an exact answer is also higher. In essence, it’s less likely (though not impossible) to volunteer incorrect information just to satisfy a question. At the end of the day, this is a tool, not the be-all and end-all. And if the agent didn’t provide what you needed this time, don’t write it off forever; the field is changing at exponential speed.
Taking Small Steps Towards Real Change
At the end of the day, we get it. It can be exhausting opening your device to advertisements, pleas, and suggestions around AI. Cutting through that noise and using AI as a tool with intention might just take the AI fatigue you feel and turn it into a real difference-maker. If 15-30 seconds of prompting doesn’t result in a satisfying answer, there’s really no harm done. Just remember that the reward from a well-crafted prompt might save you more time than you’d expect. Try not to worry about being an AI expert. There’s no better time to start than right now.