In today's Pro issue, I share the "gray area" of truth and why, as humans, it's critical we practice the skill of discernment when working with AI systems.

Generated by me in Midjourney using the prompt: Pixar character, higher self, spiritual guide personified

Rule #3: Don't Delegate Your Discernment

If the AI tools we're using to compile research can't independently verify that the information they're generating is accurate, how can we trust any of it?

Your answer to this trust question matters – both to inform how you use AI, and to inform how you build relationships with your clients and community as AI permeates every core technology system.

Two important AI terms to know:

  • Alignment: An AI system's ability to generate results in line with its creators' intent, values, and purpose. ChatGPT is an example of an AI system aligned to the ethos of OpenAI (here's OpenAI's latest post on its approach to alignment).
  • Hallucination: A response generated by an AI chatbot that sounds plausible and convincing but is partly or entirely fabricated. For example, many have shared erroneous results generated by ChatGPT and the ChatGPT-enabled Bing.

Trust is contextual, and our ability to trust someone or something is highly dependent on our personal perspective and life experiences.

Human brains are hard-coded for bias, and so are today's AI systems. We humans still have powerful internal tools for discerning truth, and it's too early to fully outsource that job to machines.

đź’ˇ Insight: The same rules for responsible AI builders apply to humans operating in this new age of AI.

In the post I linked above, the OpenAI team shared three (of many) essential building blocks for ensuring AI has a maximally positive influence.

I argue that these building blocks also apply to all of us, especially in a business climate where AI is increasingly being used to do jobs humans used to do.

Regardless of whether you use AI in your workflows, it's worth exploring these three points from an operational integrity perspective.

🚨 Lesson: Trust is earned, contextual, and increasingly important as AI proliferates. See the forest and the trees.

Here are the three OpenAI building blocks:

1. Improve default behavior: For OpenAI, this point is about minimizing the limitations of large language models, like hallucinations and bias. What's the default ChatGPT experience like for the typical individual user?

2. Define your AI's values within broad bounds: What's the right relationship between freedom and guardrails? In an upcoming feature release, OpenAI plans to let users customize ChatGPT's behavior, and the team discusses the challenge of striking that balance:

This will mean allowing system outputs that other people (ourselves included) may strongly disagree with. Striking the right balance here will be challenging–taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.
There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are. If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to “avoid undue concentration of power.”

3. Public input on defaults and hard bounds: OpenAI plans to solicit public input on decisions related to boundaries, like the use of ChatGPT in education, or disclosure methods like watermarking. Third-party audits with vetted partners will provide additional accountability.

âś… Action: Optimize your company's alignment

What are your company's values, and how do they influence the way clients and community use your product or service?

  • Think about your clients, team, and community. Do they feel that, "out of the box," your product or service respects their values?
  • If you don't have a sense of your customers' values, how can you find out? If you do, how can you strengthen that emotional bond even further?
  • For your business, this inquiry might create opportunities to optimize your marketing messaging, humanize your onboarding processes, or improve the value proposition of your monthly subscription.
  • You can't always manage how clients feel about your brand, but you can manage how consistently and effectively your values are communicated throughout your company touchpoints.

🙋🏽‍♀️ One of the Creative Intelligence values is integrity. It's critical that as we cover AI, we do so with a commitment to honesty, ethics, and virtue.

Where do your clients or community have a say in your strategic direction?

  • Who are the tastemakers within your company, client base, community, and your industry?
  • How would they raise ethical (or other) concerns – and how would you evaluate those concerns to ensure your product-market fit holds up in a changing landscape?
  • What third-party endorsements would enhance your credibility, reputation, and trust within the marketplace?

📆 Q1 Event Calendar

Our Zoom meetups happen every Thursday at 11 am PST.

They have evolved into a space to bring your AI questions, projects, ideas, and works in progress to workshop with me and other community members.

Lately, we've been talking about some of the societal challenges and disruptive opportunities that come with AI, like election meddling or bias in AI training data, and the different ways everyday people can bring positive influence.

Here are the Zoom links to get these meetups in your calendars:

đź’Ž I'm also recording an interview with Suman Kanuganti of Personal.ai on March 1. If you have a question you'd like him to take on, head here. I may facilitate a live integration session after the interview drops. If you're interested, click the link I just shared and weigh in!


đź’¬ Conversation-Starters

So much of the value of this knowledge comes from the ways you share it with people you care about. Here are a few conversation starters to bring to your dinner table or happy hour this week.

Context: So much of our experience as humans is tinted by the bias of our own perceptions and the quality of the information we can access. Discerning truth is a learned skill, especially at a time when algorithms determine much of what we see online and deepfakes are cheap and easy to make.

  • How do you know when information is "true," "correct," or "right"?
  • What are your internal signals, cues, and signs that something you see is right or wrong? (e.g. intuition, gut check)
  • What are some examples of external signals of truth or accuracy? (e.g. credible academic papers, peer-reviewed studies, body language)
  • What are some ways AI might be used to spread misinformation or fool us into believing something that isn't true?
  • What are some ways humans spread misinformation or fool us into believing something that isn't true?

âť“ Quick Request

Would you be willing to write – or approve – a testimonial advocating for me and/or the information I share through Creative Intelligence? Please reply to this email if you're interested. I'm happy to pre-draft examples for you to personalize, or to work with a version you create. THANK YOU!
