Demystifying AI: Everything You Wanted to Know but Were Afraid to Ask

If you’re wondering how to make AI work for you at work, then you’ll want to catch up on our AI live chat series with Radical Candor author and co-founder Kim Scott, co-founder and CEO Jason Rosoff, and Lead Coach and Podcast Host Amy Sandler. (And check out our AI at Work podcast episode below.)

Clearly defining the use case and providing the right instructions to the AI interface you’re using are critical for ensuring it operates within appropriate boundaries at work. The key is being intentional about deploying AI as an assistive tool while enforcing ethical guidelines.

Here are some creative ways to use AI at work.

  • Create an AI “Board of Advisors”: Generate AI personas that provide different perspectives and counter your blind spots when making decisions. These AI advisors can challenge your thinking from various philosophical, professional, or contrarian viewpoints (see the sketch after this list).

  • Leverage AI for Debate Prep: When faced with a complex business issue, use an AI language model to lay out and engage in a thoughtful debate, weighing the pros and cons from multiple angles and lenses like finance, operations, legal implications, etc.

  • Get an AI Tutor: If struggling to grasp a particular complex topic or skill, an AI assistant can act as a personalized tutor. It can ask questions to assess your understanding, provide step-by-step guidance tailored to your learning needs, and adjust its teaching style/persona as required.

  • Use Voice Cloning Strategically: While voice impersonation raises ethical concerns, AI voice cloning could have valid professional uses like generating audio training materials or making presentations more engaging by embodying different tones/accents. Just be transparent that it is an AI voice.

  • Externalize Your Internal Voices: Leverage AI voice clones to give embodied expression to the different “internal voices” we all have – the perfectionist, the inner critic, the ambitious go-getter, etc. This can provide perspective when grappling with self-doubt.
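
For the technically curious, the “board of advisors” idea also works outside the chat window. Here’s a minimal sketch using Anthropic’s Python SDK, assuming an API key in your environment; the model name, personas, and question are illustrative, not something from the live chats.

```python
# A minimal sketch of an AI "board of advisors": the same question goes to
# several personas, each defined by a different system prompt.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

ADVISORS = {
    "CFO": "You are a skeptical CFO. Stress-test every idea on cost and risk.",
    "Customer": "You are a long-time customer. React from a user's point of view.",
    "Contrarian": "You are a contrarian. Argue the strongest case against the idea.",
}

def convene_board(question: str) -> dict:
    """Ask every advisor persona the same question and collect the answers."""
    answers = {}
    for name, persona in ADVISORS.items():
        message = client.messages.create(
            model="claude-3-5-sonnet-20240620",  # illustrative model name
            max_tokens=400,
            system=persona,  # the persona lives in the system prompt
            messages=[{"role": "user", "content": question}],
        )
        answers[name] = message.content[0].text
    return answers

for advisor, advice in convene_board("Should we delay our launch by a quarter?").items():
    print(f"--- {advisor} ---\n{advice}\n")
```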

Watch our demystifying AI series below and tune in each week for new live chats with Kim Scott, Jason Rosoff, and Amy Sandler. (We used Anthropic’s Claude to help generate these video summaries.)


Demystifying AI: Where to Start and How to Use It


Have you been hearing a lot about AI lately and feeling a bit lost? Maybe you want to try it out but don’t know where to start. Don’t worry, we’ve got you covered!

In our latest discussion on demystifying AI, we dove into the basics – where to access AI tools and how to begin using them. Here are the key takeaways:

The Three Major AI Playgrounds

There are three main companies that have opened up their large language model AI for public use through chat interfaces:

1. Anthropic – Their AI is called Claude

2. Google – Their AI is called Gemini

3. OpenAI – Their AI is called ChatGPT

To access these, simply search for the name (e.g., “Claude AI”) and the top result will lead you to the chat interface.

Getting Started with AI

Once you’ve navigated to one of the AI interfaces, you’ll typically see a text box where you can enter your prompt or query. Don’t be intimidated by the blank space! Treat it like a sandbox to explore and play around.

You can ask AI all sorts of questions – silly, serious, creative, analytical. You can also instruct the AI to take on different roles or approaches for assisting you, like asking it to behave as a newspaper reporter or creative writing partner.

One powerful use of AI is for task assistance. You can break down different steps or components of a task you’re working on and have the AI help elaborate, explain concepts, or suggest ideas along the way.
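
If you’d rather script this than type into a chat window, the same role-plus-task pattern works through an API. Here’s a minimal sketch using OpenAI’s Python SDK, with an illustrative model name and prompts:

```python
# A minimal sketch of the "assign the AI a role" pattern.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        # The system message sets the role; the user message carries the task.
        {"role": "system", "content": "You are a patient writing coach."},
        {"role": "user", "content": (
            "Help me outline a blog post about giving feedback. "
            "Suggest three angles and ask me one clarifying question."
        )},
    ],
)
print(response.choices[0].message.content)
```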

The Generative Power of AI

Beyond just answering questions, AI excels at generative tasks – creating new content and ideas from your inputs. You can engage in back-and-forth brainstorming, have the AI build upon your concepts, or ask it to generate entirely new options fitting certain criteria.

As we discovered, AI can be an incredible co-pilot for ideation, writing, creative projects, and more. You provide guidance and direction, and AI’s generative capabilities can open up new possibilities.

So don’t be afraid to start exploring these AI tools! Treat them as a knowledgeable assistant and creative companion, ready to support you with a wide range of queries and tasks. The possibilities are endless when you unlock the power of AI.

The Differences Between AI Search and Traditional Search Engines


In our latest discussion on demystifying AI, we explored the key distinctions between using AI tools like Claude, Gemini, and ChatGPT for searches versus relying on traditional search engines like Google. Here are the main takeaways:

How AI Search Works

When you enter a query into an AI chatbot, it doesn’t simply return a list of websites or sources like a traditional search engine. Instead, the large language model (LLM) powering the AI draws from its training data – a vast corpus of digitized information spanning books, articles, websites and more – to generate a narrative response aimed at answering your question directly.

Some AI models like Google’s Gemini and the latest ChatGPT can also supplement their knowledge by searching the internet and incorporating relevant information from web sources into their responses.

The Limitations of AI Search

While AI search has powerful capabilities, it’s important to understand its limitations compared to traditional engines:

1. Training data constraints – Most LLMs are trained on data pre-dating late 2021 or early 2022, so they may lack the latest facts and information on recent events when answering queries.

2. Inaccurate “hallucinations” – Without robust sources to verify against, LLMs can sometimes generate plausible-sounding but ultimately inaccurate or fabricated information.

3. Lack of source transparency – AI tools typically can’t disclose the specific sources their responses pull from, making it hard to fact-check.

4. Loss of context across sessions – AIs maintain conversational context only within the same dialog session. Start a new one and the previous context is lost (the sketch below shows why).
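
That last limitation is easier to see in code. Chat APIs are stateless: the model only “remembers” the messages the client resends with each request, so a session is really just a growing list. A minimal sketch, assuming OpenAI’s Python SDK and an illustrative model name:

```python
# Why context disappears between sessions: the model sees only what you send.
from openai import OpenAI

client = OpenAI()
history = []  # this list IS the session's memory

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep context for the next turn
    return answer

ask("My project is called Aurora. Please remember that.")
print(ask("What is my project called?"))   # works: "Aurora" is still in `history`

history.clear()                            # simulates starting a brand-new session
print(ask("What is my project called?"))   # fails: the previous context is gone
```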

Using AI Search Responsibly

Given these constraints, it’s crucial to think of AI search assistants not as authoritative sources of truth, but as supplementary tools to augment and kickstart your research process. Any critical facts or claims should be cross-verified against trusted primary sources before being relied upon.

AI search can be incredibly valuable for:

– Getting an initial overview or explanation on a complex topic

– Surfacing relevant context and framing productive queries

– Generating ideas, outlines, or drafts to build upon

– Finding instructions or workflows for specific tasks

However, the onus remains on the user to validate the veracity of the AI’s responses, especially for any information impacting important decisions or legal matters.

As these AI tools continue evolving, the capabilities and safety guardrails around their knowledge acquisition and retrieval methods will keep advancing. But for now, viewing AI assistants as knowledgeable-but-fallible partners in your exploratory process is advised.

We’re excited to keep demystifying this powerful technology! Stay tuned for more tips on getting the most out of AI while avoiding its pitfalls.

Using AI as Your Personal Gift-Giving Assistant

We all have that person in our life who seems impossible to shop for – whether it’s a partner with very specific interests or someone who truly has everything they need already. For those stumped on finding the perfect gift, AI can be an unexpected assistant!

In our latest AI demystifying session, we put various language models to the test by tasking them with recommending a birthday gift for Kim Scott’s husband Andy. Here’s a recap of how it all unfolded:

Prompting the AI for the Best Results

Rather than just asking “What gift should I get my husband?”, Kim’s co-host Jason provided more context by prompting the AI with:

“You are an expert gift giver. I need to purchase a gift for my husband. Ask me questions to identify a great gift I can get in time for his birthday this weekend.”

By setting the scene and assigning the AI the persona of “expert gift giver,” Jason enabled it to take a consultative approach – just like a human personal shopper might.

The follow-up questions from Claude, Gemini, and ChatGPT showed they understood the best gift insights come from learning about the recipient’s interests first.
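
For readers who want to experiment beyond the chat window, the exact same prompt works through an API. Here’s a hedged sketch of the consultative loop using Anthropic’s Python SDK; the model name is illustrative, and the system prompt is the one quoted above:

```python
# A minimal sketch of the consultative "expert gift giver" pattern as an
# interactive loop. Assumes the `anthropic` package and an API key.
import anthropic

client = anthropic.Anthropic()
SYSTEM = ("You are an expert gift giver. I need to purchase a gift for my husband. "
          "Ask me questions to identify a great gift I can get in time for his "
          "birthday this weekend.")

messages = [{"role": "user", "content": "Let's begin."}]
for _ in range(5):  # a few rounds of questions and answers
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model name
        max_tokens=400,
        system=SYSTEM,
        messages=messages,
    )
    text = reply.content[0].text
    print(f"\nAI: {text}")
    messages.append({"role": "assistant", "content": text})
    messages.append({"role": "user", "content": input("You: ")})
```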

Homing in on the Winning Idea

As Kim provided details like Andy’s love for the San Francisco Giants baseball team, fixing things, and listening to the radio while working on projects, the AI assistants offered an array of thoughtful suggestions:

– Tickets to a Giants game, potentially with premium experiences

– A high-end tool set, maybe one specifically for baseball bat maintenance

– A new radio for listening to Giants games

– Baseball skills training from a former pro player

Gemini went above and beyond by not only recommending Giants game experiences but actually surfacing a link to the team’s website to purchase them.

While not perfect, the AI models demonstrated an impressive ability to synthesize the interests and budget parameters into creative and ultra-personalized gift ideas.

Cringe-Worthy Moments

In a twist, Jason tried getting the AI to take on Andy’s persona directly by stating “You are my husband” and listing his interests. Unfortunately, this prompted some cringe-worthy bias from Claude, which repeatedly referred to Kim as “honey” – not something Andy would ever say. It also assumed a heteronormative relationship without asking for context.

While certainly an unintended outcome, it was an insightful example of how language models can pick up on and perpetuate societal biases from the data they were trained on. As Jason explained, AI companies do work on mitigating harmful biases around race, gender, and other characteristics – but relatively benign assumptions about relationship dynamics seem to have slipped through.

Gemini notably avoided this pitfall, suggesting Google has made more focused efforts around inclusive pronoun use and minimizing heteronormative framings. It’s a reminder that while AI assists us, we need to consciously counteract its blind spots until the models become more attuned to inclusive communication.

The Verdict

Was the AI gift counsel successful? According to Kim, the exercise was “surprisingly helpful” in clarifying her direction without necessarily providing a definitive answer. Between revisiting the idea of replacing Andy’s beloved Nike jacket and the curated Giants game experience, she felt equipped to make her final decision.

More broadly, it illustrated how simple prompts can unlock AI’s potential as an insightful discussion partner and creative co-ideator. By providing context upfront, we can steer AI towards more productive roles beyond just reactive responding.

So the next time you need to pinpoint the perfect present (or tackle any goal-oriented task), consider recruiting an AI assistant! With a few cues about your objective and the recipient’s preferences, you may be surprised at the thoughtful suggestions it proposes. Just watch out for any rogue heteronormative assumptions along the way.

Demystifying AI Safety: Behind the Scenes with the Experts


As artificial intelligence (AI) continues to make headlines and reshape various industries, concerns around the safety and responsible use of these powerful technologies are also growing. In a recent discussion, Jason Rosoff, CEO of Radical Candor, and other experts shed light on the measures being taken by major tech companies to ensure AI systems are developed and deployed responsibly.

One of the key stories that sparked this conversation was a Bloomberg report about Microsoft boosting its “Responsible AI” team from 350 to 400 employees – roughly a 14% increase. This move comes in the wake of incidents involving Microsoft’s Copilot chatbot generating “weird to harmful” responses, as well as internal whistleblowing raising red flags about potential risks.

So, what exactly are companies like Microsoft doing to make AI safer and more trustworthy? According to the experts, there are three main technical tools being employed:

1. Toxicity Gates: These are systems that evaluate user inputs to AI models, assessing if a request appears designed to circumvent safety measures or elicit harmful responses. If flagged as potentially toxic, the input may be blocked from reaching the AI. (A minimal sketch of this pattern follows the list.)

2. Moderation Tools: While toxicity gates filter inputs, moderation tools evaluate the outputs generated by AI models. Using complex logic, they identify and block responses deemed unsafe or harmful from being sent back to the user.

3. Red Teaming: This involves dedicated teams actively trying to “break” or manipulate the AI into producing unsafe outputs. The insights gained from successful red team attacks are then used to improve the AI’s safety protocols.
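
To make the first two tools concrete, here is a minimal sketch of a toxicity gate and a moderation check wrapped around a model call. It uses OpenAI’s public moderation endpoint purely as an illustration of the pattern, not as a picture of any vendor’s actual production pipeline:

```python
# A toy "toxicity gate" (screens inputs) plus "moderation tool" (screens outputs).
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Ask the moderation endpoint whether text violates content policies."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged

def guarded_chat(user_input: str) -> str:
    if is_flagged(user_input):            # toxicity gate: block bad inputs
        return "Sorry, I can't help with that request."
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": user_input}],
    )
    answer = response.choices[0].message.content
    if is_flagged(answer):                # moderation tool: block unsafe outputs
        return "That response was withheld for safety review."
    return answer
```

Red teaming, by contrast, is a human process: dedicated testers probe a system like this one for gaps, and their findings feed back into the gates above.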

Importantly, human reviewers play a crucial role in analyzing unsafe outputs caught by moderation tools or red teams. Their findings help engineers understand vulnerabilities and strengthen AI systems against future attacks or misuse.

However, the experts acknowledge that defining “harm” itself is a complex philosophical challenge. As Jason noted, “We need actual ethicist philosophers…to help us think about how to define harm so that these tools advance human society as opposed to further degrading human relationships.”

While major tech companies have made efforts to involve ethicists and philosophers in AI development, integrating this thinking into rapidly evolving AI systems remains an ongoing challenge. As Rosoff warns, AI could accelerate the spread of misleading or harmful “expert” content that exploits human vulnerability to trust perceived authorities.

As AI capabilities continue to grow, the coming months may see difficult ethical dilemmas arise. The experts hope this will spur greater collaboration among AI leaders to combat misinformation and clearly identify AI-generated content so it can be properly scrutinized.

Ultimately, ensuring AI safety is a multifaceted endeavor involving technical safeguards, human oversight, and deep philosophical reflection on the implications of these transformative technologies. Getting it right will require ongoing dialogue and cooperation across disciplines to uphold ethical principles as AI’s impact on society expands.

Exploring the Latest AI Innovations: GPT-4o, Education, and Interview Prep


The world of artificial intelligence is moving at a blistering pace, with major tech companies unveiling exciting new AI capabilities seemingly every week. In our latest discussion, we dove into some of the biggest AI announcements, including OpenAI’s release of GPT-4o and its potential applications in education and interview preparation.

One of the most significant developments came from OpenAI, which unveiled GPT-4o, the latest iteration of its powerful language model. Notably, OpenAI made the unprecedented move of making GPT-4o available for free to everyone immediately, a departure from its previous approach of offering new tech to subscribers first. This strategic decision not only democratizes access to cutting-edge AI but also cleverly positions OpenAI as the consumer-friendly choice amidst a flurry of more technical announcements from competitors like Google.

AI In Education

One of the most compelling demonstrations of GPT-4o’s capabilities involved using it as an AI tutor to help a student solve a math problem. By providing step-by-step guidance, asking questions to assess understanding, and responding to the student’s inputs (including drawings on a tablet), GPT-4o showcased its potential as a personalized educational assistant. This ability to have a dynamic, back-and-forth conversation – even allowing the student to interrupt the AI – represents a breakthrough in transitioning from turn-based interactions to more natural discourse.

AI for Interview Prep

Beyond education, GPT-4o also displayed its versatility in a mock job interview scenario. The AI offered candid feedback on the candidate’s appearance and demeanor, providing an outside perspective akin to seeking advice from a friend but without the potential for judgment or bias. This capacity to offer an impartial, “safe” viewpoint could prove invaluable in various social situations where we often hesitate to seek human opinions.

As exciting as these developments are, the experts caution that defining and preventing “harmful” outputs from AI remains a complex challenge. While major tech companies have made efforts to involve ethicists and philosophers in AI development, integrating this thinking into rapidly evolving systems is an ongoing endeavor. There are also concerns about AI’s potential to accelerate the spread of misinformation by masquerading as credible expertise.

AI Safeguards

To address these risks, companies are employing a range of technical safeguards, including “toxicity gates” to filter potentially harmful inputs, “moderation tools” to block unsafe outputs, and “red teams” dedicated to trying to “break” the AI to identify vulnerabilities. Human reviewers also play a crucial role in analyzing unsafe outputs caught by these systems and improving the AI’s safety protocols.

Looking ahead, the coming months may present difficult ethical dilemmas as AI’s capabilities and societal impact continue to grow. The experts hope this will spur greater collaboration among AI leaders to combat misinformation and develop clear methods for identifying AI-generated content so it can be properly scrutinized.

As AI becomes more integrated into our daily lives, we may all need to develop skills in providing clear instructions and desired outcomes to these powerful tools. Just as the father in the tutoring demo specified his preferences for how GPT-4o should tutor his son, we may need to learn how to frame our requests effectively to ensure AI assists us in the intended manner.

Ultimately, ensuring AI’s safety and responsible use will require ongoing dialogue and cooperation across disciplines. By combining technical safeguards, human oversight, and deep philosophical reflection, we can work towards upholding ethical principles as these transformative technologies continue to reshape our world.

Exploring AI Impersonation: Ethical Boundaries and Positive Applications


As artificial intelligence capabilities continue to advance at a breakneck pace, concerns are mounting about the potential misuse of AI for impersonation and spreading misinformation. However, these powerful technologies also hold intriguing possibilities for enriching our lives in novel ways. In our latest discussion, we dove into the ethical guardrails needed around AI impersonation while also examining some of its positive applications.

The Impersonation Debate

Questions arose over whether OpenAI’s AI assistant “Sky” had improperly replicated the voice of actress Scarlett Johansson without consent. While OpenAI claims it hired a voice actor and that any resemblance is coincidental, the incident highlighted growing unease about the potential for AI to be misused to masquerade as real individuals.

An even more concerning case emerged during the New Hampshire primary elections, where bad actors used an AI model to generate a fake audio clip impersonating President Joe Biden, using it for robocalls that aimed to suppress voter turnout. The state’s Attorney General has filed criminal charges, and the FCC has proposed $6 million in fines over this deceptive stunt.

As Amy noted, quoting an Atlantic article, there is a “hubris and entitlement” ingrained in transformative technologies that can breed a mindset of “manifest destiny” among their creators. The piece argues generative AI in particular “stretches this dynamic to the point of absurdity…it’s not stealing to build the future if you believe it has belonged to you all along.”

While Kim understood the author’s cynicism, she countered that we shouldn’t be too quick to declare defeat before guardrails are put in place. Drawing parallels to past debates around Google’s book-scanning project, Kim expressed hope that just as that initiative ended up increasing access to knowledge, AI models can be shaped in ethical ways that benefit society.

One positive step is the EU’s plans for an AI Safety Council comprising government officials, ethicists, and tech companies to help define responsible practices. This marks a shift from the untenable paradigm of leaving it solely to companies to self-regulate.

Jason then provided an eye-opening demo from ElevenLabs, an AI voice cloning tool. By training it with snippets of Kim’s voice, he was able to generate audio of “her” answering questions in a fairly realistic way. He also showed how the tool could make her “say” outright falsehoods, underscoring the risks of unethical actors hijacking our voices and identities.

Powerful Positive Outcomes

However, as Jason and Amy discussed, the positive applications of this technology are equally powerful. Voice cloning could help preserve connections to departed loved ones. It may enable us to embody different “internal voices” during self-reflection, coaching or therapy. We could create AI advisors to provide alternative perspectives that counter our blind spots.

The possibilities don’t end there. Language models could engage in thoughtful debates weighing the pros and cons of an issue through different philosophical or professional lenses. Or they could take on specialized personas to provide tutoring tailored to a student’s learning needs and preferences.

Ultimately, while the ethical pitfalls are serious and must be proactively addressed through government intervention and collective accountability, we shouldn’t let the fear of misuse cause us to forsake the immense creative and connective potential of these technologies. As with all transformative tools, we must be vigilant in steering AI toward its highest, most noble applications while establishing clear guidelines around its ethical boundaries.

Using AI for Data Analysis and Summarization


In the latest episode of the “Demystifying AI” series, Jason and Amy discuss the potential for AI to help level the playing field, particularly when it comes to data analysis and summarization tasks.

They look at a thought-provoking excerpt from the newsletter “The Neuron,” which posits that AI may act as a democratizing force, allowing lower performers to catch up to top performers by using AI tools.

To illustrate this, Jason walks through a fascinating example of using ChatGPT to analyze survey response data. He shows how simply copy-pasting data from a spreadsheet into the ChatGPT interface allows the AI to recreate the data in tabular form.

From there, he can prompt ChatGPT to summarize and analyze the data in different ways – pulling out key themes, calculating percentages, creating visual charts and graphs, and more.

Providing The Right Context

An important insight is that to get the most value from ChatGPT, users need to provide the right context and framing for the AI. For a novice, directly asking it to “summarize this data” may yield unsatisfactory results.

But by carefully constructing prompts that set the context (“I am an expert researcher analyzing this survey data for my boss…”) and asking specific questions, ChatGPT is able to tackle analysis tasks more adeptly.

Jason also demonstrates the value of prompting ChatGPT to “show its work” by asking how it arrived at a particular output. This can provide a teaching moment to understand the AI’s logic and approach.
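
Both ideas – context-setting and “show your work” – can be combined in a single scripted prompt. A minimal sketch, where the CSV snippet, model name, and wording are all illustrative assumptions:

```python
# Context-framed data analysis with a "show your work" instruction.
from openai import OpenAI

client = OpenAI()

survey_csv = """respondent,team,satisfaction_1_to_5
1,Engineering,4
2,Sales,2
3,Engineering,5
4,Sales,3"""

prompt = (
    "I am a researcher analyzing survey data for my boss.\n"
    "Here is the raw data as CSV:\n\n" + survey_csv + "\n\n"
    "1. Summarize satisfaction by team as a table with averages.\n"
    "2. Show your work: explain step by step how you computed each figure."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```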

He notes that for pure analysis tasks, ChatGPT tends to outperform other AI offerings like Anthropic’s Claude, though those may be better for creative writing prompts.

Overall, the segment highlights the immense potential of AI to augment data analysis abilities for users at all levels. With intuitive prompting to provide context, even novices can harness the remarkable capabilities of cutting-edge language models like ChatGPT.

As the underlying technology continues advancing, we may indeed see AI acting as a great equalizer in analytical skills across diverse industries and roles.

AI and Truth — Understanding the Limitations of Large Language Models


Kim, Jason, and Amy explore the complex relationship between artificial intelligence (AI), particularly large language models like ChatGPT, and the concept of truth. The conversation centers on a provocative academic paper titled “ChatGPT is Bullshit.”

The key argument presented is that describing AI’s inaccuracies as “hallucinations” is misleading. Instead, the paper’s authors suggest that AI systems like ChatGPT are essentially “bullshitting” — a term with philosophical implications distinct from lying.

Bullshitting, as defined by philosopher Harry Frankfurt, is characterized not by an intent to deceive but by a reckless disregard for the truth. This framing helps explain why AI models sometimes produce incorrect or nonsensical information. They’re not trying to lie or hallucinate; rather, they’re generating responses with no inherent concern for truthfulness.

The discussion highlights several important points:

  1. AI models are designed for utility, not truth: The primary goal of these systems is to provide useful responses, not necessarily factual ones.

  2. Lack of agency: Unlike humans, AI doesn’t have intentions or an agenda when producing information.

  3. Pattern recognition vs. understanding: AI models work by recognizing patterns in human language, not by truly comprehending information or facts.

  4. Implications for bias: The model’s indifference to truth means it can reproduce biases present in its training data.

  5. Challenges in implementing “truth checks”: Adding a layer of fact-checking to AI systems would be complex and potentially introduce new biases.

Kim, Amy, and Jason emphasize the importance of maintaining a skeptical attitude when using AI tools. Users should view AI-generated content as a starting point for brainstorming or research, not as a source of definitive facts.

Looking ahead, the development of separate AI “agents” to help verify information was proposed as a potential solution. However, this approach would also come with its own set of challenges and potential biases.
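
As a rough illustration of that idea, a verifier “agent” can be as simple as a second pass in which a model critiques a draft answer for unsupported claims. The sketch below is entirely illustrative, with an assumed model name and prompts:

```python
# One model drafts an answer; a second pass flags claims to double-check.
from openai import OpenAI

client = OpenAI()

def draft(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def verify(question: str, answer: str) -> str:
    critique = (
        "You are a skeptical fact-checker. List every factual claim in the "
        "answer below and flag any that should be checked against a primary "
        f"source.\n\nQuestion: {question}\n\nAnswer: {answer}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": critique}],
    )
    return response.choices[0].message.content

question = "When was the first transatlantic telegraph cable completed?"
print(verify(question, draft(question)))
```

Of course, the verifier is itself a language model with the same indifference to truth, which is exactly the limitation the paper describes.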

Understanding AI’s relationship with truth is crucial for responsible use of these powerful tools. By recognizing that AI models like ChatGPT are essentially sophisticated “bullshitters,” users can better leverage their capabilities while remaining aware of their limitations.

Comparing AI-Generated and Human-Written Responses

Kim shares an interesting experiment comparing AI-generated content with human-written responses. The experiment focused on answering a question about handling criticism and negative feedback.

Two responses were presented: one written by Kim herself and another generated by an AI trained on her work. Surprisingly, both responses received positive feedback, with the AI-generated answer scoring a respectable B+ grade from Kim.

Key takeaways from the experiment:

  1. AI capabilities: The AI-generated response covered important points clearly and concisely, demonstrating the potential of AI in producing high-quality content.

  2. Human touch: Kim’s human-written response was noted for its warmth, tone, and personal connection, elements that the AI version lacked.

  3. Scaling potential: Kim highlighted the value of AI in addressing the challenge of responding to numerous inquiries, which human experts often struggle to manage due to time constraints.

  4. Room for improvement: The AI response contained minor errors and used phrases that Kim wouldn’t typically use, indicating areas for refinement in AI training.

Kim and Jason also touch on broader AI topics, including:

  1. Microsoft’s decision to give up its observer seat on OpenAI’s board, potentially due to antitrust concerns.

  2. The importance of transparency and external oversight in AI development.

  3. Apple’s approach to integrating AI features while prioritizing user privacy.

  4. Sequoia Capital’s warning about overinvestment in AI startups.

Kim also shares personal experiences with AI, including using it as an “outboard brain” for research and enjoying a surprisingly smooth interaction with a bank chatbot.

As AI continues to evolve, it’s clear that while it can produce impressive results, there’s still value in the human touch. The future may lie in finding the right balance between AI efficiency and human expertise.

Leveraging AI as a Collaborative Tool

Amy and Jason share valuable insights on using artificial intelligence (AI) as a collaborative tool for various projects. They discuss practical applications of AI, particularly in strategy development and research.

Key Takeaways:

  1. AI as a Workbook: Jason emphasizes the importance of treating AI conversations like workbooks, building context over time to make the AI more useful. He demonstrates how he used ChatGPT to develop a community strategy for Radical Candor, showing that maintaining a long-running conversation history can lead to more refined and contextually relevant outputs (a minimal sketch of this pattern follows the list).

  2. Customization and Context: The discussion highlights the significance of providing AI with specific context and custom instructions. Jason explains how he fed ChatGPT with background information and goals, allowing it to generate more accurate and tailored responses.

  3. AI as an Outboard Brain: One of the most powerful uses of AI, according to Jason, is as an “outboard brain.” It can help synthesize and summarize information, acting as a thought partner for solo projects.

  4. Limitations and Challenges: While acknowledging AI’s potential, Jason also points out its current limitations. He notes that AI’s ability to handle complex structures like OKRs (Objectives and Key Results) might require specific prompting and custom instructions to be truly useful.

  5. The AI Bubble and Economic Impact: Amy and Jason touch on the growing skepticism in the AI startup world, with many companies struggling to secure funding. Jason suggests that many current AI applications might be too trivial, and there’s a need for more substantial uses of the technology.

  6. Future Outlook: Looking ahead, Jason speculates on potential game-changing applications of AI, such as making medical research more accessible. He also mentions the ongoing experiment at Radical Candor to create an AI bot that can consistently reproduce high-quality answers similar to what Kim Scott (co-founder of Radical Candor) might provide.
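
The “workbook” pattern from the first takeaway can be sketched in a few lines: custom instructions persist as a system message, and the running history is saved to disk so context survives between sessions. Everything here, file name included, is a hypothetical illustration:

```python
# A conversation-as-workbook sketch: persistent custom instructions plus
# a history file that accumulates context across sessions.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()
WORKBOOK = Path("community_strategy.json")  # hypothetical project file

CUSTOM_INSTRUCTIONS = (
    "You are helping me develop a community strategy. Keep prior decisions in "
    "this conversation in mind and build on them rather than starting over."
)

def ask(question: str) -> str:
    history = json.loads(WORKBOOK.read_text()) if WORKBOOK.exists() else []
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "system", "content": CUSTOM_INSTRUCTIONS}] + history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    WORKBOOK.write_text(json.dumps(history))  # the workbook accumulates context
    return answer
```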

The discussion underscores the importance of thoughtful, purpose-driven uses of AI. While the technology has shown promise in various fields, Jason emphasizes the need for entrepreneurs to focus on creating genuinely helpful and impactful AI applications.

As AI continues to evolve, its role as a collaborative tool in strategy development, research, and knowledge dissemination is likely to grow. However, realizing its full potential will require careful consideration of its limitations and a focus on substantive, value-adding applications.

————————————————————————————————————————————————————————————–

Do you want to fix communication issues in the workplace? Then you need The Feedback Loop (think Groundhog Day meets The Office), a 5-episode workplace comedy series starring David Alan Grier that brings to life Radical Candor’s simple framework for navigating candid conversations.

You’ll get an hour of hilarious content about a team whose feedback fails are costing them business; improv-inspired exercises to teach everyone the skills they need to work better together; and after-episode action plans you can put into practice immediately to up your helpful feedback EQ.

We’re offering Radical Candor readers 10% off the self-paced e-course. Follow this link and enter the promo code FEEDBACK at checkout.