Working Knowledge
Business Research for Business Leaders
What Makes Employees Trust (vs. Second-Guess) AI?

Research & Ideas | 19 Jan 2023 | by Rachel Layne
    While executives are quick to adopt artificial intelligence, front-line employees might be less willing to take orders from an algorithm. Research by the Laboratory for Innovation Science at Harvard sheds light on what it takes for people to get comfortable with machine learning.

    When an algorithm recommends ways to improve business outcomes, do employees trust it? Conventional wisdom suggests that understanding the inner workings of artificial intelligence (AI) can raise confidence in such programs.

    Yet, new research finds the opposite holds true.

    In fact, knowing less about how an algorithm works—but following its advice based on trusting the people who designed and tested it—can lead to better decision-making and financial results for businesses, say researchers affiliated with the Laboratory for Innovation Science at Harvard (LISH).

    Why? Because employees who are making decisions often decide to trust their anecdotal experience rather than AI’s interpretation of the data. The trouble is, sometimes decision makers think they understand the inner workings of an AI system better than they actually do.

    The findings have implications for a variety of businesses, from retailers and hospitals to financial firms, as they decide not only how much to invest in AI, but how decision makers can use the technology to their advantage. Understanding how algorithms work to make recommendations—and knowing how people navigate them—is more important than ever, the researchers say.

    “Companies are trying to make the decision: ‘Do we invest in AI or not?’ And it's very expensive,” says Timothy DeStefano, an affiliated researcher with the LISH team from Harvard Business School and an associate professor at Georgetown University.

    DeStefano partnered on the paper with LISH senior research scientist Michael Menietti; Katherine C. Kellogg, a professor at the Massachusetts Institute of Technology; and Luca Vendraminelli, who is affiliated with LISH and a post-doctoral fellow at the Politecnico di Milano.

    Trusting a fashion retailer’s program

    To test how employees react to AI systems, the researchers worked last year with the luxury fashion retailer Tapestry Inc., whose accessory and lifestyle brands include Coach, Kate Spade, and Stuart Weitzman. The firm employs 18,000 people worldwide and has about $6.7 billion in annual sales.

Like all retailers, Tapestry tries to put the right number of products in the right stores at the right time, so it sells as much as possible without being left with excess stock.

    As part of the study, Tapestry managers who oversee shelf stocking provided employees called “allocators” with two sets of recommendations to help them choose which goods to display. One set was from an algorithm that allocators could interpret, and the other was from a “black box” algorithm they couldn’t.

Researchers then tested consumer reactions to allocators’ decisions for 425 product SKUs—the codes used to track each item—at 186 stores. The products were grouped into 241 “style-colors” and sizes.

    When the allocators received a recommendation from an interpretable algorithm, they often overruled it based on their own intuition. But when the same allocators had a recommendation from a similarly accurate “black box” machine learning algorithm, they were more likely to accept it even though they couldn’t tell what was behind it. Their resulting stocking decisions were 26 percent closer to the recommendation than the average choice.

    Why? Because they trusted their own peers who had worked with the programmers to develop the algorithm.
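To make the "26 percent closer" statistic concrete, here is a minimal sketch of how closeness to a recommendation could be measured: the absolute gap between an allocator's stocking decision and the algorithm's recommended quantity, averaged across products. The function name, the sample quantities, and the resulting percentage are all illustrative assumptions, not figures from the study.

```python
def mean_abs_gap(decisions, recommendations):
    """Average absolute deviation of decisions from recommendations."""
    return sum(abs(d - r) for d, r in zip(decisions, recommendations)) / len(decisions)

# Hypothetical recommended quantities for a handful of SKUs.
recommended = [40, 25, 60, 10]

# Allocators often overrode the interpretable algorithm's advice...
interpretable_decisions = [50, 18, 70, 16]
# ...but stayed closer to the black-box algorithm's advice.
black_box_decisions = [47, 20, 67, 15]

gap_interpretable = mean_abs_gap(interpretable_decisions, recommended)
gap_black_box = mean_abs_gap(black_box_decisions, recommended)

# Percentage reduction in the gap, analogous in spirit to the
# study's "26 percent closer" finding (numbers here are made up).
reduction = 100 * (1 - gap_black_box / gap_interpretable)
print(f"Interpretable gap: {gap_interpretable:.2f}")
print(f"Black-box gap:     {gap_black_box:.2f}")
print(f"Closer by:         {reduction:.0f}%")
```

The study itself does not publish its exact closeness metric in this article; this sketch only shows one straightforward way such a comparison could be computed.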

    Social-proofing the algorithm

    The allocators “knew that people like them—people with their knowledge base and experience—had had input into how and why these recommendations were being made and had tested the performance of the algorithm,” the researchers write. “We call this social-proofing the algorithm.”

    That finding comes from 31 interviews with 14 employees that researchers conducted after the experiment.

    “Since I couldn’t see what was behind the recommendation, I was only willing to accept it because I knew that [peers] had spent a lot of time with the developers beforehand making sure that the model was accurate,” one interviewee told researchers, according to the paper.

    That employee confidence has big implications. In Tapestry’s case, revenue rose—and the leftover stock was less common—in stores where allocators used the machine learning algorithm to inform their choices.

“If your decision-making employees are comfortable with an algorithm’s development, the expense pays off in the long run,” Menietti says. “The takeaway is that, at least in industries like fashion, you can use the most complex AI models that give you better predictions. We’d now like to see if this finding holds in higher stakes settings such as medical diagnosis and treatment or credit lending.”

    That finding may be useful across industries from transportation to medicine as AI evolves and the quality—and quantity—of data climbs, DeStefano says. Researchers are working on a similar study in the trucking industry now.

    AI improves human decision-making

The research emerges as LISH joins the Digital, Data, and Design Institute at Harvard, a 12-lab organization launched last year to study six themes, including algorithms and ethics, performance and metrics, and societal impact.

    “This is precisely the type of practitioner-oriented research we’ve conducted at LISH for the past decade and now we’re scaling with our new partners,” says Jin Paik, head of labs at the institute. “We know that these advanced technologies play an instrumental role in configuring the right managerial decisions.”

    Paik and DeStefano agree that machine learning will never completely replace people as the ultimate decision makers. However, AI recommendations can help managers make better choices, resulting in more efficient organizations.

    “It’s about how we think about talent and resource allocation,” Paik says.


    Feedback or ideas to share? Email the Working Knowledge team at hbswk@hbs.edu.

    Image: iStockphoto/mkistryn


