October 14, 2024

AI and Change Management: Navigating the Human Side of AI

    The adage “Once burned, twice shy” aptly describes the last AI revolution. Company decision-makers invested heavily in AI, only to find that integrating it into their businesses was more complicated and expensive than anticipated.

    Despite the risks, forward-thinking organizations have embraced the ongoing AI revolution. Many companies have purchased Microsoft Copilot licenses and taken a trial-and-error approach to AI, experimenting with use cases to determine what works.

    An effective AI change management strategy must consider multiple employee groups — including front-, middle-, and back-office workers — who are most affected by AI implementation. How you talk about AI and introduce AI tools to your employees influences how your people perceive AI, which in turn affects their acceptance and adoption of it.

    The Most Important Part of Your AI Change Management Strategy: Your People

    Your AI investments’ success depends on your people. They must buy into AI’s value, see themselves as innovating with AI rather than training their replacements, and use the AI tools you’ve built in their day-to-day work.

    Your employees’ daily interactions with AI tools generate valuable training data and feedback loops. Long-term success in AI depends on the quality of this data. The only way to get that high-quality training data is to get your people on board with AI.

    People often fear AI due to concerns about the unknown, obsolescence, and job security. Soothe your team’s anxiety with sensitive messaging, emphasizing an arm-in-arm approach to AI that leaves no one behind.

    If you frame an AI tool as a personal assistant that helps with cumbersome tasks, employees may feel excited to spend more time on tasks they enjoy. On the other hand, if you frame AI as a tool that automates work, employees may worry about their job security.

    Understanding AI Decision-Making

    AI can only help us make decisions if we look at decisions as probabilistic scenarios to evaluate.

    People often struggle with probabilistic thinking, finding it challenging to envision competing outcomes with different likelihoods.

    People have trouble distinguishing between a 70 percent, 80 percent, and 90 percent chance of a given outcome happening. In a business setting, however, the difference between a 70 percent and an 80 percent chance, repeated across hundreds of scenarios, compounds into dramatically different results.

    Many of us haven’t learned to use probabilities to evaluate decisions and consider risk. For example, we struggle to assess outlier risk or the downside of a wrong decision versus the upside of a right decision.
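
    To make the stakes concrete, here is a minimal Python sketch that simulates hundreds of decisions at different success rates. The payoff values and trial count are hypothetical assumptions chosen only to illustrate asymmetric risk: a wrong call costs more than a right call earns.

```python
import random

def simulate(success_prob, trials=500, upside=10_000, downside=-25_000, seed=42):
    """Total payoff over many independent decisions with a given success rate.

    The payoffs are hypothetical: each correct call earns 10,000 and each wrong
    call loses 25,000, reflecting a downside that outweighs the upside.
    """
    rng = random.Random(seed)
    return sum(upside if rng.random() < success_prob else downside for _ in range(trials))

for p in (0.70, 0.80, 0.90):
    print(f"{p:.0%} success rate over 500 decisions -> net payoff: {simulate(p):,}")
```

    With these assumed payoffs, a 70 percent hit rate loses money on average while an 80 percent rate is solidly profitable, which is exactly the distinction probabilistic thinking is meant to surface.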

    To encourage AI adoption, integrate it seamlessly into existing workflows. This requires rethinking business processes and re-engineering tasks.

    For instance, if you transition from a wood-burning stove to an electric stove, you don’t throw a couple of logs on the electric stove and try to turn it on. Doing so betrays a fundamental lack of understanding of the new technology.

    Similarly, you hinder your progress if you add AI to your organization without providing a framework and guidance and expect your people to figure it out. You might end up with the equivalent of a pile of logs and no return on your investment.

    Just because technology exists doesn’t mean people know how to apply it.

    Evaluate the Risks of AI Systems

    Your strategy, engineering, legal, and compliance teams play a pivotal role in AI change management. They help you consider AI’s risks to your organization holistically.

    As prediction machines, AI tools have flaws. Recognize their imperfections while understanding when they outperform your existing tools. Get comfortable being wrong and imperfect.

    Switching away from human-based systems can stir powerful emotions not rooted in rational decision-making. Take self-driving cars, which haven’t achieved widespread adoption despite 15-plus years of autonomous driving development. Human drivers make mistakes, such as driving drunk or crashing their cars, but we have a whole system built around human liability, insurance, and culpability, with laws, rules, and regulations that govern human driving.

    Even as autonomous driving technology improves, we don’t know when self-driving cars will clear the bar of “good enough.” Without a consensus on an acceptable level of risk for the technology, universal adoption remains out of reach.

    Companies face similar challenges with AI change management. We know the liabilities and rules that apply to humans, but we don’t know which apply to AI.

    AI tools require training, and it’s your responsibility to train and manage these tools. Think of AI as a hyperintelligent intern with no social skills who knows nothing about your business. It’s highly capable, but you wouldn’t put it in front of customers on day one.

    AI Risk Scenarios

    Let’s explore how different sectors weigh AI’s risks against its rewards:

    • Utility: Utility companies used to manage vegetation on a planned schedule: a power line inspector would drive out to a site to estimate the quantity of vegetation and the level of risk. Now, utility companies use lidar drone technology to take pictures and machine learning algorithms to identify vegetation that threatens power lines. In this scenario, the cost savings outweigh the risk of an occasional error: a drone with a camera costs far less than a helicopter or truck.
    • Financial services: Manual and AI-powered fraud detection carry different risks. With manual detection, a missed fraud event harms customers, and a missed money laundering event creates major regulatory issues. Automated fraud detection tools, by contrast, can block legitimate transactions in error. For example, they can deny transfers between individuals’ bank accounts or stop a wire transfer for a home sale because they flag the transfer as unusual, and thus suspicious. When AI-driven fraud detection tools make a mistake, customers need to talk to a human. People want simple, explainable algorithms.
    • Healthcare: Consider a patient awaiting a diagnosis. Would they prefer a human doctor with 80 percent accuracy or an AI with 82 percent accuracy? Most people prefer to speak face-to-face with a human doctor. A human can tailor their delivery to the patient and, in the case of a life-changing diagnosis, provide comfort machines can’t. But if a conversation with a doctor costs $100 while a diagnosis from a machine costs $1, the risk-benefit calculus changes.
    • Consumer: Generative AI tools can write marketing emails and generate ads, and most companies use personalization rule engines to target ads for consumers. However, humans still review AI-generated emails and ads because, if unchecked, generative AI programs may hurt your brand’s reputation by writing distasteful jokes and generating images that violate copyright law.

    Prepare Your Organization for AI

    To prepare your product for AI, you need to know what data to collect so your AI teams can build machine learning algorithms.

    To administer effective AI change management, your product team must observe the change from multiple angles. Ask yourself the following questions:

    • Who will use the AI tool?
    • How will they use it?
    • What are the risks of using the tool?
    • What’s technically feasible?

    Say you want to use AI to assess storm damage. If affected customers send pictures after a storm, an AI tool can scan the images and classify each telephone pole as standing or downed.

    Asking the AI tool a less binary, more complex question — such as how serious the damage looks or how to optimize your dispatch and route management schedule — requires more data. Account for existing strategies and routines, and partner with a data science team.
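
    For the binary pole-classification case above, a sketch like the following is one plausible starting point. It is illustrative only: the “standing” and “downed” labels, the pretrained ResNet-18 backbone, and the two-class head are assumptions, and a real system would first be fine-tuned on labeled storm photos.

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for customer-submitted photos.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a hypothetical two-class head ("standing" vs. "downed").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # in practice, the head would be fine-tuned on labeled storm photos first

LABELS = ["standing", "downed"]

def classify_pole(image_path: str) -> str:
    """Return a label for one photo (illustrative; the head here is untrained)."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return LABELS[logits.argmax(dim=1).item()]
```

    Transfer learning from a pretrained backbone is a common design choice here because it reduces how many labeled photos you need to collect before the tool becomes useful.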

    Also, investigate simpler solutions. Not every problem needs to be solved using AI.

    The Three Feedback Loops of AI

    Like your employees, your machine learning algorithms and AI models have jobs at your company. You need to review their performance, help them improve over time, and decide what financial and human resources to allocate to them.

    Once AI models launch, deployment and governance teams must ensure the models continue to accomplish their goals. They must ask the following critical questions:

    • Function: Did we get the model right? Does it work in a real environment, and does it do what it’s asked?
    • Adoption: Do people use the model?
    • ROI: Did the model address our business needs?
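
    As a concrete illustration, the sketch below turns those three questions into simple metrics computed from hypothetical usage logs. The record fields, function names, and example numbers are assumptions for illustration, not a prescribed framework.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    """One logged prediction from a deployed model (hypothetical schema)."""
    user_id: str
    prediction_correct: bool  # function: did the model get it right?
    acted_on: bool            # adoption: did the user act on the output?
    value_captured: float     # ROI: value realized when the output was used

def review_model(records, eligible_users, model_cost):
    """Compute one rough metric per feedback loop: function, adoption, ROI."""
    accuracy = sum(r.prediction_correct for r in records) / len(records)
    adoption = len({r.user_id for r in records if r.acted_on}) / eligible_users
    roi = (sum(r.value_captured for r in records) - model_cost) / model_cost
    return {"function": accuracy, "adoption": adoption, "roi": roi}

# Example with made-up numbers: only one of ten eligible users acts on the output.
logs = [
    UsageRecord("ana", True, True, 1200.0),
    UsageRecord("ben", False, False, 0.0),
    UsageRecord("ana", True, True, 800.0),
]
print(review_model(logs, eligible_users=10, model_cost=1000.0))
```

    Decent accuracy paired with low adoption, as in this toy example, is a people problem rather than a model problem, which is where change management comes back in.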

    If you’d like to craft a foolproof AI change management strategy and explore your AI options, reach out to Method today.