Social AI: The Potential To Improve Human-To-Human Relationships Among The Workforce Or With Customers


You’re frustrated. Two functional leaders are pulling you into a nasty turf war just when you need them to collaborate. You’re drafting a heated reply when a friend stops you. They suggest more measured wording and recommend that you ask the two leaders to schedule a meeting, work through their conflicting priorities, and agree on a solution. You take the recommendation and cool off. You’d like to reach out and thank your friend and confidante, but you can’t, because they’re an AI. With current artificial intelligence (AI) technologies, this – and many other social capabilities – may already be possible with tools many organizations can access.

While 91% of business leaders surveyed in 2022 said they have an enterprisewide AI strategy, they typically use AI in the workplace to generate insights, optimize processes, lower costs, and improve collaboration across businesses. Within these applications, the potential for human-machine collaboration is well established. However, the potential of AI to improve human-to-human relationships among the workforce or with customers and potential recruits – what we call the social side of work – is often overlooked.

By analyzing interactions and communications and generating personalized, data-driven recommendations, AI can do much more than promote email diplomacy. It could be a powerful tool for the workforce to nurture uniquely human capabilities. AI can help us prepare for key presentations, expand our professional networks, understand the personalities and feelings of customers, promote diversity and inclusion in everyday work, and even drive innovation and culture change across an organization. Of course, such capabilities come with adoption challenges. Skepticism about this kind of AI can run deep. But a careful, user-centric, opt-in/opt-out approach can help overcome resistance and gradually introduce employees to AI.

What Social Aspects Of Work Can AI Improve?

Beyond the tactical knowledge, expertise, and skills needed to do one’s job, there are enduring human capabilities that are universally applicable and harder to develop, such as emotional intelligence, teaming, and empathy. These capabilities enable workers to build meaningful relationships with customers, leaders, peers, and potential recruits. The value of these human-to-human relationships can be foundational and critical to organizational success.

We surveyed 2,620 business leaders as part of Deloitte’s State of AI 2022 study. More than two-thirds of leaders noted that their organizations had either deployed or were developing AI applications for natural language processing (including sentiment detection and text summarization), computer vision, text chatbots, and voice agents; fewer than a third were still planning or exploring these technologies. Organizations typically use these technologies to generate insights, optimize processes, lower costs, and improve collaboration across businesses. Beyond these applications, AI can analyze human interactions during and after an event and generate personalized, confidential recommendations at the individual and organizational level to help improve human interactions at work. There are multiple AI applications for the social side of work (figure 1).

Figure 1: AI applications for the social side of work

Amplifying Emotional Intelligence Through AI Simulations, Personal Upskilling, And Networking

Simulations: Affective computing, also known as emotion AI, is a constantly evolving branch of AI that interprets human emotions in response to a situation and makes recommendations accordingly. Its real-world applications span several areas of communication.

For example, before a meeting or presentation, leaders can practice interactions with AI avatars representing team members. Based on the narrative, AI would generate possible arguments, assess persuasiveness, and give feedback to make communication more effective.

AI simulations can also be helpful when leaders want input on early-stage thinking. For complex topics, leaders may first seek input from AI, then review with peers and senior leaders at later stages, once the thinking is more developed, saving everyone time.

Upskilling At Scale: Traditionally, coaching has been made available only to select professionals in an organization – either high performers or individuals with performance issues that require direct interaction and intervention – leaving out much of the workforce. AI can enable learning experiences tailored to an individual’s emotional intelligence needs and deliver those experiences at scale. In one example, a coaching network uses AI and machine learning algorithms to match employees with coaches focused on different skill categories, such as inclusive leadership and persuasive communication.

Networking: AI-enabled applications can connect professionals with other people who have similar interests and help them grow their professional networks within and outside of their organization. Users provide information about their professional background, industry or sector specialization, and areas of interest to a model that periodically generates matches, sends introductory emails, and sets up meetings. These interactions can drive virtual watercooler or coffee-bar conversations in a hybrid work environment. Along similar lines, AI-enabled platforms can also facilitate experience- or expertise-based networking outside of the organization.
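For illustration, a minimal matching sketch in Python appears below. The profiles, similarity rule, and threshold are all hypothetical; a production system would draw on richer profile data and learned similarity measures rather than a hand-written overlap score.

```python
from itertools import combinations

# Hypothetical profiles: each user declares interests and a sector focus.
profiles = {
    "amara": {"interests": {"supply chain", "sustainability", "iot"}, "sector": "manufacturing"},
    "ben":   {"interests": {"sustainability", "analytics"},           "sector": "retail"},
    "chen":  {"interests": {"iot", "supply chain", "robotics"},       "sector": "manufacturing"},
}

def similarity(a, b):
    """Jaccard overlap of interests, with a small boost for a shared sector."""
    union = a["interests"] | b["interests"]
    score = len(a["interests"] & b["interests"]) / len(union) if union else 0.0
    return score + (0.1 if a["sector"] == b["sector"] else 0.0)

def suggest_matches(profiles, threshold=0.3):
    """Return candidate introductions whose similarity clears the threshold."""
    pairs = []
    for (name_a, a), (name_b, b) in combinations(profiles.items(), 2):
        score = similarity(a, b)
        if score >= threshold:
            pairs.append((name_a, name_b, round(score, 2)))
    return sorted(pairs, key=lambda p: -p[2])

print(suggest_matches(profiles))
# A deployed system would periodically email each suggested pair and propose meeting slots.
```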

Understanding Customers Better And Providing Superior Customer Service

Contact centers have been early adopters of automated voice systems to cope with higher call volumes amid labor shortages and shrinking IT budgets. However, endless loops of automated responses can alienate customers, making this a much less popular and often-derided use of automation. AI can not only drive automation but also make each customer touchpoint meaningful – while reducing the need for 24/7 human involvement. By analyzing data from past conversations, AI can give contact center representatives insights to prepare a baseline customer profile before an interaction, help them perform well during the interaction, and update the customer profile afterward to generate recommendations for future use.

First, getting to know your customers before meeting them. Past customer interactions are a gold mine for deriving customer insights. AI tools can ingest basic customer data and previous conversations to build a profile based on a customer’s communication style, personal priorities, responses in previous conversations, and so on. Contact center representatives can review this profile before engaging with a customer and be better prepared for a seamless conversation. AI can also identify the most appropriate service representative for a customer based on similarities in personality and communication style.

In one example, Vodafone Italy combined customers’ profile data with a customized language-generation algorithm to develop personalized promotional messages for each customer segment, covering plan upgrades, the 5G launch, and more. The effort increased customer subscriptions by 40% in 2020.

Second, engaging effectively during a customer interaction. While engaging with customers, virtual agents or chatbots can conduct real-time sentiment analysis of the conversation. The bot can then adjust its response to the result: if a customer interaction has a positive sentiment, the bot can pitch a cross-sell or upsell; if the sentiment is negative, the bot can quickly transfer the call to a contact center representative, along with notes about the interaction, so the representative can take it forward.
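As a rough illustration of this routing logic, the sketch below scores each customer turn with an off-the-shelf sentiment analyzer (NLTK’s VADER) and decides whether to upsell, continue, or escalate. The thresholds and action names are assumptions, not part of any specific product.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def route_turn(customer_utterance, positive=0.3, negative=-0.3):
    """Score one customer turn and choose the bot's next move."""
    score = analyzer.polarity_scores(customer_utterance)["compound"]  # ranges -1..1
    if score >= positive:
        return "offer_cross_sell", score      # conversation is going well
    if score <= negative:
        return "escalate_to_human", score     # hand off with notes about the interaction
    return "continue_dialogue", score

print(route_turn("This new plan sounds great, thanks for the help!"))
print(route_turn("I've called three times and nothing has been fixed."))
```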

Even when representatives are interacting with customers, AI programs can monitor the interaction in real time and suggest (through text prompts) how to respond. Humana Pharmacy, for example, uses voice analytics in its call centers: voice signals are analyzed to gauge customer engagement and give contact center employees real-time feedback during calls, allowing them to adjust their approach accordingly.

The conversational AI solution should be sophisticated enough to combine language semantics with voice tonality and interpret the customer’s emotion correctly. For instance, a user says, in a stable and flat voice, “I’m really surprised that you still haven’t managed to provide a resolution.” While the tone doesn’t show anger or frustration, phrases such as “really surprised” or “haven’t managed,” spoken with longer-than-normal pauses, can indicate a negative emotion. The application should be able to pick up these nuances to generate effective advice for customer service representatives.
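The snippet below is a toy illustration of that fusion, combining a hand-written list of lexical cues with two prosodic features (pause length and pitch variance). All phrases, thresholds, and feature names are assumptions; a real system would rely on trained acoustic and language models rather than rules.

```python
# Toy fusion of lexical and prosodic cues; illustrative thresholds only.
NEGATIVE_PHRASES = ("really surprised", "haven't managed", "still waiting", "no resolution")

def fused_emotion(transcript, mean_pause_sec, pitch_variance):
    """Combine wording hints with delivery features to flag likely frustration."""
    text_signal = any(p in transcript.lower() for p in NEGATIVE_PHRASES)
    # Long pauses delivered in a flat (low-variance) voice can signal suppressed frustration.
    prosody_signal = mean_pause_sec > 0.8 and pitch_variance < 10.0
    if text_signal and prosody_signal:
        return "likely_frustrated"
    if text_signal or prosody_signal:
        return "monitor_closely"
    return "neutral"

print(fused_emotion(
    "I'm really surprised that you still haven't managed to provide a resolution.",
    mean_pause_sec=1.2,
    pitch_variance=4.0,
))
```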

As conversational AI continues to learn and improve over time, benefits can be significant. One study involving 445 businesses across industries using AI solutions for contact center service reported 2.2 times higher first-call resolution rates and 4.5 times greater service-level agreement (SLA) attainment rates, compared to non-AI users.

Finally, deriving insights after a customer interaction for future use. AI applications can analyze interactions with customers to update customer profiles, enable service professionals to improve their pitches, and also reassess the pairing between customer service representatives and customers based on similarities in personalities and communication styles for future interactions.

As the customer service use case shows, AI can automate processes traditionally handled by humans while preserving a “human touch,” freeing up time for people to take on higher-quality work. It also illustrates that worker data can be used not only to draw meaningful insights but also to create a better work experience – a mutual benefit for the organization and the workforce.

Recruiting A Diverse Workforce And Building Diverse Project Teams

AI and data-based algorithms can provide visibility into whether the organization is truly diverse. By analyzing the profile of the workforce, AI can assess diversity (race, gender, ethnicity, etc.) and monitor it in real time across functions, career levels, and other criteria. AI can also help attract diverse new talent in many ways, including:

  • Blind Hiring. One of the earliest results of blind hiring can be observed in orchestras: In the 1970s, female musicians typically made up less than 5% of performers in US symphony orchestras. Gradually, orchestras tweaked their audition process by introducing “blind auditions” – adding partitions to shield the identity of those auditioning. The share of female musicians then increased from 5% in the 1970s to 25–40% by the early 2000s.

In the workplace, AI can enable blind hiring by stripping away identifiable attributes from resumes that are typically unrelated to candidates’ skills, expertise, or experience. By removing attributes such as name, age, headshot, gender, race, or ethnicity from resumes before they reach hiring managers, AI can reduce human bias and help drive a more equitable recruitment process (a minimal redaction sketch follows this list).

  • Soft-Skill Assessments. Some companies use evaluative AI screeners (with neuroscience-based games embedded) to better understand candidates’ hard-to-assess competencies, such as risk-taking, perseverance, and emotional intelligence, along with traditional traits such as logical reasoning and quantitative and verbal abilities.
  • Interview Panel Design. After the initial screening, AI can also help hiring managers build diverse interview panels to minimize biases.
  • Diverse Team Building From Internal Talent. Through what’s often called the “internal talent marketplace,” AI can match people’s skills against project needs to build effective teams, while being intentional about bringing in diverse professionals from outside the core project team. In one example, IBM deployed its Opportunity Team Builder AI solution to identify the best candidates to join a sales team based on their social skills and to predict the impact each member would have on the team’s overall performance. As members are added, the tool continuously recalculates the remaining skill gaps until an optimal team is formed (see the second sketch after this list).
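As referenced under Blind Hiring above, here is a minimal redaction sketch. It masks person names and locations with a general-purpose spaCy model plus simple regexes for emails and phone numbers; the model name and patterns are illustrative, and production PII removal would need far more robust detection.

```python
import re
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_resume(text):
    """Mask names, locations, emails, and phone numbers before a resume reaches reviewers."""
    for ent in nlp(text).ents:
        if ent.label_ in ("PERSON", "GPE"):  # names and places can proxy for protected traits
            text = text.replace(ent.text, "[REDACTED]")
    text = EMAIL.sub("[REDACTED]", text)
    return PHONE.sub("[REDACTED]", text)

print(redact_resume(
    "Jane Doe, jane.doe@mail.com, +1 415 555 0100. "
    "8 years in supply chain analytics, based in Austin."
))
```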
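And here is the second sketch referenced above: a greedy gap-filling loop that adds whichever candidate covers the most missing skills each round and reports the remaining gaps. The skills, names, and selection rule are hypothetical and deliberately simplistic compared with a tool like IBM’s Opportunity Team Builder.

```python
# Greedy gap-filling: each round, add the candidate who covers the most missing skills.
required = {"negotiation", "cloud architecture", "pricing analytics", "storytelling"}

candidates = {
    "devi":  {"negotiation", "storytelling"},
    "liam":  {"cloud architecture"},
    "noor":  {"pricing analytics", "storytelling"},
    "pavel": {"negotiation", "pricing analytics"},
}

def build_team(required, pool):
    team, gaps = [], set(required)
    while gaps and pool:
        best = max(pool, key=lambda name: len(pool[name] & gaps))
        if not pool[best] & gaps:
            break  # nobody left can close the remaining gaps
        team.append(best)
        gaps -= pool.pop(best)
        print(f"added {best}; remaining skill gaps: {gaps or 'none'}")
    return team, gaps

team, unfilled = build_team(required, dict(candidates))
print("team:", team, "| unfilled:", unfilled)
```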

In some cases, project teams may be more amenable to work assigned by AI than to work assigned by their managers. Team members are likely to be more trusting of AI when they are looking for quick and unbiased information, logic-driven solutions, or confidential responses without fear of scrutiny or retaliation. By integrating AI into day-to-day workflows and allocations, managers can build trust with their team members.

Fostering An Inclusive Work Environment

Diversity without inclusion is insufficient. AI can enable the workforce to drive respectful conversations and inclusive workflows – critical especially in hybrid and remote work environments. AI can drive inclusion and accessibility in meetings in several ways, including:

  • Microaggression Coaching. AI can detect microaggressions by analyzing written or verbal communications, suggest alternatives, and provide confidential feedback to the communicator to improve their sensitivity over time. When somebody’s tone becomes disrespectful, a sophisticated AI application wouldn’t scold or criticize them (the user might simply dismiss the coaching). Instead, it would subtly point out that their tone may have shifted toward the negative and nudge them to adjust it.
  • Encouraging Turn-Taking. Using simple voice detection, AI can identify individuals or groups that take over a conversation, leaving no space for others to contribute. Such in-the-moment analysis is especially helpful in hybrid and virtual settings to ensure everyone can speak and contribute to a discussion (a minimal airtime-tracking sketch follows this list). Receiving such recommendations could be uncomfortable for many people, and they could simply reject them. Thus, it’s imperative that organizations build trust in AI systems and help the workforce appreciate AI’s role in enhancing their emotional intelligence through fair and impartial feedback.
  • Improving Accessibility. AI can remove language barriers and improve accessibility in meetings and discussions. Meeting notes can be immediately transcribed into multiple languages to improve participation from global teams. Accessibility of content can be improved by providing lip-reading recognition for people with hearing impairment, facial or image recognition for people with visual impairment, and text summarization for professionals who aren’t comfortable with digesting large bodies of text in one sitting.
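As referenced in the turn-taking item above, the sketch below computes each participant’s share of airtime from diarized speech segments and flags dominance. The segments, names, and 50% threshold are illustrative assumptions; a real system would obtain the segments from a speaker-diarization model.

```python
from collections import defaultdict

# Hypothetical diarized segments: (speaker, start_sec, end_sec) from voice detection.
segments = [("alex", 0, 95), ("bo", 95, 110), ("alex", 110, 230), ("casey", 230, 245)]

def airtime_report(segments, dominance_threshold=0.5):
    """Summarize speaking share and flag anyone dominating the conversation."""
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    total_time = sum(totals.values())
    return {
        speaker: {"share": round(seconds / total_time, 2),
                  "flag": seconds / total_time > dominance_threshold}
        for speaker, seconds in totals.items()
    }

print(airtime_report(segments))
# A facilitator nudge (private, not a public call-out) could fire for anyone flagged True.
```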

Leveraging “Informal” Networks To Drive Change Management And Innovation

Leaders typically use the formal hierarchy and top-down communication to disseminate culture and values within the organization. However, they often face challenges in gaining acceptance through such channels. Not everyone who occupies a box on the organizational chart is more influential than those below them. Influencers can sit anywhere on the chart, but we tend to prioritize hierarchy over influence. In reality, workforce behaviors and culture change happen in the organizational network (figure 2).

Figure 2: Workforce behaviors and culture change happen in the organizational network

Using technologies such as text mining and natural language processing, organizations can analyze, in a systematic and scientific way, who is connected to whom, the nature of their interactions and relationships, and who the informal influencers within the organization are. Data-driven analysis of responses to surveys, focus group discussions, interviews, and the like can highlight the reasons for workforce hesitancy toward proposed changes and the degree of resistance, and can determine who is “on the fence” versus opposed to change. When leaders understand the reasons for and degree of hesitance, they’re better equipped to formulate actions to address it and to drive acceptance and change management with the help of influencers.
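For a sense of how such analysis can work in practice, the sketch below builds a graph from hypothetical interaction counts and ranks people by betweenness centrality, a common proxy for bridging otherwise separate groups. The data and the choice of centrality measure are assumptions; real deployments combine several network metrics with survey data.

```python
import networkx as nx  # assumes: pip install networkx

# Hypothetical interaction counts mined from collaboration metadata (not message content).
interactions = [
    ("riya", "tom", 42), ("riya", "ana", 30), ("tom", "ana", 5),
    ("ana", "jun", 25), ("jun", "sam", 18), ("sam", "riya", 12),
]

G = nx.Graph()
for a, b, count in interactions:
    G.add_edge(a, b, count=count)

# Betweenness centrality highlights people who sit on many shortest paths between
# others (often the informal bridges between teams); unweighted here for simplicity.
influence = nx.betweenness_centrality(G)
for person, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
```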

Informal influencers can also help drive innovation within the organization, as they can mobilize individuals and groups to facilitate the flow of ideas and information. By analyzing the connections between employees, General Motors identified influencers from different teams and functions who could drive innovative ideas for product design and customer service. The company then created an environment to develop those ideas, onboarding additional people interested in building the solutions and driving wider adoption across the organization.

Challenges Confronting The Social Side Of AI And Potential Solutions

Applications of social AI will likely face many of the same challenges as other AI applications – concerns about lack of explainability in AI decisions and risks associated with data privacy, trust, reliability, etc.

Figure 3: Challenges confronting the social side of AI and potential solutions

We discuss below some of the key elements that organizations should consider integrating when developing and implementing social AI solutions. These elements can address some of the challenges and can help create better work for humans and better humans for work.

Training The Social AI Model To Generate Impartial Recommendations For Building Workforce Trust

The training dataset for a social AI algorithm should be chosen to fairly represent the population and to mitigate biases introduced by human inputs. It’s also important to ensure that recommendations (for improvements in communication, workflows, etc.) are not skewed by career level – for example, by assuming that junior professionals need more training on inclusive communication. Further, the application should not only offer corrective suggestions on communications and interactions but also convey appreciation when employees act on its recommendations and improve the quality of their interactions.

Defining Responsibility And Accountability For The Social AI Solution And The Workforce

In recent years, there has been much discussion about whether AI should be held to machine or human standards – both ethically and legally. And, who should be held responsible and accountable for a decision: AI or the person who created or deployed it?

It’s important to establish responsibility and accountability in conversations and interactions between AI and human users. When doing so, social AI cannot be considered in isolation: it is part of an organization’s overall ethics policy, and human users remain integral throughout the AI loop. Consider an example where a social AI application recommends a language choice to an employee or a suitable team composition to a project manager. The responsibility to generate the most appropriate recommendation lies with the developer teams; the accountability for acting on that recommendation rests with the user, i.e., the human workforce. It’s imperative that this division of responsibility and accountability is documented and communicated to developers as well as end users.

Defining The Purpose Of Social AI Clearly To Drive Data Privacy

During one of our research interviews, an AI specialist who focuses on AI/machine learning product management said, “The topmost challenge is privacy … users freak out when they learn that their data is being collected … they feel, ‘I am being monitored, and my behavior will be distributed where I don’t have control.’”

One way to alleviate privacy concerns is to ensure that user data isn’t used for evaluative purposes – in other words, don’t use AI to rate your workforce’s emotional intelligence for performance reviews. The application should also seek permission to use workforce data for each specific purpose (analyzing team conversations, sales pitches, customer support calls, etc.); there should be no blanket consent covering the deployment of multiple social technologies.

Depending on the AI’s purpose, there may or may not be a need to store the data. In a simple example of turn-taking and analyzing airtime in a multiperson conversation, the data is useful in the moment to allow everyone to contribute to the discussion, and it can be deleted after the conversation. In other applications, such as improving contact center conversations, data may need to be stored for future training and improvement purposes.
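The sketch below illustrates both ideas together: consent tracked per purpose (no blanket opt-in) and a retention period tied to each purpose. Purpose names, retention windows, and field names are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical purposes and retention windows; zero days means analyze in the moment, then delete.
RETENTION = {
    "meeting_turn_taking": timedelta(days=0),
    "contact_center_training": timedelta(days=365),
    "sales_pitch_coaching": timedelta(days=90),
}

@dataclass
class ConsentRecord:
    user_id: str
    granted_purposes: set = field(default_factory=set)

    def grant(self, purpose):
        self.granted_purposes.add(purpose)

    def revoke(self, purpose):
        self.granted_purposes.discard(purpose)

def may_process(record, purpose):
    """Each purpose needs its own opt-in; there is no blanket consent."""
    return purpose in RETENTION and purpose in record.granted_purposes

def purge_by(purpose, collected_on):
    """Latest date by which data collected for this purpose must be deleted."""
    return collected_on + RETENTION[purpose]

rec = ConsentRecord("emp-1042")
rec.grant("sales_pitch_coaching")
print(may_process(rec, "sales_pitch_coaching"))      # True: explicitly opted in
print(may_process(rec, "contact_center_training"))   # False: never opted in
print(purge_by("sales_pitch_coaching", date(2023, 1, 10)))
```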

Conversational AI should replicate the trust and discretion integral to human-to-human conversations. When we share information with other people, there is an unspoken understanding that the listener will exercise discretion in passing it along. Likewise, as social AI systems interact with other human users (say, peers or customers) on behalf of the workforce, they must share only what the user is comfortable sharing. For instance, an AI database may hold a user’s full date of birth, but when another human user or AI bot requests it, the system exercises discretion and shares only the day and month, not the year – moving the conversation forward while keeping the data safe.
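A simple way to express that kind of discretion is a field-level disclosure policy, as in the hypothetical sketch below: the full record is stored, but the assistant only ever reveals the fields the owner has marked shareable.

```python
# Illustrative field-level disclosure policy; field names and values are made up.
PROFILE = {"name": "R. Iyer", "birth_day": 14, "birth_month": 3, "birth_year": 1990}

DISCLOSURE_POLICY = {
    "name": True,
    "birth_day": True,
    "birth_month": True,
    "birth_year": False,   # withhold the year even though the full date is stored
}

def answer(requested_fields):
    """Return only the fields the owner permits; silently omit the rest."""
    return {f: PROFILE[f] for f in requested_fields
            if f in PROFILE and DISCLOSURE_POLICY.get(f, False)}

print(answer(["birth_day", "birth_month", "birth_year"]))
# -> {'birth_day': 14, 'birth_month': 3}   (year withheld; the conversation still moves forward)
```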

Securing Social AI Models By Design

The European Union Agency for Cybersecurity (ENISA), the Federal Trade Commission (FTC) in the United States, and other organizations globally, have outlined cybersecurity frameworks to assess the exposure level of an AI model to cyberthreats. Organizations should test their social AI models against these security frameworks periodically to check for vulnerabilities to existing and emerging threats and deploy appropriate security controls.

When building the social AI training data, developers can deliberately include examples of harmful content – for instance, malicious inputs that attempt to access or edit a user’s data or the complete dataset. Training on such examples can help the algorithms distinguish abnormal behavior from normal user patterns and restrict further activity, up to and including denial of service where required.
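One way to operationalize this, sketched below under purely illustrative assumptions, is to train an anomaly detector on normal usage features and gate sessions whose behavior looks abnormal. The features, thresholds, and the choice of scikit-learn’s IsolationForest are examples, not a prescribed design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes: pip install scikit-learn

rng = np.random.default_rng(7)

# Hypothetical per-session features: [records_accessed, edit_attempts, bulk_export_mb]
normal_sessions = rng.normal(loc=[20, 1, 2], scale=[5, 1, 1], size=(500, 3))
malicious_session = np.array([900.0, 40.0, 450.0])  # mass access, many edits, large export

model = IsolationForest(contamination=0.01, random_state=7).fit(normal_sessions)

def gate(session_features):
    """Restrict activity (up to denial of service) when behavior looks abnormal."""
    label = model.predict(session_features.reshape(1, -1))[0]  # -1 means anomaly
    return "deny_and_alert" if label == -1 else "allow"

print(gate(normal_sessions[0]))    # typically "allow"
print(gate(malicious_session))     # typically "deny_and_alert"
```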

Deploying Transparent Social AI Models With Explainable Decision-Making

The workforce should be able to see how their data feeds into the social AI algorithm, how the algorithm makes decisions, and how it would benefit them. The algorithm should be open to inspection and corrections as required. For example, if AI recommends that someone modify their tone, it should also provide a decision tree explaining why something is appropriate or not based on organizational guidance on language nuances.
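As an illustration of the “decision tree” idea, the sketch below trains a tiny, interpretable tree on made-up message features and prints the exact rules behind each “consider softening your tone” nudge. The features, labels, and thresholds are assumptions for demonstration only.

```python
from sklearn.tree import DecisionTreeClassifier, export_text  # assumes scikit-learn

# Hypothetical per-message features: [negative_words, exclamations, all_caps_words]
X = [[0, 0, 0], [1, 0, 0], [3, 2, 1], [5, 3, 4], [0, 1, 0], [4, 1, 3]]
y = ["ok", "ok", "flag", "flag", "ok", "flag"]  # labels derived from org language guidance

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# These rules can be shown next to the nudge so the user sees why it fired
# and can challenge or correct the guidance if it seems wrong.
print(export_text(tree, feature_names=["negative_words", "exclamations", "all_caps_words"]))
```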

IBM provides factsheets for each AI model containing information about its creation and deployment throughout the life cycle. End users can review what data is captured and how it moves through the AI life cycle to understand the model’s decision-making process. Consumers trust food nutrition labels because they enable them to decide whether to purchase and consume an item; social AI factsheets may drive transparency and trust with the workforce in the same way.

Maintaining Social AI Models’ Robustness And Reliability Over Time

When social AI systems can learn from users and from each other, they can produce reliable results and, over time, build trust with users. Human intervention may still be required to ensure the model is and stays robust. Teams need to identify the right people to provide that human input: have they received training on company guidelines and policies, and are they equipped to take on this responsibility? Periodic refresher training on bias mitigation and ethics for those involved helps keep the solution robust over time.

Getting Started

In Deloitte’s survey of business leaders conducted in 2022, 76% said they plan to increase or significantly increase their organizational spending on AI in the next year. In addition to the established uses of AI in the workplace for making internal processes more efficient and generating data insights, leaders have the untapped opportunity to leverage AI to enhance the social side of work. Here are some actions to consider to get started.

Define Social AI Use Cases And Establish Value Metrics

Define what constitutes a social AI use or interaction so that you know how to set metrics and measure them. Identify the value to be captured by each social AI application (higher contact center resolution rates, higher employee engagement, improved acceptance of new processes, etc.), and measure value in terms of both breadth and depth. Breadth reflects how far-reaching the impact of the social AI solution is: Is it confined to select functions, or does it span the organization? Does the impact stay inside the organization, or does it extend to external stakeholders such as customers and potential recruits? Depth reflects whether the social AI application simply improves existing processes or establishes new, trustworthy processes, thereby reinventing work practices.

Make The Workforce Comfortable With Social AI

It is a huge shift for the workforce to trust a machine socially – people have to get comfortable holding a mirror up to their development areas. Leaders and managers have a responsibility to win the workforce over to the idea that the use of their data is mutually beneficial for them and the organization. That often starts with letting the workforce know how their data will be used, giving them a “trial period” to evaluate the application, and offering the ability to opt in or out at any time. Also, professionals tend to prefer taking “recommendations” from AI – not instructions. It’s therefore important that the social AI user interface makes clear the application is playing the role of a coach or buddy, not a gatekeeper or enforcer.

Identify How The Workforce Would Like To Engage With Social AI Considering Cross-Cultural Differences

Begin by identifying workforce needs for teaming, relationship-building, networking, etc., and assess where AI solutions could address current problems or uncover value-creation opportunities. There may be cross-cultural differences in social AI deployments for a globally dispersed workforce. For instance, in a survey of 1,015 respondents from 48 countries, respondents from East Asia were more likely to have a trusting attitude toward emotion AI than respondents from Western countries. Leaders may therefore need location-specific strategies for their global teams.

Build A Custom Solution Suited To Your Organization’s Social Nuances

When implementing a solution, it’s important to work closely with the AI solution provider as a partner. Since every organization differs in its processes, communication styles, and work dynamics, the solution should be customized to the needs of the organization and to the distinct needs of different functions within it (sales, customer support, human resources, learning and development, etc.). It’s also important to have the right training dataset: some of the training data should come from the organization’s actual data to keep the model close to reality and to ensure it keeps adapting to incoming data.

Pilot The Social AI Solution For Internal Conversations, Incorporate Feedback, Then Scale To External Applications

Pilot the solution with conversations and interactions within the organization (among the workforce) and build in feedback loops from the workforce before scaling the solution to external interactions (with potential recruits, customers, etc.). When scaling the solution, a transfer-learning approach may help. For example, a team developing a microaggression-detection algorithm would otherwise have to train the model on hours of audio inputs, which is time- and cost-intensive. Instead, the development team can take pretrained models (used elsewhere in the organization) or external open-source models and adapt them to their needs. When using an external open-source dataset, check that it is diverse enough to train the model well.
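A common, lightweight form of transfer learning, sketched below, reuses a pretrained sentence encoder as a frozen feature extractor and trains only a small classifier on the organization’s own labeled examples. The model name, example texts, and labels are illustrative assumptions.

```python
# Assumes: pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # general-purpose pretrained encoder

texts = [
    "Great point, let's build on that.",
    "You're surprisingly articulate for someone from your team.",
    "Can you walk us through the numbers again?",
    "Maybe let the senior folks handle the hard parts.",
]
labels = [0, 1, 0, 1]  # 1 = flagged in internal review, 0 = neutral (illustrative)

features = encoder.encode(texts)                    # frozen pretrained features
clf = LogisticRegression(max_iter=1000).fit(features, labels)

new_message = encoder.encode(["Let's hear from people who haven't spoken yet."])
print(clf.predict(new_message))  # small labeled sets go further when the encoder is pretrained
```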

Time Is Short – Seize The Opportunity

A confluence of cost and performance improvements in enabling technologies (such as cloud, network speeds, computer vision, and language recognition) makes now an opportune time for organizations to implement social AI. AI is a powerful tool in leaders’ arsenals: with it, they can drive efficiency by creating leaner, simpler organizations and enhance uniquely human capabilities for long-term organizational success. By driving greater trust and transparency in hybrid operations, AI can improve the quality of work, increase employee engagement, and reduce attrition. Organizations adopting a wait-and-watch approach may risk losing competitive advantage in the current race for talent.

Originally posted on deloitte.com by Monika Mahto, Don Miller, Brenna Sniderman, and Maya Bodan.

About Authors:
Monika Mahto | India Research Lead – Deloitte Center for Integrated Research
Monika Mahto is the India Research Lead for the Deloitte Center for Integrated Research. She has close to 15 years of research experience focused on advanced manufacturing, smart factory, the future of work, Industry 4.0, IoT, and other advanced technologies. Her research has been cited on prominent platforms including MIT Sloan Management Review, The Wall Street Journal, and Thrive Global. Monika collaborates with other thought leaders, industry executives, and academicians to deliver insights into the strategic and organizational implications of advanced technologies.

Don Miller | US Leader – Organizational Strategy, Design, and Transition
Don leads Deloitte Consulting LLP’s Organizational Design practice, empowering global clients to design organization structures based on their best human impulses and aspirations to future-proof their businesses. He has more than 15 years of experience bringing together diverse leaders to co-create new organization governance and decision-rights models that foster their teams’ accountability to organize, operate, and behave differently and stay resilient in a fast-paced world. While his organization design (OD) experience spans all sectors and functions, Don is also one of the leaders of Deloitte’s Human Capital Media, Entertainment, and Sports practice.

Brenna Sniderman | Center for Integrated Research Lead
Brenna leads the Center for Integrated Research, where she oversees cross-industry thought leadership for Deloitte. In this capacity, Brenna leads a team of researchers focused on global shifts in digital transformation, trust, climate, and the future of work; in other words, how organizations can operate and strategize in an age of digital, cultural, environmental, and workplace transformation. Her own research focuses on connected digital and physical technologies and their transformational impact. She works with other thought leaders to deliver insights into the strategic, organizational, leadership, and human implications of these technological changes.

Maya Bodan
Maya Bodan brings 17+ years of consulting experience advising global clients on the design and implementation of large-scale transformations. Her expertise is in managing the people aspects of transformations, including organization design and restructuring, M&A, offshoring, organizational culture change, employee engagement, talent management, and career development. She helps clients prepare for the future of work through flexible organization structures and talent practices.