AI washing is a messy business. Lenovo's COO explains how to avoid it

Updated June 08, 2024

Whether it's impressive on a case-by-case basis or not, artificial intelligence (AI) seems to be everywhere. However, some technologies advertised as AI are actually nothing of the sort: they are simply products that have been given the label to generate interest and attract attention.

This practice of making excessive claims about artificial intelligence is called AI washing. While it may seem harmless, AI washing can undermine the integrity of AI solutions, obscure what really works, and make it difficult to assess the success of this emerging technology.

I had the opportunity to speak with Lenovo's Linda Yao, Chief Operating Officer and Chief Strategy Officer of the Solutions and Services Division and Vice President of AI Solutions and Services, about the concept, what it means for business, what to watch out for, and what you can do to make your AI efforts transparent and credible.

AU: Please introduce yourself and tell us about your role at Lenovo.

Linda Yao: As part of Lenovo's fastest growing business, I am responsible for driving the AI Services practice to ensure we continue to innovate with our customers to solve their most exciting challenges.

Our AI Center of Excellence has core competencies in security, people, technology and processes that help clients implement the right AI strategies and solutions for their challenges. Our mission is to help organizations successfully move from AI concepts to real-world results by scaling AI quickly, responsibly and securely.

I also lead strategy and operations for the division, which provides ample opportunity to drink our own champagne and implement artificial intelligence that will transform our operational processes and customer experience.

AU: What is your understanding of AI washing, and why is it a growing concern in the technology industry?

LY: The promise of artificial intelligence has long captured our imagination, especially now that generative AI has become readily available at both the organizational and personal level. Because its potential seems limitless, there is a desire to position this newfound technology as the cure for everything.

Lenovo's data shows that nearly all companies are increasing their AI investments, yet three out of five companies are unsure of the return on that investment (ROI). It remains to be seen whether these AI implementations are delivering meaningful business results for their organizations.

Since the impact of AI is not yet fully defined and the technology itself is not fully understood by everyone, we leave room for interpretation and embellishment. Thus, the term AI washing draws a parallel with greenwashing, in which companies make speculative claims about the environmental benefits of their products.

While I don't believe this is done with intent, AI washing can lead to skepticism and distrust among consumers and stakeholders, reducing appreciation of and confidence in true AI advances coming to market.

AU: What are the long-term implications of AI washing for businesses and consumers?

LY: There is a real fear of missing out (FOMO) among enterprises. The risk of AI washing is that it can divert management attention and resources away from practical AI innovation. Rather than investing in developing meaningful AI capabilities, vendors may make misguided investments or superficial improvements that slow down the real progress they could make with the technology.

For businesses on the receiving end, AI washing complicates the decision-making process. It can be difficult for these companies to separate truly valuable AI solutions from the noise, which can lead to wasted investments in ineffective technologies. This can hinder digital transformation efforts, stifling innovation and jeopardizing business performance.

Both suppliers and business users can benefit from working with trusted AI partners that take proactive steps to use AI responsibly, as well as take an ethical approach when advising on the right AI choices.

The implications of AI washing for consumers will be much more serious: risks to data security and privacy from poorly designed AI technology, as well as poor user experiences or frustration with technology that does not meet quality expectations. Consumers will be looking for brands they trust, technologies and form factors that have served them well in the past, education and training opportunities to make AI more accessible, and transparency from their vendors regarding AI use.

AU: How can companies ensure that their claims about artificial intelligence are accurate and ethical?

LY: First of all, we need to recognize that implementing effective generative AI solutions in an organization is not easy, and scaling can be difficult. Compared to the maturity of an organization's people, processes, and security policies around AI, technology adoption may even be the least challenging part.

According to Lenovo's Global CIO Survey, 76% of CIOs said their organizations have no corporate policy on the operational or ethical use of AI. There are no silver bullets or quick fixes, so an important step is to recognize that this is an incremental process and to disclose that openly to customers. AI providers need to be transparent about what tools, data and methods are being used, and companies should consider developing their own AI policies tailored to their use cases.

Lenovo's proprietary processes are designed to ensure safe, ethical and responsible development and use of AI, and these best practices are at the core of how we work with customers on their AI adoption journey.

AU: How does AI washing undermine the true transformational potential of AI technology?

LY: AI washing can lead to a conflation of embellishment with reality. This increases the risk of AI fatigue, which in turn deepens the "trough of disillusionment" and discourages progress and investment in real AI innovation.

This is why I believe it is important to take a practical and pragmatic approach to implementing AI. We exacerbate the mistrust of AI and its negative consequences when AI is treated as an abstract concept without tangible results.

At Lenovo, we are committed to achieving meaningful business results through proven practical experience, and we directly link the adoption of technologies such as artificial intelligence to those results.

AU: What strategies can enterprises use to talk about artificial intelligence in a way that aligns with their real-world capabilities and accomplishments?

LY: Businesses should focus on fact-based messaging, transparency, training and real-world use cases to authentically communicate their AI capabilities. Lay out specific metrics, case studies, and real-world examples that demonstrate the impact of AI on your business and your employees. Be transparent about your development process, data sources, and decision-making.

At Lenovo, we believe that hands-on experience is critical, and we've scaled dozens of real-world use cases with tangible business results to show it. When AI is generating millions of dollars in revenue, there is no need to embellish it: proven methods and measurable impact speak for themselves.

AU: What role does transparency play in building trust in AI initiatives in companies?

LY: Transparency is the cornerstone of trust in AI initiatives. It demystifies the technology, aligns expectations with reality, and engages people as supporters rather than skeptics. This openness not only reassures stakeholders, but also fosters informed collaboration, encouraging innovation and confidence in the real possibilities of AI.

AU: Can you talk about specific measures Lenovo has taken to avoid AI washing in its communications and practices?

LY: At Lenovo, we demonstrate our transparency in practice by allowing stakeholders to see the real impact of AI firsthand - whether in the contact center, on the shop floor, or in sales. We are building trust in our AI solutions and methods through direct user experience.

Lenovo has been implementing artificial intelligence in our own IT environment for over a decade, and our culture of drinking our own champagne goes back several more decades, so this is nothing new to us!

AU: How does Lenovo address ethical issues related to the development and deployment of artificial intelligence solutions?

LY: Last year, Lenovo established the Responsible Artificial Intelligence Committee, composed of employees from diverse backgrounds in terms of gender, ethnicity, and disability. Together, they review internal products and external partnerships against the core principles of diversity and inclusion, privacy and security, accountability and trustworthiness, explainability, transparency, and environmental and social impact.

We apply real rigor to our own solutions, as well as to the work of our partners, where diversity, equity and inclusion (DEI) is a priority. We use specialized tools to assess data bias and identify subgroups that may be underrepresented or segmented in some way. One such tool is AI Fairness 360, an open-source toolkit that evaluates artificial intelligence algorithms and training data to mitigate bias.
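To illustrate the kind of check a toolkit like AI Fairness 360 supports, here is a minimal Python sketch. The dataframe, the "hired" label, the "gender" protected attribute, and the group definitions are hypothetical placeholders chosen for illustration, not Lenovo's actual data or workflow; the sketch simply measures disparate impact in a training set and applies the toolkit's standard reweighing step.

# Minimal sketch of a bias check with AI Fairness 360 (pip install aif360).
# The data, label, and protected attribute below are hypothetical examples.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy training data: 'hired' is the outcome label, 'gender' the protected attribute (1 = privileged group).
df = pd.DataFrame({
    "gender":     [1, 1, 1, 0, 0, 0, 1, 0],
    "experience": [5, 3, 7, 4, 6, 2, 1, 8],
    "hired":      [1, 1, 1, 0, 1, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Measure bias in the raw training data (a disparate impact of 1.0 means parity between groups).
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before mitigation:", metric.disparate_impact())

# Reweighing adjusts instance weights so favorable outcomes are balanced across groups.
reweighted = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    reweighted, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact after reweighing:", metric_after.disparate_impact())

In practice this kind of report would be run for each protected attribute and repeated whenever the training data or model is refreshed.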

AU: What are some common misconceptions about AI that contribute to AI washing, and how can they be overcome?

LY: Let's talk about three myths:

Myth: AI can solve any problem and provide a huge ROI right away.

Reality: AI is great at specific tasks, but no algorithm is a one-size-fits-all solution. Its benefits tend to accumulate over time with careful iteration. We're addressing this with our people-centric strategy to educate stakeholders about AI's strengths and weaknesses by sharing our own practical experiences with AI implementations and real-world use cases that continue to deliver value over time as experience accumulates.

Myth: AI works autonomously, without human control.

Reality: Most AI-based solutions, especially generative AI, require some level of governance for effective implementation and ethical use. And that's where our people-centric strategy comes in - we put people at the center as the experts who guide the use of AI and interpret its results.

Myth: More data means better AI.

Reality: The quality and relevance of your data is more important than its volume. Our AI services practice helps clients assess their data's readiness for AI and ensure that their data sets are capable of achieving the desired business outcomes. If not, our data services practice can help them do just that.

AU: What are the potential risks of not addressing AI washing in the technology industry? How can industry standards and regulations help mitigate the risks associated with AI washing?

LY: Industry standards play an important role in mitigating AI washing. Earlier this year, Lenovo signed the UNESCO Recommendation on the Ethics of Artificial Intelligence, committing to "prevent, mitigate or eliminate" the negative impacts of AI, in addition to taking specific remedial measures in AI solutions that have already been released to the market.

In May of this year, we joined the Government of Canada's Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative Artificial Intelligence Systems. These are important commitments that hold the industry accountable not only for the safe and ethical use of AI, but also for its explainability and transparency.

AU: What future trends do you foresee in AI ethics and governance?

LY: The ethics and governance of AI will continue to evolve and tighten, and companies at the forefront of AI adoption will need to take decisive action to set the rest of the industry on the path to ethical and responsible use of AI. In particular, let's look at three areas.

  1. Stricter regulation and accountability: Companies will have to comply with increasingly stringent rules around data privacy, bias and ethical use. They will establish clearer accountability - through chief AI officers, chief responsibility officers, or otherwise - and will develop corporate policies to ensure that AI is handled responsibly. They will likely turn to trusted AI advisors to help define, evaluate and implement these policies.

  2. Ethical principles and transparency: The industry will move towards standardized ethical principles. Organizations will demand transparency by providing clear documentation of processes for training, testing and validating AI models. Independent audits and certifications will become increasingly common.

  3. Fair and ethical AI by design: Companies will focus on mitigating bias, implementing fairness practices and regular auditing when developing AI. Ethical considerations will be integrated from the beginning, ensuring that issues are addressed throughout the AI lifecycle. Early adopters, such as Lenovo, will drive these efforts by guiding companies to adopt best practices and fostering a robust, ethical AI landscape.
