All Things AI
7 min read

Internal AI Transparency Can and Should Be Better

As AI integration expands, businesses need to be more transparent about how they use it in order to foster trust and adaptability among employees.
Published on November 21, 2023 by Mohammed Mukhar, PocketAI Founder & CEO

UKG, a leading provider of HR and workforce solutions, recently surveyed 4,200 professionals across a range of roles in 10 countries to understand how they perceive and use AI, both in their personal lives and at work. The findings are intriguing.

According to the survey, a staggering 78% of C-suite leaders confirmed that AI is currently being utilized within their organizations. Furthermore, 71% of these leaders believe that AI has the potential to provide a competitive advantage to their businesses or teams. Consequently, investing in more advanced applications of this technology has become a medium-to-high priority for their respective organizations.

However, when it comes to the employees who power these businesses, the survey revealed a concerning statistic. A significant 54% of respondents admitted to having "no idea" about how their company employs AI. This lack of transparency has emerged as a genuine problem, as highlighted by Dan Schawbel, the managing partner at Workplace Intelligence, who collaborated with UKG for this study.

Undoubtedly, this lack of awareness poses a real challenge that needs to be addressed.

The study makes clear just how significant this lack of transparency is. Roughly 77% of employees say they would be more enthusiastic about artificial intelligence (AI) if they better understood how their company uses it, and they would be similarly excited if leaders showed them how AI could improve their workflows. Employees who are open to embracing AI also believe doing so would increase their job satisfaction, make them more willing to go above and beyond in their roles, and deepen their trust in leadership.

This raises the question: why is transparency so elusive? Why do leaders and their companies struggle to be transparent about AI usage, despite the clear interest from both sides in adopting this technology?

According to Schawbel, organizations must prioritize transparency regarding AI implementation in the workplace if they want to gain a competitive edge and establish trust with their employees. By being upfront about how AI is being used, companies can earn and maintain the trust of their workforce.

When AI Transparency in Business Matters

How businesses apply generative AI tools varies widely, and the importance of AI transparency depends on the use case. So it is worth clarifying the circumstances in which I believe disclosing a company's use of AI makes the most sense.

Is AI transparency necessary when...

...A single employee or a small group of employees uses AI to assist with day-to-day tasks? Perhaps. It depends on whether the AI-assisted work is intended for public consumption, or whether the technology is simply being used as a personal assistant. When AI contributes to content that will be presented to the public, or to work that carries real consequences for the public or the organization, others within the company should know that AI is being used. That awareness lets leadership establish guidelines for the safe and reliable use of AI in different contexts. If AI's influence on the work is minimal, though, transparency may not be necessary.

...AI has been integrated into entire teams or departments as a core part of their infrastructure, a tool with the potential to significantly influence the success of the team and the company as a whole? Undeniably, yes.

...AI is utilized by key stakeholders in pivotal, mission-critical operations within the organization, often seamlessly integrated through automation and similar means? Unquestionably, yes.

In essence, if AI's presence in a company, whether it is utilized by an individual or an entire team, carries substantial weight and has the potential to impact important aspects such as the company's reputation, profitability, or customer base, it is imperative that its usage be disclosed.

More Transparency Means More Trust

Increased transparency is crucial for building trust as a leader. Keeping the use of AI within your company under wraps, including the how, when, why, and where it is employed, can have negative consequences and raise suspicions.

Consider the case of CNET, a well-established company with a 30-year history. Without informing their staff, they quietly introduced approximately 70 articles generated by AI on their widely recognized website in November 2022, coinciding with the release of ChatGPT. This revelation caught employees off guard and left many feeling dismayed. Not only was it discovered that AI had been used to produce these articles without their knowledge, but more than half of them contained errors and instances of plagiarism. The tech world was up in arms over this discovery.

In response to the backlash, Connie Guglielmo, CNET's Editor in Chief and Senior VP at the time, issued an apology for the subpar implementation of AI and pledged to develop improved AI systems. As a result, the company established an AI policy. Shortly thereafter, a labor union representing CNET employees initiated negotiations based on concerns about "a lack of transparency and accountability from management" regarding key issues such as AI.

Could all of this have been prevented if CNET's leadership had been more open and honest about their use of AI from the start? It's hard to say for sure. What is clear is that greater transparency could have mitigated some of the internal and external distrust that followed.

By embracing transparency and addressing the concerns around AI use, leaders can foster trust among their employees and stakeholders alike.

In general, taking a transparent and well-considered approach to AI within a company can yield numerous benefits.

For example, it can foster a stronger sense of trust and empowerment among team members, as they are able to openly discuss their thoughts and concerns regarding AI best practices.

Additionally, it provides an opportunity for leadership to educate employees on how to adapt their workflows and enhance their skills in conjunction with AI technology.

A transparent approach also ensures that AI policies align with the company's values on various topics, such as content creation, data security, and avoiding biases. This not only helps to build internal cohesion but also promotes external transparency, as there is still widespread skepticism surrounding AI. With more people than ever questioning whether AI played a role in certain outcomes, having an external statement on AI becomes even more crucial, especially for businesses involved in creating public-facing content.

The details of a particular company's AI position, whether it is framed as a policy, a set of principles, or a code of conduct, can vary greatly depending on how AI is intended (or not intended) to be used, and can even differ from one team to another within the same organization. That is why it is crucial to gather input from employees or an AI ethics committee as the use of AI becomes more prevalent.

Here’s PocketAI's AI policy as an example of what one of these policies looks like:

“We use AI to assist in some content development at our company. To ensure transparency, accountability, quality and privacy, we adhere to internal AI usage standards. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices. One of these standards is that AI should be used to assist in content creation, not fully automate it. We ensure that every piece of content we develop is shaped and reviewed by people who have an understanding of our audience and AI’s limitations.”

It is striking that in the UKG survey, nearly as many executives reported that AI is used in their business as employees said they would be excited to use it if they knew how. Yet roughly half of those same professionals have no idea whether, or how, AI is being used within their company, or how they might contribute to it.

Leaders must make a greater effort to bridge this gap. Department heads should coordinate on who is using AI, how it is being used, and why, because the actions of one team or individual can significantly affect everyone else. Without that alignment, leaders risk alienating both their staff and their audience, and may find themselves issuing apology statements of their own.
