Published on 06/13/2024
Last updated on 06/18/2024

AI ethics: Balancing innovation with your enterprise values

Notable innovations throughout human history, from the printing press to the internet, have raised societal ethical debates. That’s because while powerful tools have great potential to benefit our lives, they can also cause harm when used irresponsibly. This duality extends to artificial intelligence (AI), especially given recent advancements in generative AI (GenAI) tools like large language models (LLMs).

Some experts—like Pete Rai, principal engineer in emerging technologies at Outshift by Cisco—believe that the ethical concerns raised by AI are distinct from those of prior innovations.

“People recognize we’re on the threshold of creating algorithms that can act and behave like us. AI will help make ever more consequential decisions about people’s lives, like what medical treatments we have, what our insurance rates will be, or what sentences we’ll receive for certain crimes,” Rai says.

Given these stakes, Rai argues that this higher level of consequence demands enhanced oversight and control as more organizations undertake AI transformation.

For enterprises, greater oversight and control means having a clear stance on AI ethics. Establishing standards and best practices that embed ethical values into your organization’s culture and development teams is crucial for building trust in AI as the technology accelerates. However, it’s a process that’s easier said than done. Leading players are still figuring out how to build ethical AI frameworks while addressing market demand for innovative new solutions. It’s a complicated balance that companies are navigating cautiously. 

Balancing AI ethics with innovation starts with understanding the technology’s limitations

To ensure responsible AI aligns with an enterprise’s innovation strategy, it's imperative to embed ethical considerations into the core corporate culture and development processes. Organizations must understand how AI works to address its inherent ethical limitations.

AI bias amplification and its impact on business values

AI models are trained on vast datasets created by humans and selected by developers. As such, training data inevitably reflects existing societal biases, which then surface in generated outputs.

While models don’t create new biases, Rai says that they can amplify bias beyond the scope of what was present in the initial training data.

“The problem with AI is that it’s so powerful that it can magnify, inflate, and scale up existing biases. While these biases may well have previously affected individual decisions on a human scale, they might now play out on a huge, machine scale,” he says.

This can impact the reliability of model outputs and result in outcomes that conflict with the values that guide your organization’s innovation agenda.
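To make amplification concrete, here is a minimal sketch using entirely hypothetical data: compare the gap in positive outcomes between two groups in the training labels against the same gap in the model’s predictions. Every variable and number below is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: one feature correlated with a sensitive group
# attribute, so the training labels already carry a group disparity.
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                    # sensitive attribute (0 or 1)
score = rng.normal(loc=group * 0.5, scale=1.0)   # feature skewed by group
labels = (score + rng.normal(0, 1, n) > 0.75).astype(int)

model = LogisticRegression().fit(score.reshape(-1, 1), labels)
preds = model.predict(score.reshape(-1, 1))

def rate_gap(outcomes, group):
    """Difference in positive-outcome rate between the two groups."""
    return outcomes[group == 1].mean() - outcomes[group == 0].mean()

print(f"gap in training labels:   {rate_gap(labels, group):.3f}")
print(f"gap in model predictions: {rate_gap(preds, group):.3f}")
# If the prediction gap exceeds the label gap, the model is
# amplifying, not merely reproducing, the disparity.
```

Because a classifier turns a noisy, group-correlated signal into hard decisions, the prediction gap typically exceeds the label gap—exactly the scaling-up effect Rai describes.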

The conflict between evolving ethics and static models

Models are limited to the knowledge available at the time of training. As a result, outputs will become outdated as societal, cultural, and legal standards surrounding ethics, bias, and fairness evolve. It is a sobering mental exercise to imagine a world where the ethical standards of earlier times are encoded into the AI models used for everyday decision-making. Furthermore, training, fine-tuning, and monitoring model drift are resource-intensive, so keeping models aligned with current ethical frameworks is a constant challenge in a fast-paced innovation environment.
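One widely used way to watch for this kind of drift is to compare the distribution of a model’s inputs or scores in production against a reference snapshot. Below is a minimal sketch of the population stability index (PSI), a common drift statistic; the score distributions are synthetic, and the 0.2 alert level is a rule of thumb rather than a standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference score distribution and a current one.
    Values above roughly 0.2 are often treated as a signal that the
    model or its inputs have drifted and warrant review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) on empty bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5_000)  # scores captured at training time
today = rng.normal(0.3, 1.1, 5_000)     # scores observed in production

print(f"PSI: {population_stability_index(baseline, today):.3f}")
```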

Data privacy concerns and the need for model transparency

Respecting privacy and the personal data rights of individuals and AI users is an important part of responsible AI transformation. This area of AI ethics is governed by laws like the General Data Protection Regulation (GDPR), which requires companies to explain how automated systems such as LLMs make decisions and how they use an individual’s data. Model transparency is also important for meeting the standards set by the European Union AI Act, which obliges companies to provide comprehensive summaries of training data, evaluate models, and report incidents, depending on a model’s perceived risk level.

However, internal AI model processes and algorithms used to support innovation are notoriously opaque, so it’s often difficult to gauge whether they perform within ethical AI frameworks or laws. Model and algorithm complexity can also make it challenging to explain outputs in easily digestible ways, further impacting AI system transparency.
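Model-agnostic explanation techniques can help close part of this gap. As one illustration (not a compliance solution), the sketch below uses scikit-learn’s permutation importance to produce a simple, human-readable ranking of which inputs drive a hypothetical tabular decision model; the feature names and data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical tabular decision model (e.g., a credit pre-screen).
rng = np.random.default_rng(2)
n = 2_000
X = rng.normal(size=(n, 3))
feature_names = ["income", "debt_ratio", "account_age"]
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each input is
# shuffled, giving a model-agnostic view of which inputs drive decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:12s} importance: {score:.3f}")
```

A ranking like this doesn’t fully open the black box, but it gives teams a starting point for the plain-language explanations that regulators and users increasingly expect.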

Building an ethical AI framework: Key considerations for balancing innovation with responsibility

Innovating with AI responsibly requires a nuanced approach tailored to an organization’s needs. For example, companies that rely on end-user data to develop AI will have different values and ethical considerations than those primarily using proprietary business data.

Organizations also have access to different resources: a smaller company with limited time, finances, or expertise may be unable to address AI ethics issues at the same depth as a larger enterprise. Regardless of your specific requirements, some key considerations should be at the core of a robust ethical AI framework.

1. Translating business values into AI development

When innovating with AI, organizations must think beyond financial gains. Instead, define what you want to achieve in a broader sense, and what outcomes you consider ethically acceptable. Consider the impact your AI innovations can have on customers, users, or society.

Rai considers this a foundational practice for any enterprise developing or using AI.  

“It can’t just be as simple as the maxim of building value for shareholders. Businesses have to decide what value they want to bring with AI—and it should reflect the company’s values and those of their staff,” he says.

These values can then inform AI development, placing parameters around functionality. For example, your enterprise may avoid building features that leverage user data if this conflicts with your values—even when they’re features that some customers want. Organizations may need to accept short-term inefficiencies or financial impacts to achieve the long-term competitive advantages that come with upholding your values.

2. Leading AI ethics initiatives to stay competitive

Creating and following through on an ethical AI framework is an effective way to differentiate your business and win stakeholder trust while advancing innovation. However, this doesn’t mean you have to produce AI ethics solutions in a vacuum. Rai argues that balancing innovation with responsible AI should be tackled collaboratively. “It’s a complex challenge. Solving it will take a dialogue amongst business, government, and society,” he says.

Organizations should consider becoming active in AI standards groups and leading forums or conferences focused on AI ethics. Connect with other industry players and share how you address AI ethics issues to foster responsible AI more broadly across the industry.

3. Establishing an AI-ready workforce

When it comes to workforce engagement, organizations should plan development roadmaps that align in size and speed with employee willingness to embrace AI transformation. Respecting your teams’ attitudes toward AI transformation is crucial for gaining buy-in and integrating AI ethics into company culture.

Invite employees to voice their values related to AI and use this input to inform your development policies. Companies should also be transparent about their approach and offer staff training and support around AI processes and ethics.

For example, you may need to educate staff on how they can use AI outputs. Employees should understand the impact of the AI tools they’re using to innovate and how data is handled.

4. Approaching issues of AI bias and workplace diversity

Some organizations may impose guidelines around team diversity to minimize bias in AI systems. However, Rai believes that this should be navigated carefully. While enterprises can and should work toward increasing diversity, it’s almost impossible to ensure that every team is sufficiently diverse, even with unlimited resources.

For one, AI development teams will almost always lack variation because the field has traditionally been male-dominated, with only 26% of global data and AI positions occupied by women. Diversity and bias concepts, such as gender or religious bias, also vary from location to location.

“We need to understand that diversity is complicated. What’s considered a diverse team in California doesn’t necessarily match the diversity equation you might use in India,” he says.

Rai advises focusing on educating employees on diversity and representation issues in AI and developing techniques to identify bias more accurately. “The right way to do it is to sensitize everybody to the issue and have robust mitigation processes to understand and measure bias,” he says.
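Measuring bias can start simply. As an illustrative sketch, the code below computes the disparate impact ratio: the positive-outcome rate of the least-favored group divided by that of the most-favored group. The 0.8 threshold echoes the “four-fifths rule” from US employment guidance; the groups and outcomes here are hypothetical.

```python
import numpy as np

def disparate_impact_ratio(outcomes, groups):
    """Ratio of positive-outcome rates between the least- and
    most-favored groups; values below 0.8 are a common red flag
    (the 'four-fifths rule')."""
    rates = {g: outcomes[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values())

rng = np.random.default_rng(3)
groups = rng.choice(["A", "B"], size=1_000)
# Hypothetical model decisions with a built-in group disparity
outcomes = (rng.random(1_000) < np.where(groups == "A", 0.60, 0.42))

ratio = disparate_impact_ratio(outcomes.astype(float), groups)
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 -> investigate
```

A single ratio is only a screening tool; the point, per Rai’s advice, is to pair simple, repeatable measurements like this with processes for understanding and mitigating what they surface.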

5. Building ethics-minded leadership teams

Enterprise leaders will inevitably face business decisions that tend to prioritize shareholder value, sometimes at the expense of responsible AI practices. For example, setbacks in financial performance may draw the C-suite to more cost-effective cloud service providers, even if the organization hasn’t yet performed ethical due diligence on those vendors. To navigate these scenarios, it’s important to establish processes that will help leadership teams align their actions to the organization’s guiding ethical AI frameworks.

For some businesses, these processes may be directed by an ethics strategy lead, such as a Chief AI Ethics Officer. However, AI transformation is complex, uncharted territory, and its challenges don’t necessarily fit neatly into a single role.

In Rai’s experience, striking a balance between AI ethics and innovation works better when all board and executive team members collectively bring several areas of expertise to navigating transformation. A collaborative approach also offers a more balanced perspective than relying on one individual, an important quality for a function underpinned by bias awareness.

Prioritizing company values for long-term AI innovation success

AI’s rapid acceleration makes it difficult to anticipate the technology’s capabilities in the coming months, let alone years. What’s certain, however, is that AI innovation will present new ethical challenges in several business areas, from job disruption to copyright.

In this uncertain landscape, organizations must remember that building trust in AI isn’t a race to the end, but a continuous journey that requires flexibility and clear values. A starting point is to define your organization’s broader goals and acceptable outcomes for AI rather than focusing on individual features or financial return on investment (ROI).

According to Rai, the organizations that tend to succeed long-term are those that know their values.

“Define how you want to treat people in our society and what acceptable outcomes you want to achieve first,” he says. “Then, test the technologies against those values.” This approach is key to building innovative products and services that truly change people’s lives for the better.

Learn more about how Outshift is helping shape the conversation around trustworthy and responsible AI
