Why it’s critical to embed AI ethics and principles into your organization

As technology advances, business leaders understand the need to adopt business solutions that leverage artificial intelligence (AI). However, there is understandable hesitation around the ethics of this technology: is AI inherently biased, racist or sexist? And what impact could that have on my business?

It is important to remember that AI systems are not inherently anything. They are human-built tools that can maintain or amplify the biases of the people who develop them or of those who create the data used to train and evaluate them. In other words, even a "perfect" AI model is nothing more than a reflection of the people and data behind it. We, as humans, choose the data that AI uses, and we do so with our own inherent biases.

Ultimately, we are all subject to different sociological and cognitive biases. If we are aware of these biases and continuously take action to combat them, we will continue to make progress in minimizing the damage these biases can do when built into our systems.

Investigating Ethical AI Today

Organizational emphasis on AI ethics has two sides. The first is AI governance, which deals with what is and is not allowed in the field of AI, from development through adoption to use.

The second concerns AI ethics research aimed at understanding the characteristics AI models inherit from certain development practices, and their potential risks. We think the lessons from this field will become increasingly nuanced. For example, current research is largely focused on foundation models; in the coming years it will shift toward the smaller downstream tasks that can mitigate or defuse the drawbacks of these models.

Universal adoption of AI in all aspects of life requires us to think about its power, purpose and impact. This is done by focusing on AI ethics and requiring AI to be used in an ethical manner. Of course, the first step to achieve this is to agree on what it means to use and develop AI ethically.

One step toward optimizing products for fair and inclusive outcomes is having fair and inclusive training, development and testing datasets. The challenge is that high-quality data selection is a nontrivial task: obtaining these kinds of datasets can be difficult, especially for smaller startups, because much readily available training data contains bias. It also helps to add debiasing techniques and automated model evaluation to the data augmentation process, and to start from the very beginning with thorough data documentation practices, so that developers have a clear picture of what they need to supplement the datasets they decide to use.
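To make that concrete, here is a rough sketch (in Python, with hypothetical field names, records and thresholds rather than any specific company's process) of what a basic automated check on dataset composition could look like:

```python
# Illustrative only: a minimal automated check on dataset composition.
# The "dialect" field, the threshold and the sample records are
# hypothetical, not taken from the article.
from collections import Counter

def audit_group_balance(records, group_key, min_share=0.10):
    """Report each group's share of the dataset and flag groups that
    fall below min_share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: {
            "count": count,
            "share": round(count / total, 3),
            "under_represented": count / total < min_share,
        }
        for group, count in counts.items()
    }

# Example usage with a tiny made-up dataset.
data = [
    {"text": "sample a", "dialect": "en-US"},
    {"text": "sample b", "dialect": "en-US"},
    {"text": "sample c", "dialect": "en-GB"},
    {"text": "sample d", "dialect": "en-IN"},
]
print(audit_group_balance(data, "dialect", min_share=0.3))
```

A check like this does not remove bias by itself, but it documents what is (and is not) in the data before any model is trained.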

The cost of unbiased AI

Red flags exist everywhere, and technology leaders should be open to seeing them. Since bias is unavoidable to some extent, it's important to consider a system's main use case: decision-making systems that can affect human lives (e.g., automated resume screening or predictive policing) have the potential to cause untold harm. In other words, the central purpose of an AI model can be a red flag in itself. Technology organizations should openly examine the purpose of an AI model to determine whether that purpose is ethical.

Furthermore, it is increasingly common to rely on large and relatively uncurated datasets (such as Common Crawl and ImageNet) to train core systems that are then "tuned" to specific use cases. These large scraped datasets have repeatedly been shown to contain overtly discriminatory language and/or disproportionate skews in the distribution of their categories. That's why it's important for AI developers to thoroughly research the data they'll be using from the very start of a new AI project.

Ultimately cheaper

As mentioned, the resources available to startups and some tech companies can play a role in how much effort and cost is invested in these systems. Fully developed, ethical AI models can certainly seem more expensive at the start of the design process. For example, creating, finding and buying high-quality datasets can be costly in terms of time and money. Likewise, filling gaps in datasets takes time and resources, as does finding and hiring diverse candidates.

However, in the long run, due diligence will prove cheaper. Your models will perform better, you will not have to deal with large-scale ethical failures, and you will not inflict lasting harm on members of society. You'll also spend fewer resources tearing down and redesigning large-scale models that have become too biased and impractical to repair, resources that could be better spent on innovative technologies used for good.

If we are better, AI is better

Inclusive AI requires technology leaders to proactively try to limit the human biases fed into their models. This requires an emphasis on inclusivity, not just in AI, but in technology in general. Organizations need to think clearly about AI ethics and promote strategies to reduce bias, such as periodic reviews of what data is being used and why.

Companies must also choose to fully live those values. Training and hiring for diversity, equity and inclusion (DE&I) is a good start and should be meaningfully supported by the workplace culture. From there, companies should actively encourage and normalize inclusive dialogue within the AI discussion, as well as in the larger work environment, making us better as employees and, in turn, making AI technologies better.

On the development side, there are three key areas to focus on so AI can better suit end users regardless of differentiators: understanding, taking action, and transparency.

In terms of understanding, systematic checks for bias are needed to ensure that the model does its best to deliver non-discriminatory judgments. A major source of bias in AI models is the data developers start with: if the training data is biased, that bias will be baked into the model. We place a lot of emphasis on data-centric AI, which means doing our best from the very beginning of model design, namely selecting appropriate training data, to create optimal datasets for model development. However, not all datasets are created equal, and real-world data can be skewed in many ways; sometimes we have to work with data that may be biased.

Representative data

One technique for building better understanding is disaggregated evaluation: measuring performance on subsets of data that represent specific groups of users. Models are good at working their way through complex data; even if variables like race or sexual orientation aren't explicitly included, a model can surprise you by inferring them and still discriminating against those groups. Checking specifically for this makes it clear what the model actually does (and what it doesn't do).
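As a minimal sketch of the idea, assuming group labels are available for the evaluation set (the data, labels and helper function below are invented for illustration), disaggregated evaluation can be as simple as computing the same metric per group:

```python
# A minimal sketch of disaggregated evaluation: the same metric computed
# per user group instead of only in aggregate. Data and group labels are
# made up for illustration.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy broken out by group."""
    per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for truth, pred, group in zip(y_true, y_pred, groups):
        per_group[group][1] += 1
        if truth == pred:
            per_group[group][0] += 1
    overall = sum(c for c, _ in per_group.values()) / len(y_true)
    return {
        "overall": round(overall, 3),
        "by_group": {g: round(c / t, 3) for g, (c, t) in per_group.items()},
    }

# Example: a decent aggregate score can hide a gap between groups.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disaggregated_accuracy(y_true, y_pred, groups))
# {'overall': 0.625, 'by_group': {'A': 0.75, 'B': 0.5}}
```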

When taking action after gaining that understanding, we use different debiasing techniques. These include rebalancing datasets so that minority groups are better represented, data augmentation, and encoding sensitive features in ways that reduce their impact. In other words, we run tests to find out where our model might fall short in its training data, and then we augment the datasets in those areas so that we continuously improve at debiasing.
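One simple form such rebalancing might take, sketched here for illustration rather than as any particular team's pipeline (the field names are hypothetical), is to oversample under-represented groups until each group is equally represented:

```python
# Illustrative sketch of one simple rebalancing step: oversampling
# under-represented groups (with replacement) until each group matches
# the largest one. Field names are hypothetical.
import random
from collections import defaultdict

def oversample_to_balance(records, group_key, seed=0):
    """Return a new list in which every group has as many examples as
    the largest group."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for record in records:
        buckets[record[group_key]].append(record)
    target = max(len(bucket) for bucket in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(bucket)
        balanced.extend(rng.choices(bucket, k=target - len(bucket)))
    return balanced

# Example: group "B" is resampled until it matches group "A" in size.
records = [
    {"text": "a1", "group": "A"}, {"text": "a2", "group": "A"},
    {"text": "a3", "group": "A"}, {"text": "b1", "group": "B"},
]
balanced = oversample_to_balance(records, "group")
print(sum(r["group"] == "A" for r in balanced),
      sum(r["group"] == "B" for r in balanced))  # 3 3
```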

Finally, it is important to be transparent in reporting data and model performance. Simply put, if you notice that your model is discriminating against someone, say so and own it.
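One lightweight way to do that, shown here purely as an illustration with invented names and numbers, is to publish a short, machine-readable summary of the data and per-group performance alongside the model, in the spirit of a model card:

```python
# Purely illustrative: a short, machine-readable report published
# alongside a model, in the spirit of a model card. Every name and
# number below is invented for the example, not a real result.
import json

model_report = {
    "model": "example-sentiment-classifier (hypothetical)",
    "training_data": {
        "source": "example internal conversation transcripts",
        "known_gaps": ["few non-US English dialects", "limited slang coverage"],
    },
    "evaluation": {
        "overall_accuracy": 0.91,  # illustrative figure
        "accuracy_by_group": {"en-US": 0.93, "en-GB": 0.90, "en-IN": 0.84},
    },
    "known_limitations": [
        "Noticeably lower accuracy on en-IN conversations; mitigation planned.",
    ],
}

print(json.dumps(model_report, indent=2))
```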

The future of ethical AI applications

Today, companies are bridging the gap in AI adoption. In the business-to-business community, we see many organizations using AI to solve frequent, repetitive problems and to generate real-time insights from existing datasets. We encounter these opportunities in many areas, from Netflix recommendations in our personal lives to analyzing the sentiment of hundreds of customer conversations in the business world.

Until there are top-down regulations on the ethical development and use of AI, companies have to hold themselves accountable. Our AI ethics principles at Dialpad are one way of doing so for the AI technology used in our products and services. Many other tech companies have joined us in promoting AI ethics by publishing similar ethical principles, and we applaud those efforts.

However, without outside accountability (whether through government regulations or industry standards and certifications), there will always be actors who willfully or negligently develop and use AI that is not focused on inclusiveness.

No future without (ethical) AI

The dangers are real and practical. As we've said, AI permeates everything we do professionally and personally. If you don't proactively prioritize inclusiveness (alongside other ethical principles), you inherently allow your model to be subject to overt or hidden bias. That means the users of those AI models, often unknowingly, consume the biased results, which has practical implications for everyday life.

There is probably no future without AI, as it becomes ever more pervasive in our society. It has the potential to significantly improve our productivity, our personal choices, our habits and even our happiness. The ethical development and use of AI should not be a controversial topic: it is a social responsibility that we must take seriously, and we hope others do too.

The development and use of AI by my organization is a small part of AI in our world. We have adhered to our ethical principles and we hope other technology companies will too.

Dan O’Connell is CSO of Dialpad.

