
How to manage risk as AI spreads throughout your organization

With AI spreading across the enterprise, organizations are having a hard time weighing the benefits against the risks. AI is already baked into a range of tools, from IT infrastructure management to DevOps software to CRM suites, but most of those tools were adopted without an AI risk mitigation strategy.

Of course, it’s important to remember that the list of potential AI benefits is just as long as the list of risks, which is why so many organizations skimp on risk assessments in the first place.

Many organizations have already made serious breakthroughs that would not have been possible without AI. For example, AI is being deployed throughout healthcare for everything from robotic-assisted surgery to reducing drug dosing errors to streamlining administrative workflows. GE Aviation relies on AI to build digital models that better predict when parts will fail, and there are, of course, countless ways AI is being used to save money, such as taking drive-thru restaurant orders with conversational AI.

That’s the good side of AI.

Now let’s look at the bad and ugly.

The Bad and Ugly of AI: Bias, Security Issues, and Robot Wars

AI risks are as varied as the many use cases its proponents have hyped, but three areas have proven particularly worrisome: bias, security, and war. Let’s look at each of these issues individually.

Bias

While HR departments initially thought AI could be used to eliminate bias in hiring, the opposite has happened. Models with implicit biases baked into the algorithm end up actively discriminating against women and minorities.

Amazon, for example, had to scrap its AI-powered automated resume screener because it filtered out female candidates. Likewise, when Microsoft used tweets to train a chatbot to interact with Twitter users, it created a monster. As a CBS News headline put it: “Microsoft shuts down AI chatbot after it turned into a Nazi.”

These problems may seem inevitable in hindsight, but if industry leaders like Microsoft and Google can make these mistakes, so can your company. At Amazon, the AI was trained on resumes that came mostly from male applicants. With Microsoft’s chatbot, the only positive thing you can say about that experiment is that they didn’t use 8chan to train the AI. If you spend five minutes swimming through the toxicity of Twitter, you’ll understand what a terrible idea it was to use that dataset to train anything.

Security issues

Uber, Toyota, GM, Google and Tesla, among others, have raced to field fleets of self-driving vehicles. Unfortunately, the more researchers experiment with self-driving cars, the further that fully autonomous vision recedes into the distance.

In 2016, the first death caused by a self-driving car happened in Florida. According to the National Highway Traffic Safety Administration, a Tesla in Autopilot mode failed to stop for a tractor-trailer turning left across an intersection. The Tesla crashed into the big rig, fatally injuring the driver.

This is just one of many mistakes made by autonomous vehicles. Uber’s self-driving cars didn’t know that pedestrians can jaywalk. A Google-powered Lexus sideswiped a Silicon Valley city bus, and in April a partially autonomous TuSimple semi truck swerved into a concrete center divider on I-10 near Tucson, AZ, because the driver had not properly restarted the autonomous driving system, causing the truck to follow outdated commands.

Federal regulators report that self-driving cars were involved in nearly 400 accidents on US roads in less than a year (from July 1, 2021 to May 15, 2022). In those 392 accidents, six people were killed and five were seriously injured.

Fog of war

If self-driving vehicle crashes aren’t enough of a safety hazard, consider autonomous warfare.

Autonomous drones powered by AI are now making life-and-death decisions on the battlefield, and the risks associated with potential mistakes are complex and controversial. According to a United Nations report, in 2020 an autonomous Turkish-built quadcopter decided to attack retreating Libyan fighters without human intervention.

Militaries around the world are considering a range of applications for autonomous vehicles, from combat to sea transport to flying in formation with piloted fighter jets. Even when not actively hunting the enemy, autonomous military vehicles can still make deadly mistakes similar to those of self-driving cars.

7 Steps to Mitigate AI Risks Across the Enterprise

For the typical business, your risks won’t be as terrifying as killer drones, but even a simple mistake that causes a product failure or opens you up to lawsuits can put you in the red.

Consider these 7 steps to better mitigate risk as AI spreads throughout your organization:

Start with early adopters

First, look at the places where AI has already gained a foothold. Find out what works and build on that foundation. From this, you can develop a basic rollout template that different departments can follow. However, keep in mind that whatever AI adoption plans and rollout templates you develop must have buy-in across the organization to be effective.

Find the right beachhead

Most organizations will want to start small with their AI strategy and test the plan in one or two departments. The logical place to start is where risk is already a top priority, such as Governance, Risk and Compliance (GRC) and Regulatory Change Management (RCM).

GRC is essential to understanding the many threats to your business in a hypercompetitive marketplace, and RCM is essential to keeping your organization on the right side of the many laws you must follow across multiple jurisdictions. Both practices also involve manual, labor-intensive and ever-changing processes.

Within GRC, AI can handle tricky tasks such as starting the process of defining fuzzy concepts like “risk culture,” or it can be used to collect publicly available competitor data that helps guide new product development in a way that does not infringe copyright laws.

In RCM, offloading tasks like tracking regulatory changes and monitoring the daily onslaught of enforcement actions to AI can free up to a third of your compliance experts’ workdays for more valuable tasks.

Map processes with experts

AI can only follow processes that you can map in detail. If AI will impact a particular role, make sure those stakeholders are involved in the planning stages. Too often, developers plow ahead without enough input from the end users who will adopt or reject these tools.

Focus on workflows and processes that hold back experts

Look for processes that are repetitive, manual, error-prone, and likely to annoy the people running them. Logistics, sales and marketing, and R&D are all areas with repetitive tasks that can be delegated to AI. AI can improve business outcomes in these areas by improving efficiency and reducing errors.

Check your datasets thoroughly

Researchers from the University of Cambridge recently studied 400 COVID-19-related AI models and found that every one of them had fatal flaws. The errors fell into two general categories: models that used datasets too small to be valid, and models with limited information disclosure, which led to various biases.

Small datasets are not the only kind of data that can throw off models. Public datasets can come from invalid sources. For example, last year Zillow relied on its AI-powered Zestimate to make cash offers for homes in a fraction of the time it usually takes. The algorithm ended up making thousands of above-market offers based on flawed data from the Home Mortgage Disclosure Act, ultimately leading Zillow to offer a million-dollar prize for improving the model.
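The practical takeaway is to vet datasets before they ever reach a model: confirm the sample is large enough, that labels aren’t badly skewed, and that gaps and duplicates are documented. Below is a minimal sketch of what such automated pre-training checks might look like, assuming tabular data in a pandas DataFrame with a hypothetical “label” column; the thresholds are illustrative placeholders, not recommendations.

```python
# Minimal, illustrative dataset sanity checks; all thresholds are arbitrary placeholders.
# Assumes a tabular pandas DataFrame with a hypothetical "label" column.
import pandas as pd


def vet_dataset(df: pd.DataFrame, label_col: str = "label",
                min_rows: int = 1000, max_imbalance: float = 10.0) -> list:
    """Return a list of warnings about size, duplicates, missing values, and label skew."""
    warnings = []

    if len(df) < min_rows:
        warnings.append(f"Only {len(df)} rows; likely too small to train a valid model.")

    dup_share = df.duplicated().mean()
    if dup_share > 0.01:
        warnings.append(f"{dup_share:.1%} duplicate rows; check for leakage or bad joins.")

    missing = df.isna().mean()
    for col, share in missing[missing > 0.10].items():
        warnings.append(f"Column '{col}' is {share:.0%} missing; document how it was collected.")

    counts = df[label_col].value_counts()
    if len(counts) > 1 and counts.max() / counts.min() > max_imbalance:
        warnings.append(f"Label imbalance of {counts.max()}:{counts.min()}; "
                        "the model may learn a biased shortcut.")

    return warnings


# Example: surface problems before any training run starts.
# for issue in vet_dataset(pd.read_csv("candidate_training_data.csv")):
#     print("WARNING:", issue)
```

Checks like these won’t catch every flaw, but they turn “vet your data” from a slogan into a repeatable gate that runs before every training cycle.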

Choose the right AI model

As AI models evolve, only a small portion of them are fully autonomous; most AI models benefit greatly from active human (or rather, expert) input. “Supervised AI” relies on humans to guide machine learning rather than letting the algorithms figure everything out on their own.

Most knowledge work requires supervised AI to achieve your goals. However, for complicated, specialized work, supervised AI alone doesn’t get you as far as most organizations would like to go. To unlock the real value of your data, AI needs not only supervision but also expert input.

The expert-in-the-loop (EITL) model can be used to tackle big problems, or problems that require specialized human judgment. For example, EITL AI has been used to discover new polymers, improve aircraft safety, and even help law enforcement plan how to deal with autonomous vehicles.
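As a rough illustration of what an EITL gate looks like in code, the sketch below routes low-confidence model predictions to a human expert instead of acting on them automatically. The function names, the confidence threshold, and the expert callback are all hypothetical; they stand in for whatever model-serving and case-management systems your organization actually uses.

```python
# Illustrative expert-in-the-loop (EITL) gate; all names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class Decision:
    case_id: str
    label: str
    confidence: float
    source: str  # "model" for automated decisions, "expert" for escalated ones


def decide(case_id: str,
           features: Dict[str, float],
           model_predict: Callable[[Dict[str, float]], Tuple[str, float]],
           ask_expert: Callable[[str, Dict[str, float]], str],
           threshold: float = 0.90) -> Decision:
    """Act on confident predictions automatically; escalate the rest to a human expert."""
    label, confidence = model_predict(features)
    if confidence >= threshold:
        return Decision(case_id, label, confidence, source="model")
    # Low confidence: capture the expert's judgment alongside the model's score.
    expert_label = ask_expert(case_id, features)
    return Decision(case_id, expert_label, confidence, source="expert")
```

The useful design choice here is that the expert’s answer is recorded in the same structure as the model’s confidence, so escalated cases can later be audited or fed back in to retrain the model.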

Start small but dream big

Make sure to thoroughly test AI-driven processes and keep vetting them over time. Once you’ve worked out the kinks, you will have a plan for extending AI across your organization, based on a template you have already tested and proven in specific areas such as GRC and RCM.

Kayvan Alikhani is co-founder and chief product officer at Compliance.ai. Kayvan previously led the identity strategy team at RSA and was the co-founder and CEO of PassBan (acquired by RSA).
