Uncensored AI art model raises ethical questions

A new open source AI image generator capable of producing realistic images from any text prompt has seen astonishingly fast uptake in its first week. Stability AI’s Stable Diffusion, which is high-fidelity yet able to run on out-of-the-box consumer hardware, is now used by art generator services such as Artbreeder, Pixelz.ai, and others. But the model’s unfiltered nature means that not all of its use has been entirely above board.

For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that accompanies the AI-generated stories users create on its platform, and Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.

But Stable Diffusion is also being used for less savory purposes. On the infamous discussion forum 4chan, where the model leaked early, several threads are devoted to AI-generated art of nude celebrities and other forms of generated pornography.

Emad Mostaque, the CEO of Stability AI, called it “unfortunate” that the model leaked on 4chan and stressed that the company was working with “leading ethicists and technologists” on safety and other mechanisms around responsible release. One of these mechanisms is the Safety Classifier, a tunable AI tool included in the overall Stable Diffusion software package that attempts to detect and block offensive or undesirable images.

The Safety Classifier is enabled by default, however, it can be disabled.
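As an illustration of how thin that default protection is: in the Hugging Face diffusers packaging of Stable Diffusion, the filter is exposed as a safety_checker component. The attribute name below is diffusers’ own, and this is a sketch of the general pattern rather than Stability AI’s exact tooling.

```python
# A minimal sketch, assuming the Hugging Face diffusers packaging of
# Stable Diffusion; its safety_checker plays the role of the Safety
# Classifier described above.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# The checker runs on every generated image by default; anything it flags
# as NSFW is blacked out rather than returned to the user.
print(pipe.safety_checker is not None)  # True: filtering is on by default

# Because the code is open source, turning the filter off takes one line.
pipe.safety_checker = None
```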

Stable Diffusion, though, is new territory. Other AI art-generating systems, such as OpenAI’s DALL-E 2, have implemented strict filters for pornographic material. (The license for the open source Stable Diffusion prohibits certain applications, such as exploiting minors, but the model itself is not constrained at the technical level.) Moreover, many systems lack the ability to create art of public figures, unlike Stable Diffusion. Those two capabilities can be risky in combination, allowing bad actors to create pornographic “deepfakes” that can – in the worst cases – perpetuate abuse or implicate someone in a crime they didn’t commit.

Image: A deepfake of Emma Watson, made with Stable Diffusion and posted on 4chan.

Unfortunately, women are by far the most affected. A 2019 study found that 90% to 95% of deepfakes circulating online are non-consensual, and about 90% of those depict women. That doesn’t bode well for the future of these AI systems, according to Ravit Dotan, an AI ethicist at the University of California, Berkeley.

“I’m concerned about other effects of synthetic depictions of illegal content — that it will exacerbate the illegal behaviors portrayed,” Dotan told ukbusinessupdates.com via email. “For example, will synthetic child [exploitation] increase the creation of authentic child [exploitation]? Will it increase the number of attacks by pedophiles?”

Abhishek Gupta, principal researcher at the Montreal AI Ethics Institute, shares this view. “We really need to think about the lifecycle of an AI system, including post-deployment use and monitoring, and consider controls that can minimize harm even in worst-case scenarios,” he said. “This is especially the case when a powerful capability [like Stable Diffusion] goes into the wild, one that can cause real trauma to those it is used against, for example by creating offensive content in the likeness of the victim.”

Something of a precedent was set last year when a father, on the advice of a nurse, took pictures of his young child’s swollen genital area and sent them to the nurse’s iPhone. The photos were automatically backed up to Google Photos and flagged as child sexual abuse material by the company’s AI filters, which led to the man’s account being disabled and an investigation by the San Francisco police.

If a legitimate photo could trip such a detection system, experts like Dotan say, there’s no reason deepfakes generated by a system like Stable Diffusion couldn’t — and at scale.

“The AI systems that people create, even if they have the best intentions, can be used in harmful ways that they do not foresee and cannot prevent,” Dotan said. “I think developers and researchers have often underestimated this point.”

Of course, the technology to create deepfakes, AI-based or not, has been around for a while. A 2020 report from the deepfake detection company Sensity found that hundreds of explicit deepfake videos featuring female celebrities were being uploaded to the world’s largest pornographic websites every month; the report estimated the total number of deepfakes online at around 49,000, more than 95% of which were porn. Actresses including Emma Watson, Natalie Portman, Billie Eilish, and Taylor Swift have been targets of deepfakes since AI-powered face-swapping tools hit the mainstream several years ago, and some, including Kristen Bell, have spoken out against what they consider sexual exploitation.

But Stable Diffusion represents a newer generation of systems that can create incredibly convincing, if not perfect, fake images with minimal effort from the user. It is also easy to install, requiring nothing more than a few setup files and a graphics card costing several hundred dollars at the high end. Work is underway on even more efficient versions of the system that can run on an M1 MacBook.
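For a sense of how little effort is involved, the sketch below is essentially the entire program needed to generate an image locally. It assumes the Hugging Face diffusers library, one common packaging of the model; the model ID, half precision, and CUDA device are illustrative choices, not the only way to run it.

```python
# A minimal local setup sketch (assumptions: diffusers packaging, an
# NVIDIA GPU; install with `pip install diffusers transformers torch`).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # the original public release
    torch_dtype=torch.float16,         # half precision fits consumer VRAM
).to("cuda")

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```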

Image: A Kylie Kardashian deepfake posted on 4chan.

Sebastian Berns, a Ph.D. researcher in the AI group at Queen Mary University of London, thinks the automation and the ability to scale up customized image generation are the big differences with systems like Stable Diffusion — and the main problems. “Most of the harmful imagery can already be produced with conventional methods, but it is manual and requires a lot of effort,” he said. “A model that can produce near-photorealistic images could give way to personalized blackmail attacks on individuals.”

Berns fears that personal photos scraped from social media could be used to condition Stable Diffusion or a similar model to generate targeted pornographic imagery or images depicting illegal acts. There is certainly precedent. After Indian investigative journalist Rana Ayyub reported in 2018 on the rape of an eight-year-old Kashmiri girl, she became the target of Indian nationalist trolls, some of whom created deepfake porn with her face on another person’s body. The deepfake was shared by the leader of the nationalist political party BJP, and the harassment Ayyub received as a result grew so severe that the United Nations had to intervene.

“Stable Diffusion offers enough customization to send automated threats against individuals, who must either pay or risk having fake but potentially damaging imagery published,” Berns continued. “We already see people being extorted after their webcam was accessed remotely. That infiltration step may no longer be necessary.”

Now that Stable Diffusion is out in the wild and already being used to generate pornography, some of it non-consensual, it may fall to image hosts to take action. ukbusinessupdates.com contacted one of the major adult content platforms, OnlyFans, but had not heard back as of publication time. A spokesperson for Patreon, which also allows adult content, noted that the company has a policy against deepfakes and bans images that “reuse the likenesses of celebrities and place non-adult content in an adult context.”

However, if history is any indication, enforcement will likely be uneven, in part because few laws specifically protect against deepfaked pornography. And even if the threat of legal action takes down some sites devoted to objectionable AI-generated content, there’s nothing stopping new ones from popping up.

In other words, says Gupta, it’s a brave new world.

“Creative and malicious users can abuse its capabilities [those of Stable Diffusion] to generate subjectively offensive content at scale, using minimal resources to run inference – which is cheaper than training the entire model – and then publish it in venues like Reddit and 4chan to generate traffic and draw attention,” Gupta said. “There’s a lot at stake when such capabilities escape ‘into the wild,’ where controls like API rate limits and safety checks on the kind of output returned by the system no longer apply.”

