No quick fix: How OpenAI's DALL·E 2 illustrated the challenges of bias in AI

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias. 

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists have claimed is a superficial way to fix systemic underlying problems with AI systems.

“This is not just a technical problem. This is a problem that involves the social sciences,” said Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence. There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect that, Chang said.

OpenAI released the second version of its DALL·E image generator in April to rave reviews. The program asks users to enter a series of words relating to one another — for example: “an astronaut playing basketball with cats in space in a minimalist style.” And with spatial and object awareness, DALL·E creates four original images that are supposed to reflect the words, according to the website.

As with many AI programs, it did not take long for some users to report what they saw as signs of bias. OpenAI itself offered examples: the caption “a builder” produced images featuring only men, while the caption “a flight attendant” produced only images of women. Anticipating such issues, OpenAI had published a “Risks and Limitations” document alongside the program’s limited release, noting that “DALL·E 2 additionally inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”

https://www.nbcnews.com/tech/tech-news/no-quick-fix-openais-dalle-2-illustrated-challenges-bias-ai-rcna39918