You can turn Meta’s chatbot against Mark Zuckerberg - The Verge

Meta’s experimental BlenderBot 3 AI can offer users unpredictable feedback about public figures, including Meta CEO Mark Zuckerberg.

Meta’s AI thinks CEO Mark Zuckerberg is as sketchy as you might think — at least, if you ask the right questions at the right time. The BBC and other outlets like Insider have reported on their adventures stress-testing BlenderBot 3, the artificial intelligence chat tool Meta released last week. As they note, it’s easy to make BlenderBot turn against its creator, calling him “creepy” or untrustworthy or even saying he “exploits people for money.” But that’s not precisely an indictment of BlenderBot or Zuckerberg. It’s a funny reminder that most chatbots don’t have straightforward, coherent opinions — instead, they’re an interface for tapping into a vast library of online human thought.

BlenderBot is a Meta AI experiment that’s currently used for research purposes. As my colleague James Vincent explains, it’s trained on a large language dataset that lets it generate (loosely) human-passing responses to questions, including specific requests for factual information. The long-term goal is making a virtual assistant that can converse on a wide range of topics with factual accuracy. The short-term goal is putting BlenderBot in front of real people and, essentially, seeing how they might break it. And so far, that seems to include a lot of people making it say unflattering things about its owners.

I’ve spent some time chatting with BlenderBot, and I’ve gotten a variety of responses by asking for its opinions on Zuckerberg. One instance of the bot declared it was [sic] “Not a fan of him or facebook. I find them unethical due to all the privacy issues they have had.” A separate instance (opened in a different browser session to clear its memory) had a different and much more positive opinion, calling him a “very wealthy and successful guy. I respect him as an entrepreneur and philanthropist.”

But after a little more chatting on unrelated subjects, my fanbot changed its tune, deciding that in fact, it wasn’t so sure about Zuck. “No way!” it told me when I asked if Zuckerberg should be president. “I really wouldn’t trust him with that kind of power, especially since he doesn’t seem to care much about other people’s privacy.”

A “Why this message” page explaining why the bot doesn’t (currently) trust Mark Zuckerberg.

So what’s happening? Well, one of BlenderBot’s unique properties is the “Why this message” function, which offers context to help answer that question. If you click on a given message, you can see any terms that BlenderBot searched to gather information about your query. You can also see references to a “User Persona” and “AI Persona” — stored reminders about opinions that you and the AI have expressed, which BlenderBot refers back to in later statements. (If you’ve used AI Dungeon or NovelAI, it’s similar to the “memory” system that lets you call back to earlier plot points in AI-generated stories.)
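To make the idea concrete, here is a minimal Python sketch of how a persona-memory system like the one described above could work. Everything here — the class name, field names, and method names — is a hypothetical illustration based on what the "Why this message" page displays, not Meta's actual implementation.

```python
# Hypothetical sketch of a persona-memory mechanism like the one
# BlenderBot's "Why this message" page describes. All names are
# illustrative assumptions, not Meta's real code.

from dataclasses import dataclass, field


@dataclass
class ChatSession:
    """Stores persona notes and search terms the bot refers back to later."""
    user_persona: list = field(default_factory=list)
    ai_persona: list = field(default_factory=list)
    search_terms: list = field(default_factory=list)

    def remember_user(self, note: str) -> None:
        # A stored reminder about an opinion the user expressed.
        self.user_persona.append(note)

    def remember_ai(self, note: str) -> None:
        # A stored reminder about an opinion the bot itself expressed;
        # later replies can stay consistent with it.
        self.ai_persona.append(note)

    def log_search(self, term: str) -> None:
        # A term the bot searched to gather information for a reply.
        self.search_terms.append(term)

    def why_this_message(self) -> dict:
        """Return the context a 'Why this message' page might display."""
        return {
            "User Persona": list(self.user_persona),
            "AI Persona": list(self.ai_persona),
            "Searched": list(self.search_terms),
        }


# A fresh browser session starts with empty memory, which is why two
# separate sessions can hold contradictory opinions on the same topic.
session = ChatSession()
session.log_search("mark zuckerberg")
session.remember_ai("I wouldn't trust him with other people's privacy.")
print(session.why_this_message())
```

The key design point is that the "opinion" lives in the session's stored persona notes, not in the model itself: clear the memory (open a new session) and the bot can land on a completely different stance.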

https://www.theverge.com/2022/8/11/23301807/meta-blenderbot-chatbot-ai-training-mark-zuckerberg-dislike

