This week’s update:
🔌 ChatGPT Plugins
🍎 Apple Bans ChatGPT
🎤 Sam Altman testifies
🖥️ Zoom gets AI
🙈 The Cost of AI Training
Latest Updates
ChatGPT Plugins
OpenAI announced it would begin rolling out access to plugins to premium users this week.
If you are a ChatGPT Plus user, enjoy early access to experimental new features, which may change during development. We’ll be making these features accessible via a new beta panel in your settings, which is rolling out to all Plus users over the course of the next week.
I got access on Monday and began playing around with a number of them. If you are a ChatGPT Plus user, you should have access as well.
Apple Bans ChatGPT
It may not come as a surprise, given that Microsoft, OpenAI's biggest backer, is a major competitor of Apple: Apple doesn't want its employees using ChatGPT.
Apple employees will reportedly be restricted from using ChatGPT and other artificial intelligence tools.
This follows in the footsteps of other companies blocking the use of AI tools.
Apple isn't the first major tech company to restrict ChatGPT use by its staff. Samsung banned ChatGPT for its employees after sensitive information was inadvertently leaked to the platform, and financial institutions like JPMorgan, Bank of America, and Citigroup have also banned it to protect confidential information.
It will certainly be interesting to see the divides between people and companies as the AI wars continue to heat up.
Sam Altman Testifies
Sam Altman, CEO of OpenAI, testified before Congress this week. He urged regulation of the industry:
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
It’s not clear to me that the US government understands technology, let alone AI and the speed at which it is moving. And regulation never moves quickly. So this will be an interesting space to watch.
Anthropic and Zoom
Zoom and Anthropic announced a new partnership. Zoom will begin incorporating Claude, Anthropic’s AI assistant, into its tools:
Zoom will use Claude, our AI assistant built with Constitutional AI, to build customer-facing AI products focused on reliability, productivity, and safety.
Lots more to come, no doubt.
Other Links
Clippy Office Assistant - Do you need a rude AI assistant? This should fit the bill.
Product Photos - Get better product photos with AI. I’m trying this out over the weekend to update some listings.
Stratup.AI - Need help generating a startup idea?
Chatbot Creator - Platform for creating your own chatbot.
Midjourney Prompt Generator - Need help generating prompts? The image below came from the prompt generator I used to help me depict the next deep dive.
Deep Dives
In a funny scenario in Silicon Valley, Erlich and Jian-Yang create an app that detects “hot dog” or “not hot dog.”
Of course, this isn’t a very useful tool for food, but it has useful implications for detecting anatomical imagery, and Periscope purchases the app for $4 million. But that leaves Dinesh to train the model on what is a hot dog and what isn’t:
Gilfoyle finds this hilarious, naturally. And it is funny for the characters in the show.
But the reality is that content moderation and AI training have a dark side. We often overlook the people who have to train the models to identify explicit content so the rest of us can avoid it.
An interesting article reports that workers in Kenya were employed to train ChatGPT to identify and label explicit content. That meant reading the content and labeling it for ChatGPT to understand.
In a process called Reinforcement Learning from Human Feedback, or RLHF, bots become smarter as humans label content, teaching them how to optimize based on that feedback. AI leaders, including OpenAI’s Sam Altman, have praised the practice’s technical effectiveness, yet they rarely talk about the cost some humans pay to align the AI systems with our values.
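To make the mechanics concrete, here is a minimal sketch of the first half of that loop: humans label content, and a model learns to flag similar content from those labels. This is my own toy illustration with made-up examples, not OpenAI's actual pipeline, which additionally uses human preferences to fine-tune the model's behavior.

```python
# Toy sketch (not OpenAI's pipeline): human labels on text samples train a
# classifier that can then flag similar content automatically. The examples
# below are hypothetical stand-ins for the labeling work described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A human reviewer reads each text and marks it harmful (1) or benign (0).
texts = [
    "a friendly recipe for vegetable soup",
    "graphic description of violence",
    "tips for watering houseplants",
    "threatening and abusive message",
]
human_labels = [0, 1, 0, 1]

# The model learns from the human judgments...
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
classifier = LogisticRegression().fit(features, human_labels)

# ...and can then score new content without a human reading it first.
new_text = ["a graphic and violent threat"]
print(classifier.predict_proba(vectorizer.transform(new_text)))
```

The point of the sketch is the asymmetry the article highlights: the automation only works because a person read the harmful material first.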
This can take a heavy toll on the workers who are doing the moderation and training:
Mophat Okinyi, a QA analyst on Mathenge’s team, is still dealing with the fallout. The repeated exposure to explicit text, he said, led to insomnia, anxiety, depression, and panic attacks. Okinyi’s wife saw him change, he said, and she left him last year. “However much I feel good seeing ChatGPT become famous and being used by many people globally,” Okinyi said, “making it safe destroyed my family. It destroyed my mental health. As we speak, I’m still struggling with trauma.”
This isn’t a new problem, but it is still a serious one. An article in The Verge discusses how Facebook handles its content moderation by asking its workers (mostly contractors) to review the most obscene content and decide whether it violates standards:
Here is a racist joke. Here is a man having sex with a farm animal. Here is a graphic video of murder recorded by a drug cartel. Some of the posts Miguel reviews are on Facebook, where he says bullying and hate speech are more common; others are on Instagram, where users can post under pseudonyms, and tend to share more violence, nudity, and sexual activity.
Each post presents Miguel with two separate but related tests. First, he must determine whether a post violates the community standards. Then, he must select the correct reason why it violates the standards. If he accurately recognizes that a post should be removed, but selects the “wrong” reason, this will count against his accuracy score.
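That scoring rule is harsher than it sounds. Here is a toy sketch (my reconstruction of the rule as described, not Facebook's actual metric): a review only counts as accurate when both the remove/keep decision and the selected reason match the expected answer.

```python
# Toy sketch of the two-part scoring described above: picking the right
# decision but the "wrong" reason still counts against the reviewer.
def accuracy(reviews: list[tuple[str, str]], answer_key: list[tuple[str, str]]) -> float:
    """Each item is a (decision, reason) pair, e.g. ("remove", "hate_speech")."""
    correct = sum(
        r_decision == k_decision and r_reason == k_reason
        for (r_decision, r_reason), (k_decision, k_reason) in zip(reviews, answer_key)
    )
    return correct / len(answer_key)

reviews    = [("remove", "hate_speech"), ("remove", "nudity"),   ("keep", "")]
answer_key = [("remove", "hate_speech"), ("remove", "violence"), ("keep", "")]
print(accuracy(reviews, answer_key))  # ~0.67: one right removal is penalized for its reason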
All of us benefit from the work that contractors and employees do behind the scenes to identify harmful content. But it’s not without significant cost. And that’s important to remember. Because we can’t continue to use up real people to clean up the messes society makes with technology.