Welcome to our early iterations of this newsletter. I’m excited to have you here and hope you’ll enjoy diving into AI and the latest technology as it emerges, especially as we build on our Product Thinking.
It’s been a wild week in AI, so let’s catch up on a few things:
AI and music
How generative AI works
Latest releases
Latest Happenings
The music industry is facing another Napster moment, and it’s going to be AI-driven.
Music fans responded with disbelief this week to the release on streaming and social media platforms of the viral song "Heart on My Sleeve."
That’s from an NPR article on Friday. “Heart on My Sleeve” was written by an anonymous creator known as Ghostwriter, who used AI to make it sound like a collaboration between Drake and The Weeknd.
Millions of views and listens across platforms stacked up, until Universal Music Group shut it down.
Much like Napster in the early 2000s, this has the potential to seriously disrupt music. For those of us who were downloading music at that time, there was no other way to get digital music except through illegal file sharing online. So that’s what we did. And the music industry fought it until it was forced to embrace it.
So what will be the future of AI-generated music?
It’s impossible to tell at this point, but there is clearly no putting this genie back in the bottle. While some artists are opposed (and there’s a strong case to be made against AI-generated music), others are embracing it.
Grimes has invited musicians to create new songs with her voice using artificial intelligence, saying she would split 50% of the royalties on any successful AI-generated track that uses her voice.
We’ll see much more in this space, as in every other, in the near future. So stay tuned. Until then, here is Ariana Grande covering Taylor Swift:
[Embedded clip: AI-generated Ariana Grande covering Taylor Swift]
Deep Dive and Application
We’ve been hearing non-stop about ChatGPT and generative AI lately. But at its core, what is it and how can we understand it better?
In a great article in the New Yorker, Cal Newport, a Georgetown computer science professor (who is also the author of one of my favorite books: Deep Work), describes the workings of ChatGPT and generative AI in simple terms. The whole article is worth a read, but we’ll pull out a few ideas here.
First, ChatGPT and other generative AI models are trained on vast amounts of text and use it to generate responses one word at a time, predicting the most likely next word given the prompt and what they have written so far. In a simple form, we can imagine a chat program searching through all its sources and then choosing the most likely word to follow whatever word or phrase it has last generated:
In designing our hypothetical chat program, we will use the same general approach of producing our responses one word at a time, by searching in our source text for groups of words that match the end of the sentence we’re currently writing.
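To make that concrete, here is a minimal sketch of the toy “search your sources” program Newport describes. The tiny source text, the two-word context window, and the pick-the-most-common-follower rule are all simplifications of my own, nothing like how ChatGPT actually works under the hood:

```python
from collections import Counter

def next_word(source_text, sentence_so_far, context_size=2):
    """Search the source text for groups of words that match the end of
    the sentence we're writing, and return the most common follower."""
    source = source_text.split()
    context = sentence_so_far.split()[-context_size:]
    followers = Counter()
    for i in range(len(source) - len(context)):
        if source[i:i + len(context)] == context:
            followers[source[i + len(context)]] += 1
    if not followers:
        return None  # no match; a real system would fall back to a shorter context
    return followers.most_common(1)[0][0]

source = ("the cat sat on the mat . the cat sat on the chair . "
          "the dog sat on the rug .")
sentence = "the cat sat"
for _ in range(4):
    word = next_word(source, sentence)
    if word is None:
        break
    sentence += " " + word
print(sentence)  # -> "the cat sat on the mat ."
```

Real models predict over learned probabilities rather than literal string matches, but the one-word-at-a-time loop is the same.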
We can make these models better by giving them lots of data to search against and allowing them to generate their own rules, rather than dictating every rule for how to generate the text.
From there, the size and complexity of the neural networks and their calculations can get massive, but the foundation of the text generation stays the same: a vast amount of language processing that combines and recombines text in a way that makes sense.
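As a rough sketch of what “generating their own rules” might mean (again, a toy word-count model of my own, nothing like a real neural network): instead of writing the rules by hand, we let the program build a table of next-word counts from the data and sample from it.

```python
import random
from collections import defaultdict, Counter

def learn_rules(source_text):
    """Derive the 'rules' from data: for each word, count how often every
    other word follows it. No generation rule is written by hand."""
    words = source_text.split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model, start, length=8):
    """Generate text by sampling each next word in proportion to how often
    it followed the current word in the training data."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        choices, counts = zip(*followers.items())
        word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = learn_rules(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug"
```

Swap the word-count table for a neural network with billions of parameters and the corpus for a large slice of the internet, and the foundation is the same, just at a vastly larger scale.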
For many applications, the results can be uncanny. But for others, they can be obviously wrong.
When interacting with these systems, it doesn’t take long to stumble into a conversation that gives you goosebumps. Maybe you’re caught off guard by a moment of uncanny humanity, or left awestruck by the sophistication of a response. Now that we understand how these feats are actually performed, however, we can temper these perceptions. A system like ChatGPT doesn’t create, it imitates. When you send it a request to write a Biblical verse about removing a sandwich from a VCR, it doesn’t form an original idea about this conundrum; it instead copies, manipulates, and pastes together text that already exists, originally written by human intelligences, to produce something that sounds like how a real person would talk about these topics. This is why, if you read the Biblical-VCR case study carefully, you’ll soon realize that the advice given, though impressive in style, doesn’t actually solve the original problem very well.
I also ran into this recently. When I asked ChatGPT to summarize an example from a book, it left out an important detail, so I asked it to try again. Then, when I asked for a date for the example, it walked back the whole conversation, saying the example didn’t exist and that it had been mistaken. Of course, I knew the example was real because I had read the book. But it seemed to doubt itself because I was pushing for additional information it didn’t have.
The A.I. is simply remixing and recombining existing writing that’s relevant to the prompt.
So what does all of this mean for our work? According to this article:
ChatGPT won’t replace doctors, but it might make their jobs easier by automatically generating patient notes from electronic medical-record entries. ChatGPT cannot write publishable articles from scratch, but it might provide journalists with summaries of relevant information, collected into a useful format.
I can think of many examples where I believe this will play out the same way. There still needs to be human interaction and verification. We’re still in the early stages, but we’ll continue to explore how this will affect all of us.
Additional Links
OpenAI rolls out 'incognito mode' on ChatGPT - If you’re concerned about ChatGPT using your data for training (a real concern for businesses), you’re in luck. OpenAI just rolled out a way to turn off chat history so your conversations aren’t used for training.
Apple to Expand Health Initiatives With AI Health Coach, Report Says - Apple is moving more into AI, specifically within health. If your question is “what about Siri,” you’re in good company. It sounds like there is a lot of internal politics at play.
Replit Raises Money, Launches Lots of New Features - The coding copilot software, a GitHub Copilot competitor, launched a host of new features and raised a sizable round to push further into AI.
Hugging Face releases its own version of ChatGPT - Hugging Face is an AI startup that recently released HuggingChat, an open-source alternative to ChatGPT. It isn’t as powerful as ChatGPT, but it’s free and has ambitions to be much bigger.
Let me know what you think. And, as always, please share!