Llama 2, Apple GPT, Elon, and AI Love
AI and Technology Weekly
Here’s this week’s AI and technology news, product applications, and broader philosophical implications. Let’s dive in:
✒️ AI content
😺 AI Optimism
Latest News and Updates
Meta released Llama 2 this week, an open-source large language model (LLM). Unlike many other LLMs, nothing is kept hidden: the code and model weights are available to anyone for research or commercial use. The hope is to drive innovation forward, along with better responsibility:
“Open-source drives innovation because it enables many more developers to build with new technology,” Zuckerberg said in a Facebook post. “It also improves safety and security because when software is open, more people can scrutinize it to identify and fix potential issues.”
Apple recently banned its employees from using external AI tools. Unsurprisingly, it is building its own:
Apple Inc. is quietly working on artificial intelligence tools that could challenge those of OpenAI Inc., Alphabet Inc.’s Google and others, but the company has yet to devise a clear strategy for releasing the technology to consumers.
Not to be outdone, Elon Musk has also launched an AI company:
On Wednesday, an official website for Musk's AI venture went live at X.AI. A Twitter account also launched with the handle @xAI and affiliate badges for individuals working at the newly formed company. Musk then confirmed his involvement by changing his own personal Twitter profile's bio to simply the @xAI handle.
Some AI tools have image features built in, while others are stand-alone image generators. I’ve been waiting for image support in ChatGPT for some time, but OpenAI is still holding it back:
OpenAI has been testing its multimodal version of GPT-4 with image-recognition support prior to a planned wide release. However, public access is being curtailed due to concerns about its ability to potentially recognize specific individuals, according to a New York Times report on Tuesday.
Useful Tools & Resources
I use ChatGPT, Bard, and other AI tools to help with many things, from generating ideas to brainstorming to refining content. What I don’t want, though, is too much AI-generated content in anything I do. So being able to test my own content, and other things I see on the internet, is very helpful.
CopyLeaks does a great job of distinguishing AI-generated content from human-written content. I expect these kinds of tools will eventually be built into everything we do, but for now, at least there is a place to check.
Deep Dive - Anthropomorphizing
I was having a conversation recently about the anthropomorphizing of AI. It has been happening since the earliest chatbots, and we’ll likely see it continue. This article, AI love: It's complicated, dives deeper:
Now, some are finding love with an artificial intelligence. The online forum Reddit is filled with people confessing their love for AI chatbots, such as Replika.
But is it real love? It’s tough to know whether this "digital love" is the same as, or even similar to, the love we experience with other humans, because it’s hard enough to understand what "normal" love is.
This will be a fascinating area to observe and study, and one that will likely become a bigger part of life and society…