AI Gets Eyes, and Sounds Better Too
Buckle in for a recap of a big news week in AI
⏳ Read time: under 8 minutes
Welcome to the tenth edition of Aioli, your source for understanding the latest AI advancements and their real-world implications.
This week, OpenAI launched its latest and best model, GPT-4o, followed by Google's AI announcements at its I/O developer conference. We'll break down these developments and explore their potential impact on the AI landscape.
📬 Did someone forward you this email? Sign up to get this newsletter here
TODAY’S SPREAD
🔧 OpenAI Launches GPT-4o, Democratizing Access to Powerful AI
💻 Google Announces AI Upgrades, and I’m Glad to Be a Google Workspace User
👀 Apple’s Incredible Accessibility Updates
🖊️ ChatGPT’s New Highlight Feature
🏀 Basketball Courts Around the World
AI Updates
ChatGPT’s Best Model is Now Free for Everyone
113,000 people tuned in for OpenAI’s Spring Update this past Monday, but for the rest of us, here’s what you might want to know.
New Model: GPT-4o
OpenAI introduced GPT-4o, a new model with GPT-4-level intelligence that's twice as fast, 50% cheaper, and has 5x higher rate limits than GPT-4 Turbo. They also announced that it will be free for everyone.
🐝 What everyone is focused on: It's cheaper, faster, and offers API access!
🔎 What I am focused on: Free users just got a significant performance boost, which could reduce churn and encourage widespread AI adoption. Think about the implications of this across education, work, and global entrepreneurship.
The big question: should you keep paying for ChatGPT Plus? Personally, I will, because I build custom GPTs (which is only possible on a paid plan) and because paid plans come with much higher message limits:
Free users get 16 GPT-4o messages every 3 hours.
Plus users get 80 GPT-4o messages every 3 hours.
Team users get 160 GPT-4o messages every 3 hours.
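If those in-app limits feel tight, remember that GPT-4o is also available through the API with separate, usage-based pricing. Here's a minimal sketch of calling it with the OpenAI Python SDK; the prompt is my own, and it assumes the `openai` package is installed and an `OPENAI_API_KEY` environment variable is set.

```python
# Minimal sketch: calling GPT-4o through the OpenAI Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY in your environment;
# swap in your own prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize GPT-4o's headline improvements in two sentences."},
    ],
)

print(response.choices[0].message.content)
```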
GPTs for All
Now, even free users can access the “mini task bot” GPTs.
What are GPTs? Think of them as personal assistants trained for a specific task that know your preferences and carry background information about you.
🐝 What everyone is focused on: Everyone can now use the GPTs they've built!
🔎 What I am focused on: This opens the door to tens of millions of new users, testing and pushing the capabilities further.
Now that anyone can access it, check out the Garden Guru GPT that I built: a garden expert that provides tailored planting advice based on your USDA zone. Happy spring planting!
More Voice Features
Real-time voice interaction has improved significantly, reducing the previous 2-3 second lag. The voice assistant can now pick up on speech nuances and offers fast multi-language translation for 50 languages, covering 97% of the world’s population. I encourage you to watch and listen to the demo. It’s pretty impressive how human-like the chatbot sounds.
🐝 What everyone is focused on: It sounds like Scarlett Johansson!
🔎 What I am focused on: I use ChatGPT Voice daily, and this upgrade will enhance voice-first experiences. Offices need to prepare for everyone using AI assistants simultaneously.
*This feature hasn’t rolled out yet, so don’t feel too let down when you run to the app to try it! OpenAI will be releasing this to Plus users first, so if you’re keen to try it ASAP, make sure you have a paid subscription.
Vision on Desktop
OpenAI also announced a desktop app with vision capabilities, allowing it to "see" your screen with your permission. It can describe graphs or articles without much effort.
🐝 What everyone is focused on: Privacy concerns, and whether a voice assistant that can see your screen is actually useful for coding.
🔎 What I am focused on: This is the standout feature—imagine a coworker on screen share with you 24/7, without fatigue. This could revolutionize how we work.
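The live screen-sharing app hasn't shipped yet, but GPT-4o already accepts image input through the API, so you can approximate the "describe what's on my screen" workflow today. A rough sketch (the screenshot path and prompt are my own placeholders, not from OpenAI's demo):

```python
# Sketch: asking GPT-4o to describe a screenshot via the API.
# The file path and prompt are placeholders; assumes the `openai`
# package is installed and OPENAI_API_KEY is set.
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local screenshot as a base64 data URL.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the chart in this screenshot and its key takeaway."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Sending a still screenshot isn't the same as the continuous screen sharing OpenAI demoed, but it's a decent stand-in until the desktop app arrives.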
These features will roll out over the next few weeks.
See a deeper exploration of GPT-4o’s capabilities here.
AI Accessibility
With Apple’s Upcoming iOS 18 Update, Users Can Control Their iPad or iPhone with Their Eyes
A Win for Accessibility
We used to look at our screens, but now they can look back at us. Powered by AI, Eye Tracking gives users a built-in way to control their iPad or iPhone with just their eyes. (Woah.)
Check out the other amazing new accessibility updates here.
AI Updates
The 5 Things Google Announced at I/O This Week That You Should Care About (Out of 100+ Announcements)
Do you use Google Workspace? Think Google Drive, Google Sheets, Google Docs, Gmail, the list goes on. These announcements will impact you directly. If you’re not using Google, here’s what the competition has on you.
1️⃣ Gemini Flash ⚡
What it is: A new, lightweight AI model that Google claims is 50% faster and 95% cheaper than GPT-4 and Gemini Pro.
Why it matters: Building AI applications can be cost-prohibitive. A dramatically cheaper model lowers that barrier for developers (see the sketch after this list).
2️⃣ Project Astra 🤳
What it is: A real-time AI assistant built on Gemini that can respond to what it sees and hears as it happens.
Why it matters: This AI can assist you in solving math problems step-by-step as you work on them and can even engage in brainstorming sessions on a whiteboard, offering suggestions and feedback instantly.
3️⃣ Gemini AI Teammate 🤖
What it is: An AI assistant that operates around the clock. It has its own email, responds to chats, completes tasks in Google Docs and Sheets, and conducts web searches.
Why it matters: Imagine having a tireless employee who handles routine tasks, allowing you to focus on more important work.
4️⃣ Gemini is Everywhere 🔍
What it is: Gemini is now deeply integrated into all Google products.
Why it matters: It can identify your license plate in Google Photos, summarize your monthly expenses from Gmail, and prevent phone scams with real-time call analysis. Gemini is bringing futuristic capabilities to the present.
5️⃣ Veo 🎥
What it is: A tool that generates 1080p videos from simple text prompts at a fraction of the cost of traditional video production.
Why it matters: Capture stunning environmental shots without the need for drones or costly equipment, or recreate historical scenes based on descriptions, making high-quality video production accessible and affordable.
Again, these are announcements, and most of these tools will be "rolled out soon." But there's lots to look forward to!
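As promised above, here's a minimal sketch of calling Gemini 1.5 Flash through Google's `google-generativeai` Python SDK. Treat it as an illustration: the prompt is my own, and availability and model names may shift as the rollout continues.

```python
# Sketch: calling Gemini 1.5 Flash through the google-generativeai SDK.
# Assumes `pip install google-generativeai` and a GOOGLE_API_KEY
# environment variable; the model identifier may vary by account.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Give me three low-cost uses for a fast, cheap LLM.")

print(response.text)
```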
ChatGPT Use Case of the Week
Highlight Part of Your Response to Ask Follow-Up Questions
ChatGPT now allows users to highlight parts of its responses for follow-up questions, partial rewrites, and context reuse. Here's how to use this feature:
Generate a response from ChatGPT.
Highlight the relevant parts of the response and click the double quote icon.
The highlighted text will be added to the next prompt for further clarification or refinement.
This feature enables more targeted and efficient interactions with ChatGPT.
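There's no public API for the highlight button itself, but the underlying idea (quote a snippet of the model's last answer back to it in your next prompt) is easy to mimic if you're scripting against the API. A rough sketch with made-up prompts, assuming the `openai` package and an `OPENAI_API_KEY`:

```python
# Sketch: mimicking the "highlight and follow up" flow via the API.
# Prompts and the "highlighted" snippet are illustrative only.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user", "content": "Explain how GPT-4o's rate limits differ across plans."},
]
first = client.chat.completions.create(model="gpt-4o", messages=history)
answer = first.choices[0].message.content
history.append({"role": "assistant", "content": answer})

# Pretend the user highlighted the first sentence of the answer.
highlighted = answer.split(".")[0]

# Quote the highlighted text back as part of the next prompt,
# the same idea as the in-app feature.
history.append({
    "role": "user",
    "content": f'Regarding this part of your answer: "{highlighted}" - can you expand on it?',
})
followup = client.chat.completions.create(model="gpt-4o", messages=history)
print(followup.choices[0].message.content)
```

The only real difference from the in-app flow is that here you paste the quote into the prompt yourself.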
Thanks for reading. Your curiosity fuels this journey.
Until next time!
Meg 👩🏻💻
p.s. if you want to sign up for this newsletter or share it with a friend or colleague, you can do so here. For questions and feedback, email me at [email protected]
What did you think of today's email? Your feedback helps me create better emails for you!