Just Do It

Plus: OpenAI unveils GPT-4o, Google overhauls search, Apple nears deal to use ChatGPT and much more...

Hey everyone,

A special welcome this week to our new subscribers—great to have you here!

This is your Sunday Space, where I serve up the best ideas, tools and resources I’ve found each week as we explore the technology shaping the future.

If you find something thought-provoking, forward it to a friend.

Just Do It

Created with Midjourney

Christopher Nolan, the visionary director of Oppenheimer, says the best way to learn how to make a film is to actually make one.

This principle applies to most things, not just filmmaking.

Aspiring founders won’t learn how to build a business in a classroom. They’ll learn by building a product and selling it to customers.

Aspiring writers won’t learn to write by reading Stephen King’s book, “On Writing: A Memoir of the Craft.” They’ll learn by writing.

Let this be our constant reminder to do the thing instead of studying the thing, talking about doing the thing or reading about how to do the thing.

Just Do It.

Groq’s Quiet Dominance

The holy trinity of AI models aside, the question of where to deploy and run the model remains. That’s where cloud/API providers come in.

One such provider recently making waves is Groq—not to be confused with Grok, Elon’s chatbot built into X.

Groq is a specialised hardware and software platform building solutions for accelerating AI inference.

Their core product is the Language Processing Unit (LPU)—a new type of processor designed specifically for running LLMs with exceptional speed and low latency.

There’s nothing theoretical about the promise of LPUs. Just look how far ahead Groq is when it comes to latency, throughput (i.e. speed) and price:

Source: ArtificialAnalysis.ai

They’re the only provider to have reached the promised land (green quadrant) and are miles ahead of the incumbent platforms like Microsoft Azure and Amazon Bedrock.

If you want to make your AI apps fast, utilising Llama 3 deployed on Groq is a no-brainer.
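For the curious, here’s roughly what that looks like in practice. Groq exposes an OpenAI-compatible chat-completions endpoint, so calling Llama 3 is a short script. This is a minimal sketch, not production code — the endpoint URL and the `llama3-70b-8192` model name reflect Groq’s public docs at the time of writing and may change, and you’d need your own `GROQ_API_KEY`:

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible chat-completions endpoint (check their docs
# for the current URL and model names).
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3-70b-8192") -> dict:
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_groq(prompt: str) -> str:
    """Send the prompt to Groq and return the model's reply text."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__" and "GROQ_API_KEY" in os.environ:
    print(ask_groq("In one sentence, what is an LPU?"))
```

Because the API shape matches OpenAI’s, swapping an existing app over to Groq is often just a matter of changing the base URL, key and model name.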

AI Word of the Day


Inference refers to the process of using a trained model to evaluate new data and make predictions. When you provide an input or prompt to ChatGPT, the underlying model uses inference to generate a relevant output or response.
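A toy sketch of the idea: the expensive part (training) has already happened and produced fixed weights; inference is just running those weights forward on new input. The word weights below are made up for illustration — this is not a real model:

```python
# Pretend these weights came out of a finished training run.
# Inference only reads them; it never updates them.
WEIGHTS = {"great": 1.0, "love": 0.8, "terrible": -1.0, "boring": -0.6}

def infer(text: str) -> str:
    """Apply the 'trained' model to unseen input and return a prediction."""
    score = sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())
    return "positive" if score >= 0 else "negative"

print(infer("a great film, I love it"))  # positive
print(infer("terrible and boring"))      # negative
```

When you prompt ChatGPT, the same split applies at vastly larger scale: your prompt is the new input, and the response is one forward pass of inference through the trained model.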

From Articles I Read

With GPT-4o, we now have a model that reasons across voice, text and vision, meaning it processes all its inputs and outputs in one neural net (just like humans…), something yet to be achieved by anyone else.

Source: OpenAI

GPT-4o outperforming other flagship models in the standard benchmark tests is a minor point. Here are my biggest takeaways from the announcement:

  • We now have a high-quality model that is approaching Llama speed. OpenAI says it’s 2x faster than GPT-4T, but it felt 10x faster when I tested it out this week.

  • GPT-4o is cheap—half the price of GPT-4T, to be exact. This follows the trend of getting higher-quality releases for less.

  • The new voice capabilities are out of this world. Lower latency, plus the ability to switch between languages and adapt to how humans naturally speak, could change everything.

OpenAI’s release is probably bigger than we think—we are just scratching the surface of what these models can do in a more human-native context.

A day after OpenAI announced GPT-4o to the world, Google held its I/O conference, which showed that it’s not out of the AI game just yet.

Google’s multimodal assistant and Sora competitor aside, the biggest shock for me was their willingness to cannibalise their cash-cow product: Google Search.

Source: Google

Google revealed AI Overviews, where instead of giving users 10 blue links, they get concise Gemini-written summaries with citations.

This shows that they are unwilling to be left behind as AI transforms every part of the internet.

Innovators like Perplexity have been pioneering this approach to search for a few years now (I use them as my default search engine).

However, with Google’s near-infinite resources and change in focus, I’d be worried if I were on Perplexity’s founding team.

OpenAI and the GPT models may well power your iPhone when iOS 18 is released this summer.

I find this hilarious.

Firstly, because this is so not Apple.

And secondly, because OpenAI just showed how good Siri could have been all these years, right before ChatGPT (and its new voice capabilities) becomes Siri.


Personally, I doubt Steve Jobs would have tolerated this. Apple has always been a pioneer of in-house hardware and software, tightly integrated and designed in a way that only Apple could.

Licensing your AI capabilities from OpenAI and Google is selling out in a way. I’ve said it a few times here already, but it’s not looking good for Apple.


Quote I’m Pondering

“When you let your attention slide for a bit, don’t think you will get back a grip on it whenever you wish—instead, bear in mind that because of today’s mistake everything that follows will be necessarily worse… Is it possible to be free from error? Not by any means, but it is possible to be a person always stretching to avoid error. For we must be content to at least escape a few mistakes by never letting our attention slide.”

—Epictetus, Discourses, 4.12.1; 19

What did you think of today's edition?

How can I improve? What topics interest you the most?


Was this email forwarded to you? If you liked it, sign up here.

If you loved it, forward it along and share the love.

Thanks for reading,

— Luca

*These are affiliate links—we may earn a commission if you purchase through them.
