The Paradox of Letting Go

Plus: SpaceX brings Starship back to Earth, xAI's Gigafactory of Compute, NVIDIA beats Moore's Law and much more...

Hey everyone,

A special welcome this week to our new subscribers—great to have you here!

This is The Space, where I serve up the best ideas, tools and resources I’ve found each week as we explore the technology shaping the future.

If you find something thought-provoking, forward it to a friend.

ANNOUNCEMENT
Join Me at Sumo Day 2024 Live!

I’m very humbled to have been invited by my friend Chris Cownden of Podcast Launch Strategy to speak at AppSumo’s Sumo Day.

He’s done an incredible job of bringing together a stellar group of software builders, no-coders, content creators and more to share some knowledge throughout the event.

I’ll be on today (Monday, June 10th) at 12:30 pm CST, talking through my journey and all things No-Code, AI, and content creation.

Not only will you hear from the best in the business, but AppSumo is running its biggest-ever software sale during the event. Oh, and they’re also giving away a Cybertruck.

Okay, back to today’s docket!

IDEAS
The Paradox of Letting Go

Created with Midjourney

We all have big dreams.

  • To become a famous writer.

  • To be a billionaire business owner.

  • To start the next Google, Facebook or Amazon.

While these ambitions serve us greatly, they’re only useful up to a point. Too much focus on them leads to anxiety and overwhelm.

This is because it’s impossible for us to control achieving those exact outcomes. The probabilities are just too small.

But, what we can control is our effort—the inputs and consistent daily habits.

This is “The Process,” and we must trust it.

Focusing on what we can control and letting go of our lofty ambitions might just be the key to making them a reality.

RESEARCH
LLMs Beat Analysts at Earnings Forecasting

A recent study from the University of Chicago found that GPT-4 can effectively analyse financial statements and even outperform human analysts in predicting future earnings.

Researchers used anonymised financial statements and a “Chain-of-Thought” prompt to simulate how analysts forecast earnings.

Source: University of Chicago

Applying the methodology above, GPT-4 outperformed human analysts by ~10% in predicting the direction of future earnings.

Source: University of Chicago
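To make the setup concrete, here’s a minimal sketch of what that pipeline might look like. This is my own reconstruction, not the paper’s actual prompt or code; the anonymised statement, prompt wording and model call are illustrative assumptions.

```python
# My own reconstruction of the study-style setup, not the paper's actual code.
# Assumes the `openai` v1 Python client and an OPENAI_API_KEY in the environment;
# the anonymised statement below is made up for illustration.
from openai import OpenAI

client = OpenAI()

anonymised_statement = """
Company A, year t:   Revenue 1,200 | Cost of sales 700 | Operating expenses 300 | Net income 150
Company A, year t-1: Revenue 1,050 | Cost of sales 640 | Operating expenses 280 | Net income 110
"""  # names and dates stripped, as in the paper

cot_prompt = (
    "You are a financial analyst. Using only the statements below:\n"
    "1. Identify notable trends in revenue, margins and expenses.\n"
    "2. Compute the ratios you would normally use.\n"
    "3. Reason step by step about what they imply for next year's earnings.\n"
    "4. End with a single word on its own line: INCREASE or DECREASE.\n"
    + anonymised_statement
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": cot_prompt}],
)

answer = response.choices[0].message.content
print(answer.strip().splitlines()[-1])  # expected: INCREASE or DECREASE
```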

While this is truly incredible, two other insights from the study blew me away:

  1. GPT-4’s performance was on par with a specialised machine learning model (see the diagram below) trained for this exact task.

  2. GPT-4’s edge didn’t come from having seen the answers in its training data. Rather, it came from the model’s ability to apply economic reasoning when analysing the numbers.

GPT-4 vs Artificial Neural Network (ANN), Source: University of Chicago

It’s becoming clearer and clearer that AI is going to disrupt “business-as-usual” in finance. These models are just way better than us at ingesting and analysing massive datasets.

And now the best LLMs can even interpret the data better than us.

When you think about it, this is actually unsurprising. Even less surprising is that the vested interests in Big Finance are sceptical.

Granted, AI probably won’t be able to pick stocks that outperform a broader index like the S&P 500… yet.

And, sure, the top 1% of human analysts probably have an edge over LLMs... for now.

We have no idea how quickly these models will improve, and we, including Big Finance, probably aren’t ready for it.

AI Word of the Day

Chain of Thought (CoT) Prompt

Source: Prompt Engineering Guide

This is a specific way of interacting with an LLM, like GPT-4.

Rather than asking it to solve a problem in one shot, you prompt the model to reason through it step by step: breaking the problem into smaller sub-problems and working through each one before arriving at a solution to the main problem.

This allows for a more systematic and detailed style of problem-solving with LLMs.
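As a rough illustration (my own example, with hypothetical wording not taken from the guide above), here’s how the same question might be posed with and without a CoT prompt:

```python
question = "A store sells pens at 3 for $2. How much do 12 pens cost?"

# Standard prompt: ask for the answer directly.
standard_prompt = f"{question}\nAnswer:"

# Chain-of-Thought prompt: ask the model to work through intermediate
# steps before committing to a final answer.
cot_prompt = (
    f"{question}\n"
    "Let's think step by step:\n"
    "1. Work out the cost per pen (or per group of 3 pens).\n"
    "2. Scale that up to 12 pens.\n"
    "3. Give the final answer on its own line.\n"
)

print(cot_prompt)
```

The CoT version makes the model’s intermediate reasoning visible, which tends to improve accuracy on multi-step problems and makes mistakes easier to spot.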

INSIGHTS
From Articles I Read

Source: TechCrunch

The latest Starship test flight was a massive step towards making the spacecraft that will take us to Mars fully reusable. After launch, both the Super Heavy booster and the upper stage returned to Earth via controlled ocean splashdowns.

A little-known fact about Elon is that when he first saw ChatGPT’s capabilities, he didn’t start xAI in retaliation. He actually doubled down on Starship, knowing that humans must become multi-planetary in case AGI overwhelms us.

Luca’s take: I hope we don’t have to flee Earth because of superintelligent AI, but ensuring humanity endures is still incredibly important. With the constant improvements to Starship, SpaceX keeps moving us toward that future.

Source: Data Centre Dynamics

Fast forward to today, and Elon is now doubling down on xAI.

Following the company’s $6B Series B, he’s announced plans to build a cluster of 100k NVIDIA H100 GPUs (the largest on Earth) by next year.

Luca’s take: Although xAI lags in the LLM space, one should never bet against Elon. What we have today won’t get us to AGI, so more innovation is needed. With the additional resources and some new ideas, there’s a future where xAI leapfrogs the competition.

In his Computex keynote, Jensen Huang once again highlighted the incredible pace of improvement in the computational power of NVIDIA’s chips.

The new Blackwell Ultra chip shows a 5x increase in TFLOPs over 2 years. It’s wild that this is even faster than Moore’s Law, which predicts a 2x increase in transistors on a microchip every 2 years.

Source: NVIDIA
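As a quick back-of-the-envelope check (my own arithmetic, assuming both figures refer to the same 2-year window), the implied annual growth rates compare like this:

```python
# Back-of-the-envelope comparison of implied annual growth rates,
# assuming both figures cover the same 2-year window.
chip_growth_2yr = 5.0   # Blackwell Ultra TFLOPs vs 2 years earlier (keynote figure)
moores_law_2yr = 2.0    # transistor count doubling every 2 years

print(f"NVIDIA:      ~{chip_growth_2yr ** 0.5:.2f}x per year")   # ~2.24x
print(f"Moore's Law: ~{moores_law_2yr ** 0.5:.2f}x per year")    # ~1.41x
```

Compounded annually, that’s roughly 2.2x a year versus the ~1.4x a year that Moore’s Law implies.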

Luca’s take: NVIDIA briefly became the 2nd most valuable company by market cap this week. While many sceptics think they’re about to be out-innovated, if they continue this pace of improvement, I’m not sure anyone can stop them.

From X/Twitter

From YouTube

THOUGHTS
Quote I’m Pondering

"The world breaks everyone, and afterward, some are strong at the broken places."

— Ernest Hemingway

What did you think of today's edition?

How can I improve? What topics interest you the most?


Was this email forwarded to you? If you liked it, sign up here.

If you loved it, forward it along and share the love.

Thanks for reading,

— Luca

P.S. Generative AI or Degenerative AI?

*These are affiliate links—we may earn a commission if you purchase through them.
