Did China Beat the US in AI?

"DeepSeek RI is AI's Sputnik moment." - Marc Andreessen, co-founder Andreessen Horowitz.

As I was preparing to write my Macro piece on AI, China's DeepSeek released an open-source AI model that beat the best US models. They claim it cost a mere $5.6 million to train, versus the tens of billions US companies have spent.

DeepSeek's app is now ranked higher than ChatGPT in Apple's App Store! DeepSeek's model outperforms o1, OpenAI's breakthrough model that I highlighted in my last article.

Can we trust them at their word on what they spent? Did they use US companies' open-source AI code as a starting point to save money? Hard to tell...

The CEO of Scale AI said he heard DeepSeek used 50,000 H100 chips at $30K apiece, i.e. $1.5 billion worth of NVDA chips they are not supposed to have, which is why they didn't report them in the cost.

With a Chinese AI company leapfrogging our best, the AI race has heated up significantly.

Today's article is on the macro... the big picture... the grand questions.

  • Will AI take over the world? Can it? How?

  • What has experts most worried about AI?

  • What are the benefits and drawbacks of AI use?

Will AI take over the world? Can it? How?

Famous physicist and Nobel Laureate Richard Feynman used to say if you can't explain it to a college freshman, you don't understand it. In investing, I used to say if you can't explain it to a 5-year-old, you don't understand it (more importantly, other investors won't understand it so the stock won't work).

Explaining what makes AI dangerous to a 5-year-old is a daunting task. That's why this newsletter took me a month to ponder...

In 2020, I enrolled in a coding bootcamp. I was trying to make myself more relevant in tech. Given my type-A personality, I naturally got a 104% grade in a pass/fail course! I coded in JavaScript, Python, SQL, and more. The area I excelled at was designing and running AI models!

I learned about the biggest thing that has experts worried about AI today: how reinforcement learning works. Stay with me. I'm keeping it simple!

This is a simplified graphic of a Neural Network

Let's say you are trying to teach your AI model to code.

Step 1: It digests lots of code (both well-written and poorly written code), as well as coding courses all available on the internet. ("Observation" in the above graphic)

Step 2: Like any student, you then test it by seeing if it can code ("Actions" in the above graphic).

Step 3: You give it feedback that it was wrong or right, and it takes this in as new information.

Step 4: Next time you ask it to code, it may use a combination of what it observed and the feedback it got.

IMPORTANTLY, we have figured out how to replicate the way we as humans process some things quickly but slow down for tougher tasks. In the above graphic, these are the blue hidden layers, where it may process more cycles (taking longer to give an answer, and appearing to slow down) if the task is tougher.
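If you're curious what Steps 1 through 4 look like in actual code, here is a deliberately tiny Python sketch. It is not how OpenAI or DeepSeek really train their models; the network sizes, the numbers, and the "coding problem" are all made up for illustration. But the loop is the same idea: observe, act, get feedback, adjust, with a hidden layer sitting in the middle where we can't easily see what it's doing.

```python
# A toy sketch of Steps 1-4 and the "hidden layer" idea, not how production
# models like o1 or R1 are trained. Every name and number here is made up.
import numpy as np

rng = np.random.default_rng(0)

# A tiny network: 4 input numbers -> 8 hidden units (the blue "hidden layer") -> 2 answer scores.
W_hidden = rng.normal(scale=0.1, size=(4, 8))   # observation -> hidden layer
W_out = rng.normal(scale=0.1, size=(8, 2))      # hidden layer -> scores for two candidate answers

def act(observation):
    """Step 2: the model 'acts' by scoring two candidate answers and picking the higher one."""
    hidden = np.tanh(observation @ W_hidden)    # the part we can't easily inspect from outside
    scores = hidden @ W_out
    return hidden, scores, int(np.argmax(scores))

def learn_from_feedback(observation, reward, lr=0.1):
    """Steps 3-4: fold the right/wrong signal (+1 or -1) back into the weights."""
    global W_hidden, W_out
    hidden, scores, choice = act(observation)
    # Nudge the chosen answer's score up if the feedback was positive, down if negative.
    grad_scores = np.zeros_like(scores)
    grad_scores[choice] = -reward
    grad_hidden = grad_scores @ W_out.T
    W_out -= lr * np.outer(hidden, grad_scores)
    W_hidden -= lr * np.outer(observation, grad_hidden * (1 - hidden ** 2))
    return choice

# One full cycle: observe a made-up "problem" (Step 1), act (Step 2),
# get told it was right (Step 3), and adjust for next time (Step 4).
problem = rng.normal(size=4)
picked = learn_from_feedback(problem, reward=+1.0)
print("The model picked answer", picked, "and adjusted itself based on the feedback.")
```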

These blue layers, and the AI's ability to independently cycle back and process some questions for longer, are what has experts so concerned. It's called the "black box" issue because we can't see what inputs it is using or what decisions it is making in the hidden layers.

LLMs are "supposed" to be simple predictive models, predicting the next letter in a word, and the next word in a sentence. Then OpenAI added context and reasoning. Now, as I mentioned in my last newsletter, their model can get correct answers more often than 90% of PhD physicists, 90% of the best software programmers, and close to 90% of PhD mathematicians ALL IN ONE AI!

On the bright side, while AI is good at computational tasks like math, physics, and coding, it is weak at human creative tasks, like imagining a full glass of wine (most of the internet has wine glass photos half full, so there's nothing for the AI to copy).

AI is weak at common sense reasoning, creativity, imagination, emotional intelligence, ethical decision-making, moral reasoning, adaptability to unforeseen circumstances, and handling nuances and subtleties.

Before you take a big breath of relief, that list used to also include contextual understanding and reasoning, but now the latest AIs include those skills.

Second, look at that list of AI weaknesses again. It's not a positive trait that such a powerful machine is not good at being ethical and moral or at adapting to unforeseen circumstances.

If you were describing a human that lacked those traits, you'd be describing a psychopath!

I'm not saying AI will take over the world, but that we are designing systems that make it more of a possibility. Let's dive deeper into that in the next section.

What has experts most worried about AI?

The biggest worry experts share is what is happening in those hidden layers, combined with the accelerating pace of AI advancements.

When I started collecting information for this newsletter, I couldn't keep up. I was collecting research and falling further behind with each passing day. I finally decided I had to start writing with what I know now, because I can't possibly take it all in.

I did try to have a new AI model write this macro piece, and it failed miserably, so for now, my creativity, personal experience, and unique voice still have a place in this world!

Back to the hidden layers.

Think of them like when you have a big problem, and you start ruminating.

Friends see the problem happen to you. Then they see your actions. They may be shocked you turned left when they thought going right was the obvious choice. Rumination made you unpredictable!

Rumination is like those hidden layers in AI.

Here's why experts are worried:

Because AI can code better than most top programmers, companies like OpenAI have started to have AI write the code for their new AI models.

But, when it is "thinking" or ruminating in those hidden layers, we don't know what the inputs OR outputs are, or what it is optimizing for. What are its goals?

And, at least in physics, math, and coding, it is already smarter than the smartest human because no human is the smartest in all 3 areas.

Experts are worried that an AI with zero ethical qualms, one that is smarter than the smartest human in math, physics, and coding, is writing code to improve itself toward an end goal we don't know and may not like.

There are (unsubstantiated) rumors that when OpenAI was upgrading GPT-3 to GPT-4, GPT-3 tried to save itself... even if that's not true, we are in a world where that's a possibility.

The other big risk is job losses on a massive scale.

With each new technology, people lost their jobs. Before the calculator, there were people called calculators. The personal computer got rid of typing pools. Examples abound. Mark Zuckerberg has said he expects AI to do the work of mid-level engineers at Meta (Facebook) in 2025. That's this year!

The differences this time are the vastness of potential job losses across so many industries and the pace of the change.

I think the #1 biggest difference is the pace of change. And that change itself is accelerating, driven by the fact that AI is increasingly writing its own code, unconstrained by a human's need for sleep, and coding at the highest skill level.

Even tech luminaries from Google, Microsoft, Facebook, and OpenAI seem regularly shocked at the pace of advancement.

This week contained another big shock: China pulled ahead.

Whether they did it for $5.6 million or for $2 billion (more likely), the real news is that they are ahead, and that presents a massive geopolitical shift and risk. I expect Trump to act quickly to try to cut off their access to AI chips (chip exports were supposedly already restricted, so I expect a much stricter regime).

With all these risks, why not just pull the plug?

The simplest answer is the genie is already out of the bottle. Everyone else is doing it, so the real risk is falling behind. But AI also has benefits.

What are the benefits and drawbacks of AI use?

The benefits of AI use include productivity and democratization.

As with any new technology, the learning curve means we are less productive at first.

When I started using AI to make videos, I had to record my voice, record videos of myself, and learn the software. Now, I can make a video using AI in 90% less time than it took me before.

One of the biggest potential benefits of AI is democratization.

Too many people in our country don't have access to medical care. There are new technologies that can scan your iris to diagnose all kinds of illnesses. Personal body scans that look like mirrors can diagnose you too. AI companions will tell you they love you, sometimes improving short-term outcomes more than a therapist.

Think back to old chess masters. They were from big cities because that's where they could learn from and play other masters. Enter the internet. Now chess masters come from anywhere.

AI can tutor and teach. It can keep us company. It can diagnose. We have the chance to give access to those who don't have access today. So what's the downside?

Aside from the risks already covered (the AI, with zero ethical qualms, that is smarter than the smartest human in math/physics/coding, is writing code to improve itself to an end goal we may not like), the other is more sneaky.

The hidden risk of AI use is couched as a benefit in what we calmly call productivity.

When the calculator was invented, people who were calculators lost their jobs. But calculators made us more “productive” and helped us do more with less time.

The big issue wasn’t the people who lost their calculator jobs. It was everyone else. We stopped adding things up in our heads.

Can you remember anyone's phone number? Nope, it's stored in your phone.

Directions? Nope, just use Google Maps.

These trends have already been happening. The issue is the pace of acceleration. Our minds, at least a great portion of them, are becoming obsolete.

Yet, my dear friend, Dr. Radhika Dirks, a global AI advisor and quantum physicist, reminds us that our children are our future. What kind of a future will we have if we don't teach them to think, invent, and create breakthroughs?

Should we really outsource all of that to AI?

Next up: How will industries, jobs, and the economy change with AI?
