How Hard Can It Be? Jensen Huang, the Hawking Fellowship, and the New Age of Manufactured Intelligence

Author: Zion Zhao, Real Estate | 88844623 | 狮家社小赵

Author’s note: Not financial advice; please do your own due diligence!

Jensen Huang’s 2025 Professor Stephen Hawking Fellowship address at the Cambridge Union links the university’s legacy of Newton, Turing and Hawking with today’s revolution in accelerated computing and AI. Huang’s story is less about hardware and more about character, courage and institutional choices in an age of “manufactured intelligence.”

The address traces Nvidia’s journey from a three-founder startup in 1993 to the company that pioneered the GPU, CUDA and today’s AI data-centre stack, enabling deep learning to scale globally. Huang emphasises that each reinvention—graphics, general computing, AI—required leaping a “canyon” where costs were high, applications immature and survival uncertain. What made this possible, he suggests, was a combination of immigrant grit, a growth mindset and a culture that rejects “rank-and-yank” in favour of psychological safety and rapid learning from failure.

Huang’s most ambitious vision: turning biology into engineering. Drawing parallels with electronic design automation, he argues that AI-driven “digital twins” of molecules and cells could shift drug discovery toward true drug design, building on breakthroughs such as AlphaFold and GPU-accelerated bio-AI platforms.

Huang also frames AI supercomputers like Cambridge-1 and Isambard-AI as critical infrastructure, comparable to energy and the internet, and urges countries such as the UK and South Korea to seize a “Goldilocks moment” in AI by pairing talent with large-scale compute.

On jobs and education, he contends that AI will transform rather than eliminate work, making humans busier by automating routine tasks and shifting value toward ill-defined, creative problem-solving. The essay concludes that honouring Hawking’s legacy in this new era requires combining Huang’s childlike optimism—“how hard can it be?”—with institutions and regulations that keep us honest, protect the vulnerable and share AI’s gains widely.

https://www.huttonsgroup.com/vcard/R071443Z

https://linktr.ee/zionzhao



How Hard Can It Be?

Jensen Huang, the Hawking Fellowship, and the New Age of Manufactured Intelligence

When Jensen Huang walks into the Cambridge Union debating chamber to receive the 2025 Professor Stephen Hawking Fellowship, there is a sense of symmetry in the room. On one side, the tradition of Newton, Maxwell, Turing and Hawking. On the other, the man whose company helped turn artificial intelligence from science fiction into an industrial platform.

The Hawking Fellowship, awarded by students rather than the university itself, recognises individuals who have shaped science, technology and its communication—particularly for younger generations. Previous fellows include Bill Gates (2019), Jane Goodall (2020), and the OpenAI team (2023). (The Cambridge Union) Adding Jensen Huang to that list does more than extend a corporate honour roll. It is a signal that accelerated computing and AI are now seen as part of the same intellectual lineage as cosmology and theoretical physics.


Hawking’s “No Boundary” Mindset, Reimagined for AI

Huang begins by framing Cambridge as a “cathedral of world-changing ideas,” threading a quick tour through Newton, Darwin, Maxwell, Turing and Hawking before landing on a central theme: curiosity without boundaries. Hawking’s own “no boundary” proposal in cosmology—developed with James Hartle—was an attempt to describe a universe without a temporal edge, where time behaves more like a spatial dimension at the earliest moments of the Big Bang.

Huang adapts that image metaphorically: even when Hawking’s body was confined, his mind “traveled beyond the stars.” In Huang’s rendering, the lesson is not simply intellectual brilliance but a combination of conviction, humour and stubborn optimism. Discovery, he argues, is not just a function of IQ; it is a function of character.

This framing matters. It sets up the rest of his talk as a kind of applied Hawking ethos: a refusal to treat physical limitation, technological incumbency or institutional inertia as hard constraints. Instead, the question is always: how hard can it be?


Nvidia’s “Impossible Odds” and the Birth of Accelerated Computing

Factually, Huang’s summary of Nvidia’s journey checks out. Nvidia was founded in 1993 by Huang, Chris Malachowsky and Curtis Priem, initially targeting graphics for gaming and multimedia. (Yahoo Finance) In 1999 the company introduced the GeForce 256, marketed as the world’s first “GPU”: a graphics processing unit that moved much of the 3D rendering workload, including transform and lighting, off the CPU. (Yahoo Finance)

That architectural shift laid the groundwork for something far more consequential. In 2006, Nvidia launched CUDA, a parallel programming model that allowed developers to use GPUs for general-purpose computation rather than just graphics. (NVIDIA Docs) With CUDA, the GPU stopped being a niche add-on for gamers and became an engine for scientific computing, simulation, and eventually deep learning.
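
To make that shift concrete, here is a minimal sketch of the CUDA data-parallel model, written in Python via the Numba library rather than CUDA C (an illustration of mine, not anything from Huang’s talk; the kernel name and sizes are arbitrary). Thousands of threads each apply the same operation to one array element, the pattern that later made deep-learning workloads such a natural fit for GPUs. It assumes an NVIDIA GPU and the numba and numpy packages.

```python
# A minimal sketch of CUDA-style data parallelism via Numba's CUDA bindings.
# Illustrative only: requires an NVIDIA GPU plus the numba and numpy packages.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard: the grid may overshoot the array
        out[i] = a[i] + b[i]      # each thread computes one element

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # launch ~1M parallel threads

assert np.allclose(out, a + b)
```

The same launch-many-threads pattern, scaled up and applied to matrix operations, is what CUDA opened up for scientific computing and, eventually, neural networks.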

Huang’s bolder claim—that Nvidia helped ignite an “AI industrial revolution” by providing an instrument to “manufacture intelligence”—is not pure marketing. The modern wave of deep learning, from AlexNet in 2012 to large language models, has been tightly coupled to GPU acceleration and specialised software stacks. (Nature) Economically, the concentration of AI training workloads on Nvidia hardware has created both a massive profit pool and a strategic dependency for hyperscalers, startups and governments alike.

What Huang underplays, perhaps unsurprisingly, is the competitive and geopolitical dimension. Alternatives such as Google’s TPUs, AMD’s Instinct line, and specialised AI accelerators from companies like Cerebras and Graphcore show that “manufacturing intelligence” is not a single-vendor project. But it is fair to say Nvidia defined the template others are racing to copy.


Leadership as Sacrifice: Grit, Growth Mindset and Psychological Safety

The most revealing parts of Huang’s conversation are not about chips but about what it means to lead a company through repeated reinvention.

He jokes that his two co-founders “didn’t want the job” of CEO and were probably right. Being CEO, he insists, is mostly a lifetime of sacrifice—creating conditions for others to do their life’s work, making painful trade-offs, and absorbing uncertainty. That characterisation aligns with a growing management literature that shifts focus from heroic, all-knowing leaders to leaders who enable learning and adaptivity in their organisations.

Three themes stand out:

  1. Character forged by struggle. Huang links his resilience to a childhood immigrant story: modest means, cultural dislocation, and parents who insisted he was “special” and capable. This maps closely to Angela Duckworth’s concept of grit—a combination of passion and perseverance for long-term goals—which empirical work has correlated with sustained achievement in demanding fields. (Simon & Schuster)

  2. A growth mindset and intellectual honesty. Huang’s description of constantly revisiting his assumptions, being “quick to adapt” and refusing to tie his ego to past declarations closely echoes Carol Dweck’s “growth mindset”: the belief that abilities can be developed through effort, strategies and feedback rather than being fixed traits. (PenguinRandomhouse.com) For Dweck, this mindset predicts greater willingness to embrace challenges and learn from failure—exactly the behaviour Huang insists is non-negotiable in fast-moving tech.

  3. Psychological safety over rank-and-yank. Huang recounts experimenting with Silicon Valley’s fashion for ranking employees and cutting the “bottom 5%” every year—then abandoning it entirely. This squares with Amy Edmondson’s influential work on psychological safety, which shows that teams learn and perform better when members feel safe to take interpersonal risks, admit mistakes and voice dissent. (Massachusetts Institute of Technology) A culture that punishes recent failure, especially in high-uncertainty innovation, tends to kill exactly the risk-taking organisations need.

Here Huang’s instincts are well supported by evidence: systematic culling may superficially increase average performance metrics, but it can also hollow out long-term innovation capacity and discourage the kind of exploratory projects that drive step-change value.


Crossing the Canyon: From Graphics to General Intelligence

One of the richest sections of the talk is Huang’s analogy between reinventing a product category and leaping across a canyon.

When Nvidia introduced programmable shaders and early GPUs suitable for general computation, the cost of the new technology was significantly higher than the immediate value perceived by customers. There were no applications yet; developers had not rewritten their code; buyers would have rationally preferred cheaper, fixed-function cards. The same pattern repeated with smartphones: early iPhones were expensive phones with a few extra functions, not yet the computational prosthetics they would become.

Geoffrey Moore famously described this pattern as “crossing the chasm” between early adopters and the mainstream market. (DDN) Clayton Christensen, in a complementary frame, called it the innovator’s dilemma: incumbent firms struggle to adopt disruptive technologies because they initially underperform on the metrics existing customers care about. (HPCwire)

Huang’s answer to this dilemma is psychologically simple but operationally brutal: have the courage to leap anyway, accept a period where costs are high and value is invisible, and survive long enough to emerge on the other side. Nvidia has done this multiple times: from fixed-function graphics to programmable GPUs, from gaming to data-centre compute, from rendering pixels to accelerating neural networks.

The fact that Nvidia is, as Huang notes, one of the very few chip companies to successfully navigate multiple computing eras is consistent with the historical record. (Yahoo Finance) The lesson is less “be brave” and more “build balance sheets, cultures and investor relationships that let you absorb those canyon years without dying.”


Turning Biology into Engineering: AI, Drug Design and Digital Twins

Huang’s most audacious intellectual move is his attempt to apply the logic of electronic design automation to biology.

He notes that 40 years ago, chip engineers designed circuits “by hand.” Today, essentially 100% of modern chips are conceived as digital twins: complex hierarchies of logic blocks simulated exhaustively before any silicon is fabricated. (Yahoo Finance) The result is extraordinary: chips with tens of billions of transistors that usually “just work” on first silicon.
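
As a toy illustration of that “simulate before you fabricate” discipline (my sketch; real EDA flows use hardware description languages and industrial simulators over billions of transistors), here is a one-bit full adder verified exhaustively against its mathematical specification before any silicon exists:

```python
# Toy sketch of the "simulate exhaustively, then fabricate" idea behind EDA.
# Real flows verify billions of transistors; here we verify a 1-bit full adder.
def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    s = a ^ b ^ cin                    # sum bit
    cout = (a & b) | (cin & (a ^ b))   # carry-out bit
    return s, cout

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            # Specification: the two output bits must encode binary addition.
            assert s + 2 * cout == a + b + cin

print("All 8 input combinations match the specification.")
```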

Biomedicine, by contrast, still relies heavily on what Huang calls “drug discovery”—a process that often resembles scientific mushroom hunting: systematic, but still surprisingly empirical. Here, he argues, AI can help biology make the same leap electronics did: from discovery to design.

This vision is not science fiction. DeepMind’s AlphaFold 2 achieved near-experimental accuracy in predicting many protein structures from amino-acid sequences, a breakthrough recognised by the 2024 Nobel Prize in Chemistry for Demis Hassabis and John Jumper. (Wikipedia) AlphaFold 3 extends this to predicting how proteins interact with other molecules, including potential drugs. (The Guardian)

On top of that, platforms such as Nvidia’s own BioNeMo and open-source projects like MONAI (Medical Open Network for AI) are building domain-specific toolkits for medical imaging and molecular modelling on GPU supercomputers. (Wikipedia)

Huang’s prediction—that we will be able to “talk to proteins” using natural-language interfaces, asking about their behaviour under different temperatures, solvents or mutations—sounds fanciful but is conceptually aligned with current work on multimodal models that treat molecular structures as tokens in a learned “language of biology.”
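
At its simplest, “treating molecular structures as tokens” means mapping a molecule’s building blocks to integer ids a model can learn over, just as text models do with words. The sketch below is my own illustration (VOCAB and tokenize are hypothetical names; real protein language models add special tokens and learned embeddings on top of this step):

```python
# Minimal sketch of tokenising a protein for a "language of biology" model.
# Real protein language models add special tokens and learn embeddings;
# this shows only the sequence-to-token-ids step.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
VOCAB = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def tokenize(sequence: str) -> list[int]:
    """Map each residue to an integer id, rejecting unknown characters."""
    try:
        return [VOCAB[aa] for aa in sequence.upper()]
    except KeyError as e:
        raise ValueError(f"Unknown residue: {e.args[0]}") from None

# A short fragment of human insulin's B chain, as an example input.
tokens = tokenize("FVNQHLCGSHLV")
print(tokens)  # [4, 17, 11, 13, 6, 9, 1, 5, 15, 6, 9, 17]
```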

However, it is important to temper the optimism. Recent papers have highlighted robustness issues in protein-folding networks and limits to their ability to capture dynamic conformational changes. (arXiv) Experimental validation, regulatory scrutiny and ethical oversight will remain crucial. AI will not abolish wet lab science; it will, if used well, make it more targeted and imaginative.


Instruments of Knowledge: Cambridge-1, Isambard-AI and Global AI Infrastructure

Huang’s argument that AI supercomputers are becoming core infrastructure—akin to energy grids or the internet—is already visible in the physical world.

In the UK, Nvidia partnered with AstraZeneca, GSK and others to build Cambridge-1, launched in 2021 as the country’s most powerful supercomputer focused on healthcare and life sciences. (The Cambridge Union) More recently, the University of Bristol’s Isambard-AI system, built on 5,448 Nvidia GH200 Grace Hopper Superchips, debuted as the UK’s fastest AI supercomputer and one of the top systems globally, aimed explicitly at work in climate science, engineering and medicine. (University of Bristol)

Huang’s claim in Cambridge that the UK sits in a “Goldilocks moment”—rich in computer-science history and entrepreneurial talent but under-resourced in AI infrastructure—is backed by his actions. In September 2025, Nvidia announced plans to invest around £2 billion in the UK AI ecosystem, including data-centre capacity, startups and research collaborations, in a deal publicly championed by Prime Minister Keir Starmer. (NVIDIA Investor Relations)

Similarly, his description of South Korea as an industrial powerhouse well placed to reinvent manufacturing with AI reflects reality. South Korea is a semiconductor and electronics giant, home to Samsung and SK hynix, and in 2024–2025 Nvidia announced partnerships and large-scale GPU deployments in Korean data centres worth billions of dollars, with government support. (Anadolu Ajansı)

Huang also contrasts regulatory philosophies. He suggests that China tends to “regulate late,” with many political leaders trained as engineers, while Western systems with lawyer-heavy elites risk overregulating early and stifling innovation. That is clearly his normative view, not a neutral fact. The policy literature shows a more complicated picture: while heavy-handed, premature rules can slow experimentation, weak or delayed regulation can entrench harms, exacerbate inequality and create systemic risks. (OECD)

The deeper point, though, stands: AI capability is increasingly gated by access to large-scale compute and energy. Nations that combine talent, capital and infrastructure will shape not only the economics of AI but the norms embedded in its deployment.


“Intelligence Is Becoming a Commodity”: Education After ChatGPT

One of the sharpest questions from the audience came from a Cambridge academic who asked whether exam-driven ranking still made sense in a world where “intelligence is about to be a commodity.”

Huang is careful here. He does not argue against rigour or standards; instead, he questions systems that reduce complex human potential to narrow rankings and punish short-term failure. In a world where information retrieval and basic reasoning are increasingly automated, what remains scarce is not access to answers but the courage to ask new questions, the discipline to pursue them, and the interpersonal skills to build teams that can execute.

This direction is echoed in emerging education and workforce debates. The OECD, for example, has warned that while AI can improve productivity and job quality, it also risks displacing routine tasks and amplifying inequalities unless paired with re-skilling and new forms of assessment that emphasise creativity, collaboration and critical thinking. (OECD)

For universities like Cambridge, Huang’s implicit challenge is stark: if exams primarily reward memorisation and standardised problem-solving, AI systems will outperform humans at exactly the tasks we currently test. In that environment, ranking students on their ability to do what machines can do instantly is both demotivating and economically misaligned.


Jobs Will Change, Not Vanish—But Only If We Adapt

Huang’s line—“you will not lose your job to AI; you will lose your job to someone who uses AI”—has already become a meme. But does the evidence support his optimism that jobs will be transformed rather than destroyed?

Radiology is his case study. For more than a decade, commentators have predicted that image-interpreting specialists would be early casualties of AI, based on advances in computer vision for CT, X-ray and MRI. In practice, a global shortage of radiologists has persisted. The UK’s 2023 Clinical Radiology Workforce Census reports a 30% shortfall in consultant radiologists, forcing the NHS to outsource hundreds of millions of pounds of imaging work to private firms. (RSNA Publications)

At the same time, the percentage of radiology departments deploying AI tools is rising: by late 2024, over half of UK trusts were using some form of AI in clinical practice. (SpringerLink) Research suggests these systems are best understood as workload multipliers rather than replacements—automating triage, flagging anomalies, drafting reports, and reducing burnout, enabling radiologists to focus on complex cases and direct patient communication. (Nature)

Zooming out, large-scale analyses of AI and employment by the OECD and the World Economic Forum converge on a similar conclusion: AI will both destroy and create jobs, with net outcomes depending heavily on policy, education and organisational choices. The WEF’s Future of Jobs Report 2025 estimates that technology trends, including AI, could create roughly 170 million jobs and displace around 92 million by 2030, implying a net gain of roughly 78 million jobs but significant transition pain. (World Economic Forum) The OECD’s 2023 Employment Outlook likewise finds that AI use at work is associated with higher job satisfaction and wages in many cases, but also with risks around stress, surveillance and inequality. (OECD)

So Huang’s optimism is plausible but not guaranteed. Whether AI augments workers or replaces them depends on choices: how firms redesign jobs, how governments regulate and tax, which skills education systems prioritise, and whose interests are centred in those decisions.


The Entrepreneur’s Heuristic: “How Hard Can It Be?”

Huang repeatedly returns to a deceptively simple heuristic: look at a problem, reason from first principles, and ask, how hard can it be? His mother teaching him English with a dictionary despite not speaking the language is his archetype for this mindset.

There is real power here. First-principles reasoning—breaking problems down to their physical, mathematical or economic fundamentals—has a long pedigree in science and engineering. It guards against cargo-culting past solutions and makes it easier to spot when technology trajectories will bend.

Huang’s own track record of applying this to GPUs, autonomous vehicles, robotics and drug design, as he notes with some pride, is impressive. But it also sits alongside survivorship bias: for every founder who looks at an intractable domain and says “how hard can it be?” there are many who discover the answer is “very hard indeed,” and run out of capital or time before they can adapt.

This is where Huang’s other principles matter: staying “in the game” long enough to learn, being willing to pivot quickly when reality contradicts your model, and building cultures that let people admit error without humiliation. Those behaviours are precisely what growth-mindset and psychological-safety research identifies as core to high-performing, innovative teams. (Google Scholar)

The deeper entrepreneurial lesson, then, is not to copy Huang’s risk tolerance in isolation, but to pair bold first-principles bets with institutional designs that can absorb failure, recycle learning and protect people who take intelligent risks.


Regulation Without Paralysis: A Balanced View

Huang’s advice to the UK startup community—“regulate less”—is provocative, especially in a room full of future policymakers. His point is that regulating hypothetical harms too early can freeze experimentation and cede ground to jurisdictions that move faster.

The policy data show why this debate is so contentious. On one hand, under-regulation risks biased algorithms harming vulnerable groups, opaque decision-making in critical domains, and concentration of power in a handful of firms. On the other, rigid, overly prescriptive rules can lock in incumbent technologies and slow the diffusion of beneficial innovations. (OECD)

A more balanced approach—often described as “risk-based” regulation—is emerging in the OECD and other international fora: focus rules on high-risk use cases (e.g., healthcare, critical infrastructure, employment screening), require transparency and accountability, and allow low-risk experimentation in sandboxes. (OECD)

Huang’s warning is still useful as a counterweight: if every new AI application is treated as an existential threat, societies may delay precisely the tools they need for climate modelling, drug discovery, and productivity growth. But ignoring genuine systemic risks would be equally irresponsible. Hawking himself, late in life, repeatedly urged serious reflection on the long-term trajectory of AI, even as he celebrated its potential.


Conclusion: Hawking’s Fellowship as a Call to Courage and Responsibility

By the end of the evening at Cambridge, the Hawking Fellowship in Huang’s hands feels less like a lifetime achievement award and more like a challenge—especially to the students in the room.

From one angle, his story is the familiar Silicon Valley arc: immigrant founder, garage-era startup, improbable product bets, and eventual industry dominance. But through the lens of Hawking’s legacy, a different pattern emerges:

  • Curiosity without boundaries – taking tools invented for gaming and using them to simulate the cosmos, fold proteins and model economies.

  • Character shaped by struggle – using personal hardship and corporate near-death experiences as training grounds for long-term grit, rather than as excuses for cynicism.

  • Courage to reinvent – repeatedly leaping across the canyon from one computing era to the next, knowing that for a time costs will be high and value invisible.

  • Responsibility at scale – recognising that when you build the “instrument of knowledge discovery” for the world, your choices about access, safety and governance are no longer merely corporate strategy; they are public policy by another name.

As a long-term shareholder of $NVDA, I see Huang’s Cambridge appearance as a moment where several threads of our time knot together: AI as a general-purpose technology, biology on the cusp of becoming an engineering discipline, work and education being reshaped by commoditised intelligence, and nations scrambling to build the “intelligence infrastructure” that will determine their future prosperity.

The Hawking Fellowship asks a simple, uncomfortable question: if intelligence—human and machine—is no longer bounded by the old constraints, what do we choose to do with it?

Huang’s answer, distilled, is disarmingly childlike: stay curious, be optimistic, and ask “how hard can it be?” My own addition, after tracing the evidence, would be this: match that optimism with institutions, regulations and habits of mind that keep us intellectually honest, protect the vulnerable, and ensure the gains from this new industrial revolution are broadly shared.

Only then will the spirit of Hawking’s “no boundaries” truly be honoured in the age of manufactured intelligence.

In a world racing to “manufacture intelligence” with AI, you still need real, tangible assets that can compound quietly in the background. That is where disciplined, research-backed Singapore real estate comes in.

I am a Singapore-based real estate professional who lives at the intersection of property, macroeconomics, and markets. Every day, I dedicate hours to writing in-depth essays—like How Hard Can It Be? Jensen Huang, the Hawking Fellowship, and the New Age of Manufactured Intelligence—and to studying global geopolitics, interest-rate cycles, equity and crypto markets. I do this due diligence so that when you make a move, it is anchored in data, context and risk management, not headlines.

With a strong foundation in economics, portfolio construction and Singapore Land and Business Law, and as an Officer Commanding (Captain) in the SAF, I bring both analytical discipline and operational reliability to your real estate decisions. Whether you are an international investor, an investor from mainland China, a South East Asian family, or a Singaporean upgrading your home, or you are planning a child’s overseas education (留学), accompanying them as a study guardian (陪读), or setting up a family office (家办), my role is to help you integrate Singapore property sensibly into your broader portfolio.

In a volatile world of stocks and digital assets, quality Singapore real estate can provide a more stable anchor, with the potential for long-term capital appreciation and rental income that behaves like dividends, without promising what markets can never guarantee.

If you want an advisor who understands AI, macro and markets—but is firmly grounded in bricks, land and law—reach out. Let’s design a resilient, globally aware portfolio in which Singapore property plays its rightful, strategic role.

https://www.huttonsgroup.com/vcard/R071443Z

https://linktr.ee/zionzhao


References (APA style)

Cambridge Union. (n.d.). Professor Stephen Hawking Fellowship. Cambridge Union Society. (The Cambridge Union)

Duckworth, A. L. (2016). Grit: The power of passion and perseverance. Scribner. (Simon & Schuster)

Dweck, C. S. (2006). Mindset: The new psychology of success. Random House. (Google Scholar)

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. (Massachusetts Institute of Technology)

Jing, A. B., et al. (2025). AI solutions to the radiology workforce shortage. npj Digital Medicine. (Nature)

Jumper, J., Evans, R., Pritzel, A., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature, 596, 583–589. (Wikipedia)

NVIDIA. (2021). NVIDIA Cambridge-1: The UK’s most powerful supercomputer for healthcare and life sciences. (The Cambridge Union)

NVIDIA. (2023). CUDA parallel computing platform and programming model. (Developer documentation). (NVIDIA Docs)

NVIDIA. (2024). Isambard-AI: UK’s fastest AI supercomputer at the University of Bristol. (University of Bristol)

NVIDIA. (2024). Building AI infrastructure and ecosystem to fuel Korea’s innovation. (Anadolu Ajansı)

Organisation for Economic Co-operation and Development (OECD). (2023). OECD employment outlook 2023: Artificial intelligence and the labour market. OECD Publishing. (OECD)

Organisation for Economic Co-operation and Development (OECD). (2024). AI and work. OECD.AI. (OECD)

Royal College of Radiologists. (2024). Clinical radiology UK workforce census 2023. Royal College of Radiologists. (RSNA Publications)

Varadi, M., Anyango, S., Deshpande, M., et al. (2022). AlphaFold protein structure database: Massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Research, 50(D1), D439–D444. (Google Scholar)

World Economic Forum. (2025). The future of jobs report 2025. World Economic Forum. (World Economic Forum)

World Economic Forum. (2025, January 8). The jobs of the future – and the skills you need to get them. World Economic Forum Insight Article. (World Economic Forum)

