China’s Analogue AI Chip: Hype, Evidence, and What the Nature Electronics Paper Actually Shows (Oct 2025)


Author: Zion Zhao Real Estate | 88844623 | 狮家社小赵
Author's note: I am not a professional in the semiconductor or technology industry; I am just a nerd doing due diligence for my own investments. Definitely not financial advice. Read on for my two cents' worth.

Executive take

A Peking University team reports a high-precision analogue matrix-equation solver built on resistive random-access memory (RRAM). In peer-reviewed benchmarks, the authors show measured speed/efficiency gains over digital baselines at modest sizes and argue that, with realistic circuit improvements, their approach could reach up to ~1,000× higher throughput and ~100× better energy efficiency than leading digital processors at the same numerical precision, on specific linear-algebra tasks (not all of AI). The claims are task-scoped and contingent on engineering assumptions; still, the work is a serious advance in analogue, “in-memory” computing. (Zuo et al., 2025; Bela, 2025).

1) What the researchers actually built

The paper—published October 13, 2025 in Nature Electronics—demonstrates a precise and scalable analogue matrix-equation solver using RRAM crossbar arrays. Two chips were used: a 1-Mb RRAM device for high-precision matrix–vector multiplications (MVMs) and an 8×8 RRAM array wired in a closed loop to perform low-precision analogue matrix inversion (INV) in one step. Fabrication used a commercial 40-nm CMOS platform, which is notable because foundry-compatible flows are a precondition for scale. The system experimentally solved 16×16 real-valued inversions at 24-bit fixed-point precision (comparable to FP32), then extrapolated performance for larger matrices via a block-matrix method. (Zuo et al., 2025).
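To make the high-precision MVM idea concrete, here is a toy NumPy simulation of bit slicing, the standard way a 24-bit fixed-point MVM is composed from low-precision analogue operations: the matrix is quantized, split into a differential positive/negative pair (as RRAM crossbars typically do), sliced into small power-of-two digits, and the cheap per-slice MVMs are recombined with binary weights. The slice widths and function names here are my own illustrative assumptions, not the paper's circuit parameters.

```python
import numpy as np

def bit_sliced_mvm(A, x, bits_per_slice=3, n_slices=8):
    """Emulate a high-precision MVM as a weighted sum of low-precision MVMs.

    A is encoded in 24-bit fixed point (3 bits x 8 slices, an assumed split).
    Positive and negative entries go to separate nonnegative 'conductance'
    arrays, mimicking a differential RRAM crossbar pair.
    """
    scale = 2 ** (bits_per_slice * n_slices)
    Aq = np.round(A * scale).astype(np.int64)        # fixed-point encode
    Ap, An = np.maximum(Aq, 0), np.maximum(-Aq, 0)   # differential pair
    base = 2 ** bits_per_slice
    y = np.zeros(A.shape[0])
    for s in range(n_slices):
        pos = (Ap // base**s) % base                 # low-precision slice
        neg = (An // base**s) % base
        y += (base**s) * ((pos - neg) @ x)           # weighted partial MVM
    return y / scale
```

The recombination step is purely additive, which is why the heavy O(N²) work stays in the analogue arrays while the digital side only scales and sums partial results.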

The plain-English version. Instead of shuttling numbers between memory and a processor (the von Neumann bottleneck), this architecture computes inside memory: conductances in an RRAM crossbar physically encode matrix entries; Ohm’s and Kirchhoff’s laws realize MVMs “in one shot.” The team then wraps a clever iterative refinement around a one-step analogue inversion, achieving high precision without pulling heavy work back into digital. This is why their solver’s effective complexity can scale more gently than the O(N³) digital inversion path. (Zuo et al., 2025; Wan et al., 2022; Leroux et al., 2025).
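A toy NumPy sketch of that refinement loop (my own illustration, not the authors' circuit): a deliberately noisy inverse stands in for the one-step low-precision analogue INV, exact matrix products stand in for the high-precision bit-sliced MVMs, and iterative refinement drives the residual down geometrically. The 5% noise level and iteration count are assumptions for the demo.

```python
import numpy as np

def refine_solve(A, b, n_iters=30, inv_noise=0.05):
    """Solve Ax = b by iterative refinement around a coarse inverse.

    M plays the role of the one-step analogue INV: the true inverse
    corrupted by multiplicative noise. Each iteration needs only one
    high-precision MVM (the residual) and one coarse correction.
    """
    rng = np.random.default_rng(1)
    M = np.linalg.inv(A) * (1 + inv_noise * rng.standard_normal(A.shape))
    x = np.zeros_like(b)
    for _ in range(n_iters):
        r = b - A @ x          # residual: high-precision analogue MVM
        x = x + M @ r          # coarse correction: analogue INV step
    return x
```

Convergence only requires the spectral radius of I - MA to be below 1, i.e. the coarse inverse merely has to be roughly right; final precision comes from the refinement loop, which is the core trick that lets low-precision analogue hardware deliver FP32-class answers.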


2) Where the “1,000× faster than Nvidia” line comes from—and what it means

The Nature Electronics paper benchmarks the analogue solver against an Nvidia H100, an AMD Vega 20, and a published MIMO ASIC, holding FP32-class precision constant. Two distinct claims appear:

  1. Measured/near-term: at modest sizes (e.g., N₀ = 32), the analogue approach already outperforms those digital baselines in throughput and energy efficiency; at N = 128, it remains ~3–10× ahead (depending on block mode).

  2. Projected/with faster op-amps: if the analogue blocks reach ~10–20 ns response via higher-GBWP amplifiers (a credible near-term circuit optimization), the solver could deliver “up to three orders of magnitude” higher throughput and nearly two orders of magnitude higher energy efficiency than the digital references.

Those upper-bound numbers—“~1,000×” throughput, “~100×” energy efficiency—are scenario-based projections, not a blanket statement about all AI workloads. (Zuo et al., 2025; Bela, 2025).

As always, scope matters. The gains are quoted for matrix inversions and closely related solves (e.g., in scientific computing, massive-MIMO detection, and second-order training where Hessian/normal-equation solves dominate). They are not a claim that analogue chips will train GPT-class transformer models end-to-end faster than Nvidia’s general-purpose GPUs. (Zuo et al., 2025).


3) Why this is a real advance (and not just a press-office headline)

For two decades, analogue in-memory computing promised speed/efficiency but struggled with precision, variability, drift, line resistance, ADC/DAC overheads, and scaling. This work addresses the precision barrier by marrying a closed-loop analogue INV (to cut iterations) with bit-sliced, high-precision analogue MVM—both on foundry-fabricated RRAM—and by quantifying the totals (including converters) against credible digital baselines. That combination—precision + foundry process + end-to-end accounting—is why the community is taking it seriously. (Zuo et al., 2025; Wan et al., 2022; Rasch et al., 2024; Ielmini & Wong, 2020).

Fabrication details that matter. The RRAM chips are TaOx devices in 1T1R cells at 40 nm, and the paper explicitly models ADC area/energy and wire resistance impacts—finding that performance advantages persist under realistic parasitics and that ADC area becomes the main scaling challenge for larger arrays. (Zuo et al., 2025).


4) What this could change in practice

  • Wireless & edge infrastructure. Massive-MIMO baseband (e.g., 5G/6G) is dominated by linear solves. An accelerator that solves Ax=b with orders-of-magnitude better joules/op changes throughput, latency, and TCO in radios and base stations. (Zuo et al., 2025).

  • Scientific computing. PDE solvers, inverse problems, and optimization pipelines often bottleneck on sparse/structured solves; moving these into RRAM-based analogue blocks can free digital cores for pre/post-processing. (Zuo et al., 2025).

  • AI training algorithms. Second-order or curvature-aware training (e.g., quasi-Newton, K-FAC variants) repeatedly touches matrix inversions; analogue blocks could act as domain-specific co-processors alongside GPUs/ASICs. (Zuo et al., 2025; Rasch et al., 2024).
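To make the MIMO bullet concrete, here is the zero-forcing detector that dominates massive-MIMO baseband in NumPy form: it reduces to one Gram-matrix solve per received vector, exactly the Ax = b shape an analogue solver tile would take over. This is a textbook sketch; the function name is mine, not from the paper.

```python
import numpy as np

def zero_forcing_detect(H, y):
    """Zero-forcing MIMO detection.

    H: (n_rx, n_tx) complex channel matrix; y: received vector.
    The normal-equations solve (H^H H) x = H^H y is the linear-algebra
    kernel an analogue matrix-solver would accelerate per symbol.
    """
    G = H.conj().T @ H                       # Gram matrix to invert
    return np.linalg.solve(G, H.conj().T @ y)
```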

Importantly, this complements rather than replaces general-purpose GPUs: think “linear-algebra turbocharger” vs “GPU killer.”


5) Caveats the headlines gloss over

  • Task-specific, not universal. The 1,000× figure is tied to matrix inversion benchmarks at FP32-class precision, with assumed faster op-amps. It says little about attention kernels, convolutions, or end-to-end model training, where dataflow, memory capacity, and programmability dominate. (Zuo et al., 2025).

  • Scaling still hard. The demonstrated LP-INV is 8×8 today; the authors argue 32×32–64×64 is feasible on-chip, with larger systems stitched via block methods. ADC/DACs dominate area/power; line resistance and noise must be managed as arrays grow. (Zuo et al., 2025).

  • Analogue non-idealities. Device variability, drift, IR-drop, quantization at the A/D interface, and programming errors are well-documented constraints in analogue in-memory computing. These demand robust calibration/compensation and influence lifetime economics. (Wan et al., 2022; Mackin, 2022; “Memory Is All You Need” overview, 2024).
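The block stitching mentioned in the scaling caveat can be illustrated with the standard 2×2 Schur-complement identity: inverting an N×N matrix then requires only two tile-sized inversions plus MVMs, which is how small analogue INV arrays extend to larger problems. This is a generic linear-algebra sketch under my own partitioning assumptions, not the paper's exact scheduling.

```python
import numpy as np

def block_inverse(M, k):
    """Invert M via a 2x2 block partition (Schur complement).

    Only two k-by-k inversions -- the size an analogue INV tile could
    handle -- are needed; everything else is matrix multiplication.
    """
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)                  # first tile-sized inversion
    S = D - C @ Ainv @ B                     # Schur complement of A
    Sinv = np.linalg.inv(S)                  # second tile-sized inversion
    top = np.hstack([Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv])
    bot = np.hstack([-Sinv @ C @ Ainv, Sinv])
    return np.vstack([top, bot])
```

Applied recursively, this is the block-matrix method the paper uses to extrapolate from 16×16 hardware results to larger problem sizes.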


6) How it fits the broader landscape (China and beyond)

China’s labs have published multiple “beyond-GPU” prototypes—from spiking systems to ACCEL/Ising-type or in-memory designs—often touting spectacular speed/efficiency on narrow tasks. The Peking University result stands out because it is peer-reviewed, precision-credible, and foundry-fabricated, with transparent benchmarking vs H100/Vega 20. That’s a higher evidentiary bar than typical headline claims. (Bela, 2025; Tsinghua/ACCEL news; TrendForce/Nanjing analogue-IMC precision record).


7) Bottom line for practitioners and investors

  • Treat the “1,000×” line as credible potential, not a universal present-day fact.

  • Expect domain-specific analogue accelerators to appear first in communications and scientific computing stacks where matrix solves dominate.

  • Watch for integration: analogue-solve tiles co-packaged with HBM and GPUs/ASICs, plus software toolchains that expose these solvers as linear-algebra primitives.

  • The decisive questions now are manufacturability, converter economics, calibration lifecycles, and compiler/runtime maturity—not whether analogue MVMs work (they do). (Zuo et al., 2025; Wan et al., 2022).


Fact-check notes

  • The work centers on matrix inversion/solves and signal processing, not general AI training; the “1,000×” is a projected throughput upper bound tied to specific circuit-speed assumptions. (Zuo et al., 2025; Bela, 2025).

  • The device is RRAM-based and foundry-fabricated (40 nm); precision demonstrated up to 24-bit fixed-point on 16×16 problems. (Zuo et al., 2025).

  • Nvidia comparisons come from H100 core-normalized analyses in the paper, plus an AMD Vega 20 and a published MIMO ASIC reference. (Zuo et al., 2025).


Prudent, Cross-Asset Property Strategy with Integrity

Singapore and Asia’s markets are moving with AI, geopolitics and rates—and your property strategy should too. I’m a Singapore-based real-estate agent and SAF Captain (OC) who dedicates hours daily to research and to writing source-checked essays, from “TSMC: Picks-and-Shovels King of the AI Era” to this study of China’s analogue AI chip. My edge: disciplined due diligence, macro-to-micro translation, and portfolio thinking across equities, crypto and real estate.

If you seek a less volatile, income-producing core with credible upside, add Singapore property for prudent diversification—stable rental yields, professional management, and long-term capital appreciation potential, aligned to your risk, liquidity and immigration/education needs.

I serve international, mainland Chinese, Southeast Asian and local clients, UHNW families and institutions. Chinese and Southeast Asian families, family offices and institutional investors (accompanying parents, international students, family offices) are welcome to contact me for a cross-asset, compliant and executable property plan.

Message me for a private, no-obligation strategy session. Let’s time entries, structure financing wisely, and curate assets that compound—calmly, professionally, and with integrity. Your goals first; my role is advisor, risk manager, and execution partner.

Disclaimer (Compliance Note)

This article is for information and academic discussion only—not investment advice or solicitation. All technical claims are tied to cited sources and should be interpreted within their experimental scope and assumptions.


References (APA style)

Bela, V. (2025, October 22). China’s analogue AI chip could work 1,000 times faster than Nvidia GPU: Study. South China Morning Post.

Ielmini, D., & Wong, H.-S. P. (2020). In-memory computing with resistive switching devices. Nature Electronics, 3(6), 333–343. (Review; overview of RRAM IMC.)

Leroux, N., et al. (2025). Analog in-memory attention mechanism for fast and energy-efficient transformers. Nature Machine Intelligence. (Advance in analogue attention.)

Rasch, M. J., et al. (2024). Fast and robust analog in-memory deep neural network training. Nature Communications. (Training-phase AIMC trade-offs.)

Wan, W., et al. (2022). A compute-in-memory chip based on resistive random-access memory. Nature, 608, 504–512. (Foundational RRAM-CIM demonstration; limits and promise.)

Zuo, P., Wang, Q., Luo, Y., Xie, R., Wang, S., Cheng, Z., … Sun, Z. (2025). Precise and scalable analogue matrix equation solving using resistive random-access memory chips. Nature Electronics. https://doi.org/10.1038/s41928-025-01477-0

(Contextual reading)
Science and Technology Daily / TMTPost. (2025, Oct. 13). Chinese researchers achieve high-precision, scalable analogue matrix computing with RRAM. (English summary.)

TrendForce. (2025, Oct. 17). Chinese research team breaks precision record with new analogue in-memory computing chip. (Industry context—separate Nanjing University result.)

Tsinghua University News. (2023–2025). ACCEL/brain-inspired chip reports (various). (Illustrates breadth of China’s AI-hardware research; task-specific claims.)

Bioengineer.org. (2025, Oct. 13). Efficient matrix solving with resistive RAM technology. (Press summary of the Nature Electronics work.)
