On March 26, a paper from Google Research sent shockwaves through the global memory chip market, wiping out over $90 billion in market value among U.S. and South Korean industry giants.

The paper claimed a new algorithm called TurboQuant could compress the KV cache (the key-value attention cache) of large AI models to just 1/6 of its original size without sacrificing accuracy.

Just one day later, Gao Jianyang, a postdoctoral researcher at ETH Zurich, publicly accused the Google paper's authors of serious academic misconduct on social media.

Gao alleged that Google deliberately obscured the strong similarities between TurboQuant and RaBitQ — a method he developed during his PhD at Nanyang Technological University (NTU) in 2024. He further claimed the paper misrepresented RaBitQ’s theoretical results and rigged an unfair experimental comparison.

RaBitQ is a vector quantization algorithm that preserves reliable search performance even under extreme data compression.

According to Gao, the TurboQuant team has been “unrepentant”. He says he flagged these issues via email shortly after the paper first appeared in April 2025, long before its formal publication at ICLR 2026, yet Google failed to fully correct the final version despite being aware of the problems.

On March 29, National Business Daily (NBD) interviewed Gao Jianyang and Long Cheng, the authors of RaBitQ. Gao led the development of RaBitQ during his PhD at NTU, with Long Cheng serving as his doctoral advisor.

NBD also sent an interview request to Google, but had not received a response as of press time. Notably, Google Research is scheduled to present the TurboQuant paper at ICLR 2026 this April.

“Google Paper Contains Severe Inaccuracies, Refused to Revise After Communication”

NBD: When did you first notice problems with Google’s TurboQuant paper?

Gao Jianyang: Back in January 2025, Majid Daliri, the second author of TurboQuant, reached out to us directly. He asked for help debugging a Python port he had written based on our RaBitQ C++ code, sharing detailed reproduction steps and error logs. This proved the TurboQuant team had deep familiarity with RaBitQ’s technical details.

When TurboQuant was published in April 2025, we found it severely misrepresented RaBitQ. It incorrectly labeled RaBitQ as a grid-based PQ (product quantization) method, completely ignoring its core random rotation step. Without any derivation or evidence, it also dismissed RaBitQ’s theoretical guarantees as “suboptimal” and used clearly unfair experimental comparisons.

We were confused and disappointed. The technical similarities between TurboQuant and RaBitQ are obvious, and the authors clearly understood RaBitQ well. Such systematic misrepresentation can hardly be explained as carelessness.

NBD: What communication occurred between the two teams before you went public?

Gao Jianyang: We held multiple rounds of discussions over more than a year.

In May 2025, we exchanged detailed technical emails with Majid Daliri regarding flawed experimental settings and theoretical optimality, clarifying each misinterpretation point by point. Daliri explicitly confirmed he had shared our feedback with all co-authors.

However, after we requested corrections to factual errors in the paper, he stopped replying.

In November 2025, we discovered TurboQuant had been submitted to ICLR 2026 with the same false claims unchanged. We contacted the ICLR 2026 PC Chairs but received no response.

After Google promoted the paper through official channels in March 2026, we sent another formal email to all authors.

In their reply, lead author Amir Zandieh promised to fix parts of the theoretical and experimental descriptions but flatly refused to acknowledge the methodological similarities. He also insisted any revisions would only happen after the ICLR 2026 conference. We were disappointed but not surprised. The team clearly understood the issues yet chose only minimal concessions.

“Core Mechanisms Nearly Identical, Yet Unmentioned — Reviewers Noted the Issue”

NBD: What is the most critical similarity between TurboQuant and RaBitQ?

Gao Jianyang: The core overlap is that both apply a random rotation (Johnson-Lindenstrauss transform) to vectors before quantization, then use statistical properties of the rotated coordinates to build distance estimators.
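For readers who want to see the shared idea concretely, below is a minimal Python sketch of the rotate-then-quantize scheme Gao describes: a random rotation, 1-bit sign quantization, and one stored scalar per vector used to de-bias the similarity estimate. It is an illustrative simplification written for this article (the helpers random_rotation, encode and estimate_cosine are our own naming), not the actual implementation of either RaBitQ or TurboQuant.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_rotation(dim):
        # Haar-random orthogonal matrix via QR of a Gaussian matrix
        # (a Johnson-Lindenstrauss-style random rotation).
        q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
        return q * np.sign(np.diag(r))

    def encode(x, P):
        # Normalize to the unit sphere, rotate, keep only coordinate signs
        # (1 bit per dimension).
        u = P @ (x / np.linalg.norm(x))
        s = np.sign(u)
        a = s @ u  # stored scalar: alignment between the sign code and the true vector
        return s, a

    def estimate_cosine(s, a, q, P):
        # After a random rotation, the component of q orthogonal to x contributes
        # roughly zero in expectation, so <s, Pq> is approximately a * cos(x, q).
        v = P @ (q / np.linalg.norm(q))
        return (s @ v) / a

    dim = 256
    P = random_rotation(dim)
    x = rng.standard_normal(dim)
    q = x + 0.5 * rng.standard_normal(dim)  # a query correlated with x
    s, a = encode(x, P)
    true_cos = (x @ q) / (np.linalg.norm(x) * np.linalg.norm(q))
    print(f"true cosine = {true_cos:+.3f}, 1-bit estimate = {estimate_cosine(s, a, q, P):+.3f}")

As the dimension grows, the 1-bit estimate concentrates around the true cosine similarity. This reliance on the statistical behavior of randomly rotated coordinates is precisely the mechanism at the center of the dispute.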

Notably, in the authors’ response to reviewers on ICLR OpenReview, they described their method as:

“We first normalize vectors by their L2 norm, then apply a random rotation to ensure rotated components follow a Beta distribution.”

This matches RaBitQ’s core mechanism almost exactly — yet the paper itself never openly acknowledges this connection.

To use an analogy: imagine one chef publishes a full recipe for a dish. Later, another chef releases a nearly identical dish but describes the original as “different and inferior,” without mentioning any relation. Readers cannot form a fair judgment without this context.

Gao Jianyang (Photo: provided to NBD)

NBD: How should such relationships be handled under academic standards?

Long Cheng: Academic norms require that when new work substantially builds on existing methods, authors must clearly cite and discuss the connection, explaining both improvements and inherited frameworks.

This is especially important here because one ICLR reviewer independently noted:

“RaBitQ and its variants share similarities with TurboQuant in using random projections.”

The reviewer explicitly requested fuller discussion and comparison.

Even so, in the final paper, the authors not only failed to add this discussion but moved the already incomplete description of RaBitQ from the main text to the appendix. This directly violates basic academic standards.

“Small Research Teams Cannot Easily Compete With Google”

NBD: Why go public now instead of continuing through internal academic channels?

Long Cheng: We are not bypassing academic processes — we are going public after exhausting them.

We contacted the authors and the ICLR PC Chairs, filed a formal complaint with full evidence to the ICLR General Chairs and the Code & Ethics Chairs, and posted public comments on OpenReview.

But we face a reality: we are a small university research team; the other side is Google Research. We are unequal in resources, influence, and voice.

The TurboQuant paper quickly racked up tens of millions of views on social media — a reach no university lab could match.

Under this imbalance, remaining silent and waiting for internal procedures would only let the false narrative become accepted truth. Going public is one of the few ways for the disadvantaged party to defend academic integrity when formal channels are slow to respond.

NBD: What are the consequences if these issues remain uncorrected?

Long Cheng: It will distort the academic record, leading future researchers to misjudge the origins of methods and build work on false foundations.

It undermines incentives for original research. If a rigorously derived, asymptotically optimal method can be rebranded and promoted to massive public attention while original authors are denied credit, it causes long-term harm to the academic ecosystem.

For the fast-developing, industry-critical field of vector quantization, inaccurate attribution will mislead practitioners and researchers in their choice of technical direction, leading to misallocated resources.

NBD: Do you consider this an academic disagreement?

Long Cheng: This goes beyond mere academic disagreement. Genuine disagreements stem from honest differences in understanding.

In this case, the TurboQuant team’s deep knowledge of RaBitQ is well-documented. We clarified the theoretical optimality point by point in May 2025, and Daliri confirmed he informed all authors. The authors also admitted the unfair experimental setup in emails.

Despite this, the errors persisted through submission, review, acceptance, publication, and large-scale promotion. We avoid definitive labeling, but we believe the facts provide sufficient grounds for the academic community and relevant institutions to make an independent judgment.

Long Cheng (Photo: provided to NBD)

“Plan to Release Technical Report, Pursue Further Academic Remedies”

NBD: What responsibility do large institutions like Google Research hold?

Long Cheng: Institutional endorsement creates amplification effects. A paper promoted through Google’s official channels spreads at a scale incomparable to ordinary research.

Once false narratives spread at that volume, correcting them becomes exponentially more costly. Large institutions have a duty to fact-check descriptions of others’ work before large-scale promotion, not shift full responsibility to peer review.

They should also maintain formal internal processes to address well-documented external objections, rather than staying silent. This is a responsibility to the academic community and to their own credibility.

NBD: Will you take further action?

Long Cheng: Next, we plan to publish a detailed technical report on arXiv, systematically comparing the methodological relationship between RaBitQ and TurboQuant and addressing the three core issues technically for the academic community.

We are also considering escalating the matter to bodies such as the Google Research Escalation Council.

Our goal is simply to ensure the public academic record accurately reflects the true relationship between the methods — not to create confrontation.

Editor: Gao Han