Talkie-1930: Prehistoric AI Crushes Modern Benchmarks
An artificial intelligence born from the dusty pages of history is attracting attention by sweeping the contamination out of modern benchmarks. The 13-billion-parameter open-weight language model, named Talkie-1930, was trained on 260 billion tokens of text published before January 1, 1931, drawn from public-domain sources such as books, newspapers, scientific journals, patents, and court records. The strict cutoff date rules out test data leaking into the training set from the outset, making studies of AI generalization far cleaner. The model, operated by continuously prompting Claude Sonnet 4.6, is publicly accessible at talkie-lm.com/chat.
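
The curation rule behind that cutoff is simple enough to sketch: keep a document only if its publication date falls strictly before January 1, 1931. Below is a minimal illustration of such a filter; the record layout and the `publication_date` field name are assumptions made for the example, not details from the project.

```python
from datetime import date

CUTOFF = date(1931, 1, 1)  # strict cutoff: only pre-1931 text is kept

def keep_document(record: dict) -> bool:
    """Return True if the record was published strictly before the cutoff.

    Assumes a hypothetical ISO-formatted 'publication_date' field; records
    with missing or unparseable dates are dropped conservatively.
    """
    raw = record.get("publication_date")
    if not raw:
        return False
    try:
        published = date.fromisoformat(raw)
    except ValueError:
        return False
    return published < CUTOFF

# Example: filter a small in-memory corpus of (hypothetical) records.
corpus = [
    {"title": "Patent filing", "publication_date": "1928-05-14", "text": "..."},
    {"title": "Newspaper column", "publication_date": "1932-02-01", "text": "..."},
]
pre_1931 = [doc for doc in corpus if keep_document(doc)]
print(len(pre_1931))  # -> 1
```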

Talkie-1930's Data Cleaning and Training Process
The non-profit team led by Nick Levine, David Duvenaud, and Alec Radford, with compute support from Anthropic, released two checkpoints: a base version for autoregressive completion and an instruction-tuned version aimed at chat, both available on Hugging Face under the Apache 2.0 license. The model has never heard of the internet, the Cold War, or penicillin, let alone modern concepts such as cryptocurrencies or ALT futures; its medical knowledge stops at 1930.
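
If both checkpoints are published on Hugging Face as described, loading the chat-tuned weights would presumably follow the standard transformers pattern. This is a sketch under that assumption; the repository id below is a placeholder, not the project's actual model name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id used purely for illustration.
repo_id = "talkie-lm/talkie-1930-instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "What do you make of these new thinking machines?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```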


Model's Historical Predictions and Financial Advice
When asked about Hitler's rise, it pointed to the weakness of the German opposition and predicted a return of the monarchy; asked to describe thinking machines, it saw language barriers as the biggest obstacle. Trained in the midst of a financial crisis, it recommended railroad stocks, mining consortia, and industrial companies, naming the likes of Canadian Pacific Railway and De Beers. For comparison, current ALT data: price $0.01 (+0.79% over 24h), RSI 55.56 (neutral), sideways trend, bearish Supertrend, EMA 20 at $0.0075, support S1 at $0.0071 (strong, 74% score) and resistance R1 at $0.0082 (73% score). Its forecast for 2026 came out utopian, with armies and crime set to shrink, though its answer was cut off mid-sentence.
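
For readers unfamiliar with the indicators quoted in that snapshot, the sketch below shows the conventional way a 20-period EMA and a Wilder-smoothed RSI are computed from closing prices; the price series is made up for illustration and is not ALT market data.

```python
import pandas as pd

# Hypothetical closing prices; real values would come from the exchange feed.
closes = pd.Series([0.0071, 0.0073, 0.0072, 0.0074, 0.0075, 0.0076, 0.0075,
                    0.0077, 0.0078, 0.0077, 0.0079, 0.0080, 0.0079, 0.0077,
                    0.0078])

# 20-period exponential moving average (the "EMA 20" above).
ema_20 = closes.ewm(span=20, adjust=False).mean()

# 14-period RSI using Wilder's smoothing (approximated with alpha = 1/14).
delta = closes.diff().dropna()
gains = delta.clip(lower=0)
losses = -delta.clip(upper=0)
avg_gain = gains.ewm(alpha=1 / 14, adjust=False).mean()
avg_loss = losses.ewm(alpha=1 / 14, adjust=False).mean()
rsi = 100 - 100 / (1 + avg_gain / avg_loss)

print(f"EMA20 = {ema_20.iloc[-1]:.6f}, RSI = {rsi.iloc[-1]:.2f}")
```
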
Talkie-1930's Superiority in AI Generalization Tests
By eliminating data contamination, Talkie-1930 opens the door to measuring an AI's power of abstraction; when probed about events after its cutoff, its guesses cluster in the 1950s and 1960s. Training without web data also raises fundamental questions about what gives a model its identity, and the team promises a ChatGPT-like vintage model at trillion-token scale by the summer of 2026. By redefining the role of data freshness and historical context in the AI sector, the initiative breathes new life into generalization research.
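
One simple way to probe that contamination-free setup is to compare how surprised the model is by a period-appropriate sentence versus one that presupposes post-1930 knowledge; markedly higher perplexity on the second suggests the cutoff held. A minimal sketch along those lines, again with a placeholder repository id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical base-checkpoint name, used only for illustration.
repo_id = "talkie-lm/talkie-1930-base"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the causal language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# A period-appropriate sentence vs. one requiring post-1930 knowledge.
print(perplexity("The Canadian Pacific Railway posted strong earnings this quarter."))
print(perplexity("The astronauts transmitted photographs from the surface of the Moon."))
```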

