Story Operators: RKHS Applications to Fiction and Poetry

A Mathematical Framework for Narrative Discovery and Transformation

by Fred Zimmerman
Nimble AI, an imprint of Nimble Books LLC

ISBN-13: 9781608885350
Format: Paperback, 7" × 10"
Pages: 320
Publication date: February 10, 2026
Series: RKHS Integrated Series (vol. 1)
Language: English

Abstract

This book introduces Reproducing Kernel Hilbert Space (RKHS) theory as a mathematical framework for narrative analysis. Passages from fiction and poetry are embedded as 768-dimensional feature vectors, and Story Operators — linear maps on the resulting Hilbert space — formalize operations such as genre projection, similarity search, and narrative morphing. Cosine kernels and Chebyshev kernels are developed as the primary similarity measures, with explicit closure-property proofs for composite kernels. The central claim is that narrative analysis today stands where statistical genetics stood before Fisher, chemistry before Mendeleev's periodic table, and linguistics before Chomsky's formal grammars — a field waiting for its mathematical language.
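The cosine kernel the abstract names can be sketched in a few lines. This is an illustrative sketch only, not the book's code; the 768-dimensional size follows the abstract, and the function and variable names are assumptions:

```python
import numpy as np

def cosine_kernel(x, y):
    """Cosine similarity k(x, y) = <x, y> / (||x|| * ||y||)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Two stand-in "passage embeddings" (random here; a real pipeline
# would use a transformer sentence encoder with 768-dim output).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(768), rng.standard_normal(768)

print(cosine_kernel(a, a))  # identical passages score exactly 1.0
print(cosine_kernel(a, b))  # unrelated random vectors score near 0.0
```

Because the kernel is bounded in [-1, 1], it gives a scale-free similarity between passages regardless of embedding magnitude.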

Ten distinctive claims (novel to this volume)

C1. A 768-dimensional RKHS is adopted as the canonical working space, matching the output dimension of widely used transformer sentence encoders so embeddings plug in without re-training.
C2. A Story Operator is defined as a linear map on the RKHS that transforms one narrative representation into another (e.g., comedy to tragedy, third-person to first-person) by acting on feature vectors rather than surface text.
C3. Chebyshev kernels are introduced alongside cosine kernels for capturing worst-case stylistic divergence; a closure-property proof shows the Chebyshev-cosine composite remains positive semi-definite.
C4. A Gram matrix from a 500-excerpt corpus is presented as "your universe in a table" — the object from which projection, clustering, and nearest-genre lookup are derived.
C5. The introduction of RKHS to narrative is framed as the analogue to Fisher's statistical genetics, Mendeleev's periodic table, and Chomsky's formal grammar.
C6. Genre is defined geometrically: each genre is a subspace of the RKHS; membership is the squared norm of projection onto that subspace.
C7. RKHS-First Publishing is introduced as a kernel-based novelty-detection methodology for screening ideation output before committing to full production — in use at Nimble Books' Codexes Factory pipeline.
C8. Story morphing is demonstrated by geodesic interpolation between two narrative anchors in the RKHS (Hemingway to Borges) producing statistically coherent intermediate text.
C9. Closure-property recipes for custom kernels (sums, products, scaling, composition) are given with warnings about construction errors that silently produce indefinite kernels.
C10. Three reading paths are offered: a 60-second version (Ch. 2), a foundations track (Ch. 3–5), and a practitioner track that jumps straight to the Gram matrix and the kernel trick without full functional analysis.
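The closure recipes in C9 — and the "silently indefinite" failure mode they warn about — can be checked numerically: sums and products of positive semi-definite kernels stay positive semi-definite, while ad hoc combinations such as differences need not. A minimal sketch, with kernels and toy data chosen purely for illustration (this is not the book's implementation):

```python
import numpy as np

def gram(kernel, xs):
    """Gram matrix K[i, j] = kernel(xs[i], xs[j]) over a corpus xs."""
    n = len(xs)
    return np.array([[kernel(xs[i], xs[j]) for j in range(n)]
                     for i in range(n)])

def is_psd(K, tol=1e-8):
    """True if the symmetric matrix K has no eigenvalue below -tol."""
    return bool(np.min(np.linalg.eigvalsh(K)) >= -tol)

# Two base kernels known to be positive semi-definite.
linear = lambda x, y: float(x @ y)
rbf = lambda x, y: float(np.exp(-np.sum((x - y) ** 2)))

# Tiny deterministic stand-in "corpus" of 2-dim embeddings.
xs = [np.array([1.0, 0.0]), np.array([2.0, 0.5]),
      np.array([0.0, 1.5]), np.array([-1.0, 2.0])]

# Closure under sum and product (Schur product theorem) preserves PSD.
assert is_psd(gram(lambda x, y: linear(x, y) + rbf(x, y), xs))
assert is_psd(gram(lambda x, y: linear(x, y) * rbf(x, y), xs))

# But a difference of valid kernels can silently go indefinite.
print(is_psd(gram(lambda x, y: rbf(x, y) - linear(x, y), xs)))  # False
```

The eigenvalue check on the Gram matrix is the practical test: a single negative eigenvalue (here forced by a negative diagonal entry in the difference kernel) means the "kernel" defines no valid RKHS.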

Table of contents

Part I. The Big Idea

  1. Why Stories Need Mathematics
  2. The 60-Second Version

Part II. Foundations

  3. What Is a Hilbert Space?
  4. The Kernel Trick

Files

Full PDF (40 MB)
LLM-friendly summary (llms.txt)
Structured JSON
Publisher llms.txt