The GEO Log

What Is The GEO Log?

The GEO Log is the public experiment record of The GEO Lab. Each entry documents a controlled test or audit conducted against the GEO Stack framework — measuring how structural changes to content affect retrieval probability, extractability, and citation behaviour across AI-driven search systems.

Last Updated: March 2026

The GEO Field Manual compiles frameworks drawn from these experiments. Apply the findings with the GEO Workbook — 30-Day Action Plan.

How Are Experiments Structured?

Experiments follow a consistent structure: hypothesis, methodology, results, interpretation, and limitations. The goal is replicable evidence, not speculation. Methodology and measurement approach are detailed in the GEO Field Manual. If you run these experiments on your own content and get different results, I want to know.
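The five-part structure above can be mirrored as a simple record type. This is a hedged sketch only: the `Experiment` class and its field names are illustrative and are not part of any published GEO Lab tooling; the example values are taken from the Experiment 001 summary below.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One GEO Log entry: hypothesis, methodology, results,
    interpretation, and limitations."""
    hypothesis: str
    methodology: str
    results: dict                      # rates per content version
    interpretation: str
    limitations: list = field(default_factory=list)

# Illustrative instance populated from the published Experiment 001 figures.
exp_001 = Experiment(
    hypothesis="Declarative structure outperforms narrative for AI citation",
    methodology="75 query iterations per content version on Perplexity",
    results={"declarative": 0.61, "narrative": 0.37},
    interpretation="Supports Layer 2 (Extractability) of the GEO Stack",
    limitations=["single engine tested", "single content pair"],
)
```

Keeping each entry in a fixed shape like this is what makes results comparable across experiments.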

What Has Been Tested?

Experiment 001: Content Structure Impact on Citation Rate — Tested whether declarative structure outperforms narrative structure for AI citation. Results: declarative structure produced a 61% citation rate versus 37% for narrative — a 24 percentage point improvement across 75 query iterations per version on Perplexity.
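The arithmetic behind those figures is straightforward to check. In this sketch the raw cited counts (46 and 28 out of 75) are assumptions chosen to be consistent with the published 61% and 37% rates; the GEO Log reports the rates, not the underlying counts.

```python
# Assumed raw counts, consistent with the published rates
# (46/75 ≈ 61%, 28/75 ≈ 37%); not published figures.
ITERATIONS = 75
cited = {"declarative": 46, "narrative": 28}

rates = {version: count / ITERATIONS for version, count in cited.items()}
improvement_pp = round((rates["declarative"] - rates["narrative"]) * 100)

print(f"declarative: {rates['declarative']:.0%}, "
      f"narrative: {rates['narrative']:.0%}, "
      f"improvement: {improvement_pp} pp")
# → declarative: 61%, narrative: 37%, improvement: 24 pp
```

Note the gap is expressed in percentage points (a simple difference of rates), not as a relative percentage change.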

Experiment 002: Entity Density Testing — Scheduled for March 24, 2026. Will measure the relationship between entity signal density and retrieval probability.


Key Takeaway

The GEO Log provides replicable evidence for content optimisation decisions. Experiment 001 demonstrated a 24 percentage point improvement (61% vs 37% citation rate) when using declarative structure over narrative structure. Each experiment follows a rigorous methodology: hypothesis, controlled testing, measurement across multiple AI engines, and documented limitations.


Frequently Asked Questions

What results has The GEO Log documented?

Experiment 001 demonstrated that declarative content structure produces a 61% citation rate compared to 37% for narrative structure — a 24 percentage point improvement. This was measured across 75 query iterations per content version using Perplexity AI.

How often are new experiments published?

New experiments are published as they complete. The GEO Log prioritises methodological rigour over publication frequency. Experiment 002 on entity density is scheduled for March 24, 2026.

Can I replicate these experiments?

Yes. Each experiment includes full methodology details. The GEO Field Manual provides the framework for running your own tests. Contact The GEO Lab if you achieve different results.

What methodology does The GEO Log use?

Each experiment follows a consistent structure: hypothesis, methodology with control and treatment setup, results across multiple AI engines (ChatGPT, Perplexity, Gemini), interpretation, and documented limitations. The goal is replicable evidence, not speculation.

How do GEO Log findings inform the GEO Stack?

GEO Log experiments test specific hypotheses derived from the GEO Stack framework. Results either validate or refine the framework. For example, Experiment 001 validated Layer 2 (Extractability) principles by demonstrating declarative structure superiority.


Have questions about this topic? Contact The GEO Lab · Return to homepage


About the Author

Artur Ferreira is the founder of The GEO Lab with over 20 years (since 2004) of experience in SEO and organic growth strategy. He developed the GEO Stack framework and leads research into Generative Engine Optimisation methodologies. Connect on X/Twitter or LinkedIn.