[{"data":1,"prerenderedAt":217},["ShallowReactive",2],{"\u002Fblog\u002Fmodern-growth-analytics-at-monument-labs":3,"\u002Fblog\u002Fmodern-growth-analytics-at-monument-labs-surround":210},{"id":4,"title":5,"authors":6,"badge":12,"body":14,"date":199,"description":200,"extension":201,"image":202,"meta":203,"navigation":204,"path":205,"seo":206,"stem":208,"__hash__":209},"posts\u002F3.blog\u002F00.modern-growth-analytics-at-monument-labs.md","Modern Growth Analytics at Monument Labs",[7],{"name":8,"to":9,"avatar":10},"Kyle Johnson","https:\u002F\u002Fwww.kylejohnson.ai",{"src":11},"\u002Fimg\u002Fkyle_headshot.jpg",{"label":13},"Analytics",{"type":15,"value":16,"toc":189},"minimark",[17,28,31,34,37,48,51,56,59,62,65,68,72,81,84,92,100,103,107,110,113,116,119,122,126,129,132,139,145,151,154,161,165,168,171,175,178,181],[18,19,20],"p",{},[21,22],"img",{"alt":23,"className":24,"src":26,"width":27},"Richmond skyline along the James River, line-art illustration",[25],"rounded-lg","\u002Fimg\u002Fblog\u002Frichmond_skyline_line_art.png",1456,[18,29,30],{},"For seven years at Meta I ran growth and integrity analyses the same way most data scientists still do. Tuesday started in the experimentation platform. By 10am I'd be writing PrestoDB on top of dim_users joined to fact_events. By 2pm the SQL was in a notebook and the notebook had charts in it. By 4pm the charts were in a slide deck. By Wednesday morning the deck was in someone's inbox. By Thursday someone would ask the question the deck didn't actually answer, and the loop would start again.",[18,32,33],{},"The shape of the work was: pull the data, summarize the data, package the summary, send the summary, wait for the question.",[18,35,36],{},"Every step was a task switch across tools, and every task switch cost wall-clock hours. Senior analysts spent the bulk of their week on plumbing. 
Synthesis was where the senior judgment lived; plumbing was the rest of the day.",[18,38,39,40,47],{},"I left Meta in 2024 to run ",[41,42,46],"a",{"href":43,"rel":44},"https:\u002F\u002Fwww.monumentlabs.io",[45],"nofollow","Monument Labs",". One of the things I do here is the same growth-analytics work for the products we ship. That work now takes thirty minutes, end to end, and the artifact at the end is more thorough than the slide deck I would have produced after two days at Meta.",[18,49,50],{},"This post is about what changed.",[52,53,55],"h2",{"id":54},"the-old-shape","The old shape",[18,57,58],{},"The reason a senior DS spent two days on a weekly funnel review is that the answer required moving data between systems. The ads platform held one piece. Behavioral analytics held another. The data warehouse held a third. The codebase held a fourth, in the form of \"what shipped this week that could explain the change.\" Each source lived behind a different login, a different query language, a different person who could be asked.",[18,60,61],{},"The DS's actual job was the synthesis. The DS's actual time was the plumbing.",[18,63,64],{},"Anyone who's run that loop at scale knows the meeting on Wednesday is downstream. The pull-and-paste is where the hours went. That part was the work no one wanted to do, and it determined whether the meeting was useful at all.",[18,66,67],{},"The half-life of the artifact was also short. A slide deck that survived one Q&A meeting and got archived in someone's \"old reviews\" folder was real work to produce and impossible to keep useful. By the next Wednesday it was invisible.",[52,69,71],{"id":70},"the-new-shape","The new shape",[18,73,74,75,80],{},"I sat down this morning and asked Claude Code for a complete ads-and-funnel review of ",[41,76,79],{"href":77,"rel":78},"https:\u002F\u002Fdaylight.legal",[45],"Daylight",", our voice-first custody journal for parents in family-court proceedings. Three sentences in the prompt. 
No SQL written by me, no Playwright scripts, no chart configs, no slide template. Thirty-three minutes later I had a 4,000-word HTML report with eight Chart.js visualizations, a per-user cohort breakdown, screenshots of every supporting view from the ads UI, a git-log correlation of what shipped during the campaign window, and a four-sprint implementation plan with two parallel review subagents that flagged risks I hadn't thought of.",[18,82,83],{},"The four-source synthesis happened inside one continuous session. The plumbing was done by the agent. The agent that pulled the search-terms table from Google Ads is the same agent that ran the SQL against Postgres, the same agent that read the diff log of recent commits, the same agent that wrote the conclusion paragraph.",[18,85,86],{},[21,87],{"alt":88,"className":89,"src":90,"width":91},"Per-user cohort table from a recent weekly review, with emails masked for this post",[25],"\u002Fimg\u002Fblog\u002Fgrowth_analytics_04_cohort_table.png",1000,[18,93,94,95,99],{},"The image above is the part of the report I want my old DS-director friends to look at hardest. Eight users, one row each, every column drawn from a different source: ads attribution from the platform's API, behavioral activity from an events table, the literal ",[96,97,98],"code",{},"last_action"," from a path-reconstruction layer. Read each row. Six of the eight bounce on the same step. One row breaks the pattern. That row is the most useful data point of the week.",[18,101,102],{},"This kind of read used to be a special-case investigation. You had a hypothesis, wrote a one-off query, exported to Excel, read the rows. Now it's a default in every report, because the agent can write the per-user table as cheaply as the aggregate one.",[52,104,106],{"id":105},"what-it-cost-me-to-set-up","What it cost me to set up",[18,108,109],{},"For the new workflow to work, the analytics layer has to be set up correctly. 
That part is non-negotiable.",[18,111,112],{},"There's one Postgres-shaped events table, append-only, every behavioral event in one schema, payloads as JSONB. There's a markdown documentation file that names every event and its keys. There's a markdown cookbook of common SQL recipes. There's a per-project memory entry the agent reads at session start that captures the current state of the funnel: campaigns running, conversion labels, known cliffs.",[18,114,115],{},"Total surface area: one table, two markdown files, one memory note per project.",[18,117,118],{},"This is the boring part of the work, and it determines whether the new workflow is unlocked. A messy multi-table, multi-vendor analytics stack is harder to query, and the agent will get it wrong. A single-table, append-only events log is the substrate that makes the reflexive workflow possible.",[18,120,121],{},"Anyone who's worked at a company with a mature warehouse will recognize the substrate. It's the same shape Meta's events tables have. The difference is that we own it, we documented it for an agent to read, and the agent reads it.",[52,123,125],{"id":124},"why-ds-directors-should-care","Why DS directors should care",[18,127,128],{},"If you direct a DS or growth team in 2026, the highest-impact move is reshaping the analytics stack so an agent can drive it. More headcount keeps the plumbing problem in place. The same five engineers task-switching across the same five tools don't compound; an operator driving an agent does.",[18,130,131],{},"Three concrete moves.",[18,133,134,138],{},[135,136,137],"strong",{},"One. Make your events table reachable by an agent."," That means a single source of truth instead of three. It means a documented schema, in markdown, that lives in the repo. It means the cookbook of common queries committed alongside the code rather than lost in someone's Hex notebook.",[18,140,141,144],{},[135,142,143],{},"Two. Write the report as HTML, not a slide."," HTML survives. 
HTML embeds Chart.js from a CDN in twenty lines of JavaScript. HTML opens in a browser three weeks later and still says exactly what was true the day it was written. The artifact lives next to the codebase. Anyone on the team can open the file.",[18,146,147,150],{},[135,148,149],{},"Three. Instrument the agent's read path, not just the human's."," The agent doesn't need a Looker dashboard. It needs SQL recipes, Playwright entry points, and a git log. Build for the agent's read path; your senior analysts then drive the agent instead of doing the data wrangling themselves.",[18,152,153],{},"What this leaves the senior analyst doing is the part the field always claimed was the actual job: ask the right question, choose where to drill, name the pattern, decide what to ship. That part was supposed to be the high-impact work. Until now, the plumbing kept eating it.",[18,155,156],{},[21,157],{"alt":158,"className":159,"src":160,"width":91},"A funnel view from the same review, ad-touched cohort over the prior week",[25],"\u002Fimg\u002Fblog\u002Fgrowth_analytics_03_funnel.png",[52,162,164],{"id":163},"what-thirty-minutes-buys-you","What thirty minutes buys you",[18,166,167],{},"Speed compounds. A weekly review that takes thirty minutes is a weekly review that gets run. A review that took two days got run when the founder pushed for it, which was less often than weekly, which meant decisions lagged the campaign window by a week or more. With a thirty-minute review you can re-run mid-week if a campaign shifts. The cycle time of the analysis becomes shorter than the cycle time of the underlying campaigns. That's the threshold where analytics actually drives decisions.",[18,169,170],{},"Thirty minutes also buys you the per-user level by default. The outlier row at the top of the cohort table above (ten document uploads, three paywall hits, zero core actions) would never have been the front-and-center finding of a Wednesday slide review. 
There were always too many things to talk about. In a thirty-minute artifact that surfaces outliers automatically, that row becomes the second screen and the outreach email goes out the same morning.",[52,172,174],{"id":173},"where-this-is-going","Where this is going",[18,176,177],{},"The reflex shape (the operator asks, the agent pulls and synthesizes, the HTML artifact survives) will spread to every analytical surface that currently lives in a slide deck or a Looker dashboard. The unlock is mechanical. The agent collapses the handoff between tools that always capped throughput. Reasoning quality was already adequate; the plumbing between systems was the ceiling.",[18,179,180],{},"If you direct a DS team and your weekly reviews are still moving through a multi-person handoff chain, the question to ask is what it would take to wire your events table, your documentation, and your reporting templates into a shape an agent can drive. It's a two-week project at most. The payoff is that your team gets to do the part of the job that was always supposed to be hard.",[18,182,183,184,188],{},"If you want to see the shape of this in practice, my email is ",[41,185,187],{"href":186},"mailto:kyle@monumentlabs.io","kyle@monumentlabs.io",". The reports are HTML and they travel.",{"title":190,"searchDepth":191,"depth":191,"links":192},"",2,[193,194,195,196,197,198],{"id":54,"depth":191,"text":55},{"id":70,"depth":191,"text":71},{"id":105,"depth":191,"text":106},{"id":124,"depth":191,"text":125},{"id":163,"depth":191,"text":164},{"id":173,"depth":191,"text":174},"2026-05-06","Two days at Meta. Thirty minutes now. 
How an operator and an agent collapsed a four-source weekly funnel review into one continuous session, and why DS directors should care.","md",{"src":26},{},true,"\u002Fblog\u002Fmodern-growth-analytics-at-monument-labs",{"image":207,"title":5,"description":200},"\u002Fimg\u002Fblog\u002Fgrowth_analytics_og.png","3.blog\u002F00.modern-growth-analytics-at-monument-labs","AUWaVVdPsrEWk6vElJ6mZc8nxgB6zXQHYha1A103Fa0",[211,212],null,{"title":213,"path":214,"stem":215,"description":216,"children":-1},"Your AI Agent Says It's Done. Make It Prove It.","\u002Fblog\u002Fyour-ai-agent-says-its-done-make-it-prove-it","3.blog\u002F01.your-ai-agent-says-its-done-make-it-prove-it","Why every significant AI-built feature should ship with a verification artifact: an HTML page of screenshots, sample outputs, and SQL checks that lets a human verify the work in five minutes instead of fifty",1778434438123]