Methodology

How the work actually runs

Most agencies sequester the build behind a discovery phase and a polished demo. Monument Labs places the real product in your hands on day one. Direct experience with it drives what ships next, in place of abstract requirements gathering.

Why this is different from how most agencies work

The standard agency engagement opens with a kickoff call and a discovery phase, then runs through weeks of silence before culminating in a polished demo. The consultant runs a screen-share through happy paths, the room nods along, and three days later the team discovers that the auth flow is broken on Safari and the email integration was mocked. By the time those gaps surface, two more weeks have passed and the contract is closer to its end than its start.

Monument Labs runs differently. On every engagement, the real product is deployed to a working URL on the first day, and you interact with it directly as the build progresses. Each new version lands at the same URL within 24–48 hours, shaped by your feedback on the version before.

You get a live URL on day one

From day one, the actual product is deployed and clickable on the same hosting your users will eventually use. Day one might be a single page with a single function, and that is fine. The point is that the product is real from the first commit, and it stays real every day after that.

That URL becomes the source of truth for the entire engagement. Anyone you want in the loop (co-founder, board chair, IT director, future first user) can open it on their phone and see exactly where the build stands. There are no meetings to schedule and no status updates to request: the status is the link.

The gap between "we think this is what we want" and "we want this once we see it working" is enormous. Most projects discover that gap during a launch-week demo. The Monument Labs version surfaces it on day three, when there is still time to do something about it.

You record an open walkthrough

When time allows, you open a screen recorder (Loom, your phone, or any tool of choice), pull up the live URL, and start exploring. You narrate what you notice as you click: friction points, confusion, anything that surprises you in the moment. All of it is captured in your own context, in your own words.

There is no structured feedback form to complete, no review meeting on the calendar, and no issue tickets to write up. The walkthrough is the feedback. The only ask is that you talk while you click; even a quiet "that feels off" is a usable signal. Five minutes of exploration carries more weight than an hour of abstract description.

Send the link to the recording when you are done. That is all. There is no need to translate your reactions into requirements; that translation is our work, not yours.

Changes ship in 24–48 hours

Each walkthrough is reviewed and translated into a queue of changes. Most items land on the live URL within 24 hours; larger requests (a new flow, a new integration) take up to 48. Either way, no piece of feedback waits more than two business days for a response.

The loop is asynchronous on your end and disciplined on ours. Walkthroughs arrive on your schedule; the build advances on ours. Most weeks settle into a walkthrough every couple of days, with no required cadence. During a busy week on your end, the build keeps moving on items already in the queue. When feedback resumes, the loop picks up where it left off.

The 24–48 hour turnaround matters because the context of a walkthrough stays fresh when the change appears that quickly. You remember why you reacted the way you did, which makes it possible to judge whether a fix addressed the root issue or only covered the symptom. That speed is core to the work, and it is the reason Monument Labs caps the number of concurrent engagements.

What this looks like across Pilot, MVP, and Studio

The loop is the same on every engagement: a live URL on day one, walkthroughs delivered on your schedule, changes shipped within 24–48 hours. The tier defines scope, not method.

Pilot ($7,500 / 2 weeks)

Two weeks, one live URL, every change shipped within 24–48 hours of a walkthrough. Day one is the first deploy: typically a landing page plus a single core flow that proves the idea is real. By the end of week two, the product is deployed, the flows you walked through are working, and the engagement closes with a running version of the thing rather than a deck about it.

MVP ($20,000 / 4–6 weeks)

Same loop, longer arc. The user-visible flows are agreed up front: authentication, the core feature, billing if relevant, the dashboard where users live. You walk through each of them on the live URL as they come online. By the end of the engagement, the product is deployed and live with real users, and the surfaces those users hit first are the ones you spent the most reaction time on.

Studio (custom / 8–12 weeks)

Same loop, extended past launch. A Studio engagement runs through traction: the URL stays live, the walkthroughs keep flowing, and the team instruments analytics and the ad funnel against the changes you request. The 24–48 hour turnaround carries more weight once real users are on the product, because the cost of a bad guess rises. Every shipped change continues to run through the same loop.

Ready to talk scope?

Pilot, MVP, or Studio. Fixed-price product work that runs through the same loop.