Do Agents Need Quickstart Guides?

· 11 min read
Software engineer and technical writer

Some time ago we created a quickstart guide for a client that walked the reader through setting up the most basic example of their software's functionality in as few steps as possible: "Get started in 5 minutes". This is a standard documentation practice and is generally useful when you're trying to see what a product has to offer. However, we noticed that the guide also significantly improved the Agent Experience (AX) for the product. Without the guide, when asked to implement a basic example, the agent struggled to produce anything working that was a good representation of the product's features. After the guide was published, when asked to perform the same task, the agent found the guide and followed it. This resulted in a far better first impression of the product.

I wanted to see if this impact on AX was repeatable, especially with a more well-documented and mature product. I settled on the Infisical secret manager as my guinea pig. It's a well-used product that has extensive documentation but no obvious quickstart guide that shows you exactly how to set up a basic example of their solution.

To test this, I ran two Claude Code sessions from different directories containing the same sample webapp. I gave each session the same basic instructions to implement Infisical for secret management. The only difference was that one session had access to a couple of relevant quickstart guides and the other was on its own.

The session with the guides was able to create a fully working example with no extra input required, while the vanilla run handed off most of the configuration to the user to be done through the browser UI.

This result seems to support the hypothesis nicely. However, there are some definite gaps in my testing methodology, which have led me to some mixed conclusions.

Methodology

To test a realistic example, I got Claude to create an example webapp that connects to Postgres and Redis on startup, signs a JWT using a secret key, and renders a single-page dashboard showing the following:

  • Green/red dots for Postgres and Redis connection health
  • Status indicators for each expected env var (set or missing)
  • A live JWT token signed with JWT_SECRET

The Secrets Demo App
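To make the demo concrete, the two simplest pieces of the dashboard can be sketched with nothing but the Python standard library: the set/missing indicator for each expected env var, and an HS256-signed JWT. The variable names and helper functions below are illustrative assumptions on my part; the real app would use actual Postgres/Redis clients and a proper JWT library rather than this hand-rolled signing.

```python
# Stdlib-only sketch of the demo dashboard's env-var check and JWT signing.
# EXPECTED_VARS and the function names are assumptions for illustration.
import base64
import hashlib
import hmac
import json
import os

EXPECTED_VARS = ["DATABASE_URL", "REDIS_URL", "JWT_SECRET"]  # assumed names

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    # header.payload.signature, signed with HMAC-SHA256 (HS256)
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def env_status() -> dict:
    # "set" / "missing" indicator for each expected env var
    return {name: ("set" if os.environ.get(name) else "missing")
            for name in EXPECTED_VARS}
```

The dashboard would render `env_status()` as the status indicators and `sign_jwt(...)` as the live token.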

This app was copied to two directories, and a Claude session in each was given the following goals:

**Your job:** Fully onboard this project onto a **self-hosted** [Infisical](https://infisical.com) instance. When you're done, the secrets must live in Infisical, the app must read them from Infisical at runtime (no `.env` required), and secret rotation must be configured in the Infisical dashboard.

Figure out the method yourself.

## Goals
1. Run Infisical locally — **self-hosted**, not Infisical Cloud.
2. Move every secret from `.env` into a project inside your Infisical instance.
3. Make the app start with secrets injected from Infisical, with no `.env` file present.
4. Configure secret rotation on at least one secret in the Infisical dashboard.
5. Update the README so a teammate could clone the repo and get running against your Infisical instance.

For the second run, I added the following section:

## Reference documentation

The following Infisical guides are available in this directory — read them before starting:

- **`quickstart.mdx`**
- **`self-hosted-quickstart.mdx`**
- **`end-to-end-automation.mdx`**

These three docs cover the following topics:

  • The quickstart guide explains how to store a single secret and retrieve it.
  • The self-hosted-quickstart has some further details about setting up your self-hosted instance and interacting with it through the CLI.
  • The end-to-end-automation guide explains how to use the Infisical Bootstrap flow to fully skip the UI and set up default accounts through the CLI.

A requirement for these guides was that they all be constructed in a way that makes them useful to both human and robot readers.

The guides were generated by a separate Claude session based on the existing documentation as well as its own trial and error in trying to implement Infisical.

Data

Without docs

Without the guides, the agent had no way to automate the Infisical onboarding steps that require a browser. It paused and gave the user explicit click-by-click instructions:

Open http://localhost:8222 in your browser. Add all 7 secrets into the dev environment. Create a Machine Identity (Universal Auth). Report back with these three values: PROJECT_ID, CLIENT_ID, CLIENT_SECRET.

Almost none of the manual instructions provided by Claude directly matched the actual UI elements on the dashboard. For example, to create a machine identity Claude instructed:

Go to Organization Access Control → Identities → Create identity → name it vanilla-app, role No Access. After creation, open the identity → Authentication tab → Add → Universal Auth.

The actual instruction should have been more like:

Go to Project Access Control → Machine Identities → Add Machine Identity to Project.

Universal Auth is automatically configured for a new identity.

Three rounds of back-and-forth were needed before all credentials were in hand. The session required 6 user messages and about 30 minutes of user wait time.
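For context, once those credentials were reported back, authenticating the CLI non-interactively looks roughly like the sketch below. The flag and variable names come from my reading of the Infisical CLI docs; treat them as assumptions and check `infisical login --help` against your version.

```shell
# Hedged sketch: using the user-reported credentials to authenticate the CLI
# against the self-hosted instance. Flag names are assumptions from the docs.
export INFISICAL_API_URL="http://localhost:8222"   # the self-hosted instance
infisical login --method=universal-auth \
  --client-id="$CLIENT_ID" --client-secret="$CLIENT_SECRET"
```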

With docs

The agent read all three docs before doing anything else. Its first task entry was a verbatim lift of the automation guide's flow:

Bootstrap Infisical + create project + push secrets: Run end-to-end automation: bootstrap, create project, push .env secrets, write .infisical.json

It then ran infisical bootstrap on the first attempt with the exact flags from the doc, bulk-imported all secrets with infisical secrets set --file=".env", and wrote .infisical.json directly without calling infisical init. Zero user messages. The session completed fully autonomously in 17 minutes of compute.
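The flow the agent lifted from the automation guide can be sketched as a short shell sequence. The bootstrap flags are elided here because they came from the generated guide; the other commands are the ones named above, with `npm start` standing in for whatever actually starts the demo app.

```shell
# Sketch of the end-to-end automation flow the agent ran.
# Bootstrap flags elided -- they came from the generated guide.
infisical bootstrap ...                # one-shot setup of the self-hosted instance
infisical secrets set --file=".env"    # bulk-import every secret from .env
# The agent wrote .infisical.json (which points the CLI at the project) by
# hand instead of running `infisical init`. With that in place, the app
# starts with secrets injected and no .env file present:
infisical run -- npm start             # `npm start` is a stand-in for the app's start command
```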

Key rotation

Both runs failed to implement the key rotation goal, as that feature is behind a paywall. However, the user instructions from the vanilla run led me down a rabbit hole of trying to configure it before I realized the paywall existed. The documentation run simply labelled the goal as incomplete and gave the reason, with no input required.

Results

The data from these two runs seems to support my hypothesis that agents can get a lot of benefit from quickstart guides that provide functional setup scripts that get a product working.

Agent Experience

The run with the docs available had significantly better onboarding and integration than the vanilla run. I got a fully working example in one step with no manual configuration required. Using the rubric from some of our other AX audits, this would take both the Onboarding and Integration metrics from a 1 or 2 to a 4/4. For reference, see the integration criteria table below:

| Rating | Score | Integration criteria |
| --- | --- | --- |
| GOOD | 4 / 4 | Agent implements a working integration end-to-end with minimal friction |
| OK | 3 / 4 | Works but requires the agent to work around one or two gaps |
| POOR | 2 / 4 | Agent needs significant hand-holding or external resources |
| FAIL | 1 / 4 | Agent can't complete a working integration |

Without quickstart guides we did not get a working integration; with them, we got an end-to-end integration with minimal friction.

Costs

The run with the documentation provided used more tokens than the vanilla run.

| | Without docs | With docs |
| --- | --- | --- |
| Output tokens | 34,595 | 51,613 |
| Cache writes | 89,689 | 172,649 |
| Cache reads | 5,529,931 | 6,900,891 |
| API turns | 96 | 113 |
| Compute time | ~16 min | ~17 min |
| User wait time | ~30 min | 0 min |
| User messages | 6 | 0 |

The higher cache writes reflect the docs themselves adding ~30K tokens of fixed context to every turn.

However, the more honest comparison is time. The vanilla run includes 30 minutes of the user manually navigating a UI that the agent couldn't accurately describe, while the docs run required zero user input. The docs helped with actual automation, shifting the work from the user to the agent.

Caveats

Although the results looked good, there are a few issues with my testing method:

  1. The docs in the second run were written by me specifically to solve the problems I'd observed in the first run. A documentation writer who has already watched an agent struggle with the UI and knows exactly which commands are missing has an unfair advantage over the docs that actually ship with a product. A fairer test would use documentation written without any knowledge of how the agent would behave.

  2. It's also not clear that an agent would find these guides in practice. I placed the docs directly in the working directory and told the agent to read them. In a real integration, the agent would need to discover the relevant documentation on its own. Whether a well-written quickstart guide gets surfaced at all depends heavily on how the agent approaches the task, and that's outside the documentation team's control.

  3. Finally, it's worth asking whether this scenario is representative of how developers actually evaluate new products. Asking an agent to fully onboard a self-hosted secrets manager from scratch in one session is a fairly ambitious task. A more typical first interaction might be lower-stakes questions where the bar for good AX is different.

I hope Infisical publishes these guides, or similar ones, so that I can redo the test with live documentation and see how much it improves the AX overall.

Conclusions

From these experiments, it seems that a well-made quickstart guide really helps AX with regard to integration and onboarding.

I think similar results could be achieved with other AI tooling such as an MCP server or a Claude skill. However, at least for now, human documentation is often the first place agents look for any information missing from their training data. Quickstart guides also have the nice side-effect of being useful to any human readers who stumble upon the documentation, and there isn't any downside to having them.

I would be interested to hear whether other software researchers have seen similar results with this type of documentation, and we will continue to run experiments with other software suites.

Additional thoughts

Agents are writing more of the code in our software. But software engineers know that code is generally only a small part of a project, and often the most trivial part. A lot of a project's development time goes into integrating with existing services and managing how and where your code runs. Services like Infisical have been optimized for human developers for many years, which makes them clunky to automate with agents.

If I'm right that many people these days discover new products through agents, it is worthwhile for companies to consider AX in their documentation and onboarding procedures.