marsten.ai glossary

Don’t let the jargon throw you — we make AI integration simple! But we also recognize that a list of phrases and concepts can be super useful for understanding artificial intelligence. For those of you who love to go down rabbit holes, we also recommend checking out these glossaries from The New York Times and MIT.

  • A simple experiment that compares two versions to see which performs better.

  • A page section that expands or collapses to show details. Helpful on long pages.

  • A multi-step AI workflow that can use tools like search, email, calendar, or a CRM to complete a task.

    Why it matters:
    moves work forward without copy-paste.

    Example:
    draft a reply, look up a customer, schedule a follow-up, log a note.

  • The system that generates text, images, or audio based on patterns learned from data.

  • The approved sources and actions a helper can access, and those it cannot.

    Why it matters:
    reduces risk and prevents data leaks.

    See also:
    guardrails, prompt hardening

  • A short description of an image used for accessibility and SEO.

    Example:
    “Douglas Hunter headshot, blue checkered shirt”

  • A way for software to talk to other software.

    Why it matters:
    lets us connect your tools to helpers without manual copy-paste.

    See also:
    webhooks, integration
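
    Code sketch:
    a tiny Python illustration of one tool talking to another; the endpoint and field names are made up for illustration.

```python
import json

# A minimal sketch of an API exchange: one tool composes a request, the
# other replies with structured data (JSON). The endpoint and fields here
# are invented examples, not a real service.

def build_request(customer_id):
    """The message one tool sends to another."""
    return {"method": "GET",
            "url": f"https://api.example.com/customers/{customer_id}"}

def parse_response(body):
    """Read the structured reply -- no copy-paste needed."""
    return json.loads(body)["name"]

request = build_request(42)
name = parse_response('{"name": "Avery Chen", "plan": "pro"}')
```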

  • When possible, we prefer Apple-native tools like Shortcuts, Mail, and Calendar before adding new apps.

    Why it matters:
    simpler stacks, less context switching.

  • A set of rules that runs tasks without manual effort, often triggered by an event.

    Example:
    when a form is submitted, draft a thank-you email in your voice.
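
    Code sketch:
    the example above as a trigger-and-action pair; the form fields and wording are illustrative, not a real product.

```python
# A sketch of "when a form is submitted, draft a thank-you email."
# The form submission is the trigger; drafting the email is the action.

def on_form_submitted(form):
    """Runs automatically when the form event fires; returns a draft email."""
    return (f"Hi {form['name']},\n\n"
            f"Thanks for reaching out about {form['topic']}. "
            "We'll reply within one business day.")

draft = on_form_submitted({"name": "Sam", "topic": "automation"})
```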

  • Your company’s tone, vocabulary, and style. Helpers are tuned to match it.

    See also:
    voice and style pack, style guide, few-shot example

  • A simple list of improvements to each helper so changes are visible and reversible.

  • Splitting long documents into smaller pieces so a helper can read and search them accurately.

    See also:
    context window
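
    Code sketch:
    a simple way to split text into overlapping pieces; the sizes are arbitrary examples, and real systems often split on sentences or sections instead of raw characters.

```python
def chunk(text, size=200, overlap=40):
    """Split a long document into overlapping pieces so each one
    fits comfortably inside a helper's context window."""
    pieces = []
    start = 0
    while start < len(text):
        pieces.append(text[start:start + size])
        start += size - overlap  # step forward, keeping some overlap
    return pieces

doc = "word " * 200          # a stand-in for a long document
pieces = chunk(doc)
```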

  • Following laws and company policies for privacy and security.

    Why it matters:
    keeps builds safe and aligned with your counsel and IT.

  • Plain-English lines in proposals or emails that explain how we will use specific data, tools, or testimonials.

  • How much text a model can consider at once.

    Why it matters:
    longer windows allow more background to stay in view.

  • Settings that control how steady or varied the writing is.

    Tip:
    lower is steadier, higher is more exploratory.

  • The next step you want someone to take.

    Example:
    “Book your 15-minute discovery call.”

  • A tailored helper configured for your business with your instructions, examples, and allowed tools.

    See also:
    helper, voice and style pack

  • How your organization manages access, quality, retention, and compliance.

    See also:
    data inventory, retention, least privilege

  • A short list of what data a project uses, where it lives, who can access it, and how long we keep it.

  • Using the least amount of data needed to do the job.

    See also:
    de-identification, redaction

  • Where data is stored geographically. Some companies require specific regions.

  • Removing personal details so data cannot easily be tied back to a person.

    See also:
    PII, PHI, redaction

  • A short, free conversation to confirm fit and identify the biggest opportunity.

  • A contract add-on that sets privacy, retention, and subprocessors for a project.

  • A numeric representation of text used for search and retrieval.

    Why it matters:
    helps helpers find related ideas quickly.

    See also:
    vector database, RAG
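
    Code sketch:
    toy three-number "embeddings" showing the core idea — related text ends up with nearby vectors. Real embeddings have hundreds of dimensions and come from a model.

```python
import math

def cosine(a, b):
    """Similarity between two vectors: near 1 means related."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

invoice = [0.9, 0.1, 0.0]   # "send the invoice"
billing = [0.8, 0.2, 0.1]   # "billing question"
weather = [0.0, 0.1, 0.9]   # "rain tomorrow"

# The two billing-related texts sit closer together than either does
# to the weather text.
assert cosine(invoice, billing) > cosine(invoice, weather)
```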

  • Protecting data while it moves across the internet and while stored on disk.

  • A small set of examples used to test a helper’s accuracy and tone before launch.

    See also:
    human-in-the-loop, guardrails

  • When a helper cannot proceed, it hands off to a person or a simpler workflow so users are not stuck.

  • A couple of short examples in a prompt to teach format and tone.

    Example:
    two sample replies in your voice, then “write one for this case.”
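
    Code sketch:
    assembling a few-shot prompt in Python; the sample questions and replies are invented, but the shape — examples first, new case last — is the technique.

```python
# Two sample replies teach format and tone, then the model is asked
# to write one for the new case. The wording here is illustrative.

examples = [
    ("Can you reschedule?",
     "Of course -- happy to find a time that works better."),
    ("Is the invoice ready?",
     "Yes! It's attached; let me know if anything looks off."),
]

def few_shot_prompt(new_question):
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {new_question}\nA:"

prompt = few_shot_prompt("Do you offer weekend support?")
```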

  • Training a model further on a small set of your examples to improve style or accuracy for your use case.

  • Your unique way of speaking and deciding. Helpers are tuned to this “signal” so outputs sound like you.

  • Google’s analytics tool for measuring site traffic and conversions.

    See also:
    tracking link (UTM)

  • An answer that cites approved sources so you can verify it.

    See also:
    grounding, RAG

  • Keeping answers tied to your approved sources. RAG is one way to do this.

    See also:
    grounded answer, knowledge base

  • Rules that control what a helper can read, write, and do.

    Example:
    allow-list approved docs, require human review before any email is sent.
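
    Code sketch:
    the two guardrails from the example above — an allow-list of approved docs, and a human-review gate before any email is sent. File names and return values are made up.

```python
# Guardrail 1: the helper may only read documents on the allow-list.
APPROVED_DOCS = {"pricing.md", "faq.md"}

def can_read(doc):
    return doc in APPROVED_DOCS

# Guardrail 2: nothing goes out without a person signing off.
def send_email(draft, approved_by_human):
    if not approved_by_human:
        return "held for review"
    return "sent"

assert can_read("faq.md")
assert not can_read("salaries.xlsx")
```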

  • When a model sounds confident but gets facts wrong.

    How we reduce it:
    retrieval, examples, and human review for anything client-facing.

  • A small, purpose-built AI tool we configure for a repeatable job in your voice.

    Why it matters:
    faster cycles, fewer edits.

  • A person approves or edits AI output before it goes out. Default for client-facing messages.

  • The standing guidance that sets tone, role, and boundaries for a helper.

  • Connecting tools so they share data or trigger actions.

    See also:
    API, webhooks

  • Tricks that try to make a model ignore instructions or leak data.

    See also:
    prompt hardening, guardrails

  • The set of approved documents a helper can use.

    See also:
    retrieval, RAG

  • Delay between request and response. We design for low latency so helpers feel snappy.

  • Giving only the minimum access necessary to complete the work.

    See also:
    RBAC, access approvals

  • A type of AI model that predicts likely next words to generate text.

  • Keeping a record of operations to help debug and audit. You can opt out of nonessential logs.

  • An extra sign-in step, like a code, that protects accounts even if a password is stolen.

  • A confidentiality agreement to protect shared information. Often used before deeper data access.

  • The steps to get set up: access, data samples, voice examples, and a quick success plan.

  • Running AI in a private environment so prompts and outputs are not retained by a public vendor.

    See also:
    private mode, zero-retention mode

  • Standard Operating Procedure

    A single page that shows when to use the helper, the steps, and what “good” looks like.

  • Any file or result a workflow produces, like a document, email draft, or spreadsheet.

  • Personal health data covered by regulation. Requires special handling, or we avoid using it.

  • Data that can identify a person, like name, email, or phone. Handle carefully.

    See also:
    redaction, de-identification

  • A limited trial with real users to validate a workflow before broader rollout.

    See also:
    rollout, eval set

  • Using settings or deployments where prompts and outputs are not stored by vendors or are stored only in your environment.

    See also:
    zero-retention mode, on-prem

  • The instruction you give a model. Clear prompts get better results.

  • A working session to improve prompts and identify quick wins, with before-and-after comparisons.

  • Techniques to resist prompt injection and data exfiltration.

    Examples:
    allow-lists, strict tool permissions, content filters

  • A reusable prompt with blanks for details.

    Example:
    “Write a friendly two-paragraph email in our voice to [name] about [topic].”
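
    Code sketch:
    the template above, filled in with Python's `str.format`; the bracketed blanks become named placeholders.

```python
# A reusable prompt with blanks for details, filled per use.
template = ("Write a friendly two-paragraph email in our voice "
            "to {name} about {topic}.")

prompt = template.format(name="Jordan", topic="the spring launch")
```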

  • A useful improvement you are not doing yet because no one connected the dots. Often a small helper with a big payoff.

  • Have the helper read your approved docs before answering so it stays grounded in your content.

    See also:
    grounding, vector database

  • Caps vendors set on how many requests you can make per minute. We design around these.
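
    Code sketch:
    one common way to design around a rate limit — retry with exponential backoff when a request is rejected. The fake `send` below rejects the first two attempts purely to show the loop working.

```python
import time

attempts = {"n": 0}

def send():
    """Stand-in for a vendor call; fails twice, then succeeds."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

def send_with_backoff(max_tries=5):
    for i in range(max_tries):
        try:
            return send()
        except RuntimeError:
            time.sleep(0.01 * 2 ** i)  # wait a little longer each retry
    raise RuntimeError("gave up")

result = send_with_backoff()
```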

  • Access granted by role, like read-only finance, to keep permissions tidy.

    See also:
    least privilege

  • The intermediate steps a system takes to decide what to output. We focus on reliable results over exposing inner traces.

  • Masking sensitive details before sharing or storing text.

    Example:
    replace phone numbers with [REDACTED]
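
    Code sketch:
    the example above with a regular expression; the pattern covers common US phone formats only, and production redaction needs more care than this.

```python
import re

def redact_phones(text):
    """Mask phone numbers before sharing or storing text."""
    return re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[REDACTED]", text)

clean = redact_phones("Call Sam at 808-555-0134 before noon.")
```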

  • How long data is stored. Defaults are short timelines with deletion at project end unless you require otherwise.

  • Who the helper is pretending to be.

    Example:
    “You are a helpful client services assistant.”

  • Turning on a workflow for more users in stages after the pilot.

  • A step-by-step plan to launch helpers safely, train the team, and measure results.

  • Rules that prevent harmful or disallowed outputs. Applied at the vendor level and in our prompts.

  • The precise boundaries of what a helper will and will not do. We keep scopes small and useful.

  • Short-term context a helper can remember during a conversation to stay consistent, then reset.

    See also:
    context window

  • Third-party security standards some vendors have. We prefer tools with these attestations when relevant.

    (System and Organization Controls 2 / International Organization for Standardization 27001)

  • Plain-English steps so a task is done the same way every time. Often delivered as a one-pager.

  • A short document listing scope, deliverables, timeline, and responsibilities.

  • The tiniest build that delivers value now. We favor this over big-bang projects.

  • Sending output as it is generated so you see it appear quickly.

  • A brief reference for tone, formatting, and dos and don’ts so outputs stay on-brand.

    See also:
    brand voice, voice and style pack

  • The official place a piece of information lives, like your CRM for client notes.

    See also:
    single source of truth

  • How long it takes from kickoff to your first measurable result. We track and work to shorten it.

  • A small chunk of text models count to measure length and cost. Roughly three to four characters on average.
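
    Code sketch:
    rough length math from the three-to-four-characters rule of thumb above. Real tokenizers vary by model, so treat this as an estimate only.

```python
def estimate_tokens(text, chars_per_token=4):
    """Very rough token estimate: characters divided by ~4."""
    return max(1, len(text) // chars_per_token)

approx = estimate_tokens("Aloha! Your draft is ready for review.")
```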

  • Letting a helper call an external tool mid-task, like search, calendar, or email.

  • A link with tags such as utm_source and utm_medium so we can see which campaign brought a visitor.

    See also:
    GA4, QR code
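
    Code sketch:
    building a tracking link with Python's standard library; the campaign values are examples.

```python
from urllib.parse import urlencode

def tracking_link(base, source, medium, campaign):
    """Append utm_* tags so analytics can credit the right campaign."""
    params = urlencode({"utm_source": source,
                        "utm_medium": medium,
                        "utm_campaign": campaign})
    return f"{base}?{params}"

link = tracking_link("https://marsten.ai", "newsletter", "email", "spring_launch")
```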

  • What a model learned from before you ever used it. For IP-sensitive work, we choose tools with clear training sources.

  • A trigger is the event that starts a workflow. An action is what happens next.

  • A specialized store for embeddings that makes similarity search fast and accurate.

    See also:
    embedding, RAG

  • Labeling iterations of prompts, workflows, and models so we know what is running where.

  • A compact guide with examples, phrasing, and dos and don’ts that trains helpers to stay on-brand.

  • Automatic messages tools send each other when something happens, like “a form was submitted.”

    See also:
    API, integration
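
    Code sketch:
    what receiving a webhook looks like — the sending tool posts a small JSON message when something happens, and your handler reacts. The event name and fields are made up.

```python
import json

def handle_webhook(body):
    """React to an automatic message like 'a form was submitted.'"""
    event = json.loads(body)
    if event.get("type") == "form.submitted":
        return f"queue thank-you email for {event['email']}"
    return "ignored"

result = handle_webhook('{"type": "form.submitted", "email": "sam@example.com"}')
```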

  • A vendor setting where prompts and outputs are not stored or used for training.

    See also:
    private mode, on-prem

  • How many examples you give in the prompt. Zero-shot is none, one-shot is one example, few-shot is a small set.

Last updated: October 18, 2025 (HST)