how we work:
ethical, privacy-first AI
I treat your data, your customers, and your brand like my own.
Last updated: October 18, 2025 (HST)
Still have questions? Email me at douglas@marsten.ai
-
I only use data you explicitly approve for this project and purpose.
I minimize sensitive details and keep them out of public chat tools.
When possible, I configure zero-retention or private deployments so prompts and outputs aren't stored or used for model training.
I don't scrape or ingest third-party content you don't own or license.
For demos and testing, I use de-identified or synthetic data whenever feasible.
-
I vet tools for privacy controls, encryption, enterprise terms, and retention settings.
If your company bans a tool, I propose alternative approaches.
When feasible, I build inside your accounts or with your API keys so you remain in control.
I prefer vendors with independent security attestations (SOC 2, ISO) and clear documentation about training-data sources and opt-outs.
For IP-sensitive use cases, I avoid generative tools whose training sources are unclear or likely to include unlicensed material.
-
Least-privilege access only. I request the minimum permissions required.
Devices are encrypted. Secrets and passwords live in a password manager, never in plain text.
Files live in approved folders with encryption at rest and in transit.
If email is unavoidable, attachments are encrypted and keys are shared via a separate channel.
-
We agree up front on what I receive, where it lives, who can access it, and for how long.
I maintain a simple data inventory for the project.
On project close, I delete local copies and temporary artifacts within the agreed timeframe and confirm in writing.
Backups and logs follow the same timelines unless your policy requires different handling.
-
Every workflow ships with a plain-English one-pager: inputs, steps, tools used, privacy notes.
Any external integrations, automations, or data connections require written approval.
I align with your legal and IT policies and adjust based on your review.
A current list of tools and sub-processors used on your project is available on request.
-
AI outputs are drafts until a human approves them. Nothing auto-sends to customers without your sign-off.
I test for prompt-injection risks, hallucinations, and data leakage before recommending production use.
Where relevant, I add guardrails and allow-lists.
I monitor outputs in pilots and adjust prompts, data, or workflows when necessary.
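As a rough illustration of what a guardrail with an allow-list can look like in a helper I build, here is a minimal sketch; the names (`APPROVED_CHANNELS`, `route_draft`) and channels are hypothetical, and a real implementation depends on your tools and policies:

```python
# Hypothetical sketch of an output allow-list: an AI-generated draft can only
# be routed to channels the client has explicitly approved, and even then it
# is queued for human review rather than sent automatically.
APPROVED_CHANNELS = {"slack:#support-drafts", "email:review-queue"}

def route_draft(draft: str, channel: str) -> str:
    """Queue an AI-generated draft for human review on an approved channel.

    Raises PermissionError if the channel is not on the allow-list, so a
    misconfigured workflow fails loudly instead of writing somewhere new.
    """
    if channel not in APPROVED_CHANNELS:
        raise PermissionError(f"channel {channel!r} is not on the allow-list")
    return f"queued for human review on {channel}: {draft}"
```

The point of the design is that a workflow pointed at an unapproved destination fails with an error instead of silently sending, which matches the human-sign-off rule above.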
-
I avoid ingesting third-party content you don't own or license.
Deliverables, custom helpers, prompts, and workflow documents created for your business are yours as defined in our agreement.
I don't reuse your proprietary datasets or workflows in other client work.
I won't cross-pollinate competitive strategies between clients without explicit permission.
-
I don't use your name, logo, or results in marketing without written permission.
Testimonials and case studies are drafted for your approval before publication and can be anonymized.
-
If I detect a data exposure or security issue, I notify your point of contact promptly with facts, scope, and next steps.
We pause affected workflows until risks are addressed, then document the fix.
I keep a simple incident log for transparency.
-
I'm not a law firm. I flag issues early and follow your counsel's guidance.
If needed, we can add a short data-processing addendum naming approved tools, sub-processors, and regions.
For regulated data (PHI, PCI, or government identifiers), we either put additional controls and agreements in place or avoid using it altogether.
If your policies require specific data residency, we configure regional controls where vendors support them.
-
You can require zero-retention modes wherever tools support them.
You choose data-residency preferences when tools support region selection.
You can opt out of any nonessential analytics or logging for helpers I build.
You approve the data sources a workflow may read and the channels it may write to.