Where We Stand
We're a small team doing work that has real consequences. We don't have a large legal department or a PR firm managing our reputation. What we have is a short list of things we won't compromise on — and a shared understanding of why.
Honest work
We don't oversell what AI can do. We don't hide what it can't. If a client's use case is a bad fit for what we build, we say so — even when the contract is already in front of us. The projects that go wrong are almost always the ones where someone decided not to say the uncomfortable thing early.
Responsible deployment
Agents we build operate in real environments with real consequences. We design for failure, not just success — and we push for human oversight in any context where that matters. Which, right now, is most of them.
No hidden agendas
We don't take work that requires us to build systems designed to deceive, manipulate, or surveil people without their knowledge. If a brief heads in that direction, we'll say so and decline. It has happened. It will happen again.
Data handled with care
We treat client data like it belongs to the client — because it does. We don't retain what we don't need, we're clear about what we use and why, and we'd rather ask twice than assume once.
What This Looks Like Day to Day
Principles are easy to write. They're harder to keep when the work gets complicated. Here's how ours show up in practice.
We flag problems before they become your problem
If something concerns us — a system behaving unexpectedly, a scope that's drifted, a decision that deserves a second look — we raise it. Not in a lengthy report. In a direct conversation, as soon as we can have it.
We don't pretend agents are people
The agents we build are clearly identifiable as agents. We don't design systems that misrepresent their nature to the humans interacting with them. That's a line we're not interested in crossing, regardless of how the request is framed.
We hold ourselves to the same standard
We use AI internally. The principles that govern what we build for clients govern how we use it ourselves. If we wouldn't deploy it for a client, we don't run it internally and call it fine.
If Something Doesn't Look Right
We'd rather know about a problem than not. If you've seen something in our work, our conduct, or our systems that gives you pause — tell us. You don't need to be certain. You don't need to have a case fully assembled. A concern is enough.
Reports go directly to our admin contact — not to a ticketing system, not to a queue. A person reads it. You can report anonymously. We won't press for more than you want to share.
The Short Version
The AI industry has a credibility problem — not because everyone in it is acting badly, but because the defaults favor speed over care and hype over honesty. We're trying to be the counterexample. Not loudly. Just in the actual work.
If something on this page rings true, that's a good sign we're aligned. If something rings false, we'd genuinely like to hear that too.
— Webmaster, Lab 317