The problem this guide solves
The market for AI consulting services is extraordinarily crowded and extraordinarily uneven. Firms with radically different capabilities, track records, and architectural approaches all describe their work using roughly the same language. Prospective buyers — especially those evaluating AI consulting for the first time — have very little signal to work with.
This guide is a set of questions we wish every AI consulting buyer asked. We’d benefit from them being asked more, because we answer them well. But the point is not self-promotion. The point is giving buyers a framework for distinguishing the firms that will ship production work from the firms that will ship demos.
Question 1: Where does the data actually live?
A specific question to ask: “For the application you’re proposing to build, walk me through every place my data will travel, every third party that will process it, and every log that will be created.”
What a good answer sounds like: a specific, diagrammed data flow covering ingestion, storage, processing, inference, and logging, with named components and named third parties where they exist.
What a bad answer sounds like: vague reassurances about security, mention of encryption without specifics, inability to name specific infrastructure.
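The shape of a good answer can be captured concretely: an inventory where every stage, component, third party, and log is named. A minimal sketch of such an inventory follows; every component name, retention period, and the third party "ModelVendorCo" is a hypothetical placeholder, not a real architecture.

```python
# A hypothetical data-flow inventory of the kind a good answer to
# Question 1 produces. All names here are illustrative placeholders.

DATA_FLOW = [
    # (stage, component, third_party, logs_created)
    ("ingestion",  "s3://client-bucket/raw",  None,            "S3 access logs"),
    ("storage",    "Postgres (client VPC)",   None,            "query audit log"),
    ("processing", "ETL job on client EKS",   None,            "job logs, 30-day retention"),
    ("inference",  "hosted model API",        "ModelVendorCo", "request metadata only"),
    ("logging",    "centralized log store",   None,            "app logs, PII redacted"),
]

def third_parties(flow):
    """Return the distinct third parties that will process the data."""
    return sorted({tp for _, _, tp, _ in flow if tp is not None})

print(third_parties(DATA_FLOW))
```

A vendor who can fill in a table like this from memory, for your engagement, is giving you signal; a vendor who cannot is giving you reassurance.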
Question 2: Who will actually be doing the work?
A specific question to ask: “Tell me the names of the engineers who will be on my engagement. Where will they be located? What’s their experience?”
What a good answer sounds like: specific people, specific backgrounds, specific commitments about who stays on the engagement.
What a bad answer sounds like: “Our team is experienced” followed by a pitch deck.
Question 3: What happens after launch?
A specific question to ask: “Twelve months after launch, who is responsible for this system? What happens when the underlying models improve? What happens when a dependency is deprecated?”
What a good answer sounds like: a specific operations model, a specific commitment, a specific pricing structure for ongoing work.
What a bad answer sounds like: “We’ll hand it off to your team” or “We offer optional support packages.”
Question 4: Can you show me a system you’ve shipped?
A specific question to ask: “Can I see a working system you’ve built that’s in production today? Not a demo. A system being used by real people.”
What a good answer sounds like: a live system (or a careful description of one, with appropriate confidentiality), case study detail, reference clients available for a conversation.
What a bad answer sounds like: decks, mockups, “we can arrange a demo.”
Question 5: What would you not recommend building?
A specific question to ask: “In the scope we’ve discussed, what would you recommend we not do with AI? Where would you tell us the technology isn’t ready or the return doesn’t justify the investment?”
What a good answer sounds like: specific things the vendor would tell you to skip, specific reasons why.
What a bad answer sounds like: universal enthusiasm for everything you’ve proposed.
Question 6: How do you price ongoing operations?
A specific question to ask: “What’s the total cost of ownership of what you’re proposing, including hosting and operations, over three years?”
What a good answer sounds like: a clear monthly figure, documented assumptions, a specific breakdown of what scales with usage and what doesn’t.
What a bad answer sounds like: a build-phase price with vague references to ongoing support.
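The arithmetic behind a good answer is simple enough to sanity-check yourself. The sketch below separates costs that scale with usage from costs that don’t, and includes the provider price-shock scenario from Question 7; every figure in it is an illustrative assumption, not a quote from any vendor.

```python
# Hypothetical 3-year total-cost-of-ownership sketch for an AI system.
# All figures are illustrative assumptions, not real pricing.

MONTHS = 36

# Costs that do not scale with usage (assumed monthly figures).
fixed_monthly = {
    "hosting": 1_200,        # dedicated inference host
    "monitoring": 300,       # logging / alerting stack
    "maintenance": 2_500,    # retainer for patches and model upgrades
}

# Costs that scale with usage (assumed per-request cost and volume).
cost_per_request = 0.004     # blended inference cost (assumption)
requests_per_month = 250_000

build_cost = 180_000         # one-time build phase (assumption)

def three_year_tco(price_multiplier: float = 1.0) -> float:
    """Total cost of ownership over MONTHS months.

    price_multiplier models a provider price shock, e.g. 3.0 for
    the "what if inference prices tripled?" question.
    """
    fixed = sum(fixed_monthly.values()) * MONTHS
    usage = cost_per_request * price_multiplier * requests_per_month * MONTHS
    return build_cost + fixed + usage

print(f"baseline 3-year TCO: ${three_year_tco():,.0f}")
print(f"with a 3x inference price shock: ${three_year_tco(3.0):,.0f}")
```

A vendor’s proposal should let you fill in every one of these variables with documented numbers. If only `build_cost` is specified, you are looking at a build-phase price, not a total cost of ownership.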
Question 7: What’s your posture on model selection?
A specific question to ask: “Which foundation model are you proposing to use and why? What would you change if that provider raised prices by 3x?”
What a good answer sounds like: specific reasoning about the choice, a discussion of alternatives, a real answer about architectural flexibility.
What a bad answer sounds like: “We use GPT-4” with no further discussion.
The red flags we see most often
- The firm leads with certifications and partnerships rather than systems they’ve built.
- The proposed architecture involves sending client data to public APIs without explicit acknowledgment of what that means.
- The engagement is structured as a build, with operations as an afterthought.
- The team that pitches is not the team that will do the work.
- The firm cannot describe a specific system, in production, doing a specific thing, for a specific named client.
The qualifying questions we welcome
If you’re evaluating Skyview Labs specifically, the questions above are the ones we welcome. If you ask them and our answers don’t hold up, we would want to know that before a contract is signed.