What Tribal Boards Should Ask Before Approving an AI Project

Eight questions a tribal council, board of directors, or executive committee should ask before approving any AI engagement — vendor or in-house. Designed to be printed and brought into the meeting.

May 9, 2026

Most AI proposals that come in front of a tribal council or non-profit board are sold by people who have never been in front of one. The slides talk about productivity gains, transformation, ROI multipliers. The slides do not talk about tribal sovereignty, data residency, what happens to your member information when the vendor gets acquired, or whether the staff actually wants the tool.

This is a list of questions board members and council members should be asking — designed to be printed, brought into the meeting, and used. Eight questions, organized by category. The default answer to most of these is "we don't know yet, get us those answers and come back."

On the problem

1. What's the specific problem this AI tool will solve, in one sentence?

If the answer is "use AI to grow our membership / our donor base / our followers by 5x" — that's a vendor pitch, not a problem definition. Push back. Ask for a sentence about a specific question the staff currently answers manually, or a specific report that takes too long, or a specific bottleneck the team feels every week.

If the proposer can't answer this in one sentence, the project is not ready for board approval.

2. What does the staff who currently does this work say about the proposal?

Ask the membership manager. Ask the program director. Ask the cultural center coordinator. AI projects that get approved over the heads of the staff who'll use them are the AI projects that fail.

If the staff is uncomfortable, find out why. Sometimes it's resistance to change (workable). Sometimes it's a real concern about the work being misrepresented (a stop signal).

On data and sovereignty

3. Where will our data live, and who else will be able to see it?

This is the question that most generalist AI proposals fail. Tribes are sovereign nations. Your member information, your enrollment data, your cultural records — these have governance implications that a cloud-default architecture might not respect.

Specific sub-questions:

  • Will the AI vendor train models on our data? (Often the answer is yes by default — read the terms.)
  • What's our data retention policy with this tool?
  • If the vendor gets acquired, what happens to our data?
  • Do we have a written data processing agreement?

4. What happens if we want to leave?

Vendor lock-in is real. Ask before you sign:

  • Can we export all our data in a standard format?
  • How long does the export take?
  • What happens to the AI models or workflows we've trained — do those transfer or stay with the vendor?

On cost and decision-making

5. What's the total cost of ownership over five years, not just the first year?

The pitch usually shows the first-year price. The real cost of an AI tool also includes:

  • An annual subscription (which often increases each year)
  • Internal staff time to maintain it
  • Eventual migration costs when the vendor pivots or sunsets the product

Ask for a five-year cost projection. If the proposer can't give one, ask why.
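
To make that concrete, here is a minimal sketch of a five-year projection in Python, using purely hypothetical numbers: a $20,000 year-one subscription, 8% annual price increases, $15,000 a year of internal staff time, and a $25,000 migration at the end. None of these figures come from a real proposal; swap in the numbers from the one on your table. The same arithmetic fits in a single spreadsheet column.

  # Hypothetical five-year total cost of ownership sketch.
  # Every figure below is an illustrative assumption, not a quote from any vendor.
  subscription_year_one = 20_000   # year-one subscription price
  annual_increase = 0.08           # assumed yearly price escalation
  staff_time_per_year = 15_000     # internal time to maintain the tool and review outputs
  migration_cost = 25_000          # eventual export/migration when the tool is retired

  total = migration_cost
  for year in range(5):
      subscription = subscription_year_one * (1 + annual_increase) ** year
      total += subscription + staff_time_per_year

  print(f"Five-year total: ${total:,.0f}")   # about $217,000 with these assumptions

Against a $20,000 headline price, that is more than ten times the number on the first slide. The exact figures will differ from proposal to proposal; the pattern of the year-one price understating the real commitment rarely does.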

6. What's the minimum version of this we could ship in 30 days?

This is the "are you actually serious about this" question. AI projects that can't articulate a minimum viable version are usually theoretical. Real proposals can describe the smallest possible version that delivers visible value within a month.

If the answer is "we need 6 months and $300K to set up the infrastructure before anything ships" — that's a different kind of project, and it deserves much more scrutiny.

On governance and reversibility

7. Who has decision-making authority on what this tool does and doesn't do?

When the AI tool starts producing outputs, who reviews them before they go to members or the public? Who has authority to change the prompts? Who decides what categories of data are off-limits (sacred sites, ceremonial photos, enrollment records)?

If the answer is "the vendor handles all of that" — the project doesn't have appropriate oversight.

8. How do we turn this off if it's not working?

Sometimes AI tools produce confidently wrong answers. Sometimes they cause unexpected staff burnout. Sometimes the technology landscape changes and what made sense in March 2026 doesn't make sense in March 2027.

The project needs a documented process for turning the tool off, communicating that to staff and members, and exporting whatever value was created. If "off-ramp" isn't part of the proposal, that's a red flag.

A few questions that are NOT useful

In our experience, these questions sound rigorous but produce noise rather than signal:

  • "Is this AI tool secure?" — every vendor will say yes. The actual security questions are #3 and #4 above.
  • "Can it handle our scale?" — most tribal organizations are below the scale threshold where this matters. It's usually a deflection from the real cost question.
  • "What's the AI's accuracy rate?" — accuracy is not a single number for AI tools, and any vendor who quotes one is selling. Ask about specific failure modes instead.
  • "How does it compare to GPT-5 / Claude / Gemini?" — model choice is an implementation detail. The question is whether the tool solves your specific problem.

What an honest pitch looks like

If a vendor or consultant comes to your board with the following structure, they're worth taking seriously:

  1. The specific problem they're solving, in one sentence
  2. What they'll deliver and on what timeline (under 8 weeks for the first version)
  3. What it costs upfront and ongoing, with a 5-year projection
  4. Where the data lives, what the off-ramp looks like
  5. Who at your organization is the day-to-day point of contact
  6. References from organizations that look like yours

If they don't lead with these, the proposal isn't board-ready.

How we approach this with our clients

We're an AI consulting practice that works specifically with Indigenous-serving organizations, so this list isn't theoretical for us — it's the framework we use when we propose work. If we can't answer all eight questions on your board's behalf, the engagement isn't right for us either.

For tribes evaluating any AI engagement (us included), we offer a free 15-minute call to walk through these questions together. The point isn't to sell you our service — it's to help you ask the right questions when the next pitch lands.

If you'd rather have a written framework you can take into a meeting, the AI 1-pager is a more compact version of the same thinking. Print it and keep it in the desk drawer for the next vendor proposal.