What a Weekly AI Executive Brief Actually Looks Like
We get asked what an AI-powered weekly brief actually contains. Here's the structure of the one that lands in the AIT CEO's inbox every Monday at 7am — section by section, with the why behind each.
May 8, 2026
We talk a lot about the AI executive brief we built for AIT, but in practice "AI executive brief" can mean almost anything. Some are thin AI-generated summaries on top of last quarter's KPIs. Some are unfiltered Perplexity dumps with the org's name pasted in. Some are genuinely useful.
Here's the actual section-by-section structure of the brief that arrives in the AIT CEO's inbox every Monday morning at 7am, written for any leadership team thinking about commissioning something similar.
Section 1: The narrative summary
A 2–3 paragraph summary at the top of the email. Written in the CEO's voice (we tuned the prompt with samples of her past board updates), it sets context for everything that follows.
"This was a strong week for outreach — the team logged 47 activities reaching 23 unique contacts across 8 tribes. Membership saw 3 new signups at the Tribal Enterprise level, though 8 memberships are due for renewal in the next 30 days. Your Constant Contact campaigns averaged a 24% open rate, with the 'Spring Conference Save the Date' performing best at 31%..."
The summary uses concrete numbers — not vague claims. The model is given the actual hub data and prompted to write in terms of what changed, what's outperforming, and what needs attention this week.
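A minimal sketch of how a prompt like this might be assembled; the field names, sample text, and `build_summary_prompt` helper are illustrative, not the production code:

```python
def build_summary_prompt(metrics: dict, voice_samples: list[str]) -> str:
    """Combine concrete hub numbers with past writing samples for voice tuning."""
    data_lines = "\n".join(f"- {key}: {value}" for key, value in metrics.items())
    samples = "\n---\n".join(voice_samples)
    return (
        "You are drafting a weekly executive brief summary.\n"
        "Match the tone and cadence of these past board updates:\n"
        f"{samples}\n\n"
        "Using ONLY the numbers below, write 2-3 paragraphs covering what "
        "changed, what's outperforming, and what needs attention this week:\n"
        f"{data_lines}"
    )

prompt = build_summary_prompt(
    {"activities_logged": 47, "unique_contacts": 23, "new_signups": 3},
    ["This was a strong week for outreach..."],
)
```

The key move is the instruction to use only the supplied numbers, which is what keeps the summary concrete rather than vague.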
Why this section matters most: the CEO can read just this and know whether to forward the brief to her chief of staff or open the rest herself. It's the highest-stakes 200 words in the whole email.
Section 2: Outreach activity
A table — in practice a styled HTML block, since native <table> rendering breaks in some email clients. It shows:
- Total activities logged this week vs. trailing 4-week average
- Top 3 staff by activity count
- Top 5 tribes by activities logged this week
- Activities by channel (call / email / meeting / event)
The "vs. trailing 4-week average" part is the value. A flat list of numbers tells you nothing; a list of numbers compared to baseline tells you which tribes are heating up.
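The trailing-average comparison can be sketched in a few lines. This assumes per-tribe activity counts are available as one `Counter` per week (data shape and function name are ours, for illustration):

```python
from collections import Counter
from statistics import mean

def tribes_heating_up(weekly_counts: list[Counter], top_n: int = 5):
    """Compare this week's per-tribe activity counts to the trailing average.

    weekly_counts: oldest-first list of Counters; the last entry is this week,
    the earlier entries are the trailing weeks used as the baseline.
    Returns (tribe, this_week_count, trailing_avg) sorted by this week's count.
    """
    *trailing, this_week = weekly_counts
    tribes = set(this_week) | {t for week in trailing for t in week}
    rows = []
    for tribe in tribes:
        baseline = mean(week.get(tribe, 0) for week in trailing) if trailing else 0.0
        rows.append((tribe, this_week.get(tribe, 0), baseline))
    rows.sort(key=lambda row: row[1], reverse=True)
    return rows[:top_n]
```

A tribe whose this-week count sits well above its baseline is "heating up"; the same list sorted the other way surfaces tribes going quiet.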
Section 3: Membership pipeline
- New signups this week (with member name + tier)
- Renewals due in next 30 / 60 / 90 days (with renewal status)
- "At risk" flags — members whose engagement has dropped below threshold
- Lapsed members reactivated (rare but valuable to call out when it happens)
The at-risk logic is the only piece in the brief that uses anything resembling ML. We score engagement based on email opens, event attendance, and recent staff outreach. When the score drops below a threshold, the member shows up here.
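The scoring idea reduces to a capped weighted sum. The weights, caps, and the 0.4 threshold below are illustrative stand-ins, not AIT's actual tuning:

```python
def engagement_score(opens_90d: int, events_90d: int, touches_30d: int) -> float:
    """Weighted engagement score in [0, 1]; weights and caps are illustrative."""
    return round(
        0.40 * min(opens_90d / 10, 1.0)     # email opens, capped at 10 per 90 days
        + 0.35 * min(events_90d / 2, 1.0)   # event attendance, capped at 2
        + 0.25 * min(touches_30d / 3, 1.0), # recent staff outreach, capped at 3
        3,
    )

AT_RISK_THRESHOLD = 0.4  # hypothetical cutoff

def is_at_risk(member: dict) -> bool:
    score = engagement_score(member["opens"], member["events"], member["touches"])
    return score < AT_RISK_THRESHOLD
```

Capping each signal keeps one hyperactive channel (say, a member who opens every email but never shows up) from masking disengagement elsewhere.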
Section 4: Email campaigns
Pulled from Constant Contact via their API:
- Open rate and click rate for each campaign sent this week
- The single best-performing subject line
- The single worst-performing subject line (this is the most useful one — people don't usually look)
We don't include unsubscribe rate. It's noisy at this volume and tends to alarm leadership disproportionately to its actual impact.
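Once the per-campaign stats are fetched from Constant Contact, picking the best and worst subject lines is a one-sort operation. The dict fields here are illustrative; the fetch itself is assumed to have happened upstream:

```python
def subject_line_extremes(campaigns: list[dict]) -> tuple[dict, dict]:
    """Return (best, worst) campaigns by open rate.

    Each campaign dict is assumed to carry 'subject', 'opens', and 'sends'
    fields already pulled from the email platform's reporting API.
    """
    ranked = sorted(campaigns, key=lambda c: c["opens"] / c["sends"], reverse=True)
    return ranked[0], ranked[-1]
```

Surfacing `ranked[-1]` every week is the point: the worst subject line is the one nobody volunteers to look at.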
Section 5: Grant pipeline
For organizations with active grant programs (AIT has many):
- Submissions in flight, with deadline + reviewer
- Grants awaiting decisions, with expected response date
- Awards received this week (rare and worth highlighting)
- Upcoming federal funding deadlines flagged from the intelligence section
Section 6: Real-time intelligence
This is the section that took the brief from "report" to "AI chief of staff." Four topic queries run through Perplexity Sonar each Monday morning:
- "Indigenous tourism news this week" — current developments, with citations
- "Federal policy and legislation affecting tribes" — IRS, BIA, NPS, congressional activity
- "Industry trends in tribal tourism and tribal enterprises" — broader market context
- "Active grant opportunities for tribal organizations" — new RFPs, deadline reminders
Each topic produces 3–5 bullet points with hyperlinked citations. The CEO clicks through to the originals if she wants to go deep; otherwise the bullets are the read.
Cost: ~$0.005 per Perplexity Sonar query × 4 queries = ~$0.02 per brief.
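The four queries above map to four small request bodies against Perplexity's OpenAI-compatible chat completions endpoint. A sketch of the payload construction (model name and system prompt per current Perplexity docs as of writing; verify against their API reference before relying on them):

```python
TOPIC_QUERIES = [
    "Indigenous tourism news this week",
    "Federal policy and legislation affecting tribes",
    "Industry trends in tribal tourism and tribal enterprises",
    "Active grant opportunities for tribal organizations",
]

def sonar_payload(query: str) -> dict:
    """Request body for POST https://api.perplexity.ai/chat/completions."""
    return {
        "model": "sonar",
        "messages": [
            {"role": "system",
             "content": "Return 3-5 concise bullet points with citations."},
            {"role": "user", "content": query},
        ],
    }

payloads = [sonar_payload(q) for q in TOPIC_QUERIES]
```

Each payload would be POSTed with a bearer token in the `Authorization` header; the citations come back alongside the completion and become the hyperlinks in the brief.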
Section 7: This week's calendar (optional)
When integrated with Google Calendar (an opt-in second phase), a final section shows the CEO's upcoming week with conflicts flagged. We don't generate or modify calendar events — just surface them in the same context as everything else.
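Conflict flagging on a read-only event list is a simple overlap scan. A sketch, assuming events have already been read from the calendar as (title, start, end) tuples:

```python
from datetime import datetime

def find_conflicts(events: list[tuple[str, datetime, datetime]]) -> list[tuple[str, str]]:
    """Flag pairs of events whose time ranges overlap.

    Read-only: nothing on the calendar is created or modified.
    """
    events = sorted(events, key=lambda e: e[1])  # sort by start time
    conflicts = []
    for i, (title_a, start_a, end_a) in enumerate(events):
        for title_b, start_b, _ in events[i + 1:]:
            if start_b < end_a:  # next event starts before this one ends
                conflicts.append((title_a, title_b))
            else:
                break  # sorted by start, so no later event can overlap
    return conflicts
```

Because the list is sorted by start time, the inner loop can stop at the first non-overlapping event, which keeps this cheap even for a busy week.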
What we left out (and why)
- Social media metrics. They're noisy and don't drive the decisions a CEO makes.
- Website analytics. Marketing lead gets a separate brief focused on this.
- Generic industry "AI is changing everything" commentary. Nobody reads it.
- Forecasts and predictions. AI is bad at these; humans are slightly less bad.
The discipline is a single question: what would a smart chief of staff actually flag for the boss this week? Most of what AI vendors pitch is the opposite — fill the page, justify the subscription. We optimize for the inbox.
What it costs to build
The hub already had the data. The brief itself was a 4-week build: 1 week designing the section structure with the CEO, 2 weeks implementing data collectors and the email template, 1 week iterating on the narrative-summary prompt until the voice landed.
For an organization that already has a working CRM or member system: a comparable brief is a 4-week project at our Custom Build tier. For organizations without that foundation: that comes first, and it's a longer engagement.
If you want one
The cleanest first step is the free 15-minute call. We'll talk about whether your organization has the data and the leadership rhythm where a brief like this actually pays off. If it doesn't, we'll say so. If it does, the next question is which sections — the AIT structure isn't sacred, it's the result of a decade of conversations about what gets read on a Monday morning.