Not Done Conversation — Ben Rafter drops 12/16
CEO, Hotel Equities & Springboard Hospitality
Ben Rafter brings a rare lens to hospitality: tech entrepreneur first, hotel CEO second—and that perspective shows.
His core insight is simple and timely:
this is a hard cycle for owners, but a real opportunity for strong operators.
A few takeaways that stood out:
The gap between operators and owners is widening—and will continue to do so
Soft brands and lifestyle concepts are filling the space between independence and scale
Content and experience now matter as much as flags
AI will ultimately level distribution, shifting power back to properties that know who they are
Not Done Conversation — Richard Garcia drops 12/23
Head of Food & Beverage, Crescent Hotels (former Remington SVP)
Richard Garcia’s career spans the military, kitchens, operations, and executive leadership—but the through-line is consistent: culture drives performance.
A few ideas worth sitting with:
Leadership is a choice, not a role
People learn fastest when they’re allowed to fail—within guardrails
Food & beverage isn’t about menus; it’s about storytelling and service

MIT Insight — What MIT Is Teaching About AI (That Most Leaders Still Miss)
After spending over 100 hours inside MIT’s AI coursework, one thing became clear very quickly:
AI is no longer a tool problem. It’s a leadership design problem.
Here are the five ideas that matter most, straight from the #1 Engineering School in the World:
1. AI Doesn’t “Know” Things — It Predicts Them
Large Language Models don’t reason like humans. At their core, they are statistical prediction systems trained to generate the most likely next output based on patterns in massive datasets—not truth, judgment, or intent.
That distinction matters because:
AI can sound confident while being wrong
Errors are expected behavior without grounding or oversight
The real risk isn’t hallucination—it’s misplaced trust
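The "prediction, not knowledge" point can be made concrete with a toy bigram model. The corpus, helper names, and outputs below are illustrative only, not MIT's material; real models work over billions of tokens, but the core mechanic is the same.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus. The model never "understands" it; it only
# counts which token tends to follow which.
corpus = "the guest checked in . the guest checked out . the staff checked in".split()

# Build a bigram table: for each token, count every token that follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the statistically most likely next token -- no truth, judgment, or intent."""
    return following[token].most_common(1)[0][0]

print(predict_next("guest"))    # "checked" -- the most frequent continuation
print(predict_next("checked"))  # "in" -- sounds confident, may be wrong in context
```

The second prediction is the whole lesson in miniature: "in" is the likeliest continuation, not the correct one for any given sentence.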
Leadership takeaway:
AI should not operate in isolation. The most effective deployments embed AI inside workflows, supported by retrieval, verification, and human judgment. “Human-in-the-loop!”
2. Scale Unlocks Capability — and Complexity
MIT research consistently shows that as models scale—more data, more compute, more context—they exhibit emergent capabilities like reasoning, abstraction, and synthesis.
But scale also introduces:
Knowledge that goes stale over time
Bias propagation from training data (most models are trained primarily on English and Mandarin text, and mostly built by men, etc.)
Security, misuse, and governance challenges
Systems that behave differently in production than in testing
Leadership takeaway:
The question is no longer “How powerful is the model?”
It’s “How well is the system designed around it?”
3. The Real Shift Is From Models to Systems
The most effective AI applications MIT highlights are not standalone models—they are AI systems.
That means:
Retrieval-augmented generation instead of free-form answers
Tool-using agents with defined boundaries
Oversight layers that enforce policy and quality
Monitoring and evaluation after deployment, not just before
This is how reliability improves and risk becomes manageable.
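A minimal sketch of the retrieval piece, assuming hypothetical documents and a simple keyword-overlap retriever (a production system would use embeddings and pass the retrieved context to a model):

```python
# Hypothetical knowledge base -- the point is that answers are grounded
# in retrieved, verifiable text instead of the model's memory alone.
DOCS = {
    "refund-policy": "Refunds are issued within 14 days of cancellation.",
    "checkin-hours": "Check-in opens at 3 PM and closes at midnight.",
}

def retrieve(query: str) -> str:
    """Score each document by word overlap with the query; return the best match."""
    words = set(query.lower().split())
    return max(DOCS.values(), key=lambda doc: len(words & set(doc.lower().split())))

def answer(query: str) -> str:
    context = retrieve(query)
    # A real system would hand `context` to an LLM with instructions to
    # answer only from it; here we just show the grounding step.
    return f"Based on policy: {context}"

print(answer("When does check-in open?"))
```

Swap the toy retriever for a vector store and the f-string for a model call, and you have the skeleton of retrieval-augmented generation.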
Leadership takeaway:
If your AI strategy ends at “we use a foundation model,” you don’t have a strategy—you have an experiment. Maybe ChatGPT 5.2 makes you a better writer, but it will stop there for now.
4. Regulation Is Moving Into the Architecture
One of the most important ideas coming out of MIT is Regulation by Design.
Instead of treating compliance as an afterthought, regulatory and risk objectives are embedded directly into the technical design of AI systems—covering areas like data quality, usage context, and system behavior.
Why this matters:
AI systems are increasingly governed by sector-specific and regional regulation
Organizations—not model providers—are accountable for outcomes
Poorly designed systems increase both legal and operational risk
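One way to picture “Regulation by Design”: the compliance envelope lives in the system’s own configuration and is checked before any model call. The policy fields and contexts below are hypothetical, not any real framework’s schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsagePolicy:
    allowed_contexts: frozenset  # where the system may legally be used
    pii_allowed: bool            # may inputs contain personal data?

# The regulatory envelope, expressed as configuration rather than an afterthought.
POLICY = UsagePolicy(
    allowed_contexts=frozenset({"guest_faq", "menu_copy"}),
    pii_allowed=False,
)

def check_request(context: str, contains_pii: bool, policy: UsagePolicy = POLICY) -> bool:
    """Reject any request that falls outside the designed-in envelope."""
    return context in policy.allowed_contexts and (policy.pii_allowed or not contains_pii)

print(check_request("guest_faq", contains_pii=False))       # allowed
print(check_request("hiring_decision", contains_pii=True))  # blocked by design
```

The gate runs before the model does, which is the whole idea: compliance is architecture, not cleanup.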
Leadership takeaway:
The companies that win won’t resist regulation. They’ll architect for it from day one, as Anthropic is doing for Enterprise accounts.
5. Safety Is Becoming a Strategic Advantage
MIT’s work on AI safety and evaluation makes one thing clear:
AI systems will be stressed, probed, and misused in real-world conditions.
That’s not a future scenario—it’s already happening.
Leading organizations are:
Actively testing their own systems for failure modes
Monitoring AI behavior post-deployment
Treating AI as critical infrastructure, not novelty software
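Post-deployment monitoring can start very small. This sketch audits model outputs against a couple of hypothetical guardrail rules (the rules, names, and examples are illustrative, not a real vendor’s API):

```python
import re

# Hypothetical guardrails: flag outputs that leak a card number or
# invent an unapproved discount before they reach a guest or a ledger.
GUARDRAILS = [
    ("leaked_card_number", re.compile(r"\b\d{13,16}\b")),
    ("unapproved_discount", re.compile(r"\b\d{2,3}% off\b")),
]

def audit(output: str) -> list:
    """Return the name of every guardrail this output violates."""
    return [name for name, pattern in GUARDRAILS if pattern.search(output)]

flags = audit("Enjoy 90% off tonight! Card on file: 4111111111111111")
print(flags)  # both rules trip
```

Log the flags, alert on them, and feed them back into evaluation: that loop is what “treating AI as critical infrastructure” looks like in practice.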
Leadership takeaway:
Trust will differentiate winners. Well-designed, well-governed AI systems will outperform reckless ones—over time, and at scale.

Ongoing Evolution of LLMs
AI won’t be the advantage.
Design will.
Every company will have access to AI.
Very few will design it well.
— Sloan