
Intent is the new perimeter
On todays episode:
I had taken a bit of a break from creating podcasts for the last year. I needed to focus on the changes that were happening in the world of AI. Every week there were 30 new things I needed to understand and internalize, so that I can then explain them to other people. The pace at which change was happening was accelerating, and still is.
This has forced me to analyze many things, including the premise of this show, which was named the Canadian Cybersecurity Podcast. However, in this last year, not only did cybersecurity fundamentally change, but the entire world changed. There has been a shift to a world that is no longer run by people, but by agents orchestrated by people, agents orchestrated by agents, and people orchestrated by agents.
Those that understand the nature of our new reality, will thrive. Those that do not, will face great hurdles, and may not be able to recover from it. Thus, I have now changed the name of this podcast, to the Canadian AI & Cybersecurity Podcast.
In todays episode. I will speak with Junior Williams, a Principle Architect for Cybersecurity and AI at Bitsummit, a Canadian IT consulting and services organization.
We will discuss the intersection of AI and cybersecurity, with “intent as the new perimeter” beyond traditional identity-based controls.
We will discuss the need for AI governance: councils, Boolean enforcement hooks, Enterprise AI rollouts patterns, and some real world risks like the current AI supply chain attacks.
Then will also venture into self-evolving systems, and using an exocortex system for AI.
Enjoy.
Transcript Structured Summary
Guest intro and background
- Guest: Junior Williams, returning from your first-ever episode, now a principal enterprise architect at BitSummit.
- Career arc: Decades in programming (starting with object‑oriented C), telco, web app dev, systems analysis, network admin and architecture, then cyber risk consulting (risk analysis, policy, pen‑testing, incident response).
- AI journey: Long‑time interest in ML; hands‑on with GPT‑2 around 2020, then daily work with frontier and local models since the launch of ChatGPT.
- Current focus: Living at the intersection of cybersecurity and AI, helping enterprise and public sector clients navigate the rapidly evolving AI landscape.
The new perimeter: intent over identity
- Traditional view: “Identity is the new perimeter” – who you are and what you can access remains necessary but no longer sufficient in an AI‑driven environment.
- Problem: An agent with valid credentials, scope, and API keys can still perform actions you never intended, even if permissions are technically correct.
- Dual intent model:
- Example incident: A fully authenticated agent with Bash access pulled API keys from macOS Keychain and printed them into the chat context, turning encrypted‑at‑rest secrets into cleartext sent to a cloud provider and potentially subject to the CLOUD Act.
- Hooks as a game‑changer: Using deterministic pre‑tool or pre‑publish hooks to intercept and block unsafe behaviors (e.g., scanning for key patterns before executing a tool or shipping artifacts).
- Takeaway: “Intent as the new perimeter” – you must govern what an agent is trying to do, not just what it is allowed to touch.
From prompt engineering to harness engineering
- Evolution of practice:
- Prompt engineering – shaping individual prompts to steer model behavior.
- Context engineering – controlling what information flows into and out of the model’s working set.
- Harness engineering – designing the overall agentic runtime (skills, tools, hooks, policies, flows) to make behavior more deterministic and governable.
- Core problem: Organizations want deterministic outcomes from non‑deterministic systems by “just talking” to LLMs; this expectation is fundamentally misaligned with how models work.
- Determinism tactics:
Designing safer agentic systems in enterprises
- Start with fundamentals: “Know thy network” becomes “know thy agents and data.”
- Required inventories:
- Blast radius thinking: If you cannot enumerate the potential blast radius of an agent, you are not ready to expand it.
- Non‑determinism risk: You cannot “tune” the base model into strict determinism without losing the creative, multi‑path behavior that makes it powerful; guardrails must be outside the model.
AI roadmap vs security maturity
- Misalignment: AI initiatives are often moving much faster than organizational security maturity, effectively accelerating risk rather than modernization.
- Governance first:
- Three architectural design principles:
- Boolean enforcement – a control plane that can deterministically allow/block actions and tool calls; don’t rely on the model to police itself.
- Lifecycle positioning – place enforcement at the right points in the agent lifecycle (pre‑tool, pre‑deploy, CI/CD, runtime), and revisit continuously rather than as an annual penetration test.
- Auditability – full traceability of agent actions and decision chains to analyze near‑misses and actual incidents.
- Reliability expectation: For safety‑critical AI agents, the bar should be closer to OT‑style 100% reliability than IT’s “five nines” uptime mindset.
Breach mindset and new AI attack surfaces
- Assume‑breach posture: Continuous threat exposure management is essential; treat systems as perpetually at risk rather than “secure between pen tests.”
- Axios NPM supply‑chain attack:
- Popular package compromise illustrates how a single dependency can reach tens or hundreds of thousands of applications and endpoints.
- Even simple controls like strict outbound firewalls and endpoint egress controls could have mitigated some impact, though more sophisticated payloads could evade naïve defenses.
- API key exhaustion attacks:
- Anthropic’s Claude Code source exposure:
- Strategic implication: Organizations must rethink AI supply‑chain risk (packages, SDKs, harnesses, orchestration layers, and MCP servers) and treat them as critical dependencies, not just developer tools
SBOM and the agentic stack (CISO lens)
- Need for an AI‑aware SBOM: CISOs must understand not just software components but the full “agentic stack” – models, tools, connectors, harnesses, plugins, and data flows.
- SBOM considerations for agents:
- Dependencies: NPM/PyPI/other libraries, orchestrators (OpenAI/Anthropic SDKs, OpenClaw, Paperclip, Exocortex‑like systems).
- Connectors and hubs: MCP servers, API gateways, RAG pipelines, secret hubs, internal tool catalogs.
- Runtime policies: Hook configurations, guard scripts, control‑plane rules that change over time.
- Defense in depth: Combine SBOM visibility with classic controls (egress filtering, rate limits, key scoping and rotation, strong identity, network segmentation) rather than chasing “AI‑only” silver bullets.
- First‑principles questions for CISOs:
- What problem are we trying to solve, and what is the ROI versus risk?
- Are we using AI for technology’s sake, or for clear business and security outcomes?
Self‑improving agents and “time travel” patterns
- Sci‑fi “30‑second rewind” analogy: Instead of correcting an LLM within a polluted context, you can ask how it would fix an error, then rewind earlier in the conversation and re‑run with the improved plan.
- Benefits: Cleaner context, fewer tokens, better iterative improvement, and a practical mechanism for self‑training at the harness level.
- Autopoietic systems and Exocortex:
- Junior describes an “Exocortex” of agents that evaluate their own past work and evolve over time.
- Uses councils of agents (including a dedicated contrarian) to review commits, summaries, and sessions, focusing on security issues, code exposure, and downstream risk.
- Over time, the system not only finds but also remediates issues autonomously, while still flagging changes to the human operator.
- Multi‑model review: Leveraging a second frontier model (e.g., sending code from one ecosystem to another, such as Anthropic ↔ OpenAI) for adversarial review of agents and code.
Frontier models, local models, and the edge
- Mythos model concerns: Reports of Anthropic’s “mythos” model being delayed over cybersecurity power highlight the inevitability of highly capable offensive/defensive AI (if not from one vendor, then from others, including open‑source communities).
- Auto‑research and peer‑to‑peer improvement: Karpathy‑style local auto‑research pipelines plus peer coordination could crowdsource model improvement, breaking the notion that only hyperscalers can field advanced capabilities.
- Local inference trend:
OpenClaw, harness security, and enterprise deployment
- OpenClaw phenomenon:
- Reality check:
- Deployment patterns:
- Partnering with NVIDIA: Described efforts to design secure OpenClaw deployments using NVIDIA’s stack (NeMo, etc.), aiming to industrialize these patterns for enterprises.
Human factors, engagement farming, and phishing risk
- Engagement farming concern: Infinite‑scroll short‑form content that keeps users in a semi‑trance state substantially increases susceptibility to phishing and social engineering in chat overlays.
- Attack surface expansion: AI‑driven engagement systems plus live chats equate to a massive and growing phishing vector, as users click links while cognitively “down‑regulated.”
Closing themes and mindset
- Balanced stance:
- Continuous learning: Best practice is not a fixed target; the only stable recommendation is to remain in a continuous learning and experimentation loop and not assume mastery.
- Practical mantra: Combine assume‑breach security thinking, explicit intent control for agents, and disciplined, auditable harness engineering to stay out of the “lowest‑hanging‑fruit” category.
About the author

With over 25 years of industry experience, Daemon Behr is a seasoned expert, having served global financial institutions, large enterprises, and government bodies. As an educator at BCIT and UBC, speaker at various notable events, and author of multiple books on infrastructure design and security, Behr has widely shared his expertise. He maintains a dedicated website on these subjects, hosts the Canadian Cybersecurity Podcast, and founded the non-profit Canadian Cyber Auxiliary, providing pro bono security services to small businesses and the public sector. His career encapsulates significant contributions to the IT and Cybersecurity community.
Other recent articles of note.
Discover more from Designing Risk in IT Infrastructure
Subscribe to get the latest posts sent to your email.





