<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Head of Ops]]></title><description><![CDATA[hardware, operations, supply chain]]></description><link>https://headofops.com</link><image><url>https://substackcdn.com/image/fetch/$s_!5Lst!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F255f9377-608f-457e-befa-8247e5bd072c_982x982.png</url><title>Head of Ops</title><link>https://headofops.com</link></image><generator>Substack</generator><lastBuildDate>Sat, 11 Apr 2026 08:18:01 GMT</lastBuildDate><atom:link href="https://headofops.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Brian Head]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[headofops@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[headofops@substack.com]]></itunes:email><itunes:name><![CDATA[Brian Head]]></itunes:name></itunes:owner><itunes:author><![CDATA[Brian Head]]></itunes:author><googleplay:owner><![CDATA[headofops@substack.com]]></googleplay:owner><googleplay:email><![CDATA[headofops@substack.com]]></googleplay:email><googleplay:author><![CDATA[Brian Head]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Two weeks in: deploying a persistent agent to my ops team]]></title><description><![CDATA[I deployed a custom-made persistent AI agent inside my company as a test.]]></description><link>https://headofops.com/p/two-weeks-in-deploying-a-persistent</link><guid isPermaLink="false">https://headofops.com/p/two-weeks-in-deploying-a-persistent</guid><dc:creator><![CDATA[Brian Head]]></dc:creator><pubDate>Tue, 24 Mar 2026 03:32:26 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5Lst!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F255f9377-608f-457e-befa-8247e5bd072c_982x982.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I deployed a custom-made persistent AI agent inside my company as a test. The agent runs on OpenClaw here in our office in Mountain View.</p><p>Our agent, nicknamed &#8220;Owl&#8221;, has already become a documentation champ. I didn&#8217;t expect this use case to be the first win. It has almost entirely removed the cumbersome (but important) task of capturing decisions in our Confluence wiki. It jotted down decisions on IEEPA tariff refunds and Section 122 tariff changes, worked up a page about an expensive component risk buy we made, &amp; drafted several internal docs on 3PL workflows. Combined, I&#8217;d estimate that saved 4-5 hours of work.</p><p>The Claw framework is hyped, maybe overhyped, but with companies like NVIDIA launching their own fork, <a href="https://www.nvidia.com/en-us/ai/nemoclaw/">Nemoclaw</a>, I believe it has already made a permanent mark. Anthropic &amp; OpenAI have started to build persistent memory &amp; remote-first features into their own products.</p><p><a href="https://headofops.com/p/the-case-for-team-configured-ai-agents">My thesis going in</a> was that an agent could collect our team&#8217;s institutional knowledge: it would become a compounding, easily searchable knowledge base, capture key decisions, and serve as an automated early signal on news in our specific supply chain (electronics). 
There are so many directions we could take this, but next on the roadmap is establishing the concept of &#8220;advisory councils&#8221; for the team, which could really widen our small team&#8217;s reach by letting us draw on simulated &#8220;expert&#8221; advice for our key issues.</p><p>Owl is purposefully limited in its access to internal systems. This is a trial, and Owl cannot access any PII or sensitive IP.</p><p>A few quick notes about the agent&#8217;s access:</p><ul><li><p>it has its own email address, plus Google Drive &amp; Calendar API integrations</p></li><li><p>it has read/write access to Confluence</p></li><li><p>interfacing with the agent happens in Slack</p><ul><li><p>only the operations team has access, in specific channels</p></li></ul></li><li><p>it can crawl the internet.</p></li></ul><p>Owl writes everything down in markdown and saves it, which makes querying it later easy. Hopefully it will function like a second brain.</p><p>For now, the institutional-memory use case is not delivering. Its knowledge of us is thin. I suspect that investing in a repeatable process for capturing context will be key to making it useful. We will look into creating an internal CRM (of our partners &amp; the actions we&#8217;ve taken), automating meeting-note intake, Slack summaries &amp; weekly status updates.</p><p>I&#8217;m excited to see where this heads. Stay tuned for more updates.</p>]]></content:encoded></item><item><title><![CDATA[The case for team-configured AI agents]]></title><description><![CDATA[Translating the agentic coding playbook for non-technical teams]]></description><link>https://headofops.com/p/the-case-for-team-configured-ai-agents</link><guid isPermaLink="false">https://headofops.com/p/the-case-for-team-configured-ai-agents</guid><dc:creator><![CDATA[Brian Head]]></dc:creator><pubDate>Tue, 24 Feb 2026 22:06:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5Lst!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F255f9377-608f-457e-befa-8247e5bd072c_982x982.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a pattern emerging in software engineering that most non-technical teams haven&#8217;t noticed yet.</p><p>Over the past year, developers have been working with a new kind of AI tool. Persistent agents that live in their codebase, remember context across sessions, run tasks on a schedule, and accumulate knowledge in plain text files. Tools like Claude Code, OpenAI Codex, and a growing ecosystem of open-source agent frameworks.</p><p>These tools work differently from ChatGPT. You configure an agent with a workspace: a folder of markdown files that describe who it is, what it knows, how it should behave, and what tools it has access to. The agent reads those files every time it wakes up. It writes new files as it learns things. 
Over time, the workspace becomes the agent&#8217;s institutional memory.</p><p>The engineering teams using these tools are seeing something interesting. The value is in the accumulation. An agent that&#8217;s been working with your codebase for three months knows where the edge cases are, remembers the decisions you made and why, and can draft changes that fit the patterns your team uses.</p><p>I think this playbook applies to operations, supply chain, marketing, CS, sales, basically any team that runs on institutional context and repeating workflows. And I think the teams that figure this out in 2026 will have a real advantage.</p><p>I&#8217;m not an engineer. I run operations and supply chain at a ~125-person company. I&#8217;ve spent the last few months learning this playbook and applying it to my own work, and I want to lay out why I think it matters beyond software.</p><div><hr></div><h2><strong>The playbook</strong></h2><p>Here&#8217;s what the agentic coding world figured out that translates to any team:</p><p><strong>Persistent context in plain text.</strong> The agent has a workspace, a folder of markdown files. A file that describes its role. A file that describes the team. Files for SOPs, templates, and accumulated knowledge. The agent reads these every session. This is what makes it useful on day 30 in a way it wasn&#8217;t on day 1.</p><p>Your team&#8217;s context lives in Slack threads, email chains, shared drives, and people&#8217;s heads. Most of it is inaccessible to any AI tool. A persistent agent with a markdown workspace gives that context a home. Anyone on the team can read the files, edit them, and see exactly what the agent knows.</p><p><strong>You control what it knows.</strong> The agent only has access to what you put in its workspace. You share context to your comfort level. Want it to know your vendor relationships and meeting history? Add those files. Want to keep compensation data or sensitive negotiations out? Don&#8217;t put them in. 
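</p><p>To make that concrete, the core loop can be sketched in a few lines of Python. This is an illustration of the pattern only, not any particular framework&#8217;s code, and the file names are hypothetical: the agent reads every markdown file in its workspace into context when it wakes up, and appends notes to those files as it learns.</p>

```python
from pathlib import Path

def load_workspace(root: str) -> str:
    """Concatenate every markdown file in the workspace into one context string.

    This string is what gets prepended to the model's prompt each session,
    which is all "persistent memory" really is at the file level.
    """
    parts = []
    for path in sorted(Path(root).rglob("*.md")):
        parts.append(f"## {path.name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

def remember(root: str, filename: str, note: str) -> None:
    """Append a new fact to a workspace file so the next session starts with it."""
    target = Path(root) / filename
    with target.open("a", encoding="utf-8") as f:
        f.write(note + "\n")
```

<p>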
The workspace is a folder you control, and the agent can&#8217;t reach beyond it. From a security standpoint, API calls to the model providers are typically covered under enterprise data policies that prohibit training on your inputs, and the workspace itself lives on hardware you own.</p><p><strong>Memory that compounds.</strong> The agent writes things down as it works. When it processes a meeting transcript, it stores the decisions and action items. When you tell it about a vendor relationship, it updates its knowledge base. When it handles a recurring task, it refines its approach based on what worked last time.</p><p>Most &#8220;use AI more&#8221; initiatives miss this entirely. They treat AI as stateless. A persistent agent accumulates. After a few months, it knows your team&#8217;s history, your recurring problems, your preferred formats.</p><p><strong>Scheduled work.</strong> The agent runs cron jobs. Morning briefings. Deadline reminders. Weekly status drafts. Data monitoring. This is table stakes in the coding agent world, and almost nobody is doing it for business workflows.</p><p>Picture an agent that posts a prioritized brief to your team&#8217;s Slack channel every morning at 7am, reviewing active projects, upcoming deadlines, and things that changed overnight.</p><p><strong>Readable configuration.</strong> The agent&#8217;s instructions live in a plain text file called something like AGENTS.md. Anyone can read it, edit it, and understand exactly what the agent is configured to do. The instructions are the configuration. When every behavioral rule is a readable text file, the trust barrier drops.</p><div><hr></div><h2><strong>Why this hasn&#8217;t crossed over yet</strong></h2><p>The tools are built for developers. Claude Code assumes you&#8217;re comfortable with a terminal. Codex assumes you have a GitHub repo. The documentation is written for people who already know what a cron job is.</p><p>The AI-for-business conversation is stuck on chatbots. 
The idea of an agent with a persistent workspace, scheduled tasks, and accumulated memory hasn&#8217;t entered the mainstream business conversation yet.</p><p>Nobody&#8217;s translating the playbook. The people building agentic tools are talking to engineers. The people writing about AI for business are writing surface-level content about prompting tips. The gap is the translation layer: taking the patterns that work and explaining them for an ops lead, a marketing director, or a CS manager.</p><p>That&#8217;s what I&#8217;m trying to do.</p><div><hr></div><h2><strong>What this looks like in practice</strong></h2><p>I&#8217;ll use my own domain. I run supply chain and operations. My team manages vendor relationships, runs procurement processes, tracks tariff exposure, and coordinates with engineering on product changes.</p><p>Here&#8217;s what an agent configured for this work does:</p><p>It knows our vendors. The relationship history, the open items, the last conversation, the pricing negotiations in progress. Before a vendor call, it surfaces all of that. After the call, it updates the record.</p><p>It processes meeting transcripts. We use transcription tools. The transcripts used to sit in inboxes unread. Now the agent extracts action items, updates the knowledge base, and generates prep briefs before the next meeting.</p><p>It monitors external signals. Tariff policy changes. Component market shifts. Vendor announcements. The agent checks on a schedule and surfaces what&#8217;s relevant.</p><p>It drafts recurring work. Weekly status updates. RFQ comparisons. Budget tracking summaries. The kind of work that takes 30-60 minutes every time and follows roughly the same pattern.</p><p>None of this requires engineering. The workspace is a folder of text files. The scheduled tasks are configured in plain language. 
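</p><p>As a sketch of how little machinery this needs (the job names, hours, and prompts below are hypothetical, and this is not any specific product&#8217;s syntax): a schedule is just a table of recurring times paired with plain-language prompts, plus a dispatcher that checks what is due.</p>

```python
import datetime

# Hypothetical schedule: each entry pairs a recurring time with a plain-language
# prompt the agent should run. Names and prompts are illustrative only.
SCHEDULE = [
    {"name": "morning-brief", "hour": 7, "weekdays": range(0, 5),  # Mon-Fri
     "prompt": "Review active projects and overnight changes; post a prioritized brief."},
    {"name": "weekly-status", "hour": 16, "weekdays": [4],  # Friday afternoon
     "prompt": "Draft the weekly status update for human review."},
]

def due_jobs(now: datetime.datetime) -> list[str]:
    """Return the names of jobs scheduled to fire at this hour on this weekday."""
    return [job["name"] for job in SCHEDULE
            if now.hour == job["hour"] and now.weekday() in job["weekdays"]]
```

<p>A real framework adds the delivery step (posting the result to Slack) and cron-style time syntax, but the shape is the same.</p><p>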
The integrations are Slack and calendar.</p><div><hr></div><h2><strong>The argument for learning this now</strong></h2><p>The specific platforms will change. What won&#8217;t change is the underlying pattern: persistent context, accumulated memory, scheduled work, readable configuration.</p><p>The teams that start learning this now will have months of accumulated context and operational intelligence by the time everyone else starts. That knowledge base transfers to whatever platform exists next year.</p><p>The people who learn to work this way will also understand something about AI that most of the business world is still missing: the context you give an agent matters more than which model you&#8217;re running. Three months of your team&#8217;s accumulated context on a mid-tier model will produce better results than a frontier model with a blank slate.</p><p>Operations teams are the natural first adopters. We already think in systems, processes, and repeating workflows. We already maintain SOPs and checklists. The persistent agent playbook maps directly onto how good ops teams already work.</p><div><hr></div><p><em>I&#8217;m Brian Head. I run operations and supply chain at VergeSense in the Bay Area. This is where I write about applying agentic AI patterns to operational work.</em></p>]]></content:encoded></item></channel></rss>