How To Talk To AI

In the late 18th century, the steam engine promised to power a new age. Yet the early machines were lumbering, wasteful beasts, consuming vast amounts of coal for little output. It was James Watt’s invention of the separate condenser that transformed steam power. Watt’s design cut coal use by two-thirds, making engines practical not just at mines but in factories, mills, workshops, and anywhere else that needed power.

But Watt’s condenser alone wasn’t enough. Matthew Boulton, the manufacturer who financed and commercialised the engine, understood that accessing this value required skilled talent who could install and optimize these contraptions. Such skills were vanishingly rare, requiring a peculiar blend of practical craft and scholarly insight that merged experimental chemistry, physics, and engineering with the rhythms of manufacturing and commerce. Beyond patenting the technology, Boulton carefully controlled access to this expertise. Having skilled operators was what separated successful factories from failed experiments.

The world of artificial intelligence mirrors this. The raw power of modern AI models is undeniable; what remains scarce is the skill to deploy them effectively. Leaders and product managers who understand not only what AI can accomplish but how to turn that capability into profit are in desperately short supply. As in Watt’s era, the returns for those who master this art will be immense.

But there’s one crucial difference: learning to make AI useful requires no formal training in advanced mathematics or computer science. English is the modern coding language. This levels the playing field for non-technical executives: with a solid grasp of how to communicate with AI, they can compete on equal terms with their more technical peers. What matters more than engineering expertise is subject knowledge and the ability to articulate requirements with clarity, precision, and context.

This article explores how to “talk to AI.” Prompting, the most familiar technique, is merely a starting point: powerful as an individual skill, it remains limited as an organizational one. While I’ll touch briefly on prompting basics, my focus centers on helping business leaders understand the elements of AI-orchestrated workflows and the critical importance of the well-articulated business knowledge that these workflows depend upon.

From Individual Prompting to Organisational Intelligence

At the simplest level, we talk to AI through prompts: natural-language instructions given to a model. You can ask it directly (zero-shot prompting), provide examples for it to copy (few-shot prompting), request step-by-step reasoning (chain-of-thought prompting), or ask it to assume a persona that shapes tone and perspective, such as “act as a lawyer” or “speak as a historian” (role prompting).
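To make these styles concrete, here is a minimal sketch in Python. The call_model helper is a hypothetical placeholder for whichever model API you use; the wording of the prompts is the point.

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for whichever model API you use.
    return "model response"

# Zero-shot: ask directly, with no examples.
zero_shot = "Classify this support ticket as billing, technical, or other: 'I was charged twice.'"

# Few-shot: show the pattern you want copied.
few_shot = (
    "Ticket: 'I was charged twice' -> billing\n"
    "Ticket: 'The app crashes on login' -> technical\n"
    "Ticket: 'Where can I download my invoice?' -> "
)

# Chain-of-thought: ask for step-by-step reasoning before the answer.
chain_of_thought = (
    "A contract signed on 15 March renews every 12 months. "
    "Work through the renewal schedule step by step, then state the next renewal date."
)

# Role prompting: assume a persona to shape tone and perspective.
role = "Act as a commercial lawyer. Review this clause and flag any unusual liability terms: ..."

for prompt in (zero_shot, few_shot, chain_of_thought, role):
    print(call_model(prompt))
```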

Prompting excels as a personal tool. Yet for businesses seeking systematic AI integration, prompting alone addresses perhaps only 10-15% of the challenge. The real organizational value lies in architecting AI-automated workflows. Business leaders must therefore understand AI beyond the chat interface, grasping how to structure the broader systems that transform individual prompting into organisational intelligence.

The path from individual prompting to organizational AI deployment requires understanding three progressive layers: enhanced prompting through context injection, agentic orchestration of multi-step workflows, and the strategic balance of autonomy and structure that makes these systems reliable at scale.


Layer One: Context as Organizational Memory

The first step toward organizational AI deployment involves improving prompting through context injection – feeding relevant material into the model alongside the instructions. That material might range from precise contractual clauses to policy documents and FAQs. Done well, context engineering transforms a generic system into something that appears to understand your business intimately.
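As an illustrative sketch (the company, the policy text, and the call_model helper are all invented for the example), context injection simply means assembling the relevant material into the prompt before the question is asked:

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for whichever model API you use.
    return "model response"

# Internal material the model would not otherwise know (invented for illustration).
refund_policy = (
    "Refunds are issued within 14 days of purchase. "
    "Enterprise contracts follow the terms in clause 7.2."
)
faq_excerpt = "Q: Can I pause my subscription? A: Yes, for up to three months per year."

def answer_with_context(question: str) -> str:
    # Inject the relevant documents alongside the instruction.
    prompt = (
        "You are a support assistant for Acme Ltd. "
        "Answer using only the material below; if it is not covered, say so.\n\n"
        f"Refund policy:\n{refund_policy}\n\n"
        f"FAQ:\n{faq_excerpt}\n\n"
        f"Customer question: {question}"
    )
    return call_model(prompt)

print(answer_with_context("How long do I have to request a refund?"))
```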


Layer Two: Agentic Orchestration as Team Management

Beyond prompting and context lies agentic orchestration. Here, AI doesn’t simply answer but acts – running through sequences of interconnected tasks. Instead of single exchanges, the system retrieves information, calls external tools, and passes results from one step to the next. This is the architectural shift from AI as a clever intern to AI as an operations team: orchestration links individual AI agents into automated workflows.
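A deliberately simplified sketch of that flow is below. The CRM lookup, pricing feed, and email sender are hypothetical stand-ins for whatever systems a real workflow would connect to, and call_model stands in for the model API:

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for whichever model API you use.
    return "model response"

# Hypothetical stand-ins for real business systems.
def search_crm(customer_id: str) -> str:
    return "Customer history: two prior renewals, one open complaint."

def get_pricing(plan: str) -> str:
    return "Enterprise plan: 12,000/year with a 10% multi-year discount."

def send_email(to: str, body: str) -> None:
    print(f"Sending to {to}:\n{body}")

def renewal_workflow(customer_id: str) -> None:
    history = search_crm(customer_id)        # retrieve information
    pricing = get_pricing("enterprise")      # call an external tool
    draft = call_model(                      # pass the results to the next step
        f"Draft a renewal email.\n{history}\n{pricing}\n"
        "Keep it under 150 words and acknowledge the open complaint."
    )
    send_email("customer@example.com", draft)

renewal_workflow("CUST-001")
```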

Crucially, successful agentic orchestration is less an engineering task than an executive management challenge, with clear parallels to people management. Understanding how to talk to AI at the organisational level means mastering three fundamental pillars that every AI agent depends on – tools, memory, and guardrails – the same mechanisms managers use to extract reliable work from human teams.

Tools represent extended capability. They are the resources you provide employees to accomplish their jobs. A salesperson proves useless without CRM access; an analyst flounders without spreadsheets or data feeds. Similarly, AI agents require tools like internal calculators, databases, or APIs to function effectively. Equipping models with tools differs little from providing staff with appropriate software licenses or filing cabinet access.
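In practice, a tool is often little more than a capability paired with a description the agent can read to decide when to use it. The sketch below is schematic, not any particular framework’s API, and the tools themselves are invented:

```python
from dataclasses import dataclass
from typing import Callable

# A tool pairs a capability with a description the agent can reason about,
# much as a job description tells an employee when to use a system.
@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

tools = [
    Tool(
        name="crm_lookup",
        description="Fetch a customer's account history by customer ID.",
        run=lambda customer_id: f"History for {customer_id}: two renewals, one open complaint.",
    ),
    Tool(
        name="vat_calculator",
        description="Calculate 20% VAT due on a net amount in GBP.",
        run=lambda amount: f"VAT on {amount}: {float(amount) * 0.20:.2f}",
    ),
]

# The agent chooses a tool by reading its description, then invokes it.
print(tools[0].run("CUST-001"))
print(tools[1].run("150"))
```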

Memory ensures context continuity. It represents what you expect employees to retain while executing tasks. You don’t expect trainees to remember company history, but you might ask them to recall recent client interactions or key process rules. AI systems need scope-limited memory—sometimes short-term for conversational coherence, sometimes long-term for accumulating client or project knowledge. The art lies in determining what they should remember and what they should forget, just as in designing human workflows.
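Expressed as a sketch, scope-limited memory might look like the following; the retention rules are illustrative, and deciding them is exactly the kind of judgment the business, not the engineer, should supply:

```python
# Short-term memory: the current conversation only, kept for coherence.
conversation: list[str] = []

# Long-term memory: facts worth keeping across sessions, organised by scope.
long_term: dict[str, list[str]] = {"client_preferences": [], "process_rules": []}

def remember(message: str) -> None:
    conversation.append(message)
    # Illustrative retention rule: only client preferences are kept long term.
    if "prefers" in message.lower():
        long_term["client_preferences"].append(message)

def end_conversation() -> None:
    # Everything else is deliberately forgotten when the conversation ends.
    conversation.clear()

remember("Client prefers quarterly invoicing.")
remember("Also, lovely weather today.")
end_conversation()
print(long_term)  # only the preference survives
```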

Guardrails establish operational boundaries. Junior lawyers cannot approve contracts above certain values. Factory workers must halt processes when safety alarms activate. In AI, guardrails perform identical functions, preventing systems from straying into prohibited territory—whether inventing facts, mishandling sensitive data, or producing non-compliant results.
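As a sketch, a guardrail is simply a check that runs before an agent’s action takes effect; the threshold and rules here are invented for illustration:

```python
APPROVAL_LIMIT = 10_000  # illustrative: contracts above this need a human

def guardrail_check(action: str, amount: float, contains_pii: bool) -> str:
    # Block anything that mishandles sensitive data.
    if contains_pii:
        return "blocked: output contains sensitive personal data"
    # Escalate anything beyond the agent's delegated authority.
    if action == "approve_contract" and amount > APPROVAL_LIMIT:
        return "escalate: above the agent's approval authority"
    return "allowed"

print(guardrail_check("approve_contract", 25_000, contains_pii=False))  # escalate
print(guardrail_check("send_summary", 0, contains_pii=True))            # blocked
print(guardrail_check("approve_contract", 4_000, contains_pii=False))   # allowed
```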

Underlying everything is reasoning—the cognitive ability you expect from your team. It shapes how individual AI agents operate and how entire orchestrations function as cohesive units, determining what actions each agent should take, when, and how.

Determining all of this is an executive-level decision, not an engineering one. Only someone with intimate knowledge of the business can define what those capabilities and boundaries should be.


Layer Three: Strategic Communication Architecture

We’re witnessing a fundamental paradigm shift in which developers move from writing coded instructions to describing desired outcomes. The most powerful development tools today aren’t IDEs – they’re well-articulated specifications.

Like effective people management, AI workflow efficiency depends on balancing autonomous behavior and deterministic scaffolding. Autonomous behavior (reasoning and planning) provides the adaptability that enables agents to handle novel queries, edge cases, and multi-step tasks. Deterministic scaffolding (context, tools, guardrails) supplies the structure, keeping outputs accurate, compliant, and aligned with business objectives.

Too much autonomy without scaffolding produces confident mistakes. Too much scaffolding without autonomy delivers traditional, rule-bound technology flows. The managerial craft lies in articulating scaffolding that allows autonomy to add value without introducing risk. The highest-performing AI workflows aren’t those with heavy AI automation but rather those built on carefully designed deterministic scaffolding, with AI applied strategically at specific points.
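One way to picture that balance, as a deliberately simplified sketch: the workflow below is deterministic end to end, with the model invoked only at the two points where judgment genuinely adds value. The claim-handling scenario, rules, and call_model helper are all invented for illustration:

```python
def call_model(prompt: str) -> str:
    # Hypothetical placeholder for whichever model API you use.
    return "model response"

def handle_claim(claim: dict) -> str:
    # Deterministic scaffolding: validation and routing are plain rules.
    if claim["amount"] <= 0:
        return "rejected: invalid amount"
    if claim["amount"] < 100:
        return "auto-approved under the small-claims rule"

    # Autonomy, point one: summarise unstructured evidence.
    summary = call_model(f"Summarise this claim evidence in 3 bullet points: {claim['notes']}")

    # Autonomy, point two: recommend a next step, constrained to fixed options.
    recommendation = call_model(
        f"Given this summary:\n{summary}\n"
        "Recommend exactly one of: approve, request_more_info, refer_to_adjuster."
    )

    # Back to deterministic scaffolding: a human signs off on anything non-trivial.
    return f"routed for human review with recommendation: {recommendation}"

print(handle_claim({"amount": 2_500, "notes": "Water damage, two invoices attached."}))
```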

Approximately 85% of what humans convey to AI for successful orchestration consists of specifications in plain business language: not code, not prompts, but clear communication of requirements and constraints.

This is where subject expertise becomes indispensable. Building effective scaffolding requires intimate knowledge of business processes, customer needs, regulatory requirements, and competitive dynamics. Only those with deep domain expertise can design workflows that capture genuine business value while avoiding costly mistakes.

The Strategic Advantage

The window between AI as a competitive advantage and AI as a competitive necessity will be remarkably brief. Every company will eventually have access to powerful AI models – the steam power of our era. What will separate winners from losers is how effectively they deploy this power and how quickly they align it with business objectives.

Building defensible moats will depend on each company’s ability to construct custom AI workflows that encode its unique processes, knowledge, and strategic choices. These specifications represent organizational IP: the documented intelligence about how your business operates, what your customers need, and where human judgment adds irreplaceable value. Companies that figure this out first won’t just move faster; they’ll likely capture majority market share before competitors realize what hit them.

The winners won’t necessarily be the most technically sophisticated; they will be those who best articulate what they want, why they want it, and how it fits within their operational context, drawing on deep subject expertise to make thousands of micro-decisions about workflow design, tool selection, memory scope, and guardrail placement. The ability to create sophisticated automated workflows depends more on communication skills and business acumen than on programming expertise. The steam engine required specialized mechanics; AI requires leaders who can think clearly and communicate precisely.