Chapter 3: The Building Blocks of Agentic Systems
Every agentic AI system is built from a common set of components. Understanding these building blocks is essential for designing effective solutions.
1. The Model (The Brain)
The large language model is the reasoning engine at the core of any agent. It interprets instructions, makes decisions, generates outputs, and determines what actions to take.
Key considerations:
- Capability vs cost: More powerful models reason better but cost more per interaction
- Latency: Some use cases require fast responses
- Context window: How much information the model can consider at once
Start with the most capable model you can afford during design and prototyping. Optimise for cost once you've proven the solution works.
2. Instructions + Knowledge (The DNA)
Every agent has a "DNA" — the combination of instructions and knowledge that defines who it is and what it knows. This is configured by the designer when the agent is built, not by end users at runtime.
Instructions define behaviour:
- Identity and role
- Behavioural guidelines
- Constraints and guardrails
- Tool guidance
Knowledge provides context:
- Domain expertise
- Organisational context
- Reference material
Together, instructions and knowledge form the agent's DNA — the unique configuration that makes it a specialist rather than a generalist.
In the Pragmatix Advisory Portal, each advisor shares the same underlying model (e.g. GPT-4o), but has different DNA. The Digital Governance Advisor embeds the pX-DTF methodology and Pragmatix's approach to IT governance. The Solution Architecture Advisor specialises in designing technology solutions. The Cyber Security Advisor focuses on risk assessment and security controls. Same brain, different expertise.
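The "same brain, different DNA" pattern can be sketched in code. This is a minimal illustration, not the Pragmatix implementation: the `AgentDNA` structure and its fields are assumptions introduced here to show how instructions and knowledge are fixed at design time while the model stays constant.

```python
from dataclasses import dataclass, field

@dataclass
class AgentDNA:
    """Design-time configuration: instructions plus knowledge."""
    identity: str                                   # who the agent is
    guidelines: list                                # behavioural rules and guardrails
    knowledge: list = field(default_factory=list)   # reference material, domain context

    def system_prompt(self) -> str:
        # Compose the DNA into a single system prompt for the model.
        parts = [self.identity, *self.guidelines]
        if self.knowledge:
            parts.append("Reference material:\n" + "\n".join(self.knowledge))
        return "\n\n".join(parts)

MODEL = "gpt-4o"  # every advisor shares the same underlying model

governance_advisor = AgentDNA(
    identity="You are the Digital Governance Advisor.",
    guidelines=["Apply the pX-DTF methodology to every assessment."],
    knowledge=["Summary of Pragmatix's approach to IT governance."],
)
security_advisor = AgentDNA(
    identity="You are the Cyber Security Advisor.",
    guidelines=["Focus on risk assessment and security controls."],
)
```

Each advisor is just a different `AgentDNA` instance handed to the same model, which is what makes it a specialist rather than a generalist.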
3. Prompts (The Input)
Prompts are the runtime inputs that trigger an agent to act. They come from users, other systems, or other agents.
- User prompts: Questions, requests, or commands from humans
- Agent prompts: Requests from other agents in a multi-agent system
- Event triggers: System events that initiate agent action
The quality of outputs depends on both instructions and prompts: good instructions create a capable agent; good prompts help it understand what's needed right now.
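The division of labour between design-time instructions and runtime prompts shows up directly in how a model request is assembled. A minimal sketch, following the common chat-completions message convention (the role names are that convention's, not anything specific to this system):

```python
# Design-time: fixed when the agent is built.
instructions = "You are the Digital Governance Advisor. Be concise."

# Runtime: arrives from a user, another agent, or an event trigger.
user_prompt = "Assess our current IT governance maturity."

messages = [
    {"role": "system", "content": instructions},  # the agent's DNA
    {"role": "user", "content": user_prompt},     # the runtime trigger
]
```

Swapping the `user` message changes what the agent does this turn; swapping the `system` message changes who the agent is.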
4. Tools (The Hands)
Tools extend what an agent can do beyond just generating text. A tool is any external capability the agent can invoke:
- Information retrieval: Search the web, query a database, look up a document
- Actions: Send an email, create a calendar event, update a CRM record
- Computation: Run calculations, execute code, process data
- Communication: Call another agent, notify a human, request approval
The art of agent design is largely about defining the right set of tools — specific enough to be useful, general enough to handle variation.
The Pragmatix Solution Architecture Advisor could create a solution design document using a standard template and publish it directly to Confluence — combining information retrieval (accessing the template), generation (drafting the design), and action (creating the Confluence page).
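One common way to define a tool set is to register plain functions alongside descriptions the model can read when deciding what to invoke. The sketch below is a hypothetical registry, not any particular framework's API; the tool names and behaviours are illustrative stand-ins.

```python
# Hypothetical tool registry: each entry pairs a callable with a
# description the model uses for tool selection.
TOOLS = {}

def tool(description):
    def register(fn):
        TOOLS[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return register

@tool("Look up a document template by name (information retrieval).")
def fetch_template(name):
    # Stand-in for a real document store lookup.
    return f"<template:{name}>"

@tool("Publish a page to a wiki space (action).")
def publish_page(space, title, body):
    # Stand-in for a real publishing call; returns a confirmation record.
    return {"space": space, "title": title, "body": body}
```

The descriptions do double duty: specific enough that the model picks the right tool, general enough that one tool covers many variations of the same task.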
5. Memory (The Context)
Agents need to remember things — both within a conversation and across interactions over time.
Short-term memory: The current conversation context. What's been said, what's been tried, what the current goal is.
Long-term memory: Persistent knowledge that carries across sessions. User preferences, past decisions, learned information.
Without memory, every interaction starts from zero. With well-designed memory, agents become more useful over time as they learn about your business and users.
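The two tiers can be sketched as a simple class. The in-process storage here is an assumption for illustration; a real system would back long-term memory with a database or vector store.

```python
class Memory:
    """Sketch of two-tier agent memory."""

    def __init__(self):
        self.short_term = []   # current conversation: turns, attempts, goal
        self.long_term = {}    # persists across sessions: preferences, decisions

    def remember_turn(self, role, text):
        self.short_term.append((role, text))

    def remember_fact(self, key, value):
        self.long_term[key] = value

    def new_session(self):
        # Conversation context resets; long-term knowledge survives.
        self.short_term.clear()
```

The design point is the reset boundary: `new_session` wipes the conversation buffer but leaves the persistent store intact, which is what lets the agent improve across interactions instead of starting from zero.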
6. Orchestration (The Conductor)
Orchestration is how you coordinate everything — deciding what happens when, managing the flow between steps, handling errors, and determining when to involve humans.
Simple agents might just be a loop: think → act → observe → repeat.
Complex agents might involve:
- Workflows with conditional branching
- Multiple specialised agents collaborating
- Human approval gates at critical decision points
- Fallback handling when things go wrong
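The simple loop above can be sketched in a few lines. This is a minimal illustration with assumed function shapes (`think` returns either a final answer or a next action; `act` executes an action and returns an observation), plus a bounded step count as a crude form of fallback handling.

```python
def run_agent(goal, think, act, max_steps=5):
    """Minimal think -> act -> observe -> repeat loop."""
    observations = []
    for _ in range(max_steps):
        decision = think(goal, observations)
        if decision["done"]:
            return decision["answer"]
        # Act on the chosen action and feed the observation back in.
        observations.append(act(decision["action"]))
    return None  # step budget exhausted: a cue to escalate to a human

# Toy think/act pair to exercise the loop.
def toy_think(goal, observations):
    if observations:
        return {"done": True, "answer": observations[-1]}
    return {"done": False, "action": "lookup"}

def toy_act(action):
    return f"observed:{action}"
```

Conditional branching, multi-agent hand-offs, and approval gates are all elaborations of this same loop: they live in the `think` step (choosing what happens next) and the `act` step (who or what carries it out).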
Putting It Together
The diagram below shows how these six components work together at runtime. The orchestration layer wraps everything, managing the flow as prompts arrive, the model reasons using its DNA, and actions are taken through tools while memory maintains context.
