Chapter 27: The Platform Landscape
Understanding what is available at Layer 2 before you commit to anything.
In Chapter 4, we introduced the three-layer agentic AI stack and made a simple recommendation: rent the model, buy the platform, build the DNA. Most of this book has focused on the model and the DNA — how agents reason, how to design their instructions, and how to govern what they do.
This part goes deeper on the middle layer.
The platform landscape has changed fast. What began as a handful of open-source libraries used by researchers is now an ecosystem of over 170 frameworks, tools, and managed services. The risk is not too few options. It is choosing the wrong one, or building something that did not need to be built at all.
Chapter 27 maps the landscape. Chapter 28 gives you a way to evaluate your options. Chapter 29 addresses what happens when the platform decision goes wrong.
Three Categories, Three Audiences
The platform landscape in 2026 falls into three categories, each designed for a different audience with different needs and different trade-offs.
| Category | Designed For | Examples |
|---|---|---|
| Enterprise Managed Platforms | Enterprise IT teams, architects, developers in large organisations | Azure AI Foundry, AWS Bedrock Agents, Google Vertex AI, IBM Watsonx Orchestrate |
| Developer Frameworks | Software developers and AI engineers building custom systems | LangChain, LangGraph, CrewAI, Mastra, Microsoft Agent Framework |
| Accessible Agent Platforms | Non-technical business users, entrepreneurs, small business operators | Dify, n8n, OpenClaw |
These are not interchangeable. Choosing a developer framework when you need an enterprise managed platform — or building on one when an accessible platform would have done the job — creates problems that are expensive to undo.
Category 1: Enterprise Managed Platforms
Enterprise managed platforms are vendor-backed solutions from major cloud providers, built for professional developers and IT teams in large organisations. They cover the full agent lifecycle: model selection, agent deployment, memory management, tool integration, monitoring, access control, and audit logging. You do not operate the infrastructure. The platform does. Your job is to design agents and connect them to your systems.
For Australian organisations, data sovereignty is often the deciding factor in this category. Where data is processed matters as much as where it is stored. A platform can store your data onshore while processing your prompts on servers overseas. When evaluating enterprise platforms, always ask specifically about inference location, not just data residency.
Choosing the right platform is not enough. You must also choose the right deployment type within it. On Azure AI Foundry, selecting a Global deployment type means your prompts and completions may be processed in any Azure region worldwide, regardless of where your other data is stored. Only Standard Regional deployment guarantees inference stays within the region you select. The same distinction applies on AWS and Google. Always verify the deployment type, not just the region.
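As a concrete illustration, on Azure the deployment type is chosen per model deployment via its SKU. The sketch below uses the Azure CLI; resource group, account, deployment names, and the model version are placeholders, and available SKU names should be verified against current Azure documentation before relying on them.

```shell
# Regional deployment: inference stays in the resource's own region
# (here, a resource created in Australia East). Names are placeholders.
az cognitiveservices account deployment create \
  --resource-group my-rg \
  --name my-aoai-australiaeast \
  --deployment-name gpt4-regional \
  --model-name gpt-4 \
  --model-version "<version>" \
  --model-format OpenAI \
  --sku-name "Standard" \
  --sku-capacity 10

# By contrast, --sku-name "GlobalStandard" creates a Global deployment,
# where prompts and completions may be processed in any Azure region.
```

The point of the sketch is that the sovereignty guarantee lives in one flag, which is easy to miss in a review unless someone is checking deployment SKUs, not just resource regions.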
Microsoft Azure AI Foundry is Microsoft's unified enterprise AI platform, combining a model catalogue of over 11,000 foundation models, a managed agent runtime (Foundry Agent Service), enterprise knowledge integration (Foundry IQ), and a centralised governance and monitoring control plane. For organisations already in the Microsoft ecosystem, the depth of integration with Azure, Microsoft 365, and Teams is a real advantage.
Azure AI Foundry is available in Australia East (New South Wales) and Australia Southeast (Victoria), with Azure OpenAI models including GPT-4 accessible in the Australia East region. Data storage and model inference can both be kept onshore using Standard Regional deployment. Azure infrastructure is IRAP-assessed and aligned with Australia's Privacy Act and Australian Privacy Principles. Microsoft has committed to in-country processing of Microsoft 365 Copilot services for Australia by end of 2026.
AWS Bedrock Agents (AgentCore) takes a leaner approach. The open-source Strands Agents SDK is model-agnostic and keeps the developer experience thin. The managed AgentCore runtime handles isolation, scaling, session management, and identity without getting in the way of agent logic.
AWS has committed AU$20 billion to Australian infrastructure through 2029. Bedrock is available in both Sydney and Melbourne, with geo-fenced inference that keeps requests within Australian borders, including during cross-region failover. Many AWS services are IRAP-assessed for PROTECTED data. For organisations already on AWS, Bedrock Agents offers a clean fit with a credible sovereignty story.
Google Vertex AI Agent Builder offers the most streamlined path from development to production: an open-source Agent Development Kit (ADK) supporting Python, Java, Go, and TypeScript, paired with a managed Agent Engine runtime and a visual design studio. Google's Gemini models and deep integration with BigQuery and Google Workspace make it the natural choice for organisations already in the Google Cloud ecosystem.
Google's Sydney region (australia-southeast1) supports Vertex AI workloads, and Gemini models and Vertex agent services are accessible in Australia as of 2026. Google Cloud has held IRAP PROTECTED certification since 2021, and Assured Workloads can restrict data processing to Australian regions and vetted personnel. The caveat is model availability: not every Gemini version reaches the Australian region immediately. Verify that the models you need are accessible onshore before committing.
IBM Watsonx Orchestrate stands apart from the other three through its support for on-premises deployment. Data and agent operations can be kept entirely within your own data centres, never touching a public cloud. For Australian government agencies and heavily regulated industries, that level of control is sometimes the only acceptable option. IBM Cloud also maintains a multi-zone Sydney region adhering to IRAP and ISO 27001 standards for organisations that want IBM-managed infrastructure kept onshore.
| Platform | Australian Region | Inference Onshore | IRAP Assessed | On-Premises Option |
|---|---|---|---|---|
| Azure AI Foundry | Australia East, Australia Southeast | Yes (Standard Regional) | Yes | Partial (dev tools only) |
| AWS Bedrock (AgentCore) | Sydney, Melbourne | Yes (geo-fenced) | Yes (many services) | No |
| Google Vertex AI | Sydney | Yes (verify per model) | Yes (PROTECTED) | No |
| IBM Watsonx Orchestrate | Sydney | Yes (cloud or on-prem) | Yes | Yes |
Category 2: Developer Frameworks
Developer frameworks are open-source libraries for engineers who need to build custom agentic systems with precise control over agent behaviour. They do not provide managed infrastructure. You write the agent logic, deploy it, and operate it. What the framework provides is the plumbing: the agent loop, tool execution, memory management, multi-agent coordination, and integration patterns.
Over 170 frameworks are being tracked as of 2026. In practice, a small number account for most real-world adoption.
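The "plumbing" these frameworks provide is, at its core, a loop: call the model, execute any tool it requests, feed the result back, and stop when the model produces an answer. This minimal sketch in plain Python, with a stubbed model, shows the shape — the names (`run_agent`, `TOOLS`, `fake_model`) are illustrative, not any framework's actual API.

```python
# A minimal agent loop: the pattern most developer frameworks wrap.
# The model is stubbed; names are illustrative, not any framework's API.

def fake_model(messages):
    """Stand-in for an LLM call: first requests a tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": "agent frameworks"}}
    return {"answer": "Most adoption concentrates in a few frameworks."}

TOOLS = {
    "search": lambda query: f"results for {query!r}",
}

def run_agent(task, model=fake_model, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # bound the loop: agents must terminate
        decision = model(messages)
        if "answer" in decision:  # model has finished
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # tool execution
        messages.append({"role": "tool", "content": result})  # memory of the step
    raise RuntimeError("agent exceeded step budget")

print(run_agent("survey the framework landscape"))
```

Everything a framework adds — retries, streaming, tracing, multi-agent routing — is layered on this loop, which is why the frameworks feel interchangeable at small scale and diverge sharply in production.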
LangChain is the most widely used framework in the ecosystem, with over 95,000 GitHub stars and 34.5 million monthly downloads. It provides a broad, flexible toolkit for chaining LLM calls, integrating tools, connecting data sources, and building retrieval-augmented generation pipelines. Its strength is breadth — more than 100 integrations and an enormous community. Its weakness is complexity: for straightforward use cases, it can be verbose and hard to debug.
LangGraph is LangChain's companion for structured agent orchestration. It represents agent behaviour as a graph of actions and states rather than leaving all decisions to the LLM, which makes complex, long-running, or safety-critical workflows more manageable. It includes built-in support for human-in-the-loop interventions and memory across sessions. For production systems with complex multi-step workflows, LangGraph is the framework most organisations reach for.
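The graph-of-states idea can be illustrated without LangGraph itself. In this sketch — plain Python, deliberately not LangGraph's actual API — each node is a function that mutates shared state and names the next node, so the control flow is explicit rather than delegated to the LLM:

```python
# Illustrative graph-of-states orchestration in plain Python.
# This mimics the idea behind LangGraph; it is NOT LangGraph's API.

def draft(state):
    state["text"] = f"draft of {state['task']}"
    return "review"

def review(state):
    state["approved"] = "draft" in state["text"]
    return "publish" if state["approved"] else "draft"  # conditional edge

def publish(state):
    state["published"] = True
    return None  # terminal node

NODES = {"draft": draft, "review": review, "publish": publish}

def run_graph(state, start="draft", max_steps=10):
    node = start
    while node is not None and max_steps > 0:
        node = NODES[node](state)  # each node returns the next node's name
        max_steps -= 1
    return state

result = run_graph({"task": "release notes"})
```

Because the edges are explicit, a reviewer can audit every possible path through the workflow — which is exactly what makes the pattern attractive for safety-critical and long-running agents.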
CrewAI is built specifically for multi-agent systems. Developers define agents with distinct roles, goals, and tools, then combine them into a crew that works together on a task. The mental model maps naturally to how teams operate — a research agent, an analysis agent, a writing agent — and the framework handles coordination without a lot of boilerplate. CrewAI is built on LangChain, so it inherits its tool and model integrations.
Mastra fills a real gap: a TypeScript-first agent framework for web and full-stack developers who should not have to switch to Python to build agents. It ships agents, workflow steps, tool integration, vector database connectivity, and memory management as a single npm package, with a local development server and web UI for testing.
Microsoft Agent Framework — the 2026 merger of Semantic Kernel and AutoGen — is Microsoft's open-source developer SDK, supporting C# and Python. It is the code-first layer underpinning Azure AI Foundry for developers who want to build agents programmatically rather than through the managed service UI.
Developer frameworks are deployment-agnostic. An Australian organisation can run LangChain or LangGraph on servers in an Australian cloud region and point model calls at local endpoints — Azure OpenAI in Australia East, AWS Bedrock in Sydney — to keep all data and inference onshore. The framework does not enforce this; the architecture does. That flexibility is both the appeal and the risk. Sovereignty is achievable, but it requires deliberate design decisions that a managed platform would handle by default.
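"The architecture does" can be made concrete with a small guard. This sketch — hostnames and helper names are illustrative, and the Azure hostname is a hypothetical resource assumed to be deployed in Australia East — validates every model endpoint against an explicit onshore allowlist before any client is constructed:

```python
# Enforce onshore inference at the architecture level: reject any model
# endpoint not on an explicit allowlist. Hostnames are illustrative.
from urllib.parse import urlparse

ONSHORE_HOSTS = {
    "my-aoai.openai.azure.com",                      # hypothetical Azure OpenAI resource in Australia East
    "bedrock-runtime.ap-southeast-2.amazonaws.com",  # AWS Bedrock runtime, Sydney region
}

def require_onshore(endpoint_url):
    """Raise unless the endpoint resolves to an approved onshore host."""
    host = urlparse(endpoint_url).hostname
    if host not in ONSHORE_HOSTS:
        raise ValueError(f"model endpoint {host!r} is not an approved onshore host")
    return endpoint_url

# Frameworks like LangChain accept a base URL; hand them only validated ones.
url = require_onshore("https://bedrock-runtime.ap-southeast-2.amazonaws.com")
```

A managed platform enforces this kind of rule for you through deployment types and geo-fencing; with a developer framework, nothing stops a developer from pasting in an offshore endpoint unless a check like this exists somewhere in the stack.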
Developer frameworks give you maximum control and no managed infrastructure. That trade-off makes sense when you have the engineering capacity to build and operate what you need. It does not make sense when what you are building is commodity infrastructure that a managed platform already provides. Before choosing a developer framework, be clear about whether you are building genuine IP or rebuilding what Azure AI Foundry, Bedrock, or Vertex already does.
Category 3: Accessible Agent Platforms
The third category is the most recent to emerge and the least mature. Accessible agent platforms put agentic AI within reach of non-technical business users, entrepreneurs, and small business operators — people who know what they want agents to do but should not need to write code to make it happen.
The demand is clear. The supply is still catching up.
Dify is the most developed option here. It is an open-source platform with a visual workflow builder for assembling AI-powered applications without code. Users connect LLM providers, configure retrieval from their own data, and wire together multi-step agent flows through a drag-and-drop interface. Dify can be self-hosted via Docker or used as a managed cloud service, and has over 55,000 GitHub stars.
For Australian organisations, self-hosting Dify on an Australian cloud instance — AWS Sydney or Azure Australia East — keeps data local and allows model calls to be directed to onshore endpoints. Dify's managed cloud service is not hosted in Australia and should not be used for sensitive data. Self-hosted Dify on Australian infrastructure is a workable option for organisations with modest IT capability and real data residency requirements.
n8n approaches the problem from the other direction. It is primarily a workflow automation platform, comparable to Zapier, that has added AI capabilities through LLM-powered nodes. For users already comfortable with automation tools, n8n lets them embed AI decision-making into existing workflows without building a separate AI system. Its AI capabilities are constrained by the workflow model — it is not well-suited to open-ended, goal-directed agent behaviour — but for defined, repeatable business processes it is practical and reliable. Like Dify, n8n can be self-hosted in Australia; its managed cloud service is hosted in Europe.
OpenClaw is a different kind of product. It is a self-hosted autonomous agent — persistent, always-on, accessible through messaging apps including WhatsApp, Telegram, Slack, and Discord. Users configure it through plain-text files, define multiple agents with distinct personas and capabilities, and interact with them through chat interfaces they already use. OpenClaw passed 240,000 GitHub stars within months of its launch.
In its raw form, OpenClaw requires more technical confidence than Dify or n8n to deploy safely — you need to stand up a server, manage API keys, and understand the security implications of giving an agent broad access to your accounts and systems. However, its WebSocket API is designed to be built on top of. A custom UI layer can transform it from a technically demanding self-hosted tool into something genuinely accessible, which opens real possibilities for domain-specific packaging and delivery.
On data sovereignty: OpenClaw is self-hosted by design, so no data leaves your infrastructure at the platform level. The caveat is the model. If OpenClaw calls the Anthropic or OpenAI API, prompt content is processed on their servers overseas. Pointing it at an Australian-hosted model endpoint, or running a local open-source model, closes that gap.
Accessible agent platforms represent where the market is heading, not where it has arrived. In 2026, genuinely no-code agentic AI for non-technical users is still nascent. The tools exist, but most require some technical competence to deploy and configure safely. Be realistic about that gap when evaluating options in this category.
The Layer Nobody Provides
All three categories provide Layer 2 infrastructure — the platform and framework layer that makes agents run. None of them provide the DNA — the domain knowledge, workflow logic, and instructions that make an agent genuinely useful in a specific business context.
A legal practice does not benefit from LangGraph. It benefits from an agent that understands legal document review, knows what to look for, and knows when to escalate. LangGraph is the engine. The domain knowledge is the product.
A healthcare organisation does not benefit from Azure AI Foundry. It benefits from agents that understand clinical workflows, funding rules, and escalation requirements, configured and governed for that specific context. Foundry is the infrastructure. The domain knowledge is the product.
Worse, some organisations are actively building the commodity. Custom agent runtimes, bespoke orchestration engines, hand-rolled memory management — infrastructure that managed platforms provide out of the box. Months of engineering effort producing the foundation, with nothing left for the DNA that would actually make it useful.
The platform is a commodity. Your differentiation is in the DNA.
The choice of platform matters — but it is a foundation decision, not a differentiation decision.
