01 · The Shift

The Shift in One Sentence

Earlier this week, the Facultad de Ingeniería of the Universidad de Buenos Aires (FIUBA) invited Sergio Mastrogiovanni, Chief Data Officer of Socradata, to deliver a master class on AI agents to its engineering students. The title was deliberate: From Engineers to AI Orchestrators: Building Real Agents with Gemini and Make.com. The framing is not rhetorical. It captures a structural shift now underway in enterprise software, and it is the same shift that is quietly rewriting the economics and ergonomics of the operating model in companies across every sector.

Traditional software, the dominant pattern of the last fifteen years, is built on APIs. Systems exchange structured data through well-defined endpoints, and humans orchestrate the decisions between those exchanges. Agent-based systems invert the premise. An agent interprets context, decides, and executes across multiple platforms. The human moves from operating the systems to supervising the orchestration. That movement is not an incremental capability improvement. It is a paradigm change, and it reallocates competitive advantage. The frontier is no longer about connecting software. It is about building systems that decide and execute.

So what: For fifteen years, enterprise advantage was built on integration. For the next fifteen, it will be built on execution.

02 · Context

Why Traditional Models Are Starting to Fall Short

Three forces are converging to erode the API-centric model as the dominant architecture. The first is data heterogeneity at scale: enterprises now operate across dozens of SaaS platforms, legacy systems, and operational data stores, and maintaining point-to-point integrations has become a permanent maintenance tax rather than a source of advantage. The second is decision density: operational work has shifted from moving data to making judgments about it, and humans remain the bottleneck wherever judgment must happen in real time. The third is the maturation of large language models and tool-use frameworks to the point where an agent can now be trusted to interpret an ambiguous instruction, plan a sequence of actions across several systems, and execute those actions with bounded autonomy. What was unreliable in 2023 is production-grade in 2026 for a widening set of workflows.

The consequence is that the traditional integration mindset, where engineers wire endpoints and humans operate dashboards on top of the flow, is no longer the shortest path to operational advantage. The shortest path now runs through agents that collapse integration and decision into a single executable layer.

03 · Framework

A Framework: The Three Layers of the New Stack

The master class framed the new architecture as three co-equal layers.

Integration Layer — APIs, Webhooks, Connectors

APIs, webhooks, connectors, and the canonical data products underneath them. APIs do not disappear. They become the substrate — the stable, governed foundation on which everything else is built. The difference is that integration is no longer the competitive layer; it is the infrastructure layer.

Decision Layer — Agents Powered by LLMs

The agents themselves, powered by LLMs such as Gemini, Claude, or GPT, equipped with tools, memory, and the ability to reason about context. This is where the interpretive work happens. This layer converts ambiguous instructions into structured plans and executes them across integrated systems.
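
The plan-then-act loop at the heart of this layer can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's SDK: the tool names, the keyword-matching `plan` method (standing in for an LLM planning call), and the list-based memory are all assumptions made for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal decision-layer agent: tools + memory + a plan-then-act loop."""
    tools: dict = field(default_factory=dict)   # tool name -> callable
    memory: list = field(default_factory=list)  # short-term conversational state

    def plan(self, instruction: str) -> list:
        # Placeholder for the LLM call that turns an ambiguous instruction
        # into an ordered list of (tool_name, args) steps.
        if "lead" in instruction:
            return [("fetch_crm_history", {"lead": "ACME"}),
                    ("draft_outreach", {"tone": "formal"})]
        return []

    def run(self, instruction: str) -> list:
        results = []
        for tool_name, args in self.plan(instruction):
            result = self.tools[tool_name](**args)   # execute across systems
            self.memory.append((tool_name, result))  # persist context
            results.append(result)
        return results

agent = Agent(tools={
    "fetch_crm_history": lambda lead: f"history:{lead}",
    "draft_outreach": lambda tone: f"draft({tone})",
})
print(agent.run("qualify this new lead"))  # → ['history:ACME', 'draft(formal)']
```

The structure is what matters: interpretation produces a plan, the plan is executed against tools, and every step is written to memory so later calls can reason over it.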

Execution Layer — Orchestration Platforms

Orchestration platforms such as Make.com, n8n, or custom frameworks that sequence agent actions, manage handoffs between agents, persist state, and enforce human-in-the-loop gates where policy requires them. The three layers together replace what used to be a flat integration diagram with a purposeful, decidable, and auditable execution graph.
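
The sequencing, state persistence, and resumability this layer provides can be illustrated with a small Python sketch. The step names and the shared state dictionary are invented for illustration; platforms like Make.com and n8n express the same ideas as visual scenarios rather than code.

```python
def run_pipeline(steps, state):
    """Sequence agent actions, persisting state after each step so a
    failed or paused run can resume from the last completed step."""
    for name, step in steps:
        if name in state["completed"]:
            continue                      # resume: skip already-finished steps
        state[name] = step(state)         # each step reads/writes shared state
        state["completed"].append(name)
    return state

steps = [
    ("extract", lambda s: "invoice-42"),
    ("match",   lambda s: f"{s['extract']}->PO-7"),
    ("post",    lambda s: f"posted {s['match']}"),
]
state = {"completed": []}
run_pipeline(steps, state)
print(state["post"])  # → posted invoice-42->PO-7
```

A real execution layer adds a human-in-the-loop gate before steps that policy marks as sensitive; the point of the sketch is that the execution graph, not any single agent, owns the state.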

04 · Use Cases

What Agents Actually Do in Production

The concrete use cases already running in production share a common shape. A sales agent ingests a new lead, analyzes prior CRM interactions, evaluates fit against business objectives, drafts a personalized outreach, updates CRM fields, and schedules a follow-up — across Gmail, HubSpot or Salesforce, Google Sheets, and a calendar. A supply chain agent reconciles discrepancies between the WMS and the ERP, flags anomalies, and posts corrective entries, with human approval above materiality thresholds. A back-office agent classifies incoming invoices, matches them to purchase orders, proposes journal entries, and routes exceptions to a reviewer. The pattern in each case is the same: the agent collapses what used to require five applications, three handoffs, and a human operator into a single supervised execution flow. The gain is not speed alone. The gain is the elimination of coordination overhead that was invisible on the balance sheet but dominant in the calendar.
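
The back-office case makes the shape concrete. A hedged sketch, with toy in-memory data standing in for real ERP lookups and a two-rule matcher standing in for the agent's classification step:

```python
def process_invoice(invoice, purchase_orders):
    """Match an invoice to its PO; clean matches post automatically,
    everything else is routed to a human reviewer as an exception."""
    po = purchase_orders.get(invoice["po_number"])
    if po is None:
        return {"action": "route_to_reviewer", "reason": "unknown PO"}
    if abs(po["amount"] - invoice["amount"]) > 0.01:
        return {"action": "route_to_reviewer", "reason": "amount mismatch"}
    return {"action": "post_journal_entry", "po": invoice["po_number"]}

pos = {"PO-7": {"amount": 1200.00}}
print(process_invoice({"po_number": "PO-7", "amount": 1200.00}, pos))
# → {'action': 'post_journal_entry', 'po': 'PO-7'}
```

The happy path runs unattended; only the exceptions consume human attention, which is exactly the coordination overhead the paragraph above describes being eliminated.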

Master class in session — FIUBA engineering students, Buenos Aires.

05 · Implementation

Implementation: What Actually Works

Four implementation choices separate production-grade agent systems from demoware. First, model selection as a portfolio, not a commitment: frontier LLMs for open-ended reasoning, smaller fine-tuned models for high-volume classification and extraction, with explicit routing logic between them. Second, orchestration platforms chosen to match the team's engineering depth: Make.com, n8n, or Zapier for low-code speed and rapid iteration; LangGraph, the Anthropic Agent SDK, or custom frameworks for teams that need deeper control. Third, memory architecture that distinguishes short-term conversational state from long-term organizational knowledge, typically backed by a vector store plus structured records. Fourth, explicit protocols for inter-agent and tool communication — the emergence of MCP and agent-to-agent standards is giving these systems the interoperability they need to scale beyond a single vendor.
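
The first choice, treating models as a portfolio, reduces to an explicit dispatch rule. A sketch, where the task types, model names, and confidence threshold are placeholders rather than real endpoints:

```python
# Illustrative routing table: high-volume classification and extraction go
# to a small fine-tuned model, open-ended reasoning to a frontier model.
ROUTES = {
    "classification": "small-finetuned-v2",
    "extraction":     "small-finetuned-v2",
    "reasoning":      "frontier-llm",
}

def route(task_type: str, confidence: float = 1.0) -> str:
    """Pick a model for a task; escalate to the frontier model when the
    small model's self-reported confidence is low."""
    model = ROUTES.get(task_type, "frontier-llm")  # unknown tasks: frontier
    if model != "frontier-llm" and confidence < 0.8:
        return "frontier-llm"                      # explicit escalation path
    return model

print(route("extraction"))       # → small-finetuned-v2
print(route("extraction", 0.6))  # → frontier-llm
```

Making the routing logic explicit, rather than hard-wiring one model everywhere, is what lets the portfolio evolve as model prices and capabilities shift.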

06 · Governance

Governance: The Part Most Engineers Skip

The governance stack is the part that separates a production agent from a demo video. Human-in-the-loop gates must be defined by action class: pre-action for irreversible or high-blast-radius operations such as financial postings and external communications, post-action for reversible updates, confidence-based for routine flows. Decision traceability must be logged at the level of the individual agent call, with provenance that an auditor can follow from outcome back to data and prompt. Under EU AI Act Article 14 and the NIST AI Risk Management Framework, human oversight must be trained, measurable, and provable — not an assertion in a policy document. Engineering teams that treat governance as a compliance afterthought consistently produce agents that cannot be deployed in regulated operations.
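
The gate policy by action class maps naturally onto a small lookup. A sketch; the action classes and the 0.9 threshold are illustrative, and a production version would also write the audit-trail entry at each decision:

```python
# Gate type per action class, following the pre/post/confidence taxonomy.
GATES = {
    "financial_posting":      "pre_action",        # irreversible: approve first
    "external_communication": "pre_action",        # high blast radius
    "crm_update":             "post_action",       # reversible: review after
    "routine_lookup":         "confidence_based",  # gate only when unsure
}

def needs_human_approval(action_class: str, confidence: float) -> bool:
    gate = GATES.get(action_class, "pre_action")   # unknown class: strictest gate
    if gate == "pre_action":
        return True
    if gate == "confidence_based":
        return confidence < 0.9
    return False  # post_action: proceed now, review asynchronously

print(needs_human_approval("financial_posting", 0.99))  # → True
print(needs_human_approval("routine_lookup", 0.95))     # → False
```

Defaulting unknown action classes to the strictest gate is the fail-safe choice: a new action type pauses for a human until someone deliberately classifies it.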

07 · Metrics

Metrics That Matter

The KPI set for agent systems is small and unforgiving: automation rate as the percentage of flows executed end-to-end without human intervention; decision accuracy against a ground-truth sample; cost per automated decision, including inference, tools, and human review; mean time to detect and remediate when the agent errs; and auditable-decision ratio as the percentage of outputs with full provenance. KPIs before APIs. The measurement plan must exist before the first agent is deployed, not after the first quarterly review asks what the spend produced.
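
All five KPIs are computable from a per-decision log, which is why the measurement plan can and should exist before the first agent ships. A minimal sketch over an invented log format (the field names are assumptions, not a standard schema):

```python
def kpis(log):
    """Compute the core agent KPIs from a list of per-decision records."""
    n = len(log)
    return {
        "automation_rate":   sum(1 for d in log if not d["human_touched"]) / n,
        "decision_accuracy": sum(1 for d in log if d["correct"]) / n,
        "cost_per_decision": sum(d["cost_usd"] for d in log) / n,  # inference + review
        "auditable_ratio":   sum(1 for d in log if d["has_provenance"]) / n,
    }

log = [
    {"human_touched": False, "correct": True,  "cost_usd": 0.04, "has_provenance": True},
    {"human_touched": True,  "correct": True,  "cost_usd": 0.31, "has_provenance": True},
    {"human_touched": False, "correct": False, "cost_usd": 0.04, "has_provenance": False},
    {"human_touched": False, "correct": True,  "cost_usd": 0.05, "has_provenance": True},
]
print(kpis(log)["automation_rate"])  # → 0.75
```

Note how the human-reviewed decision dominates cost per decision: that asymmetry is why automation rate and cost must be read together, not separately.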

08 · Roadmap

Roadmap: From the Classroom to the Production Line

The pragmatic sequence is familiar. Begin with an assessment of three to five workflows where decision density is high and integration is already painful — sales qualification, invoice processing, support triage, supply-chain reconciliation. Pilot one agent with a measurable financial target and a four-to-eight-week time box. Productionize with observability, HITL gates, and a governance review. Replicate the pattern across two more workflows, and only then scale to an agent fleet with shared memory and inter-agent protocols. Avoid POC theater: any pilot that cannot publish a measurable delta within one reporting cycle is not a pilot, it is a cost line.

So what: The engineers who will matter in the next decade are not the ones who wire the most APIs. They are the ones who orchestrate the fewest agents to do the most work, safely and auditably.

Socradata Perspective

From integration to execution: the window is open now.

The FIUBA master class was not an academic exercise. It was a preview of the hiring specification that every enterprise will be writing by 2027. Socradata's position in this landscape is deliberate: we do not sell models and we do not sell integrations. We work at the decision and execution layers, designing the orchestration substrate — agent topology, model routing, memory architecture, HITL gates, and measurement instrumentation — that turns LLM capability into booked operational outcomes inside ERP, supply chain, and analytics systems.

The shift from integration to execution is the single most consequential architectural transition enterprise software has faced in fifteen years. The organizations that understand this early, and that invest in the orchestration capability rather than in more point solutions, will compound a measurable advantage. From pilot to policy, KPIs before APIs, interoperability or it does not scale — the principles hold, and the window to act on them is open now.

Is Your Enterprise Ready to Orchestrate?

If your teams are still wiring APIs while your competitors are deploying agents, the gap is architectural — and it is solvable in weeks, not years.

Request an Operational Diagnostic