Lead AI Architect

Location: CO-Bogotá
Posted Date: 2 days ago (3/24/2026 5:07 PM)
Job ID: 2026-4516
# Positions: 1
Category: Managed Teams

Job Summary

Grant Thornton is building an AI Factory to deliver production‑grade, agentic AI solutions that generate measurable business outcomes while meeting enterprise standards for trust, security, and governance. As a Lead AI Architect, you will serve as the technical authority within an AI Pod, responsible for designing, governing, and scaling agentic systems from concept through production. This role sits at the intersection of AI engineering, enterprise architecture, and responsible AI. You will define how agents think, act, integrate, and fail safely, ensuring solutions are robust, observable, and fit for real‑world operations.

Responsibilities

 

Agentic Architecture & Technical Leadership

 

  • Own the end‑to‑end architecture for agentic AI solutions, from design through production
  • Define and implement agentic patterns, including:
    • Planner / Executor / Validator agents
    • Tool‑using and multi‑agent orchestration
    • Memory, retrieval (RAG), and context strategies
  • Ensure agent behavior is:
    • Bounded
    • Observable
    • Recoverable in failure scenarios

Platform & Integration Design

 

  • Select and standardize on appropriate platforms and services (e.g., Azure‑based AI stacks)
  • Design integration patterns for:
    • Enterprise systems (ERP, CRM, case management)
    • APIs and event‑driven workflows
    • Human‑in‑the‑loop escalation paths
  • Partner with Automation and Integration Engineers to ensure agents can execute actions, not just generate responses

Enterprise Readiness & Non‑Functional Requirements

 

  • Define and enforce non‑functional requirements, including:
    • Security, identity, and access control
    • Data privacy and handling constraints
    • Latency, reliability, and cost controls
  • Ensure solutions are auditable, traceable, and aligned with enterprise risk expectations
  • Design for scale, reuse, and long‑term maintainability across AI Pods

Evaluation, Monitoring & Guardrails

 

  • Establish evaluation frameworks for:
    • Accuracy and quality
    • Hallucination detection
    • Drift and degradation over time
  • Define monitoring and observability standards:
    • Model and prompt performance
    • Cost‑to‑serve and usage patterns
    • Failure and escalation metrics
  • Embed Responsible AI and safety controls by design, not as after‑the‑fact reviews

Collaboration & Enablement

 

  • Partner closely with:
    • AI Product Leads on use‑case framing and acceptance criteria
    • AI Engineers on implementation and optimization
    • Central Platform & Trust teams on standards and guardrails
  • Contribute to reusable patterns, reference architectures, and playbooks within the AI Factory

Skills and Experience

 
  • 8+ years in software architecture, AI engineering, or platform engineering
  • Hands‑on experience designing and deploying AI systems into production
  • Demonstrated ability to operate as a technical authority across multiple teams or initiatives
  • Experience working in enterprise or regulated environments

Agentic & AI Expertise

 

  • Deep understanding of:
    • Generative AI and LLM behavior
    • Agentic architecture and orchestration patterns
    • Prompt engineering as a software discipline
  • Practical experience implementing:
    • Tool calling and action frameworks
    • Memory and retrieval systems (RAG)
    • Multi‑step reasoning and control flows
  • Strong grasp of AI failure modes and mitigation strategies

Technical Skills

 

  • Proficiency in Python and/or TypeScript
  • Experience with:
    • AI/LLM SDKs and orchestration frameworks
    • API‑first and event‑driven architectures
    • CI/CD for AI workloads
  • Familiarity with cloud‑native architecture patterns (preferably Azure)

Preferred Qualifications

 

  • Experience designing AI solutions with:
    • Human‑in‑the‑loop controls
    • Regulatory or audit requirements
  • Background in MLOps, platform engineering, or large‑scale distributed systems
  • Exposure to Responsible AI, model risk management, or AI governance frameworks

