Waqas Ahmed

VP, AI Engineering · OpenText

I lead the platforms and architecture at the intersection of enterprise data and AI, turning OpenText's information advantage into governed, scalable AI across cloud, SaaS, and hybrid environments.

Focus area

Last reviewed April 2026

As Vice President of AI Engineering at OpenText, I lead the design and evolution of enterprise AI platforms across cloud, SaaS, and hybrid environments, with a focus on scalable AI architecture, agentic systems, and grounding large language models in enterprise data, governance, and security.

Systems closest to the work

I'm closest to applied AI systems embedded in operational workflows, including AI assistants, RAG pipelines, and automation layers that support domain-specific work.

In practice, this includes augmenting content-centric workflows across industries: supporting cybersecurity analysts triaging alerts, supply chain operators managing partner interactions, IT service teams handling incidents and requests, and development teams performing DevSecOps activities such as code analysis, test case automation, and vulnerability triage.

A key focus is building structured context through data assembly and context graphs and using that foundation to enable reliable agentic orchestration and secure interactions across systems.

The emphasis is on orchestration over prompting, structuring how models interact with enterprise data, tools, and workflows. Success is measured by reduced cycle time and cognitive load through tightly scoped, context-aware systems embedded into existing platforms.
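The idea of orchestration over prompting can be sketched in a few lines. This is a minimal, hypothetical illustration (the names `Orchestrator` and `lookup_incident` are invented for this sketch, not a real product API): the model proposes actions as structured data, the orchestrator only executes tools from an approved registry, and every call is recorded for auditability.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Orchestrator:
    """Constrains model-proposed actions to a registry of approved tools."""
    tools: dict[str, Callable[..., Any]] = field(default_factory=dict)
    audit_log: list[dict] = field(default_factory=list)

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self.tools[name] = fn

    def execute(self, action: dict) -> Any:
        # The model emits structured actions, never free-form code; anything
        # outside the registry is rejected, keeping autonomy bounded.
        name = action.get("tool")
        args = action.get("args", {})
        if name not in self.tools:
            self.audit_log.append({"tool": name, "status": "rejected"})
            raise PermissionError(f"tool not allowed: {name}")
        result = self.tools[name](**args)
        self.audit_log.append({"tool": name, "status": "ok", "args": args})
        return result

# Usage: a stand-in tool for IT incident triage.
orchestrator = Orchestrator()
orchestrator.register(
    "lookup_incident",
    lambda incident_id: {"id": incident_id, "severity": "high"},
)
result = orchestrator.execute(
    {"tool": "lookup_incident", "args": {"incident_id": "INC-42"}}
)
```

The design choice this sketch reflects is that control and traceability live in the orchestration layer, not in the prompt: the audit log, not the model's output, is the record of what the system actually did.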

Problem being solved

The problem is reducing the effort and latency of high-stakes operational work, especially where users must process fragmented information and act quickly.

This shows up as alert fatigue in cybersecurity operations, document overload in regulated industries, exception handling in supply chain and B2B ecosystems, and incident triage in IT service management environments.

Constraints are domain-driven, including strict compliance and auditability, real-time operational pressure, and data fragmentation across enterprise systems and partner networks. Outputs must be contextually accurate, traceable, and aligned with internal standards.

This is a leverage model, not replacement. Systems operate with bounded autonomy, supporting human decision-making while maintaining reliability, explainability, and cost control.

What operating AI in the real world teaches you

AI adoption slows when reliability, consistency, auditability, and explainability are missing. Automating tasks is not enough: systems must produce outputs that are repeatable, defensible, and aligned with operational standards.

This is critical for teams operating in high-stakes environments, where decisions must be traceable and justified. Inconsistent or opaque outputs quickly erode trust and limit usage.

Proof of value is equally important. Organizations require clear, measurable ROI, such as reduced resolution time, improved throughput, or lower operational burden. Simple automation is not sufficient without demonstrable impact.

The systems that scale prioritize controlled behavior, verifiable outputs, and clear value. Proof and reliability together drive real adoption.

What changes in the next 12–24 months

AI systems are already taking actions, not just generating text. The next phase of enterprise AI will be defined less by generation and more by control, and control depends as much on the data layer as the model layer. Over the next 12 to 24 months, observability, explainability, and decision traceability will become non-negotiable requirements, supported by new data engineering practices like context graphs that keep enterprise information live and valid. Early stateful cognition layers will also begin to emerge, helping systems maintain memory and accountability across longer interactions.