
Jason Riback
President · MediaMint
Building Agentic Growth Services capabilities and embedding specialized teams powered by AI assistants into enterprise workflows to run and scale go-to-market, customer success, marketing, and data operations with measurable outcomes.
Focus area
At MediaMint, we’re focused on helping enterprise companies operationalize AI across their go-to-market, customer, and data functions in a way that actually runs day to day.
As President, my work centers on partnering with teams that have already made significant investments in tools and data but are still working to make execution more consistent and reliable at scale. In many cases the infrastructure is in place, but the day-to-day execution of the work can break down under volume or variation.
To address that, we work directly with operating teams inside client organizations to drive the development, adoption, and execution of agentic workflows. This spans sales, marketing, media, and data operations, where even small inconsistencies can create meaningful downstream issues. The focus is on improving how that work is carried out so it remains consistent as scale increases.
We are building toward a Service-as-Software model, where AI is embedded into workflows and tied directly to real outputs. This reflects how MediaMint operates today, as we run and scale key parts of our clients’ go-to-market and customer operations.
Systems closest to the work
We are closest to systems that are already running in production across media operations, marketing operations, sales support, and platform operations. That includes building media plans from RFPs, supporting campaign setup, tracking delivery, and generating reporting outputs that teams use every day.
In practice, these systems handle repeatable parts of the work like validating inputs during setup, checking delivery against plan, and assembling reporting from multiple data sources. In some cases, that includes generating a first draft media plan in minutes or reducing manual exceptions in trafficking workflows. The teams are still actively involved in reviewing outputs and handling edge cases, so the work stays accurate.
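To make that review loop concrete, here is a minimal sketch of a delivery-against-plan check of the kind described above. The data shape, field names, and the 5% pacing tolerance are illustrative assumptions, not MediaMint's actual implementation; flagged items would route to a human reviewer, matching the review loop described above.

```python
from dataclasses import dataclass

# Illustrative threshold: flag line items pacing more than 5% behind plan.
# The tolerance, field names, and data shape are assumptions for this sketch.
PACING_TOLERANCE = 0.05

@dataclass
class LineItem:
    name: str
    planned_impressions: int    # what the media plan booked
    delivered_impressions: int  # what delivery reporting shows

def check_delivery(items: list[LineItem]) -> list[str]:
    """Return flags for line items that are under-pacing or have bad inputs."""
    flags = []
    for item in items:
        if item.planned_impressions <= 0:
            # Input validation: a zero or negative plan is a setup error.
            flags.append(f"{item.name}: invalid planned impressions, needs review")
            continue
        pace = item.delivered_impressions / item.planned_impressions
        if pace < 1 - PACING_TOLERANCE:
            flags.append(f"{item.name}: delivering at {pace:.0%} of plan")
    return flags

# Example run: one healthy line item, one under-delivering.
items = [
    LineItem("Homepage takeover", 1_000_000, 980_000),
    LineItem("Run of network", 500_000, 400_000),
]
for flag in check_delivery(items):
    print(flag)  # flagged items go to a human reviewer, not auto-corrected
```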
Problem being solved
The teams we work with are managing high volumes of campaigns and customer operations across multiple environments, where different parts of the process are handled in different places. A common issue is that what gets planned, what gets trafficked, and what actually delivers do not always line up cleanly. That creates rework, delays, and a lot of manual reconciliation.
Our work focuses on reducing those gaps so issues are caught earlier in the process. That includes tightening how campaigns are set up, making sure inputs are consistent, and improving how delivery is tracked and validated. The constraint is that all of this has to work within the systems clients already use, so the focus is on improving how those processes run rather than replacing them.
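As an illustration of the reconciliation involved, here is a minimal sketch assuming three record sources keyed by a shared campaign ID. The IDs, figures, and field names are hypothetical, for illustration only; the point is surfacing mismatches early rather than during end-of-month reporting.

```python
# Minimal reconciliation sketch: join planned, trafficked, and delivered
# records by a shared campaign ID and surface mismatches before they become
# downstream reporting problems. All IDs, figures, and field names here are
# hypothetical.

planned    = {"C-101": 250_000, "C-102": 100_000}  # from the media plan
trafficked = {"C-101": 250_000, "C-102": 90_000}   # from ad server setup
delivered  = {"C-101": 248_500}                    # from delivery reporting

def reconcile(planned, trafficked, delivered):
    issues = []
    for cid, goal in planned.items():
        booked = trafficked.get(cid)
        if booked is None:
            issues.append(f"{cid}: planned but never trafficked")
        elif booked != goal:
            issues.append(f"{cid}: trafficked {booked:,} vs planned {goal:,}")
        if cid not in delivered:
            issues.append(f"{cid}: no delivery data recorded yet")
    return issues

for issue in reconcile(planned, trafficked, delivered):
    print(issue)
```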
What operating AI in the real world teaches you
One thing I’ve learned is that most issues show up at the edges of a workflow, not in the core task. Incomplete inputs, data that is not ready, unclear ownership, and exceptions that used to be handled manually tend to break first. If those are not accounted for, the system can look fine in testing but struggle once it is running in production. Agentic AI also needs to be implemented strategically as part of the entire workflow, not bolted on as an afterthought.
Because many organizations struggle to know whether their data and operations are ready for AI, we created AIRA, an assessment that helps enterprises gauge their AI readiness.
I’ve also seen that teams adopt these systems when they reduce friction in work they already own. If something helps them catch issues earlier, reduces back and forth during setup, or saves time on recurring checks, they start to rely on it without needing to be pushed.
Where AI is creating measurable impact
The most consistent impact shows up in high volume workflows where teams are doing the same checks repeatedly. In campaign operations, for example, automating parts of delivery validation, reporting, and setup checks can reduce errors and increase the amount of work a team can handle. In some cases, that has led to around a 30% reduction in campaign errors along with better overall throughput.
Where things fall short is in areas that are less structured. If inputs vary, ownership is unclear, or there are too many exceptions handled manually, the setup struggles to stay consistent. That is usually where teams end up stepping back in and rechecking the work.
A clear example of measurable results is our work with Freestar, a leading publisher services and monetization partner for today’s most trusted publishers. To support its next phase of growth, Freestar scaled its onboarding framework by embedding MediaMint’s AI-powered assistant, Mia, which helped it maintain the white-glove service and validation standards that have long defined its enterprise publisher relationships. The results include:
- 3X deeper validation coverage - Freestar can now execute more comprehensive domain-level quality checks across its 20-point onboarding framework.
- 25-minute yield checks - Domain-level validation can now be completed in approximately 25 minutes, allowing large publisher batches to be processed in a single day.
- 100% governance at scale - Consistent validation across domains of all sizes, without skipped checks or quality trade-offs.
- 70% faster turnaround time - Parallel agentic execution increased on-time completion and improved onboarding efficiency.
What changes in the next 12–24 months
I expect more of the routine work to shift into systems that are directly tied to how workflows actually run, especially in areas like campaign setup, delivery tracking, and reconciliation where small inconsistencies create downstream issues. A common pattern I see is teams spending time fixing mismatches between what was booked, what was trafficked, and what actually delivered. As these systems mature, more of that gets caught earlier, closer to the point where the work is created.
I also think the gap between solutions that perform well on their own and those that hold up in production will become more visible. It comes down to whether they can handle edge cases in campaign setup, stay consistent across different clients and platforms, and produce outputs that teams trust without constant rechecking. The real test is whether a system continues to perform across thousands of campaigns and changing inputs.
