Why Enterprise AI Struggles With ROI and Data Control
Enterprise leaders tackle AI ROI, GenAI governance, and data controls. Conduktor execs reveal why shift-left alone won't protect LLM inputs.

AI adoption is accelerating, but leaders are stuck on hard questions: What is AI actually used for? Will it justify the investment? How do we guarantee data safety for AI inputs? How do we trust AI outputs?
Quentin Packard (SVP Sales) and Stephane Derosiaux (CPTO) at Conduktor have spoken with many enterprise leaders about their AI concerns. The recurring themes: reliability and trust of inputs and outputs, unclear business outcomes, and the critical role of context for accurate LLM results.
Most Companies Cannot Define AI ROI
Leaders struggle to measure whether AI investments pay off.
"It's not always easy to determine what AI gives us in terms of benefits," one CTO admitted. "I'm struggling to define ROI. I can use GenAI, type faster code, or search better, but I don't have the ROI or the time frame down."
Another leader described AI as a solution searching for a problem. "Our CEO says we need to onboard AI. Whenever I ask my teams what problems they want to solve with AI, that never goes well. They say 'we don't have any problems,' because they don't want to lose anyone. Hunting down what we're trying to solve with AI requires a complete shift."
Part of the issue is a mismatch between AI types and intended purposes. "When we talk about AI, we often talk about GenAI and B2C use cases, and we try to apply these to businesses," Stephane explained. "Often, GenAI has nothing to do with these use cases, which were better served by classical machine learning."
The real value of AI lies in amplifying human effort. "Today, you can start a business with five people and GenAI to do something that, ten years ago, you would have needed 100 people for. It's not unlocking business outcomes. It's cost-efficiency outcomes."
AI Adoption Breaks Traditional Data Governance
Widespread AI adoption forces teams to rethink data governance. LLMs ingest massive amounts of sensitive data. Regulations like GDPR and the EU AI Act raise the stakes for misuse.
"People are sending data to LLMs, and people are taking data from LLMs to send somewhere else," Stephane explained. "If I feed an email to my LLMs, they will be able to train on this email, and the email content will be in the response. You will have absolutely no control over data and quality."
CxOs need to rework their trust frameworks before issues arise. "How do people start to think about data controls and getting more proactive? About the trust between human, AI, and code, and the collaboration between technologies?" Quentin asked.
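To make "proactive data controls" slightly more concrete, here is a minimal sketch of an input gate that redacts obvious identifiers before a prompt ever reaches an external model. The regex patterns and the `call_llm` parameter are illustrative assumptions, not a description of any particular product; a real deployment would rely on a proper classification or DLP policy rather than two regexes.

```python
import re

# Illustrative patterns only: email addresses and IBAN-like strings.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def redact(prompt: str) -> str:
    """Replace known-sensitive values with placeholders before the
    prompt can reach an external model."""
    prompt = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    prompt = IBAN_RE.sub("[REDACTED_IBAN]", prompt)
    return prompt

def guarded_completion(prompt: str, call_llm) -> str:
    """Only the redacted prompt crosses the organization's boundary."""
    return call_llm(redact(prompt))

# Usage: the raw email address and account number never leave the gate.
print(redact("Forward this to jane.doe@example.com re: DE44500105175407324931"))
```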
Default-Open Access Is the Wrong Model for AI
One solution: make data restrictions the default, granting access only when explicitly required.
"Today, the default is that all data is accessible," Stephane explained. "But because of AI and LLMs, you are losing control of who can send data to LLMs and external partners. So we start to see a shift from having everything accessible, emails, financial data, to no access by default. You have to grant access instead. Organizations are starting to see federation, where they track who has access to what."
Shift-Left Security Is Not Enough for AI
Rethinking data governance doesn't fit existing security paradigms. It requires new ones.
"It's not a pure shift-left approach," Stephane continued. "It's shift left and right. You want to give access to people, but you also want someone in your business to know what is happening at any point in time, so you can remove access immediately if necessary."
Organizations that solve these three obstacles (strategy, trust, and control) can adopt AI at scale and unlock its real potential.
Conduktor provides tools to secure, scale, and control Kafka data for AI use cases. Eliminate risks, enforce governance, and gain visibility into streaming data pipelines. Sign up for a demo to see what Conduktor can do for your team.
