Generative AI ROI: Why 80% of Companies See No Results
Written by
Serena Clifford
Last updated on:
September 4, 2025
Despite rapid adoption, most companies see little return from generative AI initiatives. Here’s why projects falter—and how focused strategies can change the outcome.
According to McKinsey & Co., generative AI has the potential to add up to $4.4 trillion in value to the global economy. This potential has led more than 78% of companies to use GenAI in at least one business function.
However, while nearly eight in ten companies report using GenAI, just as many report realizing “no significant bottom-line impact.” This raises the question: why are so many companies failing to see an ROI from their AI projects? This “GenAI paradox,” as McKinsey dubs it, instills distrust and hesitancy around new GenAI initiatives.
Fortunately, while the numbers may be daunting, most failures trace back to a few common issues across GenAI projects, and with the right strategies and AI solutions provider, those issues can be addressed before they grow into larger problems.
Why GenAI Projects Struggle to Deliver
Though more companies are using generative AI than ever, McKinsey found that only one percent of companies view their GenAI strategies as mature. Knowing what’s behind that gap is essential to closing it.
Overreliance on Horizontal AI
One of the core reasons that companies fail to see ROI is the rift between horizontal and vertical AI usage. Horizontal AI, sometimes called “general AI,” refers to AI that can be used across a number of roles and industries. Vertical AI, in contrast, addresses industry and use-case specific pain points. McKinsey found that many enterprises use horizontal AI models, such as copilots and chatbots, with nearly 70% of Fortune 500 companies using Microsoft 365 Copilot.
Although 40% of workers find these general AI models extremely or very helpful, the benefits they provide tend to be spread out across employees—and, as a result, are less visible in terms of top- or bottom-line results.
Vertical AI, on the other hand, has a higher potential for direct economic impact. Vertical, or domain-specific, AI solutions are tailored to a specific company, industry, or vertical. These solutions utilize industry-specific knowledge, trends, and data to address the unique needs of their users.
Because vertical AI is built for targeted use cases, its outputs are more likely to impact revenue, reduce costs, or improve efficiency in measurable ways. For example, at FullStack, we built an AI-powered assistant for research and advisory firm Lux Research. The solution, Luxer, pulls from the firm’s proprietary research to generate answers for clients’ questions and allows clients to schedule inquiries with subject matter experts.
This vertical AI-powered assistant led to a 3.6x increase in user scheduling speed, improving Lux Research’s client experience.
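The article doesn't describe Luxer's internals, so the sketch below is purely hypothetical: it illustrates the general retrieval-grounded pattern such assistants typically follow, where the most relevant proprietary documents are selected and supplied to a language model as context. The relevance score (naive word overlap) and the document texts are invented for illustration.

```python
# Hypothetical sketch of a retrieval-grounded assistant's first step:
# rank proprietary documents by relevance and build a grounded prompt.
# The scoring function and corpus are illustrative, not Luxer's design.
def score(query: str, doc: str) -> int:
    """Naive relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, corpus: list[str], top_k: int = 2) -> str:
    """Select the top_k most relevant documents and prepend them as context."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this research:\n{context}\n\nQuestion: {query}"

corpus = [
    "Solid-state battery research shows cost declines through 2030.",
    "Quarterly advisory notes on carbon capture pilots.",
    "Analyst coverage of green hydrogen electrolyzer vendors.",
]
prompt = build_prompt("What does the research say about battery cost?", corpus)
print(prompt)  # the battery document ranks highest and appears in the context
```

In a production system the word-overlap score would be replaced by embedding similarity over the firm's research corpus, but the shape of the pipeline (retrieve, then ground the model's answer) is the same.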
Poor Data Quality
Even the most advanced AI models can only perform as well as the data they’re trained on. However, according to Gartner, 85% of all AI models and projects fail due to poor data quality or a lack of relevant data.
Many companies train their generative AI models on incomplete, disorganized, or outdated datasets, leading to incorrect or subpar outputs. In a 2025 research article on the effects of data quality on machine learning performance, researchers found that the various algorithms tested suffered increasing performance issues as their data was polluted.
The researchers also found that data quality and algorithmic performance were directly correlated across different types of AI tasks, such as classification, regression, and clustering.
For example, in one test on IBM’s Telco Customer Churn dataset, which represents customers of a fictional telecommunications company, model performance dropped nearly 10 percentage points at 20% data pollution. The research makes one thing clear: without high-quality data, even the most advanced AI models struggle to deliver reliable results.
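The pollution effect is easy to reproduce. The sketch below is illustrative only, not the cited study's methodology: it flips a growing share of training labels on a synthetic dataset and measures the resulting accuracy on a clean test set, using scikit-learn.

```python
# Illustrative sketch: "pollute" a share of training labels and watch
# test accuracy degrade. Not the setup of the study cited above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

results = {}
for pollution in (0.0, 0.1, 0.2):
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < pollution  # corrupt this share of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    results[pollution] = model.score(X_test, y_test)  # accuracy on clean data
    print(f"pollution={pollution:.0%}  test accuracy={results[pollution]:.3f}")
```

The exact numbers depend on the model and dataset, but the direction is consistent with the research: more pollution, less reliable outputs.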
Lack of Organizational Commitment
Another common factor behind GenAI failure is a lack of focus and alignment. According to McKinsey, less than 30% of companies report that their CEOs directly sponsor their AI agenda. Without strong leadership, many organizations take a bottom-up approach, where individual functions launch isolated AI projects without a coordinated strategy.
This lack of strategy limits the growth of many generative AI projects. According to Curt Jacobsen and his McKinsey colleagues, about 30 to 50% of a team’s “innovation” time with GenAI is spent either ensuring solutions meet compliance standards or waiting for organizational policies to catch up.
“Teams that could be solving valuable problems are stuck re-creating experiments or waiting on compliance teams, who themselves are struggling to keep up with the pace of development,” Jacobsen notes. “Teams work on problems that don’t matter, duplicate work, and create one-off solutions that can’t be reused and often fail to unlock real value.”
Additionally, when a company’s team fails to communicate, the resulting projects may be built without core IT, data, or business functions in mind. These projects suffer from poor integration with existing operations, fragmented pipelines, and a lack of alignment, making scaling much more difficult.
Lack of AI Expertise
Even when companies have leadership support and clear AI goals, many still struggle because they lack in-house expertise to design, build, and scale generative AI solutions. A research article by Thomson Reuters estimated an AI talent gap of 50% in the coming years, and that as many as 70% of employees may need additional training or upskilling to work effectively with new AI-driven tools and workflows.
Without experienced teams in place, companies initiate generative AI projects without understanding their data requirements, metrics of success, or timelines. The resulting solutions are often underdeveloped and unreliable, making it difficult for companies to see consistent, measurable results.
Unclear Expectations
The generative AI gold rush has led many companies to prioritize implementing AI as soon as possible without thoroughly analyzing use-case goals. According to a Gallup poll from late 2024, only 15% of US employees report that their workplaces have communicated a clear AI strategy. However, a 2025 McKinsey report found that 92% of surveyed executives planned to boost their AI spending in the next three years.
Given that the IBM Institute for Business Value found that enterprise-wide AI initiatives averaged an ROI of just 5.9% against capital investments of roughly 10%, it’s evident that this gap between planning and investment is creating more frustration than measurable impact.
Marina Danilevsky, the Senior Research Scientist of Language Technologies at IBM, notes: “People said, ‘Step one: we’re going to use LLMs (large language models). Step two: What should we use them for?’” This disconnect between hype and functionality costs companies millions in lost time and resources.
IBM writer Ivan Belcic, meanwhile, muses that some entrepreneurs felt that enterprise AI would be “the business strategy hammer for every nail”— the cure-all miracle solution that could be applied to any issue in the company. However, without a realistic idea of what AI can and can’t do for their business, companies are setting themselves up for disappointment and ROI that falls short of their expectations.
How Can You Avoid AI Failure?
Generative AI offers many benefits for companies of different shapes, sizes, and industries. However, realizing those benefits depends on having a clear plan and focused strategy.
Companies can improve their odds of success by:
Incorporating strong data governance practices, such as continuous monitoring and oversight, transparency, and regular data cleaning and processing.
Choosing vertical AI solutions that are tailored to their industry and business challenges, rather than relying only on general-purpose models.
Validating hypotheses around ROI by building AI proofs-of-concept (PoCs) prior to investing in large-scale builds.
Outlining clear, realistic goals about your AI solution and what success would look like, including specific KPIs.
Regularly reviewing and refining AI models to improve accuracy, performance, and long-term ROI.
Bringing business and technical teams together to stay aligned on goals.
Securing executive sponsorship to ensure AI projects have distinct priorities, funding, and support.
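As a concrete illustration of the data governance point above, a simple automated quality gate can catch duplicate and missing records before they reach a model. The check below is a hypothetical minimal sketch in Python with pandas, not a production governance tool; the thresholds and field names are invented.

```python
# Hypothetical data-quality gate: flag duplicate rows and excessive nulls
# before a dataset is used for training. Thresholds are illustrative.
import pandas as pd

def data_quality_report(df: pd.DataFrame, max_null_frac: float = 0.05) -> dict:
    """Return simple pass/fail signals for a candidate training dataset."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fractions": df.isna().mean().to_dict(),
    }
    report["passed"] = (
        report["duplicate_rows"] == 0
        and all(f <= max_null_frac for f in report["null_fractions"].values())
    )
    return report

# Usage: a toy customer table with one duplicated row and a missing value.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "monthly_spend": [120.0, 85.5, 85.5, None],
})
report = data_quality_report(df)
print(report)  # flags 1 duplicate row and a 25% null fraction, so passed=False
```

Running a gate like this on every data refresh, and alerting when it fails, is one concrete form the "continuous monitoring and oversight" practice can take.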
While these steps lay a strong foundation for successful GenAI projects, they aren’t a guarantee in and of themselves. If you’re worried about seeing the best ROI from your GenAI investment, working with a knowledgeable AI development partner can help you boost your odds of success.
The GenAI Divide: State of AI in Business 2025, a new report by MIT’s NANDA initiative, reports that purchasing AI tools from specialized vendors and building partnerships succeed about 67% of the time, while internal builds succeed only one-third as often.
At FullStack Labs, we work with companies to design and build custom software and AI solutions that fit their data, industry, and goals. Our experience allows us to help clients overcome GenAI’s challenges and build solutions that deliver real, lasting results.
Why do generative AI projects fail?
Generative AI projects often fail due to a combination of strategy, technology, and organizational challenges. Many companies adopt AI tools quickly but lack the foundational elements needed for success, such as high-quality data, targeted use cases, executive alignment, and technical expertise. Without these in place, models produce inconsistent outputs, projects lack integration with core business goals, and teams struggle to measure ROI effectively.
How many generative AI projects fail?
Research shows that up to 85% of AI projects fail, with poor data quality being the leading cause. Additionally, while nearly 78% of companies report using generative AI, the majority report no significant bottom-line impact. This gap between adoption and results highlights the "GenAI paradox" — widespread usage, but limited measurable value.
What are the root causes of failure for artificial intelligence projects?
There are several factors behind generative AI project failure:
Overreliance on Horizontal AI
Many companies depend on general-purpose models like copilots or chatbots. While these tools help employees work faster, their benefits are often spread too thin to impact revenue or costs directly.
By contrast, vertical AI — tailored to industry-specific needs and data — produces measurable business outcomes.
Poor Data Quality
Incomplete, outdated, or unorganized datasets undermine model performance. Studies show that even 20% data pollution can cause a 10% drop in accuracy.
Lack of Strategic Alignment
Without executive sponsorship, many teams launch isolated AI experiments that aren’t aligned with business priorities. This leads to duplicated efforts, compliance bottlenecks, and poor integration.
Shortage of Expertise
AI initiatives often fail due to talent gaps. Without skilled developers, data scientists, and AI strategists, companies misjudge timelines, underestimate complexity, and deliver unreliable solutions.
How can companies increase their chances of generative AI success?
To improve success rates, companies should adopt a structured, focused approach:
Choose vertical AI solutions tailored to industry-specific data and workflows rather than relying solely on general-purpose tools.
Implement strong data governance practices, including continuous monitoring, cleansing, and validation.
Start with proof-of-concept (PoC) projects to test feasibility before scaling up.
Ensure executive sponsorship to secure funding, alignment, and clear priorities.
Foster cross-functional collaboration between business, technical, and compliance teams.
Partner with AI experts when internal resources or expertise are limited.
Should I build my own generative AI solution or partner with an expert?
It depends on your company’s resources, expertise, and goals:
Build in-house if you already have a skilled data science team, well-structured data pipelines, and strong executive support.
Partner with experts if your organization lacks AI-focused engineering capacity or wants to minimize risk. Working with a specialized AI development partner can accelerate time-to-value and increase the likelihood of ROI.
For example, FullStack Labs has helped companies like Lux Research implement domain-specific AI assistants that delivered measurable impact — including a 3.6x boost in scheduling speed.