The GenAI Divide: State of AI in Business 2025
"Tools like ChatGPT and Copilot are widely adopted. Over 80 percent of organizations have explored or piloted them, and nearly 40 percent report deployment. But these tools primarily enhance individual productivity, not P&L performance. Meanwhile, enterprise- grade systems, custom or vendor-sold, are being quietly rejected. Sixty percent of organizations evaluated such tools, but only 20 percent reached pilot stage and just 5 percent reached production. Most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations."
"The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time."
The problem's impact across domains
Construction Industry
AI can play a transformative role in the estimation phase of construction projects by rapidly analyzing large and complex project documents—such as specifications, drawings, bids, and contracts—that traditionally require extensive manual review. Using natural language processing and computer vision, AI can extract quantities, identify key scope details, detect inconsistencies, and link relevant cost data from thousands of pages of technical material. This enables estimators to focus on higher-level judgment rather than document parsing, leading to faster, more accurate, and better-informed estimates. However, because these decisions directly affect project budgets and risk allocation, AI must be trustworthy—its analyses need to be explainable, auditable, and grounded in reliable data. Estimators must be able to verify AI outputs, understand its reasoning, and ensure that its interpretations align with project intent. Trustworthy AI not only improves confidence in cost estimates but also fosters collaboration and accountability across all stakeholders in the construction process.
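To make the document-parsing step concrete, here is a minimal sketch of quantity extraction from free-text specifications. The unit vocabulary and sample text are invented for illustration; a production estimator would combine far richer NLP with computer vision over the drawings themselves.

```python
import re

# Hypothetical unit vocabulary; a real estimator would use a much richer ontology.
UNITS = r"(?:sq\s?ft|lf|cy|ea|tons?)"
QTY_PATTERN = re.compile(
    rf"(\d[\d,]*(?:\.\d+)?)\s*({UNITS})\s+(?:of\s+)?([A-Za-z ]+?)(?=[.,;]|$)",
    re.IGNORECASE,
)

def extract_quantities(spec_text: str) -> list[dict]:
    """Pull (quantity, unit, item) triples out of free-text specification language."""
    rows = []
    for qty, unit, item in QTY_PATTERN.findall(spec_text):
        rows.append({
            "qty": float(qty.replace(",", "")),
            "unit": unit.lower(),
            "item": item.strip().lower(),
        })
    return rows

spec = "Furnish and install 1,200 sq ft of gypsum board; provide 85 lf of steel railing."
for row in extract_quantities(spec):
    print(row)
```

Because each extracted triple points back to identifiable source text, an estimator can audit and override the output, which is exactly the explainability requirement the paragraph above describes.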
Healthcare
In the medical field, AI inaccuracies can lead to severe, even life-threatening, consequences.
Misdiagnosis and improper treatment: AI tools trained on incomplete or biased datasets may miss critical information or misinterpret patient data. For example, a diagnostic tool trained primarily on data from one ethnic group may fail to correctly diagnose conditions in other populations.
Dangerous hallucinations: Generative AI models can produce plausible but completely false information. A study found hallucinations in "almost all" medical summaries generated by certain large language models (LLMs). The fabricated information included symptoms, diagnoses, and medication instructions, which could lead to medication errors or inappropriate treatment plans; a naive grounding check for such fabrications is sketched after this list.
Propagation of errors: Once inaccurate information enters a patient's electronic health record, it can be copied and shared across multiple systems, making it difficult to trace and correct. This can have devastating long-term effects on a patient's care and insurance eligibility.
Erosion of trust: Repeated AI errors can cause healthcare professionals to lose trust in these technologies, reducing their willingness to adopt AI-driven decision support and potentially hindering innovation.
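The grounding check referenced above can be illustrated with a deliberately naive sketch: compare clinical terms appearing in the generated summary against the source record and flag anything unsupported. Real systems use entailment models rather than string matching; all data and names here are hypothetical.

```python
import re

def ungrounded_terms(summary: str, source_record: str, watchlist: set[str]) -> set[str]:
    """Flag watchlisted clinical terms that appear in the AI summary but
    nowhere in the source record -- candidate hallucinations."""
    def mentioned(text: str, term: str) -> bool:
        return re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) is not None
    return {t for t in watchlist if mentioned(summary, t) and not mentioned(source_record, t)}

# Hypothetical example data.
record = "Patient reports headache. Prescribed ibuprofen 400 mg."
summary = "Patient has headache and hypertension; continue ibuprofen and lisinopril."
WATCHLIST = {"ibuprofen", "lisinopril", "hypertension", "headache"}
print(ungrounded_terms(summary, record, WATCHLIST))  # {'hypertension', 'lisinopril'}
```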
Autonomous vehicles
For self-driving cars, AI inaccuracies can have catastrophic outcomes that endanger the lives of passengers and pedestrians.
Phantom braking: AI can misinterpret sensor data, causing a vehicle to brake unexpectedly for no real reason. This has led to numerous complaints and can cause rear-end collisions.
Collision with unseen objects: If an AI model was not trained to recognize a specific object or road condition (an out-of-distribution scenario), it can fail to react appropriately, leading to a potential crash; a common detection baseline for such inputs is sketched after this list.
Failure to recognize pedestrians: Studies have shown that some AI pedestrian detection systems are less accurate at night or in different lighting conditions and can have a significantly higher error rate for individuals with darker skin tones.
Compromised safety from cyberattacks: A sophisticated attack could alter a vehicle's navigation, causing it to veer off course, or manipulate its sensors, such as making a 'left turn' sign appear as a 'right turn' sign to its camera.
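The out-of-distribution baseline referenced above is commonly the maximum-softmax-probability check: if the classifier's top confidence falls below a threshold, abstain and fall back to safe behavior rather than act on a guess. A toy sketch, assuming the logits and label set are already available (both invented here):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_or_abstain(logits: np.ndarray, labels: list[str], threshold: float = 0.7):
    """Max-softmax-probability baseline: if the model's top confidence is below
    the threshold, treat the input as out-of-distribution and abstain so a
    fallback (e.g., handing control to a safety system) can take over."""
    probs = softmax(logits)
    top = int(probs.argmax())
    if probs[top] < threshold:
        return "ABSTAIN: possible out-of-distribution input"
    return labels[top]

LABELS = ["pedestrian", "vehicle", "cyclist", "debris"]
print(classify_or_abstain(np.array([4.0, 1.0, 0.5, 0.2]), LABELS))   # confident -> 'pedestrian'
print(classify_or_abstain(np.array([1.1, 1.0, 0.9, 1.05]), LABELS))  # flat -> abstain
```

The design point is that a low-confidence abstention costs a moment of degraded autonomy, while a confident misclassification can cost a collision.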
Finance
Inaccurate AI models in finance can lead to significant financial losses for both institutions and individuals, along with severe regulatory and reputational consequences.
Credit scoring and lending bias: AI models can amplify historical biases present in training data, resulting in discriminatory outcomes. This can cause systems to be stricter on applicants from certain socioeconomic or racial groups, leading to unfair credit scoring and loan denials.
Ineffective fraud detection: Poor data quality can lead to more false positives, where legitimate transactions are wrongly flagged as fraudulent. It can also cause more false negatives, where actual fraud goes undetected. Both scenarios cause financial losses and damage customer trust; the cost tradeoff between the two error types is sketched after this list.
Regulatory compliance risks: If an AI model produces faulty reports or fails to flag a compliance issue, a company can face regulatory fines and legal liabilities. Regulators are increasingly scrutinizing AI models themselves to ensure fairness.
Market instability from herding behavior: If multiple AI agents across financial institutions use similar algorithms, they could react to market conditions in identical ways. A coordinated reaction could trigger bank runs or flash crashes.
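The fraud-detection tradeoff referenced above can be made concrete by attaching a cost to each error type and sweeping the decision threshold. The per-error costs, base rate, and scores below are all invented for illustration:

```python
import numpy as np

# Hypothetical per-error costs: a blocked legitimate payment costs support
# time and goodwill; a missed fraud costs the full loss.
COST_FALSE_POSITIVE = 15.0
COST_FALSE_NEGATIVE = 500.0

def expected_cost(scores: np.ndarray, is_fraud: np.ndarray, threshold: float) -> float:
    """Total cost of flagging transactions whose fraud score exceeds threshold."""
    flagged = scores >= threshold
    fp = np.sum(flagged & ~is_fraud)   # legitimate transactions wrongly blocked
    fn = np.sum(~flagged & is_fraud)   # fraud that slips through
    return fp * COST_FALSE_POSITIVE + fn * COST_FALSE_NEGATIVE

rng = np.random.default_rng(0)
is_fraud = rng.random(10_000) < 0.01                          # ~1% fraud base rate
scores = np.clip(np.where(is_fraud, 0.8, 0.3)                 # imperfect model:
                 + rng.normal(0, 0.2, 10_000), 0, 1)          # noisy separation

for t in (0.4, 0.5, 0.6, 0.7):
    print(f"threshold {t:.1f}: expected cost ${expected_cost(scores, is_fraud, t):,.0f}")
```

Degrading the model's data quality widens the score overlap between the two classes, which raises the achievable minimum cost no matter where the threshold is set.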
Human resources
AI inaccuracies in hiring tools can perpetuate and even amplify human biases, resulting in discriminatory outcomes and reduced diversity.
Exclusion of qualified candidates: AI recruitment systems trained on historical data from a male-dominated company, such as Amazon's former tool, can learn to show bias against female applicants. This leads to missed opportunities and a homogenous workforce.
Unfair candidate screening: AI tools can reject qualified candidates for irrelevant reasons. Experiments have shown that an AI video interview platform could negatively score candidates based on accessories, hairstyles, or background elements.
Serious legal liabilities: The use of biased AI in hiring can expose organizations to discrimination lawsuits and regulatory penalties; the EEOC's four-fifths screen for disparate impact is sketched after this list.
Loss of transparency: Because the exact decision-making process of some AI systems is not fully transparent, it can be difficult to identify and correct bias, leaving affected candidates without clear recourse.
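The four-fifths screen referenced above is straightforward to compute even for an opaque model, because it only needs the model's decisions: each group's selection rate should be at least 80 percent of the best-off group's rate. A minimal sketch with invented outcomes:

```python
from collections import Counter

def four_fifths_check(decisions: list[tuple[str, bool]]) -> dict:
    """decisions: (group, selected) pairs. Flags groups whose selection rate
    falls below 80% of the best-off group's rate -- the EEOC 'four-fifths'
    screen for disparate impact."""
    totals, selected = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best >= 0.8) for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI resume filter.
outcomes = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 30 + [("B", False)] * 70
for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(f"group {group}: selection rate {rate:.0%}, passes four-fifths: {passes}")
```

Because the check treats the model as a black box, it is one of the few audits that survives the transparency problem described above.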
Media and content generation
For generative AI in media and content creation, inaccuracies can result in the widespread propagation of misinformation.
Spreading misinformation and disinformation: AI models can generate plausible but false narratives, fabricated studies, and deepfakes that can manipulate public opinion. For example, AI-generated robocalls were used to spread disinformation during a U.S. election.
Damage to credibility: When AI-generated content includes factual inaccuracies, it can harm a brand's reputation. For marketers, relying on faulty AI-generated reports or predictions can lead to misguided strategies and wasted resources.
Fake legal precedent: In legal research, AI chatbots have been known to invent case law or offer inaccurate legal summaries based on skewed information, misleading professionals and potentially jeopardizing legal cases.