
Editor’s Note: As part of the Winter 2026 eDiscovery Pricing Survey series, conducted by ComplexDiscovery OÜ in partnership with the EDRM (Electronic Discovery Reference Model), this post explores the pricing of generative AI-assisted review — the newest and most rapidly evolving segment of the eDiscovery pricing landscape.

In this final installment, the pricing pulse enters uncharted territory. Generative AI-assisted review represents the frontier of eDiscovery pricing — a space where established models are being tested, new structures are emerging, and the market is still forming its collective understanding of how to value AI-driven work. The high rates of uncertainty captured in these responses are not a weakness of the data but a signal of a market in its earliest stages of price discovery.

Industry Research

The Pricing Pulse: Generative AI-Assisted Review Insights from the Winter 2026 eDiscovery Pricing Survey

ComplexDiscovery Staff

Survey Background

The Winter 2026 eDiscovery Pricing Survey collected responses from late December 2025 through February 21, 2026, drawing 53 participants from across the eDiscovery ecosystem. The respondent pool was overwhelmingly U.S.-based, with 92.5% of participants conducting their eDiscovery-related business in the United States. The remaining respondents were distributed across Europe — including the United Kingdom (3.8%) and non-UK Europe (1.9%) — and Asia/Asia Pacific (1.9%).

By segment, law firms represented the largest group at 43.4% of respondents, followed by software and/or services providers (24.5%), corporations (15.1%), consultancies (9.4%), and media/research organizations or educational associations (7.5%). From a functional standpoint, the pool was dominated by legal and litigation support professionals at 67.9%, with business and business support functions at 26.4% and IT and product development at 5.7%.

Pricing Models for GenAI-Assisted Review

Primary Pricing Model (Question 18)

When asked to identify their primary pricing model for generative AI-assisted review, respondents revealed a fragmented market with no single dominant approach. Hybrid models — combining multiple pricing structures — tied with per-document models as the most common response, each cited by 28.3% of respondents. Per-GB models accounted for 11.3%, while per-token pricing and flat monthly subscriptions each represented 5.7%. Outcome-based pricing was reported by 3.8%.

Some 17.0% selected “do not know/not applicable,” a figure that, while notable, is actually lower than the uncertainty rates observed for several traditional review pricing questions. This suggests that professionals who engage with GenAI-assisted review have begun to form views on how it should be priced, even as the market remains far from consensus.

The tie between hybrid and per-document models is the defining finding. It indicates that the market has not yet settled on a single billing paradigm for GenAI-assisted review. Hybrid approaches — which might combine per-document fees with platform subscriptions, volume tiers, or outcome adjustments — reflect the complexity of pricing a service that blends technology infrastructure costs with analytical output. Per-document pricing, by contrast, offers simplicity and comparability with traditional review, which may explain its appeal even as the underlying technology differs fundamentally from human review.


Review Pricing – Primary Model for Gen AI-Assisted Review in eDiscovery – Winter 2026 – Update


Per-Document Pricing for GenAI-Assisted Review

Per-Document Cost Ranges (Question 19)

Among respondents who could identify a per-document cost for GenAI-assisted review, the distribution centered in the middle tiers. The $0.26 to $0.50 per document range was the most commonly reported at 20.8%, followed by $0.11 to $0.25 (15.1%) and $0.05 to $0.10 (15.1%). At the extremes, 7.5% reported costs above $0.50 per document, while 5.7% fell below $0.05.

The largest single response, however, was “do not know/not applicable” at 35.8% — reflecting both those who use non-per-document models and those who have not yet encountered specific pricing for this service.

Among the 34 respondents (64.2%) who provided a specific price point, the $0.11 to $0.50 range captured the plurality, with 19 respondents (35.8% of the total) falling in this band. This mid-range concentration suggests that per-document GenAI pricing is finding a level that is substantially below traditional per-document human review rates (where $0.50 to over $1.00 per document was common) but not yet as low as the sub-$0.05 rates that pure automation might eventually enable.


Review Pricing – Average Cost Per Document in Per Document Model of Gen AI-Assisted Review – Winter 2026


Per-GB Pricing for GenAI-Assisted Review

Per-GB Cost Ranges (Question 20)

Per-GB pricing for GenAI-assisted review drew a notably high uncertainty rate: 64.2% of respondents selected “do not know/not applicable.” This finding is consistent with the relatively low adoption of per-GB models reported in Question 18 (11.3%) and suggests that most market participants do not encounter or think about GenAI review in per-GB terms.

Among those who provided a response, 17.0% reported costs in the $25 to $50 per GB range, while 13.2% fell below $25 per GB. A small number reported higher rates: 3.8% above $100 per GB and 1.9% in the $51 to $75 range.

The dominance of “do not know/not applicable” is itself the most important finding. It indicates that per-GB pricing — while a familiar and established structure for data processing — has not translated as naturally into the GenAI review context, where the relationship between data volume and review output is mediated by AI models in ways that make per-GB pricing less intuitive.


Review Pricing – Average Cost Range Per GB in Per GB Model of Gen AI-Assisted Review – Winter 2026


Outcome-Based Pricing for GenAI-Assisted Review

Outcome-Based Pricing Structures (Question 21)

Outcome-based pricing — in which costs are tied to results rather than inputs — drew the highest “do not know/not applicable” rate in the entire survey at 79.2%. Among the small group of respondents who provided insight, custom agreements based on specific project goals were most common at 9.4%, followed by tiered pricing based on review speed improvements (3.8%) and fixed fees based on achieved accuracy rates (3.8%). A combination of performance metrics was cited by 1.9%, as was a percentage of cost savings compared to traditional review (1.9%).

While the sample of respondents with direct experience is small, the variety of structures they reported is notable. It suggests that outcome-based pricing for GenAI review is not converging on a single model but rather being negotiated on a case-by-case basis, shaped by the specific goals, risk tolerances, and measurement frameworks of individual engagements.

The 79.2% “do not know/not applicable” rate underscores how nascent outcome-based GenAI pricing remains. For a respondent pool dominated by legal and litigation support professionals who are accustomed to time-and-materials billing, outcome-based models represent a fundamental shift in how value is measured and compensated — a shift that the market is still in the early stages of navigating.


Review Pricing – Typical Structure of Outcome-Based Pricing Models in Gen AI-Assisted Review – Winter 2026


Handling Processing Failures in GenAI-Assisted Review

Failed or Special-Handling Documents (Question 22)

When documents fail to process or require special handling in GenAI-assisted review, the market relies on a range of approaches with no single dominant method. “Do not know/not applicable” led at 39.6%, followed by “requires manual review at standard rates” (18.9%), “depends on the specific issue encountered” (17.0%), “charged as additional processing time” (9.4%), “included in the base price” (9.4%), and “charged separately on a per-document basis” (5.7%).

The distribution across multiple approaches reflects the operational reality that GenAI processing failures vary widely in nature — from format incompatibilities and corrupted files to language barriers and encrypted content — and that no single handling method is appropriate for all cases. The 18.9% citing manual review at standard rates suggests that human fallback remains the most common specific approach, creating a hybrid cost structure in which GenAI efficiency gains are partially offset by the need for human intervention on exception documents.

The “do not know/not applicable” rate (39.6%) was notably lower than the per-GB (64.2%) and outcome-based (79.2%) pricing questions, suggesting that respondents have somewhat greater familiarity with how processing exceptions are handled than with the more complex pricing structures for GenAI review.


Review Pricing – Accounting for Docs That Fail To Process or Require Special Handling (Gen AI) – Winter 2026


Aggregate Analysis: What the GenAI-Assisted Review Pricing Pulse Reveals

The Winter 2026 GenAI pricing results capture a market at the earliest stages of establishing pricing norms. The defining characteristic is not any single price point or model but rather the breadth of approaches and the depth of uncertainty.

No single pricing model dominates. Hybrid and per-document models tied at 28.3% each, with per-GB, per-token, subscription, and outcome-based approaches each capturing smaller shares. This fragmentation contrasts sharply with more established eDiscovery services, where dominant pricing models have emerged through years of market maturation.

Uncertainty escalates with pricing complexity. “Do not know/not applicable” rates rose steadily from 17.0% for the basic model question (Q18) to 35.8% for per-document pricing (Q19), 64.2% for per-GB pricing (Q20), and 79.2% for outcome-based pricing (Q21). This gradient reflects a market where general awareness of GenAI pricing exists, but specific knowledge of structured pricing models decreases as those models become more complex or less commonly encountered.

Per-document GenAI pricing undercuts traditional review. Where respondents could identify a per-document cost, the concentration in the $0.11 to $0.50 range sits well below the $0.50-to-over-$1.00 rates that characterize traditional managed review. This gap quantifies the cost advantage that GenAI-assisted review can offer, even as the market continues to debate how best to capture that value in pricing structures.
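The cost gap described above can be made concrete with a quick back-of-the-envelope calculation. The matter size of 100,000 documents below is a purely hypothetical assumption; only the per-document rate bands come from the survey's reported ranges.

```python
# Illustrative cost comparison: GenAI-assisted vs. traditional per-document review.
# The matter size is a hypothetical assumption; the rate bands reflect the
# survey-reported per-document ranges discussed above.

DOCS = 100_000  # hypothetical matter size (assumption, not a survey figure)

genai_band = (0.11, 0.50)        # mid-range GenAI-assisted band, USD per document
traditional_band = (0.50, 1.00)  # traditional managed review band, USD per document

def matter_cost(band, docs):
    """Return (low, high) total review cost for a matter of `docs` documents."""
    low_rate, high_rate = band
    return (low_rate * docs, high_rate * docs)

genai_low, genai_high = matter_cost(genai_band, DOCS)
trad_low, trad_high = matter_cost(traditional_band, DOCS)

print(f"GenAI-assisted: ${genai_low:,.0f} - ${genai_high:,.0f}")
print(f"Traditional:    ${trad_low:,.0f} - ${trad_high:,.0f}")
# GenAI-assisted: $11,000 - $50,000
# Traditional:    $50,000 - $100,000
```

Even at the top of the GenAI band, the hypothetical matter costs no more than the bottom of the traditional band, which is the competitive pressure the aggregate analysis describes.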

Outcome-based pricing is conceptually appealing but practically rare. Just 20.8% of respondents could describe an outcome-based pricing structure, and the approaches they reported were varied and individually negotiated. The gap between the theoretical appeal of tying cost to results and the practical difficulty of defining, measuring, and enforcing outcome-based agreements remains substantial.

Processing failures create hybrid cost structures. The range of approaches to handling failed documents — from manual review to per-document surcharges to inclusion in the base price — suggests that GenAI pricing is not purely a technology cost. Human fallback remains a necessary component, and how that fallback is priced shapes the total cost of GenAI-assisted review in ways that pure per-document or per-GB models may not fully capture.

Demographics highlight the knowledge gap. With 67.9% of respondents in legal and litigation support roles and 43.4% from law firms, the high uncertainty rates may reflect the buy-side perspective of professionals who are encountering GenAI pricing proposals but do not yet have the market experience to benchmark them. Service providers (24.5%) and consultancies (9.4%), who are more likely to set these prices, may have greater familiarity — but even among supply-side participants, the nascent state of the market limits the emergence of clear pricing standards.

Anticipating the Next Move

The pricing pulse for GenAI-assisted review in Winter 2026 registers as an emerging signal rather than a steady rhythm. The market is actively experimenting with how to price a technology whose capabilities, limitations, and value proposition are still being defined. Hybrid models and per-document pricing have gained early traction, but the high rates of uncertainty across all five GenAI questions suggest that broad market consensus remains years away.

What is already clear, however, is that GenAI-assisted review is being priced at levels substantially below traditional human review, creating competitive pressure that will likely reshape pricing across the entire review phase of eDiscovery. As adoption increases, measurement frameworks mature, and outcome-based models become more practical, the pricing structures captured in this survey will serve as an early baseline for tracking the market’s evolution.

This concludes the Winter 2026 eDiscovery Pricing Survey series. From the established rhythms of forensic collections through the commoditized tiers of processing and hosting, the labor-intensive costs of managed review, and the emergent dynamics of generative AI, the pricing pulse of eDiscovery reflects a market that values predictability where it has been earned, expertise where it is required, and innovation where it is most needed.

News Source

  • Rob Robinson and Holley Robinson, ComplexDiscovery OÜ, “Winter 2026 eDiscovery Pricing Survey,” February 2026.


Assisted by GAI and LLM Technologies


Source: ComplexDiscovery OÜ




