The limits of competition regulation in an AI-driven market
Overview
Over the past few years, businesses have increasingly turned to algorithms to automate market analysis and pricing. Pricing algorithms themselves are not new — airlines have used them since the 1980s.[1] But widespread adoption of these tools, combined with the integration of artificial intelligence and access to vast amounts of consumer data, is giving them new capabilities. The consequence is a fundamental shift in how firms compete and collude. California recognized this problem and attempted to tackle it with AB 325, which took effect January 1, 2026. But by targeting only explicit collusion through shared pricing tools, the law confirms a larger problem: AI enables firms to achieve collusive outcomes, like coordinated pricing or sustained supracompetitive pricing, through conduct that looks like lawful independent decision-making. To address this, California could reimagine antitrust liability or create a specialized regulatory authority. But given the rapid evolution of algorithmic pricing tools, a wait-and-see approach may be the most realistic path forward.
Analysis
Before diving into doctrine, a note on perspective: This piece proceeds from the view that algorithmic coordination harms consumers and warrants regulatory attention. That’s not universally accepted.
The Chicago School tradition, which has significantly shaped antitrust jurisprudence, argues that underregulation is preferable to overregulation, that enforcement should focus on consumer welfare, and that courts should intervene only when challenged conduct decreases economic efficiency.[2] Within this framework, tacit collusion and price discrimination are not targets of enforcement because their overall welfare effects are ambiguous, and because market forces will generally correct firm behavior over time. Specifically, the argument goes that although some consumers may pay more than others and sellers capture surplus, price discrimination doesn’t necessarily reduce social welfare.[3] From this perspective, algorithmic pricing might actually benefit consumers by flattening price variations across markets and improving efficiency through better supply-demand matching.
This article takes a different stance. The concern is that as firms gather massive amounts of consumer data and deploy sophisticated pricing algorithms, their power to sustain coordinated pricing — and engage in pervasive price discrimination — increases to levels unachievable in traditional markets, to the point of becoming a difference of kind rather than degree. In concentrated markets dominated by firms with market power, this may lead firms to exploit pricing power rather than compete on efficiency or quality.[4] This is exactly what antitrust should address. The efficiency benefits may be real, but they do not negate consumer harm from algorithmic coordination that produces collusive outcomes. That’s the lens through which this analysis proceeds, and it’s the approach that originally motivated AB 325.
Antitrust law and the agreement requirement
AI-driven pricing algorithms can monitor and rapidly respond to changes in competitors’ pricing, market conditions, and demand with accuracy and speed no human can match.[5] And they’re cheap to use.[6] With advanced learning capabilities and endless data to train on, these algorithms don’t just apply a static formula: they learn patterns, adapt strategies, and adjust recommendations dynamically. This technology can monitor competitors in real time and converge on pricing strategies without any human agreement, producing collusive outcomes that look identical to price fixing but emerge from what appears to be independent, profit-maximizing behavior.[7] As a result, AI enables new forms of coordination that traditional antitrust law was not built to address.
At the heart of antitrust law is a prohibition against agreements or concerted actions “that prevent the growth of healthy, competitive markets for goods and services and the establishment of prices through market forces.”[8] The goal is preserving consumer welfare by ensuring that genuine competition sets prices, not coordination among competitors.
But not all coordination is illegal. There is a critical distinction between explicit and tacit collusion. Explicit collusion involves an agreement among competitors to restrain trade, such as fixing prices or limiting output. This is illegal. Tacit collusion, or conscious parallelism, occurs when firms independently but knowingly engage in parallel conduct without any agreement.[9] This is generally lawful.
The dividing line is agreement. Collusion with the objective of raising or maintaining profits above the level they would reach in a competitive scenario requires firms to “make short-run sacrifices for long-run gains,” like holding prices high even when undercutting a rival would be immediately profitable.[10] Antitrust enforcement under the Cartwright Act — California’s primary antitrust law — is “predicated on finding an agreement among firms to encourage such short-run sacrifices.”[11] Without an agreement, parallel pricing is considered rational business behavior in oligopolistic markets, not illegal conspiracy.[12]
Agreement doesn’t require a written contract or smoking gun evidence. Courts can infer it from circumstantial evidence like the exchange of competitively sensitive information, the use of shared pricing mechanisms, or patterns of behavior that suggest coordination rather than independent decision-making.[13] But the agreement must exist. Parallel conduct alone, even when it produces the same anticompetitive market outcomes as explicit collusion, is not enough. This distinction made sense in markets run by humans. Algorithms, however, can exploit it, producing collusive outcomes through conduct that fits the legal definition of an independent decision.
AI-powered pricing changes the game
AI fundamentally changes the assumptions underlying traditional antitrust concepts that were developed for human actors. AI algorithms can monitor market conditions in real time, process vast amounts of competitor pricing data, and adjust prices automatically, all without human intervention.[14] These systems “set or recommend prices in ways that result in coordinated outcomes across competitors without any formal agreement.”[15] They can “respond rapidly to market changes, discouraging price competition and creating uniform pricing,” making them highly effective for profit maximization while preserving the appearance of independent decision-making.[16]
The problem is that those features of AI can produce collusion without agreement. Algorithms eliminate human error and irrational decisions.[17] They reduce the market uncertainty that traditionally kept competitors from sustaining coordinated pricing. When multiple firms in a market deploy pricing algorithms, those algorithms can learn to coordinate autonomously.[18] They assess competitors’ prices, recognize patterns, and converge on strategies that maximize profits across the market, not just for a single firm.[19] The algorithms aren’t instructed to collude, but collusion emerges as the optimal strategy.[20]
Imagine a hypothetical market where several competing firms each use independent pricing algorithms trained on publicly available competitor data. No shared platform, no data exchange, no human communication. Each algorithm observes rivals’ pricing and moves in real time to adjust accordingly. Over time, the algorithms learn that undercutting competitors triggers price wars that hurt everyone’s margins. They converge on stable, elevated prices (the classic collusive equilibrium) without any firm ever agreeing to coordinate. Traditional antitrust law has no clear way to reach this conduct. There’s no agreement, no conspiracy — just profit-maximizing algorithms producing the same outcome as illegal price fixing.[21]
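The dynamic in this hypothetical can be made concrete with a deliberately simplified simulation. The pricing rule below is a stylized illustration, not any real vendor’s product, and the constants (a competitive price of 1.0, a monopoly ceiling of 10.0) are invented for the example: each firm’s algorithm sees only the rival’s public price, probes upward while the rival keeps pace, and matches immediately if undercut. Both start at the competitive price and, without exchanging any data, settle at the ceiling.

```python
# Stylized sketch of tacit algorithmic coordination. The rule, prices, and
# step size are hypothetical; no real pricing product is being modeled.

COST = 1.0        # competitive (marginal-cost) price
MONOPOLY = 10.0   # the highest price either algorithm will test
STEP = 0.5        # how aggressively an algorithm probes upward

def next_price(own: float, rival: float) -> float:
    """One firm's learned heuristic: probe upward while the rival keeps
    pace; if the rival undercuts, retaliate by matching immediately."""
    if rival >= own:
        return min(own + STEP, MONOPOLY)  # rival is cooperating: test a higher price
    return rival                          # rival undercut: match, ending the price war

def simulate(rounds: int = 50) -> list[tuple[float, float]]:
    """Run both algorithms simultaneously from the competitive price."""
    a, b = COST, COST
    history = []
    for _ in range(rounds):
        a, b = next_price(a, b), next_price(b, a)  # simultaneous updates
        history.append((a, b))
    return history

if __name__ == "__main__":
    final_a, final_b = simulate()[-1]
    print(f"final prices: {final_a:.2f}, {final_b:.2f}")
```

Nothing in this sketch is an agreement in the antitrust sense: each update responds only to observed market data, yet the joint outcome is the elevated, stable pricing the doctrine associates with collusion.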
Tacit collusion isn’t the only problem. AI also enables harms that have nothing to do with coordination among competitors. Dominant firms with vastly superior algorithms — like Amazon, which reprices items over 2.5 million times daily — can drive market-wide prices upward simply by outpacing rivals’ repricing capabilities, creating what amounts to algorithmic market power without any collusion at all.[22] And by collecting large amounts of consumer personal data, algorithms enable sophisticated price discrimination, creating granular pricing schemes and charging different customers different prices based on their willingness to pay for any good or service.[23] Whether this conduct itself harms consumers is debated, but algorithmic pricing combined with “increasingly concentrated industries dominated by firms with market power” risks firms exploiting that pricing power rather than competing on efficiency.[24] These forms of algorithmic harm, stemming from unilateral conduct instead of coordination, also fall outside traditional antitrust’s reach.
AB 325 targets coordination specifically. In response to mounting evidence of algorithmic collusion and high-profile cases alleging AI-driven price-fixing schemes in the housing, healthcare, and insurance industries, California moved to modernize its antitrust framework. Yet the law only reaches the most visible form of algorithmic collusion, leaving unaddressed how to handle coordination that produces collusive harms through seemingly lawful conduct.
What AB 325 reaches: explicit collusion through shared platforms
As of January 1, 2026, the Cartwright Act prohibits businesses from using or distributing common pricing algorithms as part of an agreement to restrain trade.[25] The law defines a common price algorithm as “any methodology, including a computer, software, or other technology” used by two or more people that uses “competitor data” to recommend, set, or influence pricing or commercial terms.[26] Businesses are also prohibited from coercing others to adopt an algorithm’s pricing recommendations.
The law focuses broadly on algorithms that use “competitor data,” but does not consider whether the algorithm is trained on public or nonpublic data. Importantly, algorithms using only a single firm’s internal data fall outside its reach. AB 325 also lowered the pleading standard for antitrust claims, no longer requiring plaintiffs to plead allegations that exclude the possibility of independent conduct, making it easier for plaintiffs to survive motions to dismiss.[27]
The RealPage case demonstrates what the law now covers. It involves allegations that RealPage’s algorithmic pricing software facilitated coordination among competing landlords by collecting their commercially sensitive rental data and using that data to train AI-driven pricing models that generated recommendations for all users.[28] Landlords from over 16 million units nationwide input their confidential data, including vacancy rates, lease terms, and pricing decisions, into the RealPage system, and RealPage’s algorithm synthesized that data across competitors to produce pricing suggestions designed to boost revenue for all landlords across the market.[29] RealPage also allegedly made it difficult and time-consuming for landlords to decline the algorithm’s pricing recommendations, effectively pressuring adoption.[30]
This is explicit collusion through a shared platform. Users are exchanging competitively sensitive information via a common intermediary, and that information is being used to coordinate pricing across the market. RealPage acts as the “hub” in a hub-and-spoke conspiracy, and its algorithm is the mechanism through which the conspiracy operates. There is agreement through use of the shared tool, exchange of competitor data, and coordinated outcomes. AB 325 reaches this conduct.
What AB 325 doesn’t cover: tacit collusion without data sharing
But not all algorithmic pricing looks like RealPage. Two other cases, Yardi Systems and Cendyn Group, illustrate AB 325’s limits.
Yardi Systems faced similar allegations that its pricing software facilitated rent coordination among competing landlords.[31] The critical difference was that the California court found no evidence that Yardi’s software shared data from its users among competitors.[32] The algorithm did not use one landlord’s data to generate pricing recommendations for other landlords. Each user received recommendations based on their own data and publicly available market information, not on confidential competitor data pooled across the platform.[33] In antitrust terms, there was only an agreement between the hub (Yardi) and the spokes (the landlords), but not between any of the spokes. Absent a rim tying the conspiracy together, there is no horizontal agreement among competitors that violates the Cartwright Act.
Cendyn Group presented the same issue in the hospitality industry. Hotels input data into Cendyn Group’s software, which provided pricing recommendations.[34] The Ninth Circuit affirmed dismissal of the case on the same grounds as Yardi Systems: while it would undoubtedly be an antitrust violation if competing hotels agreed among themselves to adopt Cendyn Group’s pricing recommendations when pricing their own hotel rooms, using the same pricing software isn’t illegal if there’s no evidence the software is sharing confidential information among competing users.[35] Independent use of a common tool, even if it produces parallel pricing, doesn’t constitute an agreement.
AB 325 doesn’t reach Yardi Systems or Cendyn Group. These cases involve no exchange of competitor data through the algorithm, so they don’t meet the statutory definition of a “common pricing algorithm” using “competitor data.” More fundamentally, they don’t involve the agreement element that antitrust law requires. Each firm is making independent decisions — it’s just that those decisions are informed by the same algorithmic tool observing the same market data. This is tacit collusion, and it’s not unlawful under antitrust doctrine. Interestingly, it’s precisely what the original version of AB 325 targeted.
Six separate algorithmic pricing bills were introduced in the 2025 legislative session. AB 325 is the only one that survived, and it passed in a significantly narrowed form. As originally introduced, AB 325 would have gone much further. It would have prohibited the use of pricing algorithms trained on nonpublic competitor data, and the use of any pricing algorithm, regardless of the type of data used, employed by multiple firms in the same market to set or recommend prices.[36] This would have captured situations where competitors indirectly share sensitive pricing data through a third-party vendor or common algorithm, and also targeted tacit coordination through independent use of the same algorithmic tool, even without explicit agreement or data sharing.
This language would have brought Yardi Systems and Cendyn Group within reach, but these provisions were removed. The final version limits the prohibition to algorithms that use “competitor data” and requires an agreement or concerted action, bringing it back within the bounds of traditional antitrust doctrine. The narrowing was not an accident or a drafting oversight. It reflects a deeper problem: extending antitrust liability to cover tacit coordination facilitated by AI would require fundamentally rethinking the doctrinal foundations of competition law. The drafting evolution of AB 325 highlights the deep structural problems with using traditional antitrust frameworks to regulate lawful AI-driven market behavior that produces collusive outcomes.
Obstacles to antitrust enforcement in the age of AI
There are three main obstacles that explain why California’s attempt to prohibit algorithmic coordination without agreement ultimately proved unworkable. The first is perhaps the most obvious: the centrality of the agreement requirement. Algorithms are generally programmed to increase profits, not to collude, and they produce coordination as an output, not as the result of human agreement.[37] AI has the capacity to comprehend “consumer and competitor behavior through data-driven pattern recognition, learning, and adaptation” and “reduce the risk of market failure by pursuing profit-maximizing strategies.”[38] Collusion emerges as the optimal strategy because the algorithm learns that coordinated pricing produces better outcomes than price competition.
This fundamentally challenges the doctrinal requirement of agreement. When independent algorithms deployed by competing firms converge on collusive pricing by observing the same market signals and learning the same lessons, where’s the conspiracy? The algorithms are doing exactly what they’re designed to do: respond rationally to market data and enhance profits. The conduct fits the description of lawful independent decision-making. The harm to consumers is real (prices rise above competitive levels) but the legal element required to prove a violation may simply not exist when algorithms operate without any human intervention at all.[39]
The second problem is deeper. Competition law rests on assumptions about how firms behave in markets, and AI undermines those assumptions. Traditionally, parallel pricing among competitors is presumed to reflect independent business decisions unless there’s evidence of coordination. If two competitors independently and simultaneously adjust their prices, the assumption is that they are responding to relevant economic information rather than coordinating with each other.[40] This assumption worked in human-run markets because uncertainty and information gaps prevented perfect coordination.[41] Firms couldn’t monitor competitors constantly, couldn’t predict their reactions with certainty, and couldn’t adjust prices instantaneously.
AI eliminates those frictions. Algorithms monitor vast amounts of market data in real time, detect patterns in competitor behavior, predict consumer preferences, and adjust prices automatically, faster and more precisely than humans ever could. The result is that parallel pricing can now reflect a form of coordination that emerges from algorithmic learning rather than human agreement. And determining whether conduct should be considered unlawful becomes nearly impossible because AI behaves in fundamentally different ways than human actors — ways we don’t yet fully understand. Indeed, AI decision-making is opaque at best, a black box at worst.[42] Anticompetitive outcomes may just reflect a computer’s impartial response to market data, and courts cannot assess intent or strategy when an algorithm’s reasoning isn’t accessible even to the firm deploying it.[43]
This creates a paradox. Competition law assumes firms will act in their own self-interest, and “reliance on competition to promote socially desirable outcomes” depends on that assumption.[44] The efficiency argument holds that algorithmic pricing reflects this dynamic: firms pursuing self-interest through better market intelligence, ultimately benefiting consumers through efficiency gains and price flattening. We cannot tell firms not to use tools that help them respond more effectively to market conditions. That would be telling them not to act in their own self-interest, which is economically untenable.[45] Yet when those tools produce collusive outcomes autonomously, traditional antitrust doctrine has no clear way to intervene without prohibiting rational business behavior.
And expanding liability to cover algorithmically facilitated coordination risks diluting the existing notion of what constitutes an agreement, misclassifying lawful interdependence, and penalizing efficiency-enhancing technology.[46] Here again, courts have long held that the basic exchange of market information isn’t an antitrust violation.[47] And the Ninth Circuit recently explained that “using a competitive advantage gained from establishing a [data collection] infrastructure that renders [a firm] uniquely suited to serve its customers” is not unlawful or anticompetitive.[48] Ultimately, drawing a line between legitimate competitive intelligence and unlawful coordination becomes nearly impossible when algorithms are making the decisions.
The third problem explaining AB 325’s trajectory is that algorithmic pricing has genuine efficiency benefits that regulators don’t want to chill. Pricing algorithms can improve matching of supply and demand, reducing waste and mispricing.[49] In competitive markets, these efficiency gains can be passed on to consumers through lower prices, better service, or improved product quality.[50] Algorithms can even facilitate competition: firms can use AI to compete more aggressively on price by responding faster to market opportunities.
Yet the same AI capabilities that produce these benefits also produce the harms. An algorithm that monitors competitor pricing to identify opportunities for vigorous undercutting is using the same technology as an algorithm that monitors competitors to sustain collusive pricing. The tool is neutral, and the outcome depends on market structure, algorithm design, and factors that may not be visible even to the firms using these tools.[51] Whether these tools ultimately benefit or harm consumers is a matter of perspective and values — a policy judgment about which outcomes matter more, rather than a purely technical legal question.
This is why stretching antitrust liability to cover tacit algorithmic collusion is so difficult. It risks treating the use of any shared algorithmic tool, regardless of data sharing, as per se illegal. It would sweep in beneficial uses along with harmful ones, potentially chilling innovation in AI-driven pricing technology precisely when those tools are becoming ubiquitous across industries.[52]
California faced this trade-off directly. The original version of AB 325 would have prohibited any pricing algorithm used by multiple competitors, even without data sharing. That language was removed because the risk of overdeterrence was too high. But pulling back leaves California unable to address a real problem: AI enables coordination that produces the harms of explicit collusion through conduct that remains lawful under traditional antitrust doctrine. The tools have changed, but the aversion to regulating coordination without agreement persists.
Where we go from here
What comes next is uncertain. One option is for California to reimagine antitrust liability, basing it on a firm’s awareness of using algorithms that facilitate collusion.[53] Under an “awareness-based” approach, firms would be free to deploy algorithms but must refrain from conduct that is “known” or “should be known” to facilitate collusion.[54] Because it is often unclear how AI makes decisions, this approach would give firms an incentive to scrutinize how they use AI, and it would hold them accountable for anticompetitive conduct resulting from AI’s decisions. But it could also create the opposite incentive: willful ignorance. If firms are held liable based on awareness, they may choose not to investigate or become aware of how their algorithms function, which would undermine the entire enforcement mechanism. This standard may also be unworkable because if parallel pricing can reflect either efficient competition or tacit coordination and enforcers cannot reliably distinguish between them, it’s not clear how courts would determine what firms “should have known.” An awareness-based approach presumes a clear line between lawful and unlawful conduct that algorithmic pricing blurs.
Another option, proposed by the California Law Revision Commission, is a “digital-specific regulatory regime.”[55] This approach is modeled on the United Kingdom’s framework and calls for a dedicated regulator tasked specifically with identifying “problematic firms, problematic conduct, and [] solution[s] to protect competition and consumers.”[56] Solutions could include certain protections for consumers or requirements for user access, data portability, and so on.[57] This approach recognizes the legislature’s challenge in adapting a legislative framework that protects consumers and competition while enabling AI innovation.[58] A separate digital regulatory authority in California would remove that pressure and shift enforcement decisions from courts applying traditional antitrust doctrine to an agency with technical expertise in digital competition and markets. Because the United Kingdom has already implemented this approach, it may be more viable than reimagining antitrust liability from scratch, though creating a new California regulatory authority would require significant legislative and budgetary commitment.
Given the efficiency benefits algorithmic pricing provides and the rapid evolution of the technology, California may choose to wait and observe how these tools continue to develop before making significant regulatory moves. The alternative, revisiting the more aggressive prohibitions originally proposed in AB 325, would address tacit coordination directly but risks chilling beneficial uses of any algorithm used by multiple firms, even without data sharing. Waiting allows the state to gather better information about the technology’s harms and benefits, and gives AI interpretability and steerability research time to improve and address the black-box problem (though current research suggests the opacity of advanced AI models is increasing rather than decreasing). But delay may be costly for consumers who continue paying supracompetitive prices where coordination exists. And as algorithmic pricing becomes more entrenched, it may become politically or technically harder to regulate later.
The wait-and-see approach bets that better information will lead to better regulation, accepting short-term consumer harm as the cost of getting it right down the line. For California, situated as a leader in both technology innovation and regulation, this may be the best path forward for navigating that tension. Until then, the onus is on Californians to use tools outside of antitrust law, like the California Consumer Privacy Act’s restrictions on the data collection that enables personalized pricing, to address specific harms like algorithmic price discrimination.[59]
Conclusion
AB 325 provides a small window into the future of competition regulation in California. What we see is telling: California identified the harm, crafted a legislative response, and still couldn’t bridge the gap between what algorithms do and what antitrust law can reach. The problem isn’t that antitrust enforcement has failed to adapt; it’s that the doctrinal foundations may be incompatible with algorithmic coordination. For now, AB 325 stands as both progress and limitation: a law that modernizes antitrust for explicit algorithmic collusion while leaving unresolved how to address collusive outcomes stemming from lawful use of AI pricing tools. The only certainty here is continued disruption as California struggles to adapt to its new AI reality.
—o0o—
Morgan Mitruka is a senior research fellow at the California Constitution Center.
- Weinstein, Pricing Algorithms — What Role for Regulation? (Feb. 2024) CPI Antitrust Chronicle at 3.
- See Fischer, The Rise of the Data-Opoly: Consumer Harm in the Digital Economy (2021) 99 Wash. U. L.Rev. 729, 735–37.
- See Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 206.
- See ibid.
- See Iacobucci, Algorithmic Pricing, Anticompetitive Counterfactuals, and Antitrust Law, U. Chi. L.Rev. Online (Dec. 6, 2024) at 2; Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 199–208.
- Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 201.
- See Iacobucci, Algorithmic Pricing, Anticompetitive Counterfactuals, and Antitrust Law, U. Chi. L.Rev. Online (Dec. 6, 2024) at 2.
- In re Cipro Cases I & II (2015) 61 Cal.4th 116, 136.
- Eddins v. Redstone (2005) 134 Cal.App.4th 290, 304–07.
- Mackay & Weinstein, Dynamic Pricing Algorithms, Consumer Harm, and Regulatory Response (2022) 100 Wash. U. L.Rev. 111, 117.
- Ibid.
- See Assem. Com. on Judiciary, Analysis of Assem. Bill No. 325 (2025-2026 Reg. Sess.) as introduced April 4, 2025 at 6.
- See ibid. at 5.
- Ibid. at 1.
- Ibid.
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 25.
- See Weinstein, Pricing Algorithms — What Role for Regulation? (Feb. 2024) CPI Antitrust Chronicle at 5; Nazzini & Henderson, Overcoming the Current Knowledge Gap of Algorithmic “Collusion” and the Role of Computational Antitrust (2024) at 3.
- See Iacobucci, Algorithmic Pricing, Anticompetitive Counterfactuals, and Antitrust Law, U. Chi. L.Rev. Online (Dec. 6, 2024) at 2.
- See Nazzini & Henderson, Overcoming the Current Knowledge Gap of Algorithmic “Collusion” and the Role of Computational Antitrust (2024) at 3.
- See Stucke & Ezrachi, Antitrust, Algorithmic Pricing and Tacit Collusion in Research Handbook on the Law of Artificial Intelligence (Barfield & Pagallo edits., 2018) at 624–48.
- Weinstein, Pricing Algorithms — What Role for Regulation? (Feb. 2024) CPI Antitrust Chronicle at 5 (“[A]symmetries in pricing capabilities will tend to drive prices higher.”); Mattes, Algorithmic Price Administration: How Amazon Hijacks the Price System (2022) 37 Berkeley Tech. L.J. 1489, 1489.
- See Nazzini & Henderson, Overcoming the Current Knowledge Gap of Algorithmic “Collusion” and the Role of Computational Antitrust (2024) at 4; Porat, Algorithmic Personalized Pricing in the United States in The Cambridge Handbook of Algorithmic Price Personalization and the Law (Esposito & Grochowski edits., 2025) at 300–30.
- Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 206.
- See Cal. Bus. & Prof. Code § 16729 et seq.
- Ibid.
- See Bell Atlantic Corp. v. Twombly (2007) 550 U.S. 544, 554 (“An antitrust conspiracy plaintiff with evidence showing nothing beyond parallel conduct . . . must include evidence tending to exclude the possibility of independent action[.]”).
- See generally Am. Compl., U.S. et al. v. RealPage, Inc. (M.D.N.C. Jan. 7, 2025) No. 1:24-cv-00710-LCB-JLW.
- Ibid. at ¶ 24.
- Ibid. at ¶¶ 62–65.
- See generally Mach, et al. v. Yardi Systems, Inc., et al. (Alameda County Super. Ct. Oct. 20, 2025) No. 24-cv-063117.
- See ibid. at 6–11.
- See ibid.
- See Gibson v. Cendyn Group LLC (9th Cir. 2025) 148 F.4th 1069, 1076.
- Ibid.
- See Assem. Com. on Judiciary, Analysis of Assem. Bill No. 325 (2025-2026 Reg. Sess.) as introduced April 4, 2025 at 7–9.
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 6.
- Ibid. at 25–26.
- Porat, Algorithmic Personalized Pricing in the United States in The Cambridge Handbook of Algorithmic Price Personalization and the Law (Esposito & Grochowski edits., 2025) at 313.
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 9–10.
- See Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 201.
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 19.
- Ibid. at 29.
- Iacobucci, Algorithmic Pricing, Anticompetitive Counterfactuals, and Antitrust Law, U. Chi. L.Rev. Online (Dec. 6, 2024) at 1.
- Stucke & Ezrachi, Antitrust, Algorithmic Pricing and Tacit Collusion in Research Handbook on the Law of Artificial Intelligence (Barfield & Pagallo edits., 2018) at 637.
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 4.
- See United States v. Citizens & S. Nat’l Bank (1975) 422 U.S. 86, 113 (“[T]he dissemination of price information is not itself a per se violation of the Sherman Act.”); United States v. U.S. Gypsum Co. (1978) 438 U.S. 422, 441 n.16 (“The exchange of pricing data and other information among competitors does not invariably have anticompetitive effects.”).
- Dreamstime.com, LLC v. Google LLC (9th Cir. 2022) 54 F.4th 1130, 1142.
- See Nazzini & Henderson, Overcoming the Current Knowledge Gap of Algorithmic “Collusion” and the Role of Computational Antitrust (2024) at 3.
- See Mehra, Algorithmic Competition, Collusion, and Price Discrimination in The Cambridge Handbook of the Law of Algorithms (W. Barfield edit., 2020) at 199.
- See Stucke & Ezrachi, Antitrust, Algorithmic Pricing and Tacit Collusion in Research Handbook on the Law of Artificial Intelligence (Barfield & Pagallo edits., 2018) at 625 (arguing the nature of sophisticated computer algorithms and AI “depends on how firms employ them, and whether their incentives are aligned with our interests, and certain market characteristics”).
- Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 5.
- See Kuenzler, Why Algorithmic Collusion and Discrimination Upend the Foundations of Competition Law in The Oxford Handbook of Regulatory Contract Law (Atamer & Hellgardt edits., 2026) at 18–19.
- Ibid. at 19.
- Memorandum 2024-47, Expert Report: Artificial Intelligence (Oct. 1, 2024) California Law Revision Commission, Study B-750 at 9.
- Ibid. at 10.
- Ibid.
- See Stucke & Ezrachi, Antitrust, Algorithmic Pricing and Tacit Collusion in Research Handbook on the Law of Artificial Intelligence (Barfield & Pagallo edits., 2018) at 645.
- See Porat, Algorithmic Personalized Pricing in the United States in The Cambridge Handbook of Algorithmic Price Personalization and the Law (Esposito & Grochowski edits., 2025) at 326–29 (explaining the rights to limit, delete, and opt out of the sale of data or its sharing with third parties granted by the CCPA as a means of combating price discrimination).