Your Lawyer Knows AI. That May Not Be Enough.

The conflict of interest no one in the legal profession is talking about yet.

The Question You Have Not Thought to Ask

In a previous article on LinkedIn,13 I described how agentic AI systems fail silently: how the model that passed review on the day you signed your contract may not be the same model making decisions about your business a year later. That phenomenon is called drift, and it is a documented, measurable risk that most AI contracts I have seen, particularly model contracts, do not adequately address.5,6,8

But there is a question that precedes the contract itself. Before you evaluate the terms, before you assess the vendor, there is something you should know about the attorney sitting across the table from you.

Does that attorney’s training, their firm’s relationships, and the templates they are using actually serve your interests, or have they been quietly shaped by the other side?

This article is about a structural conflict of interest that is forming in real time inside the legal profession. Most attorneys involved do not recognize it. That is not a moral failing. It is a knowledge gap, and in AI matters, knowledge gaps have legal consequences.

Two Chairs. One Profession.

As AI procurement has grown into a significant legal practice area, law firms have moved quickly to capture it. Many have added AI-focused attorneys, launched AI practice groups, and pursued vendor-facing AI training and certifications.5,9 The intent is sound. The problem is structural.

These same firms are now sitting in two chairs simultaneously. On one side of the table, they help AI vendors design standard contract terms, carefully engineered to limit vendor liability for errors, drift, data use, and autonomous decision failures. On the other side of the same table, they tell corporate buyers that they will negotiate to protect them from exactly those risks.5,9

The conflict is not always transactional in the narrow sense. In my opinion, it may also be cognitive. A lawyer who has spent years drafting terms that minimize vendor exposure develops a mental model of what is “standard” and what is “reasonable” in AI contracts. That mental model does not switch off when the client on the other side of the engagement is a buyer. It shapes which risks feel urgent, which clauses feel worth fighting over, and which concessions feel acceptable.

In my opinion, you cannot represent sellers and buyers in this field with equal effectiveness. The frame of mind required is not interchangeable.

What the Ethics Rules Already Say

This is not a novel ethical concern requiring new rules. The framework already exists.

The American Bar Association’s Model Rule 1.7 on conflicts of interest with current clients states that an attorney may not represent a client if there is a significant risk that the representation will be materially limited by the attorney’s responsibilities to another client, a former client, a third person, or by the attorney’s own interests.1 The standard is not proof of actual harm. It is whether a significant risk of material limitation exists.1

In Puerto Rico, the ethical framework must be stated with precision. Under the former Código de Ética Profesional, the prohibition reached not only actual conflicts, but also the appearance of improper conduct.2 The new Reglas de Conducta Profesional, adopted through our Supreme Court’s resolution ER-2025-02, reorganize that framework, address current-client conflicts in Rule 1.7, and expressly incorporate technological competence in Rule 1.19, including the use of artificial intelligence.2,3

Rule 1.19 makes clear that it is no longer sufficient to know how to use AI tools. Attorneys are professionally obligated to understand the legal and ethical implications of the technology they are advising clients to adopt.3,4,7

Applied to AI procurement, a firm that has designed vendor-protective contract language, trained its attorneys on vendor-sponsored curricula, and built its AI practice templates around the seller’s position may face a serious question about independence when it then purports to represent a buyer in negotiating those same types of agreements.1,5,9

The Certification Problem

Many attorneys and firms now hold AI certifications. This signals initiative and awareness. But the source of those certifications matters enormously.9

When an attorney’s primary AI education comes from programs sponsored by the vendors themselves, the practical effect is that their view of what constitutes adequate risk disclosure, acceptable indemnification, and reasonable performance standards has been shaped by the party whose commercial interest is to minimize all three. The sample clauses in those programs were not drafted by consumer advocates. They were drafted by legal teams whose job was to facilitate adoption with minimum friction and maximum protection for the vendor.5,9

A certification is a record of participation. It is not a substitute for independent risk analysis conducted from the buyer’s perspective.

The Financial Component

Many AI procurement decisions are not paid in cash on day one. They are financed, directly or indirectly, through credit facilities, project finance, or broader corporate borrowing that supports technology investments. For banks and institutional investors, these systems are not only tools; they are part of the financed risk on the borrower’s balance sheet.10,12

Recent work by central banks and international bodies on financing the AI boom and on AI in finance highlights concerns about model opacity, third-party dependencies, and concentration risk. These are the same dynamics that arise when AI contracts rely on vague standards instead of concrete obligations on performance, transparency, drift management, and allocation of liability when the system fails.10,11

A lender that underwrites an AI-heavy project has a direct interest in the buyer’s contracts being specific, auditable, and enforceable. Weak AI procurement contracts make it more likely that things go wrong, and when they do, the lender feels it too. A well-advised buyer, by contrast, arrives at the credit table with contracts that include audit rights, defined performance thresholds, and vendor notification obligations for model updates, the kind of language that gives a lender something concrete to work with.10,11,12

That alignment of interests between buyers and their lenders is new territory, and it will be the subject of a future article. The short version: financial institutions and investors have every incentive to insist on stronger AI contracts, and in some deals, they already are.

Where Drift Makes This Worse

As I described in “The Agentic AI Trap,” AI systems do not remain static after deployment. They drift. Models are retrained, updated, and reconfigured by vendors, often without the buyer’s formal approval of a “new version.” The system behavior that was reviewed and contracted for may change materially over time.13,6,8
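To make “documented, measurable risk” concrete: drift is something a buyer can actually test for, which is why audit rights and performance transparency clauses have teeth. The following is a minimal illustrative sketch, not a production monitoring tool; the bin count, distance metric, and the 0.1 threshold are all hypothetical choices, and real drift monitoring would use richer statistics and domain-specific baselines.

```python
# Illustrative sketch of drift measurement (all thresholds hypothetical):
# compare the distribution of a model's scores at contract signing ("baseline")
# against its scores on comparable inputs months later ("current").

def output_distribution(scores, bins=4, lo=0.0, hi=1.0):
    """Bucket model scores in [lo, hi] into a coarse histogram of proportions."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in scores:
        i = min(int((x - lo) / width), bins - 1)  # clamp top edge into last bin
        counts[i] += 1
    total = len(scores)
    return [c / total for c in counts]

def total_variation_distance(p, q):
    """Half the L1 distance between two distributions: 0 = identical, 1 = disjoint."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def drift_exceeds_threshold(baseline_scores, current_scores, threshold=0.1):
    """True if the model's output distribution has shifted beyond the threshold."""
    p = output_distribution(baseline_scores)
    q = output_distribution(current_scores)
    return total_variation_distance(p, q) > threshold
```

The point for contract drafting is that a check like this only works if the vendor is obligated to expose the data it needs: baseline outputs at acceptance, and ongoing access to comparable outputs after each model update.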

A buyer-side attorney who genuinely understands drift will insist on specific contract language: ongoing audit rights, performance transparency obligations, clear allocation of liability when post-deployment changes cause harm, and defined notification requirements when the vendor updates the model.5,6

A vendor-side attorney, even one advising a buyer, will tend toward language that sounds protective but leaves those questions open: “commercially reasonable efforts,” “best practices,” “as-is” warranties, and indemnification clauses that cover IP infringement but say nothing about autonomous decision failures or drift-related errors.5,9

The difference between these two documents is the difference between a client who is protected and a client who believes they are protected. Those are not the same thing.

A Note on Intent

Nothing in this article suggests that attorneys operating in this space are acting in bad faith. The legal profession is encountering AI risk faster than its educational infrastructure can absorb it. Attorneys who took vendor certifications were trying to stay current. Firms that built AI practice groups were responding to real client demand.4,7,9

The problem is not intention. The problem is structure. And structural conflicts do not require bad intent to create real harm.

The ethical response to a structural conflict is not shame. It is clarity. Attorneys who recognize this dynamic have an opportunity to get ahead of it, to audit their own positioning, disclose where appropriate, and, where necessary, decline engagements that their training and firm relationships do not allow them to handle independently.1,3

Firms that already hold vendor relationships may find that the most ethical and commercially sound solution is to bring in independent outside counsel, specifically oriented toward buyer representation for AI procurement matters. That is not a concession. That is a standard of care.

Conclusion

The standard of care in AI procurement law is being set right now. It is being set by the contracts that are being signed, the clauses that are being accepted, and the questions that are not being asked.

Some firms are in the business of selling AI systems. Some are in the business of certifying their adoption. This practice is in the business of making sure organizations understand what they are actually agreeing to before they agree to it.

If your organization is evaluating AI systems or renegotiating vendor contracts, or if your firm is advising buyers or lenders on AI-heavy projects and you want an independent, buyer-side view of the risk, you can reach out to explore a limited-scope engagement focused on AI procurement, drift, and governance.

The question is not whether you will encounter this issue. You already have, or you will soon. The question is whether you will have had the right counsel when it mattered.


Footnotes:

1. ABA Model Rules of Professional Conduct R. 1.7 (Am. Bar Ass’n 2020).

2. Tribunal Supremo de Puerto Rico, Resolución ER-2025-02 (June 16, 2025), and prior Código de Ética Profesional de Puerto Rico.

3. Tribunal Supremo de Puerto Rico, Reglas de Conducta Profesional de Puerto Rico (June 16, 2025).

4. Anabelle Torres Colberg, La nueva Regla 1.19 de Conducta Profesional: Una visión de futuro para la profesión jurídica, Microjuris al Día (June 17, 2025).

5. Wotton + Kearney, Navigating Legal Issues in Contracting for AI Solutions (Dec. 17, 2024).

6. Thomson Reuters, Managing AI Models’ Opacity and Risk Management Challenges (Jan. 12, 2026).

7. Practical Law The Journal, Ethical Duty of Technological Competence (Jan. 1, 2026).

8. Moody’s Analytics, Model Risk Management in the Age of AI.

9. BARBRI, AI Vendor Contracts: Data Rights, IP, Risk Allocation, and Compliance (Oct. 22, 2025).

10. BIS, Financing the AI boom: from cash flows to debt (Jan. 6, 2026).

11. OECD, AI in finance and related OECD work on regulatory approaches and supervision of AI in finance.

12. Financing the Future: Credit Perspectives on the AI Investment Boom (Dec. 18, 2025).

13. Norma Berríos, The Agentic AI Trap: What No One Tells You Before You Sign (LinkedIn, March 2026).