This week is disability awareness week. I suppose every week is disability awareness week for me😊. In keeping with the spirit of the week, we have another blog entry.
Previously, we blogged on whether the work product privilege is jeopardized by the use of AI. In that blog entry, here, we talked about two cases that came up with seemingly irreconcilable approaches. Given the language in the two decisions, I am not sure those cases can be reconciled even accounting for the fact that one litigant was pro se and the other was not. Now, we have a third case dealing with a pro se plaintiff and the plaintiff’s use of AI. Does such use violate work product privilege? Does that question really depend upon the platform that the plaintiff uses? A case answering both of those questions is our blog entry for the week, Morgan v. V2X Inc., here, decided by Magistrate Judge Braswell of the United States District Court for the District of Colorado on March 30, 2026. As usual, the blog entry is divided into categories, and they are: facts; court’s reasoning regarding whether work product protections apply to a pro se litigant’s use of AI; court’s reasoning for the need for a protective order expressly restricting the use of AI; and thoughts/takeaways. Of course, the reader is free to focus on any or all of the categories.
I
Facts
Plaintiff, acting pro se, claimed he was subjected to a hostile work environment and eventually terminated based on his race and national origin and in retaliation for protected activities, including opposing sexual harassment and engaging in protected whistleblowing activity. During the course of the litigation, plaintiff sought an insurance policy from the defendant. The defendant refused to supply it unless plaintiff disclosed his AI tool and submitted to certain restrictions on AI use. Both parties were using AI in connection with the litigation, but disagreed on how AI should or should not be used in connection with confidential information as defined in a prior protective order.
II
Court’s Reasoning Regarding Whether Work Product Protections Apply to a Pro Se Litigant’s Use Of AI
- Federal Rule of Civil Procedure 26(b)(3)(A) protects documents and tangible things prepared in anticipation of litigation or for trial by or for another party or its representative.
- Federal Rule of Civil Procedure 26(b)(3)(B) recognizes that such documents and tangible materials may include mental impressions, conclusions, opinions, and legal theories.
- Federal Rule of Civil Procedure 26(b)(3)(B) adds another layer of protection to mental impressions and opinions when they come from a party’s attorney or other representative.
- Federal Rule of Civil Procedure 26(b)(3) refers to things prepared in anticipation of litigation by any party (emphasis in opinion), which is language that would seem to include material created by a party before retaining a lawyer as well as by a party who never actually hires an attorney.
- A reading that the rule includes material created before retaining a lawyer is also reinforced by the Rule’s history. In particular, the Advisory Committee’s amendments were specifically designed to extend protection beyond attorney work product to materials prepared by or for a party (emphasis in opinion). Since then, courts have routinely interpreted the Rule to apply not just to attorney work product, but also to a pro se litigant’s work product.
- Courts have broadly interpreted the rule to protect not just litigation preparation materials, but also the mental impressions, opinions, and theories of parties (emphasis in opinion).
- While only attorneys and other representatives get the additional heightened protection under Federal Rule of Civil Procedure 26(b)(3)(B), a party’s own mental impressions are nevertheless generally protected under Federal Rule of Civil Procedure 26(b)(3).
- Pro se litigants are forced to act as both party and advocate simultaneously.
- For the first time in history, widespread access to powerful technology may make that dual role faced by pro se litigants surmountable.
- A reading of Federal Rule of Civil Procedure 26(b)(3) that conditions work product protection for AI-assisted materials on the involvement of counsel is not supported by the rule’s text and would further disadvantage unrepresented litigants.
- Since pro se litigants are held to the same standard as represented litigants, they should also be afforded the same protections.
- Heppner is distinguishable in at least two ways: 1) Heppner was a criminal matter; 2) Heppner involved a gap between the party and the attorney because the defendant acted entirely apart from his lawyer. No such gap exists when a pro se litigant is involved.
- While it is true that AI systems like ChatGPT, Claude, Gemini, and other AI widely available to the public collect user data for training and other purposes, that does not eliminate all expectations of privacy or automatically waive protections.
- Nearly all electronic interaction today passes through third-party systems. Google, for example, hosts millions of accounts and, by extension, has access to millions of messages, emails, documents, videos, and more. Phones, computers, in-home smart devices, and other electronics collect information about us to offer more customized services. That simply cannot mean that anyone with a Gmail account has forfeited all rights to confidentiality and privacy.
- Intermediary access alone does not extinguish privacy expectations.
- The Supreme Court has held that the mere fact that information is held by a third-party intermediary does not automatically extinguish the reasonable expectation of privacy in that information.
- While the Fourth Amendment governs searches and seizures and has a completely different legal framework from the work-product doctrine, the principle involved in those cases is informative. That is, routing information through a third-party system does not forfeit all privacy.
- Unlike a general-purpose search engine, which passively returns results, many modern AI platforms are specifically designed and trained to engage. They invite candid and significant disclosure of information, including sensitive information. They also simulate empathy, foster trust, and interact in a way that feels genuine and intimate. Research confirms that people share personal and sensitive information with AI chatbots, often without appreciating what happens to that information once shared.
- The situation of a pro se litigant using AI to assist with litigation preparation closely resembles the kind of confidential, strategy-laden work product that Federal Rule of Civil Procedure 26(b)(3) was designed to protect.
- Given how AI tools function, it is entirely reasonable for a person to expect some privacy and confidentiality when interacting with these tools, even though they understand that a third party is behind the AI collecting and storing their information.
- Work product protections are typically waived by disclosure to an adversary, or in circumstances substantially increasing the likelihood that an adversary will obtain the materials.
- Even though AI use technically discloses information to a third party, it is highly unlikely the information will fall into the hands of an adversary absent some legal process to compel it. Therefore, AI interactions do not automatically compromise work product protections.
- Defendant’s request for the name of the tool is legitimate and reasonable. If plaintiff has already submitted confidential information to an AI system, which it appears he has, the defendant is entitled to know which system it is.
III
Court’s Reasoning For The Need For A Protective Order Expressly Restricting The Use Of AI
- The suggested language submitted by both the plaintiff and the defendant for clarifying the protective order just doesn’t work for the court.
- The defendant’s suggested language is imperfect but makes more sense than the plaintiff’s.
- So, the language the court ultimately settled on is: “No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool (“AI”), unless the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service. Where disclosure to a third party is essential to service delivery, any such third party shall be bound by obligations no less protective than those required by this Order. In addition, the AI provider must contractually afford the party or authorized recipient the ability to remove or delete all CONFIDENTIAL information upon request. A party intending to use AI that it contends meets these requirements must retain written documentation of these contractual protections.” (Emphasis in opinion).
- The court recognizes that, practically speaking, its language clarifying the use of AI in this case will bar the parties from using most, if not all, mainstream low-to-no-cost AI to process confidential information. This type of restriction disadvantages pro se litigants, as enterprise-tier AI accounts offering these protections may be available only through organizational procurement processes, or at costs that a pro se litigant is unlikely to bear. Even so, the court cannot ignore the real risks associated with mainstream tools that persistently collect and store data and can compromise confidentiality. The clarifying language is not intended to leave the pro se plaintiff without the benefit of being able to use AI. Modern AI tools may be used in many ways that do not involve uploading confidential information, and nothing in the revised clarifying language restricts those uses.
IV
Thoughts/Takeaways
- Assuming the plaintiff is not happy with the clarification of the protective order and kicks it up to the District Judge, it will be interesting to see if the District Judge agrees with the scope of the protective order as clarified.
- It is absolutely true that AI can be used for information that is not confidential. One of the ways I use AI is to develop PowerPoint presentations (unfortunately, PowerPoint is not meaningfully accessible to voice dictation users), and it is pretty cool what AI can do in that situation. No confidential information is involved.
- For a litigant, especially a pro se litigant, figuring out what information is confidential in order to keep it out of an affordable AI system may be extremely difficult. Privileges are not always easy to understand, even for lawyers, let alone for those without legal training. With regard to information submitted by another party, it is unclear how a pro se litigant would know whether that information is confidential, except for self-serving statements from the other party that it is. Even interpreting a protective order might be difficult. In that situation, I could see a person asking AI to dumb down the order so that the pro se litigant could understand it. Of course, asking AI to dumb down the order carries risks of its own, in that the dumbing down might change the order’s meaning.
- If not using an enterprise system, a pro se litigant subject to a protective order similar to the one in this case would definitely want to turn off the AI’s training feature, if that is possible.
- One wonders whether the clarifying protective order goes further than the reasoning the court uses for holding the work product privilege applies to pro se litigants when using AI.
- What does any of this have to do with persons with disabilities? Quite a bit. People with disabilities are using AI in a variety of ways. For example, people with ADHD use AI to help them organize their thoughts. People with communication issues might use AI so that they can explain themselves better. I am sure the list goes on and on. Also, I can’t tell you how many calls I get from people with disabilities who have meritorious disability discrimination cases but simply cannot afford the kind of services I provide. AI is an equalizer, but enterprise systems do not come cheap. Further, I just read in the April 13, 2026, issue of the Wall Street Journal that there is a huge shortage of computing power for these AI systems, which means that the actual cost of using them is increasing all the time.
- If a litigant is not pro se, and the client uses AI to accommodate a disability, it makes a lot of sense for the attorney and the client to be quite explicit about how the AI is used and that it is being used to accommodate a disability. While a federal judge who is not part of an executive agency is free to do what they want in their courtroom with respect to discriminating or not against a person with a disability (with the exception of some limited rules pertaining to the hearing loss community), a lawyer has independent obligations under various provisions of the ADA to accommodate a client with a disability. So, that is another reason for an attorney and their client to be explicit about AI use and whether it is part of a reasonable accommodation process.
- It makes sense for an attorney to include in the retainer agreement how the client can expect AI to be used by the attorney. It also makes sense to discuss in that agreement the expectations for how a client might use AI.
- This case winds up closer to the side holding that work product privilege is waived when using AI than to the side holding that it is not waived.
- Ultimately, the Supreme Court is going to have to step in. I have absolutely no idea what they might do when faced with issues of work-product and attorney-client privilege when using AI.
- These cases have bigger implications than just the evidentiary privilege. Journalists are now going big on AI. I read all the time about how journalists are using AI to expand the scope of their coverage, and I am sure they are using it in all kinds of other ways. Journalism protects its sources. Many of these journalists are freelancers who may not be able to pay the freight for enterprise AI. Are they jeopardizing their sources by using Gen AI under the reasoning of this case and others like it?