The USPTO issued guidance on February 6, 2024 that clarified existing rules and policies and discussed how to apply them when AI is used in the drafting of submissions to the Patent Trial and Appeal Board (PTAB) and Trademark Trial and Appeal Board (TTAB). As a follow-up, the USPTO has now published additional guidance in the Federal Register on some important issues that patent and trademark professionals, innovators, and entrepreneurs must navigate while using artificial intelligence (AI) in matters before the USPTO. The guidance recognizes that practitioners use AI to prepare and prosecute patent and trademark applications. It reminds individuals involved in proceedings before the USPTO of the pertinent rules and policies, identifies some risks associated with the use of AI, and provides suggestions to mitigate those risks. It states that while the USPTO is committed to maximizing AI’s benefits, it recognizes the need, through technical mitigations and human governance, to cabin the risks arising from the use of AI in practice before the USPTO. The USPTO has determined that existing rules protect the USPTO’s ecosystem against such potential perils, and thus no new rules are currently being proposed.

In a stern warning, the guidance emphasizes that the USPTO does not tolerate fraud or intentional misconduct in any manner in a proceeding before the Office or in connection with accessing USPTO IT systems, and that all individuals associated with a proceeding before the USPTO have a duty of candor and good faith. The duty extends not only to the personal actions of these individuals, but also to the actions these individuals take with any automated tools, including AI tools. It notes that the use of AI tools on USPTO websites for the unauthorized access, actions, use, modification, or disclosure of the data contained therein or in transit to/from USPTO web systems constitutes a violation of the Computer Fraud and Abuse Act, and that the USPTO monitors network traffic to identify such behaviors. It warns that violators are subject to criminal, civil, and/or administrative action and penalties.

Some of the issues addressed in the updated guidance include the following:

  • The USPTO’s rules and policies described in this guidance – including those meant to ensure full, fair and accurate disclosure to the USPTO and to protect clients of USPTO practitioners – apply broadly, regardless of any AI assistance in preparing submissions to the USPTO.
  • There is no general prohibition against using AI-based tools in drafting documents for submission to the USPTO. Nor is there a general obligation to disclose to the USPTO the use of such tools (unless specifically requested by the USPTO), subject to adherence to applicable rules including those below. For example, if the use of an AI tool is material to patentability as defined in 37 CFR 1.56(b), the use of such AI tool must be disclosed to the USPTO. As discussed in more detail in the Inventorship Guidance for AI-Assisted Inventions, material information could include evidence that a named inventor did not significantly contribute to the invention because the person’s purported contributions were made by an AI system. This could occur where an AI system assists in the drafting of the patent application and introduces alternative embodiments which the inventor(s) did not conceive but which the applicant seeks to patent. If there is a question as to whether there was at least one named inventor who significantly contributed to a claimed invention developed with the assistance of AI, information regarding the interaction with the AI system (e.g., the inputs/outputs of the AI system) could be material and, if so, should be submitted to the USPTO.
  • Duty of Candor and Good Faith – Everyone associated with any proceeding at the USPTO has a duty of candor and good faith in dealing with the Office. The duty of candor and good faith is broader than just the duty to disclose information material to patentability. The duty of candor and good faith applies to positions taken by applicants or parties involving the claimed subject matter. It also applies to errors that occur during the proceeding. The duty underlies the use of AI systems in matters before the USPTO.
  • Signature Requirement and Corresponding Certifications – Most correspondence filed in the USPTO must bear a person’s signature applied by the person signing. This act cannot be delegated to another person or entity. Thus, an AI tool cannot be used to apply a person’s signature. This requirement ensures that natural persons are overseeing submissions to the USPTO and ensuring they are compliant with USPTO rules and policies. It also ensures that documents drafted with the assistance of AI systems have been reviewed by a person and that the person believes everything in the document is true and not submitted for an improper purpose. The party or parties should also perform an inquiry reasonable under the circumstances confirming that all facts presented in the paper have or are likely to have evidentiary support and confirming the accuracy of all citations to case law and other references. This review must also ensure that all arguments and legal contentions are warranted by existing law, a nonfrivolous argument for the extension of existing law, or the establishment of new law. Simply relying on the accuracy of an AI tool, which is susceptible to hallucinations, is not a reasonable inquiry.
  • Confidentiality of Information – Practitioners must take steps to maintain the confidentiality of their clients’ information, including reasonable steps to prevent inadvertent and unauthorized disclosure. The guidance states that use of AI systems may result in the inadvertent disclosure of client-sensitive or confidential information to third parties through the owners of these systems. Those using AI systems in practicing before the USPTO should be cognizant of the risks and take steps to ensure confidential information is not divulged. The guidance further explains that issues can arise when aspects of an invention are input into AI systems to perform prior art searches or to generate drafts of specifications, claims, or responses to Office actions. AI systems may retain the information that is entered by users. This information can be used in a variety of ways by the owner of the AI system, including using the data to further train its AI models or providing the data to third parties in breach of practitioners’ confidentiality obligations to their clients. If confidential information is used to train AI, that confidential information or some part of it may filter into outputs from the AI system provided to others. When practitioners rely on the services of a third party to develop a proprietary AI tool, store client data on third-party storage, or purchase a commercially available AI tool, practitioners must be especially vigilant to ensure that confidentiality of client data is maintained. This requires appropriate vendor diligence. Practitioners who supervise the work of other practitioners and non-practitioner assistants must ensure that the practitioners and staff under their supervision comply with the USPTO Rules of Professional Conduct when relying on AI tools and/or AI-related third-party services.
  • Foreign Filing Licenses and Export Regulations – Patent practitioners must comply with foreign filing license requirements prior to filing any patent application in a foreign country or exporting technical data for purposes related to the preparation, filing or possible filing, and prosecution of a foreign application. However, a foreign filing license from the USPTO does not authorize the exporting of subject matter abroad for the preparation of patent applications to be filed in the United States. Rather, the export of subject matter abroad pursuant to a license from the USPTO, such as a foreign filing license, is limited to purposes related to the filing of foreign patent applications. Practitioners must ensure data is not improperly exported when using AI systems. Specifically, practitioners must be mindful of the possibility that AI tools may utilize servers located outside the United States, raising the likelihood that any data entered into such tools may be exported outside of the United States, potentially in violation of existing export administration and national security regulations or secrecy orders. Even if the servers are located within the United States, certain activities related to the use of AI systems hosted by these servers by non-U.S. persons may be deemed an export subject to these regulations. Moreover, AI system developers or maintainers may suffer data breaches, further subjecting user data to disclosure risks. Therefore, before using these AI tools, it is imperative for practitioners to understand an AI tool’s terms of use, privacy policies, and cybersecurity practices.
  • USPTO Electronic Systems’ Policies – Access to USPTO electronic systems is subject to terms and conditions. Exceeding authorized access or violating those terms and conditions in connection with accessing USPTO electronic systems may result in criminal or civil liability under federal law (including the Computer Fraud and Abuse Act, 18 U.S.C. 1030) and/or state law. In addition, such conduct may result in penalties or sanctions administered by the USPTO.
  • Users of the USPTO’s websites may be required to create and use a dedicated account, complete verification forms, and accept applicable subscriber agreements. The account is exclusive to an individual and may not be shared with other users. Even support staff who are sponsored by one or more practitioners must create and use their own individual accounts. The Terms of Use prohibit the unauthorized access, actions, use, modification, or disclosure of the data contained in the USPTO system or in transit to/from the system. Accounts are limited to natural persons and cannot be obtained by non-natural persons. Therefore, AI systems may not obtain an account. Further, practitioners may not sponsor AI tools as support staff to obtain an account.
  • While AI tools have the capability to access and interact with USPTO IT systems, attention should be paid to ensure the use of these tools does not run afoul of federal and state law, as well as USPTO regulations and policies. An important policy is the requirement that users must not file documents or access information for which they do not have authorization. An AI system or tool is not considered a “user” for filing and/or accessing documents via the USPTO’s electronic filing systems and, as such, cannot obtain an account. If a person is using a computer tool, including an AI system, to assist in submitting documentation to the USPTO, that person is responsible for ensuring that the tool does not exceed authorized access, including by submitting or accessing papers in an application that the person is not authorized to access.
  • Duties owed to clients – The USPTO Rules of Professional Conduct require that a practitioner provide competent and diligent representation to a client. Practitioners must keep abreast of the benefits and risks associated with any technology (including AI) used to handle client matters before the USPTO. When using AI tools, practitioners must ensure they are not violating the duties owed to clients. For example, practitioners must have the requisite legal, scientific, and technical knowledge to reasonably represent their clients.