As artificial intelligence (AI), such as ChatGPT, continues to evolve and become more commonplace, many trade and professional associations and nonprofits are turning to AI technology to enhance their operations and decision-making processes and to benefit their members and clients. However, as with any emerging technology, the use of AI raises several important legal issues that must be carefully considered and worked through.
One of the primary legal issues associated with the use of AI is data privacy. AI systems rely on vast amounts of data to train and improve their algorithms, and nonprofit leaders must ensure that the data they collect is used in accordance with applicable federal, state, and international privacy laws and regulations. You must be transparent with members and clients about how their data will be collected, used, and protected, and must obtain the necessary member consents to use and share sensitive data.
Intellectual property is another key legal issue that leaders must consider when using AI. AI systems can generate new works of authorship, such as software programs, artistic works, and articles and white papers. You must ensure that you have the necessary rights and licenses to use and distribute these works, and be transparent about who — or what — created them.
Take steps to ensure that AI-generated content is not, for instance, registered with the Copyright Office as your organization’s own unless it has been sufficiently modified to become a product of human creation and an original work of authorship of the association.
You must also be mindful of any third-party intellectual property rights that might be implicated by your use of AI, such as copyrights or patents owned by AI vendors, developers, or others, and ensure that you do not infringe any third-party copyright, patent, or trademark rights.
Finally, be mindful not to permit the inputting into an AI system of any confidential or otherwise-protected content (such as trade secrets or information subject to a nondisclosure obligation or the attorney-client privilege), as such content may no longer be protected and confidential once entered.
Another legal issue to consider is discrimination. AI systems can inadvertently perpetuate bias and discrimination, particularly if they are trained on data that reflects historic biases or inequalities.
You must ensure that AI systems do not discriminate on the basis of race, ethnicity, national origin, gender, age, disability, or other legally protected characteristics, and must take steps to identify and address any biases that may be present in your algorithms.
For instance, the use by large employers of AI systems to help screen applicant resumes and even analyze recorded job interviews is rapidly growing. If AI penalizes candidates because it cannot understand a person's accent or speech impediment, that could potentially lead to illegal employment discrimination. While this will only become a legal issue in certain contexts (such as the workplace), the use of AI has the potential to create discriminatory effects in other association settings (such as membership and volunteer leadership) and needs to be carefully addressed.
You must also consider the potential tort liability issues that might arise from your use of AI. If an AI system produces inaccurate, negligent, or biased results that harm members or other end users, your organization could potentially be held liable for any resulting damages. You must therefore ensure that your AI systems are reliable and accurate, and that all resulting work product (such as industry or professional standards set by an association) is carefully vetted for accuracy, veracity, completeness, and efficacy.
You need to ensure that your organization has appropriate insurance coverage in place to protect against potential liability claims in all of these areas of legal risk. Note that traditional nonprofit directors and officers (D&O) liability and commercial general liability insurance policies may be — and likely are — insufficient to fully protect organizations in all of these areas. You should also explore acquiring an errors and omissions liability/media liability insurance policy to fill those coverage gaps.
In conclusion, while the use of AI presents numerous opportunities and benefits, there are a number of legal issues that need to be carefully considered before going too far down the AI path. Among other things, you must ensure that you are transparent with your members about the use of their data, obtain necessary intellectual property rights and licenses while avoiding infringement of others' rights, address any potential biases in your algorithms, protect your organization against potential tort liability claims, and secure appropriate insurance coverage to protect against these risks.
As the work of organizations involves both staff and member leaders, adopting and distributing appropriate policies governing AI usage by staff, officers, directors, and committee members is critical, as is policing compliance with such policies. Similar clauses should be built into employee handbooks and contracts with staff, contractors, and members (including agreements with volunteer speakers, authors, and board and committee members).
With careful planning and attention to these issues, associations can use ever-developing AI technology to enhance their operations, programs, and activities, better serve their members, and further advance their missions.
Jeffrey S. Tenenbaum, Esq., is managing partner of the Tenenbaum Law Group PLLC, in Washington, D.C. His email is [email protected]