A recent survey by the American Society of Association Executives (ASAE) and Avenue M found that only 6% of associations currently have an AI usage policy for staff, while 48% have plans to develop a policy.

In the realm of artificial intelligence (AI), there’s a common belief that once the genie is out of the bottle, it cannot be controlled. But that belief doesn’t mean AI can’t be managed effectively.

As generative AI tools (e.g., ChatGPT, Bard, and Bing Chat) rapidly become an integral part of business operations, many believe AI usage policies will be crucial. They can serve as essential guardrails, guiding associations on how to manage and harness the power of AI effectively.

While generative AI usage for the masses is still in its infancy, a small number of associations said they currently have AI usage policies for staff, and many associations have one in the works.

According to a mid-June text poll conducted by ASAE and Avenue M, 48 percent of associations said that while they don’t currently have an AI usage policy, they are developing one; 40 percent said they don’t have one and are not planning to create one; and only 6 percent said they have a policy in place.

“AI Usage Policies are incredibly important for two reasons,” said Blue Cypress Chairman Amith Nagarajan, author of Ascend: Unlocking the Power of AI for Associations. “First, we want to encourage people to use AI and learn about it. Without a policy, you leave people in an ambiguous territory where they don’t know if the organization encourages it or not. Second, we need to protect sensitive data from the association and ensure that consumer-grade AI like ChatGPT isn’t fed a bunch of confidential material without appropriate safeguards.”

When panel members were asked what their major concerns are around AI use by staff, several panelists who don’t have a plan but are developing one weighed in. An HR executive pointed to concerns about “security” and “the unknown.” Another HR leader said one worry is “plagiarism.”

One association executive noted that staff should always remember to double-check the tool’s output because “ChatGPT is great, but it can make mistakes.” One HR executive said, “It’s a developing technology that we don’t know enough about.”

Other AI Concerns Our Panelists Shared:

  • Data privacy
  • Ethical use
  • Verifying information accuracy and sources
  • Using AI to create legal documents
  • Security of employee or applicant information
  • Lack of regulatory framework for acceptable use
  • Copyright infringement
  • Discrimination
  • Liability

Click HERE to participate in future polls.

“There are of course solutions for these concerns, but some very basic training can help ensure that staff are compelled to learn and experiment while keeping association data safe,” Nagarajan said.

AI usage policies aren’t going to put the genie back in the bottle, but they will encourage responsible adoption, protect sensitive data, and address concerns, shaping a path towards harnessing AI’s transformative potential.

For more insights on putting an AI usage plan for staff in place, read Avenue M’s quick summaries of the following resources and click the links below.

Make Sure Generative AI Policies Cover Intellectual Property
Experts recommend protecting intellectual property and trade secrets when using generative AI tools like ChatGPT. They suggest implementing policies and taking precautions to prevent disclosure.

To address bias in AI, they suggest including legal and HR experts in innovation teams, maintaining human oversight in decision-making, and obtaining informed consent when necessary.

It’s also important to avoid sharing confidential information when interacting with these tools and ensure employees are aware of what is confidential. A chief AI officer and effective governance policies can ensure legal compliance, reduce errors, and tackle AI biases.

Five Key Legal Issues to Consider When It Comes to Generative AI
Associations diving into AI must tread carefully and address legal considerations. Transparency with members, avoiding biases, protecting intellectual property, and minimizing liability risks are vital.

Clear policies for staff, officers, and committee members, along with careful monitoring, ensure smooth sailing. The good news is, AI can help associations improve operations, serve members, and advance their missions with proper planning.

The Importance of AI Policies: Laying the Foundation for The AI Revolution
Creating an AI policy doesn’t have to be overly complex. It will help you avoid costly mistakes and adapt as you use AI more. Here are some topline suggestions.

  • Involve company leadership in developing and endorsing AI oversight policies.
  • Determine disclosure methods for AI use, consulting with legal experts if needed.
  • Address data ownership and the company’s rights over content created using AI.
  • Establish ethics guidelines to ensure ethical use of AI and address biases.

Exciting AI innovations are expected in the coming decades, but it’s important to plan ahead and avoid costly mistakes. Start with simple guidelines for an AI policy and adapt as you go.

Want to be the first to be notified about articles like this? You can learn more about Avenue M’s texting poll service HERE.

Contributors: Sheri Jacobs, FASAE, CAE & Lisa Boylan

(Image: Adobe Stock)