Artificial Intelligence is Here. Now What?

Whether we know the details of where or how, Artificial Intelligence (AI) has been part of our lives for many years.

However, the acceleration of AI development, and most notably, its entrance into the mainstream through public-facing tools such as ChatGPT, has placed this technology on the doorstep of nearly every consumer and business.

As with many new technologies, AI offers the promise of great advances balanced out by varying degrees of consequences – both seen and unseen.

Sam Altman, the chief executive of OpenAI, the company whose platform powers ChatGPT, testified to a Senate committee that “if this technology goes wrong, it can go quite wrong.” Yikes.

It should be of little surprise in the context of this and similar commentary that both state and federal legislators are now scrambling to draft regulations governing the use of AI.


The Biden Administration proposed a Blueprint for an AI Bill of Rights in October 2022, and Senate Majority Leader Chuck Schumer is leading the effort to craft bipartisan AI legislation at the Congressional level.

Not to be outdone, legislators in at least 17 states have introduced AI-related legislation of their own.

Now, this is the part where human resource executives and others responsible for screening candidates and hiring employees need to pay close attention.

The Harvard Business Review notes that one of the areas being scrutinized by these government entities is whether AI could – intentionally or not – “violate existing privacy and discrimination laws by crafting responses to user requests based on biased datasets.”

The answer to that question, and how the government decides to mitigate any potential risks, will shape how employers recruit, hire, and manage employees for years to come.

While we wait for the legislation to be hashed out (don’t expect anything to happen quickly), the U.S. Equal Employment Opportunity Commission (EEOC) has already published some guidance on this topic.

In the article “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” the EEOC details the potential liabilities of employers, complete with a list of FAQs.

Despite the less-than-catchy title, hiring executives at all levels should make it a point to familiarize themselves with the contents of this document.

One of the things the EEOC makes clear is that employers may be responsible for discriminatory selection procedures based on information derived from AI or algorithmic decision-making tools, even if those tools are developed and administered by a third-party vendor. 

This possibility opens up a Pandora’s Box for any employer that uses automated software tools to facilitate processes related to areas such as recruitment, hiring, retention, and performance monitoring, which is likely most businesses. 

So, now what?

Preparing for the unpredictable sounds like an impossible task, but there are steps businesses can take today to help position themselves for an unknown future.

  1. Start Asking Questions. Now. If you use any automated decision-making software, or applicant tracking systems during the hiring process, ask your internal team or third-party vendor if these tools employ AI, and if so, how.

    The odds are pretty good that they do, and you need to know if these applications are putting your company at risk of discriminatory practices.

  2. Sign One-Year Contracts. The software you will be using in five years is going to be drastically different from what you are using today.

    Avoid getting locked into long-term contracts in the current state of technological and political flux. You may pay more for a one-year contract in the short run, but it will be worth it to have flexibility in the future.

  3. Get on the Same Page. Every business, even a small enterprise, struggles to integrate its various departments. AI is one area where you can’t afford a siloed approach.

    Identify your AI champion and ensure that person coordinates with HR, IT, the Executive Team, Legal, etc. about how this technology is being used across the company (and how it may be used in the future). 

  4. Humans Still Matter. AI does, and will continue to, help employers better manage their time and resources, but it’s not infallible. Left unchecked, these tools can develop biases you may not even be aware of.

    Put mechanisms in place requiring a measure of human oversight of these processes, including the checking of datasets for any indications of bias.

  5. Keep Tabs on Legislation. The various legislative initiatives being pursued at both the state and federal levels will eventually determine the regulations for how and where AI can be used, as well as the consequences for stepping outside the lines.

    Make sure to keep pace with the latest developments and be ready to adjust accordingly.

  6. Create Contingency Plans. It’s impossible to know at this point what the legislation might say, but the possibility exists that AI tools could be prohibited for use in certain aspects of the hiring process or temporarily halted altogether until legislators can get a better handle on AI’s impact.

    You should have plans in place if either scenario comes to pass.
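As a concrete illustration of the human oversight suggested in step 4, the EEOC's Title VII guidance discusses the long-standing "four-fifths rule" as a general rule of thumb for spotting possible adverse impact: if one group's selection rate is less than 80% of the highest group's rate, the tool deserves a closer look. A minimal sketch, using hypothetical group names and numbers, might look like this:

```python
# Sketch of a four-fifths rule check on an automated screening tool's results.
# Group names and counts below are hypothetical examples, not real data.

def four_fifths_ratios(rates):
    """Compare each group's selection rate to the highest group's rate.

    A ratio below 0.8 (four-fifths) is a common rule-of-thumb indicator
    that a selection procedure may have an adverse impact on that group.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical results: how many applicants from each group the tool advanced.
selection_rates = {
    "group_a": 48 / 100,  # 48 of 100 applicants selected -> 0.48
    "group_b": 30 / 100,  # 30 of 100 applicants selected -> 0.30
}

ratios = four_fifths_ratios(selection_rates)
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
print(flagged)  # group_b: 0.30 / 0.48 = 0.625, below the 0.8 threshold
```

The four-fifths rule is only a screening heuristic, not a legal safe harbor — the EEOC's guidance makes clear that smaller disparities can still be unlawful, which is why a human should review these numbers rather than automate the conclusion as well.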

The fact remains that technology evolves much faster than the law, but once the law catches up, the changes can take effect rather quickly.

If you use this time to begin aligning the key players within your organization and educating yourself about how your business is using AI today, you will be well-prepared to handle whatever the future has in store – a future that not even AI itself can predict.
