
Businesses using AI for hiring, content creation must rely on murky guidelines

Ashley covers startups, technology and manufacturing for BizTimes. She was previously the managing editor of the News Graphic and Washington County Daily News. In past reporting roles, including covering education at The Waukesha Freeman, she received several WNA awards. She is a UWM graduate. In her free time, Ashley enjoys watching independent films, tackling a new recipe in the kitchen and reading a good book.

It’s human nature to want to make everyday tasks easier. At face value, implementing AI into the workplace could seem like a pretty good way to do just that. But, like with any emerging technology, there are several possible unintended consequences to be wary of.


Larger companies that have big budgets are already using AI regularly in the recruitment and hiring process as a tool to boost efficiency and save money, according to Erik Eisenmann, a labor and employment attorney at Husch Blackwell.

He’s seen AI used for everything from a simple chatbot to a full-blown avatar that conducts an entire interview.

“The way we are seeing it most frequently used is businesses using AI algorithms to conduct screenings of applications,” said Eisenmann.

If a company has identified a set of employees who have joined the organization and been successful, it can use their resumes as a template for screening incoming applicants. In theory, companies are teaching these AI algorithms what a successful job candidate looks like. The pitfall, Eisenmann said, is that this training data could encode a preference for certain groups of people.

“The concern is what rules we’re establishing in this system,” said Eisenmann. “If we’re teaching AI to select candidates who have been historically successful within an organization, and we look at the demographics of those individuals, it happens to be mostly white men. Are we inadvertently programming in bias?”

For example, if a company were to program an AI system to screen for certain keywords or interests based on previously successful candidates, that system might then screen out resumes whose language is more common among women or other minority groups.
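As a toy illustration of that pitfall (not a real screening product; the keywords and resume snippets are hypothetical), a naive keyword filter derived from past hires can quietly reject candidates whose backgrounds simply differ from those hires:

```python
# Hypothetical sketch: a keyword screen "trained" on past successful hires.
# If those hires shared certain interests, the filter inherits that pattern.

# Keywords drawn from resumes of previously successful hires (hypothetical).
learned_keywords = {"football", "fraternity", "golf"}

def passes_screen(resume_text):
    """Pass any resume containing at least one learned keyword."""
    words = set(resume_text.lower().split())
    return bool(words & learned_keywords)

# A qualified candidate whose interests differ from past hires is screened out.
print(passes_screen("captain of varsity football team"))   # True
print(passes_screen("president of women in engineering"))  # False
```

The filter never looks at demographics directly, which is exactly why the bias it inherits from its training examples can go unnoticed.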

“The AI is only as good as the people teaching it and programming it,” said Eisenmann.

So far, Wisconsin has no law governing companies' use of AI in the recruitment and hiring process. New York’s Local Law 144, which took effect on Jan. 1, is one of the first in the country to require companies using AI systems to take additional steps to ensure those systems are not building in bias.

The law requires companies to conduct a bias audit that essentially reverse-engineers the AI's selection process: looking at the final selected candidates and cross-referencing those results against the entire applicant pool to check for any inadvertent bias.

These audits must be conducted by an independent auditor, which the law defines as a party outside the company.

“I think that’s where you’re going to see a lot of pushback because the requirement that you actually engage an outside party is going to make that process much more expensive,” said Eisenmann.

Illinois has an Artificial Intelligence Video Interview Act that requires disclosure to a job candidate if an interview will be conducted via AI as well as consent from that person. There are also reporting requirements for the company using AI.

It remains to be seen whether regulation like New York’s law will be passed in Wisconsin. There is, however, some general guidance from the federal Equal Employment Opportunity Commission. In 2021, the EEOC launched an initiative to ensure all software and emerging technologies used in hiring decisions comply with federal civil rights laws.

Hazards in content creation

As innocuous as it may seem, using AI to generate text or images could be a minefield for businesses. With the popularity of AI systems including ChatGPT and DALL-E, anyone can create content immediately on their computer or phone.


Joseph Miotke, an intellectual property attorney and co-chair of the Intellectual Property Group at DeWitt, has seen generative AI have the greatest impact in the media and entertainment, finance, health care, retail and manufacturing industries. He said he hasn’t yet seen many laws specifically addressing generative AI.

“What’s interesting is how powerful the generative AI tools can be as you learn to use them,” said Miotke. “The more I use it, I think how interesting this is – interesting and scary.”

He said businesses looking to mitigate the risks of using AI should look at copyright protections as a first step. The U.S. Copyright Office declared more than a year ago that it will not issue copyright registrations for AI-generated content. Things get trickier when AI platforms pull from content that is human-made.

Miotke pointed to a recent Supreme Court decision that ruled a piece of art created by Andy Warhol, based on an original photograph of the musician Prince, infringed the photographer’s copyright. The court found that Warhol’s use of the photograph was not transformative enough to qualify as fair use.

“When I step back, what that means is if you’re the owner of the original copyrighted work, and then these AI platforms start to make modifications, that can trigger potential copyright infringement liability. The challenge here is the law is still somewhat unsettled in this area,” said Miotke.

He explained that it’s difficult to get concrete guidance on when content crosses the threshold into transformative use. Copyright law says only that a transformative use builds on a copyrighted work in a manner different from the original.

“It really comes down to a matter of degree. If the AI just makes some minor tweaks to that underlying photograph, that’s probably still copyright infringement. If the AI drastically changes an image, then you get into an area called transformative use,” said Miotke.

When using a generative AI tool, a business should try to have the platform identify the source material it is drawing on, though that isn’t always possible. Doing so can help surface potential copyright issues before they arise.

Another good rule of thumb when considering an AI-generated image is to stop and ask whether the image resembles any existing work.

One thing to keep in mind is that because AI-generated content cannot be copyrighted, a business can’t stop someone else from using it. Content that mixes AI and human work should be clearly separated, since only the human-created portions can be protected.

Being forthcoming is the best strategy for a company looking to use generative AI. Miotke said businesses should always get consent from clients before using generative AI in any work and make sure those clients are aware of the risks of doing so. On the employee side, it’s also worth considering a corporate policy stating that employees cannot use AI without prior permission.

“You’ve got to fact check and verify, just to make sure you’re not using AI in a harmful way,” he said.
