
Pinning Jello® to a Wall: Regulating AI – by John Sumser and Heather Bussing

The release of the European Union’s proposed regulation for AI started the conversation in earnest. AI (whatever that is) is creeping into every marketing pitch and every bit of commercial software. The consequences of badly wrought intelligent tools are not theoretical. The damage AI can inflict ranges from badly targeted advertising to life-damaging decisions about credit, employment, advancement, incarceration, access to healthcare, and human rights.

Meanwhile, the Silicon Valley business ethic, exemplified by Uber, is to knowingly step outside the law, wait for enforcement to arrive, and drag it out in court. You can see it at work in Tesla’s marketing claims of self-driving capability, claims contradicted by the company’s own license agreements.

The strategy is rooted in the certain knowledge that technology will always be faster than legislation and enforcement. The method is designed to capture enough market share to pay the fines that emerge. The fines are a cost of doing business and nothing more.

So yes, AI could be dangerous. Yes, its financiers and creators care more about money than about the consequences of their technology. Yes, AI poses serious threats to privacy, liberty, and opportunity. And yes, it may encourage worse treatment of our workforce.

No, the public doesn’t understand it. No, it isn’t going to replace humans anytime soon. The biggest actual risks come from overestimating the capability of the technology and from the assumption that technology gives us more accurate and less biased information.

In a Tesla, the primary risk is allowing the car to drive itself. In HR and HRTech, the biggest risk is not arguing with the recommendations supplied by the machine. That’s like letting the car drive itself.

It may not be AI.

The first question, well before whether AI regulations would be enforceable, is how to tell whether something is AI at all.

In technical and academic communities, AI has a precise meaning: a collection of tools and techniques that recognize patterns in data. In that world, the term and its subordinate technologies are well defined.

In the regular world, AI is a marketing term that may or may not have anything to do with the academic definition. The term AI finds its way into descriptions of things that have nothing to do with AI. Chatbots, which are often no more than automated decision trees, fall into this category. Relatively simple uses of statistics, like regression analysis, are also often called AI.
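To make the chatbot point concrete, here is a minimal sketch of what many "AI-powered" chatbots actually are under the hood: a hand-written decision tree with no learning or pattern recognition anywhere. All of the prompts and state names here are hypothetical.

```python
# A "chatbot" that is really a scripted decision tree.
# There is no model, no training data, and no pattern recognition --
# only hand-written branches that a human authored in advance.

SCRIPT = {
    "start": {
        "prompt": "Do you have a question about (1) benefits or (2) payroll?",
        "1": "benefits",
        "2": "payroll",
    },
    "benefits": {
        "prompt": "For benefits enrollment, contact HR at ext. 100.",
    },
    "payroll": {
        "prompt": "For payroll issues, contact ext. 200.",
    },
}

def respond(state: str, user_input: str) -> tuple[str, str]:
    """Return (next_state, reply) by looking up the scripted branch."""
    node = SCRIPT[state]
    next_state = node.get(user_input, state)  # unrecognized input: stay put
    return next_state, SCRIPT[next_state]["prompt"]
```

Every response is a dictionary lookup. Calling such a system "AI" is marketing, not a description of the technology.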

But that doesn’t mean it’s not potentially harmful

The rhythm of technical development has an annual cadence. Trade shows and industry conferences drive the cycle as tech companies promote their latest offerings.

The law moves more slowly. Given the tech industry’s proclivity for testing the legal boundaries, it’s likely AI will always be a couple of steps ahead of the law.

The regulatory quandary is that trying to regulate technology is like pinning Jello® to the wall. And still, AI, in all its guises, is a high-risk proposition. This is particularly true when predictive tools are applied to humans.

The EU’s proposed regulation identifies HR and recruiting as areas that are important to address. This is because AI works without any information beyond its own system. It can’t tell us whether the patterns and matches it identifies make sense and often we have no idea how AI systems learn. This means we should use them with caution and skepticism. Otherwise, deploying AI is human experimentation. We won’t know what happens until it’s too late.

AI concerns

In the academic / laboratory worlds, AI is used to find recurring correlations and patterns. It functions best when there is a clear rule set guiding the operation and the people building the systems know what the rules are. Yet, the real world has few systems that contain a finite and easily formulated set of principles and exceptions. This means things will be connected that may not have any real connection, patterns will be identified that may not be relevant or applicable, and recommendations will be made without adequate context or understanding of risk.

For example, if there is a 30% chance of rain, I am comfortable going out without an umbrella. But if there is a 30% chance the plane is going down, I am not getting on it. Context and assessment of relevancy, accuracy, and risk are essential to using AI, but AI can’t give us any of those things.
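The umbrella-versus-airplane point can be reduced to a toy expected-cost calculation: the same probability justifies opposite decisions once severity is taken into account. The cost figures below are made up for illustration.

```python
# Toy expected-cost arithmetic: identical probabilities, opposite decisions.
# The "cost" numbers are invented placeholders, not real estimates.

def expected_cost(p_bad: float, cost_bad: float) -> float:
    """Expected cost = probability of the bad outcome times its severity."""
    return p_bad * cost_bad

rain = expected_cost(0.30, 20)            # a wet walk: minor inconvenience
crash = expected_cost(0.30, 10_000_000)   # a crash: catastrophic loss

# Same 30% probability, wildly different expected costs. The decision
# depends on severity and context, which the probability alone -- and an
# AI tool reporting only a score -- does not encode.
```

This is exactly the judgment an AI system cannot supply: the probability is its output, but the stakes are ours.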

Some specific problems for AI in HR technology are:

  • Malfunctioning tools. AI malfunctions can come from bad design, bad data, or ill-conceived algorithms and models. Systems also malfunction because the data they encounter ‘in the field’ is different from the data used to train them.
  • Badly imagined systems. Many AI systems are based on the data that’s available rather than on a real problem to be solved. Most problems have nuance, complications, and competing interests that aren’t or can’t be addressed by AI systems.
  • Tools built on pseudoscience. The extraction of emotional data from facial recognition systems is a classic example. Many HR-specific offerings are built on less-than-scientific foundations.
  • Tools that categorize people. There is a strong move to use AI as a collection and identification tool in hiring. These tools tend to amplify both similarities and differences based on labels and categories that will never tell you whether someone will be good at a job, and they are often based on biased data. Here, the problem being solved is not finding the best candidate; it’s narrowing down the near-infinite list of potential candidates that can be sourced through the internet.
  • Tools that force dynamic systems into the straitjackets of rigid categorization. The classic example involves generalizing skills or roles across a number of companies. The result is a narrowing of capability and the dismissal of potential and skills that may look different but are relevant and transfer across categories.
  • Failure to maintain and monitor the AI tool. AI systems drift. Continuous observation and analysis are required to maintain a consistent level of quality in the output.
  • Confusing what is measured with what should be measured. There are emerging employee monitoring tools that confuse utilization of a specific company’s software, or simply being in the same place, with actual productivity.
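The drift and train-versus-field problems in the list above can be sketched with a minimal monitoring check: compare live input data to the training baseline and raise a flag when they diverge. The feature, data values, and threshold here are all hypothetical; real monitoring uses proper statistical tests and per-feature baselines.

```python
# A minimal drift check, sketching the monitoring described above.
# The threshold (k train-stdevs) and the sample data are invented
# for illustration; production systems use formal statistical tests.

from statistics import mean, stdev

def drift_alert(train: list[float], live: list[float], k: float = 3.0) -> bool:
    """Flag when the live data's mean strays more than k training
    standard deviations from the training baseline."""
    baseline, spread = mean(train), stdev(train)
    return abs(mean(live) - baseline) > k * spread

train_ages = [28, 31, 35, 40, 29, 33, 38, 36]  # data the model was trained on
live_ages = [55, 60, 58, 62, 57]               # data seen 'in the field'

if drift_alert(train_ages, live_ages):
    print("Input distribution has drifted; review the model before trusting it.")
```

A check like this does not fix the model; it only tells a human that the conditions the model was built for no longer hold, which is precisely when its recommendations stop deserving trust.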


The risks of using AI systems include:

  • Violation of privacy rights. There are a couple of issues here. Assembling profiles of people from a variety of sources without a clear mechanism for correction risks multiple kinds of privacy violations. In addition, it is possible for AI tools to unintentionally create Personally Identifiable Information (PII).
  • Mis-categorization. A primary task of AI tools is the creation of categories and the assignment of data to those categories. All tools have limits to their ability to process the world. For example, facial recognition systems regularly misgender people with darker skin and have confused people with apes. More subtle (but perhaps worse) damage happens when ill-conceived labels and tools are used to evaluate people.
  • Bias against specific protected classes. This is the thing that gets a lot of media attention. Bias is embedded in the data we use for training, the data we collect and process, and in the tool itself. Anything that limits access to opportunity harms both the organization and the individuals who are kept on the outside.
  • Reduction of corporate agility. AI narrows categories. Corporate agility comes from individual flexibility and expansive thinking. The two ideas are at odds with each other. For example, an AI system designed to assess and coach improvement for a specific job becomes outdated the moment the job changes.
  • Hardening of categories. This is a particular problem when the data from multiple companies are merged to gain ‘big data’ efficiencies. What works in the general case may or may not apply in the individual case. This damage is particularly hard to catch.
  • Unquestioning use of outputs from the AI tool. Like a Tesla, which really needs the driver to remain alert, AI systems have variable outputs that are easy to miss. The apparent certainty of an AI result often lulls decision makers into a sleep-like state in which they ignore what they know and let the machine make the decision.
  • Human failure to manage the tool. AI tools require regular monitoring and maintenance. When they fall outside of normal operating parameters, they make incorrect recommendations. This is particularly difficult to manage when the vendor is responsible for the maintenance of the algorithm or model.
  • Creation of labels that haunt the end user. Among the many bits of pseudoscience is the idea that a personality test can offer a precise answer to a job/personality match. When personality testing, skills assessment, or job description management gets too rigid, it forms barriers to individuals. Think of this as unintentional blacklisting.
  • Security vulnerabilities. The strength of AI is also its weakness. It forms its view of the world from the data it consumes. Hacking an AI is a simple matter of changing the data flow. Done maliciously, this can damage corporate strategy, narrow individual opportunity, or give unplanned access to other systems.
  • Overburdening employees with faulty measures. Measuring the wrong thing causes all sorts of stress and damage. In particular, confusing keystroke rhythm and volume with productivity causes employees to be mistakenly rated. When employees realize they are being rated on a particular metric, they will often shift to increasing their score rather than working effectively on the things that need doing.


Laws that can apply to AI:

For the most part, the damage that AI can do is covered by existing laws and regulations. Key regulations include:

  • GDPR (Europe) and CCPA (California) are useful beginnings to the governance of privacy rights.
  • CFAA (Computer Fraud and Abuse Act) covers intentional damage to computer systems as well as unauthorized access to PII.
  • FCRA (Fair Credit Reporting Act) covers background checks by third parties and can include systems that score and evaluate people.
  • FTC (Federal Trade Commission) Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms. It can also cover systems that claim to reduce or eliminate bias.
  • EEO (Equal Employment Opportunity) laws prohibit specific types of job discrimination in certain workplaces. The U.S. Department of Labor (DOL) has two agencies which deal with EEO monitoring and enforcement, the Civil Rights Center and the Office of Federal Contract Compliance Programs.


Considerations for new regulations

Before new regulations are crafted, stronger enforcement of these existing laws is the right place to begin. The US Federal Trade Commission (FTC) and the EU are in the process of developing extended guidelines for the deployment and consequences of AI. The challenge for both of these initiatives is to craft regulation that is sensitive to human damage while encouraging continued innovation.

Since its inception, software has been shielded from the intense consumer protection scrutiny afforded other complex products. Software Product Liability simply doesn’t exist. The result is a world awash in beta testing that turns human beings into guinea pigs. As AI makes software more intelligent, vendors will have to be held accountable for the damage done by their systems. A good way to address this is by expanding existing product liability laws.

Ethical review is essential

One of the major sources of AI malfunction involves the team that designs the tool. The combination of a narrow focus on clear goals and a limit to what the team can imagine creates an environment that is ripe for ill-conceived execution. To compensate for this very normal risk, companies like Twitter are outsourcing the discovery of the ethical questions at the heart of the AI design problem.

Twitter launched its first algorithmic bias bounty challenge on August 2, 2021, making subsets of its code available for critique. The best entries win cash prizes. It’s a form of ethics crowdsourcing.

For companies without Twitter’s reach, it is important to conduct an ethics review alongside other ‘architectural reviews’ before projects are given the go-ahead. The ethics committee should include at least one external representative, and companies are best served by an Ethics Board of Advisors composed of people who are not employees.

This makes the internal ethics question more like a standard security review and audit.

When I say companies, I mean both employers and the vendors who serve them. By paying close attention to the actual impact of the technology, it is possible to avoid some unintended consequences. In these early days, there are tons of unintended consequences.

Let’s solve the problem of preventing harm to humans.

The real problem in the regulation of AI is that technology moves faster than legal systems. Any law that is too specific will cause more problems than it solves. Since business typically views legal compliance as a business decision rather than a moral obligation, establishing specific process boundaries would have the opposite of its intended effect. By the time specific techniques can be banned or regulated, the technology will have routed around the new requirements.

While it is logical to focus on law and the details of how the technology works, the reality is that technology can rarely solve uniquely human questions: whom should I promote and hire, should this person get a loan, is this person doing well in my organization?

As we move forward, it’s essential to keep the real issues at the forefront: What could possibly go wrong? Is this fair? How could this harm someone? Otherwise, AI will create new problems that are even harder to solve, no matter how it is regulated.


About the Authors

Heather Bussing is an employment attorney and writer who explores the intersections of people, technology, and work. She writes regularly for the HRExaminer and has authored blog posts, white papers, and reports for HR Technology companies. She has been interviewed and quoted in the New York Times, Wall Street Journal, CNN, Mashable, Business Insider, and NPR.
She continues to provide preventative and strategic advice to employers and also teaches Legal Writing and Internet Law.


John Sumser is the founder and principal analyst at HRExaminer.com. The company focuses on the bleeding edge of HRTech, AI, and the ethics associated with deploying AI in HR environments. He is a senior fellow at the Conference Board focusing on HRTech and Intelligent Tools. These days, his practice involves helping companies and HR departments establish good ethics programs intended to reduce the unintended consequences of AI in HRTech. Sumser is a long-term analyst of the overall industry and keeps the industry informed with reports and keynote talks around the world.
