Prof. Valery Yakubovich of ESSEC Business School and Profs. Peter Cappelli and Prasanna Tambe of the Wharton School of the University of Pennsylvania explore the potential of applying artificial intelligence to human resource management.
AI: A paradigm shift in people management by CoBS Editorialist Afifeh Fakori. Related research: Artificial intelligence in human resources management: Challenges and a path forward by Peter Cappelli, Prasanna Tambe, and Valery Yakubovich.
Incorporating science into people management
As artificial intelligence rapidly gains traction in industries like healthcare, companies are beginning to investigate its potential in people management. Yet today only 22% of firms say they have adopted analytics in human resources. This lag can largely be ascribed to the disconnect between the data science community, which understands analytics but not HR, and the HR community, which understands HR but not analytics.
We often ignore the fact that the myriad HR operations, from hiring and training to performance management, produce an incredible volume of data, often in the form of “digital exhaust” in the virtual space. Such digital exhaust, along with data from HR information systems, can be synthesized to build algorithms that improve the efficiency of future HR operations. For example, once we have a view of the applicant characteristics associated with great job performance, that data can be used to select candidates in the future. Some companies, like IBM, also use algorithms to recommend training to employees based on the experiences of similar employees or their own preferences, much as Netflix recommends content to its viewers. We have gleaned some interesting insights on how AI can be leveraged for human resource management from a 2018 workshop that brought data science faculty together with the heads of the workforce analytics function at 20 major US corporations.
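To make the "employees like you took these courses" idea concrete, here is a minimal, purely hypothetical sketch (plain Python, invented names and courses, not IBM's or Netflix's actual method): a neighbourhood-style recommender that suggests the courses most popular among employees with similar course histories.

```python
from collections import Counter

# Hypothetical training histories for three employees.
courses_taken = {
    "ana":   {"python", "statistics", "sql"},
    "ben":   {"python", "statistics", "negotiation"},
    "carla": {"negotiation", "leadership"},
}

def recommend(employee, k=1):
    """Score each course the employee has not taken by how many courses
    they share with the colleagues who did take it."""
    taken = courses_taken[employee]
    scores = Counter()
    for other, their_courses in courses_taken.items():
        if other == employee:
            continue
        overlap = len(taken & their_courses)  # similarity = shared courses
        for course in their_courses - taken:
            scores[course] += overlap
    return [course for course, _ in scores.most_common(k)]

print(recommend("ana"))  # ben is most similar to ana, so: ['negotiation']
```

Real systems use richer similarity measures (ratings, embeddings), but the neighbourhood logic is the same.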
But what if the machine says “hire more white men”?
The trade-off between efficiency and appropriateness can be tricky when it comes to AI in people management. An algorithm that looks at the attributes of good performers in the current workforce may well recommend hiring more white men. More often than not, this is because the algorithm reproduces the lack of demographic diversity in historical data. Imagine the social and legal repercussions of acting on this recommendation! The fact that the company hired fewer women or ethnically diverse people in the past does not imply these groups are poor performers. As such, it is important to build algorithms on more objective measures, such as who gets dismissed for poor performance.
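A tiny, hypothetical sketch of the mechanism (plain Python, invented numbers): a naive model trained on past hiring decisions learns nothing more than the demographic skew of those decisions, and so reproduces it.

```python
from collections import Counter

# Hypothetical historical records: (demographic_group, was_hired).
# The skew reflects past hiring practice, not actual ability.
history = ([("A", True)] * 80 + [("B", True)] * 20 +
           [("A", False)] * 20 + [("B", False)] * 80)

def train_hire_rate(history):
    """Learn P(hired | group) directly from past decisions."""
    hired = Counter(group for group, was_hired in history if was_hired)
    total = Counter(group for group, _ in history)
    return {group: hired[group] / total[group] for group in total}

model = train_hire_rate(history)
# The model favours group A purely because group A was hired more
# often in the past: {'A': 0.8, 'B': 0.2}
```

Real hiring algorithms condition on many applicant features, but if those features proxy for group membership, the same reproduction of historical bias occurs.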
The challenges facing the application of AI to human resource management can be classified into four categories:
- Complexity of HR Phenomena: It is not fair to label someone a “good employee” based on their performance appraisal scores alone. Most jobs are reasonably complex and interdependent with other jobs, making it difficult to disentangle individual performance from group performance.
- Small Data: Data sets in human resources tend to be quite small by the standards of data science. Even large corporations do not have enough employees or data points to achieve the predictive accuracy that machine learning delivers elsewhere.
- Ethical and Legal Constraints: As soon as a machine starts to make hiring and firing decisions, issues of procedural justice begin to surface. Employers must be able to explain and justify their practices in order to ensure that they are perceived as fair.
- Employee Reactions to AI Management: Some applicants today are beginning to reverse engineer algorithms; once they discover how a hiring algorithm functions, they can respond differently in interviews and render the algorithm worthless.
Getting the data right
Before launching a major digital people-management project, employers should determine what data is necessary and audit what is already available. For example, if a company wants to use a machine-learning algorithm in hiring, it needs historical data on job candidates who were not hired as much as it needs data on those who were.
Once they know what data is required, companies often invest a lot of money to collect aggregated data on sources of applicants, compensation, performance and so on. This is valuable because specialised vendors can combine data from hundreds of client companies to build their algorithms. But a client company often does not know how to integrate the data collected by different vendors, because the different systems are rarely compatible. In fact, most of the HR practitioners we spoke to reported that they still use Excel, rather than more purpose-built tools, to manage their data from different sources. Problems of data integration aside, there is also the question of how well an algorithm built on data from diverse sources will predict outcomes in a specific organisation.
In order to benefit from a digital transformation, companies could start off with a few important steps:
- Aggregate data from multiple perspectives over time: Data sharing between the HR department and other functions should be made a priority in the short run, and investment in data standardisation and platform integration a priority for the long run.
- Set objective performance measures: These measures should be complemented with more subjective evaluations to capture the less tangible outcomes, such as employee fit into company culture.
- Develop a causal model: Small data, coupled with managerial experience, should be used to identify causal predictors of the outcome of interest. Google, for instance, runs randomised experiments on HR questions, such as the optimal number of interviews per job candidate, to test its causal assumptions. This matters because AI analyses can be worthless in the absence of a solid causal model of the outcome of interest.
Dealing with the ‘bad’ algorithms
HireVue is a vendor that helps companies conduct video interviews and then uses candidates’ facial expressions to predict their future performance at the company. The bias lies in the fact that its prediction algorithms are trained on data from top performers at the client firm. By examining only those who are successful, HireVue essentially ignores the factors that distinguish the best performers from the rest.
There is also the fact that a company’s workforce usually comprises a majority population (e.g. white employees) and minority populations (e.g. African American employees). Algorithms that maximise predictive success for the population as a whole may sacrifice predictive success for the minority population. This problem can be resolved by generating separate algorithms for each population, but that risks conflicting with legal and ethical norms of equal treatment. In a nutshell, the trade-off between accuracy and fairness is not easy to manage when implementing machine learning.
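A tiny, purely hypothetical numerical sketch of that trade-off (plain Python, invented scores): suppose a screening score is systematically shifted down for the minority group, e.g. because the screening test was calibrated on the majority. A single cut-off tuned for overall accuracy then serves the majority well and the minority poorly, while per-group cut-offs fix accuracy but mean different rules for different groups.

```python
# Hypothetical records: (group, screening_score, was_actually_a_good_hire).
# "M" = majority group, "m" = minority group, whose scores are shifted down.
people = ([("M", 0.70, True)] * 40 + [("M", 0.45, False)] * 40 +
          [("m", 0.40, True)] * 10 + [("m", 0.10, False)] * 10)

def accuracy(threshold, group=None):
    """Fraction of (the group's) people correctly classified by
    the rule 'hire if score >= threshold'."""
    subset = [p for p in people if group is None or p[0] == group]
    correct = sum((score >= threshold) == good for _, score, good in subset)
    return correct / len(subset)

# The single threshold with the best overall accuracy (0.5) classifies
# every majority candidate correctly but rejects every qualified
# minority candidate:
assert accuracy(0.5) == 0.90
assert accuracy(0.5, "M") == 1.0
assert accuracy(0.5, "m") == 0.5

# Separate per-group thresholds make both groups perfectly classified...
assert accuracy(0.5, "M") == 1.0 and accuracy(0.3, "m") == 1.0
# ...but applying different rules to different groups is exactly what
# equal-treatment norms prohibit.
```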
Employee reactions to HRM transformation
Employers are increasingly delving into employees’ social media pages or using technology to gauge the tone of comments that employees post on internal chat boards, which helps them predict employee flight risk. Naturally, employees consider such practices infringements of their privacy. The fact that data can persist well beyond its intended use generates even more controversy. In response, computer scientists are actively working on ways to randomise data during collection so as to gather useful information about the population while learning nothing about any individual.
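One classic instantiation of that randomisation idea is randomised response (the article does not name a specific technique, so this is an illustrative sketch with simulated data): each respondent flips a coin and either answers truthfully or at random, so no single answer reveals anything about that person, yet the population rate can still be estimated.

```python
import random

def randomized_response(true_answer: bool, rng: random.Random) -> bool:
    """With probability 1/2 report the truth, otherwise report a
    uniformly random answer. Any individual report is deniable."""
    if rng.random() < 0.5:
        return true_answer
    return rng.random() < 0.5

def estimate_true_rate(reports) -> float:
    """Invert the noise: E[reported yes] = 0.5*p + 0.25,
    so p = 2 * (observed - 0.25)."""
    observed = sum(reports) / len(reports)
    return 2 * (observed - 0.25)

rng = random.Random(42)
true_rate = 0.30  # e.g. the fraction of employees considering leaving
answers = [rng.random() < true_rate for _ in range(100_000)]
reports = [randomized_response(a, rng) for a in answers]
print(round(estimate_true_rate(reports), 2))  # close to 0.30
```

The employer learns an accurate aggregate flight-risk rate without being able to attribute any "yes" to a specific employee.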
Employers who delegate formal decision-making to AI are also walking on thin ice. Imagine an employee who has a great workplace relationship, built on trust and empathy, with her manager. If her manager asks her to work an extra weekend shift, she is probably not going to complain. But if software generates that schedule, she is likely to be rather disgruntled, since there is no goodwill between her and the programme. The repercussions extend to good news as well: when an employee receives a bonus, it strengthens her relationship with her supervisor if the supervisor appears to have been involved in the decision. This does not happen if the bonus decision comes from an algorithm.
People Management: Need for a paradigm shift in mindset
Algorithms have been shown to perform better than human judgement when it comes to predicting repetitive outcomes. Unfortunately, though, the decision to apply machine learning to human resource management is riddled with trade-offs and controversies. Causal-discovery algorithms could help to minimise decisions’ dependence on factors outside an individual’s control, such as race or birthplace. The perception of fairness can also be enhanced through randomisation.
But none of this will bear fruit without a transformation in the mindset of HR leaders and line managers. They need to train themselves to make informed use of the insights generated by workforce analytics. A digital transformation will be possible only when employers understand and facilitate machine learning in a way that minimises the tension between efficiency and appropriateness, and ultimately contributes to the company’s bottom line.
- View Prof. Valery Yakubovich’s academic profile
- Link up with Prof. Valery Yakubovich via LinkedIn
- Visit the ESSEC Business School website
- Download the Global Voice magazine special Europe issue.
Learn more about the Council on Business & Society
- Website: www.council-business-society.org
- Twitter: @The_CoBS
- LinkedIn: the-council-on-business-&-society
The Council on Business & Society (The CoBS), visionary in its conception and purpose, was created in 2011 and is dedicated to promoting responsible leadership and tackling issues at the crossroads of business and society, including sustainability, diversity, ethical leadership and the role responsible business has to play in contributing to the common good.
Member schools are all “Triple Crown” accredited (AACSB, EQUIS and AMBA) and leaders in their respective countries.
- ESSEC Business School, France-Singapore-Morocco
- FGV-EAESP, Brazil
- School of Management Fudan University, China
- IE Business School, Spain
- Keio Business School, Japan
- Trinity Business School, Trinity College Dublin, Ireland
- Warwick Business School, United Kingdom.
The Council on Business & Society Global Alliance is an international alliance between seven of the world’s leading business schools and an organiser of Forums focusing on issues at the crossroads of business and society. The Council Community helps bring together business leaders, academics, policy-makers, students and journalists from around the world. Follow us on Twitter @The_CoBS. Visit the Council’s website for a host of information, learning opportunities, and free downloads.
Hi. I wanted to correct something here. It’s not correct that HireVue only trains its algorithms on top performers. Our process for building algorithms is outlined in a series of three blog posts on the website. Since 2013, HireVue has trained its algorithms on current performers of the same job role at every level of performance. We are always happy to offer briefings to educators on precisely how we build our algorithms and test for adverse impact/bias. There is industry guidance to follow. In 1978, the Uniform Guidelines on Employee Selection Process were jointly adopted by the U.S. Civil Service Commission, the Department of Labor, the Department of Justice, and the Equal Employment Opportunity Commission (EEOC) to provide a uniform set of principles to help govern the appropriate use of employee selection procedures and employment-related decisions. Our customer companies are highly regulated organizations. They demand bias-tested and -mitigated algorithms that actively help them promote diversity, and we demand them of ourselves, as well. https://www.hirevue.com/blog/hirevue-assessments-and-preventing-algorithmic-bias We are happy to offer briefings to educators and researchers who’d like to know more about our work and how we’ve helped our customers increase diversity at the screening stage of job interviews.
Dear Cynthia Siemens, hello and many thanks for your valuable comment and feedback. I’d be only too happy to put you in contact with Prof. Yakubovich and am sure this knowledge and contact would be appreciated. With kind regards, Tom Gamble, Associate Director, CoBS
Dear Cynthia, thank you for your interest in our article. We relied on the WSJ report: https://youtu.be/8QEK7B9GUhM which is referenced in the article. And we report there some examples where algorithms do a better job than humans. If you have more info about the predictive accuracy of your algorithms, I’d be interested in taking a look.