Abstract
This paper concerns the technical issues raised when humans are replaced by artificial intelligence (AI) in organisational decision making, or in decision making in general. Automating human tasks and decisions can of course be beneficial, by saving human resources and by (ideally) leading to better solutions and decisions. However, to guarantee better decisions, current AI techniques still have some way to go in most areas, and many of the techniques also suffer from weaknesses such as a lack of transparency and explainability. The goal of the paper is not to argue against using any kind of AI in organisational decision making. AI techniques have a lot to offer and can, for instance, assess far more possible decisions, and far faster, than any human can. The purpose is rather to point out the weaknesses that AI techniques still have, and that one should be aware of when considering whether to implement AI to automate human decisions. Significant current AI research goes into reducing these limitations and weaknesses, but this is likely to be a fairly long-term effort. People and organisations might be tempted to fully automate certain crucial aspects of decision making without waiting for these limitations and weaknesses to be reduced, or, even worse, without even being aware of those weaknesses and of what is lost in the automation process.
| Original language | English |
| --- | --- |
| Journal | Journal of Management & Governance |
| Volume | 23 |
| Issue number | 4 |
| Pages (from-to) | 849-867 |
| ISSN | 1385-3457 |
| DOIs | |
| Publication status | Published - 1 Dec 2019 |
Keywords
- Algorithmic bias
- Algorithmic decision making
- Artificial intelligence (AI)
- Connectionist AI
- Explainability
- Human decision making
- Symbolic AI
- Trust