Existing Solutions and Their Limitations
Currently, most companies take two main approaches to “ethical” AI implementation: data privacy protection and algorithmic bias mitigation. While these macro-level concerns are important to address, doing so protects only the company while leaving employee well-being completely exposed. Many existing AI implementations carry flaws that directly harm the employees who work under them.
The current trend in AI management prioritizes short-term gains in speed. However, our analysis suggests that this creates Institutional Fragility: when a workforce is managed by an inflexible algorithm, the “human buffer” for crisis and creativity disappears. If every second of a worker’s day is optimized, there is no cognitive room for problem-solving when the system fails.

Some companies also build internal AI systems to keep data out of third parties’ hands and to mitigate the risk of cyber attacks. This approach is effective at protecting company assets and employee information, but it tends to be extremely expensive, and its cost is often used as a justification for increased monitoring. Companies track every keystroke or movement to “ensure safety and productivity.” However, our analysis suggests that constant digital surveillance actually creates profound Psychological Unsafety. Although many of these solutions keep the company and its employees “safe,” their limitations often backfire, damaging employees’ psyches and leading to reduced productivity and ethical distrust.