Towards ethical algorithmic management

  • Author

    Jai Vipra

    GovAI

  • Author

    Lukas Sonnenberg

    GIZ


Automated decision-making systems have become an integral part of the economy. Many different industries draw on algorithms to automate decision-making and increase efficiency. Yet there is one industry that exists primarily because of algorithms: digital platforms that mediate work. This includes platforms for ride hailing, food delivery, at-home services and online work. Many core functions of digital labour platforms are based on algorithms. Without them, all decisions would have to be taken by humans, which would severely restrict the platforms' ability to scale. But what exactly is an algorithm, how is it used, what consequences does algorithmic management have for workers, and what can be done about it?


 

Algorithm as a Boss

An algorithm can be understood as a set of rules that leads a computer or software to a certain decision, with little to no human involvement once it is launched. Sometimes, algorithms are designed to “learn”: machine learning algorithms, for instance, can process large amounts of data and derive patterns from them.

While algorithms can make management more efficient and are essential to the growth of platforms, they are also used in harmful and unethical ways. Platforms have been found to use algorithmic nudging to push workers to behave in certain ways. For example, workers are rewarded by algorithms for completing deliveries in a short amount of time, endangering their own and others’ safety. Algorithms can also control how many tasks are made available to workers, and platforms use this ability to limit work opportunities, creating an oversupply of labour and thereby depressing workers’ pay.

An underlying issue with algorithms is that they are opaque, whether by design or for technological reasons. Take a simple example: a rating algorithm sums up customer ratings and divides the result by the number of ratings to arrive at a worker’s average rating. Platforms, however, often deploy much more complex algorithms that factor in various data points and correlations to allocate work, determine wages, calculate delivery routes and times, or even automatically dismiss or block workers without any appeal procedure. It is usually unclear to the worker how the algorithm arrives at these decisions.
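The simple rating algorithm described above can be sketched in a few lines (a minimal illustration; the function name is ours):

```python
def average_rating(ratings):
    """Transparent rating algorithm: sum all customer ratings
    and divide by their count to get the worker's average."""
    if not ratings:
        return None  # worker has no ratings yet
    return sum(ratings) / len(ratings)

average_rating([5, 4, 3, 5])  # 4.25
```

A worker could verify this calculation themselves. The multi-factor algorithms platforms actually deploy offer no such transparency.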

Many use cases of algorithms on digital labour platforms are alarming. For instance, an Indian delivery platform has been found to base workers’ access to health insurance on decisions taken by algorithms. Based on scores aggregated from overall ratings and performance, an algorithm ranks workers into a gold, silver, or bronze club membership, which determines the insurance coverage available to the workers and their families. The algorithm pays little attention to the possibility that the routes or delivery times it allocated were themselves flawed, nor does it account for other variables such as traffic jams, road accidents or the human factor: customers’ wishes or intentionally negative reviews. In this case, algorithms decide not only where to pick up a delivery, but whether a worker will be able to access their right to healthcare.
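A hedged sketch of how such a tier-ranking algorithm might work; the thresholds and the 0–100 score scale are our assumptions for illustration, not the platform’s actual values:

```python
def club_membership(aggregated_score):
    """Illustrative tier assignment from an aggregated rating-and-
    performance score (assumed to range 0-100). Note that nothing
    here accounts for traffic jams, flawed routes, or malicious
    reviews: by construction, a low score is treated as the
    worker's fault."""
    if aggregated_score >= 90:
        return "gold"
    if aggregated_score >= 75:
        return "silver"
    return "bronze"
```

The point of the sketch is what is missing: the function has no input through which a worker could explain a disruption outside their control.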

While algorithmic management has many positive features in terms of efficiency and overall economic growth, algorithms are often not designed to consider unforeseen occurrences or disruptions. Workers are frequently left with the burden of proving that disruptions outside their control caused a delay in completing a task. Platforms rarely offer easy reporting structures for workers, instead following rigid pay-out structures with insufficient redress mechanisms. Algorithms can make mistakes even when developed and deployed with the best intentions. In other words, algorithms are often unable to factor in the complexity of workers’ lives while making decisions for them.

 

How to algorithm – Let’s talk policy

Algorithms are a great achievement of the digital transformation, and they are here to stay. The use of algorithms for automated decision-making and management is already an essential feature of many industries. Yet there are still many negative or unintended consequences, as legal frameworks addressing the design and use of algorithms remain rare. To harness the potential of algorithms while mitigating their risks, policymakers must explore different policy options that tackle their complex nature.

The first step starts with the fuel of algorithms: data. The European Union’s GDPR and data protection laws in some other jurisdictions give people the right to an explanation when decisions are made by automated systems. Workers can use, and have used, these provisions to challenge unilateral algorithmic management by platforms. Workers and unions have also sought to use data protection provisions to gain access to the data collected about them and used by algorithms, and to compel platforms to let workers hold their own data in a data trust. Still, governments need to make it easier for workers to examine the data a platform holds about them, in order to decrease power imbalances.

Yet more needs to be done to ensure that workers are not harmed by algorithmic management in the first place. Algorithmic transparency mandates can go a long way in helping workers understand the inputs and goals of an algorithm. In addition, governments can mandate that algorithms follow certain standards and meet given performance requirements, a kind of ethical or fair-by-design approach. For algorithms already in place, as well as for the operation phase of all algorithms, different mechanisms are needed. Algorithmic audits are one such mechanism: a government or third party should have the right to audit the performance of algorithms against legal and ethical criteria to ensure accountability and transparency. The Fairwork Network, for example, provides a set of principles that can help guide policymakers and platforms in adopting fair and ethical practices for deploying algorithms and AI in the workplace.
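As an illustration of what one narrow audit check could look like, the sketch below compares average pay per task across groups of workers and flags large disparities. The field names, the grouping key, and the 20% threshold are all assumptions for the example; a real audit would examine many more criteria.

```python
def audit_pay_gap(records, group_key="city", threshold=0.20):
    """Illustrative audit check: compute each group's average pay
    per task and flag groups whose average deviates from the
    overall average by more than the given threshold."""
    groups = {}
    for record in records:
        groups.setdefault(record[group_key], []).append(record["pay"])
    averages = {g: sum(p) / len(p) for g, p in groups.items()}
    total = sum(sum(p) for p in groups.values())
    count = sum(len(p) for p in groups.values())
    overall = total / count
    flagged = {g: avg for g, avg in averages.items()
               if abs(avg - overall) / overall > threshold}
    return averages, flagged
```

Even a simple check like this requires access to the platform’s data, which is why audit rights and data access provisions belong together.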

Many different policy options and pathways exist, but there is no simple one-size-fits-all solution. Answers to questions of data ownership and protection, as well as the desired level of transparency and intervention thresholds, will depend significantly on dialogue between all parties involved and on national socio-political considerations. While many initiatives already exist and various stakeholders have started pushing for fair and ethical algorithmic management, there is still a long way to go.

To support policy stakeholders, platforms, and workers on the path toward fair and ethical algorithmic management, GIZ’s Gig Economy Initiative is designing a range of activities. As part of trainings for workers and policy stakeholders, the Initiative informs participants about the use and effects of data and algorithms. Additionally, a workshop series aims to raise awareness of issues surrounding algorithmic management among intermediary organisations. This summer, the first pilot will launch together with the Friedrich-Ebert-Stiftung, focusing on trade unions in Kenya.