Accompany all algorithmic decisions with an explanation of the most important reasons and/or parameter(s) behind the decision and how they can be challenged
A reason for the decision should always be made available to the worker, with reference to the inputs (including the worker’s personal data) and the parameters that were decisive to the outcome or that, if changed, would have resulted in a different outcome. The sources of particular parameters and inputs must also be provided and explained, for example where a decision is based on a customer feedback rating. Reasons given for a particular decision must be specific and personalised rather than wholly generic, and should not be provided in overly technical language (for example, stating that ‘on this date you were expected to make X deliveries, but you only made Y’, rather than ‘your deliveries are slower than expected’).
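To make this concrete, the sketch below (in Python, using entirely hypothetical field names and figures, not any company’s actual system) shows how a specific, personalised reason could be generated directly from the decisive input, including what change would have led to a different outcome:

```python
from dataclasses import dataclass

@dataclass
class DeliveryRecord:
    date: str
    expected: int   # deliveries the worker was expected to make on this date
    completed: int  # deliveries the app actually recorded

def explain_decision(record: DeliveryRecord) -> str:
    """Build a specific, personalised reason naming the decisive input,
    rather than a generic statement such as 'your deliveries are slower than expected'."""
    shortfall = record.expected - record.completed
    if shortfall <= 0:
        return f"On {record.date} you completed the {record.expected} deliveries expected of you."
    return (
        f"On {record.date} you were expected to make {record.expected} deliveries, "
        f"but you made {record.completed}. Making {shortfall} more would have "
        f"resulted in a different outcome."
    )

# Example output for hypothetical figures:
print(explain_decision(DeliveryRecord(date="3 March", expected=12, completed=9)))
```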
Where an algorithmically generated score was used in relation to a decision, companies should provide workers with the overall distribution of that score, i.e. how many workers fall into the low, medium and high risk categories within a given geographic area (for example, the city in which the worker operates). The purpose of this is to provide the context behind the decision, which in turn upholds algorithmic accountability and enables workers to challenge inaccurate parameters and inputs. This information could be provided as an aggregated percentage of workers with a certain score or rating. Given that this information is likely to change over time, the distribution should be provided to workers at the time that they face a particular decision, such as termination. Similarly, the company should also provide information on the prevalence and complexity of any parameters that prompted a particular automated output at the time of the decision. For example, if a company flags a worker on suspicion of GPS spoofing and suspends their account as a result, it should state whether other workers within the same geographical area also reported technical issues relating to the app’s collection of location data.
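As an illustration of what such an aggregated snapshot could look like, the sketch below (in Python, with hypothetical records and category names, not any company’s actual data model) computes the percentage of workers in each risk category for the city in which the affected worker operates:

```python
from collections import Counter

# Hypothetical worker records: the city each worker operates in and the
# risk category assigned to them by the scoring system.
workers = [
    {"city": "London", "risk": "low"},
    {"city": "London", "risk": "low"},
    {"city": "London", "risk": "medium"},
    {"city": "London", "risk": "high"},
    {"city": "Manchester", "risk": "low"},
]

def risk_distribution(records, city):
    """Percentage of workers in each risk category within one city,
    taken at the moment the decision is issued."""
    counts = Counter(r["risk"] for r in records if r["city"] == city)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# The snapshot shown to a London-based worker alongside the decision,
# e.g. {'low': 50.0, 'medium': 25.0, 'high': 25.0}
print(risk_distribution(workers, "London"))
```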
The purpose of this is not just to address information asymmetry and allow decisions to be challenged, but also to help workers understand why they are being treated in a certain way. This does not necessarily mean going into the details of the algorithm; rather, it means giving workers insight into what change(s) they could make to receive a more desirable outcome in the future.
A worker should be able to challenge any decision they believe is wrong or unfair. To enable this, companies must provide the contact details of a human, along with information on how to request a review, which teams have oversight over the algorithm’s outputs, and what that oversight involves.
About us
This campaign was launched by Privacy International (PI) on 21 January 2025 with the support of 12 organisations.
PI is a London-based non-profit, non-governmental organisation that researches and advocates globally against government and corporate abuses of data and technology. It exposes harm and abuses, mobilises allies globally, campaigns with the public for solutions, and pressures companies and governments to change. PI challenges overreaching state and corporate surveillance so that people everywhere can have greater security and freedom through greater personal privacy.