Uber, Waze, Airbnb… The algorithms that run these platforms are built to optimise the service provided to the user rather than to respect collective, political or moral norms. The accusations levelled against these algorithms expose the way technical architectures implicitly govern our lives.
This essay is taken from the book Gouverner la ville numérique, edited by Antoine Courmont and Patrick Le Galès and published in the Puf/Vie des idées collection.
Journalists in England, Italy and France create false restaurant profiles and manage to push them up Tripadvisor’s rankings with flattering comments and high marks, a way of denouncing artificial popularity calculations that deceive clients and encourage unfair competition between restaurants. Researchers denounce the presence of ‘phantom cars’ on the Uber application, [1] proving that the ride-sharing company simulates the supply-and-demand market it claims to display as an unbiased intermediary, in order to control rates and give users the impression of an abundant offer. These two accusations against digital calculations are typical of initiatives seeking to audit and criticise digital platforms and their algorithms. They reveal the growing concern among public authorities, market actors and citizens regarding the space these platforms occupy in our daily lives. With the rampant spread of mobile phones (4.8 billion in 2017, 2.32 billion of which were smartphones), [2] the use of mobility services has risen enormously, and no sector of activity is immune to the arrival of new entrants who reshape the way we orient ourselves, move around and consume in the city. This criticism, and the initiatives taken by government institutions to regulate the situation, develop in a context where these platforms enjoy massive success with users and are deployed on a planetary scale. [3] Following the collaborative-economy model, they claim to be platforms connecting people, and they use this positioning to develop hyper-competitive intermediation models with an off-site rationale, most often ignoring the specific local legal and market regulations in force.
It is striking to note that many of the interrogations about these new services involve the algorithms that are central to their functioning. Because application code is locked down, it is often difficult to discover how they function and to anticipate the regular changes developers introduce to optimise services. While computer code is one means of regulating digital worlds, the development of services via mobile applications also reveals the power of code to act in physical space. By imposing an abstract and decontextualised form of calculation, algorithms are accused of being the agents responsible for transforming territories and mobility behaviours. It is hence useful to explore these controversies. Who are the agents responsible for these deregulations, and what interests motivate their actions? How do platforms and their algorithms interact with pre-existing forms of regulation? What principles are invoked to denounce algorithms as a cause of territorial deregulation?
Based on an analysis of mediatised cases involving well-known applications offering business-indexing, delivery, rental or mobility services in cities, this chapter seeks to show how the issue of transforming space into calculations is progressively framed as a public problem. To reveal the issues involved, we study several ‘affairs’ [4] taken from a corpus of 41 cases published in the press, gathered between September 2017 and April 2018. [5] In this article, we concentrate on the part of the corpus consisting of 19 cases involving popular platforms such as Uber, Waze, Tripadvisor, Airbnb and Deliveroo. The corpus focuses on situations where the question of calculation procedures, which for simplicity we will call algorithms, is central to the affair. It hence excludes controversies related to general terms of use, labour law, unfair competition, etc., even though these issues may appear within a controversy as effects produced by algorithms. This corpus is not an exhaustive survey of all the controversies generated by these platforms, but its diversity allows us to map the space of issues and reveal the way these new actors calculate the city.
Code is Eating the City
The power of algorithms lies in their ability to regulate digital spaces. Lawrence Lessig’s [6] work has shown that with the creation of cyberspace, a new source of regulation had to be taken into account. The technical architecture represented by computer code is far from neutral: it encapsulates its designers’ political choices and largely regulates cyberspace. This new form of regulation comes in addition to three pre-existing ones: the law, the market and the norm. We can hence modify Marc Andreessen’s famous words, “software is eating the world”, [7] into “code is eating the city”. While code rules in cyberspace, with the development of mobile services, physical and digital space are now entwined, and regulatory activity in the digital world has direct implications in the physical world. Many of the questions raised by the new mobility service platforms are provoked by computer code’s indifference to the norms established by the institutions that govern different spaces.
In 2017, California was the site of dramatic fires. The safe routes were jammed, while the most dangerous ones, closed to traffic, were empty. The Waze algorithm, which calculates routes from traffic data and optimises journey duration in real time, therefore suggested itineraries that took drivers along roads in close proximity to the fires. [8] These incidents, denounced by users on social media and reported in the press, did not lead to any deaths. Here we see a fairly concrete example of code developed to encapsulate an objective focused solely on usefulness to the driver, with no safety concerns included. The functionalities of GPS navigation systems generally include three options: the “fastest” route, the “shortest” route, and the “cheapest” route (avoiding tolls). These choices have become technical conventions, but they are only three ways, among others, of defining what a journey should be according to users’ “presumed” aims. A blogger driving a hybrid vehicle recently suggested [9] adding an option for the “most energy-efficient” journey for electric vehicles (avoiding high-consumption stretches on motorways). Similarly, we could imagine these systems suggesting the “most beautiful” route, through neighbouring forests and countryside.
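To make the point concrete, here is a minimal sketch (a hypothetical road graph with invented weights, not Waze’s actual code) showing that the “objective” of a navigation system is nothing more than the attribute handed to a shortest-path search, and that a safety-aware objective is just as easy to encode, provided the designers choose to:

```python
# Sketch: the routing "objective" is just the cost function fed to the search.
import networkx as nx

G = nx.DiGraph()
# Each road segment carries several candidate costs (all values invented).
G.add_edge("A", "B", minutes=10, km=5, toll=0.0, safety_risk=0.9)  # near the fires
G.add_edge("A", "C", minutes=25, km=7, toll=2.5, safety_risk=0.1)
G.add_edge("B", "D", minutes=5,  km=3, toll=0.0, safety_risk=0.8)
G.add_edge("C", "D", minutes=10, km=6, toll=0.0, safety_risk=0.1)

# The three conventional options are three strings naming edge attributes.
for objective in ("minutes", "km", "toll"):
    print(objective, nx.shortest_path(G, "A", "D", weight=objective))

# A "substantive" fourth objective penalising dangerous segments is one line:
safe = lambda u, v, d: d["minutes"] + 1000 * d["safety_risk"]
print("safety-aware", nx.shortest_path(G, "A", "D", weight=safe))
```

With these toy numbers, the first three objectives all send the driver through B, the segment flagged as dangerous; only the safety-aware objective routes through C.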
Faster, safer, more ecological or more aesthetic: a whole range of possibilities exists, within which designers make choices that directly regulate what the technology allows us to do or not do. This example also shows that these algorithmic choices are not limited to digital space, as they impact vehicular traffic. They direct drivers towards one route rather than another among the possible options, and can thus contradict other forms of traffic regulation, represented in the case above by the law and the security forces, who modify the authorised routes according to the danger they present in a crisis situation.
Various works show that route-calculation applications deregulate pre-existing forms of traffic management, particularly because they rely on other types of data and other representations. [10] The local authorities responsible for regulating traffic in agglomerations use technical systems that measure overall road occupancy in order to keep traffic as fluid as possible for everyone, and they define priorities for vehicular movement on certain streets by establishing a map of urban movement. Platforms like Waze, by contrast, follow a “user-centric” logic: they aggregate personal data to optimise each person’s travel time, paying no attention to the overall regulation of traffic or to the regulatory actions implemented by local authorities. The transfer of large flows of traffic to unplanned areas (residential streets [11] or roads near schools, [12] for example) is one consequence of this “user-centric” approach, and it conflicts with other traffic management norms.
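The divergence between the two logics can be made tangible with a toy model (a sketch with invented numbers, not a model of any real network): each driver, or the application acting on their behalf, picks whichever route is currently fastest, and the residential street ends up carrying far more traffic than its planners assumed:

```python
# Toy model: "user-centric" route choice versus planned road capacity.
def travel_time(route, load):
    base = {"main_road": 20, "residential": 10}[route]        # minutes when empty
    capacity = {"main_road": 100, "residential": 15}[route]   # planned vehicle load
    return base * (1 + (load / capacity) ** 2)                # congestion grows with load

drivers = 60
assignment = {"main_road": 0, "residential": 0}
for _ in range(drivers):
    # Each driver selfishly takes whichever route is fastest right now.
    best = min(assignment, key=lambda r: travel_time(r, assignment[r] + 1))
    assignment[best] += 1

print(assignment)  # the residential street ends up well above its capacity of 15
```

No single driver does anything unreasonable; the overload of the residential street is the aggregate effect of an optimisation performed user by user, with no representation of the collective map the local authority works with.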
The Framing of a Public Problem
Through increasingly numerous press articles on the ‘diseases’ of algorithmic calculation, a new “public problem” is taking shape in public opinion, concerning the right way of being (or not being) turned into calculations (Cardon 2018). Which situations create “problems” and which do not? On the basis of what norms or principles can one calculation be qualified as “normal” (fair, balanced, sincere, etc.) and another as “problematic” (unfair, unbalanced, deceptive, discriminatory)? And who are the perpetrators and who are the victims?
Constructing a public problem requires constituting a repertoire of cases that enables the identification of the various facets of troubled situations, in order to arrive at a univocal and shared interpretation. [13] Based on our corpus of cases, we sought to expose the narrative structure of the affairs [14] by identifying four actants: the “accuser” in the affair; the “algorithmic agent” (the code or calculation procedure); the “cause” attributed to the functioning of the algorithmic agent; and finally the “victim” of the algorithm’s effects. This breakdown of the narratives of affairs related to algorithms reveals a specific tension in debates on the effects produced by calculations: should responsibility for the calculation be attributed to the interests of platforms or to the users’ behaviour? This debate is particularly relevant in the case of mobility service platforms, where the service provided to the user and the demands of common governance of the territory are clearly in conflict.
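One way to make this coding grid explicit is as a simple data structure (a sketch in Python; the field values paraphrase the Waze/California case discussed above):

```python
# Sketch of the four-actant coding grid applied to one affair in the corpus.
from dataclasses import dataclass

@dataclass
class Affair:
    accuser: str            # who brings the case into the public space
    algorithmic_agent: str  # the code or calculation procedure at issue
    cause: str              # what the actors say explains the result
    victim: str             # who suffers the algorithm's effects

waze_fires = Affair(
    accuser="users on social media, relayed by the press",
    algorithmic_agent="Waze route-calculation algorithm",
    cause="real-time optimisation of journey time, blind to safety",
    victim="drivers directed towards the California fires",
)
print(waze_fires)
```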
Spokespersons to bring imperceptible problems to light
The first thing we learn from an analysis of these controversies is that the “victims” of algorithms are rarely the “accusers”. In most cases they need a spokesperson to reveal the issues with the calculation, which are often invisible to them. In their daily usage, users have a hazy and imperfect knowledge of the way they are targeted by calculations. Of course, they are at times confused, intrigued or furious when suggestion or orientation systems fail to propose relevant information, but these situations rarely give rise to protest or publicity beyond a few posts on social media. Most of the time, these calculation issues are never mentioned, and nobody notices them. In the affairs analysed, journalists draw on the work done by moral entrepreneurs, associations, groups, academics or institutions capable of making a case and giving it sufficient consistency to be brought into the public space (Boltanski, 1990). Questioning the trustworthiness of algorithms requires expertise, which in most cases is the product of academic work relayed by associations and the media. This is the case in Uber’s phantom-car affair, denounced by researchers at Data & Society, [15] in the discrimination and salary differences between male and female drivers on the same platform, revealed by researchers at Stanford, [16] and in Ben Edelman and Michael Luca’s work at Harvard [17] demonstrating the effects of racial discrimination on Airbnb.
It is also quite common for the press to report legal decisions taken by public authorities at the national or international level to sanction or regulate platforms. For example, it was the Federal Trade Commission that in 2016 [18] denounced the access Uber granted its employees to the “God View” tool, which allowed them to track clients’ movements through geolocation. In reality, the victims only start denouncing algorithmic systems when they suffer their economic effects. In 2017, for example, it was restaurant owners who contacted the daily newspaper Libération [19] to question the relevance and the transparency of the rules governing the Deliveroo and UberEats algorithms. The owners accused these platforms of not relying solely on the geolocation and delivery-time criteria that were supposed to determine local search results, thereby unfairly favouring certain restaurants in the algorithms’ rankings. It thus appears that the formulation of the issue of algorithms as a public problem is primarily an effort undertaken by experts, NGOs and regulators, while, apart from those involved in business activities, the people who suffer the consequences (i.e. the “victims”) pay little attention to the effects platforms produce.
A range of causes
While it is fairly easy to identify the accusers and the victims, the role algorithms play is shrouded in far greater uncertainty. This emerges particularly through conflicts around the interpretation of the reasons behind their actions. In public controversies, the attribution of responsibility for undesirable calculations targets a variety of more or less precise entities. In press articles, it is often the whole service that is held responsible: “Waze directs drivers towards the fires in California”; “Waze accused of protecting the lives of Israelis better than those of Palestinians”; “Airbnb as a Racial Gentrification Tool?”. However, with the growing notoriety of the term “algorithm” and its rapid entry into public debates, the calculation system is increasingly frequently designated as the responsible agent: “How Uber hides behind its algorithm”; “At Uber, AI will soon decide the price of a ride depending on the client’s characteristics”; “How Airbnb Uses Big Data And Machine Learning To Guide Hosts To The Perfect Price”.
The dominant critical representation of the way algorithms behave attributes the cause of the problem to the economic interests of the company that designed them. In many respects, this hypothesis is most often correct and relevant. But a more precise breakdown of the controversies related to calculations leads us to distinguish the calculating agent (programmed by the service platform’s engineers) from the reasons for the result of the calculation. In many controversies, the actors involved evoke a wide range of reasons to explain why the algorithm works in such or such a manner, and these can be associated with the interests of the platform that programmed it. But the specificity of algorithmic controversies is that agency for the calculation can also be attributed to data produced by users, to the behaviour of other actors, or to regulatory principles imposed by external sources, for example State institutions. Unpacking the controversies sheds light on some of these issues.
To take a first example: at the end of 2015, after the Paris terrorist attacks, the Waze application was accused of alerting drivers to police presence, [20] thus limiting the effectiveness of police checkpoints searching for potential suspects. In reality, it was Waze users who indicated the presence of police on the application; unlike other applications, Waze simply did not suspend its services around the targeted areas. In this case, the attribution of responsibility can take two different directions: it can incriminate the behaviour of the users, or the platform’s policy, since Waze intentionally chose not to block the service despite the public authorities’ requests not to report the presence of security forces.
A second, more complex case is the Mayor of Jerusalem’s denunciation of the Waze algorithm’s choice to avoid directing traffic towards the Eastern neighbourhoods of the city, inhabited by Palestinian populations. [21] This resulted in detours for Israeli drivers and traffic congestion in the Western part of the city. In this affair, Waze was accused of making a political choice by endorsing the partition of Jerusalem. While the Eastern neighbourhoods had indeed been removed from the algorithm’s coverage, Waze justified this on the grounds that, on the one hand, users had reported these areas on the application as dangerous for personal safety, and on the other, the company was working with the Israeli police to establish zones where its users could travel safely. For the Mayor of Jerusalem, the algorithm’s decisions contain a political choice; for Waze, it is only a question of fulfilling users’ demands. The controversy escalated when Palestinians, for their part, denounced the fact that Waze did not declare the ultra-orthodox Israeli settlement zones potentially dangerous for them, thus stigmatising the Palestinian zones and creating an asymmetry in the information shared with users. While the stakeholders in this affair clearly blame the algorithm, the reasons the actors attribute to its choices are not intrinsic to the calculator: they involve political or economic justifications in conflicting ways.
Cold algorithm procedures
The uncertainty around what provokes the result of calculations stems from the difficulty actors encounter in grasping the “procedural” nature of algorithmic calculations. The rules algorithms follow are indeed procedural, not substantive. These artefacts have no semantic access to the information they manipulate: they have no symbolic understanding of it. Hence, to produce their results, they must rely on procedures that yield the best possible approximation of a principle, which users then interpret in a substantive manner. [22] When public debate forms around the effects produced by calculations, the actors’ accusations invoke theories of responsibility which suggest that calculators act intentionally, and that this intent reflects an explicit project pursued by the platforms’ designers. This view is reinforced today by the use of the term ‘Artificial Intelligence’, which gives rise to sweeping statements of the type: “My AI will find the best X”. The controversy then crystallises around a discrepancy between the procedural method platforms apply to the city and the substantive projects the authorities would like to impose on them. This misalignment feeds the tension between the governance of the city by algorithms and its governance by the public authorities.
Following the London terrorist attacks of June 2017, Uber users took to social media to denounce the sharp increase in rates at the very moment Londoners were trying to flee the sites of the attacks and reach safety. [23] The algorithm is programmed to adjust the price of a trip according to supply and demand; after the attacks, demand skyrocketed and the algorithm mechanically increased rates. Faced with this exceptional and unpredictable crisis, the algorithm simply applied its procedure, incapable of taking into account the irregular state of the world. At the same time, London cabs were offering to transport people to safety free of charge. In this example we see how the algorithm’s procedural logic is confronted with the varied, diverse and unstable reality of the world to which it applies its calculations.
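The procedural character of surge pricing can be rendered in a few lines (a sketch only; the formula and the cap are invented for illustration, not Uber’s actual pricing rule): the multiplier tracks the ratio of requests to available drivers, with no representation of why demand has spiked:

```python
# Sketch of a "procedural" surge rule: price follows the supply/demand
# ratio mechanically, whatever the cause of the demand.
def surge_multiplier(requests, available_drivers, cap=5.0):
    if available_drivers == 0:
        return cap
    return min(cap, max(1.0, requests / available_drivers))

base_fare = 12.0
print(base_fare * surge_multiplier(requests=40, available_drivers=50))   # 12.0, a quiet evening
print(base_fare * surge_multiplier(requests=400, available_drivers=50))  # 60.0, a crowd fleeing an attack
```

The procedure is identical in both calls; only the inputs differ. Nothing in the rule can distinguish a football match letting out from an emergency.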
Another source of controversy arises from situations in which one expects the algorithm to behave in a procedural manner, respecting a principle of neutrality, but a substantive cause is unexpectedly included. After testing it in several cities, Uber is about to generalise a payment system based on the ‘itinerary’: users will be billed higher rates depending on the neighbourhoods they travel through. [24] If the departure and arrival zones are both well-off neighbourhoods, users will pay more for their ride than users travelling between a poor neighbourhood and a well-off one, for a trip covering the same distance in the same time. Here, Uber includes a substantive cause in its algorithm: by differentiating between the characteristics of the external world, it modifies the simple logic of neutrality, which consists of calculating the price according to duration and distance alone. Similarly, in the case described above involving Waze and the Eastern neighbourhoods of Jerusalem, asking the algorithm to choose “the fastest route” is procedural, whereas telling it to avoid “travelling through East Jerusalem” is substantive. This distinction must be applied cautiously, particularly because platforms use it to assert the supposedly “neutral” nature of procedural rules and sometimes to wriggle out of their responsibilities.
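The contrast can also be sketched in code (all rates and zone labels are hypothetical, not Uber’s parameters): the procedural fare depends only on distance and duration, while the route-based fare imports substantive knowledge about the territory into the price:

```python
# Sketch: procedural fare versus route-based (substantive) fare.
ZONE_PREMIUM = {"well_off": 1.2, "poor": 1.0}  # substantive knowledge of the territory

def procedural_fare(km, minutes, per_km=1.0, per_min=0.3):
    """'Neutral' rule: same distance and duration, same price, anywhere."""
    return per_km * km + per_min * minutes

def route_based_fare(km, minutes, origin_zone, dest_zone):
    """Substantive rule: the social profile of the zones enters the price."""
    return procedural_fare(km, minutes) * ZONE_PREMIUM[origin_zone] * ZONE_PREMIUM[dest_zone]

# Two trips of identical distance and duration, priced differently:
print(procedural_fare(8, 20))                           # 14.0 everywhere
print(route_based_fare(8, 20, "well_off", "well_off"))  # 20.16
print(route_based_fare(8, 20, "poor", "well_off"))      # 16.8
```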
While algorithms are accused in the media today of deregulating social and economic activities, we should remember that they actually function as building blocks, integrating into their calculations thresholds, restrictions and objectives that are reconfigured by the way users employ the service. In numerous situations, the causes are multiple, and sometimes “external” to the way the algorithms were designed or to the intentions of those who programmed them. However, it would also be an error to think that debating this public issue is futile, as algorithms clearly play a more or less central role in the way we access information and in the way the choices we make in our daily lives are oriented.
Competing Principles
While it remains difficult to evaluate the responsibility we can attribute to algorithms and their level of involvement, accusations against these new actants are on the increase. By analysing the criticism against them, we can try to understand the different principles algorithms are accused of flouting.
The principle of equality is one of the most frequently invoked. Algorithms are constantly accused of discriminating, producing imbalances, or providing oriented and partial information. In 2016, journalists denounced the fact that, to optimise their profits, ranking algorithms on platforms like Airbnb tended to promote aseptic environments with standardised aesthetics, [25] targeting white male clients with high purchasing power who travel the world.
The principle of loyalty is invoked to denounce the fact that algorithms betray, lie to or deceive us. In this case, the algorithm is accused of not doing what it claims to do. The case of Uber’s phantom cars is a good example of users being deceived by the algorithm: Uber deliberately deceives its clients and breaches the contract according to which it is supposed to be nothing more than an intermediary displaying the market in real time.
The principle of respect for privacy is invoked when algorithms are accused of making us the object of their calculations or of monitoring us. Uber is accused by some of its employees of having used a programme called Greyball, [26] which, by cross-referencing clients’ personal information (name, credit card, etc.), sought to identify people designated as potentially “opposed” to its development, in order to ignore them or cancel their rides on the service.
The principle of respect for individual autonomy is highlighted when algorithms are accused of limiting or restricting individual freedom. By placing the user in a controlled and guided environment, platforms subject individuals to choices that are not their own. These critiques appear, for example, in user surveys on route guidance, [27] but also in denunciations of the ultimately coercive pressure the algorithmic reward system exerts on Uber’s supposedly independent drivers. [28] The platform’s functionalities incite drivers to work ever more, driven by the rhythm the algorithm imposes.
The principle of efficiency and effectiveness differs from the principles mentioned above, which are fundamental principles set out in law. It is invoked when an algorithm is incapable of producing the result it claims to achieve, or when there is a dysfunction. Many of the cases presented above relate to this principle: the detection of false reviews or false restaurants on Tripadvisor, for example, or Waze’s failure to detect high-risk or unsuitable areas when it suggests alternative routes to optimise travel time.
Reading these cases, what emerges is a tension between these different normative principles and another means of justifying calculations, based less on respect for a political and moral norm than on optimising the usefulness of the service provided to the user. These tensions between norms and usefulness are the arena of conflicts between new services and regulators. In the light of these critiques, usefulness appears to be the main justification for the choices platforms make among the algorithmic procedures available to them. It is central to the liberal justification of digital platforms, which maintain that they offer the least interventionist architecture possible in order to allow users to make use of the services on offer as they choose. Nonetheless, the agnostic laissez-faire attitude of platforms can create highly unequal situations and reinforce an already deeply unequal urban distribution. Even when it claims to work for users’ benefit, the utilitarian logic platforms follow can also produce situations in which users’ data is abusively exploited, generate forms of dependency or enslavement, or even limit users’ capacity for action by forcing them to act in accordance with the algorithm’s parameters.
In public policy terms, it is striking that issues related to algorithmic calculations of the city reveal a paradigm shift in which the regulation of the city moves from a rationale of collective choices orienting the use of the city to a utilitarian optimisation of users’ satisfaction with platforms. Governing the city presumes establishing limits, prohibitions and exceptions in order to respect the balance between populations, preserve certain areas, manage cohabitation between different categories of users, and avoid effects of concentration of resources, prices or populations. These decisions presume introducing substantive choices, in other words exceptions, into calculation ecosystems which, because of the simplicity and universality of the rules they establish, function in a procedural mode. This tension is inseparable from the nature and availability of the data used for calculations. Including restrictions that reflect the governance of territories by “substantive” political orientations within procedural calculations requires data that is often in the possession of territorial operators and does not have the same qualities (completeness, temporality, etc.) as the geopositioned footprint users give to platforms. For this reason, the question of data sharing, and of the circulation and ownership of data, is one of the key issues of the algorithmic governance of territories.
Dominique Cardon & Maxime Crépel, “Algorithms and Territorial Regulation”, Books and Ideas, 16 December 2019. ISSN: 2105-3030. URL: https://booksandideas.net/Algorithms-and-Territorial-Regulation
[3] In 2017, across the world, there were 150 million Airbnb users, 75 million Uber users for 7 million drivers, and 4.26 million restaurants referenced on Tripadvisor. In 2017, Airbnb was present in 65,000 cities and 191 countries and was valued at 31 billion dollars; Uber was present in 78 countries and 600 cities with a valuation of 68 billion dollars; Tripadvisor had a turnover of 1.4 billion dollars in 2016 (sources: https://expandedramblings.com and https://www.statista.com).
[4] Luc Boltanski, “4. La topique de la dénonciation”, in La Souffrance à distance. Morale humanitaire, médias et politique, Éditions Métailié, 1993, pp. 91-116.
[5] Data gathered by the Algoglitch project carried out at the Sciences Po medialab and supported by the Conseil National du Numérique.
[6] Lawrence Lessig, Code and Other Laws of Cyberspace, Basic books 1999.
[8] “Waze dirige les conducteurs vers les incendies en Californie”, Clubic, 08/12/17.
[9] “GPS: de la valeur par défaut”, Internet Actu, 05/12/17.
[10] A. Courmont, “Entre monde et réalités. Big data et recomposition du gouvernement urbain”, Revue française de Sociologie, 2018.
[11] “Une petite ville américaine est envahie par les embouteillages à cause de l’appli de navigation Waze”, Mashable, 27/12/17.
[12] “Loyauté des plateformes, d’accord, mais loyauté à quoi ?”, Nouvelobs, 10/12/16.
[13] Daniel Cefaï, “La construction des problèmes publics. Définition de situations dans des arènes publiques”, Réseaux, 1996, n° 75, pp. 43-66.
[14] Bruno Latour, Changer de société, refaire de la sociologie, La Découverte, 2006; Luc Boltanski, “2. Le système actanciel de la dénonciation”, in L’amour et la justice comme compétences. Trois essais de sociologie de l’action, Éditions Métailié, 1990, pp. 266-279.
[16] C. Cook, R. Diamond et al., “The Gender Earnings Gap in the Gig Economy: Evidence from over a Million Rideshare Drivers”, Natural Field Experiments, 2018.
[17] B. G. Edelman, M. Luca, “Digital Discrimination: The Case of Airbnb.com”, Harvard Business School, Working Paper No. 14-054, SSRN
[18] “Uber règle à l’amiable une plainte sur la protection des données”, Le Point, 15/08/17.
[19] “Après les livreurs, les restaurateurs bouffés par l’uberisation”, Libération, 22/08/17.
[20] “Les applis de géolocalisation critiquées en marge des attentats”, GQ, 17/11/15.
[21] “Waze accusé de protéger bien mieux la vie des Israéliens que celle des Palestiniens”, Numerama, 05/10/16.
[22] Dominique Cardon, “Le pouvoir des algorithmes”, Pouvoirs, 2018, vol. 164, no. 1, pp. 63-73.
[23] “Attentat de Londres: Uber scandalise ses utilisateurs en augmentant ses prix pendant les attaques”, Huffingtonpost, 04/06/17.
[24] “Chez Uber, une I.A. décidera bientôt du prix de la course ‘à la tête du client’”, Journal du Geek, 31/05/17.
[25] “Comment Airbnb et Instagram uniformisent nos lieux de vie”, Les Inrocks, 27/08/16.
[26] “La manipulation secrète des utilisateurs d’Uber dévoilée par des lanceurs d’alerte”, The Conversation, 18/04/17.
[27] Y. Bruna, “La déconnexion aux technologies de géolocalisation. Une épreuve qui n’est pas à la portée de tous”, Réseaux, n°86, 2014, pp. 141-161.