April 24, 2022.

Artificial Intelligence in the Latin American Public Sector


Spanish version

Across the world, governments are incorporating artificial intelligence (AI) into their arsenal of tools to improve efficiency, make better policy decisions, and increase engagement with the public. This trend holds true for many nations in Latin America and the Caribbean (LAC). A new report by the Organisation for Economic Co-operation and Development (OECD), an international policy analysis think tank, explores how LAC governments are incorporating AI into their processes for a more responsive public sector. We spoke with Ricardo Zapata, an analyst at the OECD and one of the co-authors of the report, “The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean,” to learn more about how LAC governments are using AI and what this means for the future of responsible governance, public security, and digital rights in the region.

Photo by Robynne Hu via Unsplash

What are some of the most common ways that a government may use AI?
Governments generally approach AI to improve efficiency. Concretely, this means automating simple tasks or upgrading the speed and quality of public services. Most of the visible use cases belong to this category. In other cases, governments also look to improve the policy design process by extracting in-depth knowledge from large quantities of data (and most governments are already good at producing a lot of data) and making more informed decisions. This is an area where AI can deliver good results, but it also calls for stronger ethical approaches across the whole cycle, from the collection of data to its processing. Finally, AI is also being used to enhance communication and engagement with the public, generally through chatbots, matching tools (producing better recommendations according to citizens’ needs or characteristics), or improved abilities to understand the opinions and perspectives of citizens at scales that were previously unfeasible. It is worth noting that similar uses happen the other way around, when citizens use AI to better understand what their governments are doing.

The report states that Colombia has emerged as a leader in the LAC region for its use of AI. Why is that?

Colombia is building strong AI policy and producing very interesting use cases. On the policy side, we documented various instruments and levers that secure strong governance of AI in the public sector. In the first place, its AI strategy is the only one in the region to have the whole set of enablers that can help drive implementation. By enablers we mean having objectives and specific actions, measurable goals, responsible actors, time frames, funding mechanisms, and a monitoring instrument. Although these do not guarantee successful implementation, having them in place can improve performance, especially considering that Colombia is a large country and the strategy has been made to endure across administrations.

Second, we analysed 15 policy levers that can improve the implementation of AI projects in the public sector and, in most of them, Colombia stood out as a regional leader. For example, when looking at the efforts to develop a responsible, trustworthy and human-centric AI, Colombia’s ethical framework was well aligned with our reference instrument, the OECD AI Principles. When looking at funding, Colombia’s national strategy was unique in the region in having an explicit funding mechanism. When looking at data governance or spaces for experimentation, it was also among those countries with the most complete instruments and practices.

This whole evaluation exercise did not intend to create a ranking, but to identify strengths among countries so that others can learn from their practices and lessons. Our study does not say who the best is, nor who has the greatest impact. At this stage, we can compare what countries are doing with the policy frameworks we are developing and, AI being such a recent trend, it is possible that these frameworks will evolve in the coming years. So being a regional leader, in this case, means that other countries should take a special look at what Colombia is doing.

Could you give an example of how Colombia uses AI in the public sector?

Aside from all the policy developments I described, Colombia is also making some interesting implementations in the public sector. I think PretorIA’s case stands among the most interesting we documented. This is a project developed by the Constitutional Court to help in the selection of key tutelas (i.e. constitutional actions for the protection of fundamental rights) to set legal precedents on the provision of fundamental rights. The Court receives more than 2,000 tutelas each day, so a solution that makes this process more efficient can add a lot of value. PretorIA automatically reads and analyses all complaints, detects and predicts the presence of predefined criteria, and intuitively presents reports and statistics. It serves as a tool for judges, ensuring there is a human in charge of the decision-making process.

What I found most interesting about this case is how it was developed. Initially, it was an adaptation of Prometea, an AI system developed in Argentina to help justice providers. It acted as a virtual assistant that predicted case solutions (based on previous cases and solutions) and helped provide the information required to assemble case files. When the Constitutional Court first announced the project in 2019, it sparked citizen participation and dialogues with many stakeholders due to concerns about the opacity and black-box effect of this system, especially when dealing with a process made to guarantee fundamental rights. Citizen participation then led the Court to transform the whole project into what we know today as PretorIA, a system that uses topic modelling technology instead of neural networks, making it more explainable, interpretable and traceable. The institutional reaction to external voices is a nice example of responsible actors being held accountable, making the necessary changes to ensure the trustworthiness of the AI system.
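To make the explainability contrast concrete, here is a toy sketch in the spirit of PretorIA's criteria detection, not its actual implementation: the criteria and keywords below are invented for illustration. Unlike a neural network, every flag such a system raises can be traced back to the exact terms that triggered it:

```python
# Illustrative only -- not PretorIA's code. A rule-based detector over
# predefined criteria: every flag is traceable to the words that triggered it.
CRITERIA = {
    "health": {"treatment", "medication", "hospital"},
    "housing": {"eviction", "shelter", "housing"},
}

def detect_criteria(text: str) -> dict:
    """Return each criterion found, with the exact words that matched."""
    words = set(text.lower().split())
    return {
        criterion: sorted(words & keywords)
        for criterion, keywords in CRITERIA.items()
        if words & keywords
    }

print(detect_criteria("Patient denied medication at the hospital"))
# {'health': ['hospital', 'medication']}
```

Topic modelling works on the same principle of traceability: each discovered topic is a human-readable list of weighted words, so a judge can inspect why a tutela was grouped under a given theme.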

What is a “trustworthy, human-centered” approach to AI and how does it differ from traditional uses? How does Colombia rank in its use of human-centered AI?

Our reference framework to assess trustworthy and human-centered approaches is the OECD AI Principles, which intend to promote the development of innovative AI that respects human rights and democratic values. Coming from an institution like the OECD, dedicated to sharing best practices, establishing international standards, and improving public policies in democratic and market-oriented economies, the report seeks to interrogate how public sector AI systems are integrating these values. I think it is hard to say what a traditional approach is. What I would stress is that technology is not value-neutral, so every development embeds a vision of what the world should look like. Many systems we know today have put their focus on efficiency, profit, or control. As citizens and institutions, we should therefore permanently ask how we want technology to be and what values we want it to reflect.

In this regard, Colombia has made a great effort in issuing the AI Ethical Framework, aligned with the OECD AI Principles. Our report goes more in depth and looks at particular aspects of the responsible, trustworthy and human-centric approach: fairness and bias mitigation, transparency and explainability, safety and security, and accountability. In all of those, Colombia’s AI Ethical Framework gives the country good capacity to promote this type of approach. Where we see room for improvement is in setting guidance and methods for understanding user needs and for building diverse AI teams. Of course, all this concerns the existence of policy frameworks, which is in itself great progress; their application and impact will be a matter of future research.

How could human-centered AI help people coexist better in cities? Could you give an example of how this technology could be used in a city planning or public security intervention?
Cities are a context where digital government projects can have a more tangible and visible impact for citizens. Implementing trustworthy and human-centric AI in domains like security, where we are dealing with the privacy and integrity of persons, could allow society to be confident in the right applications of the technology and to hold responsible actors accountable for its results. It could also help us decide where we do not want to apply it, which is a completely valid choice. We talk a lot about the benefits of AI, but sometimes a risk-based approach can say: these domains should remain tech-free. Take facial recognition, for instance, which has been banned in some places and remains a debated topic. It could also help us discover that AI is not the best solution, as happened with Predpol in Montevideo.

At the urban level, many authorities in Latin America are using AI systems to analyse the enormous amounts of real-time and historical mobility data to improve traffic management, detect risks, or model future scenarios. Some are also using surveillance cameras and devices together with AI algorithms to track criminal activity, identify persons, and analyse historical data. Our report highlights that there are still two main challenges in this last area: establishing the necessary safeguards when processing sensitive personal data (e.g. biometric data), and defining clear frameworks for the use of these technologies to prevent possible abuses such as the profiling and persecution of political opponents or protesters.

We certainly need more active discussion to shape urban surveillance technologies. There is still a lot of progress to be made. In the end, that is one of the points of incorporating democratic values in the development of AI, that we can collectively shape these technologies and their applications. 


Ricardo Zapata is a Public Policy Analyst at the OECD and a co-author of the new report “The Strategic and Responsible Use of Artificial Intelligence in the Public Sector of Latin America and the Caribbean.”



Medellín, Colombia, April 19, 2022.

An Interview with the Designer of “Decoding Medellín”


Spanish version

We sat down with Sara Arango Franco, team member and one of the primary facilitators and designers of our new data analysis workshops to talk about the motivations behind the program and how it’s gone so far.

What is “Decoding Security, Coexistence, and Surveillance in Medellín” and how does it differ from “We are Recording You,” the research sprint that Edgelands did in the fall of 2021?
“Decoding” is a series of data workshops for those who are both interested and experienced in data analysis and visualization. The objective is to gain experience analyzing the open data that is gathered on citizens and the security of the city through what we could call surveillance efforts. Our aim is to create a space to develop skill sets that allow participants to respond to security questions with the potential to develop into policy or public initiatives.

The difference from We are Recording You is that the Decoding workshops are directed specifically towards a population with skills in data analysis and are meant to be hands-on from the beginning. If you look at the schedules of both workshops, you’ll see that We are Recording You had many more guest speakers and more time dedicated to discussion of topics related to security and surveillance. We think that both programs complement one another quite well. In fact, one of the participants of We are Recording You attended the first few sessions of Decoding (although she had to stop because she started a new job 😊).

How is the program designed?
Participants meet for 8 weeks. In week 9, they will present the results of their research to an audience within our Edgelands network: decision-makers, academics, and others relevant to the discussion.

Stats:
  • Project Supporters: these are groups or institutions in Medellín that work on security issues and who have proposed questions for the participants to resolve throughout the 8 weeks of the workshop. In our case, these groups are the Center for Political Analysis at EAFIT University (CAP), the Information System for Security and Coexistence (SISC), and Casa de las Estrategias.
  • Mentors: 6 experts in data analysis, each responsible for one of our project groups. They guide the participants in their work.
  • Participants: About 25 participants meet regularly and are divided into 6 workgroups. Participants are mostly economists, mathematical engineers, and political scientists (and a couple people who study psychology or business). Participants were placed into workgroups based on their application responses as well as the technical requirements for each project.
  • We’ve also had guest speakers in almost every session: these are experts in data who have worked on matters related to security in Colombia and beyond. 


What is a typical session like?
A typical session consists of:
  1. A quick recap by the Edgelands team on what the previous session consisted of, plus any general announcements;
  2. On occasion (and in the majority of sessions up to this point), a presentation by a guest speaker who shares their experience in applied research and the role of data in their research; and
  3. Group work (participants and mentors) to develop the research questions that the project supporters have pitched.

What are some of the questions that participants are currently researching?
Our six workgroups are investigating the following questions:
  • Gender-based violence in Medellín: We are analyzing an array of metrics to establish if there are zones of the city in which vulnerable groups are less likely to ask for help before a violent act or a femicide (project supporters: Casa de las Estrategias & CAP. Mentor: Sarah Henao)
  • Criminal “ghettos”: Data analysis to determine whether it is possible to identify zones of the city with the dynamics of a “ghetto” (project supporter: Casa de las Estrategias. Mentor: Santiago Rodríguez)
  • Sentiment analysis on Twitter: A pilot program that gathers tweets related to the perception of security in Medellín and studies how they are related with existing metrics of security. (project supporter: SISC. Mentor: Felipe Mira)
  • Study on the correlation between “hard” metrics of security (in this case, theft) and perception (project supporter: SISC. Mentor: Andrés Pérez)
  • Study on the relation between urban configuration and the dynamics of security (project supporter: SISC. Mentor: Jessica Salazar)
  • Exploration of the ways to visualize and communicate homicide rates in Medellín to the public (project supporter: Casa de las Estrategias. Mentor: Manuela Henao)
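As an illustration of what the simplest baseline for the Twitter sentiment pilot might look like (the workgroup's actual method is not described in this post, and the word lists below are invented), a lexicon-based scorer counts positive and negative security-related terms per tweet:

```python
# Hypothetical baseline for the Twitter sentiment pilot -- the workgroup's
# real method is not described here, and these word lists are invented.
POSITIVE = {"safe", "calm", "tranquil", "improved"}
NEGATIVE = {"robbery", "afraid", "violence", "unsafe"}

def sentiment_score(tweet: str) -> int:
    """Naive lexicon score: +1 per positive term, -1 per negative term."""
    tokens = [w.strip(".,!?¡¿") for w in tweet.lower().split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

tweets = ["I feel safe downtown", "another robbery, I am afraid"]
print([sentiment_score(t) for t in tweets])  # one score per tweet: [1, -2]
```

Scores like these could then be aggregated by neighborhood or week and compared against the existing security metrics the group studies.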



How does this research that the participants are doing apply to real life issues?
All the research projects were pitched by the project supporters mentioned above and are based on long-standing issues in the city. It’s our hope that the outcomes of these research projects can offer our project supporters a space, or a buffer, to resolve these kinds of innovative questions, as these public-facing organizations, along with many others, rarely have the time to explore such questions while attending to urgent issues, and can also accelerate their own internal processes for engaging with the public. The work that SISC does is directly related to public policy, as they are part of the Secretariat of Security. Casa de las Estrategias is one of the most important actors in the city, through its think-do-tank approach.

How can one “decode” security, surveillance, and/or coexistence?
One can do so with the understanding that there is no single, unique code, and that data is always a limited representation of reality (though not one to be dismissed for that reason!). The true decoding comes when someone understands the process through which data and numbers end up being representations, and can interpret them.


Has anything surprised you about the program so far?
The huge interest we received from applicants, even though the time commitment is demanding: almost as long as an entire academic semester!

How will the program finish?
The final session will consist of the workgroups presenting the results of their research to the project supporters, guest speakers, and others within the Edgelands network. We plan to publish the results of this research on our website and social media and ensure that this research has the biggest impact possible. Throughout all this, the more new alliances and projects that can come from our work in these workshops, the better!



Medellín, Colombia, April 6, 2022.

Conversations for an Ethical Artificial Intelligence Policy in Colombia


By Santiago Uribe Sáenz

Spanish version

FERNANDO BOTERO, THE STREET, 2013, OIL ON CANVAS.
GALERIE GMURZYNSKA


AI is everywhere: from social networks and streaming services, to biometric immigration checkpoints, to medical diagnostic tools and apps on our smartphones. Indeed, algorithms and programs that run on artificial intelligence (AI) have reached such cognitive capacity that in some areas they surpass human capabilities (think of playing chess or detecting tumors). The use of these technologies has become indispensable, transforming our relationships with technology, how we approach government, and how we relate to one another in our cities. The potential to transform the economy, innovation, development, and ways of working is such that this digital transformation has been called the Fourth Industrial Revolution. It is therefore not surprising that the world is looking to AI as a new economic frontier with the potential for economic growth. In recent years the world's leading economies (the European Union, the United States, China, etc.) have developed national strategies and plans to foster innovation in AI. The potential benefits are enormous.

AI has the potential to transform the economy, attract investment, generate skilled jobs, sell services to millions of people and enable access to information and the benefits of technology. In Colombia, for example, biometric surveillance systems such as facial recognition have improved security in public places and immigration control services at airports. During the Covid-19 pandemic, AI systems with biometrics were implemented in some countries to detect people with symptoms of Covid. The benefits are undoubtedly many, but they come not without risk.

While the algorithms running in AI programs are efficient at processing thousands of variables and computing millions of data points, they are also opaque by nature in their operation. That is, it is often impossible for users, and even programmers, to understand or obtain explanations of the process the algorithm employs in making decisions (why A and not B?). Moreover, the databases used to feed these programs run the risk of being incomplete and carrying hidden biases, and therefore of replicating and perpetuating existing biases and discriminatory situations among certain populations.
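A toy example (with invented data) shows how this replication happens: a naive model that predicts the majority outcome recorded for each group simply turns past discrimination into a rule:

```python
# Toy illustration with invented data: historical hiring decisions already
# skewed against applicants from zone "B". A naive model that predicts the
# majority outcome for each group simply learns that bias.
HISTORICAL = [
    ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True),
]

def predict_hire(zone: str) -> bool:
    """Predict using the majority outcome recorded for that zone."""
    outcomes = [hired for z, hired in HISTORICAL if z == zone]
    return sum(outcomes) > len(outcomes) / 2

print(predict_hire("A"), predict_hire("B"))  # the past bias becomes the rule
```

Real systems are far more complex, but the mechanism is the same: whatever skew the training data carries, the model reproduces at scale.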

Since 2019, Colombia has had a public policy on digital transformation and artificial intelligence that seeks to create the enabling conditions to promote investment in and development of digital technologies. Similarly, in 2021 the government developed the Ethical Framework for Artificial Intelligence, which offers recommendations and tools to ensure the ethical use of algorithms and thus mitigate the risks associated with their use. Furthermore, in 2021 the Expert Mission on Artificial Intelligence was established with the mandate to formulate recommendations on how Colombia should promote and implement AI for the future of work (employment) and environmental protection.

But artificial intelligence, in theory an infallible and value-adding tool in the economy, is not without risks. Potential harms caused by algorithmic decisions include, for example, discriminatory hiring practices and biased convictions in criminal proceedings. Governments and businesses must weigh the benefits of using AI (e.g., efficiency and productivity gains) against the risks of permanently institutionalizing opaque decision-making processes that impede inclusive and socially conscious development. In turn, it is incumbent upon us as citizens to demand institutional and citizen mechanisms to encourage the development and use of AI technologies with safeguards and correctives that prevent or minimize harm to society.

We all have a legitimate interest in the ethical and responsible use of this technology and we are all stakeholders, so how can we demand social responsibility from developers and implementers that goes beyond business? In the face of this potentially transformative force, open dialogue and participatory discussions where decision-makers consider the concerns of citizens are required.

During the first edition of the "Virtual Lunches" organized by the Edgelands Institute, we were able to facilitate a space for dialogue and participation essential to discussing these issues. One of our objectives at Edgelands is to create spaces for dialogue and build bridges to bring people closer to the discussions taking place in the circles of power and public policy. To this end, we had a direct and open dialogue with Sandra Cortesi, director of the Expert Mission. After her presentation, in which she explained the mandate and work of the mission, there was space for civil society sectors and universities to make their positions known and thus nurture the dialogue and the work of the mission. At Edgelands we aspire to do just that: foster dialogue, and facilitate exchange, participation and discussion on issues that affect us.
Discussing AI during our first Virtual Lunch conversation

New technologies have always met with resistance in society: the steam engine, electricity, computers, and the internet were once rejected out of hand, branded as a passing trend, their risks highlighted more than their benefits. Distrust of the unknown is natural. It is more challenging to approach these technologies with critical thinking, to understand their dimensions, to weigh their costs and benefits. Doing so in spaces like the ones we propose at Edgelands gives us the possibility to influence decision-making and ensure that technologies will benefit rather than harm us. In the cacophony of positions and arguments, at Edgelands we want to harmonize and gather informed, science-based, evidence-based and socially responsible opinions.





Geneva, Switzerland, March 28, 2022.

Geneva, the City Where Spies are Welcome


By Jeanne Cordy, Geneva Research Team

Spanish version


Who, walking by the fenced park of the United Nations in Geneva and looking at the high-level diplomats accredited to enter the immense buildings, has never wanted to put themselves in their fancy shoes and experience the important lives they lead? Well, do not be fooled by the shiny appearances. International Geneva, as we call the cluster of foreign delegations and international organizations massed on the beautiful shores of Lake Geneva, hides a much darker side.

Although the involved parties and the Swiss authorities try to sweep it under the carpet, espionage among nations is standard practice in Geneva. Sometimes, revelations of mysterious events pierce through the veil of secrecy. Stories of Chinese agents following dissidents inside the UN buildings, of Kazakh opponents complaining about cyberattacks and tracing, or of a convicted Russian spy dying in unexplained circumstances upon returning to his homeland have caused stirs in the local press.

The most important revelations so far were made public by Edward Snowden in 2013. Snowden, who at the time worked for the American National Security Agency (NSA) and the Central Intelligence Agency (CIA), revealed to the public the extent of American surveillance across the world. In Geneva, where he was deployed under diplomatic cover between 2007 and 2009, he worked in one of the NSA’s 80 listening stations spread over the globe. There, using cutting-edge espionage technologies, such as hidden satellite antennas for radio wave interception, he helped monitor the movements of people and organizations of interest to US intelligence services. Anyone within International Geneva, as well as Swiss institutions, was under the American eye. Among the highest-profile surveillance targets under his scrutiny in Geneva were the International Atomic Energy Agency (IAEA), the International Telecommunication Union (ITU), the World Trade Organization (WTO), and Swiss banks.

What was the political reaction to revelations of this massive scale of unauthorized activities conducted on Geneva’s soil?


The revelations were deemed “unsurprising” and “credible.” Was action taken to remove the antennas from the roof of the American embassy? They are still there, a decade later.

What explains the inaction of the Swiss state?


On the cantonal level, Geneva has a vested interest in pampering the high-profile international community meeting in its city. Some have called International Geneva a “business model,” and the canton’s statistics report that international organizations alone spend around 3.5 billion Swiss francs each year. On the federal level, the international platform in Geneva is too important for Switzerland’s diplomacy and international clout to be risked, and must be preserved at any cost, even that of ethics.


How is this inaction allowed?


Impunity must be backed by something solid enough for the authorities to legitimize the absence of any sanction when espionage scandals emerge. The first and most important legal arrangement in this context is the 1961 Vienna Convention on Diplomatic Relations. This UN treaty established the immunity of diplomatic agents and the inviolability of the diplomatic mission’s facilities under international law. This means that foreign diplomats in Switzerland are not accountable to Swiss law, and Swiss authorities never have the right to search the buildings, archives or electronic devices of foreign diplomats. As a result, if Swiss authorities had wanted to take any punitive action against American surveillance, for instance, their only resort would have been to declare the American diplomats personae non gratae and expel them from the country. Considering the importance of the American presence for the International Geneva business model, this was unthinkable. The Vienna Convention, however, only applies to foreign national delegations. In Switzerland, international organizations are under another regime, namely the 2007 Host State Act, under which they can sign an agreement with the Swiss Confederation to obtain immunities and privileges similar to those of diplomatic missions.

Besides the legal explanations for the impunity of espionage in International Geneva, there is the laissez-faire attitude of both the cantonal and federal governments, which seem to prioritize economic interests over ethical concerns. Every time a new story emerges, provoking parliamentarians in Bern to call for an investigation, the Swiss authorities never take a strong stance and simply wait for the outrage to die down. It might be argued that Switzerland goes a step beyond neutrality and tolerates certain espionage activities. For instance, although denied by the authorities, reports of a secret agreement allowing Chinese intelligence agents to act freely on Swiss ground have been published.

Why should the residents of the city of Geneva care about espionage?


It might appear that what happens in International Geneva has nothing to do with the locals. But it does, for two reasons. First, espionage is ethically wrong. Surveilling an individual’s every move without their awareness is a breach of the right to privacy. By tolerating widespread espionage operations in their city, the inhabitants of Geneva are indirectly supporting this questionable practice. Second, illegal surveillance practices in International Geneva also go against the locals’ own interests. Indeed, Snowden revealed that private banks and the country's leading telecommunications company, Swisscom, were also regular targets of American espionage. Thus, anyone with a bank account or a Swisscom subscription is potentially in an American espionage database.

To be clear, the Americans are not the only ones practicing espionage in Geneva. Most of the other delegations probably do so as well, but because of the opacity of the practice, we do not know its exact extent. As a result, the potential consequences of espionage in International Geneva, for the city's locals but also for people all around the world, are unknown. In any case, they are harmful and should be stopped.

What is the attitude of the local inhabitants of the city of Geneva regarding all the espionage happening in their city?


If the public is aware of these questionable activities, it does not seem to perceive them as important enough to demand action from the authorities. Do the citizens and their local and federal governments value the business and the prestige brought to their city by its international activities so much as to be willing to sacrifice ethics and the rule of law in International Geneva? Are they scared of threatening the economically favorable status quo by trying to tighten the rules?

These questions remain open but should be answered. To do so, a general citizen discussion about espionage in the city, and in the country more generally, should take place. Ethics and locals' interests have to be weighed against economic and geopolitical concerns. In the end, the citizens of Geneva specifically, and of Switzerland in general, should make an informed choice about the position to take in the face of the massive espionage operations currently under way in Geneva.

Jeanne Cordy is a member of Edgelands' Geneva-based research team and a master's candidate in International Development at the Graduate Institute of International and Development Studies in Geneva. Fascinated by everything global, her research interests range from environmental protection all the way to global security. During her time at Edgelands Institute, she has been focusing on the impact of digital security technologies on the social fabric of the city of Geneva.



Geneva, Switzerland, March 5, 2022.

Bots & Pizzas: Global Trends and Local Responses to the Algorithmic Management of App-Based Delivery Workers

By Fabian Hofmann, Geneva Research Team

Spanish version


Image by Henrique Hanemann via Unsplash

You have probably come across them many times on a crowded city street: the bicycle couriers in their brightly colored outfits with square cargo boxes. App-based food-delivery drivers, working for Uber Eats, Deliveroo, and other online labor platforms, are nowadays a highly visible part of cities all around the world. However, their high visibility stands in stark contrast to the opaque "algorithmic" management practices and precarious working conditions they are subjected to. When we order a pizza using a delivery app, we are rarely conscious of the underlying data collection and digital surveillance technologies that enable timely deliveries. Bearing this in mind, this blog post sets out to shed light on local responses aimed at securing app-based delivery workers' employment and data rights in Geneva, Switzerland.

Anyone See the Boss? Algorithmic Management Practices of Digital Labor Platforms


Central to the proliferation and success of app-based food delivery platforms such as Uber Eats or Deliveroo is the algorithmic management of their disaggregated workforce. Algorithmic management refers to the technological management of work processes and performance, reliant on data collection and digital surveillance. These "soft" surveillance technologies include, among others, monitoring couriers' location via the Global Positioning System (GPS), assessing working hours and job acceptance and completion rates through app usage, and employing facial recognition technologies for fraud detection. This availability of real-time data collected from the workforce allows for (semi-)automated management decisions in the form of "nudges" and penalties that steer worker behavior in the platform's interest. A classic example is Uber's "surge pricing" system, which nudges delivery workers to be available in high-demand locations or during busy hours by offering them higher delivery rates. Moreover, platform workers' performance assessments are replaced by algorithmically generated rating systems based on client reviews, customer feedback, and job acceptance and rejection rates.
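To make the "surge pricing" nudge concrete, the mechanism can be sketched as a simple supply-demand multiplier on the per-delivery rate. The thresholds, cap, and formula below are illustrative assumptions for the sake of explanation, not Uber's actual (proprietary) algorithm:

```python
def surge_rate(open_orders: int, available_couriers: int,
               base_rate: float = 5.0, cap: float = 2.5) -> float:
    """Illustrative nudge: pay more per delivery when demand outstrips supply."""
    if available_couriers == 0:
        return base_rate * cap  # maximum incentive when no couriers are online
    ratio = open_orders / available_couriers
    multiplier = min(max(ratio, 1.0), cap)  # never below base rate, capped
    return round(base_rate * multiplier, 2)

# Quiet evening: supply exceeds demand, so the base rate applies.
print(surge_rate(3, 10))   # 5.0
# Friday rush: 20 orders for 8 couriers triggers the capped higher rate.
print(surge_rate(20, 8))   # 12.5
```

Even in this toy form, the incentive structure is visible: the platform never negotiates with workers directly, it simply adjusts the payout signal and lets financial pressure do the managing.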

Taken together, all these algorithmic management practices entrench the power imbalance between delivery workers and digital labor platforms, resulting in a profoundly exploitative working environment. Through their terms of service agreements, platforms such as Uber Eats and Deliveroo unilaterally determine the conditions for accepting or rejecting work, working hours, deactivating platform accounts, and data ownership. More importantly, these agreements tend to characterize platform workers as freelancers rather than employees of the digital labor platform, excluding them from the workplace and data protections enjoyed in a regular employment relationship. Despite their supposedly independent and flexible contractual relationship, the algorithmic rating systems that match delivery workers with customers effectively limit bicycle couriers' freedom to reject work. Out of fear of the negative impact on their ratings, many delivery workers cannot refuse or cancel jobs because this could lead to reduced work access, financial penalties, or even account deactivation. As a result, most workers in the delivery sector work long, high-intensity shifts, and many experience stress related to long working hours, insufficient pay, and the pressure to drive quickly.
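The rating-driven pressure to accept jobs can be illustrated with a toy model. The moving-average update, the 0.8 access threshold, and the weight below are hypothetical parameters chosen for illustration; real platforms do not disclose their formulas:

```python
def update_rating(rating: float, accepted: bool, weight: float = 0.1) -> float:
    """Exponential moving average over accept (1) / reject (0) decisions."""
    return round((1 - weight) * rating + weight * (1.0 if accepted else 0.0), 3)

def job_access(rating: float, threshold: float = 0.8) -> str:
    """Couriers below the threshold see fewer or worse-paid offers."""
    return "full access" if rating >= threshold else "restricted access"

rating = 0.85                                   # courier starts above threshold
rating = update_rating(rating, accepted=False)  # a single rejected job
print(rating, job_access(rating))               # 0.765 restricted access
```

The point of the sketch is that under such a scheme a single refusal can tip a courier below the access threshold, which is exactly why the nominal "freedom to reject work" is illusory in practice.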

Here to Stay? Increased Digital Surveillance of Delivery Workers in Response to Covid-19


The Covid-19 pandemic has brought the poor working conditions and data vulnerabilities of app-based service workers to the fore, as digital labor platforms have introduced new monitoring and control mechanisms to ensure health and safety. For instance, delivery workers were required to do regular temperature scans, inform their supervisor of the result, and send the platform selfies to prove that they were wearing protective equipment such as face masks. Platforms' sanitary measures revealed a skewed stakeholder focus: measures such as contactless delivery and temperature scans were geared more towards consumer protection than worker safety. Even though platform workers were at a higher risk of contracting Covid-19, many of them were unable to discontinue their work during the pandemic due to their dependency on income generated through their delivery rides. As a result, many delivery drivers faced the impossible choice between infection and impoverishment. Furthermore, increased data collection and workplace surveillance under the guise of guaranteeing health and safety have infringed on delivery workers' privacy, revealing that employment and data rights are inextricably linked.

On a more general level, the conflation of safety with surveillance measures showcased the central logic behind workplace surveillance: more and better data begets better oversight and management. Increased surveillance of platform workers allows for better algorithmic control and improves customers' experiences, thereby generating more revenue for digital labor platforms. The risk of this becoming normalized in the pandemic's aftermath is genuine, as many platforms have not provided any guarantees that they will discontinue the newly adopted surveillance practices. This would aggravate the already existing informational asymmetry between delivery platforms and their riders, with many platform workers being unaware of formal processes to obtain access to their data. In sum, excessive surveillance and algorithmic controls severely undermine platform workers' freedom to work and their ability to bargain for more secure employment conditions and data rights.

What Can Be Done? Landmark Legislation Regarding Delivery Workers' Employment Rights in Geneva


In June 2019, the canton of Geneva recognized the vulnerability of platform workers and called upon Uber Eats, Eat.ch, and other app-based food delivery platforms active in the city to "respect the law" and recognize their drivers as employees. While Smood.ch and Eat.ch followed suit and offered delivery workers employment contracts and social guarantees, Uber Eats appealed to the administrative tribunal in Geneva. In 2020, the court ruled against Uber Eats and concluded that the meal delivery service is effectively an employer and is obliged to hire its drivers according to the cantonal minimum wage. To continue its delivery business in Geneva, the platform was subsequently forced to hire its drivers through an intermediary personnel agency named Chaskis SA. While the food delivery workers still receive their orders from Uber Eats, their contracts with Chaskis now provide them with social benefits, labor protections, and employment stability. Meanwhile, Uber Eats is continuing its legal battle at the federal level, appealing the cantonal ruling at the federal tribunal in Lucerne. The federal judges are expected to uphold the initial ruling, which would have a strong signaling effect for legislation in the rest of Switzerland.

To conclude, app-based food-delivery workers face many challenges, ranging from exploitative working conditions and insufficient social protection to circumscribed data rights. Power asymmetries between workers and platforms are exacerbated by excessive surveillance, algorithmic management practices, and reduced bargaining power. These worrying global trends of increasing workplace surveillance and control have intensified during the Covid-19 pandemic and risk being normalized in its aftermath. The example of Geneva's landmark ruling against Uber Eats and the alternative business models of local meal delivery services, such as Smood.ch, highlight that to seize the economic opportunities of the platform economy, digital labor platforms need to be embedded in social welfare structures. Only if we take political action to protect platform workers' vulnerabilities and secure their employment and data rights can digital labor platforms contribute to sustainable and inclusive economic growth. As responsible consumers and attentive citizens, we all must bear this in mind the next time we order a pizza on a delivery app.

Fabian Hofmann is a member of Edgelands' Geneva-based Research Team. He holds a BA in Political Science and Sociology from the University of Basel, Switzerland, and is currently pursuing a master's in International Relations and Political Science at the Graduate Institute of International and Development Studies.





edgelands.institute rue de candolle, 20
1211 geneva 
switzerland
