Platform Values: Conflicting Rights, Artificial Intelligence and Tax Avoidance
The purpose of this call is to gather a diverse range of analytical perspectives on the multiform notion of platform values. The special issue resulting from this call will be presented in a dedicated workshop at the IGF 2019, hosted by the Government of Germany in Berlin from 25 to 29 November 2019. To guarantee that authors receive the most extensive feedback on their contributions, an additional workshop for feedback and paper discussion will be hosted at FGV Law School, in Rio de Janeiro, Brazil, on 27 July 2019 (see the timeframe at the bottom of this call). The authors of the papers selected after the first round of reviews will be invited to come – at their own expense – to present their draft papers at the workshop.
Platform regulations are having an enormous impact on the lives of several billion individuals, and this impact is poised to increase over the next decade. This special issue aims to explore three of the most crucial points of contention with regard to the values underlying the operation of digital platforms: the dispute resolution mechanisms they design and the ways such mechanisms are structured to deal with conflicting rights and principles; the values that can or should be baked into platforms’ automated decision-making and the rights of appropriation in relation to the development of artificial intelligence systems; and the tax avoidance strategies that are frequently pursued by tech giants to minimise their fiscal responsibility across the multiple jurisdictions in which they provide their services.
This Call for Papers celebrates five years of activity of the UN IGF Coalition on Platform Responsibility1. Over those years, the Coalition has explored the role of digital platforms as gateways to speech, innovation and value creation; it has highlighted that their ascendance as central elements of our society, economy and public sphere is redefining the concepts of “private” and “public”, and challenging conventional approaches to regulation and governance. Along those lines, this Call for Papers starts from the consideration that, to guarantee the balance and sustainability of governance systems, the exercise of power should be constrained. To do so, a deliberative process over the aims, mechanisms and boundaries of regulation is needed. Accordingly, when private entities rise to the level of quasi-sovereigns or private regulators, it is natural to expect discussion, shared understanding and scrutiny of the choices and trade-offs embedded in their private ordering. Yet, there is little discussion of the ways in which platforms are generating, shaping and championing values in an increasingly intermediated society.
More work is needed to question what counts as value and how value judgments ought to be made in these hybrid spaces, exploring the elements that should underpin legal and policy-making initiatives, and the risks that may arise when decision-making remains the sole province of contractual arrangements and self-regulation. Contributions to this work should enquire whether it is appropriate for deliberations over platform values and user rights to be exclusively driven by the economic imperatives of shareholders, and whether they should not also take into account the broader set of concerns and expectations of the stakeholders affected by platform regulations.
From this perspective, we call for papers providing analyses and putting forward concrete solutions and policy proposals with regard to platform values. This call is therefore aimed at papers analysing conflicting rights, artificial intelligence systems and tax avoidance strategies with regard to digital platforms. In particular, the call targets analyses regarding:
1. Conflicting rights.
The first set of governance questions pertains to the intersection of conflicting rights and values: should platforms prioritise certain rights or principles over others? Are they best placed to identify which rights should be privileged when – privately – regulating social interactions? How should such balancing be conducted between conflicting rights of the same nature, for example between conflicting economic freedoms or conflicting fundamental human rights? What is the relevance of the sources of those rights, for instance in conflicts between rights enshrined in terms of service and diverging conceptions of those rights under the “law of the land”? Should principles, community guidelines and rules of practice (including internal precedents) be weighed any differently as part of balancing? Should balancing be ruled out for certain conflicts?
2. Artificial intelligence.
This second set of questions can be seen as twofold. On the one hand, it relates to value appropriation, in particular in the scramble for data and the insights that can be extracted from it to power a new breed of artificial intelligence applications. Since data is a key input for the improvement of algorithms, profiling, and the elaboration of new cognitive services, should data subjects and other players in the platform ecosystem share in the value generated by their marginal input? Should platforms be the only beneficiaries of this learning process, or should the law constrain their ability to exclude others (including consumers, workers, competitors and complementors) from sharing in the benefits generated by the platform ecosystem? On the other hand, the development and implementation of artificial intelligence systems to automate decision-making functions calls into question the values that should be “baked” into such systems in order to minimise negative consequences and strive towards the design and development of ethical automated systems. In this respect, what are the fundamental values that should orient the design, development and deployment of artificial intelligence within platforms? How can those values be appropriately incorporated into artificial intelligence solutions implemented within platforms? Are the principles of transparency, non-discrimination and due process sufficient to prevent unfair value extraction, or do we need stronger intervention?
3. Tax avoidance.
Finally, it is necessary to appreciate whether platforms provide long-term value with their functionalities (for example, bringing together different sides of a market) or rather primarily engage in value extraction (for instance, limiting choice and deriving advantages by favouring certain kinds of behaviours or business models) and regulatory arbitrage. Defining how and where value is created is crucial in determining the tax regime applicable to their activities, and in identifying unfair or fraudulent transfers of wealth. How should value be constructed for tax purposes, and how should regulators around the world deal with global tech giants? Do recent legislative initiatives on digital VAT mark the beginning of an inevitable race to the bottom to attract investment by global platforms, or do they set the foundations for interstate cooperation? Are existing reflections, such as the OECD’s work on transfer pricing and Base Erosion and Profit Shifting, sufficiently mature to be implemented by states? And, most importantly, are states willing and able to implement existing proposals? Is a national or local tax on intermediaries for data collection and aggregation a viable way to account for the transfer of value that takes place between users and platforms?