Svoboda | Graniru | BBC Russia | Golosameriki | Facebook
Skip to main content
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It... more
The paper introduces a new model of telepresence. First, it criticises the standard model of presence as epistemic failure, showing it to be inadequate. It then replaces it with a new model of presence as successful observability. It further provides reasons to distinguish between two types of presence, backward and forward. The new model is then tested against two ethical issues whose nature has been modified by the development of digital information and communication technologies, namely pornography and privacy, and shown to be effective.
The Copernican revolution displaced us from the center of the universe. The Darwinian revolution displaced us from the center of the biological kingdom. And the Freudian revolution displaced us from the center of our mental lives. Today,... more
The Copernican revolution displaced us from the center of the universe. The Darwinian revolution displaced us from the center of the biological kingdom. And the Freudian revolution displaced us from the center of our mental lives. Today, Computer Science and digital ICTs are causing a fourth revolution, radically changing once again our conception of who we are and our “exceptional centrality.” We are not at the center of the infosphere. We are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and smart artifacts a global environment ultimately made of information. Having changed our views about ourselves and our world, are ICTs going to enable and empower us, or constrain us? This paper argues that the answer lies in an ecological and ethical approach to natural and artificial realities. It posits that we must put the “e” in an environmentalism that can deal successfully with the new issues caused by the fourth revolution.
The topic of this paper may be introduced by fast zooming in and out of the philosophy of information. In recent years, philosophical interest in the nature of information has been increasing steadily. This has led to a focus on semantic... more
The topic of this paper may be introduced by fast zooming in and out of the philosophy of information. In recent years, philosophical interest in the nature of information has been increasing steadily. This has led to a focus on semantic information, and then on the logic of being informed, which has attracted analyses concentrating both on the statal sense in which S holds the information that p (this is what I mean by logic of being informed in the rest of this article) and on the actional sense in which S becomes informed that p. One of the consequences of the logic debate has been a renewed epistemological interest in the principle of Dretske. This is the topic of the paper, in which I seek to defend PIC against the sceptical objection. If I am successful, this means-and we are now zooming out-that the plausibility of PIC is not undermined by the sceptical objection, and therefore that a major epistemological argument against the formalization of the logic of being informed of distribution discriminates between normal and non-normal modal logics, this means that a potentially good reason to look for a formalization of the logic of being informed among the non-normal modal logics, which reject the axiom, is informed in terms of the normal modal logic B (also known as KTB argue that the sceptical objection against PIC fails, so it is not a good reason to abandon the normal modal logic B as a good formalization of the logic of being informed.
The paper analyses six ethical challenges posed by cloud computing, concerning ownership, safety, fairness, responsibility, accountability and privacy. The first part defines cloud computing on the basis of a resource-oriented approach,... more
The paper analyses six ethical challenges posed by cloud computing, concerning ownership, safety, fairness, responsibility, accountability and privacy. The first part defines cloud computing on the basis of a resource-oriented approach, and outlines the main features that characterise such technology. Following these clarifications, the second part argues that cloud computing reshapes some classic problems often debated in information and computer ethics. To begin with, cloud computing makes possible a complete decoupling of ownership, possession and use of data and this helps to explain the problems occurring when different providers of cloud computing retain or relinquish the right to use or own users‘ data. The problem of safety in cloud computing is coupled to that of reliability, insofar as users have to trust providers to preserve their data, applications and content in a reliable manner. It is argued that, in this context, data insurance could play an important role. Regarding fairness, the paper argues that cloud computing is already reshaping the nature of the Digital. Responsibility, accountability and privacy close the ethical analysis of cloud computing. In this case, the thesis is that the necessity to account for the actions of cloud computing users imposes delicate trade-offs between users‘ privacy and the traceability of their operations.
‘First, do no harm,’ is the cornerstone of contemporary Western medical ethics. The practice of evidence-based medicine supports this principle by advocating for the full and timely publication of all studies examining effectiveness and... more
‘First, do no harm,’ is the cornerstone of contemporary Western medical ethics. The practice of evidence-based medicine supports this principle by advocating for the full and timely publication of all studies examining effectiveness and safety of different interventions and treatments. Given how important robust and valid evidence is to upholding the first-principle, it is surprising that policymakers, regulators and legislators appear to have paid so little heed to the way in which new technologies – particularly digital health interventions (exemplified by health apps which make claims of efficacy) – are changing the value, nature and reliability of medical evidence. Individuals are being encouraged to become ‘empowered’ to manage their own health through the use of often poorly evidenced and barely governed digital health interventions (health apps). This raises questions of trust and has the potential for widespread direct or indirect harm. In order to contribute to this task, the article sets out to establish the baseline quality of evidence available to support the claims made by apps on the Apple App Store. It does so by conducting a scoping study of the evidence available to support a purposive sample of apps on the Apple App Store. The results show that the evidence available to support the claims made by the health apps analysed is often unavailable or of questionable quality. The article concludes with X=number specific actions that should be taken to improve the quality of evidence available for health apps and thus protect individuals and groups from harm.
This paper discusses the influence of Sextus Empiricus' works on Renaissance culture and the recovery of Pyrrhonism during the fifteenth and sixteenth centuries. It investigates what primary and secondary sources were available at the... more
This paper discusses the influence of Sextus Empiricus' works on Renaissance culture and the recovery of Pyrrhonism during the fifteenth and sixteenth centuries. It investigates what primary and secondary sources were available at the time, and who knew and made use of such sources. The article concludes that the dearth of Pyrrhonic arguments in Renaissance literature was due to the prevailing and incompatible culture of humanism rather than to a lack of interest in Sextus Empiricus’ works during this period.
This paper explores a fundamental issue in epistemology, namely, that the world is completely different in general from the way our sensory impacts and our internal makeup lead us to believe (Stroud 1994). Three hypotheses are considered:... more
This paper explores a fundamental issue in epistemology, namely, that the world is completely different in general from the way our sensory impacts and our internal makeup lead us to believe (Stroud 1994). Three hypotheses are considered: first, that there is something like an independent external reality; second, that the epistemic relationship occurring between this reality and the knowing subject is somehow such as not to allow the latter to know the intrinsic nature of the former; and finally, that the human knower has a spontaneous desire to know what the intrinsic nature of external reality is.
An important lesson that philosophy can learn from the Turing Test and computer science more generally concerns the careful use of the method of Levels of Abstraction (LoA). In this paper, the method is first briefly summarised. The... more
An important lesson that philosophy can learn from the Turing Test and computer science more generally concerns the careful use of the method of Levels of Abstraction (LoA). In this paper, the method is first briefly summarised. The constituents of the method are "observables", collected together and moderated by predicates restraining their "behaviour". The resulting collection of sets of observables is called a "gradient of abstractions" and it formalises the minimum consistency conditions that the chosen abstractions must satisfy. Two useful kinds of gradient of abstraction-disjoint and nested-are identified. It is then argued that in any discrete (as distinct from analogue) domain of discourse, a complex phenomenon may be explicated in terms of simple approximations organised together in a gradient of abstractions. Thus, the method replaces, for discrete disciplines, the differential and integral calculus, which form the basis for understanding the complex analogue phenomena of science and engineering. The result formalises an approach that is rather common in computer science but has hitherto found little application in philosophy. So the philosophical value of the method is demonstrated by showing how making the LoA of discourse explicit can be fruitful for phenomenological and conceptual analysis. To this end, the method is applied to the Turing Test, the concept of agenthood, the definition of emergence, the notion of artificial life, quantum observation and decidable observation. It is hoped that this treatment will promote the use of the method in certain areas of the humanities and especially in philosophy.
Throughout history, dogmatists and sceptics of various branches have been inclined to agree on the description of man as a 'filaletes zoon' - a 'truth-loving animal' as Sextus Empiricus had defined him - on the fact that 'the desire to... more
Throughout history, dogmatists and sceptics of various branches have been inclined to agree on the description of man as a 'filaletes zoon' - a 'truth-loving animal' as Sextus Empiricus had defined him - on the fact that 'the desire to know is innate in man' and on interpreting this as the ideal force inspiring the search for knowledge. The two parties have, however, always dissented considerably about the consequences to be drawn from such a vision of man as a knowledge-seeker. This paper seeks to clarify the discrepancies occurring between the sceptical and the dogmatic understanding of man's epistemophilics impulse, through first using the metaphysical argument ex communi omnium sciendi desiderio proposed by Pierre de Villemandy in his Scepticismus Debellatus, and then Cicero's more sceptical and purely anthropological reading of the characterization of man as a knowledge-seeker. The paper then goes on to discuss the salient features that in different times and manners have characterized the philosophical debate on the topic.
On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational... more
On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the... more
Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.
Since the first case was reported to the World Health Organisation in December 2019, SARS-CoV-2 (COVID-19) has caused social and economic devastation on a scale not seen since World War 2. As the milestone of 2 years of ‘living with the... more
Since the first case was reported to the World Health Organisation in December 2019, SARS-CoV-2 (COVID-19) has caused social and economic devastation on a scale not seen since World War 2. As the milestone of 2 years of ‘living with the virus’ approaches, Governments and businesses are desperate to develop interventions that can facilitate the reopening of society whilst still protecting public health. As the roll-out of COVID-19 vaccinations has gathered pace worldwide, particularly in wealthier countries, those responsible for developing such interventions have begun to focus on the use of digital ‘COVID-19 Vaccine Passports’, which can be used to prove that an individual has had an approved COVID-19 vaccination (both doses where applicable). Governments hope that Vaccine Passports may be used to facilitate international travel and permit increased domestic liberties, for example allowing people to access public venues, to attend large gatherings, or to return to work without compromising personal safety and public health. “Yellow Fever certificates”, required to enter a specific list of countries maintained by the World Health Organisation, provide a precedent for this type of intervention. However, there are concerns that the use of COVID-19 Vaccine Passports could be viewed as a mechanism for introducing a mandatory vaccination policy, and there are also concerns that due to issues related to the unequal global distribution of effective vaccines and ‘the digital divide’ their use could exacerbate inequalities.

Here we discuss the ethical and human rights implications of COVID-19 vaccine passports, based on a systematised literature review and documentary analysis. We find that in the context of a global public health emergency, COVID-19 vaccine passports (or, as we discuss, the broader status passes) are ethically and legally permissible under relevant human rights and international health regulations, provided they are designed, implemented, and used in accordance with the least infringement principle and the value of equality. We then set out 18 concrete recommendations for supranational bodies, national governments, and businesses to help ensure they develop and deploy COVID-19 Vaccine Passports accordingly.
The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its "digital sovereignty" as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be... more
The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its "digital sovereignty" as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be defined, undermining efforts to assess the success of the EU's digital sovereignty agenda. The task of this paper is to reduce this uncertainty by i) analysing how digital sovereignty has been discussed by EU institutional actors and placing this in a wider conceptual framework, ii) mapping specific policy areas and measures that EU institutional actors cite as important for strengthening digital sovereignty, iii) assessing the effectiveness of current policy measures at strengthening digital sovereignty, and iv) proposing policy solutions that go above and beyond current measures and address existing gaps. To do this, we introduce a conceptual understanding of digital sovereignty and then empirically ground this within the specific EU context via an analysis of a corpus of 180 EU webpages that have mentioned the term "digital sovereignty" within the past year. We find that existing policies, in particular those pertaining to data governance, help to achieve some of the EU's specific aims in regard to digital sovereignty, such as conditioning outward data flows, but they are more limited concerning other aims, like advancing the EU's competitiveness and regulating the private sector. This is problematic insofar as it constrains the EU's ability to safeguard and promote its values. The policy solutions we propose represent steps towards the further strengthening of the EU's digital sovereignty and firmer protection of EU values.
In September 2021, the UK government released for public consultation a set of proposed reforms to its data protection regime. The reforms are part of a broader national strategy, which aims to incentivise innovation and make the UK an... more
In September 2021, the UK government released for public consultation a set of proposed reforms to its data protection regime. The reforms are part of a broader national strategy, which aims to incentivise innovation and make the UK an international "data hub". Some of the suggested reforms pursue these goals by lowering data protection standards that were acquired with the adoption of the GDPR into UK law. In this article, we analyse the major proposals from a data protection perspective. We argue that some of the reforms undermine data privacy, regulatory probity, harm prevention, and have contraindications for data subjects and businesses, especially when considering the "Brussels effect" and the growing international compliance with the EU GDPR. We also highlight the reforms that have the potential to facilitate data-driven innovation without weakening data protection standards, and suggest other reforms of a similar kind.
Italian Abstract: Questo volume raccoglie i preprint di articoli, saggi, prefazioni e altri brevi testi occasionali che ho pubblicato in italiano tra il 2011 e il 2021. È una sorta di diario, in cui temi, idee, e approcci ricorrono anche... more
Italian Abstract: Questo volume raccoglie i preprint di articoli, saggi, prefazioni e altri brevi testi occasionali che ho pubblicato in italiano tra il 2011 e il 2021. È una sorta di diario, in cui temi, idee, e approcci ricorrono anche a distanza di anni.
Nel raccogliere questi testi ho cercato di mantenere gli originali intoccati, anche se a volte le versioni pubblicate sono poi state modificate per ragioni editoriali. Ho corretto solo gli errori linguistici e quelli fattuali quando sono riuscito a identificarli. I titoli sono miei, in parte per differenziare questi preprints dalle versioni pubblicate successivamente, in parte perché i titoli dei giornali sono di solito scelti senza consultare l’autore (l’eccezione in questa raccolta è rappresentata da Innovazione del Corriere della Sera).
Sono molto grato a tutte le persone che hanno reso possibili queste proficue collaborazioni, nel corso di molti anni e in particolare a Luca De Biase (Nòva24 – Il Sole 24 Ore), Marco Pacini (L’Espresso), Valeria Palermi (D - la Repubblica delle donne) e Massimo Sideri (Corriere Innovazione) per i consigli e per avermi insegnato il mestiere della scrittura breve. Se non ho ancora imparato abbastanza è solo colpa mia.
Il titolo fa riferimento al numero di battute (spazi inclusi) disponibili per un articolo anche lungo da pubblicare su un quotidiano, un settimanale, o un mensile.

English Abstract: This volume collects the preprints of articles, essays, prefaces and other occasional short texts that I published in Italian between 2011 and 2021. It is a sort of diary, in which themes, ideas, and approaches recur even after many years.

In collecting these texts I have tried to keep the originals untouched, even if sometimes the published versions have since been modified for editorial reasons. I only corrected the linguistic and factual errors when I was able to identify them. The titles are mine, in part to differentiate these preprints from the versions published later, in part because the newspaper headlines are usually chosen without consulting the author (the exception in this collection is represented by Innovazione del Corriere della Sera).

I am very grateful to all the people who have made these fruitful collaborations possible, over many years and in particular to Luca De Biase (Nòva24 - Il Sole 24 Ore), Marco Pacini (L'Espresso), Valeria Palermi (D - la Repubblica delle Donne) and Massimo Sideri (Corriere Innovazione) for the advice and for teaching me the craft of short writing. If I haven't learned enough yet, it's my fault.

The title refers to the number of characters (including spaces) available for an article, even a long one, to be published in a newspaper, a weekly, or a monthly.
As China and the United States strive to be the primary global leader in AI, their visions are coming into conflict. This is frequently painted as a fundamental clash of civilisations, with evidence based primarily around each country's... more
As China and the United States strive to be the primary global leader in AI, their visions are coming into conflict. This is frequently painted as a fundamental clash of civilisations, with evidence based primarily around each country's current political system and present geopolitical tensions. However, such a narrow view claims to extrapolate into the future from an analysis of a momentary situation, ignoring a wealth of historical factors that influence each country's prevailing philosophy of technology and thus their overarching AI strategies. In this article, we build a philosophy-oftechnology-grounded framework to analyse what differences in Chinese and American AI policies exist and, on a fundamental level, why they exist. We support this with Natural Language Processing methods to provide an evidentiary basis for our analysis of policy differences. By looking at documents from three different American presidential administrations-Barack Obama, Donald Trump, and Joe Biden-as well as both national and local policy documents (many available only in Chinese) from China, we provide a thorough comparative analysis of policy differences. This article fills a gap in US-China AI policy comparison and constructs a framework for understanding the origin and trajectory of policy differences. By investigating what factors are informing each country's philosophy of technology and thus their overall approach to AI policy, we argue that while significant obstacles to cooperation remain, there is room for dialogue and mutual growth.
In September 2021, the UK government released a set of proposed reforms to its data protection regime for public consultation. The reforms are part of a broader national strategy, which aims to incentivise data-driven innovation and make... more
In September 2021, the UK government released a set of proposed reforms to its data protection regime for public consultation. The reforms are part of a broader national strategy, which aims to incentivise data-driven innovation and make the UK an international “data hub”. In this article, we argue that taken together, the proposed reforms risk (1) undermining the data subjects’ rights that were ensured with the adoption of the EU GDPR into UK law; (2) introducing an accountability framework that is inadequate to address harm prevention; and (3) eroding the regulatory probity of the Information Commissioner’s Office (ICO). We also comment on the analysis of the expected impact of the reform, discussing the negative impact for both public and private stakeholders, especially in light of the “Brussels effect” and growing international compliance with the EU GDPR.
In December 2020, the European Commission issued the Digital Services Act (DSA), a legislative proposal for a single market of digital services, focusing on fundamental rights, data privacy, and the protection of stakeholders. The DSA... more
In December 2020, the European Commission issued the Digital Services Act (DSA), a legislative proposal for a single market of digital services, focusing on fundamental rights, data privacy, and the protection of stakeholders. The DSA seeks to promote European digital sovereignty, among other goals. This article reviews the literature and related documents on the DSA to map and evaluate its ethical, legal, and social implications. It examines four macro-areas of interest regarding the digital services offered by online platforms. The analysis concludes that, so far, the DSA has led to contrasting interpretations, ranging from some stakeholders expecting it to be more challenging for gatekeepers, to others objecting that the proposed obligations are unjustified. The article contributes to this debate by arguing that a more robust framework for the benefit of all stakeholders should be defined.
NB I made the mistake of pre-publishing this text before submitting it to the editor's of the Festschrift. For this reason, he decided it could no longer be included. My mistake.
In this paper, I address some key points raised by Massimo Durante about my work, to understand philosophy as conceptual design, briefly discuss the debate on positivism vs naturalism in the philosophy of law, argue that philosophy needs... more
In this paper, I address some key points raised by Massimo Durante about my work, to understand philosophy as conceptual design, briefly discuss the debate on positivism vs naturalism in the philosophy of law, argue that philosophy needs to be “urgent”, and defend the view that a relational philosophy cannot be based only on binary relations.
We have developed capAI, a conformity assessment procedure for AI systems, to provide an independent, comparable, quantifiable, and accountable assessment of AI systems that conforms with the proposed AIA regulation. By building on the... more
We have developed capAI, a conformity assessment procedure for AI systems, to provide an independent, comparable, quantifiable, and accountable assessment of AI systems that conforms with the proposed AIA regulation. By building on the AIA, capAI provides organisations with practical guidance on how high-level ethics principles can be translated into verifiable criteria that help shape the design, development, deployment and use of ethical AI. The main purpose of capAI is to serve as a governance tool that ensures and demonstrates that the development and operation of an AI system are trustworthy – i.e., legally compliant, ethically sound, and technically robust – and thus conform to the AIA.
Events such as the riot at the United States Capitol and tightening constraints on the Russian public sphere have highlighted the socio-political significance of app store governance. This is dominated by Apple and Google as operators of... more
Events such as the riot at the United States Capitol and tightening constraints on the Russian public sphere have highlighted the socio-political significance of app store governance. This is dominated by Apple and Google as operators of the two largest smartphone platforms. In this article, we analyse two case studies: the removals from app stores in 2021 of the fringe American social media app Parler and of the Russian opposition app Smart Voting. On the basis of this analysis, we identify three critical limitations for app store governance at present: Apple's and Google's dominance, the substantive opacity of their respective app store guidelines, and the procedural arbitrariness with which these guidelines are applied to specific cases. We then assess the potential efficacy of legislative proposals in the EU and US to intervene in this domain and conclude by offering some recommendations supporting more efficacious and socially responsible app store governance.
This is a short article published in Italian by ENEL magazine ReS in 2004, to celebrate Kant's anniversary (1724-1804). It provides a simple overview of some classic themes in Kant's philosophy.
This is a short article published in Italian by ENEL magazine ReS in 2002. It provides a simple overview of what the nature of scepticism is and what epistemological strategies are available to overcome it.
The U.S. Algorithmic Accountability Act of 2022 (US AAA) constitutes a pragmatic approach to balancing the benefits and risks brought by automated decision systems. Yet there is still room for improvement. In this correspondence, we... more
The U.S. Algorithmic Accountability Act of 2022 (US AAA) constitutes a pragmatic approach to balancing the benefits and risks brought by automated decision systems. Yet there is still room for improvement. In this correspondence, we highlight both promising aspects of the bill and areas in which further revisions or clarifications are needed.
On March 29 th 2023, the United Kingdom (UK) government published its AI Regulation White Paper, a "proportionate and pro-innovation regulatory framework" for AI designed to support innovation, identify and address risks, and establish... more
On March 29 th 2023, the United Kingdom (UK) government published its AI Regulation White Paper, a "proportionate and pro-innovation regulatory framework" for AI designed to support innovation, identify and address risks, and establish the UK as an "AI superpower". In this article, we assess whether the approach outlined in this policy document is appropriate for meeting the country's stated ambitions. We argue that the proposed continuation of a sector-led approach, which relies on existing regulators addressing risks that fall within their remits, could support contextually appropriate and novel AI governance initiatives. However, a growing emphasis from the central government on promoting innovation through weakening checks, combined with domestic tensions between Westminster and the UK's devolved nations, will undermine the effectiveness and ethical permissibility of UK AI governance initiatives. At the same time, the likelihood of the UK's initiatives proving successful is contingent on relationships with, and decisions from, other jurisdictions, particularly the European Union. If left unaddressed in subsequent policy, these factors risk transforming the UK into a reluctant follower, rather than a global leader, in AI governance. We conclude this paper by outlining a set of recommendations for UK policymakers to mitigate the domestic and international risks associated with the country's current trajectory.
In this article, I argue that the development of AI in terms of successful agency without intelligence does not lead to any fanciful realisation of science fiction scenarios (Singularity), which are at best distracting and at worst... more
In this article, I argue that the development of AI in terms of successful agency without intelligence does not lead to any fanciful realisation of science fiction scenarios (Singularity), which are at best distracting and at worst irresponsible; and that any denial of AI as a revolution in how we create, control, and conceptualise agency is also wrong. The article concludes by highlighting how this calls for ethical foresight and design of the kind of infosphere and information societies we would like to develop.
On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the... more
On the whole, the U.S. Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement. This commentary highlights how the US AAA can both inform and learn from the European Artificial Intelligence Act (EU AIA).
Today, Open Source Intelligence (OSINT), i.e. information derived from publicly available sources, makes up between 80 and 90 per cent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services... more
Today, Open Source Intelligence (OSINT), i.e. information derived from publicly available sources, makes up between 80 and 90 per cent of all intelligence activities carried out by Law Enforcement Agencies (LEAs) and intelligence services in the West. Developments in data mining, machine learning, visual forensics and, most importantly, the growing computing power available for commercial use, have enabled OSINT practitioners to speed up, and sometimes even automate, intelligence collection and analysis, obtaining more accurate results more quickly. As the infosphere expands to accommodate everincreasing online presence, so does the pool of actionable OSINT. These developments raise important concerns in terms of governance, ethical, legal, and social implications (GELSI). New and crucial oversight concerns emerge alongside standard privacy concerns, as some of the more advanced data analysis tools require little to no supervision. This article offers a systematic review of the relevant literature. It analyses 571 publications to assess the current state of the literature on the use of AIpowered OSINT (and the development of OSINT software) as it relates to the GELSI framework, highlighting potential gaps and suggesting new research directions.
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best... more
While the use of artificial intelligence (AI) systems promises to bring significant economic and social benefits, it is also coupled with ethical, legal, and technical challenges. Business leaders thus face the question of how to best reap the benefits of automation whilst managing the associated risks. As a first step, many companies have committed themselves to various sets of ethics principles aimed at guiding the design and use of AI systems. So far so good. But how can well-intentioned ethical principles be translated into effective practice? And what challenges await companies that attempt to operationalize AI governance? In this article, we address these questions by drawing on our first-hand experience of shaping and driving the roll-out of AI governance within AstraZeneca, a biopharmaceutical company. The examples we discuss highlight challenges that any organization attempting to operationalize AI governance will have to face. These include questions concerning how to define the material scope of AI governance, how to harmonize standards across decentralized organizations, and how to measure the impact of specific AI governance initiatives. By showcasing how AstraZeneca managed these operational questions, we hope to provide project managers, CIOs, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks within other organizations with generalizable best practices. In essence, companies seeking to operationalize AI governance are encouraged to build on existing policies and governance structures, use pragmatic and action-oriented terminology, focus on risk management in development and procurement, and empower employees through continuous education and change management.
The US is promoting a new vision of a "Good AI Society" through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights... more
The US is promoting a new vision of a "Good AI Society" through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights violations. Furthermore, it may have some federal impact, but it is non-binding, and without concrete legislation, the private sector is likely to ignore it.
Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This... more
Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This article fills this gap in the literature by introducing a new, soft ethics and governance framework that explicitly recognizes how the rise of an experimentation culture in industry settings brings not only unprecedented opportunities to businesses but also significant responsibilities. More precisely, the article (a) introduces a set of principles to encourage ethical and responsible experimentation to protect users, customers, and society; (b) argues that ensuring compliance with the proposed principles is a complex challenge unlikely to be addressed by resorting to a one-solution response; (c) discusses the relevance and effectiveness of several mechanisms and policies in educating, governing, and incentivizing companies conducting online controlled experiments; and (d) offers a list of prompting questions specifically designed to help and empower practitioners by stimulating specific ethical deliberations and facilitating coordination among different groups of stakeholders.
In this short article, I discuss the nature of two kinds of disasters: tragic and catastrophic. I argue that climate change is a tragedy, likely to be addressed more seriously only once a catastrophe occurs. This is the terrible hope.
Within Just War Theory, the Doctrine of Double Effect (DDE) modifies the principle of distinction by reference to the intent of an act: the unintentional though foreseeable killing of noncombatants is morally permissible (providing a... more
Within Just War Theory, the Doctrine of Double Effect (DDE) modifies the principle of distinction by reference to the intent of an act: the unintentional though foreseeable killing of noncombatants is morally permissible (providing a proportionality clause is met), and the intentional killing of noncombatants is morally impermissible. One concern is that the development of Lethal Autonomous Weapon Systems (LAWS) has superseded DDE because of the separation they introduce between the agent with intention-the human operator-and the agent who targets-the LAWS. As a result, DDE may be incapable of capturing, and thus evaluating, noncombatant deaths resulting from using LAWS. In this article, we address this concern by proposing a revised account of DDE to address cases of noncombatant harm caused by LAWS. We argue that when LAWS cause harm to noncombatants, a distinctive moral wrong occurs because that harm is instrumental to LAWS deployment. This wrong is a consequence of the fact that military organisations deploying LAWS involve noncombatants in circumstances useful to the military organisation precisely by way of involving those noncombatants.
In this article we analyze whether Twitter can be used to detect barriers to voting at polling places. We use 20,322 tweets geolocated to U.S. states that match a series of keywords on the 2010, 2012, 2014, 2016, and 2018 general election... more
In this article we analyze whether Twitter can be used to detect barriers to voting at polling places. We use 20,322 tweets geolocated to U.S. states that match a series of keywords on the 2010, 2012, 2014, 2016, and 2018 general election days. We fine-tune BERTweet, a pre-trained language model, using a training set of 6,365 tweets labeled as issues or non-issues. We develop a model with an accuracy of 96.9% and a recall of 72.2%, and another model with an accuracy of 90.5% and a recall of 93.5%, far exceeding the performance of baseline models. Based on these results, we argue that these BERTweet-based models are promising methods for detecting polling place issues on U.S. election days. We suggest that outputs from these models can be used to supplement existing voter protection efforts and to research the impact of policies, demographics, and other variables on voting access.
The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has... more
The emergence of large language models (LLMs) represents a major advance in artificial intelligence (AI) research. However, the widespread use of LLMs is also coupled with significant ethical and social challenges. Previous research has pointed towards auditing as a promising governance mechanism to help ensure that AI systems are designed and deployed in ways that are ethical, legal, and technically robust. However, existing auditing procedures fail to address the governance challenges posed by LLMs, which are adaptable to a wide range of downstream tasks. To help bridge that gap, we offer three contributions in this article. First, we establish the need to develop new auditing procedures that capture the risks posed by LLMs by analysing the affordances and constraints of existing auditing procedures. Second, we outline a blueprint to audit LLMs in feasible and effective ways by drawing on best practices from IT governance and system engineering. Specifically, we propose a three-layered approach, whereby governance audits, model audits, and application audits complement and inform each other. Finally, we discuss the limitations not only of our three-layered approach but also of the prospect of auditing LLMs at all. Ultimately, this article seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate LLMs from technical, ethical, and legal perspectives.
The expected societal impact of quantum technologies (QT) urges us to proceed and innovate responsibly. This article proposes a conceptual framework for Responsible QT that seeks to integrate considerations about ethical, legal, social,... more
The expected societal impact of quantum technologies (QT) urges us to proceed and innovate responsibly. This article proposes a conceptual framework for Responsible QT that seeks to integrate considerations about ethical, legal, social, and policy implications (ELSPI) into quantum R&D, while responding to the Responsible Research and Innovation dimensions of anticipation, inclusion, reflection and responsiveness. After examining what makes QT unique, we argue that quantum innovation should be guided by a methodological framework for Responsible QT, aimed at jointly safeguarding against risks by proactively addressing them, engaging stakeholders in the innovation process, and continue advancing QT (‘SEA’). We further suggest operationalizing the SEA-framework by establishing quantum-specific guiding principles. The impact of quantum computing on information security is used as a case study to illustrate (1) the need for a framework that guides Responsible QT, and (2) the usefulness of the SEA-framework for QT generally. Additionally, we examine how our proposed SEA-framework for responsible innovation can inform the emergent regulatory landscape affecting QT, and provide an outlook of how regulatory interventions for QT as base-layer technology could be designed, contextualized, and tailored to their exceptional nature in order to reduce the risk of unintended counterproductive effects of policy interventions.

Laying the groundwork for a responsible quantum ecosystem, the research community and other stakeholders are called upon to further develop the recommended guiding principles, and discuss their operationalization into best practices and real-world applications. Our proposed framework should be considered a starting point for these much needed, highly interdisciplinary efforts.
The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is... more
The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will ever be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in underlying ‘training’ data and prone to ‘hallucinating’ false information rather than facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be mindful of hyperbolic claims regarding the transformative power ofLLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners and risks a ‘costly’ new AI winter. A balanced discussion recognising the potential benefits and limitations can help avoid this outcome.
The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly... more
The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.
This groundbreaking volume, published by Springer, provides a comprehensive overview of the complex ethical landscape surrounding the application of artificial intelligence (AI) in achieving the United Nations Sustainable Development... more
This groundbreaking volume, published by Springer, provides a comprehensive overview of the complex ethical landscape surrounding the application of artificial intelligence (AI) in achieving the United Nations Sustainable Development Goals (UN SDGs).

It brings together many expert perspectives across disciplines to shed light on the multifaceted implications of AI use within this context. The book illuminates the transformative potential of AI in advancing the SDGs, using case studies to demonstrate how this technology can foster efficiency, inclusivity, innovation, and sustainability. For instance, AI's potential impact in sectors such as precision agriculture, predictive analytics for education, and smart energy grids is critically explored.

Simultaneously, the authors delve into the pressing governance and ethical challenges associated with AI. These include the risks of exacerbating socio-economic disparities, violating privacy rights, and navigating ethical quandaries such as AI bias. The necessity of robust regulatory frameworks, transparency, and inclusive design is emphasized to ensure fair and equitable AI deployment and to mitigate potential adverse consequences.

The balanced and comprehensive analysis offered by this collection makes it an invaluable resource for students, scholars, and practitioners interested in the ethical governance of AI, sustainability, the fourth revolution, and the intersection of these with the UN SDGs. The authors underscore the importance of thoughtful, conscientious strategies for harnessing the power of AI in the global development arena.
The article explores the cultural shift from recording to deleting information in the digital age and its implications on privacy, intellectual property (IP), and Large Language Models like ChatGPT. It begins by defining a delete culture... more
The article explores the cultural shift from recording to deleting information in the digital age and its implications on privacy, intellectual property (IP), and Large Language Models like ChatGPT. It begins by defining a delete culture where information, in principle legal, is made unavailable or inaccessible because unacceptable or undesirable, especially but not only due to its potential to infringe on privacy or IP. Then it focuses on two strategies in this context: deleting, to make information unavailable; and blocking, to make it inaccessible. The article argues that both strategies have significant implications, particularly for machine learning (ML) models where information is not easily made unavailable. However, the emerging research area of Machine Unlearning (MU) is highlighted as a potential solution. MU, still in its infancy, seeks to remove specific data points from ML models, effectively making them 'forget' completely specific information. If successful, MU could provide a feasible means to manage the overabundance of information and ensure a better protection of privacy and IP. However, potential ethical risks, such as misuse, overuse, and underuse of MU, should be systematically studied to devise appropriate policies.
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are... more
The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by the Intergovernmental Panel on Climate Change (IPCC) reports and related literature. This approach enables a nuanced analysis of AI risk by exploring the interplay between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We further refine the proposed methodology by applying a proportionality test to balance the competing values involved in AI risk assessment. Finally, we present three uses of this approach under the AIA: to implement the Regulation, to assess the significance of risks, and to develop internal risk management systems for AI deployers.
The proposed European AI Liability Directive (AILD) is an important step towards closing the ‘liability gap’, i.e., the difficulty in assigning responsibility for harms caused by AI systems. However, if victims are to bring liability... more
The proposed European AI Liability Directive (AILD) is an important step towards closing the ‘liability gap’, i.e., the difficulty in assigning responsibility for harms caused by AI systems. However, if victims are to bring liability claims, they must first have ways of knowing that they have been subject to algorithmic discrimination or other harms caused by AI systems. This ‘information gap’ must be addressed if the AILD is to meet its regulatory objectives. In this article, we argue that the current version of the AILD reduces legal fragmentation but not legal uncertainty; privileges transparency and disclosure of evidence of high-risk systems over knowledge of harm and discrimination; and shifts the burden on the claimant from proving fault to accessing and understanding the evidence provided by the defendant. We conclude by providing four recommendations on how to improve the AILD to address the ‘liability gap’ and the ‘information gap’.
This paper proposes a set of guiding principles for responsible quantum innovation. The principles are organized into three functional categories: safeguarding, engaging, and advancing (SEA), and are grounded in the values of responsible... more
This paper proposes a set of guiding principles for responsible quantum innovation. The principles are organized into three functional categories: safeguarding, engaging, and advancing (SEA), and are grounded in the values of responsible research and innovation (RRI). Utilizing a global equity normative framework, we link the Quantum-SEA categories to promise and perils specific to quantum technology. The paper operationalizes the Responsible Quantum Technology framework by proposing ten actionable principles to help address the risks, challenges, and opportunities associated with quantum technology. Our proposal aims to catalyze a much-needed interdisciplinary effort within the quantum community to establish a foundation of quantum-specific and quantum-tailored principles for responsible quantum innovation. The overarching objective of this interdisciplinary effort is to steer the development and use of quantum technology in a direction not only consistent with a values-based society but also a direction that contributes to addressing some of society's most pressing needs and goals.
In recent years, policymakers, academics, and practitioners have increasingly called for the development of global governance mechanisms for artificial intelligence (AI). This paper considers the prospects for these calls in light of two... more
In recent years, policymakers, academics, and practitioners have increasingly called for the development of global governance mechanisms for artificial intelligence (AI). This paper considers the prospects for these calls in light of two other geopolitical trends: digital sovereignty and digital expansionism. While calls for global AI governance promote the surrender of some state sovereignty over AI, digital sovereignty and expansionism seek to secure greater state control over digital technologies. To demystify the tensions between these trends and their potential consequences, we undertake a case analysis of digital sovereignty and digital expansionism in China, the European Union, and the United States. We argue that the extraterritoriality embedded in these three actors' policies and escalatory competitive narratives, particularly those from the US, will likely undermine substantive global AI governance cooperation. However, nascent areas of alignment or compromise, notably in data governance and technical standards, could prove fruitful starting points for building trust in multilateral fora, such as the G20 or United Nations.
This article reviews two main approaches to human control of AI systems: supervisory human control and human-machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system... more
This article reviews two main approaches to human control of AI systems: supervisory human control and human-machine teaming. It explores how each approach defines and guides the operational interplay between human behaviour and system behaviour to ensure that AI systems are effective throughout their deployment. Specifically, the article looks at how the two approaches differ in their conceptual and practical adequacy regarding the control of AI systems based on foundation models-i.e., models trained on vast datasets, exhibiting general capabilities, and producing non-deterministic behaviour. The article focuses on examples from the defence and security domain to highlight practical challenges in terms of human control of automation in general, and AI in particular, and concludes by arguing that approaches to human control are better served by an understanding of control as the product of collaborative agency in a multi-agent system rather than of exclusive human supervision.
The article criticises the neutrality thesis (all technology, AI included is neutral and can be used for good and evil purposes). It argues that it must be replaced by the value double-charge thesis, according to which the design of any... more
The article criticises the neutrality thesis (all technology, AI included is neutral and can be used for good and evil purposes). It argues that it must be replaced by the value double-charge thesis, according to which the design of any technologic is a moral act, no technology is ever neutral, and every technology can have a more or less "static equilibrium" of values, that is, being subject to forces that push it in morally evil or good directions. It concludes by arguing that the neutrality thesis hides, while the double-charge thesis discloses, the significant responsibilities involved in finding the right values to be implemented, the trade-offs to be reached, and the policies to be devised when designing, developing, and deploying any technology. This is crucial, especially when the technology in question is as powerful, disruptive, and influential as AI.
The widespread integration of autoregressive-large language models (AR-LLMs), such as ChatGPT, across established applications, like search engines, has introduced critical vulnerabilities with uniquely scalable characteristics. In this... more
The widespread integration of autoregressive-large language models (AR-LLMs), such as ChatGPT, across established applications, like search engines, has introduced critical vulnerabilities with uniquely scalable characteristics. In this article, we analyse these vulnerabilities, their dependence on natural language as a vector of attack, and their challenges to cybersecurity best practices. We offer recommendations designed to mitigate these challenges.
The present article analyses the main legal features of the EU Data Act, identifying some innovative aspects and shortcomings. Moving from the general approach elaborated by the Union towards data governance – a value-based approach... more
The present article analyses the main legal features of the EU Data Act, identifying some innovative aspects and shortcomings. Moving from the general approach elaborated by the Union towards data governance – a value-based approach flowing from the idea of enforcing the strategic autonomy of the Union – the article considers first the most relevant piece of legislation which complements the DA, that is, the EU Data Governance Act, and then assesses the DA in light of both the original proposal of the European Commission and the position adopted by the two EU co-legislators – the European Parliament and the Council. An overview of the major competitive legal approaches to the governance of IoT data (namely, the U.S. and Chinese approaches) is also discussed, stressing possible synergies and conflicts with the EU’s approach.
Extended reality (XR) technologies have experienced cycles of development - “summers” and “winters” - for decades, but their overall trajectory is one of increasing uptake. In recent years, immersive extended reality (IXR) applications, a... more
Extended reality (XR) technologies have experienced cycles of development - “summers” and “winters” - for decades, but their overall trajectory is one of increasing uptake. In recent years, immersive extended reality (IXR) applications, a kind of XR that encompasses immersive virtual reality (VR) and augmented reality (AR) environments, have become especially prevalent. The European Union (EU) is exploring regulating this type of technology, and this article seeks to support this endeavor. It outlines safety and privacy harms associated with IXR, analyzes to what extent the existing EU framework for digital governance - including the General Data Protection Regulation, Product Safety Legislation, ePrivacy Directive, Digital Markets Act, Digital Services Act, and AI Act - addresses these harms, and offers some recommendations to EU legislators on how to fill regulatory gaps and improve current approaches to the governance of IXR.

And 649 more

Zan Boag: Technology in various forms has been a part of human life for some time now, but, as philosophers such as Heidegger argue, recently there has been a profound change in the nature of technology itself. What's so different about... more
Zan Boag: Technology in various forms has been a part of human life for some time now, but, as philosophers such as Heidegger argue, recently there has been a profound change in the nature of technology itself. What's so different about current technologies? Luciano Floridi: What is different is that it is no longer just a matter of interacting with the world by other means: a wheel rather than pushing stuff, or an engine rather than a horse. We have this new environment where we are spending more and more time – a digital environment, where agency is most successful because the technologies that we have are meant to interact successfully in a digital environment. Think of a fish in a swimming pool or in a lake. Well, we are kind of scuba diving now in the infosphere, whereas the artificial agents that we have, those are the fish – they live within an environment that is their environment. The digital interacting with the digital – software, databases, big data, algorithms, you name it – they are the natives, they are the locals. We are being pushed into an environment where we are scuba diving. You can't start imagining what it means for an artificial agent to interact with something that is made of its own same stuff.
Research Interests:
Interview for Mercedes Benz Magazin
Research Interests:
intervista con Marzia Apice per ANSA NEWS
Research Interests:
Intervista con Fabio Chiusi per L'Espresso, 26 Febbraio 2017
Research Interests:
Research Interests:
Intervista con Antonio Dini
Research Interests:
Intervista con Antonio Dini per L'Impresa - Sole24 Ore - Prima Parte
Research Interests:
Intervista a Luciano Floridi, Professore di Filosofia ed Etica dell’informazione, Oxford University
di Agnese Bertello
Research Interests:
Environmental Engineering, Environmental Science, Philosophy, Environmental Education, Environmental Law, and 52 more
biografia
Research Interests:
For more than 100 videos of lectures, seminars, talks, interviews and debates covering topics discussed in paper uploaded in academia.edu please visit the YouTube channel:
Research Interests:
Preface
The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the... more
The goal of the book is to present the latest research on the new challenges of data technologies. It will offer an overview of the social, ethical and legal problems posed by group profiling, big data and predictive analysis and of the different approaches and methods that can be used to address them. In doing so, it will help the reader to gain a better grasp of the ethical and legal conundrums posed by group profiling. The volume first maps the current and emerging uses of new data technologies and clarifies the promises and dangers of group profiling in real life situations. It then balances this with an analysis of how far the current legal paradigm grants group rights to privacy and data protection, and discusses possible routes to addressing these problems. Finally, an afterword gathers the conclusions reached by the different authors and discuss future perspectives on regulating new data technologies.
Research Interests:
Philosophy, Political Philosophy, Free Will, Moral Responsibility, Digital Divide, Moral Psychology, and 160 more
Online Service Providers (OSPs)—such as AOL, Facebook, Google, Microsoft, and Twitter—are increasingly expected to act as good citizens, by aligning their goals with the needs of societies, supporting the rights of their users (Madelin... more
Online Service Providers (OSPs)—such as AOL, Facebook, Google, Microsoft, and Twitter—are increasingly expected to act as good citizens, by aligning their goals with the needs of societies, supporting the rights of their users (Madelin 2011; Taddeo and Floridi 2015), and performing their tasks according to “principles of efficiency, justice, fairness, and respect of current social and cultural values” (McQuail 1992, 47). These expectations raise questions as to what kind of responsibilities OSPs should bear, and which ethical principles should guide their actions. Addressing these questions is a crucial step to understand and shape the role of OSPs in mature information societies (Floridi 2016). Without a clear understanding of their responsibilities, we risk ascribing to OSPs a role that is either too powerful or too little independent. The FBI vs. Apple case,1 Google’s and Yahoo!’s experiences in China,2 or the involvement of OSPs within the NSA’s PRISM program3 offer good examples of the case in point. However, defining OSPs’ responsibilities is challenging. Three aspects are particularly problematic: disentangling the implications of OSPs’ gatekeeping role in information societies; defining fundamental principles to guide OSPs’ conduct; and contextualising OSPs’ role within the broader changes brought about by the information revolution.
This is the Introduction to The Routledge Handbook of Philosophy of Information (Routledge Handbooks in Philosophy) Hardcover, 2016
Research Interests:
Information Systems, Computer Science, Information Science, Information Retrieval, Artificial Intelligence, and 77 more
Unsere Computer werden immer schneller, kleiner und billiger; wir produzieren jeden Tag genug Daten, um alle Bibliotheken der USA damit zu füllen; und im Durchschnitt trägt jeder Mensch heute mindestens einen Gegenstand bei sich, der mit... more
Unsere Computer werden immer schneller, kleiner und billiger; wir produzieren jeden Tag genug Daten, um alle Bibliotheken der USA damit zu füllen; und im Durchschnitt trägt jeder Mensch heute mindestens einen Gegenstand bei sich, der mit dem Internet verbunden ist. Wir erleben gerade eine explosionsartige Entwicklung von Informationsund Kommunikationstechnologien. Luciano Floridi, einer der weltweit führenden Informationstheoretiker, zeigt in seinem meisterhaften Buch, dass wir uns nach den Revolutionen der Physik (Kopernikus), Biologie (Darwin) und Psychologie (Freud) nun inmitten einer vierten Revolution befinden, die unser ganzes Leben verändert. Die Trennung zwischen online und offline schwindet, denn wir interagieren zunehmend mit smarten, responsiven Objekten, um unseren Alltag zu bewältigen oder miteinander zu kommunizieren. Der Mensch kreiert sich eine neue Umwelt, eine »Infosphäre«. Persönlichkeitsprofile, die wir online erzeugen, beginnen, in unseren Alltag zurückzuwirken, sodass wir immer mehr ein »Onlife« leben. Informations- und Kommunikationstechnologien bestimmen die Art, wie wir einkaufen, arbeiten, für unsere Gesundheit vorsorgen, Beziehungen pflegen, unsere Freizeit gestalten, Politik betreiben und sogar, wie wir Krieg führen. Aber sind diese Entwicklungen wirklich zu unserem Vorteil? Was sind ihre Risiken? Floridi weist den Weg zu einem neuen ethischen und ökologischen Denken, um die Herausforderungen der digitalen Revolution und der Informationsgesellschaft zu meistern.  Ein Buch von großer Aktualität und theoretischer Brillanz.
Research Interests:
This book presents the latest research on the challenges and solutions affecting the equilibrium between freedom of speech, freedom of information, information security, and the right to informational privacy. Given the complexity of the... more
This book presents the latest research on the challenges and solutions affecting the equilibrium between freedom of speech, freedom of information, information security, and the right to informational privacy. Given the complexity of the topics addressed, the book shows how old legal and ethical frameworks may need to be not only updated, but also supplemented and complemented by new conceptual solutions. Neither a conservative attitude (“more of the same”) nor a revolutionary zeal (“never seen before”) is likely to lead to satisfactory solutions. Instead, more reflection and better conceptual design are needed, not least to harmonise different perspectives and legal frameworks internationally. The focus of the book is on how we may reconcile high levels of information security with robust degrees of informational privacy, also in connection with recent challenges presented by phenomena such as “big data” and security scandals, as well as new legislation initiatives, such as those concerning “the right to be forgotten” and the use of personal data in biomedical research. The book seeks to offer analyses and solutions of the new tensions, in order to build a fair, shareable, and sustainable balance in this vital area of human interactions.
Research Interests:
This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this... more
This book offers an overview of the ethical problems posed by Information Warfare, and of the different approaches and methods used to solve them, in order to provide the reader with a better grasp of the ethical conundrums posed by this new form of warfare.
The volume is divided into three parts, each comprising four chapters. The first part focuses on issues pertaining to the concept of Information Warfare and the clarifications that need to be made in order to address its ethical implications. The second part collects contributions focusing on Just War Theory and its application to the case of Information Warfare. The third part adopts alternative approaches to Just War Theory for analysing the ethical implications of this phenomenon. Finally, an afterword by Neelie Kroes - Vice President of the European Commission and European Digital Agenda Commissioner - concludes the volume. Her contribution describes the interests and commitments of the European Digital Agenda with respect to research for the development and deployment of robots in various circumstances, including warfare.
Luciano Floridi develops an original ethical framework for dealing with the new challenges posed by Information and Communication Technologies (ICTs). ICTs have profoundly changed many aspects of life, including the nature of... more
Luciano Floridi develops an original ethical framework for dealing with the new challenges posed by Information and Communication Technologies (ICTs). ICTs have profoundly changed many aspects of life, including the nature of entertainment, work, communication, education, health care, industrial production and business, social relations, and conflicts. They have had a radical and widespread impact on our moral lives and on contemporary ethical debates. Privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, and pornography online are only some of the pressing issues that characterise the ethical discourse in the information society. They are the subject of Information Ethics (IE), the new philosophical area of research that investigates the ethical impact of ICTs on human life and society.

Since the seventies, IE has been a standard topic in many curricula. In recent years, there has been a flourishing of new university courses, international conferences, workshops, professional organizations, specialized periodicals and research centres. However, investigations have so far been largely influenced by professional and technical approaches, addressing mainly legal, social, cultural and technological problems. This book is the first philosophical monograph entirely and exclusively dedicated to it.

Floridi lays down, for the first time, the conceptual foundations for IE. He does so systematically, by pursuing three goals:

a) a metatheoretical goal: it describes what IE is, its problems, approaches and methods;
b) an introductory goal: it helps the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to computer ethics;
c) an analytic goal: it answers several key theoretical questions of great philosophical interest, arising from the investigation of the ethical implications of ICTs.

Although entirely independent of The Philosophy of Information (OUP, 2011), Floridi's previous book, The Ethics of Information complements it as new work on the foundations of the philosophy of information.
Research Interests:
Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies (ICTs) is changing the answer to... more
Who are we, and how do we relate to each other? Luciano Floridi, one of the leading figures in contemporary philosophy, argues that the explosive developments in Information and Communication Technologies (ICTs) is changing the answer to these fundamental human questions.

As the boundaries between life online and offline break down, and we become seamlessly connected to each other and surrounded by smart, responsive objects, we are all becoming integrated into an "infosphere". Personas we adopt in social media, for example, feed into our 'real' lives so that we begin to live, as Floridi puts in, "onlife". Following those led by Copernicus, Darwin, and Freud, this metaphysical shift represents nothing less than a fourth revolution.

"Onlife" defines more and more of our daily activity - the way we shop, work, learn, care for our health, entertain ourselves, conduct our relationships; the way we interact with the worlds of law, finance, and politics; even the way we conduct war. In every department of life, ICTs have become environmental forces which are creating and transforming our realities. How can we ensure that we shall reap their benefits? What are the implicit risks? Are our technologies going to enable and empower us, or constrain us? Floridi argues that we must expand our ecological and ethical approach to cover both natural and man-made realities, putting the 'e' in an environmentalism that can deal successfully with the new challenges posed by our digital technologies and information society.
- Result of “the Onlife Initiative,” a one-year project funded by the European Commission to study the deployment of ICTs and its effects on the human condition - Inspires reflection on the ways in which a hyperconnected world forces the... more
- Result of “the Onlife Initiative,” a one-year project funded by the European Commission to study the deployment of ICTs and its effects on the human condition
- Inspires reflection on the ways in which a hyperconnected world forces the rethinking of the conceptual frameworks on which policies are built
- Draws upon the work of a group of scholars from a wide range of disciplines including, anthropology, cognitive science, computer science, law, philosophy, political science

What is the impact of information and communication technologies (ICTs) on the human condition? In order to address this question, in 2012 the European Commission organized a research project entitled The Onlife Initiative: concept reengineering for rethinking societal concerns in the digital transition. This volume collects the work of the Onlife Initiative. It explores how the development and widespread use of ICTs have a radical impact on the human condition.

ICTs are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are), our mutual interactions (how we socialise); our conception of reality (our metaphysics); and our interactions with reality (our agency). In each case, ICTs have a huge ethical, legal, and political significance, yet one with which we have begun to come to terms only recently.

The impact exercised by ICTs is due to at least four major transformations: the blurring of the distinction between reality and virtuality; the blurring of the distinction between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of stand-alone things, properties, and binary relations, to the primacy of interactions, processes and networks.

Such transformations are testing the foundations of our conceptual frameworks. Our current conceptual toolbox is no longer fitted to address new ICT-related challenges. This is not only a problem in itself. It is also a risk, because the lack of a clear understanding of our present time may easily lead to negative projections about the future. The goal of The Manifesto, and of the whole book that contextualises, is therefore that of contributing to the update of our philosophy. It is a constructive goal. The book is meant to be a positive contribution to rethinking the philosophy on which policies are built in a hyperconnected world, so that we may have a better chance of understanding our ICT-related problems and solving them satisfactorily.

The Manifesto launches an open debate on the impacts of ICTs on public spaces, politics and societal expectations toward policymaking in the Digital Agenda for Europe’s remit. More broadly, it helps start a reflection on the way in which a hyperconnected world calls for rethinking the referential frameworks on which policies are built.
Research Interests:
Information Systems, Business Ethics, Computer Science, Information Science, Information Retrieval, and 137 more
Luciano Floridi presents a book that will set the agenda for the philosophy of information. PI is the philosophical field concerned with (1) the critical investigation of the conceptual nature and basic principles of information,... more
Luciano Floridi presents a book that will set the agenda for the philosophy of information. PI is the philosophical field concerned with (1) the critical investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation, and sciences, and (2) the elaboration and application of information-theoretic and computational methodologies to philosophical problems. This book lays down, for the first time, the conceptual foundations for this new area of research. It does so systematically, by pursuing three goals. Its metatheoretical goal is to describe what the philosophy of information is, its problems, approaches, and methods. Its introductory goal is to help the reader to gain a better grasp of the complex and multifarious nature of the various concepts and phenomena related to information. Its analytic goal is to answer several key theoretical questions of great philosophical interest, arising from the investigation of semantic information.
Research Interests:
We live an information-soaked existence - information pours into our lives through television, radio, books, and of course, the Internet. Some say we suffer from 'infoglut'. But what is information? The concept of 'information' is a... more
We live an information-soaked existence - information pours into our lives through television, radio, books, and of course, the Internet. Some say we suffer from 'infoglut'. But what is information? The concept of 'information' is a profound one, rooted in mathematics, central to whole branches of science, yet with implications on every aspect of our everyday lives: DNA provides the information to create us; we learn through the information fed to us; we relate to each other through information transfer - gossip, lectures, reading. Information is not only a mathematically powerful concept, but its critical role in society raises wider ethical issues: who owns information? Who controls its dissemination? Who has access to information? Luciano Floridi, a philosopher of information, cuts across many subjects, from a brief look at the mathematical roots of information - its definition and measurement in 'bits'- to its role in genetics (we are information), and its social meaning and value. He ends by considering the ethics of information, including issues of ownership, privacy, and accessibility; copyright and open source. For those unfamiliar with its precise meaning and wide applicability as a philosophical concept, 'information' may seem a bland or mundane topic. Those who have studied some science or philosophy or sociology will already be aware of its centrality and richness. But for all readers, whether from the humanities or sciences, Floridi gives a fascinating and inspirational introduction to this most fundamental of ideas.
Research Interests:
Information and Communication Technologies (ICTs) have profoundly changed many aspects of life, including the nature of entertainment, work, communication, education, healthcare, industrial production and business, social relations and... more
Information and Communication Technologies (ICTs) have profoundly changed many aspects of life, including the nature of entertainment, work, communication, education, healthcare, industrial production and business, social relations and conflicts. They have had a radical and widespread impact on our moral lives and hence on contemporary ethical debates. The Cambridge Handbook of Information and Computer Ethics provides an ambitious and authoritative introduction to the field, with discussions of a range of topics including privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, cyber warfare, and online pornography. It offers an accessible and thoughtful survey of the transformations brought about by ICTs and their implications for the future of human life and society, for the evaluation of behaviour, and for the evolution of moral values and rights. It will be a valuable book for all who are interested in the ethical aspects of the information society in which we live.

The Cambridge Handbook of Information and Computer Ethics provides an ambitious and authoritative introduction to the field, with discussions of a range of topics including privacy, ownership, freedom of speech, responsibility, technological determinism, the digital divide, and online pornography.
Research Interests:
Review 'Philosophy and Computing is a stimulating and ambitious book that helps lay a foundation for the new and vitally important field of Philosophy of Information. This is a worthy addition to the brand new and rapidly developing... more
Review

'Philosophy and Computing is a stimulating and ambitious book that helps lay a foundation for the new and vitally important field of Philosophy of Information. This is a worthy addition to the brand new and rapidly developing field of Philosophy of Information, a field that will revolutionise philosophy in the Information Age.' - Terrell Ward Bynum, Southern Connecticut State University

'What are the philosophical implications of computers and the internet? A pessimist might see these new technologies as leading to the creation of vast encyclopaedic databases far exceeding the capacities of any individual. Yet Luciano Floridi takes a different view, aruging ingeniously for the optimistic conclusion that the computer revolution will lead instead to a reversal of the trend towards specialisation and a return to the Renaissance mind.' - Donald Gillies, King's College London

'In his seminal book, Philosophy and Computing, Luciano Floridi provides a rich combination of technical information and philosophical insights necessary for the emerging field of philosophy and computing.' - James Moor, Dartmouth College

'Luciano Floridi's book discusses the most important and the latest branches of research in information technology. He approaches the subject from a novel philosophical viewpoint, while demonstrating a strong command of the relevant technicalities of the subject.' - Hava T. Siegelman, Technion

Product Description
Philosophy and Computing is the first accessible and comprehensive philosophical introduction to Information and Communication Technology.
Research Interests:
Review "The Blackwell Guide to the Philosophy of Computing and Information is a rich resource for an important, emerging field within philosophy. This excellent volume covers the basic topics in depth, yet is written in a style that is... more
Review
"The Blackwell Guide to the Philosophy of Computing and Information is a rich resource for an important, emerging field within philosophy. This excellent volume covers the basic topics in depth, yet is written in a style that is accessible to non–philosophers. There is no other book that assembles and explains systematically so much information about the diverse aspects of philosophy of computing and information. I believe this book will serve both as an authoritative introduction to the field for students and as a standard reference for professionals for years to come. I highly recommend it." James Moor, Dartmouth College <!––end––>

"There are contributions from a range of respected academics, many of them authorities in their field, and this certainly anchors the work in a sound scholarly foundation. The scope of the content, given the youthfulness of the computing era, is signigficant. The variety of the content too is remarkable. In summary this is a wonderfully fresh look at the world of of computing and information, which requires its own philosophy in testimony that there are some real issues that can exercise the mind." Reference Reviews

"The judicious choice of topics, as well as the degree of detail in the various chapters, are just what it takes neither to deter the average reader requiring this Guide, nor to makeit unfeasible placing this volume in the hands of students. Floridi′s book is clearly a valuable addition to a worthy series." Pragmatics & Cognition
Product Description
This Guide provides an ambitious state–of–the–art survey of the fundamental themes, problems, arguments and theories constituting the philosophy of computing.

    * A complete guide to the philosophy of computing and information.
    * Comprises 26 newly–written chapters by leading international experts.
    * Provides a complete, critical introduction to the field.
    * Each chapter combines careful scholarship with an engaging writing style.
    * Includes an exhaustive glossary of technical terms.
    * Ideal as a course text, but also of interest to researchers and general readers.
Research Interests:
Synopsis Computing and information, and their philosophy in the broad sense, play a most important scientific, technological and conceptual role in our world. This book collects together, for the first time, the views and experiences of... more
Synopsis
Computing and information, and their philosophy in the broad sense, play a most important scientific, technological and conceptual role in our world. This book collects together, for the first time, the views and experiences of some of the visionary pioneers and most influential thinkers in such a fundamental area of our intellectual development. This is yet another gem in the 5 Questions Series by Automatic Press / VIP.
Research Interests:
Review Floridi's complete and rigorous book constitutes a major contribution for the knowledge of the transmission and influence of Sextus' writings, which makes it an essential work of reference for any study in this field. (The British... more
Review

Floridi's complete and rigorous book constitutes a major contribution for the knowledge of the transmission and influence of Sextus' writings, which makes it an essential work of reference for any study in this field. (The British Journal for the History of Philosophy )

A fascinating read for anyone interested in the history of Scepticism. (Greece & Rome )
Research Interests:
Can knowledge provide its own justification? This sceptical challenge - known as the problem of the criterion - is one of the major issues in the history of epistemology, and this volume provides its first comprehensive study, in a span... more
Can knowledge provide its own justification? This sceptical challenge - known as the problem of the criterion - is one of the major issues in the history of epistemology, and this volume provides its first comprehensive study, in a span of time that goes from Sextus Empiricus to Quince. After an essential introduction to the notions of knowledge and of philosophy of knowledge, the book provides a detailed reconstruction of the history of the problem. There follows a conceptual analysis of its logical features, and a comparative examination of a phenomenology of solution that have been suggested in the course of the history of philosophy in order to overcome it, from Descartes to Popper. In this context, an indirect approach to the problem of the criterion is defended as the most successful strategy against the sceptical challenge.
Research Interests:
2. Che cos&#x27; è un computer? 26 3. Il cervello, il cuore e il motore: il microprocessore 32 4. Traffico e trasporti: Bus, Clock e Cache 34 5. Intel, ovvero la sacra famiglia 35 6. L&#x27;antitesi: Motorola e PowerPC 37 7. Non... more
2. Che cos&#x27; è un computer? 26 3. Il cervello, il cuore e il motore: il microprocessore 32 4. Traffico e trasporti: Bus, Clock e Cache 34 5. Intel, ovvero la sacra famiglia 35 6. L&#x27;antitesi: Motorola e PowerPC 37 7. Non dimenticar le mie parole: la memoria su disco 39 8. La fase dell&#x27;input: tastiera e mouse 40 9. La fase dell&#x27;output: il Video 43 10. Dentro il negozio 45
A chapter in Archives in Liquid Times, edited by Frans Smit, Arnoud Glaudemans, and Rienk Jonker
Research Interests:
History, Computer Science, Information Technology, Philosophy, Library Science, and 158 more
We increasingly rely on AI-related applications (smart technologies) to perform tasks that would be simply impossible by un-aided or un-augmented human intelligence. This is possible because the world is becoming an infosphere... more
We increasingly rely on AI-related applications (smart technologies) to perform tasks that would be simply impossible by un-aided or un-augmented human intelligence. This is possible because the world is becoming an infosphere increasingly well adapted to AI’s limited capacities. Being able to imagine what adaptive demands this process will place on humanity may help to devise technological solutions that can lower their anthropological costs.
Research Interests:
... See now That nothing is known, ed. by Elaine Limbrick, Eng. Trans, by Douglas FS Thomson (Cambridge: Cambridge UP, 1988). ... Cartesianism is among the sources of Villemandy&#x27;s epistemological optimism and &#x27;lay&#x27; faith in... more
... See now That nothing is known, ed. by Elaine Limbrick, Eng. Trans, by Douglas FS Thomson (Cambridge: Cambridge UP, 1988). ... Cartesianism is among the sources of Villemandy&#x27;s epistemological optimism and &#x27;lay&#x27; faith in the inteiligibility of the universe. ...
... Bibliotec@SWIF Page 2. Linee di Ricerca – SWIF Coordinamento Editoriale: Gian Maria Greco Supervisione Tecnica: Fabrizio Martina Supervisione: Luciano Floridi Redazione: Eva Franchino, Federica Scali. LdR è un e-book, inteso come... more
... Bibliotec@SWIF Page 2. Linee di Ricerca – SWIF Coordinamento Editoriale: Gian Maria Greco Supervisione Tecnica: Fabrizio Martina Supervisione: Luciano Floridi Redazione: Eva Franchino, Federica Scali. LdR è un e-book, inteso come numero speciale della rivista SWIF. ...
ABSTRACT
Abstract Various conceptual approaches to the notion of information can currently be traced in the literature in logic and formal epistemology. A main issue of disagree-ment is the attribution of truthfulness to informational data, the so... more
Abstract Various conceptual approaches to the notion of information can currently be traced in the literature in logic and formal epistemology. A main issue of disagree-ment is the attribution of truthfulness to informational data, the so called Veridicality Thesis (Floridi 2005). The ...
... analyses, and account for most of the literature in CyberEthics (see for example Spinello and Tavani [2001] and other chapters in the present volume). ... concerning the self through personal homepages (Chandler [1998], see also... more
... analyses, and account for most of the literature in CyberEthics (see for example Spinello and Tavani [2001] and other chapters in the present volume). ... concerning the self through personal homepages (Chandler [1998], see also Adamic and Adar [online]). ...
In 1963 Arthur C. Clarke published a story called Dial F for Frankenstein, in which he imagined the following scenario. On 31 January 1974, the last communications satellite is launched in order to achieve, at last, full interconnection... more
In 1963 Arthur C. Clarke published a story called Dial F for Frankenstein, in which he imagined the following scenario. On 31 January 1974, the last communications satellite is launched in order to achieve, at last, full interconnection of the whole, international telephone system. ...
ABSTRACT
In 1963 Arthur C. Clarke published a story called Dial F for Frankenstein, in which he imagined the following scenario. On 31 January 1974, the last communications satellite is launched in order to achieve, at last, full interconnection... more
In 1963 Arthur C. Clarke published a story called Dial F for Frankenstein, in which he imagined the following scenario. On 31 January 1974, the last communications satellite is launched in order to achieve, at last, full interconnection of the whole, international telephone system. ...
Research Interests:

And 52 more

La Filosofia dell’Informazione: una sfida etica ed epistemologica Workshop con Luciano Floridi Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute... more
La Filosofia dell’Informazione: una sfida etica ed epistemologica

Workshop con Luciano Floridi Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute Copernicus Visiting Professor, IUSS Ferrara 1391
Research Interests:
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use... more
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016 (Mittelstadt et al. 2016). The golas are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Research Interests:
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs... more
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be 'Artificial Intelligence' (AI)-particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by "robot doctors." Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients' health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.
It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous... more
It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that poses significant governance challenges. In this article, we argue that a fruitful way to overcome these challenges is by adopting a pro-ethical approach to design that analyses the system as a whole, keeps society-in-the-loop throughout the process, and distributes responsibility evenly across all nodes in the system.
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960) (Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and... more
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960) (Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles-the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability)-rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still at its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs. 2
Research Interests:
Significance. The environment can elicit biological responses such as oxidative stress (OS) and inflammation as consequence of chemical, physical or psychological changes. As population studies are essential for establishing these... more
Significance. The environment can elicit biological responses such as oxidative stress (OS) and inflammation as consequence of chemical, physical or psychological changes. As population studies are essential for establishing these environment-organism interactions, biomarkers of oxidative stress or inflammation are critical in formulating mechanistic hypotheses.  Recent advances. By using examples of stress induced by various mechanisms, we focus on the biomarkers that have been used to assess oxidative stress and inflammation in these conditions. We discuss the difference between biomarkers that are the result of a chemical reaction (such as lipid peroxides or oxidized proteins that are a result of the reaction of molecules with reactive oxygen species, ROS) and those that represent the biological response to stress, such as the transcription factor NRF2 or inflammation and inflammatory cytokines. Critical issues. The high-throughput and holistic approaches to biomarker discovery used extensively in large-scale molecular epidemiological exposome are also discussed in the context of human exposure to environmental stressors. Future directions. We propose to consider the role of biomarkers as signs and distinguish between signs that are just indicators of biological processes and proxies that one can interact with and modify the disease process.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of AI. In this article, we provide a comparative... more
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of AI. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a 'good AI society'. To do so, we examine how each report addresses the following three topics: (a) the development of a 'good AI society'; (b) the role and responsibility of the government, the private sector, and the research community (including academia) in pursuing such a development; and (c) where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address adequately various ethical, social, and economic topics, but come short of providing an overarching political vision and long-term strategy for the development of a 'good AI society'. In order to contribute to fill this gap, in the conclusion we suggest a two-pronged approach.
Research Interests:
Engineering, Robotics, Algorithms, Parallel Algorithms, Software Engineering, and 119 more
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a... more
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Research Interests:
Abstracts are invited for the workshop “The Ethics of Data Science: The Landscape for the Alan Turing Institute”. This event is being organised as part of a series of activities promoted by the Alan Turing Institute (ATI) in order to... more
Abstracts are invited for the workshop “The Ethics of Data Science: The Landscape for the Alan Turing Institute”. This event is being organised as part of a series of activities promoted by the Alan Turing Institute (ATI) in order to define the national and international landscape around data science and to support the ATI’s scientific programme.
Research Interests:
In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online... more
In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. In this article, we analyze collaborative bots by studying the interactions between bots that edit articles on Wikipedia. We find that, although
Research Interests:
Information Systems, Business Ethics, Robotics, Computer Science, Software Engineering, and 172 more
In our information societies, we increasingly delegate tasks and decisions to automated systems, devices and agents that mediate human relationships, by taking decisions and acting on the basis of algorithms. Their increased intelligence,... more
In our information societies, we increasingly delegate tasks and decisions to automated systems, devices and agents that mediate human relationships, by taking decisions and acting on the basis of algorithms. Their increased intelligence, autonomous behavior and connectivity are changing crucially the life conditions of human beings as well as altering traditional concepts and ways of understanding reality. Algorithms are directed to solve problems that are not always detectable in their own relevance and timeliness. They are also meant to solve those problems through procedures that are not always visible and assessable in their own. In addition, technologies based on algorithmic procedures more and more infer personal information from aggregated data, thus profiling human beings and anticipating their expectations, views and behaviors. This may have normative, if not discriminatory, consequences. While algorithmic procedures and applications are meant to serve human needs, they risk to create an environment in which human beings tend to develop adaptive strategies by conforming their behaviour to the expected output of the procedures, with serious distortive effects. Against this backdrop, little room is often left for a process of rational argumentation able to challenge the results of algorithmic procedures by putting into question some of their hidden assumptions or by taking into account some neglected aspects of the problems under consideration. At the same time, it is widely recognized that scientific and social advances crucially depend on such an open and free critical discussion.
Research Interests:
Information Systems, Business Ethics, Computer Science, Algorithms, Randomized Algorithms, and 149 more
Recommendations to myself
Research Interests:
This is a unique opportunity for early career researchers to join The Alan Turing Institute. The Alan Turing Institute (ATI) is the UK’s new national institute for data science, established to bring together world-leading expertise to... more
This is a unique opportunity for early career researchers to join The Alan Turing Institute. The Alan Turing Institute (ATI) is the UK’s new national institute for data science, established to bring together world-leading expertise to provide leadership in the emerging field of data science. The Institute has been founded by the universities of Cambridge, Edinburgh, Oxford, UCL and Warwick and the EPSRC.

This is a targeted call, by which we intend to recruit researchers in subjects currently underrepresented by our fellowship cohort. Fellowships are available for 3 years with the potential for an additional 2 years of support following interim review. Fellows will pursue research based at the Institute hub in the British Library, London. Fellowships will be awarded to individual candidates and fellows will be employed by a joint venture partner university (Cambridge, Edinburgh, Oxford, UCL or Warwick).
Research Interests:
job details
Research Interests:
Workshop con Luciano Floridi Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute Copernicus Visiting Professor, IUSS Ferrara 1391 La Filosofia... more
Workshop con Luciano Floridi
Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute
Copernicus Visiting Professor, IUSS Ferrara 1391
La Filosofia dell’Informazione: una sfida etica ed epistemologica Ferrara 24 – 26 marzo e 28 – 30 aprile 2016
Research Interests:
Workshop con Luciano Floridi Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute Copernicus Visiting Professor, IUSS Ferrara 1391 La Filosofia... more
Workshop con Luciano Floridi
Professore Ordinario di Filosofia e Etica dell’Informazione dell’Università di Oxford, Direttore di Ricerca all’Oxford Internet Institute
Copernicus Visiting Professor, IUSS Ferrara 1391
La Filosofia dell’Informazione: una sfida etica ed epistemologica Ferrara 24 – 26 marzo e 28 – 30 aprile 2016
Luciano Floridi
Oxford University
Research Interests:
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and... more
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the Level of Abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even the data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software, and data—rather than on the variety of digital technologies that enables them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics Data Ethics will provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.
Research Interests:
The debate on whether and how the Internet can protect and foster human rights has become a defining issue of our time. This debate often focuses on Internet governance from a regulatory perspective, underestimating the influence and... more
The debate on whether and how the Internet can protect and foster human rights has become a defining issue of our time. This debate often focuses on Internet governance from a regulatory perspective, underestimating the influence and power of the governance of the Internet's architecture. The technical decisions made by Internet Standard Developing Organisations (SDOs) that build and maintain the technical infrastructure of the Internet influences how information flows. They rearrange the shape of the technically mediated public sphere, including which rights it protects and which practices it enables. In this article, we contribute to the debate on SDOs' ethical responsibility to bring their work in line with human rights. We defend three theses. First, SDOs' work is inherently political. Second, the Internet Engineering Task Force (IETF), one of the most influential SDOs, has a moral obligation to ensure its work is coherent with, and fosters, human rights. Third, the IETF should enable the actualisation of human rights through the protocols and standards it designs by implementing a responsibility-by-design approach to engineering. We conclude by presenting some initial recommendations on how to ensure that work carried out by the IETF may enable human rights.
Research Interests: