Centre for Trustworthy Technology

Ideas Mined from Trustworthy Tech Dialogues


“The structure of data governance regimes today incentivizes habits of passivity.”

“I am optimistic that if we connect data empowerment structures with participatory design interfaces for various AI tools – we can produce the next generation to be better equipped than us to build things that empower us.”

In Conversation:


Sylvie Delacroix is the inaugural Jeff Price Chair in Digital Law and director of the Centre for Data Futures at King’s College London. Her research draws on philosophy, law, ethics, and regulation to address human agency amidst data-intensive technologies and novel approaches to human-computer interaction design. She brings moral philosophy considerations to concrete design interventions to make visionary strides in trustworthy technology. Our conversation surveys how enabling human agency through participatory design methods can strengthen human-computer interactions, AI applications, and big data possibilities.

Rejecting a ‘One-Size-Fits-All’ Approach to Data Governance:

Global standards for data governance are developing through top-down regulations, such as the GDPR, that structure the data relationship between users and providers. While such legislation plays the key role of explicitly granting personal data rights, it alone cannot ensure individuals are prepared or empowered to actualize those rights. In this vein, bottom-up data empowerment structures are the necessary complement to top-down regulation for a trustworthy and thriving data ecosystem.

Professor Delacroix conceptualizes Data Trusts as one example of a bottom-up data empowerment structure that takes a collective approach to data governance. Data trusts are a legal mechanism that allows groups of people to pool the rights they hold over their data and entrust those rights to an independent intermediary with a fiduciary responsibility to act with undivided loyalty to the beneficiaries.

This collective approach to rights is particularly salient for data because data is inherently relational: both the risks and the assets are cumulative by nature. One key strength of Data Trusts is offering a competitive range of options with different constitutional terms, supporting diverse approaches to data governance across the private and public sectors. Professor Delacroix discusses the distinct offerings and challenges of both private and public initiatives, along with some emerging examples. She envisions how, on a regional basis, public-sector data trusts could enable trustworthy data-sharing across medical, educational, and social work services to address the individual cases that often fall through ‘the gaps in the system.’ Private initiatives, she argues, can deploy a more traditional fee-for-service model, such as supporting individuals in managing their data-sharing decisions across platforms.

An independent intermediary between data subjects and data collectors introduces a robust institutional safeguard that helps us move beyond contractual or corporate structures, in which data subjects are rarely able to bargain. Professor Delacroix points to the shift in the 19th century, when the role of the medical professional was born out of progress in the medical sciences. Today, the data intermediary, or data steward, is the 21st-century professional born out of our progress in data science.

Moving away from a one-size-fits-all approach to data governance is critical to reintroducing agency and genuine choice; this is conducive to debate, competition, and thus improvement.

Professor Delacroix’s recent research also tackles how human agency can foster a trustworthy relationship between Large Language Models (LLMs) and humans in high-impact domains like healthcare, justice, and education.

Designing for human-LLM linguistic collaboration:

For LLMs to live up to their potential in these domains, they must avoid unwarranted epistemic confidence and effectively communicate uncertainty. This is rightly an in-demand field of LLM research, and it is especially relevant for domains that rely on consistent inquiry, discussion, and value alignment to fulfill their function, with or without machine collaboration. She illustrates how LLMs could support judges in court and general practitioners if calibrated for effective collaboration through further participatory design methods.

Bringing key theories from moral philosophy and ethics to the design of human-LLM linguistic collaboration, Professor Delacroix engages with the computer science literature “to find the terms which can be repurposed to incentivize critical engagement on the part of users of augmentation tools.”

She bridges these disciplines through concepts like Ensemble Contestability: a computer science concept reframed to advance transparency and make LLMs trustworthy partners and collaborators in providing services. Professor Delacroix explains how, by expanding on machine learning ensemble techniques, we can train base learners on different sub-datasets to reintroduce optionality and contestability into AI products. Rather than consistently harmonizing results into a single overarching answer, there is value in surveying multiple options and collaborating on the choice of a path forward. Professor Delacroix emphasizes the risk of automating choice out of the systems which uphold and shape the fabric of our society. As she succinctly puts it: “The practice of asking each other questions is at risk of being pulled down by increasingly using automated tools that discourage critical engagement.”
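The ensemble idea can be sketched in a few lines of Python. This is a hedged illustration, not Professor Delacroix’s implementation: the toy dataset, the threshold “learner,” and the three-way data split are all invented for the example. The point is only that each base learner, trained on its own sub-dataset, surfaces its own answer for human scrutiny rather than being collapsed into a single harmonized vote.

```python
import random

# Toy data: one feature per point; label is 1 when the feature exceeds
# 0.5, with ~10% label noise so the sub-datasets genuinely differ.
random.seed(0)
data = [(random.random(),) for _ in range(300)]
labels = [1 if x[0] > 0.5 else 0 for x in data]
labels = [y if random.random() > 0.1 else 1 - y for y in labels]

def fit_threshold(points, targets):
    """A toy base learner: pick the decision threshold with best accuracy."""
    best_t, best_acc = 0.5, 0.0
    for t in [i / 20 for i in range(21)]:
        acc = sum((p[0] > t) == bool(y) for p, y in zip(points, targets)) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Train three base learners on disjoint sub-datasets -- a stand-in for
# training on data curated by different communities or data trusts.
learners = [fit_threshold(data[i::3], labels[i::3]) for i in range(3)]

def contestable_predict(x):
    """Return every learner's answer instead of one harmonized vote."""
    return [int(x > t) for t in learners]

# Disagreement between the surfaced votes is a cue for human scrutiny.
print(contestable_predict(0.9))
print(contestable_predict(0.5))
```

A majority-vote ensemble would discard exactly the information this sketch exposes: which learners dissent, and therefore where a human collaborator should ask questions before accepting the answer.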

Professor Delacroix reiterates that the biggest obstacle to integrating these helpful AI tools is not a lack of advanced algorithms but a lack of high-quality data. We struggle to produce even one high-quality dataset for these systems, let alone the multiple sub-datasets needed to run parallel algorithms for contestability purposes.

“This is tricky because we don’t have a data sharing culture… We can do better than make-believe consent. We need bottom-up data structures urgently.”

Both Data Trusts and Ensemble Contestability are in their nascent stages but hold tremendous potential for advancing trustworthy technology. Professor Delacroix’s vision of integrating data empowerment with participatory design interfaces offers a hopeful path forward, one where future generations are better equipped to build empowering technologies. Her work underscores the necessity of moving beyond passive data governance structures to create a more active, engaged, and ethically sound technological landscape.
