Centre for Trustworthy Technology


Ideas Mined from Trustworthy Tech Dialogues Data Trusts and Large Language Models


Chapters of Sylvie Delacroix Podcast

Journey from Moral Philosophy to Data (03:03)

Introduction to Data Trusts (05:03)

Diverse Approaches to Data Trusts (10:32)

Private vs Public Sector Data Trusts (14:01)

The Need for a Multi-Layered Approach to Data Governance (19:01)

Large Language Models and Ensemble Contestability (21:54)

Implementation Challenges and Data-Sharing Arrangements (29:04)

LLMs in Domains of Healthcare, Justice and Education (33:01)

Trustworthy human large language model collaboration in healthcare (38:51)

Collective modalities for feedback in LLM design (43:39)

Looking to the future (46:25)


Prof. Sylvie Delacroix

Podcast Dialogues Transcript

Satwik Mishra:
Hello, everyone. I’m Satwik Mishra, Executive Director of the Centre for Trustworthy Technology, a World Economic Forum Centre for the Fourth Industrial Revolution. In this edition of Trustworthy Tech Dialogues, we are delighted to have with us Professor Sylvie Delacroix. Sylvie is the inaugural Jeff Price Chair in Digital Law and the Director of the Centre for Data Futures at King’s College London.

Her research spans philosophy, law, ethics and regulation to address human agency amidst data intensive technologies. She has acted as an expert for public bodies such as the Department for Digital, Culture, Media, and Sport of the United Kingdom, and served on the Public Policy Commission on the use of algorithms in the justice system. She focuses on designing participatory infrastructure over the life of data-reliant tools deployed in varied contexts.

This spans right from data generation all the way to the design of human-computer interfaces. Sylvie is also the author of the book “Habitual Ethics”, published by Bloomsbury in 2022. The book offers insights into the operation of habits in our lives, their contributions to modern deliberation and judgment, and their potential for distraction and distortion. Welcome, Sylvie. Thank you so much for being here.
Sylvie Delacroix:
Thank you, Satwik. I look forward to our discussion.
Satwik Mishra:
Likewise. So, let’s begin by understanding your journey: how did you come to be interested in data?
Sylvie Delacroix:
Well, I actually come at data from a probably fairly unusual starting point, in that I’ve always been fascinated, and worried to some extent, about agency. And what do I mean by agency?
I mean a capacity to keep asking questions, to keep looking at the world and thinking: oh, maybe it can be made better, maybe it can be different. So, this is a capacity we all have a priori, and much of my work has been focused on saying we can’t take it for granted. This is actually a capability that can be compromised by the way we organize our institutions.

And one thing that really struck me was that if you look at data governance regimes today, they’re structured in a way that really incentivizes what is best described as a habit of passivity. I’m sure you’ve experienced these pop-up windows that appear on your screen asking you: do you consent to this data being collected, or that data? Most of us try to click these windows away as quickly as we can. And I am concerned about the extent to which this encourages a deep-rooted attitude, or habit, of passivity that I think is having widespread effects when it comes to the kind of civic participation we need to encourage if our democracies are to stand a chance.

So, I come at this from a more philosophical angle, probably. But it hasn’t stopped me from looking at fairly practical examples of how we can change things. So, I’m really passionate about bringing philosophical discussions to bear on very concrete design interventions.
Satwik Mishra:
Speaking of taking philosophical thought into practical implementation, that’s a perfect segue into understanding your work on data trusts. So, what are data trusts, and why are they important in today’s age?
Sylvie Delacroix:
So, data trusts were born from random coffee conversations that I happened to have with a colleague, Neil Lawrence. Maybe six years ago, seven; I lose track now. So, this was born in a really unexpected way. We started chatting, and we chatted more, and at the end of the day we wrote this paper called “Bottom-Up Data Trusts”, which was meant to highlight the limits of the top-down regulatory efforts that have taken place to give us data rights. For this, in Europe, we have the GDPR, which gives us personal data rights.

Now, that is a super important tool; nobody’s going to deny that. But it is quite striking that, on its own, this top-down governance regime is not sufficient to reverse the imbalance of power that has taken such a strong hold on our society today. It is overly optimistic, I think, to believe that we can move away from a situation that has very detrimental effects at several levels, and we document those, just by sitting back and hoping that somehow data rights will solve the problem.

So, they’re very important tools, but they’re not enough on their own. And so, what we’ve tried to argue is that we need bottom-up data empowerment institutions, and data trusts are one kind of bottom-up data empowerment institution. What’s key about data trusts is two things. One thing, which is common to many data institutions, is that they’re there to say: look, we need to pay attention to groups, not just individuals.

So, this is a real problem that our legal structures today are very much geared towards individuals. We have individual data rights, we have individual actions, etc. And yet when it comes to data, data is relational. Yeah, my data is also my neighbor’s data and my family’s data. So, it doesn’t make sense to think only of individuals.

One thing that we try to address is the fact that we need structures that allow people to come together and, in this case, pool together the rights they have over their data, in Europe that would be GDPR rights, and entrust those rights to an intermediary. If it’s a data trust, that will be a data trustee: a professional who has fiduciary responsibilities to act with undivided loyalty towards the people who’ve joined the data trust, the beneficiaries in this case, but they’re also the citizens. I’m not going to go into too much detail.

But what’s key about data trusts is two things. There’s this emphasis on groups, which is common to other data empowerment institutions. But the key thing is that data trusts come with built-in institutional safeguards that are stronger than those of any institution based on contractual or corporate structures.

There’s nothing wrong per se with contractual structures or corporate structures. The only problem one has to be vigilant about is that the burden of proof is not reversed. When you have a trust, it is for the trustee to demonstrate that they have acted with undivided loyalty. And that’s super important if you don’t want to continue a situation that is not really empowering people.

The imbalance of power cannot really be addressed if you rely on time-poor, often resource-poor individuals to launch legal actions to try and get their rights recognized. So, this is one very important thing that a trust brings and that a contractual structure cannot. I think I’ll stop here, because I could speak for a very long time about data trusts.

But the two really important things are enabling groups to pool together resources, in this case data rights, and entrusting or tasking an intermediary who is a professional. That’s the other really important point: there has been talk of data intermediaries, and they are easily confused with corporations. When I say data intermediaries, I’m not talking of corporations. I’m talking of a much-needed 21st-century profession that is slow in developing but, I think, more needed than ever today. Just as at the end of the 19th century we had the medical profession, a profession born out of progress in the medical sciences, today I would argue that progress in data science warrants the birth of a new profession. In this case, let’s call them Data Stewards, to be agnostic as to the legal form of the data institution.
Satwik Mishra:
It’s an absolutely fascinating paper, and we’ll link to it in our show notes for everyone to go through. But let’s dig a little deeper into the idea of data trusts. One of their key strengths is having a range of options with different terms to support varied approaches to data governance. So, what approaches are you most excited about, and what approaches do you see being most effective in the digital economy we occupy today?
Sylvie Delacroix:
That’s a great question. And I think it’s very important to be concrete, actually. One example that is close to my heart at the moment, although in fact it hasn’t happened yet: I was contacted at some point by a group of schools who were interested in creating a data trust because they wanted to deliver more personalized education to their pupils.

And the idea was to say: well, we have different regulatory regimes across the world; we operate in different countries. Rather than just trying to meet the minimum regulatory requirements in each country, why don’t we empower both the children and the parents to choose a data trust that meets their aspirations and their attitude to risk? And accordingly, we as the school would be talking to the data trustee representing each group of parents and children, and we would be able to dynamically negotiate which data we have access to, in light of the terms of each trust.

So, for instance, to give you a concrete example, you could have at one end of the spectrum a group of parents and children, and I’ll come back to that, because there’s sometimes a difference between them, who are very happy for a lot of data to be collected by the school, and the in-kind benefits they expect are basically better personalized education.

But they don’t only expect those in-kind benefits for themselves; they also want to join in a kind of charitable aim. Let’s say they want to improve remote education for kids worldwide who don’t have access to in-person education. And so, they task the intermediary, in this case the data trustee, with negotiating data-sharing agreements with education providers around the world, for whom access to this rich, granular data could improve the delivery of education for kids other than themselves.

And I think that’s a great example of the fact that these data trusts need not bottom out into basically profit maximization. Some people may want to prioritize financial returns and that is a possibility, and there may have to be regulation to basically put a stop in some cases to what could be seen as abusive monetization of certain kinds of data.

But what I find very important to highlight is something too many people have forgotten: sharing your data and, in this case, tasking an intermediary with the stewardship of this data can unlock really important in-kind benefits that are otherwise impossible to achieve.
Satwik Mishra:
And so, just one follow-up on that. It’s also about thinking about, as you mentioned, corporations. Let’s talk about operating models. Data trusts, in their diverse approaches, can be funded either through the private sector or the public sector. So how do you envision the public sector implementing the concept of data trusts, versus the private sector offering it as a unique initiative within its own operating model? What do you think are the varied challenges if the public sector, or the private sector, comes forward to support or operationalize data trusts?
Sylvie Delacroix:
So, one thing we emphasize in this paper is that it’s high time we move away from the one-size-fits-all approach to data governance. We need to be able to give people a choice, so that they may choose to join a data trust at time ‘T’, like today, because they think it matches their aspirations and attitude to risk. And maybe tomorrow they may change their mind and join another one. Now, this is one thing that’s important to flag in terms of facilitating a debate that is very much lacking today. Because there’s no choice, no genuine choice, there’s also no debate, which is not very surprising. Choice is important for more than debate, but for debate it’s crucial.

And the other thing, in terms of the private-public distinction, is that I think we have to acknowledge that there are different needs that are best addressed by different kinds of structures. So, for instance, in the UK, a lot of concern and attention is being paid today to the fact that we really need to do better when it comes to sharing medical research data, education data and social care data.

There are too many cases that fall through the gaps because of a lack of data sharing. And in this case, there’s quite a compelling argument that this might be best funded by the public sector, possibly on a regional rather than a national basis. And it could be, and here I’m always very careful about the words I choose, a default structure to which, in the absence of choice, people are assigned. This acknowledges the fact that today, unfortunately, many people don’t actually have much awareness that data is being collected every day, whether we like it or not: when we go shopping, when we go online, etc.

And so, the lack of data awareness today means that, realistically, even if we were to develop a wide ecosystem of bottom-up data empowerment structures, there is a risk of ending up in a situation where we only empower the least vulnerable part of the population, i.e. the people who are most aware of the risks they take when they share data. That is a problem we can’t fix with a magic wand.

I am always concerned about this, because advocating a default data trust has its risks and dangers. Obviously, there would have to be a very strong oversight mechanism to avoid abuses, etc. But in answer to your question about private versus public: I do think it’s very important that in some cases we have publicly financed bottom-up empowerment institutions, complemented by private structures that can offer a choice and an alternative.

Now, that is one example. There are others where you could consider, for instance, a data trust created purely to offer a service, effectively answering the needs of all those people who feel they don’t have the time or the intellectual resources to make informed choices about which data they share, on which occasion, for what purpose, etc.

They’re like: ‘we need someone whom we can trust’, a professional who can take on the brief of managing these data-sharing decisions, monitoring them, etc. And so there I would happily pay a fee for the service, which in this case is basically data stewardship at a personal level, or maybe a family level.

So that is another model, the fee-for-service model. There are others, but that’s probably a model best instantiated through private initiatives rather than public ones.
Satwik Mishra:
Let’s consider three levels of this: one being the users you’re talking about, coming together to entrust their data to these data trusts.

Then you talk about a new layer of intermediaries, which should emerge in the 21st century and is very much required. But this ecosystem also exists within top-down regulation and legislation. GDPR is a great example, and there is a privacy law being debated in the US right now. How do you envisage those laws, at the federal level or at the state level, supporting initiatives like bottom-up data empowerment through data trusts?
Sylvie Delacroix:
The first essential level is to give rights to people. In the US, for the moment, that’s still lacking. In California you do have personal data rights, but beyond that, it’s a major drawback, I would argue. And so the first step, the essential step, is to grant personal data rights.

That’s not enough, of course, because you also have to be prepared to implement those data rights. And that is still lacking today. I have to say there is an increasingly high level of frustration among those who are examining what happens when people file a data portability request or a data access request; these are not rights that are easy to exercise.

Often, they get very poor answers. And so we still have a long way to go in terms of implementing those data rights in a robust way that actually empowers people. That has to be highlighted. But beyond that, I do think we also need a multi-layered approach to data governance. Just take medical governance, say.

We don’t just have top-down national legislation when it comes to medical practice. We also have a layer of professional regulation, and we often have regional levels of regulation or hospital-based levels of regulation. There are multiple layers, and I think this is very important when it comes to data too: to allow for complementary levels that respect the fact that sometimes decisions need to be made at a very local level and are best overseen by a very local oversight committee.

And at the same time, you do need, of course, national legislation. So that is something that’s still missing today. We don’t have the kind of multi-layered approach to governance that we need.
Satwik Mishra:
Another very important piece of work you have put forth, and one that is especially relevant in the current climate, is around large language models.

And a feature of your work is something you call Ensemble Contestability. So, tell us a little about what it is and how it can foster a trustworthy relationship between users and data-driven systems.
Sylvie Delacroix:
I use this term because I was keen to engage with the existing computer science literature and look at the tools currently used by computer scientists that could be repurposed, if you want, for the sake of increasing and incentivizing critical engagement on the part of users of augmentation tools. By augmentation tools, I mean recommender systems, or a system that’s there to optimize the delivery of homework for children following remote education, etc. There are many, many examples. The key thing here is that there has been much talk about transparency, about the need to provide explanations to users of those automated systems.

Per se, there’s nothing wrong with that. But I do think this is based on something problematic; almost a fiction, in the sense that it rests on the idea that what matters is to protect our individual deliberative selves. So, when I make a decision, it’s very important that I make it in a way that could be seen as autonomous.

So, I’m not being misled into believing x, y, or z. Now, that’s all laudable, but it’s not enough if we want these tools to be the trustworthy partners we want them to be, especially in morally loaded contexts like education, justice, or healthcare. So, what do we need there?

When you think of how we interact with doctors or teachers, what do we do?

We don’t just say ‘I want an explanation’ and suppose the conversation stops there. We tend to ask questions of each other. And it’s absolutely crucial that these conversations take place, and that there is an open-ended character to the conversation. So what I’ve tried to do is look at the way in which some computer scientists use so-called ensemble techniques to avoid the risk of base learners being overfitted to a certain training data sample.

For that, what they do is identify subsets of the training data, chosen according to certain rules, and then train base learners on each of these sub-datasets. So, let’s say there are five different sub-datasets, and you have base learners A, B, C, D, E. They’re each trained on slightly different bits of training data.

And so, of course, they’re going to show differences at the end of the learning process. What they typically do then is average out the differences between those base learners, or use various other methods to harmonize the results. And what I propose is: look, forget about this last step. We don’t actually want to harmonize the results.

What we want to do is give users a very concrete chance to contest. Let’s imagine I am receiving remote homework from an education provider, and I complain that I always get easier physics homework than my brother. So, I ask: why am I getting easier physics homework?

Now, according to the dominant model, I could be given a counterfactual explanation that says: well, Sylvie, if you hadn’t scored so highly on anxiety tests, and if you hadn’t scored quite poorly on your last physics test, you would have been given the same homework as your brother. Now, does that empower me? No. There’s nothing much I can do except think: oh, I should be less anxious and maybe score more highly on my next test. Instead, what I propose is this: what if I received an example of the kind of homework I would have been given had the homework provider used a slightly different sub-learner?

So, a system that’s been trained, let’s say, on data from girls-only schools. And then I also see a slightly different homework set produced by a system that’s been trained on data from boys-only schools. And so then I could say to my homework provider: why are you using this particular model, trained on mixed-schools data? I think the homework I would have had, had you chosen this other sub-training dataset, is much better.

And so, what’s interesting here is that I can give feedback. I can justify why I prefer output from a slightly different model. And this could be conducive to a wider conversation where my parents can pitch in, or the teachers can pitch in. And this is key in the sense of saying, well, what is valuable about education is the fact that it’s a never-ending process of questions and answers.

We can’t ever know for sure what constitutes good education, so we have to keep asking each other questions. And that practice of asking each other questions is at risk of being compromised, of being dulled down if you want, by our increasing use of automated tools that discourage this kind of critical engagement; tools that just give me a counterfactual explanation saying: oh well, Sylvie, you’re just a bit too anxious.

Surely we can do better. And so this is my attempt to say: let’s not just be philosophers who wave their hands in the air, say we should do better, and offer some complicated philosophical conception of why things should be different. Let’s look in practice at how we can change things.

So, this is a hands-on attempt, if you want, to say: look, this is something we can actually do. There are complications, of course, and maybe I’ll come to that, but this is something we could implement tomorrow if there is the willingness to do it.
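The ensemble-contestability idea described above — train base learners on different sub-datasets, then skip the final harmonizing step and surface each learner's output for the user to compare and contest — can be sketched in code. Everything below (the toy data, the nearest-mean "learners", the subset rule) is a hypothetical stand-in for illustration, not an implementation from the paper:

```python
# Toy sketch of "ensemble contestability" (all data and learners here
# are hypothetical): train base learners on different sub-datasets,
# then SKIP the usual averaging step and surface each learner's
# output so the user can compare, prefer, and contest.
import random
from statistics import mean

def train_nearest_mean(rows):
    """A deliberately simple base learner: store the mean feature
    value observed for each label."""
    by_label = {}
    for x, y in rows:
        by_label.setdefault(y, []).append(x)
    return {label: mean(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Predict the label whose training mean is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

# Hypothetical training data: (feature, label) pairs, e.g. a score
# mapped to "easy" vs "hard" homework.
data = [(0.1, "easy"), (0.2, "easy"), (0.3, "easy"), (0.4, "easy"),
        (0.6, "hard"), (0.7, "hard"), (0.8, "hard"), (0.9, "hard")]

random.seed(0)
# Five base learners (A..E), each trained on a different random
# subset -- mirroring the five sub-datasets in the example above.
ensemble = {name: train_nearest_mean(random.sample(data, k=6))
            for name in "ABCDE"}

x_new = 0.55
# Conventional ensembling would now aggregate these into one answer;
# here each base learner's output stays visible for contestation.
for name, model in ensemble.items():
    print(f"learner {name}: {predict(model, x_new)}")
```

In a real system each sub-dataset would be chosen along a meaningful axis (the girls-only versus boys-only schools of the example), and the interface would let the user say which learner's output they prefer and why.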
Satwik Mishra:
So, this brings me to a couple of follow-ups. One is a bit practical, about implementation; the other is more philosophical, about why you chose the fields you did, and draws a little on your work. But a fundamental challenge to providing users with multiple outputs and choices is having enough valid data to train these algorithms. So how can we address this problem of data for base learners while also encouraging a multiplicity of outputs from these algorithms?
Sylvie Delacroix:
So, basically, there is a problem; nobody’s going to deny that. At the moment, actually, the biggest obstacle to having these helpful tools that can improve education is not fancy algorithms. We have the fancy algorithms; it’s data. The biggest obstacle to my proposal is absolutely the fact that at the moment we struggle to put together even one good-enough-quality dataset, let alone create five sub-datasets.

One from girls-only schools, one from boys-only schools. That’s going to be tricky.

It’s going to be tricky because we don’t have a data-sharing culture. And why don’t we have a data-sharing culture? Because we’ve been assuming that people are happy to sign blank checks. That’s what happens today: when I’m asked if I’m happy to share my data for X and Y study, well, I sign there and that’s it.

Do I have any means of monitoring the data-sharing agreement? Do I have any way of thinking, “Yes, someone is looking out for me. Someone is checking that the terms that govern this data-sharing agreement are respected”? Well, I could hope so. I could put my faith in the very overworked institutions that are supposed to be cracking down on abuses of data-sharing agreements, but in practice, that’s not enough.

I think that’s very close to a blank check, and I think we can do better than that. We can do better than make-believe consent. But for that, we urgently need bottom-up empowerment structures. And we also need to keep an open mind. I’m not arguing that every data empowerment structure has to be a data trust. Far from it.

What I’m arguing is that we can’t afford to wait any longer when it comes to changing the way our data governance structures are organized. And so, my best hope is precisely to join the dots between the huge opportunities we have today when it comes to, say, large language models, and I’m sure we are going to talk about this, and data empowerment structures.

These are actually connected, and education is one of the best examples of that. Think of the example I gave you of those schools who wanted to create a data trust to empower children and educate them about choices pertaining to data. This is a great example where you could say: we’re not just going to create a data trust to empower children and parents to make choices. We can also, through those structures, educate children: actually, you can have a say over what the tools look like, over why you might prefer a tool that is slightly different.

And you can leave feedback, and this feedback is your data, and you can have a kind of agency over this data. Now, that’s a different world from the one we have today, and that’s what’s missing. So, I do think it’s really important to join those so-far-separate conversations: on one hand, data empowerment; on the other, how we can create participatory interfaces when it comes to AI.
Satwik Mishra:
My second follow-up is more philosophical, and you mentioned this earlier as well. You mention three fields in your work on taking large language models into the domains of healthcare, the justice system, and education. You say something to the effect of “domains which hinge upon the perpetual reinterrogation of the foundational values.”

What is the importance of taking large language models into these fields? And why is the idea of contestability so important in these particular domains?
Sylvie Delacroix:
Thank you. Now, I almost have to give you an apology: the paper I put online was designed for a philosophy journal, hence “the perpetual reinterrogation of foundational values.”

But basically, what does this mean? It means that today, if you look at, say, 95% of the research currently taking place on improving the way in which large language models communicate uncertainty, and this is a massively big field of research. Why is it such a big field of research?

Because everybody knows that these tools are not going to fulfill their promise in fields like healthcare, justice, or education until they’re better at communicating uncertainty. It’s a major imperative if those tools are to live up to their potential. Now, if you look at this research, what’s fascinating is that almost all of it is focused on one objective, to simplify.

And that objective is to avoid unwarranted epistemic confidence on the part of the user. Now, to translate into normal language: basically, let’s make sure you don’t just take the output of the large language model at face value. Sometimes you ought to do further fact-finding; sometimes you ought to go and check what the large language model is saying.

So how do we convey to the user of a large language model the need to sometimes question the output and go and check further? Well, that is moving fast right now. Many techniques are being developed to change the incentives, because at the moment large language models are mostly incentivized to provide the output most liked by the user.

Now the question is: can we slightly change the incentive structure so that they also have incentives to communicate uncertainty? And here the extra complication is that there are many different kinds of uncertainty, of course. I’m not going to go into further details, but this is fascinating research. For me, though, there is an elephant in the room.

And that’s what I’ve tried to flag in this paper. When we communicate uncertainty, yes, sometimes it’s to invite my interlocutor to go and do some further fact-finding, but sometimes it serves a completely different objective. Sometimes, let’s say we are talking about the merits of gender equality. Now, I might express uncertainty in the way I speak not to invite you to further fact-finding, but to mark the fact that I’m committed to a certain type of conversation.

And that’s a conversation that is open to a variety of reasonable views. So, it’s basically saying: I am expressing uncertainty because I want us, me and my interlocutors, to commit to a conversation that is sufficiently open, so that it can be inclusive of a variety of world views. Now that’s the objective: communicating uncertainty as a kind of humility marker that unlocks certain types of conversations. That is very different from the objective of inviting you to further fact-finding.

And it happens to be super important in fields like justice, healthcare, and education. Why? Because these fields are morally loaded. They are always work in progress. We will never, I hope, we will never stop asking ourselves, what does it mean to say something is unjust? What does it mean to say that someone is not healthy? What does health mean?

These are actually not neutral words. And so, what I’m trying to argue here is that I see large language models as a fantastically exciting opportunity in these fields, if, and this is an important “if”, we pay sufficient attention to the subtle way in which the expression of uncertainty can change the qualitative nature of future conversations.

And if you think about the scale at which these large language models are going to be deployed, this is going to have a massive impact. Imagine every judge in the country relying on a certain type of large language model just for advice: for summarizing previous cases, etc. Well, the way that large language model expresses itself will gradually shape the type of discourse, the type of conversation, that becomes dominant in a certain field.

I find it bewildering, actually, that there hasn’t been more talk about this yet. And I also see this as very exciting, because there’s something we can do about it.
Satwik Mishra:
In your research, you also speak about the healthcare case study in this domain. So, walk us through that. How can general practitioners participate in this trustworthy human large language model collaboration via this interface that you’re talking about?
Sylvie Delacroix:
So, I provided an example, actually, based on conversations I had with a colleague who’s a retired general practitioner in the UK. And the idea was to say, look, let’s imagine we have a general practitioner, which in the UK is basically a primary care doctor.

Typically, these doctors have ten minutes to have a conversation with a patient and figure out what needs to be done, if anything. So, it’s a very intense, time-limited conversation. Now, we can imagine a situation where these doctors, who are typically time starved, may be unsure. They may rely on the large language model to advise them on the kind of tests they should be considering in light of the complaints or the concerns expressed by the patient. Now, let’s say I’m a general practitioner and I see the list, and I’m very concerned that the list doesn’t include a particular test that I think is particularly important.

But it’s not been listed by the large language model. And the model is, basically, expressing uncertainty in terms of a color code. So, I could have the most obviously salient tests in red and then tests that are less certain to be salient in, I don’t know, gradually lighter blue shades or whatever.
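The color-coding idea can be sketched in a few lines of code. This is purely a hypothetical illustration: the test names, confidence scores, and thresholds below are invented for the sake of the example, not drawn from any real clinical system.

```python
# Hypothetical sketch of the color-coded uncertainty display described above:
# map the model's confidence that a test is salient to a display color
# (red = most obviously salient, paler blue = less certain).
# All names and numbers here are invented for illustration.

def confidence_color(score: float) -> str:
    """Translate a 0..1 salience confidence into a display color."""
    if score >= 0.8:
        return "red"          # most obviously salient
    elif score >= 0.5:
        return "dark blue"    # plausibly salient
    else:
        return "light blue"   # uncertain, borderline

suggested_tests = {
    "full blood count": 0.92,
    "thyroid panel": 0.61,
    "ECG": 0.34,
}

# Show the model's suggestions, most confident first.
for test, score in sorted(suggested_tests.items(), key=lambda kv: -kv[1]):
    print(f"{test}: {confidence_color(score)}")
```

Note that a display like this only qualifies the tests the model did list; it says nothing about tests it failed to list at all, which is exactly the incompleteness problem raised in the conversation.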

So, that’s an interesting attempt to communicate uncertainty to me. But what you haven’t communicated is incompleteness. You haven’t told me that you, the large language model, are not in front of the patient, and that there are things you may not be aware of because you have incomplete information.

And so how does a large language model communicate incompleteness? Like telling a primary care doctor: these are the tests that seem salient based on what you’ve told me, but remember, I’m not in front of the patient; please use your eyes and your ears to complete the list. Now, let’s say I’m a very diligent primary care doctor, and I give feedback.

Let’s say there’s a discussion board somewhere, and I can say I was disappointed by this output because it forgot ‘x’. Now, given that as a primary care doctor I work as part of a community of other doctors, wouldn’t it be better if, instead of just my feedback influencing the system on an individual basis, my feedback was recorded and then discussed in an in-person group conversation that takes place, let’s say, every two weeks, where doctors can discuss their experience in terms of how the large language model has helped them, how it has communicated uncertainty or failed to, etc.?

And then you could imagine a situation where you curate the discussion and even validate the feedback that’s produced on the basis of that conversation. So, for instance, in the UK we have the National Health Service. It’s very credible to imagine that if there were to be, for any reason, an official large language model used by primary care doctors, any feedback fed back to the system would have to be validated on a national basis.

Otherwise, you end up with a system that could evolve in bizarre, unpredictable ways. And so, that is a very interesting case where there is a strong argument for modalities for collective feedback. And these cases tend to be precisely in these morally loaded fields like healthcare and education. You could imagine the same with teachers, etc.

And so, this is, again, a field, an opportunity, that’s not really been explored to its full potential yet. I presented it at a conference with computer scientists a few months ago. They didn’t laugh in my face. They seemed to think it would not be impossible. So, I am optimistic. But again, it’s high risk. This is not something that’s been done before.

And I really hope I’m talking to wise people right now to convince them to kind of implement this.
Satwik Mishra:
I hope the wise people take up your suggestion. So let me summarize the idea of contestability: it’s partly expressing uncertainty, partly communicating incompleteness, and partly having a participatory interface with which you can engage a large language model.

Now, let’s talk about the UI of the existing large language models in the marketplace today. Do you think we’ll need to devise new methods for this collaboration, or can it be integrated into the existing large language model UI set-up? How do you envisage this being integrated into large language models?
Sylvie Delacroix:
You mean the collective modalities for feedback?
Satwik Mishra:
Yeah.
Sylvie Delacroix:
At the moment these large language models are mostly being developed by corporations. The scenario where the National Health Service in the UK were to design its own large language model is a remote one, but it would be a highly desirable one, if you ask me. They have an amazing dataset.

Will it happen? I haven’t met a single person who’s optimistic enough to think that it will, and that’s a great shame, by the way. So we have a problem, which is that we don’t at the moment have serious actors like publicly funded builders of large language models to be used by the public.

And so then comes the question: what are the buttons you push to get the commercial builders of large language models to take on board the concerns of, let’s say, a community of primary care doctors? And I think this is not impossible. You could imagine, let’s say, that in the UK you have the Royal College of GPs.

It’s a professional body that represents all these primary care doctors. You could say, well, the only way we can adopt this model, the only way it could be approved for use, is if you put in place these collective modalities for feedback. So, you will need that kind of collective bargaining, effectively, to put pressure on design choices that go beyond just the most liked output.

Which is otherwise going to be the model that dominates. So, yeah, it’s a great question, and I very much hope that there will be enough examples of this in the near future.
Satwik Mishra:
We are going to have a data driven future, there is no doubt about it. What are you most optimistic about in this future? And what do you think we should be wary of?
Sylvie Delacroix:
Part of my answer has kind of already been aired. One thing I’m very concerned about is that, in the optimism about building data empowerment infrastructure, etc., at the moment we are still at risk of developing solutions that only help the least vulnerable part of the population.

So how do you bridge the gap? How do you not only acknowledge, but take on board, the fact that a large proportion of the population hasn’t even thought about data, is not concerned about data governance? How do you go from there to a system that gives them choice, encourages debate, etc.? It’s tricky. And this is a crucial moment.

I feel that so much is at stake right now in the choices that are about to be made. And that’s why I do think one crucial thing is that we experiment with enough variety of structures that we can learn from failures as well. Not every structure is going to succeed, and by structure I mean bottom-up empowerment structures.

But we need a variety of them, we need enough of them, to see what works, what doesn’t work, and how we take on board what I would call the data awareness gap. This is one concern I have. I have many others, because, of course, there’s also a risk of being too European-centered or US-centered.

This system at the moment presupposes data rights that don’t exist in many, many countries. So, what are the alternatives? One alternative is to create structures that collect the data themselves and use the data as leverage to bargain for better working conditions. That’s one model that’s being used for Uber drivers, for example.

But again, I am mindful of those risks. One is to import too much of a European-centric understanding of what constitutes vulnerability. What Europeans think of as a vulnerability, people in China may think of completely differently. And so how do we learn from each other as well, given that we are going to come up with different solutions?

Given the climate today, it requires quite a large dose of optimism to think we can share and learn enough from each other on that front. Because data is not national; data doesn’t have borders. So, this is one of the things we have not really come up with a solution for, that we’ve not really addressed properly yet.
Satwik Mishra:
And optimism?
Sylvie Delacroix:
My optimism is very much tied to the fact that today we have an opportunity to join the dots between the data empowerment initiatives, which are starting to gain momentum as a social movement. The Data Empowerment Fund had over 900 applications.

Now, that was bewildering. Applications from all over the world, high-quality applications. To me, that’s fantastic. It means that a movement that started five, six years ago as a small academic thing is becoming a real worldwide movement. And the best hope I have is that, if we connect these data empowerment structures, particularly in the context of education, with participatory design for the interfaces of the AI tools we produce, the next generations will be much better equipped than us in some ways to move things along in ways that empower us rather than the opposite.
Satwik Mishra:
Professor Sylvie, your research is fascinating, and I’m sure its practical implementations will end up creating a more trustworthy technology landscape. Thank you so much for being here.
Sylvie Delacroix:
Thank you, Satwik.

Open Transaction Networks

Trustworthy Tech Dialogues Transcript


Satwik Mishra:
Hello, everyone. I’m Satwik Mishra, the Executive Director for the Centre for Trustworthy Technology, a World Economic Forum Centre for the Fourth Industrial Revolution. And today I have with me Dr. Pramod Varma.

Now, every time I have to introduce Dr. Pramod Varma, I find myself wrestling with brevity. His influence spans countless technological ecosystems, each marked by its own compelling narrative of innovation and a deep-seated commitment to building trust in technology. His work isn’t just about technological advancement; it’s also about shaping a future where technology amplifies societal trust.

Dr. Pramod Varma is the chief architect of Aadhaar, India’s digital identity program, which has successfully covered more than a billion people. He’s also the chief architect of various citizen-stack layers in India such as eSign, DigiLocker and the Unified Payments Interface. He’s currently the CTO of EkStep Foundation, a non-profit creating learner-centric digital public goods, and the Co-Chair of the Center for Digital Public Infrastructure, a global digital public infrastructure advisory. He is the genesis author of the open-source Beckn protocol, which underpins networks such as the Open Network for Digital Commerce and the Unified Health Interface. Finally, he’s an old friend, a mentor, and, to my mind, the quintessential public interest technologist in the world today. I’m eagerly looking forward to this conversation. Dr. Pramod, thank you so much for being here.

Dr. Pramod Varma:
Lovely to be here, and it’s a fantastic topic that you have selected. I look forward to having the conversation.
Satwik Mishra:
Okay, so let’s dive in. Firstly, to kick things off: to your mind, what is digital infrastructure in the world today? In this age of relentless technological advancement, what are its technical and structural tenets? How would you define digital infrastructure?

Dr. Pramod Varma:
I think the vocabulary of digital public infrastructure sort of came about during the G20 discussions, as you know. The industrial era is over. We went thick and fast through the information era, and continue to go through it post the public internet. There has been a dramatic change in the way digital technologies have influenced humanity: the internet, GPS, cloud computing, the smartphone, everything that came about. And even faster than that, we are walking straight into the intelligence era.

The one advantage earlier was that we just got enough time to think through, from an academic perspective, labor laws, labor, productivity, the economy. Now, with AI coming in, the intelligence era is going to see a lot of questions asked.

Now, for us, as we navigated India’s digital infrastructure journey, public infrastructure as a public good in one sense, our questions were twofold. One, we continued to see a massive division in society between the people who have access, and get outcomes, and those who don’t.

You know, mere access. Access to financial products, access to banking, access to cards, access to lending, access to saving opportunities; similarly, access to healthcare, to primary care; access to education, to better education; and if you’re a farmer, access to knowledge, you know, agricultural knowledge, or access to agricultural markets. Much of this is driven by having access.

The first cut is access. Access will drive knowledge and agency, and then it’ll drive outcomes. So, we can’t go straight to outcomes without unblocking access and unblocking agency. Then come outcomes, right, eventually: is that a good outcome?

And we were asking the question: why India specifically? I’ll give you some statistics.

In 2009, India was one of the most poorly banked nations on earth: less than 20 percent of the people had bank account access. Nobody had a portable identity. By then, post-1992, India had changed its economic policies and resurrected itself. The good thing is that allowed people to explore newer opportunities, and that meant people had to move from their villages and hometowns. And India, as you know, has no unifying language, no unifying culture, no unifying weather, no unifying food.

India is just like a continent. So that meant lack of identity, lack of bank accounts, lack of mobile connections. All this meant huge impediments for a large section of the society: 1.3 billion then, currently expected to be 1.4 billion people. Out of that, maybe 50 to 75 million people had access to everything. Everybody else lived in what’s called the informal economy. That means they somehow survive. They have roadside money lenders.

There are some informal saving schemes they cook up among themselves. It’s not well regulated, there’s no consumer protection, and people lose money and get cheated.

But really the question was: how do we bridge this gap, how do we formalize their economy? It turned out that much of that lack of access was simply because of cost structures in the system. And post-9/11, as you know, much of the global norms on money laundering and terrorism financing got tightened. The tighter they got, the fewer people got into the system, because they would ask for a hundred more papers, you know, just to prove you’re OK. So we were asking the question: what does that cost amount to?

It amounted to a few very basic things. The cost of who you are: identity verification. That means proving your credentials: proving, oh, I passed high school, or I’m a graduate, or I work in this company. Proof of work, proof of skill, proof of earning, and proof of revenue if you’re a small business, right?

Proof of existence, you know, identity and trade licenses and all. Every one of them adds cost to the system. And when the cost is high, your revenue from a customer has to be higher than the cost to make it viable. But these are people who might put $5 in a bank account, you know, $2 of savings. Against that $5 are the banking costs of customer acquisition, customer transactions, customer engagement, the cost of compliance and all the paperwork, and the cost of overall trust.

These costs added up so much that most systems would not go after the people beyond the top 50-75 million. So every company and every bank and every lender and every system was targeting the same cohort of 50-75 million, because the rest were almost simply unviable. And that started our journey of asking that question.

How do you use digital to dramatically collapse the cost, as shared infrastructure? If the shared infrastructure is built in such a way that it is universal, inclusive, low cost and high volume, remember, that’s very important for us: universal, inclusive, high volume, low cost.

If you can build that infrastructure as public infrastructure, we strongly believed we would close the access gap, and that is precisely what happened in India through digital public infrastructure.

If you don’t reduce the cost of KYC, the cost of customer acquisition, and the cost of paperwork, you can’t open a bank account, you can’t lend. It’s obvious, right?

You know, today 1.4 billion people have a digital identity in the country. They can digitally authenticate themselves anywhere; it’s a public good. It’s like GPS: GPS tells you where you are, identity tells the system who you are, not in the philosophical sense, but who you are in the system. And then UPI payments completely collapsed the cost of moving money. It’s one seven-hundredth of a dollar to move money. It’s really cheap.

And DigiLocker collapsed the cost of digital credentialing, verifiable credentialing, and so on. So, it’s very interesting that when we built this, we were able to do high-volume, low-cost public infrastructure.

Now, in 2016 we only had around 50 million people doing digital payments. Today, 500 million people do digital payments: in the span of 6-7 years, we went up 10X. Identity went from nobody to 1.4 billion people. Bank account penetration went from 17 or 18 percent in 2009 to near-universal coverage. So, this nonlinear inclusion and formalisation was also very key. The exact outcome we were after, that access vector, was opened up by collapsing the cost of underlying common things like payments, identity, and data sharing. So that’s the story of DPI.

Satwik Mishra:
That’s fascinating. I always think of the DPI story, and what amazes me is the private sector industry getting unleashed by a public layer. That is fascinating: what happened over the last 10-15 years, with the private sector just booming in India, due to that public infrastructure layer that came out of the DPI story.

Let’s pivot to our main topic today.

So, as you know, we did some research on open transaction networks, which came out a couple of months back. We’ll link it in the show notes as we release this video.

What are open transaction networks and why is it important in the technological world that we occupy today?

Dr. Pramod Varma:
I think this is a natural evolution from a platform-centric economy to a network-centric economy. That doesn’t mean platforms will go away.

The platforms will exist, but many platforms will join together, and when they do, it forms a network. But when the platforms join together, the network needs a set of underlying trust layers and protocols, which are nothing but the language of how these platforms talk to each other. So, if we were able to design that, we thought a network would create a much more universal architecture than a purely concentrated single-platform story.

And why were we inspired? We were inspired by the internet. The internet is not a platform; it is a network. Our telephone network is a network, not a platform. Our e-mail networks are networks, not platforms.

So, anything that we have seen work at humanity scale has, in one sense, been a network, including SWIFT payments or the payment networks around the world, right, where banks can transfer to banks. Inefficient in some sense; in the new world we should find more efficient ways to do it.

But nevertheless, those were universal payment infrastructures. So, anywhere you look at universal, multi-country infrastructure, you will see a network playing out. Otherwise, you would have one company or one platform from some part of the world connecting the whole of humanity. And the problem with that is that either it becomes extremely monopolistic in nature, or it’ll be very hard to do.

Either way, we must think hard. Because we were building public infrastructure, digital infrastructure as public goods, we were thinking: what if we start extending the idea of the internet? The internet was held together by a set of protocols and standards, right? That’s what we did.

When you type HTTP in the browser, that really means Hypertext Transfer Protocol. That’s exactly what it stands for.
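To make the protocol point concrete: an HTTP/1.1 request is nothing more than agreed-upon structured text, so any server, built by anyone, can recover the same fields from it. A minimal sketch in Python (the host and path are placeholder values):

```python
# A protocol is just an agreed-upon message format. An HTTP/1.1 GET request
# is plain structured text that any conforming server can parse, regardless
# of who built the client or the server.

def build_request(host: str, path: str) -> str:
    """Compose a minimal HTTP/1.1 GET request as raw protocol text."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, target, version
        f"Host: {host}\r\n"          # mandatory header in HTTP/1.1
        "Connection: close\r\n"
        "\r\n"                       # blank line ends the header section
    )

def parse_request_line(raw: str) -> dict:
    """Any server, written by anyone, can recover the same three fields."""
    method, target, version = raw.split("\r\n")[0].split(" ")
    return {"method": method, "target": target, "version": version}

req = build_request("example.org", "/index.html")
print(parse_request_line(req))
```

The interoperability lives entirely in the shared format: the client and server above could be written by two companies that have never spoken to each other.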

The fact that you can send e-mail from Yahoo Mail to my Gmail, or to someone else on Proton Mail, maybe in Europe somebody privacy-centric wants to use Proton Mail. Sure, we need choice. Choices are good, competition is good.

But to create interoperability between these platforms, you need an interconnecting language, which is called a protocol, and then a set of standards, like HTML and all the standards that came about.

But for some reason, post internet and GPS, our standards bodies did not extend their reach, or they were somewhat weakened in one sense, no different from traditional institutions. Sometimes there’s a rise of an institution at the appropriate time, and after some time it doesn’t know why it exists; the value the institution brings reduces.

This happened to the tech standards bodies as well, and the speed at which companies were driving this closed platform economy was so fast that neither the academics nor the standards folks could catch up, take a step back, and say, you know, let’s think about how the internet became pervasive and universal.

It’s because of the underlying protocols and standards. How did the telephony network become pervasive and universal?

Because of underlying protocols and standards. Shouldn’t we be doing this for next-generation payments? Shouldn’t we be doing this for next-generation data portability? Next-generation, you know, commerce and trade?

Because individuals and companies exchange economic assets and economic resources, there must be a means to exchange them. Money is an economic resource. Data is an economic resource. But there are also other resources: any value, any products. If that ought to happen, we must start thinking aggressively.

Think about the underlying connection and exchange protocols, a transfer protocol like the Hypertext Transfer Protocol, HTTP. We need a commerce transfer protocol or a money transfer protocol and so on.

Somebody needs to define these, and by design such protocols and standards ought to be open. Protocols by themselves cannot be locked in and proprietary, but platforms can be private.

So, you can have an AT&T or a Verizon, or an Airtel in India. Sure, they’re all private companies, nothing wrong with that. And you can have private innovation on the phones, the handheld devices, and so on. But for the underlying protocol: if you want voice interoperability, if you want content interoperability, if you want money, commerce and data interoperability, we must think about protocols.

And so, we were navigating from first principles, thinking from first principles. We were motivated and inspired by what happened with the internet and GSM and the earlier payment networks and so on, and saying we must continue to extend this protocol story, especially now.

Why is it important now? Because finally every human is walking around with a connected computer in their hand. Today our smartphones are, you know, faster and more powerful than a supercomputer from the 90s.

We are walking around with really powerful computers, with extremely powerful capabilities such as voice, pictures, and cameras.

Everything is being commoditized. So if two parties both have these connected devices, these computers, in their hands, shouldn’t they be able to exchange money? Shouldn’t they be able to do trade and commerce? And if one is a doctor and one is a patient, shouldn’t they be able to connect and do a telemedicine call, things like that?

Why do they all need to be in one mega platform in the world?

You know, that didn’t gel for us. So, we wanted universality. You want choice and fair competition. So, we went one level below the platform and said: let’s attack the protocol layer,

and let’s build the protocols. And when that happens, if we thought the platform economy was fast, the network economy is 100X faster, because every node that joins a network adds combinatorial value to the network.
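That combinatorial point is the familiar intuition behind Metcalfe’s law: with n participants joined by a common protocol, the number of possible pairwise connections grows as n(n-1)/2, so each new node creates links to every node already there. A quick illustrative calculation (the node counts are arbitrary examples):

```python
# Number of possible pairwise connections among n nodes on a shared protocol:
# n * (n - 1) / 2. Each new node adds (n - 1) new links, which is why network
# value grows combinatorially rather than additively.

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 10, 100, 350):  # 350 echoes the number of banks on UPI
    print(f"{n:>4} nodes -> {pairwise_links(n):>6} possible links")
```

Going from 100 to 350 nodes, for instance, takes the possible link count from 4,950 to 61,075: far more than a proportional increase.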

So, I think that’s what the internet was all about, right? The internet exploded because of this. It also creates a new set of innovation possibilities for private entrepreneurs, because you create a new set of protocols beyond the internet’s, which is what we saw with UPI in India. UPI, the Unified Payments Interface, was a protocol story where Google Pay and WhatsApp and Walmart’s PhonePe and all could exchange money with private banks and public banks, all in real time.

350 banks, hundreds of apps and workflows, all integrated through a common protocol, exchanging money instantly, right, in a guaranteed, trusted, consumer-protected manner. And we will talk about trust separately.

But these protocols, with trust embedded in the networks, can create a hundred X more growth and new innovation possibilities than a single-platform story, and that takes us to a universal infrastructure approach rather than a platform-centric, more siloed approach.

So, it was a very first-principles, computer-scientist sort of thinking, and people thought we were a little bit wacko. Maybe we were wacko, we were thinking crazy, but what we were doing was open-source stuff anyway.

You know, we thought: what do you lose? What is the worst case?

One year later, we might have thought, hey, you know, this damn thing, we were smoking some stuff and it didn’t work out. But we wouldn’t have lost anything.

But what we saw was hundreds of open-source folks joining. Now we have thousands of them around the world joining the movement to help build the protocol in the open, and then entrepreneurs joining and saying: I can use this protocol to do what I know I can’t do otherwise, faster and cheaper. And they’re all coming together.

So, maybe we unleashed a next set of value into the private innovation ecosystem. And if the ecosystem likes it, they will run with the ball, as long as we keep the protocol open and open source. And that’s what we’ve been doing for the last 4 years, and you covered it brilliantly in the paper you produced. If people are listening to this, you must go and read that paper.

Satwik Mishra:
So, on open transaction networks, one of the core tenets is interoperability. We’ve seen this word thrown about a lot over the last couple of years. It’s suddenly become way more prominent than it was when we were working on it five, six years back.

Europe has its own interoperability layer coming about with the DMA. We’re pushing it from the open transaction networks side, from protocols.

Give us a sense of the philosophy of interoperability. Why was it considered important when ARPANET and the World Wide Web came about, by the likes of Tim Berners-Lee and Vint Cerf? Why did they think it was so important, and why is it important today? What does it mean for the world economy today?

Dr. Pramod Varma:
Funny thing: interoperability was always there, even in the pre-digital world. The fact that a part built by one company to a specification can fit into a car manufactured by a completely different company, without them talking to each other at all, tells you there’s interoperability in the physical world.

Your house is constructed with a set of interoperable components. If everybody had to sit together and build custom doorknobs and hinges and, you know, all that, you would never have built a house, or every house would have been so costly to build it made no sense. Building to a specification, to unbundle and dynamically re-bundle later, was a norm we have always seen in the history of humanity.

There’s no other way we would have survived. Interoperability is not a new word; it’s not anything cool and new. It existed.

Anything that works at human scale has interoperability built in. There’s no other way it would have survived. Things nicely fit in; things come together.

When the digital world came, we extended the idea of interoperability into the digital world. Computer protocols, TCP/IP and all the protocols that we talked about, are all interoperability.

How do two machines talk? If two machines are built by two different companies, how do they talk? Some standards, some specifications by which they can talk.

How did independent modems talk in the early days of the internet? Everything was about specification. If you look at a Windows laptop, with the Windows operating system, Microsoft doesn’t build the mouse. Logitech builds the mouse; some other company builds the camera.

I’m using earphones right now built by some inexpensive brand in India, not even an expensive one. It’s not that they all have to sit in a room and agree on how my pin will connect to you, right? Standards like USB pins, all those device driver standards: all these things were built as interoperability.

What happens when interoperability is built? A system can now be unbundled and independently developed by a new ecosystem, as in the car. A car is an assembly. The car company is primarily looking at the main engine and chassis; nobody there builds the dashboard or steering wheel, they are all built by different people, right?

So, when you unbundle it, each component can have its own ecosystem: a bunch of people making mice, a bunch of people making printers. But they all come together, because specifications necessitate that they come together. It’s not new; it’s always been there.

That’s the only way we survive as humanity. For anything at human scale and sustainable, you need interoperability as a fundamental element. With that in place, why is it important now, in the digital world?

Why is it important? What sadly happened post-internet is that much of the advancement, cloud computing and all of that, was private goods. And when companies create private goods, they obviously want to create a moat. You know, if I were running a private company, I would want to create a moat. Nothing wrong with that; that’s the right thing to do.

In fact, if I were a private company, I would dig the moat deeper and build a bigger one, why not? But when you create the moat, something else happens.

What happens is that these moats become silos, sort of disconnected walled gardens. That works at some scale, beyond which it stops working. If a platform becomes really large, as in the US or Europe, drivers start complaining against a large ride-hailing company, or sellers against a large e-commerce company. They say: hey, anti-monopoly, you're squeezing small companies, and my products get downgraded in the marketplace unless I pay a premium. All the drama starts happening, right?

So eventually these walled-garden, closed-loop platforms either remain small, without enough value (they seem big because the VC-driven valuation is big, but the volumes are really not that big), or, if they become big, all the anti-competitive discussions come up, right?

So we believe interoperability, as a means to create fair market competition, a new innovation playground, and thus choice and options for people, is a very interesting possibility. Even beyond that, if you want anything at 8-billion-people scale, you have to deal with interoperability. You can't have a one-platform story; you have to have a multi-platform story. But if you have a multi-platform siloed story, then individuals and SMEs get squeezed in the deal, because they are not portable and interoperable across platforms.

So if you want anything at internet or telephony scale across the world, where anybody can exchange voice or anybody can send money, then with appropriate rules that's no big deal. You can layer regulations and rules on top of that.

But the infrastructure itself should be universal and interoperable across the world. I think it's somewhat a no-brainer, though maybe not to everyone.

If you are a private company wanting to build a full platform moat, the protocol story or the interoperability story might look like it is against you.

Why should I interoperate? I'll build my own platform and keep it. But if you want universality, you want many, many such platforms, and they should be able to exchange data, money, trade, and commerce: interoperate between them, right?

When you look at electric charging stations, in most countries none of them are interoperable. It's like saying it's your car, but to fill up you can only go to the car company's own gas station.

When we look back, that would never have worked at human scale.

So obviously we unbundled and created interoperability between traditional cars and gas stations. But somehow with electric cars we have proprietary charging going on. I suppose that is the first phase.

The first phase will be a platform phase, and the second phase will always be a network phase, because there is no other way to scale to humanity scale.

Satwik Mishra:
So it's not so much an anti-platform play as a new way for platforms to speak: getting platforms to talk to each other and create more scale.

Dr. Pramod Varma:
It's not only platforms talking to each other; it creates independent, innovative ecosystems. If you create a charging standard for electric bikes and cars, a new slew of innovators will come about to create all kinds of ATM-like charging machines: small chargers in your apartment, big chargers on the highway. A whole new set of innovation will kick in.

But if it is a fully vertical, closed-loop play, then the car company has to innovate everything all the way down. Every car company saying "my charger, my pin, my things" makes no sense. It's like your mouse and your laptop each having custom pins instead of USB Type-C; you would have gone bonkers.

You would say, what the hell is this? Yet we seem to accept it when the innovation is new, as a phase-one strategy. But it's never a scale strategy.

Satwik Mishra:
What would be the phase-two strategy for traditional industry? You're speaking about new entrepreneurship coming up with open transaction networks; there will be more opportunities as we unbundle and create more spaces for innovation.

What do you see as the role of industry in open transaction networks?

Dr. Pramod Varma:
In fact, open transaction network doesn't mean everything is free or open source. Nothing like that. It simply means: think HTTP, think GSM, think SMTP, think payment protocols. The nodes on the network can all be private; it could be completely private innovation.

Now, there might be some open-source nodes too, like Firefox versus Chrome, because some societies might need something free or government-provided due to extreme poverty, or somebody else will support them.

But for the majority of nodes on an open transaction network: the network is open, while the nodes are all private.

OK, remember: without industry, there is no OTN.

Without private industry and private innovation, don't even bother attempting to create protocols and OTNs.

Satwik Mishra:
That's true, and this is what I've been arguing most of the time: the actual win for open transaction networks and open protocols will come when industry joins in and sees merit in it. That's the only way it can scale going ahead.

Dr. Pramod Varma:
We have seen that in the automobile industry, and we have seen that in the healthcare industry. It's not as if the hospital builds every device itself.

All of these have been unbundled through independent specifications. Eventually industry steps up and says, enough of this custom game; let's come together and do something so that universality happens. It has to happen.

Satwik Mishra:
Now, I know there are many pilots ongoing across the world. In our paper, we've exhibited pilots from Brazil, the Gambia and India. But it's happening in the developed world too, not just in developing and low-income countries; in Amsterdam as well.

Paint us a picture of what you envisage open transaction networks doing in different economies: developed, middle-income, and low-income.

Would it play out differently? Would it serve the same purpose? How would it help communities and industry in different economies?

Dr. Pramod Varma:
In fact, the idea of an open transaction network has nothing to do with developing nations versus developed nations. Nothing. It has relevance to every digital economy.

Now, which use case and which industry is most attractive depends on the context of that region or country, right?

Take the energy transition: Europe is going through it. Everybody there is going electric, because European countries are smaller and more compact, which makes electric cars much more practical.

In the US it might still be a bit hard: not enough charging stations and long distances, so you might still have range anxiety. Because of that, in the European nations we see a lot more conversations about interoperability in charging infrastructure, and interoperability in new-age energy trade. Energy production is going decentralized, behind the grid, with solar and all kinds of sources, and energy storage is going decentralized too, with batteries and battery banks.

So gone are the days when energy was centrally produced, typically from one dominant source, and then distributed one-way to homes.

Today, it's being produced, stored, and exchanged in a two-way situation. When that happens, you need a lot more programmability and interoperability among battery systems, solar systems, and smart-grid systems, because you can't just pump all the energy into the grid; the grid might not take it and might collapse.

So you need to readjust the load: reprogram, pause, bring load back at peak time. With price differentiation you have leverage you can play with; you can sell at peak time versus nighttime, right?
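That price leverage can be sketched as a toy schedule (prices and thresholds invented for illustration; not real market data): charge the battery in cheap hours, sell back in expensive peak hours, pause otherwise.

```python
# Hypothetical hourly prices (currency units per kWh); invented for illustration.
prices = {"night": 2.0, "morning": 5.0, "peak": 9.0, "evening": 6.0}

def plan(prices: dict[str, float], buy_below: float, sell_above: float) -> dict[str, str]:
    """Decide, per period, whether to charge, discharge, or pause."""
    actions = {}
    for period, price in prices.items():
        if price <= buy_below:
            actions[period] = "charge"      # store cheap energy
        elif price >= sell_above:
            actions[period] = "discharge"   # sell at peak price
        else:
            actions[period] = "pause"       # avoid stressing the grid
    return actions

schedule = plan(prices, buy_below=3.0, sell_above=8.0)
print(schedule)
# -> {'night': 'charge', 'morning': 'pause', 'peak': 'discharge', 'evening': 'pause'}
```

Interoperability standards are what let a battery from one vendor, a meter from another, and the grid operator all run this kind of logic against the same price signals.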

All of this is actually coming through right now. If you look at the IEA's energy papers, a brilliant set of papers they produce, it's very clear: the composability, programmability, and dynamism of the whole energy world is changing. It is no longer a linear, one-sided energy distribution problem.

Now people are asking the same question: what is the interoperability standard for smart meters, smart grids, programmable exchange and trade, and so on? Somebody has to sit together and figure these things out.

Otherwise, it will create a bunch of silos, which is not so bad in the early part of innovation, to prove the innovation out. But it will never scale the innovation further, right?

So we are seeing developed nations focusing on newer problems such as energy, climate resilience, sustainability, the circular economy, and sustainable agriculture. Even in the US we are having this discussion with Stanford and Berkeley on energy and sustainability, and in Portland we have a pilot that's supposed to kick in on a climate-resilient township.

How do you reduce coordination cost at the time of a disaster? Because during a disaster everything is on random WhatsApp groups and you don't know what to do. So how do you create a coordination fabric and reduce the cost of coordination when a disaster happens? That too can be approached with protocol thinking, because everything is a network of providers and consumers.

So you have to connect them. But in the US, problems like e-commerce, food delivery, and ride-hailing are not that compelling. Drivers and restaurants might be complaining, but it hasn't yet reached the point where people say we need an alternate model. On energy and newer problems, though, the US is thinking very keenly, and on AI they keep asking whether there is an interoperability story.

In Europe, you see both new and old problems, because much of Europe has the Digital Markets Act and other interoperability pushes. Sometimes labour laws push these platforms out: in Germany and Copenhagen, Uber and other such companies have had a tough time surviving because of regulations, labour laws, and other issues in Europe.

So they are also looking at alternatives. The Amsterdam project, for example, was an open mobility and open commerce infrastructure. It was not an energy or climate problem; it was open mobility, ride-hailing in Amsterdam.

They wanted a more networked approach so that multiple ride-hailing platforms, including public transport platforms, can join in. Then the consumer can choose between public transport, private transport, last-mile options, electric bikes, and all the good stuff, because there's a lot of that going on in Europe, right?

In the Global South, it's completely fragmented, because the Global South is not yet digitized. Even the largest ride-hailing platform in India does fewer transactions daily, across all of India, than a single city's metro in Delhi or Bangalore.

India is so large and diverse that these platforms have only reached single-digit penetration. All of them, including the largest e-commerce platform, have only single-digit penetration in D2C commerce.

So India is not a monopoly problem; it's a fragmentation problem. Much of the economy sits in a high-information-asymmetry, high-friction, high-cost, low-performing equilibrium.

So we are looking at a network approach to bring them all together into a unified and more formalized economy, so that everybody gets an advantage and every platform wins in this game. No platform will be a loser in India or Brazil when we build the network, because these are obvious large-country, under-penetrated problems. In the Global North we see a different set of conversations.

So networks are here to stay. Open standards, specifications, and protocols that interconnect many systems are the norm; we have had them for hundreds of years, and we are going to continue that. What differs is which use cases are really compelling in different countries.

Satwik Mishra:
But as these use cases and pilots go forward, another aspect we should discuss, and I'd love your views on it, is risk. What risks do you foresee in these ecosystems as they're piloted in different industries and geographies, with different economic conditions and incentives? How do we think about risk in this space?

What should we be mindful about as we go forward?

Dr. Pramod Varma:
Yeah. By the way, just for the audience: much of the protocol story is decentralized by design. There is no data centralization or flow centralization, so no individual institution sits in the middle controlling it. The protocol allows you and me to exchange money; that's about it, right?

Now, there might be a bank where my money is stored. That's fine; beyond that it's really decentralized. And when it comes to commerce or ride-hailing in India, for example, it's between the driver and the passenger, so it's no big deal. But there are two big risks we should always look at. One is business viability risk. For some use cases, you have to really analyze: what is that society?

What is the industry like in that society, in Brazil, India, the Gambia, Amsterdam, the US, or whichever country? In that environment, do we have enough private entrepreneurs who are ready to play the game?

As we said earlier, without private industry, there is no network play. A network is only meaningful when there's private entrepreneurial play going on.

Remember, it's nothing to do with government. So we have to analyze, one, the use case: what is the use case, and which is most attractive?

That means: what is the most value-unlocking, least-friction use case in that society? Are there entrepreneurs ready to play the game? Because it's like the internet: if there's a new playground, you need new players, and new players need infusions of VC funding. All of that has to happen.

If everybody is off building another LLM right now and there's not enough money left, then maybe there's a business risk here, right?

There may not be enough entrepreneurs to bootstrap, and networks need bootstrapping. Remember, a network is about two sides; it's a chicken-and-egg problem. Without supply, there's no demand; without demand, there's no supply. So we have to bootstrap these networks through use cases that unlock the most value with the least friction for society.
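The chicken-and-egg point can be made concrete with a toy model (my construction, not from the dialogue): if a two-sided network's value scales with the product of the two sides, it is zero whenever either side is zero, which is exactly why bootstrapping both sides matters.

```python
def network_value(suppliers: int, consumers: int, value_per_match: float = 1.0) -> float:
    """Toy two-sided value model: every supplier-consumer pair is a potential match."""
    return suppliers * consumers * value_per_match

# Without supply there is no demand-side value, and vice versa:
print(network_value(0, 1000))   # -> 0.0
print(network_value(1000, 0))   # -> 0.0

# Seeding both sides even modestly unlocks value:
print(network_value(50, 200))   # -> 10000.0
```

The model is deliberately crude (real match rates are far below all-pairs), but it captures why a use case must attract supply and demand simultaneously.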

We need entrepreneurs there. If you don't get the context or environment right, there's a very high risk of a failed network, because you went after a problem statement for which neither the society nor the entrepreneurs were ready. We have to get society, entrepreneurs, and funders all ready for it.

So that is very important. The second risk is contract and consumer protection risk. When you unbundle through a network and then rebundle dynamically through a programmable rebundling layer, you might be buying something on app one from a seller on platform two, and platform two might be using platform three to transport that product, and everything arrives nicely.

But what if it doesn't arrive? How does consumer protection work? How does contract adherence work? If everything ends up in court, we have a mess on our hands, right?

So we have to think through a contract and trust layer; basically, design trust into the network. Trust design includes participation agreements, SLAs, compliance and contract adherence, consumer protection, grievance handling, and tracking grievances to make sure they are not going through the roof, because then the network is going to collapse.
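One way to picture the grievance-tracking point (a minimal sketch with invented node names and thresholds, not a prescription from the dialogue): the network's trust layer can flag any participant whose grievance rate crosses an agreed limit, before complaints go through the roof.

```python
def flag_nodes(stats: dict[str, tuple[int, int]], max_rate: float = 0.05) -> list[str]:
    """stats maps node -> (grievances, transactions); flag nodes over the agreed rate."""
    flagged = []
    for node, (grievances, transactions) in stats.items():
        if transactions and grievances / transactions > max_rate:
            flagged.append(node)
    return flagged

# Hypothetical network participants and counts, for illustration only.
stats = {
    "seller_app_1": (2, 1000),   # 0.2% grievance rate: within the agreed SLA
    "logistics_2": (80, 900),    # ~8.9%: breaches the agreed limit
}
print(flag_nodes(stats))  # -> ['logistics_2']
```

In practice the threshold itself would be one of the governance rules the participants' alliance agrees on and evolves over time.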

So there exists a need for a neutral facilitator. The neutral facilitator doesn't have to be government; in fact, in my view it should not be government. It should be a market alliance, a GSMA-type market alliance, rising up and saying: we like the idea of the network.

New entrepreneurs are enthused about doing it, because now we can move money across the world at scale and all of that. But for us all to talk together, like the GSM players did, we need a GSMA, so we can say: guys, the tech has to be trustworthy, and compliance and complaints have to be handled.

And money contracts: when I call from my Airtel phone here to an AT&T or Verizon phone, money is also moving between the companies, right?

What if someone stops paying? Who takes the risk? The network will start collapsing if people start cheating each other.

So you need all this: a neutral party to make sure there is a healthy, trustworthy network. Two parts, then: economic and business viability, and trust and consumer protection, the trust of the whole network. These are the biggest risks of any network. If you don't get them right, you'll conclude the idea of the network doesn't work. No: the idea of the network always works.

The use case may not work if you don't get the context right, or if the way you went about it, in terms of governance, grievance handling, and consumer protection, wasn't thought through well.

Then a massive number of consumer complaints pile up, and when complaints rise, the network automatically loses trust and collapses on itself.

Satwik Mishra:
This sets up a question I've been meaning to ask you for about a month and a half, since the paper released; we just haven't had the time to speak. You were kind enough to review the paper and give your thoughts, and as we explain the technical aspects of it and this new way of thinking, you have this quote, which I'm paraphrasing, that says something to the effect of: it's not the technical interoperability that is the challenge, it's the structural trust, creating trust in this network, that is the challenge, as you've spoken about earlier.

So a network play will require multiple parties to come together for every unique transaction. With these pilots coming about across the world, what are some of the lessons in building structural trust in open transaction networks? And given your experience with other ecosystems developed over the last 10-15 years, how important is trust in technology for open transaction networks going forward?

Dr. Pramod Varma:
Yeah, it's very clear that anyone wanting to set up an open network, for commerce, mobility, energy, the circular economy, education, skilling and jobs, whatever network you want to create, and we have covered some of those examples in your paper, needs, as I said earlier, two parts.

One: a very systematic analysis of use-case readiness, to understand whether society is really ready and whether entrepreneurs are really ready to play this game.

Otherwise, take a different use case, because not all use cases may be ready, for many other reasons that have nothing to do with protocol or technology, right?

Maybe it's just not mature enough. With healthcare, for example, we see some habits in society are so strong that people are not ready to migrate to a digital doctor: I would rather go to the physical doctor near me, right?

Things like that; it's just cultural habit and so on. So whoever is attempting the network must have a systematic, objective way to answer questions of use-case readiness, consumer readiness, and entrepreneurial readiness. That's point number one.

If you don't do that, you'll walk in saying: this intuitively makes sense to me, I want to do the same thing in the Gambia. But the Gambia may not be Amsterdam. There's a difference: different culture, different habits, different entrepreneurial energy, different funding structures, different people.

So you have to analyze, not copy-paste. You have to do the analysis in that context. That's point number one.

Point number two, and that's our learning: you must incubate some sort of by-the-entrepreneurs, for-the-entrepreneurs coalition, cooperative, digital collective, alliance, whatever you want to call it. You must create this and ideally keep it not-for-profit and neutral, because the playground maker can't also be a player; that becomes very messy.

So create a playground maker whose membership is all these entrepreneurs, ideally kept not-for-profit, so that you have a round table where you can actually discuss trust and contract issues and pricing issues. When you exchange, there might be commission structures and commission money that everybody has agreed to; those conversations have to happen in a neutral, facilitating fashion, not a controlling fashion.

It can't be a for-profit entity saying "you shall do it", because then the network falls apart; then it's a platform play. Then you should play the platform game, the closed-loop platform game, not an open network game.

So our learning is that you need to incubate such a facilitating entity very early, and create the environment for conversation and co-creation of new governance rules. Remember, governance rules can't all be created upfront.

Rules have to evolve as the network evolves. So you need a mechanism, a facilitating environment, that allows such conversation and rulemaking to happen in a participatory way, so that all the nodes of the network have a voice and can participate.

Create an open and transparent mechanism for such conversation to happen so that trust is established and so on.

At the protocol and technology level, a lot of this is taken care of: digital signatures, cryptographic contracts, cryptographic agreements. That is not the worry.

But technology alone won't solve it. Technology plus the governance conversation will make it a much more trustworthy network.

So when you instantiate a network, you need the facilitating organization to be instantiated too. That creates the environment for co-creation and open governance to happen. This is a learning; it's very hard to skip. A by-the-industry, for-the-industry facilitating organization ought to exist, ideally not-for-profit and neutral. Without it, things get messy.

Satwik Mishra:
Towards the end of our paper, we end on sustenance: how do we create sustainability in these open transaction networks going forward, given they're just starting off?

Do you think the lessons will come in over time as a mixed set from different geographies, on more of a plug-and-play basis? Or, across these diverse pilots, will we be able to create some common standards and common schemes for trust and sustainability in transaction networks?

Dr. Pramod Varma:
Excellent question.

There will be common patterns, but contextualization of those patterns will also be necessary. Common patterns will emerge, and it would be wonderful if multiple networks could come together and publish their learnings and their governance structures.

For their own transparency they should publish anyway, to exchange knowledge. But that should not become a controlling or regulating model.

It should be purely: I don't need your blessing, you don't need mine, but I will share my learnings and you share yours. I'm old enough to have seen this before.

There's no way we won't see patterns; patterns will emerge. But contextualization must happen, because different countries differ. In Singapore, for example, nobody even doubts that if you sign a contract with me, you will honour it.

But in India or another developing nation, signing a contract is just signing a contract; after that, all kinds of drama happens, right?

So we have to put an additional layer of governance on top so that things don't end up in court, because the courts are also clogged in India. Different contexts have different ways to govern; let them be, let them contextualize it.

But common patterns will emerge, and ideally they must be shared. That's the only way we are going to learn from each other.

Satwik Mishra:
We have managed to do something which rarely happens. We’ve managed to speak for about 50 minutes without bringing up AI.

But I can't let you go without this question. What's happening is fascinating, and as somebody who's been following it for the last 10-12 years, I'm sure you have even more context on it.

We are truly in a unique moment right now with the development of AI and what it's offering. What do you see as its role in open transaction networks?

What do you see as the productivity frontier AI could bring to this new play we're talking about?

Dr. Pramod Varma:
Excellent question.

We have been talking about AI and public infrastructure, with OTNs, open transaction networks, as public infrastructure. I feel, and a lot of us feel, that AI is a massive catalyst to boost many things the network is trying to do. We keep saying it's DPI to the power of AI: when AI and DPI come together, when OTN and AI come together, it's not 1+1, it's not an additive effect anymore, it's an exponential effect.

That's because the consumer interface on these networks can now be supercharged through AI. I can have a natural-language bot doing the ordering. People can speak broken Hindi or Swahili, saying: yeah, I want to order this, I'm not sure.

Can you first search the price for me? What is it? I'm interested in buying this, all in broken, grammatically incorrect language, and suddenly AI is able to make sense of it and provide a new way of interacting with computers beyond keyboard and touch. The keyboard itself was completely geeky; frankly, most people could never really use it.

Touch at least allowed a lot more humans to use computers, including my mother. But in India, when it comes to transactions, only about 100 million people actually do self-service transactions, at best maybe 200 million. We have 1.4 billion people. Think about it: that's still only around 20 percent, right?

What about the other 80% of people? How do they fill in a form? Form filling, although digital, is very archaic.

A text box here, a list box there, select your country, radio buttons, a text area to fill up. For you and me it's natural to just fill it in, because I sort of know English.

Most people can't fill this stuff in, so they'll always need assistance: please help me fill in a bank form or a loan form.

It's not at all easy; it's so unintuitive. It's very silly that we create all these forms. AI can dramatically enable voice-based, next-generation human-computer interaction. At the same time, on the provider side, look at SMEs, small and medium businesses.

Why do we struggle to bring small and medium businesses and nano-entrepreneurs in India into the digital economy? Large companies have cataloguing staff.

They can digitize their catalogue and their inventory; they can outsource IT and create an API endpoint, all the stuff people casually say they should do.

But what happens to the one-person shop? We have millions of them: literally husband-and-wife shops, father-and-son, mother-and-daughter.

They have no time to catalogue properly; they don't even have the language to describe a product in a catalogue. We think that using OpenAI's models or Llama 3, or any of these multimodal models, whether voice, visual, or language-based, LLMs or whatever they're called now, you can literally shoot your inventory on camera and auto-catalogue it.

So I can create productivity tools and advanced tool sets, and supercharge the ability of people and small and medium businesses to join the digital economy and transact on an open transaction network. To get them transacting, we need to make coming onto the grid easy and transacting easy. A shopkeeper should be able to say: oh, I received a new order through the network; yes, I accept it. Then the system asks: but when can you ship?

Imagine the computer asking me that, and I can just say: I can ship the day after tomorrow, not a problem. No need to click a mouse, fill in some form, accept the order, nothing.

I can actually talk to the computer and accept an order as a shopkeeper while I'm still serving my customer, right? It's amazing stuff you can do with LLMs.

So AI can dramatically improve access, convenience, and productivity, and reduce cost structures in transaction networks.

So we are very excited about that portion: AI supercharging OTNs. On the other hand, there is also the dream of AI agents interacting with agents: one day my agent will schedule an appointment with a doctor, or book a vacation. To book a vacation I have to search and book a cab in the new city.

I have to book hotels and book flights. Imagine agents doing all of this. How do agents interoperate?

How do agents contract? How do agents create micro-contracts that are legally valid, so that what you commit to, say a room booking, you have to honour, right?

So you need cryptographic techniques and interoperability guidelines to take care of agent-to-agent trust, agent-to-agent contracting, agent-to-agent discovery, and transactability between agents.
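A minimal sketch of the cryptographic commitment idea (my illustration, not the OTN design): here an HMAC over a shared key stands in for a real digital signature; a production agent network would use public-key signatures, but the property shown is the same, namely that a signed micro-contract cannot be quietly altered after the fact.

```python
import hashlib
import hmac
import json

def sign_contract(contract: dict, key: bytes) -> str:
    """An agent commits to a micro-contract by signing its canonical form."""
    canonical = json.dumps(contract, sort_keys=True).encode("utf-8")
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_contract(contract: dict, signature: str, key: bytes) -> bool:
    """The counterparty (or an auditor) checks the commitment was not altered."""
    return hmac.compare_digest(sign_contract(contract, key), signature)

key = b"shared-demo-key"  # illustrative only; real agents would not share a key
booking = {"agent": "travel-bot", "item": "hotel room", "nights": 2, "price": 120}
sig = sign_contract(booking, key)

print(verify_contract(booking, sig, key))   # -> True
tampered = {**booking, "price": 60}         # an agent tries to walk back its commitment
print(verify_contract(tampered, sig, key))  # -> False
```

This is the kind of primitive an agent-to-agent interoperability guideline would standardize so that commitments hold across independently built agents.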

So we also think OTNs could be a way to supercharge a human-scale agent network. That is part two of the story.

Part one is simply OTNs getting supercharged through AI techniques: language, voice, cataloguing, and all the good stuff.

Part two is OTNs enabling the dream of agent-driven networks to come about.

Satwik Mishra:
Dr. Varma, thank you so much, as always. There are far too many topics and too little time to discuss them with you, but hopefully we will continue this conversation through our work on open transaction networks.

It was an absolute pleasure to have you here and to learn from your insights.

Dr. Pramod Varma:
Thank you for having me. I very much look forward to your support and to continuing with newer publications on OTN.

The idea of networks is here to stay: different contexts, different use cases, but networks are here to stay. Interoperability has always existed and is here to stay. So good times ahead, I think, if we put our heads together.

Satwik Mishra:
Good times ahead. Thank you.