Welcome to our newest season of HumAIn podcast in 2021. HumAIn is your first look at the startups and industry titans that are leading and disrupting ML and AI, data science, developer tools, and technical education. I am your host David Yakobovitch, and this is HumAIn. If you like this episode, remember to subscribe and leave a review. Now on to our show.
Listeners, welcome back to the HumAIn podcast. As we’re diving deeper into 2021, there’s one thing that’s been on everyone’s mind and it’s not the pandemic. It’s the rise of AI. It’s the great divide that we’ve been seeing in the last decade, the splintering of the internet, the splintering of AI and research and science, and whether technology is being used for the greater good or for alternative purposes.
Today’s guest on our show is Steven Umbrello, who is a Managing Director for the Institute for Ethics and Emerging Technologies in the European Union. Steven’s work focuses on ethics and design thinking around building AI systems, and how policy can shape the future of these autonomous systems that many of us think about every day. Steven, thank you so much for joining us on the show.
Steven Umbrello
No problem. Happy to be here.
David Yakobovitch
I am really pleased to have this conversation because I’ve had many conversations with colleagues out East, and by East I mean Singapore, Taiwan, China, Korea, and other countries and island nations. And also with colleagues out West, in the United States, in Canada, in the European Union. And there seem to be diverging thoughts on where we’re going to be going with these systems.
Let’s just start with some background from you: what type of work do you do with autonomous systems and AI systems, and where are we today with the work that you’re doing?
Steven Umbrello
You could say some of my work could be described as being somewhat eclectic. However, it all sits within the umbrella domain of engineering ethics. I’m trying to provide tools, and clarification of what would normally be abstract philosophical concepts, like human values, to engineers in a way that lets them implement those more abstract values, or translate them into design requirements.
And more specifically, regardless of which technology I was looking at, whether it be advanced nanotechnology, atomically precise manufacturing, Industry 4.0, or artificial intelligence, my work has focused on a particular approach that has garnered a lot more interest in the last two decades: value sensitive design, simply a principled design approach for how we can incorporate human values, which are often abstract, into technological design.
David Yakobovitch
Human values. I’m someone who has certain values and you, Steven, have certain values. Are our values all rule-based engines? Are they decision trees, or can we quantify them?
Steven Umbrello
Quantify? Probably not. This is actually one of the basic precepts of value sensitive design: it doesn’t really affirm a rule-based or universalist conception, an absolute understanding, of human values. In many ways it’s culturally, or socio-culturally, sensitive. It’s fundamentally predicated on the fact that technology design has to be situated in the context of design, use, and deployment.
So part of value sensitive design, particularly one of the initial phases, is what’s understood as conceptual investigation, because value sensitive design is fundamentally broken up into three parts and is often described as a tripartite methodology. In that phase we can begin with some values, perhaps a priori: designers can look at the philosophical literature on a specific technology, if it exists, look at some of the values that have been implicated, and come up with some working definitions of those values, and those definitions remain working definitions throughout the design phase of any given technology.
It’s informed, it’s reflexive. And it’s iterative in the sense that once we begin empirical investigations, whether that’s bringing in the stakeholders, and of course, stakeholders are different depending on the social, cultural context in which those investigations are being carried out, then we can start to revise those working definitions of those values.
And then the question is how we can translate those values into design requirements, using the socio-cultural norms of the place where we’re doing those design programs. The difficulty with AI, and with many technologies in a globalized world, is that we can develop a technology here in country X, but that technology has cross-cultural, cross-domain, cross-border impacts. So it’s about trying to incorporate different understandings of values from across the globe into a single technology. These are some of the difficulties that designers are facing right now.
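To make that translation step a bit more concrete for engineers, here is a minimal Python sketch of the kind of values hierarchy that value sensitive design work often describes, moving from an abstract value through context-specific norms down to checkable design requirements. The class names, the example value of privacy, and the requirements are hypothetical illustrations, not drawn from Steven’s own work; the point is only that the working definitions stay revisable as stakeholder input comes in.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Value:
    """An abstract human value with a *working* definition that stays revisable."""
    name: str
    working_definition: str

@dataclass
class Norm:
    """A context-dependent norm that interprets a value for a socio-cultural setting."""
    context: str      # e.g. the region or community where the system is deployed
    statement: str

@dataclass
class DesignRequirement:
    """A concrete, checkable requirement that engineers can implement and test."""
    description: str

@dataclass
class ValueTranslation:
    """One branch of a values hierarchy: value -> norms -> design requirements."""
    value: Value
    norms: List[Norm] = field(default_factory=list)
    requirements: List[DesignRequirement] = field(default_factory=list)

    def revise_definition(self, new_definition: str) -> None:
        # Empirical investigations (stakeholder interviews, surveys) feed back here,
        # so the conceptual work stays iterative rather than fixed a priori.
        self.value.working_definition = new_definition

# Hypothetical example: the value of privacy in a home-automation product.
privacy = ValueTranslation(
    value=Value("privacy", "control over personal information"),
    norms=[Norm("EU deployment", "sensor data stays on the local device by default")],
    requirements=[DesignRequirement("process audio locally; no raw audio leaves the device")],
)

# After stakeholder input from a different deployment context, the working
# definition and norms are revised rather than treated as universal.
privacy.revise_definition("freedom from unwanted observation and inference")
privacy.norms.append(Norm("multi-tenant housing", "shared-space sensing requires opt-in"))
```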
David Yakobovitch
That’s right. And as startups go global, there have been many stories over the years of startups whose logo could mean delicious in the United States and something explicit in Japan, and it gets lost in translation across borders, languages, and cultures. So there’s definitely a case for value sensitive design.
The startups that have been most successful at scaling software across borders have hired locals. They’ve brought in people who hold those values and cultural norms, who understand the local dependencies of a geography, and who can then implement the same technology across those borders.
And that has led to this new Internet of Things, our Industry 4.0, where startups are global, where we’re in a distributed world, where a team can follow the sun model, with the United States, the European Union, and Asia always working and never sleeping. But there are always humans involved. It’s not completely automated, at least not yet.
Steven Umbrello
So that’s actually a pretty good example, within the value sensitive design paradigm, of this philosophy, this approach, of engaging or enrolling stakeholders. In that case they would be direct stakeholders, because we’re drawing on a community, a population, that would be directly working on the design of a technology that is fundamentally situated in a cultural or social context, to bring some of those values, or value understandings, directly into the design.
David Yakobovitch
And it’s fascinating, because when I talk with a lot of startups, everyone thinks they’re inventing something incredibly new, but in general, it’s the same software with a new programming language, a new technology with a unique business model and a unique culture. And today, it’s still very human-focused, but we’re very rapidly seeing that everything’s being automated or augmented in a certain capacity. And most of it has been with software and the rise of multi-cloud.
But today it also includes the rise of physical devices, of hardware. These can be small things that we developers know as our Raspberry Pis, little computers on which we try to run machine learning to control the humidifiers in our apartments or our lofts, in some good use cases. But then we see even bigger use cases arising. The classic story is the company called Boston Dynamics that many of us have heard about in the last few years. It seems as if only yesterday their Spot robot was barely able to get up on its four legs.
And then there were eight of them pulling a giant ten-wheeler truck. And then the newer version looks like a human in a space suit that can actually catch boxes as if it’s doing CrossFit, and only in the last couple of years, better than I can. Better than you can. I love CrossFit, but I get tired. I don’t have that much juice. And it’s gone even further: at the end of 2020, going into 2021, Boston Dynamics revealed another new video of a dancing robot, and this dancing robot can really dance. As in, the next season of Dancing with the Stars should have the dancing robot on it.
Steven Umbrello
I had to look at it a few times to determine whether I was watching a CGI video rather than an actual robot doing it, because the movement seemed so natural, so fluid.
David Yakobovitch
I literally thought that, too. I am a terrible dancer overall. I took some classes in salsa and merengue back in college, and some in New York, but I just cannot do the moves. Of course, everything takes practice. It’s human. If I continued to commit to it and practice, I would become more proficient and better at the moves. I was so surprised that here in 2021, the dancing was, I wouldn’t say flawless, but it was fantastic. It looked like CGI.
Steven Umbrello
Definitely. You brought up a pretty good point, and it merits hitting on. You were talking about these technologies or startups that seem to market, at least, or maybe even believe, that what they’re offering or developing is revolutionary or something entirely new. But your point is prescient in the sense that you essentially reiterated a fundamental precept of the philosophy of technology, particularly its most recent turn, the design turn in the philosophy of technology, or engineering ethics, whatever you want to call it. And that’s that technologies don’t come from nothing.
They’re all built on the design histories of previous technologies. And that’s because society and technology co-construct one another in many ways. Technology is not purely deterministic, nor is society purely constructive, nor is technology purely instrumental, just a neutral tool that doesn’t embody any values whatsoever. That was illustrated really nicely by Langdon Winner in 1980 in his famous essay, “Do Artifacts Have Politics?” He showed how the New York bridges were purposefully designed too low to allow buses from low-income neighborhoods to pass under them and get to the Long Island beaches that their designer really, really liked.
That was an example of racist design. Those bridges, which are a relatively simple technology, embodied the strong social value that he held. But that has changed over time. We see how values change over time, and how vehicles have now become more widely accessible, even to the low-income neighborhoods whose buses could not pass underneath those bridges.
So we can see how technologies embody certain values, and how those values also change over time. You brought up that really good example: whether or not it’s a new programming language, we’re still working within similar bounds, on design histories that, in a way, softly determine what comes after.
And that really is important, because it means that the decisions we make today as engineers, designers, and philosophers have a real, substantive impact on the future. So a fundamental precept of the philosophy or ethics of engineering is that engineers have to take responsibility for the responsibility of others. That means designing into the future, multi-generational design.
David Yakobovitch
This multi-generational design is at the front of everyone’s mind in the United States. At the beginning of 2021 we had a historic moment that hadn’t happened since, supposedly, the War of 1812: the Capitol was broken into and stormed by supporters of then-president Donald J. Trump. And what was so interesting about this is that technology enabled this opportunity for humans with their values to come together, make a decision, and make a choice based on human interaction, using platforms like Facebook and Google and Twitter, and a lot of these social networks that, at the time, claimed they were pro free speech, pro-democracy, and pro-communication.
And in 2021 a lot is going to unfold about where that line is drawn and where rights and policies are held. Are they with each individual human? Are they with the government? Are they with private entities? There’s so much to unpack there, and it has triggered an additional splintering of technology.
Steven Umbrello
Definitely. We have to try to avoid the dichotomy of black and white, good and evil, when it comes to this, because there are too many issues at play, too many agents at play. We have the dirtying of too many hands when we’re talking about responsibility. I’m not sure we should get too deep into the topic of the Twitter ban, where Facebook, Google, and so many other platforms have somewhat followed suit. The underlying motive in many of these cases is protecting their bottom line, out of fear of some sort of social pushback if they don’t follow suit with their larger competitors or similar social networks.
But what it does show is where power lies right now. Regardless of what you think about whether the ex-president deserved to be banned, and regardless of the rationale that social media companies like Twitter gave for doing so, it shows how much power these companies have over social discourse.
They’ve shown exactly that. As you mentioned, the ability for these people to come together because of a common set of values, and then take action in the real world. They realize there is that power that people have on their platforms, but they also have the power to determine which values the people coming together on them can hold.
They can easily remove that. And, if we don’t want to get too off topic, my view is that they can’t have their cake and eat it too, in the sense that they can’t say they’re not publishers but then act as censors the way a publisher would, because publishers are legally liable for the things they publish, and these platforms are not. In the United States, that was the debate over Section 230, and I don’t want to go too far beyond my specialty. But they can’t have the ability to not be regulated while also acting as censors in that way. In principle I don’t have a problem with them banning people or enforcing their own policy for who can and cannot speak on their networks, given the reach of their networks and the power that they have, but it also has to come with regulation. They can’t have it both ways.
David Yakobovitch
This is so timely and relevant, though, because a ban is similar to the embargoes on trade that we’ve seen against China and Iran and North Korea and other countries; bans are embargoes. And whether they cover physical goods or technology, they create the space for other conversations to start happening, other dichotomies, other directions that history may take that it might not have taken otherwise.
And so I find that really interesting, because of your work with ethics and design, and thinking about the future of technology around this physical hardware, like these Boston Dynamics robots. I love these robots. I’m actually the biggest proponent of them. I keep telling my dad that I can’t wait for the day when I can buy him a fully functioning robot that can assist him and cook and move and do all these great things. But then there seem to be conversations, which you and I have spoken about offline, about bans around a lot of these robotic devices. And I wonder whether people are thinking about them, whether they’re thinking the right way about how we ban what we ban, and whether people are having these deep conversations.
Steven Umbrello
Some of the issue with prohibiting or banning, or where the conflict lies, is a lack of nuance. Often, when a ban is brought up, it’s a reaction to what is marginal, to the extremes of a given technology, a knee-jerk reaction that amounts to a reductio ad absurdum, moving all the way to the extreme. What’s the danger of AI? We like to think of the Terminator, the killer robot. And I do agree with that in a certain sense: maybe a ban on certain kinds of autonomous weapon systems, killer robots, whatever you want to call them, makes sense.
But we have to be very careful about which kinds, because painting with too broad a brush may actually have the opposite effect, particularly when we want international, multilateral coordination and treaties, especially ones that the superpowers, the nation states actually developing these technologies, will sign on to. If it’s too broad a brush, too restrictive, not nuanced enough, then they may not sign on to it at all.
And then it will have the opposite effect. So, just like the discourse around the social media companies and these online platforms, you have to be able to walk the middle path, the gray. And that’s difficult, because you’re going to get tension on all sides. But it really is the only way forward, because there really is no black and white that we can easily choose from, and pretending there is will do more harm than good in the end.
David Yakobovitch
As a result of what 2020 was, where there’s been so much disruption to supply chains and social networks and human interaction, we’ve heard about different countries, even in Africa, who have shut down social media prior to elections to prevent uprisings.
All of this human tension does get triggered or channeled into other modes, and we were very fortunate in the scenario in the United States, at the Capitol, where tragically several lives were lost, but it could have been so much worse.
And this begs the question. Having seen riots and protests in Hong Kong, having seen riots and protests in New York City from the Black Lives Matter movement over the killing of George Floyd, and now with the conversations around what the history and future of the presidency hold, with the 25th Amendment in the United States and impeachment, it begs the question: if we had other modalities for enabling citizens, governments, or organizations, who would that be serving? And to what extent would that be beneficial? And that, namely, would be these autonomous weapons. So can you dive a little more philosophically into that, Steven, and tell us also what autonomous weapons are, because I’m not sure everyone knows what those are.
Steven Umbrello
That’s the question we actually have to start with: what are these weapons? And are they really one kind of thing? That’s where we can begin to break down the debate about whether we should ban them or not. When most people hear killer robots, autonomous weapon systems, they think Terminator.
And maybe to an extent that’s true. But that also highlights this point of nuance, of type rather than token. Technological innovations have always played a key role in military operations, and autonomous weapon systems, at least within the last few years, the last decade, have definitely been receiving asymmetric attention in both public and academic discussions.
And it’s for good reason. These systems are designed to carry out more and more tasks that were once in the domain of human operators, and questions regarding their autonomy and potential recalcitrance have sparked discussions that highlight a potential accountability gap between their use and who, if anyone, is to be held accountable if something goes wrong. At the international level, discussions about how to exercise control over the development and deployment of these autonomous military systems have been ongoing for a decade with very little consensus.
Even up until this year, with the Convention on Certain Conventional Weapons deliberating about prohibition and regulation, there has been very little consensus as to what constitutes a sufficient level of control. My research has essentially been about what it means to have control over these systems.
I shift away from the idea that you can attribute any type of accountability to the systems themselves; there is always going to be a human or a group of humans with whom responsibility and accountability lie. And that goes back to our earlier point: these things don’t develop ex nihilo, from nothing. There’s a design history, design decisions that have been made that have allowed a system to get to a certain point. Now, when you say autonomous weapon system, killer robot, Terminator, you have a specific image that comes to mind. That’s a ground-based system, maybe anthropomorphic, maybe something on treads or tracks, something like the Boston Dynamics robot, perhaps holding an assault rifle. And to an extent, there is a difficulty in having meaningful human control over those kinds of systems.
And that’s actually a good point, because I argue that you may be able to have meaningful human control over certain kinds of systems, in particular aerial assault systems, even though that’s not the first thing that comes to mind when you think of an autonomous weapon system. However, most warfare has been shifting, over the last hundred years and particularly since the Second World War, towards aerial warfare. And that’s because aerial warfare is a force multiplier. It’s fundamentally asymmetric in the capacity it offers military operations when assaulting ground forces.
And we’ve seen the shift and the increase in the use of drone warfare, particularly by the United States since the Obama administration; between the Bush administration and the Obama administration it was like an exponential increase. It makes sense why: there’s a host of factors, the cost compared with putting troops on the ground, or whatever it may be. So those types of systems feasibly could be, and probably will be, the first type of fully autonomous weapon system. And what do we mean by that? There are different levels of autonomy.
So when we say fully autonomous, the operative word is fully. That means there are different kinds, different levels of autonomy. Noel Sharkey, who has been a big voice, particularly in pushing for a ban, has distinguished between five levels of autonomy.
The basic first level, the lowest level of autonomy, would be when a human engages with the system, selects a target, and then initiates an attack. That’s like our common notion; that’s our autonomy, in a way. Whereas the highest level, level five autonomy, is where the program, regardless of its embodiment, whether it’s a ground vehicle, a ground robot, an aerial robot, or a naval robot, selects the target and initiates an attack without human involvement. So that would be the highest, level five.
Then you have the three levels in between. Level three, for example, would be where the program selects a target and the human must approve it before an attack; at a higher level, the program selects the target and the human has a restricted time to veto it, and without the veto the attack will be carried out. Those are not the more attractive options for militaries, because of the lag involved, and it doesn’t really provide that much of an advantage for the military to have a human directly in the loop, or on the loop, like this. So there’s this incentive towards full autonomy.
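For readers who want the five levels laid out at a glance, here is a small, purely illustrative Python enum paraphrasing the scale as described in this conversation; the wording of each level, and the intermediate level two in particular, is a summary rather than a quotation of Sharkey’s own formulation.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of weapon-system autonomy, paraphrasing the five-level scale discussed above."""
    HUMAN_SELECTS_AND_INITIATES = 1    # human engages, selects the target, initiates the attack
    PROGRAM_SUGGESTS_ALTERNATIVES = 2  # intermediate level; exact wording varies by source
    HUMAN_MUST_APPROVE = 3             # program selects a target, human must approve before attack
    HUMAN_TIMED_VETO = 4               # program selects; human has a restricted window to veto
    FULL_AUTONOMY = 5                  # program selects and attacks with no human involvement

def keeps_human_in_or_on_loop(level: AutonomyLevel) -> bool:
    """Illustrative check: does this level leave a human 'in' or 'on' the loop before an attack?"""
    return level < AutonomyLevel.FULL_AUTONOMY

if __name__ == "__main__":
    for level in AutonomyLevel:
        print(level.name, "-> human in/on the loop:", keeps_human_in_or_on_loop(level))
```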
But there’s also the fear of recalcitrance. If something goes wrong, who’s going to be held accountable, whether it’s a war crime or a lack of proportionality in a strike? And there’s this intuitive reaction to that level five autonomy: no, we can’t have this, because there’s definitely going to be an accountability gap where the program selects a target and initiates an attack without human involvement. That intuitive desire to ban that type of system makes sense. But what it lacks is a full understanding of the context in which these types of systems are being used. At a pragmatic level, these types of autonomous weapons systems are not really autonomous, even at level five.
And that’s because, as a United States Air Force document puts it, there’s no such thing as a fully autonomous weapons system, just as there’s no such thing as a fully autonomous soldier, marine, or airman, because there’s an institutional structure that constrains full autonomy in a certain sense.
That’s echoed in a paper I wrote in 2019 on operational controls: the military-industrial complex, to a certain degree, constrains this type of autonomy. So level five autonomy really is not problematic for aerial autonomous weapon systems. The military frames conventional air operations, for example, as a dynamic targeting process, and by framing the role of human decision-making within a distributed system, it outlines ways that policymakers and theorists can determine how military planning and operations actually function, and thus frame the use of autonomous weapons systems within those practices.
In characterizing the human role in military decision-making, you can outline at least a six-part pre-operation landscape for mission execution. There’s a pre-mission briefing that goes on: before a mission is actually undertaken, the air component is briefed with information on mission execution, which can often be highly detailed, including things like target, location, times, and types of munition, though it’s less detailed when we consider dynamic, in situ targeting. That information is distributed to various domain operations specialists, who then vet it and use it for more detailed planning. The executors of that mission, in the case of traditional airstrikes, for example, would be a fighter pilot in the cockpit of the plane.
And they have a lot of control over this, but they’re brought in and briefed on the mission details. They take the time to study the information provided while making any last-minute preparations for execution. And even in this pre-briefing package, there are a lot of components.
There’s the description of the target, is it a military compound for instance, consisting of all the available knowledge, like the target’s coordinates. There are the collateral damage estimations, which provide the operator with an estimation, never a certainty, of the expected collateral damage.
There’s a recommendation of the quantity, type, and mix of lethal and non-lethal weapons needed to achieve a desired effect, with the joint desired point of impact used as a standard to identify aim points. And then there are things like the weather forecasts: sometimes strikes take place at night, or it could be overcast, stormy, with heavy rainfall, and these things limit, you could say, standard visuals and limit the ability to confirm a target in the more intuitive sense.
And then that goes directly into the in situ operations, the actual operations of deployment in the field. Intelligence and data are required in order to sufficiently find the target for the operation. In an example case, a fixed target would be pre-programmed into the fighter jet’s navigation as well as into the payload’s navigational system, whereas dynamic targets would require data collection. Here the targeting involves arriving at a pre-programmed weapons envelope, the area in which the weapon is capable of effectively reaching the target.
This entire process is often displayed on the pilot’s heads-up display. Then there’s the fixing of the target: at this stage, once the operator has arrived within this weapons envelope, the onboard systems aim to positively identify the target that was confirmed previously during operational planning, to ensure that the payload delivery is compliant with the relevant military and legal norms.
Given that, in this case, the targets were pre-planned and confirmed, the operator usually doesn’t even engage in visual confirmation for positive target identification. Instead they rely on the onboard system and on the validation that took place during operational planning to ensure that the identified target is lawfully engaged.
Therefore, even in a fixed, pre-programmed case like this, the human pilot isn’t required to attend to anything else during this phase other than simply arriving within the weapons envelope. So we can already start to see how the role of the pilot could easily be substituted by a fully autonomous weapon system.
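As a way of visualizing that distributed structure, here is a toy Python sketch that lists the stages of a fixed, pre-planned targeting process and notes where the judgment for each stage sits. The stage names and attributions paraphrase the description above; they are illustrative, not official doctrine, and the only point is that in the fixed case the in-flight human judgment reduces to navigation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionStage:
    """One stage of the targeting process and where the judgment for it sits."""
    name: str
    decided_by: str    # e.g. "planning cell", "pilot in flight", "onboard system"
    pre_mission: bool  # True if it is settled before the aircraft ever takes off

# Hypothetical rendering of the staged process described above; stage names and
# attributions are paraphrases of the conversation, not an official doctrine list.
FIXED_TARGET_PROCESS: List[DecisionStage] = [
    DecisionStage("target description and coordinates", "planning cell", True),
    DecisionStage("collateral damage estimation", "planning cell", True),
    DecisionStage("weapon quantity, type and mix", "planning cell", True),
    DecisionStage("weather and visibility assessment", "planning cell", True),
    DecisionStage("legal and proportionality review", "planning cell", True),
    DecisionStage("arrive within the weapons envelope", "pilot in flight", False),
    DecisionStage("positive target identification", "onboard system", False),
]

def in_flight_human_judgments(process: List[DecisionStage]) -> List[str]:
    """Which stages rely on human judgment exercised during the flight itself?"""
    return [s.name for s in process if not s.pre_mission and s.decided_by == "pilot in flight"]

if __name__ == "__main__":
    # For the fixed, pre-planned case the answer is essentially just navigation,
    # which is the sense in which the pilot's role could be automated without
    # changing where the substantive decisions are made.
    print(in_flight_human_judgments(FIXED_TARGET_PROCESS))
```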
In that sense, you could have level five autonomy without really changing the relevant philosophical picture. So we can start, not to apologize for these systems, but to show the nuance in the debate: level five autonomy in and of itself is not the problematic point of interest; rather, it’s what type of system has this level five autonomy. And we can already start to see some of the issues on the ground, with the classic Terminator type that has that level five autonomy, because on the ground a lot of the relevant factors change.
A lot of that is highly dynamic. So it’s really a question of the context of use and of which types of systems, in embodying this level five autonomy, become philosophically problematic. And maybe even in those cases we could find ways in which they are not problematic, or certain situations in which they are simply not permitted to engage with a target without the relevant information.
But there’s a lot of that epistemic barrier even with human agents, human operators, on the ground. We can already see that if level five autonomy remains the nexus on which a ban is focused, on which it’s predicated, it becomes overly restrictive, and states developing these types of systems may not sign on to it, at which point anything goes. But this is not a blanket point: certain types, yes, certain types should be banned. The important thing is that these aerial types of weapons that embody level five autonomy are not problematic in the same way, and we have already said that there is this trend towards aerial warfare.
So these may be the types of systems that are preferred anyway, rather than complicating on-the-ground deployment and operations. Yes, there have been discussions of operators saying that they wouldn’t trust on-the-ground systems working alongside them. And fair enough; maybe even I, if I were an operator, wouldn’t trust having that type of system next to me. But I don’t see those systems as being overly problematic, because the trend of military operations doesn’t seem to be moving in that direction, despite Boston Dynamics. They’re more for support rather than direct engagement as the vanguard, the front line of special operations. I don’t really see them in the near future.
I see more of this trend towards aerial operations, because that’s the trend that’s been going on so far. Like I said with drones, we’ve already been using drones more than people. It’s simply removing that human operator. And we’ve got to stay away from these narratives that these systems are just going out there on their own, deciding who to drop bombs on.
That’s really not the case. That’s not the case now. And that’s setting aside the legality of what, for example, the United States has been doing with drone strikes; that’s beside the point here, if it’s illegal, it’s illegal, but that’s what they’re doing. How do actual air operations take place? There’s this entire institutional structure that comes into play before a strike legally takes place.
There are all these assessments. This is the nuance in the debate. If we really want to prohibit, if we want this ban on killer robots to be sufficiently salient, to really have teeth, then we have to be able to distinguish not the level of autonomy per se but the context of use and how military operations actually take place, rather than looking at these technologies as isolated, independent entities, which they are not, just as no human operator within any military is fully independent or autonomous.
David Yakobovitch
That’s right. The operators are being augmented by the machine, and the distinction you’ve shared here in the episode, Steven, is so brilliant: on the ground versus in the air. And typically in the air, as you mentioned, it’s the drones, and it’s only been the last decade or so; it’s a massive shift, similar to how we’ve seen software shifting as well. On the ground we look at systems like the Iron Dome system that’s deployed in the Middle East for anti-missile, basically anti-air, defense.
Similarly, that type of technology has been considered in Korea and Japan and Taiwan, for other reasons, for potential tensions in those regions. And the thing about that system as well is that it’s all about air warfare; it’s not about on the ground. Yet in our minds, especially in the United States, all these systems go towards on the ground, because we think of the mass shootings at concerts and churches and schools, which led to the March for Our Lives movement and to repealing the authorization of bump stocks on guns in the States, so that guns aren’t effectively automatic; they never were truly automatic. That’s probably not level one or level five, but it’s moving towards level two or three.
Steven Umbrello
It’s functionally automatic, though not in the way we would understand an automatic rifle in the normal sense, where you can hold the trigger down and rounds keep cycling depending on the magazine capacity. It’s a perfect example of how the human body functions in relation to the technology in use, because it uses the recoil of a single round’s cycle to push the gun back and forth against this mechanical augmentation on the outside of the gun, allowing your finger to hit the trigger faster than it could without it.
David Yakobovitch
As the operator, augmenting the operator. And I guess the challenge, to this point, is that when we think about killer robots, the proposed ban that’s been coming up is just a carte blanche, blanket ban. It’s not considering these different areas, these different institutions, where we need to be more nuanced. And perhaps it wasn’t thoroughly discussed when it was introduced as policy.
Steven Umbrello
Definitely, and to be fair to your point, with the Iron Dome, for example, there is no human in or on the loop, like the level one to level four kinds of autonomy that I discussed before. And that’s for one reason: these systems need to respond really quickly, because bombs or aerial vehicles don’t move slowly, so they need to be able to respond faster than a human operator can.
I argue that, to a certain extent, that’s also true for lethal autonomous weapons, not just defensive autonomous weapons. If they’re not as fast as or faster than a human operator, then there is no incentive for their use, and that kind of level isn’t even being researched. There’s no incentive to research something that is less capable than what we currently have.
It’s always more. And to be fair, what I’m arguing here is not novel; it’s not like I’m the first person to say it. I’m not saying no ban; I’m saying a ban of a certain kind, and regulation of another kind. The military-industrial complex itself, to use a term that’s controversial in many circles, provides this institutional infrastructure, these practices. Whether they actually follow those practices is another thing.
But the institution was designed to carry out these types of legality and proportionality analyses, to determine the legality and the ethics of a strike, of a military engagement. So the institutionalization of arms control norms can be updated.
It can all be similarly augmented, given these new types of systems: the complete replacement of the human operator within an aerial vehicle, for example, making it level five autonomous, with the ability to select and engage a target. So the infrastructure provides a solid foundation on which we can start to adapt the existing norms and policy measures to address those systems.
David Yakobovitch
Because the body of technology will always change. It’s always updating, always becoming more advanced. As we think about this decade, the US Air Force recently announced they’ve successfully implemented their first artificial intelligence system in the Air Force. Actually, the US Air Force has done business with different companies I’ve been involved with before, in training, to help their divisions become more savvy with technology, to learn data science skills and software engineering skills so that they could be aware of all this technology. But awareness doesn’t always mean being on the offense; often it’s on the defense, building systems that are resilient. And when countries like the United States and the US Air Force integrate these technologies, a lot of people at first glance, when they don’t think about the nuance, forget that the US is actually obligated to defend many countries in the world; there are contracts and agreements. So a lot of it is very defense-based and supportive, so that our world can continue to grow without violence.
And there’s this dichotomy we talked about at the beginning of the show, about banning social media or banning weapons: whenever we go so far to the left or the right of a controversy, it changes the course of history. We do not know where those bans will take us, sometimes for better and sometimes for worse. We know that today, in certain countries out East, there are bans on speech and bans on possession of certain items, and we’ve seen that a lot of those societies generally start to crumble and move backwards. That’s not always the case if there’s enough social movement in that society, though. It’s really fascinating what you’ve shared here today. And what do you think is next, like this year’s policy? Are we going to see some traction around the topics we talked about today?
Steven Umbrello
If the last few years are any indication, it seems that multilateral and unilateral cooperation will continue. It definitely seems that there’s a vested interest by nation states in assessing the threat these types of weapons pose, or don’t pose. Although, as I already mentioned, when we’re talking about autonomous weapons systems, usually a single type comes to mind, and that’s often what brings a lack of nuance to the debate.
But fundamentally, the reason nation states have yet to agree on a ban, whatever type of prohibition or regulation it is, may be a focus on the wrong nexus point. Instead of focusing on the level of autonomy, let’s focus on which types of systems specifically, or even categories of systems, if embodied with this level of autonomy, entail a lack of sufficient human control. What I began to tease out, having problematized autonomy per se over the last forty minutes, is that there are other convincing arguments against autonomous weapon systems beyond the supposed accountability gap posed by these types of ordnance systems, such as the dehumanization of war and its deleterious effects on human dignity.
It appears, though, that actual military operations planning and deployment intuitively constrains the autonomy of any given agent, soldier or autonomous weapons system, to being a function of a larger, a priori plan; agents bear little, if any, intrinsic operational value outside their functional capacity to carry out such plans.
This doesn’t, of course, extricate autonomous weapons systems deployed within such constraints from unwanted actions, recalcitrance, or excess. The technical design, as a predicate of the technological requirements, has to reflect both the proximal and distal intentions and goals of the relevant agents within a deployment envelope, for example, and these would be the commanders who employ these types of weapons in their areas of operations, as well as the potential human operators
who may be engaging with them in a symbiotic relationship on the ground while being supported by these aerial autonomous weapons systems, fully autonomous drones, for example. Regardless, the capacity for these systems to be responsive to the relevant moral reasons of these agents must be considered as a foundational variable in the weaponeering decision-making process for any given context of deployment, in these pre-mission stages.
David Yakobovitch
And so, tying this all together, I feel like we’ve just scratched the surface on many of these topics here today. If people want to learn more about your body of work, or more about your predictions on policy, where should they go, and what do you have to say on that?
Steven Umbrello
For those who are interested in the philosophical foundations of what is meaningful human control, what are the issues with autonomy, or even how can designers start thinking about this? How can they implement something like the value sensitive design approach?
You can find my work on my website and on my social media linked here. And if people are more interested in following the actual debate itself on the prohibition of autonomous weapons systems, they can watch many of the online multilateral meetings hosted both by the UN and outside of their auspices as they take place.
You can find information on those as they take place, probably this year, maybe in the spring or summer, and follow Human Rights Watch and the Campaign to Stop Killer Robots for news on those events. But all in all, like most things in life, the issues with autonomous weapon systems and what it means to have meaningful human control are not black and white.
As I’ve said many times, painting with a broad brush may ultimately do more harm than good. At the very least, such a ban will turn out to be symbolic; at worst, it will lead developers to entirely sideline these discussions and any progress being made. So the middle path, which I’ve been trying to advocate for, is finding a balance between military priorities and justice in the design and deployment of these types of systems, and philosophically nuanced explorations are the way to make sure that this kind of nuance makes itself manifest. If we can do that successfully, then maybe we can sleep better, knowing that we’ve preserved justice and protected human dignity, even in a place like war, where these things are often hard to find.
David Yakobovitch
Steven Umbrello, the Managing Director of the Institute for Ethics and Emerging Technologies. Thank you for joining us today on HumAIn, and let’s look forward to a decade where we do find that middle ground.
Steven Umbrello
It’s been a pleasure.
David Yakobovitch
Thank you for listening to this episode of the HumAIn podcast. Did the episode measure up to your thoughts on ML and AI, data science, developer tools, and technical education? Share your thoughts with me at HumAInpodcast.com/contact. Remember to share this episode with a friend, subscribe, and leave a review. And listen for more episodes of HumAIn.