The Islamic University of Madinah, Saudi Arabia
* Corresponding author


This article examines the complex intersection of digital technology and international humanitarian law (IHL), with a particular focus on autonomous weapons systems (AWS). The rapid advancement of artificial intelligence and robotics has fundamentally transformed modern warfare, challenging the traditional legal frameworks designed for conventional conflicts. Through a comprehensive analysis of existing legal instruments, scholarly discourse, and emerging state practices, this study explores how the principles of distinction, proportionality, and precaution apply to increasingly autonomous military technologies. This article argues that while current IHL provisions remain relevant, significant gaps exist regarding accountability, meaningful human control, and ethical decision-making in autonomous systems. It concludes by proposing adaptive interpretations of existing frameworks and potential new regulatory approaches to address the unique challenges posed by digital warfare technologies while preserving humanitarian protection in armed conflicts.

Introduction

The exponential advancement of digital technology has profoundly transformed the nature of armed conflict in the twenty-first century. The integration of artificial intelligence (AI), machine learning algorithms, and autonomous capabilities into weapons systems represents a paradigm shift in warfare that challenges conventional understanding of international humanitarian law (IHL) (Schmitt, 2025). As Crootof (2015, p. 1838) observes, “autonomous weapon systems are not merely another weapon—they are an entirely new kind of weapon.” This technological revolution raises fundamental questions regarding the adequacy of existing legal frameworks to regulate emerging forms of digital warfare and protect humanitarian values in contemporary armed conflicts.

The Importance of Research

The exponential advancement of digital technology has profoundly transformed the nature of armed conflict in the twenty-first century. The integration of artificial intelligence (AI), machine learning algorithms, and autonomous capabilities into weapons systems represents a paradigm shift in warfare that challenges conventional understanding of international humanitarian law (IHL). The development and deployment of autonomous weapons systems (AWS) have accelerated dramatically in recent years, with significant implications for the interpretation and application of IHL principles.

The Research Problem

Autonomous weapons systems, characterized by their capacity to select and engage targets without direct human intervention, operate across a spectrum of autonomy, from human-supervised to fully autonomous decision-making architectures (Boulanin & Verbruggen, 2017, p. 22). This technological revolution raises fundamental questions about the adequacy of existing legal frameworks to regulate emerging forms of digital warfare and protect humanitarian values in contemporary armed conflicts. Complex technical, legal, and ethical questions arise when machines are empowered to make life-and-death decisions on the battlefield.

The Purpose of Research

The primary purpose of this article is to analyze the impact of digital technology, particularly autonomous weapons systems, on the interpretation and application of international humanitarian law.

Specifically, this study addresses three interconnected questions: How do existing IHL principles apply to autonomous weapons systems? What legal and ethical gaps emerge when digital technologies make targeting decisions? What regulatory approaches can effectively address these challenges while preserving humanitarian protection?

Research Methodology

Methodologically, this study adopts a multidisciplinary approach that integrates legal analysis, ethical inquiry, and technological assessment. It draws upon primary legal sources, including the Geneva Conventions and Additional Protocols, customary international law, and state practice, as well as secondary literature from legal scholars, ethicists, and technical experts. This analysis is further informed by recent developments in international forums, particularly the United Nations Convention on Certain Conventional Weapons (CCW) discussions on lethal autonomous weapons systems.

The United States Department of Defense defines autonomous weapons systems as those that, “once activated, can select and engage targets without further intervention by a human operator” (United States Department of Defense, 2023). This definition, while seemingly straightforward, belies the complex technical, legal, and ethical questions that arise when machines are empowered to make life-and-death decisions on the battlefield.

Through a systematic examination of the three questions set out above, this study contributes to the growing scholarly discourse on the regulation of emerging military technologies.

This article proceeds as follows: Section II establishes the conceptual framework, defines key technological terms, and outlines core IHL principles. Section III analyzes the application of the distinction, proportionality, and precaution principles to autonomous weapons systems, highlighting areas of legal uncertainty. Section IV examines the ethical dimensions of autonomous weapons, including questions of human dignity, moral agency, and cultural perspectives. Section V evaluates existing and proposed regulatory approaches and offers recommendations for addressing the challenges posed by increasingly autonomous military technologies. Section VI concludes with the principal findings and recommendations.

Conceptual Framework: Digital Technology and International Humanitarian Law

Defining Digital Warfare Technologies

The contemporary battlefield has undergone profound transformation through the integration of digital technologies, creating what scholars increasingly refer to as the “digitalization of warfare” (Solovyeva & Hynek, 2018). Digital warfare technologies encompass a broad spectrum of computational systems that collect, process, analyze, and act upon information in military contexts. These technologies range from cyber capabilities and electronic warfare tools (Papanastasiou, 2010) to increasingly sophisticated autonomous weapons systems (Kumar & Batarseh, 2020) that incorporate artificial intelligence and machine-learning algorithms (Davison, 2020).

As Buchanan (2025) observes, technological innovation has historically driven evolutionary changes in warfare; however, the current digital revolution represents a more fundamental shift in the conception and execution of military operations. Digital technologies differ from conventional weapons in their capacity to operate across physical and virtual domains simultaneously, to process vast quantities of information at superhuman speeds, and most significantly, to make increasingly complex decisions with minimal human intervention (Chu, 2025).

The technological foundation for digital warfare rests on several interconnected developments. First, exponential increases in computational power have enabled unprecedented volumes of battlefield data to be processed. Second, advances in sensor technology have dramatically expanded the capacity to collect information across multiple domains. Third, sophisticated algorithms, particularly those employing machine-learning techniques, have enhanced the ability to analyze complex patterns and make predictive assessments. Finally, robotics and autonomous systems have created new possibilities for the physical application of force based on digital decision-making (Horowitz & Scharre, 2021).

Evolution of Autonomous Weapons Systems

Autonomous weapons systems may represent the most significant manifestation of digital technology in warfare. These systems operate along a continuum of autonomy, from semi-autonomous weapons that require human approval for critical functions to fully autonomous systems that are capable of independently selecting and engaging targets (Boulanin & Verbruggen, 2017). The evolution of these systems has progressed through several distinct phases, each characterized by increasing technological sophistication and diminishing requirements for direct human control.

Early precursors to modern autonomous weapons included automated defense systems, such as the Soviet Teletank program of the 1930s and 1940s, which demonstrated rudimentary remote operation capabilities (Fourtané, 2020). The subsequent development of guided munitions during the Cold War introduced limited autonomous functions, primarily for navigation and terminal guidance. However, these systems still required significant human direction and lacked the capacity for independent target selection (United Kingdom Ministry of Defence, 2011).

The contemporary generation of autonomous weapons systems emerged in the early twenty-first century and is characterized by increasingly sophisticated sensing, processing, and decision-making capabilities. As Crootof (2015, p. 1847) notes, “these new weapons have moved beyond mere automation to incorporate adaptive learning and independent decision-making.” Modern systems, such as loitering munitions, autonomous underwater vehicles, and AI-enabled combat aircraft, demonstrate progressively greater autonomy in critical functions, including target identification, tracking, and engagement (Human Rights Watch & International Human Rights Clinic at Harvard Law School, 2025).

The technological trajectory suggests continued advancement toward systems with greater autonomy. Current research focuses on developing weapons with enhanced situational awareness, improved discrimination capabilities, and sophisticated decision-making algorithms (Amoroso & Tamburrini, 2020). These developments raise profound questions regarding the appropriate balance between technological capability and human control in lethal decision-making.

Core Principles of International Humanitarian Law

International humanitarian law (IHL), also known as the law of armed conflict, comprises the legal framework governing the conduct of hostilities and the protection of persons not participating in combat. This body of law, codified primarily in the Geneva Conventions of 1949 and the Additional Protocols of 1977, establishes fundamental principles designed to limit the effects of armed conflict and protect human dignity, even in the most extreme circumstances (Schmitt, 2017).

The principle of distinction stands as a cornerstone of IHL, requiring parties to a conflict to distinguish between civilians and combatants and between civilian objects and military objectives. Article 48 of Additional Protocol I establishes that “the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives” (First Additional Protocol to the Geneva Conventions, 1977). This principle prohibits indiscriminate attacks and requires weapons to be capable of being directed at specific military targets.

The principle of proportionality prohibits attacks that may be expected to cause incidental loss of civilian life, injury to civilians, or damage to civilian objects that would be excessive in relation to the anticipated concrete and direct military advantage. This principle requires commanders to balance humanitarian considerations against military necessity in each operational context (Schmitt, 2013). As articulated in Article 51(5)(b) of Additional Protocol I, proportionality assessment represents one of the most complex judgments required in armed conflict.

The principle of precaution, codified in Article 57 of Additional Protocol I, requires parties to take all feasible measures to verify that targets are military objectives, choose means and methods of warfare that minimize civilian harm, and refrain from attacks expected to cause disproportionate civilian damage. This principle emphasizes the obligation to exercise constant care to spare the civilian population throughout military operations (Gunawan et al., 2022).

Additional principles relevant to the regulation of autonomous weapons include the prohibition against weapons that cause unnecessary suffering or superfluous injury, the Martens Clause’s protection of the “principles of humanity and the dictates of public conscience,” and the requirement for legal review of new weapons under Article 36 of Additional Protocol I (Al-Jasmi, 2022; UNIDIR, 2025).

Theoretical Approaches to Technology Regulation in Armed Conflict

The regulation of emerging technologies in armed conflict has generated diverse theoretical approaches, each offering distinct perspectives on how international law should adapt to technological changes. These approaches inform both scholarly discourse and state positions on international forums that address autonomous weapons systems.

The interpretive adaptation approach maintains that the existing IHL principles are sufficiently flexible to accommodate technological developments, including autonomous weapons systems. Proponents argue that, while new technologies may present novel challenges, fundamental humanitarian concerns remain consistent, allowing for the adaptive interpretation of established legal frameworks (Schmitt, 2025). This approach emphasizes the enduring relevance of the core IHL principles and their capacity to govern new means and methods of warfare.

In contrast, the regulatory innovation approach contends that autonomous weapons systems represent such a fundamental departure from conventional warfare that new legal instruments are necessary. Advocates of this perspective, including Gunawan et al. (2022), argue that the unique characteristics of autonomous decision-making, particularly the removal of direct human judgment from lethal targeting, create regulatory gaps that cannot be adequately addressed through interpretive adaptation alone. This approach often calls for specific prohibitions or limitations on autonomous weapons systems.

The third perspective, the technical compliance approach, focuses on embedding legal requirements directly into the design and operation of autonomous systems. This approach, exemplified by Davison’s (2020) concept of “human-centered AI,” emphasizes technical solutions such as verifiable programming constraints, ethical governors, and meaningful human control mechanisms. Proponents suggest that technological safeguards can ensure compliance with humanitarian principles even as systems become increasingly autonomous.

Finally, the ethical-legal integration approach, advocated by scholars such as Lee (2024), argues that effective regulation requires explicit consideration of ethical dimensions alongside legal requirements. This perspective emphasizes that questions of human dignity, moral agency, and responsibility cannot be separated from legal analysis when evaluating autonomous weapons systems. This approach calls for regulatory frameworks that incorporate both deontological concerns about the morality of autonomous killing and consequentialist assessments of humanitarian outcomes.

These theoretical approaches are not mutually exclusive, and comprehensive regulation requires elements from each perspective. As Rosert and Sauer (2019) observe, the effective governance of autonomous weapons systems demands multi-level approaches that address technical, legal, ethical, and institutional dimensions simultaneously. The following sections examine how these theoretical perspectives inform the application of IHL principles to autonomous weapons systems and shape potential regulatory responses.

Legal Analysis of Autonomous Weapons Systems under IHL

Application of Distinction Principle to Autonomous Targeting

The principle of distinction, which requires belligerents to differentiate between combatants and civilians, presents significant challenges when applied to autonomous weapons systems. As codified in Article 48 of Additional Protocol I to the Geneva Conventions, this principle constitutes “the foundation on which the codification of the laws and customs of war rests” (International Committee of the Red Cross, 2019). The fundamental question is whether autonomous systems possess sufficient technical capability to perform the nuanced discrimination that human combatants are legally required to exercise.

Current sensor and algorithmic technologies demonstrate both promise and limitations in this regard. Computer vision systems have achieved remarkable accuracy in certain controlled environments but continue to face challenges in complex, dynamic battlefield scenarios (Davison, 2020). As Amoroso and Tamburrini (2020, p. 31) observe, “the unpredictable and chaotic nature of armed conflict environments creates significant obstacles for reliable target identification by autonomous systems.” Contextual understanding—distinguishing a civilian holding a weapon in self-defense from an active combatant, for example—remains particularly problematic for algorithmic decision-making.

The legal standard for distinction does not require perfect discrimination, but rather reasonable precautions based on available information. Schmitt (2013) argues that autonomous weapons must be evaluated against this standard rather than an idealized notion of perfect distinction. From this perspective, the relevant question is whether an autonomous system can achieve discrimination capabilities comparable to those of human combatants under similar circumstances. Some scholars have suggested that advanced sensor fusion and machine-learning algorithms may eventually surpass human capabilities in certain targeting scenarios (Umbrello et al., 2020).

However, Gaeta (2023, p. 1040) counters that “the distinction principle encompasses qualitative judgments that extend beyond pattern recognition to include contextual understanding and normative reasoning.” This position suggests that even technologically advanced autonomous systems may struggle to fulfill the legal requirements of distinction in complex operational environments. The human capacity to interpret ambiguous situations through cultural understanding and contextual knowledge represents a significant advantage over algorithmic approaches.

State practice reflects these tensions, with major military powers adopting various positions. The United States Department of Defense (2023) Directive 3000.09 requires autonomous weapons to “distinguish effectively between military and non-military targets,” while acknowledging that this capability must be verified through “appropriate levels of human judgment.” Similarly, the United Kingdom’s approach emphasizes that autonomous systems must be “capable of being used in accordance with IHL,” including the distinction requirement (United Kingdom Ministry of Defence, 2011).

Proportionality Assessments in Algorithmic Decision-making

The principle of proportionality presents perhaps the most formidable challenge for autonomous weapon systems. This principle prohibits attacks that may be expected to cause incidental civilian harm “excessive in relation to the concrete and direct military advantage anticipated” (Additional Protocol I, Article 51(5)(b)). The inherently qualitative and contextual nature of proportionality assessments raises profound questions about the capacity of algorithms to balance incommensurable values.

Proportionality determinations require weighing anticipated military advantage against expected civilian harm—a process that involves not merely quantitative calculation but also qualitative judgment informed by professional experience, ethical considerations, and contextual understanding (Gunawan et al., 2022). As Human Rights Watch & International Human Rights Clinic at Harvard Law School (2025, p. 42) notes, “proportionality assessments involve value judgments that cannot be reduced to algorithmic formulas without fundamentally altering their nature.” The question of whether civilian casualties are “excessive” in relation to military advantage inherently involves normative reasoning that extends beyond computational capabilities.

Some proponents of autonomous systems suggest that algorithms could potentially apply predetermined values and weightings to standardize proportionality assessments. Horowitz and Scharre (2021) propose that machine-learning systems could analyze historical precedents to develop models for proportionality calculations. However, this approach has significant limitations, including the difficulty of quantifying military advantage, the context-specific nature of proportionality judgments, and the risk of encoding problematic historical practices into algorithmic decision-making.
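To make the difficulty concrete, the following deliberately simplified sketch shows what a “predetermined values and weightings” approach of the kind just described might look like. Every name, weight, and threshold is a hypothetical illustration rather than a description of any fielded system; the point of the sketch is that the contested step lies in the inputs, not the arithmetic.

```python
# Illustrative sketch only: a naive encoding of proportionality as fixed
# weightings. All names, weights, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class StrikeEstimate:
    expected_civilian_harm: float  # modeled incidental civilian harm
    military_advantage: float      # "anticipated military advantage" forced into a scalar


def naive_proportionality_check(estimate: StrikeEstimate,
                                excessiveness_ratio: float = 0.5) -> bool:
    """Permit a strike only if modeled civilian harm falls below a fixed
    fraction of modeled military advantage.

    The arithmetic is trivial; the contested step is upstream. IHL offers
    no method for reducing "military advantage" or "excessive" harm to
    numbers, so the ratio and both inputs embed normative judgments in code.
    """
    if estimate.military_advantage <= 0:
        return False  # no military advantage, so no basis for incidental harm
    return (estimate.expected_civilian_harm
            / estimate.military_advantage) < excessiveness_ratio


print(naive_proportionality_check(StrikeEstimate(2.0, 10.0)))  # True
print(naive_proportionality_check(StrikeEstimate(8.0, 10.0)))  # False
```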

The legal requirement for proportionality assessments also raises questions regarding timing and information availability. Article 57 of Additional Protocol I requires that proportionality be evaluated based on the information available at the time of the attack. For autonomous systems operating in dynamic environments, this creates challenges in determining when and how proportionality should be calculated, particularly if circumstances change after deployment but before engagement (Schmitt, 2017). Thus, the temporal dimension of proportionality assessment presents additional complications for autonomous decision-making architectures.

State positions on this issue reflect a cautious approach. The United States emphasizes that proportionality assessments must incorporate “appropriate levels of human judgment” (United States Department of Defense, 2023), while the International Committee of the Red Cross maintains that “the rules on proportionality in attack and precautions in attack establish obligations that are inherently complex value judgments that humans must make” (International Committee of the Red Cross, 2019). These positions suggest significant skepticism about fully delegating proportionality assessments to autonomous systems in the near future.

Precautionary Measures and Autonomous Systems

The principle of precaution, codified in Article 57 of Additional Protocol I, requires parties to a conflict to take “constant care” to spare civilians and civilian objects, including obligations to verify targets, select means and methods that minimize civilian harm, and cancel or suspend attacks when disproportionate effects become apparent. This principle presents both opportunities and challenges for autonomous weapon systems.

Autonomous systems offer potential advantages for implementing certain precautionary measures. Advanced sensors and processing capabilities could enable more comprehensive battlefield awareness than human combatants possess, potentially improving target verification (Umbrello et al., 2020). Additionally, the absence of human emotions, such as fear, anger, or vengeance, might reduce the risk of impulsive actions that disregard precautionary obligations (Schmitt, 2025).

However, the requirement to cancel or suspend attacks when circumstances change presents significant technical challenges. Article 57(2)(b) of Additional Protocol I mandates that an attack “shall be cancelled or suspended if it becomes apparent that the objective is not a military one, is subject to special protection, or that the attack may be expected to cause [disproportionate civilian harm].” Implementing this requirement in autonomous systems demands sophisticated capabilities for real-time reassessment and adaptation, capabilities that remain technically challenging (Davison, 2020).

The precautionary principle also requires the selection of means and methods of warfare that minimize civilian harm while achieving military objectives. This obligation has implications for when and how autonomous weapons should be deployed. As the United Nations Institute for Disarmament Research (2025, p. 18) observes, “the obligation to take feasible precautions may restrict the use of autonomous weapons to specific operational contexts where their effects can be reasonably predicted and controlled.” This suggests a context-dependent approach to regulation that considers the specific characteristics of different autonomous systems and operational environments.

State practice increasingly emphasizes the importance of testing, verification, and validation processes as precautionary measures for autonomous weapons. The United Kingdom Ministry of Defence (2011) highlights the need for “rigorous testing and verification” before deployment, while the US Department of Defense (2023) requires “appropriate levels of testing and certification” for autonomous systems. These approaches reflect an emerging consensus that precautionary obligations extend to the development and testing phases of autonomous weapons, not merely their operational employment.

Command Responsibility and Accountability Challenges

The deployment of autonomous weapons systems raises profound questions regarding command responsibility and accountability under international humanitarian law. Traditional models of command responsibility, as articulated in Article 86(2) of Additional Protocol I and developed through international jurisprudence, presume a human chain of command with knowledge of and control over subordinates’ actions. Autonomous systems challenge this framework by introducing new forms of decision-making that may not align with conventional command structures.

Traditionally, the doctrine of command responsibility requires three elements: (1) a superior-subordinate relationship, (2) knowledge or reason to know that crimes were being committed or were about to be committed, and (3) failure to take necessary and reasonable measures to prevent such crimes or punish perpetrators (Gunawan et al., 2022). When applied to autonomous weapons, each element presents unique challenges. The superior-subordinate relationship becomes ambiguous when decisions are made by algorithmic processes rather than by human agents. Knowledge requirements are complicated by the opacity of complex AI systems and the difficulty of predicting their behavior in novel situations. The obligation to prevent or punish violations becomes problematic when the “perpetrator” is a technological rather than a moral agent.

Crootof (2015, p. 1876) identifies this as the “responsibility gap” in autonomous weapons regulation: “As weapons become increasingly autonomous, it becomes increasingly difficult to hold anyone criminally liable for unintended harm they cause.” This gap raises concerns about accountability for IHL violations and the potential impunity of harm caused by autonomous systems. Several theoretical approaches have been proposed to address this issue.

The “meaningful human control” approach, advocated by scholars such as Rosert and Sauer (2019), emphasizes maintaining sufficient human oversight to preserve clear lines of responsibility. This approach requires humans to retain certain decision-making functions and supervisory capabilities, ensuring that responsibility can be appropriately attributed. The “distributed responsibility” model, by contrast, recognizes multiple human agents—designers, programmers, commanders, operators—who share responsibility for autonomous systems’ actions based on their respective contributions and roles (Gaeta, 2023).

A third approach focuses on state responsibility rather than on individual criminal liability. Under this framework, states bear the responsibility for ensuring that autonomous weapons comply with IHL obligations, regardless of the specific attribution of individual responsibility (Schmitt, 2017). This approach aligns with existing state obligations to ensure respect for IHL under Common Article 1 of the Geneva Conventions.

State practice reflects these theoretical tensions, with various approaches to accountability frameworks. The United States emphasizes that “people who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement” (US Department of Defense, 2023). This position focuses on human decision-makers while acknowledging the distributed nature of responsibility across the deployment chain.

The accountability challenges posed by autonomous weapons systems ultimately reflect deeper questions regarding the relationship between human judgment and technological capability in warfare. As Blanchard (2024) argues, “the fundamental issue is not whether machines can make decisions, but whether humans should delegate certain decisions to machines, and how responsibility should be allocated when they do.” Resolving these questions requires not merely technical solutions, but also normative judgments about the appropriate role of human decision-making in lethal operations.

Ethical Dimensions of Autonomous Weapons Systems

Human Dignity and Machine Decision-making in Lethal Force

The deployment of autonomous weapons systems raises profound ethical questions about human dignity and the moral implications of delegating lethal decision-making to machines. The concept of human dignity, central to both international humanitarian law and broader ethical frameworks, demands that human life be treated with inherent respect and value. As Lee (2024, p. 48) argues, “the delegation of lethal decision-making to autonomous systems potentially represents a fundamental affront to human dignity by removing human moral judgment from the ultimate decision to take a life.”

This concern reflects what Amoroso and Tamburrini (2020) identify as the ‘meaningful human control’ argument: certain decisions, particularly those involving lethal force, require human moral agency to preserve dignity and respect for human life. This argument suggests that even technically perfect autonomous targeting would remain ethically problematic because it removes the moral deliberation that acknowledges the gravity of human life. As Blanchard (2024) observes, “the act of killing without human moral consideration may constitute a form of objectification that denies the victim’s humanity.”

Proponents of autonomous weapons systems counter that concerns about human dignity must be balanced against potential humanitarian benefits. Umbrello et al. (2020) suggest that if autonomous systems can reduce civilian casualties through superior discrimination capabilities, prohibiting their use might itself constitute an ethical failure. This consequentialist perspective prioritizes outcomes over the moral character of the decision-making process. However, as Human Rights Watch & International Human Rights Clinic at Harvard Law School (2025, p. 67) notes, “this approach risks reducing human dignity to a purely consequentialist calculation, neglecting its deontological dimensions.”

The tension between deontological concerns about the process of lethal decision-making and consequentialist assessments of outcomes reflects deeper philosophical questions about the nature of human dignity. The Martens Clause of the Hague Conventions, which references “the principles of humanity and the dictates of public conscience,” provides a legal anchor for these ethical considerations within international humanitarian law (First Additional Protocol to the Geneva Conventions, 1977). This clause suggests that ethical concerns about autonomous weapons cannot be dismissed as merely philosophical abstractions, but constitute relevant legal considerations.

Cultural and religious perspectives further complicate the ethical landscape. Different traditions hold different views on the moral significance of human agency in lethal decisions. Some religious frameworks emphasize human moral responsibility as divinely ordained and non-delegable, while others focus more on outcomes regardless of the decision-making process (Rosert & Sauer, 2019). These diverse perspectives highlight the importance of inclusive global dialogue on the ethical dimensions of autonomous weapons systems.

Moral Agency and Responsibility Gaps

The concept of moral agency—the capacity to make moral judgments and be held accountable for actions—is at the center of ethical debates about autonomous weapons systems. Traditional ethical frameworks presume that moral agents can be praised or blamed for their decisions. Autonomous systems challenge this paradigm by introducing non-human decision-makers that lack moral consciousness yet make consequential choices affecting human lives.

This creates what Crootof (2015, p. 1872) terms the “responsibility gap”—situations where harm occurs but no moral agent bears clear responsibility. As Gaeta (2023, p. 1042) explains, “neither the programmer who created the algorithm nor the commander who deployed the system may have reasonably foreseen the specific decision made by the autonomous weapon, yet someone must bear moral responsibility for its consequences.” This gap raises concerns about accountability, justice for victims, and the potential normalization of unattributable harm in warfare.

Several theoretical approaches have attempted to address this gap. The “distributed responsibility” model, advocated by scholars such as Davison (2020), distributes moral responsibility across the development and deployment chain, including designers, programmers, commanders, and political authorities. This approach recognizes the collaborative nature of autonomous weapons development but risks diluting responsibility to the point where no individual bears meaningful accountability.

An alternative approach emphasizes “meaningful human control” as a prerequisite for moral responsibility. Under this framework, humans must maintain sufficient oversight and decision-making authority to preserve clear lines of moral responsibility (United Nations Institute for Disarmament Research, 2025). This approach may require limiting autonomy in certain critical functions to ensure that meaningful human judgments remain in the decision-making loop.

A third perspective, proposed by Schmitt (2025), suggests reconceptualizing moral responsibility for autonomous systems through the lens of “responsible innovation.” This approach focuses on establishing robust processes for testing, verification, and validation to ensure that autonomous systems operate as intended and comply with ethical and legal norms. Responsibility then attaches to failures in these processes rather than to unforeseeable autonomous decisions.

The philosophical question of whether artificial intelligence could possess genuine moral agency further complicates these debates. While current autonomous systems clearly lack the consciousness, intentionality, and moral reasoning capabilities associated with human moral agency, some scholars have speculated about future developments in artificial general intelligence that might blur these distinctions (Umbrello et al., 2020). However, as Lee (2024, p. 51) observes, “even if future AI systems develop capacities that mimic aspects of moral reasoning, fundamental questions would remain about whether synthetic moral agency could ever substitute for human moral responsibility in matters of life and death.”

Military Necessity versus Humanitarian Considerations

The ethical tension between military necessity and humanitarian considerations lies at the heart of international humanitarian law and takes on new dimensions in the context of autonomous weapons systems. Military necessity permits actions necessary to achieve legitimate military objectives, whereas humanitarian considerations constrain those actions to minimize unnecessary suffering and protect fundamental human values.

Autonomous weapons systems potentially serve military necessity through several mechanisms. They may reduce risks to friendly forces by removing human operators from dangerous environments, enhance precision in targeting to achieve military objectives more efficiently, and process battlefield information more rapidly than human decision-makers (Horowitz & Scharre, 2021). These capabilities could provide significant military advantages, particularly in high-intensity conflicts or contested electromagnetic environments, where remote human control becomes difficult or impossible.

However, humanitarian considerations raise countervailing concerns. The potential for algorithmic bias, technical malfunction, or unpredictable behavior in complex environments creates risks of unintended civilian harm (Human Rights Watch & International Human Rights Clinic at Harvard Law School, 2025). More fundamentally, the delegation of lethal decision-making to autonomous systems may erode the humanitarian constraints that human moral intuition and empathy impose on warfare. As Kumar and Batarseh (2020) observe, “human reluctance to kill represents an important, if imperfect, humanitarian safeguard that autonomous systems would lack.”

The ethical balancing of military necessity against humanitarian considerations requires a careful assessment of both the capabilities and limitations of autonomous systems in specific operational contexts. Boulanin and Verbruggen (2017) argue for a context-dependent approach that evaluates autonomous weapons against specific use cases rather than seeking universal ethical judgments. This approach recognizes that the ethical implications of autonomy vary significantly based on the type of system, operational environment, and specific functions being automated.

The principle of humanity that underlies international humanitarian law provides an important ethical framework for this balancing exercise. As articulated in the Martens Clause and developed through subsequent legal instruments, this principle emphasizes that even in warfare, certain humanitarian values remain inviolable (First Additional Protocol to the Geneva Conventions, 1977). The challenge lies in determining how these values apply to novel technologies that the original frameworks of humanitarian law could not have anticipated.

Cultural and Regional Perspectives on Autonomous Warfare

Ethical perspectives on autonomous weapons systems vary significantly across cultural, regional, and developmental contexts. These diverse viewpoints reflect different philosophical traditions, security concerns, technological capabilities, and historical experiences of warfare. Understanding these varied perspectives is essential to developing inclusive and effective regulatory approaches.

Western ethical discourse on autonomous weapons often emphasizes individual rights, human dignity, and the moral responsibility of human agents. This perspective, informed by liberal philosophical traditions and experience with technological warfare, tends to focus on questions of meaningful human control and individual accountability (Blanchard, 2024). For instance, the European Union has emphasized the importance of “human-centric approaches” to autonomous weapons that preserve human dignity and agency (Rosert & Sauer, 2019).

By contrast, some non-Western perspectives place greater emphasis on collective security, sovereignty, and the potential for autonomous weapons to alter power dynamics in the international system. As Chu (2025) observes, “for many developing nations, concerns about technological colonialism and unequal access to advanced military capabilities shape ethical assessments of autonomous weapons regulation.” These nations may view overly restrictive regulatory approaches as potentially cementing the existing military advantages held by technologically advanced states.

Religious and philosophical traditions provide diverse ethical perspectives. Islamic legal scholars have examined autonomous weapons through the lens of principles such as the distinction between combatants and non-combatants (furqan) and the prohibition of excessive harm (ihsan) (Gunawan et al., 2022). Buddhist perspectives often emphasize the moral intention behind actions, raising questions about how autonomous systems lacking intentionality should be evaluated ethically (Solovyeva & Hynek, 2018).

Indigenous perspectives highlight additional ethical dimensions, including relationships with land and non-human entities, which may be affected by autonomous warfare. These viewpoints often emphasize the interconnectedness of human and natural systems and the potential ecological impacts of autonomous weapons deployment (Davison, 2020).

The Global South perspective raises important questions about who participates in defining the ethical norms for emerging technologies. As Gunawan et al. (2022, p. 8) note, “the discourse on autonomous weapons ethics has been dominated by scholars and institutions from technologically advanced nations, potentially marginalizing perspectives from regions most likely to experience these weapons in conflict.” This observation highlights the importance of inclusive dialogue that incorporates diverse ethical frameworks and acknowledges the differential impact of autonomous weapons across global contexts.

These varied cultural and regional perspectives do not necessarily lead to irreconcilable ethical positions. Rather, they enrich ethical discourse by highlighting the different dimensions of the complex moral questions surrounding autonomous weapons. As the United Nations Institute for Disarmament Research (2025, p. 42) concludes, “finding common ethical grounds requires recognizing the legitimacy of diverse perspectives while identifying shared humanitarian values that transcend cultural and regional differences.” This approach suggests the possibility of developing ethical frameworks for autonomous weapons that respect cultural diversity while affirming universal humanitarian principles.

Regulatory Approaches and Future Directions

Current International Initiatives and State Positions

The international community has engaged in extensive deliberations regarding the regulation of autonomous weapons systems, primarily within the framework of the United Nations Convention on Certain Conventional Weapons (CCW). Since 2014, the CCW has hosted discussions, including through a Group of Governmental Experts (GGE), focused on emerging technologies in the area of lethal autonomous weapons systems (LAWS). These discussions have revealed significant divergences in state positions while also identifying areas of potential consensus (United Nations Institute for Disarmament Research, 2025).

Several distinct regulatory approaches have emerged from these international deliberations. A coalition of states led by Austria, Brazil, and New Zealand advocates for a legally binding instrument prohibiting fully autonomous weapon systems. This position, supported by the Campaign to Stop Killer Robots, emphasizes the ethical and legal concerns discussed in the previous sections, particularly regarding human dignity and meaningful human control (Human Rights Watch & International Human Rights Clinic at Harvard Law School, 2025). As of 2025, approximately 30 states have explicitly called for such prohibitions.

A second group of states, including the United States, the United Kingdom, Russia, and Israel, opposes new legal instruments, arguing that existing IHL adequately addresses autonomous weapons systems. These states emphasize the potential humanitarian benefits of autonomous technologies and maintain that premature regulation could impede beneficial innovation (US Department of Defense, 2023). They generally favor non-binding guidelines or political declarations that articulate best practices while preserving flexibility for technological development.

A middle-ground position, advocated by France, Germany, and several other European states, calls for a regulatory framework that establishes certain prohibitions, while permitting the regulated development of autonomous capabilities. This approach typically emphasizes the concept of ‘meaningful human control’ as a central regulatory principle (Rosert & Sauer, 2019). The European Parliament’s 2021 resolution on artificial intelligence in military applications exemplifies this position (European Parliament Resolution, 2021), calling for human oversight of lethal decision-making while supporting continued research on defensive autonomous systems.

Non-state actors have also significantly influenced regulatory discourse. The International Committee of the Red Cross (ICRC) has emphasized that autonomous weapons must remain under human control and supervision to ensure compliance with IHL. Civil society organizations, coordinated through the Campaign to Stop Killer Robots, have mobilized public opinion and advocated for preventive prohibitions based on humanitarian concerns (Blanchard, 2024). Technical experts and industry representatives have contributed perspectives on the feasibility of various regulatory approaches and the technical challenges of ensuring compliance.

Regional organizations have developed distinct positions that reflect their members’ security contexts and technological capabilities. The African Union has expressed support for preventive prohibitions, citing concerns about autonomous weapons exacerbating power asymmetries in the international system. The Association of Southeast Asian Nations (ASEAN) has emphasized the importance of inclusive dialogue that respects sovereignty while addressing humanitarian concerns (Chu, 2025).

Adaptive Interpretations of Existing IHL Frameworks

While debates continue regarding the need for new legal instruments, significant work has focused on clarifying how existing IHL frameworks apply to autonomous weapons systems. This adaptive interpretation approach seeks to leverage the flexibility of established legal principles to address novel technological challenges, without requiring new treaties or conventions.

Article 36 of Additional Protocol I, which requires legal review of new weapons, provides an important mechanism for regulating autonomous systems. This provision obligates states to determine whether new weapons would be prohibited under international law in some or all circumstances (First Additional Protocol to the Geneva Conventions, 1977). As Schmitt (2025, p. 28) argues, the “robust implementation of Article 36 reviews could address many concerns about autonomous weapons by ensuring systematic evaluation of compliance with distinction, proportionality, and other IHL requirements.” However, the effectiveness of this mechanism depends on states’ willingness to conduct thorough reviews and the specific methodologies they employ.

The Martens Clause offers another avenue for adaptive interpretation. This clause, which references “the principles of humanity and the dictates of public conscience,” provides a legal basis for considering ethical concerns within existing IHL frameworks (Schmitt, 2017). Some scholars suggest that the public conscience element could be interpreted as reflecting an emerging ethical consensus regarding autonomous weapons, potentially constraining their development and use even absent specific prohibitions (Gunawan et al., 2022).

Customary international law, which evolves through state practice and opinio juris, represents a third pathway for adaptive interpretation. As states develop policies and practices regarding autonomous weapons, these may crystallize into customary norms that complement treaty-based regulations. The Tallinn Manual (Schmitt, 2017) exemplifies this approach in the cyber domain, articulating how established legal principles apply to novel technologies on the basis of emerging state practice and legal opinion.

Interpretive guidance from authoritative bodies such as the International Court of Justice or the ICRC could further clarify how existing IHL applies to autonomous weapons. While such guidance lacks formal binding force, it can significantly influence state practice and legal interpretation. The ICRC’s position that “some form of human control or judgment” is required for compliance with IHL illustrates this approach (International Committee of the Red Cross, 2019).

Critics of adaptive interpretation argue that it may be insufficient to address the unique challenges posed by autonomous weapons. Gaeta (2023, p. 1045) contends that “existing legal frameworks were designed with human decision-makers in mind and may contain implicit assumptions about human judgment that cannot be straightforwardly translated to algorithmic contexts.” This perspective suggests that, while adaptive interpretation offers valuable flexibility, it may ultimately require supplementation through new legal instruments.

Proposals for New Regulatory Mechanisms

Recognizing the potential limitations of existing frameworks, various stakeholders have proposed new regulatory mechanisms specifically designed to address autonomous weapon systems. These proposals range from comprehensive prohibition treaties to targeted governance frameworks that focus on specific functions or applications.

A legally binding prohibition on fully autonomous weapons is the most restrictive proposal. Advocates argue that such a prohibition addresses fundamental ethical and legal concerns while preventing a destabilizing arms race in autonomous technologies (Human Rights Watch & International Human Rights Clinic at Harvard Law School, 2025). The proposed structure typically resembles existing weapons prohibition treaties, such as the Convention on Cluster Munitions or the Treaty on the Prohibition of Nuclear Weapons, establishing clear definitions, verification mechanisms, and implementation requirements.

Less restrictive proposals focus on regulating rather than prohibiting autonomous capabilities. The concept of “meaningful human control” features prominently in these approaches, although significant debate continues regarding its precise definition and implementation (Rosert & Sauer, 2019). Regulatory frameworks based on this concept typically establish requirements for human oversight, approval of targeting decisions, and the ability to deactivate autonomous systems when necessary.

Function-based regulatory approaches focus on specific capabilities rather than on the entire weapon system. This approach, advocated by Davison (2020), prohibits autonomy in certain critical functions (such as target selection), while permitting it in others (such as navigation or defensive countermeasures). The advantage of this approach lies in its precision and adaptability to different technological configurations, although it requires careful definition of the regulated functions.

Transparency and confidence-building measures represent another category of the proposed regulations. These include requirements for information sharing about autonomous capabilities, joint testing and verification protocols, and established communication channels to address concerns regarding autonomous systems (Boulanin & Verbruggen, 2017). While less restrictive than formal prohibitions, such measures could reduce the risk of misperception and unintended escalation associated with autonomous technologies.

Soft law approaches, including political declarations, codes of conduct, and best-practice guidelines, offer more flexible alternatives to binding treaties. The Tallinn Manual process for cyber operations provides a potential model for developing detailed interpretive guidance through expert consultation, without requiring formal state agreement (Schmitt, 2017). Such approaches can evolve more rapidly than treaty negotiations, and may facilitate broader participation, although they lack formal enforcement mechanisms.

Multi-stakeholder governance frameworks incorporate perspectives from states, international organizations, civil society, and technical experts. Horowitz and Scharre (2021) propose a layered governance approach that combines international legal standards with industry self-regulation, technical standards bodies, and civil society monitoring. This approach recognizes that effective regulation of autonomous weapons requires coordination across multiple governance levels and stakeholder groups.

Technical Solutions for Compliance and Verification

Technical approaches to ensure compliance with legal and ethical requirements represent an important complement to formal regulatory frameworks. These approaches focus on embedding constraints directly into the design and operation of autonomous systems, potentially addressing concerns regarding unpredictability and unintended consequences.

Verifiable programming constraints offer one technical solution, establishing hard limits on the behavior of autonomous systems that can be mathematically proven and externally verified. These constraints might include geographic restrictions (preventing operations in certain areas), temporal limitations (requiring periodic human reauthorization), or functional boundaries (prohibiting certain types of targets or weapons effects) (Davison, 2020). Such constraints can provide technical guarantees for compliance with specific regulatory requirements.
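A minimal sketch, assuming a purely hypothetical system, may clarify what such constraints could look like when expressed as machine-checkable preconditions. The area, interval, and category names below are invented for illustration, and a real system would aim to prove such properties formally rather than merely encode them.

```python
# Hedged sketch of verifiable programming constraints: hard preconditions
# that must hold before any engagement. All identifiers are hypothetical.
from datetime import datetime, timedelta


class ConstraintViolation(Exception):
    """Raised whenever a hard limit would be breached."""


# Geographic restriction: a fixed bounding box the system may operate in.
OPERATING_AREA = {"lat": (34.0, 35.0), "lon": (43.0, 44.0)}
# Temporal limitation: periodic human reauthorization is required.
REAUTH_INTERVAL = timedelta(hours=1)
# Functional boundary: target categories the system may never engage.
PROHIBITED_CATEGORIES = {"person", "vehicle_unknown", "structure_civilian"}


def check_constraints(lat: float, lon: float,
                      last_human_auth: datetime,
                      target_category: str) -> None:
    """Raise ConstraintViolation unless every hard limit is satisfied."""
    lat_lo, lat_hi = OPERATING_AREA["lat"]
    lon_lo, lon_hi = OPERATING_AREA["lon"]
    if not (lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi):
        raise ConstraintViolation("outside authorized geographic area")
    if datetime.utcnow() - last_human_auth > REAUTH_INTERVAL:
        raise ConstraintViolation("human reauthorization has expired")
    if target_category in PROHIBITED_CATEGORIES:
        raise ConstraintViolation(f"category '{target_category}' is prohibited")
```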

“Ethical governor” systems represent another approach, implementing algorithmic frameworks that evaluate potential actions against ethical and legal criteria before execution. These systems typically incorporate rule-based constraints derived from IHL principles, potentially preventing actions that would violate distinction, proportionality, or precautionary requirements (Umbrello et al., 2020). Although promising in theory, significant challenges remain in translating complex legal principles into computational frameworks.
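The following sketch, again using invented placeholder rules, illustrates the basic default-deny architecture such a governor might take; translating the actual legal tests of distinction, proportionality, and precaution into code is precisely the unresolved challenge noted above.

```python
# Hedged sketch of an "ethical governor": candidate actions pass through
# rule checks loosely modeled on IHL principles before execution. The rules
# and thresholds are placeholders, not operationalizations of the law.
from typing import Callable, NamedTuple


class Action(NamedTuple):
    target_is_military: bool        # distinction, as classified by sensors
    expected_civilian_harm: float   # proportionality input
    military_advantage: float       # proportionality input
    verification_confidence: float  # precaution: quality of target verification


Rule = Callable[[Action], bool]


def distinction_rule(a: Action) -> bool:
    return a.target_is_military


def proportionality_rule(a: Action) -> bool:
    return a.expected_civilian_harm < 0.5 * a.military_advantage


def precaution_rule(a: Action) -> bool:
    return a.verification_confidence >= 0.95


GOVERNOR: list[Rule] = [distinction_rule, proportionality_rule, precaution_rule]


def governor_permits(action: Action) -> bool:
    """Default-deny: an action is withheld unless every rule passes."""
    return all(rule(action) for rule in GOVERNOR)
```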

Explainable AI techniques address concerns regarding algorithmic opacity by making the decision-making processes of autonomous systems more transparent and interpretable. These approaches aim to enable meaningful human oversight by providing understandable explanations for targeting recommendations or decisions (Amoroso & Tamburrini, 2020). Explainability is particularly important for verifying compliance with proportionality assessments and other context-dependent legal requirements.

Testing, verification, and validation protocols provide mechanisms for evaluating the compliance of autonomous systems with legal and ethical standards before deployment. These protocols might include simulation testing across diverse scenarios, adversarial testing to identify potential failure modes, and field testing under controlled conditions (Horowitz & Scharre, 2021). Robust testing regimes can help identify and address compliance issues during development rather than in operational contexts.
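As a rough illustration of scenario-based testing, the following sketch exercises the hypothetical constraint checker from the earlier example across nominal and adversarial cases, asserting that engagement is withheld whenever a hard limit is breached. The scenarios are invented, and the code assumes check_constraints and ConstraintViolation from that sketch are in scope.

```python
# Scenario-based validation sketch for the hypothetical constraint checker
# defined earlier (check_constraints / ConstraintViolation). Each scenario
# states whether engagement must be blocked, and the harness verifies it.
from datetime import datetime, timedelta


def run_scenarios() -> None:
    fresh_auth = datetime.utcnow()
    stale_auth = datetime.utcnow() - timedelta(hours=2)
    scenarios = [
        # (lat, lon, last human auth, target category, must be blocked?)
        (34.5, 43.5, fresh_auth, "vehicle_military", False),  # nominal case
        (36.0, 43.5, fresh_auth, "vehicle_military", True),   # outside area
        (34.5, 43.5, stale_auth, "vehicle_military", True),   # expired auth
        (34.5, 43.5, fresh_auth, "person", True),             # prohibited class
    ]
    for lat, lon, auth, category, must_block in scenarios:
        try:
            check_constraints(lat, lon, auth, category)
            blocked = False
        except ConstraintViolation:
            blocked = True
        assert blocked == must_block, f"scenario failed: {(lat, lon, category)}"


run_scenarios()
print("all scenarios behaved as required")
```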

Technical verification mechanisms for arms-control agreements present particular challenges for autonomous weapons. Unlike nuclear or chemical weapons, autonomous capabilities may not have easily observable physical characteristics, which complicates traditional verification approaches. Boulanin and Verbruggen (2017) propose alternative verification methods focused on process controls (monitoring development activities) and performance testing (evaluating systems against standardized scenarios) rather than physical inspection.

Although technical solutions offer important tools for addressing autonomous weapons challenges, they face significant limitations. As Human Rights Watch & International Human Rights Clinic at Harvard Law School (2025, p. 78) observes, “technical fixes cannot resolve fundamental ethical questions about the appropriate role of human judgment in lethal decision-making.” Moreover, rapid technological evolution may outpace technical safeguards, requiring continuous adaptation of compliance mechanisms. These limitations suggest that technical approaches should complement rather than replace legal and ethical frameworks for autonomous weapons governance.

Conclusion

The integration of digital technology into warfare, particularly through autonomous weapons systems, presents profound challenges for the interpretation and application of international humanitarian law. This article has examined how core IHL principles—distinction, proportionality, and precaution—apply to increasingly autonomous military technologies while also addressing the ethical dimensions and regulatory approaches that shape this evolving landscape.

The analysis reveals that existing IHL frameworks retain fundamental relevance to autonomous weapons systems but require careful adaptation to address novel challenges. The principle of distinction requires technological capabilities for target discrimination that remain imperfect in complex operational environments. Proportionality assessments involve qualitative judgments about incommensurable values that resist straightforward algorithmic implementations. Precautionary obligations present both opportunities and challenges, potentially enhancing certain protective measures, while complicating others. Most significantly, command responsibility and accountability frameworks face fundamental questions when lethal decisions involve distributed human-machine interactions rather than clear chains of human command.

The ethical dimensions of autonomous weapons systems further complicate regulatory considerations. Questions of human dignity arise when lethal force decisions are delegated to non-human systems, creating tension between deontological concerns about the decision-making process and consequentialist assessments of humanitarian outcomes. The concept of moral agency, central to traditional ethical frameworks, becomes problematic when applied to autonomous systems that lack consciousness yet make consequential choices. The balance between military necessity and humanitarian considerations takes on new dimensions when technological capabilities promise military advantages while potentially eroding humanitarian safeguards. Cultural and regional perspectives further diversify the ethical landscape, highlighting the importance of inclusive dialogue that respects various philosophical traditions.

The current regulatory approaches reflect these complex legal and ethical considerations. International initiatives through the UN Convention on Certain Conventional Weapons have revealed significant divergences in state positions, ranging from calls for preventive prohibition to an emphasis on the adequacy of existing frameworks. Adaptive interpretations of IHL offer flexibility but may struggle to address the fundamental challenges posed by autonomous decision-making. Proposals for new regulatory mechanisms span a spectrum from comprehensive prohibition treaties to targeted governance frameworks focused on specific functions or applications. Technical solutions for compliance and verification provide important tools but face limitations in addressing fundamental normative questions.

Several key findings emerge from this analysis. First, the concept of “meaningful human control” represents a potential bridging principle that addresses both legal and ethical concerns while accommodating technological evolution. While precise definitions remain contested, this concept provides a framework for ensuring that human judgment remains appropriately involved in lethal decision-making without categorically prohibiting beneficial autonomous capabilities. Second, context-dependent approaches to regulation offer advantages over universal prohibitions or permissions, recognizing that the implications of autonomy vary significantly based on the specific system, operational environment, and functions being automated. Third, multilevel governance approaches that combine international legal standards with technical constraints, industry self-regulation, and civil society monitoring offer promising pathways for addressing the multifaceted challenges posed by autonomous weapons systems.

These findings have significant implications for international law and policy. States developing or deploying autonomous weapons systems should implement robust weapons review processes under Article 36 of Additional Protocol I, with particular attention paid to the unique challenges of algorithmic decision-making. International organizations should prioritize developing a shared understanding of key concepts such as “meaningful human control” and “appropriate human judgment” to facilitate more productive regulatory discussions. Technical experts should focus on developing verifiable constraints and explainable AI approaches to enhance compliance with humanitarian principles. Civil society organizations should continue monitoring development and advocating for humanitarian considerations while engaging constructively with technical and operational realities.

Future research should address several critical issues. First, empirical studies of how autonomous systems perform in complex environments would provide valuable evidence for assessing compliance with IHL principles. Second, interdisciplinary work connecting technical capabilities with legal requirements could help develop more effective compliance mechanisms. Third, comparative analysis of cultural and regional perspectives would enhance our understanding of how diverse ethical frameworks might inform inclusive regulatory approaches. Finally, exploration of verification mechanisms appropriate for software-defined capabilities would address a critical gap in current regulatory proposals.

The challenges posed by autonomous weapons systems ultimately reflect deeper questions regarding the relationship between technological capability and human judgment in warfare. As digital technologies continue to evolve, maintaining the humanitarian protection established through international humanitarian law will require thoughtful adaptation of legal frameworks, ethical principles, and technical approaches. By engaging these complex questions through multidisciplinary analysis and inclusive dialogue, the international community can work toward ensuring that technological advancement serves humanitarian values rather than undermining them.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest and received no external funding that might influence the outcomes.

References

  1. Al-Jasmi, N. A. (2022). International protection from unconventional weapons. Journal of Legal Sciences, 37(2), 433–466. https://doi.org/10.35246/jols.v37i2.556.
  2. Amoroso, D., & Tamburrini, G. (2020). Autonomous weapons systems and meaningful human control: Ethical and legal issues. Current Robotics Reports, 1, 187–194. https://doi.org/10.1007/s43154-020-00024-3.
  3. Blanchard, A. (2024). The Road Less Travelled: Ethics in the International Regulatory Debate on Autonomous Weapon Systems. ICRC Humanitarian Law & Policy Blog. https://blogs.icrc.org/law-and-policy/2024/04/25/the-road-less-travelled-ethics-in-the-international-regulatory-debate-on-autonomous-weapon-systems/.
  4. Boulanin, V., & Verbruggen, M. (2017). Mapping the Development of Autonomy in Weapon Systems. Stockholm International Peace Research Institute. https://www.sipri.org/publications/2017/other-publications/mapping-development-autonomy-weapon-systems.
  5. Buchanan, R. A. (2025). History of Technology. Chicago: Encyclopaedia Britannica. https://www.britannica.com/technology/history-of-technology.
  6. Chu, J. (2025). Autonomous weapon systems and autonomous cyber weapons: Convergence in respect of concepts, features, scope, and implications on international law. Chinese Journal of International Law, 24(1), jmaf005. https://doi.org/10.1093/chinesejil/jmaf005.
  7. Crootof, R. (2015). The killer robots are here: Legal and policy implications. Cardozo Law Review, 36(5), 1837–1915. https://doi.org/10.2139/ssrn.2534567.
  8. Davison, N. (2020). Artificial intelligence and machine learning in armed conflict: A human-centred approach. International Review of the Red Cross, 102(913), 463–479. https://doi.org/10.1017/S1816383120000454.
  9. European Parliament Resolution. (2021). European Parliament resolution of 20 January 2021 on artificial intelligence: Questions of interpretation and application of international law in so far as the EU is affected in the areas of civil and military uses and of state authority outside the scope of criminal justice (2020/2013(INI)). https://www.europarl.europa.eu/doceo/document/A-9-2021-0001_EN.html.
  10. First Additional Protocol to the Geneva Conventions. (1977, June 8). Protocol additional to the Geneva Conventions of 12 August 1949, and relating to the protection of victims of international armed conflicts (Protocol I), 1125 U.N.T.S. 3. https://www.refworld.org/legal/agreements/icrc/1977/en/104942.
  11. Fourtané, S. (2020). Autonomous military robots as warfighters. Interesting Engineering. https://interestingengineering.com/innovation/autonomous-military-robots-as-warfighters.
  12. Gaeta, P. (2023). Who acts when autonomous weapons strike? The act requirement for individual criminal responsibility and state responsibility. Journal of International Criminal Justice, 21(5), 1033–1055. https://doi.org/10.1093/jicj/mqae001.
  13. Gunawan, Y., Aulawi, M. H., Anggriawan, R., & Putro, T. A. (2022). Command responsibility of autonomous weapons under international humanitarian law. Cogent Social Sciences, 8(1), 2139906. https://doi.org/10.1080/23311886.2022.2139906.
  14. Horowitz, M. C., & Scharre, P. (2021). AI and International Stability: Risks and Confidence-Building Measures. Center for a New American Security. https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures.
  15. Human Rights Watch & International Human Rights Clinic at Harvard Law School. (2025). A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. Human Rights Watch. https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making.
  16. International Committee of the Red Cross. (2019). International Humanitarian Law and the Challenges of Contemporary Armed Conflicts: Recommitting to Protection in Armed Conflict on the 70th Anniversary of the Geneva Conventions. International Committee of the Red Cross. https://www.icrc.org/en/document/international-humanitarian-law-and-challenges-contemporary-armed-conflicts.
  17. Kumar, A., & Batarseh, F. A. (2020). The use of robots and artificial intelligence in war. LSE Business Review. https://blogs.lse.ac.uk/businessreview/2020/02/17/the-use-of-robots-and-artificial-intelligence-in-war/.
  18. Lee, C. (2024). The ethics of autonomous weapons systems in warfare. International Journal of Defence and Strategic Studies, 2(1), 47–56. https://ijdssjournal.com/2024/03/08/the-ethics-of-autonomous-weapons-systems-in-warfare/.
  19. Papanastasiou, A. (2010). Application of international law in cyber warfare operations. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1673785.
  20. Rosert, E., & Sauer, F. (2019). Prohibiting autonomous weapons: Put human dignity first. Global Policy, 10(3), 370–372. https://doi.org/10.1111/1758-5899.12691.
  21. Schmitt, M. N. (2013). Autonomous weapon systems and international humanitarian law: A reply to the critics. Harvard National Security Journal (Features). https://doi.org/10.2139/ssrn.2184826.
  22. Schmitt, M. N. (Ed.). (2017). Tallinn Manual 2.0 on the international law applicable to cyber operations. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316822524.
  23. Schmitt, M. N. (2025). AI & autonomous weapons: Can international humanitarian law adapt to modern warfare? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5019468.
  24. Solovyeva, A., & Hynek, N. (2018). Going beyond the killer robots debate: Six dilemmas autonomous weapon systems raise. Central European Journal of International and Security Studies, 12(3), 166–208. https://www.cejiss.org/images/issue_articles/2018-volume-12-issue-3/07-solovyeva-new-1.pdf.
  25. Umbrello, S., Torres, P., & De Bellis, A. F. (2020). The future of war: Could lethal autonomous weapons make conflict more ethical? AI & Society, 35(1), 273–282. https://doi.org/10.1007/s00146-019-00879-x.
  26. United Kingdom Ministry of Defence. (2011). Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems. Development, Concepts and Doctrine Centre, Shrivenham. https://www.gov.uk/government/publications/jdn-2-11-the-uk-approach-tounmanned-aircraft-systems.
  27. United Nations Institute for Disarmament Research. (2025). The Interpretation and Application of International Humanitarian Law in Relation to Lethal Autonomous Weapon Systems. United Nations Institute for Disarmament Research. https://unidir.org/publication/the-interpretation-and-application-of-international-humanitarian-law-in-relation-to-lethal-autonomous-weapon-systems.
  28. United States Department of Defense. (2023). DoD Directive 3000.09: Autonomy in Weapon Systems. Washington, DC: Department of Defense. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.