Isaac Asimov's Three Laws of Robotics are a renowned set of ethical rules designed for robots, first introduced in his 1942 short story "Runaround" and later central to his classic collection, I, Robot. These foundational laws dictate that:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey human orders, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
These laws served as an organizing principle for much of Asimov's robot fiction, with many plots exploring the unintended consequences and paradoxes that arise when the laws are applied. Asimov later expanded the framework with a "Zeroth Law," which places the safety of humanity as a whole above that of any individual human. The Three Laws have permeated science fiction across books, films, and other media, and continue to shape discussions of the ethics of artificial intelligence.