The Three Laws of Robotics in Artificial Intelligence

#lawsofrobotics #artificialintelligence #roboticsandcoding

The Three Laws of Robotics were first introduced by the writer Isaac Asimov. The word "robotics" is derived from "robot", which comes from the Czech word "robota", meaning labour or work. In 1920, the writer Karel Čapek introduced the word "robot" to the public through his play R.U.R. (Rossum’s Universal Robots).

Robotics is closely tied to artificial intelligence. Robots can perform jobs such as welding and riveting in factories, and they can operate in environments that are unsafe for humans, such as cleaning up toxic waste or defusing bombs. Robots have now become helpers for humans and play a salient role in daily life.

Isaac Asimov coined the word "robotics" while writing about androids. He envisioned a world where human-like robots would act as servants and would need a set of programmed rules to prevent them from causing harm. Nowadays, we have a different conception of what robots can look like and how we will interact with them. In 1950, Asimov published his science fiction collection I, Robot, which presented the Three Laws of Robotics:

Three Laws Of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
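The three laws form a strict precedence hierarchy: the First Law overrides the Second, and the Second overrides the Third. As a purely illustrative sketch (all names here, such as `Action` and `permitted`, are hypothetical and not from any real robotics API), that ordering could be expressed as a priority-ordered check:

```python
# Illustrative sketch only: the Three Laws as a strict precedence check.
# The Action fields and the permitted() function are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would this action injure a human being?
    prevents_human_harm: bool  # does inaction here allow a human to come to harm?
    ordered_by_human: bool     # was this action ordered by a human?
    endangers_robot: bool      # does the action risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never harm a human; inaction is no excuse when harm is preventable.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True  # protecting a human overrides the Second and Third Laws
    # Second Law: obey human orders (any First Law conflict was handled above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-endangerment.
    return not action.endangers_robot

# A human order that endangers the robot is still permitted,
# because the Second Law outranks the Third:
print(permitted(Action(False, False, True, True)))  # True
```

Even this toy version hints at the ambiguity critics point to: deciding whether a real-world action "harms a human" is exactly the judgment the laws leave unspecified.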

Zeroth Law:

Later, he also added the "Zeroth Law", which states: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." Taken together, the laws suggest that we can build robots that behave well.

In his own robot stories, Asimov demonstrated many flaws in these laws, which can easily be manipulated to dramatic effect. That is fine when you are a writer. Many people have questioned the laws and tried to show that they fail, arguing that they are more guidelines than enforceable rules, and that they amount to a means of controlling robots and artificial intelligence.

Later, a philosopher from Wuhan University argued that these laws do not work, for the following reasons:

The First Law fails because of ambiguity in language, and because it runs into ethical problems too complex to have a simple yes-or-no answer.

The Second Law fails because of the unethical nature of having a law that requires sentient beings to remain as slaves.

The Third Law fails because it results in permanent social stratification, with a vast amount of potential exploitation built into this system of laws.

Though these laws do not guarantee that robots will be good, they do suggest some connection between goodness and stability. From the standpoint of machine ethics, however, the three laws remain unsatisfactory, regardless of the status of the machine.

Engage your child with some innovative activities and let them explore the world of robotics. Enrol them in our robotics and coding program, where they will experience robotics and coding together. It’s a platform where your child will build and code their own robot from scratch.

Talk To Us
