Governing a world of 250 million robots

Ethical lines often blur when it comes to embracing ubiquitous machines.

Robots will shape the cities of the future – from helping raise children and cleaning streets to protecting borders from military threats, and much more.

While robot ubiquity won’t arrive tomorrow, it’s closer than many think – by 2030, humanoid robots (such as personal assistant robots) are set to exceed 244 million worldwide, up from 220 million in 2020.

Examples of cities with existing robot infrastructure include the Personal Rapid Transit (PRT) system of Masdar City, The Line – a future city in Neom, Saudi Arabia – the Songdo waste-management system in South Korea, the collaborative robots (cobots) of Odense, Denmark, and Japan’s traffic navigation robots in the Takeshiba district.

But the rise of robotics poses thorny ethical questions about how we govern entities that sit somewhere between human conscience and the purely mechanical nature of machines like a dishwasher or lawnmower.

Getting on the front foot with governance could make a huge difference by the end of the decade.

Robots can be designed to mimic humans (such as humanoids or androids) and used in almost every sector: healthcare, manufacturing, logistics, space exploration, military, entertainment, hospitality and even in the home.


Robots are designed to compensate for human limitations: they are repeatably accurate, long-lasting and unswayed by emotion.

They are not designed to topple governments and seize power, whatever films like The Terminator might suggest.

In dangerous jobs or tasks that require intensive manual labor, robots can supplement or replace the human workforce.

In agriculture, drones hold huge potential to support farming activities.

In early education, robots accompany children as they learn and play. ‘Little Sophia’, a ‘robot friend’, aims to inspire children to learn about coding, AI, science, technology, engineering and mathematics through a safe, interactive, human-robot experience.

The rising trend of ubiquitous humanoid robots living alongside humans has raised the issue of responsible technology and robot ethics.

Debates about ethical robotics that began in the early 2000s continue to center on the same key issues: privacy and security, opacity/transparency, and algorithm biases.


To address such problems, researchers have proposed five ethical principles, along with seven high-level messages, for responsible robotics. The first of these principles is that robots should not be designed as weapons, except for national security reasons. Robots must also be designed and operated to comply with existing laws, including those on privacy and security.

Robots are products: as with other products, they should be designed to be safe and secure.

Robots are manufactured artifacts: the illusion of emotions and intent should not be used to exploit vulnerable users.

It should be possible to find out who is responsible for any robot. Researchers also suggest that robot city designers rethink how ethical principles such as these can be respected during the design process – for instance, by providing external switches.

This could mean an effective control system, such as actuator mechanisms and algorithms that automatically switch the robots off.
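As a purely illustrative sketch – not drawn from this article or any specific robot platform, and with every name below hypothetical – such a software kill switch could take the form of a watchdog that latches an external stop request and cuts actuator power whenever the operator’s heartbeat signal goes stale. A real robot would rely on certified hardware interlocks in addition to anything like this.

import time

class EmergencyStopWatchdog:
    """Illustrative kill switch: disables actuators when a stop is
    requested or the operator heartbeat goes stale. All names are
    hypothetical; real systems would use certified hardware interlocks."""

    def __init__(self, heartbeat_timeout_s: float = 0.5):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self._last_heartbeat = time.monotonic()
        self._stop_requested = False
        self.actuators_enabled = True

    def heartbeat(self) -> None:
        # Called periodically by the operator or supervisory system.
        self._last_heartbeat = time.monotonic()

    def request_stop(self) -> None:
        # External switch pressed: latch the stop request.
        self._stop_requested = True

    def check(self) -> bool:
        # Run every control cycle; cut actuator power if unsafe.
        stale = time.monotonic() - self._last_heartbeat > self.heartbeat_timeout_s
        if self._stop_requested or stale:
            self.actuators_enabled = False
        return self.actuators_enabled

if __name__ == "__main__":
    watchdog = EmergencyStopWatchdog(heartbeat_timeout_s=0.5)
    watchdog.heartbeat()
    print("enabled:", watchdog.check())   # True while the heartbeat is fresh
    time.sleep(0.6)                        # heartbeat goes stale
    print("enabled:", watchdog.check())   # False: actuators are cut off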

Without agreed principles, robots could pose real threats to humans: the cyber threats of ransomware and DDoS attacks, the physical threats of increasingly autonomous devices and robots, and the emotional threat of over-attachment to robots at the expense of real human relationships, as depicted in the 2013 film Her.


Robotics also carries negative environmental impacts, including excessive energy consumption, accelerated resource depletion and uncontrolled electronic waste.

Cities and lawmakers will also face the emerging threat of Artificial Intelligence (AI) terrorism.

From the spread of autonomous drones and robot swarms to remote attacks or disease delivery via nanorobots, law enforcement and defense organizations face a new frontier of potential threats.

To prepare, research in robotics, AI law and ethics should be oriented towards developing policy.

Robots should make life better. In the face of rapid innovation, neither prohibition nor suffocating development is a viable answer.

The onus then falls on governments to cultivate more robot-aware citizens and responsible (approved) robot creators.

This, coupled with a proactive approach to legislation, offers cities the opportunity to usher in a new era of robotics with more harmony and urgency.
