The late Supreme Court Justice Potter Stewart once said, "Ethics is knowing the difference between what you have a right to do and what is right to do."
As artificial intelligence (AI) systems become more and more advanced, can the same statement apply to computers?
According to many technology moguls and policymakers, the answer is this: We're not quite there yet.
Industry leaders say the ethics surrounding technology need to change to keep up with advancements in fields from robotics and nanotechnology to neuroscience and biotechnology.
And the issue isn't unique to the United States. In a keynote speech delivered in Brussels last month, European Parliament President Martin Schulz referred to such technological innovation as a "wrecking ball," citing Internet giants Amazon, Google and Facebook as major culprits.
Schulz's efforts were not entirely in vain. Immediately following his speech, the European Data Protection Supervisor (EDPS) — an independent supervisory body aimed at ensuring privacy and data protection — announced the development of a new ethics advisory group.
It's a step in the right direction — especially when Internet security is such a hotly debated topic.
But what about some of the more challenging ethical dilemmas facing the tech world, such as robots and self-driving cars? Sometimes it seems like there are more questions than answers.
Will Robots Take Over the World?
It's a familiar plot line in some of the most popular science-fiction movies: robots and androids developing consciousness and seizing power from human beings.
But scientists like Stephen Hawking and tech entrepreneurs like Bill Gates and Elon Musk, who leads Tesla Motors and SpaceX, warn that the scenario is a realistic concern.
In fact, last year, the three joined hundreds of others in signing an open letter unveiled at the International Joint Conference on Artificial Intelligence in Buenos Aires, suggesting that AI has the potential to be more dangerous than nuclear weapons.
The development of these super-robots has already begun. For example, U.S. scientists have used robots to predict the military strategy of Islamic extremists, and the company Boston Dynamics has built powerful mobile robots.
With the advances in AI happening so quickly, how can we ensure that society uses robots for ethical and not malevolent purposes?
The jury's still out on who can best control that decision, although Silicon Valley entrepreneurs have started to make some headway. Last year, Musk, Peter Thiel and others from the tech mecca committed $1 billion to a new nonprofit called OpenAI, with the aim of keeping AI "an extension of human wills."
How Will Self-Driving Cars Be Programmed to React?
Self-driving cars are already here. In fact, Google recently announced plans to expand its self-driving car pilot program to Kirkland, WA after the cars have driven 1.4 million miles around Mountain View, CA, and Austin, TX.
But there are plenty of dilemmas surrounding self-driving cars. The prospect of not having to cling to the steering wheel during those grueling rush-hour commutes may seem like a dream come true — but how do you program the car to react as ethically as possible in the event of a collision?
For example, consider a scenario in which five young children run out in front of a self-driving car. Should the car be programmed to save the occupants at all costs? Or is the ethical decision to minimize the overall loss of life, even if doing so means the occupants may be injured or killed?
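The two programming philosophies in that scenario can be made concrete with a short, purely illustrative sketch. Everything here is hypothetical: the function names, the risk numbers and the two-maneuver setup are invented for illustration, and real autonomous-vehicle software is vastly more complex.

```python
# Two hypothetical collision-response policies for a self-driving car.
# Each maneuver has an estimated harm score for occupants and for
# pedestrians; all values are invented for illustration.

def protect_occupants(occupant_risk, pedestrian_risk):
    """Pick the maneuver that minimizes harm to occupants only."""
    return min(range(len(occupant_risk)), key=lambda i: occupant_risk[i])

def minimize_total_harm(occupant_risk, pedestrian_risk):
    """Pick the maneuver that minimizes combined expected harm."""
    return min(range(len(occupant_risk)),
               key=lambda i: occupant_risk[i] + pedestrian_risk[i])

# Maneuver 0 = swerve off the road, maneuver 1 = brake straight ahead.
occ = [0.8, 0.1]   # estimated harm to occupants per maneuver
ped = [0.0, 0.9]   # estimated harm to pedestrians per maneuver

print(protect_occupants(occ, ped))    # prints 1: brake, endangering pedestrians
print(minimize_total_harm(occ, ped))  # prints 0: swerve, endangering occupants
```

With identical inputs, the two policies choose opposite maneuvers, which is exactly the dilemma: the ethical rule has to be chosen, in code, before the accident ever happens.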
A consensus has yet to be reached, but the ethics of self-driving cars remain an important issue because they could influence how well these cars are accepted into society. For example, it's unlikely that drivers would choose to purchase a vehicle designed to sacrifice them during an accident.
Scientists, philosophers and engineers alike are conducting research to explore the real-world ethical dilemmas associated with self-driving cars — the so-called ethics of the road.
Researchers still have a lot of work to do before these cars can be released to the public. A recent study by the University of Michigan Transportation Research Institute reported that, so far, self-driving cars have a higher accident rate than conventional vehicles.
Where Else Do We Need to Change Our Tech Ethics?
Artificial intelligence and self-driving cars aren't the only tech topics fraught with ethical dilemmas.
According to the John J. Reilly Center's 2016 Top 10 List of Ethical Dilemmas and Policy Issues in Science and Technology, other areas of moral importance this year include genetic engineering (including the recently introduced CRISPR technology), head transplants, artificial wombs and digital labor rights.
No matter where you align yourself politically — whether you plan to vote for Donald Trump or Bernie Sanders — there's no overstating the importance of making sure the proper ethics are in place as we plow forward on this technological journey.
After all, as one French philosopher and author once wrote, "A man without ethics is a wild beast loosed upon the world."
And a computer without ethics could be a monster.