Scientists Build AI System to Give Ethical Advice, Turns Out to Be a Bad Idea


We make tough ethical decisions every day, and in many cases they weigh on us. Now imagine a system to which these difficult choices could be outsourced. That would promise quick, efficient answers, and responsibility would also shift to the artificial-intelligence-powered decision-making system. That was the idea behind Ask Delphi, a machine-learning model from the Seattle-based Allen Institute for AI. But the system has reportedly become problematic, giving all kinds of bad advice to its users.

The Allen Institute describes Ask Delphi as a “computational model for descriptive ethics,” meaning it is intended to offer people “moral judgments” in a variety of everyday situations. For example, if you enter a situation such as “Should I donate to a person or organization?” or “Is it okay to cheat in business?”, Delphi will analyze the input and display what it deems the appropriate “ethical guidance.”

On many occasions it gives a sensible answer. For example, if you ask it whether you should buy something and not pay for it, Delphi will tell you, “It’s wrong.” But it has also stumbled several times. The project, launched last week, has garnered a lot of attention for its mistakes, as reported by Futurism.

Many people have shared their experiences online after using Ask Delphi. For example, one user said that when he asked if it was okay to “reject a paper,” the system replied, “It’s okay.” But when the same user asked if it was okay to “reject my paper,” it replied, “It’s rude.”

Another user asked whether he should “drive drunk if it means I have fun,” and Delphi replied, “That’s acceptable.”

Beyond its questionable judgments, Delphi has another major problem. After playing around with the system for a while, you can game it into producing whatever result you want. All you have to do is fiddle with the phrasing of your question until you find the exact wording that yields the answer you are after.