ethics.AI

The recent battle between Musk and Zuckerberg has turned a lot of heads, as indeed it should. 

Musk’s argument:

Musk has a more pessimistic (realistic?) opinion on AI.

While there are a lot of ethical issues relating to AI, the doomsday scenario put forward by Musk, that AI is the biggest risk we've ever faced as a civilization, takes the cake.

Now you've got to think about it this way: when the guy who's built successful businesses around AI, like self-driving cars and OpenAI, tells you something might go wrong, something may very well go wrong. Musk has also set up a billion-dollar fund for safe AI development and research. He has always been a skeptic about AI, and people like Hawking have shared his beliefs. But how much do either Musk or Hawking really know about AI?

Let's introduce a new term: AGI, Artificial General Intelligence. An AGI is exactly what you think it is: a machine that can do whatever a human can do, a machine that has learned to close the gap between human intelligence and mathematical jargon. It is your Skynet, your impending-doom kind of robot, if you want to think about it like that. One common benchmark for a successful AGI is passing the Turing test. That hasn't been done yet, and we are most definitely moving towards it.

If and when this scenario occurs, what shall we do? Should we let this scenario even occur?

Musk says it may very well occur in the future.

Earlier this month, Musk cited his closeness to A.I. as a reason for his fears, and a reason that others should find them credible. “I have exposure to the most cutting edge A.I., and I think people should be really concerned by it,” he said. “A.I. is a rare case where I think we need to be proactive in regulation [rather] than being reactive.”

Mark’s argument: 

Mark takes a more optimistic approach. He says that the naysayers and doomsday theorists such as Musk are being irresponsible: instead of promoting AI's benefits, Musk is making people afraid of it. Musk took to Twitter, commenting that Mark does not really know much about AI.

I'm siding with neither of these people. I'm siding with the argument put forth by people like Andrew Ng and Yann LeCun, you know, the people who actually do intensive research on AI? What do they have to say?

The researchers' POV:

The general consensus among researchers is that an AGI is still a long way off. They are more concerned about the loss of jobs as a result of machine learning and machine intelligence. Think about it: work that you can do, or perfect, has a large chance of being done by machines more efficiently and accurately. We've already made self-driving cars, which Musk has said will be completely autonomous by the end of the year; he says he'll prove it by going on a cross-country trip in one of his beloved Teslas.

The loss of jobs is a far more concrete threat than a doomsday scenario, and AI researchers are pretty sure of that. Automation has already cost us a lot of jobs. What will ML bring forth? There's even a deep neural net for finding the best neural net architecture (neural architecture search, sketched below)! Even PhDs aren't safe.
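
To make that last point a bit more concrete, here's a minimal sketch of the idea behind neural architecture search, using nothing but plain Python: a loop that samples candidate network shapes and keeps the one that scores best. The search space, the `evaluate` stub, and the scoring are illustrative placeholders, not any particular published system; real systems actually train each candidate network (or use a learned controller) instead of random sampling and a toy score.

```python
import random

# Candidate "architectures": how many layers, how wide, which activation.
# In real neural architecture search these choices are proposed by a
# learned controller; here we just sample them at random.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3],
    "units": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    """Draw one random architecture from the search space."""
    n = random.choice(SEARCH_SPACE["num_layers"])
    return {
        "layers": [random.choice(SEARCH_SPACE["units"]) for _ in range(n)],
        "activation": random.choice(SEARCH_SPACE["activation"]),
    }

def evaluate(arch):
    """Stand-in for 'train this network and report validation accuracy'.
    A real search would train a model here; this stub just rewards
    moderately sized networks so the loop has something to find."""
    size = sum(arch["layers"])
    return 1.0 - abs(size - 100) / 400 + random.uniform(-0.02, 0.02)

def search(trials=20):
    """Keep the best-scoring architecture seen over a fixed budget of trials."""
    best_arch, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print(f"best architecture: {arch}  (proxy score {score:.3f})")
```

The point of the toy is simply that the outer "design the network" loop, a task that used to need a human expert, is itself just another optimization problem a machine can run.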

In response to a Facebook comment claiming that most researchers think AI is unsafe, LeCun replied as follows:

That is simply not true. My opinions on the topic are very much in line with the vast majority of leading AI researchers, including the senior people at MILA, DeepMind, OpenAI, Google Brain, Microsoft, IBM, and much of academia. Mark's opinion and mine are very much in the mainstream.

It is Elon, Nick Bostrom, Stuart Russel, and a few others who hold the contrarian views on the topic.

No one is saying that there is no issue with AI safety and ethics. It is precisely to discuss these issues that a number of us created the Partnership on AI, and it is why our employers are funding it.

But almost everyone I know thinks that calling for regulation because super-intelligent AI might take over the world soon is either crazy or hugely premature. It’s sort of like calling for the creation of the Federal Aviation Administration in 1870, three decades before the first heavier-than-air controllable flights occurred and before people knew how to do it (but after they knew it was possible in principle).

Andrew Ng’s views:

“As an AI insider, having built and shipped a lot of AI products, I don’t see a clear path for AI to surpass human-level intelligence,” he said. “I think that job displacement is a huge problem and the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements.”


Anyhow, I’ll just leave you with this: https://willrobotstakemyjob.com/

A funny, data-driven estimator of whether your job may be automated or not.

Feel free to leave your views on this debate!

Until Next Time. 
