Artificial Intelligence has become the buzz topic of the past couple of months. The technology has evolved so quickly that some are calling for a moratorium on its development until we have a better grasp of what it can and can’t (or should and shouldn’t) do. As computers learn to “think” for themselves, free of the boundaries and restrictions imposed by humankind, what concerns and pitfalls are human programmers overlooking, unaware of, or outright ignoring? To discuss the future of AI, 710 WOR’s Len Berman and Michael Riedel in the Morning invited two guests to share their insights.
ABC News Crime and Terrorism Analyst Brad Garrett posed the ethical dilemma to Berman and Riedel this way: “When AI becomes as smart or smarter than us, and it can do this part now, they can write the code for a program and then they can implement the code. So basically, computers could start controlling things the military does in battle. And think about it: if computers are doing it, they’re gonna do it a gazillion times faster than [people], and so, could there be a mistake made, a catastrophic mistake? The answer is yes.”
Dr. Arthur Caplan, Professor of Bioethics at NYU Langone Medical Center, takes a more pragmatic approach to the concerns about what AI can do. “I’m worried, too, but I’m not worried about AI taking over humanity or replacing us. What I’m worried about is that AI is fast, and it can often plagiarize. It can often send out information that’s disinformation… There’s a lot of ways in which very simple disinformation can be used right now. That’s what worries me… Look, I’ve got students turning in papers that they didn’t write that they asked AI to write.”