Artificial Intelligence has become the buzz topic of the past couple of months. The technology has evolved so quickly that some are calling for a moratorium on its development until we have a better grasp of what it can and can't, or should and shouldn't, do. As computers learn to "think" for themselves without the boundaries or restrictions imposed by humankind, what are the concerns and pitfalls that human programmers are overlooking, unaware of, or outright ignoring? To discuss the future of AI, 710 WOR's Len Berman and Michael Riedel in the Morning program invited two guests to share their insights.
ABC News Crime and Terrorism Analyst Brad Garrett posed the ethical dilemma to Berman and Riedel this way: "When AI becomes as smart or smarter than us, and it can do this part now, they can write the code for a program and then they can implement the code. So basically, computers could start controlling things the military does in battle, and think about it: if computers are doing it, they're gonna do it a gazillion times faster than [people], and so, could there be a mistake made, a catastrophic mistake? The answer is yes."
Dr. Arthur Caplan, Professor of Bioethics at NYU Langone Medical Center, takes a more pragmatic approach to the concerns about what AI can do. "I'm worried, too, but I'm not worried about AI taking over humanity or replacing us. What I'm worried about is that AI is fast, and it can often plagiarize. It can often send out information that's disinformation… There's a lot of ways in which very simple disinformation can be used right now. That's what worries me… Look, I've got students turning in papers that they didn't write, that they asked AI to write."