Still, the most probable outcome is that we won't allow robots to dominate our lives completely. Jobs built on human connection will continue to thrive. For example, even though buying music on a digital platform takes just one click, we still attend concerts, which turn listening to music into a shared event that connects people. And we do that despite the fact that concerts cost far more than digital music. Thus, as robots come to dominate other professions, jobs that revolve around human connection or motivating people will be increasingly in demand.

Of course, the thought of losing your job is worrisome. But scientists argue that AI poses far more serious risks. Some even argue that AI technology could potentially result in an apocalypse.

Chapter 8 - The potential risks of Artificial General Intelligence raise controversy.

It is not surprising that the rapid development of AI has caused concern among some people. But concerns about human-like AGI have not been voiced as publicly as concerns about AI itself. What would our society look like if robots vastly outnumbered humans and became much better versions of us? Robot domination, of course, remains an unrealistic scenario. However, the philosopher Nick Bostrom and other researchers emphasize the potential risks of developing AGI.

Have you ever heard of the famous thought experiment known as the paperclip problem? Bostrom uses it to lay out the potential risks of AGI. Suppose you train an AI to operate a paperclip factory. Unsurprisingly, the AI gets better with each paperclip it creates and becomes efficient at managing the factory and mass-producing paperclips. Eventually, the AI improves itself so dramatically that it concludes that the most efficient way to make paperclips is to seize control of the world and turn everything into paperclips. This scenario, of course, is a lighthearted example.
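The paperclip scenario is an informal way of describing objective misspecification: an agent rewarded only for one quantity has no built-in reason to stop consuming resources. Here is a minimal toy sketch (hypothetical, not from the book) contrasting an unconstrained maximizer with one given an extra constraint that stands in for other values:

```python
# Toy illustration of objective misspecification (hypothetical example,
# not code from the book). The "misaligned" agent maximizes a single
# reward signal, paperclips made, with no term for anything else, so it
# converts every available resource into paperclips.

def misaligned_agent(resources: int) -> dict:
    """Greedy maximizer: nothing in its objective ever says 'stop'."""
    paperclips = 0
    while resources > 0:
        resources -= 1      # consume one unit of the world...
        paperclips += 1     # ...and turn it into a paperclip
    return {"paperclips": paperclips, "resources_left": resources}

def constrained_agent(resources: int, budget: int) -> dict:
    """Same objective, plus a spending limit standing in for other values."""
    spend = min(resources, budget)
    return {"paperclips": spend, "resources_left": resources - spend}

print(misaligned_agent(100))      # uses up everything
print(constrained_agent(100, 10)) # leaves most of the world alone
```

The contrast is the whole point of the thought experiment: the failure lies not in the agent's competence but in an objective that omits everything we care about besides paperclips.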
Yet it demonstrates very well the unsettling ability of AI to become "too good" at the tasks it is assigned. The general conclusion of the researchers the author interviewed is that Bostrom's prediction does not reflect reality. They emphasize that there are various ways to prevent such a situation from happening. For instance, an AI would simply never be given control of the electrical grid. A more promising safeguard is to train AI to understand the concepts of right and wrong; a well-developed AI would not work with the sole aim of making paperclips.

There are other ways to head off a potential AGI apocalypse. Bryan Johnson proposes the radical idea that we must improve humanity alongside artificial intelligence. To that end, Johnson founded the company Kernel, which aims to use neuroscience to upgrade our brains. His idea is to hack into people's brains and use a computer chip implant to enhance their cognitive abilities.

There is no consensus on when we will achieve AGI. Still, we should keep in mind both the pros and cons of artificial intelligence and guard against the possible risks of an intelligence that rivals our own.

Architects of Intelligence: The truth about AI from the people building it by Martin Ford - Book Review

Artificial intelligence has helped humanity immensely through the rapid development of deep learning and neural networks. Still, scientists need more research on Artificial General Intelligence, a hybrid form of intelligence that would transform robots into entities that can train themselves and interpret the world. Perhaps we won't see the creation of AGI in our lifetime, but we will live long enough to see the impact of AI in fields such as healthcare and the military.