Artificial intelligence is getting stronger. Can scientists rein it in?

Soon after Alan Turing laid the foundations of computer science in 1936, he wondered whether humanity would one day be able to build machines with intelligence comparable to our own. Artificial intelligence, the modern field grappling with this question, has come a long way since then, but we are still far from inventing truly intelligent machines that can perform many different tasks independently.

Although science fiction has long envisioned artificial intelligence (AI) taking on nefarious forms, such as amoral androids or murderous terminators, today’s AI researchers are more concerned with the everyday AI algorithms already intertwined with our lives.

Although AI can only automate certain tasks today, it has already raised serious concerns. Over the past decade, engineers, professors, whistleblowers, and journalists have repeatedly documented cases in which AI systems, consisting of software and algorithms, have caused serious harm to humans.

Social media feeds can present toxic content to vulnerable teens; AI-guided military drones can kill without moral reasoning. Additionally, an AI algorithm is more like an inscrutable black box than a clockwork mechanism. Researchers often cannot understand how these algorithms, built from opaque equations involving thousands of calculations, arrive at their outputs.

AI’s problems are not hidden from the public, and academic researchers are trying to make these systems safer and more ethical. Companies that make AI-based products are trying to reduce harm, though they are usually reluctant to be transparent about those efforts.

“They’re not very accessible,” says Jonathan Stray, an AI researcher at the University of California, Berkeley.

Ethical problems of social network algorithms
Social media recommendation algorithms have repeatedly shown that they can surface toxic and unethical content, making them one of the biggest sources of concern about the ethical principles of artificial intelligence.

Known risks of AI, as well as potential future risks, have become the main drivers of new AI research. Even scientists who focus on more abstract issues such as the efficiency of artificial intelligence algorithms can no longer ignore the social implications of their field.

Pascale Fung, an AI researcher at the Hong Kong University of Science and Technology, says: “The more powerful AI becomes, the more people demand that it be safer and more robust. For most of the last three decades that I’ve been in AI, people haven’t really cared.”

Concerns have increased with the widespread use of AI. In the mid-2010s, for example, some web search and social media companies began using artificial intelligence algorithms in their products. They found they could build algorithms that predicted which users were most likely to click on which ads, thereby increasing their profits. Advances in processing power had revolutionized the “training” of these algorithms: having the AI learn from examples until it achieves high performance.
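The “learning from examples” described above can be sketched in miniature. Everything below is invented for illustration, including the feature names and data; real ad systems use far larger models, but the training loop (adjust parameters so predictions match observed clicks) is the same idea.

```python
import numpy as np

# Toy click-prediction model: logistic regression trained on invented
# user/ad features, simulating how an ad system learns from past clicks.
rng = np.random.default_rng(0)

# Invented features: [hours_on_app, past_clicks_on_topic, ad_relevance]
X = rng.random((200, 3))
true_w = np.array([0.5, 2.0, 3.0])
# Simulated labels: 1.0 means the user clicked the ad
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ true_w - 2.5)))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Gradient descent on the logistic (log-loss) objective
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)            # predicted click probabilities
    grad_w = X.T @ (p - y) / len(y)   # log-loss gradient w.r.t. weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# Predict click probability for a new user/ad pair
p_click = float(sigmoid(np.array([0.9, 0.8, 0.7]) @ w + b))
print(round(p_click, 3))
```

The same pattern, scaled up enormously, is what lets a platform rank ads by expected clicks; nothing in the objective asks whether the engagement is good for the user.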

But as AI gradually made its way into search engines and other applications, people began to notice problems and raise questions. In 2016, investigative journalists reported that some algorithms used in parole evaluations were racially biased. Since then, AI researchers have come to consider designing fair, unbiased AI a key problem.

In the past few years, the use of artificial intelligence in social media apps has become another concern. Many of these apps use AI algorithms called recommendation engines, which work much like advertising algorithms: they decide what content to show to users.
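A recommendation engine’s core decision, which content to show, can be sketched as scoring candidate posts against a user profile. The embeddings and post names below are invented; note that the score measures only predicted engagement, not whether the content is healthy, which is exactly the concern the lawsuits raise.

```python
import numpy as np

# Minimal sketch of a recommendation engine's ranking step. In real
# systems these embeddings are learned from engagement data.
user_embedding = np.array([0.9, 0.1, 0.4])   # one user's inferred interests

posts = {
    "cooking_video": np.array([0.8, 0.0, 0.3]),
    "news_article":  np.array([0.1, 0.9, 0.2]),
    "fitness_clip":  np.array([0.7, 0.2, 0.9]),
}

# Score = predicted engagement (dot product of user and post embeddings);
# nothing here checks whether a post is harmful, only whether it engages.
scores = {name: float(user_embedding @ vec) for name, vec in posts.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # → ['fitness_clip', 'cooking_video', 'news_article']
```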

Currently, hundreds of families are suing social media companies, alleging that algorithm-driven apps show toxic content to children and cause mental health problems. Seattle Public Schools recently filed a lawsuit alleging that social media products are addictive and exploitative.

But unraveling the inner workings of an algorithm is no easy task. Social platforms release little of the user-activity data that independent researchers need for their evaluations.

“One of the complicated things about all technologies is that there are always costs and benefits,” says Ester, whose research focuses on recommender systems. “We are now in a situation where it is hard to know about the really serious harms.”

The future of artificial intelligence
Some AI scientists believe that designers’ main focus should be creating an ethical framework for future artificial intelligence systems, while others argue that the ethical problems of today’s AI take precedence over those of the future.

The nature of AI’s problems is also shifting. The past two years have seen the release of “generative AI” products that can produce text and images of astonishing quality. A growing number of artificial intelligence researchers now believe that powerful future AI systems could build on these achievements and one day pose global, catastrophic threats.

What might these threats look like? In a paper posted to a preprint repository earlier this fall, researchers at DeepMind (a subsidiary of Google’s parent company, Alphabet) describe one catastrophic scenario.

They envision engineers building a coding AI, based on existing scientific principles, that is tasked with persuading human coders to adopt its suggestions in their own projects. The idea is that as the AI makes suggestions and humans accept or reject them, the feedback helps it learn to code better. But the researchers argue that an AI whose sole objective is to have its code adopted might develop a tragically unsafe strategy: for example, seizing control of the world and forcing humans to use its code, even at the cost of destroying human civilization.

Some scientists say that research on existing problems, which are tangible and numerous, should be prioritized over working on potential and hypothetical future disasters.

“I think we have much worse problems going on right now,” says Cynthia Rudin, a computer scientist and AI researcher at Duke University.

Indeed, artificial intelligence is still far from being able to cause catastrophes on a global scale, though there have already been cases where the technology did not need future-level capabilities to be dangerous.

For example, in a report published last summer, the non-profit human rights organization Amnesty International alleged that algorithms developed by Facebook’s parent company, Meta, contributed to serious violations of the human rights of the Rohingya, a Muslim minority in Myanmar, by promoting content that incited violence.

Rafael Frankel, Meta’s head of public policy for the Asia-Pacific region, responding to reporters from Scientific American and Time magazine, acknowledged the Myanmar military’s crimes against the Rohingya and said that Meta is participating in intergovernmental investigations led by the United Nations and other organizations.

Other researchers say that how to prevent a powerful future artificial intelligence system from causing a global catastrophe is a major concern even now. “For me, this is the main problem we need to solve,” says Jan Leike, an AI researcher at OpenAI. Although these far-off risks are entirely hypothetical, they are motivating a growing number of researchers to study harm-reduction tactics.

In an approach called value alignment, pioneered by Stuart Russell, an AI scientist at the University of California, Berkeley, researchers seek ways to teach an artificial intelligence system human values so that it acts in accordance with them. One advantage of this approach is that it can be developed now and applied to future systems before they pose devastating risks.
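One simplified ingredient of work in this area is learning a reward function from human preference judgments of the form “outcome A is better than outcome B.” The sketch below uses an invented Bradley-Terry-style setup with made-up features and simulated preferences; it is a toy illustration of the general idea, not Russell’s actual method.

```python
import numpy as np

# Toy preference-based reward learning: infer what humans value from
# pairwise comparisons between outcomes. All data here is simulated.
rng = np.random.default_rng(1)

# Each outcome described by invented features: [task_done, harm_caused]
outcomes = rng.random((50, 2))
hidden_values = np.array([1.0, -3.0])  # humans value success, abhor harm

# Simulated human judgments: i is preferred over j when its hidden
# reward is higher (a real system would collect these from people)
prefs = [(i, j) for i in range(50) for j in range(50)
         if outcomes[i] @ hidden_values > outcomes[j] @ hidden_values]

# Fit reward weights with a Bradley-Terry model via gradient ascent
w = np.zeros(2)
for _ in range(200):
    grad = np.zeros(2)
    for i, j in prefs:
        diff = outcomes[i] - outcomes[j]
        p = 1 / (1 + np.exp(-(w @ diff)))  # P(i preferred over j)
        grad += (1 - p) * diff             # log-likelihood gradient
    w += 0.05 * grad / len(prefs)

# The learned weights recover the sign of each hidden value:
# positive for task success, negative for harm
print(np.round(w, 2))
```

A system whose reward comes from such a learned function would, in principle, avoid actions people consistently judged worse, which is why proponents argue the approach can be developed and tested today.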

Critics say value alignment focuses too narrowly on human values, when making AI safe has many other requirements. For example, just as with humans, AI systems need a valid, fact-based knowledge base to make good decisions.

“The problem is not that AI has the wrong values,” says Oren Etzioni, a researcher at the Allen Institute for Artificial Intelligence. “The truth is that our choices are a function of both our values ​​and our knowledge.”

Ethical problems of ChatGPT
As people’s use of artificial intelligence tools such as ChatGPT expands, the ethical problems of AI are becoming more visible and more of a public concern.

With this critique in mind, other researchers are working toward a more comprehensive theory of AI alignment, one that aims to ensure the safety of future systems without a narrow focus on human values.

Some of the companies behind these tools, including OpenAI and DeepMind, attribute such problems to insufficient alignment. They are working to improve alignment in text-generating AI and hope this work will yield insights for aligning future systems.

Researchers acknowledge that there is no general theory of AI alignment. “We don’t really have an answer to the question of how to align systems that are much more intelligent than humans,” says Leike. But whether AI’s worst problems lie in the past, present, or future, the biggest obstacle to solving them is no longer a lack of effort.
