With GPT-4 hot off the presses and the AI arms race heating up, more than a thousand tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter imploring researchers to pause development of AI systems more powerful than GPT-4 for six months.
The letter cites concerns about the spread of misinformation, the risk of labor-market automation, and the potential for civilization to spiral out of control. Its main points are as follows:
1. Runaway artificial intelligence
Companies are racing to develop advanced AI technologies that not even their creators can "understand, predict, or reliably control." The letter reads: "We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
2. A dangerous race
The letter warns that AI companies are locked in an "out-of-control race" to "develop and deploy" ever more advanced systems. The viral popularity of OpenAI's ChatGPT in recent months appears to have prompted other companies to accelerate the release of their own AI products.
The open letter urges companies to enjoy the rewards of an "AI summer" while giving society a chance to adapt to the new technology, rather than rushing "unprepared into a fall."
3. A six-month moratorium
The letter says the moratorium would provide time to introduce "shared safety protocols" for AI systems, adding: "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
The pause, the letter says, should be a step back from the "dangerous race" around advanced technology, not a halt to the development of artificial general intelligence (AGI) altogether.
To be fair, while the industry will never agree to a six-month moratorium, the current pace of AI development does put speed ahead of safety, and that should concern society. I think we need to sound as many alarm bells as possible to wake up regulators, policymakers and industry leaders. In other situations subtle warnings might suffice, but language-modeling technology is coming to market faster than anything people have experienced before. Slowing down a little and thinking about the road ahead is necessary if society is to adapt to the new AI technologies.
Simeon Campos, CEO of the AI safety startup SaferAI, said he signed the letter because it is impossible to manage the risks of these systems when even their inventors don't know how they work, what they are capable of, or how to place limits on their behavior. "We are expanding the capabilities of such systems to unprecedented levels, in a full-speed race to have a transformative impact on society. Their development must be slowed to allow society to adapt and accelerate alternative AGI architectures that are secure by design and can be formally validated."
Gary Marcus, a signatory to the open letter and professor emeritus at New York University, believes the letter will be a turning point: "I think this is a very important moment in the history of AI - and perhaps in the history of humanity." I'm afraid that's an overstatement, because the open letter gets some important things wrong.
The letter says: "As numerous studies have shown, human-competitive AI systems pose a profound risk to society and humanity." In fact, the risk comes less from hypothetical dystopian AI scenarios and more from the use of large language models (LLMs) within oppressive systems, a danger that is far more concrete and urgent.
No matter how much we marvel at the "intelligence" of LLMs and the speed with which they learn - striking innovations emerge every day, and a group of Microsoft researchers who tested GPT-4 even reported that it shows "sparks" of artificial general intelligence - LLMs are not self-aware by nature; they are simply neural networks trained on vast text corpora that produce probabilistic text by recognizing patterns. The real risks and harms do not stem from AI being too powerful; what we should worry about is the concentration of control over large language models in the hands of a few big players, the replication of systems of oppression, the disruption of information ecosystems, and the damage to natural ecosystems from wasted energy.
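To make that description concrete, here is a minimal sketch of the loop an LLM runs at generation time: score candidate next tokens, convert the scores to probabilities, sample one, and repeat. Everything in it - the toy vocabulary, the random stand-in for a trained network - is a deliberately simplified assumption for illustration, not any real model.

```python
import math
import random

# A toy vocabulary and made-up "logits"; a real LLM computes these
# scores with a trained neural network over tens of thousands of tokens.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(context):
    """Hypothetical stand-in for a trained model: one score per token.
    (It ignores the context; a real model conditions its scores on it.)"""
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt, steps=5):
    """Autoregressive loop: score, sample, append, repeat."""
    tokens = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_logits(tokens))
        # Sampling from the distribution is what makes the output
        # "probabilistic text" rather than a fixed lookup.
        tokens.append(random.choices(VOCAB, weights=probs, k=1)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))
```

Nothing in this loop knows or intends anything; whatever "intelligence" appears lives entirely in the learned scoring function.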
Other issues include privacy, as AI becomes increasingly able to interpret large surveillance datasets; copyright, as companies including Stability AI face lawsuits from artists and copyright holders who object to the unauthorized use of their work to train AI models; and energy consumption, as AI models draw power every time they are prompted. AI models rely on the same powerful GPUs as gaming computers because they need to run many computations in parallel.
The open letter refers to "the dramatic economic and political disruption that AI will cause," but in reality AI itself won't cause any of it; the companies building it will. They want to make as much money as possible and care little about the technology's impact on democracy and the environment. Alarmist talk about the harm "caused by AI" leaves the listener with the impression that it isn't people who are deciding to deploy these things.
As it happens, OpenAI is also concerned about the dangers of artificial intelligence, but the company's answer is to proceed carefully rather than stop. "We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice," Sam Altman wrote in OpenAI's statement on planning for artificial general intelligence. "We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."
In other words, OpenAI is rightly following the usual human path of acquiring new knowledge and developing new technology: learning from iterative experimentation rather than counting on "one-shot success" through supernatural foresight. This is the right approach, and putting AI tools in the hands of the general public is also right, because "democratized access will lead to more and better research, decentralized power, more interest, and more people contributing new ideas." We just also need AI labs to provide transparency about training data, model architectures and training regimes so that society at large can study them properly.
Microsoft says it is "driven by ethical, human-centered principles"; Google says it will move forward "boldly and responsibly"; and OpenAI says its mission is to use technology "for the benefit of all humanity." We have watched many of these companies launch powerful systems while somehow evading responsibility for them, and it is far from clear how they will hold themselves truly accountable. Time will tell whether these declarations are backed up by action or remain declarations at best.