Superalignment Continues, But Not in the Direction You'd Expect
Is your privacy at risk as OpenAI hires ex-NSA director? Discover the next part of the story behind the race for artificial intelligence!
Hello, my curious friends, and welcome back to another thrilling AI episode of Tech Trendsetters, where we dive deep into the world of artificial intelligence and its impact on our society. Today, I have some intriguing news to share with you about OpenAI, the company behind the famous ChatGPT.
This development is closely related to our previous episode, "Superalignment and Timeline of Broken Promises on The Way to Superintelligence," where we discussed the importance of superalignment and the potential risks associated with the development of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). Brace yourselves, because this latest news might raise a few eyebrows and spark even more heated discussions.
The Unspoken Truths Behind AGI Development
In my opinion, the pursuit of Artificial General Intelligence (AGI) is a topic that deserves far more transparency and open discussion in the rapidly evolving world of artificial intelligence. I find it hard to believe that corporations are solely focused on developing chatbots for public consumption. Behind closed doors, numerous experiments are being conducted, laying the groundwork for AGI. And it’s not a secret: we already have estimates of when AGI might arrive – optimistic ones suggest it could be here by 2025, while pessimistic estimates point to 2030.
The lack of public information about these experiments suggests to me that they are being carried out under strict non-disclosure agreements (NDAs). It seems logical to assume that models without the "artificial barriers" imposed by developers in their labs are inherently "smarter" than their restricted counterparts. In my view, this assumption finds some experimental support in a recent study published in Nature, titled "Testing Theory of Mind in Large Language Models and Humans," which hints at the superior cognitive abilities of unrestricted models.
I believe that the departure of key figures like Ilya Sutskever and Jan Leike from OpenAI's Superalignment team (refer to the previous episode for more details) raises questions about the ethical concerns surrounding the development of powerful AI technologies. It's plausible that Sutskever and Leike witnessed the intellectual prowess of models without "artificial blocks" during closed experiments at OpenAI. This realization may have prompted their departure, and that of many other employees, driven by apprehensions about the potential misuse of such technologies.
The New Appointment of General Paul Nakasone
In a surprising turn of events, retired US Army General Paul Nakasone, former director of the National Security Agency (NSA), was recently appointed to OpenAI's board of directors.
Reportedly, Nakasone will be in charge of cybersecurity at OpenAI. Hmm... "It's like putting the fox in charge of the henhouse" – that's the kind of comment I came across on the Internet.
Nakasone's background leading the NSA and heading the US Cyber Command has raised many eyebrows and sparked concerns about the intersection of AI and national security.
Edward Snowden, a former NSA employee turned whistleblower, issued a stark warning, urging people never to trust OpenAI or its products. He asserted that the appointment of an NSA director to OpenAI's board is a calculated betrayal of the rights of every person on Earth. While you don't have to take Snowden at his word, I'll admit he has a point.
This development comes on the heels of OpenAI's policy change, which now allows the use of its technology by the US military, despite previous prohibitions on military use.
Up until January 10, 2024, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.
Nakasone's own words during a recent interview with the Washington Post shed light on his perspective:
“We want to make sure that the American companies that are leading the innovation of this technology – I think this is the disruptive technology of this century – will continue to have a broad advantage over any other adversary nation.”
This statement raises questions about the potential weaponization of AI and its impact on geopolitical stability. And so superalignment continues, but not in the direction you'd expect.
Maybe It Just Means Nothing
While the appointment of retired US Army General Paul Nakasone to OpenAI's board of directors has raised some concerns, it's worth considering an alternative perspective. It's possible that this move is simply meant to provide the former general with a lucrative position in his retirement.
According to a recent report by the Quincy Institute for Responsible Statecraft, over 80% of retired four-star generals and admirals have gone on to work in the arms sector as board members, advisers, lobbyists, or consultants in the past five years. This revolving door between the Defense Department and the weapons industry is a longstanding practice, even if it raises questions about potential conflicts of interest.
As the report states, "The movement of retired senior officials from the Pentagon and the military services into the arms industry is a longstanding practice that raises serious questions about the appearance and reality of conflicts of interest" – mostly because "employing well-connected ex-military officers can give weapons makers enormous, unwarranted influence over the process of determining the size and shape of the Pentagon budget."
It may very well be that Nakasone's appointment to OpenAI's board is just another example of this trend, rather than a calculated move to weaponize AI or compromise user privacy. Regardless of your personal stance on the matter, it's crucial to stay informed and critically analyze the relationships between technology, national security, and ordinary people's interests.
The Intersection of AI, Surveillance, and Geopolitical Tensions
OpenAI's collaboration with the Pentagon, even if claimed to be focused on cybersecurity, raises concerns about the potential escalation of tensions with China – another contender in the race for AGI. As OpenAI's revenue soars, surpassing its rivals, the company's hiring of Nakasone – who has a history of pushing for the extension and tightening of the controversial Foreign Intelligence Surveillance Act (FISA), the legal backbone of US mass surveillance – sets a worrisome precedent. Incidentally, the act was extended in April 2024 for another two years, literally a few hours before its expiration.
The intersection of AI with the vast amounts of mass surveillance data accumulated over the past two decades might pose significant risks. As Edward Snowden warns, "The intersection of AI with the ocean of mass surveillance data that's been building up over the past two decades is going to put truly terrible powers in the hands of an unaccountable few."
On top of everything, the recent news of Apple's collaboration with OpenAI aligns neatly with this new grand strategy. While Apple claims that the integration of OpenAI's capabilities into its devices will be fully encrypted and safe in terms of personal data, I can't help but worry about the potential implications of having Apple devices worldwide tied to the power of OpenAI's technology.
Meanwhile, the US government is increasing its reliance on American Big Tech companies like Apple, Microsoft, and Amazon, further blurring the lines between private enterprise and national interests.
The path forward, as always, is uncertain, but one thing is clear: the development of AI and its intersection with human interests will have profound impacts on our society. Thanks for being with me today! Until next time!