OpenAI Made a Super Spam Generator and Now Seeks Investors

Last month, OpenAI announced that they had made an AI system that could be used for next-generation super spam and a tsunami of fake news. They chose to keep the super-spam system secret to prevent misuse by others. OpenAI has now announced that they are seeking investors and will change from being a non-profit to a for-profit. OpenAI was started as a non-profit by Elon Musk and other wealthy technologists concerned about creating safe Artificial General Intelligence.

They are creating OpenAI LP, a new “capped-profit” company that allows them to rapidly increase their investments in compute and talent while including checks and balances to actualize their mission.

They now want to make money and save the world from evil Artificial General Intelligence.

They need to invest billions of dollars over the coming years into large-scale cloud computing, attracting and retaining talented people, and building AI supercomputers.

Sam Altman stepped down as the president of Y Combinator, the Valley’s marquee startup accelerator, to become the CEO of OpenAI LP.

The new limitation on profit for investors in OpenAI is one hundred times their investment. I do not think there is any restriction against OpenAI forming new limited partnerships down the road to let investors capture more profit. They are already shifting from non-profit to for-profit with the 100X limit.
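
A rough sketch of how such a cap would work, assuming returns above the cap flow back to the nonprofit (the split logic here is my illustration; OpenAI has not published the exact mechanics):

```python
# Illustrative sketch of a 100x "capped-profit" structure. The assumption
# that excess returns above the cap flow to the nonprofit is mine, for
# illustration only.
def capped_return(investment, gross_return, cap_multiple=100):
    """Split a gross return between the investor and the nonprofit."""
    investor_cap = investment * cap_multiple
    to_investor = min(gross_return, investor_cap)
    to_nonprofit = max(gross_return - investor_cap, 0)
    return to_investor, to_nonprofit

# A $10M investment returning $2B would pay out at most $1B (100x);
# the remaining $1B would go to the OpenAI Nonprofit.
print(capped_return(10e6, 2e9))  # (1000000000.0, 1000000000.0)
```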

OpenAI LP currently employs around 100 people organized into three main areas: capabilities (advancing what AI systems can do), safety (ensuring those systems are aligned with human values), and policy (ensuring appropriate governance for such systems).

OpenAI Nonprofit governs OpenAI LP, runs educational programs such as Scholars and Fellows, and hosts policy initiatives. OpenAI LP is continuing (at increased pace and scale) the development roadmap started at OpenAI Nonprofit, which has yielded breakthroughs in reinforcement learning, robotics, and language.

OpenAI's Powerful Text AI

The OpenAI Text system can take a few sentences of sample writing and then produce a multi-paragraph article in the style and context of the sample.
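
OpenAI withheld the full model but released a smaller version of it (GPT-2). A minimal sketch of this kind of prompt-conditioned generation, using the publicly released small model via the open-source Hugging Face transformers library (the prompt and sampling settings are illustrative assumptions, not OpenAI's):

```python
# Sketch of few-shot-style text generation with the small public GPT-2
# model. OpenAI's withheld full model and exact sampling setup are not
# public, so the settings below are assumptions for illustration.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A few sentences of sample writing serve as the conditioning prompt.
prompt = "The quarterly report shows strong growth in renewable energy. Analysts expect"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a multi-paragraph continuation in the style of the prompt.
outputs = model.generate(
    **inputs,
    max_length=300,
    do_sample=True,
    top_k=40,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```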

This capability would let AIs impersonate the writing style of any person from previous writing samples. It could be used for next-generation super spam and a tsunami of fake news.

Recent Drexler Paper Suggests Cloud-Based AGI Should Be Safe

Maybe the OpenAI mission is not that much of a concern. A recent paper by Eric Drexler suggests that developing AGI in the Cloud with narrow services should be safe.

Super General Intelligence Can Be Created From Many Narrower AI Services

Drexler proposed the strategy of achieving general AI capabilities by tiling task-space with AI services.

It is natural to think of services as populating task spaces in which similar services are neighbors and dissimilar services are distant, while broader services cover broader regions. This picture of services and task-spaces can be useful both as a conceptual model for thinking about broad AI competencies and as a potential mechanism for implementing them.
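
A toy sketch of the tiling idea, with made-up services and a simple nearest-covering-service router (this construction is my own illustration, not code from Drexler's paper):

```python
# Toy illustration of "tiling task-space with AI services": each service
# covers a region of an abstract task space, a task is routed to the most
# specific service whose region contains it, and broader services cover
# broader regions (larger radii). Services and coordinates are invented.
import math

# (name, position in a 2-D task space, coverage radius)
services = [
    ("translate", (0.0, 0.0), 1.0),
    ("summarize", (2.0, 0.5), 1.0),
    ("general_language", (1.0, 0.0), 3.0),  # broader service, wider region
]

def route(task_point):
    """Return the most specific (smallest-radius) service covering the task."""
    covering = [
        (radius, name)
        for name, center, radius in services
        if math.dist(center, task_point) <= radius
    ]
    return min(covering)[1] if covering else None

print(route((0.2, 0.1)))  # -> translate (a narrow service covers it)
print(route((1.5, 1.5)))  # -> general_language (only the broad service covers it)
```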

Super-AGI Domination Seems Avoidable Even with an Impure OpenAI

It is often taken for granted that unaligned superintelligent-level agents could amass great power and dominate the world by physical means, not necessarily to human advantage. Several considerations suggest that, with suitable preparation, this outcome could be avoided:
• Powerful SI-level capabilities can precede AGI agents.
• SI-level capabilities could be applied to strengthen defensive stability.
• Unopposed preparation enables strong defensive capabilities.
• Strong defensive capabilities can constrain problematic agents.

Applying SI-level capabilities to ensure strategic stability could enable us to coexist with SI-level agents that do not share our values. The present analysis outlines general prospects for an AI-stable world, but necessarily raises more questions than it can explore.
