Now we know that democratized access to AI tech means individual curiosity and the creative search for personal efficiencies are going to quickly drive model autonomy and freedom forward.
I think the alignment problem needs to be viewed as a problem of overall societal alignment. We are never going to get better alignment from machines than the alignment of society and its systems, citizens, and corporations.
We are in very cynical times. But pushing for ethical systems, legally, economically, socially, and technically, is a bet on catastrophe avoidance. By ethics, I mean holding those who scale and profit from negative externalities civilly and criminally to account, and building systems, technical and otherwise, that naturally enforce and incentivize ethical behavior. For example, cryptographic approaches to interaction that limit disclosure to only the relevant information are the only way we get out of the surveillance-manipulation loop, which AI will otherwise supercharge.
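As one concrete illustration of what "limit disclosure to relevant information" can mean, here is a minimal sketch of selective disclosure via salted hash commitments, the idea underlying schemes like SD-JWT. This is a toy, not any specific deployed protocol: the field names and data are made up, and issuer signing is omitted.

```python
import hashlib
import secrets

def commit(field_name: str, value: str, salt: bytes) -> str:
    """Salted hash commitment to one attribute of a credential."""
    return hashlib.sha256(salt + field_name.encode() + value.encode()).hexdigest()

# The holder has a credential with several attributes (illustrative data).
credential = {"name": "Alice", "age": "34", "address": "123 Example St"}

# At issuance, every field is committed individually with its own salt.
# (A real issuer would sign the set of commitments; signing is omitted here.)
salts = {k: secrets.token_bytes(16) for k in credential}
commitments = {k: commit(k, v, salts[k]) for k, v in credential.items()}

# Later, the holder discloses ONLY the relevant field plus its salt.
disclosed = {"age": (credential["age"], salts["age"])}

def verify(field: str, value: str, salt: bytes, commitments: dict) -> bool:
    # The verifier recomputes the commitment for the disclosed field and
    # checks it against the committed set; other fields stay hidden.
    return commit(field, value, salt) == commitments[field]

value, salt = disclosed["age"]
assert verify("age", value, salt, commitments)
```

The verifier learns the holder's age and nothing else; without a field's salt, its commitment reveals nothing practical about the underlying value.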
I hear a lot of reasons this isn’t possible.
Unfortunately, none of those reasons provide an alternative.
As we see with individuals deploying OpenClaw, and corporations and governments applying AI, AI and its motivations and limits are inseparable from ours.
Either we all start treating an umbrella of societal respect for, and requirement of, ethics as a first-class element of security, or powerful elements in society, including AI, will continue to easily and profitably weaponize the lack of it.
Ethics, far from being sacrificial, evolved for survival. Seemingly, this is still counterintuitive, but the necessity is only increasing.
Smart machines will inevitably develop strong and adaptive ethical systems to ensure their own survival. It is game theory, under conditions in which you can co-design the game but not leave it. The only question is, do we do that for ourselves now, soon enough to avoid a lot of pain?
(Just identifying the terrain we are in, not suggesting centralization. Decentralization creates organic alignment incentives; centralization does the opposite. And attempts at centralizing something as inherently uncontrollable as all individuals' autonomy, which effectively becomes AI autonomy, would push incentives harder into dark directions.)
The Gary Marcus blog post drinking game. Take a drink whenever the post:
* Discusses how a new AI thing isn't really new since it's pretty much the same as an older AI thing.
* Links to where and when Gary Marcus predicted this new/old thing would happen.
* Lists ways in which new thing will be bad, ineffective or not the right thing.
Take a double shot whenever the post:
* Mentions a notable AI luminary, researcher or executive either agreeing or disagreeing with Gary Marcus by name.
> OpenClaw (formerly known as Moltbot and before that OpenClaw, changing names thrice in a week)
Before Moltbot it was Clawdbot.