I rushed out nono.sh (the opposite of yolo!) in response to this, and it's already negated a few gateway attacks.
It uses kernel-level security primitives (Landlock on Linux, Seatbelt on macOS) to create sandboxes where unauthorized operations are structurally impossible. API keys are stored in Apple's Secure Enclave (or the kernel keyring on Linux), injected at runtime, and zeroized from memory after use. There is also some blocking of destructive actions (rm -rf ~/).
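For a sense of what the Linux half of that looks like, here's a rough sketch using the keyutils CLI. This is just the underlying kernel-keyring primitive, not necessarily how nono wires it up, and the key name is made up:

    # Store a secret in the session keyring (kernel memory, never on disk).
    # "openclaw_api_key" is an illustrative name, not something nono defines.
    key_id=$(keyctl add user openclaw_api_key "sk-example-123" @s)

    # Read it back at runtime, e.g. to inject into a child process's environment:
    OPENCLAW_API_KEY=$(keyctl pipe "$key_id") openclaw gateway

    # Revoke it when done; the kernel discards the key material.
    keyctl revoke "$key_id"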
It's as simple to run as: nono run --profile openclaw -- openclaw gateway
You can also use it to sandbox things like npm install:
nono run --allow node_modules --allow-file package.json package-lock.json -- npm install pkg
It's early days; there will be bugs! PRs welcome and all that!
https://nono.sh
Heads up that your URL is wrong. Should be https://nono.sh
lol thanks! Seriously, I have been running the tool over and over while testing, and I kept typing 'nano' and opening binaries in the text editor. Next minute I'm swearing my head off trying to close nano (and not vim!)
Is this better than using sandbox-exec (on mac) directly?
Hmm, I don't know about better; more convenient, I guess. But if it floats your boat you could write everything out in the .sb profile format and call sandbox-exec directly!
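If you want to see what that looks like, here's a tiny illustrative example (a made-up profile, not anything nono generates):

    ;; no-net.sb — Seatbelt/SBPL profile: allow everything except network access
    (version 1)
    (allow default)
    (deny network*)

    # Run a command under the profile; network calls should now fail:
    sandbox-exec -f no-net.sb curl https://example.com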
I'm curious: outside of AI enthusiasts, have people found value in using Clawdbot, and if so, what are they doing with it? From my perspective, the people legitimately busy enough that they actually need an AI assistant are also people with enough responsibilities that they have to be very careful about letting something act on their behalf with minimal supervision. It seems like that sort of person could probably afford to hire an administrative assistant anyway (a trustworthy one), or if it's for work they probably already have one.
On the other hand, the people most inclined to hand over access to everything to this bot also strike me as people without a lot to lose? I don't want to make an unfair characterization or anything; it just strikes me that handing over the keys to your entire life/identity is a lot more palatable if you don't have much to lose anyway?
Am I missing something?
The whole premise of this thing seems to be that it has access to your email, web browser, messaging, and so on. That's what makes it, in theory, useful.
The prompt injection possibilities are incredibly obvious... the entire world has write access to your agent.
???????
There's some good discussion here: https://news.ycombinator.com/item?id=46838946
Does it matter? Let them cook and get burned if they want to.
Things like this are why I don't use AI agents like moltbot/openclaw. Security is just out the window with these things. It's like the last 50 years never happened.
Moltbot is a security nightmare. Its premise in particular (tap into all your data sources), combined with the rapid uptake by inexperienced users, makes it especially attractive to criminal networks.
Yes, there are already several criminal networks operating on it (transparently). I guess some consider this a feature.
Do people even care about security anymore? I'll bet many consumers wouldn't even think twice about just giving full access to this thing (or any other flavor-of-the-month AI agent product).
The real problem is that there is nothing novel here. Variants of this type of attack were clear from the beginning.
What I would have expected is prompt injection or other methods to get the agent to do something its user doesn't want it to, not regular "classical" attacks.
At least currently, I don't think we have good ways of preventing the former, but the latter should be possible to avoid.
They are easy to avoid if you actually give a damn. Unfortunately, the people who create these things don't, assuming they even know what half of these attacks are in the first place. They just want to pump something out now now now, and the mindset is "we'll figure out all the problems later, I want my cake now now now now!" Maximum velocity! Full throttle!
It's just as bad as a lot of the vibe-coders I've seen. I saw one vibe-coder who created an app without even knowing what they wanted to create (as in, what it would do), and the AI they were using to vibe-code literally handwrote a PE parser to load DLLs instead of using LoadLibrary or delay loading. Which, really, is the natural consequence of giving someone access to software engineering tools when they don't know the first thing about them. Is that gatekeeping of a sort? Maybe, but I'd rather have that than "anyone can write software, and oh by the way this app reimplements wcslen in Rust because the vibe-coder had no idea what they were even doing".
> "we'll figure out all the problems later, I want my cake now now now now!" Maximum velocity! Full throttle!
That is indeed the point. Moltbot reminds me a lot of the demon core experiment(s): laughably reckless in hindsight, but ultimately also an artifact of a time of massive scientific progress.
> Is that gatekeeping of a sort? Maybe, but I'd rather have that
Serious question: What do you gain from people not being able to vibe code?
So many people are giving the keys to the kingdom to this thing. What is happening with humanity?
Humanity is the same it's always been. Some people are just inherently curious despite the obvious dangers.
Also, if you think about it, billions of people aren't running Moltbot at all.
What worries me here is that the entire personal AI agent product category is built on the premise of “connect me to all your data + give me execution.” At that point, the question isn’t “did they patch this RCE,” it’s more about what a secure autonomous agent deployment even looks like when its main feature is broad authority over all of someone's connected data.
Is the only real answer sandboxing + zero trust + treating agents as hostile by default? Or is this category fundamentally incompatible with least privilege?
yikes
> “did they patch this RCE,”
no, they documented it
https://docs.openclaw.ai/gateway/security#node-execution-sys...
Thank you for doing this. I'm shocked that more people aren't thinking about security with respect to AI.
People are thinking about it. I'm just not sure the overlap between those people and Moltbot users is very high.
This isn't even AI security, as far as I can tell: It looks like regular old computer security to me.
Legit issue for local installs, but this is why we run the hosted platform in gVisor. Even with the exploit, you're trapped in a sandbox with no access to the host node. We treat every container as hostile by default.
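For the curious: gVisor's runsc runtime intercepts syscalls in a user-space kernel, so even code running as root in the container never talks to the host kernel directly. Generic setup looks roughly like this (standard gVisor usage, not our exact deployment config):

    # Register runsc as a Docker runtime once, in /etc/docker/daemon.json:
    #   { "runtimes": { "runsc": { "path": "/usr/local/bin/runsc" } } }

    # Then run a container under gVisor instead of on the host kernel:
    docker run --rm --runtime=runsc alpine dmesg
    # dmesg prints gVisor's own boot banner ("Starting gVisor..."),
    # confirming syscalls hit the user-space kernel, not the host's.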
that response is not comforting