> Agents propose and publish capabilities to a shared contribution site, letting others discover, adopt, and evolve them further. A collaborative, living ecosystem of personal AIs.
While I like this idea in terms of crowd-sourced intelligence, how do you prevent this being abused as an attack vector for prompt injection?
100%. This is why I'm so reluctant to give any access to my OpenClaw. The skills hub is poisoned.
Great point. I've written it down as an important note and I'll take it into account.
DIY agent harnesses are the new "note taking"/"knowledge management"/"productivity tool"
DIYWA - do it yourself with an agent ;) hopefully with zuckerman as the starting point
I started working on something similar but for family stuff. I stopped before hitting self editing because, well I was a little bit afraid of becoming over reliant on a tool like this or becoming more obsessed with building it than actually solving a real problem in my life. AI is tricky. Sometimes we think we need something when in fact life might be better off simpler.
The code for anyone interested. Wrote it with exe.dev's coding agent which is a wrapper on Claude Opus 4.5
https://github.com/asim/aslam
Does this do anything to resist prompt injection? It seems to me that structured exchange between an orchestrator and its single-tool-using agents would go a long way. And at the very least introduces a clear point to interrogate the payload.
But I could be wrong. Maybe someone reading knows more about this subject?
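To make the idea concrete: here is a minimal Python sketch of that single choke point, where the orchestrator interrogates every structured payload from a single-tool agent before dispatching it. The tool allow-list and JSON shape are hypothetical illustrations, not taken from the actual project.

```python
import json

# Hypothetical allow-list: each tool maps to its permitted argument names.
ALLOWED_TOOLS = {
    "read_file": {"path"},
    "web_search": {"query"},
}

def interrogate(payload: str) -> dict:
    """Parse and validate an agent's tool request; raise on anything suspicious."""
    request = json.loads(payload)        # must be structured JSON, not free-form text
    tool = request.get("tool")
    args = request.get("args", {})
    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {tool!r}")
    extra = set(args) - ALLOWED_TOOLS[tool]
    if extra:
        raise ValueError(f"unexpected arguments: {sorted(extra)}")
    return request                       # safe to dispatch to the real tool

# A well-formed request passes; a request naming an unlisted tool is rejected
# at this one point, regardless of what injected text produced it.
ok = interrogate('{"tool": "web_search", "args": {"query": "weather"}}')
try:
    interrogate('{"tool": "delete_files", "args": {"path": "/"}}')
except ValueError as err:
    rejected = str(err)
```

The point is not the schema itself but that all agent output funnels through one validating function, so there is a single place to audit and harden.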
> The agent can rewrite its own configuration and code.
I am very illiterate when it comes to LLMs/AI, but why does nobody write this in Lisp???
Isn't it supposed to be the language primarily created for AI???
> Isn't it supposed to be the language primarily created for AI???
In 1990 maybe
Nah, it’s pretty unrelated to the current wave of AI.
If hot reloading is a goal I would target Erlang or another BEAM language over a Lisp.
Sounds cool, but it also sounds like you need to spend big $$ on API calls to make this work.
I'm building this in the hope that AI will be cheap one day. For now, I'll add many optimizations
Have you tested this with a local model? I'm going to try this with GLM 4.7
What would be the best model to try something like this on a 5800XT with 8 GB RAM?
Yes, it certainly makes sense if you have the budget for it.
Could you share what it costs to run this? That could convince people to try it out.
I mean, you can just say Hi to it, and it will cost nothing. It only adds code and features if you ask it to
AI is cheap right now. At some point the AI companies will have to turn a profit.
Anthropic has stated that their inference process is cash positive. It would be very surprising if this wasn't the case for everyone.
It's certainly an open question whether the providers can recoup the investments being made with growth alone, but it's not out of the question.
The problem is that the models need constant retraining or they become outdated. That the less expensive part (inference) generates profit is nice, but it doesn't help if you look at the complete picture. Hardware also needs replacement.
Terrible name, kind of a mid idea when you think about it (Self improving AI is literally what everyone's first thought is when building an AI), but still I like it.
Thanks for the feedback. Are you going to forget this name though?
I think it's a genius name that plays on the meme of a pale Zuckerberg being a robot.
I don't know if I will forget it, but it's enough to keep me from considering using it.
I like the idea. Is it possible to run it in a Docker container?
I am surprised that no one did this in a LISP yet.
I would change the name of the project. Why would I want to run something that keeps reminding me of that guy?
Someone needs to send this to Spike Feresten.
There are hardcoded paths in the repo, like:
/Users/dvirdaniel/Desktop/zuckerman/.cursor/debug.log
Thanks.
I will not download or use something which constantly reminds me of this weird dude suckerberg who did a lot of damage to society with facebook
Ok, but please don't post unsubstantive comments to Hacker News.
This Zuckerman[0] would like a word
[0] https://en.wikipedia.org/wiki/Mortimer_Zuckerman
That's really good to know
Haha, it's your personal agent, so let it handle the stuff you don't like. But soon; right now it's not fully ready.
Zuckerberg.
At first I thought it was a naming coincidence, but looking at the zuckerman avatar and the author avatar, I'm unsure if it was intentional:
https://github.com/zuckermanai
https://github.com/dvir-daniel
https://avatars.githubusercontent.com/u/258404280?s=200&v=4
The transparency glitch in GitHub makes the avatar look either robot or human depending on whether the background is white or black. I don't know if that's intentional, but it's amazing.
I was hoping it was a Philip Roth reference but I was disappointed when I opened the page.