I don't think generating virtual space is the issue.
It's about generating interesting virtual space!
Indeed, this has been described in the past as "The Oatmeal Problem" [1]
[1] https://www.challies.com/articles/no-mans-sky-and-10000-bowl...
Kate Compton's GDC talk: https://www.gdcvault.com/play/1024213/Practical-Procedural-G...
Important to note that article was written 9 years ago and NMS has received numerous content updates since. There's a lot more to the game now.
There is, but the procedural generation part is not what makes it fun to me. It's what you create and how you choose to "live" in the game. It really is like the real universe - isotropic, the same in all directions - it only takes a few hours to be overwhelmed by how pointless it all seems, knowing there's an infinity of anything you discover elsewhere.
Once you build a base or create some goal for yourself, it becomes interesting.
Yep. People have been doing this kind of stuff for computer games for decades. It's actually not that difficult. It's not clear what novel problem is being solved here.
Yeah but those traditional procgen techniques don't use AI, and this one does use AI. They solved the problem of them not being AI enough for the AI era. AI!
Do you have a particular piece of software, tech demo, or game in mind with interesting, very large generated 3D worlds?
Age of Empires got me into tinkering with content generation. Its flexible map rules were fantastic for making this possible.
Minecraft is of course the poster child for very large worlds of interest these days.
Dwarf Fortress crafts an entire continent complete with a multi-century history, the results of which you can explore freely in adventure mode.
Most recent examples of 3D worlds like the one in the post tend to do it through Wave Function Collapse.
> Minecraft is of course the poster child for very large worlds of interest these days.
Minecraft used to create very interesting worlds until they changed the algorithm and the landscapes became plain and boring. It took them about 10 years, until the Caves and Cliffs Update, to make world generation interesting again.
In Mario 64 there is a staircase you can run up forever. Granted, it looks the same no matter how long you have Mario run up it, but that certainly fits "big but uninteresting 3D world."
> big but uninteresting 3d world.
I know 'interesting' is subjective, but your comment is demonstrably false. Just type "mario 64 staircase" into youtube, and look at the hundreds (thousands? millions?) of videos and many millions of views.
People are interested in it as a form of trivia. It is extremely uninteresting from the perspective of the player and more importantly how the word was actually used, which was in reference to the quality of world generation.
Redefining “interesting” just so you can provide a completely irrelevant “correction” is bad faith trolling.
Not sure why you're so defensive about this. I'm not trolling. Whether something is interesting or not is subjective, which is my point. You might think you know why that staircase is interesting to people (it's just trivia), but that's just your opinion. This is a tech community, so you're obviously unimpressed by the technology used to make it, but most people don't care about that at all.
There's no secret formula to culture. Some programmers and AI people seem to think there is some magic AI model that will be able to produce cultural hits at the click of a button. If you're a boring person, you're not likely to "get" why something is interesting, or why that part can't just be automated away. No technology can help with that.
Valheim and No Man's Sky are ones I've played recently.
Minecraft surely fits those criteria.
You reminded me of this https://book.leveldesignbook.com/process/layout
And I think Valve used to have a series on level design, going from big to small with "anchor points", but I seem to have misplaced the link.
> The generated scenes are walkable and suitable for navigation/planning evaluation.
Maybe the idea is to create environments for AI robotics training.
Consider the levels generated in any roguelike.
Consider the patterns generated by cellular automata.
Both tend to stay interesting in the small scale but lose it to boring chaos in the large.
For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
(Vs plugging away at tunnel-building like a mole)
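To make the cellular-automata half concrete: the classic roguelike cave pass (a toy sketch of one common rule, nothing from the paper) produces nice organic pockets locally, but nothing constrains the large-scale layout, which is exactly where it dissolves into boring chaos:

    import random

    W, H = 60, 30

    def random_grid(p_wall=0.45):
        return [[random.random() < p_wall for _ in range(W)] for _ in range(H)]

    def wall_neighbors(grid, x, y):
        # Count walls in the 8-neighborhood; out-of-bounds counts as wall.
        count = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= nx < W and 0 <= ny < H) or grid[ny][nx]:
                    count += 1
        return count

    def smooth(grid):
        # One common rule: a cell becomes wall iff 5+ of its 8 neighbors are walls.
        return [[wall_neighbors(grid, x, y) >= 5 for x in range(W)] for y in range(H)]

    grid = random_grid()
    for _ in range(4):
        grid = smooth(grid)
    print("\n".join("".join("#" if c else "." for c in row) for row in grid))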
>Both tend to stay interesting in the small scale but lose it to boring chaos in the large.
I think that's a good way to put it. I started writing a reply before reading your comment entirely and arrived at basically the same conclusion as this but more verbosely:
> For this reason I think the better approach is to start with a simple level-scale form and then refine it into smaller parts, and then to refine those parts and so on.
It seems hard to get away from having some sort of overarching goal, and then constantly looking back at it at progressively smaller levels. Like, what is the universe of the thing you are generating randomly? Is it a dungeon in a roguelike? Is it meant to be one of many floors? Or is it a space inside a building? Is it a house? Is it an office? Is the office a standalone building or a skyscraper?
Perhaps a good algorithm would start big and go small.
- assume the universe to generate is a world
- pick a location and assign stuff to generate. Let's say it's a city
- pick a type of city thing to generate. Let's say it's a skyscraper
- etc., going smaller and smaller
- look at the city so far. Pick another type of city thing to generate based on what has been generated so far
- look at the world so far. Pick another type of thing to generate
Or maybe instead of looking back you could pre-divide into zones. But then, if you want to make an entire universe (as in multiple worlds), you either just make random worlds, which leads back to your original problem (boring chaos at large scale), or you go up another level to generate more intelligently.
Point being, you need some sort of top down perspective on it.
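Rough Python sketch of that recursive top-down idea (all the names here are made up, just to illustrate the shape of it):

    import random

    # Hypothetical refinement hierarchy: each kind of region knows what
    # smaller kinds of things can be generated inside it.
    LEVELS = {
        "world": ["city", "wilderness", "ocean"],
        "city": ["skyscraper", "house", "office", "park"],
        "skyscraper": ["floor"],
        "floor": ["office_suite", "lobby"],
    }

    def generate(kind, rng=random):
        node = {"kind": kind, "children": []}
        options = LEVELS.get(kind)
        if options is None:
            return node  # leaf: nothing smaller to refine into
        placed = []
        for _ in range(rng.randint(2, 4)):
            # "Look at what has been generated so far": here just a crude
            # bias against piling up duplicates at the same level.
            fresh = [o for o in options if placed.count(o) < 2]
            choice = rng.choice(fresh or options)
            placed.append(choice)
            node["children"].append(generate(choice, rng))
        return node

    world = generate("world")

The point being: by the time anything small gets generated, it already has a place in something bigger, which is the top-down perspective the chaos-at-large-scale generators lack.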
Here are two graphical examples of that strategy:
http://fleen.org
https://www.flickr.com/photos/jonathanmccabe/albums/72157622...
Nethack/Slashem and DCSS, maybe.
The levels are made to fit in an 80x24 terminal, with a max of maybe 7 or 8 rooms per level (can't remember).
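For reference, the usual rectangles-and-rejection room placement under those constraints is tiny; a rough Python sketch (not NetHack's actual algorithm):

    import random

    W, H, MAX_ROOMS = 80, 24, 8

    def carve_rooms(rng=random):
        grid = [["#"] * W for _ in range(H)]
        rooms = []
        for _ in range(MAX_ROOMS * 5):  # placement attempts; overlaps get rejected
            if len(rooms) == MAX_ROOMS:
                break
            w, h = rng.randint(4, 12), rng.randint(3, 6)
            x, y = rng.randint(1, W - w - 1), rng.randint(1, H - h - 1)
            # Reject rooms overlapping an existing one (with a 1-tile border).
            if any(x < rx + rw + 1 and rx < x + w + 1 and
                   y < ry + rh + 1 and ry < y + h + 1
                   for rx, ry, rw, rh in rooms):
                continue
            rooms.append((x, y, w, h))
            for j in range(y, y + h):
                for i in range(x, x + w):
                    grid[j][i] = "."
        return grid, rooms

    grid, rooms = carve_rooms()
    print("\n".join("".join(row) for row in grid))

Corridors between room centers are the usual next step.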
The worlds from Cataclysm DDA: Bright Nights are pretty regular, and you have an overworld, labs, subways...
Or at least coherent.
Did they reinvent "wave function collapse" (https://github.com/mxgmn/WaveFunctionCollapse)?
No. WFC is fundamentally different from this.
Indeed, but it does serve more or less the same purpose in procgen pipelines (and folks have tweaked WFC for infinite worlds before[1]).
[1]: https://www.youtube.com/watch?v=7ffT_8wViBA
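For anyone unfamiliar, the core loop of WFC is small: every cell starts as a set of candidate tiles, you repeatedly collapse the lowest-entropy cell to one tile, and you propagate adjacency constraints outward. A toy Python sketch with made-up tiles (real implementations restart or backtrack on contradictions):

    import random

    # Hypothetical tile set with symmetric adjacency constraints.
    COMPAT = {
        "sea": {"sea", "coast"},
        "coast": {"sea", "coast", "land"},
        "land": {"coast", "land"},
    }
    W, H = 20, 10

    def wfc(rng=random):
        grid = [[set(COMPAT) for _ in range(W)] for _ in range(H)]
        while True:
            open_cells = [(len(grid[y][x]), x, y)
                          for y in range(H) for x in range(W) if len(grid[y][x]) > 1]
            if not open_cells:
                return [[next(iter(c)) for c in row] for row in grid]
            _, x, y = min(open_cells)  # lowest-entropy cell
            grid[y][x] = {rng.choice(sorted(grid[y][x]))}  # collapse it
            stack = [(x, y)]
            while stack:  # propagate constraints to neighbors
                cx, cy = stack.pop()
                allowed = set().union(*(COMPAT[t] for t in grid[cy][cx]))
                for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                    if 0 <= nx < W and 0 <= ny < H:
                        narrowed = grid[ny][nx] & allowed
                        if not narrowed:
                            raise RuntimeError("contradiction; a real impl restarts")
                        if narrowed != grid[ny][nx]:
                            grid[ny][nx] = narrowed
                            stack.append((nx, ny))

    print("\n".join("".join(t[0] for t in row) for row in wfc()))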
WFC?
This could be a great way to make backrooms horror environments!
I've dreamed of a NeRF-powered backrooms walking simulator for quite a while now. This approach is "worse" in that the mesh seems explicit rather than the world simply becoming whatever you look at, but that's arguably better for real-world use cases, of course.
> backrooms horror environments
True, it sounds (and looks) a lot like https://scp-wiki.wikidot.com/scp-3008
I'm thinking a new version of LSD: Dream Emulator could be really interesting.
Oh great, it's the Severance simulator.
I wonder if they also have a strategy for deleting generated tiles; otherwise the "infinite" world is limited by available memory. I also wonder whether their method can exactly recreate tiles that have been deleted, or in other words, whether they have a method for generating unique seeds for all tiles. The paper does not give many technical details. If the seed has a limited size and there is a method for deriving a seed for each 2D coordinate, I wonder if it is possible to make a non-repeating infinite world. I think it is not possible with a limited-size seed.
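For what it's worth, classic chunked worlds handle exactly this by deriving each tile's seed from a global world seed plus the tile coordinate, so evicted tiles can be regenerated bit-identically; whether that carries over to generative-model blocks like these is an open question. A minimal Python sketch of the hash-based approach (names are illustrative):

    import hashlib
    import random

    WORLD_SEED = 42  # hypothetical global seed

    def tile_seed(x: int, y: int) -> int:
        # Same (x, y) always yields the same 64-bit seed, so a tile can be
        # dropped from memory and later rebuilt exactly.
        data = f"{WORLD_SEED}:{x}:{y}".encode()
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "little")

    def generate_tile(x: int, y: int):
        rng = random.Random(tile_seed(x, y))
        # Stand-in for real content generation: a few deterministic features.
        return [rng.randrange(100) for _ in range(4)]

    assert generate_tile(3, -7) == generate_tile(3, -7)  # reproducible after eviction

And the last point holds by pigeonhole: a 64-bit per-tile seed admits at most 2^64 distinct tiles, so a truly infinite world built this way must eventually repeat.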
It is only a paper as of now:
> The code is being prepared for public release; pretrained weights and full training/inference pipelines are planned.
Any ideas on how it would be different from and better than "traditional" PCG? Seems like it'd give you higher resource consumption, worse results, and less control, none of which seem like a benefit.
The description in the linked YouTube video for some reason has more info than the GitHub repo:
> We tackle the challenge of generating the infinitely extendable 3D world — large, continuous environments with coherent geometry and realistic appearance. Existing methods face key challenges: 2D-lifting approaches suffer from geometric and appearance inconsistencies across views, 3D implicit representations are hard to scale up, and current 3D foundation models are mostly object-centric, limiting their applicability to scene-level generation. Our key insight is leveraging strong generation priors from pre-trained 3D models for structured scene block generation. To this end, we propose WorldGrow, a hierarchical framework for unbounded 3D scene synthesis. Our method features three core components: (1) a data curation pipeline that extracts high-quality scene blocks for training, making the 3D structured latent representations suitable for scene generation; (2) a 3D block inpainting mechanism that enables context-aware scene extension; and (3) a coarse-to-fine generation strategy that ensures both global layout plausibility and local geometric/textural fidelity. Evaluated on the large-scale 3D-FRONT dataset, WorldGrow achieves SOTA performance in geometry reconstruction, while uniquely supporting infinite scene generation with photorealistic and structurally consistent outputs. These results highlight its capability for constructing large-scale virtual environments and potential for building future world models.
That seems to compare against other "generating infinite 3D worlds" approaches, but not against traditional PCG, which would give you all of that, just with higher quality, better performance, and more/better control.
An unpublished paper.
Can't wait for the new Diablo :)
It looks more like The Stanley Parable.
With a quarter the size of the development team, 'cause productivity!
This is cool. And could be fun in games. Not sure I get the point otherwise... The thought that came to mind was "Architectural slop".
Games have used procedural world generation since at least the 1980s, and on 8-bit home computers. Glancing through the video and webpage, the results don't look much different from what's possible with traditional Wave Function Collapse tbh.
This is great and really cool! Thank you for sharing.
Is it just me, or are some of the places it generates just not realistic? Like a small area that's pure dead space, with a giant window looking into it.
It's not just you. The generated stuff - in my opinion - doesn't make any sense at all, with regard to structure or meaning. Unless, perhaps, the aim was to generate some kind of badly designed Ikea store.
Yeah, I think either the method doesn't work well, or there is something off with their tuning.
Their block-by-block generation method seems to be too local in its considerations: each 3x3 section (i.e., the ones generated based on the immediate neighbors) looks a lot more coherent than the 4x4 sections and above. I think it might need to be extended to be less local, and in general it might also need to be paired with some sort of guidance system (e.g., something that would lay out the overall floor plan in the office example).
Not only not realistic but also not explicit: not so much as a peachy bottom in sight.