Alan Turing was fascinated with morphogenesis, and wrote an article titled "The Chemical Basis of Morphogenesis". He modeled cell growth as a reaction-diffusion system, producing what are now known as Turing patterns. And he was firmly against, and intended to defeat, the religious claptrap pseudoscience we now call "Intelligent Design".
I typed in the preface to "Morphogenesis: Collected Works of A.M. Turing", and scanned the drawing inside the front cover by Alan Turing's mother of her son watching the daisies grow:
If for some irrational reason you choose to believe in the pseudoscience of Intelligent Design, then you might not like to hear what Turing thought about that, which P. T. Saunders mentions in the foreword to Turing's collected works, citing Robin Gandy's remark as quoted in Hodges's "excellent biography":
For Turing, however, the fundamental problem of biology had always been to account for pattern and form, and the dramatic progress that was being made at that time in genetics did not alter his view. And because he believed that the solution was to be found in physics and chemistry it was to these subjects and the sort of mathematics that could be applied to them that he turned. In my view, he was right, but even someone who disagrees must be impressed by the way in which he went directly to what he saw as the most important problem and set out to attack it with the tools that he judged appropriate to the task, rather than those which were easiest to hand or which others were already using. What is more, he understood the full significance of the problem in a way that many biologists did not and still do not. We can see this in the joint manuscript with Wardlaw which is included in this volume, but it is clear just from the comment he made to Robin Gandy (Hodges 1983, p. 431) that his new ideas were "intended to defeat the argument from design".
This single remark sums up one of the most crucial issues in contemporary biology. The argument from design was originally put forward as a scientific proof of the existence of God. The best known statement of it is William Paley's (1802) famous metaphor of a watchmaker. If we see a stone on some waste ground we do not wonder about it. If, on the other hand, we were to find a watch, with all its many parts combining so beautifully to achieve its purpose of keeping accurate time, we would be bound to infer that it had been designed and constructed by an intelligent being. Similarly, so the argument runs, when we look at an organism, and above all at a human being, how can we not believe that there must be an intelligent Creator?
Turing was not, of course, trying to refute Paley; that had been done almost a century earlier by Charles Darwin. But the argument from design had survived, and was, and indeed remains, still a potent force in biology. For the essence of Darwin's theory is that organisms are created by natural selection out of random variations. Almost any small variation can occur; whether it persists and so features in evolution depends on whether it is selected. Consequently we explain how a certain feature has evolved by saying what advantage it gives to the organism, i.e. what purpose it serves, just as if we were explaining why the Creator has designed the organism in that way. Natural selection thus takes over the role of the Creator, and becomes "The Blind Watchmaker" (Dawkins 1986).
If you would permit me to change/hijack the topic temporarily: please do consider compiling all your comments like these (including history and anecdotes on all things PostScript), spread over different forums, from Hacker News through Usenet.
I think I will not be far off, speaking for the community, in saying that many would love to own a copy. If that costs an arm and a leg, so be it.
Another side note: any chance of seeing the Usenet archives back? I was very happy when Google purchased them. I thought we would get a better UI and better-indexed presentation... then it disappeared.
Kagi search has this to say about the difference between differential growth and L-systems (plot spoiler: L-systems are maths-based and address mostly branching phenomena; differential growth derives from the fact that within a single organism the growth rate is uneven):
One of the things that attracted me to 3D was Maya’s magnificent paint effects system, which is lsystem-based. This was begging to be spun off as a separate product.
My bad. I did not know that. Won’t make that mistake again. Pasting below the key info.
Differential Growth and L-systems are both concepts used in modeling biological growth, particularly in plants, but they approach the subject from different angles.
Differential Growth
Definition: Differential growth refers to the varying rates of growth in different parts of an organism, leading to shape formation and structural changes. This concept is crucial in understanding how plants adapt their forms in response to environmental stimuli (like light and gravity) and internal signals (like hormones).
Mechanism: It involves the controlled distribution of growth factors and varying growth rates among different tissues. For example, in plants, differential growth can lead to bending or twisting of stems and leaves, as seen in the formation of the apical hook during germination [1][2].
Applications: This concept is used in various fields, including biology, architecture, and design, to create models that simulate how structures grow and change over time.
L-systems (Lindenmayer Systems)
Definition: L-systems are a mathematical formalism introduced by Aristid Lindenmayer in 1968 for modeling the growth processes of plants. They use a set of rules (productions) to rewrite strings of symbols, which can represent different parts of a plant.
Mechanism: An L-system starts with an initial string (axiom) and applies production rules to generate new strings iteratively. These strings can be interpreted graphically to create complex plant structures. L-systems can be context-free or context-sensitive, allowing for a wide variety of growth patterns.
Applications: L-systems are widely used in computer graphics for simulating plant growth, generating fractals, and even in architectural design.
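The string-rewriting mechanism described above takes only a few lines to implement. Here is a minimal Python sketch, using Lindenmayer's original algae system (A → AB, B → A) as the example:

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite a string: every symbol that has a production
    rule is replaced in parallel; symbols without one are kept as-is."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae model: A -> AB, B -> A.
print(lsystem("A", {"A": "AB", "B": "A"}, 4))  # ABAABABA
```

Interpreting the resulting string graphically (turtle-style, with symbols for "draw", "turn", and "push/pop state") is what turns these strings into branching plant shapes.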
Differential L-systems
Integration: Recent developments have combined differential growth principles with L-systems, known as differential L-systems. This approach allows for more realistic simulations of plant growth by incorporating the effects of differential growth rates into the L-system framework.
Functionality: In differential L-systems, the growth rules can depend on local conditions, such as the density of neighboring structures or external environmental factors, enhancing the realism of the generated models.
Summary
Differential Growth focuses on how different parts of an organism grow at different rates due to various factors, leading to complex shapes.
L-systems provide a rule-based framework for simulating plant growth through string rewriting.
The combination of both concepts in differential L-systems allows for advanced modeling that captures both the structural complexity and the dynamic nature of biological growth.
References
[1] Differential growth and shape formation in plant organs www.ncbi.nlm.nih.gov
[2] A Model of Differential Growth-Guided Apical Hook Formation in Plants www.ncbi.nlm.nih.gov
Please don't litter HN with LLM-generated slop; there's more than enough of it out there as is. The value of HN is the human discussion. I'm sure each and every one of us is capable of typing a question into an input field if we please. Some parts of the internet are already dead, with LLMs chatting with other bots; let's not make HN that place.
Thank you, I know what L-systems are: a semi-Thue grammar which is not really a Chomsky grammar - the way the production rules are applied differs. They are named after the famous biologist Lindenmayer, thus the name. I've been teaching these.
Now, my question was whether this is an L-system or another kind, not what L-systems are. As far as I can tell from your reply, the plug-in does not use one. Thanks.
Anyone know if it's possible to do what TouchDesigner does with point grids, but in Blender or any other open-source way? I really want to create those point-cloud 3D audio-reactive experiences, but I'm struggling to gain traction.
Wow, this is an incredibly detailed and fascinating exploration of differential growth in Blender! I appreciate how you've broken down the process and provided such clear examples. It's inspiring to see how mathematical concepts can be translated into such beautiful visual art. Thanks for sharing your insights and techniques—this is a great resource for anyone interested in generative design!
Blender is an amazing piece of software.
A few years ago I asked myself "Why spend hundreds of hours sucking at video games when I could spend the same time sucking at Blender?"
Since then I have spent many an enjoyable evening making terrible 3d models, some of which actually made it into a game. Apart from my lack of skill, there is no reason why somebody like me can't do world-class renders in a piece of software they downloaded for free. It isn't even that hard to use any more.
I just recently had this revelation! I a full time software dev who has dabbled in game dev for years, but i’ve always given up on ideas because i can’t make “good” art/assets. just a couple of months ago it dawned on me that i love inept/amateurish/DIY/outsider art in most other mediums (except writing maybe) and decided to just put time into to awkward crappy looking models. and i love them! now i’m just trying to create a distinctive shambolic aesthetic for my tiny games. it’s so freeing.
"distinctive shambolic aesthetic", what a great phrase. I use the friend of every hack fraud - "extreme stylization" to cover a multitude of sins.
Somebody actually nominated my interactive fiction game for a best graphics ribbon, which amused me no end.
I have often thought that we spend too much time studying and trying to emulate the great artists, musicians, and writers. It is more productive to see what the mediocre talents are doing, how their works succeed, and try to copy their techniques. Even if you fail you will find your own voice and produce something distinctive.
I agree - one of those bits of software you can't believe is free. I've also done some pretty terrible modelling, even my doughnuts suck.
Those are the best kind!
> "Why spend hundreds of hours sucking at video games when I could spend the same time sucking at Blender?"
Please come and repeat this in front of my students.
Maybe also check out this free Blender geometry nodes differential growth add-on from the brilliant Alex Martinelli…
https://www.blendernation.com/2023/07/25/differential-growth...
I think a cool addition would be to add a light source, and inhibit growth when a vertex doesn't receive light.
While a plant growing in shade might grow slower than a plant growing in sunlight, a part of a plant growing in shade will grow faster than the parts growing in sunlight (mechanically, this is done using auxin plant hormones[0]). This is how a plant is able to "chase" sunlight. Picture a flower growing vertically with one half shaded, the other half sunned. If the sunny side grew faster, it would become longer than the shady side, causing it to curve into the shade, potentially killing the plant. Instead, the shady side "pushes" the plant into the sun.
[0] https://en.wikipedia.org/wiki/Auxin
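The "shady side pushes the plant into the sun" idea shows up nicely in a toy simulation. The sketch below is an illustration only (hand-picked constants, not botany or Blender code): a 2D stem grows tip-first, and at each step the heading rotates slightly toward the light, standing in for the shaded side elongating more:

```python
import math

def grow_stem(light=(10.0, 10.0), steps=60, seg=0.2, bend=0.1):
    """Grow a 2D stem toward a light source, segment by segment."""
    x, y = 0.0, 0.0
    heading = math.pi / 2            # start growing straight up
    pts = [(x, y)]
    for _ in range(steps):
        # Direction from the tip to the light source.
        to_light = math.atan2(light[1] - y, light[0] - x)
        # Signed heading error, wrapped to [-pi, pi]: which side is shaded.
        diff = (to_light - heading + math.pi) % (2 * math.pi) - math.pi
        # Faster elongation on the shaded side = rotation toward the light.
        heading += bend * diff
        x += seg * math.cos(heading)
        y += seg * math.sin(heading)
        pts.append((x, y))
    return pts

tip = grow_stem()[-1]  # the tip ends up leaning toward the light at (10, 10)
```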
Maybe also to inhibit growth when exposed to a saline environment?
If this were recreated in Blender's geo nodes, these functions would be relatively easy to add using the Raycast node.
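As a sketch of what that raycast-based inhibition could look like, here is the idea in plain Python rather than an actual node setup; the sphere occluder and the shade_penalty value are invented for illustration:

```python
import math

def occluded(vertex, light, center, radius):
    """True if a sphere (center, radius) blocks the ray from vertex to light."""
    d = [light[i] - vertex[i] for i in range(3)]
    dist = math.sqrt(sum(v * v for v in d))
    d = [v / dist for v in d]                      # unit direction toward the light
    o = [vertex[i] - center[i] for i in range(3)]  # ray origin relative to the sphere
    b = 2.0 * sum(o[i] * d[i] for i in range(3))
    c = sum(v * v for v in o) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False                               # ray misses the sphere entirely
    root = math.sqrt(disc)
    for t in ((-b - root) / 2.0, (-b + root) / 2.0):
        if 0.0 < t < dist:                         # hit lies between vertex and light
            return True
    return False

def growth_factor(vertex, light, center, radius, shade_penalty=0.2):
    """Per-vertex growth rate: full speed in the light, inhibited in shadow."""
    return shade_penalty if occluded(vertex, light, center, radius) else 1.0
```

In geometry nodes, the same test would be a Raycast node from each vertex toward the light, with the hit result scaling the growth offset.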
Houdini is a fast-moving target, but it seems like both Blender and Unreal Engine are gaining on it, even in core features, not just in plugins. For my particular use case, Blender is the least useful of the three (unless I need to do sculpting and don't have access to a ZBrush license), but it's looking better and better.
Blender's development would explode if they had a C++ API like Maya has. Compile plugin, load plugin, unload plugin, recompile plugin. In Maya, basically everything is a node, be it HUD elements, tools, geometry, modifiers, absolutely everything, so your plugin has access to everything the baked-in nodes have access to; they are not second-class citizens.
From what I understood, you need to recompile Blender to add a new node, even if it is just one which renders a simple line. Blender (like Maya) can do this in Python, but for many things that is not enough.
I’m curious, what are your use cases where Blender isn’t yet a strong enough fit?
I love Blender and was largely responsible for getting our university to switch to Blender from 3ds Max. I have written easily 100 pages of support material for Blender and have been evangelical in converting my colleagues to it. Nonetheless, we lost some things in this move.
I was speaking to someone who worked at a very large animation production studio. They took a serious look at Blender to see if they could accommodate it in their pipeline. This would have saved them a ton of money. I list some of the reasons they decided not to below. At the top of the list are the things that also affected us in our move.
- Max and Maya are insanely fast at loading large files. On our school computers, importing a 5 GB .obj into Blender can take minutes, as opposed to seconds in Max/Maya.
- In Blender, managing large files is similarly slow, and only really workable by using linked proxies.
- In Max/Maya, the Arnold render engine comes with proxy management that makes loading large textures manageable.
- Maya/Max are much better for character animation, though Blender seems to be catching up.
- If Max does become unresponsive, it has cool tools such as delaying the screen redraw for a defined number of seconds.
For the studio in question:
- Any bug they encountered could be addressed overnight by Autodesk support. Maybe Blender has got close to this with its long-term support plan; I don't know.
- Max/Maya are comprehensively documented; Blender is not.
All that being said, Blender is certainly finding a place in smaller studios. Put simply: it inspires love. The refactor and UI redesign a few years ago kick-started this.
The artists I know have very little love for Max/Maya. They use them because they have to. There have been near-zero new features in these apps for years, and Max in particular can be clunky to use. Developers like Tyson Ibele have taken over adding new features with their plugins (check out his tyFlow add-on, which replaces PFlow).
Houdini is another matter. Development has been fast and users love it. I believe that in a few years this will have taken a large chunk out of Autodesk’s business.
Houdini really is a different thing. Its procedural-only focus is all-encompassing, resource intensive, and difficult to combine with other paradigms. People who talk about Blender overtaking Houdini as a professional tool have no clue what they're talking about. It's not that it can't; it shouldn't, because they've got totally different focuses that make for a pretty awkward fit. Blender's sims are making great strides, but to shoot for feature parity would make Blender a worse program: so much unnecessary complexity if you don't need that niche toolkit.
I know more people who love Maya than love Max, which is funny because IMO Max is much better for modeling. Maya, however, really is great for spline-based animation generally, and character animation specifically. Blender has been making big jumps there, though. The reason I'm glad I learned the big proprietary tools in school (Houdini, Maya, Max, ZBrush, Nuke, Mari, etc.) is that it's a much more marketable skill for big studios, and much more difficult to get that experience yourself. Our program's tooling was entirely focused on getting students into the big-studio career pipeline, be it in VFX, animation, tech art, game design, etc. I guarantee that students would have come out of that program better artists, fundamentally, had they learned how to do all of that stuff in Blender. Given my career ambitions, I'm not sad I got what I got, though.
Completely agree that Houdini is not currently in the same class as Blender, Maya, Max, etc. It is a different animal.
That being said, it is hard to watch how much their character animation tools are improving without seeing it as a direct threat to established 'trad' 3D tools.
What fundamentally sets it apart is that (in my super limited experience) under the hood it is closer to being a language than an app in the traditional sense. This makes it more future-proof than its competitors. Lack of future-proofing is what has almost killed Modo, an app I adored. In the end, it proved far too slow, and no amount of updates addressed this fact.
It's not even like a language: it's several languages spread across half a dozen purpose-built environments. I'm definitely interested to see where APEX goes, but it's going to need a lot of work in both the workflow and how they communicate its value before it gets any real adoption outside of some tech artists working on super complex skeletons and rigs. As of now, everything I've seen has been essentially a coding demo with visuals. Working with Advanced Skeleton in Maya is just so elegant and capable; between that and industry inertia, I have a hard time imagining it's going to be the standard anytime soon. I think the real sleeper here is UE. You can model, make skeletons, rig, and paint skin weights all right in there with a pretty smooth UI. I think they're going to lap everyone in like a decade.
I'm setting aside UE because Blender isn't, and was never, trying to be a game engine. When it comes to high-end physics sims of any kind (particles, pyro, RBD, liquid, cloth, hair, etc.), Blender isn't close to Houdini. The Blender toolkit will get you 'something', but the Houdini toolkit will get you 'anything'. More broadly, it's so easy (well, "easy" as in you don't have to modify the program source; this stuff is going to be complex and fussy no matter what) to infinitely control absolutely anything, using everything from node-based no-code stuff (VOPs) that compiles down to performant machine code, to their built-in C-like language (VEX) for performant calculations in complex networks, to popping in nodes where you can write your own OpenCL if you really wanted to, all animatable using anything from keyframes to their built-in non-time-bound math system (CHOPs... I know... the names...) originally built to process wave-level audio. If you're clever enough, you can even use all of that to drive your geometry, audio, and shaders, to some extent simultaneously, without (textual) code. The geometry nodes in Blender are cool, and they seem to be developing them the right way, ensuring quality rather than racing for feature parity with Bifrost or Houdini or wherever, but last I checked you couldn't even animate their parameters. Huge progress, a great tool, and many people's preferred tool for modeling. Frankly, you wouldn't catch me dead sculpting in Houdini, and Houdini has a way to go before its compositor is on par with Blender's, even if the procedural bent makes it more useful for some unusual use cases now. But for the really deep procedural stuff and simulations, anything involving volumes, and also fitting into complex pipelines, we're talking Etch A Sketch vs. oil paints. Imagine if Blender had started as a purely procedural tool and only focused on that use case? The differences would go so much deeper than the feature list would let on.
I think the differences are a lot less significant with other modeling-focused DCCs like Maya, Max, and C4D, all great programs in their own right. Maya really is an incredible tool for character animation, C4D is so killer for motion-graphics-type stuff... but they're all much closer to Blender for their intended use cases.
Blender Game Engine was a thing at one point; it died a long time ago, but now has a maintained fork (UPBGE)!
https://en.wikipedia.org/wiki/Blender_Game_Engine
That’s cool — never heard of it. I’ll have to check it out.
Oh, and I forgot to mention that it’s all glued together with Python, which replaced their previous non-compiled scripting language based on csh. (No shit. Yes, that csh. I believe the first version of Houdini’s precursor came out in the late ’80s. Wasn’t csh the program that spawned the turn of phrase ‘considered harmful’?) You can do pretty much anything with Python, not just glue work and making procedural parameters. But from what I gather, it’s not nearly as performant as VEX or VOPs for doing per-particle or per-voxel operations when you’ve got more than a few million. I’ve never pushed Houdini’s Python that far, though, so that might be old news. VEX and VOPs are stricter on types, but are still garbage collected, etc., so I don’t know exactly where the efficiency lies.
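To make the per-particle performance point concrete: this isn’t Houdini code, just a generic Python/NumPy sketch (the function names are mine) showing why an interpreted per-point loop loses to a compiled whole-array path, which is roughly the gap between scripted Python and something like VEX:

```python
import numpy as np

# A million "particles" with random positions.
rng = np.random.default_rng(0)
pos = rng.random((1_000_000, 3))

def advect_loop(points, dt=0.1):
    """Per-point Python loop: every iteration pays interpreter overhead."""
    out = np.empty_like(points)
    for i in range(len(points)):
        out[i] = points[i] + dt * np.sin(points[i])
    return out

def advect_vectorized(points, dt=0.1):
    """Whole-array operation: the inner loop runs in compiled C."""
    return points + dt * np.sin(points)

# Both produce identical results; time them yourself with timeit to see
# the vectorized version win by a couple of orders of magnitude.
```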
This is really cool, I'd echo other comments here that ask for a math explainer - I'd love to understand exactly what's going on under the hood.
https://inconvergent.net/generative/differential-line/
The differential meshes they show there remind me of holomorphic functions like [1]. Is there a connection between such generative processes and minimal surfaces / curvature?
https://www.pngwing.com/en/free-png-bvwbg
The real and imaginary parts of holomorphic functions are solutions of Laplace's equation u_xx + u_yy = 0 (i.e. they are harmonic). Differential meshes seem to be generated by a similar equation.
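For what it's worth, the harmonicity falls straight out of the Cauchy-Riemann equations; a quick sketch:

```latex
f(z) = u(x,y) + i\,v(x,y) \text{ holomorphic}
\;\Rightarrow\; u_x = v_y,\quad u_y = -v_x \quad\text{(Cauchy-Riemann)}
```

Differentiating and using equality of mixed partials:

```latex
u_{xx} = v_{yx} = v_{xy} = -u_{yy}
\;\Rightarrow\; u_{xx} + u_{yy} = 0 \quad\text{(and likewise for } v\text{)}
```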
Great set of animated blog posts
It really is beautifully amazing how one cell can keep dividing and even the blood vessels end up roughly the same in everyone.
It is awe-inspiring. However, from another angle you could say “It really is beautifully amazing how the same source code compiles roughly the same on everyone’s computer”.
Kinda. There is far more going on in biology than in our computer systems, though. Compiling code is generally done without much of the world changing concurrently around it. Also, with repeatable compilation you get exactly the same output every time; change the output just a little and the whole thing potentially fails. Meanwhile, in biology, people and creatures are born with quite different attributes and can still thrive. The success of a species often relies on the fact that there is mutation over time. Even just the insane interconnectedness within a single biological organism dwarfs the complexity of any computer system on earth.
Alan Turing was fascinated with morphogenesis and wrote a paper titled "The Chemical Basis of Morphogenesis". He modeled cell growth as a reaction-diffusion system, producing what are now known as Turing patterns. And he was firmly against, and intended to defeat, the religious claptrap pseudoscience we now call "Intelligent Design".
https://en.wikipedia.org/wiki/Morphogenesis
https://en.wikipedia.org/wiki/The_Chemical_Basis_of_Morphoge...
https://en.wikipedia.org/wiki/Reaction%E2%80%93diffusion_sys...
https://en.wikipedia.org/wiki/Turing_pattern
https://en.wikipedia.org/wiki/Intelligent_design
https://www.dna.caltech.edu/courses/cs191/paperscs191/turing...
https://www.goodreads.com/book/show/1701864.Morphogenesis
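For anyone who wants to watch a Turing-style pattern emerge, here is a minimal Gray-Scott reaction-diffusion sketch in plain NumPy. Gray-Scott is one standard model in the reaction-diffusion family Turing studied, not the equations from his paper; the parameter values are typical ones from the literature and the function names are mine:

```python
import numpy as np

def laplacian(Z):
    """Five-point stencil with periodic (wrap-around) boundaries."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

def gray_scott(n=128, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Two chemicals U and V react and diffuse; patterns emerge from a seed."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Seed a small square of V in the middle to break symmetry.
    m = n // 2
    U[m-5:m+5, m-5:m+5] = 0.50
    V[m-5:m+5, m-5:m+5] = 0.25
    for _ in range(steps):
        uvv = U * V * V
        U += Du * laplacian(U) - uvv + f * (1 - U)
        V += Dv * laplacian(V) + uvv - (f + k) * V
    return U, V

# U, V = gray_scott()
# Visualize V (e.g. matplotlib's imshow) to see spots and stripes form.
```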
I typed in the preface to "Morphogenesis: Collected Works of A.M. Turing", and scanned the drawing inside the front cover by Alan Turing's mother of her son watching the daisies grow:
https://imgur.com/AX6Bg9q
http://donhopkins.com/home/archive/Turing/Morphogenesis.txt
If for some irrational reason you choose to believe in the pseudoscience of Intelligent Design, then you might not like to hear what Turing thought about it. P. T. Saunders mentions it in the foreword to Turing's collected works, citing Hodges's "excellent biography", which quotes Robin Gandy:
For Turing, however, the fundamental problem of biology had always been to account for pattern and form, and the dramatic progress that was being made at that time in genetics did not alter his view. And because he believed that the solution was to be found in physics and chemistry it was to these subjects and the sort of mathematics that could be applied to them that he turned. In my view, he was right, but even someone who disagrees must be impressed by the way in which he went directly to what he saw as the most important problem and set out to attack it with the tools that he judged appropriate to the task, rather than those which were easiest to hand or which others were already using. What is more, he understood the full significance of the problem in a way that many biologists did not and still do not. We can see this in the joint manuscript with Wardlaw which is included in this volume, but it is clear just from the comment he made to Robin Gandy (Hodges 1983, p. 431) that his new ideas were "intended to defeat the argument from design".
This single remark sums up one of the most crucial issues in contemporary biology. The argument from design was originally put forward as a scientific proof of the existence of God. The best known statement of it is William Paley's (1802) famous metaphor of a watchmaker. If we see a stone on some waste ground we do not wonder about it. If, on the other hand, we were to find a watch, with all its many parts combining so beautifully to achieve its purpose of keeping accurate time, we would be bound to infer that it had been designed and constructed by an intelligent being. Similarly, so the argument runs, when we look at an organism, and above all at a human being, how can we not believe that there must be an intelligent Creator?
Turing was not, of course, trying to refute Paley; that had been done almost a century earlier by Charles Darwin. But the argument from design had survived, and was, and indeed remains, still a potent force in biology. For the essence of Darwin's theory is that organisms are created by natural selection out of random variations. Almost any small variation can occur; whether it persists and so features in evolution depends on whether it is selected. Consequently we explain how a certain feature has evolved by saying what advantage it gives to the organism, i.e. what purpose it serves, just as if we were explaining why the Creator has designed the organism in that way. Natural selection thus takes over the role of the Creator, and becomes "The Blind Watchmaker" (Dawkins 1986).
If you'll permit me to change/hijack the topic temporarily: please do consider compiling all your comments like these (including the history and anecdotes on all things PostScript), spread as they are over different forums, from Hacker News through Usenet.
I think I will not be far off, speaking for the community, in saying that many would love to own a copy. If it costs an arm and a leg, so be it.
Another side note: any chance of seeing the Usenet archives back? I was very happy when Google purchased them and thought we would get a better UI and better-indexed presentation... then it all disappeared.
You can still access Usenet posts:
https://www.big-8.org/wiki/Web-to-news_gateways
Thanks.
It’s… beautiful.
This is one of those moments where I can feel my capabilities grow in real time. Thank you for making this, and thank you for making this open source.
But is this L-system based, or something else? Why does neither the page nor the GitHub repo say anything about the math behind the beauty?
Jason Webb's blog post (linked in the GitHub README) explains the math really well.
https://medium.com/@jason.webb/2d-differential-growth-in-js-...
Kagi search has this to say about the difference between differential growth and L-systems (plot spoiler: L-systems are maths-based and mostly address branching phenomena; differential growth derives from the fact that growth rate is uneven within a single organism):
https://kagi.com/search?q=differential+Growth+vs+L+system%3F...
One of the things that attracted me to 3D was Maya's magnificent Paint Effects system, which is L-system-based. It was begging to be spun off as a separate product.
Someone without an active Kagi account won’t be seeing the LLM’s quick answer, FYI.
My bad, I did not know that. I won't make that mistake again. Pasting the key info below.
Differential Growth and L-systems are both concepts used in modeling biological growth, particularly in plants, but they approach the subject from different angles.
Differential Growth
Definition: Differential growth refers to the varying rates of growth in different parts of an organism, leading to shape formation and structural changes. This concept is crucial in understanding how plants adapt their forms in response to environmental stimuli (like light and gravity) and internal signals (like hormones).
Mechanism: It involves the controlled distribution of growth factors and varying growth rates among different tissues. For example, in plants, differential growth can lead to bending or twisting of stems and leaves, as seen in the formation of the apical hook during germination [1][2].
Applications: This concept is used in various fields, including biology, architecture, and design, to create models that simulate how structures grow and change over time [3].
L-systems (Lindenmayer Systems)
Definition: L-systems are a mathematical formalism introduced by Aristid Lindenmayer in 1968 for modeling the growth processes of plants. They use a set of rules (productions) to rewrite strings of symbols, which can represent different parts of a plant.
Mechanism: An L-system starts with an initial string (axiom) and applies production rules to generate new strings iteratively. These strings can be interpreted graphically to create complex plant structures. L-systems can be context-free or context-sensitive, allowing for a wide variety of growth patterns [4][5].
Applications: L-systems are widely used in computer graphics for simulating plant growth, generating fractals, and even in architectural design [6].
Differential L-systems
Integration: Recent developments have combined differential growth principles with L-systems, known as differential L-systems. This approach allows for more realistic simulations of plant growth by incorporating the effects of differential growth rates into the L-system framework [7][8].
Functionality: In differential L-systems, the growth rules can depend on local conditions, such as the density of neighboring structures or external environmental factors, enhancing the realism of the generated models [4][6].
Summary
Differential Growth focuses on how different parts of an organism grow at different rates due to various factors, leading to complex shapes.
L-systems provide a rule-based framework for simulating plant growth through string rewriting.
The combination of both concepts in differential L-systems allows for advanced modeling that captures both the structural complexity and the dynamic nature of biological growth.
References
[1] Differential growth and shape formation in plant organs www.ncbi.nlm.nih.gov
[2] A Model of Differential Growth-Guided Apical Hook Formation in Plants www.ncbi.nlm.nih.gov
[3] Interactive differential growth simulation for design - GitHub Pages em-yu.github.io
[4] (PDF) Modeling Growth with L-Systems & Mathematica www.researchgate.net
[5] Modeling plant development with L-systems - Algorithmic Botany algorithmicbotany.org
[6] [PDF] L-systems and partial differential equations∗ - Algorithmic Botany algorithmicbotany.org
[7] Differential L-Systems Part 1 | Houdini 20 - YouTube www.youtube.com
[8] Differential L-Systems Part 2 | Houdini 20 - YouTube www.youtube.com
Please don't litter HN with LLM generated slop, there's more than enough of it out there as is. The value of HN is the human discussion. I'm sure each and every one of us is capable of writing a question in an input if they please. Some sides of the internet are already dead, with LLMs chatting with other bots, let's not make HN that place.
You are quite right. I won't do it again.
No, L-systems are grammar-based rewriting systems. Have a look at my simple 2D generator here: https://m__nick.gitlab.io/l-systems/#Fractal
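For the curious, the rewriting mechanism itself fits in a few lines. This is a generic sketch of a deterministic context-free L-system (a D0L-system), not the generator's actual code:

```python
def lsystem(axiom, rules, n):
    """Rewrite every symbol in parallel, n times (a D0L-system).

    Symbols with no production rule are copied through unchanged.
    """
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    return s

# Lindenmayer's original algae example: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
print(lsystem("A", algae, 4))  # ABAABABA
```

The resulting strings are then interpreted graphically (e.g. turtle graphics, with `+`/`-` as turns and `[`/`]` as branch push/pop) to draw the plant.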
Thank you, I know what L-systems are: a semi-Thue grammar, though not really a Chomsky grammar, since the way the production rules are applied differs (they are applied in parallel). They are named after the famous biologist Aristid Lindenmayer, thus the name. I've been teaching these.
Now, my question was whether this is an L-system or something else, not what L-systems are. As far as I can tell from your reply, the plug-in is not L-system-based. Thanks.
Anyone know if it's possible to do what TouchDesigner does with point grids, but in Blender or some other open-source way? I really want to create those audio-reactive 3D point-cloud experiences, but I'm struggling to gain traction.
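The audio-reactive core is tool-agnostic: take an FFT of an audio frame and map spectrum energy onto per-point displacements. Here's a hedged NumPy sketch of just that math (the function name and binning scheme are mine); in Blender you could feed the resulting heights to a point grid via the Python API or a geometry-nodes attribute:

```python
import numpy as np

def audio_reactive_heights(frame, grid_n=64):
    """Map an audio frame's magnitude spectrum onto a grid_n x grid_n
    grid of point heights in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame))        # magnitude spectrum
    # Resample the spectrum down to one value per grid row (crude binning).
    bins = np.interp(np.linspace(0, len(spectrum) - 1, grid_n),
                     np.arange(len(spectrum)), spectrum)
    bins /= bins.max() + 1e-9                    # normalize to [0, 1]
    # Every point in a row rises with that row's frequency-bin energy.
    return np.tile(bins[:, None], (1, grid_n))

# Example: a 440 Hz sine sampled at 44.1 kHz concentrates its energy
# in a narrow low-frequency band of the grid.
t = np.arange(2048) / 44100.0
heights = audio_reactive_heights(np.sin(2 * np.pi * 440 * t))
```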
Wow, this is an incredibly detailed and fascinating exploration of differential growth in Blender! I appreciate how you've broken down the process and provided such clear examples. It's inspiring to see how mathematical concepts can be translated into such beautiful visual art. Thanks for sharing your insights and techniques—this is a great resource for anyone interested in generative design!
[dead]