K0nserv 2 days ago

I've been thinking about the notion of "reasoning locally" recently. Enabling local reasoning is the only way to scale software development past some number of lines or complexity. When reasoning locally, one only needs to understand a small subset, hundreds of lines, to safely make changes in programs comprising millions.

I find types help massively with this. A function with well-constrained inputs and outputs is easy to reason about. One does not have to look at other code to do it. However, programs that leverage types effectively are sometimes construed as having high cognitive load, when in fact they have low load. For example, a type like `Option<HashSet<UserId>>` carries a lot of information (has low load): we might not have a set of user ids, but if we do, they are unique.
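A minimal sketch of how that type carries information on its own (Rust; `UserId` and the function name are invented for illustration):

```rust
use std::collections::HashSet;

type UserId = u64; // assumed alias, purely for illustration

// The signature alone tells the reader: the set may be absent, and when
// present the ids are deduplicated. No other code needs to be consulted.
fn count_users_to_notify(users: Option<HashSet<UserId>>) -> usize {
    match users {
        None => 0,              // no set at all
        Some(ids) => ids.len(), // each unique user counted once
    }
}
```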

The discourse around small functions and the clean code guidelines is fascinating. The complaint is usually, as in this post, that having to go read all the small functions adds cognitive load and makes reading the code harder. Proponents of small functions argue that you don't have to read more than the signature and name of a function to understand what it does; it's obvious what a function called `last` that takes a list and returns an optional value does. If someone feels compelled to read every function, either the functions are poor abstractions or the reader has trust issues, which may be warranted. Of course, all abstractions are leaky, but perhaps some initial trust in `last` is warranted.
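A sketch of that `last` example (one plausible Rust signature; the point is that the name plus the types say everything a caller needs):

```rust
// Given a list, you may get its last element, or nothing if it is empty.
// A caller rarely needs to read the body to trust this.
fn last<T>(items: &[T]) -> Option<&T> {
    items.iter().last()
}
```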

  • 0xFACEFEED 2 days ago

    > A function with well-constrained inputs and outputs is easy to reason about.

    It's quite easy to imagine a well factored codebase where all things are neatly separated. If you've written something a thousand times, like user authentication, then you can plan out exactly how you want to separate everything. But user authentication isn't where things get messy.

    The messy stuff is where the real world concepts need to be transformed into code. Where just the concepts need to be whiteboarded and explained because they're unintuitive and confusing. Then these unintuitive and confusing concepts need to somehow be described to the computer.

    Oh, and it needs to be fast. So not only do you need to model an unintuitive and confusing concept - you also need to write it in a convoluted way because, for various annoying reasons, that's what performs best on the computer.

    Oh, and in 6 months the unintuitive and confusing concept needs to be completely changed into - surprise, surprise - a completely different but equally unintuitive and confusing concept.

    Oh, and you can't rewrite everything because there isn't enough time or budget to do that. You have to minimally change the current unintuitive and confusing thing so that it works like the new unintuitive and confusing thing is supposed to work.

    Oh, and the original author doesn't work here anymore so no one's here to explain the original code's intent.

    • justinram11 2 days ago

      > Oh, and the original author doesn't work here anymore so no one's here to explain the original code's intent.

      To be fair, even if I still work there, I don't know that I'm going to be of much help 6 months later other than an "oh yeah, I remember that had some weird business requirements".

      • stouset 2 days ago

        Might I recommend writing those weird business requirements down as comments instead of just hoping someone will guess them six months down the line?

        • mnsc 2 days ago

          So even if comments are flawlessly updated they are not a silver bullet. Not everyone is good at explaining confusing concepts in plain English, so in the worst case you have confusing code and a comment that is 90% accurate but describes one detail in a way that doesn't really match what the code says. This will make you question whether you have understood what the code does, and it will take time and effort to convince yourself that the code is in fact deterministic and unsurprising.

          (but most often the comment is just not updated, or updated along with the code but without full understanding, which is what caused the bug that is the reason you are looking at the code in question)

          • michaelcampbell a day ago

            > So even if comments are flawlessly updated they are not a silver bullet.

            This "has to be perfect in perpetuity or it is of no value" mentality I don't find helpful.

            Be kind to FutureDev. Comment the weird "why"s. If you need to change it later, adjust the comment.

            • mnsc a day ago

              I don't think comments need to be perfect to have value. My point was that if a certain piece of code is solving a particularly confusing problem in the domain, explaining it in a comment doesn't _necessarily_ mean the code will be less confusing to future dev if the current developer is not able to capture the issue in plain English. Future dev would be happier I think with putting more effort into refactoring and making the code more readable and clear. When that fails, a "here be dragons" comment is valuable.

              • MichaelZuo a day ago

                They can write a very long comment explaining why it is confusing them in X, Y, Z vague ways. Or even multilingual comments if they have better writing skills in another language.

                And even if they don’t know themselves why they are confused, they can still describe how they are confused.

                • stouset a day ago

                  And any attempt whatsoever is some improvement over doing nothing and wishing luck to the next guy.

                • mnsc a day ago

                  And that time spent writing a small paper in one's native language would be better spent trying to make the code speak for itself. Maybe get some help, pair up and tackle the complexity. And when both/all involved are like, we can't make this any clearer and it's still confusing af. _Then_ it's time to write that lengthy comment for future poor maintainers.

                  • Ma8ee 10 hours ago

                    You can only do the “what” with clearer code. The “why” needs some documentation. Even if it is obvious what the strange conditionals do, someone needs to have written down that this particular code is there because of the special exemption from import tariffs on cigarettes under the trade agreement between Serbia and Tunis that was valid between the years 1992 and 2007.
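                    A sketch of that point in Rust (the function, rule, and dates here are invented for illustration; the parent's tariff example is hypothetical to begin with). The conditional itself is trivial; only the comment can carry the "why":

```rust
// WHY: cigarettes originating in Serbia were exempt from import tariffs
// under the (hypothetical) Serbia-Tunis trade agreement, valid between
// 1992 and 2007. Without this comment, the business reason is lost.
fn tariff_exempt(product: &str, origin: &str, year: u32) -> bool {
    product == "cigarettes" && origin == "Serbia" && (1992..=2007).contains(&year)
}
```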

                    • mnsc an hour ago

                      This is where a good comment really can help! And in these types of domains I would guess/hope that there exists some project master list to cross-reference, one that will send both developers and domain experts to the same source for "tariff-EU-92-0578", specifically the section 'exemptions'. So the comment is not just a whole paragraph copied in between a couple of /*/.

            • chairmansteve a day ago

              Thing is, good documentation has to be part of the company's process. eg, a QA engineer would have to be responsible for checking the documentation and certifying it. Costs money and time.

              You can't expect developers, already working 60 hour weeks to meet impossible deadlines, to spend another 15 hours altruistically documenting their code.

              • ropable a day ago

                Any documentation at all > no documentation, 99 times out of 100. And requiring your people to work 60 hours/week is symptomatic of larger problems.

                • switchbak 17 hours ago

                  How about old, out-of-date documentation that is actively misleading? Because that’s mostly what I run into, and it’s decidedly worse than no documentation.

                  Give me readable code over crappy documentation any day. In an ideal world the docs would be correct all of the time; apparently I don’t live in that world, and I’ve grown tired of listening to those who claim we just need to try harder.

              • yearolinuxdsktp a day ago

                Every line of documentation is a line of code and a liability, as it will rot if not maintained. That’s why you should be writing self-documenting code as much as possible, code that obviates the need for documentation. Because unlike code, stale/wrong docs will not break tests.

                Spending 15 hours documenting the code is something no leader should be asking engineering to do. You should not need to do it. Go back and write better code, one that’s clearer at a glance, easily readable, uses small functions written at a comparable level of abstraction, and uses clear, semantically meaningful names.

                Before you write a line of documentation, you should ask yourself whether the weird thing you were about to document can be expressed directly in the name of the method or the variable instead. Only once you have exhausted all the options for expressing the concept in code, then, only then, are you allowed to add the line of the documentation regarding it.
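                A sketch of that rule with invented names: the weird condition is expressed in a semantically meaningful helper rather than a comment (the GDPR cutoff year here is made up for illustration):

```rust
// Instead of an inline check plus an explanatory comment:
//     // skip accounts created before the GDPR cutoff
//     if account_year < 2018 { ... }
// name the concept directly, so the "comment" lives in the code itself:
fn created_before_gdpr_cutoff(account_year: u32) -> bool {
    account_year < 2018
}
```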

                • RaftPeople 5 hours ago

                  > Only once you have exhausted all the options for expressing the concept in code, then, only then, are you allowed to add the line of the documentation regarding it.

                  But that's what people are talking about when talking about comments. The assumption is that the code is organized and named well already.

                  The real world of complexity is way beyond the expressiveness of code, unless you want function names like:

                  prorateTheCurrentDaysPartialFactoryReceiptsToYesterdaysStateOfSalesOrderAllocationsInTheSamePrioritySequenceThatDrivesFinancialReportingOfOwnedInventoryByBusinessUnit()

                  The code that performs this function is relatively simple, but the multiple concepts involved in the WHY and HOW are much less obvious.

              • actionfromafar a day ago

                Or you know, work the devs 40 hour weeks and make sure documentation is valued. Everything costs one way or another, it's all trade-off turtles all the way down.

              • sanderjd a day ago

                Don't let perfect be the enemy of good.

                "We don't write any documentation because we can't afford a dedicated QA process to certify it" <- that's dumb.

            • bccdee a day ago

              Yeah: "what if this code becomes tech debt later" applies to everything, not just comments. It's a tradeoff.

              The best thing you can do to avoid creating debt for later maintainers is to write code that's easy to delete, and adding comments helps with that.

          • rtpg 2 days ago

            An outdated comment is still a datapoint! Including if the comment was wrong when it was first written!

            We live in a world with version history, repositories with change requests, communications… code comments are a part of that ecosystem.

            A comment that is outright incorrect at inception still has value: it is at least an attempt by the writer to describe their internal understanding of things.

            • more-coffee a day ago

              This. I have argued with plenty of developers on why comments are useful, and the counter arguments are always the same.

              I believe it boils down to a lack of foresight. At some point in time, someone is going to revisit your code, and even just a small `// Sorry this is awful, we have to X but this was difficult because of Y` will go a long way.

              While I (try to) have very fluid opinions in all aspects of programming, the usefulness of comments is not something I (think!) I'll ever budge on. :)

              • temporallobe a day ago

                > // Sorry this is awful, we have to X but this was difficult because of Y

                You don’t know how many times I’ve seen this with a cute little GitLens inline message of “Brian Smith, 10 years ago”. If Brian couldn’t figure it out 10 years ago, I’m not likely going to attempt it either, especially if it has been working for 10 years.

                • larsrc a day ago

                  But knowing what Brian was considering at the time is useful, both for avoiding redoing that and for realising that some constraints may have been lifted.

            • xnx a day ago

              We should call them code clues

            • buttercraft a day ago

              What if you don't know that the comment is wrong?

              • lexicality a day ago

                IMO the only thing you can assume is that the person who wrote the comment wasn't actively trying to deceive you. You should treat all documentation, comments, function names, commit messages etc with a healthy dose of scepticism because no one truly has a strong grip on reality.

              • rtpg a day ago

                Right, unlike code (which does what it does, even if that isn't what the writer meant) there's no real feedback loop for comments. Still worth internalizing the info based on that IMO.

                "This does X" as a comment when it in fact does Y in condition Z means that the probability you are looking at a bug goes up a bit! Without the comment you might not be able to identify that Y is not intentional.

                Maybe Y is intentional! In which case the comment that "this is intentional" is helpful. Perhaps the intentionality is also incorrect, and that's yet another data point!

                Fairly rare for there to be negative value in comments.

            • temporallobe a day ago

              It just occurred to me that perhaps this is where AI might prove useful. Functions could have some kind of annotation that triggers AI to analyze the function and explain it in plain language when you do something like hover over the function name in the IDE, or you can have a prompt where you can interact with that piece of code and ask it questions. Obviously this would mean developer-written comments would be less likely to make it into the commit history, but it might be better than nothing, especially in older codebases where the original developer(s) are long gone. Maybe this already exists, but I’m too lazy to research that right now.

              • bccdee a day ago

                But then could you trust it not to hallucinate functionality that doesn't exist? Seems as risky as out-of-date comments, if not more.

                What I'd really like is an AI linter that noticed if you've changed some functionality referenced in a comment without updating that comment. Then, the worst-case scenario is that it doesn't notice, and we're back where we started.

          • Zondartul a day ago

            Comments that explain the intent, rather than implementation, are the more useful kind. And when intent doesn't match the actual code, that's a good hint - it might be why the code doesn't work.

          • hedora a day ago

            If a developer can’t write intelligible comments or straightforward code, then I’d argue they should find another job.

            • pixl97 a day ago

              I mean it's easy to say silly things like this, but in reality most developers suck in one way or another.

              In addition companies don't seem to give a shit about straightforward code, they want LOC per day and the cheapest price possible which leads to tons of crap code.

              • hallway_monitor a day ago

                Each person has their own strengths, but a worthwhile team member should be able to meet minimum requirements of readability and comments. This can be enforced through team agreements and peer review.

                Your second point is really the crux of business in a lot of ways. The balance of quality versus quantity. Cost versus value. Long-term versus short term gains. I’m sure there are situations where ruthlessly prioritizing short term profit through low cost code is indeed the optimal solution. For those of us who love to craft high-quality code, the trick is finding the companies where it is understood and agreed that long-term value from high-quality code is worth the upfront investment and, more importantly, where they have the cash to make that investment.

                • pixl97 a day ago

                  >I’m sure there are situations where ruthlessly prioritizing short term profit through low cost code is indeed the optimal solution

                  This is mostly how large publicly traded corps work; unless they are run by programmers that want great applications or are required by law, they tend to write a lot of crap.

              • mlloyd a day ago

                >In addition companies don't seem to give a shit about straightforward code, they want LOC per day and the cheapest price possible which leads to tons of crap code.

                Companies don't care about LOC, they care about solving problems. 30 LOC or 30k LOC doesn't matter much MOST of the time. They're just after a solution that puts the problem to rest.

            • michaelt a day ago

              If a delivery company has four different definitions of a customer’s first order, and the resulting code has contents that are hard to parse - does the blame lie with the developer, or the requirements?

              • TrololoTroll a day ago

                If the developer had time to do it, with him. Otherwise with the company

                I'm sure there's some abysmal shit that's extremely hard to properly abstract. Usually the dev just sucks or they didn't have time to make the code not suck

        • larodi 2 days ago

          Business requirements deviate from code almost immediately. Serving several clients with customisation adds even more strain on the process. Eventually you want to map paragraphs of business req to code which is not a 1:1 mapping.

          An aging codebase and the ongoing operations make it even harder to maintain consistency. Eventually people surrender.

        • ozim 2 days ago

          Then in 3 months someone comes along and changes the code slightly, making the comment obsolete, but doesn’t update the comment. Making everything worse, not better.

          Issue trackers are much better because then in git you can find tickets attached to the change.

          No ticket explaining why - no code change.

          Why not in the repo? Because business people write tickets, not devs. Then tickets are passed to QA, who also read the code but also need that information.

          • jounker a day ago

            Why did the reviewer approve the change if the developer didn’t update the comment?

            It sounds like people are failing at their jobs.

            • ozim a day ago

              Oh that is one of my pet peeves.

              "If only people would do their jobs properly".

              So we just fire all the employees and hire better ones only because someone did not pay attention to the comment.

              Of course it is an exaggeration - but in the same vein, people who think "others are failing at their jobs" should pick up and do all the work there is to be done, and see how long they go until they miss something or make a mistake.

              Solution should be systematic to prevent people from failing and not expecting "someone doing their job properly".

              Not having comments as something that needs a review reduces workload on everyone involved.

              Besides, interfaces for PRs clearly mark what changed - they don't point out what hasn't been changed. So naturally people review what has changed. You still get the context of course and can see a couple of lines above and below... But still, I blame the tool, not the people.

        • ajuc a day ago

          Requirements should be in the JIRA. JIRA number should be in the commit message.

          You do git blame and you see why each line is what it is.

          Comments are nice too, but they tend to lie the older they are. Git blame never lies.

          • SleepyMyroslav a day ago

            Code tends to be reused. When that happens, the Jira ticket is not likely to travel alongside the code. All 'older' Jira tickets become useless broken links. All you have in practice is the Jira number. The same usually happens with 'internal documentation' links as well.

            Git blame often lies when a big merge was squashed. I mostly had these in Perforce so I might be wrong. Code also loses information when it travels between source control servers or between different source control software.

            I would say that in my gamedev experience the best comments I have seen are 'TODO: implement me' and (unit) test code that still runs. The first clearly states that you have reached outside of what was planned; the second lets you inspect what the code was meant to do.

            • jounker a day ago

              One of my favorite conventions is ‘TODO(username): some comment’. This lets attribution survive merges and commits and lets you search for all of someone’s comments using a grep.

              • ben_w 3 hours ago

                I tend to do:

                  // TODO: <the name of some ticket>: <what needs to happen here>
                
                e.g.

                  // TODO: IOS-42: Vogon construction fleet will need names to be added to this poetry reading room struct
                
                I've not felt my name is all that important for a TODO, as the ticket itself may be taken up by someone else… AFAICT they never have been, but they could have been.

          • TeMPOraL a day ago

            Jira entries get wiped arbitrarily. Git blame may not lie, but it doesn't survive larger organizational "refactoring" around team or company mergers. Or refactoring code out into a separate project/library. Hell, often enough it doesn't survive commits that rename a bunch of files and move other stuff around.

        • K0nserv 2 days ago

          Comments are decent but flawed. Being a type proponent I think the best strategy is lifting business requirements into the type system, encoding the invariants in a way that the compiler can check.
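          A minimal sketch of what that looks like, with invented names: the constructor enforces the business invariant once, and the compiler then guarantees it everywhere else:

```rust
// Invariant "an order always has at least one line item" lifted into a type.
struct NonEmptyOrder {
    items: Vec<String>,
}

impl NonEmptyOrder {
    // The only way to build the type; any NonEmptyOrder in the program
    // has necessarily passed this check.
    fn new(items: Vec<String>) -> Option<NonEmptyOrder> {
        if items.is_empty() {
            None
        } else {
            Some(NonEmptyOrder { items })
        }
    }

    fn first_item(&self) -> &str {
        &self.items[0] // safe: the constructor guarantees non-emptiness
    }
}
```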

          • larsrc a day ago

            Comments should describe what the type system can't. Context, pitfalls, workarounds for bugs in other code, etc.

      • throwup238 2 days ago

        Thank god we’re held to such low standards. Every time I’ve worked in a field like pharmaceuticals or manufacturing, the documentation burden felt overwhelming by comparison and a shrug six months later would never fly.

        • mnau 2 days ago

          We are not engineers. We are craftsmen, instead of working with wood, we work with code. What most customers want is an equivalent of "I need a chair, it should look roughly like this."

          If they want blueprints and documentation (e.g. maximum possible load and other limits), we can supply them (and do supply, e.g. in pharma or medicine), but it will cost them quite a lot more. By an order of magnitude. Most customers prefer a cobbled-up solution that is cheap and works. That's on them.

          Edit: It is called waterfall. There is nothing inherently wrong with it, except customers didn't like the time it took to implement a change. And they want changes all the time.

          • namaria 2 days ago

            > We are not engineers. We are craftsmen

            Same difference. Both appellations invoke some sort of idealized professional standards and the conversation is about failing these standards not upholding them. We're clearly very short of deserving a title that carries any sort of professional pride in it. We are making a huge mess of the world building systems that hijack attention for profit and generate numerous opportunities for bad agents in the form of security shortfalls or opportunities to exploit people using machines and code.

            If we had any sort of pride of craft or professional standards we wouldn't be pumping out the bug ridden mess that software's become and trying to figure out why in this conversation.

            • alternatex a day ago

              That is quite a cynical take. A lot of us take pride in our work and actively avoid companies that produce software that is detrimental to society.

              • namaria a day ago

                It is cynical but it is also a generalization better supported by the evidence than "we're craftsmen" or "we're engineers".

                If you can say "I'm a craftsman" or "I'm an engineer" all the power to you. Sadly I don't think we can say that in the collective form.

                • nyarlathotep_ a day ago

                  > If you can say "I'm a craftsman" or "I'm an engineer" all the power to you. Sadly I don't think we can say that in the collective form.

                  My cynicism of the software "profession" is entirely a function of experience, and these titles are the (very rare) exception.

                  The norm is low-quality, low complexity disposable code.

                  • computerdork a day ago

                    Hmm, thinking back, I think most companies I worked at (from the small to the very large tech companies) had on average pretty good code and automated tests, pretty good processes, pretty good cultures and pretty good architectures. Some were very weak in one aspect, but made up for it in others. But maybe I got lucky?

            • mnau a day ago

              > Both appellations invoke some sort of idealized professional standards

              The key point of the comment was that engineers do have standards, both from professional bodies and often legislative ones. Craftsmen do not have such standards (most of them, at least where I am from). Joiners definitely don't.

              Edit: I would also disagree with "pumping out bug ridden mess that software's become."

              We are miles ahead in security of any other industry. Physical locks have been broken for decades and nobody cares. Windows are breakable by a rock or a hammer and nobody cares.

              In terms of bugs, the rate is extraordinarily low as well. In pretty much any other industry, it would be considered a user error, e.g. do not put mud as a detergent into the washing machine.

              The whole process is getting better each year. Version control wasn't common in the 2000s (I think Linux didn't use version control until 2002). CI/CD. Security analyzers. Memory-managed/safe languages. Automatic testing. Refactoring tools.

              We somehow make hundreds of millions of lines of code work together. I seriously doubt there is any industry that can do that at our price point.

              • generic92034 a day ago

                > We are miles ahead in security of any other industry. Physical locks have been broken for decades and nobody cares. Windows are breakable by a rock or a hammer and nobody cares.

                That is not such a great analogy, in my opinion. If burglars could remotely break into many houses in parallel while being mostly non-trackable and staying in the safety of their own homes, things would look different on the doors and windows front.

                • mnau a day ago

                  The reason car keys use chips is that physical security sucks so much in comparison with digital.

                  The fact is we are better at it because of the failure of the state to establish a safe environment. Generally, protection and a safe environment are among the reasons for paying taxes.

                  • namaria a day ago

                    > The reason car keys use chips is that physical security sucks so much in comparison with digital.

                    Not the reason. There is no safe lock, chip or not. You can only make your car more inconvenient to break into than the next one.

                    > The fact is we are better at it because of the failure of the state to establish a safe environment. Generally, protection and a safe environment are among the reasons for paying taxes.

                    Exactly backwards. The only real safety is being in a hi-sec zone protected by social convention and State retribution. The best existing lock in a place where bad actors have latitude won't protect you, and in a safe space you barely need locks at all.

        • rcxdude a day ago

          OTOH, the level of documentation you get for free from source control would be a godsend in other contexts: the majority of the documentation you see in other processes is just to get an idea of what changed when and why.

        • Yiin 2 days ago

          there is a difference between building a dashboard for internal systems and tech that, if it fails, can kill people

          • throwup238 2 days ago

            Most software work in pharma and manufacturing is still CRUD, they just have cultures of rigorous documentation that permeates the industry even when it's low value. Documenting every little change made sense when I was programming the robotics for a genetic diagnostics pipeline, not so much when I had to write a one pager justifying a one line fix to the parser for the configuration format or updating some LIMS dependency to fix a vulnerability in an internal tool that's not even open to the internet.

          • Mikhail_Edoshin 2 days ago

            Well, a hand watch or a chair cannot kill people, but the manufacturing documentation for them will be very precise.

            Software development is not engineering because it is still a relatively young and immature field. There is a joke where a mathematician, a physicist and an engineer are given a little red rubber ball and asked to find its volume. The mathematician measures the diameter and computes, the physicist immerses the ball into water and sees how much was displaced, and the engineer looks it up in his "Little red rubber balls" reference.

            Software development does not yet have anything that may even potentially grow into such a reference. If we decide to write it we would not even know where to start. We have mathematicians who write computer science papers; or physicists who test programs; standup comedians, philosophers, everyone. But not engineers.

            • ozim 2 days ago

              The difference is that the code is the documentation and the design.

              That is the problem: people don’t understand that point.

              The runtime and the running application are the chair. The code is the design for how to make the “chair” run on a computer.

              I say in software development we are years ahead when it comes to handling the complexity of documentation, with Git and CI/CD practices, code reviews, and QA coverage with unit testing of the designs and general testing.

              So I do not agree that software development is immature field. There are immature projects and companies cut corners much more than on physical products because it is much easier to fix software later.

              But in terms of practices we are way ahead.

              • dambi0 a day ago

                Isn’t this similar to saying the valves and vessels of a chemical processing system are the design and documentation of the overall process?

                I know that it’s frequently reposted but Peter Naur’s Programming as Theory Building is always worth a reread.

                The code doesn’t tell us why decisions were made, what constraints were considered or what things were ruled out

          • larodi 2 days ago

            The word code comes from Latin caudex, which originally meant a tree trunk, hewn wood. Are we then not mere lumberjacks, with the beards and beer and all :)))

    • mgkimsal 2 days ago

      > Oh, and in 6 months the unintuitive and confusing concept needs to be completely changed into - surprise, surprise - a completely different but equally unintuitive and confusing concept.

      But you have to keep the old way of working exactly the same, and the data can't change, but also needs to work in the new version as well. Actually show someone there's two modes, and offer to migrate their data to version 2? No way - that's confusing! Show different UI in different areas with the same data that behaves differently based on ... undisclosed-to-the-user criteria. That will be far less confusing.

      • terribleperson 2 days ago

        As a user, 'intuitive' UIs that hide a bunch of undisclosed but relevant complexity send me into a frothing rage.

        • chefandy 2 days ago

          In many problem spaces, software developers are only happy with interfaces made for software developers. This article, diving into the layers of complex logic we can reason about at once, perfectly demonstrates why. Developers ‘get’ that complexity, because it’s our job, and think about GUIs as thin convenience wrappers for the program underneath. To most users, the GUI is the software, and they consider applications like appliances for solving specific problems. You aren’t using the refrigerator, you’re getting food. You’re cooking, not using the stove. The fewer things they have to do or think about to solve their problem to their satisfaction, the better. They don’t give a flying fuck about how software does something, probably wouldn’t bother figuring out how to adjust it if they could, and the longer it takes them to apply their existing mental models and UI idioms to the screen they’re looking at, the more frustrated they get. Software developers know what’s going on behind the scenes, so seeing all of the controls and adjustments and statuses and data helps developers orient themselves and figure out what they’re doing. Seeing all that stuff is often a huge hindrance to users who just have a problem they need to solve, and who have a much more limited set of mental models and usage idioms to draw on when figuring out which of those buttons to press and parameters to adjust. That’s the primary reason FOSS has so few non-technical users.

          The problem comes in when people who aren’t UI designers want to make something “look designed,” so they start ripping stuff out and moving it around without understanding how it affects different types of users. I don’t hear too many developers complain about the interface for iMessage, for example, despite it showing a fraction of the controls at any given time, because it effectively solves their problem, and does so more easily than it would with a visible toggle for read receipts, SMS/iMessage, text size, etc. It doesn’t merely look designed, it is designed for optimal usability.

          Developers often see an interface that doesn’t work well for developers’ usage style, assume that means it doesn’t work well, and then complain about it among other developers, creating an echo chamber. Developers being frustrated with an interface is an important data point that shouldn’t be ignored, but our perspectives and preferences aren’t nearly as generalizable as some might think.

          • terribleperson 2 days ago

            I'm not particularly bothered by non-developer UI. I'm bothered by the incessant application of mobile UI idioms to desktop programs (remember when all Windows programs looked somewhat similar?), by UI churn with no purpose, by software that puts functionality five clicks deep for no reason other than to keep the UI 'minimal', by the use of unclear icons when there's room for text (worse, when it's one of the bare handful of things with a universally-understood icon and they decided to invent their own), by UIs that just plain don't present important information for fear of making things 'busy'. There's a lot to get mad about when it comes to modern UIs without needing to approach it from a software developer usage style perspective.

            • chefandy a day ago

              You're making a lot of assumptions about who's doing what, what problems they're trying to solve by doing it, and why. The discipline of UI design is figuring out how people can solve their problems easily and effectively. If you have advanced users who need to make five mouse clicks to perform an essential function, that's a bad design, and the chance of that being a UI design decision is just about zero. Same thing with icons. UI design, fundamentally, is a medium of communication: do you think it's more likely that a UI designer-- a professional and likely educated interactivity communicator-- chose those icons, or that a developer or project manager grabbed a sexy-looking UI mockup on Dribbble and tried to smash their use case into it?

              Minimalism isn't a goal-- it's a tool to make a better interface and can easily be overused. The people who think minimalism is a goal and will chop out essential features to make something "look designed" are almost always developers. Same thing with unclear icons. As someone with a design degree who's done UI design, but who worked as a back-end developer for a decade before that, and as a UNIX admin off and on for a decade before that, I am very familiar with the technical perspective on design and its various echo-chamber-reinforced follies.

              It's not like all UI designers are incredibly qualified, or like they never underestimate the importance of some particular function for some subset of users; and some people who hire designers don't realize that a graphic designer isn't a UI designer and shouldn't be expected to work as one. But 700 times out of 1000, it's because some dev said "this is too annoying to implement" or some project manager dropped it from the timeline. Maybe 250 of those remaining times, the project manager says "we don't need designers for this next set of features, right? Dev can just make it look like the other parts of the project?"

              Developers read an Edward Tufte book, think they're experts, come up with all sorts of folk explanations about what's happening with a design and why people are doing it, then talk about it in venues like this with a million other developers agreeing with them. That does a whole lot more damage to UIs in the wild than bad design decisions made by designers.

              • terribleperson a day ago

                You seem to think I'm attacking UI designers. I'm not. I think software would be a lot better with professional UI designers designing UIs.

                edit: I am making a lot of assumptions. I'm assuming that most UIs aren't really designed, or are 'designed' from above with directions that are primarily concerned about aesthetics.

            • suzzer99 a day ago

              +1 to all this. And when did it become cool to have icons that provide no feedback they've been clicked, combined with no loading state? I'm always clicking stuff twice now because I'm not sure I even clicked it the first time.

            • namaria 2 days ago

              I think a lot of this is bike shedding. Changing the interface design is easy. Understanding usability and building usable systems is hard.

          • namaria 2 days ago

            > That’s the primary reason FOSS has so few non-technical users.

            Yeah, citation needed. If your argument is that 'non-technical users' (whatever that is; being technical is not restricted to understanding computers and software deeply) don't use software that exposes a lot of its internals, with FOSS having few 'non-technical users' (meaning people who are not software developers) as the evidence, then it is just false. There are entire fields where FOSS software is huge. GIS comes to mind.

            • chefandy a day ago

              Normally in this rant I specifically note that non-software technical people are still technical. For genuinely non-technical users, what are the most popular end-user-facing FOSS applications? Firefox, Signal, Blender, Inkscape, Krita maybe… most of those are backed by foundations that pay designers and, in Mozilla’s case, actually do a ton of open usability research. I don’t believe Inkscape does, but they do put a ton of effort into thinking about things from the user workflow perspective, and definitely do not present all of the functionality to the user all at once. Blender, at first, just made you memorize a shitload of shortcuts, but they’ve since done a ton of work figuring out what users need to see for which tasks in different workflows, and have a ton of purpose-built views. For decades, Gimp treated design, workflow, and UI changes like any other feature, and they ended up with a cobbled-together, ham-fisted interface used almost exclusively by developers. You’ll have a hard time finding a professional photographer who hasn’t tried Gimp, and an even harder time finding one who still uses it, because of the confusing, unfocused interface. When Mastodon stood a real chance of being what Bluesky is becoming, I was jumping up and down flailing my arms trying to get people to work on polishing the user flow and figure out how to communicate what users needed to know concisely. Dismissal, dismissal, dismissal. “I taught my grandmother how federation works! They just need to read the documentation! Once they start using it they’ll figure it out!” Well, people started using it, didn’t have that gifted grandmother-teaching developer to explain it to them, and almost all left immediately afterwards.

              Just like human factors engineering, UI design is a unique discipline that many in the engineering field think they can intuit their way through. They’re wrong and if you look beyond technical people, it’s completely obvious.

              • chefandy a day ago

                Worth noting that Gimp just made a separate UI design repo and seem to be doing a great job at confronting this systemic problem in the project.

        • robocat 2 days ago

          I'm trying to learn acceptance: how not to get so angry at despicable UIs.

          Although I admit I'm kinda failing. My minor successes have been by avoiding software: e.g. giving up programming (broken tools and broken targets were a major frustration) and getting rid of Windows.

          • shnock 2 days ago

            Having given up programming, what do you do now?

    • mcdeltat 2 days ago

      IMO the fact that code tends to become hard over time in the real world is even more reason to lower cognitive load. Because cognitive load is related to complexity. Things like inheritance make it far too easy to end up with spaghetti. So if it's not providing significant benefit, god damn don't do it in the first place (like the article mentions).

      • Simon_O_Rourke a day ago

        That depends on who thinks it's going to be a significant benefit - far far too many times I've had non-technical product managers yelling about some patch or feature or whatever with a "just get it done" attitude. Couple that with some junior engineering manager unwilling to push back, with an equally junior dev team and you'll end up with the nasty spaghetti code that only grows.

    • dogcomplex 2 days ago

      Sounds like a bunch of excellent excuses for why code is not typically well factored. But all of that just seems to make it more evident that the ideal is still a well-factored codebase.

    • chii 2 days ago

      > Where just the concepts need to be whiteboarded and explained because they're unintuitive and confusing.

      they're intuitive to somebody - just not the software engineer. This simply means there's some domain expertise which isn't available to the engineer.

      • tsimionescu a day ago

        Not necessarily. There are a lot of domains where you're digitizing decades of cobbled together non-computer systems, such as law, administration, or accounting. There's a very good chance that no single human understands those systems either, and that trying to model them will inevitably end up with obscure code that no one will ever understand either. Especially as legislation and accounting practices accrete in the future, with special cases for every single decision.

    • larsrc a day ago

      Oh, and there's massive use of aspect-oriented programming, the least local paradigm ever!

      • feoren a day ago

        I have never actually seen aspect-oriented programming used in the wild. Out of curiosity, in what context are you seeing AOP used?

        • computerdork a day ago

          It's also good for having default behavior applied to an object or subsystem. For instance, by default, always having an object run security checks to make sure it has permission to perform the tasks it's asked to (I have seen this, and it sounds like a good idea at least). And also, having some basic logging performed to show when you've entered and left function calls. It's easy to forget to add these to a function, especially in a large codebase with lots of developers.

        • looperhacks a day ago

          We use it to automatically instrument code for tracing. Stuff like this is IMO the only acceptable use, to reduce boilerplate, but it quickly becomes terrible if you don't pay attention.

    • yearolinuxdsktp a day ago

      Nothing about computers is intuitive. Not even using a mouse.

      Late-breaking changes are a business advantage--learn how to roll with them.

    • ilvez 2 days ago

      Plus one to everything said. It's the everyday life of a "maintainer": picking the next battle, choosing the best way to avoid sinking deeper, and defending the story that exactly "this" is the next refactoring project. All that while balancing the different factors you mention, just to keep believing it oneself, because there are countless paths..

    • haliskerbas 2 days ago

      This puts things really well. I’ll add that between the first whiteboarding session and the first working MVP there’ll be plenty of stakeholders who change their minds, find new info, or ask for updates that may break the original plan.

    • jimbokun a day ago

      In my experience, the more convoluted code is more likely to have performance issues.

    • lukan 2 days ago

      It can be done. Sometimes.

      I am so proud and happy when I can make a seemingly complicated change quickly, because the architecture was well designed and everything neatly separated.

      Most of the time though, it is exactly like you described. Or Randall's good code comic:

      https://xkcd.com/844/

      Almost too painful to be funny, when you know the pain is avoidable in theory.

      Still, it should not be an excuse to be lazy and just write bad code by default. Developing the habit of making everything as clean, structured, and clear as possible always pays off. Especially if that code that was supposed to be a quick-and-dirty throwaway experiment somehow ended up being used, and 2 years later you suddenly need to debug it. (I just experienced that joy.)

    • SkyBelow a day ago

      >It's quite easy to imagine a well factored codebase where all things are neatly separated.

      If one is always implementing new code bases that they keep well factored, they should count their blessings. I think being informed about cognitive load in code bases is still very important for all the times we aren't so blessed. I've inherited applications that use global scope, and it is a nightmare to reason through. Where possible I improve it and reduce global scope, but that is not always an option, and is only possible after I have reasoned enough about the global scope to feel I can isolate it. As such, letting others know of the costs is helpful, both to reduce it from happening and to convince stakeholders of the importance of fixing it after it has happened, and of accounting for the extra costs it causes until it is fixed.

      >The messy stuff is where the real world concepts need to be transformed into code.

      I also agree this can be a messy place, and on a new project, it is messy even when the code is clean because there is effectively a business logic/process code base you are inheriting and turning into an application. I think many of the lessons carry over well as I have seen an issue with global scope in business processes that cause many of the same issues as in code bases. When very different business processes end up converging into one before splitting again, there is often extra cognitive load created in trying to combine them. A single instance really isn't bad, much like how a single global variable isn't bad, but this is an anti-pattern that is used over and over again.

      One helpful tool is working one's way up to having enough political power, and having earned enough respect for one's designs, that suggestions to refactor business processes are taken into serious consideration (one also has to have enough business acumen to know when such a suggestion is reasonable).

      >the original author doesn't work here anymore so no one's here to explain the original code's intent.

      I fight for comments that tell me why a certain decision is made in the code. The code tells me what it is doing, and domain knowledge will tell most of why it is doing the things expected, but anytime the code deviates from doing what one would normally expect to be done in the domain, telling me why it deviated from expected behavior is very important for when someone is back here reading it 5+ years later when no one is left from the original project. Some will suggest putting it in documentation, but I find that the only documentation with any chance of being maintained or even kept is the documentation built into the code.
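
      A tiny sketch of the kind of "why" comment I mean (the regulation here is invented purely for illustration):

```rust
// Fee schedule for payouts, in cents.
fn payout_fee_cents(amount_cents: i64) -> i64 {
    // Why: (hypothetical) regulator ruling 2019-04 exempts micro-payouts
    // under 100 cents from fees, even though the standard schedule would
    // charge a minimum fee. The *what* is obvious from the code; this
    // comment records the *why* for whoever reads it 5+ years later.
    if amount_cents < 100 {
        return 0;
    }
    // Standard 1% fee, rounded down.
    amount_cents / 100
}
```

      The code alone would read like a bug ("why is the minimum fee missing?"); the comment is what stops a well-meaning refactor from reintroducing one.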

      • jeffreygoesto a day ago

        The "why" is the hardest part. You are writing to a future reader, most probably a different person with a different background. Writing everything down is as wrong as writing nothing. You have to anticipate the questions of the future. That takes experience, and having been in different shoes, "on the receiving side" of such a comment. Typically developers brag about what they did, not why, especially the ones who think they are good...

    • rtpg 2 days ago

      I mean, really, nobody wants an app that is slow, hard to refactor, with confusing business logic, etc. Everyone wants good properties.

      So then you get into what you’re good at. Maybe you’re good at modeling business logic (even confusing ones!). Maybe you’re good at writing code that is easy to refactor.

      Maybe you’re good at getting stuff right the first time. Maybe you’re good at quickly fixing issues.

      You can lean into what you’re good at to get the most bang for your buck. But you probably still have some sort of minimum standards for the whole thing. Just gotta decide what that looks like.

      • larsrc a day ago

        Some people are proud of making complex code. And too many people admire those who write complex code.

    • namaria 2 days ago

      > you also need to write it in a convoluted way because, for various annoying reasons, that's what performs best on the computer.

      That has nothing to do with the hardware. The various annoying reasons are not set in stone or laws of physics. They are merely the path dependency of decades of prioritizing shipping soon, because money.

  • nevi-me 2 days ago

    > If someone feels compelled to read every function either the functions are poor abstractions or the reader has trust issues, which may be warranted.

    I joined a company with great code and architecture for 3 months last year. They deal with remittances and payments.

    Their architecture leads are very clued up, and I observed that they spent a lot of quality time figuring out their architecture and improvements, continuously. They'd do a lot of refactors for all the various teams, and the cadence of feature development and release was quite impressive.

    In that period though, I and another long-standing colleague made a few errors that cost the company a lot of money, like an automated system duplicating payments to users for a few hours until we noticed it.

    Part of their architectural decision was to use small functions to encapsulate logic, and great care and code review was put into naming functions appropriately (though they were comment averse).

    The mistakes we committed were because we trusted that those functions did what they said they did, correctly. After all, they'd also been unit tested, and there were also integration tests.

    If it weren't for the fortitude of the project manager (great guy hey) in firmly believing in collective responsibility when there's no malice, I'd probably have been fired after a few weeks (I left for a higher offer elsewhere).

    ---

    So the part about trust issues resonates well with me. As a team we made the decision that we shouldn't always trust existing code, and the weeks thereafter had much higher cognitive load.

    • avg_dev 2 days ago

      That sounds like a very difficult situation. Would you be willing to elaborate on what kinds of bugs lay in the pre-existing functions? Was some sort of operation that was supposed to be idempotent (“if you call it with these unique parameters over and over, it will be the same as if you only called it once”) not so? I am trying to imagine what went wrong here. A tough situation, must have been quite painful. How serious were the consequences? If you don’t feel comfortable answering that is okay.

      • nevi-me 2 days ago

        I can't remember the exact details, but one instance was a function checking whether a user should be paid based on some conditions. It checked the db, and I think because the codebase and db move fast, there was a new enum value added a few months prior which was triggered by our transaction type.

        So that helper function didn't account for the new enum value, and we ended up sending >2 payments to users, in some cases I think over 10 to one user.

        The issue was brought to customer support's attention, else we might have only noticed it at the end of the week, which I think would have led to severe consequences.

        The consequences never reached us because our PM dealt with them. I suppose in all the financial loss instances, the business absorbed the losses.

        • noisy_boy a day ago

          > So that helper function didn't account for the new enum

          This is where Scala/Rust's enforcement of having to handle all arms of a match clause help catch such issues - if you are matching against the enum, you won't even be able to compile if you don't handle all the arms.
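
          A minimal Rust sketch of that safety net (the enum and function names are made up, not from the story above):

```rust
// Hypothetical payment-eligibility check over a closed enum.
enum TransactionType {
    Purchase,
    Refund,
    Chargeback, // variant added months later
}

fn should_pay(t: &TransactionType) -> bool {
    // No catch-all `_` arm: when Chargeback is added, this match stops
    // compiling until someone decides what it should do, instead of
    // silently falling through to a wrong payment decision.
    match t {
        TransactionType::Purchase => true,
        TransactionType::Refund => false,
        TransactionType::Chargeback => false,
    }
}
```

          The protection only holds if nobody writes a wildcard arm, which is why some teams lint against `_` in matches over domain enums.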

          • matt_kantor a day ago

            Sounds like the source of truth for the enum members may have been in the database.

            (But yes, exhaustiveness checking for sum types is a great feature.)

            • galangalalgol a day ago

              The only db work I've done in Rust required a recompile if the db schema changed, or even if the specific queries your program used changed, because the Rust types were generated from the schema. So in those cases the db change would have driven a Rust type change, and Rust would have verified exhaustive handling.

    • barrkel a day ago

      Function names aren't wholly distinct from comments. They suffer from the same problems as comments - they can go stale and no longer reflect the code they're naming.

    • layer8 a day ago

      Functions generally need to be documented, especially if there are any gotchas not obvious from the function signature. And one should always read the documentation. Good names are for discovery and recollection, and for the call-site code to be more intelligible, but they don’t replace having a specification of the function’s interface contract, and client code properly taking it into account.
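
      For example (a toy function, riffing on the `last` example upthread), the doc comment can carry the parts of the contract that the name alone can't:

```rust
/// Returns the most recent user id in `ids`, or `None` if the slice
/// is empty.
///
/// Contract notes the name can't carry:
/// - does not mutate or reorder `ids`
/// - O(1); safe to call in a hot loop
fn last_user_id(ids: &[u64]) -> Option<u64> {
    ids.last().copied()
}
```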

    • SkyBelow a day ago

      >The mistakes we committed, were because we trusted that those functions did what they said they did correctly. After all, they've also been unit tested, and there's also integration tests.

      As it is stated, I don't see where it is your mistake. You should be able to trust things do what they say, and there should be integration testing that happens which adds the appropriate amount of distrust and verification. Even with adequate unit testing, you normally inject the dependencies so it wouldn't be caught.

      This seems an issue caused by two problems, inadequate integration testing and bugs in the original function, neither of which are your fault.

      Building a sixth sense of when to distrust certain code is something you see from more experienced developers at a company, but you were new so there is no reason to expect you to have it (and the system for making code changes shouldn't depend upon such intuition anyways).

  • motorest 2 days ago

    > I've been thinking about the notion of "reasoning locally" recently. Enabling local reasoning is the only way to scale software development past some number of lines or complexity. When reasoning locally, one only needs to understand a small subset, hundreds of lines, to safely make changes in programs comprising millions.

    That was supposedly the main trait of object-oriented programming. Personally that was how it was taught to me: the whole point of encapsulation and information hiding is to ensure developers can "reason locally", and thus be able to develop more complex projects by containing complexity to specific units of execution.

    Half of the SOLID principles also push for that. The main benefit of Liskov's substitution principle is to ensure developers don't need to dig into each and every concrete implementation to be able to reason locally about the code.

    On top of that, there are a multitude of principles and rules of thumb that also enforce that trait. For example, declaring variables right before they are used the first time. Don't Repeat Yourself to avoid parsing multiple implementations of the same routine. Write Everything Twice to avoid premature abstractions and tightly coupling units of execution that are actually completely independent, etc etc etc.

    Heck, modularity, layered software architectures, and even microservices are used to allow developers to reason locally.

    In fact, is there any software engineering principle that isn't pushing for limiting complexity and allowing developers to reason locally?
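
    As a rough illustration of what all those principles are buying: encapsulation boils down to checking an invariant in one place so that callers never have to (the names below are invented):

```rust
// Sketch: a module boundary keeps the invariant "balance is never
// negative and deposits are positive" in exactly one place.
mod account {
    pub struct Account {
        balance_cents: i64, // private: only this module can touch it
    }

    impl Account {
        pub fn new() -> Self {
            Account { balance_cents: 0 }
        }

        // The invariant is enforced here once; no caller re-checks it.
        pub fn deposit(&mut self, amount_cents: i64) -> Result<(), &'static str> {
            if amount_cents <= 0 {
                return Err("deposit must be positive");
            }
            self.balance_cents += amount_cents;
            Ok(())
        }

        pub fn balance_cents(&self) -> i64 {
            self.balance_cents
        }
    }
}
```

    Any code holding an `Account` can reason locally: the balance is valid, no matter which of the millions of other lines ran before.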

    • KronisLV a day ago

      > In fact, is there any software engineering principle that isn't pushing for limiting complexity and allowing developers to reason locally?

      Both DRY and SOLID lead to codebases that can be worse in this respect.

      DRY and the SRP limit what will be done in a single method or class, meaning that the logic will eventually be strewn across the codebase, and that any changes to it will need to take all of the pieces using the extracted logic into account. Sometimes it makes sense to have things like common services, helper and utility classes, but those can be in direct opposition to local reasoning for any non-trivial logic.

      Same for polymorphism and inheritance in general, where you suddenly have to consider a whole class structure (and any logic that might be buried in there) vs the immediate bits of code that you’re working with.

      Those might be considered decent enough practices to at least consider, but in practice they will lead to a lot of jumping around the codebase, same for any levels of abstraction (resource/controller, service, mappers, Dto/repository, …) and design patterns.

      • bccdee a day ago

        Yeah I think that, though experienced programmers tend to understand what makes code good, they're often bad at expressing it, so they end up making simplified and misleading "rules" like SRP. Some rules are better than others, but there's no substitute for reading a lot of code and learning to recognize legibility.

        • KronisLV a day ago

          > Yeah I think that, though experienced programmers tend to understand what makes code good, they're often bad at expressing it, so they end up making simplified and misleading "rules" like SRP.

          I mean, I'm not saying that those approaches are always wholly bad from an organizational standpoint either, just that there are tradeoffs and whatnot.

          > Some rules are better than others, but there's no substitute for reading a lot of code and learning to recognize legibility.

          This feels very true though!

    • sunshowers 2 days ago

      Encapsulation is the good part of object-oriented programming for precisely this reason, and most serious software development relies heavily on encapsulation. What's bad about OOP is inheritance.

      Microservices (in the sense of small services) are interesting because they are good at providing independent failure domains, but add the complexity of network calls to what would otherwise be a simple function call. I think the correct size of service is the largest you can get away with that fits into your available hardware and doesn't compromise on resilience. Within a service, use things like encapsulation.

      • jimmaswell 2 days ago

        Inheritance is everyone's favorite whipping boy, but I've still never been in a codebase and felt like the existing inheritance was seriously hindering my ability to reason about it or contribute to it, and I find it productive to use on my own. It makes intuitive sense and aids understanding and modularity/code reuse when used appropriately. Even really deep inheritance hierarchies, where reasonable, have never bothered me. I've been in the industry for at least 8 years and was a volunteer for longer than that, and I'm currently in a role where I'm one of the most trusted "architects" on the team, so I feel like I should "get it" by now if it's really that bad. I understand the arguments against inheritance in the abstract, but I simply can't bring myself to agree or even really empathize with them. Honestly, I find the whole anti-inheritance zeitgeist as silly and impotent as the movement to replace pi with tau; it's simply a non-issue that's unlikely to be on your mind if you're actually getting work done IMHO.

        • Mikhail_Edoshin 2 days ago

          The problem of inheritance is that it should be an internal mechanism of code reuse, yet it is made public in a declarative form that implies a single pattern of such reuse. It works more or less but it also regularly runs into limitations imposed by that declarativeness.

          For example, assume I want to write emulators for old computer architectures. Clearly there will be lots of places where I will be able to reuse the same code in different virtual CPUs. But can I somehow express all these patterns of reuse with inheritance? Will it be clearer to invent some generic CPU traits and make a specific CPU to inherit several such traits? It sounds very unlikely. It probably will be much simpler to just extract common code into subroutines and call them as necessary without trying to build a hierarchy of classes.

          Or let's take search trees, for example. Assume I want to have a library of such trees for research or pedagogic purposes. There are lots of variants: AVL trees, 2-3, 2-3-4, red-black, B-trees, and so on. Again, there will be places where I can reuse the same code for different trees. But can I really express all this as a neat hierarchy of tree classes?
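
          For the emulator case, the "just extract subroutines" version might look like this in Rust (the CPUs and their semantics are simplified to the point of caricature):

```rust
// Two unrelated CPU types reuse one helper; no common base class needed.
struct Flags {
    zero: bool,
    carry: bool,
}

// Shared 8-bit add-with-flags logic, extracted as a plain function.
fn add8(a: u8, b: u8) -> (u8, Flags) {
    let (result, carry) = a.overflowing_add(b);
    (result, Flags { zero: result == 0, carry })
}

struct Cpu6502 { acc: u8 }
struct CpuZ80 { a: u8 }

impl Cpu6502 {
    fn adc(&mut self, operand: u8) -> Flags {
        let (r, f) = add8(self.acc, operand);
        self.acc = r;
        f
    }
}

impl CpuZ80 {
    fn add(&mut self, operand: u8) -> Flags {
        let (r, f) = add8(self.a, operand);
        self.a = r;
        f
    }
}
```

          No "GenericCpu" hierarchy had to be invented for the sharing to happen, and each CPU is free to reuse a different subset of helpers.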

          • motorest 12 hours ago

            > The problem of inheritance is that it should be an internal mechanism of code reuse, yet it is made public in a declarative form that implies a single pattern of such reuse.

            Not quite. A simplistic take on inheritance suggests reusing implementations provided by a base class, but that's not what inheritance means.

            Inheritance sets a reusable interface. That's it. Concrete implementations provided by a base class, by design, are only optional. Take a look at the most basic is-a examples from intro to OO.

            Is the point of those examples reusing code, or complying with Liskov's substitution principle?

            The rest of your comment builds upon this misconception, and thus is falsified.

            • sunshowers 2 hours ago

              My example complied perfectly with Liskov's substitution principle. Much better than examples like "a JSON parser is a parser". The system I worked on had perfect semantic subtyping.

              Liskov substitution won't save you, and I'm quite tired of people saying it will. The problem of spaghetti structures is fundamental to what makes inheritance distinct from other kinds of polymorphism.

              Just say no to inheritance.

        • 59nadir 2 days ago

          > [...] it's simply a non-issue that's unlikely to be on your mind if you're actually getting work done IMHO.

          Part of why I get (more) work done is that I don't bother with the near-useless taxonomical exercises that inheritance invites, and I understand that there are ways of writing functions for "all of these things, but no others" that are simpler to understand, maintain and implement.

          The number of times you actually need an open set of things (i.e. what you get with inheritance) is so laughably low it's a wonder inheritance ever became a thing. A closed set is way more likely to be what you want, and it is trivially represented as a tagged union. It just so happens that C++ (and Java) historically have had absolutely awful support for tagged unions, so people have made do with inheritance even though it doesn't do the right thing. Some people have then taken this to mean that's what they ought to be using.
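
          A quick Rust sketch of the closed-set idea (the `Event` variants here are made up for illustration): the tagged union is just an enum, and the compiler forces every `match` to handle every variant, so "all of these things, but no others" is checked statically.

```rust
// A closed set of variants: adding a new one forces every match to be updated.
enum Event {
    Click { x: i32, y: i32 },
    KeyPress(char),
    Resize { width: u32, height: u32 },
}

fn describe(e: &Event) -> String {
    // Exhaustiveness is checked by the compiler: no variant can be forgotten.
    match e {
        Event::Click { x, y } => format!("click at ({x}, {y})"),
        Event::KeyPress(c) => format!("key '{c}'"),
        Event::Resize { width, height } => format!("resize to {width}x{height}"),
    }
}

fn main() {
    assert_eq!(describe(&Event::KeyPress('a')), "key 'a'");
    assert_eq!(describe(&Event::Click { x: 1, y: 2 }), "click at (1, 2)");
}
```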

          > I've been in the industry for at least 8 years and a volunteer for longer than that, and I'm currently in a role where I'm one of the most trusted "architects" on the team, so I feel like I should "get it" by now if it's really that bad.

          I don't think that's really how it works. There are plenty of people who have tons of work experience but have bad ideas and are bad at what they do. You don't automatically gain wisdom, and there are lots of scenarios where you end up reinforcing bad ideas, behavior and habits. It's also very easy to get caught up in a collection of ideas that are poorly thought out in aggregate: most of modern C++ is a great example of the kind of thinking that will absolutely drag maintainability, readability and performance down, but most of the ideas can absolutely sound good on their own, especially if you don't consider the type of architecture they'll cause.

        • juunpp 2 days ago

          So you've never worked on a code base with a 3-level+ deep inheritance tree and classes accessing their grandparent's protected member variables and violating every single invariant possible?

          • jimmaswell 2 days ago

            > 3-level+ deep inheritance tree and classes accessing their grandparent's protected member variables

            Yes, I have. Per MSDN, a protected member is accessible within its class and by derived class instances - that's the point. Works fine in the game I work on.

            > violating every single invariant possible

            Sure, sometimes, but I see that happen without class inheritance just as often.

        • bccdee a day ago

          The difference between inheritance and composition as tools for code reuse is that, in composition, the interface across which the reused code is accessed is strictly defined and explicit. In inheritance it is weakly defined and implicit; subclasses are tightly coupled to their parents, and the resulting code is not modular.

        • sunshowers 2 days ago

          I'm glad it's been useful to you!

          I can only share my own experience here. I'm thinking of a very specific ~20k LoC part of a large developer infrastructure service. This was really interesting because it was:

          * inherently complex: with a number of state manipulation algorithms, ranging from "call this series of external services" to "carefully written mutable DFS variant with rigorous error handling and worst-case bounds analysis".

          * quite polymorphic by necessity, with several backends and even more frontends

          * (edit: added because it's important) a textbook case of where inheritance should work: not artificial or forced at all, perfect Liskov is-a substitution

          * very thick interfaces involved: a number of different options and arguments that weren't possible to simplify, and several calls back and forth between components

          * changing quite often as needs changed, at least 3-4 times a week and often much more

          * and like a lot of dev infrastructure, absolutely critical: unimaginable to have the rest of engineering function without it

          A number of developers contributed to this part of the code, from many different teams and at all experience levels.

          This is a perfect storm for code that is going to get messy, unless strict discipline is enforced. I think situations like these are a good stress test for development "paradigms".

          With polymorphic inheritance, over time, a spaghetti structure developed. Parent functions started calling child functions, and child functions started calling parent ones, based on whatever was convenient in the moment. Some functions were designed to be overridden and some were not. Any kind of documentation about code contracts would quickly fall out of date. As this got worse, refactoring became basically impossible over time. Every change became harder and harder to make. I tried my best to improve the code, but spent so much time just trying to understand which way the calls were supposed to go.

          This experience radicalized me against class-based inheritance. It felt that the easy path, the series of local decisions individual developers made to get their jobs done, led to code that was incredibly difficult to understand -- global deterioration. Each individual parent-to-child and child-to-parent call made sense in the moment, but the cumulative effect was a maintenance nightmare.

          One of the reasons I like Rust is that trait/typeclass-based polymorphism makes this much less of a problem. The contracts between components are quite clear since they're mediated by traits. Rather than relying on inheritance for polymorphism, you write code that's generic over a trait. You cannot easily make upcalls from the trait impl to the parent -- you must go through an API designed for this (say, a context argument provided to you). Some changes that are easy to do with an inheritance model become harder with traits, but that's fine -- code evolving towards a series of messy interleaved callbacks is bad, and making you do a refactor now is better in the long run. It is possible to write spaghetti code if you push really hard (mixing required and provided methods), but the easy path is to refactor the code.

          (I think more restricted forms of inheritance might work, particularly ones that make upcalls difficult to do -- but only if tooling firmly enforces discipline. As it stands though, class-based inheritance just has too many degrees of freedom to work well under sustained pressure. I think more restricted kinds of polymorphism work better.)
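
          To illustrate the trait-plus-context shape (the names are invented, not from the system I described): the generic driver owns control flow, and the only "upcall" available to an impl is an explicit, designed API on the context argument.

```rust
// Context passed down to trait impls; its methods are the only sanctioned
// way for an impl to call "up" into the driver's machinery.
struct Ctx {
    log: Vec<String>,
}

impl Ctx {
    fn record(&mut self, msg: &str) {
        self.log.push(msg.to_string());
    }
}

// The contract between driver and backend is this trait plus Ctx -- nothing
// else, so there is no parent/child call spaghetti to trace.
trait Backend {
    fn store(&self, ctx: &mut Ctx, key: &str) -> bool;
}

struct InMemory;

impl Backend for InMemory {
    fn store(&self, ctx: &mut Ctx, key: &str) -> bool {
        ctx.record(&format!("stored {key}"));
        true
    }
}

// Generic over the trait: the driver decides the order of operations.
fn run<B: Backend>(backend: &B) -> Vec<String> {
    let mut ctx = Ctx { log: Vec::new() };
    backend.store(&mut ctx, "answer");
    ctx.log
}

fn main() {
    assert_eq!(run(&InMemory), vec!["stored answer"]);
}
```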

          • Fluorescence a day ago

            > This experience radicalized me against ...

            My problem with OO bashing is not that it isn't deserved but seems in denial about pathological abstraction in other paradigms.

            Functional programming quickly goes up its own bum with ever more subtle function composition, functor this, monoidal that, effect systems. I see inheritance-style layering reinvented in ad hoc, lazily evaluated doom pyramids.

            Rich type systems spiral into astronautics. I can barely find the code in some de facto standard crates; instead it's deeply nested generics... generic traits that take generic traits implemented by generic structs called by generic functions. It's an alphabet soup of S, V, F, E. Is that Q about error handling, or an execution model, or data types? Who knows! Only the intrepid soul who chases the tail of every magic letter can tell you.

            I wish there were a panacea, but I just see human horrors, whether in dynamically-typed monkey-patch chaos or the trendiest esoterica. Hell, I've seen a clean-room invention of OO in an ancient Fortran codebase by an elderly academic unaware it was a thing. He was very excited to talk about his phylogenetic tree, its species and shared genes.

            The admin/user/guest/base layering the author gives as "bad OO" will exist in the other styles, with their own pros and cons. At least the OO version separates each auth level and shows the relationship between them, which can be a blessed relief compared to whatever impenetrable soup someone will cook up in another style.

            • sunshowers a day ago

              The difference, I think, is that much of that is not the easy path. Being able to make parent-child-parent-child calls is the thing that distinguishes inheritance from other kinds of polymorphism, and it leads to really bad code. No other kind of polymorphism has this upcall-downcall-upcall-downcall pattern baked into its structure.

              The case I'm talking about is a perfect fit for inheritance. If not there, then where?

        • briantakita a day ago

          If you are reading a deep and wide inheritance hierarchy with overridden methods, you will have to navigate through several files to understand where the overrides occur. Basically, multiply the number of potential implementations by inheritance depth times inheritance width.

          You may not be bitten by such an issue in application code, but I've seen it in library code, particularly from Google, AWS, various auth libraries, etc., due to having to interop with multiple APIs or configurations.

      • oivey 2 days ago

        Encapsulation arguably isn’t a good part, either. It encourages complex state and as a result makes testing difficult. I feel like stateless or low-state has won out.

        • Tainnor 2 days ago

          Encapsulation can be done even in Haskell, which avoids mutable state: instead, use modules that don't export their internals, smart constructors, etc. You can e.g. encapsulate the logic for dealing with Redis in a module and never expose the underlying connection logic to the rest of the codebase.

        • sunshowers 2 days ago

          Hmm, to me encapsulation means a scheme where the set of valid states is a subset of all representable states. It's kind of a weakening of "making invalid states unrepresentable", but is often more practical.

          Not all strings are valid identifiers, for example, and it's hard to represent "the set of all valid identifiers" directly in the type system. So encapsulation is a good way to ensure that a particular identifier you're working with is valid -- helping scale local reasoning (code to validate identifiers) up into global correctness.

          This is a pretty FP and/or Rust way to look at things, but I think it's the essence of what makes encapsulation valuable.
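
          A small Rust sketch of that essence (the validation rule here is made up): the field is private, so the validating constructor is the only way to obtain an `Identifier`, making the valid states a checked subset of all representable strings.

```rust
// Valid states (starts with a letter or '_', then ASCII alphanumerics or '_')
// are a subset of all strings; the private field means the only way in is
// through the validating constructor.
pub struct Identifier(String);

impl Identifier {
    pub fn new(s: &str) -> Option<Identifier> {
        let mut chars = s.chars();
        // First character: letter or underscore; empty strings are rejected.
        match chars.next() {
            Some(c) if c.is_ascii_alphabetic() || c == '_' => {}
            _ => return None,
        }
        // Remaining characters: alphanumerics or underscore.
        if chars.all(|c| c.is_ascii_alphanumeric() || c == '_') {
            Some(Identifier(s.to_string()))
        } else {
            None
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    assert!(Identifier::new("user_42").is_some());
    assert!(Identifier::new("42user").is_none());
    assert!(Identifier::new("").is_none());
}
```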

          • oivey 2 days ago

            What you’re talking about is good design but has nothing to do with encapsulation. From Wikipedia:

            > In software systems, encapsulation refers to the bundling of data with the mechanisms or methods that operate on the data. It may also refer to the limiting of direct access to some of that data, such as an object's components. Essentially, encapsulation prevents external code from being concerned with the internal workings of an object.

            You could use encapsulation to enforce only valid states, but there are many ways to do that.

            • sunshowers 2 days ago

              Well whatever that is, that's what I like :)

      • gf000 a day ago

        Not only network calls, but also parallelism, when that microservice does some processing on its own or is called from a different microservice as well.

        Add to it a database with all the different kinds of transaction semantics and you have a system that is way above the skillset of the average developer.

    • hiAndrewQuinn 2 days ago

      In theory, you could design a parallel set of software engineering best practices which emphasize long-term memory of the codebase over short-term ability to leaf through and understand it. I guess that would be "reasoning nonlocally" in a useful sense.

      In practice I think the only time this would be seen as a potentially good thing by most devs is if it was happening in heavily optimized code.

      • K0nserv 2 days ago

        An interesting point. Would there be any benefits to this non-local reasoning?

        • hiAndrewQuinn 2 days ago

          Not unless you own and run the business, I suspect. You probably buy yourself a much higher absolute threshold of complexity you can comfortably handle in the codebase, but it's not exactly like software developers are known to take kindly to being handed an Anki deck of design decisions, critical functions, etc. and being told "please run this deck for 3 weeks and then we'll get started".

          I suspect it's much more common that codebases evolve towards requiring this nonlocal reasoning over time than being intentionally designed with it in mind.

    • 708145_ 2 days ago

      > The main benefit of Liskov's substitution principle is ensure developers don't need to dig into each and every concrete implementation to be able to reason locally about the code.

      Yeah, but it doesn't help in this context (enabling local reasoning) if the objects passed around have too much magic or are mutated all over the place. The enterprise OOP of the 2010s was a clusterfuck full of unexpected side effects.

      • dboreham 2 days ago

        I suspect that enterprise anything is going to be a hot mess, just because enterprises can't hire many of the best people. Probably the problem we should address as an industry is: how to produce software with mostly low wattage people.

        • brokencode 2 days ago

          The eventual solution will probably be to replace the low wattage people with high wattage machines.

          • gf000 a day ago

            Sure, once they can solve advent of code problems on the second week..

    • 6510 2 days ago

      Out of curiosity I sometimes rewrite things as spaghetti (if functions are short and aren't called frequently) or using globals (if multiple functions have too many params). It usually doesn't look better, and when it does, it usually doesn't stay that way for very long. In the very few remaining cases I'm quite happy with it. It does help me think about what is going on.

  • cle a day ago

    > I find types helps massively with this. A function with well-constrained inputs and outputs is easy to reason about. One does not have to look at other code to do it. However, programs that leverage types effectively are sometimes construed as having high cognitive load, when it in fact they have low load. For example a type like `Option<HashSet<UserId>>` carries a lot of information(has low load): we might not have a set of user ids, but if we do they are unique.

    They sometimes help. But I think it's deeper than this. A function with inputs and outputs that are well-constrained with very abstract, complex types is still hard to reason about, unless you're used to those abstractions.

    I think it's more accurate to say that something is "easy to reason about" if its level of abstraction "closely matches" the level of abstraction your brain is comfortable with / used to. This can vary dramatically between people, depending on their background, experience, culture, etc.

    I could describe the Option<HashSet<UserId>> type in terms of functors and applicatives and monads, and though it would describe exactly the same set of valid values, it has a much higher cognitive load for most people.

    > However, programs that leverage types effectively are sometimes construed as having high cognitive load, when it in fact they have low load.

    Cognitive load is an individual experience. If someone "construes" something as having high cognitive load, then it does! (For them). We should be writing programs that minimize cognitive load for the set of programmers who we want to be able to interact w/ the code. That means the abstractions need to sufficiently match what they are comfortable with.

    It's also fine to say "sorry, this code was not intended to have low cognitive load for you".

  • DarkNova6 2 days ago

    100% agree and this not only concerns readability. The concept of "locality" turns out to be a fairly universal concept, which applies to human processes just as much as technical ones. Side-effects are the root of all evil.

    You don't see a waiter taking orders from 1 person on a table, but rather go to a table and get orders from everybody sitting there.

    And as for large methods, I find that they can be broken into smaller ones just fine as long as you keep them side-effect free. Give them a clear name and a clear return value, and now you have a good model for the underlying problem you are solving. Looking up the actual definition is just looking at implementation details.
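
    A small sketch of that style in Rust (the helpers are hypothetical): each piece is side-effect free, so the names and signatures really do tell the whole story.

```rust
// Side-effect-free helpers: each is fully described by its name and signature.
fn normalize(line: &str) -> String {
    line.trim().to_lowercase()
}

fn is_comment(line: &str) -> bool {
    line.starts_with('#')
}

// The "large method", now a pipeline of the helpers above.
fn parse_lines(input: &str) -> Vec<String> {
    input
        .lines()
        .map(normalize)
        .filter(|l| !l.is_empty() && !is_comment(l))
        .collect()
}

fn main() {
    assert_eq!(parse_lines("  FOO \n# comment\n\nBar"), vec!["foo", "bar"]);
}
```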

  • jonahx a day ago

    > Proponents of small functions argue that you don't have to read more than the signature and name of a function to understand what it does; it's obvious what a function called last that takes a list and returns an optional value does.

    I used to be one of those proponents, and have done a 180.

    The problems are:

    1. The names are never as self-evident as you think, even if you take great care with them.

    2. Simply having so many names is an impediment in itself.

    The better way:

    Only break things up when you need to. This means the "pieces" of the system correspond to the things you care about and are likely to change. You'll know where to look.

    When you actually need an abstraction to share code between parts of the system, create it then.

  • jreback 2 days ago

    Re: trust issues... I'd argue this is the purpose of automated tests. I think tests are too often left out of architectural discussions, as if they were some additional artifact that gets created separately from the running software. The core / foundational / heavily reused parts of the architecture should have the most tests, ensuring the consumers of those parts have no trust issues!

    • K0nserv 2 days ago

      Tests are good but moving left by lifting invariants into the type system is better.

      Compare

         fn send_email(addr: &str, subject: &str, body: &str) -> Result<()>
      
      to

          fn send_email(addr: &EmailAddr, subject: &str, body: &str) -> Result<()>
      
      In the second case, the edge cases of an empty or invalid email address don't need to be tested, they are statically impossible.
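
      As a sketch of what `EmailAddr` might look like (the validation here is deliberately naive): parsing is the only way to construct one, so every `&EmailAddr` a function receives is valid by construction.

```rust
// Newtype with a private field: construction is the only place validation
// happens, so downstream code can assume validity.
pub struct EmailAddr(String);

impl EmailAddr {
    pub fn parse(s: &str) -> Result<EmailAddr, String> {
        // Deliberately naive check: non-empty local part, domain with a dot.
        match s.split_once('@') {
            Some((local, domain)) if !local.is_empty() && domain.contains('.') => {
                Ok(EmailAddr(s.to_string()))
            }
            _ => Err(format!("invalid email address: {s:?}")),
        }
    }
}

// send_email no longer needs tests for empty or malformed addresses:
// those states are unrepresentable here.
fn send_email(_addr: &EmailAddr, _subject: &str, _body: &str) -> Result<(), String> {
    Ok(())
}

fn main() {
    let addr = EmailAddr::parse("a@example.com").unwrap();
    assert!(send_email(&addr, "hi", "hello").is_ok());
    assert!(EmailAddr::parse("not-an-email").is_err());
    assert!(EmailAddr::parse("@example.com").is_err());
}
```
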
      • galangalalgol a day ago

        Thanks for the small concrete example. I try to explain this a lot. It also makes coverage really easy to get with fewer tests.

  • casenmgreen 2 days ago

    I may be wrong, but my view of software is: you have functions, and you have the order in which functions are called. Any given function is straightforward enough if you define its purpose clearly and keep it small enough -- both of which can reasonably be done. Then we have the main problem: the order in which functions are called. For this, I use a state machine. Write out the state machine, in full, in text, and then implement it directly, one function per state, one function per state transition.

    The SM design doc is the documentation of the order of function calling, it is exhaustive and correct, and allows for straightforward changes in future (at least, as straightforward as possible - it is always a challenge to make changes).

    • euph0ria a day ago

      Would love to understand this better. Is there any example you could point to?

      • casenmgreen a day ago

            init -> success -> red
            init -> failure -> cleanup
        
            red -> success -> red_yellow
            red -> failure -> cleanup
        
            red_yellow -> success -> green
            red_yellow -> failure -> cleanup
        
            green -> success -> yellow
            green -> failure -> cleanup
        
            yellow -> success -> red
            yellow -> failure -> cleanup
        
            cleanup -> done -> finish
        
        init/red/etc are states.

        success/failure/etc are events.

        Each state is a function. The function red() for example, waits for 20 seconds, then returns success (assuming nothing went wrong).

        To start the state machine, initialize the state to "init" and enter a loop. In the loop you call the function for the current state (which makes that state actually happen and do whatever it does); that function returns an event for whatever happened when it ran, and you then call a second function, which updates the state based on the event that just occurred. Keep doing that until you hit the state "finish"; then you're done.
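
        A runnable Rust sketch of that loop, using the traffic-light machine above (the fail-after-a-few-ticks rule is artificial, just so the demo reaches "finish"):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum State { Init, Red, RedYellow, Green, Yellow, Cleanup, Finish }

enum Event { Success, Failure, Done }

// One function per state; here they all just succeed, except that yellow
// "fails" after a few ticks so the demo terminates instead of cycling forever.
fn run_state(s: State, ticks: &mut u32) -> Event {
    *ticks += 1;
    match s {
        State::Cleanup => Event::Done,
        State::Yellow if *ticks > 3 => Event::Failure,
        _ => Event::Success,
    }
}

// The transition table, transcribed directly from the text above.
fn next_state(s: State, e: Event) -> State {
    match (s, e) {
        (State::Cleanup, Event::Done) => State::Finish,
        (_, Event::Failure) => State::Cleanup,
        (State::Init, Event::Success) => State::Red,
        (State::Red, Event::Success) => State::RedYellow,
        (State::RedYellow, Event::Success) => State::Green,
        (State::Green, Event::Success) => State::Yellow,
        (State::Yellow, Event::Success) => State::Red,
        _ => State::Cleanup,
    }
}

fn main() {
    let mut state = State::Init;
    let mut ticks = 0;
    while state != State::Finish {
        let event = run_state(state, &mut ticks);
        state = next_state(state, event);
    }
    assert_eq!(state, State::Finish);
}
```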

        • euph0ria a day ago

          Got it, thanks. But it seemed from your original post that you tend to write state machines a lot more than the usual engineer does - would that be correct? Would you use this in a CRUD REST API, for example?

          • casenmgreen 10 hours ago

            When writing code, the amount of structure depends on the amount of code.

            More and more complex code requires more structure.

            Structure takes time and effort, so we write the minimum amount of structure appropriate for the code. (Code often grows over time, becomes unmanageable through that growth, and then needs more structure, which may require a rewrite to move from the existing structure to a new, fuller one.)

            So with methods for organizing code, we go something like, in order of less to more structure,

            lines of code, functions, libraries, OO libraries, classes

            A state machine is a form of structure, separate from how we organize code, and moderately high cost. I don't often use one, because most of the code I write doesn't need to be particularly rigorous - but for example I did write a mailing list, and that really is used, so it really did have to be correct, so I wrote out the state machine and implemented based on the state machine.

            State machines also help with testing. You can keep track of which states you have tested and which events from each state you have tested.

            I've never written a REST API in my life, so I can't tell you if I would use a state machine for that :-)

  • mcdeltat 2 days ago

    In regards to small functions, I think an important - but not often mentioned - aspect is shared assumptions. You can have many small functions with garbage abstractions that each implicitly rely on the behaviour of the others - therefore the cognitive load is high. Or, you can have many small functions which are truly well-contained, in which case you may well not need to read the implementation. Far too much code falls into the former scenario, IMO.

  • blago a day ago

    > Proponents of small functions argue that you don't have to read more than the signature and name of a function to understand what it does;

    Although this is often the case, the style of the program can change things significantly. Here are a few, not so uncommon, examples where it starts to break down:

    1. When you’re crafting algorithms, you might try to keep code blocks brief, but coming up with precise, descriptive names for each 50-line snippet can be hard. Especially if the average developer might not even fully understand the textbook chapter behind it.

    2. At some point you have to build higher than "removeLastElementFromArray"-type functions. You are not going to get very far skimming domain-specific function names if you don't have any background in that area.

    More examples exist, but these two illustrate the point.

    • K0nserv a day ago

      Both examples stem from not understanding the problem well enough I think. My best work is done when I first write a throwaway spaghetti solution to the problem. Only through this endeavour do I understand the problem well enough to effectively decompose the solution.

      • nuancebydefault 2 hours ago

        You understand your final fine grained code after your 'spaghetti' intermezzo. Others and your future you, probably less so.

        • K0nserv 2 hours ago

          My point is that the factoring and abstractions one produces after the spaghetti intermezzo will be better than a blind stab at them; a greater understanding of the problem helps.

          • nuancebydefault an hour ago

            Agree that intermezzos - even of the spaghetti kind - help understanding.

            I thought this thread was more about (non) maintainability of code consisting of many procedures for each of which names are to be found that will make their usage self-explaining.

            From my experience, simple APIs with complex and often long implementations can be very well suited, as long as those are low on side effects and normally DRY, as opposed to puristically DRY.

  • hinkley a day ago

    Types are a somewhat different dimension. Sort of the classic one-dimensional argument about a two-dimensional problem domain. Which quadrant you're talking about alters whether the arguments support reality or argue with it.

    If understanding a block of code requires knowing a concept that the team feels everyone should know anyway, then it’s not such an imposition. If the code invites you to learn that concept, so much the better. The code is “discoverable” - it invites you to learn more. If the concept is incidental to the problem and/or the team is objectively wrong in their opinion, then you have tribal knowledge that is encroaching on the problem at hand. And whether it’s discoverable or not is neither here nor there. Because understanding the code requires knowing lots of other things, which means either memorization, or juggling more concepts than comfortably fit in short term memory - cognitive overload.

    You know you’ve blown past this point when you finally trace the source of a bad piece of data but cannot remember why you were looking for it in the first place.

    I’m hoping the problem of cognitive load gets more attention in the near future. We are overdue. But aside from people YouTubing code reviews, I’m still unclear what sorts of actionable metrics or feedback will win out in this arena. Maybe expanding code complexity to encompass the complexity of acquiring the values used in the code, not just the local data flow.

  • LtWorf 2 days ago

    I've seen functions called getValue() that were actually creating files on disk and writing stuff.

    Also, even if the function actually does what advertised, I've seen functions that go 4-5 levels deep where the outer functions are just abstracting optional parameters. So to avoid exposing 3 or 4 parameters, tens of functions are created instead.

    I think you do have a point but ideas get abused a lot.

  • sunshowers 2 days ago

    This is absolutely the right way to think about things.

    I like thinking about local reasoning in terms of (borrowing from Ed Page) "units of controversy". For example, I like using newtypes for identifiers, because "what strings are permitted to be identifiers" is a unit of controversy.

  • konschubert a day ago

    The first step in allowing local reasoning is to break your product into subdomains that are as independent as possible.

    For a software company, this means crafting the product ownership of your team such that the teams can act as independently as possible.

    This is where most companies already fail.

    Once this has been achieved, you can follow this pattern on smaller and smaller scales down to individual functions in your code.

  • tdiff a day ago

    > Proponents of small functions argue that you don't have to read more than the signature and name of a function to understand what it does; it's obvious what a function called last that takes a list and returns an optional value does.

    It's also interesting that in comment to the same article many people argue against PR process. I hardly see how else that level of discipline required not to undermine trust in names of small methods can be maintained for any team with more than 3 developers.

  • meehai a day ago

    At work we have a pretty big Python monorepo. The way we scale it is by having many standalone CLI mini apps (about 80 atm), with most of them outputting JSON/Parquet to GCS or BigQuery tables. Inputs are the same.

    I insisted a lot on this Unix-ish philosophy (ish because it's not pipes). It has paid off so far.

    We can test each cli app as well as make broader integration tests.

  • holri 2 days ago

    I do not agree that typing leads to less cognitive load. Typing often leads to more and more complicated code. Dynamically typed code is often shorter and more compact. If dynamically typed code is well written, its function, inputs and outputs are clear and obvious. Clear and easy to understand code is not primarily a matter of typed or not typed code, it is a matter of a great programmer or a poor one.

    • com2kid a day ago

      There is a function. It takes in 4 parameters. One of them is called ID

      Is ID a string, a number, a GUID? Better check the usage within the function.

      Oh, the declaration is `id: number`

      Mystery solved.

      Even better if the language supports subtyping so it is something like id: userID and userID is a subtype of number.

      • holri a day ago

        In a dynamically duck-typed language it should not matter whether an ID is a string, a number or a GUID. The code should work with all of them. The semantically important thing is that this is an identifier. No string, number or GUID data type expresses this true meaning of the value.

        • com2kid a day ago

          It matters a lot even in a duck typed language.

          If there are multiple types of user IDs, I don't want to pass the wrong one into a DB call.

          This is often the case when dealing with systems that have internal IDs vs publicly exposed IDs. A good type system can correctly model which I have a hold of.

          For complex objects proper typing is even more important. "What fields exist on this object? I better check the code and see what gets accessed!"

          Even worse are functions where fields get added (or removed!) to an object as the object gets processed.

          Absolute nightmare. The concept of data being a black box is stupid; the entire point of data is that at some point I'll need to actually use it, which is a pain in the ass to do if no one ever defines what the hell fields are supposed to be lying around.
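
          In Rust terms, the internal-vs-public ID distinction might be sketched like this (names hypothetical): two newtypes over the same underlying number, so passing the wrong one is a compile error rather than a data bug.

```rust
// Two ID newtypes over the same underlying number. They are distinct types,
// so the compiler rejects mixing them up.
#[derive(Debug, PartialEq)]
struct InternalUserId(u64);
#[derive(Debug, PartialEq)]
struct PublicUserId(u64);

// Hypothetical lookup: only accepts the internal ID.
fn load_user(id: &InternalUserId) -> String {
    format!("user #{}", id.0)
}

fn main() {
    let internal = InternalUserId(42);
    let public = PublicUserId(42);
    assert_eq!(load_user(&internal), "user #42");
    // load_user(&public) would not compile: expected &InternalUserId.
    let _ = public;
}
```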

          • holri a day ago

            By naming the variable ID it is crystal clear what the value is. Most of the time an explicit type only adds cognitive load for the reader and limits the universality of the code. At a high abstraction level, most of the time a type is, from a program-logic point of view, an irrelevant machine implementation detail. If a specific duck is required, it is explicitly tested. This makes the code very clear about when the duck type is important and when it is not.

        • anon-3988 17 hours ago

          That's how you get a stray string in a column of integers.

    • K0nserv 2 days ago

      Statically typed code definitely requires more effort to read, but this is not cognitive load. Cognitive load is about how much working memory is required. Statically typed code requires less cognitive load because some of the remembering is outsourced to the source code.

      Statically typed code can lead to more complicated code; it can also accurately reflect the complexity inherent in the problem.

    • mirekrusin 2 days ago

      This is true at smaller scales and flips over on larger scales (larger codebase, dependencies, team/teams sizes).

      • holri 2 days ago

        A function is clear or not. I fail to see how the scale of the code, team, or dependencies is a factor in that.

        • K0nserv a day ago

          I split local reasoning into horizontal and vertical.

          Vertical reasoning is reasoning inside a module or function. Here information hiding and clear interfaces help.

          Horizontal reasoning is reasoning across the codebase in a limited context; adding a new parameter to a public function is a good example. The compiler helps you find and fix all the use sites, and with good ability to reason vertically at each site, even a change like this is simple.

  • larsrc a day ago

    > If someone feels compelled to read every function either the functions are poor abstractions or the reader has trust issues, which may be warranted.

    Or it's open source and the authors were very much into Use The Source, Luke!

  • movpasd a day ago

    I feel that one big way in which engineers talk past each other is in assuming that code quality is an inherent property of the code itself. The code is meaningless without human (and computer) interpretation. Therefore, the quality of code is a function of the relationship between that code and its social context.

    Cognitive load is contextual. `Option<HashSet<UserId>>` is readable to someone knowledgeable in the language (`Option`, `HashSet`) and in the system (meaning of `UserId` -- the name suggests it's an integer or GUID newtype, but do we know that for sure? Perhaps it borrows conventions from a legacy system and so has more string-like semantics? Maybe users belong to groups, and the group ID is considered part of the user ID -- or perhaps to uniquely identify a user, you need both the group and user IDs together?).

    What is the cognitive load of `Callable[[LogRecord, SystemDesc], int]`? Perhaps in context, `SystemDesc` is very obvious, or perhaps not. With surrounding documentation, maybe it is clear what the `int` is supposed to mean, or maybe it would be best served wrapped in a newtype. Maybe your function takes ten different `Callable`s and it would be better pulled out into an polymorphic type. But maybe your language makes that awkward or difficult. Or maybe your function is a library export, or even if it isn't, it's used in too many places to make refactoring worthwhile right now.
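    A hedged sketch of the newtype option for that `int` (all names here are hypothetical stand-ins):

```python
from typing import Callable, NewType

# Hypothetical stand-ins for the types discussed above.
class LogRecord: ...
class SystemDesc: ...

# Naming the int: the signature now says what the number means,
# instead of leaving it to surrounding documentation.
Severity = NewType("Severity", int)
LogScorer = Callable[[LogRecord, SystemDesc], Severity]

def score_by_length(record: LogRecord, system: SystemDesc) -> Severity:
    return Severity(3)  # stub scoring logic

scorer: LogScorer = score_by_length
```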

    I also quite like newtypes for indicating pragmatics, but it is also a contextually-dependent trade-off. You may make calls to your module more obvious to read, but you also expand the module's surface area. That means more things for people writing client code to understand, and more points of failure in case of changes (coupling). In the end, it seems to me that it is less important whether you use a newtype or not, and more important to be consistent.

    In fact, this very trade-off -- readability versus surface area -- is at the heart of the "small vs large functions" debate. More smaller functions, and you push your complexity out into the interfaces and relationships between functions. Fewer large functions, and the complexity is internalised inside the functions.

    To me, function size is less the deciding factor [0], but rather whether your interfaces are real, _conceptually_ clean joints of your solution. We have to think at a system level. Interfaces hide complexity, but only if the system as a whole ends up easier to reason about and easier to change. You pay a cost for both interface (surface area) and implementation (volume). There should be a happy middle.

    ---

    [0] Also because size is often a deceptively poor indicator of implementation complexity in the first place, especially when mathematical expressions are involved. Mathematical expressions are fantastic exactly because they syntactically condense complexity, but it means very little syntactic redundancy, and so they seem to be magnets for typos and oversights.

  • mithametacs a day ago

    Not everything is a functional program though and side effects are important. Types can’t* represent this.

    *Not for practical programs

  • bb88 2 days ago

    The larger problem is things that have global effect: databases, caches, files, static memory, etc. Or protocols between different systems. These are hard to abstract away, usually because of shared state.

    • mrkeen 2 days ago

      Weird, I read that between the lines of the parent's post. Of course local reasoning precludes global effects.

  • peterlada 2 days ago

    There is an issue of reading code that is written by somebody else. If it's not in a common style, the cognitive load of parsing how it's done is an overhead.

    The reason I used to hate Perl was around this, everyone had a unique way of using Perl and it had many ways to do the same thing.

    The reason I dislike functional programming is around the same, you can skin the cat 5 ways, then all 5 engineers will pick a different way of writing that in Typescript.

    The reason I like Python more is that all experienced engineers will eventually gravitate towards the notion of Pythonic code, and I've had colleagues whose code looked identical to how I'd have written it.

    • mrkeen 2 days ago

      Python 2, Python 3? Types or no types?

  • mattmcknight 2 days ago

    Types? "Option<HashSet<UserId>>" means almost nothing to me. A well defined domain model should indicate what that structure represents.

    • anon-3988 17 hours ago

      > A well defined domain model should indicate what that structure represents.

      "Should", but does it? If a function returns Option<HashSet<UserId>> I know immediately that this function may or may not return the set, and if it does return the set, they are unique.

      This is a fact of the program. I may not know "why" or "when" it does what. But as a caller, I can guarantee that I handled every possible code path. I wouldn't get surprised later on because, apparently, this thing can throw an exception, so my lock didn't get released.
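      A minimal sketch of that guarantee in Python terms (hypothetical data; `Optional[set[int]]` stands in for `Option<HashSet<UserId>>`):

```python
from typing import Optional

# Hypothetical query: the users who liked a post, or None if the post
# cannot be liked at all. The Optional return forces the caller to
# handle both outcomes explicitly; the set guarantees uniqueness.
def likers(post_id: int) -> Optional[set[int]]:
    posts = {1: {10, 11, 12}, 2: set()}  # stub data; post 3 has no likes feature
    return posts.get(post_id)

result = likers(1)
if result is None:
    summary = "likes not applicable"
else:
    summary = f"{len(result)} unique likers"
```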

    • jbggs 2 days ago

      it seems like you're just not familiar with the domains defined by those types, or at least the names used here

      • mattmcknight 19 hours ago

        Are you? What would an option for a hash set of userids be? I just don't find that "types" magically solve problems of cognitive load.

        • K0nserv 2 hours ago

          > What would an option for a hash set of userids be?

          As an argument: An optional filter for a query, e.g. "return me posts from these users"

          As a return value: The users who liked a post or nothing if it's not semantically valid for the post to be liked for some reason.

          > I just don't find that "types" magically solve problems of cognitive load.

          Cognitive load is about working memory and having to keep things in it. Without types one only has a name, say "userIds". The fact that it's possible for it to be null and that it's supposed to contain unique values has to be kept in working memory (an increase in cognitive load).

    • bbkane 2 days ago

      Even that means a lot more than `{}`, whose tortured journeys I have to painstakingly take notes on in the source code while I wonder what the heck happened to produce the stack trace...

      • mattmcknight 19 hours ago

        Yes, but it means less than something like UserGroup. I hear you on {} though, I'm currently looking at "e: ".

  • ajuc a day ago

    Last is something that is embarrassingly extractable, which makes it a bad example (you shouldn't write that function anyway in 99% of cases - surely someone wrote it already in the stdlib of your language).

    It's like taking "list.map(x -> x*x)" as a proof that parallelism is easy.

    Most code is not embarrassingly extractable (or at least not at granularity of 3 lines long methods).

  • dataflow 2 days ago

    Absolutely with you on the idea in the abstract, but the problem you run into in practice is that enabling local reasoning (~O(1)-time reading) often comes at the cost of making global changes (say, ~O(n)-time writing in the worst case, where n is the call hierarchy size) to the codebase. Or to put it another way, the problem isn't so much attaining local readability but maintaining it -- it imposes a real cost on maintenance. The cost is often worth it, but not always.

    Concrete toy examples help here, so let me just give a straight code example.

    Say you have the following interface:

      void foo(void (*on_completed)(void));
    
      void callback(void);
    
      void bar(int n)
      {
        foo(callback);
      }
    
    
    Now let's say you want to pass n to your callback. (And before you object that you'd have the foresight to enable that right in the beginning because this is obvious -- that's missing the point, this is just a toy example to make the problem obvious. The whole point here is you found a deficiency in what data you're allowed to pass somewhere, and you're trying to fix it during maintenance. "Don't make mistakes" is not a strategy.)

    So the question is: what do you do?

    You have two options:

    1. Modify foo()'s implementation (if you even can! if it's opaque third party code, you're already out of luck) to accept data (state/context) along with the callback, and plumb that context through everywhere in the call hierarchy.

    2. Just embed n in a global or thread-local variable somewhere and retrieve it later, with appropriate locking, etc. if need be.
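    The two options translate roughly to this Python sketch (toy names; note that in a language with closures option #1 is much cheaper than in the C-style API above, so this only illustrates the shape of the trade-off):

```python
results = []

# Option 1: plumb the context through the call chain. Local reasoning is
# preserved, but every signature between bar() and the callback changes.
def foo_with_context(on_completed, context):
    on_completed(context)

def bar_v1(n):
    foo_with_context(lambda ctx: results.append(("v1", ctx)), n)

# Option 2: stash n in a module-level global. A three-line change, but
# callback() now has a hidden dependency on distant state.
_pending_n = None

def foo(on_completed):
    on_completed()

def callback():
    results.append(("v2", _pending_n))  # spooky action at a distance

def bar_v2(n):
    global _pending_n
    _pending_n = n
    foo(callback)
```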

    So... which one do you do?

    Option #1 is a massive undertaking. Not only is it O(n) changes for a call hierarchy of size n, but foo() might have to do a lot of extra work now -- for example, if it previously used a lock-free queue to store the callback, now it might lose performance as it might not be able to do everything atomically. etc.

    Option #2 only results in 3 modifications, completely independently from the rest of the code: one in bar(), one for the global, and one in the callback.

    Of course the benefit of #1 here is that option #1 allows local reasoning when reading the code later, whereas option #2 is spooky action at a distance: it's no longer obvious that callback() expects a global to be set. But the downside is that now you might need to spend several more hours or days or weeks to make it work -- depending on how much code you need to modify, which teams need to approve your changes, and how likely you are to hit obstacles.

    So, congratulations, you just took a week to write something that could've taken half an hour. Was it worth it?

    I mean, probably yes, if maintenance is a rare event for you. But what if you have to do it frequently? Is it actually worth it to your business to make (say) 20% of your work take 10-100x as long?

    I mean, maybe still it is in a lot of cases. I'm not here to give answers, I absolutely agree local reasoning is important. I certainly am a zealot for local reasoning myself. But I've also come to realize that achieving niceness is quite a different beast from maintaining it, and I ~practically never see people try to give realistic quantified assessments of the costs when trying to give advice on how to maintain a codebase.

    • vacuity a day ago

      Initial implementation and maintenance need to keep design in mind, and there should be more clarity around responsibility and costs of particular designs and how flexible the client is with the design at a given point in time. It's an engineering process and requires coordination.

    • o_nate a day ago

      Add a global variable? Let's not go there, please. Anything would be better than that. In this case I would bite the bullet and change the signature, but rather than just adding the one additional parameter, I would add some kind of object that I could extend later without breaking the call signature, since if the issue came up once, it's more likely to come up again.
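      That extensible-object idea might look like this (a hedged sketch; the context class and fields are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical parameter object: new fields with defaults can be added
# later without breaking existing call sites or the callback signature.
@dataclass
class CompletionContext:
    n: int
    retries: int = 0  # added in a later change; old callers unaffected

def callback(ctx: CompletionContext) -> str:
    return f"n={ctx.n}, retries={ctx.retries}"

def foo(on_completed, ctx: CompletionContext):
    return on_completed(ctx)
```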

  • TZubiri a day ago

    >I've been thinking about the notion of "reasoning locally" recently. Enabling local reasoning is the only way to scale software development past some number of lines or complexity. When reasoning locally, one only needs to understand a small subset, hundreds of lines, to safely make changes in programs comprising millions.

    Have you never heard of the word of our lord and saviour oop, or functions? It's called encapsulation.

    You might have learned it through prog langs as it is an embedded ideal

    • K0nserv a day ago

      As another sibling comment pointed out there are many tools that enable local reasoning, encapsulation is one such tool.

      I'm not claiming the idea is novel, just that I haven't encountered a name for it before.

      • TZubiri a day ago

        I'm not saying that encapsulation is a tool for local reasoning, I'm saying they are the same concept.

        How is the concept of local reasoning distinct from that of encapsulation?

        • galangalalgol a day ago

          I think most of us associate the word encapsulation with OOP nightmare code that spread mutable state across many small classes that often inherited from one another and hid the wrong state. Stateless and low state are the reaction to that. If you expand the term to include those aids to local reasoning then many more might agree with you.

Aurornis 2 days ago

> Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.

These hard rules may be useful when trying to instill good habits in juniors, but they become counterproductive when you start constraining experienced developers with arbitrary limits.

It’s really bad when you join a team that enforces rules like this. It almost always comes from a lead or manager who reads too many business books and then cargo cults those books on to the team.

  • quesomaster9000 2 days ago

    This is the bane of my existence at the moment after ~20 years into my career, and it frustrates me when I run into these situations when trying to get certain people to review pull requests (because I'm being kind, and adhering to a process, and there is really valuable feedback at times). But on the whole it's like being dragged back down to working at a snail's pace.

    - Can't refactor code because it changes too many files and too many lines.

    - Can't commit large chunks of well tested code that 'Does feature X', because... too many files and too many lines.

    - Have to split everything down into a long sequence of consecutive pull requests that becomes a process nightmare in its own right

    - The documentation comments get nitpicked to death with mostly useless comments about not having periods at the ends of lines

    - End up having to explain every little detail throughout the function as if I'm trying to produce a lecture, things like `/* loop until not valid */ while (!valid) {...` seemed to be what they wanted, but to me it made no sense whatsoever to even have that comment

    This can turn a ~50 line function into a 3 day process, a couple of hundred lines into a multi-week process, and a thousand or two line refactor (while retaining full test coverage) into a multi-month process.

    At one point I just downed tools and quit the company; the absurdity of it all completely drained my motivation, killed progress & flow and led to features not being shipped.

    Meanwhile with projects I'm managing I have a fairly good handle on 'ok this code isn't the best, but it does work, it is fairly well tested, and it will be shipped as the beta', so as to not be obstinate.

    • sarchertech 2 days ago

      After 20 years of doing this, I’m convinced that required PR reviews aren’t worth the cost.

      In the thousands of pull requests I’ve merged across many companies, I have never once had a reviewer catch a major bug (a bug that is severe enough that if discovered after hours, would require an oncall engineer to push a hot fix rather than wait for the normal deployment process to fix it).

      I’ve pushed a few major bugs to production, but I’ve never had a PR reviewer catch one.

      I’ve had reviewers make excellent suggestions, but it’s almost never anything that really matters. Certainly not worth all the time I’ve spent on the process.

      That being said, I’m certainly not against collaboration, but I think required PR reviews aren’t the way to do it.

      • dullcrisp 2 days ago

        The point of code reviews isn’t to catch bugs. It’s for someone else on the team to read your code and make sure they can understand it. If no one else on your team can understand your code, you shouldn’t be committing it to the repository.

        • TeMPOraL a day ago

          Maybe. But then, sure, I can understand the code you wrote - on a syntactic/operational level. This adds Foos to bar instead of baz, and makes Quux do extra Frob() call. Whatever, that's stupid stuff below junior level. What would actually matter is for me to understand why you're doing this, what it all means. Which I won't, because you're doing some code for symbolic transformation of equations for optimizing some process, and I'm doing data exchange between our backend and a million one-off proprietary industrial formats, and we only see each other on a team call once a week.

          I'm exaggerating, but only a little. Point is, in a deep project you may have domain-specialized parts, and those specialties don't overlap well. Like, ideally I'd take you aside for an hour to explain the 101 of the math you're doing and the context surrounding the change, but if neither you nor me have the time, that PR is getting a +2 from me on the "no stupid shit being done, looks legit code-wise; assuming you know your domain and this makes sense" basis.

        • klabb3 a day ago

          HN moment. I’ve never seen in practice that someone says ”I don’t understand it” and the author says ”good point, I will simplify it”.

          Rather, the opposite. I often saw people make unnecessary complex or large PRs that were too much workload to review, leading the reviewer to approve, on the grounds of ”seems like you know what you’re doing and tbh I don’t have half a day to review this properly”.

          Code review is a ritual. If you ask why we have it people will give you hypothetical answers more often than concrete examples. Personally I’m a proponent of opt-in CRs, ie ask for a second pair of eyes when your spidey senses tell you.

          • throw-qqqqq a day ago

            Our juniors write horribly complex code that senior devs have to ask to simplify. This happens all the time. And the juniors simplify and thank us for teaching and mentoring. It’s a big reason we do reviews. So we can control how dirty the code is before merging and so we can grow each other with constructive feedback. Sometimes it’s also just “LGTM” if nothing smells.

            90% of comments in my team’s PRs come with suggestions that can be applied with a click (we use GitLab). It requires almost no effort to apply suggestions and it’s often not much extra work for reviewers to explain and suggest a concrete change.

            I agree that reviews should be used pragmatically.

          • delusional a day ago

            Get (or create) better colleagues. It's usually pretty easy to identify if people are approving pull requests that they don't understand. Pull them aside and have a professional talk about what a pull request review is. People want to do good, but you have to make it clear that you value their opinion.

            If you treat the people around you as valuable collaborators instead of pawns to be played to fulfill your processes, your appreciation for reviews will transform. Remember that it's their work too.

          • Capricorn2481 a day ago

            > I often saw people make unnecessary complex or large PRs that were too much workload to review, leading the reviewer to approve, on the grounds of ”seems like you know what you’re doing and tbh I don’t have half a day to review this properly”

            That just seems like company wide apathy to me. Obviously you have to make an effort to read the code, but there are lots of ways developers can overcomplicate things because they were excited to try a pattern or clever solution. It doesn't make them bad devs, it's just an easy trap to fall into.

            These should not pass a code review just because the code "works." It's totally acceptable to say "we're not gonna understand this in 3 months the way it's written, we need to make this simpler" and give some suggestions. And usually (if you're working with people that care about the workload they make for others) they will stop after a few reviews that point this out.

            We've done this at our company and it's helped us immensely. Recognizing whether the code is unnecessarily complex or the problem is inherently complex is part of it, though.

        • sarchertech a day ago

          I watched pre merge code reviews become a requirement in the industry and catching bugs was almost always the #1 reason given.

          The times I've seen a 2nd set of eyes really help with the understandability of code, it was almost always collaboration before or while the code was being written.

          I would estimate something like 1 out of 100 PR reviews I've seen in my life were really focussed on improving understandability.

        • deeviant a day ago

          I feel if you ask 5 people what "the point" of code review is, you'd get 6 different answers.

          • zimpenfish a day ago

            And a 7th complaining about the formatting of the question.

      • kevmo314 2 days ago

        Wow someone who finally has this same unpopular opinion as I do. I'm a huge fan of review-optional PRs. Let it be up to the author to make that call and if it were really important to enforce it would be more foolproof to do so with automation.

        Unfortunately every time I've proposed this it's received like it's sacrilegious but nobody could tell me why PR reviews are really necessary to be required.

        The most ironic part is that I once caught a production-breaking bug in a PR while at FAANG and the author pushed back. Ultimately I decided it wasn't worth the argument and just let it go through. Unsurprisingly, it broke production but we fixed it very quickly after we were all finally aligned that it was actually a problem.

        • sarchertech a day ago

          >Unfortunately every time I've proposed this it's received like it's sacrilegious but nobody could tell me why PR reviews are really necessary to be required.

          Obvious signs of cargoculting in my opinion.

        • mattmanser a day ago

          I'll bite.

          To catch stupid mistakes like an extra file, an accidental debug flag, a missing compiler hint that has to be added to migration scripts etc.

          To ensure someone who doesn't quite understand the difference between dev and production build pipelines doesn't break it.

          To ensure a certain direction is being followed when numerous contractors are working on the code. For example a vague consistency in API designs, API param names, ordering, etc.

          To check obvious misunderstandings by juniors and new hires.

          To nix architect astronauts before their 'elegant' solution for saving a string to a database in 500 lines gets added.

          To check the code is actually trying to solve the ticket instead of a wrong interpretation of the ticket.

          To get introduced to parts of the codebase you haven't worked on much.

          But as with anything you get from it what you put in.

          • sarchertech 20 hours ago

            None of those are good reasons why PR reviews are necessary. They are examples of things that it's theoretically possible a PR review might catch. But there's no information there about how likely those things are to be caught.

            Without that absolutely critical information, no cost benefit analysis is possible.

            In my experience across many companies, PR reviews almost never catch any of those bugs or bring any of those benefits.

      • JKCalhoun a day ago

        I agree with you. If you give each dev a kind of sand-box to "own" within a project they'll learn to find their own bugs, write both simple and robust code, lots of param checking — grow as an engineer that way.

      • ozim 2 days ago

        Unfortunately for compliance reasons PRs are required.

        Funny part is that not even in highly regulated markets.

        ISO 27001 or SOC 2 are pretty much something every software company will have to do.

        • sarchertech a day ago

          SOC2 doesn't require code reviews. SOC2 is just a certification that you are following your own internal controls. There's nothing that says required PR reviews have to be one of your internal controls. That's just a common control that companies use.

          • ozim a day ago

            I would argue that "common control that companies use" falls under "industry standard" and I would say it would make it harder to pass certification without PR reviews documented on GitHub or something alike. So it does not require but everyone expects you to do so :)

            • sarchertech 20 hours ago

              The reason that this is common is that a company hires a SOC2 consultant who tells them that PR reviews are required, despite the fact that this is a complete fabrication.

              Locking yourself into an enormously expensive process with no evidence of its efficacy just because you don't want to read up on the process yourself or push back on a misinformed auditor is a terrible business decision.

        • __turbobrew__ a day ago

          Yes, this is why we have required PR reviews at my company. It is to meet compliance controls.

          We recently talked about not requiring reviews for people at L5 and above, but it ultimately got shut down due to compliance.

        • kevmo314 a day ago

          Curious because I am not familiar: are PRs required or are PR reviews required?

          • ozim a day ago

            Well "Peer Review" or "Code Review" is required - pull requests are easiest way to have it all documented with current state of art tooling. Otherwise you have to come up with some other way to document that for purpose of the audit.

      • tdrz 2 days ago

        The fact that there is a PR review process in place makes committers try harder. And that's good!

        • gizzlon 2 days ago

          Or try less because they have to spend time doing PR reviews

          • queuep a day ago

            Yes, same for QA sometimes.. the dev sets the bar lower since QA can test it. Just makes a bunch of back and forth. And when stuff breaks nobody feels responsible.

      • zimpenfish a day ago

        > I have never once had a reviewer catch a major bug

        Just in 2024, I've had three or four caught[0] (and caught a couple myself on the project I have to PR review myself because no-one else understands/wants to touch that system.) I've also caught a couple that would have required a hotfix[1] without being a five-alarm alert "things are down".

        [0] including some subtle concurrency bugs

        [1] e.g. reporting systems for moderation and support

      • sneak a day ago

        Required PR reviews means that if someone steals your credentials, or kidnaps your child, you can't get something into production that steals all the money without someone else somewhere else having to push a button also.

        It's the two-person rule, the two nuclear keyswitches.

        • sarchertech a day ago

          This is definitely not why PR reviews are required. Most companies don't really know why they require them, but I've definitely never heard one say it was because they were afraid of malicious code from stolen credentials.

          There are so many other ways to inject malicious code with stolen credentials that don't require a PR, in every production environment I've ever worked in. There's much lower-hanging fruit that leaves far fewer footprints.

      • jasonlotito a day ago

        Allowing anyone to promote anything to production without any other eyes on it is problematic. Not realizing this is extremely telling.

        The presumed claim that no one at the company benefited from a second set of eyes is amazing, too.

        • sarchertech a day ago

          >Allowing anyone to promote anything to production without any other eyes on it is problematic.

          In my experience the people who are promoting things to production that shouldn't be will find a way to do it. They'll either wear down the people who want to stop it, or they'll find someone else to approve it who doesn't know why it shouldn't be approved or doesn't care.

          My hypothesis is that requiring any 2nd random engineer in the company to approve production code doesn't provide enough value to justify the cost.

          There may be other controls that are worth the cost.

          However, our industry has been shipping software for a long time without this requirement, and I've seen no evidence that the practice has saved money, reduced the number of bugs, or improved software quality by any other metric. I think it's time we examine the practice instead of taking it on faith that it's a net benefit.

          >Not realizing this is extremely telling.

          Nice way of saying, I don't agree with you so I must be an idiot.

        • jvans a day ago

          but there isn't actually a second set of eyes because the second set of eyes you're thinking about is complaining about formatting or slamming the approve button without actually looking

    • charlie0 2 days ago

      I'm one of the rare individuals who really tries to review code and leave helpful comments. I've been on the receiving end of really big PRs and can say I understand why you're being told to break things up into smaller chunks.

      Most of the devs who submit large PRs just don't have a good grasp of organizing things well enough. I've seen this over and over again and it's due to not spending enough time planning out a feature. There will be exceptions to this, but when devs keep doing it over and over, it's the reviewer's job to reject it and send it back with helpful feedback.

      I also understand most people don't like the friction this can create, and so you end up with 80% of PRs being rubber stamped and bugs getting into production because the reviewers just give up on trying to make people better devs.

      • sneak a day ago

        The reviewer's job is primarily to ensure business continuity, and only marginally to make people better devs.

        • fmbb a day ago

          When I review code I never think I am there to make people better devs.

          I’m reviewing the code because I don’t want shit code merged into the code base I am responsible for operating. I’m going to be the one debugging that. Don’t just merge shit you feel like merging.

    • NotBoolean 2 days ago

      I don’t have your experience but I personally think some of this feedback can be warranted.

      > Can't refactor code because it changes too many files and too many lines.

      This really depends on the change. If you are just doing a mass rename like updating a function signature, fair enough, but if you're changing a lot of code it's very hard to review. Lots of cognitive load on the reviewer, who might not have the same understanding of the codebase as you.

      > Can't commit large chunks of well tested code that 'Does feature X', because... too many files and too many lines.

      Same as the above: reviewing is hard, and more code means people get lazy and bored. Just because the code is tested doesn’t mean it’s correct; it just means it passes tests.

      > Have to split everything down into a long sequence of consecutive pull requests that become a process nightmare in its own right

      This is a planning issue; if you correctly size tickets, you aren’t going to end up in messy situations as often.

      > The documentation comments gets nitpicked to death with mostly useless comments about not having periods at the ends of lines

      Having correctly written documentation is important. It can live a long time, and if you don’t keep an eye on it, it can become a mess. Ideally you should review it yourself before submitting to avoid these issues.

      > End up having to explain every little detail throughout the function as if I'm trying to produce a lecture, things like `/* loop until not valid */ while (!valid) {...` seemed to be what they wanted, but to me it made no sense what so ever to even have that comment

      I definitely agree with this one. Superfluous comments are a waste of time.

      Obviously this is just my opinion and you can take things too far, but I do think that making code reviewable (by making it small) goes a long way. No one wants to review 1000s of lines of code at once. It’s too much to process, and people will do a worse job.

      Happy to hear your thoughts.

      • lazyasciiart 2 days ago

        > This is planning issue, if you correctly size tickets you aren’t going to end up in messy situations as often.

        No, it’s “this refactor looks very different from the original code, because the original code thought it was doing two different things, and it’s only by stepping through it with real customer data that you realize that with the right inputs (not documented) it could do a third thing (not documented) that had very important “side effects” and was a no-op in the original code flow. Yea, it touches a lot of files. Ok, yea, I can break it up step by step, and wait a few days between approvals for each of them so that you never have to actually understand what just happened”.

        • withinboredom 2 days ago

          so, it's not just a refactoring then; it's also bug fixes + refactoring. In my experience, those are the worst PRs to review. Either just fix the bugs, or just refactor it. Don't do both because now I have to spend more time checking the bugs you claim to fix AND your refactoring for new bugs.

          • rcxdude 2 days ago

            There are certainly classes of bugs for which refactoring is the path of lowest resistance

            • edflsafoiewq 2 days ago

              The most common IME are bugs that come from some wrong conceptual understanding underpinning the code. Rewriting the code with a correct conceptual understanding automatically fixes the bugs.

              • ludston 2 days ago

                The classic example of this is concurrency errors or data corruption related to multiple non-atomic writes.
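                A rough sketch of the idea (Python here purely for illustration; the class names are made up): the buggy version comes from thinking of an increment as one operation, and rewriting with the correct read-modify-write model fixes the race, rather than patching around it.

```python
import threading

# Buggy mental model: "self.value += 1" looks like one step, but it is
# read -> add -> write, so concurrent threads can interleave and lose updates.
class RacyCounter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1  # non-atomic: lost updates under contention

# Correct mental model: the read-modify-write is one critical section.
class SafeCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # the whole read+add+write happens atomically
            self.value += 1

def hammer(counter, n=100_000):
    for _ in range(n):
        counter.increment()

safe = SafeCounter()
threads = [threading.Thread(target=hammer, args=(safe,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert safe.value == 400_000  # no lost updates
```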

            • t-writescode 2 days ago

              And there are multi-PR processes that can be followed to most successfully convert those changes in a comprehensible way.

              It'll often include extra scaffolding and / or extra classes and then renaming those classes to match the old classes' name after you're done, to reduce future cognitive load.

              • rcxdude a day ago

                I'm unconvinced that adding extra code churn in order to split up a refactor that fixes bugs into a bugfix and a refactor is worthwhile

                • withinboredom a day ago

                  One metric I like to give my team is to have any new PR start a review in less than 15 minutes and be completed within 15 minutes. So, the longest you should wait is about 30 minutes for a review. That means teams either go "fuck it" and rubber stamp massive PRs -- which is a whole different issue -- or they take it seriously and keep PRs small to get their PRs reviewed in less than 30 minutes.

                  In most cases where I see responses like this, they're not surprised to wait hours or days for a PR review. In that case, it makes sense to go big, otherwise you'll never get anything done. If you only have to wait half an hour, max, for a PR review; the extra code churn is 1000% worth it.

                  • t-writescode a day ago

                    This is where my stance is.

                    As a developer, I want my PRs to actually be reviewed by my coworkers and to have issues caught as a second layer of defense, etc.

                    As a reviewer, I effectively stopped approving things I couldn't give at least a cursory, reasonable glance (and tried to encourage others to follow suit because if we're not reviewing things, why not just push directly to main).

                    As a consequence, I have:

                      * tried to review most things within like half an hour of their announcement
                        in the shared MR channel
                    
                      * requested a pair programming session and offered to do a pair programming
                        session for any large and semi-or-fully automated refactoring session,
                        like running a linter or doing a multi-file variable rename
                        (the pair programmer immediately comments on and approves the MR when it
                        appears)
                    
                      * tried to limit my PRs to approximately 400 lines (not a rigid rule)
                    
                    There were some specific instances of people not liking the "you must pair program if you're going to touch 400 files in one PR" requirement; but otherwise, I would like to think those on my team liked the more regular PRs, more people doing the PRs, etc, that resulted from this and some healthy culture changes.

                    I would also like to feel like the more junior devs were more willing to say anything at all in the PRs because they could follow the change.

                    • withinboredom a day ago

                      I’ve seen this and variations done by teams to implement the metric. Usually, the “biggest” friction comes from “how do we know a PR needs to be reviewed within the time frame?” To which I always want to answer: “you have a mouth, put noises through it.” Sigh, sometimes I miss the military… anyway, toxic behavior aside, this is usually the biggest thing. I have to remind them that they go get coffee or smoke at least every hour, but rarely at the same time; so maybe then might be a good time to just do a quick check for an open PR. Or turn on notifications. Or if it’s urgent, mention it in the dev team channel.

                      But yeah, it’s hard to get the culture rolling if it isn’t already in place nor has anyone in the company worked with a culture like that.

                  • rcxdude a day ago

                    I'm all for low-latency reviews, but this target seems crazy: a perfect recipe for a lot of apparent activity for little actual progress. Maybe it depends on the project, but for a lot of projects 15 minutes of review time means you basically are only going to accept trivial changes.

                    • t-writescode a day ago

                      As it turns out, most of the work that most developers do is updating or enhancing CRUD apps. There's already a plan and an intent that just needs to be typed out.

                      I've found 15-30 minutes to be plenty of time to review about a day's worth of code. It's enough time to process what the code is doing and iterate over the tests, in general.

                      Here's a scary thought: if something small takes 15-30 minutes to appropriately process ... how much longer do *large* changes take? Can someone keep all that in their mind that whole time to comprehend and process a huge change?

                      And a better question, will they?

                    • withinboredom a day ago

                      > 15 minutes of review time means you basically are only going to accept trivial changes.

                      Um, yes. This is 100% the point. There is no amount of refactoring, bug fixing, or features that cannot be expressed as a chain of trivial changes.

                      What you usually see happen is that, instead of spending a week experimenting with 15 different refactors, an engineer opens a PR with what they think they're going to try first. Other engineers point out how they had tried that before and it didn't work; but maybe this other way will. So, they end up "working together" on the refactor instead of one developer getting lost in the sauce for a week seeing what sticks to the wall.

                      In essence, about the same amount of time is spent; but the code is higher quality and there are no architecture reviews during code reviews (which is another rule that should exist on a team -- architecture reviews should happen before a single line of code is touched).

          • lazyasciiart 21 hours ago

            No, I very deliberately did not describe any bug fixes.

        • justatdotin 2 days ago

          > only by stepping through it with real customer data that you realized with the right inputs (not documented) it could do a third thing (not documented) that had very important “side effects” and was a no-op in the original code flow

          sounds like the 'nightmare' was already there, not in the refactor. First step should be some tests to confirm the undocumented behaviour.

          Some of your complaints seem to be about peer review ('approval'). I found my work life improved a lot once I embraced async review as a feature, not a bug.

          As for 'break it up step by step' - I know how much I appreciate reviewing a feature that is well presented in this way, and so I've got good at rearranging my work (when necessary) to facilitate smooth reviews.

          • lazyasciiart 21 hours ago

            > sounds like the 'nightmare' was already there, not in the refactor

            I admit that I am pretty allergic to people who avoid working with imperfect code.

        • grey-area 2 days ago

          The way I normally approach this is one big PR for context, and then break it into lots of small ones for review.

          • jaredsohn 2 days ago

            I've found processes like this to work better, too. Basically, the one big pr is like building a prototype to throw away. And the benefit is it has to get thrown away because the PR will never pass review.

          • F-W-M 2 days ago

            A PR with self-contained smaller commits would be possible as well.

            • t-writescode 2 days ago

              Yes, though it does depend on how good the commenting system is; and, for something like that, you're still probably going to want a meeting to walk people through such a huge change.

              And you'd better hope you're not squashing that monstrous thing when you're done.

      • quesomaster9000 2 days ago

        I do object to the notion of something being a planning issue when you're talking about a day's worth of work.

        Implement X, needs Y and Z, ok that was straightforward, also discovered U and V on the way and sorted that out, here's a pull request that neatly wraps it up.

        Which subsequently gets turned into a multi-week process, going back & forth almost every day, meaning I can't move on to the next thing, meanwhile I'm looking at the cumulative hourly wages of everybody involved and the cost is... shocking.

        Death by process IMHO.

        • bspammer 2 days ago

          > Implement X, needs Y and Z, ok that was straightforward, also discovered U and V on the way and sorted that out, here's a pull request that neatly wraps it up

          This sounds very difficult to review to be honest. At a minimum unrelated changes should be in their own pull request (U and V in your example).

          • tacitusarc 2 days ago

            I work as a tech lead, so I get a lot of leeway in setting process. For small PRs, we use the normal “leave comments, resolve comments” approach. For large PRs, we schedule 30m meetings, where the submitter can explain the changes and answer questions, and record any feedback. This ensures everyone is on the same page with the changes, gives folks a chance to rapidly gather feedback, and helps familiarize devs who do not work in that area with what is going on. If the meeting is insufficient to feel like everyone is on the same page and approves the changes, we schedule another one.

            These are some of the best meetings we have. They are targeted, educational, and ensure we don’t have long delays waiting for code to go in. Instead of requiring every PR to be small, which has a high cost, I recommend doing this for large/complex projects.

            One additional thing to note on small PRs: often, they require significant context, which could take hours or even days, to be built up repeatedly. Contrast that with being able to establish context, and then solve several large problems all at once. The latter is more efficient, so if it can be enabled without negative side effects, it is really valuable.

            I want my team to be productive, and I want to empower them to improve the codebase whenever they see an opportunity, even if it is not related to their immediate task.

            • quesomaster9000 2 days ago

              One minor piece of insight from me is about release management vs pull-requests.

              As you say it's much easier to schedule a 30 minute meeting, then we can - with context - resolve any immediate nitpicks you have, but we can also structure bigger things.

              'Would this block a release?'

              'Can we just get this done in the PR and merge it'

              'Ok, so when it's done... what is the most important thing that we need to document?'

              Even after it's merged, it's going to sit in the repo for a while until we decide to hit the 'release' button. This lets people defer stuff to work on next and defines a clear line of 'good enough'.

          • shakna 2 days ago

            How do you rework a core process, then? If you rework a major unit that touches just about everything... Sharding something like that can break the actual improvement it is trying to deliver.

            Like... increase the performance of a central VM. You'll touch every part of the code, but probably also build a new compiler analysis system. The system is separate from the existing code, but useless without the core changes. Separating the two can ruin the optimisation meant to be delivered, because the context is no longer front and center, allowing more quibbling to degrade the changes.

          • pbh101 2 days ago

            Agree. Another item here that is contextual: what is the cost of a bug? Does it cost millions, do we find that out immediately, or does it take months? Or does it not really matter, and when we find the bug it will be cheap to fix? The OP joining a new company might not have the context that existing employees have about why we’re being cautious/clear about what we’re changing, as opposed to smuggling refactors into the same PR as a feature change.

            I’m going to be the guy that is asking for a refactor to be in a separate commit/PR from the feature and clearly marked.

            It doesn’t justify everything else he mentioned (especially the comments piece) but once you get used to this it doesn’t need to extend timelines.

        • justatdotin 2 days ago

          Yes, wrapping other discoveries into your feature work is a planning issue that might impact on the review burden.

      • callc 2 days ago

        > This is planning issue, if you correctly size tickets you aren’t going to end up in messy situations as often.

        I think the underlying issue is what counts as an appropriate “unit of work”. The parent commenter may want to ship a complete feature in one MR; ticketing-obsessed people will have some other metric. The merge process may be broken in this respect. I would rather explain the changes to the reviewer to bring them up to speed and reduce their cognitive load.

        • gjadi 2 days ago

          This. The solution to long, multi-round MR reviews is a single pair review session where most of the big-picture aspects can be addressed immediately, verbally discussed, and challenged.

          IMHO it is the same as chat. If talking about an issue over mail or chat takes more than 3-5 messages, trigger a call to solve it face to face.

      • 8note 2 days ago

        Code reviews that are too small are, I think, worse than ones that are too big, and they let through more bugs.

        10 different reviewers can each look at a 100-line change out of the 1000-line total change, but each miss how the changes work together.

        They're all lying by approving, since they don't have the right context to approve.

    • flakes 2 days ago

      > The documentation comments gets nitpicked to death with mostly useless comments about not having periods at the ends of lines

      > End up having to explain every little detail throughout the function

      For these cases I like to use the ‘suggest an edit’ feature on gitlab/github. Can have the change queued up in the comments and batch commit together, and takes almost no additional time/effort for the author. I typically add these suggestion comments and give an approve at the same time for small nitpicks, so no slow down in the PR process.

      • F-W-M 2 days ago

        A good process would be to just push the proposal to the branch under review.

        • flakes 2 days ago

          I still want to let the author have the final say on if they decide to accept or reject the change, or modify it further. Editing the branch directly might cause some rebasing/merge conflicts if they’re addressing other peoples comments too, so I don't typically edit their working branch directly unless they ask me to.

    • notShabu 2 days ago

      there is huge incentive for people who don't know how to code/create/do-stuff to slow things down like this b/c it allows them many years of runway at the company.

      they are almost always cloaked in virtue signals.

      almost every established company you join will already have had this process going for a long time.

      doing stuff successfully at such a company is dangerous to the hierarchy and incurs an immune response to shut down or ostracize the doing-of-stuff successfully so the only way to survive or climb is to do stuff unsuccessfully (so they look good)

    • spion 2 days ago

      Indeed, cognitive load is not the only thing that matters. Non-cognitive toil is also a problem and often enough it doesn't get sufficient attention even when things get really bad.

      We do need better code review tools though. We also need to approach that process as a mechanism of effectively building good shared understanding about the (new) code, not just "code review".

    • lifeisstillgood 2 days ago

      I am trying my best to build in an inordinate amount of upfront linting and automated checks just to avoid such things - and then I still need to do a roadshow, or lots of explanations- but that’s probably good.

      But the good idea is to say “we all have the same brutal linting standards (including full stops in docs!)”, so hopefully the human linter will actually start reading the code for what it is, not what it says.

      • whstl 2 days ago

        I'm also a fan of linting everything. Custom linter rules ftw.

        This and documenting non-lintable standards so that people are on the same page ("we do controllers like this").

        This is how I like to build and run my teams. This makes juniors so much more confident because they can ship stuff from the get go without going through a lengthy nitpicky brutal review process. And more senior devs need to actually look at code and business rules rather than nitpicking silly shit.
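        To show how cheap a custom rule can be, here's a sketch (a hypothetical rule, written against Python's stdlib `ast` module rather than any particular linter framework): it flags stray `print` calls so no human has to nitpick them in review.

```python
import ast

def find_print_calls(source: str) -> list[int]:
    """Return the line numbers of bare print() calls in the source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "print"
    ]

# The rule runs on text, so it never argues: it just reports line numbers.
code = "def handler(req):\n    print(req)\n    return req\n"
assert find_print_calls(code) == [2]
```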

        • t-writescode 2 days ago

          > This makes juniors so much more confident because they can ship stuff from the get go without going through a lengthy nitpicky brutal review process.

          I had not considered that linters could greatly help new developers in this way, especially if you make it a one-button linting process for all established development environments.

          Thanks for the insight! I will use this for the future.

      • justatdotin 2 days ago

        if a colleague wants to argue over placement of a curly boy, I'll fight to the death.

        if it's a linter, I shrug and move on.

    • MarkMarine 2 days ago

      I’m 15 years in and I feel basically the same. I end up making a feature or change, then going back and trying to split it into chunks that are digestible to my colleagues. I’ve got thousands of lines of staged changes that I’m waiting to drip out to people at a digestible pace.

      I yearn for the early stage startup where every commit is a big change and my colleagues are used to reviewing this, and I can execute at my actual pace.

      It’s really changed the way I think about software in general, I’ve come around to Rich Hickey’s radically simple language Clojure, because types bloat the refactors I’m doing.

      I’d love to have more of you where I work, is there some way I can see your work and send some job descriptions and see if you’re interested?

      • withinboredom 2 days ago

        > I end up making a feature or change, then going back and trying to split it into chunks that are digestible to my colleagues.

        If you are doing this AFTER you've written the code, it is probably way easier to do it as you go. It's one thing if you have no idea what the code will look like from the beginning -- just go ahead and open the big PR and EXPLAIN WHY. I know that I'm more than happy to review a big PR if I understand why it has to be big.

        I will be annoyed if I see a PR that is a mix of refactoring, bug fixes, and new features. You can (and should) have done those all as separate PRs (and tickets). If you need to refactor something, refactor it, and open a PR. It doesn't take that long and there's no need to wait until your huge PR is ready.

        • quesomaster9000 2 days ago

          Solving creative problems is often iterative, and one things I'm very concerned about when doing engineering management is maintaining momentum and flow. Looking at latency hierarchies is a really good example, you have registers, then cache, then memory, SSD, network etc. and consulting with another human asynchronously is like sending a message to Jupiter (in the best case).

          So, with an iterative process, the more times you introduce (at best) hour long delays, you end up sitting on your arse twiddling your thumbs doing nothing, until the response comes back.

          The concept of making PRs as you go fails to capture one of the aspects of low-latency problem solving, which is that you catch a problem, you correct it and you revise it locally, without exiting that loop. Which is problematic because not only have you put yourself in a situation where you're waiting for a response, but you've stopped half-way through an unfinished idea.

          This comes back to 'is it done', a gut feel that it's an appropriate time to break the loop and incur the latency cost, which for every developer is different and is something that I have grown to deeply trust and adjust to for everybody I work with.

          What I'm getting at is the iterative problem solving process often can't be neatly dissected into discrete units while it's happening, and after we've reached the 'doneness' point it takes much more work to undo part of your work and re-do it than it took to do originally, so not only do you have the async overhead of every interaction, but you have the cognitive burden of untangling what was previously a cohesive unit of thought - which again is another big time killer

          • withinboredom 2 days ago

            What I mean is, you make your commit, cherry pick it over to the main branch, and open a draft pr. It doesn't break your flow, it doesn't stop anything, and is pretty quick. It also gives you a quick gut-check to see the PR; if you think your team members won't understand "why" it needs to be refactored, then you have one of two problems:

            1. your refactoring is probably going in the wrong direction. Team members will be able to help here more than ever. Let them bikeshed, but don't stop working on your main refactor yet. Revisit later and integrate their changes.

            2. the PR is too small. it will have to be part of a larger PR.

            In my experience, people tend to have the first problem, and not the second one, but they think they have the second one. There are many of these "massive refactoring" PRs I've reviewed over the last 20 years where the refactoring makes the code worse, overall. Why? Because refactoring towards a goal (implementing a feature, fixing a bug, etc.) doesn't have the goal refactoring should have: improving code maintainability. So, the refactored code is usually LESS maintainable, but it does what they wanted.

        • __MatrixMan__ 2 days ago

          If you make refactor PRs as you go, do you end up merging refactors toward a dead end and then--once you realize it's a dead end--merging even more refactors in the other direction?

          I usually wait until I have the big PR done and then merge refactors towards it, because then at least I know the road I'm paving has a workable destination.

          • t-writescode 2 days ago

            This is why I design the heckin' huge change at the start, and then cherry pick the actual change (and associated tests) into a ton of smaller PRs, including "refactor here", "make this function + tests", "make this class + tests", "integrate the code + tests", and so on, as many times as necessary to have testable and reviewable units of code.

            If I went about and made a ton of changes that all went into dead ends, honestly, I would get pretty demoralized and I think my team would get annoyed, especially if I then went through and rolled back many of those changes as not ending up being necessary.

      • hellisothers 2 days ago

        These same people also want to see your GitHub history filled with deep green come review time. I start to wonder if they think high levels of GitHub activity is a proxy of performance or if it’s a proxy of plying the game the way they insist you play.

        • MarkMarine 2 days ago

          Dunno where you get that from, but that was not my intent and is not a metric I use to judge who I’d like to be my coworkers.

    • tdiff a day ago

      As a reviewer I've seen numerous examples of PRs that were basically out of sync with the rest of the project, did not solve the problem they were supposed to solve, or added buggy or unmaintainable code.

      Arguments like "but it works in the majority of cases" are a way to delegate fixing issues to somebody else later. Unless no one will be using that code at all, in which case it should not be merged either.

    • epolanski 2 days ago

      You seem to be describing a company where bureaucracy is a feature not a bug.

      Been there. Left, and life is a thousand times better.

    • gre 2 days ago

      The process is introducing more room for bugs to somehow creep in. Damn.

      • quesomaster9000 2 days ago

        This is a big problem with reviews where the author is capitulating because they, with gritted teeth, acknowledge it's the only way to get the desired result (jumping over a hurdle).

        So you blindly accept an ill-informed suggestion because that's the only way you can complete the process.

    • jschrf 2 days ago

      Aye. Sign of the times. You're 20+ years in, so I'm preaching to the choir and old-man-yelling-at-cloud here.

      Cargo culting + AI are the culprits. Sucks to say, but engineering is going downhill fast. First wave of the shitularity. Architects? Naw, prompt engineers. Barf. Why write good code when a glorified chatbot could do it shittier and faster?

      Sign of our times. Cardboard cutout code rather than stonemasonry. Shrinkflation of thought.

      Peep this purified downvote fuel:

      Everything is bad because everyone is lazy and cargo cults. Web specifically. Full-stop. AI sucks at coding and is making things recursively worse in the long run. LLMs are nothing more than recursive echo chambers of copypasta code that doesn't keep up with API flux.

      A great example of this is the original PHP docs, which so, so many of us copypasta'd from, leading to an untold amount of SQL injections. Oopsies.

      Similarly, and hunting for downvotes: React is a templating framework that is useful but does not even meet its original value proposition, which is state management in UI. Hilariously tragic. See: the original example of the message desync state issue on FB. Unsolved for years by the purported solution.

      The NoSQL flash is another tragic comedy. Rebuilding the wheel when there is a faster, better wheel already carefully made. Postgres with JSONB.
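      To make that concrete, a sketch of the pattern (the actual suggestion is Postgres with JSONB; SQLite's built-in JSON functions stand in here only so the example runs anywhere, and the schema is made up):

```python
import sqlite3

# A relational table with a JSON column covers most "NoSQL" use cases:
# schemaless payloads, queried SQL-side, no separate document store needed.
# (Requires a SQLite build with the JSON functions, standard in modern Pythons.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?), (?)",
    ('{"user": "alice", "action": "login"}',
     '{"user": "bob", "action": "logout"}'),
)

# Reach into the document directly in the query.
rows = conn.execute(
    "SELECT json_extract(payload, '$.user') FROM events "
    "WHERE json_extract(payload, '$.action') = 'login'"
).fetchall()
assert rows == [("alice",)]
```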

      GraphQL is another example of Stuff We Don't Need But Use Because People Say It's Good. Devs: you don't need it. Just write a query.

      -

      You mention a hugely important KPI in code. How many files, tools, commands, etc must I touch to do the simplest thing? Did something take me a day when it should have taken 30s? This is rife today, we should all pay attention. Pad left.

      Look no further than hooks and contexts in React land for an example. Flawed to begin with, simply because "class is a yucky keyword". I keep seeing this in "fast moving" startups: the diaspora of business logic spread through a codebase, when simplicity and unity is key, which you touch on. Absolute waste of electricity and runway, all thanks to opiniation.

      Burnt runways abound. Sometimes I can't help but think engineering needs a turn it off and then on again moment in safe mode without fads and chatbots.

      • sgarland 2 days ago

        > Everything is bad because everyone is lazy and cargo cults.

        It’s an interesting series of events that led to this (personal theory). Brilliant people who deeply understood fundamentals built abstractions because they were lazy, in a good way. Some people adopted those abstractions without fully comprehending what was being hidden, and some of those people built additional abstractions. Eventually, you wind up with people building solutions to problems which wouldn’t exist if, generations above, the original problem had been better understood.

        • quesomaster9000 2 days ago

          The road is paved with good intentions; it's not that they were lazy, but that they intended to distill wisdom to save time. Then yes, the abstractions were adopted without fully comprehending what was hidden, and those people then naively built additional layers of abstraction.

          So yes, if the original problem had been better understood, then you wouldn't have a generation of React programmers doing retarded things.

          Having watched many junior developers tackle different problems with various frameworks, I have to say React is conducive to brainrot by default. Only by going through a fundamentals-first approach do you avoid one kind of spaghetti, and even then you end up with another kind, because React is fundamentally engineered towards producing spaghetti code unless you constantly fight the inertia of spaghettification.

          It's like teaching kids about `GOTO`... That is, IMO, the essence of React.

          • sgarland a day ago

            > it's not they were lazy but they had intent to distill wisdom to save time.

            Yes – I was referring to lazy in the sense of the apocryphal quote from Bill Gates:

            “I choose a lazy person to do a hard job, because a lazy person will find an easy way to do it.”

            > Only after going through a fundamentals-first approach do you not end up with one kind of spaghetti, but you end up with another kind because it's fundamentally engineered towards producing spaghetti code unless you constantly fight the inertia of spaghettification.

            I’ve been guilty of this. Thinking that a given abstraction is unnecessary and overly-complicated, building my own minimal abstraction for my use case, and then slowly creating spaghetti as I account for more and more edge cases.

    • jesse__ 2 days ago

      I've had a similar experience several times over the years. Even at companies with no working product that ostensibly wanted to 'move fast and break things'. And I do the same thing; quit and move on. I'm pretty convinced people like that more-or-less can't be reasoned with.

      My question is... is this getting more common as time goes on, or do I just feel like it is?

    • spockz 2 days ago

      This sounds more like a case where you need a “break-the-glass” like procedure where some checks don’t apply. Or the checks should be non blocking anyway.

    • shinycode 2 days ago

      No wonder software development used to be expensive, if 50 lines of code takes multiple days for several people …

      • LtWorf 2 days ago

        Well maybe they do critical systems.

        • shinycode 2 days ago

          Valid point; it's even mandatory in that case. Sometimes people do it just for the sake of it, maybe because there's nothing else to make them feel important? In critical systems I hope that's the case, though.

        • DavidPiper 2 days ago

          Narrator: "They don't."

          (Glib, but in my experience, mostly true.)

    • romellem 2 days ago

      > mostly useless comments about not having periods at the ends of lines

      Oh my god, this sounds like a nightmare. I definitely would not be able to tolerate this for long.

      Did you try to get them to change? Were you just not in a senior enough position for anyone to listen?

    • SeptiumMMX 2 days ago

      You always need to look at the track record of the team. If they were not producing solid consistent results before you joined them, it's a very good indicator that something's fishy. All that "they are working on something else that we can't tell you" is BS.

      If they were, and you were the only one treated like that, hiring you was a decision forced upon the team, so they got rid of you in a rather efficient way.

    • nosefurhairdo 2 days ago

      That's rough. Of course some amount of thoughtfulness towards "smallest reasonable change" is valuable, but if you're not shipping then something is wrong.

      As for the "comments on every detail" thing... I would fight that until I win or have to leave. What a completely asinine practice to leave comments on typical lines of code.

  • charles_f 2 days ago

    I like to call these smells, not rules. They're an indication that something might be wrong because you've repeated code, or because your method is too long, or because you have too many parameters. But it might also be a false positive because in this instance it was acceptable to repeat code or have a long method or have many parameters.

    Sometimes food smells because it turned bad, and sometimes it's smelly because it's cheese.

  • nradov 2 days ago

    It's the same with writing. The best authors occasionally break the rules of grammar and spelling in order to achieve a specific effect. But you have to learn the rules first, and break them only intentionally rather than accidentally. Otherwise your writing ends up as sloppy crap.

    (Of course some organizations have coding conventions that are just stupid, but that's a separate issue.)

  • deergomoo 2 days ago

    Same deal with DRY, the principle is obviously correct but people can take it too literally. It's so easy to get yourself in a huge mess trying to extract out two or three bits of code that look pretty similar but aren't really used in the same context.

    • skeeter2020 2 days ago

      The problem with DRY and generic rules around size, etc. really seems to be figuring out the boundaries, and that's tough to get right, even for experienced devs, plus very contextual. If you need to open up a dozen files to make a small change you're overwhelmed, but then if you need to wade through a big function or change code in 2 places you're just as frustrated.

  • diekhans a day ago

    Hard rules are the problem. There is a lot of "it depends."

    After over 40 years of programming, I continue to reduce the size of my functions, and I find them easier to write and to understand when I return to them. Ten lines is now my personal guideline.

    However, a linear function with only tiny loops or conditionals can be easily understood even when it is hundreds of lines long; not so a function full of nested conditionals and loops, where there is a natural decomposition into functions.

    I observed the same guidelines-become-rules problem when test coverage became popular. Coverage soon became a metric rather than a tool for thinking about code and tests. People became reluctant to add sanity-check code for things that should never happen, because it brought down code coverage.

    • asdff a day ago

      There are certainly functions written so cleverly that it is not apparent how they manage to work at all in a few lines. Sometimes by my own hand, six months ago. The solution is unsexy but always works: write a book's worth of comments near that function explaining absolutely everything, and why it was done that way.

  • dennis_jeeves2 2 days ago

    >It almost always comes from a lead or manager who reads too many business books and then cargo cults those books on to the team.

    Worse, they behave as though they have profound insights, and put themselves on an intellectually elevated pedestal, which the rest of their ordinary team mortals cannot achieve.

  • raincole 2 days ago

    I've seen a book promoting the idea that methods should not be longer than 5 lines.

    Of course now I know these ridiculous statements are from people who hardly wrote any code in their lives, but if I'd read them at 18 I would have been totally misled.

    • arzke a day ago

      > I know these ridiculous statements are from people who hardly wrote any code in their lives

      Some people who actually wrote a decent amount of code in their lives are sharing that opinion, so your comment just sounds like an ad-hominem attack.

      • wholinator2 a day ago

        I disagree that it's an attack; I've also never heard anyone say methods should be less than 5 lines. 5 lines is an insane limit; 15 is much more reasonable. This kind of enforcement reeks to me of unnecessarily "one-lining" complicated statements into completely unreadable garbage. I mean, seriously, 5 lines? Why not 4, or 3, or 6? 15 lines of well-thought-out code is infinitely preferable to 3 different 5-line monstrosities. Who(m'st've) among us that actually writes code would preach such a guideline, and can I please see their code for reference? Maybe they are just better than us; I still don't think that makes it a reasonable general rule. And I disagree that calling that out as crazy counts as a personal ad-hominem attack against this nebulous entity.

    • quesomaster9000 2 days ago

      Weirdly if you do break everything down into purely functional components it's entirely possible to uncompromisingly make every concept a few lines of code at most, and you will end up with some extremely elegant solutions this way.

      You wouldn't be misled at all; it's just that the path you'd go down is entirely different from the one you expected.
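
      For instance, a minimal Python sketch (names invented for illustration) where each concept is a tiny pure function and the top level is just composition:

```python
from functools import reduce

# Each concept is its own tiny, pure function; the top-level
# function is nothing but composition.
def words(text):
    return text.split()

def lengths(ws):
    return [len(w) for w in ws]

def total(ns):
    return reduce(lambda acc, n: acc + n, ns, 0)

def total_word_length(text):
    return total(lengths(words(text)))
```

      Whether this counts as elegant or as indirection is, of course, exactly the debate in this thread.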

  • asdff a day ago

    Doing things the right way always introduces a shackle to your ankle. Oh, am I to package my functions as discrete packages, called via a library name, carefully crafted to install into some specific folder structure that I now have to learn and not make mistakes with? Or I can do it “improperly” and just write a function and start using it immediately.

    Not everything has to be designed like some standardized mass produced part ready to drop into anything made in the last 40 years. And what is crazy is that even things written to that standard aren’t even compatible and might have very specific dependencies themselves.

  • psychoslave 2 days ago

    If a function is longer than what I can display on a single screen, it had better be justified by exceptional, relevant requirements, which is straightforward to judge for anyone with a bit of experience.

  • ozim 2 days ago

    In my experience it's usually devs who do that to themselves after reading stuff on the internet and thinking “I want to be a professional and I want to show it to everyone.”

    Then the rules stay, and new people just continue with the same silly rules instead of asking whether they are really that useful.

  • zahlman 2 days ago

    I'm an experienced developer and I enforce these kinds of rules upon myself without giving it much thought, and I very much prefer the results.

  • dboreham 2 days ago

    You're confusing this with a software development process problem. It's really just good old fashioned psychological abuse.

nbyron 2 days ago

The spirit of this piece is excellent, and introduces some useful terms from psychology to help codify - and more importantly, explain - how to make tasks less unnecessarily demanding.

However, as someone who spends their days teaching and writing about cognitive psychology, worth clarifying that this isn’t quite correct:

Intrinsic - caused by the inherent difficulty of a task. It can't be reduced, it's at the very heart of software development.

Intrinsic load is a function of the element interactivity that results within a task (the degree to which different elements, or items, that you need to think about interact and rely upon one another), and prior knowledge.

You can’t really reduce element interactivity if you want to keep the task itself intact. However, if it’s possible to break a task down into subtasks, then you can often reduce this somewhat, at the expense of efficiency.

However, you can absolutely affect the prior-knowledge factor that influences intrinsic load. The author speaks of the finding from Cowan (2001) that working memory can process 4±1 items simultaneously, but what most people neglect here is that what constitutes an “item” is wholly dependent upon the schemas that a given person has embedded in their long-term memory. Example: someone with no scientific knowledge may look at O2 + C6H12O6 -> CO2 + H2O as potentially up to 18 items of information to handle (the individual characters), whereas someone with some experience of biology may instead handle this entire expression as a single unit - using their knowledge in long-term memory to “chunk” this string as one item: ‘the unbalanced symbol equation for respiration’.

  • ilrwbwrkhv 2 days ago

    Another interesting thing is that when there is inherent complexity in the system, things remain simple.

    For example in game programming, nobody is doing function currying.

    And yet in React and frontend land because it is a button on screen which toggles a boolean field in the db, there are graphs, render cycles, "use client", "use server", "dynamic islands", "dependency arrays" etc. This is the coding equivalent of bullshit jobs.

    • afarviral 12 hours ago

      What's the alternative in front-end? I had assumed those things were needed to essentially reverse engineer the web to be more reactive and stateful? Genuinely want it to be simpler.

lr4444lr 2 days ago

Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.

So much this.

The whole point of functions and classes was to make code reusable. If the entire contents of a 100 line method are only ever used in that method and it's not recursive or using continuations or anything else weird, why the hell would it be "easier to read" if I had to jump up and down the file to 7 different submethods when the function's entire flow is always sequential?

  • lll-o-lll 2 days ago

    > The whole point of functions and classes was to make code reusable.

    I’m amazed that here we are >40 years on from C++, and still this argument is made. Classes never encapsulated a module of reusability, except in toy or academic examples. To try and use them in this way either leads to gigantic “god” classes, or so many tiny classes with scaffolding classes between them that the “communication overhead” dwarfs the actual business logic.

    Code base after code base proves this again and again. I have never seen a “class” be useful as a component of re-use. So what is? Libraries. A public interface/api wrapping a “I don’t care what you did inside”. Bunch of classes, one class, methods? So long as the interface is small and well defined, who cares how it’s structured inside.

    Modular programming can be done in any paradigm, just think about the api and the internal as separate things. Build some tests at the interface layer, and you’ve got documentation for free too! Re-use happens at the dll or cluster of dll boundaries. Software has a physical aspect to it as well as code.

    • lr4444lr 2 days ago

      This is not my experience. Multiple inheritance of certain sub-functionalities and states within a code base is a perfectly good example of reuse. You do not need to go all the way out to the library level. In fact, it is the abstract bases that distill the truly reusable parts that I find most useful.

      I'm not saying you have to use classes to do this, but they certainly get the job done.

      • lll-o-lll 2 days ago

        We are talking about different things. If you want to use inheritance inside your module, behind a reasonable API, in order to re-use common logic, I won’t bat an eye. I won’t know, I’m working with the public part of your module.

        If you structure your code so that people in my team can inherit from your base class (because you didn’t make an interface and left everything public), and later you change some of this common logic, then I will curse your name and the manner of your conception.

      • liontwist 2 days ago

        Since learning functional programming well, I feel the need to use inheritance in C++ in maybe a handful of places.

        The problem with inheritance-based reuse is that if you need to do something slightly different, you are out of luck. With functions, by contrast, you call what you need, and you can break apart functionality without changing the other uses.

      • redman25 a day ago

        I know that a lot of people advocate for composition over inheritance. Inheritance can add a lot of complexity especially if it is deep or involves a lot of overrides. It can be difficult to find out where a method came from inside the inheritance chain or if it has been overridden and consequently how it will behave.

        Composition at least makes things a little more obvious where methods are getting their functionality. It also has other benefits in terms of making objects easier to mock.

    • aiisjustanif a day ago

      Surely this is use case dependent? I’ve worked on projects where modular programming works well and others where not so much.

      • lll-o-lll a day ago

        Specifically here I am talking about the concept of “re-use”. That is, the ability to write a bunch of code that does a “thing” and use that more than once, without significant modification.

        Modularity is a much bigger concept, related to the engineering of large software systems. These days, “micro-services” is one way that people achieve modularity, but in the old days it was needed for many of the same reasons, but inside the monolith. The overall solution is composed of blocks living at different layers.

        Re-use also exists inside modules, of course, by using functions or composition or — shudder — inheritance of code.

        Modular programming has value as soon as more than one team needs to work on something. As it’s impossible to predict the future, my opinion is that it always has value to structure a code-base in this way.

  • onionisafruit 2 days ago

    To paraphrase a recentish comment from jerf, “sometimes you just have a long list of tasks to do”. That stuck with me. Now I’m a bit quicker to realize when I’m in that situation and don’t bother trying to find a natural place to break up the function.

    • runevault 2 days ago

      For me it depends. Sometimes I find value in making a function for a block of work I can give its own name to, because that can make the flow more obvious when looking at what the function does at a high level. But arbitrarily breaking up a function just because is silly and pointless.

      • lostdog 2 days ago

        Plus, laying the list of tasks out in order sometimes makes it obvious how to split it up eventually. If you try to split it up the first time you write it, you get a bunch of meaningless splits, but if you write a 300 line function, and let it simmer for a few weeks, usually you can spot commonalities later.

        • runevault 2 days ago

          That's also true, though in this case I'm not necessarily worried about commonalities, just changing the way it reads to focus on the higher level ideas making up the large function.

          But revisiting code after a time, either just because you slept on it or you've written more adjacent code, is almost always worth some time to try and improve the readability of the code (so long as you don't sacrifice performance unnecessarily).

      • F-W-M 2 days ago

        Define that function directly in the place where it is used (e.g. as a lambda, if nesting of function definitions is not allowed). Keeps the locality and makes it obvious that you could just have put a comment instead.
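
        A minimal Python sketch of that idea (hypothetical names): the helper is defined right where it is used, so its name serves the purpose a comment would, while keeping locality:

```python
def process_report(rows):
    # Helper defined at the point of use: the name documents the
    # block like a comment would, and nothing else can call it.
    def drop_empty(rs):
        return [r for r in rs if r.strip()]

    return [r.upper() for r in drop_empty(rows)]
```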

    • tugu77 2 days ago

      A useful trick is to then at least visually structure those 150 lines with comments that separate some blocks of functionality. Keeps the linear flow but makes it still easier to digest.

      • mr_mitm 2 days ago

        Why not just do something like this then? This:

            myfunction(data) {
                # do one thing to the data
                ...
        
                # now do another
                ...
            }
        
        becomes that:

            myfunction(data) {
                do_one_thing_to_the_data(data)
                now_do_another(data)
            }
        
            do_one_thing_to_the_data(data) {
                ...
            }
        
            now_do_another(data) {
                ...
            }
        
        Still linear, easier to get an overview, and you can write more modular tests.
        • ropejumper 2 days ago

          Because now you have to jump around in order to see the sequence of events, which can be very frustrating if you have to constantly switch between two of these functions.

          Plus, if we're dealing with a "long list of tasks" that can't be broken up in reusable chunks, it probably means that you need to share some context, which is way easier to do if you're in the same scope.

          One thing I find useful is to structure it in blocks instead, so you can share things but also contain what you don't want shared. So e.g. in rust you could do this:

              let shared_computation = do_shared_computation();
              
              let result_one = {
                  let result = do_useful_things();
                  other_things(&shared_computation);
                  result
              };
              
              ...
          
          I think it's a nice middleground. But you still can't write modular tests. But maybe you don't have to, because again, this is just a long list of tasks you need to do that conceptually can't be broken down, so maybe it's better to just test the whole thing as a unit.
        • quietbritishjim a day ago

          Instead of, say, 10 functions in a file that are all individually meaningful, you now have maybe 50 functions that are mostly tiny steps that don't make much sense on their own. Good luck finding the "real" 10 functions buried amongst them. It's certainly higher cognitive load in my (painful) experience.

        • rocqua a day ago

          If the arguments required by such a function are small, then breaking the block down makes sense. Otherwise, it usually feels like an unnatural function to me.

        • tugu77 a day ago

          We have different ideas about what "linear" means.

  • zahlman 2 days ago

    >why the hell would it be "easier to read" if I had to jump up and down the file to 7 different submethods when the function's entire flow is always sequential?

    Because you don't jump up and down the file to read it.

    Each method that you create has a name, and the name is an opportunity to explain the process - naturally, in-line, without comments.

    I write code like this all the time - e.g. from my current project: https://github.com/zahlman/bbbb/blob/master/src/bbbb.py . If I wanted to follow the flow of execution, I would be hammering the % key in Vim. But I don't do that, because I don't need or want to. The flow of the function is already there in the function. It calls out to other functions that encapsulate details that would be a distraction if I want to understand the function. The functions have names that explain their purpose. I put effort into names, and I trust myself and my names. I only look at the code I'm currently interested in. To look at other parts of the code, I would first need a reason to be interested in it.

    When you look at yourself in the mirror, and notice your hand, do you feel compelled to examine your hand in detail before you can consider anything about the rest of your reflection? Would you prefer to conceive of that image as a grid of countless points of light? Or do you not find it useful that your mind's eye automatically folds what it sees into abstractions like "hand"?

    35 years into my journey as a programmer, the idea of a 100-line function frightens me (although I have had to face this fear countless times when dealing with others' code). For me, that's half of a reasonable length (though certainly not a hard limit) for the entire file.

    • arzke a day ago

      This is how I work as well, and the reason I tend to write many small functions rather than few large ones is precisely because it reduces cognitive load. You don't have to understand what the canSubmit function does, unless you are interested in knowing what the conditions to submit this form are.

      Ironically, the author of the post claims it has the opposite effect.

    • ajuc 2 days ago

          # Can't import at the start, because of the need to bootstrap the
          # environment via `get_requires_for_build_*`.
        
      
      This comment is a great example of what information you lose when you split linear code into small interrelated methods. You lose ordering and dependencies.

      Sometimes it's worth it. Sometimes it isn't. In my opinion it's almost never worth it to get to the Uncle Bob's approved length of methods.

      10-30 lines is OK. 3 is counterproductive except for a small subset of wrappers, getters etc. Occasionally it's good to leave a method that is 300 lines long.

      If your code always does 9 things in that exact order - it's counterproductive to split them artificially into 3 sets of 3 things to meet an arbitrary limit.

      • zahlman 2 days ago

        >This comment is a great example of what information you lose when you split linear code into small interrelated methods.

        Inlining `_read_toml` or `_read_config` would change nothing about the reasoning. The purpose was to make sure the import isn't tried until the library providing it is installed in the environment. This has nothing to do with the call graph within my code. It's not caused by "splitting the code into interrelated methods" and is not a consequence of the dependencies of those functions on each other. It's a consequence of the greater context in which the entire module runs.

        The way that the system (which is not under my control) works (I don't have a really good top-down reference handy for this - I may have to write one), a "build frontend" will invoke my code - as a subprocess - multiple times, possibly looking for and calling different hooks each time. The public `get_requires_for_build_wheel` and `get_requires_for_build_sdist` are optional hooks in that specification (https://peps.python.org/pep-0517/#optional-hooks).

        However, this approach is left behind from an earlier iteration - I don't need to use these hooks to ask the build frontend to install `tomli`, because the necessary conditions can be (and currently are) provided declaratively in `pyproject.toml` (and thus `tomli` will be installed, if necessary, before any attempts to run my backend code). I'll rework this when I get back to it (I should just be able to do the import normally now, but of course this requires testing).

  • shivawu 2 days ago

    I agree, except I think 100 lines is definitely worth a method, whereas 15 lines is obviously not worth one in most cases, and yet we do that a lot.

    My principle has always been: “is this part an isolated and intuitive subroutine that I can clearly name, such that when other people see it they’ll get it at first glance without pausing to think about what it does (not to mention reading through the implementation)?” I’m surprised this has not become common wisdom.

    • andrewingram 2 days ago

      In recent years my general principle has been to introduce an abstraction (in this case split up a function) if it lowers local concepts to ~4 (presumably based on similar principles to the original post). I’ve taken to saying something along the lines of “abstractions motivated by reducing repetition or lines of code are often bad, whilst ones motivated by reducing cognitive load tend to be better”.

      Good abstractions often reduce LOC, but I prefer to think of that as a happy byproduct rather than the goal.

    • zahlman 2 days ago

      >My principle has always been: “is this part an isolated and intuitive subroutine that I can clearly name, such that when other people see it they’ll get it at first glance without pausing to think about what it does (not to mention reading through the implementation)?”

      I hold this principle as well.

      And I commonly produce one-liner subroutines following it. For me, 15 lines has become disturbingly long.

      • galangalalgol a day ago

        I tend toward John Carmack's view. He seemed annoyed that he was being pressed to provide a maximum at all, and specified 7000 lines. I don't think I have ever gone that high. But really it's just a matter of what you are doing. We expect to reuse things way more often than we actually do. If you wrote out everything you need to do in order and then applied the rule of three to make a function out of everything you did three times, it is very possible you wouldn't extract anything. In which case I think it should just be the one function.

        • zahlman a day ago

          > We expect to reuse things way more often than we actually do.

          This is about readability (which includes comprehensibility), not reuse. When I read code from others who take my view, I understand. When I read code from those who do not, I do not, until I refactor. I extract a piece that seems coherent, and guess its purpose, and then see what its surroundings look like, with that purpose written in place of the implementation. I repeat, and refine, and rename.

          It is the same even if I never press a key in my editor. Understanding code within my mind is the same process, but relying on my memory to store the unwritten names. This is the nature of "cognitive load".

    • toasterlovin 2 days ago

      Yeah, I find extracting code into methods very useful for naming things that are 1) a digression from the core logic, and 2) enough code to make the core logic harder to comprehend. It’s basically like, “here’s this thing, you can dig into it if you want, but you don’t have to.” Or, the core logic is the top level summary and the methods it calls out to are sections or footnotes.
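
      A small hypothetical illustration of that pattern: the core logic reads as a summary, and the digression is named and tucked away below it:

```python
def checkout_total(cart):
    # Core logic reads as a summary at the top level.
    subtotal = sum(item["price"] for item in cart)
    return _apply_bulk_discount(subtotal)

def _apply_bulk_discount(subtotal):
    # The digression: dig into it only if you care how the
    # discounting actually works.
    return subtotal * 0.9 if subtotal > 100 else subtotal
```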

  • westcoast49 2 days ago

    It comes down to the quality of the abstractions. If they are well made and well named, you'd rather read this:

      axios.get('https://api.example.com', {
          headers: { 'Authorization': 'Bearer token' },
          params: { key: 'value' }
      })
      .then(response => console.log(response.data))
      .catch(error => console.error(error));
    
    than to read the entire implementations of get(), then() and catch() inlined.
  • joshuamorton 2 days ago

    I find "is_enabled(x)" to be easier to reason about than

        if (x.foo || x.bar.baz || (x.quux && x.bar.foo))
    
    Even if it's only ever used once. Functions and methods provide abstraction which is useful for more than just removing repetition.
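
    In Python, with a dict standing in for the object above (field names borrowed from the snippet, otherwise hypothetical), the extraction might look like:

```python
def is_enabled(x):
    # The name summarizes the boolean soup; callers don't need
    # to know which flags participate.
    return x["foo"] or x["bar"]["baz"] or (x["quux"] and x["bar"]["foo"])
```
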
    • emn13 2 days ago

      If you're literally using it just once, why not stick it in a local variable instead? You're still getting the advantage of naming the concept that it represents, without eroding code locality.
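
      A sketch of that local-variable alternative (hypothetical fields): the concept still gets a name, but the logic stays at the point of use:

```python
def handle(x):
    # One-off condition named locally; readable without a jump.
    enabled = x["foo"] or x["bar"]["baz"] or (x["quux"] and x["bar"]["foo"])
    return "on" if enabled else "off"
```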

      However, the example is a slightly tricky basis to form an opinion on best practice: you're proposing that the clearly named example function name is_enabled is better than an expression based on symbols with gibberish names. Had those names (x, foo, bar, baz, etc) instead been well chosen meaningful names, then perhaps the inline expression would have been just as clear, especially if the body of the if makes it obvious what's being checked here.

      It all sounds great to introduce well named functions in isolated examples, but examples like that are intrinsically so small that the costs of extra indirection are irrelevant. Furthermore, in these hypothetical examples, we're kind of assuming that there _is_ a clearly correct and unique definition for is_enabled, but in reality, many ifs like this have more nuance. The if may well not represent if-enabled, it might be more something like was-enabled-last-app-startup-assuming-authorization-already-checked-unless-io-error. And the danger of leaving out implicit context like that is precisely that it sounds simple, is_enabled, but that simplicity hides corner cases and unchecked assumptions that may be invalidated by later code evolution - especially if the person changing the code is _not_ changing is_enabled and therefore at risk of assuming it really means whether something is enabled regardless of context.

      A poor abstraction is worse than no abstraction. We need abstractions, but there's a risk of doing so recklessly. It's possible to abstract too little, especially if that's a sign of just not thinking enough about semantics, but also to abstract too much, especially if that's a sign of thinking superficially, e.g. to reduce syntactic duplication regardless of meaning.

      • lazyasciiart 2 days ago

        Pretty sure every compiler can manage optimizing out that method call, so do whichever makes you and your code reviewer happy.

      • joshuamorton 2 days ago

        A local variable is often worse: Now I suffer both the noise of the unabstracted thing, and an extra assignment. While part of the goal is to give a reasonable logical name to the complex business logic, the other value is to hide the business logic for readers who truly don't care (which is most of them).

        The names could be better and more expressive, sure, but they could also be function calls themselves or long and difficult to read names, as an example:

            if (
                x.is_enabled ||
                x.new_is_enabled ||
                (x.in_us_timezone && is_daytime()) ||
                x.experimental_feature_mode_for_testing 
                )...
        
        That's somewhat realistic for cases where the abstraction is covering for business logic. Now if you're lucky you can abstract that away entirely to something like an injected feature or binary flag (but then you're actually doing what I'm suggesting, just with extra ceremony), but sometimes you can't for various reasons, and the same concept applies.
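        For contrast, the extracted version of that condition might look like the sketch below. The `Config` wrapper, the `is_enabled_for` name, and the `is_daytime` stub are all invented here to make the flag names from the example above runnable:

```python
from dataclasses import dataclass

def is_daytime():
    """Stand-in for the time-of-day check used in the condition."""
    return False

@dataclass
class Config:
    is_enabled: bool = False
    new_is_enabled: bool = False
    in_us_timezone: bool = False
    experimental_feature_mode_for_testing: bool = False

def is_enabled_for(x):
    """Gathers the four flag checks behind one name, so call sites
    read a single predicate instead of the full business logic."""
    return (
        x.is_enabled
        or x.new_is_enabled
        or (x.in_us_timezone and is_daytime())
        or x.experimental_feature_mode_for_testing
    )
```

        Callers then write `if is_enabled_for(x): ...` and never see the flag soup unless they choose to.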

        In fact I'd actually strongly disagree with you and say that doing what I'm suggesting is even more important if the example is larger and more complicated. That's not an excuse to not have tests or not maintain your code well, but if your argument is functionally "we cannot write abstractions because I can't trust that functions do what they say they do", that's not a problem with abstractions, that's a problem with the codebase.

        I'm arguing that keeping the complexity of any given stanza of code low is important to long-term maintainability, and I think this is true because it invites a bunch of really good questions and naturally pushes back on some increases in complexity: if `is_enabled(x)` is the current state of things, there's a natural question asked, and inherent pushback to changing that to `is_enabled(x, y)`. That's good. Whereas it's much easier for natural development of the god-function to result in 17 local variables with complex interrelations that are difficult to parse out and track.

        My experience says that identifying, removing, and naming assumptions is vastly easier when any given function is small and tightly scoped and the abstractions you use to do so also naturally discourage other folks who develop on the same codebase from adding unnecessary complexity.

        And I'll reiterate: my goal, at least, when dealing with abstraction isn't to focus on duplication, but on clarity. It's worthwhile to introduce an abstraction even for code used once if it improves clarity. It may not be worthwhile to introduce an abstraction for something used many times if those things aren't inherently related. That creates unnecessary coupling that you either undo or hack around later.

        • sgarland 2 days ago

          > Now I suffer both the noise of the unabstracted thing, and an extra assignment.

          Depends on your goals / constraints. From a performance standpoint, the attribute lookups can often dwarf the overhead of an extra assignment.

          • joshuamorton 2 days ago

            I'm speaking solely from a developer experience perspective.

            We're talking about cases where the expression is only used once, so the assignment is free/can be trivially inlined, and the attribute lookups are also only used once so there is nothing saved by creating a temporary for them.

    • gizzlon 2 days ago

      Wouldn't you jump to is_enabled to see what it does?

      That's what I always do in new code, and probably why I dislike functions that are only used once or twice. The overhead of the jump is not worth it. is_enabled could be a comment above the block (up to a point; not if it's too long)

      • joshuamorton 2 days ago

        > Wouldn't you jump to is_enabled to see what it does?

        That depends on a lot of things. But the answer is (usually) no. I might do it if I think the error is specifically in that section of code. But especially if you want to provide any kind of documentation or history on why that code is the way it is, it's easier to abstract that away into the function.

        Furthermore, most of the time code is being read isn't the first time, and I emphatically don't want to reread some visual noise every time I am looking at a larger piece of code.

        • gizzlon 2 days ago

          That makes sense. To me it's not about the function having bad code, but different opinions about what exactly "enabled" means.

          If I'm not interested I just jump past the block when reading (given that it's short and tidy)

      • zahlman 2 days ago

        > Wouldn't you jump to is_enabled to see what it does?

        It determines whether the thing is enabled. Or else some other dev has some 'splainin' to do. I already understand "what it does"; I am not interested in seeing the code until I have a reason to suspect a problem in that code.

        If the corresponding logic were inline, I would have to think about it (or maybe read a comment) in order to understand its purpose. The function name tells me the purpose directly, and hides the implementation that doesn't help me understand the bigger picture of the calling function.

        Inline code does the opposite.

        When the calculation is neatly representable as a single, short, self-evident expression, then yes, I just use a local assignment instead. If I find myself wanting to comment it - if I need to say something about the implementation that the implementation doesn't say directly - using a separate function is beneficial, because a comment in that function then clearly refers to that calculation specifically, and I can consider that separately from the overall process.

        • gizzlon 2 days ago

          > It determines whether the thing is enabled.

          Ah, but what exactly does "enabled" mean in this context? Might seem nitpicky, but I might very well have a different opinion than the person who wrote the code. I mean, if it was just `if foo.enabled ..` no one would put it in a new function.. right? :)

          I would say a comment does the same, and better, because it can be multi-line, and you can read it without having to click or move to the function call to see the docs.

          And you can jump past the implementation, if it's short and "tidy" enough.

          Yes, at some point it should be moved out anyway. I'm just weary from reading code with dozens of small functions, having to jump back and forth again and again and again

          • zahlman 2 days ago

            >Ah, but what exactly does "enabled" mean in this context?

            If the code is working, it means what it needs to mean.

            > I mean, if it was just `if foo.enabled ..` no one would put it in a new function. right?

            Sure. This is missing the point, however.

            > I'm just weary from reading code with dozens of small functions, having to jump back and forth again and again and again

            Why do you jump to look at the other parts of the code? Did it fail a test?

            • Hasu a day ago

              > If the code is working, it means what it needs to mean.

              No. Working code says nothing about the meaning of a label, which exists purely to inform humans. The computer throws it away; the code will work no matter what you name it, even if the name is entirely wrong.

              > Why do you jump to look at the other parts of the code? Did it fail a test?

              Because people pick bad names for methods, and I've been hurt before. I'm not reading the code just to fix a problem, I'm reading the code to understand what it does (what it ACTUALLY does, not what the programmer who wrote it THOUGHT it does), so I can fix the problem properly.

              • zahlman a day ago

                >Because people pick bad names for methods, and I've been hurt before.

                So you write long functions because other people are bad at writing short ones?

                • Hasu a day ago

                  I have absolutely done this myself in the past and confused myself with bad names. Any criticism I apply to other people also applies to myself: I am not a special case.

                  Naming things is hard! Even if you're really good at naming things, adding more names and labels and file separation to a system adds to the complexity of the system. A long function may be complex, but it doesn't leak the complexity into the rest of the system. Creating a function and splitting it out is not a zero cost action.

                  I write long functions when long functions make sense. I write plenty of short functions too, when that makes sense. I'm not religiously attached to function or file size, I'm attached to preserving the overall system structure and avoiding stuff that makes easy bugs.

              • joshuamorton a day ago

                So my claim is that you do this less often than you claim to. There is some cutoff where you trust the code enough to not investigate it further. I'm of the opinion that this trust should generally be pretty close to the actual thing you're working on or investigating, and if it isn't that's a cultural issue that won't be solved by just "prefer to inline".

  • logicchains 2 days ago

    >why the hell would it be "easier to read" if I had to jump up and down the file to 7 different submethods when the function's entire flow is always sequential?

    If the submethods were clearly named then you'd only need to read the seven submethod names to understand what the function did, which is easier than reading 100 lines of code.

    • fragmede 2 days ago

      Why is that any easier than having comments in the code that describe each part? In languages that don't allow closures, there's no good way to pass state between the seven functions unless you pass all the state you need, either by passing all the variables directly, or by creating an instance of a class/struct/whatever to hold those same variables and passing that. If you're lucky it might only be a couple of variables, but one can imagine that it could be a lot.

      • spicyusername 2 days ago

        If all the functions need state from all the other functions, that is the problem a class or a struct solves: a place to store shared state.

        If the 7 things are directly related to one another and are _really_ not atomic things (e.g. "Find first user email", "Filter unknown hostnames", etc), then they can be in a big pile in their own place, but that is typically pretty rare.

        In general, you really want to let the code be crisp enough and your function names be intuitive enough that you don't need comments. If you have comments above little blocks of code like "Get user name and reorder list", that should probably just go into its own function.

        Typically I build my code in "layers" or "levels". The lowest level is a gigantic pile of utility functions. The top level is the highest level abstractions of whatever framework or interface I'm building. In the middle are all the abstractions I needed to build to bridge the two, typically programs are between 2-4 layers deep. Each layer should have all the same semantics of everything else at that layer, and lower layers should be less abstract than higher layers.
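        A minimal sketch of the class-as-shared-state idea; the `ReportBuilder` name and its steps are invented for illustration:

```python
class ReportBuilder:
    """Holds the state the steps share, so they don't pass
    the same variables to one another as parameters."""

    def __init__(self, rows):
        self.rows = rows      # shared input
        self.filtered = []    # intermediate result
        self.total = 0

    def drop_empty(self):
        # Reads self.rows, writes self.filtered -- no arguments needed.
        self.filtered = [r for r in self.rows if r is not None]

    def sum_values(self):
        self.total = sum(self.filtered)

def build_total(rows):
    b = ReportBuilder(rows)
    b.drop_empty()
    b.sum_values()
    return b.total
```

        Each step names what it does, and the field list documents exactly what state the steps can touch.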

        • fragmede 2 days ago

          My problem with the class/struct approach is it doesn't work if you don't need everything everywhere.

              foo(...):
                  f1(a,b,c,d,e,f)
                  f2(a,c,d,f)
                  f3(b,c,d,e)
                  ...
                  f7(d,e)
          
          But with long descriptive variable names that you'd actually use so the function calls don't fit on one line. Better imo to have a big long function instead of a class and passing around extra variables.

          Though, ideally there isn't this problem in the first place/it's refactored away (if possible).

          • psychoslave 2 days ago

            A function that needs so many parameters is already a no go.

            If it doesn't return anything, then it's either a method in a class, or it's something performing a tricky side effect that would be better removed entirely in favor of a sounder design.

            • RaftPeople 5 hours ago

              > A function that needs so many parameters is already a no go.

              This rule is the same as lines-of-code rules. The number itself is not the issue: it could be few parameters and still a problem, or many parameters and not an issue at all.

            • vanviegen a day ago

              Creating a class around the too-many arguments you want to pass to your function may be a good idea if the concept happens to be coherent, and ideally a bit longer-lived than the single function call.

              Otherwise, you're just hiding the fact that your function requires too many arguments by calling them properties.

              • psychoslave a day ago

                Well, if there is no class that seems to make sense to group them, that's an additional flag pointing to the need for further thought about the design, or a discussion with fellow developers.

                Of course, in some very exceptional case, 7 arguments might be relevant after all. If it's the single such case in the code base, and after thorough discussion everyone involved in maintaining the code agreed it's an acceptable trade-off for some reason, with care taken that it doesn't leak into the whole code base by being called almost everywhere, then let it be.

                But if it's a generalized style throughout the whole codebase, there's an obvious lack of care for the maintainability of the work, and the team is going to pay for that sooner rather than later.

          • spicyusername 2 days ago

            You access the shared data via the struct / class reference, not as method parameters. That's the benefit.

            e.g.

                class Foo:
                    # Fields
                    a
                    b
                    c
                    d
                    e

                    # Methods
                    f1(f)
                    f2(f)
                    f3()
                    ...
                    f7()
            • mckn1ght 2 days ago

              Moving them to a higher scope makes it harder to change anything in foo. Now anytime you want to read or write a-e you have to build the context to understand their complete lifecycles. If all the logic were smooshed together, or if it were factored into the original functions with lots of parameters, as ugly as either of them might be, you still have much more assurance about when they are initialized and changed, and the possible scopes for those events are much more obviously constrained in the code.

              • spicyusername a day ago

                If all those functions need all those variables, then you're either going to put them in a class, or put all those variables in something like a dict and just pass that in.

                Seeing 10 variables passed into a function is a code smell.

                Whether you put them in a common class / struct or aggregate them in a dict depends on whether or not all those functions are related.

                In general, your functions should not be super duper long or super duper indented. Those are also code smells that indicate you have the wrong abstractions.

          • edflsafoiewq 2 days ago

            It works fine. Not all the methods need to use all the struct members.

      • hackinthebochs 2 days ago

        Language syntax defines functional boundaries. A strong functional boundary means you don't have to reason about how other code can potentially influence your code, these boundaries are clearly defined and enforced by the compiler. If you just have one function with blocks of code with comments, you still must engage with the potential for non-obvious code interactions. That's much higher cognitive load than managing the extra function with its defined parameters.

        • fragmede 2 days ago

          In the ideal case, sure, but assuming this can't be refactored, then the code

              foo(...):
                 // init
                 f1(a,b,c,d,e,f)
                 f2(a,b,c,d,e,f)
                 ...
                 f7(a,b,c,d,e,f)
          
          or the same just with a,b,c,d,e,f stuffed into a class/struct and passed around, isn't any easier to reason about than if those functions are inline.
          • joshuamorton 2 days ago

            There's at least one reason that something like this is going to be exceedingly rare in practice, which is that (usually) functions return things.

            In certain cases in C++ or C you might use in/out params, but those are less necessary these days, and in most other languages you can just return stuff from your functions.

            So in almost every case, f1 will have computed some intermediate value useful to f2, and so on and so forth. And these intermediate values will be arguments to the later functions. I've basically never encountered a situation where I can't do that.

            Edit: and as psychoslave mentions, the arguments themselves can be hidden with fluent syntax or by abstracting a-f out to a struct and a fluent api or `self`/`this` reference.

            Cases where you only use some of the parameters in each sub-function are the most challenging to cleanly abstract, but are also the most useful because they help to make complex spaghetti control-flow easier to follow.

          • hackinthebochs 2 days ago

            I disagree. Your example tells me the structure of the code at a glance. If it was all inlined I would have to comprehend the code to recover this simple structure. Assuming the f's are well-named, that's code I don't have to read to comprehend its function. That's always a win.

          • psychoslave 2 days ago

            This typically can be coded with something like

            def foo(...) = Something.new(...).f1.f2.f7

            Note that the ellipses here are actual syntax in something like Ruby; other languages might not be as terse and convenient, but the fluent pattern can be implemented basically everywhere (ok, maybe not COBOL)
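            In Python, the same fluent shape can be sketched by having each step return `self`; the `Something` class and `f1`/`f2`/`f7` bodies are invented placeholders:

```python
class Something:
    """Each step mutates internal state and returns self,
    which is what makes the chained call in foo() possible."""

    def __init__(self, value):
        self.value = value

    def f1(self):
        self.value += 1
        return self

    def f2(self):
        self.value *= 2
        return self

    def f7(self):
        self.value -= 3
        return self

def foo(value):
    # The intermediate state never leaks into foo's scope.
    return Something(value).f1().f2().f7().value
```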

      • lazyasciiart 2 days ago

        > there's no good way to pass state between the seven functions unless you pass all the state you need,

        That’s why it’s better than comments: because it gives you clarity on what part of the state each function reads or writes. If you have a big complex state and a 100 line operation that is entirely “set attribute c to d, set attribute x to off” then no, you don’t need to extract functions, but it’s possible that e.g this method belongs inside the state object.

      • zahlman 2 days ago

        >Why is that any easier than having comments in the code that describe each part?

        Because you only read the submethod names, and then you already understand what the code does, at the level you're currently interested in.

      • thfuran 2 days ago

        >Why is that any easier than having comments in the code that describe each part?

        Because 7<<100

        • TeMPOraL 2 days ago

          > Because 7<<100

          But then, 7 << 100 << (7 but each access blanks out your short-term memory), which is how jumping to all those tiny functions and back plays out in practice.

          • zahlman 2 days ago

            >which is how jumping to all those tiny functions and back plays out in practice.

            Why would you jump into those functions and back?

            • TeMPOraL a day ago

              Because I need to know what they actually do? The most interesting details are almost always absent from the function name.

              EDIT:

              For even the simplest helper, there's many ways to implement it. Half of them stupid, some only incorrect, some handling errors the wrong way or just the wrong way for the needs of the specific call site I'm working on. Stupidity often manifests in unnecessary copying and/or looping over a copy and/or copying on every step of the loop - all of which gets trivially hidden by the extra indirection of a small function calling another small function. That's how you often get accidental O(n^2) in random places.

              Many such things are OK or not in context of caller, none of this is readily apparent in function signatures or type system. If the helper fn is otherwise abstracting a small idiom, I'd argue it's only obscuring it and providing ample opportunities to screw up.

              I know many devs don't care; they prefer to submit slow and buggy code and fix it later when it breaks. I'm more of a "don't do stupid shit, you'll have fewer bugs to fix and fewer performance issues for customers to curse you for" kind of person, so cognitive load actually matters for me, and wishing it away isn't an acceptable solution.
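              A concrete sketch of that accidental-O(n^2) shape, with invented names: a helper whose linear scan is invisible at the call site.

```python
def is_known(item, known):
    # Looks trivial at the call site, but `in` on a list scans it: O(n).
    return item in known

def keep_known(items, known):
    # The hidden scan runs once per item: O(len(items) * len(known)).
    return [x for x in items if is_known(x, known)]

def keep_known_fast(items, known):
    # Same result; one set conversion makes each membership test O(1).
    known_set = set(known)
    return [x for x in items if x in known_set]
```

              Nothing in `is_known`'s signature hints at its cost, which is exactly the point: the indirection hides it.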

              • zahlman a day ago

                >Because I need to know what they actually do?

                Strange. The longer I've been programming, the less I agree with this.

                >For even a simplest helper, there's many ways to implement it.

                Sure. But by definition, the interface is what matters at the call site.

                > That's how you often get accidental O(n^2) in random places.

                Both loops still have to be written. If they're in separate places, then instead of a combined function which is needlessly O(n^2) where it should be O(n), you have two functions, one of which is needlessly O(n) where it should be O(1).

                When you pinpoint a bottleneck function with a profiler, you want it to be obvious as possible what's wrong: is it called too often, or does it take too long each time?

                > If the helper fn is otherwise abstracting a small idiom, I'd argue it's only obscuring it and providing ample opportunities to screw up.

                Abstractions explain the purpose in context.

                > I'm more of a "don't do stupid shit, you'll have less bugs to fix and less performance issues for customers to curse you for" kind of person

                The shorter the function is, the less opportunity I have to introduce a stupidity.

          • joshuamorton 2 days ago

            Why does pressing "go to defn" blank your short term memory in a way that code scrolling beyond the top of the screen doesn't?

            • TeMPOraL a day ago

              Because jumping is disorienting, because each defn has a 1-3 lines of overhead (header, delimiters, whitespace) and lives among other defns, which may not be related to the task at hand, and are arranged in arbitrary order?

              Does this really need explaining? My screen can show 35-50 lines of code; that can be 35-50 lines of relevant code in a "fat" function, or 10-20 lines of actual code, out of order, mixed with syntactic noise. The latter does not lower cognitive load.

              • joshuamorton a day ago

                I wouldn't have asked if I didn't have a real curiosity!

                To use a real world example where this comes up a lot, lots and lots of code can be structured as something like:

                    accum = []
                    for x in something():
                        for y in something_else():
                            accum.append(operate_on(x, y))
                
                I find structuring it like this much easier than fully expanding all of these out, which at best ends up being something like

                    accum = []
                    req = my_service.RpcRequest(foo="hello", bar=12)
                    rpc = my_service.new_rpc()
                    resp = my_service.call(rpc, req)
                    
                    req = my_service.OtherRpcRequest(foo="goodbye", bar=12)
                    rpc = my_service.new_rpc()
                    resp2 = my_service.call(rpc, req)
                
                    for x in resp.something:
                        for y in resp2.something_else:
                            my_frobnicator = foo_frobnicator.new()
                            accum.append(my_frobnicator.frob(x).nicate(y))
                
                and that's sort of the best case where there isn't some associated error handling that needs to be done for the rpc requests/responses etc.

                I find it much easier to understand what's happening in the first case than the second, since the overall structure of the operations on the data is readily apparent at a glance, and I don't need to scan through error handling and boilerplate.

                Like, looking at real-life examples I have handy, there's a bunch of cases where I have 6-10 lines of nonsense fiddling (with additional lines of documentation that would be even more costly to put inline!), and that's in python. In cpp, go, and java which I use at work and are generally more verbose, and have more rpc and other boilerplate, this is usually even higher.

                So the difference is that my approach means that when you jump to a function, you can be confident that the actual structure and logic of that function will be present and apparent to you on your screen without scrolling or puzzling. Whereas your approach gives you that, say, 50% of the time, maybe less, because the entire function doesn't usually fit on the screen, and the structure may contain multiple logical subroutines, but they aren't clearly delineated.

    • lr4444lr 2 days ago

      If the variables were clearly named, I wouldn't have to read much at all, unless I was interested in the details. I reiterate: why does the length of a single function with no reuse matter?

      • F-W-M 2 days ago

        It does not matter whether function foo is reused, only whether the code inside foo that is to be pulled into a new function bar is.

  • afarviral 12 hours ago

    There's just no way I buy that I could safely make a change in a 100 LOC function and know that there won't be an impact 30 lines down, whereas with a few additional functions you can define the shape of the interactions and know that, if that shape/interface/type is maintained, there won't be unexpected ones. It's a balance, though, as indirection can also readily hide and obscure interactions, or add unnecessary glue code that takes up mental bandwidth and requires additional testing to confirm.

  • Freak_NL 2 days ago

    For unit testing those sub-sections in a clear and concise manner (i.e., low cognitive load). As long as the method names are descriptive, no jumping to and fro is usually needed.

    That doesn't mean every little unit needs to be split out, but it can make sense to do so if it helps write and debug those parts.

    • oxidant 2 days ago

      Then you need to make those functions public, when the goal is to keep them private and unusable outside of the parent function.

      Sometimes it's easy to write multiple named functions, but I've found debugging functions can be more difficult when the interactions of the sub functions contribute to a bug.

      Why jump back and forth between sections of a module when I could've read the 10 lines in context together?

      • Freak_NL 2 days ago

        > Then you need to make those functions public, […]

        That depends on the language, but often there will be a way to expose them to unit tests while keeping them limited in exposure. Java has package-private for this; with Rust, the unit test sits in the same file and can access private functions just fine. Other languages have comparable idioms.

        • oxidant 2 days ago

          Javascript doesn't, AFAIK. I work in Elixir, which doesn't.

          I'm for it where it's possible, but it can still make the code harder to follow.

  • Mawr 2 days ago

    Because a function clearly defines the scope of the state within it, whereas a section of code within a long function does not. Therefore a function can be reasoned about in isolation, which lowers cognitive load.

    • kragen 2 days ago

      I don't agree. If there are side effects happening which may be relevant, the section of code within a long function is executing in a clearly defined state (the stuff above it has happened, the stuff below it won't happen until it finishes) while the same code in a separate function could be called from anywhere. Even without side effects, if it's called from more than one place, you have to think about all of its callers before you change its semantics, and before you look, you don't know if there is more than one caller. Therefore the section of code can be reasoned about with much lower cognitive load. This may be why larger subroutines correlate with lower bug rates, at least in the small number of published empirical studies.

      The advantage of small subroutines is not that they're more logically tractable. They're less logically tractable! The advantage is that they are more flexible, because the set of previously defined subroutines forms a language you can use to write new code.

      Factoring into subroutines is not completely without its advantages for intellectual tractability. You can write tests for a subroutine which give you some assurance of what it does and how it can be broken. And (in the absence of global state, which is a huge caveat) you know that the subroutine only depends on its arguments, while a block in the middle of a long subroutine may have a lot of local variables in scope that it doesn't use. And often the caller of the new subroutine is more readable when you can see the code before the call to it and the code after it on the same screen: code written in the language extended with the new subroutine can be higher level.

    • lr4444lr 2 days ago

      You can write long functions in a bad way, don't get me wrong. I'm just saying the rule that the length itself is an anti-pattern has no inherent validity.

  • sureglymop 2 days ago

    As a non English speaker, what does "so much this" mean?

    Does it essentially just mean "I agree"?

    • scott_w 2 days ago

      Yep, basically “I agree with this statement a lot.” It’s very much an “online Americanism.”

    • lr4444lr 2 days ago

      In the superlative, yes. It's a fairly new phrase, and hardly in my parlance, but it's growing on me when I'm in informal typed chat contexts.

    • ericjmorey 2 days ago

      It's a call for others to take note of the important or profound message being highlighted. So more than just "I agree".

    • ziml77 2 days ago

      When someone says "this" they are basically pointing at a comment and saying "this is what I think too".

      "So much" is applied to intensify that.

      So, yes, it's a strong assertion of agreement with the comment they're replying to.

  • BenoitEssiambre 2 days ago

    Indeed, and breaking logic out into more global scopes has serious downsides if that logic needs to be modified in the future, if your system still needs to support innovation and improvement. The downsides are not totally unlike those of using a lot of global variables instead of local ones.

    Prematurely abstracting and breaking code out into small high level chunks is bad. I try to lay it out from an information theoretic, mathematical perspective here:

    https://benoitessiambre.com/entropy.html

    with some implications for testing:

    https://benoitessiambre.com/integration.html

    It all comes down to managing code entropy.

pcblues 2 days ago

I'm a fan of using the least number of language features to get the job done. If a language is simple and can be stepped through easily, one benefits from the removal of the added cognitive load of knowing a large number of language features. This provides extra brainspace to understand the problem space and the system one is working on. Most importantly, it makes it easier to have the whole shebang in your mind while you add code (correctly.)

And it adds to maintainability (so long as done in a balanced way!)

I had a boss who saw me looking out a window say, "You look like you are concentrating. I'll come back later."

I document EVERYTHING I think straight away into their appropriate documents so I can forget about it while I'm loading as much of a system's design into my head as I can. It allows me to write good code during that small window of available zen. After years of doing that, I made a document about documentation. Hope it's of use. https://pcblues.com/assets/approaching_software_projects.pdf

  • userbinator 2 days ago

    I'm a fan of using the least number of language features to get the job done.

    Some people seem to love doing the opposite, which IMHO is the biggest problem with the software industry --- there's a huge number of developers who think that the more complex (or, in their words, "structured") a piece of software is, and the more of the latest language features it uses, the better it somehow is compared to the simple "deprecated" stuff that's been working for decades.

    I've come to the conclusion that a lot of new language features are there only for the benefit of trendchasers and worshippers of planned obsolescence, and not the users nor the developers on their side.

    • chuckadams a day ago

      I think the biggest problem with the software industry is all the people who insist that everything that everyone else does is the biggest problem with the software industry.

      I guess that set includes me now ¯\_(ツ)_/¯

    • whilenot-dev a day ago

      > I've come to the conclusion that a lot of new language features are there only for the benefit of trendchasers and worshippers of planned obsolescence, and not the users nor the developers on their side.

      What are your thoughts on async/await then (available in Python/JavaScript etc.)?

      • normie3000 a day ago

        > What are your thoughts on async/await then (available in Python/JavaScript etc.)?

        Not the OP, but I find it hard to have an absolute opinion on this - IMO some recent additions to javascript significantly decrease cognitive load.

        async/await is a great example of this (vs then/catch/finally chains), and also:

        * spreading of arrays, props, and (more arguably) args
        * shorthand prop/value pairs, e.g. { x: x } as { x }

        Some stuff, it seems, is more confusing, e.g.

        * similar but subtly different things like for/of vs for/in vs forEach vs iterating Object.keys(), Object.values(), Object.entries()
        * what are generators and the yield keyword for?

      • redman25 a day ago

        Complexity in JavaScript is probably more like the fact that there are three different ways of constructing classes. Async/await has its own pitfalls, but at least it adds new functionality.

        • whilenot-dev a day ago

          JavaScript has multiple ways to blueprint objects, yes, but that isn't really what GP stated. I consider the class keyword as way more beneficial than just being for "trendchasers and worshippers".

          The only pitfall with async/await is all the existing I/O-intensive code where blocking your runtime wasn't (yet) seen as a problem that wasted resources. Must Be This Tall to Write Multi-Threaded Code[0] comes to mind.

          [0]: https://bholley.net/blog/2015/must-be-this-tall-to-write-mul...

          • redman25 a day ago

            I agree, class syntax is better. If javascript had been designed today, I imagine that would be the _only_ way to blueprint objects.

      • userbinator 21 hours ago

        My thoughts can be summed up in 4 words: "stay away from async".

        Several decades of experience debugging race conditions and other obscure concurrency bugs have taught me that the benefits of async code are rarely worth it, and more often than not it adds unnecessary overhead both at runtime and in development.

    • namaria 2 days ago

      Funny thing is, humans have a huge bias towards traditions that seems to have reversed in the last century or so.

      My take is that consumerism did a number on our psyche.

  • Tainnor 2 days ago

    > If a language is simple and can be stepped through easily, one benefits from the removal of the added cognitive load of knowing a large number of language features.

    Things that you know and have internalised don't occupy your brainspace. That's why you can read this paragraph without having to think about every word individually.

    • qw a day ago

      > Things that you know and have internalised don't occupy your brainspace

      Some "10x" developers are extremely productive because they know the software stack inside and out.

      There are also "10x" teams that share institutional knowledge of the software stack and are constantly finding more efficient ways of solving a problem. When someone finds a better library, more efficient CI/CD, or establishes new code practices, it is easy to apply when they have a single stack to focus on.

      It's hard to replicate this in teams where they are responsible for 10 micro services, each written in their own language and using different software stacks that are more "optimal" for some use case.

      • Tainnor a day ago

        I agree. The constant push to solve 100 problems with 50 poorly understood technologies, instead of doing a couple of things and understanding them really well, sometimes really makes me hate modern software development.

  • pcblues 2 days ago

    Also, it's important that you make your own templates for each of the document types, because thinking about them is part of the design process. It reduces the cognitive load of understanding yet another set of design document templates, and they are malleable in your own hands :)

  • at_a_remove 2 days ago

    I agree and will add in a few bits: although I will obey the language's idiom, in general, I like to keep the number of "things it does" per line of code to the minimum. Kind of an anti-code golf.

    I also document (or comment code) when I am waiting for the flow to kick in. For me, the documentation starts at the appendices: one for input files or API source tricks or describing the input tables in a database, another for the relevant outputs, maybe a "justify overall design decisions" section. Just the stuff someone would wish for if they had to handle it in five years.

    • pcblues a day ago

      I wonder why people boast they can do ten things in one line. Fewer lines of code doesn't mean fewer bugs. The debugger just stops at each symbol instead of each line, which I think adds to the cognitive load.

      Documenting while waiting for the flow is like stretching before exercise. It gets you to that place :)

      I got my experience/slaps by fixing my own code in the same codebase for more than ten years. I have empathy for the future maintainers :)

      • at_a_remove 16 hours ago

        I have passed on at least a few codebases. Sometimes, people get bright ideas and just dump what's working to do greenfield stuff. This is why an ex-coworker recently wanted me to come back to braindump on something I finished in 2009. After I left, some bright spark wanted to start from scratch and now there are continual complaints when the previous thing just worked, aside from a per-semester update, which the software would nag you about, as new departments appeared and such.

        On the other hand, I had an apartment-renting suite I had written get passed on to students. While it was written in Perl (a famously "write once, read never" language according to critics), because I did just one thing per line and had comments at multiple levels (a block header for each function, comments within functions, a changelog at the top), they called back and said it was the easiest thing in the world for them to convert it, despite not knowing Perl at all.

        Although I had been programming for years and years by the time I took a FORTRAN course in high school, I had a particularly exacting teacher with a Ph.D. in Computer Science and he drilled us on how we would have to deal with the code of others, or our old code, or code when we did not want to come into work with a cold. Cleverness is held in reserve for data structures and algorithms, when you have no other options. He was very firm on that.

ipython a day ago

I’m not sure I’d use the Unix io interface as a shining example of reducing cognitive load. Sure, it’s a simple interface on the surface but…

Let’s just take write() as an example. Just calling write() does not guarantee that the full contents were written to the file. You could have the case where write() wrote nothing, or some data, but was interrupted because of a signal before it completed. Or you could have a short write because of a quota or disk limit.

And I’m sure I’ll be nerd sniped shortly with even more obscure examples of the sharp edges awaiting you in such a “simple” interface. (For a more snarky version of my comment, see the last essay “the rise of worse is better” of the Unix haters handbook: https://web.mit.edu/~simsong/www/ugh.pdf)

Point is, this interface just shifts cognitive load on to the application developer - it’s got to go somewhere after all.
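To make that concrete, here is a sketch of the retry loop a careful application ends up carrying around (shown in Python, with os.write standing in for the raw syscall; since PEP 475 Python itself retries on EINTR, but short writes remain the caller's problem):

```python
import os
import tempfile

def write_all(fd: int, data: bytes) -> None:
    """Keep calling os.write until every byte lands: a single call may
    legitimately write fewer bytes than requested (a short write)."""
    view = memoryview(data)
    while view:
        written = os.write(fd, view)  # may be a partial write
        view = view[written:]

# Usage sketch: write to a temp file and confirm nothing was dropped.
with tempfile.NamedTemporaryFile(delete=False) as f:
    write_all(f.fileno(), b"x" * 100_000)
    path = f.name
with open(path, "rb") as f:
    assert f.read() == b"x" * 100_000
os.remove(path)
```

The loop itself is trivial; the cognitive load is in knowing you need it at all.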

  • nateglims 2 hours ago

    So much is crammed into ioctl or fcntl. Especially if you are trying to treat UART, SPI, I2C, etc the same by supplying the I/O interface.

  • C-programmer a day ago

    `write` can format your disk if a trickster sets LD_PRELOAD to a malicious shared object.

lisper 2 days ago

OMG, so much this.

One of the biggest sources of cognitive load is poor language design. There are so many examples that I can't even begin to list them all here, but in general, any time a compiler gives you an error and tells you how to fix it that is a big red flag. For example, if the compiler can tell you that there needs to be a semicolon right here, that means that there does not in fact need to be a semicolon right there, but rather that this semicolon is redundant with information that is available elsewhere in the code, and the only reason it's needed is because the language design demands it, not because of any actual necessity to precisely specify the behavior of the code.

Another red flag is boilerplate. By definition boilerplate is something that you have to type not because it's required to specify the behavior of the code but simply because the language design demands it. Boilerplate is always unnecessary cognitive load, and it's one sign of a badly designed language. (Yes, I'm looking at you, Java.)

I use Common Lisp for my coding whenever I can, and one of the reasons is that it, uniquely among languages, allows me to change the syntax and add new constructs so that the language meets the problem and not the other way around. This reduces cognitive load tremendously, and once you get used to it, writing code in any other language starts to feel like a slog. You become keenly aware of the fact that 90% of your mental effort is going not towards actually solving the problem at hand, but appeasing the compiler or conforming to some stupid syntax rule that exists for no reason other than that someone at some time in the dim and distant past thought it might be a good idea, and were almost certainly wrong.

  • JasonSage 2 days ago

    > Another red flag is boilerplate.

    I have to disagree. Boilerplate can simply be a one-time cost that is paid at setup time, when somebody is already required to have an understanding of what’s happening. That boilerplate can be the platform for others to come along and easily read/modify something verbose without having to go context-switch or learn something.

    Arguing against boilerplate to an extreme is like arguing for DRY and total prevention of duplicated lines of code. It actually increases the cognitive load. Simple code to read and simple code to write is low-cost, and paying a one-time cost at setup is low compared to repeated cost during maintenance.

    • singingfish 2 days ago

      I've had some C# code inflicted on me recently that follows the pile-of-garbage design pattern: just some offshore guys fulfilling a poorly expressed spec with as little brain work as possible. The amount of almost-duplicate boilerplate kicking around is one of the problems. The language design seems to encourage this lowest-common-denominator approach, and it has led to the supplier providing code that needs substantial refactoring before automated tests can be created, as the entry points ignore separation of concerns and abuse private vs. public members to give the pretense of best practices while in reality delivering worst-practice, modify-this-code-at-your-peril code instead. It's very annoying because I could have used that budget to do something actually useful, but on the other hand it improves my job security for now.

      • JasonSage 2 days ago

        Sounds like you would have had problems whether there was boilerplate-y code or not.

        • singingfish 2 days ago

          The extra boilerplate noise with excessive repetition doesn't help one little bit.

    • oalae5niMiel7qu 2 days ago

      If some program can generate that code automatically, the need to generate it, write it to disk, and for you to edit it is proof that there is some flaw in the language the code is written in. When the generator needs to change, the whole project is fucked because you either have to delete the generated code, regenerate it, and replicate your modifications (where they still apply, and if they don't still apply, it could have major implications for the entire project), or you have to manually replicate the differences between what the new version of the generator would generate and what the old version generated when you ran it.

      With AST macros, you don't change generated code, but instead provide pieces of code that get incorporated into the generated code in well-defined ways that allow the generated code to change in the future without scuttling your entire project.

      >others to come along and easily read/modify something verbose without having to go context-switch or learn something.

      They're probably not reading it, but assuming it's exactly the same code that appears in countless tutorials, other projects, and LLMs. If there's some subtle modification in there, it could escape notice, and probably will at some point. If there are extensive modifications, then people who rely on that code looking like the tutorials will be unable to comprehend it in any way.

  • kvark 2 days ago

    I disagree with the first point. Say, the compiler figured out your missing semicolon. Doesn't mean it's easy for another human to clearly see it. The compiler can spend enormous compute to guess that, and that guess doesn't even have to be right! Ever been in a situation where following the compiler recommendation produces code that doesn't work or even build? We are optimizing syntax for humans here, so pointing out some redundancies is totally fine.

    • lisper 2 days ago

      > Doesn't mean it's easy for another human to clearly see it.

      Why do you think that matters? If it's not needed, then it should never have been there in the first place. If it helps to make the program readable by humans then it can be shown as part of the rendering of the program on a screen, but again, that should be part of the work the computer does, not the human. Unnecessary cognitive load is still unnecessary cognitive load regardless of the goal in whose name it is imposed.

      • Calavar 2 days ago

        In languages (both natural and machine languages) a certain amount of syntax redundancy is a feature. The point of syntax "boilerplate" is to turn typos into syntax errors. When you have a language without any redundant syntactical features, you run the risk that your typo is also valid syntax, just with different semantics than what you intended. IMHO, that's much worse than dealing with a missing semicolon error.

        • mckn1ght 2 days ago

          Can you provide an example where syntax that’s required to be typed and can be accurately diagnosed by the compiler can lead to unintended logic? This is not the same thing as like not typing curly braces under an if directive and then adding a second line under it.

          • Calavar a day ago

            > Can you provide an example where syntax that’s required to be typed and can be accurately diagnosed by the compiler can lead to unintended logic?

            I'm not sure we are on the same page here. I'm saying it's the absence of redundant syntax (of the sort that lets the compiler accurately diagnose 'trivial' syntax errors) that can create scenarios where a typo gives you unintended valid syntax with different logic.

            So yes, the conditional shorthand in C would be an example. Getting rid of the braces means you lose an opportunity for the compiler to catch a 'trivial' syntax error, which can lead to different semantics than what the writer intended.

            • mckn1ght a day ago

              Yes, these are different things, which is why I discounted curly braces before. Those are not required for an if statement’s scope. Semicolons are “required” everywhere. The compiler can easily spot where one should be by parsing an invalid expression: it encounters illegal tokens appended onto the end of a valid expression, e.g. you cannot have one statement that contains two assignment operators at the same level of precedence.

              However, for curly braces around a conditional lexical scope, the compiler cannot tell you where the closing brace should be, besides before the end of the lexical scope that contains it, like the end of the containing function or class. There can be multiple valid locations before that: every other valid line of code. This is not the same as a semicolon, which must end every line of code.

              Can you provide another example?

      • layer8 a day ago

        The full stop at the end of your last sentence isn’t strictly needed. It isn’t even strictly needed in the preceding sentences (except for the first one with the question mark), because the capitalization already indicates the beginning of the next sentence. We still use full stops because redundancy and consistency help prevent errors and ambiguities. Reducing the tolerances to zero increases the risk of mistakes and misinterpretations.

        Ideally, adding/removing/changing a single character in valid source code would always render the code invalid instead of “silently” changing its meaning.

        • lisper a day ago

          > The full stop at the end of your last sentence isn’t strictly needed.

          yes, that's true | but this redundancy is not necessary | it's there for historical reasons | there are other ways to designate the separation between sentences | some of those alternatives might even make more sense than the usual convention

          • layer8 a day ago

            The point was that the full stop is currently a redundant element in most cases, yet we would not want to omit it just for the reason of being redundant.

            The spaces in your “ | ” punctuation are also not strictly needed, yet one would want to keep them for readability and to avoid the risk of mistaking an “|” for an “l” or similar.

            Again, something not being strictly needed isn’t a sufficient argument to do without it. There are trade-offs.

            • lisper a day ago

              > The spaces in your “ | ” punctuation are also not strictly needed.

              yes.that.is.true|spaces.are.not.strictly.needed.at.all|there.are.alternatives.and.there.are.situations.where.using.those.alternatives.actually.makes.sense|however.the.use.of.whitespace.is.so.deeply.ingrained.that.if.you.dont.do.it.the.rendering.of.your.text.will.generally.be.very.annoying.on.contemporary.systems

              The Right Answer is to separate the underlying representation from the rendering. We already do this to some extent in modern systems. For example, the meaning of text generally doesn't change if you change the font. This is not always true. The font can matter in math, for example. And some font modifications can carry semantic information -- using italics can provide emphasis, for example.

              The Right Way to design a programming language is to have an underlying unambiguous non-redundant canonical representation, and then multiple possible renderings that can be tailored to different requirements. Again, we kinda-sorta do that in modern systems with, for example, syntax coloring. But that's just a half-assed hack layered on top of deeply broken language designs.

              • vacuity a day ago

                Considering all the "tabs or spaces" flamewars and standardized formatting as with gofmt for Go code, I think this would get restricted at most professional codebases to some person's favored style. Not sure that's a good reason, but it's worth considering. For projects that are solo or along those lines, feel free.

          • vacuity a day ago

            You're being disingenuous. Your suggestion is more like if you wrote

            yes, that's true but this redundancy is not necessary it's there for historical reasons...

            without any breaks. That might be exaggerating compared to your actual position, but surely you can see that "unnecessary in this situation" doesn't imply "unnecessary overall". "Not necessary" if we're cherrypicking, sure.

            If my program now has no semicolons and then I write something else that behaves differently than expected, I'm going to be sad. My mental model for programming fares better when semicolons are used, so I will favor writing programs with semicolons. To me, the cost is trivial and the benefit, while minimal, outweighs the cost. I consider it separate from actual boilerplate. You can disagree and use other languages, but then we're probably more opinionated than divided into better or worse camps.

            • lisper a day ago

              > That might be exaggerating compared to your actual position

              To the point of being a straw man.

              There was actually a time when neither white space nor punctuation was used andallwordswerejustruntogetherlikethis. Note that it's still possible to decipher that text, it just takes a bit more effort. Natural language is inherently redundant to a certain extent. It's mathematically impossible to remove all redundancy (that would be tantamount to achieving optimal compression, which is uncomputable[1]).

              The spaces around the vertical bars in my example were redundant because they always appeared before and after. That is a sort of trivial redundancy and yes, you can remove it without loss of information. It just makes the typography look a little less aesthetically appealing (IMHO). But having something to indicate the boundaries between words and sentences has actual value and reduces cognitive load.

              ---

              [1] https://en.wikipedia.org/wiki/Kolmogorov_complexity#Uncomput...

              • vacuity a day ago

                I think you forgot the analogy. Why is it bad to have semicolons in programs then?

                > You become keenly aware of the fact that 90% of your mental effort is going not towards actually solving the problem at hand, but appeasing the compiler or conforming to some stupid syntax rule that exists for no reason other than that someone at some time in the dim and distant past thought it might be a good idea, and were almost certainly wrong.

                You said this originally. I definitely agree for something like parentheses in if conditions in Java, but I think semicolons are a great example of how

                > having something to indicate the boundaries between words and sentences has actual value and reduces cognitive load.

                • lisper a day ago

                  > Why is it bad to have semicolons in programs then?

                  It's not bad to have them, it's bad to require them when they aren't necessary. It's bad to make their absence be a fatal syntax error when they aren't necessary. (Some times they are necessary, but that's the exception in contemporary languages.)

                  Also, I know I'm the one who brought them up, but semicolons are just one small example of a much bigger and more widespread problem. It's a mistake to fixate on semicolons.

      • dwattttt 2 days ago

        Aside from the other good points, this thread is about cognitive load. If a language lets you leave off lots of syntactic elements and have the compiler infer them from context, it also forces anyone else reading the code to do the cognitive work of inferring from context.

        The only overhead requiring the syntax adds is the mechanical effort of typing it by the code author; they already had to know the context to know there should be two statements, because they made them, so there's no increased "cognitive" load.

        • lisper 2 days ago

          I guess I didn't make this clear. I'm not advocating for semicolons to be made optional. I'm saying that they should not be included in the language syntax at all unless they are necessary for some semantic purpose. And this goes for any language element, not just semicolons.

          The vast majority of punctuation in programming languages is unnecessary. The vast majority of type declarations are unnecessary. All boilerplate is unnecessary. All these things are there mostly because of tradition, not because there is any technical justification for any of it.

          • dwattttt 2 days ago

            The point generalises beyond semicolons; everything you leave to context is something other people have to load up the context for in order to understand.

            Consider Python; if there are the optional type hints, those can tell you the third parameter to a function is optional. If those are missing, you need to dive into the function to find that out; those type hints are entirely optional, and yet they reduce the cognitive load of anyone using it.
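            A minimal sketch (function and names hypothetical) of what the hints buy a reader: the signature alone says the third argument may be omitted and the result may be None, with no need to open the body:

```python
from typing import Optional

def find_user(name: str, db: dict, default: Optional[str] = None) -> Optional[str]:
    """The hints alone document the contract: `default` is optional,
    and the caller must be prepared for a None result."""
    return db.get(name, default)

assert find_user("ada", {"ada": "id-1"}) == "id-1"
assert find_user("bob", {"ada": "id-1"}) is None
```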

            • mckn1ght 2 days ago

              I haven’t used type hints in Python, but can what you’re describing lead to situations where the code cannot run and the interpreter gives you a suggestion on how to fix it?

              • dwattttt 2 days ago

                Type hints have no runtime impact, so they can't make stuff not work.

                Type linters like mypy can check your code & report something like "this function call requires str, you're providing str | None" though.

            • oalae5niMiel7qu 2 days ago

              >The point generalises beyond semicolons; everything you leave to context is something other people have to load up the context for in order to understand.

              This is not true, because an editor can add any on-screen hints that are needed to help a human understand the code. For example, in my editor, Python code gets vertical lines that indicate where the different indentation levels are, so I can easily see when two lines of code far apart on the screen are at the same indentation level, or how many indentation levels lower the next line is after a long, highly-indented block. Python could add an end-of-block marker like Ruby does to make things like this easier to see, or it could try to encode the vertical lines into the language somehow, but I'd derive no benefit because the editor already gives me the visual clues I need.

      • naasking a day ago

        > If it helps to make the program readable by humans then it can be shown as part of the rendering of the program on a screen, but again, that should be part of the work the computer does, not the human.

        That means you and the compiler front-end are looking at different representations. That doesn't sound like a good idea. Keep it stupid simple: all of our source-control tools should work on the same representation, and it should be simple and portable. Well-defined syntax is a good choice here, even if there's some redundancy.

        • lisper a day ago

          > doesn't sound like a good idea.

          It isn't, but that ship sailed when cpp was invented.

          > Keep it stupid simple

          Could not agree more. That's why I use Lisp.

          • naasking 8 hours ago

            > It isn't, but that ship sailed when cpp was invented.

            The fact that subsequent languages largely avoided that suggests that we can sometimes learn from our mistakes.

      • nullstyle 2 days ago

        > then it can be shown as part of the rendering of the program on a screen

        I disagree with this, and can most easily express my disagreement by pointing out that people look at code with a diversity of programs: from simple text editors with few affordances to convey a program's meaning apart from the plain text, like Notepad and pico, all the way up to full IDEs that can do automatic refactoring and structured editing, like the JetBrains suite, Emacs+Paredit, or the clearly ever-superior Visual Interdev 6.

        If people view code through a diversity of programs, then code's on-disk form matters, IMO.

        • lisper 2 days ago

          Sure, but nothing stops you from looking at the raw code. Consider looking at compiled code: you can always hexdump the object file, but having a disassembly helps a lot.

        • oalae5niMiel7qu 2 days ago

          People's choice of editor is influenced by what they're editing. For example, virtually every Lisp programmer uses Emacs, even though there are many alternatives out there, including VS Code plugins. And virtually every Java programmer uses a JetBrains IDE or something similar. I'd probably install an IDE if I had to work on a Java codebase. Editing with a diversity of programs isn't universal.

    • glitchc 2 days ago

      No, as python and other languages amply demonstrate, the semicolon is for the compiler, not the developer. If the compiler is sophisticated enough to figure out that a semicolon is needed, it has become optional. That's the OP's point.

      • taormina 2 days ago

        But the language spec for Python is what allows for this, not the compiler. \n is just the magic character now, except we also need a \ to make multiline expressions. It's all trade-offs; compilers are not magic.

        • BeetleB 2 days ago

          > now except now we also need a \ to make multiline expressions.

          You never need the backslash in Python to make multiline expressions. There's always a way to do multiline using parentheses. Python's own style guidelines discourage using the backslash for this purpose.

          • taormina a day ago

            And you can also do it with triple quotation marks if strings are involved, but it's still more work for the compiler that someone explicitly did; it's not magic.

            • BeetleB a day ago

              Plain strings work fine. Python has the same behavior as C: if two string literals are separated by whitespace, it concatenates them. So if I have a long string, I start with an open parenthesis, type one string, go to the next line, type another, and so on over several lines. Python sees it all as one string. Very readable.
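              For illustration, a small sketch of that pattern:

```python
# Adjacent string literals are concatenated by the parser, so a long
# string can span several lines inside parentheses, no backslashes needed.
message = (
    "Python has the same behavior as C: "
    "two string literals separated only by whitespace "
    "are joined into a single string."
)
assert message.endswith("joined into a single string.")
assert "\n" not in message  # it is one string of text, not three
```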

        • scotty79 2 days ago

          Scala then. Semicolons are optional but you still can have them if you need them

          • lblume 2 days ago

            The obvious example would have been JavaScript, but nobody wants to say something positive about JavaScript...

            • scotty79 2 days ago

              JavaScript has some specific and unique issues. Some silly choices (like auto inserting of semi-colons after empty return) and source code routinely, intentionally getting mangled by minification.

            • glitchc 2 days ago

              > but nobody wants to say something positive about JavaScript...

              For obvious reasons...

      • nazgul17 2 days ago

        It's not that the semicolon is somehow a special character and that's why it's required/optional. It's the context that makes it necessary or not. Python proves that it's possible to design a language that doesn't need semicolons; it does not mean that e.g. Java or C are well defined if you make semicolons optional.

      • sokoloff 2 days ago

        If it’s in the language spec as required there and I’m using a compiler that claims to implement that language spec, I want the compiler to raise the error.

        Additionally offering help on how to fix it is welcome, but silently accepting not-language-X code as if it were valid language-X code is not what I want in a language-X compiler.

  • gleenn 2 days ago

    Totally agree. One of the biggest and most important things a language designer chooses is what to disallow. For instance, private/package/public visibility is one small example of an imposed restriction that makes it easier to reason about changing a large project: if something is private, you know it's okay and probably easy to refactor. The self-imposed restrictions save you mental effort later.

    I love Lisps but am a Clojure fan. This is because in Clojure, 90+% of the code is static functions operating on immutable data. That makes it extremely easy to reason about in the large. Those two restrictions are big and impose a lot of structure, but man, I can tear around the codebase with a machete because there are so many things that code /can't do/.

    Also, testing is boneheadedly simple, because everything is just parameters into those static functions and assertions on the values coming out. I don't have to do some arduous object construction with all these factories if I need to mock anything, and I can use "with-redefs" to statically swap function definitions, which is clean and very easy to reason about. Choosing the things your language disallows is one of the most important things you can do to reduce cognitive load.
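
    For readers who don't know Clojure, here's a rough Python analogue of the `with-redefs` idea, temporarily swapping a function definition within a scope (the names here are invented for illustration):

```python
from unittest import mock

class Rates:
    @staticmethod
    def fetch():
        # Imagine this hits a network service in production code.
        raise RuntimeError("no network in tests")

def price_with_tax(amount):
    return amount * (1 + Rates.fetch())

# Statically swap the definition for the duration of the block,
# roughly what Clojure's with-redefs does.
with mock.patch.object(Rates, "fetch", return_value=0.25):
    assert price_with_tax(100) == 125.0
# Outside the block the original definition is restored.
```

    As with `with-redefs`, when code is mostly pure functions over plain data, tests reduce to calls plus assertions on return values.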

    • oalae5niMiel7qu 2 days ago

      When the code needs to do something that it can't do, there is a massive cognitive load associated with figuring out how to do something approximating it. When the language or a framework is associated with lots of "how do I X" questions, the answers to which are all "completely rethink your program", that is evidence that the language/framework is not reducing cognitive load.

  • pwdisswordfishz 2 days ago

    Yes, semicolons are totally unnecessary. That’s why nobody who works on JavaScript has ever regretted that automatic semicolon insertion was added to the language. It has never prevented the introduction of new syntaxes to the language (like discussed here: <https://github.com/twbs/bootstrap/issues/3057#issuecomment-5...>), nor motivated the addition of awkward grammatical contortions like [no LineTerminator here].

    • christophilus 2 days ago

      There are plenty of languages that don’t require semicolons and yet manage to avoid those issues: Clojure, Go, Odin…

      • jmyeet 2 days ago

        Clojure delineates everything by explicitly putting statements in parentheses (like any LISP). That's basically the same thing.

        Go is an interesting example but it gets away with this by being far stricter with syntax IIRC (for the record, I'm a fan of Go's opinionated formatting).

      • 9rx 2 days ago

        Funny enough, Go’s grammar does require semicolons. It avoids needing them typed in the source code by automatically adding them on each newline before parsing.

        • ropejumper 2 days ago

          On almost every newline, which is the reason why this doesn't work:

              func thing()
              {
          
          I quite like this approach. It's very simple and consistent, and once you know how it works it's never surprising.

  • wesselbindt 2 days ago

    > the compiler can tell you that there needs to be a semicolon right here

    I can see that this is an annoyance, but does it really increase cognitive load? For me language design choices like allowing arbitrary arguments to functions (instead of having a written list of allowed arguments, I have to keep it in my head), or not having static types (instead of the compiler or my ide keeping track of types, I have to hold them in my head) are the main culprits for increasing cognitive load. Putting a semicolon where it belongs after the compiler telling me I have to is a fairly mindless exercise. The mental acrobatics I have to pull off to get anything done in dynamically typed languages is much more taxing to me.

    • lisper 2 days ago

      Semicolons are just an example, and a fairly minor one. A bigger pet peeve of mine is C-style type declarations. If I create a binding for X and initialize it to 1, the compiler should be able to figure out that X is an integer without my having to tell it.

      In fact, all type declarations should be optional, with run-time dynamic typing as a fallback when type inferencing fails. Type "errors" should always be warnings. There should be no dichotomy between "statically typed" and "dynamically typed" languages. There should be a smooth transition between programs with little or no compile-time type information and programs with a lot of compile-time type information, and the compiler should do something reasonable in all cases.
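
      Python's optional annotations gesture at part of this: annotations are ignored at runtime, and a checker like mypy can verify the annotated parts while leaving unannotated code dynamic (a sketch, not a full gradual-typing system):

```python
def double(x: int) -> int:
    # Annotated: a static checker can verify call sites at "compile" time.
    return 2 * x

def mystery(x):
    # Unannotated: behavior is resolved dynamically at run time.
    return x + x

print(double(21))      # checked where annotated
print(mystery("ab"))   # still works dynamically
```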

      • t-writescode 2 days ago

        > with run-time dynamic typing as a fallback when type inferencing fails.

        I've seen the code that comes out of this, and how difficult it can be to refactor. If you're going to have a language with static types, I definitely prefer strict typing in every situation where the type can't be inferred.

        • medo-bear a day ago

          It works the other way too. I've seen plenty of code with strict typing whose cognitive load could be reduced greatly by dynamic typing. A far bigger problem is hidden side effects, and static typing does nothing to fix that.

          • wesselbindt 14 hours ago

            I work in dynamically typed languages a lot, so I don't have many opportunities to feel the way you do. Could you give an example where moving from static to dynamic would reduce cognitive load?

            For the opposite example, here's where my preference comes from: I'm editing a function. There's an argument called order, and most likely it's either an instance of some Order class, which has some attribute I need, or it's an integer representing the id of such an order, or it's null. I'm hoping to access that attribute, so I have to figure out the type of order.

            In a dynamically typed language, I'll have to look at the caller of my function (goes on my mental stack), see where it gets its order from, potentially go to the caller of that function (also on my mental stack), and so on until I hopefully see where order is instantiated and figure out its type, so I can take the call sites off my mind and just keep the type of order on my mental stack.

            But actually, this is wrong, because my function is called by way more functions than the ones I examined. So really, all I know now is that _sometimes_ order is of type Order. To be sure, I have to go to _all_ callers of my function, and all their callers, etc. This grows exponentially.

            But let's say I manage, and I find the type of order, and keep it in my mind. Then I need to repeat the same process for other arguments I want to use, which is now harder because I'm keeping the type of order on my mental stack. If I manage to succeed, I can go and edit the function, keeping the types of whatever variables I'm using in my head. I pray that I didn't miss a call site, and that the logic is not too complicated, because my mental stack is occupied with remembering types.

            Here's how to do this in a statically typed language: read the type signature. Done.
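
            To make the contrast concrete, here's the order scenario sketched with Python type hints (class and field names invented):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Order:
    id: int
    total: float

# The signature alone documents what callers may pass: an Order,
# an integer order id, or None. No walking the call graph needed.
def describe(order: Union[Order, int, None]) -> str:
    if order is None:
        return "no order"
    if isinstance(order, int):
        return f"order #{order}"
    return f"order #{order.id}: {order.total}"

print(describe(None), describe(7), describe(Order(id=7, total=9.5)))
```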

            • medo-bear 14 hours ago

              Just think of any function whose type is obvious. Surely writing type information in those cases is redundant, to the extent that it wouldn't even help with optimisation. But think about something like this:

                 bool function1(x, y, z):
                     return function2(x, y, z)

              Imagine function2, besides returning true/false, mutates x in some major way. This is a far bigger and more common problem than typing, and a dynamic language with runtime debugging capabilities is far better equipped to inspect code like this.

              Also, I am not saying that typing is worse than no type information. I'm saying that typing should be optional as far as the compiler is concerned, and that type information should be like documentation that is also useful to the compiler. Common Lisp is an example of a language that is both dynamic and strongly typed (see SBCL), to the extent that you can implement a statically typed language (à la ML) in it.

              • kazinator 3 hours ago

                Sometimes when people decry lack of typing, it turns out that it's actually a lack of the ability to define a named structure type with named fields, rather than faking one out with an ad hoc hash table representing a collection of properties.
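
                A Python illustration of that distinction (names invented):

```python
from dataclasses import dataclass

# Ad hoc "struct": a dict of properties. A typo'd key is silently accepted.
user_as_dict = {"name": "Ada", "emial": "ada@example.com"}

@dataclass
class User:
    # Named structure type with named fields: typos fail immediately.
    name: str
    email: str

user = User(name="Ada", email="ada@example.com")
print(user.email)
# User(name="Ada", emial="...") would raise TypeError at the call site.
```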

              • t-writescode 12 hours ago

                > a dynamic language with capabilities of runtime debugging is far better equiped to inspect code like this

                Have you used a robust, modern debugger, like IntelliJ or Visual Studio? You can do a whole heck of a lot of very, very robust debugging. You can watch, run commands, run sets of commands, inspect anything in great detail, write new code in the watcher, and so on.

                • medo-bear 12 hours ago

                  I use them, and compared to what I can do with my Common Lisp programs on a far less computationally intensive IDE, they are many times poorer. An even more interesting aspect, because it is entirely people-dependent, is that I find code in an average Common Lisp project far more readable and understandable than what is regarded as a very good Java application.

              • medo-bear 11 hours ago

                To expand on this: in Common Lisp I can build up my program dynamically in the REPL, progressively making the code more readable and understandable after making it do what I want, and then do the final tidying up with type information and necessary documentation when I compile it. I fail to see how I could program with less cognitive load in any other language, especially strictly static ones.

                • wesselbindt 10 hours ago

                  Repls are not unique to dynamically typed languages. Haskell has one.

                  • medo-bear 9 hours ago

                    There are "repls" and repls. Even Java has a "repl". Forget about the dynamic/static dichotomy; I'm yet to see a non-Lisp REPL that provides a true interactive experience with your running program.

                    And even though static languages can have "repls" as an afterthought (e.g. GHCi), rest assured that their static typing is a completely unnecessary cognitive load on that workflow (and very likely a performance cost too).

      • naasking a day ago

        > If I create a binding for X and initialize it to 1, the compiler should be able to figure out that X is an integer without my having to tell it.

        There are many integral types, all of which have different properties, often for good reasons.

        • lisper a day ago

          Sure, but often != always. And if your language forbids type inference then you are burdened with the cognitive load of worrying about types every single time whether there is a good reason in that instance or not.

          • wesselbindt 14 hours ago

            There's a difference between inference and coercion, and no one here is arguing against inference, as far as I can tell.

  • epolanski 2 days ago

    The very same reasons you find CL lowers your cognitive load are why, after 60 years, all Lisps have ultimately been relegated to niche languages despite their benefits, and I say this as a Racket lover. It raises cognitive load for everybody else, who has to go through extra steps to decode your choices.

    It's the very same reason why Haskell's monocle-wielding developers haven't been able to produce a single piece of killer software in decades: every project/library out there has its own extensions to the language, specific compiler flags, etc., so onboarding and sharing code becomes a huge chore. And again, I say this as an avid Haskeller.

    Haskellers know that, and there was some short lived simple Haskell momentum but it died fast.

    But choosing Haskell or a lisp (maybe I can exclude Clojure somewhat) at work? No, no and no.

    Meanwhile bidonville PHP programmers can boast Laravel, Symfony and dozens of other libraries and frameworks that Haskellers will never ever be able to produce. Java?

    C? Even more.

    The language might be old and somewhat complex, but read a line and it means the same in any other project, there are no surprises only your intimacy with the language limiting you. There's no ambiguity.

    • tome 2 days ago

      I’m surprised to hear this from an avid Haskeller and I think it might give the wrong impression to those who are less familiar with Haskell. I’m sure you know this, but for the benefit of others, projects don’t have their own extensions, they just may or may not use some of the extensions provided by GHC. Anyway, that practice is now diminishing given the GHC2021 and GHC2024 “standards”, which just enable a fixed set of stable extensions.

      And regarding using specific compiler flags, well, projects almost never do that.

    • lisper 2 days ago

      > But choosing Haskell or a lisp (maybe I can exclude Clojure somewhat) at work? No, no and no.

      I've been using CL at work for pretty much my entire career and have always gotten a huge amount of leverage from it.

      • epolanski 2 days ago

        So do I, but not in large projects and teams that need to scale.

        • medo-bear a day ago

          This requirement has become a meme. I can do more alone on a project (spanning several domains that are new to me) with Lisp than I can with a group of 5 or 10 in any other language.

    • cognate a day ago

      I dumped Haskell specifically because of its enormous cognitive load. All my time and energy went into Haskell and its endless quirks, leaving nothing for the business problem and its stakeholders.

      • tome a day ago

        Intriguing! Could you say more about which aspects of Haskell gave it a high cognitive load for you?

        By contrast, I've used predominantly Haskell in my career for the last ten years precisely because it has reduced my cognitive load. So I'm interested in understanding the discrepancy here.

        • cognate a day ago

          Too much experimental software, too much fragmentation in major architectural design approaches, compounded by weak documentation, abandonware, and a tiny community.

          Consider the choices in optics libraries, effects systems, regex packages, Haskell design patterns, web frameworks, SQL libraries, and even basic string datatypes. Now consider the Cartesian product of those choices and all their mutual incompatibilities. That's cognitive overload and nobody pays for analysis paralysis.

          A stable engineering group with long-term projects can define a reference architecture and these problems are manageable. But consider large enterprise consulting, where I work. We routinely see unrelated new client projects, quickly assembled teams with non-intersecting skill sets, and arbitrary client technical constraints. Here, the idea of a repeatable, reference architecture doesn't fit, and every new project suffered cognitive overload from Haskell's big Cartesian product.

          I really hoped Boring Haskell, Simple Haskell, and other pragmatic influences would prevail but Haskell has gone elsewhere. Those values are worth reconsidering, either in Haskell directly, or in a new and simpler language that puts those goals at center of its mission.

          • tome 14 hours ago

            Thanks! You seem to be mostly talking about cognitive load arising from having too many choices. Is that right?

            (That said, I don't understand what abandonware and a tiny community have to do with cognitive load -- I agree they're bad, I just don't see the connection to cognitive load.)

            > I really hoped Boring Haskell, Simple Haskell, and other pragmatic influences would prevail but Haskell has gone elsewhere. Those values are worth reconsidering

            I agree with this, except the "gone elsewhere" part. Haskell has far more pragmatic influences today than it did ten years ago when I started using it professionally. The change is slow, but it is in the right direction.

            • cognate 4 hours ago

              Yes, Haskell's choices are often overwhelming. Think of Ruby On Rails, where the community has a well-worn playbook and everyone knows the game. Then compare Haskell, which hits designers with an overwhelming menu of choices, and the community still hasn't picked the winners.

              Glancing at r/haskell, people often ask for help in choosing web frameworks, SQL libraries, effect systems and monad transformers, regex libraries, text datatypes, lens packages and so on. Simple Haskell and Boring Haskell tried eliminating those problems but the community ignored their pleas, occasionally dismissing the idea with frank negativity.

              > what abandonware and a tiny community have to do with cognitive load -- I agree they're bad, I just don't see the connection to cognitive load.)

              Our due diligence on 3rd party libraries investigates how active a library is, which includes github submission frequency, online discussions, blog posts, security fix responsiveness, courseware, etc. Activity runs from extremely high (like pytorch) to graveyard code written long ago by graduate students and researchers. Between those endpoints, the big middle is often murky and requires lots of contingency analysis, given that we're delivering real systems to clients and they must stay absolutely safe and happy. All that analysis is brain-deadening, non-productive cognitive load.

              Obviously none of this is unique to Haskell, but it's fair to say that other platforms provide more standardized design conventions, and for my needs, a quicker path to success.

  • nradov 2 days ago

    The criticisms of Java syntax are somewhat fair, but it's important to understand the historical context. It was first designed in 1995 and intended to be an easy transition for C++ programmers (minimal cognitive load). In an alternate history where James Gosling and his colleagues designed Java "better" then it would have never been widely adopted and ended up as a mere curiosity like Common Lisp is today. Sometimes you have to meet your customers where they are.

    It has taken a few decades but the latest version significantly reduces the boilerplate.

    • lisper 2 days ago

      Sure. I understand why things are the way they are. But I don't think that's a reason not to complain about the way things are. Improvement is always the product of discontent.

  • the__alchemist 2 days ago

    Great points. I strongly agree with your first point. Regrettably, I haven't used any language that solves this. (But I believe it's possible, and you've demonstrated it with one I haven't used.)

    I'm stuck between two lesser evils, not having the ideal solution you found. 1: Rust commits the sin you describe. 2: Python, Kotlin, C++, etc. commit a worse sin: they print lots of words (to varying degrees), where I may or may not be able to tell what's wrong, and if I can, I have to pick it out of a wall of text.

    Regarding boilerplate: this is one of the things I dislike most about Rust, as an example. I feel like prefixing `#[derive(Clone, Copy, PartialEq)]` on every (non-data-holding) enum is a flaw. Likewise, the way I use structs almost always results in prefixing each field with `pub`. (Other people use them in a different way, I believe, which doesn't require this.)

  • PittleyDunkin 2 days ago

    > Another red flag is boilerplate. By definition boilerplate is something that you have to type not because it's required to specify the behavior of the code but simply because the language design demands it.

    Two things: 1) this is often not language design but rather framework design, and 2) any semantic redundancy in context can be called boilerplate. Those same semantics may not be considered boilerplate in a different context.

    And on the (Common) Lisp perspective: reading and writing Lisp is arguably a unique skill that takes time and money to develop and brings much less value in return. I'm no fan of Java from an essentialist perspective, but much of that cognitive load can be offset by IDEs, templates, lint tooling, etc. It has a role, particularly when you need to marshal a small army of coders very rapidly.

    • lisper 2 days ago

      If the world put even a tenth of the effort into training Lisp programmers as it does into training Java programmers you would have no trouble marshaling an army of Lisp programmers.

      • oalae5niMiel7qu 2 days ago

        The real problem is you cannot ever marshal an army of cheap Lisp programmers, because Lisp programming requires not only learning but raw ability. The big companies are searching for a language that any idiot can learn in a week, with the hope that they can hire thousands of them now, and fire them all next year when LLMs are slightly smarter.

        They run into the problem that programming is inherently hard, and no amount of finagling with the language can change that, so you have to have someone on every team with actual talent. But the team can be made of mostly idiots, and some of them can be fired next year if LLMs keep improving.

        If you use Lisp for everything, you can't just hire any idiot. You have to be selective, and that costs money. And you won't be able to fire them unless AGI is achieved.

        • lisper 2 days ago

          > you cannot ever marshal an army of cheap Lisp programmers

          That may be, but since Lisp programmers are easily 10x as productive as ordinary mortals you can pay them, say, 5x as much and still get a pretty good ROI.

          > you can't just hire any idiot

          Yeah, well, if you think hiring any idiot is a winning strategy, far be it from me to stand in your way.

          • oalae5niMiel7qu a day ago

            I don't think it's a winning strategy, but I'm in no position to make hiring or programming-language decisions, and I don't have the market insight that would be required to start my own company.

        • medo-bear a day ago

          I would rather take on a lisp job that pays my bills than a Java job that pays my bills + upgrades my car

  • deergomoo 2 days ago

    > Another red flag is boilerplate. By definition boilerplate is something that you have to type not because it's required to specify the behavior of the code but simply because the language design demands it. Boilerplate is always unnecessary cognitive load, and it's one sign of a badly designed language. (Yes, I'm looking at you, Java.)

    The claim that LLMs are great for spitting out boilerplate has always sat wrong with me for this reason. They are, but could we not spend some of that research money on eliminating some of the need for boilerplate, rather than just making it faster to input?

  • api 2 days ago

    I agree up until the end. Languages that let you change the syntax can result in stuff where every program is written in its own DSL. Ruby has this issue to some extent.

    • lisper 2 days ago

      Sure, changing the syntax is not something to be done lightly. It has to be done judiciously and with great care. But it can be a huge win in some cases. For example, take a look at:

      https://flownet.com/gat/lisp/djbec.lisp

      It implements elliptic curve cryptography in Common Lisp using an embedded infix syntax.

  • jmyeet 2 days ago

    So with semi-colons, you have three basic options:

    1. Not required (eg Python, Go)

    2. Required (eg C/C++, Java)

    3. Optional (eg Javascript)

    For me, (3) is by far the worst option. To me, the whole ASI debate is so ridiculous. To get away with (1), the languages make restrictions on syntax, most of which I think are acceptable. For example, Java/C/C++ allow you to put multiple statements on a single line. Do you need that? Probably not. I can't even think of an example where that's useful/helpful.

    "Boilerplate" becomes a matter of debate. It's a common criticism with Java, for example (eg anonymous classes). I personally think with modern IDEs it's really a very minor issue.

    But some languages make, say, the return statement optional. I actually don't like this. I like a return being explicit and clear in the code. Some will argue the return statement is boilerplate.

    Also, explicit type declarations can be viewed as boilerplate. There are levels to this: C++'s auto is one level, and so are "var" declarations. Java is more restrictive than this (e.g. <> for implied types, to avoid repeating types in a single declaration). But is this boilerplate?

    Common Lisp is where you lose me. Like the meme goes, if CL were a good idea it would've caught on at some point in the last 60 years. Redefining the language seems like a recipe for disaster, or at least like adding a bunch of cognitive load, because you can't trust that "standard" functions are doing standard things.

    Someone once said they like in Java that they're never surprised by 100 lines of code of Java. Unlike CL, there's never a parser or an interpreter hidden in there. Now that's a testament to CL's power for sure. But this kind of power just isn't conducive to maintainable code.

    • medo-bear a day ago

      > Unlike CL, there's never a parser or an interpreter hidden in there

      I've never once run into this problem in my years of writing Common Lisp. Can you show me example code that has this? I wager you cannot, and that you're pronouncing on something you know little about and want it to be true because you're too lazy to move beyond knowing little.

      > But this kind of power just isn't conducive to maintainable code.

      I can usually run decades-old code in Common Lisp. In fact this is one of its well-known features. How much more maintainable can it possibly get :)

    • epolanski 2 days ago

      I like (3), to be honest, and the number of times it has posed any issue is virtually zero.

  • bloopernova 2 days ago

    Regarding Common Lisp, do you know of any articles that highlight the methods used to "change the syntax and add new constructs so that the language meets the problem and not the other way around."

    • webnrrd2k 2 days ago

      It's talking about Lisp macros, homoiconicity, and a few other features of Lispy languages. I'd suggest the books On Lisp or Lisp in Small Pieces as good places to learn about it, but there are a ton of other resources that may be better suited to your needs.

      • webnrrd2k 2 days ago

        Also check out clojure, and the books: Norvig's PAIP, or Graham's ANSI Common Lisp.

        • corinroyal 2 days ago

          And don't miss Sonja Keene's book "Object-Oriented Programming in Common Lisp" and Kiczales' "The Art of the Meta-Object Protocol". If you don't reach enlightenment after those, Libgen will refund your money.

  • 3abiton 2 days ago

    My gripe with the post is that there is no objective "cognitive load" solution. Arguably it varies from one person to another.

    • epolanski 2 days ago

      I don't think you can have golden rules; if you do, you fall into the usual "don't do X" or "limit Y to Z lines", etc.

      But what you _can_ do is to ask yourself whether you're adding or removing cognitive load as you work and seek feedback from (possibly junior) coworkers.

    • ants_everywhere 2 days ago

      This is true for exactly the same reason that no one algorithm compresses all types of data equally well.

  • isodev 2 days ago

    > poor language design

    We have an excellent modern-day example in Swift: it managed to grow from a simple and effective tool for building apps into a “designed by committee” monstrosity that takes months to get into.

  • cess11 2 days ago

    You can reduce your Java boilerplate to annotations or succinct XML or whatever. Code generation is used a lot on the JVM.

    Can you show a real compiler message about such a semicolon?

    • lisper 2 days ago

          % cat test.c
          main () {
            int x
            x=1
          }
          % gcc test.c  
          test.c:1:1: warning: type specifier missing, defaults to 'int' [-Wimplicit-int]
          main () {
          ^
          test.c:2:8: error: expected ';' at end of declaration
            int x
                 ^
                 ;
          1 warning and 1 error generated.
      • cess11 2 days ago

        I added int to the main declaration to clean the irrelevant warning, and I get this:

           tst.c: In function ‘main’:
           tst.c:3:5: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘x’
               3 |     x=1
                 |     ^
           tst.c:3:5: error: ‘x’ undeclared (first use in this function)
           tst.c:3:5: note: each undeclared identifier is reported only once for each function it appears in
           tst.c:3:8: error: expected ‘;’ before ‘}’ token
               3 |     x=1
                 |        ^
                 |        ;
               4 | }
                 | ~       
        
        gcc (Debian 12.2.0-14) 12.2.0

        I get three errors, all on line 3 rather than 2, and as the first of them says, there are at least four alternatives for resolution besides semicolon.

        Full code after adding type to main, including linter message from c.vim:

                 1 int main () {                                                                        
                 2     int x                                                                            
             E   3     x=1 /\* E: 'x' undeclared (first use in this function)                            
                 4 }
        
        .
  • frenchslumber 2 days ago

    Completely agree! Common Lisp is truly the tool of the Gods.

bob1029 2 days ago

> Types of cognitive load - Extrinsic/Intrinsic

This neatly mirrors the central ideas presented in Out of the Tar Pit [0], which defines accidental and essential complexity.

Reading this paper was probably one of the biggest career unlocks for me. You really can win ~the entire game if you stay focused on the schema and keep in touch with the customer often enough to ensure that it makes sense to them over time.

OOTP presents a functional-relational programming approach, but you really just need the relational part to manage the complexity of the domain. Being able to say that one domain type is relevant to another domain type, but only by way of a certain set of attributes (in a 3rd domain type - join table), is an unbelievably powerful tool when used with discipline. This is how you can directly represent messy real world things like circular dependencies. Modern SQL dialects provide recursive CTEs which were intended to query these implied graphs.

Over time, my experience has evolved into "let's do as much within the RDBMS as we possibly can". LINQ & friends are certainly nice to have if you need to build a fancy ETL pipeline that interfaces with some non-SQL target, but they'll never beat a simple merge statement in brevity or performance when the source & target of the information are ultimately within the same DB scope. I find myself spending more time in the SQL tools (and Excel) than I do in the various code tools.

[0] https://curtclifton.net/papers/MoseleyMarks06a.pdf

throw4847285 a day ago

I think this is compelling programming advice, but "cognitive load" isn't adding anything. The problem with popularizing a term like cognitive load is that it just becomes a new more scientific term for an existing concept in folk psychology. It can then be applied, detached from any experimental psych, to anybody's personal bugbear.

"This programming paradigm is bad because cognitive load" becomes identical to "this programming paradigm is bad because it isn't simple" and then simple is such a fuzzy concept that you can define whatever you like as simple and whatever you don't as having high cognitive load.

  • frou_dh a day ago

    > then simple is such a fuzzy concept that you can define whatever you like as simple and whatever you don't as having high cognitive load.

    Good observation. That is absolutely rampant in online programmer discussions/flame-wars. Which is why Rich Hickey's classic presentation Simple Made Easy, although not the last word on the topic, at least tried to bring some objectivity to what simple is.

  • rTX5CMRXIfFG a day ago

    Which is why engineers who intend to use the term in their discussions shouldn’t dumb it down or loosely define it in their own words, but should instead cite the definition from authoritative sources, say reference texts.

    And whenever it is being used as an argument to reject or approve code, just saying “because cognitive load” should not be accepted as enough. Instead, there should be an accompanying explanation of what exactly in the code raises the cognitive load and what mechanisms, according to cognitive science, create it. (Note: I’m using “science” here as opposed to just “psychology”, because the anatomy/physiology of human memory is not exclusive to the domain of psychology.)

deergomoo 2 days ago

Composition over inheritance is one of the most valuable lessons I learned earlier in my career as a developer. In fact these days, I'm hard-pressed to think of a case in which I would prefer inheritance as my first choice to model any problem. I'm sure there probably are some, but it feels too easy to wield irresponsibly and let bad design creep in.

At a previous job I had, a fairly important bit of code made use of a number of class hierarchies each five or six layers deep, including the massive code smell/design failure of certain layers stubbing out methods on a parent class due to irrelevancy.

To make matters worse, at the point of use often only the base/abstract types were referenced, so even working out what code was running basically required stepping through in a debugger if you didn't want to end up like the meme of Charlie from Always Sunny. And of course, testing was a nightmare because everything happened internally to the classes, so you would end up extending them even further in tests just to stub/mock bits you needed to control.
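For contrast, a hypothetical sketch of the composition alternative: behaviour is injected as a small collaborator, so a test swaps in a double instead of extending a hierarchy (all names here are invented):

```python
# Composition over inheritance: OrderService holds a notifier rather than
# inheriting from a base class that stubs out irrelevant methods.
class NullNotifier:
    """Test double; a real implementation would talk to SMTP or similar."""
    def __init__(self):
        self.sent = []
    def send(self, user, message):
        self.sent.append((user, message))

class OrderService:
    def __init__(self, notifier):
        self.notifier = notifier   # composed, not inherited

    def place_order(self, user, item):
        # ... business logic would go here ...
        self.notifier.send(user, f"order placed: {item}")
        return True

svc = OrderService(NullNotifier())
assert svc.place_order("ada", "book")
```

The point-of-use code sees exactly what runs, and no debugger stepping is needed to find out which override fires.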

hn8726 2 days ago

I agree with the vast majority of the post, and it matches my experience. What I'm not sure I follow is the part about layered architecture and what is offered as an alternative. The author quickly gets to a _conclusion_ that

> So, why pay the price of high cognitive load for such a layered architecture, if it doesn't pay off in the future?

where one of the examples is

> If you think that such layering will allow you to quickly replace a database or other dependencies, you're mistaken. Changing the storage causes lots of problems, and believe us, having some abstractions for the data access layer is the least of your worries.

but in my experience, it's crucial to abstract away — even if the interface is not ideal — external dependencies. The point is not to be able to "replace a database", but to _own_ the interface that is used by the application. Maybe the author only means _unnecessary layering_, but the way the argument is framed seems like using external dependency APIs throughout the entire app is somehow better.
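A small sketch of what "owning the interface" can look like; the `BlobStore` protocol and its methods are invented for illustration, with the vendor SDK confined to one adapter:

```python
from typing import Protocol

class BlobStore(Protocol):
    """Contract owned by the application, not by the vendor SDK."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore:
    """Adapter; a production one would wrap the external SDK here."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: BlobStore, report: bytes) -> None:
    # Application code depends only on the owned interface,
    # never on the vendor's API shapes.
    store.put("reports/latest", report)

store = InMemoryBlobStore()
archive_report(store, b"q3 numbers")
```

Swapping vendors now means writing one new adapter, not touching every call site.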

  • o_nate a day ago

    If you feel like you are missing key parts of the argument, like I was, it might help to switch the view from "short" to "long" - there's a little slider on the right hand side of the screen. This adds in some paragraphs that help the piece flow better. (I'd never seen a blog post with this feature before.)

  • master_crab 2 days ago

    I commented on this in another thread here.

    What I read it as is don’t over-index on creating separate layers/services if they are already highly dependent on each other. It just adds additional complexity tracing dependencies over the networking stack, databases/datastores, etc that the services are now split across.

    In other words: a monolithic design is acceptable if the services are highly intertwined and dependent.

  • nogridbag a day ago

    I agree. I find this part of the article weird as it has contradictory statements:

    > No port/adapter terms to learn

    and

    > we gave it all up in favour of the good old dependency inversion principle

    Both use interfaces, just in different ways. We use hexagonal (ports and adapters) pattern in my project. If you asked anyone on my team to define hexagonal architecture they'll have no idea what you're talking about. They just follow the project coding patterns. There's no additional complexity.

    > If you think that such layering will allow you to quickly replace a database or other dependencies, you're mistaken

    I think most people do not stay at companies long enough to see the price of not abstracting these things away. That's the next developer's problem. The code tends to be tightly coupled to the libraries and frameworks used. Eventually, the project's dependencies have to be upgraded (usually due to security issues), and the migrations are incredibly difficult, expensive, and fragile, because the product's business logic is tightly coupled to the framework and libraries chosen at the time. Even if the company realizes that the framework has no future, they're kind of locked into their initial decision made a decade ago. At least, that's been my experience.

    We have two major products at my company. Both started with the same initial framework, but the project I architected that used hexagonal was migrated to a faster and more modern framework within 4 weeks. The other product had a multi-year migration to a newer version of the same framework (and by the time it was completed is already two major versions outdated). Both products are similar in scale and code size.

  • mrkeen 2 days ago

    I think the 'Layered Architecture' section is all over the place.

    There are a lot of terms thrown around with pretty loose definitions - in this article and others. I had to look up "layered architecture" to see what other people wrote about it, and it looks like an anti-pattern to me:

      In a four-layered architecture, the layers are typically divided into:
        Presentation
        Application
        Domain
        Infrastructure
    
      These layers are arranged in a hierarchical order, where each layer provides services to the layer above it and uses services from the layer below it, and each layer is responsible for handling specific tasks and has limited communication with the other layers. [1]
    
    It looks like an anti-pattern to me because, as described, each layer depends on the one below it. It looks like how you'd define the "dependency non-inversion" principle. Domain depends on Infrastructure? A BankBalance is going to depend on MySQL? Even if you put the DB behind an interface, the direction of dependencies is still wrong: BankBalance->IDatabase.

    Back to TFA:

    > In the end, we gave it all up in favour of the good old dependency inversion principle.

    OK. DIP is terrific.

    > No port/adapter terms to learn

    There is a big overlap between ports/adapters, hexagonal, and DIP:

      Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases. [2]
    
    That is, the Domain ("application") is at the bottom of the dependency graph, so that the Infrastructure {Programs, Tests, Scripts} can depend upon it.

    > If you think that such layering will allow you to quickly replace a database or other dependencies, you're mistaken.

    Layering will not help - it will hinder, as I described above. But you should be able to quickly replace any dependency you like, which is what DIP/PortsAdapters/Hexagonal gives you.

    > Changing the storage causes lots of problems, and believe us, having some abstractions for the data access layer is the least of your worries. At best, abstractions can save somewhat 10% of your migration time (if any)

    I iterate on my application code without spinning up a particular database. Same with my unit tests. Well worth it.

    [1] https://bitloops.com/docs/bitloops-language/learning/softwar... [2] https://alistair.cockburn.us/hexagonal-architecture/

svilen_dobrev 2 days ago

> Involve junior developers in architecture reviews. They will help you to identify the mentally demanding areas.

This.

And also, a mantra of my own: listen carefully to any newcomer in the team/company in their first 1-3 weeks, until they get accustomed (and/or stop paying attention to the somewhat uneasy stuff). They will tell you all the things that are, if not wrong, at least weird.

  • epolanski 2 days ago

    I second this, but one should also be extremely wary of newcomers' feedback and try to understand its nature.

    Some people are extremely resistant to new ideas, some might simply be lazy, some can't be bothered to read documentation, etc.

    Spotting the real person behind the feedback is crucial and often those people need to be fired fast.

    I myself tend to be lazy when it comes to learning new stuff/patterns, especially when I am in the middle of progressing a project, so my own feedback may be more an expression of frustration at my inability to progress, having first to understand a, b, c and d, which may take considerable time and pain for something I can do the old way in a few minutes.

  • tantalor 2 days ago

    Junior developers probably won't say anything, because they are used to not understanding code, and they are not going to second guess the more-experienced author.

    • jstummbillig 2 days ago

      It's definitely on the senior to prompt the junior appropriately. But when you do, they will.

  • ben_w 2 days ago

    Aye, but the joiners may need prompting as well as getting listened to.

    In each place where I've seen something wildly wrong, the problem has been clear in the first few weeks — sometimes even in the first few days* — but I always start with the assumption that if I disagree with someone who has been at it for years, they've probably got good reasons for the stuff that surprises me.

    Unfortunately I'm not very convincing: when I do finally feel confident enough to raise stuff, quite often they do indeed have reasons… bad reasons that ultimately prove to be fatal or near-fatal flaws to their business plans, but the issues only seldom get fixed once I raise them.

    * one case where the problem was visible in the interview, but I was too young and naive so I disregarded what I witnessed, and I regretted it.

wnmurphy a day ago

> Introduce intermediate variables with meaningful names

Abstracting chunks of compound conditionals into easy-to-read variables is one of my favorite techniques. Underrated.

> isValid = val > someConstant

> isAllowed = condition2 || condition3

> isSecure = condition4 && !condition5

> if isValid && isAllowed && isSecure { //...
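The quoted sketch, made runnable with placeholder values standing in for the constants and conditions:

```python
# Placeholder inputs; in real code these would be domain values.
SOME_CONSTANT = 10
val = 42
condition2, condition3 = False, True
condition4, condition5 = True, False

# Each named boolean documents one concern of the compound conditional.
is_valid = val > SOME_CONSTANT
is_allowed = condition2 or condition3
is_secure = condition4 and not condition5

result = "proceed" if is_valid and is_allowed and is_secure else "reject"
print(result)
```

The final `if` now reads as a sentence, and a debugger shows each sub-condition's value directly.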

  • goalieca a day ago

    I treat it a lot like English. Run-on sentences, too much technical jargon, and too many fragmented short sentences all make it harder to read. There are analogies to writing code.

  • RaftPeople a day ago

    > Abstracting chunks of compound conditionals into easy-to-read variables is one of my favorite techniques. Underrated.

    Same. Or alternatively I will just put the English version in comments above each portion of the conditional, makes it easy to read and understand.

  • AtlasBarfed 17 hours ago

    So you like procedural code.

    I mean, at least western people seem to think in recipes, todo lists, or numbered instructions. Which is what procedural code is.

    Dogma will chop up those blocks into sometimes a dozen functions, and that's in stuff like Java, functional is even worse for "function misdirection / short term memory overload".

    I don't really mind the hundred line method if that thing is doing the real meat of the work. I find stepping through code to be helpful, and those types of methods/functions/code are easy to track. Lots of functions? You have to set breakpoints or step into the functions, and who knows if you are stepping into a library function or a code-relevant function.

    Of course a thousand line method may be a bit much too, but the dogma for a long time was "more than ten lines? subdivide into more functions" which was always weird to me.

ScotterC 2 days ago

Looks like a solid post with solid learnings. Apologies for hijacking the thread but I’d really love to have a discussion on how these heuristics of software development change with the likes of Cursor/LLM cyborg coding in the mix.

I’ve done an extensive amount of LLM assisted coding and our heuristics need to change. Synthesis of a design still needs to be low cognitive load - e.g. how data flows between multiple modules - because you need to be able to verify the actual system or that the LLM suggestion matches the intended mental model. However, striving for simplicity inside a method/function matters way less. It’s relatively easy to verify that an LLM generated unit test is working as intended and the complexity of the code within the function doesn’t matter if its scope is sufficiently narrow.

IMO identifying the line between locations where “low cognitive load required” vs “low cognitive load is unnecessary” changes the game of software development and is not often discussed.

  • codespin 2 days ago

    With LLM generated code (and any code really) the interface between components becomes much more important. It needs to be clearly defined so that it can be tested and avoid implicit features that could go away if it were re-generated.

    Only when you know for sure the problem can't be coming through from that component can you stop thinking about it and reduce the cognitive load.

    • ScotterC 2 days ago

      Agreed.

      Regarding some of the ‘layered architecture’ discussion from the OP, I’d argue that having many clearly defined modules is not as large a detriment to cognitive load when an LLM is interpreting them. This depends on two factors: each module being defined clearly enough that you can be confident a problem lies in the interactions between modules/components rather than within them, and sharing sufficient context with the LLM so that it focuses on those interactions rather than force-fitting a solution into one component or missing the problem space entirely.

      The latter is a constant nagging issue, but the former is completely doable (types and unit testing help), though it flies in the face of the mo’ files, mo’ problems issue that creates higher cognitive load for humans.

  • tkgally 2 days ago

    > I’d really love to have a discussion on how these heuristics of software development change with the likes of Cursor/LLM cyborg coding in the mix

    I would also be interested in reading people’s thoughts about how those heuristics might change in the months and years ahead, as reasoning LLMs get more powerful and as context windows continue to increase. Even if it never becomes possible to offload software development completely to AI, it does seem at least possible that human cognitive load will not be an issue in the same way it is now.

ChuckMcM 2 days ago

I agree with the author's point. It was the founding principle of "object oriented" programming in that once the object was a black box, you could just rely on it working and let go of the implementation details. But it is also important in day to day work. I remember when I realized that I could only keep six distinct "disasters" in my head at the same time. Trying to add another one would pop one out of my head. So much of life can be simplified if you systematize things so that the thing to remember is small, like all of the stuff you take on business trips (shave kit, travel charger, noise cancelling headphones, whatever) if you put all of it in a box in your closet labelled "travel kit" then you only have to remember three things, outfits for 'n' days, your laptop, and your travel kit. Got those three and you are good to go.

vonnik 2 days ago

Totally agree with this and would add that cognitive load is not just a matter of the code before you, but a function of your total digital environment:

https://vonnik.substack.com/p/how-to-take-your-brain-back

Interruptions and distractions leave a cognitive residue that drastically reduces working memory through the Zeigarnik effect.

DarkNova6 2 days ago

> Layered architecture: Abstraction is supposed to hide complexity, here it just adds indirection

Agree with everything except this. As someone who deals with workflows and complex business domains, separating your technical concerns from your core domain is not just necessary; it is a key means of survival.

Once you have 3 different input-channels and 5 external systems you need to call, you absolutely need to keep your distance not to pollute your core representation of the actual problem you need to solve.

  • ImPleadThe5th a day ago

    As with all things, it's useful used properly.

    The codebase at my job has far too many abstractions/layers for things that do not provide any benefit for being abstract. It was simply done because it was the "coderly" thing to do.

    I do agree that at the least it makes sense to separate out repository logic.

jeremydeanlakey 2 days ago

The central idea around cognitive load is very good, central to writing good code.

But it's deeply mistaken to oppose smaller (or more correctly: simpler) classes/functions and layered architecture.

Layered architecture and simple (mostly small) classes and methods are critical to light cognitive load.

e.g. You should not be handling database functionality in your service classes, nor should you be doing business logic in your controllers. These different kinds of logic are very different, require different kinds of knowledge. Combining them _increases_ cognitive load, not decreases.

It's not mainly about swapping out dependencies (although this is an important benefit), it's about doing one thing at a time.

  • jillesvangurp 2 days ago

    There are some nice studies on correlations between metrics (cohesiveness, coupling, complexity, levels of indirection, etc.) and maintainability. Basically anything that scores poorly is inherently hard to understand because it induces a high cognitive load.

    The reason what you outline is bad is that each of these practices impacts those metrics. Bypassing layers creates tighter coupling: you are basically putting code in the wrong place. This also makes the code less cohesive. The two go hand in hand. And then you end up doing complex things like reaching deep into layers (which violates the Law of Demeter).

    Anyway, MVC-style architectures have always had the problem that they invite people to mix view and controller code and logic. And once your business logic mixes with your rendering logic, your code is well on its way to becoming yet another UI project that fell into the trap of tight coupling, low cohesiveness, and high complexity.

  • jeremydeanlakey 2 days ago

    To make this more concrete:

    If your service layer method requires data to be saved and the results to be sorted, you want to call a data layer method that saves it and a library method that sorts it. You do not want any of that saving or sorting functionality in your service method.

    Combining different layers and different tasks so that your module is "deep" rather than "shallow" will make your code much higher cognitive load and create a lot of bugs.
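A minimal sketch of that division of labour (names invented): the service method only orchestrates, delegating persistence to a data-layer object and sorting to a library call:

```python
class ScoreRepository:
    """Data layer; an in-memory stand-in for a real database gateway."""
    def __init__(self):
        self.rows = []
    def save_all(self, rows):
        self.rows.extend(rows)

def record_scores(repo, scores):
    """Service layer: one level of meaning, no SQL and no sort algorithm."""
    repo.save_all(scores)        # saving lives in the data layer
    return sorted(scores)        # sorting lives in the library

repo = ScoreRepository()
ranked = record_scores(repo, [30, 10, 20])
print(ranked)
```

Reading `record_scores` requires no knowledge of storage details or sorting internals, which is the cognitive-load win being argued for.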

Freak_NL 2 days ago

That short/long toggle in the top-right seems to expand and collapse the article. It defaults to short. Reading this article in its short form I kept wondering if I was missing something relevant (cognitive load++), but with the long form on I kept wondering if some paragraphs were explicitly intended to be superfluous or tangential (cognitive load++) for the sake of that collapsing trick.

For an article on cognitive load, using a gimmick which increases it seems ironic.

  • dangoodmanUT 2 days ago

    I thought these things were pretty inferable

mrcsd 2 days ago

IMO cognitive load is much easier to manage when required (human) memory use is less of a factor. In practical terms, this means maximising the locality of reasoning, i.e., having everything you need in front of you to make a decision. One of the reasons I favour Rust is precisely that this factor has been a focus of its design.

mark_l_watson a day ago

I have been programming since 1963 and I was a Master Software Engineer at Capital One before I retired. I have my own ways of managing cognitive load. I usually use Lisp languages, Python, and Haskell and I tend to write my own short libraries and then treat these libraries as black boxes unless I want to extend them. I obviously use a zillion libraries that other people write also, but there is a comfort and ease using my own code.

MiscCompFacts 20 hours ago

I really enjoyed this post quite a bit, and it’s a topic that I and many developers have discussed concerning good abstractions. This isn’t directly related to the article itself, but to the website minds.md. I was surprised to see it seems like a blogging platform for various authors, but I didn’t see any information about where to upload or who runs this service. It mentions GitHub but doesn’t seem to link to it on the home page. Just curious to know more about this site and whether it’s for anyone or a specific group of people?

jmyeet 2 days ago

I have an issue with complex conditions with or without local variable labels for readability. You really shouldn't have them at all.

At one time, they used to teach that functions should have one entry point (this is typically a non-issue but can come up with assembly code) and one exit point. Instead of a complex condition, I much prefer early returns, i.e.:

    // what's going on 1
    if (condition1 || condition2) {
      return;
    }

    // what's going on 2
    if (condition3 && condition4) {
      return;
    }

    // what's going on 3
    if (condition5) {
      return;
    }

    // do the thing
    return;
  • Aeolos 2 days ago

    I prefer this style for languages that have either a GC or scoped resource management (eg RAII).

    However, I think the single exit point holds merit for C, where an early return can easily and silently cause a resource leak. (Unless you use compiler-specific extensions, or enforce rust-style resource ownership, which is really hard without compiler support.)

meowface 2 days ago

I've sometimes seen people attack early returns and I've never understood it. To me they make things so much cleaner that it seems like common sense.

xnx 2 days ago

This applies to user experience as well. I've seen designers focus on number of items or number of clicks when mental effort / cognitive load is what matters. Sometimes picking from a list of 50 links is easier. Sometimes answering 7 yes/no questions is easier.

tugu77 2 days ago

> We were told that a really smart developer had contributed to it. Lots of cool architectures, fancy libraries and trendy technologies were used. In other words, the author had created a high cognitive load for us.

Maybe that dev was "really smart" but then not very senior. Eventually the dev will hopefully use their smarts to make things so simple+dumb that the cognitive load when maintaining all that code is minimized.

One of the first things I try to drill into our junior devs. If the code looks smart, it needs to be fixed until it's really simple and straight forward. "That's impossible" some people might say. And that's why only the really smart folks can achieve it.

  • sesm 2 days ago

    > make things so simple+dumb that the cognitive load when maintaining all that code is minimized.

    That's not very smart in BigCo, instead you want the complexity to leak into adjacent systems and make them depend on you.

    • tugu77 2 days ago

      I know your comment is a little tongue-in-cheek, but that kind of thinking is in fact widespread and a reason why many of us are so miserable.

odyssey7 2 days ago

One day, the world will rediscover functional programming. The absence of state mutation is a beautiful thing.

tugu77 2 days ago

Re: complex conditionals

I contributed some code to a FOSS project recently which is written in C. In my 10 lines of contributions, 3 were a complex conditional. I'd have loved to do what the article suggests, with some intermediate booleans. But that C version would have required me to define those variables at the beginning of the function, a hundred lines earlier instead of just right there. No way that's going to fly, so now they will need to live with a complex conditional. It's one of those "modern language" features which C fanatics generally frown upon but which makes code much easier to read.

  • pegas1 a day ago

    Yes, I remember those times, trying to do structured programming in the unstructured environment. So, we both know that even very complex conditionals can be made readable by CRLFs, TABs, and inline comments. It is not about the language, it is about the person.

  • mrkeen 2 days ago

    Can you not move to c99?

kazinator 2 days ago

> Prefer composition over inheritance.

Why? Because if SuperuserController is built on AdminController via composition rather than inheritance, it is magically protected from breakage due to changes in AdminController?

Not buying it.

fferen 2 days ago

Disagree with first example. If that condition is only used once, adding a variable introduces more state to keep track of, that could just be a comment next to the conditional.

halfcat 2 days ago

I suspect part of the challenge is we’re dealing with a graph (of execution paths) but all we have to work with is a tree (file system).

Every person will prefer a different grouping of the execution paths that lowers their cognitive load. But for any way you group execution paths, you exclude a different grouping that would have been beneficial to someone working at a different level of abstraction.

So you like your function that fits in one computer screen, but that increases the cognitive load on someone else who’s working on a different problem that has cross-cutting concerns across many modules. If you have separate frontend/backend teams you’ll like Rails, but a team of full stack people will prefer Django (just because they group the same things differently).

I guess this is just Conway’s law again?

kkarpkkarp 13 hours ago

In 1956, George Miller observed[1] that humans can hold only about 7 chunks of information at a time.

I.e., if I write 1452687 and ask you to read it only once and then close your eyes and repeat it, you will be able to do this.

If I write 12573945 and ask you to do the same, you will most likely not be able to do it.

The same happens with bigger "chunks" of information: you can remember 7 ideas/statements from a textbook you read, so do not trap yourself with "ah, that is easy, I will continue reading without taking notes". After every 7 "things", write them down, note the page number where they appear, and then continue reading. Otherwise it is a waste of time.

[1] https://en.wikipedia.org/wiki/The_Magical_Number_Seven%2C_Pl...

swiftcoder 2 days ago

> By no means I am trying to blame C++. I love the language. It's just that I am tired now.

I feel that in my bones.

Swizec 2 days ago

We can measure and quantify this cognitive load! I’ve been researching this for a book and have found some really cool research from ~10 years ago. It seems people stopped thinking about this around when microservices became popular (but they have the same problems just with http/grpc calls instead).

There are two main ways to measure this:

1. Cyclomatic/mccabe complexity tells you how hard an individual module or execution flow is to understand. The more decision points and branches, the harder. Eventually you get to “virtually undebuggable”

2. Architectural complexity measures how visible different modules are to each other. The more dependencies, the worse things get to work with. We can empirically measure that codebases with unclear dependency structures lead to bugs and lower productivity.
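As a rough illustration of the first measure, a toy McCabe counter for Python functions (real tools such as `radon` are more careful; this just counts decision points and adds one):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Count branch points in the parsed source and add one."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler, ast.Assert)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):  # `a and b and c` = 2 decisions
            decisions += len(node.values) - 1
    return decisions + 1

branchy = """
def classify(x):
    if x < 0 and x != -1:
        return "neg"
    for i in range(3):
        if i == x:
            return "small"
    return "other"
"""
print(cyclomatic_complexity(branchy))
```

Each extra branch adds one more path a reader must hold in working memory, which is why the score tracks "virtually undebuggable" so well.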

I wrote more here: https://swizec.com/blog/why-taming-architectural-complexity-...

The answer seems to be vertical domain oriented modules with clear interfaces and braindead simple code. No hammer factory factories.

PS: the big ball of mud is the world’s most popular architecture because it works. Working software first, then you can figure out the right structure.

  • energy123 2 days ago

    This is true and valuable, but it's worth mentioning that some aspect of cognitive load is subjective. The code I write is always lower cognitive load to me than anyone else's code, even if my code has more cyclomatic complexity and code smells, because I've built up years of neural representations dedicated to understanding my favored way of doing things via practice and repetition. And I lack the neural representations needed to quickly understand other people's code if they approach things differently, even if their approach is just better.

    This is not to say we should keep practicing our bad habits, but that we should practice good habits (e.g. composition over inheritance) as quickly as possible so the bad habits don't become ingrained in how we mentally process code.

  • rTX5CMRXIfFG 2 days ago

    I’ve been interested in the same topic for a while now and the most difficult part, when explaining the concept to other programmers and defending against it in coding standards/reviews, is how to prove that cognitive load exists.

    Cyclomatic complexity seems one indicator, but architectural complexity needs to be clarified. I agree that how much modules expose to each other is one trait, but again, needs clarification. How do you intend to go about this?

    Been thinking about custom abstractions (ie those that you build yourself and which do not come from the standard libraries/frameworks) needed to understand code and simply counting them; the higher the number, the worse. But it seems that one needs to find something in cognitive psychology to back up the claim.

    • Swizec 2 days ago

      > Cyclomatic complexity seems one indicator, but architectural complexity needs to be clarified. I agree that how much modules expose to each other is one trait, but again, needs clarification. How do you intend to go about this?

      Too much to summarize in a comment, I recommend reading the 3-blog series linked above. Architectural complexity is pretty well defined and we have an exact way to measure it.

      Unfortunately there’s little industry tooling I’ve found to expose this number on the day-to-day. There’s 1 unpopular paid app with an awful business model – I couldn’t even figure out how to try it because they want you to talk to sales first /eyeroll.

      I have some prototype ideas rolling around my brain but been focusing on writing the book first. Early experiments look promising.

      There IS backing from cognitive research too – working memory. We struggle to keep track of more than ~7 independent items when working. The goal of abstraction (and this essay’s cognitive load idea) is to keep the number of independently moving or impacted pieces under 7 while working. As soon as your changes could touch more stuff than fits in your brain, it becomes extremely challenging to work with and you get those whack-a-mole situations where every bug you fix causes 2 new bugs.

emptiestplace 2 days ago

The "too smart developers" narrative is pandering - poor design stems from inexperience and its accompanying insecurity, not intelligence. Skilled developers intuitively grasp the value of simplicity.

  • nostradumbasp 2 days ago

    I love what you're saying. But, I've met a lot of people who have say 10-20 years experience designing applications with unnecessary and sometimes incredible cognitive load. There are serious incentives to NOT write "simple" code, let me share a few of them.

    Root causes from my perspective look like:

    1. Job security type development. Fearful/insecure developers make serious puzzle boxes. "Oh yea wait until they fire me and see how much they need me, I'm the only one who can do this."

    2. Working in a vacuum/black hole developers. Red flags are phrases like "snark I could have done this" when working together on a feature with them. Yes, that is exactly the point, and I even hope the junior comes in after and can build off of it too.

    3. Mixing work with play: "I read this blog post about category theory and found this great way to conceptualize my code through various abstractions that actually detract from runtime performance but sound really cool when we talk about it at lunch."

    4. Clout/resume/ego chasing "I want to say something smart at stand up, a conference, or at a future job, so other people know they are not on my level and cannot touch my code or achieve my quality."

    Some other red flags: they alone maintain their "pet" projects for everything serious, until they can't. When minor problems/changes come up, someone else goes in and fixes them. When something serious happens, it's a stop-the-world garbage collection for that developer, and they are the only one who can fix it, disrupting any other operations they were part of.

    • emptiestplace 16 hours ago

      I agree with everything you're saying, but these are different issues - not symptoms of excess intellect. Just post-hoc rationalization of poor choices.

      • nostradumbasp 9 hours ago

        Valid point I kind of missed the thesis a bit. My bad.

  • fallingknife 2 days ago

    I find that it comes mostly from intelligence. I see plenty of super experienced but not very smart engineers design terrible over-engineered systems. On the other hand, juniors err in the opposite direction, with long functions, deeply nested branching, and repetition. And the latter is better: it's easier to refactor up in abstraction level than down.

    • aulin a day ago

      the latter is not better though, it's terrible actually, deep nesting is the essence of cognitive load

    • emptiestplace 2 days ago

      Perhaps I'm confused, but it seems to me that your examples actually support my point. You're describing experience-based patterns - seniors over-abstracting vs juniors writing tangled code. Neither case is about intelligence; they're about different types of inexperience leading to different design mistakes.

  • marginalia_nu 2 days ago

    Yeah, it's kind of a weird narrative. Writing complex code is leaps and bounds easier than writing simple code. It often takes both experience and intelligence to see the correct way.

nogridbag a day ago

I'm probably late to the party, but I was hoping to see a larger discussion on "Too many small methods, classes or modules", aka Deep Modules vs Shallow Modules. I think we can probably debate just that last sentence alone for a while. I would imagine "module" here simply means a group of classes that are collectively related. Having "too many modules" is likely just the inherent complexity of the problem you're trying to solve. You either have all those modules or you have no product. If all products were trivial to code, no one would be employing you to create the product.

So the main thing we typically come across is "too many classes" and that's the example the author gave in the article. Nearly all the developers at my company go with the "throw everything into one class" approach regardless whether they're junior, senior, etc. I tend to go the extreme opposite route and usually create small classes for things that seem critical to test in isolation. While I'm coding that small class and writing unit tests, it feels perfect. When I revisit the code, it is a bit overwhelming to understand the relationships between the classes. So I understand the article's criticisms of too many small classes.

However, I have my doubts that moving this class into the main class as a simple method would reduce cognitive load. For one, due to the nature of our tools, e.g. JUnit, I would be forced to make that method public in order to test it (and now it's part of the large class's API contract). So I can either make it public, remove my unit tests, or attempt to fold my unit tests into the greater component's tests, which really overcomplicates the parent's unit tests.

Ignoring the testing issue, is the cognitive load complexity just a file structure issue? Instead of turning those small classes into methods in a large class, I could just use nested classes in a single file. Or there could be some package naming convention so readers know "these helper classes are only used by this parent class". I would be interested to hear others' thoughts, but due to the nature of HN's algorithm, it's likely too late to see many replies :)
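For what it's worth, the trade-off is easier to see in code. A minimal sketch (in Python rather than Java, and with invented names) of the "small class composed into a parent" shape being discussed:

```python
# Hypothetical example: a small, separately testable helper composed
# into a larger class, rather than inlined as a private method.

class DiscountCalculator:
    """Small helper whose logic is worth testing in isolation."""

    def apply(self, price, loyalty_years):
        # 1% off per loyalty year, capped at 20%
        rate = min(loyalty_years * 0.01, 0.20)
        return round(price * (1 - rate), 2)


class OrderService:
    """The 'parent' class only needs to know the helper's contract."""

    def __init__(self, calculator=None):
        self.calculator = calculator or DiscountCalculator()

    def total(self, price, loyalty_years):
        return self.calculator.apply(price, loyalty_years)


# The helper's edge cases are tested directly, without going
# through OrderService's public API.
assert DiscountCalculator().apply(100.0, 5) == 95.0
assert DiscountCalculator().apply(100.0, 50) == 80.0  # cap kicks in
# The parent's test only checks the wiring.
assert OrderService().total(100.0, 5) == 95.0
```

The helper's edge cases get their own direct tests, while the parent's tests only need to cover the wiring, which is roughly the split the comment describes.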

  • fmbb a day ago

    > Ignoring the testing issue

    I mean it sounds to me like testing is your only problem so it’s kind of hard to ignore.

    Personally I prefer fewer larger modules over more smaller modules. Deeply nested code is harder for me to reason about, I need to push stuff on my stack more often as I go down a debugging rabbit hole.

    You can never reason locally about a bug. All code looks good locally. Bugs always manifest in the totality of your code, and that is where you have to look for them. The unit tests for the release running in production are all green, and someone reviewed all the lines in there. Locally it all made sense; local reasoning is how you ended up with that bug in production.

    If you write a small little class with one method to help implement some bigger interaction, if your change does not touch the unit tests of the ”wrapper” class actually implementing the bigger interaction, what the users use, you are likely writing meaningless code or your test coverage for the actual use cases is not good enough.

    • nogridbag a day ago

      > if your change does not touch the unit tests of the ”wrapper” class

      I don't think this is a good example. In fact, this is more supportive of having the small little class. The "wrapper" class only needs to have 1-2 unit tests to test the scenarios in which the small little class is invoked without needing to be concerned with the complexity of everything that small little class is actually doing. I've never written a small helper class without writing corresponding tests in the wrapper so I've never had that problem - it's usually the first thing I write after adding the helper class. For that specific problem there's automated and manual processes anyway. Code coverage tools can easily tell you if you missed writing tests for the invocation and reviewers should spot that in PR reviews.

galaxyLogic 2 days ago

I think it is somewhat misguided notion that code could have a property whose value is its "cognitive load". Instead we should start from the premise that:

1. Somebody writes a program

2. Somebody else (perhaps the same programmer a few years later) tries to read and understand that program

So the "cognitive load" is a property of the communication between the person who writes the program and others who read and (try to) understand the program.

Cognitive Load is an attribute of communication, not of the artifact that is the medium of the communication (i.e. the code written).

Are we writing in a "language" that readers will understand? And are we reading in a way that our assumptions about what the writer is trying to communicate to us are correct?

A program is instructions to the computer. But when written in a high-level language, it is also meant to be read by other programmers, not just by the CPU. It is human-to-human communication. How easy is it for somebody to understand what somebody else is trying to tell the computer to do? That's the "Cognitive Load".

  • ajuc 2 days ago

    Code is 95% of the communication.

    And how overcomplicated the code is has a huge (overwhelming everything else) influence on how hard it is to communicate what it does.

    So in practice it's fair to call it the property of the code, even if bad documentation or mentoring can make things unnecessarily worse.

    The solar system can be modeled as geocentric with enough epicycles, or as heliocentric with elliptical orbits.

    One of these is inherently easier to communicate.

    • galaxyLogic a day ago

      Right, I was just trying to bring up the point that however the code is written, what really matters is, how easily humans can understand it.

      Now maybe there is, or can be an algorithm that takes a piece of code and spits out a number saying how easy it is for a typical programmer to understand it correctly. That would be the measure of "cognitive load".

      If we just speak of cognitive load without really specifying how to measure it, we are not where we would like to be.

      How do we define, and measure "cognitive load"? It is an easy word to use, but how to measure it?

fmxsh 2 days ago

> "Having too many shallow modules can make it difficult to understand the project. Not only do we have to keep in mind each module responsibilities, but also all their interactions."

Not only does it externalize internal complexity, but it creates emergent complexity beyond what would arise between and within deeper modules.

In a sense, shallow modules seem to be like spreading out the functions outside the conceptual class while thinking the syntactical encapsulation itself, rather than the content of the encapsulation, is the crucial factor.

  • euph0ria a day ago

    Had to re-read this several times to grasp the meaning..

    Curious, what is your background and day to day work to be able to express your thinking in these terms?

    • fmxsh 4 hours ago

      Thank you for asking! My phrasing may not have been the most clear.

      My background is in humanities and health care. I currently work in eldercare. Computers have always been a major interest of mine since an early age, but professionally I have taken another route. I have a general curiosity about systems and theories of different kinds. I do have an education in the basic scientific approach, something I acquired while studying to become a teacher. Soft sciences have often come more easily to me. I lack the benefits of a disciplined study of computer science, something I am sure affects my approach.

      My use of the word 'emergent' was inspired by systems theory [0], where the whole is more than the sum of its parts. The parts of a system create emergent behaviors or phenomena. For example: A football team consists of several players, and the strategy of the teamwork is an emergent phenomenon. Similarly, my thought was that when functions are spread out into several modules, they may create unexpected emergent complexity. Without being able to give a concrete example off the top of my head, I think I have struggled with bugs born out of such complexity.

      My thought was that the "syntactical encapsulation" (the actual code we write) may not serve the "conceptual class" (the idea we have). We may have a good concept, but we distribute its functionality among too many classes.

      [0] https://en.wikipedia.org/wiki/Systems_theory

phreack a day ago

Another prong in the cognitive load fork that's trying to stab you when working, is the UX of your tools.

That is absolutely personal, but it pays off massively in the long term to build out a development environment that you're comfortable with. If it's an IDE, you can try to recognize pain points like nested tools you use often and bring them to the front via new buttons or learning hotkeys.

UI is also important: in "modern" IDEs there's an awful trend of not labeling icons, so I strictly use one that does; that way I don't have to juggle "which doodle is the Git button again" on top of my actual work cognitive load.

dherikb a day ago

> Mantras like "methods should be shorter than 15 lines of code" or "classes should be small" turned out to be somewhat wrong.

I really have some concerns about this kind of opinion.

I know that we can't follow this rule (or smell) every time, but I have already seen this claim being used by very poor or inexperienced programmers to justify gigantic, hard-to-understand, hard-to-test pieces of code.

This is the type of advice that only experienced programmers can really understand and know when to apply.

Most part of the time, it's easier to fix a cod

wcrichton 2 days ago

I did my PhD at Stanford about the cognitive aspects of programming, including studies of cognitive load. This article uses pseudoscience to justify folk theories about programming. I would encourage readers to take everything with a grain of salt, and do not wave this article around as a "scientific" justification for anything.

I laid out my objections to the article last year when it first circulated: https://github.com/zakirullin/cognitive-load/issues/22

  • fforflo 2 days ago

    Congrats on your PhD at Stanford, but some humility has to be part of the scientific process for sure. It looks like a lot of folks called "programmers" agree with the points in the post. If it's such a common experience, that should tell you something about the state of affairs.

    It's a blog post on the internet. Of course one should take it with a grain of salt. The same applies to any peer-reviewed article on software engineering for example.

    Just yesterday, I was watching this interview with Adam Frank [0]; one of the parts that stood out was him explaining "Why Science Cannot Ignore Human Experience" (I can't find the exact snippet, but apparently he has a book with the same title).

    [0] https://www.youtube.com/watch?v=yhZAXXI83-4

    • wcrichton 2 days ago

      I'm not saying that the conclusions in the article are false. As a programmer, I prefer composition to inheritance, too. I'm saying that the justifications are presented using a scientific term of art (cognitive load), but the scientific evidence regarding cognitive load isn't sufficient to justify these claims.

      • beacon294 2 days ago

        I don't think the readers really care about the scientific term in this context. It's a shared experience that we care about and implicitly understand. It's probably worth "researching" (in the scientific sense).

  • Tainnor 2 days ago

    Thanks for sharing your knowledge and please ignore some of the other comments. HN is a place for "intellectual curiosity", but for some reason it always attracts its fair share of anti-intellectualism. Debating what cognitive load actually is is very relevant to the topic at hand.

    FWIW, I can't speak to the science of it but as a programmer I even disagree with many of the conclusions in the article, such as that advanced language features are bad because they increase mental load.

  • LAC-Tech 2 days ago

    Not knowledgeable enough to weigh in here, I just think it's very cool that 1) the authors blog was in public source control and 2) you made a polite github issue with your criticisms and 3) it wasn't deleted.

  • GiorgioG 2 days ago

    Where was the word “scientific” mentioned in the article? I don’t think you were the target reader they had in mind when the author wrote this. I’ve been programming since 1986, and this article resonates with my experience. More abstractions, layers, and so on requires my brain to have to keep track of more shit which takes away from doing the work that brought me to work on that bit of code (bug fix, feature work, debugging, etc.)

    We’re very proud of you and the hard work you did to earn your PhD, now please stop trotting it out.

    • wcrichton 2 days ago

      The article is attempting to use a scientific term of art, "cognitive load", to justify claims about programming. Those claims cannot be justified given the existing evidence about cognitive load. As I explain in my linked response, I nonetheless agree with many of the claims, but they're better understood as folk theories than scientific theories.

      And I don't think condescension will make this a productive discussion!

      • GiorgioG 2 days ago

        Maybe in your field the definition of cognitive load has a very specific, academic meaning. This article wasn’t meant for you.

        • wcrichton 2 days ago

          My issue is that this article is trying to use cognitive load in its specific, academic meaning. It says:

          > The average person can hold roughly four such chunks in working memory. Once the cognitive load reaches this threshold, it becomes much harder to understand things.

          This is a paraphrase of the scientific meaning. "Intrinsic" and "extrinsic" cognitive load are also terms of art coined by John Sweller in his studies of working memory in education.

          I agree the article isn't designed to be peer-reviewed science. And I agree the article has real insights that resonate with working developers. But I'm also a fan of honesty in scientific communication. When we say "vaccines prevent disease", that's based on both an enormous amount of data as well as a relatively precise theory of how vaccines work biologically. But if we say "composition reduces cognitive load", that's just based on personal experience. I think it's valuable to separate out the strength of the evidence for these claims.

          • GiorgioG 2 days ago

            You’re exhausting.

            • Tainnor 2 days ago

              Please don't do this on HN.

            • cwbriscoe 2 days ago

              But, but he wrote a paper. He must be smart...

dvt 2 days ago

> The companies where we were like ”woah, these folks are smart as hell” for the most part failed

Being clever, for the most part, almost never buys you anything. Building a cool product has nothing to do with being particularly smart, and scaling said product also rarely has much to do with being some kind of genius.

There's this pervasive Silicon Valley throughline of the mythical "10x engineer," mostly repeated by incompetent CEO/PM-types who haven't written a line of code in their lives. In reality, having a solid mission, knowing who your customer is, finding that perfect product-market fit, and building something people love is really what building stuff is all about.

At the end of the day, all the bit-wrangling in the world is in service of that goal.

  • Scene_Cast2 2 days ago

    Depends on how you define smart. I worked at a place where income was directly tied to the quality of the ML models. Building what people love wouldn't have been the best strategy there.

  • jsd1982 2 days ago

    Only if your goal is to be an entrepreneur. Not everyone chases that goal nor considers success in that fashion.

wruza 2 days ago

That’s why I strangely love some closed enterprise solutions. They are anti-fun, non-extensible behemoths that you can learn in around five years, and then it’s basically all free. If you can stand everything else, ofc.

The open source world could learn from that, by holding up on spiral rotation of ideas (easily observable to turn 360 in under a decade) and not promoting techniques that are not fundamental to a specific development environment. E.g. functional or macro/dsl ideas in a language that is not naturally functional or macro/that-dsl by stdlib or other coding standards. Or complex paradigms instead of few pages of clear code.

Most of it comes from the ability to change things and create idioms, but this ability alone doesn’t make one a good change/idiom designer. As a result, changes are chaotic and driven by impression rather than actual usefulness (clearly indicated by spiraling). Since globally there’s no change prohibition, but “mainstream” still is a phenomenon, the possibility of temperate design is greatly depressed.

singingfish 2 days ago

Aligning the computer's and the humans' thinking processes. Cognitive load is exceptionally important - one of the few incontrovertible facts in human psychology is that healthy human short-term memory has a capacity of 5 items plus or minus 2. So reliably 5. And thus the maximum number of thinking balls you should be juggling at one time.

Which then leads to thinking about designs that lead to the management of cognitive load - thus the nature of the balls changes due to things like chunking and context. Which are theoretical constructs that came out of that memory research.

So yes, this is pretty much principal zero - cognitive load and understanding the theory underneath it are the most important thing - and are closely related to the two hard problems in computer science (cache invalidation, naming things and off by one errors).

Thank you for attending my TED talk

master_crab 2 days ago

On the layered architecture section:

I have seen too many architectures where an engineer took “microservices” too far and broke apart services that almost always rely on each other into separate containers/VMs/serverless functions.

I’m not suggesting people build monolithic applications, but it’s not necessarily a good idea to break every service into its own distinct stack.

jrs235 2 days ago

I think Cognitive Load on a developer includes distractions/interruptions. Constant Slack notifications, taps on the shoulder, meetings, etc. increase cognitive load. It's context switching: one only has so much memory and focus, and switching tasks incurs additional overhead in thinking and memory/storage demands.

userbinator 2 days ago

When one of these articles come up, I always wonder if the authors have ever looked at APL-family languages and the people who use them, or those who have a similar ultra-compact style even with more mainstream languages; here are the most memorable examples that come to mind:

https://news.ycombinator.com/item?id=8558822

https://news.ycombinator.com/item?id=28491562

What's the "cognitive load" of these? Would you rather stare at a few lines of code for an hour and be enlightened, or spend the same amount of time wading through a dozen or more files and still struggle to understand how the whole thing works?

BiteCode_dev 2 days ago

Something I noticed is that some vim / keyboard-only envs are paying a huge cognitive load price, holding various states in their mind and having to expend effort every time they switch context.

Sometimes there is the added burden of an exotic linux distro or a dvorak layout on a specially shaped keyboard.

Now, some devs are capable of handling this. But not all are; I've seen many claim they are more productive with it, but when compared to others, they were less productive.

They were slow and tired easily. They had a higher burnout rate. They had too much to pay upfront for their day-to-day coding tasks but couldn't see that their idealization of their situation did not match reality.

My message here is: if you are in such env be very honest with yourself. Are you good enough that you are among the few that actually benefit from it?

  • LAC-Tech 2 days ago

    hi, tiling window manager and neovim user here.

    I don't think about states much, it's all just muscle memory. Like doing a hadouken in street fighter.

    • BiteCode_dev a day ago

      They all say that, but it's true for only 20% of my encounters when you actually watch them code next to others.

      • LAC-Tech a day ago

        could it be we just tend to be slower devs in general?

        like I doubt I am blazing on windows 11 and VScode either.

        • BiteCode_dev 6 hours ago

          It could be many things; this is anecdotal experience. I can't run a study for this at my level.

jbs789 a day ago

There was an article recently about the importance of designing the function signature, and I connect that to creating the appropriate level of abstraction, which also connects to this. Guess it’s all about creating the right interfaces, but I’m not really a programmer…

majgr 2 days ago

My whole experience of working 18 years as a software developer can be summed up with two words: "it depends". Every nice architecture set up front breaks at some point. There is no silver bullet.

sharkjacobs 2 days ago

> If you keep the cognitive load low, people can contribute to your codebase within the first few hours of joining your company.

I guess there’s a lot of wiggle room for what is really being asserted here, but this seems like an absurd impossible claim.

dominicrose 2 days ago

Great article. Even smart people can get tired at some point. In fact reducing cognitive load IS the smart thing to do. It's good to bring awareness to this. I like simple rules that help reduce cognitive load, and can be shared in pull requests. Simple things like early returns.

Rules like "limit the number of lines to 300" are meant to be broken. Some 2k-line code files are easy to understand, some are not. It depends. It always depends.
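The "early returns" point can be shown in a few lines; this is a generic sketch (Python, invented example), not anything from a real codebase. Guard clauses discharge each precondition immediately, so the reader carries nothing forward:

```python
# Nested version: the reader must hold every open condition in mind
# until the matching else far below.
def ship_order_nested(order):
    if order is not None:
        if order["paid"]:
            if order["items"]:
                return "shipped"
            else:
                return "empty order"
        else:
            return "awaiting payment"
    else:
        return "no order"

# Guard-clause version: each early return retires one concern,
# and the happy path reads straight down.
def ship_order_flat(order):
    if order is None:
        return "no order"
    if not order["paid"]:
        return "awaiting payment"
    if not order["items"]:
        return "empty order"
    return "shipped"

order = {"paid": True, "items": ["book"]}
assert ship_order_nested(order) == ship_order_flat(order) == "shipped"
assert ship_order_flat(None) == "no order"
```

Both functions behave identically; only the shape of the control flow, and therefore the load on the reader, differs.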

AIorNot 2 days ago

The FUNDAMENTAL aspect and nature of ALL Software Engineering is "Controlling Complexity". I dare say it's the most important part of the "Engineering" in software engineering. 'Cognitive Load' is another way of saying this.

1. programming => wiring shit together and something happens

2. software engineering => using experience, skills and user and fellow engineer empathy to iterate toward a sustainable mass of code that makes something happen in a predictable way

It is exceedingly difficult to do this right and even when you do it right, what was right at one time becomes tomorrow's 'Chesterton's Fence' (https://thoughtbot.com/blog/chestertons-fence) , but I have worked on projects and code where this was achieved at least somewhat sustainably (usually under the vision of a single, great developer). Unfortunately the economics of modern development means we let the tools and environments handle our complexity and just scrap and rewrite code to meet deadlines..

I mean look at the state of web development these days https://www.youtube.com/watch?v=aWfYxg-Ypm4

riedel 2 days ago

Just a quick note on cognitive load theory: what makes cognitive load tricky is 'germane cognitive load', the load that comes from learning itself, which is neither intrinsic nor extraneous. Getting it right is difficult: interfaces that are too light can also be too light on actual understanding. What makes it worse is that it is nearly impossible to measure how much load is extraneous and how much is germane.

RandomWorker 2 days ago

This is the best article I've read on programming in a while. When I code, all I do is work on one object or one function at a time, usually working toward the end result. The key here is to develop a quick and easy solution without too many issues. This is also the difference between new and experienced developers: experienced developers have less fluid intelligence, so they build reliable, simple programs.

iandanforth 2 days ago

I was nodding along happily until I watched the composition is better than inheritance linked video and it suggested the abomination of passing a "base" class instance to the save method of a more specific class instance to give the specific class access to functionality on the "base" class. There may be a solid argument for composition over inheritance but this bastardization of functional and OO programming ain't it.
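For contrast, here is a minimal sketch of composition done plainly, with the shared behavior owned as a field rather than inherited or handed back in (Python, hypothetical names):

```python
# Hypothetical sketch: instead of SalesReport extending a ReportBase
# to get save(), or being passed a base instance, the persistence
# behavior lives in its own object that SalesReport owns.

class JsonStore:
    """Owns persistence; knows nothing about reports."""

    def __init__(self):
        self.saved = []

    def save(self, payload):
        self.saved.append(payload)


class SalesReport:
    """Composes the store instead of inheriting from it."""

    def __init__(self, store):
        self.store = store

    def save(self):
        # Delegation: the report decides *what* to persist,
        # the store decides *how*.
        self.store.save({"type": "sales", "rows": 42})


store = JsonStore()
SalesReport(store).save()
assert store.saved == [{"type": "sales", "rows": 42}]
```

Here the dependency points one way only: the specific class holds the shared behavior, rather than the shared behavior needing to know about, or be threaded through, the specific class.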

James_K 2 days ago

I hate these silly little code examples where they show some if condition being changed to another format and act as if that's real advice that helps you program.

awinter-py 2 days ago

> AdminController extends UserController extends GuestController extends BaseController

> Cognitive load in familiar projects -- If you've internalized the mental models of the project into your long-term memory, you won't experience a high cognitive load.

^ imo using third-party libraries checks both of these boxes because 1) a fresh-to-project developer with general experience may already know the 3rd party lib, and 2) third party libraries compete in the ecosystem and the easiest ones win

jschrf 2 days ago

Very cool article. Semantics are important and key. Article nails it.

Reduce, reduce, and then reduce farther.

I'll add my 2c: Ditch the cargo cults. Biggest problem in software dev. I say this 30 years in.

Hard lesson for any young engineers: you don't need X, Y, or Z. It's already been invented. Stop re-inventing. Do so obsessively. You don't need GraphQL. You don't need NoSQL. Downvote away.

Pick a language, pick a database, get sleep.

Never trust the mass thought. Cargo culting is world destroying.

  • mrkeen 14 hours ago

    > Stop re-inventing. You don't need NoSQL.

    I need somewhere for my objects to live. Objects are not relations.

    Scattering their data across five tables, to fit the relational model, so I can later JOIN them together again, has to be the biggest example of cargo culting in the whole field.

  • csomar 2 days ago

    > Stop re-inventing.

    > You don't GraphQL.

    Isn't that... ironic? You can Postgres -> GraphQL pretty much without code (pg_graphql). And then link your interfaces with the GraphQL endpoints with little code. Why re-invent with REST APIs?

neillyons 2 days ago

Cognitive load example in Go. It is common to give variables single letter names. Now you have to build up a mapping in your head that for example `a` means `Auction`. You could skip this mapping if you just named the variable `auction`.
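The point is language-agnostic; a tiny sketch (Python here, invented names) of the mapping table the reader is forced to build:

```python
# Single-letter names: the reader must remember that `a` is the
# user list and `b` is the banned set for the whole function body.
def f(a, b):
    return [u for u in a if u not in b]

# Descriptive names: no mental mapping needed.
def active_users(users, banned):
    return [user for user in users if user not in banned]

assert f([1, 2, 3], {2}) == active_users([1, 2, 3], {2}) == [1, 3]
```

Both functions are identical to the machine; only the reader pays for the difference.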

worik 2 days ago

Interesting. I am in agreement

But not one word about comments, and only one about naming.

Useful comments go a long way to lessening cognitive load

Good names are mnemonics, not documentation

I have worked on code bases with zero comments on the purposes of functions, and names like "next()"

And I've worked with programmers who name things like "next_stage_after_numeric_input"

cainxinth 2 days ago

I think it's all about the framework, the memory palace you build to keep things organized. A secondary factor is the freedom and solitude to prevent extraneous concerns from interrupting you. The brain is not great at true multitasking (doing two or more things at the same time), but it can juggle.

rowanG077 2 days ago

Cognitive load is precisely why I love feature rich languages. Once you have internalized a language the features it has fall away in terms of cognitive load for me. In the same way I don't think about how to ride a bike while I'm riding a bike.

In most cases, having a simpler language forces additional complexity into the program, which does noticeably add to cognitive load.

  • Etheryte 2 days ago

    I think this works only up to the point where the language gets too large and starts creating extra cognitive load all by itself. For me, C++ is a good example of a language that has too many bells and whistles, if I have to stop what I'm doing to look up some weird syntax construct, then having all those extra features stops being useful.

    • rowanG077 2 days ago

      I don't think largeness is the problem. It's language design. C++ is just really badly designed. I'd be very happy with a very large language that takes a long time to get familiar with, if all the features in the language are well designed. IMO the current developer landscape is all about "fast onboarding", but that is totally the wrong metric to optimize for. To me it's the difference between walking and an airplane. Sure, it's very easy to just start walking, but you ain't going to go anywhere fast. On the other hand, an airplane takes orders of magnitude longer to get going, but once it does you won't ever catch up to it by walking.

      • Etheryte 2 days ago

        I think this is a good point. If you learn a language and it's useful, you usually use it for many, many years. So long as the daily driving experience is great, onboarding doesn't have to be that important of a metric.

DrNosferatu a day ago

Cognitive load allows gatekeeper devs to keep their jobs. Very difficult to ever see this change.

markus_zhang 2 days ago

I don't program professionally (I work as a DE but I don't consider it a serious programming venue), so the No. 1 issue while reading medium-to-large source code is abstraction -- programming patterns.

I hope it improves once I write an implementation myself.

jonplackett 2 days ago

Lots of board games exploit this - eg settlers of catan has 5 resources and it’s really hard to think about all 5 at once. You always forget one! (Well I do anyway)

leke a day ago

Yep, someone read John K. Ousterhout's book.

sourcecodeplz a day ago

Wow what a great read, thank you so much for this! I think every dev should read this at least once

papaver 2 days ago

a lot of good points but i feel like one of the biggest i've learned is missing...

leaning toward functional techniques has probably had the biggest impact on my productivity in the last 10 years. some of the highest cognitive load in code comes from storing the current state of objects in one's limited memory. removing state and working with referentially transparent functions completely changes the game. once i write a function that i trust does its job i can replace its implementation with its name in my memory and move on to the next one.
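
a rough python sketch of that contrast (names hypothetical): the stateful version forces the reader to track fields across calls, while the pure version can be trusted from its signature alone.

```python
# stateful version: correctness depends on call order and hidden state
class Basket:
    def __init__(self):
        self.items = []
        self.total = 0

    def add(self, price):
        self.items.append(price)
        self.total += price  # the items/total invariant lives in the reader's head

# pure version: everything needed to reason about it is in the signature
def order_total(prices):
    """sum of item prices; no hidden state, safe to call from anywhere."""
    return sum(prices)

assert order_total([3, 4, 5]) == 12
```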

  • menotyou 2 days ago

    Before OOP became popular, the use of global variables in procedural languages was discouraged because it was the cause of many bugs and errors.

    In OOP, global state variables were renamed to instance variables and are now widely used. The problem that got them discouraged did not go away with the renaming; it is now spread all over the place.

Spiwux 2 days ago

I don't understand the architecture section. The title is "layered architecture," but then it talks about Ports/Adapters, which would be hexagonal architecture?

blahbleuuuudsu 19 hours ago

If a stack trace has too many levels, that is enough proof for me that the people building this don't know what they are doing.

johnklos 2 days ago

One way that I explain cognitive load to people unfamiliar with the term is to imagine crossing a lawn that has both autumn leaves and dog poop, and picture how much more mental energy one expends when trying to not step on dog poop.

nox101 a day ago

This reminds me of Joel Spolsky's "Making Wrong Code Look Wrong"

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

By writing code in a certain way, you could tell at a glance whether it's correct or wrong. Most people suggest making types handle this, but that's a level of abstraction. If I have

    v = a + b

then if I don't know what a and b are but have to go check their types, I don't know if that code is correct. You could argue that with good types it's of course correct, but that still misses the point that I don't know what a and b are.

Names help

   html = fieldName + fieldValue
But that's not enough. Are fieldName and fieldValue safe? No way to tell here. You could make a SafeString class, and that would be good, but you're still adding the load of having to look up the definitions of these variables to know what's going on. The info you need is not here on this line.
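
A minimal sketch of the SafeString idea (the SafeHtml class and helper names are hypothetical): escaping happens once at the boundary, and the type records that it happened, so passing a raw string where SafeHtml is expected looks wrong.

```python
# sketch of a SafeString-style wrapper (hypothetical names): the type
# records that escaping already happened at the boundary
import html

class SafeHtml:
    def __init__(self, text: str):
        self.text = text  # assumed already escaped

def escape(raw: str) -> SafeHtml:
    return SafeHtml(html.escape(raw))

def render(field_name: SafeHtml, field_value: SafeHtml) -> str:
    # accepting only SafeHtml makes render(raw_user_input, ...) look wrong
    return f"<b>{field_name.text}</b>: {field_value.text}"

out = render(escape("user"), escape("<script>alert(1)</script>"))
assert "<script>" not in out  # the payload arrives escaped
```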

Then again, I've never been able to get myself to follow the advice of that article.

z3t4 a day ago

Coding is very personal and fashion-driven. Everyone's brain works differently; it's all random events that form our thinking. A software project is too big when you can no longer see the full picture - then you should break it down into independent parts. State is your enemy.

  • ibash a day ago

    State is not the enemy.

    Encapsulated and managed state is just fine.

    • z3t4 7 hours ago

      It's not that easy. Say, for example, an overexcited user clicks the purchase button 20 times, ending up in 20 simultaneous orders. What is the best way to handle it? Should the user get 20 orders shipped, or only one? And should the user get a message that says "order already processed" or "order processed successfully"? You can have a system that works correctly 99% of the time; the devil is in the 1% when things go wrong.
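
      One common way to handle this (sketched below with hypothetical names) is an idempotency key: the client sends the same key on every retry, so the 20 clicks collapse into one order and the retries get the "already processed" message.

```python
# idempotency-key sketch (hypothetical names); a real system would need
# the check-and-insert below to be atomic, e.g. a unique DB constraint
import uuid

processed = {}  # idempotency key -> order id (stand-in for a table)

def place_order(idempotency_key, cart):
    if idempotency_key in processed:
        return processed[idempotency_key], "order already processed"
    order_id = str(uuid.uuid4())
    processed[idempotency_key] = order_id  # record before shipping
    return order_id, "order processed successfully"

key = "checkout-123"  # generated once, when the button is rendered
first_id, msg1 = place_order(key, ["book"])
second_id, msg2 = place_order(key, ["book"])  # one of the 19 extra clicks
assert first_id == second_id
assert msg2 == "order already processed"
```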

    • mrkeen 14 hours ago

      Yeah but they're a statistical anomaly in the field.

      For every example of encapsulated state, there's probably 9 more examples of global state which is called encapsulated.

      And what is managed state? I can think of two examples that can effectively manage it - software transactional memory, and a good rdbms with the isolation level turned way up.
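
      As a rough illustration of the rdbms flavour of managed state (sqlite3 here only as a stand-in): the state lives in the database and changes only inside transactions.

```python
# "managed state" in the rdbms sense (sqlite3 only as a stand-in):
# the state lives in the database and changes only inside transactions
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counter (n INTEGER)")
conn.execute("INSERT INTO counter VALUES (0)")

with conn:  # a transaction: the whole update lands, or none of it does
    conn.execute("UPDATE counter SET n = n + 1")

(n,) = conn.execute("SELECT n FROM counter").fetchone()
assert n == 1
```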

  • moffkalast a day ago

    Or alternatively, download more brain.

    Yeah, this is big brain time.

0xbadcafebee a day ago

I hate to be the bearer of bad news, but this entire concept within the space of computer science is cargo cult repetition.

Cognitive load as the general HN viewer knows it doesn't exist. (Or at least, if the concept solely consists of "thinking hard! thinking many things more hard!", it's not worthy of a phrase)

Cognitive load, and cognitive load theory, do exist as concepts outside of computer science. Yet none of that is reflected in HN posts. Somebody heard the phrase once, thought it sounded cool, didn't learn about it, and started blogging, making up what they thought it meant. Without the actual body of knowledge and research, its use in computer science is just buzzword fuckery, and its understanding by the casual HN reader is clear as mud. Without nuance, research, and evidence, it's nonsense masquerading as wisdom.

If all you know is parroted anecdotes and personal experience, you're not doing science, you're doing arts & crafts.

Tainnor 2 days ago

I'm sorry but this article is really bad.

I agree that cognitive load matters. What this article completely fails to understand is that you can learn things, and when you've learned and internalised certain things, they stop occupying your mental space.

"Reducing mental load" should be about avoiding non-local reasoning, inconsistencies in style/patterns/architecture, and yes, maybe even about not using some niche technology for a minor use case. But it shouldn't be about not using advanced language features or avoiding architecture and just stuffing everything in one place.

  • gnarbarian 2 days ago

    you're right. but it's so hard to enforce.

resters 2 days ago

You can always tell an intellectually limited programmer when their code requires thinking too hard.

atoav 2 days ago

Programmers and tech people should understand why cognitive load needs to be reduced.

All of us would scream if we saw how some bureaucrat at a government office makes you fill out a form digitally, only to print it out and then retype the printout, only to print that out and give it to their colleague, who... you get the point.

This is a problem that could be solved perfectly by a good IT process — a process which instead of multiplying work instead reduces it.

Yet programmers and nerds tend toward similarly wasteful behaviour when it comes to cognitive load. "Why should I explain the code — it speaks for itself" can be similarly foolish. You already spent all that time thinking about and understanding your code, so why let that go to waste and throw out all the clues that would help even other hardcore nerds orient themselves? Good code is clear code, and good projects are like a good spaceship: the hero who has never been in the ship knows which button to press, because you — the engineer — made sure the obvious button does the obvious thing. More often than not that hero is your future self.

People reading our code and readmes, and using our interfaces, have all kinds of things on their minds; the best we can do is not waste their mental capacity without good reason.

devjab 2 days ago

SOLID, Clean Code, DRY and the rest are all terrible advice sold by a bunch of people who haven't worked in software development since before Python was invented. Every one of those principles is conveniently vague, so that people like Uncle Bob can claim you got it wrong when it doesn't work for you. Uncle Bob is completely correct, of course, but maybe the reason you and many others got it wrong is that the principles are conveniently vague. Conveniently, because people like Uncle Bob are consultants who are happy to sell your organisation guidance. I think the biggest nail in the coffin of everything from TDD to Clean Architecture should be that they clearly haven't worked. It's been more than 20 years and software is more of a mess than it ever was. If all these "best practices" worked, they would have worked by now.

YAGNI is the only principle I've seen consistently work. There are no other mantras that work. Abstractions are almost always terrible, and even a rule like "if you rewrite it twice" or whatever people come up with isn't universal. Sometimes you want an abstraction from the beginning; sometimes you never want to abstract. The key is always to keep the cognitive load as low as possible, as the author says. The same is true for small functions, and I've been guilty of this: it's much worse to have to go through 90 "go to definition"s than to just read through one long function.

Yet we still teach these bad best practices to young developers under the pretence that they work and that everything else is technical debt. Hah, technical debt doesn't really exist. If you have to go back and replace part of your Python code with C because it's become a bottleneck, that means you've made it. 95% of all software (and this number is angry-man-yelling-at-clouds territory) will never need to scale, because it'll never get more than a few thousand users at best. Even if your software blows up, chances are you won't know where the future bottlenecks will be, so stop trying to solve them before you run into them.

lwhi 2 days ago

I support Tailwind for precisely this reason.

randomcatuser 2 days ago

super interesting "short/long" slider on the right -- what made you come up with this UI concept?

euph0ria 2 days ago

Really liked this! Thanks!

sgt 2 days ago

  AdminController extends UserController extends GuestController extends BaseController
That's nothing... Java enterprise programmer enters the chat

lo_zamoyski a day ago

The cognitive load associated with abstractions that the author seems to mention isn’t caused by abstractions, but by leakiness or inadequacy of the abstraction family.

In an ideal setting, when solving a problem, we first produce a language for the domain of discourse with a closure property, i.e., operations of the language defined over terms of the language produce terms in the same language. Abstraction is effectively the process of implementing terms in the language of this domain of discourse using some existing implementation language. When our language does not align with the language of discourse for a problem, this adds cognitive load, because instead of remaining focused on the terms of the language of discourse, we are now in the business of tracking bookkeeping information in our heads. That bookkeeping is supposed to be handled by the abstraction in a consistent manner to simulate the language of discourse.

So you end up with a hodgepodge of mixed metaphors and concepts and bits and pieces of languages that are out of context.

Of course, in practice, the language of discourse is often an evolving or developing thing, and laziness and time constraints cause people to do what is momentarily expedient versus correct. Furthermore, machine limitations mean that what might be natural to express in a language of discourse may not be terribly efficient to simulate on a physical machine (without some kind of optimization, at least). So you get half measures or grammatically strange expressions that require knowledge of the implementation constraints to understand.
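
The closure property described above can be sketched in a few lines of Python (the Money type is a hypothetical example): operations over terms of the domain yield terms of the same domain, so reasoning never has to leave it.

```python
# hypothetical Money type with the closure property: operations over
# terms of the domain produce terms of the same domain
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    cents: int

    def __add__(self, other: "Money") -> "Money":
        return Money(self.cents + other.cents)

    def scaled(self, factor: int) -> "Money":
        return Money(self.cents * factor)

# every intermediate result is Money, so reasoning stays in the domain
price = Money(250) + Money(199).scaled(2)
assert price == Money(648)
```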

pkkkzip 2 days ago

I remember having this argument with a CTO about cognitive load. I was concerned about the sheer amount of code behind React/Redux for what could have been a plain server-rendered page with a little jQuery sprinkled in.

Her answer was "if Facebook (before meta) is doing it then so should we."

I said we aren't facebook. But all the engineers sided with her.

Said startup failed after burning $XX million dollars for a product nobody bought.

m3kw9 2 days ago

Every little advantage matters. Code spacing rules, for example: your eyes go to the position where something is expected without a new search. Use simple APIs, don't use new shiny things, don't go beyond the simplest abstraction (personal thing).

laxk a day ago

TL;DR:

- use principles as guidelines not gospel

- optimize for readability/maintainability

- and for the love of all that's holy, don't build stuff you don't need yet.

BTW, any other old-timers remember when we just wrote code without worrying if it was SOLID enough? Those were the days...

fullstackchris 2 days ago

Fantastic post already by the first example. Meanwhile I can think of big brain developers who have laughed at me "because you assigned to a variable what you use once"... good luck with your career

This post is like an exemplified grugbrain.dev

LAC-Tech 2 days ago

Enjoyed this post. A lot of it resonated with me, here's some of my thoughts:

> Too many small methods, classes or modules

Realized I was guilty of this this year: on a whim I deleted a couple of "helper" structs in a side project, and the end result was that the code was much shorter and my "longest" method was about... 12 lines. I think, like a lot of people, I did this as an overreaction to those 20-times-indented, multiple-page-long functions we've all come across and despised.

> No port/adapter terms to learn

This came under the criticism of "layered architecture", and I don't think that's fair. The whole point of the ports/adapters (or hexagonal) architecture is that you have one big business-logic thing in the middle that communicates with the outside world via little adapters. It's the exact opposite of horizontal layering.
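
A minimal Python sketch of that shape (all names hypothetical): the core contains the business logic and defines a port; adapters plug into it at the edges.

```python
# hexagonal sketch (all names hypothetical): the core defines a port;
# adapters for the outside world plug into it at the edges
from typing import Protocol

class OrderStore(Protocol):  # the port, owned by the core
    def save(self, order: str) -> None: ...

def place_order(order: str, store: OrderStore) -> str:
    store.save(order)  # business logic knows nothing about I/O details
    return f"placed: {order}"

class InMemoryStore:  # one adapter; a SQL or HTTP adapter would be others
    def __init__(self):
        self.orders = []

    def save(self, order: str) -> None:
        self.orders.append(order)

store = InMemoryStore()
assert place_order("book", store) == "placed: book"
assert store.orders == ["book"]
```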

People say "We write code in DDD", which is a bit strange, because DDD is about problem space, not about solution space.

1. I really should re-read the book now I'm a bit more seasoned. 2. I have noticed this is a really common pattern. Something that's more of a process, or design pattern, or even mathematical construct gets turned into code and now people can't look past the code (see also: CRDTs, reactive programming, single page applications...).

> Involve junior developers in architecture reviews. They will help you to identify the mentally demanding areas.

Years later I remembered how impressed I was that a boss of mine leveraged my disgust at a legacy codebase, and my failure as a junior to understand it (partly my fault, partly the codebase's fault), by channelling my desire to refactor into something that had direct operational benefits, and not the shotgun-scatter refactoring I kept eagerly doing.

creer 2 days ago

> Too many small methods, classes or modules
> Method, class and module are interchangeable in this context

Class, method, functions are NOT the only way to manage cognitive load. Other ways work well for thinking developers:

Formatting - such as longer lines, and lining up things to highlight identical and differing bits.

Commenting - What a concept?! using comments to make things more clear.

Syntactic sugar, moderate use of DSL features, macros... - Is this sometimes the right way?

But yeah, if your tool or style guide or programming language even, imposes doing everything through the object system or functions, then someone clearly knew better. And reduced your cognitive load by taking away your choices /s.

d0mine 2 days ago

What causes more cognitive load:

    filter(odd, numbers)
vs.

    (n for n in numbers if odd(n))
It depends on the reader too.

  • jodrellblank 2 days ago

    Rather depends if we can trust that it's Python's "filter" or if it's another language you're making look Pythonic, and we don't know who implemented filter/2 or how.

    - The first one might be an in-place filter and mutate "numbers", the second one definitely isn't.

    - The first one might not be Python's filter and might be a shadowed name or a monkeypatched call, the second one definitely isn't.

    - The first one isn't clear whether it filters odd numbers in, or filters them out, unless you already know filter/2; the second one is clear.

    - The first one relies on you understanding first-class functions, the second one doesn't.

    - The first one isn't clear whether it relies on `numbers` being a list or can work with any sequence, the second one clearly doesn't use list indexing or anything like it and works on any sequence that works in a `for` loop.

    - The first one gives no hint what it will do on an empty input - throw an exception, return an error, or return an empty list. The second one is clear from the patterns of a `for` loop.

    - The first one has a risk of hiding side-effects behind the call to filter, the second one has no call so can't do that.

    - Neither of them have type declarations or hints, or give me a clue what will happen if "numbers" doesn't contain numbers.

    - The first one isn't clear whether it returns a list or a generator, the second one explicitly uses () wrapper syntax to make a generator expression.

    - The first one has a risk of hiding a bad algorithm - like copying "numbers", doing something "accidentally n^2" - while the second one is definitely once for each "n".

    Along the lines of "code can have obviously no bugs, or no obvious bugs" the second one has less room for non-obvious bugs. Although if the reader knows and trusts Python's filter then that helps a lot.

    Biggest risk of bugs is that odd(n) tests if a number is part of the OEIS sequence discovered by mathematician Arthur Odd...

    • jodrellblank 2 days ago

      > "Neither of them have type declarations or hints, or give me a clue what will happen if "numbers" doesn't contain numbers."

      bools in Python are False==0 and True==1. I'm now imagining an inexperienced dev who believes those things are numbers, has no idea they could be anything else, and is filtering for Trues with the intent of counting them later, but who messed up the assignment so that instead of 'numbers' always getting a list of bools, it sometimes gets a single scalar bool outside a list. They want to check for this case but don't understand types or how to check them at all; however, they have stumbled on this filter/loop, which throws when run against a single value. How useful! Now they are using those lines of code for control flow as a side effect.
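
      For the curious, the behaviour being leaned on here is easy to demonstrate (odd is a hypothetical predicate standing in for the thread's example):

```python
# bools really are ints in Python, and a bare bool is not iterable
assert True == 1 and isinstance(True, int)

def odd(n):  # hypothetical predicate from the thread's example
    return n % 2 == 1

assert list(filter(odd, [True, False, True])) == [True, True]

try:
    list(filter(odd, True))  # a scalar instead of a list of bools
except TypeError:
    pass  # the accidental "type check" being used as control flow
```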

    • d0mine a day ago

      This is ridiculous. You can assume you know what language you are reviewing/working in (sorry, I forgot to mention that the example is in Python). I can remember cases when I was not sure what [human] language I was thinking in, but I don't remember a single case of confusion about what programming language I was working in (it is not a factor in cognitive load).

      filter is a builtin name in Python, so there is no confusion here in practice. A static checker such as ruff will tell you if you accidentally shadow it; it's the first rule, A001: builtin-variable-shadowing.

      If you are a noob, the second variant may be easier to grasp. The first variant has fewer moving parts.

  • zahlman 2 days ago

    The argument isn't about these minor syntactic or API differences. It's about the structure of the code, in the SICP (https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...) sense.

    • d0mine a day ago

      It is not how "cognitive load" is usually understood (as it relates to the working memory, measured by task-involved pupillary response). It involves anything and everything that is not already stored in your long term memory.

      I remember spending an egregiously long time finding a bug that was essentially a typo in some constant. The expressiveness of the language, and how many chunks you have to keep in working memory, matter. The chunks can be low- or high-level depending on what you are trying to do at the moment, but you can't escape looking at the low-level details at some point.

  • SoftTalker 2 days ago

    Also depends on whether it’s obvious why I need a list of odd numbers.