Wow. I read the article when it first came out. Periodically, it gets re-posted.
My take on the article now is still what I thought then.
When hiring a team of SW engineers to build something, it is critical to choose a language that a large number of potential candidates know - "know" as in have already written tens of thousands of lines of. Concurrent Euclid might be a great language (dating me), but the pool of engineers who really know it is vanishingly small.
I think the reaction by many people would still be the same 30 years later today.
Tooling and standardization sure helped with C++, but the authors pointed out the psychological and sociological barriers involved. While some of these barriers have eroded in the past decades (evolving imperative languages incorporated more and more functional aspects over time), pure functional languages are still perceived as "too clever" by many.
I also wonder how the literate programming approach taken with the Haskell solution compares to the "test harness" used by the C++ implementation in terms of documentation. It has been shown that people read code differently from prose, and test suites are somewhat comparable to the "executable specification" that literate programming provides, with the former aligning more closely with reading the implementation.
It'd be interesting to conduct a similar study today with more contemporary contestants like Haskell, C++20, Rust, Python (it's about prototyping after all), Java, C#/F#, and Julia. It'd also be intriguing to learn whether the perception of the functional solutions changed over time.
Writing code is already hard enough as-is due to the complexity imposed by business requirements. On top of that, reading and debugging code is significantly harder than writing it. It is extremely easy to end up with an incredibly clever codebase, which in practice is just a completely unmaintainable mountain of technical debt.
Take for example something like the Fast Inverse Square Root algorithm [0]. It is incredibly clever, but if you randomly come across it most developers won't be able to easily understand how it works. The well-known variant uses undefined behavior, gives a suboptimal result, and doesn't have any meaningful comments. In other words: it cannot be maintained. Having one or two of those in places where they are absolutely necessary isn't too bad, but imagine having to deal with bugs in a codebase where you come across stuff like this every single day.
Writing clever code is easy. It is far harder and way more important to write simple code.
That's not an anti-feature because of cleverness. It's a failure of engineering. Just like everything else, clever code needs to be encapsulated behind a good interface, be well documented, be well tested.
> if you randomly come across it most developers won't be able to easily understand how it works
So document it.
> uses undefined behavior
So fix the undefined behavior. Use memcpy or bit_cast.
> gives a suboptimal result
That's probably intentionally sacrificing accuracy for speed. Document it.
> doesn't have any meaningful comments
Then add it.
A clever algorithm is never the problem by itself. The problem is undocumented, unencapsulated clever algorithms with unknown bugs.
Maybe the best example of this would be memcpy. It's extremely clear what it does, but the implementations would surely be "clever" code. And yet you don't need to know anything about SSE to use it.
What I find interesting is that the Haskell solution was the only one to use higher order functions. Assuming they also count virtual functions in languages like C++ to be higher order, I think a part of the difference here is in design attitude rather than an inherent part of the languages studied.
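For anyone who hasn't read the paper: the higher-order-function trick was that a geometric region is just a function from points to Bool, and region combinators are ordinary functions over those functions. Roughly this flavor - my own reconstruction, not the actual NSWC code:

type Point  = (Double, Double)
type Region = Point -> Bool          -- a region is just a predicate on points

circle :: Double -> Region           -- disc of radius r centred at the origin
circle r (x, y) = x * x + y * y <= r * r

outside :: Region -> Region          -- complement of a region
outside reg p = not (reg p)

intersection :: Region -> Region -> Region
intersection r1 r2 p = r1 p && r2 p

-- e.g. an annulus: inside the big circle but outside the small one
annulus :: Double -> Double -> Region
annulus small big = intersection (circle big) (outside (circle small))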
I really like the idea of comparing languages in a real-ish scenario of development, written by independent expert-in-language developers! As a web dev, I'm particularly interested in the idea of this for comparing the various web frameworks (including "no framework").
Some thoughts on the experiment:
- To get a better idea of the impact of the language on authors' thought processes, it'd probably have to include submissions from more authors in each language. With just one (or so) submission per language, I could see there being variation in expertise.
- I'm curious to see what the documentation looks like here, that there's so much written in some of the submissions, and that the paper authors value it so highly. Is it used to explain what the code does, indicating potentially too-complex code, or is it explaining whys?
- In the "Lessons Learned" section, it's mentioned that other reviewers were not as impressed with Haskell. I'm curious if their reactions were included in the evaluation - to me, these reactions would reduce the success for the understandability (and learnability?) criterion. The paper authors seem to have written this off as "If functional languages are to become more widely used, various sociological and psychological barriers must be overcome".
For the "suspicious" people, this seems to imply the code was final:
> It is significant that [people] were all surprised and suspicious when we told them that Haskell prototype P1 (see appendix B) is a complete tested executable program.
For the people critiquing "cleverness", that seems completely valid whether or not it's actual code or pseudocode.
Interesting. I think it's the wrong test, though. (I mean, look, it's hard to get data on actual software engineering. They got actual data, and they published it. It's more than most people ever do.)
I think the real test would be to do the same experiment, but not with a prototype. It would be to write a finished program, and then maintain it for a decade or three. (But of course, nobody's ever going to run that experiment - it would be too expensive, plus the time is too long for the "publish or perish" world.)
The point is, more matters than just "how fast can I develop". How fast can I develop code that can be maintained for a long time by other people? How hard is it for them to maintain? How does the choice of language affect that? How does how fast it was developed affect that?
In the real world, speed of development is only one variable, and maybe not the most important one. (And, yes, I'm complaining about the data being inadequate, after noting earlier how rare it was to get data at all...)
There is also the very real-world consideration of: How do I hire people to maintain this beast?
Sometimes, you have to bow to industry trends. Otherwise, you're one or two resignations away from being dead in the water. A Haskell solution might be svelte and easy to reason about, for a Haskell programmer. But you have to also be prepared to train new hires on all that, and you may not have that lead time.
They should redo this and add Rust - not because Rust is the new hype, but because productivity and security also depend on the ecosystem, and all of these languages have their different stances, issues, and points.
Haskell appeared to do quite well in the NSWC experiment; even better than we had anticipated! The reaction from the other participants, however, in particular those not familiar with the advantages of functional programming, was somewhat surprising, and is worth some discussion.
There were two kinds of responses:
In conducting the independent design review at Intermetrics, there was a significant sense of disbelief. We quote from [CHJ93]: "It is significant that Mr. Domanski, Mr. Banowetz and Dr. Brosgol were all surprised and suspicious when we told them that Haskell prototype P1 (see appendix B) is a complete tested executable program. We provided them with a copy of P1 without explaining that it was a program, and based on preconceptions from their past experience, they had studied P1 under the assumption that it was a mixture of requirements specification and top level design. They were convinced it was incomplete because it did not address issues such as data structure design and execution order."
The other kind of response had more to do with the "cleverness" of the solution: it is safe to say that some observers have simply discounted the results because in their minds the use of higher-order functions to capture regions was just a trick that would probably not be useful in other contexts. One observer described the solution as "cute but not extensible" (paraphrasing); this comment slipped its way into an initial draft of the final report, which described the Haskell prototype as being "too cute for its own good" (the phrase was later removed after objection by the first author of this paper).
We mention these responses because they must be anticipated in the future. If functional languages are to become more widely used, various sociological and psychological barriers must be overcome. As a community we should be aware of these barriers and realize that they will not disappear overnight.
From what I understand, there was a successor to AP5 called Relational Lisp (-> rellisp). I don't think I've seen the source for that. AP5 is available and there have been updates to the implementation over the years.
I'm curious about Relational Lisp. It had the shortest development time, 3 hours to Haskell's 10 (and 8 for the second Haskell prototype); a middling-to-low number of lines of code, at 274; and only 12 lines of documentation.
It describes Relational Lisp as being Lisp enhanced with a database-like language for logic programming.
I suspect this may be it:
https://oceanpark.com/ap5.html
https://www.ap5.com/
There's a C2 entry for it, of course:
https://wiki.c2.com/?RelationalLispWeenie
They also assumed Haskell performed so well because the author was an expert at it. So they independently hired a college graduate with no prior knowledge of Haskell and gave him 8 days to learn it. It turns out the graduate implemented the second best solution in terms of lines of code and development time.
But they hired someone capable of learning.
99% of developers will not learn a language that doesn’t look familiar to them on principle. They don’t like it and it’s the end of discussion.
What a shame.
When I started out in the 1990s professional programmers generally made a point of learning new languages, to acquire skills and expose themselves to alternate ways of thinking.
I remember a boss in the early 2ks who was teaching himself OCaml in his spare time just because.
I love learning interesting niche languages and also wish it was still the case that learning interesting languages purely out of interest was common.
But I'm afraid that the network effects that favor large languages are only getting stronger over time. It's very hard to convince someone to learn a new language if they're used to finding the answer to almost every question they can think of on Stack Overflow and find free libraries for almost any task on GitHub. Your niche language will have neither of those things. Large languages will continue to consolidate and become impossible to displace, even if a new language pops up that is strictly better in terms of design.
Entrenched programming languages have reached the point where, I believe, they will never be displaced. So there are certain obvious mistakes, such as every object in Java being nullable, that we will have to live with in perpetuity.
In the mid-to-late aughts there arose the diploma mills, offering short programmes to turn students into Silicon Valley hopefuls. It fundamentally shifted the culture of software development.
There was also a dire need for more programmers. I think the industry would take what it could get.
There are a lot of developers that won't even learn new things about their current language. I'm still fighting to get some people to adopt C++11 which is 13 years old now.
I used to think this sort of "unreasonable" mindset was a characteristic of "old fogies" and that "bright young'uns" always knew better. But ever since I moved from the latter to the former group years ago I began to understand this mindset. It is basically a fear of losing the expertise one has acquired over a long career through a language which has become almost second nature. Also, with time and experience you become very cautious about trying out new things in production code because you still don't understand the ramifications fully. For all its complexity, pre-C++11 was far simpler to understand and write code in. We knew the minefields, what to avoid, and how to model effectively. So most of us are not fans of new additions to the language every 3 years just because the committee members are trying to ape other languages. Speaking for myself, I only wanted a concurrency library and some compile-time programming constructs as new additions to the language; everything else was strictly not necessary.
Looked at from the above viewpoint, you should be able to appreciate our "old fogies" mindset and why we refuse to eagerly jump on the C++11 (and later) bandwagon just because it exists. We need a justification for every new thing we are forced to re-learn and use. So my suggestion is to take one new feature at a time, discuss and convince the folks of its utility, maybe make the changes in one single module and demonstrate it. The argument of "it's new and shiny" will never fly with us.
I am one of those old fogies. While some of the new things in C++ are not useful to me, there are a lot of things aped from other languages that make my life better. Those who are not interested in learning everything can sit back for a bit while those who eagerly jump to new things figure out what is good and what they can ignore (remember that each problem domain is different so we won't all have the same answer). For me, just using unique_ptr would be a major improvement.
Even the old fogies need to be aware of what is happening. I haven't yet done much Rust, but the advocates (when I cut through their Rust religion) are saying some things that really speak to my pain points and so I'm planning to learn it (and have been for a couple years - I may retire before I get around to it, but some new and shiny things are not just glitter and so it remains on my list)
Ah; a kindred "old fogey" :-) Yes, our experience shapes how eager we are to jump to learn new things. But it is eventually done at our own pace. In general though, with age and experience people become more cautious psychologically. Probably because we have made so many mistakes which in hindsight could have been avoided if we had thought more and studied more. This is what is frustrating with "shiny new" features. Even before people have gotten some substantial real-world experience with C++11, you have C++14/17/20/23 piling on top and making us feel peak "imposter syndrome". To make matters worse, the noobs/wannabes start "cargo cult programming" with no real understanding, thus ruining the s/n ratio. They don't understand that merely learning the new syntactical features will not magically make one a better and more productive programmer.
The C++ standards committee really needs to disband itself for a decade and leave us programmers in peace :-) Stroustrup wasn't kidding when in the preface to the 4th edition of TC++PL he said "C++11 feels like a new language".
Finally, I agree that we need to be aware and keep abreast of new languages/features which are genuinely good/useful, but I would rather be "slow and sure" than "fast and fail".
> C++ standards committee
Is working on things that would be useful. The early adopters of modules have reported some very intriguing results - it isn't ready for me to try yet, but so far what the standard committee has done looks like it will be a win for me in 5 more years when I finally can adopt it. Reflection and contracts are two other things coming that have some intriguing uses in the real world and I'm hopeful to see how/if they work out.
Agree with you on Reflections and Contracts (have actual fundamental value) but not so much on Modules (more incremental and restructuring).
If - as some have reported - modules can bring my compile time down significantly, that is bigger than anything else and it will be worth the effort to adopt. That is a big if - modules are still in the early adopter stage and so we are still learning how they work. In general my project is a late adopter.
I think the dichotomy of script kiddies versus hackers applies aptly to modern developers. Some developers learn their frameworks and libraries and enough of the language to be productive (script kiddies), whereas some have a keen interest in understanding how a system works and the art of software engineering (hackers). Hackers in my experience are still a very rare breed.
This has been my experience with most engineers.
I can’t think of a time I’ve seen this, and I’ve worked with most languages at least once (including Haskell, Rust, Go, F#, PHP, Java, etc…)
One of the other competitors was the project head. All of those people seem deeply inclined toward learning and experimenting.
Even if this is sometimes true, when you adopt this viewpoint your work turns to crap. Hire good people, and if you can't hire good people, do something more interesting.
How did they learn the languages they currently know?
Perhaps only God knows.
They took a programming camp they paid for, where they were given the task to learn the language of the day. So they learned it because they were forced to, not out of curiosity or desire to improve.
Monkeys using a typewriter were also proven to be 83% more productive than the average developer. Study suggests that their edge likely came from not understanding and ignoring the certified scrum master™.
Someone who learned Haskell intensely for 8 days could very well be more productive than someone who learned Haskell intensely for 80 days. The former probably got a good introduction to the standard library functions and has become familiar with the main classes like Functor, Applicative, Foldable, Traversable. The latter might be too engrossed in advanced language features like TypeInType, or in evaluating and choosing between slightly different abstractions to accomplish a single goal, like choosing van Laarhoven optics vs profunctor optics.
And I'm not trying to demean advanced type system extensions or van Laarhoven lenses; I'm just reflecting on my personal journey with Haskell. Playing around with the language in this way is similar to playing around with advanced template meta programming in C++. It just takes experience to have the discipline to know the difference and write simple code and be productive.
Submitted report was published in 1994.
At the time I don't think Haskell had any of that, and I'm not sure when monads were introduced in Haskell either (it wasn't on day 1, I think). Which means that the language was simpler in some aspects.
But what I do think made the job simpler is that they had easy access to other people that knew Haskell. Whereas, today, unless you have a mentor you're going to need to handle any issues you're encountering via delayed responses on community forums... Or AI, most of the time.
Yeah I know the report was from 1994. But it doesn't make sense to learn and use Haskell as if it's still 1994 and so that's not my argument. If anything, having the article be a top link on HN might convince someone to learn Haskell today, and that's what's relevant.
Totally agree on having people nearby that knew Haskell already.
Would be nice to redo this experiment with modern languages like Rust, Go, as well as modern "flavors" of Haskell and C++. Maybe throw OCaml in as well.
It would also be interesting (but impossible) to do this with more complex problems. I work on more than 10 million lines of C++ (large, but there are many larger C++ codebases out there), with much of the code going back 15 years (comparatively young), with several hundred developers. Even if Haskell could do this in 1 million lines of code (seems unlikely, but who knows), that is still a lot of code. Does it have the abstraction needed to handle this, or does something fail, making Haskell unmaintainable for some reason?
Which is to say this is interesting, but it is a microbenchmark and so of questionable relevance to the real world.
I work on one of the largest Haskell codebases in the world that I know of (https://mercury.com/). We're in the ballpark of 1.5 million lines of proprietary code built and deployed as effectively a single executable, and of course if you included open source libraries and stuff that we have built or depend on, it would be larger.
I can't really speak to your problem domain, but I feel like we do a lot with what we have. Most of our pain comes from compile times / linking taking longer than we'd prefer, but we invest a lot of energy and money improving that in a way that benefits the whole Haskell ecosystem.
Not sure what abstractions you are wondering about, though.
What I'm wondering about is how maintainable programs of that size are over time. That you got over a million lines says it is possible. How difficult is it, though? Abstractions are just code for whatever is needed to break your problems up between everyone without conflicts. How easy/hard is this?
For example, I like Python for small programs, but I found that around 10-50k LOC Python is no longer workable: you will make a change not realizing that the function is used elsewhere, and because that code path isn't covered by tests you don't know about the breakage until you ship.
It’s highly scalable. Part of the reason compile times are a bit long is that the compiler is doing whole program analysis.
Most of the control flow in a Haskell program is encoded in the types. A “sum type” is a type that represents choices and they introduce new branches to your logic. The compiler can be configured to squawk at you if you miss any branches in your code (as long as you’re disciplined to be wary about catch-all pattern matches). This means that even at millions of lines you can get away with refactorings that change thousands of lines across many modules and be confident you haven’t missed anything.
You can do these things in C++ codebases as well, but I find the analysis tooling there is building models, whereas in Haskell the types are much more direct. You get feedback faster.
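A minimal sketch of the sum-type/exhaustiveness point above (the type and names are made up; the relevant warning is GHC's -Wincomplete-patterns):

{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- A sum type: a value is exactly one of these alternatives.
data PaymentMethod
  = Card String
  | BankTransfer String
  | Crypto String   -- imagine this constructor was added during a refactor

describe :: PaymentMethod -> String
describe pm = case pm of
  Card n         -> "card ending in " ++ n
  BankTransfer i -> "transfer from " ++ i
  -- if the branch below were missing, GHC would warn at compile time,
  -- at every match site across the codebase - which is what makes
  -- large cross-module refactors safe
  Crypto w       -> "crypto wallet " ++ w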
Modern C++ has something like sum types, but it's so clunky and un-ergonomic it's ridiculous :(
And the standard library is including monad-like types too. It has come a long way.
We have a pretty limited set of abstractions that are used throughout. We mostly serve web requests, talk to a PostgreSQL database, communicate with 3rd-party systems with HTTP, and we're starting to use Temporal.io for queued-job type stuff over a homegrown queueing system that we used in the past.
One of the things you'll often hear as a critique levelled against Haskell developers is that we tend to overcomplicate things, but as an organization we skew very heavily towards favoring simple Haskell, at least at the interface level that other developers need to use to interact with a system.
So yeah, basically: Web Request -> Handler -> Do some DB queries -> Fire off some async work.
We also have risk analysis, cron jobs, batch processing systems that use the same DB and so forth.
We're starting to feel a little more pain around maybe not having enough abstraction though. Right now pretty much any developer can write SQL queries against any tables in the system, so it makes it harder for other teams to evolve the schema sometimes.
For SQL, we use a library called esqueleto, which lets us write SQL in a typesafe way, and we can export fragments of SQL for other developers to join across tables in a way that's reusable:
select $
  from $ \(p1 `InnerJoin` f `InnerJoin` p2) -> do
    on (p2 ^. PersonId ==. f ^. FollowFollowed)
    on (p1 ^. PersonId ==. f ^. FollowFollower)
    return (p1, f, p2)
which generates this SQL:
SELECT P1.*, Follow.*, P2.*
FROM Person AS P1
INNER JOIN Follow ON P1.id = Follow.follower
INNER JOIN Person AS P2 ON P2.id = Follow.followed
^ It's totally possible to make subqueries, join predicates, etc. reusable with esqueleto so that other teams get at data in a blessed way, but the struggle is mostly just that the other developers don't always know where to look for the utility so they end up reinventing it.
In the end, I guess I'd assert that discoverability is the trickier component for developers currently.
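For what it's worth, the "blessed" reusable fragment mentioned above can be as simple as a named query value other teams import instead of re-writing the joins. A rough sketch built from the exact query shown earlier (the helper name is made up, and it assumes the same Person/Follow entities):

followPairs :: SqlQuery ( SqlExpr (Entity Person)
                        , SqlExpr (Entity Follow)
                        , SqlExpr (Entity Person) )
followPairs =
  from $ \(p1 `InnerJoin` f `InnerJoin` p2) -> do
    on (p2 ^. PersonId ==. f ^. FollowFollowed)
    on (p1 ^. PersonId ==. f ^. FollowFollower)
    return (p1, f, p2)

-- callers reuse the join instead of re-deriving it, e.g.:
-- select $ do
--   (p1, _follow, p2) <- followPairs
--   return (p1 ^. PersonId, p2 ^. PersonId)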
I worked at SimSpace, where we had a million lines of Haskell written in house. It was wonderful! It was broken up into 150-175 packages with a surprisingly shallow dependency tree, making compile times decent.
It helped that our large application was a bunch of smaller pieces that coordinated through PostgreSQL.
We had three architects who spent their time finding near future problems and making sure they didn't happen.
I've had Haskell jobs with smaller and worse codebases. I think bad code can be created in any language.
>I think bad code can be created in any language.
I agree, but good code bases do need language support. Some languages cannot easily scale to large code sizes (dynamic types, self-modifying code, and other such things that Haskell doesn't have are what most come to mind as reasons I've given up on some languages for large code bases - but there may be other things I don't know of that make languages not work at large sizes).
> We had three architects who spent their time finding near future problems and making sure they didn't happen.
I want this.
Also JVM languages, Java, Kotlin, Clojure, now that they have added functional features. Plus functional dynamic languages like Scheme/Racket and Elixir.
It's interesting that we've known results such as these for 30(+) years, yet 99.999% of all software is still written in outright miserable languages such as Python or Javascript...
Interesting take on the paper. From my perspective I’d say that developing this wouldn’t take 8 hours in python or JavaScript, but it might just be a preference thing.
These days I'd be surprised if asking Copilot to start the product wouldn't cut the equivalent of 6 hours off the Haskell development time.
It's not Haskell, but you can write great functional programming code with Javascript. I think it is very ergonomic for FP. Remember that Javascript is the world's most misunderstood programming language =))
As much as I appreciate the FP features of Python, I much prefer a language that enforces immutability. Constraints like that make the code easier to understand and reduce the number of possible bugs.
At this stage in my life, when I'm beginning to give up on immutability being taken seriously, I'd almost settle for proper immutable collections being provided alongside the mutable ones.
I really just want to combine Sets and Maps the way I combine numbers and strings.
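For reference, that is exactly what Haskell's containers give you: Set and Map have Semigroup/Monoid instances, so combining them looks just like combining strings (toy example; Map's <> is a left-biased union):

import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

main :: IO ()
main = do
  print ("foo" <> "bar")                                      -- "foobar"
  print (Set.fromList [1, 2] <> Set.fromList [2, 3 :: Int])   -- fromList [1,2,3]
  print (Map.fromList [(1, "a")] <> Map.fromList [(1 :: Int, "b"), (2, "c")])
  -- fromList [(1,"a"),(2,"c")]: on a duplicate key, the left value wins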
I don't think you can be surprised that Python or Javascript are more popular than Haskell. Just the monad thing rules it out entirely.
I would say ML might have been a more popular language, but again it's just a bit too weird for people that aren't deeply into academics / PL research. Also the tooling is just bad, even to this day. (Tbf that is true of Python too.)
Monads aren't weird. Basically it's data structures you can map over and flatten, like a list or a maybe/result/option. Everyone who isn't extremely junior knows about them. There's some theory that ties into formal systems and blah blah blah, but you sure don't need to be janking your monoid in public to understand and use monads.
As for tooling, few people actually want immersive, robust tooling. If that was popular things like Pharo and Allegro wouldn't be niche. Most want crude text editing with pretty colours and not much more.
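Concretely, the "map over and flatten" point above, on lists and Maybe (a toy illustration):

-- on lists, (>>=) is concatMap: map each element to a list, then flatten
odds :: [Int]
odds = [1, 2, 3] >>= \x -> if even x then [] else [x]   -- [1,3]

-- on Maybe, the "flatten" step collapses nested failure
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

quarter :: Maybe Int
quarter = Just 12 >>= halve >>= halve                   -- Just 3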
> Monads aren't weird. Basically it's …
Says everyone after having spent months getting the hang of it and more months writing blogs about how everyone is wrong to think it’s difficult because when you think about it “ A Monad is just a Monoid in the Category of Endofunctors”
It’s incredible just how blind most programmers are to the time they spend on learning things to come to the point where they think a subject is simple and how they all assume that any junior should just listen to their magical recital of a boiled down explanation and immediately understand it at a fundamental level.
If they were called "flatmappable" (which is essentially what they are), people wouldn't complain so much about monads. But in Haskell you have to bend over backwards just to set a variable you think should be in scope, or to log something to the console, and monads are involved in achieving that kind of things.
Haskell is hard - monads in themselves not really.
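A toy example of that point (uses the mtl package; the names are made up): even "print a message and bump a counter" drags in IO plus a state monad:

import Control.Monad.State (StateT, evalStateT, get, modify)
import Control.Monad.IO.Class (liftIO)

logLine :: String -> StateT Int IO ()
logLine msg = do
  liftIO (putStrLn msg)   -- console output lives in IO
  modify (+ 1)            -- the "variable" is threaded through StateT

main :: IO ()
main = do
  count <- evalStateT (logLine "hello" >> logLine "world" >> get) 0
  print count             -- 2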
Yes, if only people called it something different. Quantum mechanics is similarly immediately understandable if you just call it mechanics of discrete amounts. No need to read a lot of books or do a lot of math or calculation. You now know quantum mechanics at a fundamental level because I have told you the magical recital of a boiled down version. I'll get back to you with a link to my blog explaining why university courses on the subject should just be replaced with a picture of a cat and my magical phrase, which will have the same effect.
Honestly, do you not see how naive the idea is that the only thing standing between people and a good understanding of a subject that literally everyone spends a ton of time on is a renaming or a catchphrase? Even people who read tons of "actually it's simple, just…" blogs end up spending time getting to know it. And every one of the people writing those blogs spent a ton of time on it, which is why they are writing blogs about their "eureka moment" that will supposedly make the subject an instantly learned matter forever.
I think you misunderstood my point...
There are so many things in programming that are hard to grasp at a deep level, but don't look scary to beginners (who misuse them without realizing). Monads look scary so they take all the blame. I'm not really proposing this, but I guess if monads were called something like "flatmappable", the focus would perhaps shift to the more difficult parts of Haskell: at the root, immutability and purity, and on top of that, the deluge of abstractions and ways to compose them. It wouldn't make Haskell any easier... it would just save the poor monad all the bad rap.
The weird thing is the word monad and the technical formalism. Practically speaking, I really don't see why a decent junior can't understand

x = [1, 2, 3].flatMap(el => el % 2 == 0 ? [] : [el])   // x = [1, 3]

vs

x = []
for (i = 1; i <= 3; i++) { if (i % 2 != 0) x.push(i) }   // x = [1, 3]

or some such. There are advantages and disadvantages to either style of programming. I actually don't like pure functional or imperative style because of the respective tradeoffs each make (functional style requires either significant runtime overhead or heavyweight compiler optimizations, and imperative gets messy very quickly and is typically difficult to reason about as program size expands).

The problem is, that's not "monad"; that's the implementation of the monad interface specifically on an array. That's like claiming to understand "iterators" because you understand how to write an iterator implementation on an array, but then trying to take that understanding to an iterator that yields the primes in order. If you think "iterator == presenting each element of an array in order", and nothing else, you don't actually understand iterators, because presenting each element of an array in order is an iterator, not the concept of iterators. "flatMap" is a monad, it is not the concept of monad.
I think any language that has a flatMap-type construct would also have filter functions that would be a lot clearer to read and write than either version above - and likewise in Common Lisp, for example. (Yes, I know the original is just a contrived example that you shouldn't read too much into, but...)
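In Haskell, for instance, the same toy example collapses to a one-liner:

result :: [Int]
result = filter odd [1, 2, 3]   -- [1,3]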
I don't understand shit about the formalisms and I don't care. If you look at my posting history you'll notice that I'm a very uneducated, very crude programmer.
This kind of data structure is like the next step after learning how the scalars work.
'Weirdness' is, as this conversation confirms, in the eye of the beholder. I _find_ Ruby weird and non-intuitive.
It's a claim that monads are somehow supernatural, twists of reality, which is obviously not the case since a lot of people who have never learnt the word monad are using them daily in the form of Optional style types and lists and so on.
Note that these languages are not included in the paper, so this is merely a speculation as to their performance.
So many people are enchanted by quick results... it's the same psychology as the enchantment with LLMs, and they exert the same pull as TikTok reels.
You get results; who cares that they have no correctness tying them together.
I prefer results WITH correctness, however, and I can utilize the LLM for that. I mentioned it in my previous comment.
It's especially sad to see many YC companies falling for writing JavaScript.
No it's not; why would it be?
Monocle-wielding elitist opinions like these scream to me of very low-productivity environments that would spend weeks reinventing the wheel rather than getting to market fast.
I love and practice Haskell, but anybody thinking that technologies like these are fit for fast-moving startups is absolutely delusional.
You can easily set up a monorepo and quickly release mobile/native/web/APIs and whatever, with excellent editor and ecosystem support, in TypeScript; good luck achieving that in alternative languages.
Last but not least, 99% of people like you criticizing JavaScript have never seen what kind of great goodies there are in the ecosystem, even for writing pure functional programming that scales, e.g. with Effect-ts[1]
[1] https://effect.website/
The issue isn’t the quality of libraries, and as you said, there are nice ones out there. The issue is the churn and dependency management.
And as for fast-moving startups, the most important factor will always be the problem (and its myriad sub-problems) and hiring. Selecting a language depends on those.
Paul Graham is a major Lisp advocate, attributing a non-trivial part of his own early success to it.
So there are all those good reasons for it, but it's still a little weird.
No, PG advocating for Lisp because he has had a good experience with it 30 years ago is not a "good reason".
Build a tech team around lisp in your startup then task somebody with scraping a website or handling session tokens refreshing and see the results.
At some point people should ask themselves why there's so little killer software written in Lisps or pure functional languages (besides extremely rare exceptions like Emacs or Datomic) when modest and much hated languages like PHP have plenty.
> scraping a website or handling session tokens refreshing and see the results
Have done this task at work - taking a Java crawler and modifying it to be able to navigate SAML-based auth.
It's the goddamn pervasive mutation that kills you every time. The original author will make assumptions like "I can just put my cookies here, and read from them later". And it works for a while, until you add more threads, or crawl different domains, or someone attempts some thread-safety by storing cookies in *shudder* thread-locals.
(This was actually the project that convinced me that I'll never be able to diagnose or fix a race condition with a step-through debugger.)
I was so fed up with this task that I whipped up a Haskell version at home. I then used that as a standard to test the Java work crawler against - by comparing mitmproxy dumps. If the Java version didn't do what the Haskell version did, I hacked on it some more.
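For what it's worth, here is a rough sketch (not the commenter's actual crawler; the names are made up) of the style being described: the cookie jar is an ordinary immutable value that each step receives and returns, so "where did these cookies come from?" is always answered by the function arguments rather than by hidden shared state.

    import qualified Data.Map.Strict as Map

    type CookieJar = Map.Map String String   -- cookie name -> value

    -- A fetch step returns the response body *and* the updated jar;
    -- the caller decides what happens to both.
    fetchPage :: CookieJar -> String -> IO (String, CookieJar)
    fetchPage jar url = do
      -- a real crawler would issue an HTTP request here; we just fake a Set-Cookie
      let jar' = Map.insert "session" ("token-for-" ++ url) jar
      pure ("<html>" ++ url ++ "</html>", jar')

    crawl :: CookieJar -> [String] -> IO CookieJar
    crawl jar []           = pure jar
    crawl jar (url : rest) = do
      (_body, jar') <- fetchPage jar url
      crawl jar' rest      -- the new jar is passed on explicitly

    main :: IO ()
    main = do
      finalJar <- crawl Map.empty ["a.example", "b.example"]
      print (Map.toList finalJar)

Concurrency then becomes a question of how jars get merged, rather than of discovering after the fact which thread scribbled over which cookie.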
Okay, since you made quite a good argument, I will give you my counter.
The primary problem with JavaScript is, first, that the entire ecosystem is prone to breaking. It is a very brittle system.
Next, TypeScript gives you an allure of safety, but at the edges it breaks down because underneath it, it's all just untyped JavaScript.
And the last and most important one: because it is so easy to wrap functions in each other, and because there are no established patterns, there is a lot of indirection in most TypeScript codebases.
This causes API surface-area bloat, which becomes hard to maintain in the long term.
And tooling doesn't completely solve this problem either. I have seen codebases where the LSP struggles because of tons of generic types.
I think the most pragmatic alternative is Golang if you are building APIs.
And for a full stackish framework Phoenix and Elixir is a great choice.
Or you can just go all in on Rust like my companies do, and you get the best of everything.
JS/TS's primary target is running interactive code in the web browser. What are the good alternatives?
Sure, if you are building a highly interactive app like Google Maps, go crazy on the JavaScript, but keep it only for that.
I think some business SPAs benefit heavily from JS/TS too.
Previous discussion that includes even more backlinks to additional earlier discussions: https://news.ycombinator.com/item?id=14267882
Thanks! Macroexpanded:
Haskell, Ada, C++, Awk: An Experiment in Prototyping Productivity (1994) [pdf] - https://news.ycombinator.com/item?id=33936366 - Dec 2022 (49 comments)
Haskell, Ada, C++: An Experiment in Prototyping Productivity (1994) [pdf] - https://news.ycombinator.com/item?id=19570776 - April 2019 (55 comments)
Haskell vs. Ada vs. C++ an Experiment in Software Prototyping Productivity (1994) [pdf] - https://news.ycombinator.com/item?id=14267882 - May 2017 (59 comments)
Haskell vs. Ada vs. C++ vs. Awk vs (1994) [pdf] - https://news.ycombinator.com/item?id=13275288 - Dec 2016 (68 comments)
Haskell, Ada, C++, Awk: An Experiment in Prototyping Productivity (1994) [pdf] - https://news.ycombinator.com/item?id=7050892 - Jan 2014 (24 comments)
Haskell v Ada v C++ v Awk... An Experiment in Software Prototyping Productivity - https://news.ycombinator.com/item?id=7029783 - Jan 2014 (23 comments)
Wow. I read the article when it first came out. Periodically, it gets re-posted. My take on the article now is still what I thought then: when hiring a team of SW engineers to build something, it is critical to choose a language that a large number of potential candidates know - "know" as in have already written tens of thousands of lines of it. Concurrent Euclid might be a great language (dating me), but the pool of engineers who really know it is vanishingly small.
My favorite part of this is the fact that the Haskell solution is considered "too clever".
I think the reaction of many people would still be the same 30 years later today. Tooling and standardization sure helped with C++, but the authors pointed out the psychological and sociological barriers involved. While some of these barriers have eroded in the past decades (evolving imperative languages incorporated more and more functional aspects over time), pure functional languages are still perceived as "too clever" by many.
I also wonder how the literate programming approach taken with the Haskell solution compares to the "test harness" used by the C++ implementation in terms of documentation. It has been shown that people read code differently from prose, and test suites are somewhat comparable to the "executable specification" that literate programming provides, with the former aligning more closely with how people actually read the implementation.
It'd be interesting to conduct a similar study today with more contemporary contestants like Haskell, C++20, Rust, Python (it's about prototyping after all), Java, C#/F#, and Julia. It'd also be intriguing to learn whether the perception of the functional solutions changed over time.
"Clever" code is an anti-feature.
Writing code is already hard enough as-is due to the complexity imposed by business requirements. On top of that, reading and debugging code is significantly harder than writing it. It is extremely easy to end up with an incredibly clever codebase, which in practice is just a completely unmaintainable mountain of technical debt.
Take for example something like the Fast Inverse Square Root algorithm [0]. It is incredibly clever, but if you randomly come across it, most developers won't be able to easily understand how it works. The well-known variant uses undefined behavior, gives a suboptimal result, and doesn't have any meaningful comments. In other words: it cannot be maintained. Having one or two of those in places where they are absolutely necessary isn't too bad, but imagine having to deal with bugs in a codebase where you come across stuff like this every single day.
Writing clever code is easy. It is far harder, and far more important, to write simple code.
[0]: https://en.wikipedia.org/wiki/Fast_inverse_square_root
That's not an anti-feature because of cleverness. It's a failure of engineering. Just like everything else, clever code needs to be encapsulated behind a good interface, be well documented, be well tested.
> if you randomly come across it most developers won't be able to easily understand how it works
So document it.
> uses undefined behavior
So fix the undefined behavior. Use memcpy or bit_cast.
> gives a suboptimal result
That's probably intentionally sacrificing accuracy for speed. Document it.
> doesn't have any meaningful comments
Then add some.
A clever algorithm is never the problem by itself. The problem is undocumented, unencapsulated clever algorithms with unknown bugs.
Maybe the best example of this would be memcpy. It's extremely clear what it does, but the implementations would surely be "clever" code. And yet you don't need to know anything about SSE to use it.
What I find interesting is that the Haskell solution was the only one to use higher order functions. Assuming they also count virtual functions in languages like C++ to be higher order, I think a part of the difference here is in design attitude rather than an inherent part of the languages studied.
I really like the idea of comparing languages in a real-ish scenario of development, written by independent expert-in-language developers! As a web dev, I'm particularly interested in the idea of this for comparing the various web frameworks (including "no framework").
Some thoughts on the experiment:
- To get a better idea of the impact of the language on authors' thought processes, it'd probably have to include submissions from more authors in each language. With just one (or so) submission per language, I could see there being variation in expertise.
- I'm curious to see what the documentation looks like here, given that there's so much written in some of the submissions and that the paper authors value it so highly. Is it used to explain what the code does, indicating potentially too-complex code, or is it explaining the whys?
- In the "Lessons Learned" section, it's mentioned that other reviewers were not as impressed with Haskell. I'm curious if their reactions were included in the evaluation - to me, these reactions would reduce the success for the understandability (and learnability?) criterion. The paper authors seem to have written this off as "If functional languages are to become more widely used, various sociological and psychological barriers must be overcome".
> to me, these reactions would reduce the success for the understandability (and learnability?) criterion
You say that of the submission that was sent back to the authors to complete the actual code instead of just sending pseudocode...
Does it say that the submission was sent back?
For the "suspicious" people, this seems to imply the code was final:
> It is significant that [people] were all surprised and suspicious when we told them that Haskell prototype P1 (see appendix B) is a complete tested executable program.
For the people critiquing "cleverness", that seems completely valid whether or not it's actual code or pseudocode.
You can complain about at most 1 of "it's hard to read" and "it's too simple, it doesn't look like you wrote the entire code".
Interesting. I think it's the wrong test, though. (I mean, look, it's hard to get data on actual software engineering. They got actual data, and they published it. It's more than most people ever do.)
I think the real test would be to do the same experiment, but not with a prototype. It would be to write a finished program, and then maintain it for a decade or three. (But of course, nobody's ever going to run that experiment - it would be too expensive, plus the time is too long for the "publish or perish" world.)
The point is, more matters than just "how fast can I develop". How fast can I develop code that can be maintained for a long time by other people? How hard is it for them to maintain? How does the choice of language affect that? How does how fast it was developed affect that?
In the real world, speed of development is only one variable, and maybe not the most important one. (And, yes, I'm complaining about the data being inadequate, after noting earlier how rare it was to get data at all...)
There is also the very real-world consideration of: How do I hire people to maintain this beast?
Sometimes, you have to bow to industry trends. Otherwise, you're one or two resignations away from being dead in the water. A Haskell solution might be svelte and easy to reason about, for a Haskell programmer. But you have to also be prepared to train new hires on all that, and you may not have that lead time.
Yes, Haskell isn't anywhere near good enough on this test.
They should redo this and add Rust - not because Rust is the new hype, but because productivity and security also depend on the ecosystem, and all of these languages have different stances, issues, and strong points.
Under "Lessons Learned" section;
Haskell appeared to do quite well in the NSWC experiment; even better than we had anticipated! The reaction from the other participants, however, in particular those not familiar with the advantages of functional programming, was somewhat surprising, and is worth some discussion. There were two kinds of responses:
In conducting the independent design review at Intermetrics, there was a significant sense of disbelief. We quote from [CHJ93]: "It is significant that Mr. Domanski, Mr. Banowetz and Dr. Brosgol were all surprised and suspicious when we told them that Haskell prototype P1 (see appendix B) is a complete tested executable program. We provided them with a copy of P1 without explaining that it was a program, and based on preconceptions from their past experience, they had studied P1 under the assumption that it was a mixture of requirements specification and top level design. They were convinced it was incomplete because it did not address issues such as data structure design and execution order."
The other kind of response had more to do with the "cleverness" of the solution: it is safe to say that some observers have simply discounted the results because in their minds the use of higher-order functions to capture regions was just a trick that would probably not be useful in other contexts. One observer described the solution as "cute but not extensible" (paraphrasing); this comment slipped its way into an initial draft of the final report, which described the Haskell prototype as being "too cute for its own good" (the phrase was later removed after objection by the first author of this paper).
We mention these responses because they must be anticipated in the future. If functional languages are to become more widely used, various sociological and psychological barriers must be overcome. As a community we should be aware of these barriers and realize that they will not disappear overnight.
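The "higher-order functions to capture regions" idea can be illustrated with a small sketch (my own, not the prototype's actual code): a region is just a predicate on points, and region operators are ordinary function combinators.

    type Point  = (Double, Double)
    type Region = Point -> Bool          -- a region is "the set of points it contains"

    circle :: Double -> Region
    circle r (x, y) = x * x + y * y <= r * r

    halfPlaneAbove :: Double -> Region
    halfPlaneAbove c (_, y) = y >= c

    intersectR :: Region -> Region -> Region
    intersectR r1 r2 p = r1 p && r2 p

    outsideR :: Region -> Region
    outsideR r p = not (r p)

    -- "Is this point inside the allowed zone?" is just function application:
    inZone :: Region
    inZone = circle 10 `intersectR` outsideR (halfPlaneAbove 5)

    main :: IO ()
    main = print (inZone (1, 1), inZone (1, 7))   -- (True, False)

Whether one reads this as elegant or as "cute but not extensible" is exactly the sociological split the authors describe.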
(1994)
Added above. Thanks!
TLDR: "Haskell vs Ada vs C++" but a lisp wins in development hours by huge margin
I wonder if this is the 'relational lisp' in question
https://www.ap5.com/ap5-man.html
or
https://ieeexplore.ieee.org/document/13081
From what I understand, there was a successor to AP5 called Relational Lisp (-> rellisp). I don't think I've seen the source for that. AP5 is available, and the implementation has seen updates over many years.