Always fun to see this kind of Rosetta Code thing; feels like no matter the topic, there's something to glean from reading!
That said: if I ever gave the palindrome question to a candidate (I haven't) and they pulled out a `reverse` library func/keyword, I'd encourage them to do more.
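What "more" might look like (an illustrative sketch, not anyone's actual interview answer): the same check, with the reverse hand-rolled.

    isPalindrome :: Eq a => [a] -> Bool
    isPalindrome xs = xs == rev xs []
      where
        -- hand-rolled reverse via an accumulator
        rev []     acc = acc
        rev (y:ys) acc = rev ys (y : acc)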
sshine · 4h ago
I live-coded Haskell a couple of times at job interviews for non-Haskell positions where you were free to choose.
It’s fun because the interviewer often doesn’t grok the paradigm. In one case, I was asked to make a password generator, and I went with QuickCheck’s Gen, which lets me write a random generator of things without explicitly defining a single function (they’re all monads).
So you go from somehow describing the thing you want to make to “I’m done.” :-D
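A minimal sketch of the Gen idea (illustrative; the actual interview code isn't shown here):

    import Test.QuickCheck (Gen, elements, generate, vectorOf)

    -- a 16-character password over a fixed alphabet;
    -- note QuickCheck's PRNG is not cryptographically secure
    passwordGen :: Gen String
    passwordGen = vectorOf 16 (elements alphabet)
      where alphabet = ['a'..'z'] ++ ['A'..'Z'] ++ ['0'..'9'] ++ "!@#$%"

    main :: IO ()
    main = generate passwordGen >>= putStrLn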
18172828286177 · 4h ago
Did you get the job?
sshine · 2h ago
Yeah.
I also refused to solve a part of the task because it would lead to bad password policy. Instead I property-tested the generator.
djtango · 4h ago
I remember I did an Advent of Code day 1 in Haskell a while back.
The meat of the code was a one liner. But opening a file, throwing away the first line and parsing a CSV took a lot more head scratching, especially if you've only ever done puzzles in Haskell.
Makes the hard things easy and the easy things hard
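For reference, the boilerplate half might look like this (a quick-and-dirty sketch, assuming a header row and integer fields; "input.csv" is hypothetical):

    main :: IO ()
    main = do
      contents <- readFile "input.csv"
      let rows = drop 1 (lines contents)                -- throw away the first line
          parseRow r = read ("[" ++ r ++ "]") :: [Int]  -- cheap CSV-of-ints trick
      print (map parseRow rows)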
ngruhn · 4h ago
Same. Until I discovered those parser combinator libraries which are THE way to do parsing in Haskell. It makes parsing so pleasant you'll never need a regex again. A parser for that CSV file probably looks like this:
    line = decimal `sepBy` char ','
    file = line `sepBy` newline
That's it. It almost reads like prose.
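Fleshed out with megaparsec (an assumption on my part; attoparsec and friends look nearly the same), that sketch becomes a runnable program:

    import Data.Void (Void)
    import Text.Megaparsec (Parsec, errorBundlePretty, parse, sepBy)
    import Text.Megaparsec.Char (char, newline)
    import Text.Megaparsec.Char.Lexer (decimal)

    type Parser = Parsec Void String

    line :: Parser [Int]
    line = decimal `sepBy` char ','

    file :: Parser [[Int]]
    file = line `sepBy` newline

    main :: IO ()
    main = do
      input <- readFile "input.csv"  -- hypothetical input file
      either (putStrLn . errorBundlePretty) print (parse file "input.csv" input)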
djtango · 4h ago
Thanks for sharing! This was in a pre-LLM era and I couldn't be bothered to research the right way to do this. Those parser combinators are excellent, so clean
ReDress · 2h ago
> Silly job interview questions in Haskell.
Well, it's not clear to me why we're calling these silly. They mostly just seem simple to me.
lynx97 · 5h ago
Well, that fizzle implementation is not a good example of a great answer. It checks the two conditions twice. At the least, I would use a where clause to define fizz and buzz.
internet_points · 1h ago
which is nice with the laziness because you only eval what you need, e.g.
    fizzle :: Int -> String
    fizzle n
      | fizz && baz  = "Fizz Baz!"  -- if this branch is taken, buzz is never evaluated
      | fizz && buzz = "Fizz Buzz!"
      | fizz         = "Fizz!"
      | buzz         = "Buzz!"
      | otherwise    = show n
      where
        fizz = n `mod` 3 == 0
        buzz = n `mod` 5 == 0
        baz  = n `mod` 7 == 0
thdhhghgbhy · 4h ago
Typical Haskell coding post, with no mention of the order or efficiency of the algorithm solutions. The thing is, even in basic coding quizzes, if the interviewer is worth their salt, they will ask how to improve the efficiency of the naive solutions first put forward, like the palindrome answer in this post.
What is the order of sumNToTotal?
dmurray · 2h ago
It does discuss performance and points out some places where the Haskell implementation makes the natural naive solutions performant.
> While it's true that sort has an O(nlogn) performance profile, one interesting thing here is that sorting is lazy in Haskell! This means that if our two strings are unequal, they will only be sorted far enough to determine inequality!
I don't understand exactly how this works - don't all O(n log n) sorting algorithms still have to do some work on the other elements of the list before pulling out the smallest one? If you just scanned and pulled the smallest one to the front each time that is O(n^2). And there's an implementation detail here that differences at the start of the sorted string will be caught more quickly than at the end, so real-world behaviour is dependent on your choice of sort order.
Agreed that the performance analysis is quite superficial and that some of the solutions, like sumNToTotal, must be very sub-optimal.
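One way to probe the lazy-sort claim empirically (a throwaway sketch using Debug.Trace, not from the article): log every comparison, then demand only the head.

    import Data.List (sortBy)
    import Debug.Trace (trace)

    -- a comparison function that announces each call
    noisy :: Int -> Int -> Ordering
    noisy a b = trace ("comparing " ++ show a ++ " " ++ show b) (compare a b)

    main :: IO ()
    main = print (head (sortBy noisy [4, 2, 7, 1, 3, 6, 5]))

With GHC's lazy merge sort, this prints noticeably fewer comparisons than a full sort would make; later elements are only paid for when demanded.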
kqr · 2h ago
The matchesSum check is constant in the length of the input list. The filter adds an iteration over all results of (combinations n xs). If the complexity of (combinations n) is T(n) in xs, then by analogy of its recursive definition we have two recursive calls that are T(n-1), we have an fmap that is O(1) and a concatenation that is O(1). That sets up the recurrence T(n) = 2T(n-1) making the (combinations n) function O(n) in xs.
Thus the whole thing is O(n²). It's not like this is particularly hard for Haskell. If anything, it's often easier because we can set up the recurrence by analogy with the definition.
iamevn · 1h ago
can you not narrow it down further to O(k * nCr(n, k)) (n=size of input, k=combination size) since it does k conses for each of the nCr(n, k) combinations?
(the final filter sums k elements nCr(n, k) times as well)
benmmurphy · 1h ago
How is combinations O(n)? For n choose 3, generating the combinations is going to approach n^3, since the formula is basically (n * n * n) / constant, and for sumToAny generating all the combinations is 2^n.
Paracompact · 3h ago
The execution order, or the order of the output list? The latter should be self-evident from the function body (first elements get collected first), and the former holds no tricks either since filters are evaluated from the head first, and concat (<>) allows lazy access to the first list's elements.
This is just a beginner's Haskell post. If an interviewer really meant to test optimization knowledge, I would hope they wouldn't ask me to optimize one O(n) utility function into another.
iamevn · 2h ago
With N as the length of the subsets and X as the length of the input, `combinations n xs` basically walks through the list xs and, at each element, conses that element onto each combination of size N-1 of the later elements, then appends the resulting lists together. This is on the order of N * nCr(X, N), i.e. O(N * X!/(N!(X-N)!)), which dominates the overall runtime. The final filter walks through each combination and does O(N) work per combination to find the ones that sum to the desired amount, again O(N * X!/(N!(X-N)!)).
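For concreteness, the definition under discussion is presumably something like the textbook version (my reconstruction, not the article's verbatim code):

    combinations :: Int -> [a] -> [[a]]
    combinations 0 _      = [[]]
    combinations _ []     = []
    combinations n (x:xs) =
      fmap (x:) (combinations (n - 1) xs)  -- combinations that keep x
        <> combinations n xs               -- combinations that skip x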
Sourabhsss1 · 5h ago
The words chosen for the functions make them easy to understand.
pensatoio · 7h ago
Fifteen years into my career, and I'm finally realizing that "expressive" languages are practically unreadable.
tikhonj · 6h ago
unfamiliar languages are practically unreadable
rhdjsjebshjffn · 6h ago
I don't see how that's any better for a Haskell shop. I've got some empathy, but you chose a rough life.
yakshaving_jgt · 5h ago
The trick is to hire people who are familiar with the language.
No comments yet
rrradical · 6h ago
As a haskell programmer, all of that was easily readable.
sgarland · 6h ago
As a non-Haskell programmer, all of that was easily readable. Straightforward words for functions makes that pretty easy.
RHSeeger · 6h ago
There were a couple of places that took me a couple reads to figure out, like the fact that `(x:)` was "prepend". But overall, I followed the code pretty well. From the context of someone that wrote a small amount of Haskell a decade ago.
kqr · 6h ago
The : operator is the linked list data constructor. It takes an element and a list and creates a new linked list by linking the element to the existing list. It does the opposite when used in a pattern match: separates out the first element in a linked list.
It is also an operator, meaning it can be used with infix notation, as in (x : xs). Haskell has something called operator sections, where if one supplies only one of the arguments to an operator it will return a function expecting the other argument. In other words
    (x:) == \xs -> (x:xs)

and

    (:xs) == \x -> (x:xs)
This can be used as in this article, to create a function that prepends x to any list. Another common example is (1+) which increments any number it is given, or (:[]) which turns any value into a one-element list.
It can also be used much more cleverly -- especially considering that any two-argument function can be turned into an operator with backticks -- but then (in my opinion) readability starts to suffer.
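Concretely, in GHCi:

    ghci> map (1+) [1, 2, 3]
    [2,3,4]
    ghci> map (:[]) "abc"
    ["a","b","c"]
    ghci> map ('x':) ["s", "ys"]
    ["xs","xys"]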
shakna · 5h ago
So it's the equivalent of Lisp's cons then?
djtango · 4h ago
Yeah so prepend is just currying cons with the head
sshine · 1h ago
Yes, : is pronounced cons.
StopDisinfo910 · 6h ago
It’s partial application of cons via the operator, admittedly a poor choice from Haskell, a language which likes operators a bit too much. I think eta-expansion makes the whole thing clearer: (\xs -> x:xs) but most Haskellers would disagree.
The article also features examples of point-free style, another unfortunate trend for readability.
As long as you use operators sparingly, don’t abuse partial application and prefer explicit lambdas to composition, Haskell is fairly readable. The issue is that approximately no Haskeller writes Haskell this way.
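For a taste of the tradeoff (illustrative):

    -- the same function, point-free vs. explicit lambda
    f  = (2 *) . (+ 1)
    f' = \x -> 2 * (x + 1)

Which of those reads better is exactly the disagreement here.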
Skeime · 2h ago
I don't use Haskell nearly enough to call myself a Haskeller but I will still disagree. Yes, operator sections are yet another thing to learn but I find them very intuitive, and actually easier to read than the equivalent lambda expression because I don't have to match up the bound variable.
(For example, (\x -> x ++ y) and (\y -> x ++ y) look pretty similar to me at first glance, but (++y) and (x++) are immediately distinguishable.)
Of course, this is reliant on knowing the operators but that seems like a mostly orthogonal issue to me: You still need to know the operator in the lambda expression. That said, the niceness of sections gives people yet another reason to introduce operators for their stuff when arguably they already are too prevalent.
sshine · 1h ago
It’s not that those language features are hard to understand. They’re all syntactic and don’t bring a ton of theory with them. It’s just that the tower of understanding for basic programs is very tall, and the tendency to introduce abstraction essentially never ends. I spent ten years with Haskell as my go-to language and there are still things I don’t understand and haven’t explored. It’s not like that with Python, or Go, or even Rust.
90s_dev · 5h ago
I honestly just skimmed the code and assumed it probably does what I would guess it does. It seemed to make sense and be straightforward... assuming my guesses were right, I guess?
CamperBob2 · 6h ago
As a C programmer, that's the worst FizzBuzz implementation ever. You're not supposed to special-case % 15, just

    bool m3 = !(i % 3);
    bool m5 = !(i % 5);
    if (m3) printf("Fizz");
    if (m5) printf("Buzz");
    if (m3 || m5) printf("\n");
    else printf("%d\n", i);
You can turn in your visitor badge at the front desk, and they'll call you an Uber.
unsnap_biceps · 5h ago
They have

    n `mod` 3 == 0 && n `mod` 5 == 0

And you have

    if (m3 || m5)

I really don't see what point you're trying to make here...
tayo42 · 4h ago
The article has a special string that says "fizz buzz".
They're saying it's unnecessary: if you go in order, you first print "Fizz" and then print "Buzz", which always produces "FizzBuzz" for the equivalent of mod 15, so you don't need a special string like that.
The "if (m3 || m5)" is just printing a newline, because under that condition you printed something earlier.
hnlmorg · 4h ago
It’s still an extra condition. So it really doesn’t matter whether you’re printing the “Fizz Buzz” string or not; it’s the same amount of branching.
coolcase · 5h ago
Nice. C looks like such a safe language, with the casual use of an int as a bool.
Joel_Mckay · 5h ago
Depends which of the hundreds of C compilers you used, as some implement "bool" as uint8_t, others as unsigned char, and others bit-pack it under the optimizer.
With C, any claim one makes about repeatability is always wrong at some point, depending on the version compliance.
I like C, but Haskell is a happy, optimistic syntax... Julia is probably the language I'd wager on becoming more relevant as Moore's law's corpse begins to stink. =3
kqr · 5h ago
I agree. But you're also not supposed to have separate booleans for each special print, because when you have many special prints it gets annoying to extend.
    import Control.Monad (guard)
    import Data.Foldable (for_)
    import Data.Maybe (fromMaybe)

    fizzbuzz :: Int -> String
    fizzbuzz i =
      fromMaybe (show i) . mconcat $
        [ "fizz" <$ guard (i `rem` 3 == 0)
        , "buzz" <$ guard (i `rem` 5 == 0)
        ]

    main =
      for_ [1..100] $
        putStrLn . fizzbuzz
This allows you to add special prints by adding just the one line of code, changing nothing else.
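E.g. extending it to sevens (purely illustrative) is exactly the one-line change:

    [ "fizz" <$ guard (i `rem` 3 == 0)
    , "buzz" <$ guard (i `rem` 5 == 0)
    , "bazz" <$ guard (i `rem` 7 == 0)  -- the only new line
    ]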
Glyptodon · 5h ago
I can't tell if you're being serious or making a joke.
internet_points · 1h ago
and people say haskell is hard to get
chpatrick · 6h ago
That's an extra system call for printing the newline.
CamperBob2 · 6h ago
Well, duh, yeah, if you call setbuf(stdout, NULL) first.
vostok · 6h ago
I think this person wants a space between Fizz and Buzz.
CamperBob2 · 6h ago
Exactly, which means the interviewer didn't even state the problem correctly. The train had already jumped the rails by the time the candidate started writing. Hopefully HR will agree that they deserve each other.
YZF · 5h ago
And yet it's funny how many times you see the supposed "correct" solution missing that 3x5=15. I wonder how AI would answer fizzbuzz; is that part of any standard benchmark?
sponnath · 6h ago
Not all programming languages are obvious derivatives of C. Haskell is pretty readable once you spend some time getting to know the syntax.
90s_dev · 5h ago
The semantics are very different. In terms of its lazy evaluation, it's much closer to a mathematical proof than to a C program with different syntax.
Darmani · 5h ago
I prefer to think of Haskell-like lazy evaluation as constructing a dataflow graph. The expression `map f (sort xs)` constructs a dataflow graph that streams each output of the sort function to `f`, and then printing the result begins running that job. Through that lens, the Haskell program is more like constructing a Spark pipeline. But you can also think of it as just sorting a list then transforming each element with a function. It only makes a difference in resource costs or when there's potential nontermination involved, unless you use unsafe effects (e.g.: unsafePerformIO).
Is there a way to think of proofs as being lazy? Yes, but it's not what you think. It's an idea in proof theory called polarization. Some parts of a proof can be called positive or negative. Positive parts roughly correspond to strict, and negative roughly correspond to lazy.
To explain a bit more: Suppose you want to prove all chess games terminate. You start by proving "There is no move in chess that increases the number of pieces on the board." This is a lemma with type `forall m: ChessMove, forall b: BoardState, numPieces b >= numPieces (applyMove m b)`. Suppose you now want to prove that, throughout a game of chess, the amount of material is decreasing. You would do this by inducting over the first lemma, which is essentially the same as using it in a recursive function that takes in a board state and a series of moves, and outputs a proof that the final state does not have more material than the initial state. This is compact, but intrinsically computational. But now you can imagine unrolling that recursive function and getting a different proof that the amount of material is always decreasing: simply write out every possible chess game and check. This is called "cut elimination."
So you can see there's a sense in which every component of a proof is "executable," and you can see whether it executes in a strict or lazy manner. Implications ("If A, then B") are lazy. Conjunctions ("A and B") can be either strict or lazy, depending on how they're used. I'm at the edge of my depth here and can't explain more -- in honesty, I never truly grokked proof polarization.
Conversely, in programming languages, it's not strictly accurate to say that the C program is strict and the Haskell program is lazy. In C, function definitions and macro expansions are lazy. You can have the BAR() macro create a #error, and yet FOO(BAR()) need not create a compile error. In Haskell, bang patterns, primitives like Int#, and the `seq` operator are all strict.
So it's not the case that proofs are lazy and C is strict and Haskell is lazy so it's more like a proof. It's not even accurate to say that C is strict and Haskell is lazy. Within a proof, and within a C and a Haskell program, you can find lazy parts and strict parts.
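To make that last point concrete on the Haskell side (a sketch, not from the comment):

    {-# LANGUAGE BangPatterns #-}

    -- lazy by default: the error below is never forced
    lazyDemo :: Int
    lazyDemo = const 42 (error "never evaluated")

    -- strict on demand: the bang pattern forces the accumulator each step
    strictSum :: [Int] -> Int
    strictSum = go 0
      where
        go !acc []       = acc
        go !acc (x : xs) = go (acc + x) xs

    main :: IO ()
    main = do
      print lazyDemo                    -- 42
      print (strictSum [1 .. 1000000])  -- 500000500000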
bcrosby95 · 5h ago
How much time have you spent with functional programming languages?
djtango · 4h ago
We once hired a F# developer to write Clojure and within a week, he was writing some of the clearest Clojure I've read.
Learn paradigms not languages...
I've always liked this solution, which avoids that: https://archive.is/KJ39B