Spotting when something can be mutated in place instead of creating a new instance is a fairly common optimisation. There's a paper that does it at runtime using reference counting, and various compilers for functional languages try to do it ahead of time.
That would roughly amount to updating each element of the array in sequence without allocating a new array. This is an extension of that idea to the case where the updates can occur simultaneously, which is a more complicated lifetime invariant to recognise.
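To make the sequential case concrete, here's a sketch of what the optimisation buys you (names are mine): a pure array update written with `ST`, where `thaw` performs the copy that a reuse-aware compiler could elide when it proves the input's reference count is 1.

```haskell
import Data.Array
import Data.Array.ST (runSTArray, thaw, getBounds, readArray, writeArray)

-- A pure update of every element. Semantically this copies the array,
-- but a compiler that knows the input is uniquely referenced could
-- reuse its storage and skip the copy entirely.
incAll :: Array Int Int -> Array Int Int
incAll a = runSTArray $ do
  ma <- thaw a                      -- the copy a reuse analysis would elide
  (lo, hi) <- getBounds ma
  mapM_ (\i -> readArray ma i >>= writeArray ma i . (+ 1)) [lo .. hi]
  return ma
```

The simultaneous-update case is harder precisely because the compiler must prove no other reference can observe any intermediate state of the whole batch of writes, not just of one element at a time.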
As an aside, compilers probably should have constraint solvers embedded. Lots of compilation is expressible as a constraint problem.
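Type inference is the classic example: generate equality constraints, then solve them by unification. A toy sketch (all names are mine, no occurs-check subtleties beyond the basic one):

```haskell
-- Toy illustration: type inference as constraint solving.
data Ty = TVar Int | TInt | TFun Ty Ty deriving (Eq, Show)

type Subst = [(Int, Ty)]

apply :: Subst -> Ty -> Ty
apply s (TVar v)   = maybe (TVar v) (apply s) (lookup v s)
apply s (TFun a b) = TFun (apply s a) (apply s b)
apply _ t          = t

occurs :: Int -> Ty -> Bool
occurs v (TVar w)   = v == w
occurs v (TFun a b) = occurs v a || occurs v b
occurs _ _          = False

-- Solve a list of equality constraints by unification.
solve :: [(Ty, Ty)] -> Maybe Subst
solve [] = Just []
solve ((a, b) : cs)
  | a == b = solve cs
solve ((TVar v, t) : cs) = bind v t cs
solve ((t, TVar v) : cs) = bind v t cs
solve ((TFun a1 b1, TFun a2 b2) : cs) = solve ((a1, a2) : (b1, b2) : cs)
solve _ = Nothing

bind :: Int -> Ty -> [(Ty, Ty)] -> Maybe Subst
bind v t cs
  | occurs v t = Nothing            -- infinite type, unsolvable
  | otherwise  = do
      s <- solve [ (sub x, sub y) | (x, y) <- cs ]
      return ((v, apply s t) : s)
  where sub = apply [(v, t)]
```

Register allocation (graph colouring) and instruction scheduling are other classic compiler passes that are naturally phrased as constraint problems.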
If we had constraint solvers in our compilers, would refinement types also be a lot easier to implement?
Yes, the list type `[a]` from the Prelude is a linked list.
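Conceptually (the real definition uses built-in syntax, roughly `data [] a = [] | a : [a]`), it's just a cons list:

```haskell
-- Same shape as the Prelude's built-in list type.
data List a = Nil | Cons a (List a)

len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs
```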
> I would have thought they were compiled as "array lists" like Python (arrays of pointers, but with extra space pre-allocated to make extending cheap)
No, and that doesn't quite work for immutable types, since there's nothing you can do with the "extra space pre-allocated". You can't mutate it! Nonetheless, rope-like structures will work fine, and there are surely plenty of them floating around.
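A hypothetical minimal rope (illustration only, names are mine) shows why no pre-allocated slack is needed: concatenation allocates one new node and shares both immutable subtrees.

```haskell
-- A leaf holds a chunk; an internal node caches the total length.
data Rope a = Leaf [a] | Node Int (Rope a) (Rope a)

len :: Rope a -> Int
len (Leaf xs)    = length xs
len (Node n _ _) = n

-- O(1) concatenation: both arguments are shared, never copied.
cat :: Rope a -> Rope a -> Rope a
cat l r = Node (len l + len r) l r

toList :: Rope a -> [a]
toList (Leaf xs)    = xs
toList (Node _ l r) = toList l ++ toList r
```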
If you force computation of a whole list and save it, the representation in memory is just a basic linked list. But that's not really recommended, as it's quite an inefficient way to store and work with non-tiny amounts of data.
There are also true arrays in Haskell (e.g. `Data.Array` from the `array` package), though they see little use in practice: the API is clunky, and most code reaches for the `vector` package instead.
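For instance, with `Data.Array` you get O(1) indexing, but because the array is immutable, every "update" copies the whole thing:

```haskell
import Data.Array

-- Immutable boxed array: fast to index, expensive to "modify".
demo :: ([Int], Int)
demo =
  let a  = listArray (0, 3) [10, 20, 30, 40]
      a' = a // [(2, 99)]           -- builds an entirely new array
  in (elems a', a ! 2)              -- the original is untouched
```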
Perhaps the most flexible structures we have are various tree-based array implementations, similar to Clojure's persistent vectors.
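`Data.Sequence` from `containers` is one example that's always at hand. It's a finger tree rather than Clojure's exact structure, but it's the same idea of a balanced tree posing as an array:

```haskell
import Data.Foldable (toList)
import qualified Data.Sequence as Seq
import Data.Sequence ((|>))

demo :: ([Int], [Int])
demo =
  let s  = Seq.fromList [1, 2, 3] |> 4    -- O(1) append at the end
      s' = Seq.update 0 99 s              -- O(log n) functional update
  in (toList s, toList s')                -- s is unchanged: persistence
```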