It's awkward to work with deeply nested structures. Think about having a map of customer objects, each holding a list of addresses, where you need to capitalize everyone's first address for some reason. You'd really want a fully fleshed-out lens library, and maybe even macros for auto-generating lenses for your concrete data structures, to make it easy to 'update' (via a persistent structure) without having to deal with all the cruft of doing it by hand.
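To make the tedium concrete, here is roughly what that update looks like by hand, without lenses. This is a sketch in Java with hypothetical `Customer`/`Address` record types (all names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NestedUpdate {
    // Hypothetical domain types, purely for illustration.
    record Address(String street) {}
    record Customer(String name, List<Address> addresses) {}

    // Capitalize the street of each customer's first address,
    // building a new map; nothing is mutated in place.
    static Map<String, Customer> capitalizeFirstAddress(Map<String, Customer> customers) {
        return customers.entrySet().stream().collect(Collectors.toMap(
            Map.Entry::getKey,
            e -> {
                Customer c = e.getValue();
                if (c.addresses().isEmpty()) return c;
                List<Address> addrs = new ArrayList<>(c.addresses());
                addrs.set(0, new Address(addrs.get(0).street().toUpperCase()));
                return new Customer(c.name(), List.copyOf(addrs));
            }));
    }

    public static void main(String[] args) {
        var before = Map.of("c1", new Customer("Ann", List.of(new Address("main st"))));
        var after = capitalizeFirstAddress(before);
        System.out.println(after.get("c1").addresses().get(0).street());  // MAIN ST
        System.out.println(before.get("c1").addresses().get(0).street()); // main st
    }
}
```

Every level of nesting needs its own copy-and-rebuild step; a lens would collapse all of that into one composable accessor.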
which pattern-matches on some `object` (a map) and does the processing. We find this less fragile than specifying an explicit path to an element, and it can also work in a polymorphic fashion. On the other hand, there is a risk of a false positive (modifying an address that you shouldn't), but you can mitigate that risk with additional checks (in the case of a customer, you can check for an additional set of fields that are specific to that object/map).
I agree that you want specialised mini-DSLs for querying and updating highly nested structures (... which you should probably avoid creating anyway), but for such a simple task you could just do this (Clojure example):
In the JavaScript world, there is ImmutableJS, which uses this style of updates. However, there is a better approach, which is what ImmerJS uses: you write a function in which you mutate the target data structure, wrap it in a `produce` call, and the library picks up on those mutations and creates a new copy of the data that is structurally shared with the original.
That Clojure code doesn't quite capture the essence of ImmerJS. The whole point of ImmerJS is that JS has nice, built-in syntax for in-place mutation, and we can reuse that syntax to generate updates to an immutable data structure, so long as we scope the mutation syntax such that everything outside an individual block of it stays immutable. That it is implemented with JS proxies is something of an implementation detail (it could, e.g., be implemented with linear or affine types, or something like Haskell's ST type).
In this sense it's closer to Clojure's transients, if Clojure came prepackaged with special mutation syntax (notably assignment via =) and if transient-ness could be propagated transitively to all nested structures in a data structure.
Nothing wrong with using incredibly un-fancy data structures, like arrays, in an FP way. You just need to make sure that you don't do operations that modify individual elements, but instead work in large batches. So no for loops inserting elements one by one; use map, filter, flatmap...
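The batch style above can be sketched in a few lines of Java Streams (the `doubleEvens` helper is a made-up example):

```java
import java.util.List;

public class BatchOps {
    // Derive a new list in one batch pass instead of mutating
    // elements one by one inside a for loop.
    static List<Integer> doubleEvens(List<Integer> xs) {
        return xs.stream()
                 .filter(x -> x % 2 == 0) // keep the evens
                 .map(x -> x * 2)         // transform in bulk
                 .toList();               // fresh list; xs is untouched
    }

    public static void main(String[] args) {
        System.out.println(doubleEvens(List.of(1, 2, 3, 4, 5))); // [4, 8]
    }
}
```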
Most good FP data structures will be tree-like, but with a low branching factor and chunky leaves. Now of course that does not save you from lots of memory consumption if you use a language/platform where even primitives are boxed, like the JVM. But that is a completely separate topic.
Sharing mutable data without a mutex (which suffers from unbounded contention) is hard. Approaches that work include updating persistent data structures and sharing the new copy, or sharing diff objects over a lock-free queue.
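The second approach — diff objects over a lock-free queue — might look like this sketch, assuming a single consumer thread that owns the data structure (the `Append` diff type is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DiffQueue {
    // One made-up diff type: "append this item". A real system would
    // have one diff/command type per kind of change.
    record Append(String item) {}

    // The consumer owns the list outright and applies queued diffs,
    // so the list itself needs no mutex.
    static List<String> drain(ConcurrentLinkedQueue<Append> queue) {
        List<String> state = new ArrayList<>();
        for (Append diff; (diff = queue.poll()) != null; ) {
            state.add(diff.item());
        }
        return state;
    }

    public static void main(String[] args) throws Exception {
        var queue = new ConcurrentLinkedQueue<Append>();
        // Producer threads enqueue diffs instead of touching shared state.
        Thread producer = new Thread(() -> {
            queue.add(new Append("a"));
            queue.add(new Append("b"));
        });
        producer.start();
        producer.join();
        System.out.println(drain(queue)); // [a, b]
    }
}
```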
Sharing the copy is not what's solved by lenses. In a certain sense lenses are just a way to try to do away with the boilerplate of updating nested immutable data structures. It is very easy to write this boilerplate (in the sense that it is straightforward and hard to mess up). It's just mind-numbingly tedious.
Although I don't know what you mean by growing lenses.
Mutex + inner mutation is no easier than CAS (the usual solution for concurrent writing of immutable data structures, a la Java's AtomicReference) or STM (another popular one), and in my opinion it is significantly harder as soon as you have multiple mutexes.
CAS is a massive pain in the ass for complex changes. Lenses are a way to point at part of an object, which can in theory make it less painful, since you can redo a change without necessarily redoing the work to set up the change. "Growing" just refers to growing a codebase that uses them by adding them, is all.
I am not against FP; I think the point that "it is good if you can fix performance" is very accurate. I just think it is also important to acknowledge that the paradigm can be more complex in situations where it is supposed to help, leading to a mixed bag.
Similar to distributed databases: your database can now be phenomenally powerful, but you can't do a stored proc anymore without losing that power.
It can be very effective and totally worth it, but it isn't necessarily a pure win.
What are you CASing? If the object is too large you will have contention to the point that you might as well be single threaded. If the object is too small you now have to CAS multiple things which is far from trivial.
CAS is a primitive; it isn't complex itself, but it can be complex to work with once you are doing non-trivial work. Just like locks: a global lock is dumb simple, but a real locking system can be as complex as you will let it.
> If the object is too large you will have contention to the point that you might as well be single threaded. If the object is too small you now have to CAS multiple things which is far from trivial.
Right, but both of those things are true for locks too, right? CAS seems no harder than locks.
Retry logic is hidden for locks while in your face for CAS.
Additionally, while multiple locks are annoying, they are way easier than multiple CASes. "Have a global order for locks" is the hard-but-solvable problem for multiple locks. For CAS, if you need to CAS two dependent things you... I don't know, it depends.
> Retry logic is hidden for locks while in your face for CAS.
It's hidden in both cases (usually CAS instructions are hidden behind an `update` interface, as in Java's Atomic* family, rather than used directly, in the same way that a lock is generally an interface over a TAS instruction/spin lock that upgrades to a wait queue).
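For example, with Java's `AtomicInteger` the retry loop never appears in caller code — `updateAndGet` takes a pure function and handles the CAS-and-retry internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HiddenRetry {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        // updateAndGet wraps the CAS-and-retry loop; the caller only
        // supplies an update function, much like a lock interface
        // hides the underlying spin/wait machinery.
        int result = counter.updateAndGet(x -> x + 1);
        System.out.println(result); // 1
    }
}
```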
> For CAS, if you need to CAS two dependent things you... I don't know, it depends.
You use nested atomic references. Just like locks it's not a great way of doing things, and an analogous problem to ordering locks rears its head (by virtue of nesting you cannot mess up ordering in the strict sense, but you can accidentally "cross the boundaries" of two atomics in an update that goes against the nesting order), but it's doable in the same way as locks.
The usual CAS-like but better approach is STM (which is where immutability really shines).
I'm still not seeing how CAS is any harder than locks.
IMO the hard part of immutable structures is mutation (solved by https://lib.rs/crates/im), the hard part of diffing is writing one "command" subclass per type of operation (which can be more or less manageable), and the hard part of mutexes is remembering to lock the mutex or rwlock on every single access (Rust fixes this), avoiding priority inversions (Rust doesn't help), and avoiding deadlocks (Rust doesn't help).
Under the hood it's effectively a single CAS instruction that loops on failure (which only occurs under contention, but then you have waiting with locks too).
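Spelled out, that loop is roughly the following (a hand-rolled version of what `updateAndGet` already does for you):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasLoop {
    // Roughly what updateAndGet does internally: read, compute,
    // attempt the CAS, and retry only if another thread raced us.
    static int increment(AtomicInteger ref) {
        while (true) {
            int seen = ref.get();
            if (ref.compareAndSet(seen, seen + 1)) {
                return seen + 1;
            }
            // CAS failed: another thread won the race; try again
            // against the freshly updated value.
        }
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        increment(counter);
        increment(counter);
        System.out.println(counter.get()); // 2
    }
}
```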