
No one is going to argue that your program doesn't work when it seems to work.

Unless you're using threads and locks. Then it's probably a safe bet that it will fail in ways you don't expect. ;)



Actually, my webapps are full of threads. Again, imperative style is not the wrong way and FP the right way; they are just different. Why not let people find their own way? Just give them the information they need to make up their minds. And why not be honest and tell them that learning Haskell is probably more painful than learning to deal with mutable state?


You don't understand what I'm saying... The likelihood that there are no deadlocks or race conditions in your threaded applications is incredibly low without a massive amount of testing. Even experts in the field can mess up and find bugs surfacing years later that they didn't expect. That it's hard to make a non-trivial multi-threaded program bulletproof has been shown both theoretically and empirically.
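To make that failure mode concrete, here's a minimal Java sketch (my own illustration, not code from this thread) of the classic lost-update race: two threads increment a plain int, and because `++` is a read-modify-write, increments can silently vanish. A lock-guarded counter never loses one.

```java
// Two threads each do 100,000 increments on two counters:
// `plain` is unprotected shared state, `safe` is guarded by a lock.
public class LostUpdate {
    static int plain = 0;                        // racy
    static int safe = 0;                         // guarded
    static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                          // not atomic: can lose updates
                synchronized (lock) { safe++; }   // atomic w.r.t. other lock holders
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // `plain` is often less than 200000; `safe` is always exactly 200000.
        System.out.println("plain=" + plain + " safe=" + safe);
    }
}
```

The racy counter may happen to print 200000 on a lucky run, which is exactly the parent's point: "seems to work" proves nothing.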

That's why there is a movement in the software community to consolidate the threaded portions of code and boil them down to the simplest constructs that could possibly work; it reduces the debugging load significantly. And that's why a lot of time is being invested in developing strategies that are atomic and insensitive to thread interleaving, because they let you sidestep the problem entirely. Functional programming is just the leading edge of this trend, which is already sweeping through the imperative and object-oriented world. Software Transactional Memory and Erlang's process model are examples of tenable approaches. These approaches are a roadmap for how we can move forward and make truly, demonstrably reliable threaded programs: programs that don't have land mines waiting for a 1:1000000 shot to set them off and inexplicably ruin everything.
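As a hedged sketch of the Erlang-flavored idea in plain Java (names and structure are mine): exactly one thread owns the mutable state, and every other thread talks to it through a message queue instead of sharing memory, so no lock on the state itself is ever needed.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Actor-style counter: `total` is touched ONLY by the thread running run(),
// so there is no shared mutable state to race on.
public class CounterActor {
    private final BlockingQueue<Integer> inbox = new ArrayBlockingQueue<>(1024);
    private int total = 0;   // owned exclusively by the run() thread

    public void send(int delta) throws InterruptedException { inbox.put(delta); }

    // Drains messages until the poison value 0 arrives, then reports the total.
    public int run() throws InterruptedException {
        while (true) {
            int msg = inbox.take();
            if (msg == 0) return total;   // poison pill: stop and report
            total += msg;
        }
    }

    public static void main(String[] args) throws Exception {
        CounterActor actor = new CounterActor();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) actor.send(1);
                actor.send(0);            // shut down
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        System.out.println(actor.run());  // the owner thread prints 1000
        producer.join();
    }
}
```

The queue is the only synchronization point, which is the "simplest construct that could possibly work" for this shape of problem.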

This isn't about "finding your own way." It's about groping for any handhold as we're about to fall into a dark precipice of totally unmanageable massively-parallel and/or distributed code.

P.S. Haskell only seems harder to you because you're clearly not dealing with all the problems that mutable state brings. It's much easier to ignore something than to learn to deal with it.


There is a reality out there, we're not arguing over pastels vs. chalk vs. pencils. You can write webapps in assembly if you're so motivated and tenacious.




A caller of a function in an imperative environment cannot assume anything about the scope of that function's access of mutable state within the program. My characterization of the situation might be a little spectacular w.r.t. specifics (e.g. 316 arguments, etc), but I've not seen a robust refutation of the principle.

There's no doubt that there is subjectivity w.r.t. practices and methodologies, not to mention domain-specific tendencies and requirements. But it's a little absurd to claim that, very specifically, mutable state is on par with persistent data structures (or, at the very least, immutable state constructs) on almost any axis of reliability or ability to reason about a program's operation, especially in conjunction with any degree of concurrency.
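A small illustration of that reliability axis (my own sketch, not from the thread): an immutable structure can be handed to any number of threads with no locking at all, because nothing can change it out from under a reader.

```java
import java.util.List;

// An immutable list is shared freely across threads; any "update" would
// mean building a new list, never mutating this one.
public class ImmutableShare {
    public static void main(String[] args) throws InterruptedException {
        List<Integer> xs = List.of(1, 2, 3);   // immutable: add()/set() throw
        Runnable reader = () -> {
            int sum = xs.stream().mapToInt(Integer::intValue).sum();
            System.out.println(sum);           // always 6, from any thread
        };
        Thread t1 = new Thread(reader), t2 = new Thread(reader);
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```

No reasoning about interleavings is required, which is exactly what makes immutable state easier to reason about under concurrency.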

"Pain" is definitely subjective. IMO, if you're not feeling at least a little pain, you're not pushing your skills, your craft, etc. Too much pain, and it's possible you've not kept yourself limber enough to stretch for the big wins, regardless of context.


"A caller of a function in an imperative environment cannot assume anything about the scope of that function's access of mutable state within the program"

This really doesn't make sense.

main() { A a = new A(); f(); ... } — the caller main() can assume that "a" is not in the scope of f(). Is that the robust refutation you need?

The caller can further assume that the scope of f() is limited to the part of the state reachable by a path from a global variable or from one of its parameters. (You can see the state as a graph: the nodes are the data and the edges are the references.) Therefore, this scope is not the whole state but a limited subset of it. Claiming otherwise is wrong and does not help bring attention to what actually needs to be improved: this subset is itself a superset of the part of the state that the function really needs. We need new techniques to hide more of the state from the function, for example some kind of dereference control?
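The reachability argument can be sketched concretely (my own example, assuming the usual Java semantics): f can mutate anything reachable from its parameters or from globals, but a local object that is never passed to it and never published stays out of reach.

```java
// f can reach `b` (a parameter) but has no path to `a`, so main may
// assume f leaves `a` untouched.
public class Reachability {
    static class Box { int value; Box(int v) { value = v; } }

    static void f(Box b) {
        b.value = 99;   // legal: b is reachable from f's parameter list
    }

    public static void main(String[] args) {
        Box a = new Box(1);   // never passed to f, never stored globally
        Box b = new Box(2);
        f(b);
        System.out.println(a.value + " " + b.value);  // prints "1 99"
    }
}
```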


I believe cemerick meant, "A caller of a function in an imperative environment cannot assume anything about the scope of that function's access of mutable state within the program without reading the entire program." If they verify it by perfectly understanding the underlying code, then of course that doesn't hold.

But even in the code snippet you gave us, we really don't know if global state is being manipulated. For example, A could be handling global counters or referencing global variables (common examples: stdin and stdout). And how do you know that a doesn't get manipulated by f()? You'd have to check! Perhaps A's constructor registers every A it ever creates, and then f() marks all A's for protection from garbage collection?
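That hypothetical is easy to realize in Java (a sketch of my own, with invented names): A's constructor quietly records every instance in a static list, so a zero-argument f() can reach and mutate a "local" object anyway.

```java
import java.util.ArrayList;
import java.util.List;

// A's constructor has a hidden global side effect: it registers every
// instance. f() takes no parameters yet mutates every A ever created.
public class HiddenRegistry {
    static class A {
        int value = 1;
        static final List<A> all = new ArrayList<>();
        A() { all.add(this); }           // hidden registration
    }

    static void f() {
        for (A a : A.all) a.value = 42;  // reaches "local" objects via A.all
    }

    public static void main(String[] args) {
        A a = new A();   // looks purely local...
        f();             // ...yet f reaches it through the static registry
        System.out.println(a.value);     // prints 42, not 1
    }
}
```

Nothing at the call site of f() reveals this, which is why the caller can't rule it out without reading A and f.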

Really, we can't assume anything about the code snippet you gave us without verifying it. The code might be modular, it might be reasonably modular, or it might be a complete mess.

This boundary of uncertainty is what functional programmers are railing against, and what you're defending.


We have to agree to disagree. Peace.



