When would you NOT want to use functional programming? What is it not so good at?
I am looking more for disadvantages of the paradigm as a whole, not things like "not widely used" or "no good debugger available." Those answers may be correct as of now, but they deal with FP being a relatively new mainstream concept (an unavoidable issue), not with any inherent qualities of the paradigm.
If your language does not provide good mechanisms to plumb state/exception behavior through your program (e.g., syntactic sugar for monadic binds), then any task involving state or exceptions becomes a chore; the sketch after these two points illustrates the difference the sugar makes. (Even with the sugar, some people will find state and exceptions harder to deal with in FP.)
Functional idioms often involve a lot of inversion of control or laziness, which often hurts debugging with a traditional debugger. (This is somewhat offset by FP being much less error-prone, thanks to immutability and referential transparency, which means you'll need to debug less often.)
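On the first point, here is a minimal Haskell sketch of the difference that sugar makes, assuming the standard mtl package (the function names are my own illustration, not anything standard): a counter threaded by hand, next to the same logic with the State monad's do-notation doing the plumbing.

    import Control.Monad.State (State, evalState, get, put)

    -- Without the sugar: every function must take the current counter and
    -- hand back the updated one, and each caller must wire them together.
    labelManual :: Int -> String -> (String, Int)
    labelManual n s = (show n ++ ": " ++ s, n + 1)

    -- With the State monad, do-notation (sugar for monadic binds) hides
    -- that plumbing; the counter is threaded through automatically.
    label :: String -> State Int String
    label s = do
      n <- get
      put (n + 1)
      return (show n ++ ": " ++ s)

    main :: IO ()
    main = print (evalState (mapM label ["foo", "bar", "baz"]) 0)
    -- prints ["0: foo","1: bar","2: baz"]

Without get, put, and the do-notation, every piece of the program that touches the counter has to look like labelManual and its callers, which is exactly the chore described above.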
I just wanted to buzz in with an anecdote, because I'm learning Haskell right now. I'm learning Haskell because the idea of separating functions from actions appeals to me, and there are some really sexy theories about implicit parallelization that follow from isolating pure functions from non-pure ones.
I've been learning the fold family of functions for three days now. Fold seems to have a very simple application: taking a list and reducing it to a single value. Haskell implements foldl and foldr for this. The two functions have massively different implementations. There is an alternate implementation of foldl, called foldl'. On top of this there are versions with a slightly different signature, foldr1 and foldl1, which take their initial value from the list itself, and a corresponding foldl1' for foldl1. As if all of this weren't mind-blowing enough, the functions that fold[lr].* require as arguments and use internally in the reduction have two separate signatures; only one variant (foldr) works on infinite lists, and only one (foldl') executes in constant memory (as I understand it, because only it forces each redex as it goes). Understanding why foldr can work on infinite lists requires at least a decent understanding of the language's lazy behavior, plus the minor detail that not all functions force the evaluation of their second argument. The diagrams online for these functions are confusing as hell for someone who never saw them in college. There is no perldoc equivalent. I can't find a single description of what any of the functions in the Haskell Prelude do. (The Prelude is a kind of standard library that comes preloaded with the core distribution.) My best resource is really a guy I've never met (Cale) who is helping me at a huge expense of his own time.

Oh, and fold doesn't have to reduce the list to a non-list scalar; the identity function for lists can be written

    foldr (:) [] [1,2,3,4]

(which highlights that you can accumulate to a list).

/me goes back to reading.
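To make the anecdote's laziness and strictness points concrete, here is a small sketch of my own (not part of the original post) showing why foldr can finish on an infinite list while foldl' is the variant that runs in constant space:

    import Data.List (foldl')

    -- foldr can stop early on an infinite list when the combining
    -- function is lazy in its second argument: (||) never evaluates its
    -- right operand once the left one is True.
    anyEven :: [Int] -> Bool
    anyEven = foldr (\x rest -> even x || rest) False

    -- foldl' forces each intermediate accumulator as it goes, so it runs
    -- in constant space; plain foldl would build a chain of a million
    -- unevaluated thunks before collapsing it.
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

    main :: IO ()
    main = do
      print (anyEven [1 ..])           -- True, despite the infinite list
      print (sumStrict [1 .. 1000000]) -- 500000500000, in constant space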
It's hard for me to think of many downsides to functional programming. Then again, I am a former chair of the International Conference on Functional Programming, so you may safely assume I am biased.
I think the main downsides have to do with isolation and with barriers to entry. Learning to write good functional programs means learning to think differently, and to do it well requires a substantial investment of time and effort. It is difficult to learn without a teacher. These properties lead to some downsides:
It is likely that a functional program written by a newcomer will be unnecessarily slow—more likely than, say, a C program written by a newcomer to C. On the other hand, it is about equally likely that a C++ program written by a newcomer will be unnecessarily slow. (All those shiny features...)
Generally experts have no difficulty writing fast functional programs; and in fact some of the best-performing parallel programs on 8- and 16-core processors are now written in Haskell.
It's more likely that someone starting functional programming will give up before realizing the promised productivity gains than will someone starting, say, Python or Visual Basic. There just isn't as much support in the form of books and development tools.
There are fewer people to talk to. Stack Overflow is a good example; relatively few Haskell programmers visit the site regularly (although part of this is that Haskell programmers have their own lively forums, which are much older and better established than Stack Overflow).
It's also true that you can't talk to your neighbor very easily, because functional-programming concepts are harder to teach and harder to learn than the object-oriented concepts behind languages like Smalltalk, Ruby, and C++. Also, the object-oriented community has spent years developing good explanations for what it does, whereas the functional-programming community seems to think that its stuff is obviously great and requires no special metaphors or vocabulary for explanation. (They are wrong. I am still waiting for the first great book on functional design patterns.)
A well-known downside of lazy functional programming (applies to Haskell or Clean but not to ML or Scheme or Clojure) is that it is very difficult to predict the time and space costs of evaluating a lazy functional program—even experts can't do it. This problem is fundamental to the paradigm and is not going away. There are excellent tools for discovering time and space behavior post facto, but to use them effectively you have to be expert already.
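A classic illustration of this (my example, not the answer's): the following looks harmless, but because sum and length both traverse xs, the entire list stays live in memory between the two traversals, and nothing in the source makes that obvious.

    -- Innocent-looking, but a space leak: xs is shared by two traversals,
    -- so all ten million list cells are retained at once.
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    main :: IO ()
    main = print (mean [1 .. 10000000])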
I think the bullshit surrounding functional languages is the biggest problem with functional programming. When I started using functional programming in anger, a big hurdle for me was understanding why many of the highly-evolved arguments put forward by the Lisp community (e.g. about macros and homoiconic syntax) were wrong. Today, I see many people being deceived by the Haskell community with regard to parallel programming.
In fact, you don't have to look any further than this very thread to see some of it; take the claim above that some of the best-performing parallel programs on 8- and 16-core processors are now written in Haskell.
Statements like this might give you the impression that experts choose Haskell because it is so good for parallelism. The truth, in my view, is that Haskell's performance sucks, and the myth that Haskell is good for multicore parallelism is perpetuated by Haskell researchers with little to no knowledge of parallelism, who avoid real peer review by publishing only inside the comfort zone of journals and conferences controlled by their own clique. Haskell is invisible in real-world parallel/multicore/HPC work precisely because it sucks at parallel programming.
Specifically, the real challenge in multicore programming is taking advantage of CPU caches to make sure cores aren't starved of data, a problem that has never been addressed in the context of Haskell. Charles Leiserson's group at MIT did an excellent job of explaining and solving this problem using their own Cilk language that went on to become the backbone of real-world parallel programming for multicores in both Intel TBB and Microsoft's TPL in .NET 4. There is a superb description of how this technique can be used to write elegant high-level imperative code that compiles to scalable high-performance code in the 2008 paper The cache complexity of multithreaded cache oblivious algorithms. I explained this in my review of some of the state-of-the-art Parallel Haskell research.
This leaves a big question mark over the purely functional programming paradigm. This is the price you pay for abstracting away time and space, which was always the major motivation behind this declarative paradigm.
EDIT: Texas Multicore Technologies have also recently found Haskell to be underwhelming in the context of multicore parallelism.
Philip Wadler wrote a paper about this (called Why No One Uses Functional Languages), addressing the practical pitfalls that stop people from using FP languages.
Setting aside speed and adoption issues and turning to a more basic trade-off: with functional programming, it's very easy to add new functions for existing datatypes, but it's "hard" to add new datatypes. Consider:
(Written in SML/NJ. Also, please excuse the somewhat contrived example.)
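A minimal sketch of the kind of definitions being described, rendered here in Haskell for consistency with the rest of this thread (the datatype and function names are illustrative, not the answer's original SML):

    -- A closed datatype, with one function per behavior; each function
    -- pattern-matches on every constructor.
    data Animal = Dog | Cat

    happyNoise :: Animal -> String
    happyNoise Dog = "woof woof"
    happyNoise Cat = "purr"

    excitedNoise :: Animal -> String
    excitedNoise Dog = "BARK BARK!"
    excitedNoise Cat = "meow!"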
I can very quickly add the following:
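Continuing the illustrative sketch, a new behavior is a purely local addition; nothing that already exists needs to change:

    -- Adding a new function over the existing datatype: one new
    -- definition, zero edits to old code.
    angryNoise :: Animal -> String
    angryNoise Dog = "grrrrr"
    angryNoise Cat = "hisssss"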
However, if I add a new case to Animal, I have to go through every existing function to add support for it:
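In the same illustrative sketch, one new constructor ripples through every function that matches on the type:

    data Animal = Dog | Cat | Fish   -- one new constructor...

    happyNoise :: Animal -> String
    happyNoise Dog  = "woof woof"
    happyNoise Cat  = "purr"
    happyNoise Fish = "glub glub"    -- ...means a new case here...

    excitedNoise :: Animal -> String
    excitedNoise Dog  = "BARK BARK!"
    excitedNoise Cat  = "meow!"
    excitedNoise Fish = "SPLASH!"    -- ...and here...

    angryNoise :: Animal -> String
    angryNoise Dog  = "grrrrr"
    angryNoise Cat  = "hisssss"
    angryNoise Fish = "blub blub"    -- ...and in every other function.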
Notice, though, that the exact opposite is true for object-oriented languages: it's very easy to add a new subclass to an abstract class, but it can be tedious to add a new abstract method to the abstract class/interface, since every subclass must then implement it.
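That flip side can be seen without leaving Haskell by re-encoding the example with a type class (again my own illustrative sketch): now each animal is its own type, so adding a new animal is a purely local change, but adding a new method to the class forces every instance to be revisited, just like adding an abstract method in an OO language.

    class Noisy a where
      happyNoise'   :: a -> String
      excitedNoise' :: a -> String

    data Dog' = Dog'
    data Cat' = Cat'

    -- Adding a new animal is local: one new type, one new instance.
    instance Noisy Dog' where
      happyNoise'   _ = "woof woof"
      excitedNoise' _ = "BARK BARK!"

    instance Noisy Cat' where
      happyNoise'   _ = "purr"
      excitedNoise' _ = "meow!"

    -- But adding angryNoise' to the class would break every instance
    -- until each one implements it.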