Why the debate on Object-Oriented vs Functional Programming is all about composition

In a previous post, I laid out a framework for code quality that divides the qualities of code into a few categories like fitness, correctness, clarity, performance, maintainability and beauty. However, let’s forget all that for the moment and talk about composition :-)

noun 1. the act of combining parts or elements to form a whole.

In programming terms, composition is about making more complex programs out of simpler programs, without modifying the simpler pieces being composed–where the program could be anything from a single CPU instruction to an operating system with apps.

Do objects compose?

In object-oriented programming (OOP), the unit of composition seems to be an object (or class). Do objects compose? The answer to that is complicated, as there isn’t even a commonly accepted definition of object orientation. It’s mostly an “I know it when I see it” type of thing.

Objects that represent data structures with little behavior usually do compose. This is also what we can model in class diagrams: one object owns or contains another. This kind of composition doesn’t really have much to do with being object-oriented: struct or tuple types compose just as well as data objects.
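
As a rough sketch of that kind of containment (the types below are made up, and I’ll use Scala since it comes up later in the post), one value simply holds another, and a plain tuple would compose just as well:

    // Hypothetical data types: composition here is plain containment.
    case class Address(street: String, city: String)
    case class Customer(name: String, address: Address)

    val home  = Address("Baker Street 221B", "London")
    val alice = Customer("Alice", home) // a Customer "owns" an Address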

For objects with complex behavior, there aren’t any well-defined compositions. Even UML diagrams give up on specifying the nature of such composition and simply allow you to say that “A depends on B”. The nature of the dependency is completely ad hoc: B might be used internally in a method, passed as a constructor argument or method parameter, or returned from a method.

OOP puts code and data close together, so we want to know if data and behavior taken together can compose. There’s no answer for the general case, because objects can have very complex and incompatible behavior patterns. Especially if an object is observably mutable (which seems to be the usual case with OO code), it has a potentially complex life cycle. And objects with different life cycles can easily become non-composable.
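
A small hypothetical Scala sketch of the life-cycle problem (the Connection type is invented for illustration): an object whose methods are only valid in certain states doesn’t compose freely, because every caller has to know where in its life cycle the object currently is.

    // A hypothetical mutable object with an implicit life cycle.
    class Connection {
      private var open = false
      def connect(): Unit = { open = true }
      def send(msg: String): Unit = {
        require(open, "must call connect() before send()") // hidden protocol
        println(s"sent: $msg")
      }
      def close(): Unit = { open = false }
    }

    // Code that receives a Connection has to know (or guess) its state:
    def broadcast(c: Connection, msgs: List[String]): Unit =
      msgs.foreach(c.send) // only safe if someone already called connect()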

I think the real composability and reusability in object-oriented code doesn’t come from object-oriented design at all: it comes from abstraction and encapsulation. If a library manages to hide a potentially complex implementation behind a simple interface, and you only have to know the simple interface to use it, that doesn’t come from some inherent property of object-orientation. It comes from a basic module system that limits which symbols are exported to other modules, whether the modules are classes, namespaces, packages or something else. Such module systems are present in both OO and non-OO languages.

In summary, I think objects do not compose very well in general – only if the specific objects are designed to compose. Immutability helps to make objects composable by eliminating complex life-cycles and essentially turning the objects into values. Encapsulation also improves composability, and is put to good use in well-designed OO code, but is not unique to OO.

Do functions compose?

Functional programming (FP), on the other hand, is at its very core based on the mathematical notion of function composition. Composing functions f and g means g(f(x)) – f’s output becomes g’s input. And in pure FP, the inputs and outputs are values without life cycles.

It’s so simple to understand compared to the numerous and perhaps even indescribable ad hoc compositions possible in OOP. If you have two functions with matching input and output types, they always compose!
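
For instance, in Scala (the functions below are invented just to show the types lining up), the composition is a one-liner:

    val f: Int => String = n => n.toString // plays the role of f: A -> B
    val g: String => Int = s => s.length   // plays the role of g: B -> C
    val h: Int => Int    = f.andThen(g)    // h(x) == g(f(x))

    h(12345) // => 5; f's output type matches g's input type, so they compose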

More complicated forms of composition can be achieved through higher-order functions: by passing functions as inputs to other functions, or by returning functions as outputs. Functions are treated as values just like everything else.
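
A tiny illustrative sketch of that, again with invented names: a higher-order function that takes a function and returns a new function built from it.

    // twice takes a function as input and returns a new function as output.
    def twice[A](f: A => A): A => A = x => f(f(x))

    val addOne: Int => Int = _ + 1
    val addTwo = twice(addOne) // functions are values we can build on
    addTwo(40)                 // => 42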

In summary, functions almost always compose, because they deal with values that have no life cycles.

What does composition give us?

Having simple core rules for composition makes it easier to take existing code apart and put it back together in a different way, which is what makes code reusable. Objects were once hailed as the thing that would finally bring reusability, but it’s actually functions that achieve this much more easily.

The real key, I think, is that the composability afforded by the functional design approach means that the same approach can be used for both the highest levels of abstraction and the lowest level–behavior is described by functions all the way down (many machine instructions can also be represented as functions).

However, I think most programmers today (including me) don’t really understand how to make this approach work for complete programs. I would love for the functional programming advocates to put more effort into explaining how to build complete programs (that contain GUIs, database interactions and whatnot) in a functional way. I would really like to learn that rather than new abstractions from category theory; even if the latter can help with the former, show us OOP guys the big strategy before the small tactics of getting there.

Even if we can’t do whole programs as functions, we can certainly do isolated parts of them. On the other hand, it seems that objects that put code close to the data help people make sense of a system. And if the objects are really designed to be composable, it works out quite nicely. I think a mix of object-oriented and functional programming such as in Scala, F# or even Java 8 is the way to go forward.
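
As a hedged sketch of what such a mix can look like in Scala (all names here are invented for illustration): an immutable object keeps the data and its closely related behavior together, while the higher-level wiring is plain function composition.

    // An immutable "object" with behavior right next to its data.
    case class Order(items: List[Double]) {
      def total: Double = items.sum
    }

    // Pure functions composed into a higher-level pipeline.
    val orderTotal: Order => Double = _.total
    val applyVat: Double => Double  = _ * 1.2
    val format: Double => String    = amount => f"$$${amount}%.2f"

    val invoiceLine: Order => String = orderTotal.andThen(applyVat).andThen(format)
    invoiceLine(Order(List(9.99, 20.01))) // => "$36.00"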

So, thanks for tuning in, and please leave comments below or find me on Twitter @t4ffer.

  • Oleg Šelajev

    One of the ways to learn how to write large software systems in a functional way is to go through the “Real World Haskell” book. It has been sitting on my to-read list for a while now. If you know a better approach, please share.

    Another fun trivia fact to ponder is Clojure’s HTTP library, http-kit. It seems pretty relevant, adopted and used in the community (I might be wrong here :P). So Clojure is a Lisp, right? Super functional and thus composable.

    If we check http-kit’s GitHub repo, we can see that just a bit over 75% of the source code is Java, which is extremely OO and thus less composable. :)
    My take on that number is that it’s written in Java and there’s a wrapper to expose it to Clojure. Which actually proves nothing, but there should be a reason that a library for such a functional and composable language as Clojure is written in Java. It might be purely performance, though.

    So real-world systems would probably consist of smaller modules written using whatever suits their functional needs best; wrapping them in a functional interface might then help composability and clarity.

  • Erik Post

    “On the other hand, it seems that objects that put code close to the data help people make sense of a system.”

    It sounds to me a bit as though you’re saying that this is more or less the last holdout, or at least your sweet spot, of OO. What I’d like to know is in what sense you feel that FP does not put code “close” to the data. You tie code and data together using types, non?

  • Gavin King

    “there isn’t even a commonly accepted definition of object orientation”

    Not true. Object orientation means the use of _subtype polymorphism_. That’s a perfectly well-founded notion, and perfectly distinguishes OO languages from languages which don’t support OO.

    “Do objects compose? The answer to that is complicated”

    I don’t see how the answer could possibly be complicated. Indeed, the question has an answer that is totally self-evident: _of course_ objects compose, since we can assemble programs out of them. We’ve been doing it on an industrial scale for three decades.

    “It’s so simple to understand compared to the numerous and perhaps even indescribable ad hoc compositions possible in OOP.”

    I have no clue what you’re talking about here. “Numerous”?? “Indescribable”?? I can think of two ways in which objects can be composed in OO languages: at the class level (inheritance), and at the instance level (references). I can’t imagine what all these other numerous forms of indescribable composition are. Care to enumerate some of them?

    “In summary, functions almost always compose, because they deal with values that have no life cycles.”

    I’m trying to understand what you’re trying to say here. Interpreted literally, it sounds like you’re saying that I can “almost always” take two arbitrary functions and compose them together to form something meaningful. But that’s just absurd, so that can’t be what you mean.

    Taken at face value, it’s simply not true that “functions almost always compose”.

    “I think a mix of object-oriented and functional programming such as in Scala, F# or even Java 8 is the way to go forward.”

    Again, I don’t understand what you mean by this. It seems to me that object-oriented programming is, and always has been, a superset of “functional programming” in the sense in which you seem to be using the term here. OO languages have always supported functional composition (one method calling another, or passing a reference to a method to another method) so I don’t know what would be new about this “mix”. Indeed, AFAICT, it’s impossible to design an OO language which doesn’t support functional composition and functional programming (again, in the sense you’re using the word here).

    P.S. Please don’t try to argue that a language doesn’t support functional composition if it doesn’t have function types and function references or anonymous functions. The “strategy” pattern is an ancient and well-known encoding of first-class function support into OO languages which don’t directly support first-class functions, and is commonly used by all OO developers I’ve ever met.

  • Erkki Lindpere

    I guess in OO it is just more natural to do so, and indeed I can’t say that anything is preventing you from doing that in FP. In OO it can also happen that people find ways to put code that depends on the nature of the data far from the data itself.

    E.g. in OO it is a bit more obvious from the structure that, say, dot and cross product operations would be defined within the structure of a 3D vector type. On the other hand, operations involving multiple different types seem more natural to define in FP.

  • Erkki Lindpere

    Thanks for the comments, Gavin!

    “Object orientation means the use of _subtype polymorphism_.”

    I admit that subtype polymorphism is widely accepted as a defining feature of OO. There’s no *universally* accepted definition, but subtype polymorphism is quite *common*.

    “_of course_ objects compose, since we can assemble programs out of them.”

    The fact that we have been doing something for a period of time on a large scale doesn’t mean that what we’ve been doing is right or that we’ve been doing it well.

    “I have no clue what you’re talking about here. “Numerous”?? “Indescribable”?? I can think of two ways in which objects can be composed in OO languages: at the class level (inheritance), and at the instance level (references).”

    If in a pure functional language, you have functions f: A -> B and g: B -> C, you know that B is a value and it is “complete” (or consistent). Since g takes a B, you know you can pass f’s result to g, always.

    If in an OO language, you have a (possibly mutable) object of type A and a method of some class f: A -> C, how do you know whether you can pass a given A that you have to the method f? You cannot know in the general case, because you don’t even know whether an object A is in a consistent state. Or maybe it’s a subtype that violates the Liskov Substitution Principle. And this is what makes it indescribable to me: for the composition to be safe, I may need to call some other method on A first, before I can pass it around, or it may require god knows what other arcane knowledge for me to be able to use that object safely. OO languages don’t really provide many safeguards here, but pure FP languages eliminate this uncertainty about whether a particular composition is safe or not.

    Then again, OO languages can still do things to make composition a lot easier, e.g. let me easily know that all A’s are immutable and there are no subtypes violating LSP, and I’ll know they are safe to pass around anywhere.
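
    To make the LSP part concrete, here’s the classic made-up Rectangle/Square sketch in Scala (nothing from a real codebase): a subtype that looks substitutable but silently changes behavior is exactly the kind of arcane knowledge a caller shouldn’t need.

        // A mutable Square pretending to be a Rectangle violates LSP.
        class Rectangle(var width: Int, var height: Int) {
          def setWidth(w: Int): Unit  = { width = w }
          def setHeight(h: Int): Unit = { height = h }
          def area: Int = width * height
        }
        class Square(side: Int) extends Rectangle(side, side) {
          override def setWidth(w: Int): Unit  = { width = w; height = w }
          override def setHeight(h: Int): Unit = { width = h; height = h }
        }

        // Code written against Rectangle silently breaks for Square:
        def stretch(r: Rectangle): Int = { r.setWidth(4); r.setHeight(5); r.area }
        stretch(new Rectangle(2, 3)) // => 20, as expected
        stretch(new Square(3))       // => 25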

  • Gavin King

    > If in an OO language, you have a (possibly mutable) object of type A and a method of some class

    So, in fact, what you’re _actually_ talking about isn’t OO vs FP, it’s _imperative_ vs _declarative_. This is apparently a big source of confusion in the interwebs right now. There’s nothing, and I mean absolutely nothing, in any definition of OO which talks about mutation. Unfortunately some folks seem to have recently got the impression that there is.

    My speculation is that this is due to a confusion over two different senses of the word “state”. When we say that an object has state we mean that instances of its class are distinguishable by the references they hold, _not_ that these references are necessarily _mutable_. An immutable object is just as objecty as a mutable object.

    To be clear, objects don’t rob you of referential transparency, and functions don’t protect you from the loss of it. Just like there are:

    – mutable objects, and
    – immutable objects,

    there are:

    – impure functions, and
    – pure functions.

    If you program using impure functions, as most of the world outside of some very particular FP communities does, you’re just as vulnerable to the problems you’ve been trying to blame objects for.

    > If in a pure functional language, you have functions f: A -> B and
    > g: B -> C, you know that B is a value and it is “complete” (or
    > consistent). Since g takes a B, you know you can pass f’s result
    > to g, always.

    Object orientation doesn’t rob you of the ability to compose functions (or methods). In Ceylon, I can write:

    value writeFormatted = compose(process.writeLine, dateFormat.format);

    Here, “process” and “dateFormat” are objects. But I easily composed their methods just like you can compose ordinary functions.

    It’s true that classes don’t exist at the same level of granularity as functions/methods, and thus they’re less likely to compose “by accident” rather than “by design”, but I’m not sure what that proves. You still have the function level of granularity to work with when you need it.

    You see, there are multiple levels of granularity in modern languages: function, class, namespace, module. It’s important to understand their different roles. The arguments you’ve just given me for objects not composing could equally be applied to modules.

  • Erkki Lindpere

    Yeah, basically I admit that the definition of OOP (if there is one true definition) doesn’t require mutable state. I was also just reading the reddit comments, and I tend more to think that “state” usually means “mutable” and “immutable” means “stateless”, but I’m willing to accept the existence of “immutable state”.

    However, I think the way OO programs are structured in most modern OO languages seems to make mutable state a much easier solution than the alternatives for problems approaching any complexity.

    I agree about different abstractions at different granularity (maybe that was forgotten a bit in the original post), and I wish the programming paradigm (or language) would help with choosing appropriate abstractions for the appropriate problem granularity. For example, implementing some subsystem as purely functional, with immutable values/objects, and being able to assert that it’s so in code, then exposing all of that subsystem as an object. It would be great if the language could nudge me towards picking the right kind of abstractions by accident, so that what I write without thinking would be easily composable.

  • Gavin King

    Right, so some OO languages _do_ nudge you towards using more immutable objects. Sure, Java/C#/Ruby don’t really do much to encourage you to use immutable objects, just like Fortran/Pascal/Perl don’t nudge you towards using pure functions. But OCaml/F# and even Ceylon and Scala *do* (to greater or lesser extents). In Ceylon you have to explicitly annotate mutable references, because they’re considered slightly discouraged.

    Which is what irritates me about this whole line of argument: OO != Java/C#/Ruby. There are other OO languages out there. Folks bashing on OO just need to get out more.

  • Ivano Pagano

    “Not true. Object orientation means the use of _subtype polymorphism_. That’s a perfectly well-founded notion, and perfectly distinguishes OO languages from languages which don’t support OO.”

    This exemplifies exactly what you’re trying to disprove, Gavin, since not everyone would agree with this definition, me for starters.
    As far as I’m concerned, I would say that what best defines OO (as programming with objects) is the use of objects as units of cohesion for access to state and behaviour, with the added feature of abstracting over the inner workings through public interfaces.
    I see subtyping as an added feature over encapsulation.

    Just as you did, I was lately evaluating how best to use compositional features for both aspects of a FOOP language, and what tools are currently available in existing (mainstream) languages.
    My (absolutely personal) considerations would need some more space, and maybe I’ll write a blog post and link it here in the future.

    good day to everyone

  • Gavin King

    “I would say that what best defines OO (as programming with objects) is the use of objects as units of cohesion for access to state and behaviour, with the added feature of abstracting over the inner working through public interfaces.”

    1. I can see two senses in which you might talk about “objects as units of cohesion”. Let’s consider each in turn:

    – One is that you have language level visibility rules to control access to “private” members of an instance or class. But not all languages that are commonly considered to support object-oriented programming have this. Indeed, there are OO languages which don’t support data hiding at all.

    – On the other hand, perhaps you mean “cohesion” in a weaker sense here, where you’re not really thinking of visibility control, but just the ability to package functions and values together into an “object”. But plenty of functional languages which are *not* commonly considered OO support tuples and even records.

    (OK, OK, in fairness, some people suggest that it’s the added ability to do “open recursion” which makes all the difference between “objects” and “record types”, but that’s a highly technical distinction, and one most programmers are only dimly aware of. I would not be happy with any definition of OO that talks about open recursion as being the big distinguishing feature.)

    So the first part of your definition doesn’t really help us much to clearly distinguish “OO” programming. So let’s now turn to the second part of your definition.

    2. “the added feature of abstracting over the inner workings through public interfaces” is, literally, subtype polymorphism, which is exactly my definition.

    *Every* language I know of that is commonly considered to be object oriented has some kind of subtype polymorphism, whether via dynamic typing, structural typing, or nominal typing. Conversely, no language I know of that is _not_ considered OO _does_ have subtype polymorphism.

  • Ivano Pagano

    Well said.

    Probably I was interpreting the discussion while focusing more on object-oriented programming (meaning a way to design code) as opposed to object-oriented languages (meaning features supported by syntax and semantics).
    From a language-centered perspective your position is correct. From a design-centered perspective I agree with you that the distinction between languages becomes blurry, and this was what I was talking about.

    About your point 2. above, regarding information hiding or encapsulation, I’d say that subtype polymorphism corresponds to the definition only when we talk about statically typed languages, but I may be mistaken here.

  • Gavin King

    Well to me any kind of dynamic dispatch is a form of “subtype polymorphism”, whether it is in a dynamically-typed language like Smalltalk or Ruby, or a statically typed language like Java, Go, or C++. I think it would be pretty strange to say that Smalltalk doesn’t have subtype polymorphism just because it doesn’t have static types!

  • Ivano Pagano

    “I think it would be pretty strange to say that Smalltalk doesn’t have subtype polymorphism just because it doesn’t have static types!”


  • cosmin

    You should have added C# to the list in the final paragraph. C# has supported functional constructs since C# 3.0. F# borrowed LINQ from C#, just like C# will get many F# features in its future releases.

    I’m not a C# fanboy, but I could not stand seeing newly born Java 8 on the list while C# is omitted :)

  • Broc kelley

    Good god, I wish I could keep up with anything any of you are saying.

  • Daniel Sagenschneider

    Interesting read, though “composition”, from what I can understand from reading this, is limited to single-threaded code: basically, how instructions are organised (and possibly abstracted) and plugged together to be executed sequentially by a single thread.

    OO can become a mess when mutable objects are used in multi-threaded code. Actors (for FP) have their own interesting dynamics, involving somewhat event-based programming (not the easiest to build nor test).

    I guess arguing which is better in the context of a single thread is a matter of preference – and possibly based on the problem space. If behaviour is more important then use FP. If modelling state is more important then use OO.

  • Gavin King

    Again, as I said upthread, and I’m going to repeat myself, since so many people seem to be confused about this: Object-orientation has nothing to do with mutation. Savvy OO developers have advocated the use of immutable objects since oooh at least the days of Smalltalk. I remember one of the older Smalltalk guys telling me about the benefits of immutable objects in my very first job!

    A lot of FP guys get this point muddled up because we talk about objects having “state”. But what we mean is that an object instance has state (references to other objects) that distinguishes it from other instances of the same class. We’re not trying to say that this state is necessarily mutable. It might be mutable, in a language with mutable state, but mutability is not what makes the object “stateful” in this sense, and it’s not what makes the object “objecty”.

    And remember: almost all code written using functions (and no objects) also uses mutation. Typical code in languages like BASIC, Fortran, Pascal, etc. is not written using pure functions! It’s only in a very small corner of the programming language universe (ML, Haskell, etc.) where the disciplined use of pure functions has really come to the fore. That’s good, but it’s unrelated to the totally orthogonal question of whether these languages are object oriented.

    So just as we talk about mutable vs. immutable objects, we have pretty much the same distinction between impure and pure _functions_.

  • Gavin King

    i.e. “state” != “mutable state”

    If these two terms meant the same thing, then one of them would be redundant.

  • Daniel Sagenschneider

    I was not referring to mutable objects and impure functions as defining OO or FP. I was referring to the article identifying composition as an aspect needed for a language, and to composition being considered mainly in terms of single-threaded composition.

    What I see the article assuming is that OO by nature means mutable objects and FP is assumed to be pure functions. And the conclusion suggests that pure/immutable is difficult to work with – or at least not very well understood. And to this I agree with what you are saying about OO and FP: “state” != “mutable state”.

    And in a single-threaded world I can see that purity and immutability are great things. However, in a multi-threaded world this strict-to-the-rule purity and immutability falls down.

    To explain, let’s use pure functions. Pure functions take a particular state as input and output a new state. In other words, the result of executing a pure function is a new state of the world (not to diverge too much, but essentially monads: a monad represents a particular state of the world, and a new monad is returned from a pure function as the new state of the world).

    For a single thread, the execution of pure functions is a sequence of steps creating “new worlds” from “old worlds”. Or to put it another way (and this is a slightly big step, stay with me): each return of a function is a snapshot of a new dimension. The dimension is a new state that is different from the previous state. As there is no mutation of the existing state/dimension, the return of a pure function can be considered a new distinct dimension.

    Now for a single threaded context, this is ok as each function sequentially creates an ordered sequence of new state/dimensions that is progressed through. A way to think of this is frames of a movie where each frame is a slightly altered picture of the previous frame – stringing them together in sequence forms the movie.

    However, in a multi-threaded context the two threads executing pure functions are diverging. Each thread creates its own new states of the world (dimensions). It would be nice if these threads could work in complete isolation of each other, but like people we can not live in isolation of each other – we interact with each other in the same world/dimension.

    Now for threads to share state, they can not operate in separate dimensions. They need to operate in the same dimension. Even to pass immutable objects between threads, there needs to be a messaging service between the threads that is mutable. Therefore, mutation of state (i.e. impure functions) is necessary for multiple threads to interact.

    So when it comes to composition, the context of the object/function needs to be considered. Is it batch single-threaded? Thread-per-request? Event-based? And the big question of composition is whether objects/functions created in one context can be used in another context.

    I would suggest that solving the composition debate requires this re-use of objects/functions across threading contexts to be possible. Otherwise, I see it as just a discussion on the best means to organise sequential machine instructions.

  • Gavin King

    Ah OK, then I totally misread your comment. Apologies.

  • Daniel Sagenschneider

    No worries. I was just seeing if there was any interest in thinking outside composition within a single threaded context. Given cloud computing and multi-core architectures, I can see functional programming (along with actors) coming into more focus. But like this article alludes to, this is still difficult for many developers.

    For a shameless plug: composition that re-uses objects/functions within a multi-threaded context is addressed in a paper I’m waiting on publishing. See . Hopefully, this will be out soon and I can start talking more about it.

  • Carlos Saltos

    Ask three different OOP experts what composition is in OOP and you will get three different definitions. Ask three different FP experts about function composition and you will get a better and more consistent definition from them.

  • Gavin King

    Interesting, do you have any evidence for that? Because it simply doesn’t sound right to me. It seems to me that we all have a pretty clear idea of how both objects and functions compose: functions compose by calling each other and passing references to each other; objects compose by holding and passing references to each other, and by calling each other’s operations.

    I can’t imagine what else “composition” could possibly mean in this context.

  • Carlos Saltos

    OK, now ask two other programmers what OOP composition is and you will understand what I mean

  • Gavin King

    I’m sure they will say the same. And you have not shown me any evidence to the contrary. You’ve just asserted something that sounds, on the face of it, quite unlikely, without any evidence at all.

  • Gavin King

    To clarify one thing: when speaking loosely, we do sometimes say that objects compose via inheritance, which is, I suppose, a second sense of the word “composition” in OO programming. But speaking more carefully, that is actually composition at the *class* (and *type*) level, not at the *object instance* level.

    Perhaps that’s the source of your confusion: you sometimes hear people talking about composing “objects” via inheritance. I think that confusion is easily resolved by distinguishing instances from classes/types and understanding that composition can happen in both directions.

  • Carlos Saltos

    OK, that’s only one source of the confusion about composition in OOP; now just ask two more colleagues about it and you’ll find even more.

    But beyond the confusions (which only require better explanations and learning to solve them), the thing is that composition in OOP is not as good and powerful as composition in FP.

  • Carlos Saltos

    Thank you for the reference to “Real World Haskell”; it’s now added to my reading list. I hope to read it (and understand it) soon.

    Yes, Clojure and Scala wrapping Java code can bring a lot more clarity and composability. It’s similar to Java when it wraps some C code, and even to C when it wraps some assembler code in key pieces that require better performance. The cross-language usage in real-world systems is pretty amazing.

  • Gavin King

    So you haven’t been able to give any source at all for your claim that OO programmers disagree about what composition means, just a vague assertion that my colleagues on the Ceylon team disagree with me about what it means.

    It’s unclear how you could possibly know this about my team, and in fact it’s a bit silly: my colleagues are extremely knowledgeable about programming languages and about object orientation and object oriented languages, and we’ve spent years discussing and reflecting on programming language design. The problem of “what does it mean to compose a program out of objects” has never come up, neither in the Ceylon team, nor, as far as I can recall, in any other team I’ve worked with in my almost two decades of coding in object-oriented languages.

    And it’s equally silly to assert that, after much more than two decades of industrial use of object-oriented languages, with hundreds of thousands of programmers composing thousands of programs using object-oriented techniques *every day*, composition using object-oriented techniques isn’t an extremely well-understood thing. If it weren’t, then those programs simply wouldn’t work, and, in particular, we wouldn’t have all the thousands of reusable libraries that those programs depend on.

    This is a problem *you* have invented, that only exists in your head.

  • Carlos Saltos

    Just ask them and see it for yourself instead of writing this many comments

  • Gavin King

    I just asked them. They came up with two sorts of composition: composition of objects (where an object holds a reference to other objects), and functional composition (of methods).

    Someone mentioned that you could consider inheritance a sort of composition, but said that this isn’t usually the way we use the word in OOP discussions. (FTR, I think it’s quite clearly a sort of composition, though it’s not, strictly speaking, composition of *objects*.)

    So basically, their definition was much the same as mine. And so it seems that you’re simply wrong. It seems to me that as an FP enthusiast, you’re projecting your own confusion about object oriented programming onto others. But we’re not confused; you’re confused.

  • Carlos Saltos

    Eureka !! … I’ve proved my point: you asked them and they came up with two different sorts of composition in OOP, plus an extra one mentioning inheritance (mentioned by only one of them), and you still think that composition in OOP is a more precise and clearer concept than in FP? … Wake up !!

  • Carlos Saltos

    FP composition is also extendable to multithreading systems. You may check Twitter’s Finagle as a good example of that at

  • Gavin King

    Nonsense. It’s very clear that both those sorts of composition were covered by my original description above, which mentions them both explicitly. And I also mentioned the third form of composition in a follow up comment.

    So your claim that my team members would come up with a different interpretation of the term fell flat, and now you’re just misdirecting and prevaricating. FTR I never said composition in OO was a clearer concept than in FP. It’s almost the same, and equally clear.

    And guess what: in FP values can be composed out of other values too – consider a tuple, or a constructor for an algebraic data type. Clearly a tuple is a value composed of values, and that’s clearly not the same as composition of functions. So FP also has the same two different types of composition. Most FP languages don’t have subtyping however, so types can’t be composed from other types.

  • Carlos Saltos

    It was fun to talk to you. Now let’s just relax and enjoy the weekend. Cheers from Barcelona, Spain.

  • Daniel Sagenschneider

    As per the short intro to Scala, much of what is going on is “syntactic sugar”. Futures are not something specific to functions, as they also exist in Object Orientation. Note: there is little on threading (except the parallelism of future/promise programming along with the problem space lending itself well to stateless concurrency… which somewhat suits pure functions).

    My paper is finally published. Have a look for composition that extends Inversion of Control from Dependency Injection to include Thread Injection and Continuation Injection to round out separation of concerns regarding composition: (you can also see more practical details on OfficeFloor’s website

  • Carlos Saltos

    Yes, you are right. They are also in OOP, and actually in the video Marius Eriksen uses OOP as a key resource along with FP, using Scala.

    Thank you for your paper reference. I’ll take a look.

    For more info about Scala I recommend Effective Scala here ->

  • Fodor Balázs

    In their very essence, FP aims at composition of transformations and mappings, while OOP captures nature’s statefulness. I would say that OOP is suitable for representing and modeling real-world objects, while FP is good for describing the transformations between their states. OOP is weak at composing transformations, but FP is also weak at modeling physical objects, which indeed _have_ state, thinking from an electron to a house, for example.