
How to functionally compose transforms of objects via transducers

Published 2019-05-28 12:31

Question:

Live code example

I'm trying to learn transducers via egghead and I think I got it until we try to compose object transformations. I have an example below that works:

const flip = map(([k,v]) => ({[v]: k}));
const double = map(([k,v]) => ({[k]: v + v}));
seq(flip, {one: 1, two: 2}); /*?*/ {1: 'one', 2: 'two'}
seq(double, {one: 1, two: 2}); /*?*/ {'one': 2, 'two': 4}

but if I compose it fails:

seq(compose(flip, double), {one: 1, two: 2}); /*?*/ {undefined: NaN}
seq(compose(double, flip), {one: 1, two: 2}); /*?*/ {undefined: undefined} 

How can I work with objects using transducers with fp composition?

There is quite a bit of boilerplate, so I really suggest looking at the live code example to review the utils like compose, seq, etc.

Answer 1:

First off, thanks for going through the course. You're having trouble composing because the expected inputs and outputs have clashing data types.

When composing flip and double, the seq helper calls the transduce helper function which will convert your input object to an array of [k,v] entries so that it can iterate through it. It also calls your composed transform with the objectReducer helper to be used as the inner reducer, which just does an Object.assign to keep building up the accumulation.

It then iterates through the [k,v] entries, passing them to your composed reducer, but it's up to you to ensure you keep the data types compatible between your transforms.
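Roughly, the idea is something like this (a compressed sketch, not the course's exact code; xf stands for your possibly-composed transform):

const objectReducer = (acc, value) => Object.assign(acc, value);

const seq = (xf, input) =>
  Object.entries(input)             // {one: 1, two: 2} -> [['one', 1], ['two', 2]]
    .reduce(xf(objectReducer), {}); // each object a transform produces is merged in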

In your example, double will get the return value of flip, but double expects a [k,v] array, and flip returns an object.
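Tracing one entry through compose(flip, double) makes the clash visible (the leftmost transform touches the data first):

// ['one', 1]  --flip-->  { 1: 'one' }  --double-->  ???
//
// double tries to destructure { 1: 'one' } as [k, v], so it never sees the
// key/value pair it expects, hence results like {undefined: NaN}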

So you would have to do something like this:

const entriesToObject = map(([k,v]) => {
  return {[k]:v};
});
const flipAndDouble = compose(
  map(([k,v]) => {
    return [k,v+v];
  }),
  map(([k,v]) => {
    return [v,k];
  }),
  entriesToObject,
);

seq(flipAndDouble, {one: 1, two: 2, three: 3}); /*?*/
//{ '2': 'one', '4': 'two', '6': 'three' }

It's a bit confusing, since you have to ensure the last step returns an object rather than a [k,v] array: the objReducer that performs the Object.assign expects an object as the value. This is why I added entriesToObject above.

If the objReducer were updated to handle [k,v] arrays as well as objects as values, then you could keep returning [k,v] arrays from your last step as well, which is a much better approach.

You can see an example of how the objReducer could be rewritten here: https://github.com/jlongster/transducers.js/blob/master/transducers.js#L766
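Adapted loosely, that change might look something like this (assumed names, modeled on that link):

const objReducer = (acc, val) =>
  Array.isArray(val)
    ? Object.assign(acc, { [val[0]]: val[1] }) // accept a [k, v] entry
    : Object.assign(acc, val);                 // accept a plain object, as before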

For production use, if you use that transducer library, you can just keep treating your inputs and outputs as [k,v] arrays. For your own learning, you could try modifying the objReducer based on that link; you should then be able to remove entriesToObject from the composition above.
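For instance, a hedged sketch against transducers.js (note that its seq takes the collection first, seq(coll, xform), unlike the course helper, and I'm assuming its object reducer accepts [k,v] entries, per the link above):

const { seq, compose, map } = require('transducers.js');

const flipAndDouble = compose(
  map(([k, v]) => [k, v + v]), // stay in [k, v] form throughout...
  map(([k, v]) => [v, k])      // ...no trailing object-producing step needed
);

seq({ one: 1, two: 2 }, flipAndDouble); // expected: { '2': 'one', '4': 'two' }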

Hope that helps!



Answer 2:

Any limitations are your own

Others have pointed out that you're making a mistake with the types. Each of your functions expects [k,v] input, but neither of them outputs that form, so neither compose(f,g) nor compose(g,f) will work in this case

Anyway, transducers are generic and need not know anything about the types of data they handle

const flip = ([ key, value ]) =>
  [ value, key ]

const double = ([ key, value ]) =>
  [ key, value * 2 ]

const pairToObject = ([ key, value ]) =>
  ({ [key]: value })

const entriesToObject = (iterable) =>
  Transducer ()
    .log ('begin:')
    .map (double)
    .log ('double:')
    .map (flip)
    .log ('flip:')
    .map (pairToObject)
    .log ('obj:')
    .reduce (Object.assign, {}, Object.entries (iterable))

console.log (entriesToObject ({one: 1, two: 2}))
// begin: [ 'one', 1 ]
// double: [ 'one', 2 ]
// flip: [ 2, 'one' ]
// obj: { 2: 'one' }
// begin: [ 'two', 2 ]
// double: [ 'two', 4 ]
// flip: [ 4, 'two' ]
// obj: { 4: 'two' }
// => { 2: 'one', 4: 'two' }

Of course, taking the standard boring array of numbers and returning a boring array of numbers is a possibility too

const main = nums =>
  Transducer ()
    .log ('begin:')
    .filter (x => x > 2)
    .log ('greater than 2:')
    .map (x => x * x)
    .log ('square:')
    .filter (x => x < 30)
    .log ('less than 30:')
    .reduce ((acc, x) => [...acc, x], [], nums)

console.log (main ([ 1, 2, 3, 4, 5, 6, 7 ]))
// begin: 1
// begin: 2
// begin: 3
// greater than 2: 3
// square: 9
// less than 30: 9
// begin: 4
// greater than 2: 4
// square: 16
// less than 30: 16
// begin: 5
// greater than 2: 5
// square: 25
// less than 30: 25
// begin: 6
// greater than 2: 6
// square: 36
// begin: 7
// greater than 2: 7
// square: 49
// [ 9, 16, 25 ]

More interestingly, we can take an input of an array of objects and return a set

const main2 = (people = []) =>
  Transducer ()
    .log ('begin:')
    .filter (p => p.age > 13)
    .log ('age over 13:')
    .map (p => p.name)
    .log ('name:')
    .filter (name => name.length > 3)
    .log ('name is long enough:')
    .reduce ((acc, x) => acc.add (x), new Set, people)

const data =
  [ { name: "alice", age: 55 }
  , { name: "bob", age: 16 }
  , { name: "alice", age: 12 }
  , { name: "margaret", age: 66 }
  , { name: "alice", age: 91 }
  ]

console.log (main2 (data))
// begin: { name: 'alice', age: 55 }
// age over 13: { name: 'alice', age: 55 }
// name: alice
// name is long enough: alice
// begin: { name: 'bob', age: 16 }
// age over 13: { name: 'bob', age: 16 }
// name: bob
// begin: { name: 'alice', age: 12 }
// begin: { name: 'margaret', age: 66 }
// age over 13: { name: 'margaret', age: 66 }
// name: margaret
// name is long enough: margaret
// begin: { name: 'alice', age: 91 }
// age over 13: { name: 'alice', age: 91 }
// name: alice
// name is long enough: alice
// => Set { 'alice', 'margaret' }

See? We can perform any type of transformations you want. You just need a Transducer that fits the bill

const identity = x =>
  x

const Transducer = (t = identity) => ({
  map: (f = identity) =>
    Transducer (k =>
      t ((acc, x) => k (acc, f (x))))

  , filter: (f = identity) =>
    Transducer (k =>
      t ((acc, x) => f (x) ? k (acc, x) : acc))

  , tap: (f = () => undefined) =>
    Transducer (k =>
      t ((acc, x) => (f (x), k (acc, x))))

  , log: (s = "") =>
      Transducer (t) .tap (x => console.log (s, x))

  , reduce: (f = (a,b) => a, acc = null, xs = []) =>
      xs.reduce (t (f), acc)
})

Full program demonstration - .log is added just so you can see things happening in the correct order

const identity = x =>
  x

const flip = ([ key, value ]) =>
  [ value, key ]
  
const double = ([ key, value ]) =>
  [ key, value * 2 ]
  
const pairToObject = ([ key, value ]) =>
  ({ [key]: value })
  
const Transducer = (t = identity) => ({
  map: (f = identity) =>
    Transducer (k =>
      t ((acc, x) => k (acc, f (x))))
      
  , filter: (f = identity) =>
    Transducer (k =>
      t ((acc, x) => f (x) ? k (acc, x) : acc))
      
  , tap: (f = () => undefined) =>
    Transducer (k =>
      t ((acc, x) => (f (x), k (acc, x))))
      
  , log: (s = "") =>
      Transducer (t) .tap (x => console.log (s, x))
      
  , reduce: (f = (a,b) => a, acc = null, xs = []) =>
      xs.reduce (t (f), acc)
})
  
const entriesToObject = (iterable) =>
  Transducer ()
    .log ('begin:')
    .map (double)
    .log ('double:')
    .map (flip)
    .log ('flip:')
    .map (pairToObject)
    .log ('obj:')
    .reduce (Object.assign, {}, Object.entries (iterable))
    
console.log (entriesToObject ({one: 1, two: 2}))
// begin: [ 'one', 1 ]
// double: [ 'one', 2 ]
// flip: [ 2, 'one' ]
// obj: { 2: 'one' }
// begin: [ 'two', 2 ]
// double: [ 'two', 4 ]
// flip: [ 4, 'two' ]
// obj: { 4: 'two' }
// => { 2: 'one', 4: 'two' }

functional programming vs functional programs

JavaScript doesn't include functional utilities like map, filter or reduce for other iterables like Generator, Map, or Set. When writing a function that enables functional programming, we can do so in a variety of ways - consider the varying implementations of reduce

// possible implementation 1
const reduce = (f = (a,b) => a, acc = null, xs = []) =>
  xs.reduce (f, acc)

// possible implementation 2
// (Empty and isEmpty are assumed sentinel helpers marking an exhausted input)
const reduce = (f = (a,b) => a, acc = null, [ x = Empty, ...xs ]) =>
  isEmpty (x)
    ? acc
    : reduce (f, f (acc, x), xs)

// possible implementation 3
const reduce = (f = (a,b) => a, acc = null, xs = []) =>
{
  for (const x of xs)
    acc = f (acc, x)
  return acc
}

Each implementation of reduce above enables functional programming; however, only one implementation is itself a functional program

  1. This is just a wrapper around native Array.prototype.reduce. It has the same disadvantage as Array.prototype.reduce because it only works for arrays. Here we are happy that we can now write reduce expressions using a normal function, and creating the wrapper was easy. But if we call reduce (add, 0, new Set ([ 1, 2, 3 ])), it fails because sets do not have a reduce method, and this makes us sad (demonstrated after this list).

  2. This works on any iterable now, but the recursive definition means that it will overflow the stack if xs is significantly large - at least until JavaScript interpreters add support for tail call elimination. Here, we are happy about our representation of reduce, but wherever we use it in our program we are sad about its Achilles heel.

  3. This works on any iterable just like #2; however, we must trade the elegant recursive expression for an imperative-style for loop, which ensures stack safety. The ugly details make us sad about reduce itself, but we are happy wherever we use it in our program.
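To make the contrast concrete (add is an assumed helper):

const add = (a, b) => a + b

reduce (add, 0, [ 1, 2, 3 ])            // => 6 with all three implementations
reduce (add, 0, new Set ([ 1, 2, 3 ]))  // #1 throws: xs.reduce is not a function
                                        // #2 and #3 => 6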

Why is this important? Well, in the Transducer I shared, the reduce method I included is:

const Transducer = (t = identity) =>
  ({ ...

   , reduce: (f = (a,b) => a, acc = null, xs = []) =>
      xs.reduce (t (f), acc)
  })

This particular implementation is closest to our reduce #1 above - it's a quick and dirty wrapper around Array.prototype.reduce. Sure our Transducer can perform transformations on arrays containing values of any type, but it means our Transducer can only accept arrays as input. We traded flexibility for an easier implementation.
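Concretely (a hypothetical call against the Transducer above):

Transducer ()
  .map (x => x * 2)
  .reduce ((acc, x) => acc + x, 0, new Set ([ 1, 2, 3 ]))
// TypeError: xs.reduce is not a function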

We could write it closer to style #2, but then we inherit stack vulnerability wherever we use our transducer module on big data sets - which is where transducers are meant to excel in the first place. An implementation closer to #3 is itself not a functional program, but it enables functional programming —

The result is a module that necessarily utilizes some of JavaScript's imperative-style in order to enable the user to write functional-style programs in an unburdened fashion

const Transducer = (t = identity) =>
  ({ ...

   , reduce: (f = (a,b) => a, acc = null, xs = []) =>
     {
       const reducer = t (f)
       for (const x of xs)
         acc = reducer (acc, x)
       return acc
     }
  })

The idea here is that you get to write your own Transducer module and invent any other data types and utilities to support it. Familiarizing yourself with the trade-offs enables you to choose whatever is best for your program.

There are many ways around the "problem" presented in this section. So how can one really write functional programs in JavaScript if we're constantly having to revert to imperative style in various parts of our program? There's no silver bullet answer, but I have spent considerable time exploring various solutions. If you're this deep in the post and interested, I share some of that work here

Possibility #4

Yep, you can leverage Array.from, which converts any iterable to an array and lets us plug directly into Array.prototype.reduce. Now we have transducers that accept any iterable input, functional style, with an easy implementation —

A drawback of this approach is that it creates an intermediate array of values (wasted memory) instead of handling the values one at a time as they come out of the iterable. Note that even solution #2 shares this non-trivial drawback.

const Transducer = (t = identity) =>
  ({ ...

   , reduce: (f = (a,b) => a, acc = null, xs = []) =>
       Array.from (xs)
         .reduce (t (f), acc)
  })
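
With that version, any iterable input works; for example, with a hypothetical generator:

function* nums () { yield 1; yield 2; yield 3; yield 4 }

Transducer ()
  .filter (x => x > 1)
  .map (x => x * 10)
  .reduce ((acc, x) => [ ...acc, x ], [], nums ())
// => [ 20, 30, 40 ]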