I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform", and after it sank in I started to wonder why the transform doesn't use a cache. In lists with repeated values, the transform recomputes the sort key for each occurrence, so I thought: why not use a hash to cache the results? Here's some code:
# a place to keep our results
my %cache;
# the transformation we are interested in
sub foo {
    # expensive operations
}
# some data
my @unsorted_list = ....;
# sorting with the help of the cache
my @sorted_list = sort {
    ($cache{$a} //= &foo($a)) <=> ($cache{$b} //= &foo($b))
} @unsorted_list;
Am I missing something? At first glance the cached version looks more efficient, so why isn't it the one listed in books and generally better circulated?
Edit: daxim has pointed out in the comments that this is known as the orcish maneuver, so I wasn't going crazy, although I don't quite understand the name.
(lots of other comments redacted)
To the extent that an array lookup is more efficient than a hash lookup (i.e., $a->[1] is faster than $cache{$a}), the canonical form could be more efficient than your code, even with a lot of duplicates.
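If you want to check that claim on your own machine, a rough micro-benchmark along these lines will do it (my own sketch, not from any book; the numbers vary a lot by perl version and platform):
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @row  = ( 'payload', 42 );      # [1] holds a precomputed sort key
my %hash = ( payload => 42 );      # key lookup by the original value

# compare a bare array-element fetch against a bare hash fetch
cmpthese( -2, {
    'array lookup' => sub { my $v; $v = $row[1]        for 1 .. 1000 },
    'hash lookup'  => sub { my $v; $v = $hash{payload} for 1 .. 1000 },
} );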
Benchmark results:
Here's my benchmarking code:
# when does an additional layer of caching improve the performance of
# the Schwartzian transform?
# methods:
# 1. canonical Schwartzian transform
# 2. cached transform
# 3. canonical with memoized function
# inputs:
# 1. few duplicates (rand)
# 2. many duplicates (int(rand))
# functions:
# 1. fast
# 2. slow
use Benchmark;
use Math::BigInt;
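# note: 'refs' is left out of the strict list below so that a sub name stored
# in a string (e.g. 'fast_foo') can be called as $func->($x) further down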
use strict qw(vars subs);
use warnings;
no warnings 'uninitialized';
# fast_foo: a cheap operation, slow_foo: an expensive operation
sub fast_foo { my $x = shift; exp($x) }
sub slow_foo { my $x = shift; my $y = Math::BigInt->new(int(exp($x))); $y->bfac() }
# XXX_memo_foo: put caching optimization inside call to 'foo'
my %fast_memo = ();
sub fast_memo_foo {
    my $x = shift;
    if (exists($fast_memo{$x})) {
        return $fast_memo{$x};
    } else {
        return $fast_memo{$x} = fast_foo($x);
    }
}
my %slow_memo = ();
sub slow_memo_foo {
    my $x = shift;
    if (exists($slow_memo{$x})) {
        return $slow_memo{$x};
    } else {
        return $slow_memo{$x} = slow_foo($x);
    }
}
my @functions = qw(fast_foo slow_foo fast_memo_foo slow_memo_foo);
my @input1 = map { 5 * rand } 1 .. 1000; # 1000 random floats with few duplicates
my @input2 = map { int } @input1; # 1000 random ints with many duplicates
sub canonical_ST {
    my $func = shift @_;
    my @sorted = map  { $_->[0] }
                 sort { $a->[1] <=> $b->[1] }
                 map  { [$_, $func->($_)] } @_;
    return;
}
sub cached_ST {
    my $func = shift @_;
    my %cache = ();
    my @sorted = sort {
        ($cache{$a} //= $func->($a)) <=> ($cache{$b} //= $func->($b))
    } @_;
    return;
}
foreach my $input ('few duplicates', 'many duplicates') {
    my @input = $input eq 'few duplicates' ? @input1 : @input2;
    foreach my $func (@functions) {
        print "\nInput: $input\nFunction: $func\n-----------------\n";
        Benchmark::cmpthese($func =~ /slow/ ? 30 : 1000,
            {
                'Canonical' => sub { canonical_ST($func, @input) },
                'Cached'    => sub { cached_ST($func, @input) }
            });
    }
}
and the results (Strawberry Perl 5.12):
Input: few duplicates
Function: fast_foo
-----------------
Rate Canonical Cached
Canonical 160/s -- -18%
Cached 196/s 22% --
Input: few duplicates
Function: slow_foo
-----------------
Rate Canonical Cached
Canonical 7.41/s -- -0%
Cached 7.41/s 0% --
Input: few duplicates
Function: fast_memo_foo
-----------------
Rate Canonical Cached
Canonical 153/s -- -25%
Cached 204/s 33% --
Input: few duplicates
Function: slow_memo_foo
-----------------
Rate Cached Canonical
Cached 20.2/s -- -7%
Canonical 21.8/s 8% --
Input: many duplicates
Function: fast_foo
-----------------
Rate Canonical Cached
Canonical 179/s -- -50%
Cached 359/s 101% --
Input: many duplicates
Function: slow_foo
-----------------
Rate Canonical Cached
Canonical 11.8/s -- -62%
Cached 31.0/s 161% --
Input: many duplicates
Function: fast_memo_foo
-----------------
Rate Canonical Cached
Canonical 179/s -- -50%
Cached 360/s 101% --
Input: many duplicates
Function: slow_memo_foo
-----------------
Rate Canonical Cached
Canonical 28.2/s -- -9%
Cached 31.0/s 10% --
I'm a little stunned by these results -- the canonical Schwartzian transform has only a slight advantage under the most favorable conditions (expensive function call, few duplicates or no memoization) and is at a pretty substantial disadvantage in the other cases. The OP's caching scheme inside the sort function even outperforms memoization outside sort. I wasn't expecting this when I did the benchmarks, but I think the OP is onto something.
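If you do want the OP's comparator-level cache without repeating it at every call site, it wraps up easily; sort_cached_by below is just a name I made up for this sketch, not anything standard:
use strict;
use warnings;

# Numeric sort by $keyfunc->($_), computing the key at most once per
# distinct input value (the OP's comparator-level cache, wrapped up).
# %cache is keyed on the stringified input values, which is fine for
# plain numbers and strings.
sub sort_cached_by {
    my ($keyfunc, @list) = @_;
    my %cache;
    return sort {
        ($cache{$a} //= $keyfunc->($a)) <=> ($cache{$b} //= $keyfunc->($b))
    } @list;
}

my @unsorted = map { int(5 * rand) } 1 .. 50;                 # many duplicates
my @sorted   = sort_cached_by( sub { exp $_[0] }, @unsorted );
print "@sorted\n";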
Caching the Schwartzian Transform would be useful when you're calling foo() in multiple transforms:
@sorted1 = map { $_->[0] }
           sort { $a->[1] cmp $b->[1] }
           map { [$_, foo($_)] }
           @unsorted1;
@sorted2 = map { $_->[0] }
           sort { $a->[1] cmp $b->[1] }
           map { [$_, foo($_)] }
           @unsorted2;
If @unsorted1 and @unsorted2 have largely the same values, then you'll be calling foo() for the same values twice. If this function is computationally expensive, you probably want to cache the results.
The easiest way to do this would be to use the Memoize module:
use Memoize;
memoize('foo');
If you add these two lines, you don't have to worry about setting up a cache for foo() yourself; Memoize handles it for you.
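Put together, a minimal self-contained sketch might look like this (foo() is only a stand-in for an expensive key function, and the lists are toy data):
use strict;
use warnings;
use Memoize;

# stand-in for an expensive derivation of a sort key
sub foo { my $x = shift; lc $x }

memoize('foo');    # foo()'s results are now cached across both sorts below

my @unsorted1 = qw(banana apple cherry date);
my @unsorted2 = qw(cherry banana apple fig);    # mostly the same values

my @sorted1 = map  { $_->[0] }
              sort { $a->[1] cmp $b->[1] }
              map  { [ $_, foo($_) ] } @unsorted1;

my @sorted2 = map  { $_->[0] }
              sort { $a->[1] cmp $b->[1] }
              map  { [ $_, foo($_) ] } @unsorted2;

print "@sorted1\n@sorted2\n";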
Edit: I only just noticed that your sort doesn't do the Schwartzian Transform. The whole point behind the ST is that you only run your expensive function once for each member of the list, which is why you do the whole map-sort-map construct. While you probably could do some hand-written caching like you've done, it would be non-standard Perl (in the sense that someone expecting to see the ST would have to sit there and figure out what your code is doing) and could quickly become a maintenance nightmare.
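For reference, the standard map-sort-map shape looks like this (a generic sketch, with foo() again standing in for the expensive part):
use strict;
use warnings;

sub foo { my $x = shift; exp $x }             # stand-in for the expensive key function

my @unsorted = map { 5 * rand } 1 .. 20;

my @sorted = map  { $_->[0] }                 # 3. unwrap the original values
             sort { $a->[1] <=> $b->[1] }     # 2. sort on the precomputed keys
             map  { [ $_, foo($_) ] }         # 1. compute foo() once per element
             @unsorted;

print "@sorted\n";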
But yes, if your list has repeated values, using a cache (either hand-rolled or with Memoize) could result in a quicker Schwartzian Transform. I say "could" because there are instances where a hash lookup is actually more expensive than calling foo() (the Memoize docs use sub foo { my $x = shift; return $x * $x } as an example of one such case).
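If you're not sure which side of that line your foo() falls on, a quick check along these lines will tell you (square and square_memo are just illustrative names):
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Memoize;

# a function cheap enough that a cache lookup may not pay for itself
sub square      { my $x = shift; $x * $x }
sub square_memo { my $x = shift; $x * $x }
memoize('square_memo');

my @xs = map { int rand 100 } 1 .. 1000;    # plenty of repeated arguments

cmpthese( -2, {
    plain    => sub { square($_)      for @xs },
    memoized => sub { square_memo($_) for @xs },
} );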