Do fluent interfaces significantly impact runtime performance?

Posted 2019-04-06 05:12

I'm currently occupying myself with implementing a fluent interface for an existing technology, which would allow code similar to the following snippet:

using (var directory = Open.Directory(@"path\to\some\directory"))
{
    using (var file = Open.File("foobar.html").In(directory))
    {
        // ...
    }
}

To implement such constructs, you need classes that accumulate arguments and pass them on to subsequent objects in the chain. For example, to implement the Open.File(...).In(...) construct, you need two classes:

// handles 'Open.XXX':
public static class Open
{
    // handles 'Open.File(XXX)':
    public static OpenFilePhrase File(string filename)
    {
        return new OpenFilePhrase(filename);
    }

    // handles 'Open.Directory(XXX)':
    public static DirectoryObject Directory(string path)
    {
        // ...
    }
}

// handles 'Open.File(XXX).XXX':
public class OpenFilePhrase
{
    internal OpenFilePhrase(string filename)
    {
        _filename = filename;
    }

    // handles 'Open.File(XXX).In(XXX)':
    public FileObject In(DirectoryObject directory)
    {
        // ...
    }

    private readonly string _filename;
}
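
For completeness, here is a minimal sketch of the supporting types the snippets assume. DirectoryObject and FileObject are hypothetical placeholders for whatever the underlying technology actually provides; they are shown only so the example is self-contained.

// Hypothetical supporting types, shown only to make the example self-contained.
// In a real implementation these would wrap the underlying technology's resources.
public class DirectoryObject : IDisposable
{
    internal DirectoryObject(string path)
    {
        Path = path;
    }

    public string Path { get; }

    public void Dispose()
    {
        // release the underlying directory resources here
    }
}

public class FileObject : IDisposable
{
    internal FileObject(string filename, DirectoryObject directory)
    {
        Filename = filename;
        Directory = directory;
    }

    public string Filename { get; }
    public DirectoryObject Directory { get; }

    public void Dispose()
    {
        // release the underlying file resources here
    }
}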

That is, the more constituent parts a statement like the initial example has, the more temporary objects must be created just to carry arguments along the chain until the actual operation can finally execute.
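
To make this concrete, the fluent call from the first snippet is effectively shorthand for the following two steps (variable names here are purely illustrative):

// Fluent form:
var file = Open.File("foobar.html").In(directory);

// What actually happens, spelled out:
OpenFilePhrase phrase = Open.File("foobar.html"); // heap-allocated argument holder
FileObject opened = phrase.In(directory);         // the actual operation; 'phrase' becomes garbage right away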

Question:

I am interested in some opinions: Does a fluent interface implemented using the above technique significantly impact the runtime performance of an application that uses it? By runtime performance, I mean both speed and memory usage.

Bear in mind that a potentially large number of temporary, argument-carrying objects would be created and live for only very brief timespans, which I assume puts some pressure on the garbage collector.

If you think there is significant performance impact, do you know of a better way to implement fluent interfaces?

2 Answers
老娘就宠你
Reply #2 · 2019-04-06 05:44

Generally speaking, objects with a very short lifetime are exactly the kind of objects that the GC deals with most efficiently, because most of them will already be dead by the time the next minor collection runs, and on any decent GC implementation the cost of a minor collection is proportional to the total size of the live objects. Thus, short-lived objects cost very little, and allocating them amounts to little more than bumping a pointer, which is fast.

So I would say: probably no significant performance impact.
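
If in doubt, this is easy to measure. A rough micro-benchmark along the following lines (my sketch, not part of the original answer, reusing the Open and OpenFilePhrase classes from the question) isolates the only extra cost the fluent interface adds, namely the short-lived phrase objects:

using System;
using System.Diagnostics;

static class FluentOverheadCheck
{
    static void Main()
    {
        const int iterations = 10_000_000;

        int gen0Before = GC.CollectionCount(0);
        var stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            // One short-lived, heap-allocated argument holder per call;
            // this is exactly the overhead the fluent interface introduces.
            OpenFilePhrase phrase = Open.File("foobar.html");
            GC.KeepAlive(phrase);
        }

        stopwatch.Stop();
        int gen0After = GC.CollectionCount(0);

        Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms for {iterations:N0} phrase objects");
        Console.WriteLine($"Gen 0 collections: {gen0After - gen0Before}");
    }
}

You should expect a fair number of gen 0 collections but very little elapsed time per allocation, which is exactly the behaviour described above.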

Rolldiameter
Reply #3 · 2019-04-06 05:46

Thomas is quite correct that generational GC is optimized for exactly this allocation and collection of short-lived objects. However, functional languages like OCaml have GCs that are much more heavily optimized for this than .NET is and, yet, they still go to great lengths to avoid this situation in the equivalent circumstance of applying multiple arguments to a curried function. Specifically, they use a technique called big-step semantics where the compiler removes all of the intermediates at compile time so the GC never sees any of this.

On .NET, value types may well let you solve this problem yourself.
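
For illustration, a sketch of that idea (my example, not part of the original answer): declaring the intermediate phrase as a struct keeps it on the stack or in registers, so it never reaches the garbage collector at all, at the cost of copying its fields whenever it is passed around.

// Hypothetical value-type variant of OpenFilePhrase: the intermediate "phrase"
// is copied by value instead of being allocated on the heap.
public readonly struct OpenFilePhraseValue
{
    private readonly string _filename;

    internal OpenFilePhraseValue(string filename)
    {
        _filename = filename;
    }

    // handles 'Open.File(XXX).In(XXX)' without creating garbage;
    // uses the hypothetical FileObject type sketched earlier:
    public FileObject In(DirectoryObject directory)
    {
        return new FileObject(_filename, directory);
    }
}

The corresponding Open.File method would then simply return this struct instead of a class instance.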
