How to Speed Up Metal Code for iOS/Mac OS

Posted 2019-02-10 02:24

I'm trying to implement code in Metal that performs a 1D convolution between two vectors of given lengths. I've implemented the following kernel, which works correctly:

kernel void convolve(const device float *dataVector [[ buffer(0) ]],
                     const device int& dataSize [[ buffer(1) ]],
                     const device float *filterVector [[ buffer(2) ]],
                     const device int& filterSize [[ buffer(3) ]],
                     device float *outVector [[ buffer(4) ]],
                     uint id [[ thread_position_in_grid ]]) {
    // Each invocation computes the entire output vector serially:
    // outVector[i] = sum over j of dataVector[i + j] * filterVector[j]
    int outputSize = dataSize - filterSize + 1;
    for (int i = 0; i < outputSize; i++) {
        float sum = 0.0;
        for (int j = 0; j < filterSize; j++) {
            sum += dataVector[i + j] * filterVector[j];
        }
        outVector[i] = sum;
    }
}

My problem is that processing the same data (computation plus data transfer to/from the GPU) takes about 10 times longer with Metal than with Swift on the CPU. My question is: how do I replace the inner loop with a single vector operation, or is there another way to speed up the code above?
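
(For reference, a vectorized CPU baseline can be written with Accelerate's vDSP_conv, which computes exactly this sliding dot product; a minimal sketch, assuming single-precision data:)

import Accelerate

// Direct correlation (no filter reversal), matching the Metal kernel above:
// output[i] = sum over j of data[i + j] * filter[j]
func cpuConvolve(data: [Float], filter: [Float]) -> [Float] {
    let outputCount = data.count - filter.count + 1
    var output = [Float](repeating: 0, count: outputCount)
    vDSP_conv(data, 1, filter, 1, &output, 1,
              vDSP_Length(outputCount), vDSP_Length(filter.count))
    return output
}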

Tags: ios macos metal
2 Answers
淡お忘
#2 · 2019-02-10 03:02

The following code shows how to encode rendering commands in parallel on the GPU using the Objective-C Metal API. (The threading code in the other answer only divides rendering of the output into grid sections for parallel processing; the calculations themselves are still not performed in parallel.) This isn't exactly what you're asking for, but I've provided it to help anyone who stumbled upon this question expecting an answer about parallel rendering:

    - (void)drawInMTKView:(MTKView *)view
    {
        dispatch_async(((AppDelegate *)UIApplication.sharedApplication.delegate).cameraViewQueue, ^{
            id <CAMetalDrawable> drawable = [view currentDrawable]; //[(CAMetalLayer *)view.layer nextDrawable];
            MTLRenderPassDescriptor *renderPassDesc = [view currentRenderPassDescriptor];
            if (renderPassDesc != nil)
            {
                renderPassDesc.colorAttachments[0].loadAction = MTLLoadActionClear;
                renderPassDesc.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 1.0);
                renderPassDesc.renderTargetWidth = self.texture.width;
                renderPassDesc.renderTargetHeight = self.texture.height;
                renderPassDesc.colorAttachments[0].texture = drawable.texture;

                dispatch_semaphore_wait(self._inflight_semaphore, DISPATCH_TIME_FOREVER);
                id <MTLCommandBuffer> commandBuffer = [self.metalContext.commandQueue commandBuffer];
                [commandBuffer enqueue];

                // START PARALLEL RENDERING OPERATIONS HERE
                id <MTLParallelRenderCommandEncoder> parallelRCE = [commandBuffer parallelRenderCommandEncoderWithDescriptor:renderPassDesc];

                // FIRST PARALLEL RENDERING OPERATION
                id <MTLRenderCommandEncoder> renderEncoder = [parallelRCE renderCommandEncoder];

                [renderEncoder setRenderPipelineState:self.metalContext.renderPipelineState];

                [renderEncoder setVertexBuffer:self.metalContext.vertexBuffer offset:0 atIndex:0];
                [renderEncoder setVertexBuffer:self.metalContext.uniformBuffer offset:0 atIndex:1];

                [renderEncoder setFragmentBuffer:self.metalContext.uniformBuffer offset:0 atIndex:0];
                [renderEncoder setFragmentTexture:self.texture atIndex:0];

                [renderEncoder drawPrimitives:MTLPrimitiveTypeTriangleStrip
                                  vertexStart:0
                                  vertexCount:4
                                instanceCount:1];

                [renderEncoder endEncoding];

                // ADD SECOND, THIRD, ETC. PARALLEL RENDERING OPERATIONS HERE
                // ...

                // SUBMIT ALL RENDERING OPERATIONS IN PARALLEL HERE
                [parallelRCE endEncoding];

                __block dispatch_semaphore_t block_sema = self._inflight_semaphore;
                [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
                    dispatch_semaphore_signal(block_sema);
                }];

                if (drawable)
                    [commandBuffer presentDrawable:drawable];
                [commandBuffer commit];
                [commandBuffer waitUntilScheduled];
            }
        });
    }

In the above example, you would duplicate the renderEncoder-related code for each calculation you want to perform in parallel. I don't see how this would benefit your code example, though, since one operation appears to depend on another. Probably the best you can hope for, then, is the code provided by warrenm, even though that doesn't really qualify as parallel rendering.

唯我独甜
#3 · 2019-02-10 03:09

The key to taking advantage of the GPU's parallelism in this case is to let it manage the outer loop for you. Instead of invoking the kernel once for the entire data vector, we'll invoke it once for each element of the output vector. The kernel function simplifies to this:

kernel void convolve(const device float *dataVector [[ buffer(0) ]],
                     constant int &dataSize [[ buffer(1) ]],
                     constant float *filterVector [[ buffer(2) ]],
                     constant int &filterSize [[ buffer(3) ]],
                     device float *outVector [[ buffer(4) ]],
                     uint id [[ thread_position_in_grid ]])
{
    // Each thread computes exactly one output element.
    float sum = 0.0;
    for (int i = 0; i < filterSize; ++i) {
        sum += dataVector[id + i] * filterVector[i];
    }
    outVector[id] = sum;
}

In order to dispatch this work, we select a threadgroup size based on the thread execution width recommended by the compute pipeline state. The one tricky thing here is making sure that there's enough padding in the input and output buffers so that we can slightly overrun the actual size of the data. This does cause us to waste a small amount of memory and computation, but saves us the complexity of doing a separate dispatch just to compute the convolution for the elements at the end of the buffer.

// We should ensure here that the data buffer and output buffer each have a size that is a multiple of
// the compute pipeline's threadExecutionWidth, by padding the amount we allocate for each of them.
// After execution, we ignore the extraneous elements in the output buffer beyond the first (dataCount - filterCount + 1).

let iterationCount = dataCount - filterCount + 1
let threadsPerThreadgroup = MTLSize(width: min(iterationCount, computePipeline.threadExecutionWidth), height: 1, depth: 1)
let threadgroups = (iterationCount + threadsPerThreadgroup.width - 1) / threadsPerThreadgroup.width
let threadgroupsPerGrid = MTLSize(width: threadgroups, height: 1, depth: 1)

// The kernel declares its size parameters as 32-bit ints, so pass Int32 values rather than (64-bit) Int.
var dataCount32 = Int32(dataCount)
var filterCount32 = Int32(filterCount)

let commandEncoder = commandBuffer.makeComputeCommandEncoder()!
commandEncoder.setComputePipelineState(computePipeline)
commandEncoder.setBuffer(dataBuffer, offset: 0, index: 0)
commandEncoder.setBytes(&dataCount32, length: MemoryLayout<Int32>.stride, index: 1)
commandEncoder.setBuffer(filterBuffer, offset: 0, index: 2)
commandEncoder.setBytes(&filterCount32, length: MemoryLayout<Int32>.stride, index: 3)
commandEncoder.setBuffer(outBuffer, offset: 0, index: 4)
commandEncoder.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()
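
For completeness, this is roughly how the padded allocation described above might look; a sketch assuming `device`, `dataCount`, and `filterCount` from the surrounding setup code, with shared storage for simplicity:

let w = computePipeline.threadExecutionWidth
let outputCount = dataCount - filterCount + 1
// One output slot per thread in the (rounded-up) grid:
let paddedOutputCount = ((outputCount + w - 1) / w) * w
// The furthest read is dataVector[id + filterCount - 1] for the last padded thread:
let paddedDataCount = paddedOutputCount + filterCount - 1

let dataBuffer = device.makeBuffer(length: paddedDataCount * MemoryLayout<Float>.stride,
                                   options: .storageModeShared)!
let outBuffer = device.makeBuffer(length: paddedOutputCount * MemoryLayout<Float>.stride,
                                  options: .storageModeShared)!
// (Copy the input samples into dataBuffer.contents() before dispatching.)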

In my experiments, this parallelized approach runs 400-1000x faster than the serial version in the question. I'm curious to hear how it compares to your CPU implementation.
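
To compare the results themselves, commit the command buffer, wait for completion, and read back only the valid prefix of the output buffer; a sketch, assuming shared storage as above:

commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Only the first (dataCount - filterCount + 1) elements are meaningful; the rest is padding.
let outputCount = dataCount - filterCount + 1
let outPointer = outBuffer.contents().bindMemory(to: Float.self, capacity: outputCount)
let result = Array(UnsafeBufferPointer(start: outPointer, count: outputCount))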
