Serverless - Numpy - Unable to find good bind path

Published 2019-05-21 02:54

Question:

I've been beating on this for over a week, have been through all sorts of forum issues and posts, and cannot resolve it. I'm trying to package numpy in a function, building requirements individually (I have multiple functions with multiple requirements that I'd like to keep separate).

Environment:
Windows 10 Home

Docker Toolbox for Windows:

Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24302
 Built:         Fri Mar 23 08:31:36 2018
 OS/Arch:       windows/amd64
 Experimental:  false
 Orchestrator:  swarm

Server: Docker Engine - Community
 Engine:
  Version:      18.09.0
  API version:  1.39 (minimum version 1.12)
  Go version:   go1.10.4
  Git commit:   4d60db4
  Built:        Wed Nov  7 00:52:55 2018
  OS/Arch:      linux/amd64
  Experimental: false

Serverless Version:

serverless version 6.4.1
serverless-python-requirements version 6.4.1

Directory Structure:

|-test
  |-env.yml
  |-serverless.yml
  |-Dockerfile
  |-functions
    |-f1
      |-index.py
      |-requirements.txt
      |-sub_function_1.py
      |-sub_function_2.py
    |-f2
      |-index.py
      |-requirements.txt
      |-sub_function_3.py
      |-sub_function_4.py

serverless.yml

service: test 
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    zip: true                  
    dockerFile: Dockerfile      
    dockerizePip: non-linux     
provider:
  name: aws
  runtime: python3.6
  stage: dev
  environment: ${file(./env.yml):${opt:stage, self:provider.stage}.env}
  region: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.region}
  profile: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.profile}
package:
  individually: true
functions:
  f1:
    handler: index.handler
    module: functions/f1
  f2:
    handler: index.handler
    module: functions/f2

I have my project files in C:\Serverless\test. I run npm init, followed by npm i --save serverless-python-requirements, accepting all defaults. I get the following on sls deploy -v, even though I've added C:\ to Shared Folders on the running default VM in VirtualBox and selected auto-mount and permanent.
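For reference, the commands described above were roughly the following (run from the project root C:\Serverless\test, accepting the defaults at each prompt):

    npm init
    npm i --save serverless-python-requirements
    sls deploy -v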

If I comment out both dockerizePip and dockerFile I get the following, as expected based on here and other SO posts (numpy's compiled extensions built on Windows won't import on Lambda's Linux runtime):

 Serverless: Invoke invoke
{
    "errorMessage": "Unable to import module 'index'"
}

If I comment out only dockerFile I get:

Serverless: Docker Image: lambci/lambda:build-python3.6

      Error --------------------------------------------------

    error during connect: Get https://XXXXXX/v1.37/version: dial tcp XXXXXXXXXX: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

    at dockerCommand (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
    at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:100:3)

With the Dockerfile:

# AWS Lambda execution environment is based on Amazon Linux 1
FROM amazonlinux:1

# Install Python 3.6
RUN yum -y install python36 python36-pip

# Install your dependencies
RUN curl -s https://bootstrap.pypa.io/get-pip.py | python3
RUN yum -y install python3-devel mysql-devel gcc

# Set the same WORKDIR as default image
RUN mkdir /var/task
WORKDIR /var/task

I get:

Serverless: Building custom docker image from Dockerfile...
Serverless: Docker Image: sls-py-reqs-custom

  Error --------------------------------------------------

  Unable to find good bind path format

     For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

  Stack Trace --------------------------------------------

Error: Unable to find good bind path format
    at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:142:9)
    at installRequirements (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:152:7)
    at installRequirementsIfNeeded (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:451:3)
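For reference, the image build the plugin reports above should be roughly equivalent to this manual command (the tag name is taken from the plugin's own log; I have not verified this is exactly what the plugin runs):

    # Build the custom image from the Dockerfile in the project root
    docker build -t sls-py-reqs-custom -f Dockerfile .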

If I move my project to C:\Users\, I get this instead:

Serverless: Docker Image: sls-py-reqs-custom
Serverless: Trying bindPath /c/Users/Serverless/test/.serverless/requirements (run,--rm,-v,/c/Users/Serverless/test/.serverless/requirements:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt

  Error --------------------------------------------------

  docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
  See 'docker run --help'.


     For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.

  Stack Trace --------------------------------------------

Error: docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.

    at dockerCommand (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
    at getDockerUid (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:162:14)
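For reference, the bind-path probe in the "Trying bindPath" log line above amounts to roughly this command (reconstructed from that line):

    # Mount the requirements folder into an alpine container and list the file
    docker run --rm \
      -v /c/Users/Serverless/test/.serverless/requirements:/test \
      alpine ls /test/requirements.txt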

I've seen the Makefile-style recommendation from @brianz here, but I'm not sure how to adapt it to this setup (Makefiles are not my strong suit; my rough understanding of the approach is sketched below). I'm a bit at a loss as to what to do next and advice would be greatly appreciated. TIA.
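As far as I understand it, the Makefile-style approach boils down to shell commands along these lines, run instead of the plugin (a sketch only, using the lambci/lambda:build-python3.6 image mentioned above, not the exact Makefile from that post):

    # Build Linux-compatible requirements for f1 by running pip inside the
    # Lambda build image, installing into the function's own directory
    docker run --rm \
      -v /c/Serverless/test/functions/f1:/var/task \
      lambci/lambda:build-python3.6 \
      pip install -r requirements.txt -t .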

Answer 1:

I was unable to make the plugin work, but I found a better solution anyhow: Lambda Layers. This is a bonus because it reduces the size of the Lambda package and allows code/file reuse. There is a pre-built Lambda layer for numpy and scipy that you can use, but I built my own to show myself how it all works. Here's how I made it work:

Create a layer package:

  1. Launch an EC2 instance (or any Linux machine, e.g. Ubuntu) - this is needed so the runtime binaries are compiled for the correct platform
  2. Make a dependencies package zip - it must use the directory structure python/lib/python3.6/site-packages so Python can find the packages at runtime

    mkdir -p tmpdir/python/lib/python3.6/site-packages 
    pip install -r requirements.txt --no-deps -t tmpdir/python/lib/python3.6/site-packages 
    cd tmpdir && zip -r ../py_dependencies.zip .
    cd .. 
    rm -r tmpdir
    
  3. Push layer zip to AWS - requires latest awscli

    sudo pip install awscli --upgrade --user
    sudo aws lambda publish-layer-version \
    --layer-name py_dependencies \
    --description "Python 3.6 dependencies [numpy=0.15.4]" \
    --license-info "MIT" \
    --compatible-runtimes python3.6 \
    --zip-file fileb://py_dependencies.zip \
    --profile python_dev_serverless
    
  4. To use it in any function that requires numpy, just use the ARN that is shown in the console or during the upload above (it can also be looked up later with the CLI command shown after this list)

    f1:
      handler: index.handler_f_use_numpy
      include:
        - functions/f_use_numpy.py
      layers:
        - arn:aws:lambda:us-west-2:XXXXX:layer:py_dependencies:1
    
  5. As an added bonus, you can push common files like constants to a layer as well. Here's how I read them so the same code works both locally on Windows and on the Lambda (a sketch of packaging such a layer follows after this list):

    import json
    import platform

    # Set common path: on Lambda, layer contents are extracted under /opt
    COMMON_PATH = "../../layers/common/"
    if platform.system() == "Linux":
        COMMON_PATH = "/opt/common/"

    def handler_common(event, context):
        # Read from a constants.json file shipped in the common layer
        with open(COMMON_PATH + 'constants.json') as f:
            return json.load(f)
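For completeness, here is a rough sketch of how that common-files layer might be packaged (the paths are assumptions based on the layout above; layer contents are extracted under /opt at runtime, so a top-level common/ directory becomes /opt/common/):

    # Sketch: zip layers/common/ (e.g. constants.json) as its own layer
    mkdir -p tmpdir/common
    cp layers/common/constants.json tmpdir/common/
    cd tmpdir && zip -r ../py_common.zip . && cd ..
    rm -r tmpdir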
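And to look a layer's ARN up again later (as mentioned in step 4), the published versions can be listed with the AWS CLI:

    # Lists published versions of the py_dependencies layer, including each LayerVersionArn
    aws lambda list-layer-versions \
        --layer-name py_dependencies \
        --profile python_dev_serverless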
    


Answer 2:

When I got the same issue, I opened Docker, went to Settings > Shared Drives, chose to reset credentials, then applied my changes, and this cleared the error.