
AccessDenied error message when calling an AWS S3 bucket from a serverless Lambda function using boto3


I am building a serverless application on Amazon AWS. I am currently testing boto3 by fetching the list of buckets from my S3 account. Although my IAM user has the AdministratorAccess policy, every time I invoke my Lambda function it fails with an error message. Can someone help me? Thanks for your attention. Here is my error message:

{
    "stackTrace": [
        [
            "/var/task/handler.py",
            9,
            "hello",
            "for bucket in s3.buckets.all():"
        ],
        [
            "/var/runtime/boto3/resources/collection.py",
            83,
            "__iter__",
            "for page in self.pages():"
        ],
        [
            "/var/runtime/boto3/resources/collection.py",
            161,
            "pages",
            "pages = [getattr(client, self._py_operation_name)(**params)]"
        ],
        [
            "/var/runtime/botocore/client.py",
            312,
            "_api_call",
            "return self._make_api_call(operation_name, kwargs)"
        ],
        [
            "/var/runtime/botocore/client.py",
            605,
            "_make_api_call",
            "raise error_class(parsed_response, operation_name)"
        ]
    ],
    "errorType": "ClientError",
    "errorMessage": "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied"
}

Here is my Lambda function, handler.py:

import json
import boto3


def hello(event, context):

    s3 = boto3.resource('s3')

    # Line 9 in the stack trace above: this is the call that fails
    for bucket in s3.buckets.all():
        print(bucket.name)

    body = {
        "message": "gg"
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response

Here is my serverless.yml file:

# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
#    docs.serverless.com
#
# Happy Coding!

service: serverless-boto3

# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"

provider:
  name: aws
  runtime: python2.7

# you can overwrite defaults here
#  stage: dev
#  region: us-east-1

# you can add statements to the Lambda function's IAM Role here
#  iamRoleStatements:
#    - Effect: "Allow"
#      Action:
#        - "s3:ListBucket"
#      Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ]  }
#    - Effect: "Allow"
#      Action:
#        - "s3:PutObject"
#      Resource:
#        Fn::Join:
#          - ""
#          - - "arn:aws:s3:::"
#            - "Ref" : "ServerlessDeploymentBucket"
#            - "/*"

# you can define service wide environment variables here
#  environment:
#    variable1: value1

# you can add packaging information here
#package:
#  include:
#    - include-me.py
#    - include-me-dir/**
#  exclude:
#    - exclude-me.py
#    - exclude-me-dir/**

functions:
  hello:
    handler: handler.hello

#    The following are a few example events you can configure
#    NOTE: Please make sure to change your handler code to work with those events
#    Check the event documentation for details
    events:
      - http:
          path: users/create
          method: get
#      - s3: ${env:BUCKET}
#      - schedule: rate(10 minutes)
#      - sns: greeter-topic
#      - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
#      - alexaSkill
#      - alexaSmartHome: amzn1.ask.skill.xx-xx-xx-xx
#      - iot:
#          sql: "SELECT * FROM 'some_topic'"
#      - cloudwatchEvent:
#          event:
#            source:
#              - "aws.ec2"
#            detail-type:
#              - "EC2 Instance State-change Notification"
#            detail:
#              state:
#                - pending
#      - cloudwatchLog: '/aws/lambda/hello'
#      - cognitoUserPool:
#          pool: MyUserPool
#          trigger: PreSignUp

#    Define function environment variables here
#    environment:
#      variable2: value2

# you can add CloudFormation resource templates here
#resources:
#  Resources:
#    NewResource:
#      Type: AWS::S3::Bucket
#      Properties:
#        BucketName: my-new-bucket
#  Outputs:
#     NewOutput:
#       Description: "Description for the output"
#       Value: "Some output value"

2 Answers

  • 0

    In your serverless.yml you have not granted the Lambda function any permission to access S3. The examples in the template are commented out.

    A Lambda function gets its permissions to access AWS resources from an IAM role. In the AWS Management Console, select your Lambda function, scroll down, and look for the Execution role. That shows the role that was created for your function.

    Manage Permissions: Using an IAM Role (Execution Role)

    Each Lambda function has an IAM role (execution role) associated with it. You specify the IAM role when you create your Lambda function. The permissions you grant this role determine what AWS Lambda can do when it assumes the role. There are two types of permissions you grant the IAM role:

    • If your Lambda function code accesses other AWS resources, such as reading an object from an S3 bucket or writing logs to CloudWatch Logs, you need to grant the role permissions for the relevant Amazon S3 and CloudWatch actions.

    • If the event source is stream-based (Amazon Kinesis Streams and DynamoDB streams), AWS Lambda polls the streams on your behalf. AWS Lambda needs permission to poll the stream and read new records from it, so you need to grant the role the relevant permissions.

    IAM Policies for AWS Lambda
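
    The commented-out iamRoleStatements example in the question's serverless.yml is the usual place to fix this. A minimal sketch, assuming the only extra permission needed is listing buckets (the ListBuckets API call made by s3.buckets.all() maps to the s3:ListAllMyBuckets action, which only accepts "*" as its resource):

```yaml
provider:
  name: aws
  runtime: python2.7
  # Grant the function's execution role the permission that
  # s3.buckets.all() (the ListBuckets API) requires.
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:ListAllMyBuckets"
      Resource: "*"
```

    After redeploying with `serverless deploy`, the generated execution role includes this statement and the AccessDenied error for ListBuckets should go away.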

  • 2

    I already had the permissions, but adding the following resources block solved the problem for me:

    Resources:
      S3Bucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: ${self:custom.bucketName}
      S3BucketPermissions:
        Type: AWS::S3::BucketPolicy
        DependsOn: S3Bucket
        Properties:
          Bucket: ${self:custom.bucketName}
          PolicyDocument:
            Statement:
              - Principal: "*"
                Action:
                  - s3:PutObject
                  - s3:PutObjectAcl
                Effect: Allow
                Sid: "AddPerm"
                Resource: arn:aws:s3:::${self:custom.bucketName}/*
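
    For reference, the same bucket policy can also be built and applied programmatically with boto3's put_bucket_policy. A minimal sketch (the bucket name is a placeholder; note that Principal: "*" makes these actions public, which you may want to restrict):

```python
import json


def build_put_policy(bucket_name):
    """Build a policy document equivalent to the PolicyDocument above."""
    return {
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",  # public: consider narrowing this
                "Action": ["s3:PutObject", "s3:PutObjectAcl"],
                "Resource": "arn:aws:s3:::%s/*" % bucket_name,
            }
        ]
    }


# Applying it requires credentials that allow s3:PutBucketPolicy:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_policy(Bucket="my-bucket",
#                        Policy=json.dumps(build_put_policy("my-bucket")))
```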
    
