Upload the entire contents of a Bitbucket repo to S3 using Bitbucket Pipelines

I'm using Bitbucket Pipelines. I want it to push the entire contents of my repo (which is very small) to S3. I don't want to zip it, push it to S3, and then unzip it. I just want it to take the existing file/folder structure in my Bitbucket repo and push that to S3.

What should the yaml file and .py file look like?

Here is the current yaml file:

image: python:3.5.1

pipelines:
  branches:
    master:
      - step:
          script:
            # - apt-get update # required to install zip
            # - apt-get install -y zip # required if you want to zip repository objects
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            # html files
            - python s3_upload.py my-bucket-name html/index_template.html html/index_template.html # run the deployment script
            # Example command line parameters. Replace with your values
            #- python s3_upload.py bb-s3-upload SampleApp_Linux.zip SampleApp_Linux # run the deployment script

And here is my current python:

from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError

def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False
    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False
    return True


def main():

    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)

if __name__ == "__main__":
    main()

This requires me to list every file in the repo as a separate command in the yaml file. I just want it to grab everything and upload it to S3.

4 Answers

  • 2

    The following works for me. Here is my yaml file, which uses a docker image that ships with the official aws command line tool: cgswong/aws. Very handy, and more capable than the one Bitbucket recommends (abesiyo/s3).

    image: cgswong/aws
    
    pipelines:
      branches:
        master:
          - step:
              script:
                - aws s3 --region "us-east-1" sync public/ s3://static-site-example.activo.com --cache-control "public, max-age=14400" --delete
    

    A few notes:

    • Make sure you enter your own S3 bucket name, not mine.

    • Set the correct source folder for your code; it can be the root '/' or a deeper folder, and everything under it will be synced.

    • The '--delete' option removes objects that have been deleted from the folder; decide whether you need it.

    • --cache-control lets you set Cache-Control header metadata on every file in the S3 bucket. Set it if you need it (a boto3 equivalent is sketched after this list).

    • Note that I've attached this command to any commit on the master branch; adjust as needed.
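
    If you stay with the boto3 approach from the question instead of the aws cli, the same Cache-Control header can be set per object through ExtraArgs on upload_file. A minimal sketch, where the file, bucket name and max-age are placeholders to replace with your own values:

    import boto3

    client = boto3.client('s3')
    # set Cache-Control metadata on the uploaded object, mirroring the --cache-control flag above
    client.upload_file(
        'public/index.html',                   # local file (placeholder)
        'static-site-example.activo.com',      # bucket name (placeholder)
        'index.html',                          # object key (placeholder)
        ExtraArgs={'CacheControl': 'public, max-age=14400'}
    )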

    Here is the full article: Continuous Deployment with Bitbucket Pipelines, S3, and CloudFront

  • 5

    Figured it out myself. Here is the python file, 's3_upload.py':

    from __future__ import print_function
    import os
    import sys
    import argparse
    import boto3
    #import zipfile
    from botocore.exceptions import ClientError
    
    def upload_to_s3(bucket, artefact, is_folder, bucket_key):
        try:
            client = boto3.client('s3')
        except ClientError as err:
            print("Failed to create boto3 client.\n" + str(err))
            return False
        if is_folder == 'true':
            for root, dirs, files in os.walk(artefact, topdown=False):
                print('Walking it')
                for file in files:
                    #add a check like this if you just want certain file types uploaded
                    #if file.endswith('.js'):
                    try:
                        print(file)
                        client.upload_file(os.path.join(root, file), bucket, os.path.join(root, file))
                    except ClientError as err:
                        print("Failed to upload artefact to S3.\n" + str(err))
                        return False
                    except IOError as err:
                        print("Failed to access artefact in this directory.\n" + str(err))
                        return False
                    #else:
                    #    print('Skipping file:' + file)
        else:
            print('Uploading file ' + artefact)
            client.upload_file(artefact, bucket, bucket_key)
        return True
    
    
    def main():
    
        parser = argparse.ArgumentParser()
        parser.add_argument("bucket", help="Name of the existing S3 bucket")
        parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
        parser.add_argument("is_folder", help="True if its the name of a folder")
        parser.add_argument("bucket_key", help="Name of file in bucket")
        args = parser.parse_args()
    
        if not upload_to_s3(args.bucket, args.artefact, args.is_folder, args.bucket_key):
            sys.exit(1)
    
    if __name__ == "__main__":
        main()
    

    And here is the bitbucket-pipelines.yml file:

    ---
    image: python:3.5.1
    
    pipelines:
      branches:
        dev:
          - step:
              script:
                - pip install boto3==1.4.1 # required for s3_upload.py
                - pip install requests
                # the first argument is the name of the existing S3 bucket to upload the artefact to
                # the second argument is the artefact to be uploaded
                # the third argument is if the artefact is a folder
                # the fourth argument is the bucket_key to use
                - python s3_emptyBucket.py dev-slz-processor-repo
                - python s3_upload.py dev-slz-processor-repo lambda true lambda
                - python s3_upload.py dev-slz-processor-repo node_modules true node_modules
                - python s3_upload.py dev-slz-processor-repo config.dev.json false config.json
        stage:
          - step:
              script:
                - pip install boto3==1.3.0 # required for s3_upload.py
                - python s3_emptyBucket.py staging-slz-processor-repo
                - python s3_upload.py staging-slz-processor-repo lambda true lambda
                - python s3_upload.py staging-slz-processor-repo node_modules true node_modules
                - python s3_upload.py staging-slz-processor-repo config.staging.json false config.json
        master:
          - step:
              script:
                - pip install boto3==1.3.0 # required for s3_upload.py
                - python s3_emptyBucket.py prod-slz-processor-repo
                - python s3_upload.py prod-slz-processor-repo lambda true lambda
                - python s3_upload.py prod-slz-processor-repo node_modules true node_modules
                - python s3_upload.py prod-slz-processor-repo config.prod.json false config.json
    

    As an example for the dev branch, it grabs everything in the 'lambda' folder, walks that folder's entire structure, and uploads each item it finds to the dev-slz-processor-repo bucket.
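
    Note that the script uses the local path directly as the S3 key, so a file such as lambda/handlers/foo.py (an illustrative path) ends up under the same path in the bucket. A stripped-down sketch of that walk-and-upload loop, assuming the script runs from the repository root on a Linux runner (upload_folder is just an illustrative name):

    import os
    import boto3

    def upload_folder(bucket, folder):
        """Mirror a local folder into an S3 bucket, keeping the same relative paths."""
        client = boto3.client('s3')
        for root, dirs, files in os.walk(folder):
            for name in files:
                local_path = os.path.join(root, name)
                # use forward slashes so the key matches the repo layout on any OS
                key = local_path.replace(os.sep, '/')
                client.upload_file(local_path, bucket, key)

    # e.g. upload_folder('dev-slz-processor-repo', 'lambda')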

    Finally, here is a helpful function, 's3_emptyBucket', that removes all objects from the bucket before the new ones are uploaded:

    from __future__ import print_function
    import os
    import sys
    import argparse
    import boto3
    #import zipfile
    from botocore.exceptions import ClientError
    
    def empty_bucket(bucket):
        try:
            resource = boto3.resource('s3')
        except ClientError as err:
            print("Failed to create boto3 resource.\n" + str(err))
            return False
        print("Removing all objects from bucket: " + bucket)
        resource.Bucket(bucket).objects.delete()
        return True
    
    
    def main():
    
        parser = argparse.ArgumentParser()
        parser.add_argument("bucket", help="Name of the existing S3 bucket to empty")
        args = parser.parse_args()
    
        if not empty_bucket(args.bucket):
            sys.exit(1)
    
    if __name__ == "__main__":
        main()
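
    One caveat: objects.delete() only removes current objects. If versioning is enabled on the bucket, old object versions and delete markers can be cleared through the object_versions collection instead; a minimal sketch:

    import boto3

    def empty_versioned_bucket(bucket):
        # also removes old object versions and delete markers in a versioned bucket
        boto3.resource('s3').Bucket(bucket).object_versions.delete()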
    
  • 1

    You can switch to using this docker image: https://hub.docker.com/r/abesiyo/s3/

    It runs quite well.

    bitbucket-pipelines.yml:

    image: abesiyo/s3
    
    pipelines:
        default:
           - step:
              script:
                 - s3 --region "us-east-1" rm s3://<bucket name>
                 - s3 --region "us-east-1" sync . s3://<bucket name>
    

    Also set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Bitbucket Pipelines settings.
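
    The aws cli and boto3 both read AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment automatically, so nothing else needs to be configured in the step. If you want to sanity-check the variables from a Python-based step (assuming boto3 is installed there), a quick sketch:

    import boto3

    # prints the IAM identity the pipeline is running as; fails fast if the
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY repository variables are missing
    print(boto3.client('sts', region_name='us-east-1').get_caller_identity()['Arn'])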

  • 1

    To deploy a static website to Amazon S3, I have this bitbucket-pipelines.yml configuration file:

    image: attensee/s3_website
    
    pipelines:
      default:
        - step:
            script:
              - s3_website push
    

    I'm using the attensee/s3_website docker image because it has the awesome s3_website tool installed. The s3_website configuration file (s3_website.yml) [create this file in the root of the repository in Bitbucket] looks like this:

    s3_id: <%= ENV['S3_ID'] %>
    s3_secret: <%= ENV['S3_SECRET'] %>
    s3_bucket: bitbucket-pipelines
    site : .
    

    We have to define the environment variables S3_ID and S3_SECRET in the environment variables section of the Bitbucket settings.

    Thanks to https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/ for the solution.
