[Django]-AccessDenied when calling the CreateMultipartUpload operation in Django using django-storages and boto3

47👍

It turns out that I had to specify a policy whose Resource covers the objects inside the bucket (the /* suffix), not just the bucket itself.

Before

...
"Resource": [
            "arn:aws:s3:::www.xyz.com"
            ]
...

After

...
"Resource": [
            "arn:aws:s3:::www.xyz.com/*"
            ]
...
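
If you want to verify the fix outside Django, here is a minimal boto3 sketch (the bucket name and key below are placeholders, and credentials are assumed to come from your environment) that forces a multipart upload and therefore exercises the same CreateMultipartUpload call:

import io

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client('s3')  # assumes credentials are configured in the environment

# A payload just above the threshold guarantees the multipart code path.
payload = io.BytesIO(b'0' * 6 * 1024 * 1024)
config = TransferConfig(multipart_threshold=5 * 1024 * 1024)

# Replace the bucket and key with your own values.
s3.upload_fileobj(payload, 'www.xyz.com', 'permission-test.bin', Config=config)

If this succeeds but Django still fails, the problem is likely one of the other causes below (ACLs, encryption, or the key prefix).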

8👍

I also got this error, but I was making a different mistake. The django-storages function was creating the object with an ACL of “public-read”. This is the default, which makes sense for a web framework, and indeed it is what I intended, but I had not included the ACL-related permissions in my IAM policy:

  • PutObjectAcl
  • PutObjectVersionAcl

This policy worked for me (it is based on this one):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucketMultipartUploads",
                "s3:AbortMultipartUpload",
                "s3:PutObjectVersionAcl",
                "s3:DeleteObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*",
                "arn:aws:s3:::bucketname"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": "arn:aws:s3:::bucketname"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*"
        }
    ]
}
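
For django-storages specifically, whether the PutObjectAcl permissions are needed at all depends on the AWS_DEFAULT_ACL setting. A hedged settings.py sketch (newer django-storages releases default this to None, in which case no ACL call is made and the bucket's own permissions apply):

# settings.py (values are illustrative)
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'bucketname'

# 'public-read' requires s3:PutObjectAcl / s3:PutObjectVersionAcl in the
# policy above; set this to None to skip the ACL call entirely.
AWS_DEFAULT_ACL = 'public-read'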

6👍

Another possible cause is that your bucket has encryption switched on. You’ll want a second statement adding kms:GenerateDataKey and kms:Decrypt. Here’s my statement for that:

        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
               "kms:Decrypt",
               "kms:GenerateDataKey"
            ],
            "Resource": "*"
        }

Note that I am using built-in keys, not CMKs. See AWS docs here for more.
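
If your bucket uses a customer managed key rather than the AWS managed key, you may also need to name that key on upload. A hedged sketch using the django-storages AWS_S3_OBJECT_PARAMETERS setting, which is passed through to boto3 as ExtraArgs (the key ARN is a placeholder):

# settings.py (the KMS key ARN below is a placeholder)
AWS_S3_OBJECT_PARAMETERS = {
    'ServerSideEncryption': 'aws:kms',
    'SSEKMSKeyId': 'arn:aws:kms:us-east-1:111122223333:key/your-key-id',
}

The same two parameters can be passed directly to boto3's upload_fileobj via ExtraArgs if you are not going through django-storages.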

1👍

I was receiving this same error (An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied) using the following Python script:

import datetime
import logging

import boto3
from boto3.s3.transfer import TransferConfig


def create_boto3_client(s3_id, s3_secret_key):
    try:
        logging.info('####### Creating boto3 client... #######')
        s3_client = boto3.resource(
            's3',
            aws_access_key_id=s3_id,
            aws_secret_access_key=s3_secret_key,
        )
        logging.info('####### Successfully created boto3 client #######')
        return s3_client
    except Exception:
        logging.error('####### Failed to create boto3 client #######')
        raise


def upload_file_to_s3(s3_client, s3_bucket, aws_path, blob):
    ul_start = ul_end = datetime.datetime.now()
    ul_duration = None
    try:
        logging.info(f'####### Starting file upload at {ul_start} #######')
        config = TransferConfig(
            multipart_threshold=1024 * 25,
            max_concurrency=10,
            multipart_chunksize=1024 * 25,
            use_threads=True,
        )
        s3_client.Bucket(s3_bucket).upload_fileobj(blob, Key=aws_path, Config=config)
        ul_end = datetime.datetime.now()
        ul_duration = str(ul_end - ul_start)
        logging.info(f'####### File uploaded to AWS S3 bucket at {ul_end} #######')
        logging.info(f'####### Upload duration: {ul_duration} #######')
    except Exception as e:
        logging.error(f'####### Failed to upload file to AWS S3: {e} #######')
    return ul_start, ul_end, ul_duration

In my case, the aws_path (the Key passed to upload_fileobj) was incorrect: it pointed to a path that the s3_client did not have access to.
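
A quick way to rule this out is to list the prefix you intend to write to with the same credentials; if that already fails, the problem is the path/policy combination rather than the multipart upload itself. A small sketch (bucket and prefix are placeholders):

import boto3

s3 = boto3.client('s3')  # same credentials the upload uses

# An AccessDenied here points at the bucket/prefix permissions,
# independent of CreateMultipartUpload.
resp = s3.list_objects_v2(Bucket='bucketname', Prefix='intended/aws_path/', MaxKeys=1)
print(resp.get('KeyCount', 0))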

0👍

FYI, another cause of this error is that the destination bucket does not have the proper bucket policy.

For my use case, I was trying to copy S3 files from a bucket in AWS Account A to a bucket in AWS Account B. I created a role and policy that enabled this, but I did not add a bucket policy that allowed the outside AWS role to write to the destination bucket. I was able to fix the issue by following this AWS doc: https://aws.amazon.com/premiumsupport/knowledge-center/copy-s3-objects-account/

In case the above link breaks, the relevant content from that page is reproduced below:

Important: Objects in Amazon S3 are no longer automatically owned by the AWS account that uploads it. By default, any newly created buckets now have the Bucket owner enforced setting enabled. It’s also a best practice to use the Bucket owner enforced setting when changing Object Ownership. However, note that this option disables all bucket ACLs and ACLs on any objects in your bucket.

With the Bucket owner enforced setting in S3 Object Ownership, all objects in an Amazon S3 bucket are automatically owned by the bucket owner. The Bucket owner enforced feature also disables all access control lists (ACLs), which simplifies access management for data stored in S3. However, for existing buckets, an Amazon S3 object is still owned by the AWS account that uploaded it, unless you explicitly disable the ACLs. To change object ownership of objects in an existing bucket, see How can I change the ownership of publicly owned objects in my S3 bucket?

If your existing method of sharing objects relies on using ACLs, then identify the principals that use ACLs to access objects. For more information about how to review permissions before disabling any ACLs, see Prerequisites for disabling ACLs.

If you can’t disable your ACLs, then follow these steps to take ownership of objects until you can adjust your bucket policy:

  1. In the source account, create an AWS Identity and Access Management (IAM) customer managed policy that grants an IAM identity (user or role) proper permissions. The IAM user must have access to retrieve objects from the source bucket and put objects back into the destination bucket. You can use an IAM policy similar to the following:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET",
                "arn:aws:s3:::source-DOC-EXAMPLE-BUCKET/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET",
                "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*"
            ]
        }
    ]
}

Note: This example IAM policy includes only the minimum required permissions for listing objects and copying objects across buckets in different accounts. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging. If you experience an error, try performing these steps as an admin user.

  2. In the source account, attach the customer managed policy to the IAM identity that you want to use to copy objects to the destination bucket.

  3. In the destination account, set S3 Object Ownership on the destination bucket to bucket owner preferred. After you set S3 Object Ownership, new objects uploaded with the access control list (ACL) set to bucket-owner-full-control are automatically owned by the bucket’s account.

  4. In the destination account, modify the bucket policy of the destination bucket to grant the source account permissions for uploading objects. Additionally, include a condition in the bucket policy that requires object uploads to set the ACL to bucket-owner-full-control. You can use a statement similar to the following:

Note: Replace destination-DOC-EXAMPLE-BUCKET with the name of the destination bucket. Then, replace arn:aws:iam::222222222222:user/Jane with the Amazon Resource Name (ARN) of the IAM identity from the source account.

{
    "Version": "2012-10-17",
    "Id": "Policy1611277539797",
    "Statement": [
        {
            "Sid": "Stmt1611277535086",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::222222222222:user/Jane"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "Stmt1611277877767",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::222222222222:user/Jane"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::destination-DOC-EXAMPLE-BUCKET"
        }
    ]
}

Note: This example bucket policy includes only the minimum required permissions for uploading an object with the required ACL. You must customize the allowed S3 actions according to your use case. For example, if the user must copy objects that have object tags, then you must also grant permissions for s3:GetObjectTagging.

  5. After you configure the IAM policy and bucket policy, the IAM identity from the source account must upload objects to the destination bucket. Make sure that the ACL is set to bucket-owner-full-control. For example, the source IAM identity must run the cp AWS CLI command with the --acl option:

aws s3 cp s3://source-DOC-EXAMPLE-BUCKET/object.txt s3://destination-DOC-EXAMPLE-BUCKET/object.txt --acl bucket-owner-full-control
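
For completeness, a boto3 equivalent of that CLI command, assuming the same source-account credentials (bucket names are the placeholders from the doc):

import boto3

s3 = boto3.client('s3')  # credentials of the source-account IAM identity

# copy() accepts the ACL via ExtraArgs, matching the bucket policy's
# s3:x-amz-acl condition above.
s3.copy(
    CopySource={'Bucket': 'source-DOC-EXAMPLE-BUCKET', 'Key': 'object.txt'},
    Bucket='destination-DOC-EXAMPLE-BUCKET',
    Key='object.txt',
    ExtraArgs={'ACL': 'bucket-owner-full-control'},
)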

👤Abe

0👍

In my case, uploading to S3 from GitHub Actions was failing with a similar error: An error occurred (AccessDenied) when calling the CreateMultipartUpload operation: Access Denied.

I validated the policies attached to the IAM user and the S3 bucket; they were fine, and an identical setup worked with a different IAM user and bucket.

Since GitHub does not let you view secrets after they are added, I rotated the security credentials for the IAM user and updated the GitHub secrets with the new values. That fixed it.
