How To Upload File To S3 Boto3
In this tutorial, we will learn about 4 different ways to upload a file to S3 using Python. This is a continuation of the series where we are writing scripts to work with AWS S3 in Python.
Setting up permissions for S3
For this tutorial to work, we will need an IAM user who has access to upload a file to S3. We can configure this user on our local machine using the AWS CLI, or we can use its credentials directly in a Python script. We have already covered how to create an IAM user with S3 access. If you do not have this user set up, please follow that blog first and then continue with this one.
Upload a file to S3 using the S3 client
One of the most common ways to upload files from your local machine to S3 is using the client class for S3. You need to provide the bucket name, the file you want to upload, and the object name to use in S3.
import boto3
from pprint import pprint
import pathlib
import os


def upload_file_using_client():
    """
    Uploads file to S3 bucket using S3 client object
    :return: None
    """
    s3 = boto3.client("s3")
    bucket_name = "binary-guy-frompython-1"
    object_name = "sample1.txt"
    file_name = os.path.join(pathlib.Path(__file__).parent.resolve(), "sample_file.txt")

    response = s3.upload_file(file_name, bucket_name, object_name)
    pprint(response)  # prints None
When you run this function, it will upload "sample_file.txt" to S3, and it will have the name "sample1.txt" in S3. We can verify this in the console.
In the above code, we have not specified any user credentials. In such cases, boto3 uses the default AWS CLI profile set up on your local machine. You can also specify which profile boto3 should use if you have multiple profiles on your machine. All you need to do is add the line below to your code.
boto3.setup_default_session(profile_name='PROFILE_NAME_FROM_YOUR_MACHINE')
Another option is to specify the access key ID and secret access key in the code itself. This is not a recommended approach, and I strongly believe using IAM credentials directly in code should be avoided in most cases. In case you have to do this, you can use the access key ID and secret access key in code as shown below.
s3 = boto3.client("s3", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
Upload a file to S3 using the S3 resource class
Another option to upload files to S3 using Python is to use the S3 resource class.
def upload_file_using_resource():
    """
    Uploads file to S3 bucket using S3 resource object.
    This is useful when you are dealing with multiple buckets at the same time.
    :return: None
    """
    s3 = boto3.resource("s3")
    bucket_name = "binary-guy-frompython-2"
    object_name = "sample2.txt"
    file_name = os.path.join(pathlib.Path(__file__).parent.resolve(), "sample_file.txt")

    bucket = s3.Bucket(bucket_name)
    response = bucket.upload_file(file_name, object_name)
    print(response)  # prints None
The above code will also upload files to S3. This approach is especially useful when you are dealing with multiple buckets: you can create different bucket objects and use them to upload files.
Uploading a file to S3 using put_object
So far we have seen two ways to upload files to S3. Both of them are easy, but we do not have much control over the files we are uploading. What if we want to add encryption when we upload files to S3, or decide which kind of access level our file has (we will dive deep into file/object access levels in another blog)?
When we need such fine-grained control while uploading files to S3, we can use the put_object function as shown in the code below.
def upload_file_to_s3_using_put_object():
    """
    Uploads file to S3 using put_object function of resource object.
    The same function is available for the S3 client object as well.
    put_object gives us many more options and we can set the object
    access policy, tags, encryption etc.
    :return: None
    """
    s3 = boto3.resource("s3")
    bucket_name = "binary-guy-frompython-2"
    object_name = "sample_using_put_object.txt"
    file_name = os.path.join(pathlib.Path(__file__).parent.resolve(), "sample_file.txt")

    bucket = s3.Bucket(bucket_name)
    # Body must be the file's bytes (or a file-like object), not its path
    with open(file_name, "rb") as data:
        response = bucket.put_object(
            ACL="private",
            Body=data,
            ServerSideEncryption="AES256",
            Key=object_name,
            Metadata={"env": "dev", "owner": "binary guy"},
        )
    print(response)  # prints s3.Object(bucket_name='binary-guy-frompython-2', key='sample_using_put_object.txt')
When we run the above code, we can see that our file has been uploaded to S3. But we also need to check whether our file has the other properties mentioned in our code. In S3, to check object details, click on that object. When we click on "sample_using_put_object.txt" we will see the details below.
We can see that our object is encrypted and our tags are showing in the object metadata. There are many other options that you can set for objects using the put_object function. You can find those details in the boto3 documentation for put_object.
Uploading byte data to S3
In some cases, you may have byte data as the output of some process and you want to upload that to S3. You might think that it's easy: we write that data to a file and upload that file to S3. But what if there is a simple way where you do not have to write the byte data to a file at all?
Of course, there is. We use the upload_fileobj function to directly upload byte data to S3. In the code below, I am reading a file in binary format and then using that data to create an object in S3. But you can have any binary data written to S3 using this approach.
def upload_file_to_s3_using_file_object():
    """
    Uploads file to S3 using upload_fileobj function of S3 client object.
    A similar function is available for the S3 resource object as well.
    In this case, instead of copying the file, we open that file and copy
    its data to S3. This can be useful when you have binary data already
    created as the output of some process. We do not have to write this
    binary data to a local file and then upload that file; we can use
    the upload_fileobj function directly.
    :return: None
    """
    s3 = boto3.client("s3")
    bucket_name = "binary-guy-frompython-1"
    object_name = "sample_file_object.txt"
    file_name = os.path.join(pathlib.Path(__file__).parent.resolve(), "sample_file.txt")

    with open(file_name, "rb") as data:
        s3.upload_fileobj(data, bucket_name, object_name)
Let us check whether this has created an object in S3 or not.
As we can see, it has successfully created an S3 object using our byte data.
Conclusion
In this blog, we have learned 4 different ways to upload files and binary data to S3 using Python. You can get all the code in this blog at GitHub. I hope you found this useful. In the next blog, we will learn different ways to list objects in an S3 bucket. See you soon.
Source: https://binaryguy.tech/aws/s3/how-to-upload-a-file-to-s3-using-python/