Python AWS Boto3: How do I read files from an S3 bucket?

+5 votes

Using Boto3, my Python script downloads files from an S3 bucket, reads them, and writes their contents to a file called blank_file.txt.

My question is: how would this work the same way once the script runs inside an AWS Lambda function?

Aug 29, 2018 in AWS by datageek
• 2,530 points
313,515 views

14 answers to this question.

+3 votes
Best answer

You can use the following code,

import boto3
s3 = boto3.resource('s3')
obj = s3.Object(bucketname, itemname)
body = obj.get()['Body'].read()
answered Dec 7, 2018 by Nitesh

selected Dec 9, 2020 by MD
What is itemname here?
As far as I know, the itemname here is the file that is being fetched and read by the function.

itemname is the Key (string): the key of the object to get.

Thanks, @Mayur for your contribution.

Thanks it solved my issue.
+1 vote

AWS Lambda provides 512 MB of /tmp space by default. You can use that mount point to store downloaded S3 files or to create new ones. The commands to do so are below.

s3client.download_file(bucket_name, obj.key, '/tmp/' + filename)
...
blank_file = open('/tmp/blank_file.txt', 'w')

The working directory used by Lambda is /var/task, which is a read-only filesystem; you will not be able to create files there.

To learn more about AWS Lambda and its features in detail, check this out: https://www.youtube.com/watch?v=XjPUyGKRjZs

answered Aug 29, 2018 by Archana
• 4,170 points
+2 votes

You can download the file from the S3 bucket like this:

import boto3
bucketname = 'my-bucket' # replace with your bucket name
filename = 'my_image_in_s3.jpg' # replace with your object key
s3 = boto3.resource('s3')
s3.Bucket(bucketname).download_file(filename, 'my_localimage.jpg')
answered Dec 7, 2018 by Jino
+1 vote

Use this code to download the file.

import boto3
s3 = boto3.resource("s3")
srcFileName = "abc.txt"      # key of the object in S3
destFileName = "s3_abc.txt"  # local file to download to
bucketName = "mybucket001"
s3.Bucket(bucketName).download_file(srcFileName, destFileName)
answered Dec 7, 2018 by Nidhi
0 votes
import boto3
s3 = boto3.resource('s3')
bucket = s3.Bucket('test-bucket')
for obj in bucket.objects.all():
    key = obj.key
    body = obj.get()['Body'].read()
answered Dec 10, 2018 by Saptdvip
Does this read all the objects in the bucket? If yes, is there a way to send all these read objects to an SQS queue through the same Lambda?

Yes, you can! Have a look at this: 

https://github.com/tesera/lambda-s3-to-sqs

0 votes

If you are using the Node.js SDK, you can use data.Body.toString('ascii') to get the contents of the text file, assuming the text file was encoded in ASCII.
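In Python with Boto3, the equivalent step is decoding the bytes that read() returns. A minimal sketch, using io.BytesIO as a stand-in for the Body stream so it runs without AWS:

```python
import io

# Stand-in for obj.get()["Body"] - the decode step is identical either way
body = io.BytesIO(b"hello from s3")
contents = body.read().decode("ascii")
print(contents)  # hello from s3
```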

answered Dec 10, 2018 by ramesh
0 votes

This is the code I found; it can be used to read a file from an S3 bucket in a Lambda function:

import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    data = s3.get_object(Bucket='my_s3_bucket', Key='main.txt')
    contents = data['Body'].read()
    print(contents)
answered Dec 10, 2018 by Shuvodip Ghosh
0 votes
I found a related article; I am not sure whether it works or not:

https://faragta.com/aws-lambda/read-file-from-s3.html
answered Dec 10, 2018 by bugseeker
0 votes

If you are on the Node.js runtime, you can use a function like this to read the file (readFile, readFileContent, and onError are helpers you define yourself):

exports.handler = (event, context, callback) => {
     var bucketName = process.env.bucketName;
     var keyName = event.Records[0].s3.object.key;
     readFile(bucketName, keyName, readFileContent, onError);
};

answered Dec 10, 2018 by Suresh Rao
0 votes
Here is the Amazon documentation I found on the web:

https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html

It contains the code to do that.
answered Dec 10, 2018 by Girish
0 votes

All of the answers are partially right, but none completely answers the specific question the OP asked. I'm assuming the output file is also written to a second S3 bucket, since they are using Lambda. This code holds everything in an in-memory object, which needs to be considered:

import boto3
import io

# buckets
inbucket = 'my-input-bucket'
outbucket = 'my-output-bucket'

s3 = boto3.resource('s3')

outfile = io.StringIO()

# Print out bucket names (optional)
for bucket in s3.buckets.all():
    print(bucket.name)

# Pull data from every file in the inbucket
bucket = s3.Bucket(inbucket)
for obj in bucket.objects.all():
    x = obj.get()['Body'].read().decode()
    print(x)
    outfile.write(x)  # collect the contents in the in-memory buffer

# Generate output file and close it!
outobj = s3.Object(outbucket, 'outputfile.txt')
outobj.put(Body=outfile.getvalue())
outfile.close()

Check out "Amazon S3 Storage for SQL Server Databases" for setting up new Amazon S3 buckets

answered Mar 30, 2019 by awsdbaexpert
• 200 points
import boto3

s3 = boto3.resource('s3')
BUCKET_NAME = 'your-bucket-name'  # e.g. deletemetesting11

allFiles = s3.Bucket(BUCKET_NAME).objects.all()
for file in allFiles:
    print(file.key)
0 votes

The query can be answered with a demonstration of how to read data from S3 in Python. Here goes a small example:

import boto3
client = boto3.client('s3')      # low-level functional API
resource = boto3.resource('s3')  # high-level object-oriented API
my_bucket = resource.Bucket('my-bucket')

# Read a CSV object straight into pandas
import pandas as pd
obj = client.get_object(Bucket='my-bucket', Key='path/to/my/table.csv')
grid_sizes = pd.read_csv(obj['Body'])

# Or buffer the raw bytes yourself
from io import BytesIO
buf = BytesIO(obj['Body'].read())

# List the objects in the bucket
files = list(my_bucket.objects.all())
answered Jul 1, 2020 by Gitika
• 65,910 points

edited Dec 9, 2020 by MD
0 votes

Before moving forward with the query, please check that the particular file actually downloads from the S3 bucket. Here is how to download from an S3 bucket with boto3.

To set up and run this example, you must first:

  • Configure your AWS credentials, as described in Quickstart.
  • Create an S3 bucket and upload a file to the bucket.
  • Replace the BUCKET_NAME and KEY values in the code snippet with the name of your bucket and the key for the uploaded file.
answered Jul 1, 2020 by Kunal Chandan

edited Jul 2, 2020
0 votes

AWS Lambda provides a writable /tmp mount point (512 MB by default). You can use it to store downloaded S3 files or to create new ones.

answered Dec 2, 2020 by Cam Nhung Dinh
