Part 2: Build a Chalice application using Rekognition¶
For this part of the tutorial, we will begin writing the media query Chalice
application and integrate Rekognition into it. This initial version of the
application will accept the S3 bucket and key name of an image, call the
DetectLabels API on that stored image, and return the labels detected for
that image. So, assuming the image sample.jpg is stored in a bucket
some-bucket under the key sample.jpg, we will be able to invoke a Lambda
function that returns the labels Rekognition detected:

$ echo '{"Bucket": "some-bucket", "Key": "sample.jpg"}' | chalice invoke --name detect_labels_on_image
["Animal", "Canine", "Dog", "German Shepherd", "Mammal", "Pet", "Collie"]
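Before wiring Rekognition into Chalice, it helps to know the shape of what DetectLabels returns: a dict with a Labels list, where each entry carries at least a Name and a Confidence score. The application ultimately reduces that to a flat list of names. A minimal sketch, using an illustrative (not real) response:

```python
# Illustrative DetectLabels-style response; the values are made up,
# but the {'Labels': [{'Name': ..., 'Confidence': ...}]} shape follows
# the Rekognition API.
sample_response = {
    'Labels': [
        {'Name': 'Animal', 'Confidence': 98.1},
        {'Name': 'Dog', 'Confidence': 97.4},
        {'Name': 'Pet', 'Confidence': 95.2},
    ]
}


def extract_label_names(response):
    """Reduce a DetectLabels response to just the label names."""
    return [label['Name'] for label in response['Labels']]


print(extract_label_names(sample_response))
```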
For this section, we will work through the following steps to create this version of the application: create a new Chalice project, copy over boilerplate files, write a Lambda function for detecting labels, create an S3 bucket, and deploy the Chalice application.
Create a new Chalice project¶
Create the new Chalice project for the Media Query application.
Instructions¶
Create a new Chalice project called media-query with the new-project command:

$ chalice new-project media-query
Verification¶
To ensure that the project was created, list the contents of the newly
created media-query directory:

$ ls media-query
app.py requirements.txt

It should contain an app.py file and a requirements.txt file.
Copy over boilerplate files¶
Copy over starting files to facilitate development of the application.
Instructions¶
Copy over the starting point code for section 02-chalice-with-rekognition
into your media-query directory:

$ cp -r chalice-workshop/code/media-query/02-chalice-with-rekognition/. media-query/
Note
If you are ever stuck and want to skip to the beginning of a different part
of this tutorial, you can do so by running the same command as above, but
with the code directory name of the part you want to skip to. For example,
if you wanted to skip to the beginning of Part 5, you can run the following
command with media-query as the current working directory and be ready to
start Part 5:

media-query$ cp -r ../chalice-workshop/code/media-query/05-s3-delete-event/. ./
Verification¶
Ensure the structure of the media-query directory is the following:

$ tree -a media-query
media-query
├── .chalice
│   ├── config.json
│   └── policy-dev.json
├── .gitignore
├── app.py
├── chalicelib
│   ├── __init__.py
│   └── rekognition.py
├── recordresources.py
├── requirements.txt
└── resources.json
The new files will be used later in the tutorial, but here is a brief
overview of each:

- chalicelib: A directory for managing Python modules outside of app.py.
  It is common to put the lower-level logic in the chalicelib directory and
  keep the higher-level logic in the app.py file so it stays readable and
  small. You can read more about chalicelib in the Chalice documentation.
- chalicelib/rekognition.py: A utility module to further simplify boto3
  client calls to Amazon Rekognition.
- .chalice/config.json: Manages configuration of the Chalice application.
  You can read more about the configuration file in the Chalice
  documentation.
- .chalice/policy-dev.json: The IAM policy to apply to your Lambda function.
  This essentially manages the AWS permissions of your application.
- resources.json: A CloudFormation template with additional resources to
  deploy outside of the Chalice application.
- recordresources.py: Records resource values from the additional resources
  deployed to your CloudFormation stack and saves them as environment
  variables in your Chalice application.
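The actual chalicelib/rekognition.py ships with the boilerplate, but it is worth seeing the interface that app.py will rely on: a RekognitonClient class (the spelling matches the workshop module) wrapping a boto3 Rekognition client, with a get_image_labels method. A minimal sketch, run here against a hypothetical stub client so it works without AWS credentials:

```python
class RekognitonClient(object):
    """Sketch of the wrapper interface chalicelib/rekognition.py provides."""

    def __init__(self, boto3_client):
        self._boto3_client = boto3_client

    def get_image_labels(self, bucket, key):
        # Ask Rekognition for labels on an image stored in S3 and
        # return just the label names.
        response = self._boto3_client.detect_labels(
            Image={'S3Object': {'Bucket': bucket, 'Name': key}})
        return [label['Name'] for label in response['Labels']]


class _StubBoto3Client(object):
    """Hypothetical stand-in for boto3.client('rekognition')."""

    def detect_labels(self, Image):
        return {'Labels': [{'Name': 'Dog', 'Confidence': 99.0}]}


labels = RekognitonClient(_StubBoto3Client()).get_image_labels(
    bucket='some-bucket', key='sample.jpg')
print(labels)
```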
Write a Lambda function for detecting labels¶
Fill out the app.py file to write a Lambda function that detects labels
on an image stored in an S3 bucket.
Instructions¶
Move into the media-query directory:

$ cd media-query
Add boto3, the AWS SDK for Python, as a dependency in the requirements.txt
file:

boto3<1.8.0
Open the app.py file and delete all lines of code underneath the line
app = Chalice(app_name='media-query'). Your app.py file should only consist
of the following lines:

from chalice import Chalice

app = Chalice(app_name='media-query')
Import boto3 and the chalicelib.rekognition module in your app.py file:

import boto3
from chalice import Chalice

from chalicelib import rekognition
Add a helper function for instantiating a Rekognition client:

import boto3
from chalice import Chalice

from chalicelib import rekognition

app = Chalice(app_name='media-query')

_REKOGNITION_CLIENT = None


def get_rekognition_client():
    global _REKOGNITION_CLIENT
    if _REKOGNITION_CLIENT is None:
        _REKOGNITION_CLIENT = rekognition.RekognitonClient(
            boto3.client('rekognition'))
    return _REKOGNITION_CLIENT
Add a new function detect_labels_on_image decorated by the
app.lambda_function decorator. Have the function use a Rekognition client
to detect and return labels on an image stored in an S3 bucket:

import boto3
from chalice import Chalice

from chalicelib import rekognition

app = Chalice(app_name='media-query')

_REKOGNITION_CLIENT = None


def get_rekognition_client():
    global _REKOGNITION_CLIENT
    if _REKOGNITION_CLIENT is None:
        _REKOGNITION_CLIENT = rekognition.RekognitonClient(
            boto3.client('rekognition'))
    return _REKOGNITION_CLIENT


@app.lambda_function()
def detect_labels_on_image(event, context):
    bucket = event['Bucket']
    key = event['Key']
    return get_rekognition_client().get_image_labels(bucket=bucket, key=key)
Verification¶
Ensure the contents of the requirements.txt file are:

boto3<1.8.0
Ensure the contents of the app.py file are:

import boto3
from chalice import Chalice

from chalicelib import rekognition

app = Chalice(app_name='media-query')

_REKOGNITION_CLIENT = None


def get_rekognition_client():
    global _REKOGNITION_CLIENT
    if _REKOGNITION_CLIENT is None:
        _REKOGNITION_CLIENT = rekognition.RekognitonClient(
            boto3.client('rekognition'))
    return _REKOGNITION_CLIENT


@app.lambda_function()
def detect_labels_on_image(event, context):
    bucket = event['Bucket']
    key = event['Key']
    return get_rekognition_client().get_image_labels(bucket=bucket, key=key)
Create an S3 bucket¶
Create an S3 bucket for uploading images to use with the Chalice application.
Instructions¶
Use the AWS CLI and the resources.json CloudFormation template to deploy a
CloudFormation stack named media-query that contains an S3 bucket:

$ aws cloudformation deploy --template-file resources.json --stack-name media-query
Verification¶
Retrieve and store the name of the S3 bucket using the AWS CLI:
$ MEDIA_BUCKET_NAME=$(aws cloudformation describe-stacks --stack-name media-query --query "Stacks[0].Outputs[?OutputKey=='MediaBucketName'].OutputValue" --output text)
Ensure you can access the S3 bucket by listing its contents:
$ aws s3 ls $MEDIA_BUCKET_NAME
Note that the bucket should be empty.
Deploy the Chalice application¶
Deploy the Chalice application.
Instructions¶
Install the dependencies of the Chalice application:
$ pip install -r requirements.txt
Run chalice deploy to deploy the application:

$ chalice deploy
Creating deployment package.
Creating IAM role: media-query-dev-detect_labels_on_image
Creating lambda function: media-query-dev-detect_labels_on_image
Resources deployed:
  - Lambda ARN: arn:aws:lambda:us-west-2:123456789123:function:media-query-dev-detect_labels_on_image
Verification¶
Upload the sample workshop image to the S3 bucket:
$ aws s3 cp ../chalice-workshop/code/media-query/final/assets/sample.jpg s3://$MEDIA_BUCKET_NAME
Create a sample-event.json file to use with chalice invoke:

$ echo "{\"Bucket\": \"$MEDIA_BUCKET_NAME\", \"Key\": \"sample.jpg\"}" > sample-event.json
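If the nested shell escaping above proves error-prone on your platform, the same file can be generated with Python's json module, which handles the quoting for you. A small sketch (the bucket name here is a placeholder; substitute the value of $MEDIA_BUCKET_NAME):

```python
import json

# Hypothetical bucket name; use your actual $MEDIA_BUCKET_NAME value.
event = {'Bucket': 'some-bucket', 'Key': 'sample.jpg'}

# Write the event payload for `chalice invoke` as valid JSON.
with open('sample-event.json', 'w') as f:
    json.dump(event, f)

# Round-trip to confirm the file holds the expected JSON.
with open('sample-event.json') as f:
    print(json.load(f))
```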
Run chalice invoke on the detect_labels_on_image Lambda function:

$ chalice invoke --name detect_labels_on_image < sample-event.json
It should return the following labels in the output:
["Animal", "Canine", "Dog", "German Shepherd", "Mammal", "Pet", "Collie"]