AWS Rekognition and Raspberry Pi working together

I’m going to take the work I did from my first post, setting up a Raspberry Pi, and the second post, image detection in AWS Rekognition, and put it all together.

The first step is to move the code from ImageDectect.py into a Detector class. I ended up breaking it into three functions, with the setup going into the __init__ function, which looks something like this.

    def __init__(self):
        # 4 = the pin on the Raspberry Pi that the MotionSensor is connected to
        self.pir = MotionSensor(4, threshold=0.5)
        self.camera = PiCamera()
        self.source_photo = 'test.jpg'
        # Load the AWS access keys from the credentials CSV
        with open('new_user_credentials.csv', 'r') as csvfile:
            csvreader = csv.DictReader(csvfile)
            for row in csvreader:
                self.access_key_id = row['Access key ID']
                self.secret_key = row['Secret access key']

    def aws_rekognition_image(self, photo):
        # Send the image bytes to Rekognition and return the detected labels
        client = boto3.client('rekognition',
                              aws_access_key_id=self.access_key_id,
                              aws_secret_access_key=self.secret_key,
                              region_name='us-west-2')
        return client.detect_labels(Image={'Bytes': photo})
    
    def covert_img_to_bytes(self):
        # Read the saved photo back in as raw bytes for the Rekognition call
        with open(self.source_photo, 'rb') as photo:
            return photo.read()

    def print_results(self, results):
        # Print each label Rekognition found, with its confidence score
        for each in results['Labels']:
            print(each['Name'] + ": " + str(each['Confidence']))
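
For reference, these snippets assume imports along these lines at the top of detector.py (a sketch inferred from the modules used in the code above and below):

    import csv
    from time import sleep

    import boto3
    from gpiozero import MotionSensor
    from picamera import PiCamera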

Next we need to add a function to take a picture instead of a video. This is just a simple function that starts the camera, stores the image, and closes the camera.

    def take_picture(self):
        self.camera.resolution = (1920, 1080)
        self.camera.rotation = 180
        self.camera.start_preview()
        # Give the camera a couple of seconds to adjust to the light levels
        sleep(2)
        self.camera.capture(self.source_photo)
        self.camera.stop_preview()

Finally, we need to update the start function to do all the steps:

  • Check for motion
  • Take a picture
  • Wait for motion to end
  • Convert the image to bytes
  • Send the bytes to AWS
  • Print the results

The start code is pretty simple, as it calls all the functions in order (there is currently no loop, so the program ends after it runs once).

    def start(self):
        self.wait_for_motion()
        self.take_picture()
        self.wait_for_no_motion()
        photo = self.covert_img_to_bytes()
        results = self.aws_rekognition_image(photo)
        self.print_results(results)
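
The wait_for_motion and wait_for_no_motion helpers aren’t shown above. A minimal sketch, assuming they simply wrap gpiozero’s MotionSensor methods of the same names and print the status lines seen in the output below:

    def wait_for_motion(self):
        # Block until the PIR sensor reports motion
        self.pir.wait_for_motion()
        print('Motion detect!')

    def wait_for_no_motion(self):
        # Block until the PIR sensor reports the motion has stopped
        self.pir.wait_for_no_motion()
        print('No Motion')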

The end result is the following. It took the picture below of my living room and labeled what it saw, with a percent confidence for each result. I was hoping it would see the water bottle I put directly in the middle of the shot, but it did not. The rest, though, was pretty accurate.

    >>> %Run detector.py
    Motion detect!
    No Motion
    Furniture: 99.96051025390625
    Couch: 99.90579986572266
    Chair: 99.3740234375
    Living Room: 98.54119873046875
    Room: 98.54119873046875
    Indoors: 98.54119873046875
    Interior Design: 97.71133422851562
    Shelf: 96.75226593017578
    Cushion: 93.28191375732422
    Screen: 90.99018859863281
    Electronics: 90.99018859863281
    Monitor: 90.56035614013672
    Display: 90.56035614013672
    Bookcase: 78.97013092041016
    LCD Screen: 70.0054702758789
    Flooring: 65.52447509765625
    Entertainment Center: 64.22875213623047
    TV: 63.16238021850586
    Television: 63.16238021850586
    Wood: 59.266143798828125
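
To produce the run above, detector.py needs an entry point that creates the class and calls start. A minimal sketch, assuming the class is named Detector:

    if __name__ == '__main__':
        detector = Detector()
        detector.start()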

You can view the entire source code for the above here: https://github.com/carchi8py/Raspberry-pi-detector/tree/0.3

For part 4, I’m going to see if we can get the bounding box for each item AWS returns to appear on the image.
