Frigate, Home Assistant and AI

So if you've seen the first blog on Frigate NVR and Home Assistant, and you've followed along – you'll be in a pretty good spot. All your cameras are now streaming into Frigate on your Home Assistant hardware, you're getting clips and snapshots you can view, you've set up capture zones – you're feeling pretty smug. But… you want to take it to the next level: object detection, person detection and facial identification – bring on the AI!

So yes, I dropped the two-letter acronym, partly because this will no doubt be my most popular blog of the year, but also because Frigate, alongside third-party integrations, can identify trained images of people you know on your cameras. And because it's Home Assistant, we can create automations based on those actual known people. Cool, eh? Let's get started.

The Background

First up, we need to look at tools that integrate with Frigate to take the clips and snapshots seamlessly and give us a way to train a model to identify those people on your Frigate camera streams. Enter Doubletake, an awesome open-source project shared on GitHub by David Jakowenko (thanks David!) here, although sadly it is no longer in active development. The good news is that the community (thanks too to Sergey Krashevich) has created plenty of forks, so it won't die off.

Doubletake is pretty awesome – it combines a ridiculously easy-to-use UI for training images to recognise faces, native Home Assistant and Frigate integration, and support for a number of detectors.

What are detectors, you may ask? Essentially they take images and, if they detect a face, use that data to recognise the person's face against the images you've provided, returning a confidence level (as a percentage). There are a few popular detectors that are completely open-source (and free), such as Facebox, Compreface and CodeProject.ai.

Before we get to the installation, there are a few components that are critical here, and being a techy, I think it's important you understand how they all fit together.

Home Assistant: the base for all of this. It provides the platform everything else runs on: containers, applications, add-ons and the notification platform. It's the magic.

Frigate: This is the NVR, or network video recorder. It consolidates all your camera streams and does basic object detection using its own built-in AI. It uses MQTT to talk to Home Assistant, which generates the notifications and alerts for you, the Smart Home Guru!

MQTT: The messaging protocol which allows all the various components to talk to each other. Think of it as the Slack channel that all the bits of software use to chat.

Doubletake: This is the tool responsible for tying everything 'detection' together. It takes the images you want to train (i.e. the people you know) from Frigate and uses the detectors to learn and identify them.

Compreface/Facebox/CodeProject.ai: These are the detectors, essentially the face recognition engines that Doubletake trains and queries.

They say a picture paints a thousand words…
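Roughly, the pieces hang together like this (a simplified sketch of the components described above):

Camera streams -> Frigate (object detection, clips and snapshots)
Frigate -> MQTT (frigate/events) -> Doubletake
Doubletake -> Compreface / Facebox / CodeProject.ai (face recognition and training)
Doubletake -> MQTT (double-take/matches) -> Home Assistant (entities, notifications, automations)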

DoubleTake Installation

First we need to install Doubletake, and installing it via the Home Assistant Add-on store is the recommended way:

  1. Settings > Add-ons > Add-on Store
  2. Click 3 dots (top right hand corner) and add the repo URL for Doubletake (https://github.com/jakowenko/double-take-hassio-addons)
  3. Find the Doubletake entry in the Add-ons and click Install
  4. Leave the configuration tab as default and start the add-on

Next, we need to go into Doubletake (which you'll find either in your Add-ons section or on your left-hand sidebar) and configure it. Find the config button and let's get it talking to MQTT and Frigate.

DoubleTake Base Config

First up, we need to provide it with the details for our MQTT broker:

mqtt:
  # IP address (or hostname) and credentials of your MQTT broker (the values below are placeholders)
  host: mqtt_IP
  username: mqtt-user
  password: mqtt-password

Then we want to give it a few details on the topics (essentially the 'Slack channels' where the images and events get passed around from app to app):

  topics:
    # mqtt topic for frigate message subscription
    frigate: frigate/events
    #  mqtt topic for home assistant discovery subscription
    homeassistant: homeassistant
    # mqtt topic where matches are published by name
    matches: double-take/matches
    # mqtt topic where matches are published by camera name
    cameras: double-take/cameras
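As a side note, once everything is up and running you can sanity-check these topics from Home Assistant itself: the MQTT integration has a 'Listen to a topic' tool (Settings > Devices & Services > MQTT > Configure), and subscribing to double-take/# will show you everything Doubletake publishes.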

Then we want to tell it where to find Frigate, along with a couple of basic config items such as whether to update sub labels and which object types to update them on:

frigate:
  url: http://frigate_IP:5000
  update_sub_labels: true
  labels:
    - person

Detector Installation (Compreface)

At this stage, you've got a bit of a decision to make as to which detector you want to choose – these are essentially the recognition models and, honestly, I don't think there is any one that is perfect, or the de facto choice.

For now, I'm using Compreface by Exadel – it's open-source, completely free and installation is super easy, but if you choose to go down the route of Facebox or CodeProject.ai, the setup won't be much harder. Here are the instructions:

  1. Settings > Add-ons > Add-on Store
  2. Find the Compreface entry in the Add-ons and click Install
  3. Start the service, leaving the config as default – it will expose the service on port 8000
  4. Open up a web browser and browse to http://yourhomeassistantIP:8000 (Compreface serves its UI over plain HTTP by default)
  5. You’ll hit the compreface login screen, create a local account and note your credentials (just in case you ever need them in the future)
  6. In the Applications, click the ‘Create Application’, give it a name and click Create
  7. Add a service to the application, give it a name (I just called mine HA) and make sure the service is recognition
  8. It will generate you an API key which you can copy, and then use in your Doubletake config

Now go back into your Doubletake config, as we need to add some additional lines to tell Doubletake to use Compreface.

DoubleTake Detector Config

detectors:
  compreface:
    url: http://192.168.1.18:8000
    # recognition api key
    key: <<insert api key>>
    # number of seconds before the request times out and is aborted
    timeout: 15
    # minimum required confidence that a recognized face is actually a face
    # value is between 0.0 and 1.0
    det_prob_threshold: 0.8
    # require opencv to find a face before processing with detector
    opencv_face_required: false

Hopefully the above is pretty self-explanatory: you're telling Doubletake which detector to use (you can use more than one if you like), the URL is the Compreface URL you configured above, and the key is the API key from the recognition service you just created.

The threshold is something you need to play with; I've got mine set so that it has to be essentially 80% confident of a face match before it uses the person's name, rather than just 'person'.

OpenCV isn’t required with compreface as it already has a face check built in.
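For reference, pulling the MQTT, Frigate and detector sections together, the full Doubletake config ends up looking roughly like this (the IPs, credentials and API key are placeholders, and I've trimmed the comments):

mqtt:
  host: mqtt_IP
  username: mqtt-user
  password: mqtt-password
  topics:
    frigate: frigate/events
    homeassistant: homeassistant
    matches: double-take/matches
    cameras: double-take/cameras

frigate:
  url: http://frigate_IP:5000
  update_sub_labels: true
  labels:
    - person

detectors:
  compreface:
    url: http://compreface_IP:8000
    key: <<insert api key>>
    timeout: 15
    det_prob_threshold: 0.8
    opencv_face_required: false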

Being completely candid here, there are plenty more options I need/want to play with to make my face recognition as accurate as I possibly can. A few options worth looking at here:

recognize:
  # minimum face size to be recognized (pixels)
  min_face_size: 1000
  # threshold for face recognition confidence
  recognition_threshold: 0.8
  # time (in seconds) to wait before recognizing the same person again
  match_timeout: 60 
  # time (in seconds) to wait before re-identifying a person
  reidentification_interval: 60 
  # scale factor for the face template (values between 0.25 and 1.0)
  face_template_scale: 0.7
  # maximum number of face templates to keep in cache
  templates_cache_size: 1000 

Once you've finished making your config changes, just double-check (or take a doubletake 😉) that all the green lights are lit up next to each component in the product stack. It should look something like this

If you've got this far, you should be in a position to begin training your face(s). In Doubletake, click on the 'Train' button at the top of the screen.

Click on the empty drop-down below it, select 'add new' and type in your name or the name of one of the people you want to identify, whether that be your wife, your child, the postman or the Amazon delivery driver; whoever you like.

Now you've got a couple of options: you can either upload a photo of said person by clicking the upload button, or you can walk in front of your camera(s) and notice that it immediately begins detecting a person and a face it doesn't yet know. You can select each of those snapshots and ask it to train using those images rather than the photos you've uploaded.

Personally I've had more success with the latter method, purely because things such as camera resolution, background and lighting will be the same when identifying people live on the camera. You should then be able to go to DoubleTake > Matches and see each match against your trained people, along with how confident the platform is that it has correctly identified them.

Notifications

Now you've got things being matched correctly in Doubletake, we need to use the stack to alert you when someone is at your door or picked up on your camera. Thankfully, the Home Assistant community comes to the rescue once again! In a previous post I talked through the value of Blueprints, but to refresh: these are community-written automations that you can import into your Home Assistant, fill in the gaps (such as notification messages, etc.) and it'll just work.

Thankfully, due to the popularity of this stack, there's a rather popular blueprint for exactly this here. The developer provides handy links to import it straight into your Home Assistant, and the configuration is super easy.

I won't paste the whole configuration here (it would be a rather large and pointless image), but with a few entries in some fields you'll be up and running. For reference, I have populated the following fields:

Frigate Camera: the camera entity from Frigate (e.g. Front Doorbell)
Mobile Device: select your mobile/cellphone from the dropdown (the HA app must be installed)
Base URL: if you have external access to your HA installation, paste the URL here
MQTT Topic: the default of "frigate/events" should be prepopulated here
Notification Message: "A Person was detected at the Front Door"
Update Sub Label: enabled (allows it to change 'Person' to the name of the identified person)
Update Image: enabled
Live View Entity (iOS only): the camera entity from Frigate (e.g. Front Doorbell)
Zone Filter: enabled
Required Zones: select from the Frigate dropdown (e.g. zone_driveway)
Object Filter: person

Once you’ve got that blueprint saved as an automation then you should be all good to go.

The smart folk amongst us will also notice that every time you train Doubletake with a new person, a Home Assistant entity is created in the format double_take_name.

There's plenty more that can be done here, such as identifying vehicles (using licence plates) or creating automations based on the individuals that are correctly identified using the Doubletake entities, but I'm super keen to see what automations you've got working, or to hear from anybody who's experimented with some of the other detectors.
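As a very rough example of the automation idea, here's a minimal sketch of an automation keyed to a specific trained person via the double-take/matches topic from the config above. The name ('john'), the notify service and the message are placeholders, so check the topics and entities Doubletake has actually created on your install before borrowing this:

# Minimal sketch: notify when a specific trained face is matched
# Assumes a trained person called "john" and that matches are published
# per name under the double-take/matches topic configured earlier
alias: Notify when John is recognised
trigger:
  - platform: mqtt
    topic: double-take/matches/john
action:
  - service: notify.mobile_app_my_phone   # swap for your own notify service
    data:
      message: "John has just been recognised on one of the cameras"
mode: single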
