Frigate is an open-source network video recorder (NVR) that uses artificial intelligence, specifically neural network object detection, to provide real-time alerts for your security cameras. I have happily been running Frigate for over two years as my NVR. My setup, which I will detail in an upcoming article, consists of 14 cameras that provide real-time alerts for people and cars, with all video recorded locally on my home server. My favorite feature of Frigate is that it runs completely locally, with no required dependencies on the Internet. This is a huge plus compared to turn-key commercial systems such as Amazon Ring, Eufy, or Arlo, which rely on an Internet connection for recording and many other features such as notifications. While the turn-key systems have a lower barrier to entry, Frigate provides the flexibility and security that make it the perfect solution for home security and automation hobbyists.
In this post, I will be discussing my experience so far with the beta of Frigate version 13. Version 13 is by far the most exciting release yet, with the introduction of custom object detection models through a paid add-on service called Frigate+. Nothing stopped anyone from developing custom models for Frigate before version 13; however, the process is complex and time-consuming. Frigate+ aims to provide a simple and accurate service that makes custom models a reality for everyone.
Along with this review, I also had the pleasure of interviewing Blake Blackshear, the author of Frigate. I thought this would be a great time to talk with Blake to get a behind-the-scenes look at Frigate's history and where it's going in the future. Be sure to check out the interview below:
Web GUI Improvements
Frigate now saves metadata for key moments in an event, such as when the tracked object is detected, enters or exits a zone, becomes active or stationary, and leaves. This metadata is overlaid on top of recordings when viewed from the Events page. I find myself using these buttons often since it's a fast way to seek to an exact moment in an event rather than manually scrubbing through a video. I've used my Microsoft Paint skills to map out what the metadata buttons do on the Frigate Events page.
In this example, the object was stationary for the configured amount of time, so the stationary button appeared. If you have multiple zones configured, you may see multiple zone icons appear for an event, depending on whether the object passes through any of the zones.

While this new metadata feature is highly applauded, there are some issues that may distract you. The bounding boxes drawn currently have a hard time mapping onto the object they are meant to highlight. Frigate doesn't shy away from this issue, explicitly stating as much in the official documentation and on the Frigate Events page. The reason for the issue is that the overlay is presented on the recording stream while the bounding box data comes from the detect stream. If you're a Frigate user, you know it's best practice to use separate camera video streams, each with a different resolution, for recording and detecting objects. This reduces the overhead of reading a high-resolution camera stream for object detection.
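For anyone setting this up, here's a minimal sketch of that best practice in a Frigate config; the camera name, credentials, and stream URLs are placeholders for your own:

```yaml
cameras:
  front_yard:                # placeholder camera name
    ffmpeg:
      inputs:
        # high-resolution main stream, used only for recordings
        - path: rtsp://user:pass@camera-ip:554/stream1
          roles:
            - record
        # low-resolution substream, used only for object detection
        - path: rtsp://user:pass@camera-ip:554/stream2
          roles:
            - detect
    detect:
      width: 1280            # match the substream's resolution
      height: 720
```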
Frigate does provide a setting, "annotation_offset", that can be set to try to compensate for the differences. In my configuration, I set the value to -10. While it helped get the bounding boxes aligned more often, it's not consistent. Where this bounding box overlay issue becomes irritating is when the object goes off camera and the bounding box is highlighting nothing. Normally, the object will be very close to the bounding box, as in the image below, so it's not that bad.
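If you want to experiment with the same compensation, my understanding is that the option lives under each camera's detect section; here it is with the -10 value I landed on (tune it for your own streams):

```yaml
cameras:
  front_yard:                # placeholder camera name
    detect:
      annotation_offset: -10   # nudges the overlay to better line up with the recording
```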

Another handy feature is that the size of a detected object is also saved in the metadata. This allows you to use the size data to limit false positives in Frigate by configuring minimum and maximum sizes for objects. If you know that a person can't be smaller than 50,000 pixels in a specific zone, you can stop a smaller object that gets detected as a human from registering. I've configured these values for a few of my cameras that have consistently generated false positives. I really should do all of them, but since the others have never produced false positives, I haven't gotten around to it.
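As a sketch, those size limits live under each camera's object filters. The 50,000-pixel minimum mirrors the person example above; the max value and camera name are hypothetical:

```yaml
cameras:
  front_yard:                # placeholder camera name
    objects:
      filters:
        person:
          min_area: 50000    # ignore "person" boxes smaller than 50,000 px
          max_area: 500000   # hypothetical cap; tune per camera and zone
```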
In previous Frigate versions, the size of a detected object was never saved. To figure out the size of objects as they appeared in your video feed, you had to record your screen while watching Frigate's live view in debug mode, which displays the bounding box with size data. The more elegant way to view object size data in the past was to play back the recording from an mp4 file and view it through the debug feed. If you've never played back a saved video recording in Frigate for debugging purposes, it's something cool to check out.

Frigate+, the Custom Model Training Service
Not to be confused with the multitude of overpriced video streaming services, Frigate+ is a paid service that offers the ability to train your own custom object detection models. If you aren't familiar with training neural net models, it's a very compute-intensive process that doesn't come cheap. It often involves using graphics processing units (GPUs) to train detection models on thousands of images. The most common complaint on Frigate discussion boards is the accuracy of the model used for object detection. Since its inception, Frigate's object detection model has been based on the MobileDet-SSD detection framework, trained on the COCO dataset. That image set is more suited to everyday photos than footage from security cameras. Frigate+ uses images submitted by its entire user base to compile a better-suited base model. From there, the base model is further trained on images captured by individual users. Everyone will technically have their own unique object detection model with a common underlying foundational model.
Frigate+ costs $50/year and allows you to train a custom model up to 12 times. There will be the ability to buy additional training credits, but that feature is not available at the time of writing. After the year expires, if you decide not to renew, you won't lose access to your models. You can also load the models on as many Frigate instances as you wish. From a cost-benefit perspective, it's a no-brainer for every Frigate user. I personally hate monthly fees, and I hate when software stops working the moment the expiration date hits. Think of this as more of a support plan: your custom models will still work, but they can continue to improve if you keep subscribing.
Verifying & Annotating Screenshots for Training
Training a neural net involves some level of manual human interaction to validate that the input is accurate. Before Frigate+ will allow screenshots to be used for training, they must be manually verified by logging into the Frigate+ website. When a screenshot is uploaded to Frigate+, the detected object is automatically highlighted with a bounding box. This box almost always has to be resized a bit, either to make it fit the object tightly or to expand it if part of the object falls outside. The Frigate documentation recommends these steps so the model can properly learn the size of various objects. It's also vital that you submit both true positive and false positive images to Frigate+ for optimal training. I followed this recommendation and lowered my detection percentages for a while to capture false positives to submit for training.
When I uploaded my first batch of screenshots, I didn't find the process fun at all. The word annotating itself just sounds boring, and the process is very repetitive. However, after I downloaded my first custom model from Frigate+ and saw the huge benefits from training, I started to get excited about uploading as many screenshots as I could.
Another manual step that comes with most screenshots is annotating other objects appearing in the shot. The camera in my front yard covers the street and my neighbor's driveway, which always has a car parked in it. So with every image submitted for my front yard, I'm always manually drawing a bounding box around his car. I wish he got out more often.
The least fun part of annotating screenshots is when the object is partly hidden by something else or out of frame. Take the case of my dog and the entryway pillar. Frigate's documentation says to draw the bounding box to where you believe the object extends, even if it's hidden. While this is a simple example, there are others where the object is larger and more of it is hidden. The reason for this requirement is that the model needs to understand the true size of the object even when it's not fully visible. The video below shows the process of resizing the bounding box and annotating other objects in the screenshot.
The Frigate+ Web Interface
Frigate+ provides a simple and easy-to-use website that currently has four pages. The first is a high-level dashboard that provides an overview of the total number of images submitted for training your custom detection model.
To get your screenshots to Frigate+ for training, you have two options: sending them from your local Frigate deployment or manually uploading them via the website. The option I recommend, and the one the vast majority of people will use, is the "Send to Frigate+" button on your local Frigate install. While the "Send to Frigate+" button has been available for a while, I'm embarrassed to say I never sent any images to the platform until the custom model option was available. As I suspect many others did, once the custom models were announced to beta users, I excitedly started uploading screenshots, dreaming they would lead to an end of false positives.
When Frigate+ was first announced, my privacy-centric inclination was that I was never going to submit screenshots. One of the main reasons I used Frigate was so my security footage and images were never stored anywhere besides my home server. When Frigate+ was announced, it wasn't known that to get access to the custom models, you would be required to upload at least 10 screenshots. However, I couldn't resist trying it after reading some initial success stories in the Frigate GitHub discussions.
I totally get that some won't want to use Frigate+ since it requires sharing security camera screenshots. Whether this changes in the future will be interesting to see. However, I assume the main reason for the 10-image requirement is so that Frigate can continue to improve the foundation model that the custom models are based on. If you are an absolute privacy zealot, Frigate+ isn't for you. However, I think even many of us who have our home networks segregated into VLANs, use MAC address ACLs on switch ports, and perform outbound content filtering are going to feel safe uploading screenshots that we explicitly choose to send for training. I think that's where the real security differentiator lies: we still have control. Don't want to send screenshots of your children, or maybe your unlicensed honey stand at the edge of your driveway? Since the upload process is manual, you have the control to decide. To me, this manual approach is much more privacy-focused than storing all video recordings in a cloud that may contain content you didn't want exposed.
While I mentioned earlier that Frigate has no dependencies on the Internet, using Frigate+ does require Internet access to download the custom models. There is currently no way around this dependency. Uploading images to Frigate+ to further improve your custom model technically doesn't require your Frigate server to have Internet access, since you can upload images manually through the website. However, I don't recommend that method since it's going to be a total pain if you are uploading hundreds of screenshots.
The Custom Models
My initial impressions of the Frigate+ custom models are positive; however, it does take time and meticulous annotating to ensure your custom model performs accurately. The custom model pushes the accuracy percentage of most detections above 95%, with some detections hitting 98-100%. Previously, with the original model that Frigate has used since its inception, using percentages over 90% would commonly lead to objects never being detected. As you can see from the image below, all of the detections are over 96%. You will need to adjust your Frigate config accordingly or else you will get a ton of false positives!
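For reference, pointing Frigate at a Frigate+ model is a small config change; the model ID below is a placeholder for the one shown in your Frigate+ account:

```yaml
model:
  path: plus://your_model_id   # placeholder Frigate+ model ID
```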


Even with humans and other objects like cars being detected at very high percentages, that didn't stop false positives from also occurring at those percentages. Even at 95%, I was continuing to get some false positives, so I raised my percentages even higher, to 90% and 97% for the min_score and threshold settings. If you've used Frigate as long as I have, using percentages this high is a bit scary since it feels like you might miss detections. The fact is that if you don't annotate your images accurately and set the detection score values as high as mine, you definitely will miss detections. On one version of my custom models, I noticed no detections occurring on my garage camera. I reverted back to the first Frigate+ model I generated, and all of a sudden the detections were occurring again. Looking back, I wasn't annotating the images as the documentation stated.
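For those following along in their own config, those two settings are expressed as decimals under the object filters; here's a sketch with my values, using person as the example object:

```yaml
objects:
  filters:
    person:
      min_score: 0.90   # minimum score for a detection to start being tracked
      threshold: 0.97   # computed score needed to count as a true positive
```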
To have a control in place to validate that detections aren't being missed, I have been running a separate instance of Frigate that is not using the custom Frigate+ model. It's still running the same Frigate 13 Beta 5 release; however, it's leveraging an external object detector, CodeProject.AI. I'm running the IPCam YOLO v5.6 models on CodeProject.AI, which require a GPU for CUDA processing. This method is heavily compute-intensive compared to the TensorFlow Lite based custom models running on my Google Coral. This object detector has been the most accurate I've ever used, even better than the custom Frigate+ models. I'm assuming that's mostly because YOLO is more accurate than MobileDet-SSD, not that the Frigate+ models are inferior.
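If you want to replicate this control setup, my understanding is that Frigate's Deepstack-compatible detector plugin also works with CodeProject.AI Server; here's a minimal sketch, with the host and port as placeholders for your own install:

```yaml
detectors:
  codeproject:               # arbitrary detector name
    type: deepstack          # Deepstack-compatible API, works with CodeProject.AI Server
    api_url: http://192.168.1.50:32168/v1/vision/detection   # placeholder host/port
    api_timeout: 0.1         # seconds to wait for a response
```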
As seen in the image below, the model isn't perfect even after training it with 600+ images. While I had my min_score in the high 80s rather than the 90s, setting a minimum size for a human on this camera would most likely have stopped this detection from occurring. Nevertheless, the model still thinks there is a 98% chance my cat's bed, seen through the window, is a human. The detection occurred when I turned on the lights in my room, triggering a detection, even though neither I nor the cat ever appeared in the frame. This shows it's still very important to tweak the min/max sizes for objects being detected on each camera to limit false positives from occurring.
While using Frigate+'s custom models is a huge step forward, Frigate still uses SSDLite MobileDet as the base image detector. MobileDet-SSD is one of many detectors that can be used for neural net image detection. It's popular since it's fast, running comfortably on devices like the Google Coral TPU and handling 60+ FPS. While fast, it is not as accurate as other image detectors such as YOLO. I recommend checking out "A Simple Guide to YOLO and SSD" to learn more about the differences.
Even after four rounds of training on over 600 images, the custom Frigate+ models were still producing false positives, albeit at a much lower rate than during the initial training. In the instance below, I purposely decreased my confidence threshold to 90% just to see what the Frigate+ model would detect. The Frigate development team actually recommends reducing your detection percentages for a while to capture false positives for training. As seen in the screenshot, there is still room for improvement in the custom model. When the detection occurred, I didn't have a minimum size set for humans on the camera. That configuration would most likely have prevented the detection from occurring.

While setting a minimum size for objects provides a huge benefit, it won't always save you when using a non-trained or less accurate model. There have been plenty of occasions when palm tree shadows or other plants swaying in the wind triggered a very high human detection score, and the detection size could plausibly have been that of a human. I dream of the day when all the tricks Frigate has up its sleeve to make up for the MobileDet-SSD framework aren't needed.
Final Thoughts
As much as I loved Frigate before, version 13 brings it to the next level. I'm actively continuing to submit more and more screenshots to improve my custom model, in the hope that my palm tree shadow false positives will become a relic of the past. Overall, I highly recommend everyone sign up for Frigate+ once it becomes generally available.
In my opinion, the true pinnacle of detection accuracy will come if YOLO-based custom models are made available. This would provide an alternative for Frigate users who want to push the bar higher and transcend the limitations of the Google Coral TPU and the MobileDet-SSD detection framework. While using the Google Coral TPU for object detection is extremely efficient and will most likely remain the de facto standard for most, there are more accurate alternatives. During my testing, the YOLO v5.6 based detector included in CodeProject.AI has proven to produce fewer false positives. While more accurate, this solution is much more costly, both financially and computationally, since it requires an NVIDIA GPU. I have 7 more rounds of training credits included in my Frigate+ package, so who knows, maybe after those additional rounds I'll be singing a different tune about YOLO models. Only time will tell.