KIBSI PRODUCTS

Detect anything using our models or yours

With thousands of built-in detectors, Kibsi is ready out of the box, or bring your own computer vision models.

Gain contextual understanding of your video stream with thousands of built-in detectors and enhancers that locate, track, and relate objects as they move across a scene. Kibsi is designed to work with your existing cameras, all while respecting privacy.

Imagine a giant warehouse full of built-in detectors

Kibsi comes with built-in computer vision models for thousands of objects and classes, curated to provide state-of-the-art detections. Compose multiple detectors with simple logic to gain additional insights without training new models; a short sketch follows the list below.

  • People
  • Vehicles
  • Symbols & logos
  • Heavy machinery
  • Animals
  • Gauges, QR codes, etc. 
  • Common objects
  • and more!
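
To make "compose multiple detectors with simple logic" concrete, here is a minimal sketch in plain Python: combining a vehicle detection with a logo detection to flag branded vehicles. The labels, boxes, and containment rule are invented for illustration and are not Kibsi's API.

```python
# Toy example of composing two built-in detectors with simple logic: a vehicle
# detection that contains a logo detection is reported as a branded vehicle.
# Labels, boxes, and the containment rule are invented for the illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple  # (x1, y1, x2, y2) in pixels

def contains(outer, inner) -> bool:
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

detections = [
    Detection("vehicle", (50, 100, 500, 400)),
    Detection("logo",    (300, 150, 380, 210)),
    Detection("person",  (600, 120, 680, 380)),
]

vehicles = [d for d in detections if d.label == "vehicle"]
logos = [d for d in detections if d.label == "logo"]

# A logo box fully inside a vehicle box => a branded vehicle.
branded = [(v, l) for v in vehicles for l in logos if contains(v.box, l.box)]
for vehicle, logo in branded:
    print(f"branded vehicle at {vehicle.box} (logo at {logo.box})")
```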

Your custom models are first class, too!

Add custom detectors built on common architectures such as YOLOv5, ResNet, or Faster R-CNN, or create completely custom computer vision models in PyTorch or TensorFlow. Kibsi gives your custom models superpowers with automatic interaction detection and stateful tracking.
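
As a rough sketch of the bring-your-own-model idea, and assuming nothing about Kibsi's actual SDK, the example below wraps a stock torchvision Faster R-CNN behind a small detect() interface that returns plain detection records a platform could ingest.

```python
# Hypothetical sketch, not Kibsi's SDK: wrap a stock torchvision Faster R-CNN
# behind a minimal detect() interface that returns plain detection records.
from dataclasses import dataclass
from typing import List, Tuple

import torch
from PIL import Image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor


@dataclass
class Detection:
    label: int                               # COCO class index
    score: float                             # model confidence
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels


class CustomDetector:
    def __init__(self, score_threshold: float = 0.5):
        # Pretrained COCO weights (downloaded on first use); swap in your own
        # fine-tuned checkpoint here.
        self.model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
        self.score_threshold = score_threshold

    @torch.no_grad()
    def detect(self, image: Image.Image) -> List[Detection]:
        output = self.model([to_tensor(image)])[0]
        return [
            Detection(int(label), float(score), tuple(box.tolist()))
            for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
            if float(score) >= self.score_threshold
        ]


if __name__ == "__main__":
    frame = Image.open("frame.jpg").convert("RGB")   # any still from a camera feed
    for detection in CustomDetector().detect(frame):
        print(detection)
```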

Enhance detections with object attributes.

Run additional computer vision models on detected objects to add context and attributes, creating limitless combinations.
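
One way to picture the enhancer concept, again without assuming anything about Kibsi's internals: crop each detected object out of the frame and run a second model on the crop to attach an attribute. Here the "attribute" is just an ImageNet class from a stock ResNet-18, standing in for things like color, pose, or PPE.

```python
# Illustrative sketch of the "enhancer" idea: crop each detected object and run
# a second model on the crop to attach attributes. The stock classifier below
# is a placeholder, not one of Kibsi's built-in enhancers.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
attribute_model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def enhance(frame: Image.Image, box: tuple) -> str:
    """Crop the detected object and return a coarse attribute label for it."""
    crop = frame.crop(box)                               # box = (x1, y1, x2, y2)
    logits = attribute_model(preprocess(crop).unsqueeze(0))
    return weights.meta["categories"][int(logits.argmax())]

# Example usage with an upstream detection box (file name and box are made up):
# frame = Image.open("frame.jpg").convert("RGB")
# print(enhance(frame, (120, 40, 360, 420)))
```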

Robust, stateful object tracking.

Kibsi automatically tracks hundreds of detected objects as they move across the scene, enabling opportunities to gain insights over space and time.

  • Track the movements of people, vehicles, animals, and more
  • Calculate dwell and loiter times
  • Build heatmaps of movement
  • Enhance with attributes
  • Handle partial and full occlusion
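
As a rough illustration of the dwell-time idea, the sketch below accumulates time-in-zone per tracked ID from a handful of invented observations; Kibsi's no-code platform surfaces this kind of metric without writing code.

```python
# Illustrative only: compute dwell time per tracked object from (track_id,
# timestamp, in_zone) observations. The sample data is invented for the sketch.
from collections import defaultdict

observations = [
    # (track_id, timestamp in seconds, object currently inside the zone?)
    ("person-1", 0.0, True),
    ("person-1", 5.0, True),
    ("person-1", 10.0, False),
    ("person-2", 2.0, True),
    ("person-2", 12.0, True),
]

dwell = defaultdict(float)
last_seen = {}

for track_id, ts, in_zone in sorted(observations, key=lambda o: o[1]):
    if track_id in last_seen:
        prev_ts, prev_in_zone = last_seen[track_id]
        if prev_in_zone:                      # accrue time spent inside the zone
            dwell[track_id] += ts - prev_ts
    last_seen[track_id] = (ts, in_zone)

for track_id, seconds in dwell.items():
    print(f"{track_id} dwelled {seconds:.1f}s in the zone")
```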

Get started with Kibsi.

Contact Us

Understand any interaction

Kibsi understands interactions between all detected objects and categorizes the nature of each interaction (e.g., near, holding), resulting in a relational understanding and data model that resembles a database, but for the physical world.

  • Determine when people are holding, touching, or near an object
  • Categorize the nature of interactions between multiple people
  • Measure the size of groups, and the arrival and departure of people from a group
  • Determine direction and speed of travel
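
A toy version of the pairwise categorization described above, using only bounding boxes: overlapping boxes suggest touching or holding, while a short centre-to-centre distance suggests "near." The thresholds and labels are illustrative choices, not Kibsi's.

```python
# Toy pairwise interaction categorizer over bounding boxes (x1, y1, x2, y2).
# Thresholds and labels are illustrative; real systems tune these per camera.
import math
from typing import Optional

def overlaps(a, b) -> bool:
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def center_distance(a, b) -> float:
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return math.hypot(ax - bx, ay - by)

def categorize(a, b, near_px: float = 200.0) -> Optional[str]:
    if overlaps(a, b):
        return "touching/holding"
    if center_distance(a, b) <= near_px:
        return "near"
    return None

person = (100, 200, 180, 420)
box_of_parts = (170, 300, 240, 380)
forklift = (600, 150, 900, 420)

print(categorize(person, box_of_parts))  # -> touching/holding
print(categorize(person, forklift))      # -> None (too far apart)
```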

Any camera, anywhere

Kibsi is designed for reality and works with your existing camera installations. With multi-camera support, your applications aren’t restricted to a single field of view.

  • IP security cameras
  • Video management systems
  • Overhead and skewed camera angles
  • Low frame rates
  • Moderate lighting & resolution

Built for privacy.

Kibsi does not detect the identity or identifying features of people, simplifying compliance with privacy laws and policies. Additionally, video data is not retained by Kibsi unless explicitly enabled in response to a detected event. Kibsi also supports hybrid deployment workflows, so your video never leaves your premises.


Differentiators

Contextual understanding

Compose multiple detectors with simple logic to gain additional insights and add context, creating limitless combinations.

No-code platform built for all

A rich canvas to map detections into your business language. Our no-code, drag-and-drop interface allows anyone to build custom computer vision applications.

Point & click deployment

Computer vision outcomes require a lot of components, assembled in just the right way. Kibsi has you covered from development through production, with just a few clicks - no fragile processes to build & maintain.

Building blocks for any use case

Built-in computer vision models for thousands of objects and classes, curated to provide state-of-the-art detections. And of course, you can bring your own custom detectors built on common architectures.

An API for the physical world

Treat the real world as if it were a relational datastore. Understand the life cycle, long-term state, and interactions of detected objects. It’s like having an actual API that returns the state of the physical world.
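
One way to picture that claim, using plain SQLite as a stand-in for the real system: model detected objects and their interactions as rows, then answer questions with ordinary queries. The schema and sample data are invented for the illustration.

```python
# Stand-in sketch: detections and interactions as relational rows in SQLite,
# queried like any other database. Schema and sample rows are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE objects (
        id         TEXT PRIMARY KEY,
        class      TEXT NOT NULL,
        first_seen REAL NOT NULL,
        last_seen  REAL NOT NULL
    );
    CREATE TABLE interactions (
        subject_id  TEXT REFERENCES objects(id),
        object_id   TEXT REFERENCES objects(id),
        relation    TEXT NOT NULL,   -- e.g. 'near', 'holding'
        observed_at REAL NOT NULL
    );
""")
db.executemany("INSERT INTO objects VALUES (?, ?, ?, ?)", [
    ("person-1", "person", 0.0, 42.0),
    ("forklift-7", "forklift", 0.0, 42.0),
])
db.execute("INSERT INTO interactions VALUES (?, ?, ?, ?)",
           ("person-1", "forklift-7", "near", 37.5))

# "Which people have been near a forklift, and how long were they on scene?"
rows = db.execute("""
    SELECT o.id, o.last_seen - o.first_seen AS seconds_on_scene
    FROM interactions i
    JOIN objects o ON o.id = i.subject_id
    JOIN objects t ON t.id = i.object_id
    WHERE o.class = 'person' AND t.class = 'forklift' AND i.relation = 'near'
""").fetchall()
print(rows)  # -> [('person-1', 42.0)]
```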