Changelog
0.14.0 August 31, 2023
- Added #282: support for the SAHI inference technique with sv.InferenceSlicer.
>>> import cv2
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
...     result = model(image_slice)[0]
...     return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback=callback)
>>> detections = slicer(image)
- Added #297: Detections.from_deepsparse to enable seamless integration with the DeepSparse framework.
- Added #281: sv.Classifications.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports; see the sketch at the end of this release's notes.
Warning
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are now deprecated and will be removed with the supervision-0.16.0 release.
- Added #341: first supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision.
- Changed #296: sv.ClassificationDataset and sv.DetectionDataset now use image path (not image name) as dataset keys.
- Fixed #300: Detections.from_roboflow to filter out polygons with fewer than 3 points.
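A minimal sketch of sv.Classifications.from_ultralytics, assuming a YOLOv8 classification checkpoint; 'yolov8n-cls.pt' and SOURCE_IMAGE_PATH are illustrative placeholders:
>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO('yolov8n-cls.pt')  # illustrative classification checkpoint
>>> result = model(image)[0]
>>> classifications = sv.Classifications.from_ultralytics(result)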
0.13.0 August 8, 2023
- Added #236: support for mean average precision (mAP) for object detection models with sv.MeanAveragePrecision.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
...     dataset=dataset,
...     callback=callback
... )
>>> mean_average_precision.map50_95
0.433
- Added #256: support for ByteTrack for object tracking with sv.ByteTrack; see the sketch below.
- Added #222: sv.Detections.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Warning
sv.Detections.from_yolov8 is now deprecated and will be removed with the supervision-0.15.0 release.
- Added #191: sv.Detections.from_paddledet to enable seamless integration with the PaddleDetection framework.
- Added #245: support for loading PASCAL VOC segmentation datasets with sv.DetectionDataset.
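A minimal sketch of sv.ByteTrack paired with a YOLOv8 detector, assuming the tracker exposes an update_with_detections method; 'source_video.mp4' is a placeholder path:
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> for frame in sv.get_video_frames_generator(source_path='source_video.mp4'):
...     result = model(frame)[0]
...     detections = sv.Detections.from_ultralytics(result)
...     # assign tracker_id values to detections from the current frame
...     detections = byte_tracker.update_with_detections(detections)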
0.12.0 July 24, 2023
Warning
With the supervision-0.12.0 release, we are terminating official support for Python 3.7.
- Added #177: initial support for object detection model benchmarking with sv.ConfusionMatrix.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
...     dataset=dataset,
...     callback=callback
... )
>>> confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])
- Added #173: Detections.from_mmdetection to enable seamless integration with the MMDetection framework; see the sketch below.
- Added #130: ability to install the package in headless or desktop mode.
- Changed #180: packaging method from setup.py to pyproject.toml.
- Fixed #188: sv.DetectionDataset.from_coco failing when the dataset contains images without annotations.
- Fixed #226: sv.DetectionDataset.from_yolo failing to load background instances.
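A hedged sketch of Detections.from_mmdetection, assuming inference runs through mmdet.apis; CONFIG_PATH, WEIGHTS_PATH, and SOURCE_IMAGE_PATH are placeholders:
>>> import cv2
>>> import supervision as sv
>>> from mmdet.apis import init_detector, inference_detector
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = init_detector(CONFIG_PATH, WEIGHTS_PATH, device='cpu')
>>> result = inference_detector(model, image)
>>> detections = sv.Detections.from_mmdetection(result)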
0.11.1 June 29, 2023
- Fixed #165: as_folder_structure fails to save sv.ClassificationDataset when it is the result of inference.
0.11.0 June 28, 2023
- Added #150: ability to load and save sv.DetectionDataset in COCO format using the as_coco and from_coco methods.
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Added #158: ability to merge multiple sv.DetectionDataset objects together using the merge method.
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Added #162: additional start and end arguments to sv.get_video_frames_generator, allowing frames to be generated for only a selected part of the video; see the sketch below.
- Fixed #157: incorrect loading of YOLO dataset class names from data.yaml.
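A minimal sketch of the new start and end arguments, assuming they take frame indices; 'source_video.mp4' is a placeholder path:
>>> import supervision as sv
>>> for frame in sv.get_video_frames_generator(source_path='source_video.mp4', start=60, end=120):
...     pass  # process only the selected part of the video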
0.10.0 June 14, 2023
- Added #125: ability to load and save sv.ClassificationDataset in a folder structure format.
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Added #125: support for sv.ClassificationDataset.split, allowing an sv.ClassificationDataset to be divided into two parts; see the sketch below.
- Added #110: ability to extract masks from Roboflow API results using sv.Detections.from_roboflow.
- Added commit hash: Supervision Quickstart notebook where you can learn more about the Detection, Dataset and Video APIs.
- Changed #135: sv.get_video_frames_generator documentation to better describe actual behavior.
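A minimal sketch of sv.ClassificationDataset.split, assuming it takes the same split_ratio, random_state, and shuffle parameters as DetectionDataset.split (shown under 0.8.0 below); the dataset sizes are illustrative:
>>> import supervision as sv
>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_cs), len(test_cs)
(700, 300)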
0.9.0 June 7, 2023
- Added #118: ability to select sv.Detections by index, list of indices, or slice. Here is an example illustrating the new selection methods.
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
- Added #101: ability to extract masks from YOLOv8 results using sv.Detections.from_yolov8; see the sketch at the end of this release's notes for how to extract boolean masks from YOLOv8 model inference.
- Added #122: ability to crop images using sv.crop; the same sketch shows how to get a separate crop for each detection in sv.Detections.
- Added #120: ability to conveniently save multiple images into a directory using sv.ImageSink. Here is an example showing how to save every tenth video frame as a separate image.
>>> import supervision as sv
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
...     for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
...         sink.save_image(image=image)
- Fixed #106: inconvenient handling of sv.PolygonZone coordinates. Now sv.PolygonZone accepts coordinates in the form of [[x1, y1], [x2, y2], ...] that can be both integers and floats.
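A hedged sketch covering the #101 and #122 entries above, assuming a YOLOv8 segmentation checkpoint ('yolov8s-seg.pt' is illustrative) and that sv.crop accepts image and xyxy keyword arguments:
>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO('yolov8s-seg.pt')  # illustrative segmentation checkpoint
>>> result = model(image)[0]
>>> detections = sv.Detections.from_yolov8(result)
>>> detections.mask  # boolean masks, one per detection
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
...     for xyxy in detections.xyxy:
...         # save a separate crop for each detected bounding box
...         sink.save_image(image=sv.crop(image=image, xyxy=xyxy))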
0.8.0 May 17, 2023
- Added #100: support for dataset inheritance. The current Dataset was renamed to DetectionDataset. Now DetectionDataset inherits from BaseDataset. This change was made to enforce the future consistency of APIs across different types of computer vision datasets.
- Added #100: ability to save datasets in YOLO format using DetectionDataset.as_yolo; a hedged sketch appears at the end of this release's notes.
>>> import roboflow
>>> from roboflow import Roboflow
>>> import supervision as sv
>>> roboflow.login()
>>> rf = Roboflow()
>>> project = rf.workspace(WORKSPACE_ID).project(PROJECT_ID)
>>> dataset = project.version(PROJECT_VERSION).download("yolov5")
>>> ds = sv.DetectionDataset.from_yolo(
... images_directory_path=f"{dataset.location}/train/images",
... annotations_directory_path=f"{dataset.location}/train/labels",
... data_yaml_path=f"{dataset.location}/data.yaml"
... )
>>> ds.classes
['dog', 'person']
- Added #102: support for DetectionDataset.split, allowing a DetectionDataset to be divided into two parts.
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
- Changed #100: default value of the approximation_percentage parameter from 0.75 to 0.0 in DetectionDataset.as_yolo and DetectionDataset.as_pascal_voc.
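A hedged sketch of saving a dataset back to disk with DetectionDataset.as_yolo, assuming it mirrors the parameter names of from_yolo shown above; the paths are placeholders:
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_yolo(...)
>>> ds.as_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )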
0.7.0 May 11, 2023
- Added #91: Detections.from_yolo_nas to enable seamless integration with the YOLO-NAS model.
- Added #86: ability to load datasets in YOLO format using Dataset.from_yolo.
- Added #84: Detections.merge to merge multiple Detections objects together; see the sketch below.
- Fixed #81: LineZoneAnnotator.annotate does not return the annotated frame.
- Changed #44: LineZoneAnnotator.annotate to allow custom text for the in and out tags.
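A minimal sketch of Detections.merge; the Detections(...) constructors stand in for real inference results:
>>> import supervision as sv
>>> detections_1 = sv.Detections(...)
>>> detections_2 = sv.Detections(...)
>>> # combine both sets of detections into a single Detections object
>>> merged = sv.Detections.merge([detections_1, detections_2])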
0.6.0 April 19, 2023
- Added #71: initial Dataset support and ability to save Detections in Pascal VOC XML format.
- Added #71: new mask_to_polygons, filter_polygons_by_area, polygon_to_xyxy and approximate_polygon utilities; see the sketch below.
- Added #72: ability to load Pascal VOC XML object detection datasets as Dataset.
- Changed #70: order of Detections attributes to make it consistent with the order of objects in the __iter__ tuple.
- Changed #71: generate_2d_mask to polygon_to_mask.
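A hedged sketch of the new polygon utilities, assuming polygon_to_xyxy and mask_to_polygons take polygon and mask keyword arguments and that polygon_to_mask accepts a resolution_wh tuple:
>>> import numpy as np
>>> import supervision as sv
>>> polygon = np.array([[10, 10], [100, 10], [100, 100], [10, 100]])
>>> sv.polygon_to_xyxy(polygon=polygon)  # -> bounding box [10, 10, 100, 100]
>>> mask = sv.polygon_to_mask(polygon=polygon, resolution_wh=(200, 200))
>>> sv.mask_to_polygons(mask=mask)  # -> list of polygon vertex arrays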
0.5.2 April 13, 2023
- Fixed #63: LineZone.trigger function expects 4 values instead of 5.
0.5.1 April 12, 2023
- Fixed: Detections.__getitem__ method did not return the mask for the selected item.
- Fixed: Detections.area crashed for mask detections.
0.5.0 April 10, 2023
- Added #58: Detections.mask to enable segmentation support.
- Added #58: MaskAnnotator to allow easy Detections.mask annotation; see the sketch below.
- Added #58: Detections.from_sam to enable native Segment Anything Model (SAM) support.
- Changed #58: Detections.area behaviour to work not only with boxes but also with masks.
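A hedged sketch of MaskAnnotator, assuming an annotate(scene, detections) signature analogous to the other annotators and detections produced by a segmentation model:
>>> import cv2
>>> import supervision as sv
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> detections = sv.Detections(...)  # detections with a populated mask field
>>> mask_annotator = sv.MaskAnnotator()
>>> annotated_image = mask_annotator.annotate(scene=image.copy(), detections=detections)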
0.4.0 April 5, 2023
- Added #46: Detections.empty to allow easy creation of empty Detections objects.
- Added #56: Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results.
- Added #56: plot_images_grid to allow easy plotting of multiple images on a single plot; see the sketch below.
- Added #56: initial support for the Pascal VOC XML format with the detections_to_voc_xml method.
- Changed #56: show_frame_in_notebook refactored and renamed to plot_image.
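A hedged sketch of plot_images_grid, assuming images and grid_size parameters; IMAGE_PATHS is an illustrative placeholder:
>>> import cv2
>>> import supervision as sv
>>> images = [cv2.imread(path) for path in IMAGE_PATHS]
>>> # draw the loaded images on a 2x2 grid in a single figure
>>> sv.plot_images_grid(images=images, grid_size=(2, 2))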
0.3.2 March 23, 2023
- Changed #50: allow Detections.class_id to be None.
0.3.1 March 6, 2023
- Fixed #41: PolygonZone throws an exception when the object touches the bottom edge of the image.
- Fixed #42: Detections.with_nms method throws an exception when Detections is empty.
- Changed #36: Detections.with_nms to support both the class-agnostic and non-class-agnostic case.
0.3.0 March 6, 2023
- Changed: allow Detections.confidence to be None.
- Added: Detections.from_transformers and Detections.from_detectron2 to enable seamless integration with Transformers and Detectron2 models.
- Added: Detections.area to dynamically calculate bounding box area.
- Added: Detections.with_nms to filter out double detections with NMS. The initial implementation is class agnostic only; see the sketch below.
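A minimal sketch of Detections.with_nms, assuming a threshold keyword for the IoU cut-off:
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> # suppress overlapping detections above the given IoU threshold
>>> detections = detections.with_nms(threshold=0.5)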
0.2.0 February 2, 2023
- Added: advanced Detections filtering with pandas-like API.
- Added: Detections.from_yolov5 and Detections.from_yolov8 to enable seamless integration with YOLOv5 and YOLOv8 models.
0.1.0 January 19, 2023
Say hello to Supervision 👋