Changelog¶
0.26.1 Jul 22, 2025¶
- Fixed #1894: Error in `sv.MeanAveragePrecision` where the area used for size-specific evaluation (small / medium / large) was always zero unless explicitly provided in `sv.Detections.data`.
- Fixed #1895: `ID=0` bug in `sv.MeanAveragePrecision` where objects were getting `0.0` mAP despite perfect IoU matches, due to a bug in annotation ID assignment.
- Fixed #1898: Issue where `sv.MeanAveragePrecision` could return negative values when certain object size categories have no data.
- Fixed #1901: `match_metric` support for `sv.Detections.with_nms`.
- Fixed #1906: `border_thickness` parameter usage for `sv.PercentageBarAnnotator`.
0.26.0 Jul 16, 2025¶
Removed
supervision-0.26.0 drops Python 3.8 support and upgrades the codebase to Python 3.9 syntax.
Tip
Supervision’s documentation theme now has a fresh look that is consistent with the documentation of all Roboflow open-source projects. (#1858)
- Added #1774: Support for the IOS (Intersection over Smallest) overlap metric, which measures how much of the smaller object is covered by the larger one, in `sv.Detections.with_nms`, `sv.Detections.with_nmm`, `sv.box_iou_batch`, and `sv.mask_iou_batch`.

import numpy as np
import supervision as sv

boxes_true = np.array([
    [100, 100, 200, 200],
    [300, 300, 400, 400]
])
boxes_detection = np.array([
    [150, 150, 250, 250],
    [320, 320, 420, 420]
])

sv.box_iou_batch(
    boxes_true=boxes_true,
    boxes_detection=boxes_detection,
    overlap_metric=sv.OverlapMetric.IOU
)
# array([[0.14285714, 0.        ],
#        [0.        , 0.47058824]])

sv.box_iou_batch(
    boxes_true=boxes_true,
    boxes_detection=boxes_detection,
    overlap_metric=sv.OverlapMetric.IOS
)
# array([[0.25, 0.  ],
#        [0.  , 0.64]])
- Added #1874: `sv.box_iou`, which efficiently computes the Intersection over Union (IoU) between two individual bounding boxes.
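A minimal sketch of the new function, assuming it mirrors `sv.box_iou_batch`'s keyword names and takes two single boxes in `(x_min, y_min, x_max, y_max)` format:

import numpy as np
import supervision as sv

box_true = np.array([100, 100, 200, 200])
box_detection = np.array([150, 150, 250, 250])

# Intersection is 50x50 = 2500, union is 17500, so IoU ~= 0.143.
iou = sv.box_iou(box_true=box_true, box_detection=box_detection)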
- Added #1816: Support for frame limits and a progress bar in `sv.process_video`.
- Added #1788: Support for creating `sv.KeyPoints` objects from ViTPose and ViTPose++ inference results via `sv.KeyPoints.from_transformers`.
- Added #1823: `sv.xyxy_to_xcycarh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to the measurement-space format `(center x, center y, aspect ratio, height)`, where the aspect ratio is `width / height`.
- Added #1788: `sv.xyxy_to_xywh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to `(x, y, width, height)` format.
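A quick sketch of both converters on a single box, assuming each accepts an `(N, 4)` array:

import numpy as np
import supervision as sv

xyxy = np.array([[10, 20, 50, 100]])  # (x_min, y_min, x_max, y_max)

sv.xyxy_to_xywh(xyxy)
# [[10, 20, 40, 80]]  -> (x, y, width, height)

sv.xyxy_to_xcycarh(xyxy)
# [[30, 60, 0.5, 80]]  -> (center x, center y, aspect ratio, height)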
- Changed #1820: `sv.LabelAnnotator` now supports the `smart_position` parameter to automatically keep labels within frame boundaries, and the `max_line_length` parameter to control text wrapping for long or multi-line labels.
- Changed #1825: `sv.LabelAnnotator` now supports non-string labels.
- Changed #1792: `sv.Detections.from_vlm` now supports parsing bounding boxes and segmentation masks from responses generated by Google Gemini models.

import supervision as sv

gemini_response_text = """```json
[
  {"box_2d": [543, 40, 728, 200], "label": "cat", "id": 1},
  {"box_2d": [653, 352, 820, 522], "label": "dog", "id": 2}
]
```"""

detections = sv.Detections.from_vlm(
    sv.VLM.GOOGLE_GEMINI_2_5,
    gemini_response_text,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog'],
)
detections.xyxy
# array([[543., 40., 728., 200.], [653., 352., 820., 522.]])
detections.data
# {'class_name': array(['cat', 'dog'], dtype='<U26')}
detections.class_id
# array([0, 1])
- Changed #1878: `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Moondream.

import supervision as sv

moondream_result = {
    'objects': [
        {
            'x_min': 0.5704046934843063,
            'y_min': 0.20069346576929092,
            'x_max': 0.7049859315156937,
            'y_max': 0.3012596592307091
        },
        {
            'x_min': 0.6210969910025597,
            'y_min': 0.3300672620534897,
            'x_max': 0.8417936339974403,
            'y_max': 0.4961046129465103
        }
    ]
}

detections = sv.Detections.from_vlm(
    sv.VLM.MOONDREAM,
    moondream_result,
    resolution_wh=(3072, 4080),
)
detections.xyxy
# array([[1752.28, 818.82, 2165.72, 1229.14],
#        [1908.01, 1346.67, 2585.99, 2024.11]])
- Changed #1709: `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Qwen2.5-VL.

import supervision as sv

qwen_2_5_vl_result = """```json
[
  {"bbox_2d": [139, 768, 315, 954], "label": "cat"},
  {"bbox_2d": [366, 679, 536, 849], "label": "dog"}
]
```"""

detections = sv.Detections.from_vlm(
    sv.VLM.QWEN_2_5_VL,
    qwen_2_5_vl_result,
    input_wh=(1000, 1000),
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog'],
)
detections.xyxy
# array([[139., 768., 315., 954.], [366., 679., 536., 849.]])
detections.data
# {'class_name': array(['cat', 'dog'], dtype='<U10')}
detections.class_id
# array([0, 1])
- Changed #1786: Significantly improved the speed of HSV color mapping in `sv.HeatMapAnnotator`, achieving approximately 28x faster performance on 1920x1080 frames.
- Fixed #1834: Supervision's `sv.MeanAveragePrecision` is now fully aligned with pycocotools, the official COCO evaluation tool, ensuring accurate and standardized metrics. This update enabled us to launch a new version of the Computer Vision Model Leaderboard.

import supervision as sv
from supervision.metrics import MeanAveragePrecision

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAveragePrecision()
map_metric.update(predictions, targets).compute()
# Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.464
# Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.637
# Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.203
# Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.284
# Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.497
# Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.629
- Fixed #1767: Fixed losing `sv.Detections.data` when filtering detections.
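For context, a small sketch of the kind of filtering that previously dropped `data`; entries in `data` are expected to be carried along with the kept rows:

import numpy as np
import supervision as sv

detections = sv.Detections(
    xyxy=np.array([[0, 0, 10, 10], [20, 20, 40, 40]], dtype=float),
    confidence=np.array([0.3, 0.9]),
    class_id=np.array([0, 1]),
    data={"class_name": np.array(["cat", "dog"])},
)

# Boolean-mask filtering keeps the matching `data` entries as well.
filtered = detections[detections.confidence > 0.5]
filtered.data
# {'class_name': array(['dog'], dtype='<U3')}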
0.25.0 Nov 12, 2024¶
- No removals or deprecations in this release!
- Essential update to the `LineZone`: when computing line crossings, detections that jitter might be counted twice (or more). This can now be solved with the `minimum_crossing_threshold` argument. If you set it to `2` or more, extra frames will be used to confirm the crossing, improving the accuracy significantly. (#1540)
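A minimal sketch of the new argument (the parameter name comes from this release note; the rest is a standard `LineZone` setup):

import supervision as sv

# Require extra confirming frames before a crossing is counted,
# filtering out detections that jitter around the line.
line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(640, 100),
    minimum_crossing_threshold=2,
)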
- It is now possible to track objects detected as `KeyPoints`. See the complete step-by-step guide in the Object Tracking Guide. (#1658)
import numpy as np
import supervision as sv
from ultralytics import YOLO
model = YOLO("yolov8m-pose.pt")
tracker = sv.ByteTrack()
trace_annotator = sv.TraceAnnotator()
def callback(frame: np.ndarray, _: int) -> np.ndarray:
results = model(frame)[0]
key_points = sv.KeyPoints.from_ultralytics(results)
detections = key_points.as_detections()
detections = tracker.update_with_detections(detections)
annotated_image = trace_annotator.annotate(frame.copy(), detections)
return annotated_image
sv.process_video(
source_path="input_video.mp4",
target_path="output_video.mp4",
callback=callback
)
- Added `is_empty` method to `KeyPoints` to check if there are any keypoints in the object. (#1658)
- Added `as_detections` method to `KeyPoints` that converts `KeyPoints` to `Detections`. (#1658)
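A tiny sketch of both methods together, using `KeyPoints.empty()` as a stand-in for real model output:

import supervision as sv

key_points = sv.KeyPoints.empty()
key_points.is_empty()
# True

detections = key_points.as_detections()
detections.is_empty()
# True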
- Added a new video to `supervision[assets]`. (#1657)
from supervision.assets import download_assets, VideoAssets
path_to_video = download_assets(VideoAssets.SKIING)
- Supervision can now be used with Python 3.13. The most notable update is the ability to run Python without the Global Interpreter Lock (GIL). We expect support for this among our dependencies to be inconsistent, but if you do attempt it - let us know the results! (#1595)
- Added `MeanAverageRecall` (mAR) metric, which returns a recall score averaged over IoU thresholds, detected object classes, and limits imposed on the maximum number of considered detections. (#1661)
import supervision as sv
from supervision.metrics import MeanAverageRecall

predictions = sv.Detections(...)
targets = sv.Detections(...)

mar_metric = MeanAverageRecall()
mar_result = mar_metric.update(predictions, targets).compute()
mar_result.plot()
- Added `Precision` and `Recall` metrics, providing a baseline for comparing model outputs to ground truth or another model. (#1609)
import supervision as sv
from supervision.metrics import Recall
predictions = sv.Detections(...)
targets = sv.Detections(...)
recall_metric = Recall()
recall_result = recall_metric.update(predictions, targets).compute()
recall_result.plot()
- All metrics now support Oriented Bounding Boxes (OBB). (#1593)

import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score(metric_target=sv.MetricTarget.ORIENTED_BOUNDING_BOXES)
f1_result = f1_metric.update(predictions, targets).compute()
- Introducing Smart Labels! When `smart_position` is set for `LabelAnnotator`, `RichLabelAnnotator` or `VertexLabelAnnotator`, the labels will move around to avoid overlapping others. (#1625)
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread("image.jpg")
label_annotator = sv.LabelAnnotator(smart_position=True)
model = YOLO("yolo11m.pt")
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)
annotated_frame = label_annotator.annotate(image.copy(), detections)
sv.plot_image(annotated_frame)
- Added the `metadata` variable to `Detections`. It allows you to store custom data per-image, rather than per-detected-object as was possible with the `data` variable. For example, `metadata` could be used to store the source video path, camera model, or camera parameters. (#1589)
import supervision as sv
from ultralytics import YOLO
model = YOLO("yolov8m")
result = model("image.png")[0]
detections = sv.Detections.from_ultralytics(result)
# Items in `data` must match length of detections
object_ids = [num for num in range(len(detections))]
detections.data["object_number"] = object_ids
# Items in `metadata` can be of any length.
detections.metadata["camera_model"] = "Luxonis OAK-D"
- Added a `py.typed` type hints metafile. It should provide a stronger signal to type annotators and IDEs that type support is available. (#1586)
- `ByteTrack` no longer requires `detections` to have a `class_id`. (#1637)
- `draw_line`, `draw_rectangle`, `draw_filled_rectangle`, `draw_polygon`, `draw_filled_polygon` and `PolygonZoneAnnotator` now come with a default color. (#1591)
- Dataset classes are treated as case-sensitive when merging multiple datasets. (#1643)
- Expanded metrics documentation with example plots and printed results (#1660)
- Added usage example for polygon zone (#1608)
- Small improvements to error handling in polygons. (#1602)
- Updated `ByteTrack`, removing shared variables. Previously, multiple instances of `ByteTrack` would share some data, requiring liberal use of `tracker.reset()`. (#1603, #1528)
- Fixed a bug where the `class_agnostic` setting in `MeanAveragePrecision` would not work. (#1577)
- Removed the welcome workflow from our CI system. (#1596)
- Large refactor of `ByteTrack`: `STrack` moved to a separate class, removed the superfluous `BaseTrack` class, removed unused variables. (#1603)
- Large refactor of `RichLabelAnnotator`, matching its contents with `LabelAnnotator`. (#1625)
0.24.0 Oct 4, 2024¶
- Added the `F1Score` metric to `supervision.metrics`:

import supervision as sv
from supervision.metrics import F1Score
predictions = sv.Detections(...)
targets = sv.Detections(...)
f1_metric = F1Score()
f1_result = f1_metric.update(predictions, targets).compute()
print(f1_result)
print(f1_result.f1_50)
print(f1_result.small_objects.f1_50)
- Added new cookbook: Small Object Detection with SAHI. This cookbook provides a detailed guide on using `InferenceSlicer` for small object detection. #1483
- Added an Embedded Workflow, which allows you to preview annotators. #1533
- Enhanced `LineZoneAnnotator`, allowing labels to align with the line, even when it is not horizontal. You can now also disable the text background, and draw labels off-center, which minimizes overlap for multiple `LineZone` labels. #854
import supervision as sv
import cv2
image = cv2.imread("<SOURCE_IMAGE_PATH>")
line_zone = sv.LineZone(
start=sv.Point(0, 100),
end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotator(
text_orient_to_line=True,
display_text_box=False,
text_centered=False
)
annotated_frame = line_zone_annotator.annotate(
frame=image.copy(), line_counter=line_zone
)
sv.plot_image(annotated_frame)
- Added per-class counting capabilities to `LineZone` and introduced `LineZoneAnnotatorMulticlass` for visualizing the counts per class. This feature allows tracking of individual classes crossing a line, enhancing the flexibility of use cases like traffic monitoring or crowd analysis. #1555
import supervision as sv
import cv2
image = cv2.imread("<SOURCE_IMAGE_PATH>")
line_zone = sv.LineZone(
start=sv.Point(0, 100),
end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()
annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_zones=[line_zone]
)
sv.plot_image(annotated_frame)
- Added `from_easyocr`, allowing integration of OCR results into the supervision framework. EasyOCR is an open-source optical character recognition (OCR) library that can read text from images. #1515
import supervision as sv
import easyocr
import cv2
image = cv2.imread("<SOURCE_IMAGE_PATH>")
reader = easyocr.Reader(["en"])
result = reader.readtext("<SOURCE_IMAGE_PATH>", paragraph=True)
detections = sv.Detections.from_easyocr(result)
box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)
label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)
annotated_image = image.copy()
annotated_image = box_annotator.annotate(scene=annotated_image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
sv.plot_image(annotated_image)
- Added the `oriented_box_iou_batch` function to `detection.utils`. This function computes Intersection over Union (IoU) for oriented (rotated) bounding boxes (OBB). #1502
import numpy as np
import supervision as sv
boxes_true = np.array([[[1, 0], [0, 1], [3, 4], [4, 3]]])
boxes_detection = np.array([[[1, 1], [2, 0], [4, 2], [3, 3]]])
ious = sv.oriented_box_iou_batch(boxes_true, boxes_detection)
print("IoU between true and detected boxes:", ious)
- Extended `PolygonZoneAnnotator` to allow setting opacity when drawing zones, providing enhanced visualization by filling the zone with adjustable transparency. #1527
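A brief sketch of the new option, assuming the parameter is named `opacity` and takes a value between 0 and 1:

import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

# Assumed `opacity` parameter: fills the zone polygon with
# 30% transparency on top of the scene.
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, opacity=0.3)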
- Added `sv.Detections.from_ncnn`, a connector for the NCNN framework, shown below with a YOLOv8 model from the NCNN model zoo.

import cv2
from ncnn.model_zoo import get_model
import supervision as sv
image = cv2.imread("<SOURCE_IMAGE_PATH>")
model = get_model(
"yolov8s",
target_size=640,
prob_threshold=0.5,
nms_threshold=0.45,
num_threads=4,
use_gpu=True,
)
result = model(image)
detections = sv.Detections.from_ncnn(result)
Removed
The frame_resolution_wh parameter in PolygonZone has been removed.
Removed
Supervision installation methods "headless" and "desktop" were removed, as they are no longer needed. pip install supervision[headless] will install the base library and harmlessly warn of non-existent extras.
- Supervision now depends on `opencv-python` rather than `opencv-python-headless`. #1530
- Fixed the COCO 101-point Average Precision algorithm to correctly interpolate precision, providing a more precise calculation of average precision without averaging out intermediate values. #1500
- Resolved miscellaneous issues highlighted when building documentation. This mostly includes whitespace adjustments and type inconsistencies. Updated documentation for clarity and fixed formatting issues. Added an explicit version for `mkdocstrings-python`. #1549
- Enabled and fixed Ruff rules for code formatting, including changes like avoiding unnecessary iterable allocations and using `Optional` for default mutable arguments. #1526
0.23.0 Aug 28, 2024¶
- Added #930: `IconAnnotator`, a new annotator that allows drawing icons on each detection. Useful if you want to draw a specific icon for each class.
import supervision as sv
from inference import get_model
image = <SOURCE_IMAGE_PATH>
icon_dog = <DOG_PNG_PATH>
icon_cat = <CAT_PNG_PATH>
model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)
icon_paths = []
for class_name in detections.data["class_name"]:
if class_name == "dog":
icon_paths.append(icon_dog)
elif class_name == "cat":
icon_paths.append(icon_cat)
else:
icon_paths.append("")
icon_annotator = sv.IconAnnotator()
annotated_frame = icon_annotator.annotate(
scene=image.copy(),
detections=detections,
icon_path=icon_paths
)
- Added #1385: `BackgroundOverlayAnnotator`, which draws an overlay on the background of the image so that detections stand out.
import supervision as sv
from inference import get_model
image = <SOURCE_IMAGE_PATH>
model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)
background_overlay_annotator = sv.BackgroundOverlayAnnotator()
annotated_frame = background_overlay_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Added #1386: Support for Transformers v5 functions in `sv.Detections.from_transformers`. This includes the `DetrImageProcessor` methods `post_process_object_detection`, `post_process_panoptic_segmentation`, `post_process_semantic_segmentation`, and `post_process_instance_segmentation`.
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_object_detection(
outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(
transformers_results=results,
id2label=model.config.id2label)
- Added #1354: Ultralytics SAM (Segment Anything Model) support in `sv.Detections.from_ultralytics`. SAM2 was released during this update and is already supported via `sv.Detections.from_sam`.
import supervision as sv
from segment_anything import (
sam_model_registry,
SamAutomaticMaskGenerator
)
sam_model_reg = sam_model_registry[MODEL_TYPE]
sam = sam_model_reg(checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
mask_generator = SamAutomaticMaskGenerator(sam)
sam_result = mask_generator.generate(IMAGE)
detections = sv.Detections.from_sam(sam_result=sam_result)
- Added #1458: `outline_color` options for `TriangleAnnotator` and `DotAnnotator`.
- Added #1409: `text_color` option for the `VertexLabelAnnotator` keypoint annotator.
- Changed #1434: `InferenceSlicer` now features an `overlap_wh` parameter, making it easier to compute slice sizes when handling overlapping slices.
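A short sketch of the parameter in context; the callback is a placeholder, `slice_wh` is the existing companion parameter, and on versions where the deprecated `overlap_ratio_wh` still exists you may need to set it to `None` as well:

import numpy as np
import supervision as sv

def callback(image_slice: np.ndarray) -> sv.Detections:
    # Run any detector on the slice and convert its result here.
    return sv.Detections.empty()

# Each 640x640 slice overlaps its neighbors by 64 px in both directions.
slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=(640, 640),
    overlap_wh=(64, 64),
)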
- Fixed #1448: Various annotator type issues have been resolved, supporting expanded error handling.
- Fixed #1348: Introduced a new method for seeking to a specific video frame, addressing cases where traditional seek methods fail. It can be enabled with `iterative_seek=True`.
import supervision as sv
for frame in sv.get_video_frames_generator(
source_path=<SOURCE_VIDEO_PATH>,
start=60,
iterative_seek=True
):
...
- Fixed #1424: The `plot_image` function now clearly indicates that the size is in inches.
Removed
The track_buffer, track_thresh, and match_thresh parameters in ByteTrack are deprecated and were removed as of supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
Removed
The triggering_position parameter in sv.PolygonZone was removed as of supervision-0.23.0. Use triggering_anchors instead.
Deprecated
overlap_filter_strategy in InferenceSlicer.__init__ is deprecated and will be removed in supervision-0.27.0. Use overlap_strategy instead.
Deprecated
overlap_ratio_wh in InferenceSlicer.__init__ is deprecated and will be removed in supervision-0.27.0. Use overlap_wh instead.
0.22.0 Jul 12, 2024¶
- Added #1326: `sv.DetectionDataset` and `sv.ClassificationDataset` now allow loading images into memory only when necessary (lazy loading).
Deprecated
Constructing DetectionDataset with parameter images as Dict[str, np.ndarray] is deprecated and will be removed in supervision-0.26.0. Please pass a list of paths List[str] instead.
Deprecated
The DetectionDataset.images property is deprecated and will be removed in supervision-0.26.0. Please loop over images with for path, image, annotation in dataset:, as that does not require loading all images into memory.
import roboflow
from roboflow import Roboflow
import supervision as sv
roboflow.login()
rf = Roboflow()
project = rf.workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")
ds_train = sv.DetectionDataset.from_coco(
images_directory_path=f"{dataset.location}/train",
annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)
path, image, annotation = ds_train[0]  # loads image on demand

for path, image, annotation in ds_train:
    ...  # loads image on demand
- Added #1296: `sv.Detections.from_lmm` now supports parsing results from the Florence 2 model, extending the capability to handle outputs from this Large Multimodal Model (LMM). This includes detailed object detection, OCR with region proposals, segmentation, and more. Find out more in our Colab notebook.
- Added #1232: Support for keypoint detection with MediaPipe. Both legacy and modern pipelines are supported. See `sv.KeyPoints.from_mediapipe` for more.
- Added #1316: `sv.KeyPoints.from_mediapipe` extended to support FaceMesh from MediaPipe. This enhancement allows processing of both face landmarks from `FaceLandmarker`, and legacy results from `FaceMesh`.
- Added #1310: `sv.KeyPoints.from_detectron2`, a new `KeyPoints` method adding support for extracting keypoints from the popular Detectron2 platform.
- Added #1300: `sv.Detections.from_detectron2` now supports Detectron2 segmentation models. The resulting masks can be used with `sv.MaskAnnotator` to display annotations.
import supervision as sv
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
import cv2
image = cv2.imread(<SOURCE_IMAGE_PATH>)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
result = predictor(image)
detections = sv.Detections.from_detectron2(result)
mask_annotator = sv.MaskAnnotator()
annotated_frame = mask_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1277: If you provide a font that supports the symbols of a language, `sv.RichLabelAnnotator` will draw them on your images.
- Various other annotators have been revised to ensure proper in-place functionality when used with `numpy` arrays. Additionally, we fixed a bug where `sv.ColorAnnotator` was filling boxes with solid color when used in-place.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)
rich_label_annotator = sv.RichLabelAnnotator(font_path=<TTF_FONT_PATH>)
annotated_image = rich_label_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1227: Support for loading Oriented Bounding Box (OBB) datasets in YOLO format.
import supervision as sv
train_ds = sv.DetectionDataset.from_yolo(
images_directory_path="/content/dataset/train/images",
annotations_directory_path="/content/dataset/train/labels",
data_yaml_path="/content/dataset/data.yaml",
is_obb=True,
)
_, image, detections = train_ds[0]
obb_annotator = sv.OrientedBoxAnnotator()
annotated_image = obb_annotator.annotate(scene=image.copy(), detections=detections)
- Fixed #1312: Fixed `CropAnnotator`.
Removed
BoxAnnotator was removed; however, BoundingBoxAnnotator has been renamed to BoxAnnotator. Use a combination of BoxAnnotator and LabelAnnotator to simulate the old BoxAnnotator behavior.
Deprecated
The name BoundingBoxAnnotator has been deprecated and will be removed in supervision-0.26.0. It has been renamed to BoxAnnotator.
- Added #975: 📝 New cookbooks: serialize detections into JSON and CSV.
- Added #1290: Mostly an internal change; our file utility functions now support both `str` and `pathlib` paths.
- Added #1340: Two new methods for converting between bounding box formats - `xywh_to_xyxy` and `xcycwh_to_xyxy`.
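A small sketch of both converters, assuming each accepts an `(N, 4)` array:

import numpy as np
import supervision as sv

# (x, y, width, height) -> (x_min, y_min, x_max, y_max)
sv.xywh_to_xyxy(np.array([[10, 20, 40, 80]]))
# [[10, 20, 50, 100]]

# (center x, center y, width, height) -> (x_min, y_min, x_max, y_max)
sv.xcycwh_to_xyxy(np.array([[30, 60, 40, 80]]))
# [[10, 20, 50, 100]]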
Removed
from_roboflow method has been removed due to deprecation. Use from_inference instead.
Removed
Color.white() has been removed due to deprecation. Use Color.WHITE instead.
Removed
Color.black() has been removed due to deprecation. Use Color.BLACK instead.
Removed
Color.red() has been removed due to deprecation. Use Color.RED instead.
Removed
Color.green() has been removed due to deprecation. Use Color.GREEN instead.
Removed
Color.blue() has been removed due to deprecation. Use Color.BLUE instead.
Removed
ColorPalette.default() has been removed due to deprecation. Use ColorPalette.DEFAULT instead.
Removed
FPSMonitor.__call__ has been removed due to deprecation. Use the attribute FPSMonitor.fps instead.
0.21.0 Jun 5, 2024¶
- Added #500: `sv.Detections.with_nmm` to perform non-maximum merging on the current set of object detections.
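A minimal sketch, assuming the merge threshold parameter is named `threshold` as in `with_nms`:

import supervision as sv

detections = sv.Detections(...)

# Merge overlapping detections instead of suppressing them.
detections = detections.with_nmm(threshold=0.5)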
- Added #1221: `sv.Detections.from_lmm`, allowing parsing of Large Multimodal Model (LMM) text results into an `sv.Detections` object. For now, `from_lmm` supports only PaliGemma result parsing.
import supervision as sv
paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
sv.LMM.PALIGEMMA,
paligemma_result,
resolution_wh=(1000, 1000),
classes=["cat", "dog"],
)
detections.xyxy
# array([[250., 250., 750., 750.]])
detections.class_id
# array([0])
- Added #1236: `sv.VertexLabelAnnotator`, allowing annotation of every vertex of a keypoint skeleton with custom text and color.
import supervision as sv
image = ...
key_points = sv.KeyPoints(...)
edge_annotator = sv.EdgeAnnotator(
color=sv.Color.GREEN,
thickness=5
)
annotated_frame = edge_annotator.annotate(
scene=image.copy(),
key_points=key_points
)
- Added #1147: `sv.KeyPoints.from_inference`, allowing the creation of `sv.KeyPoints` from Inference results.
- Added #1138: `sv.KeyPoints.from_yolo_nas`, allowing the creation of `sv.KeyPoints` from YOLO-NAS results.
- Added #1163: `sv.mask_to_rle` and `sv.rle_to_mask`, allowing easy conversion between mask and RLE formats.
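A round-trip sketch, assuming `rle_to_mask` takes the RLE values plus a `resolution_wh`:

import numpy as np
import supervision as sv

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

rle = sv.mask_to_rle(mask)
restored = sv.rle_to_mask(np.array(rle), resolution_wh=(4, 4))
assert (restored == mask).all()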
- Changed #1236: `sv.InferenceSlicer` now allows selecting the overlap filtering strategy (`NONE`, `NON_MAX_SUPPRESSION` and `NON_MAX_MERGE`).
- Changed #1178: `sv.InferenceSlicer` now supports instance segmentation models.
import cv2
import numpy as np
import supervision as sv
from inference import get_model
model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
def callback(image_slice: np.ndarray) -> sv.Detections:
results = model.infer(image_slice)[0]
return sv.Detections.from_inference(results)
slicer = sv.InferenceSlicer(callback = callback)
detections = slicer(image)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Changed #1228: `sv.LineZone` is now 10-20 times faster, depending on the use case.
- Changed #1163: `sv.DetectionDataset.from_coco` and `sv.DetectionDataset.as_coco` now support the run-length encoding (RLE) mask format.
0.20.0 April 24, 2024¶
- Added #1128: `sv.KeyPoints` to provide initial support for pose estimation and broader keypoint detection models.
- Added #1128: `sv.EdgeAnnotator` and `sv.VertexAnnotator` to enable rendering of results from keypoint detection models.
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')
result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)
edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)
- Changed #1037: `sv.LabelAnnotator` now has an additional `corner_radius` argument that allows rounding the corners of the bounding box.
- Changed #1109: `sv.PolygonZone` such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`.
Deprecated
The frame_resolution_wh parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.24.0.
- Changed #1084: `sv.get_polygon_center` to calculate a more accurate polygon centroid.
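For reference, a quick sketch of the utility:

import numpy as np
import supervision as sv

polygon = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
sv.get_polygon_center(polygon=polygon)
# approximately Point(x=50, y=50)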
- Changed #1069: `sv.Detections.from_transformers` by adding support for Transformers segmentation models and extraction of class name values.
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Fixed #787: `sv.ByteTrack.update_with_detections`, which was removing segmentation masks while tracking. Now, `ByteTrack` can be used alongside segmentation models.
0.19.0 March 15, 2024¶
- Added #818: `sv.CSVSink`, allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with csv_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
csv_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #819: `sv.JSONSink`, allowing for the straightforward saving of image, video, or stream inference results in a `.json` file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with json_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
json_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #847: `sv.mask_iou_batch`, allowing computation of Intersection over Union (IoU) between two sets of masks.
- Added #847: `sv.mask_non_max_suppression`, allowing Non-Maximum Suppression (NMS) to be performed on segmentation predictions.
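A small sketch of `sv.mask_iou_batch`, assuming both inputs are boolean arrays of shape `(N, H, W)`:

import numpy as np
import supervision as sv

masks_true = np.zeros((1, 4, 4), dtype=bool)
masks_true[0, :2, :2] = True  # 4-pixel square

masks_detection = np.zeros((1, 4, 4), dtype=bool)
masks_detection[0, :2, :] = True  # 8-pixel strip

# intersection = 4, union = 8 -> IoU = 0.5
sv.mask_iou_batch(masks_true, masks_detection)
# [[0.5]]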
- Added #888: `sv.CropAnnotator`, allowing users to annotate the scene with scaled-up crops of detections.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Changed #827: `sv.ByteTrack.reset`, allowing users to clear tracker state and enabling the processing of multiple video files in sequence.
- Changed #802: `sv.LineZoneAnnotator`, allowing the in/out count to be hidden using the `display_in_count` and `display_out_count` properties.
- Changed #787: `sv.ByteTrack` input arguments and docstrings, updated to improve readability and ease of use.
Deprecated
The track_buffer, track_thresh, and match_thresh parameters in sv.ByteTrack are deprecated and will be removed in supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
- Changed #910: `sv.PolygonZone` now accepts a list of specific box anchors that must be in the zone for a detection to be counted, as sketched after the deprecation note below.
Deprecated
The triggering_position parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.23.0. Use triggering_anchors instead.
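A minimal sketch of the new argument, using the current constructor (where `frame_resolution_wh` is no longer required):

import numpy as np
import supervision as sv

zone = sv.PolygonZone(
    polygon=np.array([[0, 0], [640, 0], [640, 480], [0, 480]]),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)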
- Changed #875: Annotators now support Pillow images. All supervision annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input.
- Fixed #944: `sv.DetectionsSmoother` removing `tracking_id` from `sv.Detections`.
0.18.0 January 25, 2024¶
- Added #720: `sv.PercentageBarAnnotator`, allowing annotation of images and videos with percentage values representing confidence or another custom property.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> percentage_bar_annotator = sv.PercentageBarAnnotator()
>>> annotated_frame = percentage_bar_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #702: `sv.RoundBoxAnnotator`, allowing annotation of images and videos with rounded-corner bounding boxes.
- Added #770: `sv.OrientedBoxAnnotator`, allowing annotation of images and videos with OBB (Oriented Bounding Boxes).
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Added #696: `sv.DetectionsSmoother`, allowing smoothing of detections over multiple frames in video tracking.
- Added #769: `sv.ColorPalette.from_matplotlib`, allowing users to create an `sv.ColorPalette` instance from a Matplotlib color palette.
>>> import supervision as sv
>>> sv.ColorPalette.from_matplotlib('viridis', 5)
ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
- Changed #770: `sv.Detections.from_ultralytics`, adding support for OBB (Oriented Bounding Boxes).
- Changed #735: `sv.LineZone` now accepts a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement over the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as `sv.Position.BOTTOM_CENTER`, or any other combination of anchors defined as `List[sv.Position]`, as sketched below.
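A quick sketch, assuming the argument is named `triggering_anchors` as in `sv.PolygonZone`:

import supervision as sv

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(640, 100),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)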
- Changed #756: `sv.Color`'s and `sv.ColorPalette`'s method of accessing predefined colors, transitioning from a function-based approach (`sv.Color.red()`) to a more intuitive and conventional property-based method (`sv.Color.RED`).
Deprecated
sv.ColorPalette.default() is deprecated and will be removed in supervision-0.22.0. Use sv.ColorPalette.DEFAULT instead.
- Changed #769: `sv.ColorPalette.DEFAULT` value, giving users a more extensive set of annotation colors.
- Changed #677: `sv.Detections.from_roboflow` to `sv.Detections.from_inference`, streamlining its functionality to be compatible with both the inference pip package and the Roboflow hosted API.
Deprecated
Detections.from_roboflow() is deprecated and will be removed in supervision-0.22.0. Use Detections.from_inference instead.
- Fixed #735: `sv.LineZone` functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road.
0.17.0 December 06, 2023¶
- Added #633: `sv.PixelateAnnotator`, allowing pixelation of objects on images and videos.
- Added #652: `sv.TriangleAnnotator`, allowing annotation of images and videos with triangle markers.
- Added #602: `sv.PolygonAnnotator`, allowing annotation of images and videos with segmentation mask outlines.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added a new video (vehicles) to `supervision[assets]`:

>>> from supervision.assets import download_assets, VideoAssets
>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
- Added #605: `Position.CENTER_OF_MASS`, allowing labels to be placed at the center of mass of segmentation masks.
- Added #651: `sv.scale_boxes`, allowing scaling of `sv.Detections.xyxy` values.
- Added #637: `sv.calculate_dynamic_text_scale` and `sv.calculate_dynamic_line_thickness`, allowing text scale and line thickness to match image resolution.
- Added #620: `sv.Color.as_hex`, allowing extraction of the color value in HEX format.
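For example (output format assumed to be lowercase hex):

>>> import supervision as sv
>>> sv.Color(r=255, g=255, b=0).as_hex()
'#ffff00'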
- Added #572: `sv.Classifications.from_timm`, allowing loading of classification results from timm models.
- Added #478: `sv.Classifications.from_clip`, allowing loading of classification results from the CLIP model.
- Added #571: `sv.Detections.from_azure_analyze_image`, allowing loading of detection results from Azure Image Analysis.
- Changed #646: `sv.BoxMaskAnnotator`, renaming it to `sv.ColorAnnotator`.
- Changed #606: `sv.MaskAnnotator` to make it 5x faster.
- Fixed #584: `sv.DetectionDataset.from_yolo` to ignore empty lines in annotation files.
- Fixed #555: `sv.BlurAnnotator` to trim negative coordinates before blurring detections.
- Fixed #511: `sv.TraceAnnotator` to respect trace position.
0.16.0 October 19, 2023¶
- Added #422: `sv.BoxMaskAnnotator`, allowing annotation of images and videos with box masks.
- Added #433: `sv.HaloAnnotator`, allowing annotation of images and videos with a halo effect.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #466: `sv.HeatMapAnnotator`, allowing annotation of videos with heat maps.
- Added #492: `sv.DotAnnotator`, allowing annotation of images and videos with dots.
- Added #449: `sv.draw_image`, allowing drawing of an image onto a given scene with specified opacity and dimensions.
- Added #280: `sv.FPSMonitor` for monitoring frames per second (FPS) to benchmark latency.
- Changed #482: `sv.LineZone.trigger` now returns `Tuple[np.ndarray, np.ndarray]`. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside.
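A short sketch of the new return value:

>>> import supervision as sv
>>> line_zone = sv.LineZone(start=sv.Point(0, 100), end=sv.Point(640, 100))
>>> detections = sv.Detections(...)
>>> crossed_in, crossed_out = line_zone.trigger(detections)
>>> # each is a boolean array with one entry per detection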
- Changed #465: Annotator argument name from `color_map: str` to `color_lookup: ColorLookup` enum to increase type safety.
- Changed #426: `sv.MaskAnnotator`, allowing 2x faster annotation.
- Fixed #477: Poetry env definition allowing proper local installation.
- Fixed #430: `sv.ByteTrack` to return `np.array([], dtype=int)` when `sv.Detections` is empty.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 as those are now replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics.
0.15.0 October 5, 2023¶
- Added #170: `sv.BoundingBoxAnnotator`, allowing annotation of images and videos with bounding boxes.
- Added #170: `sv.BoxCornerAnnotator`, allowing annotation of images and videos with just bounding box corners.
- Added #170: `sv.MaskAnnotator`, allowing annotation of images and videos with segmentation masks.
- Added #170: `sv.EllipseAnnotator`, allowing annotation of images and videos with ellipses (sports game style).
- Added #386: `sv.CircleAnnotator`, allowing annotation of images and videos with circles.
- Added #354: `sv.TraceAnnotator`, allowing drawing of the path of moving objects on videos.
- Added #405: `sv.BlurAnnotator`, allowing blurring of objects on images and videos.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #354: Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision.
- Changed #399: `sv.Detections.from_roboflow` no longer requires `class_list` to be specified. The `class_id` value can be extracted directly from the inference response.
- Changed #381: `sv.VideoSink` now allows customization of the output codec.
- Changed #361: `sv.InferenceSlicer` can now operate in multithreading mode.
- Fixed #348: `sv.Detections.from_deepsparse` to allow processing of empty DeepSparse result objects.
0.14.0 August 31, 2023¶
- Added #282: Support for the SAHI inference technique with `sv.InferenceSlicer`.
>>> import cv2
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
... result = model(image_slice)[0]
... return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
- Added #297: `Detections.from_deepsparse` to enable seamless integration with the DeepSparse framework.
- Added #281: `sv.Classifications.from_ultralytics` to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are now deprecated and will be removed with supervision-0.16.0 release.
- Added #341: First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision.
- Changed #296: `sv.ClassificationDataset` and `sv.DetectionDataset` now use image paths (not image names) as dataset keys.
- Fixed #300: `Detections.from_roboflow` to filter out polygons with fewer than 3 points.
0.13.0 August 8, 2023¶
- Added #236: Support for mean average precision (mAP) for object detection models with `sv.MeanAveragePrecision`.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
- Added #256: Support for ByteTrack for object tracking with `sv.ByteTrack`.
- Added #222: `sv.Detections.from_ultralytics` to enable seamless integration with the Ultralytics framework. This will enable you to use `supervision` with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 is now deprecated and will be removed with supervision-0.15.0 release.
- Added #191: `sv.Detections.from_paddledet` to enable seamless integration with the PaddleDetection framework.
- Added #245: Support for loading PASCAL VOC segmentation datasets with `sv.DetectionDataset`.
0.12.0 July 24, 2023¶
Python 3.7 Support Terminated
With the supervision-0.12.0 release, we are terminating official support for Python 3.7.
- Added #177: Initial support for object detection model benchmarking with `sv.ConfusionMatrix`.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
- Added #173: `Detections.from_mmdetection` to enable seamless integration with the MMDetection framework.
- Added #130: Ability to install the package in `headless` or `desktop` mode.
- Changed #180: Packaging method from `setup.py` to `pyproject.toml`.
- Fixed #188: `sv.DetectionDataset.from_coco` can't be loaded when there are images without annotations.
- Fixed #226: `sv.DetectionDataset.from_yolo` can't load background instances.
0.11.1 June 29, 2023¶
- Fixed #165: `as_folder_structure` fails to save `sv.ClassificationDataset` when it is the result of inference.
0.11.0 June 28, 2023¶
- Added #150: Ability to load and save `sv.DetectionDataset` in COCO format using the `as_coco` and `from_coco` methods.
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Added #158: Ability to merge multiple `sv.DetectionDataset` objects together using the `merge` method.
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Added #162: Additional `start` and `end` arguments to `sv.get_video_frames_generator`, allowing frames to be generated only for a selected part of the video.
- Fixed #157: Incorrect loading of YOLO dataset class names from `data.yaml`.
0.10.0 June 14, 2023¶
- Added #125: Ability to load and save `sv.ClassificationDataset` in a folder structure format.
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Added #125: Support for `sv.ClassificationDataset.split`, allowing `sv.ClassificationDataset` to be divided into two parts.
- Added #110: Ability to extract masks from Roboflow API results using `sv.Detections.from_roboflow`.
- Added commit hash: Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
- Changed #135: `sv.get_video_frames_generator` documentation to better describe actual behavior.
0.9.0 June 7, 2023¶
- Added #118: Ability to select `sv.Detections` by index, list of indexes, or slice. Here is an example illustrating the new selection methods.
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
- Added #101: Ability to extract masks from YOLOv8 results using `sv.Detections.from_yolov8`. Here is an example illustrating how to extract boolean masks from the results of YOLOv8 model inference.
- Added #122: Ability to crop images using `sv.crop`. Here is an example showing how to get a separate crop for each detection in `sv.Detections`.
- Added #120: Ability to conveniently save multiple images into a directory using `sv.ImageSink`. Here is an example showing how to save every tenth video frame as a separate image.
>>> import supervision as sv
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
... for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
... sink.save_image(image=image)
- Fixed #106: Inconvenient handling of `sv.PolygonZone` coordinates. Now `sv.PolygonZone` accepts coordinates in the form of `[[x1, y1], [x2, y2], ...]` that can be both integers and floats.
0.8.0 May 17, 2023¶
- Added #100: Support for dataset inheritance. The current `Dataset` got renamed to `DetectionDataset`. Now `DetectionDataset` inherits from `BaseDataset`. This change was made to enforce the future consistency of APIs of different types of computer vision datasets.
- Added #100: Ability to save datasets in YOLO format using `DetectionDataset.as_yolo`.
>>> import roboflow
>>> from roboflow import Roboflow
>>> import supervision as sv
>>> roboflow.login()
>>> rf = Roboflow()
>>> project = rf.workspace(WORKSPACE_ID).project(PROJECT_ID)
>>> dataset = project.version(PROJECT_VERSION).download("yolov5")
>>> ds = sv.DetectionDataset.from_yolo(
... images_directory_path=f"{dataset.location}/train/images",
... annotations_directory_path=f"{dataset.location}/train/labels",
... data_yaml_path=f"{dataset.location}/data.yaml"
... )
>>> ds.classes
['dog', 'person']
- Added #102: Support for `DetectionDataset.split`, allowing `DetectionDataset` to be divided into two parts.
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
- Changed #100: Default value of the `approximation_percentage` parameter from `0.75` to `0.0` in `DetectionDataset.as_yolo` and `DetectionDataset.as_pascal_voc`.
0.7.0 May 11, 2023¶
- Added #91: `Detections.from_yolo_nas` to enable seamless integration with the YOLO-NAS model.
- Added #86: Ability to load datasets in YOLO format using `Dataset.from_yolo`.
- Added #84: `Detections.merge` to merge multiple `Detections` objects together.
- Fixed #81: `LineZoneAnnotator.annotate` does not return the annotated frame.
- Changed #44: `LineZoneAnnotator.annotate` to allow custom text for the in and out tags.
0.6.0 April 19, 2023¶
- Added #71: Initial `Dataset` support and ability to save `Detections` in Pascal VOC XML format.
- Added #71: New `mask_to_polygons`, `filter_polygons_by_area`, `polygon_to_xyxy` and `approximate_polygon` utilities.
- Added #72: Ability to load Pascal VOC XML object detection datasets as `Dataset`.
- Changed #70: Order of `Detections` attributes to make it consistent with the order of objects in the `__iter__` tuple.
- Changed #71: `generate_2d_mask` to `polygon_to_mask`.
0.5.2 April 13, 2023¶
- Fixed #63: `LineZone.trigger` function expects 4 values instead of 5.
0.5.1 April 12, 2023¶
- Fixed: `Detections.__getitem__` method did not return the mask for the selected item.
- Fixed: `Detections.area` crashed for mask detections.
0.5.0 April 10, 2023¶
- Added #58: `Detections.mask` to enable segmentation support.
- Added #58: `MaskAnnotator` to allow easy `Detections.mask` annotation.
- Added #58: `Detections.from_sam` to enable native Segment Anything Model (SAM) support.
- Changed #58: `Detections.area` behaviour to work not only with boxes but also with masks.
0.4.0 April 5, 2023¶
- Added #46: `Detections.empty` to allow easy creation of empty `Detections` objects.
- Added #56: `Detections.from_roboflow` to allow easy creation of `Detections` objects from Roboflow API inference results.
- Added #56: `plot_images_grid` to allow easy plotting of multiple images on a single plot.
- Added #56: Initial support for the Pascal VOC XML format with the `detections_to_voc_xml` method.
- Changed #56: `show_frame_in_notebook` refactored and renamed to `plot_image`.
0.3.2 March 23, 2023¶
- Changed #50: Allow `Detections.class_id` to be `None`.
0.3.1 March 6, 2023¶
- Fixed #41: `PolygonZone` throws an exception when the object touches the bottom edge of the image.
- Fixed #42: `Detections.with_nms` method throws an exception when `Detections` is empty.
- Changed #36: `Detections.with_nms` supports class-agnostic and non-class-agnostic cases.
0.3.0 March 6, 2023¶
- Changed: Allow `Detections.confidence` to be `None`.
- Added: `Detections.from_transformers` and `Detections.from_detectron2` to enable seamless integration with Transformers and Detectron2 models.
- Added: `Detections.area` to dynamically calculate bounding box area.
- Added: `Detections.with_nms` to filter out double detections with NMS. Initial - only class-agnostic - implementation.
0.2.0 February 2, 2023¶
- Added: Advanced `Detections` filtering with pandas-like API.
- Added: `Detections.from_yolov5` and `Detections.from_yolov8` to enable seamless integration with YOLOv5 and YOLOv8 models.
0.1.0 January 19, 2023¶
Say hello to Supervision đź‘‹