Changelog
0.22.0 Jul 12, 2024
- Added #1326: sv.DetectionDataset and sv.ClassificationDataset now allow loading images into memory only when necessary (lazy loading).
Deprecated
Constructing DetectionDataset with the images parameter as Dict[str, np.ndarray] is deprecated and will be removed in supervision-0.26.0. Please pass a list of paths (List[str]) instead.
Deprecated
The DetectionDataset.images property is deprecated and will be removed in supervision-0.26.0. Please loop over images with for path, image, annotation in dataset:, as that does not require loading all images into memory.
import roboflow
from roboflow import Roboflow
import supervision as sv
roboflow.login()
rf = Roboflow()
project = rf.workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")
ds_train = sv.DetectionDataset.from_coco(
images_directory_path=f"{dataset.location}/train",
annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)
path, image, annotation = ds_train[0]  # loads image on demand

for path, image, annotation in ds_train:
    ...  # loads image on demand
- Added #1296: sv.Detections.from_lmm now supports parsing results from the Florence 2 model, extending the capability to handle outputs from this Large Multimodal Model (LMM). This includes detailed object detection, OCR with region proposals, segmentation, and more. Find out more in our Colab notebook.
- Added #1232: support for keypoint detection with MediaPipe. Both legacy and modern pipelines are supported. See sv.KeyPoints.from_mediapipe for more.
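A minimal sketch of the legacy pipeline; the mp.solutions.pose usage here is an assumption, not taken from the release notes:
import cv2
import mediapipe as mp
import supervision as sv

image = cv2.imread(<SOURCE_IMAGE_PATH>)
image_height, image_width, _ = image.shape

# legacy MediaPipe pose pipeline
with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

key_points = sv.KeyPoints.from_mediapipe(
    results, resolution_wh=(image_width, image_height))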
- Added #1316: sv.KeyPoints.from_mediapipe extended to support FaceMesh from MediaPipe. This enhancement allows for processing both face landmarks from FaceLandmarker and legacy results from FaceMesh.
- Added #1310: sv.KeyPoints.from_detectron2 is a new KeyPoints method, adding support for extracting keypoints from the popular Detectron2 platform.
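A minimal sketch mirroring the Detectron2 segmentation example further down; the keypoint model config name is an assumption:
import cv2
import supervision as sv
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg

image = cv2.imread(<SOURCE_IMAGE_PATH>)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)

result = predictor(image)
key_points = sv.KeyPoints.from_detectron2(result)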
- Added #1300: sv.Detections.from_detectron2 now supports Detectron2 segmentation models. The resulting masks can be used with sv.MaskAnnotator for displaying annotations.
import supervision as sv
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
import cv2
image = cv2.imread(<SOURCE_IMAGE_PATH>)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
result = predictor(image)
detections = sv.Detections.from_detectron2(result)
mask_annotator = sv.MaskAnnotator()
annotated_frame = mask_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1277: if you provide a font that supports the symbols of a language, sv.RichLabelAnnotator will draw them on your images.
- Various other annotators have been revised to ensure proper in-place functionality when used with numpy arrays. Additionally, we fixed a bug where sv.ColorAnnotator was filling boxes with solid color when used in-place.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)
rich_label_annotator = sv.RichLabelAnnotator(font_path=<TTF_FONT_PATH>)
annotated_image = rich_label_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1227: support for loading Oriented Bounding Box datasets in YOLO format.
import supervision as sv
train_ds = sv.DetectionDataset.from_yolo(
images_directory_path="/content/dataset/train/images",
annotations_directory_path="/content/dataset/train/labels",
data_yaml_path="/content/dataset/data.yaml",
is_obb=True
)
_, image, detections = train_ds[0]
obb_annotator = sv.OrientedBoxAnnotator()
annotated_image = obb_annotator.annotate(scene=image.copy(), detections=detections)
- Fixed #1312: CropAnnotator.
Removed
BoxAnnotator was removed; however, BoundingBoxAnnotator has been renamed to BoxAnnotator. Use a combination of BoxAnnotator and LabelAnnotator to replicate the behavior of the old BoxAnnotator.
Deprecated
The name BoundingBoxAnnotator has been deprecated and will be removed in supervision-0.26.0. It has been renamed to BoxAnnotator.
- Added #975: 📝 new Cookbooks: serialize detections into JSON and CSV.
- Added #1290: mostly an internal change; our file utility functions now support both str and pathlib paths.
- Added #1340: two new methods for converting between bounding box formats, xywh_to_xyxy and xcycwh_to_xyxy.
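A minimal sketch, assuming both helpers are exported at the package top level:
import numpy as np
import supervision as sv

xywh = np.array([[10, 20, 30, 40]])    # (x, y, width, height)
xcycwh = np.array([[25, 40, 30, 40]])  # (center x, center y, width, height)

sv.xywh_to_xyxy(xywh)      # array([[10, 20, 40, 60]])
sv.xcycwh_to_xyxy(xcycwh)  # array([[10, 20, 40, 60]])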
Removed
The from_roboflow method has been removed due to deprecation. Use from_inference instead.
Removed
Color.white() has been removed due to deprecation. Use Color.WHITE instead.
Removed
Color.black() has been removed due to deprecation. Use Color.BLACK instead.
Removed
Color.red() has been removed due to deprecation. Use Color.RED instead.
Removed
Color.green() has been removed due to deprecation. Use Color.GREEN instead.
Removed
Color.blue() has been removed due to deprecation. Use Color.BLUE instead.
Removed
ColorPalette.default() has been removed due to deprecation. Use ColorPalette.DEFAULT instead.
Removed
FPSMonitor.__call__ has been removed due to deprecation. Use the FPSMonitor.fps attribute instead.
0.21.0 Jun 5, 2024
- Added #500: sv.Detections.with_nmm to perform non-maximum merging on the current set of object detections.
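A minimal sketch; the threshold parameter name mirrors sv.Detections.with_nms and is an assumption:
import supervision as sv

detections = sv.Detections(...)
merged = detections.with_nmm(threshold=0.5)  # merge overlapping detections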
- Added #1221: sv.Detections.from_lmm allowing to parse Large Multimodal Model (LMM) text results into an sv.Detections object. For now, from_lmm supports only PaliGemma result parsing.
import supervision as sv
paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
sv.LMM.PALIGEMMA,
paligemma_result,
resolution_wh=(1000, 1000),
classes=['cat', 'dog']
)
detections.xyxy
# array([[250., 250., 750., 750.]])
detections.class_id
# array([0])
- Added #1236: sv.VertexLabelAnnotator allowing to annotate every vertex of a keypoint skeleton with custom text and color.
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

# VertexLabelAnnotator matches the entry above; the constructor
# arguments shown are assumptions (the defaults also work)
vertex_label_annotator = sv.VertexLabelAnnotator(
    color=sv.Color.GREEN,
    text_color=sv.Color.BLACK
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points
)
- Added #1147: sv.KeyPoints.from_inference allowing to create sv.KeyPoints from Inference results.
- Added #1138: sv.KeyPoints.from_yolo_nas allowing to create sv.KeyPoints from YOLO-NAS results.
- Added #1163: sv.mask_to_rle and sv.rle_to_mask allowing for easy conversion between mask and RLE formats.
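A minimal round-trip sketch; the resolution_wh parameter name in sv.rle_to_mask is an assumption:
import numpy as np
import supervision as sv

mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True

rle = sv.mask_to_rle(mask)  # run-length encoding of the mask
restored = sv.rle_to_mask(np.array(rle), resolution_wh=(3, 3))
assert np.array_equal(mask, restored)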
- Changed #1236: sv.InferenceSlicer allowing to select an overlap filtering strategy (NONE, NON_MAX_SUPPRESSION, and NON_MAX_MERGE).
- Changed #1178: sv.InferenceSlicer adding instance segmentation model support.
import cv2
import numpy as np
import supervision as sv
from inference import get_model
model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
def callback(image_slice: np.ndarray) -> sv.Detections:
results = model.infer(image_slice)[0]
return sv.Detections.from_inference(results)
slicer = sv.InferenceSlicer(callback = callback)
detections = slicer(image)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Changed #1228: sv.LineZone making it 10-20 times faster, depending on the use case.
- Changed #1163: sv.DetectionDataset.from_coco and sv.DetectionDataset.as_coco adding support for the run-length encoding (RLE) mask format.
0.20.0 April 24, 2024
- Added #1128: sv.KeyPoints to provide initial support for pose estimation and broader keypoint detection models.
- Added #1128: sv.EdgeAnnotator and sv.VertexAnnotator to enable rendering of results from keypoint detection models.
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')
result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)
edge_annotators = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotators.annotate(image.copy(), keypoints)
- Changed #1037: sv.LabelAnnotator by adding an additional corner_radius argument that allows for rounding the corners of the bounding box.
- Changed #1109: sv.PolygonZone such that the frame_resolution_wh argument is no longer required to initialize sv.PolygonZone.
Deprecated
The frame_resolution_wh parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.24.0.
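A minimal sketch of initializing sv.PolygonZone without frame_resolution_wh:
import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)  # no frame_resolution_wh required

detections = sv.Detections(...)
is_in_zone = zone.trigger(detections)  # boolean array, one entry per detection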
- Changed #1084: sv.get_polygon_center to calculate a more accurate polygon centroid.
- Changed #1069: sv.Detections.from_transformers by adding support for Transformers segmentation models and extracting class name values.
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Fixed #787: sv.ByteTrack.update_with_detections which was removing segmentation masks while tracking. Now ByteTrack can be used alongside segmentation models.
0.19.0 March 15, 2024
- Added #818: sv.CSVSink allowing for the straightforward saving of image, video, or stream inference results in a .csv file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with csv_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
csv_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #819: sv.JSONSink allowing for the straightforward saving of image, video, or stream inference results in a .json file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with json_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
json_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #847: sv.mask_iou_batch allowing to compute the Intersection over Union (IoU) of two sets of masks (see the sketch below).
- Added #847: sv.mask_non_max_suppression allowing to perform Non-Maximum Suppression (NMS) on segmentation predictions.
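A minimal sketch; the argument order of sv.mask_iou_batch is an assumption:
import numpy as np
import supervision as sv

# two sets of boolean masks, each of shape (N, H, W)
masks_a = np.zeros((1, 4, 4), dtype=bool)
masks_a[0, :2, :2] = True  # 4 pixels
masks_b = np.zeros((1, 4, 4), dtype=bool)
masks_b[0, :2, :] = True   # 8 pixels

iou = sv.mask_iou_batch(masks_a, masks_b)  # array([[0.5]]) - intersection 4 / union 8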
- Added #888: sv.CropAnnotator allowing users to annotate the scene with scaled-up crops of detections.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Changed #827: sv.ByteTrack.reset allowing users to clear tracker state, enabling the processing of multiple video files in sequence.
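A minimal sketch of processing several videos with a single tracker; the placeholder paths are assumptions:
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()

for video_path in [<VIDEO_1_PATH>, <VIDEO_2_PATH>]:
    tracker.reset()  # clear tracker state before each new video
    for frame in sv.get_video_frames_generator(video_path):
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        detections = tracker.update_with_detections(detections)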
- Changed #802: sv.LineZoneAnnotator allowing to hide the in/out count using the display_in_count and display_out_count properties.
- Changed #787: sv.ByteTrack input arguments and docstrings updated to improve readability and ease of use.
Deprecated
The track_buffer, track_thresh, and match_thresh parameters in sv.ByteTrack are deprecated and will be removed in supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
- Changed #910: sv.PolygonZone to now accept a list of specific box anchors that must be in the zone for a detection to be counted.
Deprecated
The triggering_position parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.23.0. Use triggering_anchors instead.
- Changed #875: annotators adding support for Pillow images. All supervision annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input.
- Fixed #944: sv.DetectionsSmoother removing tracking_id from sv.Detections.
0.18.0 January 25, 2024
- Added #720: sv.PercentageBarAnnotator allowing to annotate images and videos with percentage values representing confidence or another custom property.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> percentage_bar_annotator = sv.PercentageBarAnnotator()
>>> annotated_frame = percentage_bar_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #702: sv.RoundBoxAnnotator allowing to annotate images and videos with bounding boxes with rounded corners.
- Added #770: sv.OrientedBoxAnnotator allowing to annotate images and videos with OBBs (Oriented Bounding Boxes).
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Added #696: sv.DetectionsSmoother allowing for smoothing detections over multiple frames in video tracking.
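A minimal sketch; pairing the smoother with a tracker reflects that smoothing relies on tracker ids, and the update_with_detections method name is an assumption:
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()
smoother = sv.DetectionsSmoother()

for frame in sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update_with_detections(detections)  # smoothing needs tracker ids
    detections = smoother.update_with_detections(detections)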
- Added #769: sv.ColorPalette.from_matplotlib allowing users to create a sv.ColorPalette instance from a Matplotlib color palette.
>>> import supervision as sv
>>> sv.ColorPalette.from_matplotlib('viridis', 5)
ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
- Changed #770: sv.Detections.from_ultralytics adding support for OBBs (Oriented Bounding Boxes).
- Changed #735: sv.LineZone to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement over the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as sv.Position.BOTTOM_CENTER, or any other combination of anchors defined as List[sv.Position].
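A minimal sketch of configuring triggering anchors; the line coordinates are placeholders:
import supervision as sv

line_zone = sv.LineZone(
    start=sv.Point(0, 400),
    end=sv.Point(1280, 400),
    triggering_anchors=[sv.Position.BOTTOM_CENTER]
)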
- Changed #756: sv.Color's and sv.ColorPalette's method of accessing predefined colors, transitioning from a function-based approach (sv.Color.red()) to a more intuitive and conventional property-based approach (sv.Color.RED).
Deprecated
sv.ColorPalette.default() is deprecated and will be removed in supervision-0.22.0. Use sv.ColorPalette.DEFAULT instead.
- Changed #769: sv.ColorPalette.DEFAULT value, giving users a more extensive set of annotation colors.
- Changed #677: sv.Detections.from_roboflow to sv.Detections.from_inference, streamlining its functionality to be compatible with both the inference pip package and the Roboflow hosted API.
Deprecated
Detections.from_roboflow() is deprecated and will be removed in supervision-0.22.0. Use Detections.from_inference instead.
- Fixed #735: sv.LineZone functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road.
0.17.0 December 06, 2023
- Added #633: sv.PixelateAnnotator allowing to pixelate objects on images and videos.
- Added #652: sv.TriangleAnnotator allowing to annotate images and videos with triangle markers.
- Added #602: sv.PolygonAnnotator allowing to annotate images and videos with segmentation mask outlines.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added: supervision.assets allowing to download video files for testing and demos.
>>> from supervision.assets import download_assets, VideoAssets
>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
- Added #605: Position.CENTER_OF_MASS allowing to place labels in the center of mass of segmentation masks.
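A minimal sketch:
>>> import supervision as sv
>>> label_annotator = sv.LabelAnnotator(
...     text_position=sv.Position.CENTER_OF_MASS
... )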
- Added #651: sv.scale_boxes allowing to scale sv.Detections.xyxy values.
- Added #637: sv.calculate_dynamic_text_scale and sv.calculate_dynamic_line_thickness allowing text scale and line thickness to match image resolution.
- Added #620: sv.Color.as_hex allowing to extract the color value in HEX format.
- Added #572: sv.Classifications.from_timm allowing to load classification results from timm models.
- Added #478: sv.Classifications.from_clip allowing to load classification results from the CLIP model.
- Added #571: sv.Detections.from_azure_analyze_image allowing to load detection results from Azure Image Analysis.
- Changed #646: sv.BoxMaskAnnotator renaming it to sv.ColorAnnotator.
- Changed #606: sv.MaskAnnotator to make it 5x faster.
- Fixed #584: sv.DetectionDataset.from_yolo to ignore empty lines in annotation files.
- Fixed #555: sv.BlurAnnotator to trim negative coordinates before blurring detections.
- Fixed #511: sv.TraceAnnotator to respect trace position.
0.16.0 October 19, 2023
- Added #422: sv.BoxMaskAnnotator allowing to annotate images and videos with box masks.
- Added #433: sv.HaloAnnotator allowing to annotate images and videos with a halo effect.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #466: sv.HeatMapAnnotator allowing to annotate videos with heat maps.
- Added #492: sv.DotAnnotator allowing to annotate images and videos with dots.
- Added #449: sv.draw_image allowing to draw an image onto a given scene with specified opacity and dimensions.
- Added #280: sv.FPSMonitor for monitoring frames per second (FPS) to benchmark latency.
- Changed #482: sv.LineZone.trigger now returns Tuple[np.ndarray, np.ndarray]. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside.
- Changed #465: Annotator argument name from color_map: str to color_lookup: ColorLookup enum to increase type safety.
- Changed #426: sv.MaskAnnotator allowing 2x faster annotation.
- Fixed #477: Poetry env definition allowing proper local installation.
- Fixed #430: sv.ByteTrack to return np.array([], dtype=int) when sv.Detections is empty.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are deprecated, as those are now replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics.
0.15.0 October 5, 2023
- Added #170: sv.BoundingBoxAnnotator allowing to annotate images and videos with bounding boxes.
- Added #170: sv.BoxCornerAnnotator allowing to annotate images and videos with just bounding box corners.
- Added #170: sv.MaskAnnotator allowing to annotate images and videos with segmentation masks.
- Added #170: sv.EllipseAnnotator allowing to annotate images and videos with ellipses (sports game style).
- Added #386: sv.CircleAnnotator allowing to annotate images and videos with circles.
- Added #354: sv.TraceAnnotator allowing to draw the path of moving objects on videos.
- Added #405: sv.BlurAnnotator allowing to blur objects on images and videos.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #354: Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision.
- Changed #399: sv.Detections.from_roboflow now does not require class_list to be specified. The class_id value can be extracted directly from the inference response.
- Changed #381: sv.VideoSink now allows customizing the output codec.
- Changed #361: sv.InferenceSlicer can now operate in multithreading mode.
- Fixed #348: sv.Detections.from_deepsparse to allow processing empty DeepSparse result objects.
0.14.0 August 31, 2023
- Added #282: support for the SAHI inference technique with sv.InferenceSlicer.
>>> import cv2
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
... result = model(image_slice)[0]
... return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
- Added #297: Detections.from_deepsparse to enable seamless integration with the DeepSparse framework.
- Added #281: sv.Classifications.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are now deprecated and will be removed with the supervision-0.16.0 release.
- Added #341: first supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision.
- Changed #296: sv.ClassificationDataset and sv.DetectionDataset now use the image path (not the image name) as dataset keys.
- Fixed #300: Detections.from_roboflow to filter out polygons with fewer than 3 points.
0.13.0 August 8, 2023
- Added #236: support for mean average precision (mAP) for object detection models with sv.MeanAveragePrecision.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
- Added #256: support for ByteTrack for object tracking with sv.ByteTrack.
- Added #222: sv.Detections.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 is now deprecated and will be removed with the supervision-0.15.0 release.
- Added #191: sv.Detections.from_paddledet to enable seamless integration with the PaddleDetection framework.
- Added #245: support for loading PASCAL VOC segmentation datasets with sv.DetectionDataset.
0.12.0 July 24, 2023
Python 3.7 Support Terminated
With the supervision-0.12.0 release, we are terminating official support for Python 3.7.
- Added #177: initial support for object detection model benchmarking with sv.ConfusionMatrix.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
- Added #173: Detections.from_mmdetection to enable seamless integration with the MMDetection framework.
- Added #130: ability to install the package in headless or desktop mode.
- Changed #180: packaging method from setup.py to pyproject.toml.
- Fixed #188: sv.DetectionDataset.from_coco can't be loaded when there are images without annotations.
- Fixed #226: sv.DetectionDataset.from_yolo can't load background instances.
0.11.1 June 29, 2023
- Fixed #165: as_folder_structure fails to save sv.ClassificationDataset when it is the result of inference.
0.11.0 June 28, 2023
- Added #150: ability to load and save sv.DetectionDataset in COCO format using as_coco and from_coco methods.
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Added #158: ability to merge multiple sv.DetectionDataset objects together using the merge method.
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Added #162: additional start and end arguments to sv.get_video_frames_generator allowing to generate frames only for a selected part of the video.
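A minimal sketch; the frame indices are placeholders:
>>> import supervision as sv
>>> for frame in sv.get_video_frames_generator(
...     source_path='source_video.mp4', start=60, end=120
... ):
...     ...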
- Fixed #157: incorrect loading of YOLO dataset class names from data.yaml.
0.10.0 June 14, 2023
- Added #125: ability to load and save sv.ClassificationDataset in a folder structure format.
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Added #125: support for sv.ClassificationDataset.split allowing to divide sv.ClassificationDataset into two parts.
- Added #110: ability to extract masks from Roboflow API results using sv.Detections.from_roboflow.
- Added commit hash: Supervision Quickstart notebook where you can learn more about the Detection, Dataset, and Video APIs.
- Changed #135: sv.get_video_frames_generator documentation to better describe actual behavior.
0.9.0 June 7, 2023
- Added #118: ability to select sv.Detections by index, list of indexes, or slice. Here is an example illustrating the new selection methods.
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
- Added #101: ability to extract masks from YOLOv8 results using sv.Detections.from_yolov8. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference.
- Added #122: ability to crop images using sv.crop. Here is an example showing how to get a separate crop for each detection in sv.Detections.
- Added #120: ability to conveniently save multiple images into a directory using sv.ImageSink. Here is an example showing how to save every tenth video frame as a separate image.
>>> import supervision as sv
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
... for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
... sink.save_image(image=image)
- Fixed #106: inconvenient handling of sv.PolygonZone coordinates. Now sv.PolygonZone accepts coordinates in the form of [[x1, y1], [x2, y2], ...] that can be both integers and floats.
0.8.0 May 17, 2023
- Added #100: support for dataset inheritance. The current Dataset got renamed to DetectionDataset. Now DetectionDataset inherits from BaseDataset. This change was made to enforce the future consistency of APIs of different types of computer vision datasets.
- Added #100: ability to save datasets in YOLO format using DetectionDataset.as_yolo.
>>> import roboflow
>>> from roboflow import Roboflow
>>> import supervision as sv
>>> roboflow.login()
>>> rf = Roboflow()
>>> project = rf.workspace(WORKSPACE_ID).project(PROJECT_ID)
>>> dataset = project.version(PROJECT_VERSION).download("yolov5")
>>> ds = sv.DetectionDataset.from_yolo(
... images_directory_path=f"{dataset.location}/train/images",
... annotations_directory_path=f"{dataset.location}/train/labels",
... data_yaml_path=f"{dataset.location}/data.yaml"
... )
>>> ds.classes
['dog', 'person']
- Added #102: support for DetectionDataset.split allowing to divide DetectionDataset into two parts.
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
- Changed #100: default value of the approximation_percentage parameter from 0.75 to 0.0 in DetectionDataset.as_yolo and DetectionDataset.as_pascal_voc.
0.7.0 May 11, 2023
- Added #91: Detections.from_yolo_nas to enable seamless integration with the YOLO-NAS model.
- Added #86: ability to load datasets in YOLO format using Dataset.from_yolo.
- Added #84: Detections.merge to merge multiple Detections objects together.
- Fixed #81: LineZoneAnnotator.annotate does not return annotated frame.
- Changed #44: LineZoneAnnotator.annotate to allow for custom text for the in and out tags.
0.6.0 April 19, 2023
- Added #71: initial Dataset support and ability to save Detections in Pascal VOC XML format.
- Added #71: new mask_to_polygons, filter_polygons_by_area, polygon_to_xyxy, and approximate_polygon utilities.
- Added #72: ability to load Pascal VOC XML object detection datasets as Dataset.
- Changed #70: order of Detections attributes to make it consistent with the order of objects in the __iter__ tuple.
- Changed #71: generate_2d_mask to polygon_to_mask.
0.5.2 April 13, 2023
- Fixed #63: LineZone.trigger function expects 4 values instead of 5.
0.5.1 April 12, 2023
- Fixed Detections.__getitem__ method did not return mask for selected item.
- Fixed Detections.area crashed for mask detections.
0.5.0 April 10, 2023
- Added #58: Detections.mask to enable segmentation support.
- Added #58: MaskAnnotator to allow easy Detections.mask annotation.
- Added #58: Detections.from_sam to enable native Segment Anything Model (SAM) support.
- Changed #58: Detections.area behaviour to work not only with boxes but also with masks.
0.4.0 April 5, 2023
- Added #46: Detections.empty to allow easy creation of empty Detections objects.
- Added #56: Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results.
- Added #56: plot_images_grid to allow easy plotting of multiple images on a single plot.
- Added #56: initial support for Pascal VOC XML format with the detections_to_voc_xml method.
- Changed #56: show_frame_in_notebook refactored and renamed to plot_image.
0.3.2 March 23, 2023
- Changed #50: allow Detections.class_id to be None.
0.3.1 March 6, 2023
- Fixed #41: PolygonZone throws an exception when the object touches the bottom edge of the image.
- Fixed #42: Detections.with_nms method throws an exception when Detections is empty.
- Changed #36: Detections.with_nms to support class-agnostic and non-class-agnostic cases.
0.3.0 March 6, 2023
- Changed: allow Detections.confidence to be None.
- Added: Detections.from_transformers and Detections.from_detectron2 to enable seamless integration with Transformers and Detectron2 models.
- Added: Detections.area to dynamically calculate bounding box area.
- Added: Detections.with_nms to filter out double detections with NMS. Initial, class-agnostic-only implementation.
0.2.0 February 2, 2023
- Added: advanced Detections filtering with a pandas-like API.
- Added: Detections.from_yolov5 and Detections.from_yolov8 to enable seamless integration with YOLOv5 and YOLOv8 models.
0.1.0 January 19, 2023
Say hello to Supervision 👋