Changelog
0.22.0 Jul 12, 2024
- Added #1326: sv.DetectionDataset and sv.ClassificationDataset allowing to load the images into memory only when necessary (lazy loading).
Deprecated
Constructing DetectionDataset with parameter images as Dict[str, np.ndarray] is deprecated and will be removed in supervision-0.26.0. Please pass a list of paths List[str] instead.
Deprecated
The DetectionDataset.images property is deprecated and will be removed in supervision-0.26.0. Please loop over images with for path, image, annotation in dataset:, as that does not require loading all images into memory.
import roboflow
from roboflow import Roboflow
import supervision as sv
roboflow.login()
rf = Roboflow()
project = rf.workspace(<WORKSPACE_ID>).project(<PROJECT_ID>)
dataset = project.version(<PROJECT_VERSION>).download("coco")
ds_train = sv.DetectionDataset.from_coco(
images_directory_path=f"{dataset.location}/train",
annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)
path, image, annotation = ds_train[0]  # loads the image on demand

for path, image, annotation in ds_train:
    ...  # loads each image on demand
- Added #1296: sv.Detections.from_lmm now supports parsing results from the Florence 2 model, extending the capability to handle outputs from this Large Multimodal Model (LMM). This includes detailed object detection, OCR with region proposals, segmentation, and more. Find out more in our Colab notebook.
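A minimal sketch of the parsing step; here result is assumed to already hold the post-processed Florence 2 output, and the dictionary layout in the comment is illustrative:

import supervision as sv

# result is assumed to hold post-processed Florence 2 output, e.g.
# {"<OD>": {"bboxes": [[x1, y1, x2, y2], ...], "labels": ["cat", ...]}}
detections = sv.Detections.from_lmm(
    sv.LMM.FLORENCE_2,
    result,
    resolution_wh=(1000, 1000)
)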
- Added #1232: support for keypoint detection with Mediapipe. Both legacy and modern pipelines are supported. See sv.KeyPoints.from_mediapipe for more.
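A minimal sketch using the legacy Mediapipe pose pipeline (the mp.solutions.pose call and the resolution tuple, used to denormalize landmark coordinates, reflect our assumptions):

import cv2
import mediapipe as mp
import supervision as sv

image = cv2.imread(<SOURCE_IMAGE_PATH>)

# the legacy Mediapipe pose pipeline expects RGB input
with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

# the (width, height) tuple scales normalized landmarks to pixel coordinates
key_points = sv.KeyPoints.from_mediapipe(
    results, (image.shape[1], image.shape[0])
)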
- Added #1316: sv.KeyPoints.from_mediapipe extended to support FaceMesh from Mediapipe. This enhancement allows for processing both face landmarks from FaceLandmarker, and legacy results from FaceMesh.
- Added #1310: sv.KeyPoints.from_detectron2 is a new KeyPoints method, adding support for extracting keypoints from the popular Detectron2 platform.
- Added #1300: sv.Detections.from_detectron2 now supports Detectron2 segmentation models. The resulting masks can be used with sv.MaskAnnotator for displaying annotations.
import supervision as sv
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
import cv2
image = cv2.imread(<SOURCE_IMAGE_PATH>)
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
result = predictor(image)
detections = sv.Detections.from_detectron2(result)
mask_annotator = sv.MaskAnnotator()
annotated_frame = mask_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1277: if you provide a font that supports the symbols of a language, sv.RichLabelAnnotator will draw them on your images.
- Various other annotators have been revised to ensure proper in-place functionality when used with numpy arrays. Additionally, we fixed a bug where sv.ColorAnnotator was filling boxes with solid color when used in-place.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)
rich_label_annotator = sv.RichLabelAnnotator(font_path=<TTF_FONT_PATH>)
annotated_image = rich_label_annotator.annotate(scene=image.copy(), detections=detections)
- Added #1227: support for loading Oriented Bounding Boxes datasets in YOLO format.
import supervision as sv
train_ds = sv.DetectionDataset.from_yolo(
images_directory_path="/content/dataset/train/images",
annotations_directory_path="/content/dataset/train/labels",
data_yaml_path="/content/dataset/data.yaml",
is_obb=True
)
_, image, detections = train_ds[0]

obb_annotator = sv.OrientedBoxAnnotator()
annotated_image = obb_annotator.annotate(scene=image.copy(), detections=detections)
- Fixed #1312: CropAnnotator.
Removed
BoxAnnotator was removed, however BoundingBoxAnnotator has been renamed to BoxAnnotator. Use a combination of BoxAnnotator and LabelAnnotator to simulate the old BoxAnnotator behavior.
Deprecated
The name BoundingBoxAnnotator has been deprecated and will be removed in supervision-0.26.0. It has been renamed to BoxAnnotator.
- Added #975: 📝 new cookbooks showing how to serialize detections into JSON and CSV.
- Added #1290: mostly an internal change, our file utility functions now support both str and pathlib paths.
- Added #1340: two new methods for converting between bounding box formats - xywh_to_xyxy and xcycwh_to_xyxy.
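For example (a sketch; both inputs describe the same box, so both calls should yield the same xyxy output):

import numpy as np
import supervision as sv

xywh = np.array([[10, 20, 30, 40]])     # x, y, width, height
print(sv.xywh_to_xyxy(xywh))            # [[10 20 40 60]]

xcycwh = np.array([[25, 40, 30, 40]])   # center x, center y, width, height
print(sv.xcycwh_to_xyxy(xcycwh))        # [[10 20 40 60]]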
Removed
from_roboflow method has been removed due to deprecation. Use from_inference instead.
Removed
Color.white() has been removed due to deprecation. Use Color.WHITE instead.
Removed
Color.black() has been removed due to deprecation. Use Color.BLACK instead.
Removed
Color.red() has been removed due to deprecation. Use Color.RED instead.
Removed
Color.green() has been removed due to deprecation. Use Color.GREEN instead.
Removed
Color.blue() has been removed due to deprecation. Use Color.BLUE instead.
Removed
ColorPalette.default() has been removed due to deprecation. Use ColorPalette.DEFAULT instead.
Removed
FPSMonitor.__call__ has been removed due to deprecation. Use the attribute FPSMonitor.fps instead.
0.21.0 Jun 5, 2024
- Added #500: sv.Detections.with_nmm to perform non-maximum merging on the current set of object detections.
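For example (a minimal sketch; the threshold value is illustrative):

import supervision as sv

detections = sv.Detections(...)  # overlapping detections from any model

# merge, rather than discard, detections whose overlap exceeds the threshold
merged = detections.with_nmm(threshold=0.5)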
- Added #1221: sv.Detections.from_lmm allowing to parse Large Multimodal Model (LMM) text results into a sv.Detections object. For now, from_lmm supports only PaliGemma result parsing.
import supervision as sv
paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
sv.LMM.PALIGEMMA,
paligemma_result,
resolution_wh=(1000, 1000),
classes=['cat', 'dog']
)
detections.xyxy
# array([[250., 250., 750., 750.]])
detections.class_id
# array([0])
- Added #1236: sv.VertexLabelAnnotator allowing to annotate every vertex of a keypoint skeleton with custom text and color.
import supervision as sv
image = ...
key_points = sv.KeyPoints(...)
vertex_label_annotator = sv.VertexLabelAnnotator(
    color=sv.Color.GREEN,
    text_color=sv.Color.BLACK
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points
)
- Added #1147: sv.KeyPoints.from_inference allowing to create sv.KeyPoints from Inference results.
- Added #1138: sv.KeyPoints.from_yolo_nas allowing to create sv.KeyPoints from YOLO-NAS results.
- Added #1163: sv.mask_to_rle and sv.rle_to_mask allowing for easy conversion between mask and RLE formats.
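A small round-trip sketch (the exact rle_to_mask signature, in particular the resolution_wh argument, is an assumption):

import numpy as np
import supervision as sv

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

rle = sv.mask_to_rle(mask)  # encode the boolean mask as run-length encoding
# decode it back; resolution_wh is assumed to be (width, height)
restored = sv.rle_to_mask(np.array(rle), resolution_wh=(4, 4))

assert np.array_equal(mask, restored)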
- Changed #1236: sv.InferenceSlicer allowing to select the overlap filtering strategy (NONE, NON_MAX_SUPPRESSION and NON_MAX_MERGE).
- Changed #1178: sv.InferenceSlicer adding instance segmentation model support.
import cv2
import numpy as np
import supervision as sv
from inference import get_model
model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)
def callback(image_slice: np.ndarray) -> sv.Detections:
results = model.infer(image_slice)[0]
return sv.Detections.from_inference(results)
slicer = sv.InferenceSlicer(callback = callback)
detections = slicer(image)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Changed #1228: sv.LineZone making it 10-20 times faster, depending on the use case.
- Changed #1163: sv.DetectionDataset.from_coco and sv.DetectionDataset.as_coco adding support for the run-length encoding (RLE) mask format.
0.20.0 April 24, 2024
- Added #1128: sv.KeyPoints to provide initial support for pose estimation and broader keypoint detection models.
- Added #1128: sv.EdgeAnnotator and sv.VertexAnnotator to enable rendering of results from keypoint detection models.
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')
result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)
edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)
- Changed #1037: sv.LabelAnnotator by adding an additional corner_radius argument that allows for rounding the corners of the bounding box.
- Changed #1109: sv.PolygonZone such that the frame_resolution_wh argument is no longer required to initialize sv.PolygonZone.
Deprecated
The frame_resolution_wh parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.24.0.
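For example (a minimal sketch; the polygon coordinates are illustrative):

import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [300, 100], [300, 300], [100, 300]])

# frame_resolution_wh is no longer required
zone = sv.PolygonZone(polygon=polygon)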
- Changed #1084: sv.get_polygon_center to calculate a more accurate polygon centroid.
- Changed #1069: sv.Detections.from_transformers by adding support for Transformers segmentation models and extracting class names.
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")
image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)
mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)
annotated_image = mask_annotator.annotate(
scene=image, detections=detections)
annotated_image = label_annotator.annotate(
scene=annotated_image, detections=detections)
- Fixed #787: sv.ByteTrack.update_with_detections which was removing segmentation masks while tracking. Now, ByteTrack can be used alongside segmentation models.
0.19.0 March 15, 2024
- Added #818: sv.CSVSink allowing for the straightforward saving of image, video, or stream inference results in a .csv file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with csv_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
csv_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #819: sv.JSONSink allowing for the straightforward saving of image, video, or stream inference results in a .json file.
import supervision as sv
from ultralytics import YOLO
model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)
with json_sink:
for frame in frames_generator:
result = model(frame)[0]
detections = sv.Detections.from_ultralytics(result)
json_sink.append(detections, custom_data={<CUSTOM_LABEL>:<CUSTOM_DATA>})
- Added #847: sv.mask_iou_batch allowing to compute Intersection over Union (IoU) of two sets of masks.
- Added #847: sv.mask_non_max_suppression allowing to perform Non-Maximum Suppression (NMS) on segmentation predictions.
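A small sketch of the IoU utility (the masks here are synthetic; real masks would come from a segmentation model):

import numpy as np
import supervision as sv

# two sets of boolean masks with shapes (2, H, W) and (3, H, W)
masks_a = np.zeros((2, 64, 64), dtype=bool)
masks_b = np.zeros((3, 64, 64), dtype=bool)
masks_a[0, :32] = True
masks_b[0, :16] = True

iou = sv.mask_iou_batch(masks_a, masks_b)  # pairwise IoU matrix, shape (2, 3)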
- Added #888: sv.CropAnnotator allowing users to annotate the scene with scaled-up crops of detections.
import cv2
import supervision as sv
from inference import get_model
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Changed #827: sv.ByteTrack.reset allowing users to clear tracker state, enabling the processing of multiple video files in sequence.
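For example (a minimal sketch):

import supervision as sv

tracker = sv.ByteTrack()

# ... run tracking over the first video ...

tracker.reset()  # clear internal state before processing the next video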
- Changed #802: sv.LineZoneAnnotator allowing to hide in/out counts using the display_in_count and display_out_count properties.
- Changed #787: sv.ByteTrack input arguments and docstrings updated to improve readability and ease of use.
Deprecated
The track_buffer, track_thresh, and match_thresh parameters in sv.ByteTrack are deprecated and will be removed in supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
- Changed #910: sv.PolygonZone to now accept a list of specific box anchors that must be in the zone for a detection to be counted.
Deprecated
The triggering_position parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.23.0. Use triggering_anchors instead.
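For example (a sketch; note that at this release sv.PolygonZone still required frame_resolution_wh, which was only deprecated later in supervision-0.20.0):

import numpy as np
import supervision as sv

polygon = np.array([[0, 0], [640, 0], [640, 480], [0, 480]])
zone = sv.PolygonZone(
    polygon=polygon,
    frame_resolution_wh=(640, 480),
    triggering_anchors=[sv.Position.BOTTOM_CENTER]
)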
- Changed #875: annotators adding support for Pillow images. All supervision annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input.
- Fixed #944: sv.DetectionsSmoother removing tracking_id from sv.Detections.
0.18.0 January 25, 2024
- Added #720: sv.PercentageBarAnnotator allowing to annotate images and videos with percentage values representing confidence or another custom property.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> percentage_bar_annotator = sv.PercentageBarAnnotator()
>>> annotated_frame = percentage_bar_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #702: sv.RoundBoxAnnotator allowing to annotate images and videos with rounded-corner bounding boxes.
- Added #770: sv.OrientedBoxAnnotator allowing to annotate images and videos with OBB (Oriented Bounding Boxes).
import cv2
import supervision as sv
from ultralytics import YOLO
image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
scene=image.copy(),
detections=detections
)
- Added #696: sv.DetectionsSmoother allowing for smoothing detections over multiple frames in video tracking.
- Added #769: sv.ColorPalette.from_matplotlib allowing users to create a sv.ColorPalette instance from a Matplotlib color palette.
>>> import supervision as sv
>>> sv.ColorPalette.from_matplotlib('viridis', 5)
ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
- Changed #770: sv.Detections.from_ultralytics adding support for OBB (Oriented Bounding Boxes).
- Changed #735: sv.LineZone to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as sv.Position.BOTTOM_CENTER, or any other combination of anchors defined as List[sv.Position].
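For example (a sketch; the line endpoints are illustrative):

>>> import supervision as sv
>>> line_zone = sv.LineZone(
...     start=sv.Point(0, 300),
...     end=sv.Point(640, 300),
...     triggering_anchors=[sv.Position.BOTTOM_CENTER]
... )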
- Changed #756: sv.Color's and sv.ColorPalette's method of accessing predefined colors, transitioning from a function-based approach (sv.Color.red()) to a more intuitive and conventional property-based method (sv.Color.RED).
Deprecated
sv.ColorPalette.default() is deprecated and will be removed in supervision-0.22.0. Use sv.ColorPalette.DEFAULT instead.
- Changed #769: sv.ColorPalette.DEFAULT value, giving users a more extensive set of annotation colors.
- Changed #677: sv.Detections.from_roboflow to sv.Detections.from_inference, streamlining its functionality to be compatible with both the inference pip package and the Roboflow hosted API.
Deprecated
Detections.from_roboflow() is deprecated and will be removed in supervision-0.22.0. Use Detections.from_inference instead.
- Fixed #735: sv.LineZone functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road.
0.17.0 December 06, 2023
- Added #633: sv.PixelateAnnotator allowing to pixelate objects on images and videos.
- Added #652: sv.TriangleAnnotator allowing to annotate images and videos with triangle markers.
- Added #602: sv.PolygonAnnotator allowing to annotate images and videos with segmentation mask outlines.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added: supervision.assets allowing to download video files that you can use in your demos.
>>> from supervision.assets import download_assets, VideoAssets
>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
- Added #605: Position.CENTER_OF_MASS allowing to place labels in the center of mass of segmentation masks.
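For example (a sketch, assuming sv.LabelAnnotator's text_position parameter):

>>> import supervision as sv
>>> label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER_OF_MASS)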
- Added #651: sv.scale_boxes allowing to scale sv.Detections.xyxy values.
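For example (a sketch, assuming boxes are scaled about their centers; the values are illustrative):

>>> import numpy as np
>>> import supervision as sv
>>> xyxy = np.array([[10., 10., 20., 20.]])
>>> sv.scale_boxes(xyxy=xyxy, factor=2.0)
array([[ 5.,  5., 25., 25.]])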
- Added #637: sv.calculate_dynamic_text_scale and sv.calculate_dynamic_line_thickness allowing text scale and line thickness to match image resolution.
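For example (a sketch; the resolution_wh keyword is an assumption):

>>> import supervision as sv
>>> resolution_wh = (1920, 1080)
>>> text_scale = sv.calculate_dynamic_text_scale(resolution_wh=resolution_wh)
>>> thickness = sv.calculate_dynamic_line_thickness(resolution_wh=resolution_wh)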
- Added #620: sv.Color.as_hex allowing to extract the color value in HEX format.
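For example:

>>> import supervision as sv
>>> sv.Color(r=255, g=255, b=255).as_hex()
'#ffffff'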
- Added #572: sv.Classifications.from_timm allowing to load classification results from timm models.
- Added #478: sv.Classifications.from_clip allowing to load classification results from the CLIP model.
- Added #571: sv.Detections.from_azure_analyze_image allowing to load detection results from Azure Image Analysis.
- Changed #646: sv.BoxMaskAnnotator renaming it to sv.ColorAnnotator.
- Changed #606: sv.MaskAnnotator to make it 5x faster.
- Fixed #584: sv.DetectionDataset.from_yolo to ignore empty lines in annotation files.
- Fixed #555: sv.BlurAnnotator to trim negative coordinates before blurring detections.
- Fixed #511: sv.TraceAnnotator to respect trace position.
0.16.0 October 19, 2023
- Added #422: sv.BoxMaskAnnotator allowing to annotate images and videos with box masks.
- Added #433: sv.HaloAnnotator allowing to annotate images and videos with a halo effect.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #466: sv.HeatMapAnnotator allowing to annotate videos with heat maps.
- Added #492: sv.DotAnnotator allowing to annotate images and videos with dots.
- Added #449: sv.draw_image allowing to draw an image onto a given scene with specified opacity and dimensions.
- Added #280: sv.FPSMonitor for monitoring frames per second (FPS) to benchmark latency.
- Changed #482: sv.LineZone.trigger now returns Tuple[np.ndarray, np.ndarray]. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside.
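For example (a sketch; line_zone and detections are assumed to be an existing sv.LineZone and sv.Detections):

>>> crossed_in, crossed_out = line_zone.trigger(detections)
>>> # crossed_in / crossed_out are boolean arrays, one entry per detection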
- Changed #465: annotator argument name from color_map: str to color_lookup: ColorLookup enum to increase type safety.
- Changed #426: sv.MaskAnnotator allowing 2x faster annotation.
- Fixed #477: Poetry env definition allowing proper local installation.
- Fixed #430: sv.ByteTrack to return np.array([], dtype=int) when sv.Detections is empty.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 as those are now replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics.
0.15.0 October 5, 2023
- Added #170: sv.BoundingBoxAnnotator allowing to annotate images and videos with bounding boxes.
- Added #170: sv.BoxCornerAnnotator allowing to annotate images and videos with just bounding box corners.
- Added #170: sv.MaskAnnotator allowing to annotate images and videos with segmentation masks.
- Added #170: sv.EllipseAnnotator allowing to annotate images and videos with ellipses (sports game style).
- Added #386: sv.CircleAnnotator allowing to annotate images and videos with circles.
- Added #354: sv.TraceAnnotator allowing to draw the path of moving objects on videos.
- Added #405: sv.BlurAnnotator allowing to blur objects on images and videos.
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Added #354: Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision.
- Changed #399: sv.Detections.from_roboflow now does not require class_list to be specified. The class_id value can be extracted directly from the inference response.
- Changed #381: sv.VideoSink now allows to customize the output codec.
- Changed #361: sv.InferenceSlicer can now operate in multithreading mode.
- Fixed #348: sv.Detections.from_deepsparse to allow processing empty deepsparse result objects.
0.14.0 August 31, 2023
- Added #282: support for SAHI inference technique with
sv.InferenceSlicer.
>>> import cv2
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
... result = model(image_slice)[0]
... return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
- Added #297: Detections.from_deepsparse to enable seamless integration with the DeepSparse framework.
- Added #281: sv.Classifications.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are now deprecated and will be removed with supervision-0.16.0 release.
- Added #341: first supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision.
- Changed #296: sv.ClassificationDataset and sv.DetectionDataset now use image path (not image name) as dataset keys.
- Fixed #300: Detections.from_roboflow to filter out polygons with fewer than 3 points.
0.13.0 August 8, 2023
- Added #236: support for mean average precision (mAP) for object detection models with
sv.MeanAveragePrecision.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
- Added #256: support for ByteTrack for object tracking with sv.ByteTrack.
- Added #222: sv.Detections.from_ultralytics to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports.
Deprecated
sv.Detections.from_yolov8 is now deprecated and will be removed with supervision-0.15.0 release.
- Added #191: sv.Detections.from_paddledet to enable seamless integration with the PaddleDetection framework.
- Added #245: support for loading PASCAL VOC segmentation datasets with sv.DetectionDataset.
0.12.0 July 24, 2023
Python 3.7 Support Terminated
With the supervision-0.12.0 release, we are terminating official support for Python 3.7.
- Added #177: initial support for object detection model benchmarking with
sv.ConfusionMatrix.
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
- Added #173: Detections.from_mmdetection to enable seamless integration with the MMDetection framework.
- Added #130: ability to install the package in headless or desktop mode.
- Changed #180: packaging method from setup.py to pyproject.toml.
- Fixed #188: sv.DetectionDataset.from_coco can't be loaded when there are images without annotations.
- Fixed #226: sv.DetectionDataset.from_yolo can't load background instances.
0.11.1 June 29, 2023
- Fixed #165: as_folder_structure fails to save sv.ClassificationDataset when it is the result of inference.
0.11.0 June 28, 2023
- Added #150: ability to load and save sv.DetectionDataset in COCO format using the as_coco and from_coco methods.
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Added #158: ability to merge multiple sv.DetectionDataset together using the merge method.
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Added #162: additional start and end arguments to sv.get_video_frames_generator allowing to generate frames only for a selected part of the video.
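For example (a sketch; the frame indices are illustrative):

>>> import supervision as sv
>>> generator = sv.get_video_frames_generator(
...     source_path='source_video.mp4', start=60, end=120)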
- Fixed #157: incorrect loading of YOLO dataset class names from data.yaml.
0.10.0 June 14, 2023
- Added #125: ability to load and save sv.ClassificationDataset in a folder structure format.
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Added #125: support for sv.ClassificationDataset.split allowing to divide sv.ClassificationDataset into two parts.
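For example (a sketch mirroring DetectionDataset.split; the ratio and seed are illustrative):

>>> import supervision as sv
>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)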
- Added #110: ability to extract masks from Roboflow API results using sv.Detections.from_roboflow.
- Added commit hash: Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
- Changed #135: sv.get_video_frames_generator documentation to better describe actual behavior.
0.9.0 June 7, 2023
- Added #118: ability to select sv.Detections by index, list of indexes or slice. Here is an example illustrating the new selection methods.
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
- Added #101: ability to extract masks from YOLOv8 results using sv.Detections.from_yolov8. Here is an example illustrating how to extract boolean masks from the result of YOLOv8 model inference.
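A sketch of that workflow (the segmentation checkpoint name is illustrative and the printed output is abbreviated):

>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO('yolov8s-seg.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_yolov8(result)
>>> detections.mask  # boolean masks, one per detection
array([...])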
- Added #122: ability to crop images using sv.crop. Here is an example showing how to get a separate crop for each detection in sv.Detections.
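A sketch (assuming sv.crop accepts an image and a single xyxy box; image is assumed loaded earlier):

>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> crops = [
...     sv.crop(image=image, xyxy=xyxy)
...     for xyxy in detections.xyxy
... ]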
- Added #120: ability to conveniently save multiple images into a directory using sv.ImageSink. Here is an example showing how to save every tenth video frame as a separate image.
>>> import supervision as sv
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
... for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
... sink.save_image(image=image)
- Fixed #106: inconvenient handling of sv.PolygonZone coordinates. Now sv.PolygonZone accepts coordinates in the form of [[x1, y1], [x2, y2], ...] that can be both integers and floats.
0.8.0 May 17, 2023
- Added #100: support for dataset inheritance. The current Dataset got renamed to DetectionDataset. Now DetectionDataset inherits from BaseDataset. This change was made to enforce the future consistency of APIs of different types of computer vision datasets.
- Added #100: ability to save datasets in YOLO format using DetectionDataset.as_yolo.
>>> import roboflow
>>> from roboflow import Roboflow
>>> import supervision as sv
>>> roboflow.login()
>>> rf = Roboflow()
>>> project = rf.workspace(WORKSPACE_ID).project(PROJECT_ID)
>>> dataset = project.version(PROJECT_VERSION).download("yolov5")
>>> ds = sv.DetectionDataset.from_yolo(
... images_directory_path=f"{dataset.location}/train/images",
... annotations_directory_path=f"{dataset.location}/train/labels",
... data_yaml_path=f"{dataset.location}/data.yaml"
... )
>>> ds.classes
['dog', 'person']
- Added #102: support for DetectionDataset.split allowing to divide DetectionDataset into two parts.
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
- Changed #100: default value of the approximation_percentage parameter from 0.75 to 0.0 in DetectionDataset.as_yolo and DetectionDataset.as_pascal_voc.
0.7.0 May 11, 2023
- Added #91: Detections.from_yolo_nas to enable seamless integration with the YOLO-NAS model.
- Added #86: ability to load datasets in YOLO format using Dataset.from_yolo.
- Added #84: Detections.merge to merge multiple Detections objects together.
- Fixed #81: LineZoneAnnotator.annotate does not return annotated frame.
- Changed #44: LineZoneAnnotator.annotate to allow for custom text for the in and out tags.
0.6.0 April 19, 2023
- Added #71: initial Dataset support and ability to save Detections in Pascal VOC XML format.
- Added #71: new mask_to_polygons, filter_polygons_by_area, polygon_to_xyxy and approximate_polygon utilities.
- Added #72: ability to load Pascal VOC XML object detection datasets as Dataset.
- Changed #70: order of Detections attributes to make it consistent with the order of objects in the __iter__ tuple.
- Changed #71: generate_2d_mask to polygon_to_mask.
0.5.2 April 13, 2023
- Fixed #63: LineZone.trigger function expects 4 values instead of 5.
0.5.1 April 12, 2023
- Fixed: Detections.__getitem__ method did not return mask for selected item.
- Fixed: Detections.area crashed for mask detections.
0.5.0 April 10, 2023
- Added #58: Detections.mask to enable segmentation support.
- Added #58: MaskAnnotator to allow easy Detections.mask annotation.
- Added #58: Detections.from_sam to enable native Segment Anything Model (SAM) support.
- Changed #58: Detections.area behaviour to work not only with boxes but also with masks.
0.4.0 April 5, 2023
- Added #46: Detections.empty to allow easy creation of empty Detections objects.
- Added #56: Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results.
- Added #56: plot_images_grid to allow easy plotting of multiple images on a single plot.
- Added #56: initial support for the Pascal VOC XML format with the detections_to_voc_xml method.
- Changed #56: show_frame_in_notebook refactored and renamed to plot_image.
0.3.2 March 23, 2023
- Changed #50: allow Detections.class_id to be None.
0.3.1 March 6, 2023
- Fixed #41: PolygonZone throws an exception when the object touches the bottom edge of the image.
- Fixed #42: Detections.with_nms method throws an exception when Detections is empty.
- Changed #36: Detections.with_nms to support both class-agnostic and non-class-agnostic cases.
0.3.0 March 6, 2023
- Changed: allow Detections.confidence to be None.
- Added: Detections.from_transformers and Detections.from_detectron2 to enable seamless integration with Transformers and Detectron2 models.
- Added: Detections.area to dynamically calculate bounding box area.
- Added: Detections.with_nms to filter out double detections with NMS. Initial - only class-agnostic - implementation.
0.2.0 February 2, 2023
- Added: advanced Detections filtering with pandas-like API.
- Added: Detections.from_yolov5 and Detections.from_yolov8 to enable seamless integration with YOLOv5 and YOLOv8 models.
0.1.0 January 19, 2023
Say hello to Supervision 👋