supervision-0.26.0 #1892
SkalskiP announced in Announcements
Warning
`supervision-0.26.0` drops Python 3.8 support and upgrades all code to Python 3.9 syntax.

Tip

Our docs page now has a fresh look that is consistent with the documentation of all Roboflow open-source projects. (#1858)
🚀 Added
- Added support for creating `sv.KeyPoints` objects from ViTPose and ViTPose++ inference results via `sv.KeyPoints.from_transformers`. (#1788)

  vitpose-plus-large.mp4
- Added support for the IOS (Intersection over Smallest) overlap metric, which measures how much of the smaller object is covered by the larger one, in `sv.Detections.with_nms`, `sv.Detections.with_nmm`, `sv.box_iou_batch`, and `sv.mask_iou_batch`. (#1774)
- Added `sv.box_iou`, which efficiently computes the Intersection over Union (IoU) between two individual bounding boxes (see the sketch after this list). (#1874)
- Added support for limiting the number of processed frames and showing a progress bar in `sv.process_video`. (#1816)
- Added the `sv.xyxy_to_xcycarh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to the measurement-space format `(center x, center y, aspect ratio, height)`, where the aspect ratio is `width / height`. (#1823)
- Added the `sv.xyxy_to_xywh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to `(x, y, width, height)` format. (#1788)
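To make the new box utilities concrete, here is a minimal sketch. It assumes that `sv.box_iou` accepts two single boxes in `(x_min, y_min, x_max, y_max)` format and that the conversion helpers accept `(N, 4)` NumPy arrays; check the API reference for the exact signatures.

```python
# Minimal sketch of the new box utilities in supervision-0.26.0.
# Assumption: sv.box_iou takes two single xyxy boxes, and the conversion
# helpers take (N, 4) arrays of xyxy boxes.
import numpy as np
import supervision as sv

box_a = np.array([10.0, 10.0, 50.0, 50.0])
box_b = np.array([30.0, 30.0, 70.0, 70.0])

# IoU between two individual boxes (new in 0.26.0).
iou = sv.box_iou(box_a, box_b)

# Convert a batch of (x_min, y_min, x_max, y_max) boxes to the new formats.
xyxy = np.array([[10.0, 10.0, 50.0, 90.0]])
xywh = sv.xyxy_to_xywh(xyxy)        # (x, y, width, height)
xcycarh = sv.xyxy_to_xcycarh(xyxy)  # (center x, center y, width / height, height)

print(iou, xywh, xcycarh)
```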
🌱 Changed

- `sv.LabelAnnotator` now supports the `smart_position` parameter to automatically keep labels within frame boundaries, and the `max_line_length` parameter to control text wrapping for long or multi-line labels (see the sketch after this list). (#1820)

  supervision-0.26.0.mp4

- `sv.LabelAnnotator` now supports non-string labels. (#1825)
- `sv.Detections.from_vlm` now supports parsing bounding boxes and segmentation masks from responses generated by Google Gemini models. You can test Gemini prompting, result parsing, and visualization with Supervision using this example notebook. (#1792)
- `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Moondream. (#1878)
- `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Qwen-2.5 VL. You can test Qwen2.5-VL prompting, result parsing, and visualization with Supervision using this example notebook. (#1709)
- Improved the performance of `sv.HeatMapAnnotator`, achieving approximately 28x faster performance on 1920x1080 frames. (#1786)
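Below is a minimal sketch of the new `sv.LabelAnnotator` options. The model and image wiring is illustrative, and the unit of `max_line_length` (assumed here to be a character count) should be checked against the docs.

```python
# Sketch: keep labels inside the frame and wrap long label text.
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread("image.jpg")
model = YOLO("yolov8n.pt")
detections = sv.Detections.from_ultralytics(model(image)[0])

label_annotator = sv.LabelAnnotator(
    smart_position=True,  # nudge labels so they stay within frame boundaries
    max_line_length=30,   # wrap long labels (assumed to be a character count)
)
annotated = label_annotator.annotate(scene=image.copy(), detections=detections)
```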
🔧 Fixed

- Supervision's `sv.MeanAveragePrecision` is now fully aligned with pycocotools, the official COCO evaluation tool, ensuring accurate and standardized metrics (see the sketch below). (#1834)

Tip
The updated mAP implementation enabled us to build an updated version of the Computer Vision Model Leaderboard.
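As a quick orientation, here is a toy single-image mAP computation. It assumes the `update()`/`compute()` interface of `supervision.metrics.MeanAveragePrecision` and the `map50_95` field on its result object; consult the metrics docs for the exact names.

```python
# Toy single-image mAP computation. Assumption: the update()/compute()
# interface of supervision.metrics.MeanAveragePrecision and the map50_95
# field on its result object.
import numpy as np
import supervision as sv
from supervision.metrics import MeanAveragePrecision

# One ground-truth box and one slightly shifted prediction for class 0.
targets = sv.Detections(
    xyxy=np.array([[10.0, 10.0, 50.0, 50.0]]),
    class_id=np.array([0]),
)
predictions = sv.Detections(
    xyxy=np.array([[12.0, 11.0, 49.0, 52.0]]),
    confidence=np.array([0.9]),
    class_id=np.array([0]),
)

map_metric = MeanAveragePrecision()
map_metric.update(predictions, targets)  # call once per image
result = map_metric.compute()
print(result.map50_95)
```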
- Fixed handling of `sv.Detections.data` when filtering detections.

🗑️ Deprecated

- The `sv.LMM` enum is deprecated and will be removed in `supervision-0.31.0`. Use `sv.VLM` instead (see the migration sketch below).
- The `sv.Detections.from_lmm` property is deprecated and will be removed in `supervision-0.31.0`. Use `sv.Detections.from_vlm` instead.
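A migration sketch for the two deprecations above, assuming `sv.VLM` mirrors the members of the old `sv.LMM` enum; the `PALIGEMMA` member and `resolution_wh` keyword are carried over from existing `from_lmm` usage and not verified against the 0.26.0 API.

```python
import supervision as sv

paligemma_response = "..."  # placeholder raw model output

# Before (deprecated, scheduled for removal in supervision-0.31.0):
# detections = sv.Detections.from_lmm(
#     sv.LMM.PALIGEMMA, paligemma_response, resolution_wh=(1000, 1000)
# )

# After: same call through the renamed enum and classmethod.
detections = sv.Detections.from_vlm(
    sv.VLM.PALIGEMMA,            # assumed to mirror sv.LMM.PALIGEMMA
    paligemma_response,
    resolution_wh=(1000, 1000),  # original image size in pixels
)
```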
❌ Removed

- The `sv.DetectionDataset.images` property has been removed in `supervision-0.26.0`. Please loop over images with `for path, image, annotation in dataset:`, as that does not require loading all images into memory (see the sketch below).
- Constructing `sv.DetectionDataset` with the `images` parameter as `Dict[str, np.ndarray]` was deprecated and has been removed in `supervision-0.26.0`. Please pass a list of paths `List[str]` instead.
- `sv.BoundingBoxAnnotator` was deprecated and has been removed in `supervision-0.26.0`. It has been renamed to `sv.BoxAnnotator`.
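A sketch of the replacement patterns for the removals above; the dataset paths are placeholders, and `from_yolo` is just one of the existing loaders.

```python
import supervision as sv

# Placeholder paths -- point these at a real YOLO-format dataset.
dataset = sv.DetectionDataset.from_yolo(
    images_directory_path="dataset/images",
    annotations_directory_path="dataset/labels",
    data_yaml_path="dataset/data.yaml",
)

# Instead of the removed dataset.images dict, iterate lazily so only one
# image is loaded into memory at a time.
for path, image, annotations in dataset:
    ...

# sv.BoundingBoxAnnotator is gone; the same annotator now lives under the new name.
box_annotator = sv.BoxAnnotator()
```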
🏆 Contributors

@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski), @SunHao-AI (Hao Sun), @rafaelpadilla (Rafael Padilla), @Ashp116 (Ashp116), @capjamesg (James Gallagher), @blakeburch (Blake Burch), @hidara2000 (hidara2000), @Armaggheddon (Alessandro Brunello), @soumik12345 (Soumik Rakshit).
This discussion was created from the release supervision-0.26.0.