vllm.transformers_utils.processors.isaac ¶
IsaacImageProcessor ¶
Source code in vllm/transformers_utils/processors/isaac.py
preprocess ¶
preprocess(
images: list[Tensor],
return_tensors: str | TensorType | None,
**kwargs: Unpack[IsaacImageProcessorKwargs],
) -> BatchFeature
Preprocess images into format compatible with vLLM input processing.
Source code in vllm/transformers_utils/processors/isaac.py
_make_writeable ¶
Return `arr` itself if it is already writeable; otherwise try to flip the write flag in place, and finally fall back to `arr.copy()`. This guarantees the buffer handed to `torch.from_numpy()` is always writeable, silencing the PyTorch warning about undefined behaviour.
Source code in vllm/transformers_utils/processors/isaac.py
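The behaviour described above can be sketched in a few lines. This is a hypothetical re-implementation for illustration, not the actual vLLM source; the function name and NumPy-only scope are assumptions:

```python
import numpy as np

def make_writeable(arr: np.ndarray) -> np.ndarray:
    """Sketch of the _make_writeable behaviour described above
    (hypothetical re-implementation, not the vLLM source)."""
    if arr.flags.writeable:
        return arr  # already safe to hand to torch.from_numpy()
    try:
        # Try to flip the write flag in place; this works when the
        # array owns its buffer and the buffer is not truly read-only.
        arr.setflags(write=True)
        return arr
    except ValueError:
        # Buffer cannot be made writeable (e.g. backed by immutable
        # bytes or a read-only mmap); fall back to a copy.
        return arr.copy()
```

The copy is the last resort because it doubles memory for that buffer; in the common case the flag flip succeeds and no data moves.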
get_image_size_for_max_num_patches ¶
get_image_size_for_max_num_patches(
image_height: int,
image_width: int,
patch_size: int,
max_num_patches: int,
min_num_patches: int | None = None,
eps: float = 1e-05,
pixel_shuffle_scale: int = 1,
) -> tuple[int, int]
Compute a target resolution whose patch grid satisfies the given patching constraints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image_height` | `int` | Height in pixels of the source image prior to any resizing. | required |
| `image_width` | `int` | Width in pixels of the source image prior to any resizing. | required |
| `patch_size` | `int` | Size of the square patch used by the vision encoder. | required |
| `max_num_patches` | `int` | Upper bound on the number of patches. | required |
| `min_num_patches` | `int`, *optional* | Lower bound on the number of patches. When provided, the image will be scaled up if necessary. | `None` |
| `eps` | `float`, *optional*, defaults to `1e-5` | Convergence tolerance for the internal binary search that determines the target dimensions. | `1e-05` |
| `pixel_shuffle_scale` | `int`, *optional*, defaults to `1` | Additional stride multiplier applied when pixel shuffle later reduces spatial resolution. | `1` |
Returns:

| Type | Description |
|---|---|
| `tuple[int, int]` | Target `(height, width)` in pixels whose patch grid satisfies the maximum and optional minimum patch-count constraints. |
Source code in vllm/transformers_utils/processors/isaac.py
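As a rough illustration of the constraint this function enforces, the sketch below scales an image so its patch grid fits under `max_num_patches`, rounding each dimension down to a multiple of `patch_size * pixel_shuffle_scale`. This is a simplified proportional-scaling stand-in for the binary search mentioned above, and every name in it is hypothetical:

```python
import math

def target_image_size(
    height: int,
    width: int,
    patch_size: int,
    max_num_patches: int,
    pixel_shuffle_scale: int = 1,
) -> tuple[int, int]:
    """Illustrative (hypothetical) version: shrink the image so that
    (height / patch_size) * (width / patch_size) <= max_num_patches,
    with both dimensions rounded to multiples of the patch stride."""
    stride = patch_size * pixel_shuffle_scale
    # Uniform scale factor that brings the patch count under the cap.
    scale = min(1.0, math.sqrt(max_num_patches * patch_size**2 / (height * width)))
    # Round down to the stride so the patch grid tiles the image exactly.
    new_h = max(stride, math.floor(height * scale / stride) * stride)
    new_w = max(stride, math.floor(width * scale / stride) * stride)
    return new_h, new_w
```

Rounding down after scaling guarantees the resulting grid never exceeds `max_num_patches`; the real implementation additionally handles the `min_num_patches` lower bound and the `eps` search tolerance.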
patchify_vision ¶
patchify_vision(
image: Tensor,
patch_size: int,
) -> Tensor
Convert normalized images into flattened ViT-style patches.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `torch.Tensor` | Normalized image tensor to convert into patches. | required |
| `patch_size` | `int` | Edge length of the square patches. | required |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Flattened ViT-style patches. |
Raises:

| Type | Description |
|---|---|
| `ValueError` | If the image height or width is not divisible by `patch_size`. |
Source code in vllm/transformers_utils/processors/isaac.py
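A minimal NumPy sketch of ViT-style patch extraction. The real function operates on `torch.Tensor`; the `(batch, channels, height, width)` layout and the helper name here are assumptions:

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int) -> np.ndarray:
    """Split (batch, channels, height, width) images into flattened
    non-overlapping square patches (hypothetical NumPy sketch)."""
    b, c, h, w = image.shape
    if h % patch_size or w % patch_size:
        raise ValueError("height and width must be divisible by patch_size")
    gh, gw = h // patch_size, w // patch_size
    # (b, c, gh, p, gw, p) -> (b, gh, gw, c, p, p), then flatten each patch.
    x = image.reshape(b, c, gh, patch_size, gw, patch_size)
    x = x.transpose(0, 2, 4, 1, 3, 5)
    return x.reshape(b, gh * gw, c * patch_size**2)
```

Each output row holds one patch with its channel, row, and column values contiguous, which is the usual layout fed to a ViT's linear patch-embedding projection.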
prepare_image_tensor ¶
prepare_image_tensor(
image: Tensor,
scale: float = VISION_SCALE,
) -> Tensor
Standardize RGB images prior to patch extraction via rescaling and whitening.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `image` | `torch.Tensor` | RGB image tensor to standardize. | required |
| `scale` | `float`, *optional*, defaults to `VISION_SCALE` | Scalar multiplier applied before normalization. | `VISION_SCALE` |
Returns: torch.Tensor: Normalized tensor with the same shape as the input and dtype torch.float32.
Source code in vllm/transformers_utils/processors/isaac.py
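A hedged sketch of the rescale-then-whiten standardization described above. The constants below are placeholders chosen for illustration; the real `VISION_SCALE` and any normalization statistics are defined in `vllm/transformers_utils/processors/isaac.py`:

```python
import numpy as np

# Hypothetical constants for illustration only.
VISION_SCALE = 1.0 / 255.0
MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.5, 0.5, 0.5], dtype=np.float32)

def prepare_image(image: np.ndarray, scale: float = VISION_SCALE) -> np.ndarray:
    """Rescale then whiten an RGB image of shape (..., 3, H, W),
    returning float32 with the same shape (hypothetical sketch)."""
    x = image.astype(np.float32) * scale   # e.g. uint8 [0, 255] -> [0, 1]
    # Per-channel whitening, broadcast over the spatial dimensions.
    return (x - MEAN[:, None, None]) / STD[:, None, None]
```

The shape is preserved and the dtype is always `torch.float32` in the documented function; the sketch mirrors that with NumPy's `float32`.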
process_vision_for_patches ¶
process_vision_for_patches(
images: Tensor,
patch_size: int,
max_num_patches: int,
min_num_patches: int | None = None,
pixel_shuffle_scale: int = 1,
) -> tuple[Tensor, list[int]]
Resize, normalize, and patchify RGB images for the vision encoder.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `images` | `torch.Tensor` | Either a single RGB image or a batch of RGB images. | required |
| `patch_size` | `int` | Edge length of square patches; implicitly controls resize grid granularity. | required |
| `max_num_patches` | `int` | Maximum number of patches allowed after resizing. | required |
| `min_num_patches` | `int`, *optional* | Minimum number of patches. If provided, the routine upsamples images as needed to satisfy the lower bound. | `None` |
| `pixel_shuffle_scale` | `int`, *optional*, defaults to `1` | Pixel shuffle scale factor; influences the target grid that the function produces. | `1` |
Returns:

| Type | Description |
|---|---|
| `tuple[Tensor, list[int]]` | Flattened patches whose trailing dimension is `channels * patch_size**2`, together with a list of grid dimensions (height and width divided by `patch_size`), where the dimensions reflect the effective grid after pixel shuffling. |
Source code in vllm/transformers_utils/processors/isaac.py
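Putting the pieces together, the sketch below shows the resize-then-patchify bookkeeping and how the reported grid dimensions shrink by `pixel_shuffle_scale`. All names, the nearest-neighbour resize, and the proportional scaling rule are assumptions standing in for the actual implementation:

```python
import numpy as np

def process_for_patches(
    images: np.ndarray,
    patch_size: int,
    max_num_patches: int,
    pixel_shuffle_scale: int = 1,
) -> tuple[np.ndarray, list[int]]:
    """Hypothetical end-to-end sketch: resize to a patch-aligned target,
    patchify, and report the effective grid after pixel shuffling."""
    if images.ndim == 3:            # single image -> add a batch dimension
        images = images[None]
    b, c, h, w = images.shape
    stride = patch_size * pixel_shuffle_scale
    # Proportional scaling so the patch grid fits under max_num_patches.
    scale = min(1.0, (max_num_patches * patch_size**2 / (h * w)) ** 0.5)
    nh = max(stride, int(h * scale) // stride * stride)
    nw = max(stride, int(w * scale) // stride * stride)
    # Naive nearest-neighbour resize via index selection.
    ys = np.arange(nh) * h // nh
    xs = np.arange(nw) * w // nw
    resized = images[:, :, ys][:, :, :, xs].astype(np.float32)
    gh, gw = nh // patch_size, nw // patch_size
    patches = (
        resized.reshape(b, c, gh, patch_size, gw, patch_size)
        .transpose(0, 2, 4, 1, 3, 5)
        .reshape(b, gh * gw, c * patch_size**2)
    )
    # Effective grid after pixel shuffling merges stride x stride patches.
    dims = [gh // pixel_shuffle_scale, gw // pixel_shuffle_scale]
    return patches, dims
```

The real function also applies the normalization step and the `min_num_patches` lower bound; the sketch only traces the shape transformations that produce the documented return values.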