You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.

359 lines
15 KiB

6 years ago
  1. # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
  2. #
  3. # Licensed under the Apache License, Version 2.0 (the "License");
  4. # you may not use this file except in compliance with the License.
  5. # You may obtain a copy of the License at
  6. #
  7. # http://www.apache.org/licenses/LICENSE-2.0
  8. #
  9. # Unless required by applicable law or agreed to in writing, software
  10. # distributed under the License is distributed on an "AS IS" BASIS,
  11. # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  12. # See the License for the specific language governing permissions and
  13. # limitations under the License.
  14. # ==============================================================================
  15. """Abstract detection model.
  16. This file defines a generic base class for detection models. Programs that are
  17. designed to work with arbitrary detection models should only depend on this
  18. class. We intend for the functions in this class to follow tensor-in/tensor-out
  19. design, thus all functions have tensors or lists/dictionaries holding tensors as
  20. inputs and outputs.
  21. Abstractly, detection models predict output tensors given input images
  22. which can be passed to a loss function at training time or passed to a
  23. postprocessing function at eval time. The computation graphs at a high level
  24. consequently look as follows:
  25. Training time:
  26. inputs (images tensor) -> preprocess -> predict -> loss -> outputs (loss tensor)
  27. Evaluation time:
  28. inputs (images tensor) -> preprocess -> predict -> postprocess
  29. -> outputs (boxes tensor, scores tensor, classes tensor, num_detections tensor)
  30. DetectionModels must thus implement four functions (1) preprocess, (2) predict,
  31. (3) postprocess and (4) loss. DetectionModels should make no assumptions about
  32. the input size or aspect ratio --- they are responsible for doing any
  33. resize/reshaping necessary (see docstring for the preprocess function).
  34. Output classes are always integers in the range [0, num_classes). Any mapping
  35. of these integers to semantic labels is to be handled outside of this class.
  36. Images are resized in the `preprocess` method. All of `preprocess`, `predict`,
  37. and `postprocess` should be reentrant.
  38. The `preprocess` method runs `image_resizer_fn` that returns resized_images and
  39. `true_image_shapes`. Since `image_resizer_fn` can pad the images with zeros,
  40. true_image_shapes indicate the slices that contain the image without padding.
  41. This is useful for padding images to be a fixed size for batching.
  42. The `postprocess` method uses the true image shapes to clip predictions that lie
  43. outside of images.
  44. By default, DetectionModels produce bounding box detections; However, we support
  45. a handful of auxiliary annotations associated with each bounding box, namely,
  46. instance masks and keypoints.
  47. """
  48. import abc
  49. from object_detection.core import standard_fields as fields
  50. class DetectionModel(object):
  51. """Abstract base class for detection models."""
  52. __metaclass__ = abc.ABCMeta
  53. def __init__(self, num_classes):
  54. """Constructor.
  55. Args:
  56. num_classes: number of classes. Note that num_classes *does not* include
  57. background categories that might be implicitly predicted in various
  58. implementations.
  59. """
  60. self._num_classes = num_classes
  61. self._groundtruth_lists = {}
  62. @property
  63. def num_classes(self):
  64. return self._num_classes
  65. def groundtruth_lists(self, field):
  66. """Access list of groundtruth tensors.
  67. Args:
  68. field: a string key, options are
  69. fields.BoxListFields.{boxes,classes,masks,keypoints} or
  70. fields.InputDataFields.is_annotated.
  71. Returns:
  72. a list of tensors holding groundtruth information (see also
  73. provide_groundtruth function below), with one entry for each image in the
  74. batch.
  75. Raises:
  76. RuntimeError: if the field has not been provided via provide_groundtruth.
  77. """
  78. if field not in self._groundtruth_lists:
  79. raise RuntimeError('Groundtruth tensor {} has not been provided'.format(
  80. field))
  81. return self._groundtruth_lists[field]
  82. def groundtruth_has_field(self, field):
  83. """Determines whether the groundtruth includes the given field.
  84. Args:
  85. field: a string key, options are
  86. fields.BoxListFields.{boxes,classes,masks,keypoints} or
  87. fields.InputDataFields.is_annotated.
  88. Returns:
  89. True if the groundtruth includes the given field, False otherwise.
  90. """
  91. return field in self._groundtruth_lists
  92. @abc.abstractmethod
  93. def preprocess(self, inputs):
  94. """Input preprocessing.
  95. To be overridden by implementations.
  96. This function is responsible for any scaling/shifting of input values that
  97. is necessary prior to running the detector on an input image.
  98. It is also responsible for any resizing, padding that might be necessary
  99. as images are assumed to arrive in arbitrary sizes. While this function
  100. could conceivably be part of the predict method (below), it is often
  101. convenient to keep these separate --- for example, we may want to preprocess
  102. on one device, place onto a queue, and let another device (e.g., the GPU)
  103. handle prediction.
  104. A few important notes about the preprocess function:
  105. + We assume that this operation does not have any trainable variables nor
  106. does it affect the groundtruth annotations in any way (thus data
  107. augmentation operations such as random cropping should be performed
  108. externally).
  109. + There is no assumption that the batchsize in this function is the same as
  110. the batch size in the predict function. In fact, we recommend calling the
  111. preprocess function prior to calling any batching operations (which should
  112. happen outside of the model) and thus assuming that batch sizes are equal
  113. to 1 in the preprocess function.
  114. + There is also no explicit assumption that the output resolutions
  115. must be fixed across inputs --- this is to support "fully convolutional"
  116. settings in which input images can have different shapes/resolutions.
  117. Args:
  118. inputs: a [batch, height_in, width_in, channels] float32 tensor
  119. representing a batch of images with values between 0 and 255.0.
  120. Returns:
  121. preprocessed_inputs: a [batch, height_out, width_out, channels] float32
  122. tensor representing a batch of images.
  123. true_image_shapes: int32 tensor of shape [batch, 3] where each row is
  124. of the form [height, width, channels] indicating the shapes
  125. of true images in the resized images, as resized images can be padded
  126. with zeros.
  127. """
  128. pass
  129. @abc.abstractmethod
  130. def predict(self, preprocessed_inputs, true_image_shapes):
  131. """Predict prediction tensors from inputs tensor.
  132. Outputs of this function can be passed to loss or postprocess functions.
  133. Args:
  134. preprocessed_inputs: a [batch, height, width, channels] float32 tensor
  135. representing a batch of images.
  136. true_image_shapes: int32 tensor of shape [batch, 3] where each row is
  137. of the form [height, width, channels] indicating the shapes
  138. of true images in the resized images, as resized images can be padded
  139. with zeros.
  140. Returns:
  141. prediction_dict: a dictionary holding prediction tensors to be
  142. passed to the Loss or Postprocess functions.
  143. """
  144. pass
  145. @abc.abstractmethod
  146. def postprocess(self, prediction_dict, true_image_shapes, **params):
  147. """Convert predicted output tensors to final detections.
  148. This stage typically performs a few things such as
  149. * Non-Max Suppression to remove overlapping detection boxes.
  150. * Score conversion and background class removal.
  151. Outputs adhere to the following conventions:
  152. * Classes are integers in [0, num_classes); background classes are removed
  153. and the first non-background class is mapped to 0. If the model produces
  154. class-agnostic detections, then no output is produced for classes.
  155. * Boxes are to be interpreted as being in [y_min, x_min, y_max, x_max]
  156. format and normalized relative to the image window.
  157. * `num_detections` is provided for settings where detections are padded to a
  158. fixed number of boxes.
  159. * We do not specifically assume any kind of probabilistic interpretation
  160. of the scores --- the only important thing is their relative ordering.
  161. Thus implementations of the postprocess function are free to output
  162. logits, probabilities, calibrated probabilities, or anything else.
  163. Args:
  164. prediction_dict: a dictionary holding prediction tensors.
  165. true_image_shapes: int32 tensor of shape [batch, 3] where each row is
  166. of the form [height, width, channels] indicating the shapes
  167. of true images in the resized images, as resized images can be padded
  168. with zeros.
  169. **params: Additional keyword arguments for specific implementations of
  170. DetectionModel.
  171. Returns:
  172. detections: a dictionary containing the following fields
  173. detection_boxes: [batch, max_detections, 4]
  174. detection_scores: [batch, max_detections]
  175. detection_classes: [batch, max_detections]
  176. (If a model is producing class-agnostic detections, this field may be
  177. missing)
  178. instance_masks: [batch, max_detections, image_height, image_width]
  179. (optional)
  180. keypoints: [batch, max_detections, num_keypoints, 2] (optional)
  181. num_detections: [batch]
  182. In addition to the above fields this stage also outputs the following
  183. raw tensors:
  184. raw_detection_boxes: [batch, total_detections, 4] tensor containing
  185. all detection boxes from `prediction_dict` in the format
  186. [ymin, xmin, ymax, xmax] and normalized co-ordinates.
  187. raw_detection_scores: [batch, total_detections,
  188. num_classes_with_background] tensor of class score logits for
  189. raw detection boxes.
  190. """
  191. pass
  192. @abc.abstractmethod
  193. def loss(self, prediction_dict, true_image_shapes):
  194. """Compute scalar loss tensors with respect to provided groundtruth.
  195. Calling this function requires that groundtruth tensors have been
  196. provided via the provide_groundtruth function.
  197. Args:
  198. prediction_dict: a dictionary holding predicted tensors
  199. true_image_shapes: int32 tensor of shape [batch, 3] where each row is
  200. of the form [height, width, channels] indicating the shapes
  201. of true images in the resized images, as resized images can be padded
  202. with zeros.
  203. Returns:
  204. a dictionary mapping strings (loss names) to scalar tensors representing
  205. loss values.
  206. """
  207. pass
  208. def provide_groundtruth(self,
  209. groundtruth_boxes_list,
  210. groundtruth_classes_list,
  211. groundtruth_masks_list=None,
  212. groundtruth_keypoints_list=None,
  213. groundtruth_weights_list=None,
  214. groundtruth_confidences_list=None,
  215. groundtruth_is_crowd_list=None,
  216. is_annotated_list=None):
  217. """Provide groundtruth tensors.
  218. Args:
  219. groundtruth_boxes_list: a list of 2-D tf.float32 tensors of shape
  220. [num_boxes, 4] containing coordinates of the groundtruth boxes.
  221. Groundtruth boxes are provided in [y_min, x_min, y_max, x_max]
  222. format and assumed to be normalized and clipped
  223. relative to the image window with y_min <= y_max and x_min <= x_max.
  224. groundtruth_classes_list: a list of 2-D tf.float32 one-hot (or k-hot)
  225. tensors of shape [num_boxes, num_classes] containing the class targets
  226. with the 0th index assumed to map to the first non-background class.
  227. groundtruth_masks_list: a list of 3-D tf.float32 tensors of
  228. shape [num_boxes, height_in, width_in] containing instance
  229. masks with values in {0, 1}. If None, no masks are provided.
  230. Mask resolution `height_in`x`width_in` must agree with the resolution
  231. of the input image tensor provided to the `preprocess` function.
  232. groundtruth_keypoints_list: a list of 3-D tf.float32 tensors of
  233. shape [num_boxes, num_keypoints, 2] containing keypoints.
  234. Keypoints are assumed to be provided in normalized coordinates and
  235. missing keypoints should be encoded as NaN.
  236. groundtruth_weights_list: A list of 1-D tf.float32 tensors of shape
  237. [num_boxes] containing weights for groundtruth boxes.
  238. groundtruth_confidences_list: A list of 2-D tf.float32 tensors of shape
  239. [num_boxes, num_classes] containing class confidences for groundtruth
  240. boxes.
  241. groundtruth_is_crowd_list: A list of 1-D tf.bool tensors of shape
  242. [num_boxes] containing is_crowd annotations
  243. is_annotated_list: A list of scalar tf.bool tensors indicating whether
  244. images have been labeled or not.
  245. """
  246. self._groundtruth_lists[fields.BoxListFields.boxes] = groundtruth_boxes_list
  247. self._groundtruth_lists[
  248. fields.BoxListFields.classes] = groundtruth_classes_list
  249. if groundtruth_weights_list:
  250. self._groundtruth_lists[fields.BoxListFields.
  251. weights] = groundtruth_weights_list
  252. if groundtruth_confidences_list:
  253. self._groundtruth_lists[fields.BoxListFields.
  254. confidences] = groundtruth_confidences_list
  255. if groundtruth_masks_list:
  256. self._groundtruth_lists[
  257. fields.BoxListFields.masks] = groundtruth_masks_list
  258. if groundtruth_keypoints_list:
  259. self._groundtruth_lists[
  260. fields.BoxListFields.keypoints] = groundtruth_keypoints_list
  261. if groundtruth_is_crowd_list:
  262. self._groundtruth_lists[
  263. fields.BoxListFields.is_crowd] = groundtruth_is_crowd_list
  264. if is_annotated_list:
  265. self._groundtruth_lists[
  266. fields.InputDataFields.is_annotated] = is_annotated_list
  267. @abc.abstractmethod
  268. def regularization_losses(self):
  269. """Returns a list of regularization losses for this model.
  270. Returns a list of regularization losses for this model that the estimator
  271. needs to use during training/optimization.
  272. Returns:
  273. A list of regularization loss tensors.
  274. """
  275. pass
  276. @abc.abstractmethod
  277. def restore_map(self, fine_tune_checkpoint_type='detection'):
  278. """Returns a map of variables to load from a foreign checkpoint.
  279. Returns a map of variable names to load from a checkpoint to variables in
  280. the model graph. This enables the model to initialize based on weights from
  281. another task. For example, the feature extractor variables from a
  282. classification model can be used to bootstrap training of an object
  283. detector. When loading from an object detection model, the checkpoint model
  284. should have the same parameters as this detection model with exception of
  285. the num_classes parameter.
  286. Args:
  287. fine_tune_checkpoint_type: whether to restore from a full detection
  288. checkpoint (with compatible variable names) or to restore from a
  289. classification checkpoint for initialization prior to training.
  290. Valid values: `detection`, `classification`. Default 'detection'.
  291. Returns:
  292. A dict mapping variable names (to load from a checkpoint) to variables in
  293. the model graph.
  294. """
  295. pass
  296. @abc.abstractmethod
  297. def updates(self):
  298. """Returns a list of update operators for this model.
  299. Returns a list of update operators for this model that must be executed at
  300. each training step. The estimator's train op needs to have a control
  301. dependency on these updates.
  302. Returns:
  303. A list of update operators.
  304. """
  305. pass