# Object Detection TPU Inference Exporter

This package contains a SavedModel Exporter for TPU Inference of object
detection models.

## Usage

This Exporter is intended for users who have trained models with CPUs / GPUs,
but would like to use them for inference on TPU without changing their code or
re-training their models.

Users are assumed to have:

+ `PIPELINE_CONFIG`: A pipeline_pb2.TrainEvalPipelineConfig config file (a
quick parse check is sketched after this list);
+ `CHECKPOINT`: A model checkpoint trained on any device;

and need to correctly set:

+ `EXPORT_DIR`: Path to export the SavedModel to;
+ `INPUT_PLACEHOLDER`: Name of the input placeholder in the model's
signature_def_map;
+ `INPUT_TYPE`: Type of the input node, which can be one of 'image_tensor',
'encoded_image_string_tensor', or 'tf_example';
+ `USE_BFLOAT16`: Whether to use bfloat16 instead of float32 on TPU.
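
For reference, `PIPELINE_CONFIG` is a text-format protobuf. Below is a minimal
sketch of checking that such a file parses as a
pipeline_pb2.TrainEvalPipelineConfig; `pipeline.config` is a placeholder path
to be replaced with your own config file.

```
# Sanity-check that the pipeline config parses before exporting.
# 'pipeline.config' is a placeholder path; point it at your PIPELINE_CONFIG.
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with open('pipeline.config', 'r') as f:
  text_format.Merge(f.read(), pipeline_config)

# Prints the meta-architecture set in the config, e.g. 'ssd' or 'faster_rcnn'.
print(pipeline_config.model.WhichOneof('model'))
```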

The model can be exported with:

```
python object_detection/tpu_exporters/export_saved_model_tpu.py \
    --pipeline_config_file=<PIPELINE_CONFIG> \
    --ckpt_path=<CHECKPOINT> \
    --export_dir=<EXPORT_DIR> \
    --input_placeholder_name=<INPUT_PLACEHOLDER> \
    --input_type=<INPUT_TYPE> \
    --use_bfloat16=<USE_BFLOAT16>
```
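
After exporting, the SavedModel's signatures can be inspected to confirm the
input placeholder and output tensor names. The sketch below assumes a
TensorFlow 1.x environment, the default serving tag, and that `<EXPORT_DIR>`
is replaced with the real export path; it only reads the signature_def_map and
does not run the TPU graph.

```
# Inspect the exported SavedModel's signatures without executing them.
# '<EXPORT_DIR>' is a placeholder for the directory passed to --export_dir.
import tensorflow as tf

export_dir = '<EXPORT_DIR>'
with tf.Session(graph=tf.Graph()) as sess:
  meta_graph = tf.saved_model.loader.load(
      sess, [tf.saved_model.tag_constants.SERVING], export_dir)
  for name, signature in meta_graph.signature_def.items():
    print('signature:', name)
    print('  inputs: ', list(signature.inputs))
    print('  outputs:', list(signature.outputs))
```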