I am running Isaac ROS FoundationPose on a Jetson Orin, but I am only achieving a pose estimation frequency of approximately 1.5 Hz. According to the official NVIDIA documentation, the performance is rated at 4.6 QPS.
I would like to confirm whether the reported 4.6 QPS figure refers strictly to the forward inference latency of the refine_model and score_model. Does this benchmark exclude the computational overhead of the rest of the GXF pipeline, specifically the pre-processing (pose sampling and Nvdiffrast rendering) and post-processing (decoding) stages?
The following are the execution latencies for the most time-consuming stages in the pipeline:
Pose Sampling and Rendering: 420 ms
Refine Model Inference and Sync: 240 ms
Score Model Inference and Sync: 200 ms
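For reference, summing these measured stage latencies gives a rough upper bound on end-to-end throughput. This is a back-of-the-envelope sketch assuming the stages execute sequentially with no pipelining or overlap, which seems consistent with the ~1.5 Hz I observe:

```python
# Measured per-stage latencies in milliseconds (assumed sequential execution).
stage_latencies_ms = {
    "pose_sampling_and_rendering": 420,
    "refine_model_inference_and_sync": 240,
    "score_model_inference_and_sync": 200,
}

# Total latency per query and the throughput it implies.
total_ms = sum(stage_latencies_ms.values())   # 860 ms per pose estimate
implied_hz = 1000.0 / total_ms                # ~1.16 Hz

print(f"Total measured pipeline latency: {total_ms} ms")
print(f"Implied max throughput: {implied_hz:.2f} Hz")
```

If the 4.6 QPS benchmark counts only the refine/score inference time (240 ms + 200 ms ≈ 440 ms, i.e. ~2.3 Hz per query, or higher with batching), that would explain most of the gap between the published figure and my measured rate.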
