Improvement on Intel-PT-based fuzzing capabilities #3724
Description
I noticed that LibAFL has been integrating Intel-PT-based fuzzing capabilities via the ptcov crate. I have implemented iptr, a full-featured Intel PT decoder in pure Rust with more idiomatic low-level and bitmap-level APIs.
Evaluated on the well-known libxdc_experiments benchmark, iptr shows performance comparable to libxdc, the fastest Intel PT decoder written in C, completing each target in 5-10 seconds. In contrast, ptcov cannot run most of the targets (possibly due to the way I build up the PtImage: libxdc dumps memory at page granularity rather than image granularity, so some instructions end up split across two pages), and some targets do not finish even after 18 hours. I suspect the low performance comes from ptcov not using a TNT cache to accelerate bitmap updates.
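To make the TNT-cache point concrete, here is a minimal, purely illustrative sketch of the idea: once a (block address, TNT payload) pair has been decoded, the coverage-map indices it touches can be memoized and replayed without re-disassembling. All names and the stubbed decoding function below are hypothetical, not iptr's or libxdc's actual internals.

```rust
use std::collections::HashMap;

/// Hypothetical TNT cache: for a given (block address, TNT byte) pair,
/// the coverage-bitmap indices reached are computed once by walking the
/// disassembly, then replayed from the cache on every later occurrence.
struct TntCache {
    /// Key: (ip of the first conditional branch, raw TNT payload byte).
    /// Value: precomputed bitmap indices touched while consuming the byte.
    entries: HashMap<(u64, u8), Vec<usize>>,
}

impl TntCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Apply one TNT byte to the bitmap; returns true on a cache hit.
    fn apply(&mut self, ip: u64, tnt: u8, bitmap: &mut [u8]) -> bool {
        if let Some(indices) = self.entries.get(&(ip, tnt)) {
            for &i in indices {
                bitmap[i] = bitmap[i].wrapping_add(1);
            }
            return true; // cache hit: no disassembly needed
        }
        // Cache miss: fall back to full decoding (stubbed below) and
        // record the resulting indices for future hits.
        let indices = decode_tnt_byte(ip, tnt, bitmap.len());
        for &i in &indices {
            bitmap[i] = bitmap[i].wrapping_add(1);
        }
        self.entries.insert((ip, tnt), indices);
        false
    }
}

/// Toy stand-in for real instruction decoding; a real decoder would
/// follow the control flow one TNT bit at a time.
fn decode_tnt_byte(ip: u64, tnt: u8, map_len: usize) -> Vec<usize> {
    (0usize..8)
        .map(|bit| (ip as usize + bit * (tnt as usize + 1)) % map_len)
        .collect()
}

fn main() {
    let mut cache = TntCache::new();
    let mut bitmap = vec![0u8; 64];
    assert!(!cache.apply(0x1000, 0b1010_1010, &mut bitmap)); // first: miss
    assert!(cache.apply(0x1000, 0b1010_1010, &mut bitmap)); // second: hit
    println!("nonzero entries: {}", bitmap.iter().filter(|&&b| b != 0).count());
}
```

In hot loops the same basic blocks are taken over and over, so most TNT bytes hit the cache, which is (as far as I can tell) the main reason libxdc-style decoders stay fast.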
I also integrated iptr into LibAFL, creating a fuzzer with the same functionality as intel_pt_command_executor. Because the target program is very small, the performance improvement is not as large as in libxdc_experiments, but the fuzzer still discovers the same vulnerabilities as ptcov, demonstrating full functionality.
I wonder whether the LibAFL team would be willing to accept iptr as an alternative Intel PT decoder. I can think of four feasible approaches:
- Ship iptr as an alternative decoder in an independent sub-crate, alongside `libafl_intelpt`.
- Make iptr and ptcov two configurable backend features of `libafl_intelpt`. However, I found that the logic and APIs are highly coupled with ptcov in both `libafl_intelpt` and the Intel PT executor hook, with some ptcov-related structs exposed in public APIs. If we want to select between the two backends via features, some refactoring still needs to be done.
- Rewrite ptcov on top of iptr.
- Directly replace ptcov with iptr.
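For the feature-based option, the refactoring would roughly mean hiding the decoder behind a backend-neutral abstraction instead of exposing ptcov structs publicly. The sketch below is purely illustrative: the trait name, method signature, and dummy backend are my assumptions, not the actual LibAFL or ptcov API.

```rust
/// Hypothetical backend-neutral decoder trait; in a real crate, ptcov and
/// iptr would each implement it behind `#[cfg(feature = "...")]` gates.
pub trait PtDecoderBackend {
    /// Decode a raw Intel PT trace buffer into the coverage map,
    /// returning the number of trace bytes consumed.
    fn decode_into_map(&mut self, trace: &[u8], map: &mut [u8]) -> Result<usize, String>;
}

/// Toy backend shown unconditionally so the sketch is self-contained.
struct DummyBackend;

impl PtDecoderBackend for DummyBackend {
    fn decode_into_map(&mut self, trace: &[u8], map: &mut [u8]) -> Result<usize, String> {
        // Stand-in logic: hash each trace byte into the map.
        for (i, &b) in trace.iter().enumerate() {
            let idx = (i ^ b as usize) % map.len();
            map[idx] = map[idx].wrapping_add(1);
        }
        Ok(trace.len())
    }
}

/// Executor-side code would depend only on the trait, never on a
/// concrete ptcov or iptr type.
fn run_decoder(backend: &mut dyn PtDecoderBackend, trace: &[u8], map: &mut [u8]) -> usize {
    backend.decode_into_map(trace, map).expect("decode failed")
}

fn main() {
    let mut backend = DummyBackend;
    let mut map = vec![0u8; 32];
    let consumed = run_decoder(&mut backend, &[0x02, 0x82, 0x02], &mut map);
    assert_eq!(consumed, 3);
    println!("decoded {} bytes", consumed);
}
```

With an interface along these lines, the choice of backend becomes a Cargo feature rather than a public API commitment.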
I'm willing to assist with any of the above approaches. Which one do you prefer? :)