Hard Thresholding Pursuit (HTP) has attracted increasing attention for its robust theoretical guarantees and impressive numerical performance in non-convex optimization. In this paper, we introduce a novel tuning-free procedure, named Full-Adaptive HTP (FAHTP), that simultaneously adapts to both the unknown sparsity and the signal strength of the underlying model. We provide an in-depth analysis of the iterative thresholding dynamics of FAHTP, offering refined theoretical insights. Specifically, under the beta-min condition $\min_{i \in S^*} |\beta^*_i| \ge C\sigma\sqrt{\log p / n}$, we show that FAHTP achieves the oracle estimation rate $\sigma\sqrt{s^*/n}$, highlighting its theoretical superiority over convex competitors such as the LASSO and SLOPE, and recovers the true support set exactly. More importantly, even without the beta-min condition, our method achieves a tighter error bound than the classical minimax rate with high probability. Comprehensive numerical experiments substantiate our theoretical findings, underscoring the effectiveness and robustness of the proposed FAHTP.
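For readers unfamiliar with the base procedure, the following is a minimal sketch of *classical* HTP with a known sparsity level $s$ (alternating a gradient step, hard thresholding to the top-$s$ entries, and a debiasing least-squares fit on the selected support). It does not reproduce FAHTP's tuning-free adaptation to unknown sparsity and signal strength; the step size `eta`, the iteration cap, and the stopping rule are illustrative assumptions.

```python
# Sketch of classical Hard Thresholding Pursuit (HTP) with known sparsity s.
# FAHTP's adaptive selection of s and the threshold is NOT implemented here;
# eta and max_iter are assumed hyperparameters for illustration only.
import numpy as np

def htp(X, y, s, eta=1.0, max_iter=50):
    """Estimate an s-sparse beta from y = X @ beta + noise."""
    n, p = X.shape
    beta = np.zeros(p)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Gradient step on the least-squares loss (1/2n)*||y - X beta||^2.
        u = beta + eta * X.T @ (y - X @ beta) / n
        # Hard thresholding: keep the s largest entries in magnitude.
        new_support = np.sort(np.argsort(np.abs(u))[-s:])
        if np.array_equal(new_support, support):
            break  # support has stabilized
        support = new_support
        # Debiasing: exact least squares restricted to the selected support.
        beta = np.zeros(p)
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta, support

# Illustrative run on a well-conditioned Gaussian design.
rng = np.random.default_rng(0)
n, p, s = 300, 500, 5
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:s] = 1.0  # strong signals satisfying a beta-min-type condition
y = X @ beta_star + 0.1 * rng.standard_normal(n)
beta_hat, supp = htp(X, y, s)
```

Under a beta-min condition like the one in the abstract, the thresholding step isolates the true support in a few iterations, and the restricted least-squares fit then attains the oracle-type error $\sigma\sqrt{s^*/n}$ on that support.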