Peg-in-hole (PiH) assembly is a fundamental yet challenging robotic manipulation task. While reinforcement learning (RL) has shown promise in tackling such tasks, it requires extensive exploration and carefully designed reward functions. In this paper, we propose a novel PiH skill-learning framework that leverages the inverse task of PiH, i.e., peg-out-hole (PoH) disassembly, to facilitate PiH learning. PoH is inherently easier than PiH, as it requires only overcoming existing friction rather than achieving precise alignment, which makes data collection far more efficient. Building on this insight, we first collect a large dataset of PoH trajectories in simulation and invert them to generate training data for PiH. To bridge the Sim-to-Real gap, the learned policy is fine-tuned with tactile measurements that compensate for peg-hole misalignment in real-world scenarios. Compared to direct RL approaches that train PiH policies from scratch, our method achieves a twofold improvement in both learning speed and Sim-to-Sim success rate. Extensive Sim-to-Real experiments across single-arm and dual-arm robot configurations, as well as diverse peg and hole geometries, validate the effectiveness of our framework, achieving an average success rate of 90.4% across all tested objects.
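To make the trajectory-inversion step concrete, the sketch below illustrates one plausible way to turn a recorded PoH rollout into PiH training data. It is a minimal illustration under stated assumptions, not the authors' implementation: it assumes each action is a delta-pose (velocity-like) command, so that time reversal amounts to reversing the visited states and negating each action; the function name `invert_trajectory` and the toy data are hypothetical.

```python
import numpy as np

def invert_trajectory(states: np.ndarray, actions: np.ndarray):
    """Turn a peg-out-hole (PoH) trajectory into peg-in-hole (PiH) data.

    Assumes `states` has shape (T+1, state_dim) and `actions` has shape
    (T, action_dim), where each action is a delta-pose command, so time
    reversal simply negates it. (Hypothetical sketch, not the paper's code.)
    """
    # Reverse the visited states: the disassembly end pose (peg fully out)
    # becomes the assembly start pose, and vice versa.
    pih_states = states[::-1].copy()
    # Reverse the action sequence and flip each action's sign so the
    # replayed motion retraces the extraction path back into the hole.
    pih_actions = -actions[::-1].copy()
    return pih_states, pih_actions

# Toy example: a 1-D extraction along the hole axis.
poh_states = np.linspace(0.0, 0.05, 6).reshape(-1, 1)  # peg retracts 5 cm
poh_actions = np.full((5, 1), 0.01)                    # +1 cm per step
pih_states, pih_actions = invert_trajectory(poh_states, poh_actions)
print(pih_states.ravel())   # [0.05 ... 0.0]: approach, then insert
print(pih_actions.ravel())  # [-0.01 ...]: motion back toward the hole
```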