Distribution Matching Distillation (DMD) has been successfully applied to text-to-image diffusion models such as Stable Diffusion (SD) 1.5. However, vanilla DMD suffers from convergence difficulties on large-scale flow-based text-to-image models such as SD 3.5 and FLUX. In this paper, we first analyze the issues that arise when applying vanilla DMD to large-scale models. Then, to overcome the scalability challenge, we propose Implicit Distribution Alignment (IDA) to regularize the distance between the generator and the fake distribution. Furthermore, we propose Intra-Segment Guidance (ISG) to relocate the timestep importance distribution from the teacher model. With IDA alone, DMD converges for SD 3.5; with both IDA and ISG, it converges for both SD 3.5 and FLUX.1 dev. Together with other improvements such as a scaled-up discriminator, our final model, dubbed SenseFlow, achieves superior distillation performance on both diffusion-based (SDXL) and flow-matching models (SD 3.5 Large and FLUX).
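To make the DMD objective concrete, the following is a minimal PyTorch sketch of a DMD-style generator loss with an optional IDA-like regularizer. The function names, the surrogate-loss construction, and the exact form of the regularizer are illustrative assumptions, not the paper's implementation: `s_real` and `s_fake` stand for the teacher and fake (critic) score estimates, and the IDA-style term is sketched here simply as a penalty on the gap between the fake model's and the generator's own predictions.

```python
import torch


def dmd_generator_loss(x_g, s_real, s_fake, ida_weight=0.0, s_gen=None):
    """Sketch of a DMD-style generator loss (assumed form, not the paper's code).

    x_g:    generator samples, requires_grad=True
    s_real: score/denoiser estimate of the real (teacher) distribution
    s_fake: score/denoiser estimate of the fake (critic) distribution
    s_gen:  generator's own estimate, used only by the hypothetical
            IDA-style regularizer below
    """
    # DMD update direction: push generator samples from the fake
    # distribution toward the real one.
    grad = (s_fake(x_g) - s_real(x_g)).detach()
    # Surrogate loss whose gradient w.r.t. x_g equals `grad`.
    loss = (grad * x_g).sum()
    # Hypothetical IDA-style term: penalize the distance between the
    # fake model's and the generator's predictions (form assumed).
    if ida_weight > 0.0 and s_gen is not None:
        loss = loss + ida_weight * ((s_fake(x_g).detach() - s_gen(x_g)) ** 2).mean()
    return loss
```

In this sketch the stop-gradient (`detach`) keeps the critic fixed during the generator step, so backpropagation through the surrogate term delivers exactly the DMD update direction to the generator parameters.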