Dynamic topic models capture the evolution of topics in sequential corpora and have been applied to tasks such as trend analysis. However, existing models tend to produce repetitive and unassociated topics, which fail to reveal topic evolution and hinder further applications. To address these issues, in this paper we propose the Unassociated word sampling and Contrastive Dynamic Topic Model. Instead of simply chaining topics as in earlier work, we propose a contrastive topic evolution method that builds similarity relations among dynamic topics. This captures the evolution of topics while maintaining the differences between them, which prevents repetitive topics. To avoid unassociated topics, we further propose an unassociated word sampling method that consistently excludes unassociated words from discovered topics. Experiments on benchmark datasets show that our method significantly outperforms state-of-the-art baselines: it captures topic evolution with more coherent and diverse topics and achieves better performance on downstream tasks.
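To make the two ideas concrete, here is a minimal sketch of (a) a contrastive objective that pulls each topic toward its counterpart at the previous time slice while pushing it away from the other topics, and (b) a rule that flags words associated with no topic. All names, the InfoNCE-style formulation, and the thresholding rule are illustrative assumptions, not the paper's actual objective.

```python
import numpy as np

def contrastive_evolution_loss(topics_prev, topics_curr, temperature=0.5):
    """InfoNCE-style sketch: topic k at time t is pulled toward topic k at
    time t-1 (positive pair, the diagonal) and pushed away from the other
    topics at t-1 (negatives). Inputs are (K, D) topic-embedding matrices."""
    # cosine similarities between all topic pairs across adjacent slices
    a = topics_curr / np.linalg.norm(topics_curr, axis=1, keepdims=True)
    b = topics_prev / np.linalg.norm(topics_prev, axis=1, keepdims=True)
    sim = (a @ b.T) / temperature                      # (K, K)
    # numerically stable log-softmax over each row
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # minimize the negative log-probability of the positive (diagonal) pairs
    return -np.mean(np.diag(log_prob))

def unassociated_word_ids(beta, threshold=1e-3):
    """Flag words whose probability stays below `threshold` in every topic
    of the (K, V) topic-word matrix `beta`; such words can be excluded or
    down-weighted during training. The threshold is an assumed heuristic."""
    return np.where(beta.max(axis=0) < threshold)[0]
```

In a full model, the contrastive term would be added to the usual topic-model objective (e.g. an ELBO) so that adjacent time slices stay linked without collapsing into identical, repetitive topics.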