Traditional GAN (Generative Adversarial Network) architectures often reproduce biases present in their training data, producing synthetic data that may unfairly impact certain subgroups. Prior efforts to improve fairness in GANs typically target a single demographic attribute, such as sex or race, and overlook intersectionality. Our approach addresses this gap by integrating an intersectionality framework with explainability techniques to identify and select problematic sensitive features. These insights are then used to develop intersectional fairness constraints that are integrated into the GAN training process. By addressing intersections of multiple demographic attributes, we aim to enhance fairness while maintaining diverse subgroup representation. Specifically, we adjusted the loss functions of two state-of-the-art GAN models for tabular data by incorporating an intersectional demographic parity constraint. Our evaluations indicate that this approach significantly improves the fairness of synthetically generated datasets. On the Adult and Diabetes datasets, we compared outcomes when considering the intersection of two sensitive features versus a single sensitive attribute, demonstrating the effectiveness of our method in capturing more complex biases.
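To make the idea of an intersectional demographic parity constraint concrete, the following is a minimal PyTorch sketch of one plausible penalty term added to a generator loss. It is an illustration under stated assumptions, not the paper's exact formulation: the function name `intersectional_dp_penalty`, the weight `lambda_fair`, and the scheme of crossing two sensitive attributes into a single subgroup index are all hypothetical.

```python
import torch

def intersectional_dp_penalty(outcome_probs, group_ids, num_groups):
    """Max absolute demographic-parity gap over intersectional subgroups.

    outcome_probs: (batch,) soft probability of the favorable outcome
                   for each generated sample.
    group_ids:     (batch,) integer subgroup index formed by crossing
                   two sensitive attributes, e.g. sex * n_race + race.
    Returns 0 when every subgroup's favorable-outcome rate matches the
    overall rate; larger values indicate larger parity violations.
    """
    overall = outcome_probs.mean()
    gaps = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():  # skip subgroups absent from this batch
            gaps.append((outcome_probs[mask].mean() - overall).abs())
    return torch.stack(gaps).max()

# Toy usage: two binary sensitive attributes -> 4 intersectional groups.
batch = 256
sex = torch.randint(0, 2, (batch,))
race = torch.randint(0, 2, (batch,))
group_ids = sex * 2 + race  # hypothetical crossing of the two attributes
outcome_probs = torch.rand(batch, requires_grad=True)

penalty = intersectional_dp_penalty(outcome_probs, group_ids, num_groups=4)
lambda_fair = 1.0  # hypothetical weight trading off fairness vs. fidelity
# generator_loss = adversarial_loss + lambda_fair * penalty
penalty.backward()  # gradients flow back toward the generator's outputs
```

Because the penalty is differentiable with respect to the generated outcome probabilities, it can be weighted and added to the generator's adversarial objective so that training jointly optimizes data fidelity and intersectional parity; comparing against a single-attribute variant of the same penalty reduces to setting `num_groups` to the cardinality of one sensitive feature.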