Machine Learning (ML) is increasingly used for malware detection. The Adversarial Example (AE) attack, widely known for undermining ML in diverse contexts, has proven effective at evading or deceiving ML-based malware detection systems. When generated AEs transfer across models, a single attack can bypass multiple types of malware detectors. This research investigates the transferability of AEs generated with Generative Adversarial Networks (GANs) and assesses the resistance of ML-based Android malware detection against them. We also provide experimental evidence that GAN-generated AEs retain their original functionality and malicious behavior. First, we build ML models with high malware-detection performance; these models act as black boxes and form the foundation for AE generation. Next, we use each black-box model to build the discriminator, improving the transferability of the sample generator while preserving the functional and executable features of the file. Finally, AEs generated by each GAN model under a given black-box detector are tested for their transferability against various targeted victim models.
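The generator/discriminator arrangement described above can be illustrated with a deliberately small, hypothetical sketch in the style of MalGAN-like attacks: a generator learns to *add* binary features to malware feature vectors so that a substitute detector, fitted to a black-box detector's labels, scores them as benign. Restricting the attack to feature additions mirrors the constraint that the AE must keep the file functional. All models, weights, and data below are toy assumptions for illustration, not the actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Z, N = 6, 2, 200  # feature dim, noise dim, samples per class

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# --- Black-box detector (oracle): returns labels only; internals unseen ---
w_bb = np.array([2.0, 2.0, 2.0, -2.0, -2.0, -2.0])  # toy, assumed weights
def black_box(X):
    return (X @ w_bb - 1.0 > 0).astype(float)  # 1 = malware

# --- Toy data: malware tends to set features 0-2, benign sets 3-5 ---
malware = (rng.random((N, D)) < [0.9, 0.9, 0.9, 0.1, 0.1, 0.1]).astype(float)
benign  = (rng.random((N, D)) < [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]).astype(float)
X = np.vstack([malware, benign])
y = black_box(X)  # query the oracle for labels

# --- Substitute detector (the GAN "discriminator"): logistic regression
#     trained to mimic the black box's labels ---
w_d, b_d = np.zeros(D), 0.0
for _ in range(500):
    p = sigmoid(X @ w_d + b_d)
    w_d -= 0.5 * X.T @ (p - y) / len(X)
    b_d -= 0.5 * np.mean(p - y)

# --- Generator: maps [x, noise] to an additive feature mask, trained so the
#     substitute detector scores the perturbed sample as benign ---
W_g = np.zeros((D + Z, D))
for _ in range(300):
    z = rng.random((N, Z))
    inp = np.hstack([malware, z])
    mask = sigmoid(inp @ W_g)
    x_adv = malware + (1 - malware) * mask  # smooth add-only relaxation
    p = sigmoid(x_adv @ w_d + b_d)          # substitute's malware score
    # gradient of -log(1 - p) w.r.t. W_g, by the chain rule
    g_pre = (p[:, None] * w_d) * (1 - malware) * mask * (1 - mask)
    W_g -= 0.1 * inp.T @ g_pre / N

# --- Final AEs: binarize the mask and OR it with the original features,
#     so malicious features are never removed ---
z = rng.random((N, Z))
mask = sigmoid(np.hstack([malware, z]) @ W_g) > 0.5
x_adv = np.maximum(malware, mask.astype(float))

det_before = black_box(malware).mean()
det_after = black_box(x_adv).mean()
print(f"black-box detection rate: {det_before:.2f} -> {det_after:.2f}")
assert np.all(x_adv >= malware)  # additions only: functionality preserved
```

In this sketch the black box is queried only for labels, as in the paper's black-box setting; transferability would then be measured by feeding `x_adv` to other victim models never used during generator training.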