The recent explosion in the number and sophistication of cyberattacks has driven the deployment of machine learning (ML)-based network intrusion detection systems (NIDS) in enterprise network infrastructure. However, enterprise organizations face numerous difficulties in training a conventional ML-based NIDS, such as data shortages and privacy concerns over sensitive information. Fortunately, federated learning (FL) has emerged as a decentralized training scheme that facilitates collaboration among different parties in building a robust ML-based NIDS. As a result, such a model can learn new signatures of cyber threats from various data sources without privacy breaches. Nonetheless, because the server is blind to the local training process, the FL framework faces the risk of poisoning attacks, in which compromised clients intentionally inject adversarial data into their local datasets or directly manipulate the model weights before submitting them to the server for aggregation. Several anti-poisoning techniques have been proposed to mitigate the impact of poisoning attacks in FL, but these approaches typically require prior knowledge and perform poorly on non-Independently and Identically Distributed (non-IID) data. This paper introduces a new defensive mechanism for FL-based NIDS, named FedLS, which adopts penultimate layer representations (PLR) and an Autoencoder (AE)-based latent space to filter malicious updates out of the aggregation phase. Experimental results on the CIC-ToN-IoT and N-BaIoT datasets demonstrate the effectiveness of FedLS in detecting advanced poisoning methods in both IID and non-IID settings. More specifically, in the best case, the Accuracy and F1-Score of the FL-based NIDS rise to over 99\% after integrating our proposed defense.
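To make the filtering idea concrete, the following is a minimal server-side sketch in PyTorch, not the paper's actual FedLS procedure. It assumes the server holds a small clean auxiliary batch and can read each client model's penultimate layer via a forward hook; the helper names (\texttt{get\_plr}, \texttt{filter\_updates}) and the mean-plus-$k\sigma$ outlier rule are illustrative assumptions. The intuition is that an AE fitted on a round's PLRs reconstructs the benign majority well, so updates with anomalous reconstruction error can be excluded before aggregation.

\begin{verbatim}
# Illustrative sketch only (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn

def get_plr(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Capture penultimate-layer activations of `model` on batch `x`."""
    acts = []
    handle = layer.register_forward_hook(lambda m, i, o: acts.append(o.detach()))
    with torch.no_grad():
        model(x)
    handle.remove()
    return acts[0].flatten(start_dim=1)

class Autoencoder(nn.Module):
    """Small AE whose latent space / reconstruction error scores each update."""
    def __init__(self, dim: int, latent: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim))
    def forward(self, z):
        return self.dec(self.enc(z))

def filter_updates(client_models, penult_layers, aux_batch,
                   epochs: int = 100, k: float = 2.0):
    """Return indices of clients whose PLRs are not reconstruction outliers."""
    # One PLR vector per client: mean activation over the auxiliary batch.
    plrs = torch.stack([get_plr(m, l, aux_batch).mean(dim=0)
                        for m, l in zip(client_models, penult_layers)])
    ae = Autoencoder(plrs.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(epochs):  # fit the AE on this round's PLRs
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(plrs), plrs)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = ((ae(plrs) - plrs) ** 2).mean(dim=1)  # per-client recon. error
    thresh = err.mean() + k * err.std()             # simple outlier rule
    return [i for i, e in enumerate(err) if e <= thresh]
\end{verbatim}

The surviving indices would then feed a standard aggregation rule such as FedAvg over the retained client models only.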