Artificial intelligence (AI) methods are being rapidly developed and integrated
into next-generation (NextG) networks, where they provide significant
advantages in frequency spectrum usage, bandwidth, latency, and security. A key
feature of NextG is the integration of AI, i.e., a self-learning architecture
based on self-supervised algorithms, to improve network performance. A secure
AI-powered structure is also expected to protect NextG networks against
cyber-attacks. However, AI itself may be attacked, e.g., through model
poisoning, resulting in cybersecurity violations. This paper proposes an AI
trust platform using
Streamlit for NextG networks that allows researchers to evaluate, defend,
certify, and verify their AI models and applications against the adversarial
threats of evasion, poisoning, extraction, and inference.
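
To make the intended workflow concrete, the following is a minimal sketch of a
Streamlit page that evaluates a classifier against an evasion attack. It
assumes IBM's Adversarial Robustness Toolbox (ART) as the attack backend and
uses a scikit-learn logistic regression on a toy dataset as a stand-in for a
NextG model; these are illustrative assumptions, not the paper's prescribed
implementation.

```python
import streamlit as st
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

st.title("AI Trust Platform: Evasion Evaluation (sketch)")

# Toy dataset standing in for NextG model inputs (assumption).
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale features to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple victim model and wrap it for ART.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Let the researcher choose the attack strength interactively.
eps = st.slider("FGSM perturbation budget (eps)", 0.01, 0.50, 0.10)

# Generate adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=eps)
X_adv = attack.generate(x=X_test)

# Report accuracy on clean vs. adversarial inputs.
st.metric("Clean accuracy", f"{model.score(X_test, y_test):.3f}")
st.metric("Adversarial accuracy", f"{model.score(X_adv, y_test):.3f}")
```

Saved as app.py, the page runs with `streamlit run app.py`; the slider lets a
researcher observe how accuracy degrades as the perturbation budget grows,
which is the evaluate step of the evaluate/defend/certify/verify loop described
above.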