AI agents use the same networking infrastructure as users and apps, so security solutions like zero trust must evolve to protect agentic AI communications.
We uncovered 1,100+ exposed Ollama LLM servers (20% with open models), revealing critical security gaps and the need for better LLM threat monitoring.
Foundation-sec-8B-Instruct layers instruction fine-tuning on top of our domain-focused base model, giving you a chat-native copilot that understands security.
Foundation AI's Cerberus is a 24/7 guard for the AI supply chain, analyzing models as they enter HuggingFace and sharing results with Cisco Security products.
Foundation AI's second release, Foundation-sec-8b-reasoning, is designed to bring enhanced analytical capabilities to complex security workflows.
Foundation AI's first release, Llama-3.1-FoundationAI-SecurityLLM-base-8B, is designed to improve response time, expand capacity, and proactively reduce risk.
AI threat research is a fundamental part of Cisco's approach to AI security. Our roundups highlight new findings from both original and third-party sources.
The performance of DeepSeek models has made a clear impact, but are these models safe and secure? We use algorithmic AI vulnerability testing to find out.