Rising geopolitical tensions are reflected (or in some cases preceded) by cyber operations, while technology itself has become politicized. Let's admit it: we are in the middle of it.
Introduction: One tech power to rule them all is a thing of the past
The relative safety, peace and prosperity that much of the world has enjoyed since 1945 was not accidental. It emerged from the ashes
Unmasking impostors is something the art world has faced for decades, and there are valuable lessons from the works of Elmyr de Hory that can apply to the world of defensive cybersecurity. During the 1960s, de Hory gained infamy as a premier forger, passing off counterfeit masterworks of Picasso, Matisse, and Renoir to unsuspecting collectors and renowned museums. Over the next several decades,
Most teams have security tools in place. Alerts are firing, dashboards look clean, threat intel is flowing in. On the surface, everything feels under control.
But one question usually stays unanswered: Would your defenses actually stop a real attack?
That's where things get shaky. A control exists, so it's assumed to work. A detection rule is active, so it's expected to catch something. But very
In September 2025, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to execute an autonomous cyber espionage campaign against 30 global targets. The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed.
This incident is worrying, but there's a scenario that should
The U.S. Department of Justice (DoJ) said a Russian national has been sentenced to two years in prison for managing a botnet that was used to launch ransomware attacks against U.S. companies.
Ilya Angelov, 40, of Tolyatti, Russia, was also fined $100,000. Angelov, who went by the online aliases "milan" and "okart," is said to have co-managed a Russia-based cybercriminal group known as TA551 (aka
Cybersecurity researchers are calling attention to an active device code phishing campaign that's targeting Microsoft 365 identities across more than 340 organizations in the U.S., Canada, Australia, New Zealand, and Germany.
The activity, per Huntress, was first spotted on February 19, 2026, with subsequent cases appearing at an accelerated pace since then. Notably, the campaign leverages
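Device code phishing abuses the OAuth 2.0 device authorization flow: the attacker requests a code, then tricks the victim into completing authentication for it. One practical detection angle is hunting for device-code sign-ins in identity telemetry. Below is a minimal, illustrative sketch; the field names mirror the shape of Microsoft Entra ID sign-in records (`authenticationProtocol`, `location.countryOrRegion`), but any log source with equivalent fields would work, and the expected-country check is a placeholder policy, not a recommendation.

```python
# Illustrative heuristic: surface sign-ins that used the OAuth device code
# flow, marking those from unexpected countries for analyst review.

def flag_device_code_signins(records, expected_countries=("US",)):
    """Return device-code sign-ins, each annotated with a 'suspicious' flag.

    In device code phishing the user authenticates on behalf of a code the
    attacker requested, so any device-code sign-in from an unusual location
    deserves a closer look.
    """
    hits = []
    for rec in records:
        if rec.get("authenticationProtocol") != "deviceCode":
            continue
        country = rec.get("location", {}).get("countryOrRegion")
        hits.append(dict(rec, suspicious=country not in expected_countries))
    return hits

# Example: one device-code sign-in from an unexpected country is flagged;
# a password-based sign-in is ignored entirely.
sample = [
    {"userPrincipalName": "a@example.com",
     "authenticationProtocol": "deviceCode",
     "location": {"countryOrRegion": "NG"}},
    {"userPrincipalName": "b@example.com",
     "authenticationProtocol": "ropc",
     "location": {"countryOrRegion": "US"}},
]
hits = flag_device_code_signins(sample)
```

In a real deployment this logic would live in a SIEM query rather than application code, but the filtering criterion is the same.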
On February 25, 2026, Gartner published its inaugural Market Guide for Guardian Agents, marking an important milestone for this emerging category. For those unfamiliar with the various Gartner report types, "a Market Guide defines a market and explains what clients can expect it to do in the short term. With the focus on early, more chaotic markets, a Market Guide does not rate or position
Cybersecurity has changed fast. Roles are more specialized, and tooling is more advanced. On paper, this should make organizations more secure. But in practice, many teams struggle with the same basic problems they faced years ago: unclear risk priorities, misaligned tooling decisions, and difficulty explaining security issues in terms the business understands.
These challenges do not
AWS Bedrock is Amazon's platform for building AI-powered applications. It gives developers access to foundation models and the tools to connect those models directly to enterprise data and systems. That connectivity is what makes it powerful, but it's also what makes Bedrock a target.
When an AI agent can query your Salesforce instance, trigger a Lambda function, or pull from a SharePoint
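That connectivity is also the natural control point. A minimal sketch, assuming the agent runs under a dedicated IAM execution role: the policy below scopes it to one approved foundation model and one approved Lambda tool. `bedrock:InvokeModel` and `lambda:InvokeFunction` are real IAM actions, but the region, account ID, and resource names are placeholders.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeOnlyApprovedModel",
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    },
    {
      "Sid": "InvokeOnlyApprovedTool",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:crm-lookup"
    }
  ]
}
```

The point of the sketch is the shape, not the specifics: an agent's blast radius is bounded by what its role can call, so an allowlist of models and tools is the first line of defense.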
Artificial Intelligence (AI) is changing how individuals and organizations conduct many activities, including how cybercriminals carry out phishing attacks and iterate on malware. Now, cybercriminals are using AI to generate personalized phishing emails, deepfakes and malware that evade traditional detection by impersonating normal user activity and bypassing legacy security models. As a result,
Security teams have spent years building identity and access controls for human users and service accounts. But a new category of actor has quietly entered most enterprise environments, and it operates entirely outside those controls.
Claude Code, Anthropic's AI coding agent, is now running across engineering organizations at scale. It reads files, executes shell commands, calls external APIs,
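Because such an agent executes shell commands outside traditional identity controls, one common mitigation pattern is to gate every proposed command through a policy check before execution. The sketch below is a hypothetical allowlist gate of my own construction, not Claude Code's actual permission model; the allowed binaries and rejected metacharacters are illustrative.

```python
import shlex

# Hypothetical policy gate for an AI agent's shell access: a proposed
# command is parsed and checked against an allowlist of binaries before
# anything is executed.

ALLOWED_BINARIES = {"ls", "cat", "grep", "git", "pytest"}

def is_command_allowed(command: str) -> bool:
    """Return True only if the command starts with an allowlisted binary
    and contains no shell metacharacters that could chain extra commands."""
    # Reject chaining/substitution operators outright.
    if any(tok in command for tok in (";", "&", "|", "`", "$(")):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:  # unbalanced quotes, etc.
        return False
    return bool(tokens) and tokens[0] in ALLOWED_BINARIES
```

A deny-by-default gate like this is deliberately crude; the broader point from the excerpt stands, which is that agents need purpose-built identity and authorization controls, not just wrappers.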
When a Magecart payload hides inside the EXIF data of a dynamically loaded third-party favicon, no repository scanner will catch it, because the malicious code never actually touches your repo. As teams adopt Claude Code Security for static analysis, this is the exact technical boundary where AI code scanning stops and client-side runtime execution begins.
A detailed analysis of where Claude
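To make the boundary concrete: a payload stashed in image metadata only exists in bytes fetched at runtime, so one runtime-side countermeasure is scanning third-party images as they arrive. The sketch below is an illustrative heuristic, not a product feature; the marker patterns are examples rather than an exhaustive signature set.

```python
import re

# Illustrative heuristic: flag raw image bytes that contain JavaScript-like
# markers, e.g. a skimmer payload stashed in an EXIF comment field. A static
# repo scanner never sees these bytes; only a runtime check can.

JS_MARKERS = [
    rb"<script", rb"eval\(", rb"document\.cookie",
    rb"atob\(", rb"fromCharCode",
]

def image_bytes_look_malicious(data: bytes) -> bool:
    """Return True if the image bytes contain script-like content."""
    return any(re.search(pat, data, re.IGNORECASE) for pat in JS_MARKERS)
```

In practice the standard controls here are a strict Content-Security-Policy and Subresource Integrity for third-party assets; the scanner above only illustrates where the repo-scanning blind spot sits.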
Security teams today are not short on tools or data. They are overwhelmed by both.
Yet within the terabytes of alerts, exposures, and misconfigurations, security teams still struggle to understand context:
Q: Which exposures, misconfigurations, and vulnerabilities chain together to create viable attack paths to crown jewels?
Even the most mature security teams can't answer that
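The question above is, at its core, a graph problem: assets are nodes, and an edge means an attacker who controls one asset can reach the next via some exposure, misconfiguration, or vulnerability. A "viable attack path" is then just a walk from an internet-facing entry point to a crown jewel. The toy model below is mine; real platforms build this graph from scanner and CMDB data, but the search is the same idea.

```python
from collections import deque

# Toy attack graph: each edge is an individually low-severity finding that
# becomes dangerous only when chained. Asset names are invented.
EDGES = {
    "internet":    ["vpn-gateway", "web-app"],
    "web-app":     ["app-server"],     # e.g. an SSRF exposure
    "app-server":  ["db-primary"],     # e.g. over-privileged service account
    "vpn-gateway": ["jump-host"],
    "jump-host":   [],
}

def find_attack_path(graph, start, target):
    """Breadth-first search: shortest chain of compromises from start to
    target, or None if the target is unreachable."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == target:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None
```

Run against the toy graph, `find_attack_path(EDGES, "internet", "db-primary")` chains three findings that would each look minor in isolation, which is exactly the context that a flat list of alerts cannot convey.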
A majority of security leaders are struggling to defend AI systems with tools and skills that are not fit for the challenge, according to the AI and Adversarial Testing Benchmark Report 2026 from Pentera.
The report, based on a survey of 300 US CISOs and senior security leaders, examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and
If you run security at any reasonably complex organization, your validation stack probably looks something like this: a BAS tool in one corner. A pentest engagement, or maybe an automated pentesting product, in another. A vulnerability scanner feeding an attack surface management platform somewhere else. Each tool gives you a slice of the picture. None of them talks to the others in any
Disclaimer: This report has been prepared by the Threat Research Center to enhance cybersecurity awareness and support the strengthening of defense capabilities. It is based on independent research and observations of the current threat landscape available at the time of publication. The content is intended for informational and preparedness purposes only.
Phishing has quietly turned into one of the hardest enterprise threats to expose early. Instead of crude lures and obvious payloads, modern campaigns rely on trusted infrastructure, legitimate-looking authentication flows, and encrypted traffic that conceals malicious behavior from traditional detection layers. For CISOs, the priority is now clear: scale phishing detection in a way that helps
The most dangerous phishing campaigns aren't just designed to fool employees. Many are designed to exhaust the analysts investigating them. When a phishing investigation takes 12 hours instead of five minutes, the outcome can shift from a contained incident to a breach.
For years, the cybersecurity industry has focused on the front door of phishing defense: employee training, email gateways that
"You knew, and you could have acted. Why didn't you?"
This is the question you do not want to be asked. And increasingly, it's the question leaders are forced to answer after an incident.
For years, many executive teams and boards have treated a large vulnerability backlog as an uncomfortable but tolerable fact of life: "we've accepted the risk." If you've ever seen a report showing
Artificial Intelligence (AI) is no longer just a tool we talk to; it is a tool that does things for us. These are called AI Agents. They can send emails, move data, and even manage software on their own.
But there is a problem. While these agents make work faster, they also open a new "back door" for hackers.
The Problem: "The Invisible Employee"
Think of an AI Agent like a new employee who has