Normal view
-
Security – Cisco Blog
- Cisco LiveProtect: Bringing eBPF-Powered Protection into Network Infrastructure
Gain web control with browser isolation
Iran-Backed Hackers Claim Wiper Attack on Medtech Firm Stryker
A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency.
Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, a hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices.
A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.
“All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads.
The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike.
Handala was one of several hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor.
Stryker’s website says the company has 56,000 employees in 61 countries. A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.”
A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.”
“Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.”
Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices.
Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently.
Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company.
“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote.
The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace.
Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker.
“This is a real-world supply chain attack,” the expert said, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.”
John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet.
“We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.”
According to a March 11 memo from the state of Maryland’s Institute for Emergency Medical Services Systems, Stryker indicated that some of their computer systems have been impacted by a “global network disruption.” The memo indicates that in response to the attack, a number of hospitals have opted to disconnect from Stryker’s various online services, including LifeNet, which allows paramedics to transmit EKGs to emergency physicians so that heart attack patients can expedite their treatment when they arrive at the hospital.
“As a precaution, some hospitals have temporarily suspended their connection to Stryker systems, including LIFENET, while others have maintained the connection,” wrote Timothy Chizmar, the state’s EMS medical director. “The Maryland Medical Protocols for EMS requires ECG transmission for patients with acute coronary syndrome (or STEMI). However, if you are unable to transmit a 12 Lead ECG to a receiving hospital, you should initiate radio consultation and describe the findings on the ECG.”
This is a developing story. Updates will be noted with a timestamp.
Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.
Update, Mar. 12, 7:59 a.m. ET: Added information about the outage affecting Stryker’s online services.
Meta Ramps Up Efforts to Disrupt Industrialized Scamming
Social media impersonation: The brand threat DMARC can’t see
Using an AI like ChatGPT to File Your Taxes? Stop and Read This First
Tax season is a headache for many people, and when a shortcut promises to make filing easier, it’s hard to resist. This year, one of the newest trends is using AI chatbots like ChatGPT to help prepare tax returns.
According to new McAfee research, 30% of people say they plan to use an AI tool, such as ChatGPT, to help with their taxes, with younger adults leading the trend.
At first glance, it makes sense. AI tools can explain confusing tax rules, summarize IRS forms, and answer questions instantly.
But there’s an important line that should never be crossed: Do not enter your personal tax information into AI chatbots.
That includes Social Security numbers, income records, home addresses, bank details, or anything else tied to your identity.
Here’s why:
Typing Your Tax Info Into a Chatbot Is Like Posting It Online
Think about it this way: when you type something into an AI chatbot, you’re sending that information over the internet to a system that processes and stores data.
In practical terms, entering sensitive information into an AI tool is similar to typing it directly into a search engine or submitting it to an online form.
Once it leaves your device, you lose direct control over where it travels and how it may be stored.
Even companies with strong security protections are transparent about this risk.
OpenAI’s privacy documentation explains that they use encryption and strict access controls to protect user data. However, they also note that no internet transmission or digital storage system can be guaranteed completely secure.
This is true across the internet, not just for AI tools.
Even Secure Systems Can Experience Breaches
Security incidents can happen anywhere online, including companies with robust security programs.
For example, in late 2025, OpenAI disclosed a security incident involving a third-party analytics provider called Mixpanel. The breach occurred within the vendor’s systems, not OpenAI’s infrastructure, but some limited user profile data associated with the platform was exposed.
According to OpenAI’s disclosure, the data involved information such as:
- Names associated with accounts
- Email addresses
- Approximate location data
- Browser and device information
Importantly, chat content, passwords, payment information, and government IDs were not exposed in that incident.
But the event highlights a broader cybersecurity reality:
Even when a company takes strong security precautions, third-party services, vendors, and other parts of the digital ecosystem can still introduce risk.
That’s why cybersecurity experts recommend limiting what personal information you share online whenever possible.
Why Tax Data Is Especially Dangerous to Share
Tax information is one of the most valuable targets for cybercriminals.
If scammers obtain the details commonly found in tax filings, they may be able to:
- Commit tax refund fraud
- Open financial accounts in your name
- Conduct identity theft
- Launch highly personalized phishing attacks
Tax returns typically include multiple pieces of highly sensitive data, including:
- Social Security numbers
- Home addresses
- Employer and income information
- Banking details for refunds
- Family member information
- Entering these details into any tool outside of a secure tax platform significantly increases risk.
Safer Ways to File Your Taxes
Instead of relying on AI chatbots for filing, stick with trusted tax preparation options designed to securely handle sensitive data:
- Official tax software platforms
- Licensed tax professionals
- IRS-approved free filing services
These systems are specifically built with compliance, encryption, and identity verification in mind.
AI tools can be incredibly useful for learning and research. But they are not secure tax filing platforms.
If you wouldn’t feel comfortable posting your Social Security number publicly online, you shouldn’t paste it into a chatbot either. When it comes to taxes, the safest rule is simple: Use AI for advice, not for your personal data.
The post Using an AI like ChatGPT to File Your Taxes? Stop and Read This First appeared first on McAfee Blog.
DHS Ousts CBP Privacy Officers Who Questioned ‘Illegal’ Orders
GPS Attacks Near Iran Are Wreaking Havoc on Delivery and Mapping Apps
Cisco Live Amsterdam 2026: XDR + Splunk ES
Encrypted Visibility Engine: The Security Analyst’s New Superpower
-
Security – Cisco Blog
- How popular are AI tools like OpenClaw? Understanding AI usage across the network
How popular are AI tools like OpenClaw? Understanding AI usage across the network
From Flood to Focus: Finding Signal in an “Overflow Attempt” Alert Storm
Splunk & Cisco Secure Firewall: Better Together at Cisco LiveAmsterdam 2026
How AI Assistants are Moving the Security Goalposts
AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.
The OpenClaw logo.
If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.
Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.
“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”
You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.
“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”
Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.
There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.
Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.
With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.
“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”
O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.
WHEN AI INSTALLS AI
One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.
A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.
According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.
“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.
“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”
VIBE CODING
AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.
The Moltbook homepage.
Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.
Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.
“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”
ATTACKERS LEVEL UP
The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.
AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.
“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”
“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”
For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.
“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”
BEWARE THE ‘LETHAL TRIFECTA’
This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.
“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”
One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.
Image: simonwillison.net.
“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.
The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.
“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”
DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.
“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”
This Week in Scams: The AI “Truman Show” Scam Draining Bank Accounts
We’re back with another roundup of must-know scams and cybersecurity news making headlines this week, including a scam that features the name of the Jim Carrey movie, The Truman Show.
Let’s break it down.
Why Reports Call it the “Truman Show” Scam
So, why the name of this scam?
In the 1998 film The Truman Show, the main character unknowingly lives inside a staged reality TV world where everything around him is carefully controlled. In the “Truman Show” scam, criminals try to place victims into a similarly staged investment environment, complete with fake group chats, fake investors, and fake profits designed to build trust. It doesn’t actually have anything to do with the movie.
What is the “Truman Show” Scam?
The “Truman Show” scam is an AI-powered investment scam where criminals create an entire fake online community to convince victims an investment opportunity is real.
According to reports, scammers invite people into group chats on platforms like Telegram or WhatsApp that appear full of investors sharing tips and celebrating profits. In reality, many of the participants, moderators, and conversations may be run by AI bots designed to simulate a lively trading community.
Security researchers say the moderator and the other “investors” in the group may actually be AI-driven bots, programmed to simulate real conversations and enthusiasm around the investment strategy.
The scam often includes:
- A group chat on Telegram or WhatsApp
- A downloadable trading app or website
- Screenshots showing fake profits
- Encouragement from “other members” to invest more
The app itself may appear legitimate. But in reality, it often redirects users to a malicious website where scammers collect personal and financial information.
Once victims deposit money, the criminals can quickly drain accounts or block withdrawals.
McAfee’s State of the Scamiverse research shows just how convincing scams have become. One in three Americans (33%) say they feel less confident spotting scams than they did a year ago, as criminals increasingly use polished branding, realistic conversations, and AI-generated content to make fraudulent opportunities look legitimate.
Why this works: people naturally trust social proof. When it looks like dozens of other investors are making money, people lower their skepticism.
Fake Government Letters Are Targeting Residents Across Towns
Another scam to be aware of this week includes spoofed letters impersonating local government offices.
According to reporting from WGME in Maine, residents in multiple towns recently received official-looking notices requesting payment for supposed municipal fees tied to development applications.
The letters appeared convincing. They used formal language, official seals, and department names. But there was a problem.
One of the notices claimed it came from a “Board of Commissioners,” even though the town in question does not have one.
Officials say the letters instructed recipients to send payments by wire transfer, a method legitimate government offices almost never use for these kinds of transactions.
McAfee’s experts say these scams are effective because they rely on volume. Fraudsters send thousands of letters hoping a small percentage of recipients will respond before verifying the request. And remember, these types of scams occur all the time and across the globe. While today’s reports are in Maine, it’s important to be vigilant wherever you live.
Red flags to watch for:
- Requests for wire transfers, gift cards, or crypto payments
- Pressure to pay quickly to avoid penalties
- Official-looking letters with subtle inconsistencies
- Contact information that doesn’t match the official government website
The safest move is simple: verify the request independently. Contact the government office directly using phone numbers listed on its official website, not the ones in the letter.
LexisNexis Confirms Data Breach After Hackers Leak Files
Meanwhile, a well-known data analytics company is dealing with a breach after hackers published stolen files online.
According to BleepingComputer, LexisNexis Legal & Professional confirmed that attackers accessed some of its servers and obtained limited customer and business information. The confirmation came after a hacking group leaked roughly 2GB of stolen data on underground forums.
LexisNexis says the compromised systems contained mostly older or “legacy” data from before 2020, including:
- Customer names
- User IDs
- Business contact information
- Product usage details
- Support tickets and survey responses
The company says highly sensitive financial information, Social Security numbers, and active passwords were not part of the exposed data.
However, attackers claim they accessed millions of database records and hundreds of thousands of cloud user profiles tied to the company’s systems.
LexisNexis says it has contained the intrusion and is working with cybersecurity experts and law enforcement.
Why breaches like this matter: even when the stolen data appears limited, it can still be used in targeted phishing attacks.
For example, scammers might use real names, email addresses, or business roles to send convincing messages that appear legitimate.
Breaches often trigger waves of follow-up scams weeks or months later. (We know we cover this one a lot, but it’s key to remember!)
McAfee’s Safety Tips This Week
A few simple habits can make these schemes much easier to spot.
- Be skeptical of investment groups online. Real trading communities rarely pressure you to deposit money quickly or download unfamiliar apps.
- Verify government payment requests independently. If you receive a letter demanding payment, contact the agency directly using information from its official website.
- Treat breach-related messages cautiously. After a breach makes headlines, phishing emails often follow pretending to offer “account verification” or “security updates.”
- Avoid clicking unfamiliar links in emails or texts. Tools like McAfee’s free WebAdvisor can help flag risky websites and block known malicious pages before they load.
- Pause before sending money or personal information. Many scams rely on urgency. Slowing down gives you time to verify what’s real.
We’ll be back next week with another roundup of the scams and cybersecurity news making headlines and what they mean for your digital safety.
The post This Week in Scams: The AI “Truman Show” Scam Draining Bank Accounts appeared first on McAfee Blog.
CBP Used Online Ad Data to Track Phone Locations
How Each Gulf Country Is Intercepting Iranian Missiles and Drones
The Future of Iran’s Internet Is More Uncertain Than Ever
What cybersecurity actually does for your business
From Ukraine to Iran, Hacking Security Cameras Is Now Part of War’s ‘Playbook’