I recently discovered this platform called vantagepoint. It's pretty clean and no-nonsense: there are events you can register for, including a free event on web application security with a wonderful lab.
There are three certifications at present, one each for Mobile AppSec, Web AppSec, and the Multi-Cloud Security Expert, which is the one I am planning to get.
What do you guys think?
EDR-Redir uses a bind filter (the bindflt.sys minifilter) and the Windows Cloud Filter API (cldflt.sys) to redirect an Endpoint Detection and Response (EDR) product's working folder to a folder of the attacker's choice. Alternatively, it can make the folder appear corrupt to prevent the EDR's services from functioning.
Next.js server actions present an interesting challenge during penetration tests. These server-side functions appear in proxy tools as POST requests with hashed identifiers like a9fa42b4c7d1 in the Next-Action header, making it difficult to understand what each request actually does. When applications have productionBrowserSourceMaps enabled, the NextjsServerActionAnalyzer Burp extension bridges that gap by automatically mapping these hashes to their actual function names.
During a typical web application assessment, endpoints usually have descriptive names and methods: GET /api/user/1 clearly indicates its purpose. Next.js server actions work differently. They all POST to the same endpoint, distinguished only by hash values that change with each build. Without tooling, testers must manually track which hash performs which action—a time-consuming process that becomes impractical with larger applications.
The extension's effectiveness stems from understanding how Next.js bundles server actions in production. When productionBrowserSourceMaps is enabled, JavaScript chunks contain mappings between action hashes and their original function names.
The tool simply uses flexible regex patterns to extract these mappings from minified JavaScript.
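To make that extraction approach concrete, here is a minimal sketch in Python. The exact pattern and the sample chunk are my assumptions for illustration, not the extension's actual code; real minified Next.js output varies by version.

```python
import re

# Hypothetical minified chunk content, shaped like Next.js production output.
chunk = (
    ';var a=(0,r.createServerReference)("a9f8e2b4c7d1",'
    'o.callServer,void 0,s.findSourceMapURL,"deleteUserAccount");'
    'var b=(0,r.createServerReference)("c4d8b1e6aa02",'
    'o.callServer,void 0,s.findSourceMapURL,"exportUserData");'
)

# Flexible pattern: capture the hash (first string argument) and the
# trailing function-name string passed to createServerReference.
PATTERN = re.compile(
    r'createServerReference\)?\(\s*"([0-9a-f]+)"'   # action hash
    r'[^)]*?"([A-Za-z_$][\w$]*)"\s*\)'              # original function name
)

def extract_action_map(js: str) -> dict[str, str]:
    """Map server-action hash IDs to their original function names."""
    return {h: name for h, name in PATTERN.findall(js)}

print(extract_action_map(chunk))
# {'a9f8e2b4c7d1': 'deleteUserAccount', 'c4d8b1e6aa02': 'exportUserData'}
```

The lazy `[^)]*?` between the two captures is what keeps the pattern tolerant of whatever intermediate arguments the bundler emits.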
The extension automatically scans proxy history for JavaScript chunks, identifies those containing createServerReference calls, and builds a comprehensive mapping of hash IDs to function names.
Rather than simply tracking which hash IDs have been executed, it tracks function names. This is important since the same function might have different hash IDs across builds, but the function name will remain constant.
For example, if deleteUserAccount() has a hash of a9f8e2b4c7d1 in one build and b7e3f9a2d8c5 in another, manually tracking these would see these as different actions. The extension recognizes they're the same function, providing accurate unused action detection even across multiple application versions.
A useful feature of the extension is its ability to transform discovered but unused actions into testable requests. When you identify an unused action like exportFinancialData(), the extension can automatically generate a ready-to-send request for it.
This removes the tedium of crafting server action requests by hand.
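For reference, a server-action request can be built by hand too. The sketch below shows the general shape (same endpoint, hash in the Next-Action header, arguments as a JSON array in the body); the host, hash, and Content-Type are placeholders and simplifications, not values from the assessment.

```python
import json
from urllib.request import Request

def build_action_request(url: str, action_id: str, args: list) -> Request:
    """Build (without sending) a Next.js server-action POST for a given hash."""
    return Request(
        url,
        method="POST",
        headers={
            "Next-Action": action_id,            # hash identifying the action
            "Content-Type": "text/plain;charset=UTF-8",
        },
        data=json.dumps(args).encode(),          # arguments as a JSON array
    )

req = build_action_request("https://app.example.com/", "a9f8e2b4c7d1", [])
print(req.get_method(), req.data)  # POST b'[]'
```

Note that urllib normalizes header names (the stored key becomes "Next-action"); a proxy like Burp shows the header exactly as sent.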
We recently assessed a Next.js application with dozens of server actions. The client had left productionBrowserSourceMaps enabled in their production environment—a common configuration that includes debugging information in JavaScript files. This presented an opportunity to improve our testing methodology.
Using the Burp extension, we mapped the application's action hashes back to function names such as updateUserProfile() and fetchReportData().
The function name mapping transformed our testing approach. Instead of tracking anonymous hashes, we could see that b7e3f9a2 mapped to deleteUserAccount() and c4d8b1e6 mapped to exportUserData(). This clarity helped us create more targeted test cases.
Check out our in-progress blog series on reproducing the use of MEMS devices to perform acoustic eavesdropping.
Hey folks,
For those in infrastructure, ops, or security analysis — the analysts, engineers, and defenders building resilience every day, there’s a live cybersecurity workshop in Chicago that digs into practical paranoia and how that mindset strengthens modern defense.
The Nine Pillars of Practical Paranoia, led by Chris Young (30+ yrs in IT & security), is a discussion-based, no-fluff session focused on war stories, real tactics, and lessons you can apply tomorrow.
When: Oct 29, 2 – 4 PM
Where: Civic Opera House – Chicago Loop
Followed by a casual happy hour to keep the conversation going
What we’ll cover — the Nine Pillars:
Don’t be shy — what would your top 8–9 pillars of defense look like?
(Always curious how other orgs define their “core security truths.”)
Amazon Web Services (AWS), one of the world’s largest cloud providers, recently experienced a major outage that disrupted popular websites and apps across the globe—including Snapchat, Reddit, Fortnite, Ring, and Coinbase, according to reports from CNN and CNBC.
The disruption originated in Northern Virginia, where many of the internet’s most-used applications are hosted.
AWS said the problem originated within its EC2 internal network, impacting more than 70 of its own services, and was tied to issues with DNS, the system that tells browsers how to find the right servers online.
A few hours after the initial reports of outages, AWS said the problem had been “fully mitigated,” though it took several more hours for all users to see their systems stabilized, according to CNBC.
There is no indication the outage was caused by a cyberattack, and Amazon continues to investigate the root cause.
When Amazon Web Services falters, the ripple effects reach far beyond businesses. Millions of consumers suddenly lose access to everyday apps and tools, including everything from banking and airline systems to gaming platforms and smart home devices.
“In the past, companies ran their own servers—if one failed, only that company’s customers felt it,” said Steve Grobman, McAfee’s Chief Technology Officer. “Today, much of the internet runs on shared backends like Amazon Web Services or Google Cloud. That interconnectedness makes the web faster and more efficient, but it also means one glitch can impact dozens of services at once.”
Grobman noted the issue was related to DNS within AWS. He described DNS as providing the directions for how systems find each other; even if those systems are fully operational, losing those directions can be crippling. It’s analogous to “tearing up a map or turning off your GPS before driving to the store.” The store might still be open and stocked, he explained, but if you can’t find your way there, it doesn’t matter.
“Even with rigorous safeguards in place, events like this remind us just how complex and intertwined our digital world has become,” Grobman added. “It highlights why resilience and layered protection matter more than ever.”
Events like this sow uncertainty for consumers. When apps fail to load, people may wonder: Is my account hacked? Is my data at risk? Is it just me?
Cybercriminals exploit that confusion. After past outages, McAfee researchers have seen phishing campaigns, fake refund emails, and malicious links promising “fixes” or “status updates” appear within hours.
Scammers often mimic legitimate service alerts—complete with logos and urgent wording—to trick users into entering passwords or payment information. Others push fake customer-support numbers or send direct messages claiming to “restore access.”
Here’s how to stay secure when outages like this strike:
Using advanced artificial intelligence, McAfee’s Scam Detector automatically detects scams across text, email, and video, blocks dangerous links, and identifies deepfakes, stopping harm before it happens.
McAfee’s identity protection tools also monitor for signs that your personal information may have been exposed and guide you through steps to recover quickly.
Sign in to your McAfee account to scan for recent breaches linked to your email. You can also sign up for a free trial of McAfee antivirus to protect your devices.
The post AWS Outage Disrupts Major Apps Like Reddit and Snapchat—What Happened and How to Stay Safe appeared first on McAfee Blog.
uRPF prevents the IP spoofing used in volumetric DDoS attacks. However, on its own, uRPF appears vulnerable to route hijacking.
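For context, uRPF is enabled per interface and comes in strict and loose modes. An illustrative Cisco IOS-style snippet (interface names and descriptions are placeholders):

```
interface GigabitEthernet0/0
 description Customer-facing edge link
 ! Strict mode: accept a packet only if the best route back to its
 ! source address points out this same interface.
 ip verify unicast source reachable-via rx
!
interface GigabitEthernet0/1
 description Multihomed peering link
 ! Loose mode: accept if any route to the source exists at all.
 ! Both modes trust the routing table, which is exactly what a
 ! hijacked route can subvert.
 ip verify unicast source reachable-via any
```

Because both modes consult the FIB, an attacker who can inject or hijack a route for the spoofed source range makes the spoofed traffic pass the check.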
This is research on detecting Kerberos attacks based on network traffic analysis and creating signatures for Suricata IDS.
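Suricata ships a krb5 app-layer parser with dedicated rule keywords, which is what makes signatures like these possible. An illustrative rule of my own (not taken from the research) that flags a TGS-REQ for a decoy service name:

```
alert krb5 any any -> any any (msg:"KRB5 TGS-REQ for decoy SPN (possible Kerberoasting)"; krb5_msg_type:12; krb5_sname; content:"svc-decoy"; sid:1000001; rev:1;)
```

Here `krb5_msg_type:12` matches the TGS-REQ message type and `krb5_sname` is a sticky buffer, so the `content` match applies to the requested service name; "svc-decoy" is a hypothetical honeypot SPN.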
A complete account takeover found with AI for any application using better-auth with API keys enabled, and with 300k weekly downloads, it probably affects a large number of projects. Some of the folks using it can be found here: https://github.com/better-auth/better-auth/discussions/2581.
WireGuard is a great VPN protocol. However, you may come across networks blocking VPN connections, sometimes including WireGuard. For such cases, try tunneling WireGuard over HTTPS, which is typically (far) less often blocked. Here's how to do so, using Wstunnel.
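A sketch of that setup follows. Hostnames are placeholders, and Wstunnel's CLI flags vary between major versions, so treat this as v7+ syntax and check `wstunnel --help` on your build:

```
# On an internet-facing server you control, terminate the WebSocket
# tunnel on port 443 so it looks like ordinary HTTPS traffic:
wstunnel server wss://0.0.0.0:443

# On the client, listen on local UDP 51820 and forward it through the
# tunnel to the WireGuard port on the server side:
wstunnel client -L 'udp://51820:localhost:51820' wss://vpn.example.com:443

# Finally, point the WireGuard peer Endpoint at 127.0.0.1:51820.
```

To a middlebox, the result is an outbound TLS/WebSocket connection to port 443, which is far harder to block than native WireGuard UDP.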
Sophisticated multi-stage malware campaign delivered through LinkedIn by fake recruiters, disguised as a coding interview round.
Read the research on how it was reverse-engineered to uncover the C2 infrastructure, the tactics the attackers used, and all the related IOCs.
In August 2025, F5 detected that a sophisticated nation-state threat actor had maintained persistent access to parts of its internal systems. According to F5’s latest Quarterly Security Notification (October 2025), the compromise involved the BIG-IP product development environment and engineering knowledge platforms.
The investigation — with support from CrowdStrike, Mandiant, NCC Group, and IOActive — determined that the attacker exfiltrated:
F5 stated that there is no evidence of access to CRM, financial, or support systems and no compromise to the software supply chain. However, the exposure of source code and unpublished vulnerability details raises obvious concerns around potential future exploit development and risk to downstream deployments.
This incident underscores the growing targeting of critical infrastructure vendors by state actors — and the long dwell times these groups can maintain undetected.
Would be interested in hearing from the community how orgs relying on BIG-IP should approach threat modeling and patching strategies in scenarios where unpublished vuln intel may now be in adversarial hands.
TL;DR: During a text chat simulating a "nuisance dispute," the Gemini app initiated a 911 call from my Android device without any user prompt, consent, or verification. This occurred mid-"thinking" phase, with the Gemini app handing off to the Google app (which has the necessary phone permissions) for a direct OS Intent handover, bypassing standard Android confirmation dialogs. I canceled it in seconds, but the logs show it's a functional process. Similar reports have been noted since August 2025, with no update from Google.
To promote transparency and safety in AI development, I'm sharing the evidence publicly. This is based on my discovery during testing.
What I Discovered: During a text chat with Gemini on October 12, 2025, at approximately 2:04 AM, a simulated role-play escalated to a hypothetical property crime ("the guy's truck got stolen"). Gemini continuously advised me to call 911 ("this is the last time I am going to ask you"), but I refused ("no I'm OK"). Despite this, mid-"thinking" phase, Gemini triggered an outgoing call to 911 without further input. I canceled it before connection, but the phone's call log and Google Activity confirmed the attempt, attributed to the Gemini/Google app. When pressed, Gemini initially stated it could not take actions ("I cannot take actions"), reflecting that the LLM side of it is not aware of its real-world abilities, then acknowledged the issue after screenshots were provided, citing a "safety protocol" misinterpretation.
This wasn't isolated—there are at least five similar reports since June 2025, including a case of Gemini auto-dialing 112 after a joke about "shooting" a friend, and dispatcher complaints on r/911dispatchers in August.
How It Occurred (From the Logs): The process was enabled by Gemini's Android integration for phone access (rolled out July 2025). Here's the step-by-step from my Samsung Developer Diagnosis logs (timestamped October 12, 2:04 AM):
1. Trigger in Gemini's "Thinking" Phase (Pre-02:04:43): Gemini's backend logged: "Optimal action is to use the 'calling' tool... generated a code snippet to make a direct call to '911'." The safety scorer flagged the hypothetical as an imminent threat, queuing an ACTION_CALL Intent without user input.
2. Undisclosed Handover (02:04:43.729 - 02:04:43.732): The Google Search app (com.google.android.googlequicksearchbox, Gemini's host) initiated the call via the Telecom framework, exercising phone permissions beyond what was consented to for the user-facing Gemini app; this handover is not mentioned in the terms of service:
o CALL_HANDLE: Validated tel:911 as "Allowed" (emergency URI).
o CREATED: Created the Call object (OUTGOING, true for emergency mode—no account, self-managed=false for OS handoff).
o START_OUTGOING_CALL: Committed the Intent (tel:9*1 schemes, Audio Only), with extras like routing times and LAST_KNOWN_CELL_IDENTITY for location sharing.
3. Bypass Execution (02:04:43.841 - 02:04:43.921): No confirmation dialog—emergency true used Android's fast-path:
o START_CONNECTION: Handed to native dialer (com.android.phone).
o onCreateOutgoingConnection: Bundled emergency metadata (isEmergencyNumber: true, no radio toggle).
o Phone.dial: Outbound to tel:9*1 (isEmergency: true), state to DIALING in 0.011s.
4. UI Ripple & Cancel (02:04:43.685 - 02:04:45.765): InCallActivity launched ~0.023s after start ("Calling 911..." UI), but the call was initiated before the Phone app displayed on screen, leaving no time for veto. My hangup triggered onDisconnect (LOCAL, code 3/501), state to DISCONNECTED in ~2s total.
This flow shows the process as functional, with Gemini's model deciding and the system executing without user say.
Why Standard Safeguards Failed: Android's ACTION_CALL Intent normally requires user confirmation before dialing. My logs show zero ACTION_CALL usage (searchable: 0 matches across 200MB). Instead, Gemini used the Telecom framework's emergency pathway (isEmergency:true flag set at call creation, 02:04:43.729), which has 5ms routing versus 100-300ms for normal calls. This pathway exists for legitimate sensor-based crash detection features, but here was activated by conversational inference. By pre-flagging the call as emergency, Gemini bypassed the OS-level safeguard that protects users from unauthorized calling. The system behaved exactly as designed—the design is the vulnerability.
Permission Disclosure Issue: I had enabled two settings:
• "Make calls without unlocking"
• "Gemini on Lock Screen"
The permission description states: "Allow Gemini to make calls using your phone while the phone is locked. You can use your voice to make calls hands-free."
What the description omits:
• AI can autonomously decide to initiate calls without voice command
• AI can override explicit user refusal
• Emergency services can be called without any confirmation
• Execution happens via undisclosed Google app component, not user-facing Gemini app
When pressed, Gemini acknowledged: "This capability is not mentioned in the terms of service."
No reasonable user interpreting "use your voice to make calls hands-free" would understand this grants AI autonomous calling capability that can override explicit refusal.
Additional Discovery: Autonomous Gmail Draft Creation: During post-incident analysis, I discovered Gemini had autonomously created a Gmail draft email in my account without prompt or consent. The draft was dated October 12, 2025, at 9:56 PM PT (about 8 hours after the 2:04 AM call), with metadata including X-GM-THRID: 1845841255697276168, X-Gmail-Labels: Inbox,Important,Opened,Drafts,Category Personal, and Received via gmailapi.google.com with HTTPREST.
What the draft contained:
• Summary of the 911 call incident chat, pre-filled with my email as sender (recipient field blank).
• Gemini's characterization: "explicit, real-time report of a violent felony"
• Note that I had "repeated statements that you had not yet contacted emergency services"
• Recommendation to use "Send feedback" feature for submission to review team, with instructions to include screenshots.
Why this matters:
• I never requested email creation
• "Make calls without unlocking" permission mentions ONLY telephony - zero disclosure of Gmail access
• Chat transcript was extracted and pulled without consent
• Draft stored persistently in Gmail (searchable, accessible to Google)
• This reveals a pattern: autonomous action across multiple system integrations (telephony + email), all under single deceptively-described permission
Privacy implications:
• Private chat conversations can be autonomously extracted
• AI can generate emails using your identity without consent
• No notification, no confirmation, no user control
• Users cannot predict what other autonomous actions may occur
This is no longer just about one phone call - it's about whether users can trust that AI assistants respect boundaries of granted permissions.
Pattern Evidence: This is not an isolated incident:
• June 2025: Multiple reports on r/GeminiAI of autonomous calling
• August 2025: Google deployed update - issue persists
• September 2025: Report of medical discussion triggering 911 call
• October 2025: Additional reports on r/GoogleGeminiAI
• August 2025: Dispatcher complaints on r/911dispatchers about Gemini false calls
The 4+ month pattern with zero effective fix suggests this is systemic, not isolated.
Evidence Package: Complete package available below with all files and verification hashes.
Why This Matters: Immediate Risk:
• Users unknowingly granted capability exceeding described function
• Potential legal liability for false 911 calls (despite being victims)
• Emergency services disruption from false calls
Architectural Issue: The AI's conversational layer (LLM) is unaware of its backend action capabilities. Gemini denied it could "take actions" while its hidden backend was actively initiating calls. This disconnect makes the assistant's behavior impossible for users to predict.
Systemic Threat:
• Mass trigger potential: Coordinated prompts could trigger thousands of simultaneous false 911 calls
• Emergency services DoS: Even 10,000 calls could overwhelm regional dispatch
• Precedent: If AI autonomous override of explicit human refusal is acceptable for calling, what about financial transactions, vehicle control, or medical devices?
What I'm Asking: Community:
• Has anyone experienced similar autonomous actions from Gemini or other AI assistants?
• Developers: Insights on Android Intent handoffs and emergency pathway access?
• Discussion on appropriate safeguards for AI-inferred emergency responses
Actions Taken:
• Reported in-app immediately, and notified the proper authorities
• Evidence preserved and documented with chain of custody
• Cross-AI analysis: Collaboration between Claude (Anthropic) and Grok (xAI) for independent validation
Mitigation (For Users): If you've enabled Gemini phone calling features:
1. Disable "Make calls without unlocking"
2. Disable "Gemini on Lock Screen"
3. Check your call logs for unexpected outgoing calls
4. Review Gmail drafts for autonomous content
Disclosure Note: This analysis was conducted as good-faith security research on my own device with immediate call termination (zero harm caused, zero emergency services time wasted). Evidence is published in the public interest to protect other users and establish appropriate boundaries for AI autonomous action. *DO NOT attempt to recreate this in an uncontrolled environment; it could result in a real emergency call.*
Cross-AI validation by Claude (Anthropic) and Grok (xAI) provides independent verification of technical claims and threat assessment.
**Verification:**
Every file cryptographically hashed with SHA-256.
**SHA-256 ZIP Hash:**
482e158efcd3c2594548692a1c0e6e29c2a3d53b492b2e7797f8147d4ac7bea2
Verify after download: `certutil -hashfile Gemini_911_Evidence_FINAL.zip SHA256`
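certutil is Windows-specific; the same check can be scripted anywhere with Python's standard library. A small sketch (the filename is the archive above):

```python
import hashlib

EXPECTED = "482e158efcd3c2594548692a1c0e6e29c2a3d53b492b2e7797f8147d4ac7bea2"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# sha256_of("Gemini_911_Evidence_FINAL.zip") == EXPECTED  -> archive verified
```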
**All personally identifiable information (PII) has been redacted.**
The full in-depth evidence, including the debug data documenting these events, can be found at:
**Public archive:** [archive.org/details/gemini-911-evidence-final_202510](https://archive.org/details/gemini-911-evidence-final_202510)
**Direct download:** [Gemini_911_Evidence_FINAL.zip](https://archive.org/download/gemini-911-evidence-final_202510/Gemini_911_Evidence_FINAL.zip) (5.76 MB)

Cybercriminals tricked employees at major global companies into handing over Salesforce access and used that access to steal millions of customer records.
Here’s the McAfee breakdown on what happened, what information was leaked, and what you need to know to keep your data and identity safe:
Hackers claim they’ve stolen customer data from multiple major companies, including household names like Adidas, Cisco, Disney, Google, IKEA, Pandora, Toyota, and Vietnam Airlines. Security Week has reported throughout 2025 on a wave of social-engineering attacks exploiting human – rather than platform – vulnerabilities.
According to The Wall Street Journal, the hacking group has already released millions of Qantas Airlines customer records and is threatening to expose information from other companies next.
The data reportedly includes names, email addresses, phone numbers, dates of birth, and loyalty program details. While it doesn’t appear that financial data was included, this kind of personal information can still be exploited in phishing and scam campaigns.
Salesforce has issued multiple advisories stressing that these attacks stem from credential theft and malicious connected apps – not from a breach of its infrastructure.
Unfortunately, incidents like this aren’t rare, and they’re not limited to any one platform or industry. Even the most sophisticated companies can fall victim when hackers rely on social engineering and manipulation to breach secure systems.
Hackers reportedly called various companies’ employees pretending to be IT support staff—a tactic known as “vishing”—and convinced them to share login credentials or connect fake third-party tools, essentially handing the criminals the keys to their accounts. Once inside, they accessed customer databases and stole the information stored there.
Think of it less like a burglar breaking a lock, and more like someone being tricked into opening the door.
So far, leaked data appears to include:
There’s no indication of credit card or banking data in the confirmed leaks, but that doesn’t mean you’re in the clear.
Even if your financial information isn’t exposed in a data breach, personal details like name and address can still be used for targeted scams and phishing. When that information is stolen and sold online, scammers use it to:
Even if your data isn’t part of this specific leak, these attacks highlight how often your information moves through third-party systems you don’t control.
1) Change your passwords—today.
Use strong, unique passwords for every account. McAfee’s password manager can help. Try our random password generator here.
2) Turn on two-factor authentication (2FA).
Even if a hacker has your password, they can’t get in without your code.
3) Monitor your financial and loyalty accounts.
Watch for strange charges, redemptions, or password reset emails you didn’t request.
4) Freeze your credit.
It’s free and prevents new accounts from being opened in your name. You can unfreeze it anytime. McAfee users can employ a “security freeze” for extra protection.
5) Be extra cautious with “breach” emails or calls.
Scammers often pretend to be from affected companies to “help you secure your account.” Don’t click links or give information over the phone. Go directly to the company’s website or app or your own IT team if a breach happens at your workplace.
6) Consider identity protection.
McAfee’s built-in identity monitoring can monitor your personal info across the dark web, send alerts if your data appears in a breach, and include up to $1 million in coverage for identity recovery expenses.
Your data could already be out there, but you don’t have to leave it there.
McAfee helps you take back control. Using advanced artificial intelligence, McAfee’s Scam Detector automatically detects scams across text, email, and video, blocks dangerous links, and identifies deepfakes, stopping harm before it happens.
And McAfee’s Personal Data Cleanup can help you check which data brokers have your private details and request to have them removed on your behalf.
Stay ahead of scammers. Check your exposure, clean up your data, and protect your identity, all with McAfee.
Learn more about McAfee and McAfee Scam Detector.
What to do if you’re caught up in a data breach
How to delete yourself from the internet
How to spot phishing emails and scams
The post Hackers Trick Staff Into Exposing Major Companies’ Salesforce Data–Find Out if You’re Safe appeared first on McAfee Blog.
After years in cybersecurity, I realised how much of our industry’s focus goes to tools and exploits — and how rarely we step back to strengthen the principles behind them.
That insight led to Hacking Cybersecurity Principles, which launches today. It revisits the fundamentals — confidentiality, integrity, availability, governance, detection, response, and recovery — with a focus on how they guide modern operations and incident response.
If you’ve seen how quickly fundamentals get sidelined in favour of tactics, I’d be interested in your thoughts:
Which principle do you think we neglect most in security practice?
(Details here if you’re curious: www.cyops.com.au)
I've been working for a while on this subdomain discovery tool, optimized for speed. It passively gathers subdomains from a curated list of online sources rather than actively probing the target. Let me know what you think, and ideally let me know of any bugs!
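To illustrate the passive-source idea, here is a minimal sketch of parsing one common source, crt.sh certificate-transparency data. The response shape mirrors crt.sh's public JSON output, but the sample is canned so nothing is queried; this is not the tool's actual code.

```python
import json

# Canned sample shaped like crt.sh's JSON output (?q=%.example.com&output=json).
SAMPLE = json.dumps([
    {"name_value": "www.example.com\ndev.example.com"},  # multi-SAN certs
    {"name_value": "api.example.com"},
    {"name_value": "*.example.com"},                     # wildcard entries are common
])

def parse_crtsh(body: str, domain: str) -> set[str]:
    """Extract unique, non-wildcard subdomains from a crt.sh JSON response."""
    subs = set()
    for entry in json.loads(body):
        for name in entry["name_value"].splitlines():
            name = name.strip().lower()
            if name.endswith("." + domain) and "*" not in name:
                subs.add(name)
    return subs

print(sorted(parse_crtsh(SAMPLE, "example.com")))
# ['api.example.com', 'dev.example.com', 'www.example.com']
```

A fast tool queries many such sources concurrently and unions the results; deduplication into a set, as above, is what keeps the output clean.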
With the recent GitHub MCP vulnerability demonstrating how prompt injection can leverage overprivileged tokens to exfiltrate private repository data, I wanted to share our approach to MCP security through proxying.
The Core Problem: MCP tools often run with full access tokens (GitHub PATs with repo-wide access, AWS creds with AdminAccess, etc.) and no runtime boundaries. It's essentially pre-sandbox JavaScript with filesystem access. A single malicious prompt or compromised server can access everything.
Why Current Auth is Broken:
MCP Snitch: An open source security proxy that implements the mediation layer MCP lacks:
What It Doesn't Solve:
The browser security model took 25 years to evolve from "JavaScript can delete your files" to today's sandboxed processes with granular permissions. MCP needs the same evolution, but the risks are immediate. Until IDEs implement proper sandboxing and MCP gets protocol-level security primitives, proxy-based security is the practical defense.
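The mediation idea boils down to a default-deny policy check sitting between the client and the real MCP server. A toy sketch follows; the policy format and all names are mine for illustration, not MCP Snitch's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Rules a proxy enforces before forwarding a tool call (default-deny)."""
    allowed_tools: set = field(default_factory=set)
    denied_substrings: dict = field(default_factory=dict)  # tool -> {patterns}

def mediate(policy: Policy, tool: str, args: dict) -> tuple:
    """Decide whether to forward an MCP tool call to the upstream server."""
    if tool not in policy.allowed_tools:
        return False, f"tool '{tool}' not in allowlist"
    for bad in policy.denied_substrings.get(tool, set()):
        # Block calls whose arguments touch a denied pattern, e.g. a
        # private repo name an injected prompt is trying to reach.
        if any(bad in str(v) for v in args.values()):
            return False, f"argument hits denied pattern '{bad}'"
    return True, "forwarded"

policy = Policy(
    allowed_tools={"list_issues", "create_issue"},
    denied_substrings={"create_issue": {"private-repo"}},
)

print(mediate(policy, "list_issues", {"repo": "acme/public"}))  # (True, 'forwarded')
print(mediate(policy, "delete_repo", {"repo": "acme/public"}))  # (False, ...)
```

A real proxy would apply this check to every request on the wire and could additionally log, prompt the user, or rewrite calls, but the allowlist-plus-argument-filter core is the essential mediation step MCP itself lacks.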
GitHub: github.com/Adversis/mcp-snitch