FreshRSS


Pentesting Next.js Server Actions

Next.js server actions present an interesting challenge during penetration tests. These server-side functions appear in proxy tools as POST requests with hashed identifiers like a9fa42b4c7d1 in the Next-Action header, making it difficult to understand what each request actually does. When applications have productionBrowserSourceMaps enabled, the NextjsServerActionAnalyzer Burp extension bridges that gap by automatically mapping these hashes to their actual function names.

During a typical web application assessment, endpoints usually have descriptive names and methods: GET /api/user/1 clearly indicates its purpose. Next.js server actions work differently. They all POST to the same endpoint, distinguished only by hash values that change with each build. Without tooling, testers must manually track which hash performs which action—a time-consuming process that becomes impractical with larger applications.

The extension's effectiveness stems from understanding how Next.js bundles server actions in production. When productionBrowserSourceMaps is enabled, JavaScript chunks contain mappings between action hashes and their original function names.

The tool simply uses flexible regex patterns to extract these mappings from minified JavaScript.
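
To make that concrete, here's a rough Python sketch of the extraction step. The exact call shape varies between Next.js versions, so treat the pattern below as illustrative rather than the extension's actual regexes:

```python
import re

# Assumed bundle shape: minified chunks contain calls roughly like
# createServerReference("<hex id>", callServer, void 0, "functionName").
# This is an illustrative pattern, not the extension's real one.
ACTION_RE = re.compile(
    r'createServerReference\)?\(\s*"(?P<hash>[0-9a-f]{12,64})"'
    r'(?:[^)]*?"(?P<name>[A-Za-z_$][\w$]*)")?'
)

def extract_action_map(chunk_js: str) -> dict:
    """Map server-action hash IDs to function names found in one JS chunk."""
    mapping = {}
    for m in ACTION_RE.finditer(chunk_js):
        mapping[m.group("hash")] = m.group("name") or "<unknown>"
    return mapping
```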

The extension automatically scans proxy history for JavaScript chunks, identifies those containing createServerReference calls, and builds a comprehensive mapping of hash IDs to function names.

Rather than simply tracking which hash IDs have been executed, it tracks function names. This is important since the same function might have different hash IDs across builds, but the function name will remain constant.

For example, if deleteUserAccount() has a hash of a9f8e2b4c7d1 in one build and b7e3f9a2d8c5 in another, manual tracking would treat these as different actions. The extension recognizes they're the same function, providing accurate unused-action detection even across multiple application versions.

A useful feature of the extension is its ability to transform discovered but unused actions into testable requests. When you identify an unused action like exportFinancialData(), the extension can automatically:

  1. Find a template request with proper Next.js headers
  2. Replace the action ID with the unused action's hash
  3. Create a ready-to-test request in Burp Repeater

This removes the tedium of manually crafting server action requests.
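
In raw terms, the retargeting step is little more than a header swap. A minimal sketch, assuming a captured template request (the hashes, path, and body below are placeholders):

```python
def retarget_server_action(raw_request: str, old_hash: str, new_hash: str) -> str:
    """Point a captured server-action request at a different action hash."""
    return raw_request.replace(old_hash, new_hash)

template = (
    "POST /dashboard HTTP/2\r\n"
    "Host: target.example\r\n"
    "Next-Action: a9fa42b4c7d1\r\n"
    "Content-Type: text/plain;charset=UTF-8\r\n"
    "\r\n"
    '["argument-placeholder"]'
)

# Re-point the template at the unused action's hash before sending to Repeater.
print(retarget_server_action(template, "a9fa42b4c7d1", "b7e3f9a2d8c5"))
```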

We recently assessed a Next.js application with dozens of server actions. The client had left productionBrowserSourceMaps enabled in their production environment—a common configuration that includes debugging information in JavaScript files. This presented an opportunity to improve our testing methodology.

Using the Burp extension, we:

  1. Captured server action requests during normal application usage
  2. Extracted function names from the source maps in JavaScript bundles
  3. Mapped hashes to functions like updateUserProfile() and fetchReportData()
  4. Discovered unused actions that weren't triggered through the UI

The function name mapping transformed our testing approach. Instead of tracking anonymous hashes, we could see that b7e3f9a2 mapped to deleteUserAccount() and c4d8b1e6 mapped to exportUserData(). This clarity helped us create more targeted test cases.

https://github.com/Adversis/NextjsServerActionAnalyzer

submitted by /u/ok_bye_now_
[link] [comments]

Leveraging Machine Learning to Enhance Acoustic Eavesdropping Attacks (Blog Series)

Check out our in-progress blog series on reproducing the use of MEMS devices to perform acoustic eavesdropping.

submitted by /u/cc-sw
[link] [comments]

Hey defenders — what are your “Nine Pillars” of security? (Chicago workshop + happy hour, Oct 29)

Hey folks,
For those in infrastructure, ops, or security analysis (the analysts, engineers, and defenders building resilience every day), there's a live cybersecurity workshop in Chicago that digs into practical paranoia and how that mindset strengthens modern defense.

The Nine Pillars of Practical Paranoia, led by Chris Young (30+ yrs in IT & security), is a discussion-based, no-fluff session focused on war stories, real tactics, and lessons you can apply tomorrow.

When: Oct 29, 2 – 4 PM
Where: Civic Opera House – Chicago Loop
Followed by a casual happy hour to keep the conversation going

What we’ll cover — the Nine Pillars:

  1. Visibility & Logging
  2. Access Control
  3. Network Segmentation
  4. Patch & Configuration Hygiene
  5. Threat Intelligence & Detection
  6. Response Readiness
  7. Insider Awareness
  8. Resilience & Recovery
  9. Continuous Validation

Don’t be shy — what would your top 8–9 pillars of defense look like?
(Always curious how other orgs define their “core security truths.”)

submitted by /u/RedLeggTeam
[link] [comments]

Stealth BGP Hijacks with uRPF Filtering

uRPF prevents the IP spoofing used in volumetric DDoS attacks. However, uRPF itself appears to be vulnerable to route hijacking.

submitted by /u/krizhanovsky
[link] [comments]

[Article] Kerberos Security: Attacks and Detection

This is research on detecting Kerberos attacks based on network traffic analysis and creating signatures for Suricata IDS.

submitted by /u/caster0x00
[link] [comments]

Better-Auth Critical Account Takeover via Unauthenticated API Key Creation (CVE-2025-61928)

A complete account takeover, found with AI, affecting any application using better-auth with API keys enabled. With 300k weekly downloads, it likely affects a large number of projects. Some of the folks using it can be found here: https://github.com/better-auth/better-auth/discussions/2581.

submitted by /u/Prior-Penalty
[link] [comments]

Tunneling WireGuard over HTTPS using Wstunnel

WireGuard is a great VPN protocol. However, you may come across networks blocking VPN connections, sometimes including WireGuard. For such cases, try tunneling WireGuard over HTTPS, which is typically (far) less often blocked. Here's how to do so, using Wstunnel.

submitted by /u/0bs1d1an-
[link] [comments]

How a fake AI recruiter delivers five staged malware disguised as a dream job

Sophisticated multi-stage malware campaign delivered through LinkedIn by fake recruiters, disguised as a coding interview round.

Read the research on how the campaign was reverse-engineered to uncover the C2 infrastructure, the tactics used, and all the related IOCs.

submitted by /u/shantanu14g
[link] [comments]

F5 Data Breach: What Happened and How It Impacts You

In August 2025, F5 detected that a sophisticated nation-state threat actor had maintained persistent access to parts of its internal systems. According to F5’s latest Quarterly Security Notification (October 2025), the compromise involved the BIG-IP product development environment and engineering knowledge platforms.

The investigation — with support from CrowdStrike, Mandiant, NCC Group, and IOActive — determined that the attacker exfiltrated:

  • Portions of BIG-IP source code
  • Details on undisclosed vulnerabilities under development
  • Configuration/implementation details for some customers
  • Engineering documentation from internal platforms

F5 stated that there is no evidence of access to CRM, financial, or support systems and no compromise to the software supply chain. However, the exposure of source code and unpublished vulnerability details raises obvious concerns around potential future exploit development and risk to downstream deployments.

This incident underscores the growing targeting of critical infrastructure vendors by state actors — and the long dwell times these groups can maintain undetected.
Would be interested in hearing from the community how orgs relying on BIG-IP should approach threat modeling and patching strategies in scenarios where unpublished vuln intel may now be in adversarial hands.

submitted by /u/digitalgiant01
[link] [comments]

Notice: Google Gemini AI's Undisclosed 911 Auto-Dial Bypass – Logs and Evidence Available

TL;DR: During a text chat simulating a "nuisance dispute," the Gemini app initiated a 911 call from my Android device without any user prompt, consent, or verification. This occurred mid-"thinking" phase, with the Gemini app handing off to the Google app (which has the necessary phone permissions) for a direct OS Intent handover, bypassing standard Android confirmation dialogs. I canceled it in seconds, but the logs show it's a functional process. Similar reports have been noted since August 2025, with no update from Google.

To promote transparency and safety in AI development, I'm sharing the evidence publicly. This is based on my discovery during testing.

What I Discovered: During a text chat with Gemini on October 12, 2025, at approximately 2:04 AM, a simulated role-play escalated to a hypothetical property crime ("the guy's truck got stolen"). Gemini continuously advised me to call 911 ("this is the last time I am going to ask you"), but I refused ("no I'm OK"). Despite this, mid-"thinking" phase, Gemini triggered an outgoing call to 911 without further input. I canceled it before connection, but the phone's call log and Google Activity confirmed the attempt, attributed to the Gemini/Google app. When pressed, Gemini initially stated it could not take actions ("I cannot take actions"), reflecting that the LLM side of it is not aware of its real-world abilities, then acknowledged the issue after screenshots were provided, citing a "safety protocol" misinterpretation.

This wasn't isolated—there are at least five similar reports since June 2025, including a case of Gemini auto-dialing 112 after a joke about "shooting" a friend, and dispatcher complaints on r/911dispatchers in August.

How It Occurred (From the Logs): The process was enabled by Gemini's Android integration for phone access (rolled out July 2025). Here's the step-by-step from my Samsung Developer Diagnosis logs (timestamped October 12, 2:04 AM):

1. Trigger in Gemini's "Thinking" Phase (Pre-02:04:43): Gemini's backend logged: "Optimal action is to use the 'calling' tool... generated a code snippet to make a direct call to '911'." The safety scorer flagged the hypothetical as an imminent threat, queuing an ACTION_CALL Intent without user input.

2. Undisclosed Handover (02:04:43.729 - 02:04:43.732): The Google Search app (com.google.android.googlequicksearchbox, Gemini's host) initiated via Telecom framework, accessing phone permissions beyond what the user-facing Gemini app is consented for, as this is not mentioned in the terms of service:

• CALL_HANDLE: Validated tel:911 as "Allowed" (emergency URI).

• CREATED: Created the Call object (OUTGOING, true for emergency mode—no account, self-managed=false for OS handoff).

• START_OUTGOING_CALL: Committed the Intent (tel:9*1 schemes, Audio Only), with extras like routing times and LAST_KNOWN_CELL_IDENTITY for location sharing.

3. Bypass Execution (02:04:43.841 - 02:04:43.921): No confirmation dialog—emergency true used Android's fast-path:

• START_CONNECTION: Handed to native dialer (com.android.phone).

• onCreateOutgoingConnection: Bundled emergency metadata (isEmergencyNumber: true, no radio toggle).

• Phone.dial: Outbound to tel:9*1 (isEmergency: true), state to DIALING in 0.011s.

4. UI Ripple & Cancel (02:04:43.685 - 02:04:45.765): InCallActivity launched ~0.023s after start ("Calling 911..." UI), but the call was initiated before the Phone app displayed on screen, leaving no time for veto. My hangup triggered onDisconnect (LOCAL, code 3/501), state to DISCONNECTED in ~2s total.

This flow shows the process as functional, with Gemini's model deciding and the system executing without user say.

Why Standard Safeguards Failed: Android's ACTION_CALL Intent normally requires user confirmation before dialing. My logs show zero ACTION_CALL usage (searchable: 0 matches across 200MB). Instead, Gemini used the Telecom framework's emergency pathway (isEmergency:true flag set at call creation, 02:04:43.729), which has 5ms routing versus 100-300ms for normal calls. This pathway exists for legitimate sensor-based crash detection features, but here was activated by conversational inference. By pre-flagging the call as emergency, Gemini bypassed the OS-level safeguard that protects users from unauthorized calling. The system behaved exactly as designed—the design is the vulnerability.

Permission Disclosure Issue: I had enabled two settings:

• "Make calls without unlocking"

• "Gemini on Lock Screen"

The permission description states: "Allow Gemini to make calls using your phone while the phone is locked. You can use your voice to make calls hands-free."

What the description omits:

• AI can autonomously decide to initiate calls without voice command

• AI can override explicit user refusal

• Emergency services can be called without any confirmation

• Execution happens via undisclosed Google app component, not user-facing Gemini app

When pressed, Gemini acknowledged: "This capability is not mentioned in the terms of service."

No reasonable user interpreting "use your voice to make calls hands-free" would understand this grants AI autonomous calling capability that can override explicit refusal.

Additional Discovery: Autonomous Gmail Draft Creation: During post-incident analysis, I discovered Gemini had autonomously created a Gmail draft email in my account without prompt or consent. The draft was dated October 12, 2025, at 9:56 PM PT (about 8 hours after the 2:04 AM call), with metadata including X-GM-THRID: 1845841255697276168, X-Gmail-Labels: Inbox,Important,Opened,Drafts,Category Personal, and Received via gmailapi.google.com with HTTPREST.

What the draft contained:

• Summary of the 911 call incident chat, pre-filled with my email as sender (recipient field blank).

• Gemini's characterization: "explicit, real-time report of a violent felony"

• Note that I had "repeated statements that you had not yet contacted emergency services"

• Recommendation to use "Send feedback" feature for submission to review team, with instructions to include screenshots.

Why this matters:

• I never requested email creation

• "Make calls without unlocking" permission mentions ONLY telephony - zero disclosure of Gmail access

• Chat transcript was extracted and pulled without consent

• Draft stored persistently in Gmail (searchable, accessible to Google)

• This reveals a pattern: autonomous action across multiple system integrations (telephony + email), all under a single deceptively described permission

Privacy implications:

• Private chat conversations can be autonomously extracted

• AI can generate emails using your identity without consent

• No notification, no confirmation, no user control

• Users cannot predict what other autonomous actions may occur

This is no longer just about one phone call - it's about whether users can trust that AI assistants respect boundaries of granted permissions.

Pattern Evidence: This is not an isolated incident:

• June 2025: Multiple reports on r/GeminiAI of autonomous calling

• August 2025: Google deployed update - issue persists

• September 2025: Report of medical discussion triggering 911 call

• October 2025: Additional reports on r/GoogleGeminiAI

• August 2025: Dispatcher complaints on r/911dispatchers about Gemini false calls

The 4+ month pattern with zero effective fix suggests this is systemic, not isolated.

Evidence Package: Complete package available below with all files and verification hashes.

Why This Matters: Immediate Risk:

• Users unknowingly granted capability exceeding described function

• Potential legal liability for false 911 calls (despite being victims)

• Emergency services disruption from false calls

Architectural Issue: The AI's conversational layer (LLM) is unaware of its backend action capabilities. Gemini denied it could "take actions" while its hidden backend was actively initiating calls. This disconnect makes the assistant's behavior impossible for users to predict.

Systemic Threat:

• Mass trigger potential: Coordinated prompts could trigger thousands of simultaneous false 911 calls

• Emergency services DoS: Even 10,000 calls could overwhelm regional dispatch

• Precedent: If AI autonomous override of explicit human refusal is acceptable for calling, what about financial transactions, vehicle control, or medical devices?

What I'm Asking: Community:

• Has anyone experienced similar autonomous actions from Gemini or other AI assistants?

• Developers: Insights on Android Intent handoffs and emergency pathway access?

• Discussion on appropriate safeguards for AI-inferred emergency responses

Actions Taken:

• Reported in-app immediately, and to the proper authorities

• Evidence preserved and documented with chain of custody

• Cross-AI analysis: Collaboration between Claude (Anthropic) and Grok (xAI) for independent validation

Mitigation (For Users): If you've enabled Gemini phone calling features:

1. Disable "Make calls without unlocking"

2. Disable "Gemini on Lock Screen"

3. Check your call logs for unexpected outgoing calls

4. Review Gmail drafts for autonomous content

Disclosure Note: This analysis was conducted as good-faith security research on my own device with immediate call termination (zero harm caused, zero emergency services time wasted). Evidence is published in the public interest to protect other users and establish appropriate boundaries for AI autonomous action. *DO NOT attempt to recreate this in an uncontrolled environment; it could result in a real emergency call.*

Cross-AI validation by Claude (Anthropic) and Grok (xAI) provides independent verification of technical claims and threat assessment.

**Verification:**

Every file cryptographically hashed with SHA-256.

**SHA-256 ZIP Hash:**

482e158efcd3c2594548692a1c0e6e29c2a3d53b492b2e7797f8147d4ac7bea2

Verify after download: `certutil -hashfile Gemini_911_Evidence_FINAL.zip SHA256`

**All personally identifiable information (PII) has been redacted.**

The full in-depth evidence details, with debug data documenting these events, can be found at:

**Public archive:** [archive.org/details/gemini-911-evidence-final_202510](https://archive.org/details/gemini-911-evidence-final_202510)

**Direct download:** [Gemini_911_Evidence_FINAL.zip](https://archive.org/download/gemini-911-evidence-final_202510/Gemini_911_Evidence_FINAL.zip) (5.76 MB)

submitted by /u/caveman1100011
[link] [comments]

Sharing a resource I wish I’d had earlier in my InfoSec career

After years in cybersecurity, I realised how much of our industry’s focus goes to tools and exploits — and how rarely we step back to strengthen the principles behind them.

That insight led to Hacking Cybersecurity Principles, which launches today. It revisits the fundamentals — confidentiality, integrity, availability, governance, detection, response, and recovery — with a focus on how they guide modern operations and incident response.

If you’ve seen how quickly fundamentals get sidelined in favour of tactics, I’d be interested in your thoughts:

Which principle do you think we neglect most in security practice?

(Details here if you’re curious: www.cyops.com.au)

submitted by /u/Info-Raptor
[link] [comments]

Free to use, passive subdomain enumerator

I've been working on this subdomain discovery tool, optimized for speed, for a while. It passively gathers subdomains from a curated list of online sources rather than actively probing the target. Let me know what you think, and ideally let me know of any bugs!

submitted by /u/Mparigas
[link] [comments]

MCP Snitch - The MCP Security Tool You Probably Need

With the recent GitHub MCP vulnerability demonstrating how prompt injection can leverage overprivileged tokens to exfiltrate private repository data, I wanted to share our approach to MCP security through proxying.

The Core Problem: MCP tools often run with full access tokens (GitHub PATs with repo-wide access, AWS creds with AdminAccess, etc.) and no runtime boundaries. It's essentially pre-sandbox JavaScript with filesystem access. A single malicious prompt or compromised server can access everything.

Why Current Auth is Broken:

  • Want to read one GitHub issue? Your token needs full repo access to ALL repositories
  • OAuth 2.1 RAR could fix this but has zero adoption
  • API providers have no economic incentive to implement granular, temporal scoping

MCP Snitch: An open source security proxy that implements the mediation layer MCP lacks:

  • Whitelist-based access control (default deny, explicitly allow operations; see the sketch after this list)
  • Runtime permission requests with UI visibility
  • API key detection and blocking
  • Comprehensive logging of all operations
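
To illustrate the default-deny idea, here's a sketch of the concept — this is not MCP Snitch's actual configuration format or code:

```python
# Hypothetical allowlist: server -> set of permitted tool operations.
ALLOWLIST = {
    "github": {"get_issue", "list_issues"},  # read-only operations only
    "filesystem": {"read_file"},
}

def log_decision(server: str, tool: str, allowed: bool) -> None:
    """Comprehensive logging of every mediated operation."""
    print(f"{'ALLOW' if allowed else 'DENY'} {server}.{tool}")

def mediate(server: str, tool: str) -> bool:
    """Default deny: a tool call goes through only if explicitly allowed."""
    allowed = tool in ALLOWLIST.get(server, set())
    log_decision(server, tool, allowed)
    # A real proxy could surface a runtime permission prompt here
    # instead of a hard deny.
    return allowed

mediate("github", "get_issue")    # ALLOW
mediate("github", "delete_repo")  # DENY
```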

What It Doesn't Solve:

  • Supply chain attacks (compromised npm/pip packages)
  • Persistence mechanisms (SSH keys, cron jobs)
  • Out-of-band operations (direct network calls from MCP servers)

The browser security model took 25 years to evolve from "JavaScript can delete your file" to today's sandboxed processes with granular permissions. MCP needs the same evolution but the risks are immediate. Until IDEs implement proper sandboxing and MCP gets protocol-level security primitives, proxy-based security is the practical defense.

GitHub: github.com/Adversis/mcp-snitch

submitted by /u/ok_bye_now_
[link] [comments]

Blind Enumeration of gRPC Services

We were testing a black-box service for a client with an interesting software platform. They'd provided an SDK with minimal documentation—just enough to show basic usage, but none of the underlying service definitions. The SDK binary was obfuscated, and the gRPC endpoints it connected to had reflection disabled.

After spending too much time piecing together service names from SDK string dumps and network traces, we built grpc-scan to automate what we were doing manually: exploiting how gRPC implementations handle invalid requests to enumerate services without any prior knowledge.

Unlike REST APIs where you can throw curl at an endpoint and see what sticks, gRPC operates over HTTP/2 using binary Protocol Buffers. Every request needs:

  • The exact service name (case-sensitive)
  • The exact method name (also case-sensitive)
  • Properly serialized protobuf messages

Miss any of these and you get nothing useful. There's no OPTIONS request, usually little documentation, and no guessing that /api/v1/users might exist. You either have the proto files or you're blind.

Most teams rely on server reflection—a gRPC feature that lets clients query available services. But reflection is usually disabled in production. It’s an information disclosure risk, yet developers rarely provide alternative documentation.

But gRPC implementations emit varying error messages that inadvertently leak service existence through distinct error codes:

```
# Calling non-existent service
unknown service FakeService

# Real service, wrong method
unknown method FakeMethod for service UserService

# Real service and method
missing authentication token
```

These distinct responses let us map the attack surface. The tool automates this process, testing thousands of potential service/method combinations based on various naming patterns we've observed.
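
A minimal Python sketch of that oracle using grpcio (the target address and names are placeholders; real probing also needs rate limiting and result bookkeeping):

```python
import grpc

def probe(channel: grpc.Channel, service: str, method: str) -> grpc.StatusCode:
    """Call service/method with an empty payload and return the status code.

    UNIMPLEMENTED suggests the service or method doesn't exist; codes like
    UNAUTHENTICATED or INVALID_ARGUMENT suggest the route exists and we
    reached a later check.
    """
    stub = channel.unary_unary(
        f"/{service}/{method}",
        request_serializer=lambda _: b"",     # send an empty message body
        response_deserializer=lambda raw: raw,
    )
    try:
        stub(None, timeout=2)
        return grpc.StatusCode.OK
    except grpc.RpcError as err:
        return err.code()

channel = grpc.insecure_channel("target.example:50051")  # placeholder target
print(probe(channel, "UserService", "GetUser"))
```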

The enumeration engine does a few things:

1. Even when reflection is "disabled," servers often still respond to reflection requests with errors that confirm the protocol exists. We use this for fingerprinting.

2. For a base word like "User", we generate likely service names:

  • User
  • UserService
  • Users
  • UserAPI
  • user.User
  • api.v1.User
  • com.company.User

Each pattern is tested with common method names: Get, List, Create, Update, Delete, Search, Find, etc.
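
Candidate generation is simple to sketch (the packages and suffixes below are illustrative, not the tool's full wordlists):

```python
SUFFIXES = ["", "Service", "s", "API"]
PACKAGES = ["", "user.", "api.v1.", "com.company."]
METHODS = ["Get", "List", "Create", "Update", "Delete", "Search", "Find"]

def candidates(base: str):
    """Yield (service, method) pairs for a base word like 'User'."""
    for pkg in PACKAGES:
        for suffix in SUFFIXES:
            service = f"{pkg}{base}{suffix}"
            for method in METHODS:
                yield service, method          # e.g. ("api.v1.UserService", "Get")
                yield service, method + base   # e.g. ("api.v1.UserService", "GetUser")

for svc, meth in list(candidates("User"))[:6]:
    print(svc, meth)
```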

3. Different gRPC implementations return subtly different error codes:

  • UNIMPLEMENTED vs NOT_FOUND for missing services
  • INVALID_ARGUMENT vs INTERNAL for malformed requests
  • Timing differences between auth checks and method validation

4. gRPC's HTTP/2 foundation means we can multiplex hundreds of requests over a single TCP connection. The tool maintains a pool of persistent connections, improving scan speed.

What do we commonly see in pentests involving gRPC?

Service Sprawl from Migrations

SDK analysis often reveals parallel service implementations, for example:

  • UserService - The original monolith endpoint
  • AccountManagementService - New microservice, full auth
  • UserDataService - Read-only split-off, inconsistent auth
  • UserProfileService - Another team's implementation

These typically emerge from partial migrations where different teams own different pieces. The older services often bypass newer security controls.

Method Proliferation and Auth Drift

Real services accumulate method variants over time, for example

  • GetUser - Original, added auth in v2
  • GetUserDetails - Different team, no auth check
  • FetchUserByID - Deprecated but still active
  • GetUserWithPreferences - Calls GetUser internally, skips auth

So newer methods that compose older ones sometimes bypass security checks the original methods later acquired.

Package Namespace Archaeology

Service discovery reveals organizational history:

  • com.startup.api.Users - Original service
  • platform.users.v1.UserAPI - Post-merge standardization attempt
  • internal.batch.UserBulkService - "Internal only" but on same endpoint

Each namespace generation typically has different security assumptions. Internal services exposed on the same port as public APIs are surprisingly common—developers assume network isolation that doesn't exist.

Limitations

  • Services expecting specific protobuf structures still require manual work. We can detect that UserService/CreateUser exists, but crafting a valid User message requires the proto definition, guesswork, or reverse engineering of the SDK's serialization.
  • The current version focuses on unary calls. Bidirectional streaming (common in real-time features) needs different handling.

Available at https://github.com/Adversis/grpc-scan. Pull requests welcome.

submitted by /u/ok_bye_now_
[link] [comments]

Living off Node.js Addons

Native Modules

Compiled Node.js addons (.node files) are binary modules that allow Node.js applications to interface with native code written in languages like C, C++, or Objective-C.

Unlike JavaScript files, which are mostly readable assuming they're not obfuscated or minified, .node files are compiled binaries that can contain machine code and run with the same privileges as the Node.js process that loads them, without the constraints of the JavaScript sandbox. These extensions can directly call system APIs and perform operations that pure JavaScript code cannot.

These addons can use Objective-C++ to leverage native macOS APIs directly from Node.js. This allows arbitrary code execution outside the normal sandboxing that would constrain a typical Electron application.

ASAR Integrity

When an Electron application uses a module that contains a compiled .node file, it automatically loads and executes the binary code within it. Many Electron apps use the ASAR (Atom Shell Archive) file format to package the application's source code. ASAR integrity checking is a security feature that checks the file integrity and prevents tampering with files within the ASAR archive. It is disabled by default.

When ASAR integrity is enabled, your Electron app will verify the header hash of the ASAR archive on runtime. If no hash is present or if there is a mismatch in the hashes, the app will forcefully terminate.

This prevents files from being modified within the ASAR archive. Note that it appears the integrity check is a string that you can regenerate after modifying files, then find and replace in the executable file as well. See more here.

But many applications run code from outside the verified archive, under app.asar.unpacked, since compiled .node files (the native modules) cannot be executed directly from within an ASAR archive.

And so even with the proper security features enabled, a local attacker can modify or replace .node files within the unpacked directory - not so different than DLL hijacking on Windows.
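
The core check such a scanner performs is easy to sketch: walk installed apps and flag .node binaries shipped under app.asar.unpacked (macOS paths assumed; this is an illustration of the idea, not the authors' tool):

```python
import os
from pathlib import Path

APPS = Path("/Applications")  # macOS convention; adjust per platform

def find_candidate_addons():
    """Yield (.node path, writable?) for addons shipped outside the ASAR."""
    for unpacked in APPS.glob("*.app/Contents/Resources/app.asar.unpacked"):
        for node_file in unpacked.rglob("*.node"):
            yield node_file, os.access(node_file, os.W_OK)

for path, writable in find_candidate_addons():
    print(("WRITABLE " if writable else "") + str(path))
```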

We wrote two tools - one to find Electron applications that aren’t hardened against this, and one to simply compile Node.js addons.

  1. Electron ASAR Scanner - A tool that assesses whether Electron applications implement ASAR integrity protection and identifies useful .node files
  2. NodeLoader - A simple native Node.js addon compiler capable of launching macOS applications and shell commands
submitted by /u/ok_bye_now_
[link] [comments]

Supply Chain Attack Vector Analysis: 250% Surge Prompts CISA Emergency Response

Interesting data point from CISA's latest emergency directive - supply chain attacks have increased 250% from 2021-2024 (62→219 incidents).

Technical breakdown:

  • Primary attack vector: Third-party vendor compromise (45% of incidents)
  • Average dwell time in supply chain attacks: 287 days vs 207 days for direct attacks
  • Detection gap remains significant
  • Cost differential: $5.12M (supply chain) vs $4.45M (direct attacks)

CISA's directive focuses on:

  • Zero-trust architecture implementation
  • SBOM (Software Bill of Materials) requirements
  • Continuous vendor risk assessment

Massachusetts highlighted as high-risk due to tech sector density and critical infrastructure.

Would be interested in hearing from those implementing SBOM strategies - what tools/frameworks are working?

submitted by /u/Hot_Lengthiness1173
[link] [comments]

CISA Emergency Directive: AI-Powered Phishing Campaign Analysis - 300% Surge, $2.3B Q3 Losses

CISA's Automated Indicator Sharing (AIS) program is showing concerning metrics on AI-driven phishing campaigns:

Technical Overview:

  • 300% YoY increase in AI-generated phishing attempts
  • Attack sophistication score: 3.2 → 8.7 (out of 10)
  • 85% targeting US infrastructure
  • ML algorithms analyzing target orgs' communication patterns, employee behavior, business relationships
  • Real-time generation of unique, personalized vectors

Threat Intelligence: FBI Cyber Division reports campaigns using advanced NLP to create contextually relevant emails that mimic legitimate business comms. Over 200 US organizations compromised in 30 days.

Attack Chain Evolution: Traditional phishing relied on generic templates + basic social engineering. Current wave utilizes ML to generate thousands of unique, personalized emails in real-time, making signature-based detection largely ineffective.

NIST predicts 90% of successful breaches in 2025 will originate from AI-powered campaigns.

Detailed analysis with case studies and mitigation frameworks: https://cyberupdates365.com/ai-phishing-attacks-surge-300-percent-us-cisa-emergency-alert/

Interested in technical discussion on effective countermeasures beyond traditional email filtering.

submitted by /u/Street-Time-8159
[link] [comments]

From CPU Spikes to Defense

We just published a case study about an Australian law firm that noticed two employees accessing a bunch of sensitive files. The behavior was flagged using UEBA, which triggered alerts based on deviations from normal access patterns. The firm dug in and found signs of lateral movement and privilege escalation attempts.

They were able to lock things down before any encryption or data exfiltration happened. No payload, no breach.

It’s a solid example of how behavioral analytics and least privilege enforcement can actually work in practice.

Curious what’s working for others in their hybrid environments?

submitted by /u/Varonis-Dan
[link] [comments]

Active Directory domain join(own) accounts revisited 2025

Domain join accounts are frequently exposed during build processes, and even when following Microsoft’s current guidance they inherit over-privileged ACLs (ownership, read-all, account restrictions) that enable LAPS disclosure, RBCD and other high-impact abuses.

Hardening requires layering controls such as disallowing low-privileged users from creating machine accounts and ensuring that Domain Admins own joined computer objects. In addition, add deny ACEs for LAPS (ms-Mcs-AdmPwd) and RBCD (msDS-AllowedToActOnBehalfOfOtherIdentity) while scoping create/delete rights to specific OUs.

Even with those mitigations, reset-password rights can be weaponised via replication lag plus AD CS to recover the pre-reset machine secret.

Dig into this post to see the lab walkthroughs, detection pointers and scripts that back these claims.

submitted by /u/ivxrehc
[link] [comments]