
Normal view

Received today – 8 April 2026 ⏭ /r/netsec - Information Security News & Discussion

Brandefense Q4 2025 Ransomware Trends Report – 2,373 incidents, 125 groups, CVE exploitation breakdown

Hi r/netsec community,

This report covers Q4 2025 data, gathered by monitoring dark web leak sites and criminal forums
throughout October–December 2025.

Numbers:
- 2,373 confirmed victims
- 125 active ransomware groups
- 134 countries, 27 industries

Group highlights:
- Qilin peaked at 481 attacks in Q4, up from 113 in Q1
- Cl0p skipped encryption entirely in most campaigns – pure data theft + extortion via Oracle EBS and Cleo zero-days
- 46.3% of activity attributed to smaller/unnamed groups – RaaS commoditization is real

CVEs exploited this quarter (with group attribution):

RCE:
- CVE-2025-10035 (Fortra GoAnywhere MFT) – Medusa
- CVE-2025-55182 (React Server Components) – Weaxor
- CVE-2025-61882 (Oracle E-Business Suite) – Cl0p
- CVE-2024-21762 (Fortinet FortiOS SSL VPN) – Qilin

Privilege Escalation:
- CVE-2025-29824 (Windows CLFS driver → SYSTEM) – Play

Auth Bypass:
- CVE-2025-61884 (Oracle E-Business Suite) – Cl0p
- CVE-2025-31324 (SAP NetWeaver, CVSS 10.0) – BianLian, RansomExx

Notable: DragonForce announced a white-label "cartel" model through underground forums. Operations linked to Scattered Spider suggest staged attack chains – initial access and ransomware deployment split between separate actors.

Full report:
brandefense.io/reports/ransomware-trends-report-q4-2025/

submitted by /u/brandefense

Training for Device Code Phishing

With the news of hundreds of orgs being compromised daily, I saw a really cool red team tool that trains for this exact scenario. Have you guys used this new white hat tool? I'm thinking about ditching KB4 and even using this with our red teams for access.

submitted by /u/redwheel82
Received yesterday – 7 April 2026 ⏭ /r/netsec - Information Security News & Discussion

The Race to Ship AI Tools Left Security Behind. Part 1: Sandbox Escape

AI coding tools are being shipped fast. In too many cases, basic security is not keeping up.

In our latest research, we found the same sandbox trust-boundary failure pattern across tools from Anthropic, Google, and OpenAI. Anthropic engaged quickly and shipped a fix (CVE-2026-25725). Google had not shipped a fix by disclosure. OpenAI closed the report as informational and did not address the core architectural issue.

That gap in response says a lot about vendor security posture.

submitted by /u/Fun_Preference1113

Anthropic Opus 4.6 is worse at finding vulns than you might think

We benchmarked Opus 4.6's ability to find simple C vulns and found that the model flags only about 1 in 4 flaws, with a very high false positive rate and significant inconsistency from run to run. Techniques like judge agents and requiring the model to justify its findings improve things somewhat, but the results are still not great.

submitted by /u/Prior-Penalty

JavaScript runtime instrumentation via Chrome DevTools Protocol

I’ve been experimenting with Chrome DevTools Protocol primitives to build tools for reversing and debugging JavaScript at runtime.

The idea is to interact with execution by hooking functions without monkeypatching or modifying application code.

Conceptually, this is closer to a Frida-style instrumentation model (onEnter/onLeave handlers), but applied to the browser via CDP.

Early experiments include:

  • attaching hooks to functions at runtime
  • inspecting and modifying arguments and local variables
  • overriding return values (unfortunately limited to sync functions due to CDP constraints)
  • following return values to their consumer (best-effort / heuristic)
  • conditional stepping (stepIn / stepOut / stepOver)

All implemented via CDP (debugger breakpoints + runtime evaluation), so this also works inside closures and non-exported code.
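To make the model concrete, here is a minimal sketch of what an onEnter-style hook over raw CDP could look like, sent as JSON-RPC over the DevTools WebSocket. It assumes Chrome was started with --remote-debugging-port=9222 and Node >= 22 (global WebSocket); all function names are illustrative, not taken from the author's tool.

```javascript
let nextId = 0;

// Every CDP request is a JSON-RPC message: { id, method, params }.
function cdpMessage(method, params = {}) {
  return { id: ++nextId, method, params };
}

// "Hook" a function by placing a breakpoint at its location; CDP emits
// a Debugger.paused event each time execution enters it.
function breakpointParams(urlRegex, lineNumber) {
  return cdpMessage('Debugger.setBreakpointByUrl', { urlRegex, lineNumber });
}

// An assignment evaluated in the paused top call frame with
// Debugger.evaluateOnCallFrame mutates the live argument or local -
// the "modify arguments without monkeypatching" primitive.
function rewriteLocalParams(callFrameId, name, jsExpression) {
  return cdpMessage('Debugger.evaluateOnCallFrame', {
    callFrameId,
    expression: `${name} = (${jsExpression})`,
  });
}

// Wire it together: enable the Debugger domain, dispatch each paused
// event to an onEnter handler, then resume execution.
async function attach(wsUrl, onEnter) {
  const ws = new WebSocket(wsUrl);
  await new Promise((res) => ws.addEventListener('open', res));
  ws.addEventListener('message', (ev) => {
    const msg = JSON.parse(ev.data);
    if (msg.method === 'Debugger.paused') {
      const frame = msg.params.callFrames[0];
      onEnter(frame, (m) => ws.send(JSON.stringify(m)));
      ws.send(JSON.stringify(cdpMessage('Debugger.resume')));
    }
  });
  ws.send(JSON.stringify(cdpMessage('Debugger.enable')));
  return ws;
}
```

Usage would look like `attach(target.webSocketDebuggerUrl, (frame, send) => send(rewriteLocalParams(frame.callFrameId, 'token', '"hooked"')))`. Return-value overrides presumably map to CDP's Debugger.setReturnValue, which is only valid at a return break position, which would explain the sync-only limitation mentioned above.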

I’d really appreciate feedback – especially from people doing reverse engineering, bug bounty, or complex frontend debugging.

submitted by /u/filippo_cavallarin
Received – 6 April 2026 ⏭ /r/netsec - Information Security News & Discussion

Responsible disclosure is structurally dead – not dying. Here's the analysis and what replaces it.

Nicholas Carlini (Anthropic research scientist) used Claude Code and a 12-line bash script to find hundreds of remotely exploitable Linux kernel vulnerabilities – including one introduced in 2003 and undiscovered for 23 years.
He's holding most of them unreported. His words: "I'm not going to send the Linux kernel maintainers potential slop."
The bottleneck isn't finding bugs anymore. It's validating them fast enough.
Here's the part that matters for defenders:
That validation constraint only binds researchers following responsible disclosure. An attacker running the identical script has zero validation requirement – they probe directly from unverified findings. The asymmetry is structural, not technical. It's baked into how responsible disclosure works.

And the framework was already failing before AI arrived:

  • 32% of vulnerabilities exploited on or before CVE issuance
  • Median exploitation window: 5.0 days (down from 8.5)
  • AI can generate working CVE exploits in ~10 minutes at ~$1 per exploit
  • 130+ new CVEs weaponised daily at scale

We ran this problem through four structured Crucible analysis passes and produced a white paper. The conclusion: responsible disclosure needs a named replacement framework – Post-Exploitation Response Coordination – which accepts that exploitation will happen before validation and rebuilds around detection, response, and recovery speed instead.

The full white paper is live at https://www.thecrucible.systems/whitepapers/f27bb2aa-8a5b-47d3-b3bf-b33effa7e20e

Curious what this community thinks – specifically on the asymmetry point. Is there a path to closing that gap, or is it genuinely irreducible?

submitted by /u/PhilosophyExternal97