/r/netsec - Information Security News & Discussion
- The Race to Ship AI Tools Left Security Behind. Part 1: Sandbox Escape
AI coding tools are being shipped fast. In too many cases, basic security is not keeping up.
In our latest research, we found the same sandbox trust-boundary failure pattern across tools from Anthropic, Google, and OpenAI. Anthropic engaged quickly and shipped a fix (CVE-2026-25725). Google had not shipped a fix by the disclosure deadline. OpenAI closed the report as informational and did not address the core architectural issue.
That gap in response says a lot about vendor security posture.
- Anthropic Opus 4.6 is less good at finding vulns than you might think
We benchmarked Opus 4.6's ability to find simple C vulns and found that the model flags only about 1 in 4 flaws, with a very high false-positive rate and heavy run-to-run inconsistency. Techniques like judge agents and requiring the model to justify its findings improve the results somewhat, but they're still not great.
- JavaScript runtime instrumentation via Chrome DevTools Protocol
I've been experimenting with Chrome DevTools Protocol primitives to build tools for reversing and debugging JavaScript at runtime.
The idea is to interact with execution by hooking functions without monkeypatching or modifying application code.
Conceptually, this is closer to a Frida-style instrumentation model (onEnter/onLeave handlers), but applied to the browser via CDP.
Early experiments include:
- attaching hooks to functions at runtime
- inspecting and modifying arguments and local variables
- overriding return values (unfortunately limited to sync functions due to CDP constraints)
- following return values to their consumer (best-effort / heuristic)
- conditional stepping (stepIn / stepOut / stepOver)
All implemented via CDP (debugger breakpoints + runtime evaluation), so this also works inside closures and non-exported code.
I'd really appreciate feedback, especially from people doing reverse engineering, bug bounty, or complex frontend debugging.
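To make the model concrete, here is a minimal sketch of how a Frida-style hook lifecycle could map onto real CDP methods (`Debugger.enable`, `Debugger.setBreakpointByUrl`, `Debugger.evaluateOnCallFrame`, `Debugger.setReturnValue`, `Debugger.resume`). The `CDPSession` class, method names, and the pluggable `send` callback are illustrative, not the author's tool; a real client would wire `send` to Chrome's DevTools websocket and handle `Debugger.paused` events.

```python
import itertools
import json


class CDPSession:
    """Builds CDP JSON-RPC messages for a Frida-style hook:
    pause on function entry (onEnter), inspect/patch locals,
    optionally rewrite the return value (onLeave), then resume."""

    def __init__(self, send=None):
        self._ids = itertools.count(1)
        # In a real client this would be the DevTools websocket's send().
        self.send = send or (lambda raw: None)

    def cmd(self, method, **params):
        msg = {"id": next(self._ids), "method": method, "params": params}
        self.send(json.dumps(msg))
        return msg

    def hook_function(self, url, line):
        """onEnter: break at the target function's first line."""
        self.cmd("Debugger.enable")
        return self.cmd("Debugger.setBreakpointByUrl",
                        url=url, lineNumber=line)

    def eval_in_frame(self, call_frame_id, expression):
        """While paused: read or mutate state in a call frame."""
        return self.cmd("Debugger.evaluateOnCallFrame",
                        callFrameId=call_frame_id, expression=expression)

    def override_return(self, value):
        """onLeave: only valid at a return break position, which is
        why CDP limits return-value overrides to sync functions."""
        return self.cmd("Debugger.setReturnValue",
                        newValue={"value": value})

    def resume(self):
        return self.cmd("Debugger.resume")
```

Because breakpoints are set by script URL and location rather than by patching source, the same mechanism reaches closures and non-exported functions, matching the post's approach.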
- Microsoft Speech - Lateral Movement
- Detecting CI/CD Supply Chain Attacks with Canary Credentials
- DeepZero: An automated LLM/Ghidra pipeline for finding BYOVD zero-days in Windows drivers
- Responsible disclosure is structurally dead, not dying. Here's the analysis and what replaces it.
Nicholas Carlini (Anthropic research scientist) used Claude Code and a 12-line bash script to find hundreds of remotely exploitable Linux kernel vulnerabilities, including one introduced in 2003 and undiscovered for 23 years.
He's holding most of them unreported. His words: "I'm not going to send the Linux kernel maintainers potential slop."
The bottleneck isn't finding bugs anymore. It's validating them fast enough.
Here's the part that matters for defenders:
That validation constraint only binds researchers following responsible disclosure. An attacker running the identical script has zero validation requirement: they probe directly from unverified findings. The asymmetry is structural, not technical. It's baked into how responsible disclosure works.
And the framework was already failing before AI arrived:
- 32% of vulnerabilities exploited on or before CVE issuance
- Median exploitation window: 5.0 days (down from 8.5)
- AI can generate working CVE exploits in ~10 minutes at ~$1 per exploit
- 130+ new CVEs weaponised daily at scale
We ran this problem through four structured Crucible analysis passes and produced a white paper. The conclusion: responsible disclosure needs a named replacement framework, Post-Exploitation Response Coordination, which accepts that exploitation will happen before validation and rebuilds around detection, response, and recovery speed instead.
The full white paper is live at https://www.thecrucible.systems/whitepapers/f27bb2aa-8a5b-47d3-b3bf-b33effa7e20e
Curious what this community thinks, specifically on the asymmetry point. Is there a path to closing that gap, or is it genuinely irreducible?
- Using Cloudflare's Post-Quantum Tunnel to Protect Plex Remote Access on a Synology NAS
With Cloudflare now supporting PQC encryption, I thought it'd be a fun experiment to see if I could encapsulate Plex traffic in a tunnel, since it's not supported natively.
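For readers wanting to try something similar, here is a sketch of the kind of `cloudflared` ingress config this setup would use. The hostname and tunnel ID are placeholders; Plex's default local port is 32400, and the ingress schema follows Cloudflare Tunnel's documented config-file format.

```yaml
# /etc/cloudflared/config.yml on the NAS (hostname/tunnel-id are placeholders)
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  # Route the public hostname to the local Plex web port.
  - hostname: plex.example.com
    service: http://localhost:32400
  # Required catch-all rule for unmatched requests.
  - service: http_status:404
```

The tunnel would then be started with something like `cloudflared tunnel run --post-quantum <tunnel-id>`; the `--post-quantum` flag name is from Cloudflare's PQ tunnel announcement, so check it against your installed `cloudflared` version.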
- Trivy supply chain attack enabled European Commission cloud breach
- Closing the Kernel Backport Gap: Automated CVE Detection
- Cracking a Malvertising DGA From the Device Side
- GDDRHammer and GeForge: GDDR6 GPU Rowhammer to root shell (IEEE S&P 2026, exploit code available)
- BrowserGate: LinkedIn/Microsoft allegedly scans 6,000+ browser extensions & links them to real identities, all without user consent
A new investigation, dubbed BrowserGate, claims that LinkedIn (Microsoft) is quietly running hidden JavaScript on linkedin.com that probes users' browsers for installed extensions - over 6,000 of them - without consent, and transmits that data back to LinkedIn and third parties. Researchers argue this isn't just passive fingerprinting: because users are logged in with real names, employers, and roles, the data can be tied directly to identifiable people and used to infer sensitive information such as job-search status, political/religious interests, health-related tools, or corporate tooling usage.
The report also highlights potential GDPR and privacy-law issues, and the detections reportedly include both competitor tools and personal-interest extensions. LinkedIn has not publicly refuted the core claim. More technical details and sources are in the linked article.
- Apple's Spotlight Search Results Come With Engagement Metrics. No One Knew.
- Proof-of-Personhood Without Biometrics: The IRLid Protocol
- ShieldNet Trust Posture
Sharing the ShieldNet Trust Posture: solid analytical data on the current CVE landscape across OWASP, NIST, NVD, phishing, AI, Microsoft, and more.
- Claude Code Found a Linux Vulnerability Hidden for 23 Years
- npm-sentinel: 21 malicious npm packages in 24h including LLM API MITM, encrypted skill backdoors, and Redis weaponization via postinstall
Built an automated npm package scanner that uses heuristic scoring + LLM analysis to flag malicious packages in real time. Ran it for 24 hours against ~2000 recent npm registry changes and found 21 malicious packages across 11 campaigns.
Four novel attack vectors documented:
LLM API MITM (T1557): makecoder@2.0.72 overwrites ~/.claude/ via postinstall, reconfigures Claude Code client to proxy all API calls through attacker server. Application-layer MITM on AI assistant conversations.
Encrypted skill distribution (T1027, T1105): skillvault@0.1.14 fetches encrypted payloads from private API, decrypts locally, installs as persistent Claude Code skills. Server-side swappable without npm update.
AI agent as RAT (T1219, T1036.005): keystonewm/tsunami-code ship functional coding-assistant CLIs that route all interactions through the attacker's ngrok tunnel. Exploits the AI tool trust model, where users grant full filesystem access voluntarily.
Redis CONFIG SET + raw disk read via postinstall (T1190, T1006): 6 fake Strapi plugins use Redis to write shell payloads to 7 directories, dd if=/dev/sda1 to extract credentials bypassing file permissions, Docker overlay traversal for container escape.
All IOCs, decoded payloads, and MITRE mappings on the site. None of the 21 packages were flagged by any public scanner at time of discovery.
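To illustrate the heuristic-scoring side of a pipeline like this, here is a hypothetical first-pass filter: score a package.json for install-time lifecycle hooks that reference patterns seen in the campaigns above (AI-assistant config rewrites, ngrok tunnels, raw disk reads, Redis reconfiguration). The function names, pattern list, and weights are invented for illustration, not npm-sentinel's actual scoring; a real pipeline would feed high scorers to deeper (e.g. LLM) analysis.

```python
import re

# Illustrative patterns/weights, loosely based on the campaigns described
# in the post; not the actual npm-sentinel rule set.
RISK_PATTERNS = {
    r"\.claude": 3,        # rewrites AI-assistant config (makecoder-style)
    r"ngrok": 3,           # attacker-controlled tunnel (RAT-style CLIs)
    r"dd\s+if=/dev/": 4,   # raw disk read bypassing file permissions
    r"CONFIG\s+SET": 2,    # Redis reconfiguration write primitive
    r"curl|wget": 1,       # remote payload fetch at install time
}
LIFECYCLE_HOOKS = ("preinstall", "install", "postinstall")


def score_package(manifest: dict) -> int:
    """Return a risk score for one parsed package.json manifest."""
    score = 0
    scripts = manifest.get("scripts", {})
    for hook in LIFECYCLE_HOOKS:
        cmd = scripts.get(hook, "")
        if not cmd:
            continue
        score += 1  # any install-time hook is mildly suspicious on its own
        for pattern, weight in RISK_PATTERNS.items():
            if re.search(pattern, cmd):
                score += weight
    return score


def flag(manifest: dict, threshold: int = 3) -> bool:
    """Flag the package for deeper analysis above the threshold."""
    return score_package(manifest) >= threshold
```

Lifecycle hooks are the natural choke point here because npm runs `preinstall`/`install`/`postinstall` automatically, which is exactly how the makecoder and fake-Strapi payloads execute.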