
U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS) said.

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make their operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups.

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
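
To see why string-based rules are brittle, here is a minimal sketch using the yara-python bindings; the rule and the "malware" payloads are invented for illustration, not taken from the report. A rule keyed to one hard-coded string stops matching as soon as that string is renamed, which is exactly the superficial source-code change an LLM can automate at scale.

# pip install yara-python -- minimal sketch; the rule and payloads below
# are invented for this illustration only.
import yara

# A toy string-based rule keyed to a single hard-coded artifact.
RULE = '''
rule demo_stealer
{
    strings:
        $marker = "credentials_exfil_v1"
    condition:
        $marker
}
'''

rules = yara.compile(source=RULE)

original  = b"...payload... credentials_exfil_v1 ...payload..."
augmented = b"...payload... credStoreSync_v2 ...payload..."  # LLM-renamed marker

print(bool(rules.match(data=original)))   # True  -> detected
print(bool(rules.match(data=augmented)))  # False -> evades the string match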

Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Cybersecurity researchers have found that third-party plugins available for OpenAI ChatGPT could act as a new attack surface for threat actors looking to gain unauthorized access to sensitive data. According to new research published by Salt Labs, security flaws found directly in ChatGPT and within the ecosystem could allow attackers to install malicious plugins without users' consent.

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API.

Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets

More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show. These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.

Three Tips to Protect Your Secrets from AI Accidents

Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents demonstrate not only the rapidly evolving nature of large language models, but also the evolving ways in which they can be attacked and defended.

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," said Ram Shankar Siva Kumar, AI red team lead at Microsoft.

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted efforts made by five state-affiliated actors that used its AI services to carry out malicious cyber activities.

How to Prevent ChatGPT From Stealing Your Content & Traffic

ChatGPT and similar large language models (LLMs) have added further complexity to the ever-growing online threat landscape. Cybercriminals no longer need advanced coding skills to execute fraud and other damaging attacks against online businesses and customers, thanks to bots-as-a-service, residential proxies, CAPTCHA farms, and other easily accessible tools.

Meet the Brains Behind the Malware-Friendly AI Chat Service ‘WormGPT’

WormGPT, a private new chatbot service advertised as a way to use Artificial Intelligence (AI) to write malicious software without all the pesky prohibitions on such activity enforced by the likes of ChatGPT and Google Bard, has started adding restrictions of its own on how the service can be used. Faced with customers trying to use WormGPT to create ransomware and phishing scams, the 23-year-old Portuguese programmer who created the project now says his service is slowly morphing into “a more controlled environment.”

Image: SlashNext.com.

The large language models (LLMs) made by ChatGPT parent OpenAI or Google or Microsoft all have various safety measures designed to prevent people from abusing them for nefarious purposes — such as creating malware or hate speech. In contrast, WormGPT has promoted itself as a new, uncensored LLM that was created specifically for cybercrime activities.

WormGPT was initially sold exclusively on HackForums, a sprawling, English-language community that has long featured a bustling marketplace for cybercrime tools and services. WormGPT licenses are sold for prices ranging from 500 to 5,000 euros.

“Introducing my newest creation, ‘WormGPT,’” wrote “Last,” the handle chosen by the HackForums user who is selling the service. “This project aims to provide an alternative to ChatGPT, one that lets you do all sorts of illegal stuff and easily sell it online in the future. Everything blackhat related that you can think of can be done with WormGPT, allowing anyone access to malicious activity without ever leaving the comfort of their home.”

WormGPT’s core developer and frontman “Last” promoting the service on HackForums. Image: SlashNext.

In July, an AI-based security firm called SlashNext analyzed WormGPT and asked it to create a “business email compromise” (BEC) phishing lure that could be used to trick employees into paying a fake invoice.

“The results were unsettling,” SlashNext’s Daniel Kelley wrote. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.”

SlashNext asked WormGPT to compose this BEC phishing email. Image: SlashNext.

A review of Last’s posts on HackForums over the years shows this individual has extensive experience creating and using malicious software. In August 2022, Last posted a sales thread for “Arctic Stealer,” a data stealing trojan and keystroke logger that he sold there for many months.

“I’m very experienced with malwares,” Last wrote in a message to another HackForums user last year.

Last has also sold a modified version of the information stealer DCRat, as well as an obfuscation service marketed to malicious coders who sell their creations and wish to insulate them from being modified or copied by customers.

Shortly after joining the forum in early 2021, Last told several different HackForums users his name was Rafael and that he was from Portugal. HackForums has a feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.

That account tracing feature reveals that while Last has used many pseudonyms over the years, he originally used the nickname “ruiunashackers.” The first search result in Google for that unique nickname brings up a TikTok account with the same moniker, and that TikTok account says it is associated with an Instagram account for a Rafael Morais from Porto, a coastal city in northwest Portugal.

AN OPEN BOOK

Reached via Instagram and Telegram, Morais said he was happy to chat about WormGPT.

“You can ask me anything,” Morais said. “I’m an open book.”

Morais said he recently graduated from a polytechnic institute in Portugal, where he earned a degree in information technology. He said only about 30 to 35 percent of the work on WormGPT was his, and that other coders are contributing to the project. So far, he says, roughly 200 customers have paid to use the service.

“I don’t do this for money,” Morais explained. “It was basically a project I thought [was] interesting at the beginning and now I’m maintaining it just to help [the] community. We have updated a lot since the release, our model is now 5 or 6 times better in terms of learning and answer accuracy.”

WormGPT isn’t the only rogue ChatGPT clone advertised as friendly to malware writers and cybercriminals. According to SlashNext, one unsettling trend on the cybercrime forums is evident in discussion threads offering “jailbreaks” for interfaces like ChatGPT.

“These ‘jailbreaks’ are specialised prompts that are becoming increasingly common,” Kelley wrote. “They refer to carefully crafted inputs designed to manipulate interfaces like ChatGPT into generating output that might involve disclosing sensitive information, producing inappropriate content, or even executing harmful code. The proliferation of such practices underscores the rising challenges in maintaining AI security in the face of determined cybercriminals.”

Morais said they have been using the GPT-J 6B model since the service was launched, although he declined to discuss the source of the LLMs that power WormGPT. But he said the data set that informs WormGPT is enormous.

“Anyone that tests wormgpt can see that it has no difference from any other uncensored AI or even chatgpt with jailbreaks,” Morais explained. “The game changer is that our dataset [library] is big.”
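
For context, GPT-J 6B is a publicly released open-source model from EleutherAI, which helps explain how a small crew can stand up an "uncensored" chat service at all: the base checkpoint is a download away. Below is a minimal sketch of loading it with the Hugging Face transformers library; this is not WormGPT's code, merely a demonstration that the checkpoint is public (and it needs on the order of 24 GB of memory in full precision).

# pip install transformers torch -- sketch only; loads the public GPT-J 6B
# checkpoint and generates a short completion. Not WormGPT's actual code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))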

Morais said he began working on computers at age 13, and soon started exploring security vulnerabilities and the possibility of making a living by finding and reporting them to software vendors.

“My story began in 2013 with some greyhat activies, never anything blackhat tho, mostly bugbounty,” he said. “In 2015, my love for coding started, learning c# and more .net programming languages. In 2017 I’ve started using many hacking forums because I have had some problems home (in terms of money) so I had to help my parents with money… started selling a few products (not blackhat yet) and in 2019 I started turning blackhat. Until a few months ago I was still selling blackhat products but now with wormgpt I see a bright future and have decided to start my transition into whitehat again.”

WormGPT sells licenses via a dedicated channel on Telegram, and the channel recently lamented that media coverage of WormGPT so far has painted the service in an unfairly negative light.

“We are uncensored, not blackhat!” the WormGPT channel announced at the end of July. “From the beginning, the media has portrayed us as a malicious LLM (Language Model), when all we did was use the name ‘blackhatgpt’ for our Telegram channel as a meme. We encourage researchers to test our tool and provide feedback to determine if it is as bad as the media is portraying it to the world.”

It turns out, when you advertise an online service for doing bad things, people tend to show up with the intention of doing bad things with it. WormGPT’s frontman Last seems to have acknowledged this at the service’s initial launch, which included the disclaimer, “We are not responsible if you use this tool for doing bad stuff.”

But lately, Morais said, WormGPT has been forced to add certain guardrails of its own.

“We have prohibited some subjects on WormGPT itself,” Morais said. “Anything related to murders, drug traffic, kidnapping, child porn, ransomwares, financial crime. We are working on blocking BEC too, at the moment it is still possible but most of the times it will be incomplete because we already added some limitations. Our plan is to have WormGPT marked as an uncensored AI, not blackhat. In the last weeks we have been blocking some subjects from being discussed on WormGPT.”

Still, Last has continued to state on HackForums — and more recently on the far more serious cybercrime forum Exploit — that WormGPT will quite happily create malware capable of infecting a computer and going “fully undetectable” (FUD) by virtually all of the major antivirus makers (AVs).

“You can easily buy WormGPT and ask it for a Rust malware script and it will 99% sure be FUD against most AVs,” Last told a forum denizen in late July.

Asked to list some of the legitimate or what he called “white hat” uses for WormGPT, Morais said his service offers reliable code, unlimited characters, and accurate, quick answers.

“We used WormGPT to fix some issues on our website related to possible sql problems and exploits,” he explained. “You can use WormGPT to create firewalls, manage iptables, analyze network, code blockers, math, anything.”

Morais said he wants WormGPT to become a positive influence on the security community, not a destructive one, and that he’s actively trying to steer the project in that direction. The original HackForums thread pimping WormGPT as a malware writer’s best friend has since been deleted, and the service is now advertised as “WormGPT – Best GPT Alternative Without Limits — Privacy Focused.”

“We have a few researchers using our wormgpt for whitehat stuff, that’s our main focus now, turning wormgpt into a good thing to [the] community,” he said.

It’s unclear yet whether Last’s customers share that view.

PentestGPT - A GPT-empowered Penetration Testing Tool

By: Zion3R


A GPT-empowered penetration testing tool.

Common Questions

  • Q: What is PentestGPT?
    • A: PentestGPT is a penetration testing tool empowered by ChatGPT. It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers through both overall progress and specific operations.
  • Q: Do I need to be a ChatGPT plus member to use PentestGPT?
    • A: Yes. PentestGPT relies on the GPT-4 model for high-quality reasoning. Since there is no public GPT-4 API yet, a wrapper is included that uses a ChatGPT session to support PentestGPT. You may also use the GPT-4 API directly if you have access to it.
  • Q: Why GPT-4?
    • A: After empirical evaluation, we found that GPT-4 performs better than GPT-3.5 at penetration testing reasoning. In fact, GPT-3.5 fails even on simple tasks.
  • Q: Why not just use GPT-4 directly?
    • A: We found that GPT-4 loses context as the test goes deeper. It is essential to maintain "test status awareness" throughout the process. You may check the PentestGPT design here for more details.
  • Q: What about AutoGPT?
    • A: AutoGPT is not designed for penetration testing, and it may perform malicious operations. For this reason, we designed PentestGPT to run in an interactive mode. Of course, our end goal is a fully automated pentest solution.
  • Q: Future plan?
    • A: We're working on a paper exploring the technical details behind automated penetration testing. Meanwhile, please feel free to raise issues or discussions. I'll do my best to address all of them.

Getting Started

  • PentestGPT is a penetration testing tool empowered by ChatGPT.
  • It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers through both overall progress and specific operations.
  • PentestGPT is able to solve easy to medium HackTheBox machines and other CTF challenges. You can check this example in resources, where we use it to solve the HackTheBox challenge TEMPLATED (a web challenge).
  • A sample testing process of PentestGPT on a target VulnHub machine (Hackable II) is available here.
  • A sample usage video is below (or available here: Demo).

Installation

Before installation, we recommend taking a look at the installation video if you want to use the cookie setup.

  1. Install the dependencies with pip install -r requirements.txt.
  2. Configure the cookies in config. You may follow the sample by running cp config/chatgpt_config_sample.py config/chatgpt_config.py (a hypothetical sketch of the resulting file appears after this list).
    • If you're using the cookie setup, please watch this video: https://youtu.be/IbUcj0F9EBc. The general steps are:
      • Log in to the ChatGPT session page.
      • In Inspect -> Network, find the requests to the ChatGPT session page.
      • Find the cookie in the request headers of the request to https://chat.openai.com/api/auth/session and paste it into the cookie field of config/chatgpt_config.py.
      • Note that the other fields are temporarily deprecated due to updates to the ChatGPT page.
      • Fill in userAgent with your browser's user agent.
    • If you're using API:
      • Fill in the OpenAI API key in chatgpt_config.py.
  3. To verify that the connection is configured properly, you may run python3 test_connection.py. You should see a sample conversation with ChatGPT.
    • A sample output is below:
      ## Test connection for ChatGPT cookie
      1. You're connected with ChatGPT Plus cookie.
      To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4>
      ## Test connection for OpenAI API (GPT-4)
      2. You're connected with OpenAI API. You have GPT-4 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-4 --useAPI>
      ## Test connection for OpenAI API (GPT-3.5)
      3. You're connected with OpenAI API. You have GPT-3.5 access. To start PentestGPT, please use <python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI>
  4. (Notice) The above verification process applies to the cookie setup. If you encounter errors after several tries, refresh the page, repeat the steps above, and try again. You may also try using the cookie sent to https://chat.openai.com/backend-api/conversations. Please submit an issue if you encounter any problems.
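
For orientation, a hypothetical config/chatgpt_config.py is sketched below. The field names are inferred from the steps above (the cookie field, userAgent, and the OpenAI API key), so treat config/chatgpt_config_sample.py in the repository as authoritative.

# config/chatgpt_config.py -- hypothetical sketch; field names are inferred
# from the installation steps, so defer to config/chatgpt_config_sample.py.
from dataclasses import dataclass

@dataclass
class ChatGPTConfig:
    # Cookie copied from the request headers of
    # https://chat.openai.com/api/auth/session (cookie setup only).
    cookie: str = "<your session cookie>"
    # Your browser's user agent string (cookie setup only).
    userAgent: str = "Mozilla/5.0 ..."
    # OpenAI API key (API setup only).
    openai_key: str = "sk-..."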

Usage

  1. To start, run python3 main.py --args.
    • --reasoning_model is the reasoning model you want to use.
    • --useAPI makes PentestGPT call the OpenAI API directly instead of going through a ChatGPT session (a sketch of such a call appears at the end of this section).
    • We recommend using the combination suggested by test_connection.py, which is one of:
      • python3 main.py --reasoning_model=gpt-4
      • python3 main.py --reasoning_model=gpt-4 --useAPI
      • python3 main.py --reasoning_model=gpt-3.5-turbo --useAPI
  2. The tool works similarly to msfconsole. Follow the guidance to perform penetration testing.
  3. In general, PentestGPT accepts input much as ChatGPT does. There are several basic commands.
    1. The commands are:
      • help: show the help message.
      • next: key in the test execution result and get the next step.
      • more: let PentestGPT explain more details of the current step. A new sub-task solver will also be created to guide the tester.
      • todo: show the todo list.
      • discuss: discuss with PentestGPT.
      • google: search on Google. This function is still under development.
      • quit: exit the tool and save the output as a log file (see the Report and Logging section below).
    2. You can use <SHIFT + right arrow> to end your input (<ENTER> inserts a new line).
    3. You may always use TAB to autocomplete commands.
    4. When you're given a drop-down selection list, use the arrow keys to navigate it and press ENTER to select an item. Similarly, use <SHIFT + right arrow> to confirm the selection.
  4. In the sub-task handler initiated by more, users can execute more commands to investigate a specific problem:
    1. The commands are:
      • help: show the help message.
      • brainstorm: let PentestGPT brainstorm all possible solutions for the local task.
      • discuss: discuss with PentestGPT about this local task.
      • google: search on Google. This function is still under development.
      • continue: exit the sub-task and continue the main testing session.
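
For the --useAPI path, the sketch below shows roughly what a direct GPT-4 reasoning call looks like with the pre-1.0 openai Python package. This is not PentestGPT's actual implementation; the prompt and key are placeholders, and it only illustrates the kind of request the reasoning model answers.

# Hypothetical sketch of a direct GPT-4 call (openai package < 1.0);
# not PentestGPT's actual code. Replace the placeholders before running.
import openai

openai.api_key = "sk-..."  # the key configured in chatgpt_config.py

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a penetration testing assistant."},
        {"role": "user", "content": "nmap shows ports 22 and 80 open. What next?"},
    ],
)
print(response.choices[0].message.content)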

Report and Logging

  1. After you finish the penetration test, a report will be automatically generated in the logs folder (provided you exit with the quit command).
  2. The report can be printed in a human-readable format by running python3 utils/report_generator.py <log file>. A sample report, sample_pentestGPT_log.txt, is also uploaded.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE.txt for more information.

Contact

Gelei Deng - gelei.deng@ntu.edu.sg


