
C2-Tracker - Live Feed Of C2 Servers, Tools, And Botnets

By: Zion3R


Free-to-use IOC feed for various tools/malware. It started out covering just C2 tools but has morphed into tracking infostealers and botnets as well. It uses Shodan searches to collect the IPs. The most recent collection is always stored in data; the IPs are broken down by tool, and there is an all.txt.

The feed should update daily. I am actively working on making the backend more reliable.
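
As a quick illustration of consuming the feed, here is a minimal sketch that downloads all.txt and checks whether an address appears in it (the raw-file URL is an assumption based on the repository layout described above):

import requests

# Assumed raw URL for the aggregated IP list; adjust to the actual repo layout.
FEED_URL = "https://raw.githubusercontent.com/montysecurity/C2-Tracker/main/data/all.txt"

def load_feed() -> set[str]:
    # all.txt holds one IP per line.
    response = requests.get(FEED_URL, timeout=30)
    response.raise_for_status()
    return {line.strip() for line in response.text.splitlines() if line.strip()}

if __name__ == "__main__":
    iocs = load_feed()
    print(f"Loaded {len(iocs)} IPs")
    print("1.2.3.4 in feed:", "1.2.3.4" in iocs)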


Honorable Mentions

Many of the Shodan queries have been sourced from other CTI researchers:

Huge shoutout to them!

Thanks to BertJanCyber for creating the KQL query for ingesting this feed.

And finally, thanks to Y_nexro for creating C2Live to visualize the data.

What do I track?

Running Locally

If you want to host a private version, put your Shodan API key in an environment variable called SHODAN_API_KEY

echo 'export SHODAN_API_KEY=API_KEY' >> ~/.bashrc
bash
python3 -m pip install -r requirements.txt
python3 tracker.py
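
For reference, querying Shodan from Python with that key looks roughly like this (the shodan package is the official client; the example query is illustrative, not one of the tracker's actual searches):

import os
import shodan

# Read the key exported above.
api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

# Illustrative query; the tracker uses curated per-tool searches.
results = api.search('product:"Cobalt Strike Beacon"')
for match in results["matches"]:
    print(match["ip_str"])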

Contributing

I encourage opening an issue/PR if you know of any additional Shodan searches for identifying adversary infrastructure. I will not set any hard guidelines around what can be submitted; just know that fidelity is paramount (a high true-to-false-positive ratio is the focus).

References



Cloud_Enum - Multi-cloud OSINT Tool. Enumerate Public Resources In AWS, Azure, And Google Cloud

By: Zion3R


Multi-cloud OSINT tool. Enumerate public resources in AWS, Azure, and Google Cloud.

Currently enumerates the following:

Amazon Web Services:
  • Open / Protected S3 Buckets
  • awsapps (WorkMail, WorkDocs, Connect, etc.)

Microsoft Azure:
  • Storage Accounts
  • Open Blob Storage Containers
  • Hosted Databases
  • Virtual Machines
  • Web Apps

Google Cloud Platform:
  • Open / Protected GCP Buckets
  • Open / Protected Firebase Realtime Databases
  • Google App Engine sites
  • Cloud Functions (enumerates project/regions with existing functions, then brute forces actual function names)
  • Open Firebase Apps


See it in action in Codingo's video demo here.


Usage

Setup

Several non-standard libraries are required to support threaded HTTP requests and DNS lookups. You'll need to install the requirements as follows:

pip3 install -r ./requirements.txt

Running

The only required argument is at least one keyword. You can use the built-in fuzzing strings, but you will get better results if you supply your own with -m and/or -b.

You can provide multiple keywords by specifying the -k argument multiple times.

Keywords are mutated automatically using strings from enum_tools/fuzz.txt or a file you provide with the -m flag. Services that require a second-level of brute forcing (Azure Containers and GCP Functions) will also use fuzz.txt by default or a file you provide with the -b flag.
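
Conceptually, the mutation step looks something like this simplified sketch (an approximation of the idea, not cloud_enum's actual code):

def mutate(keyword: str, fuzz_strings: list[str]) -> list[str]:
    """Combine a keyword with fuzz strings as prefixes and suffixes."""
    candidates = [keyword]
    for fuzz in fuzz_strings:
        for sep in ("", "-", "."):
            candidates.append(f"{keyword}{sep}{fuzz}")  # e.g. somecompany-backup
            candidates.append(f"{fuzz}{sep}{keyword}")  # e.g. dev.somecompany
    return candidates

print(mutate("somecompany", ["dev", "backup", "staging"])[:5])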

Let's say you were researching "somecompany" whose website is "somecompany.io" that makes a product called "blockchaindoohickey". You could run the tool like this:

./cloud_enum.py -k somecompany -k somecompany.io -k blockchaindoohickey

HTTP scraping and DNS lookups use 5 threads each by default. You can try increasing this, but eventually the cloud providers will rate limit you. Here is an example to increase to 10.

./cloud_enum.py -k keyword -t 10

IMPORTANT: Some resources (Azure Containers, GCP Functions) are discovered per-region. To save time scanning, there is a "REGIONS" variable defined in cloudenum/azure_regions.py and cloudenum/gcp_regions.py that is set by default to use only 1 region. You may want to look at these files and edit them to be relevant to your own work.
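
For example, the GCP file might be edited to scan two regions instead of one (the region names are real GCP regions; the file's exact default contents are an assumption):

# cloudenum/gcp_regions.py - scan two regions instead of the default single region
REGIONS = ['us-central1', 'europe-west1']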

Complete Usage Details

usage: cloud_enum.py [-h] -k KEYWORD [-m MUTATIONS] [-b BRUTE]

Multi-cloud enumeration utility. All hail OSINT!

optional arguments:
-h, --help show this help message and exit
-k KEYWORD, --keyword KEYWORD
Keyword. Can use argument multiple times.
-kf KEYFILE, --keyfile KEYFILE
Input file with a single keyword per line.
-m MUTATIONS, --mutations MUTATIONS
Mutations. Default: enum_tools/fuzz.txt
-b BRUTE, --brute BRUTE
List to brute-force Azure container names. Default: enum_tools/fuzz.txt
-t THREADS, --threads THREADS
Threads for HTTP brute-force. Default = 5
-ns NAMESERVER, --nameserver NAMESERVER
DNS server to use in brute-force.
-l LOGFILE, --logfile LOGFILE
Will APPEND found items to specified file.
-f FORMAT, --format FORMAT
Format for log file (text,json,csv - defaults to text)
--disable-aws Disable Amazon checks.
--disable-azure Disable Azure checks.
--disable-gcp Disable Google checks.
-qs, --quickscan Disable all mutations and second-level scans

Thanks

So far, I have borrowed from:

  • Some of the permutations from GCPBucketBrute



Dorkish - Chrome Extension Tool For OSINT & Recon

By: Zion3R


During the reconnaissance phase or when doing OSINT, we often use Google dorking and Shodan; thus the idea of Dorkish.
Dorkish is a Chrome extension that facilitates custom dork creation for Google and Shodan using the builder, and it offers prebuilt dorks for efficient reconnaissance and OSINT engagements.


Installation And Setup

1- Clone the repository

git clone https://github.com/yousseflahouifi/dorkish.git

2- Go to chrome://extensions/ and enable Developer mode in the top right corner.
3- Click on the Load unpacked extension button and select the dorkish folder.

Note: For Firefox users, you can find the extension here: https://addons.mozilla.org/en-US/firefox/addon/dorkish/

Features

Google dorking

  • Builder with keywords to filter your Google search results.
  • Prebuilt dorks for bug bounty programs.
  • Prebuilt dorks used during the reconnaissance phase in bug bounty.
  • Prebuilt dorks for exposed files and directories.
  • Prebuilt dorks for login and sign-up portals.
  • Prebuilt dorks for cyber security jobs.

Shodan dorking

  • Builder with filter keywords used in Shodan.
  • Variety of prebuilt dorks to find IoT, network infrastructure, cameras, ICS, databases, etc.

Usage

Once you have found or built the dork you need, simply click it and click search. This will direct you to the desired search engine, Shodan or Google, with the specific dork you've entered. Then, you can explore and enjoy the results that match your query.
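
For instance, the kinds of queries the builder produces look like these (illustrative examples, not necessarily the extension's prebuilt dorks):

# Google dorks
site:example.com inurl:login
site:example.com ext:sql OR ext:env OR ext:bak

# Shodan dorks
http.title:"Dashboard" port:8080
product:"MySQL" country:"US"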

TODO

  • Add more useful dorks and categories
  • Fix some bugs
  • Add a search bar to search through the results
  • Might add some LLM models to build dorks

Notes

I have built some dorks myself and gathered others from public resources. Here are a few:

  • https://github.com/lothos612/shodan
  • https://github.com/TakSec/google-dorks-bug-bounty

Warning

  • I am not responsible for any damage caused by using the tool


DarkGPT - An OSINT Assistant Based On GPT-4-200K Designed To Perform Queries On Leaked Databases, Thus Providing An Artificial Intelligence Assistant That Can Be Useful In Your Traditional OSINT Processes

By: Zion3R


DarkGPT is an artificial intelligence assistant based on GPT-4-200K designed to perform queries on leaked databases. This guide will help you set up and run the project on your local environment.


Prerequisites

Before starting, make sure you have Python installed on your system. This project has been tested with Python 3.8 and higher versions.

Environment Setup

  1. Clone the Repository

First, you need to clone the GitHub repository to your local machine. You can do this by executing the following command in your terminal:

git clone https://github.com/luijait/DarkGPT.git
cd DarkGPT

  2. Configure Environment Variables

You will need to set up some environment variables for the script to work correctly. Copy the .env.example file to a new file named .env:

DEHASHED_API_KEY="your_dehashed_api_key_here"

  3. Install Dependencies

This project requires certain Python packages to run. Install them by running the following command:

pip install -r requirements.txt

  4. Run the project:

python3 main.py



SwaggerSpy - Automated OSINT On SwaggerHub

By: Zion3R


SwaggerSpy is a tool designed for automated Open Source Intelligence (OSINT) on SwaggerHub. This project aims to streamline the process of gathering intelligence from APIs documented on SwaggerHub, providing valuable insights for security researchers, developers, and IT professionals.


What is Swagger?

Swagger is an open-source framework that allows developers to design, build, document, and consume RESTful web services. It simplifies API development by providing a standard way to describe REST APIs using a JSON or YAML format. Swagger enables developers to create interactive documentation for their APIs, making it easier for both developers and non-developers to understand and use the API.
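
As a rough picture of what such a description looks like, here is a minimal OpenAPI-style document sketched as a Python dictionary (the field names follow OpenAPI 3 conventions; the endpoint itself is made up):

import json

# Minimal OpenAPI 3 style description of a single endpoint (illustrative).
spec = {
    "openapi": "3.0.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/users/{id}": {
            "get": {
                "summary": "Fetch a user by ID",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

print(json.dumps(spec, indent=2))  # the same structure can equally be written as YAML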


About SwaggerHub

SwaggerHub is a collaborative platform for designing, building, and managing APIs using the Swagger framework. It offers a centralized repository for API documentation, version control, and collaboration among team members. SwaggerHub simplifies the API development lifecycle by providing a unified platform for API design and testing.


Why OSINT on SwaggerHub?

Performing OSINT on SwaggerHub is crucial because developers, in their pursuit of efficient API documentation and sharing, may inadvertently expose sensitive information. Here are key reasons why OSINT on SwaggerHub is valuable:

  1. Developer Oversights: Developers might unintentionally include secrets, credentials, or sensitive information in API documentation on SwaggerHub. These oversights can lead to security vulnerabilities and unauthorized access if not identified and addressed promptly.

  2. Security Best Practices: OSINT on SwaggerHub helps enforce security best practices. Identifying and rectifying potential security issues early in the development lifecycle is essential to ensure the confidentiality and integrity of APIs.

  3. Preventing Data Leaks: By systematically scanning SwaggerHub for sensitive information, organizations can proactively prevent data leaks. This is especially crucial in today's interconnected digital landscape where APIs play a vital role in data exchange between services.

  4. Risk Mitigation: Understanding that developers might forget to remove or obfuscate sensitive details in API documentation underscores the importance of continuous OSINT on SwaggerHub. This proactive approach mitigates the risk of unintentional exposure of critical information.

  5. Compliance and Privacy: Many industries have stringent compliance requirements regarding the protection of sensitive data. OSINT on SwaggerHub ensures that APIs adhere to these regulations, promoting a culture of compliance and safeguarding user privacy.

  6. Educational Opportunities: Identifying oversights in SwaggerHub documentation provides educational opportunities for developers. It encourages a security-conscious mindset, fostering a culture of awareness and responsible information handling.

By recognizing that developers can inadvertently expose secrets, OSINT on SwaggerHub becomes an integral part of the overall security strategy, safeguarding against potential threats and promoting a secure API ecosystem.


How SwaggerSpy Works

SwaggerSpy obtains information from SwaggerHub and utilizes regular expressions to inspect API documentation for sensitive information, such as secrets and credentials.
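
In spirit, that inspection looks something like the following sketch; the patterns are illustrative examples, not SwaggerSpy's actual regular expressions:

import re

# Illustrative secret patterns, not SwaggerSpy's actual rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"]([^'\"]{16,})"),
}

def scan(document: str) -> list[tuple[str, str]]:
    """Return (pattern name, match) pairs found in an API document."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(document):
            hits.append((name, match))
    return hits

sample = 'apiKey: "sk_live_0123456789abcdef0123"'
print(scan(sample))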


Getting Started

To use SwaggerSpy, follow these steps:

  1. Installation: Clone the SwaggerSpy repository and install the required dependencies.

git clone https://github.com/UndeadSec/SwaggerSpy.git
cd SwaggerSpy
pip install -r requirements.txt

  2. Usage: Run SwaggerSpy with the target search terms (more accurate with domains).

python swaggerspy.py searchterm

  3. Results: SwaggerSpy will generate a report containing OSINT findings, including information about the API, endpoints, and secrets.

Disclaimer

SwaggerSpy is intended for educational and research purposes only. Users are responsible for ensuring that their use of this tool complies with applicable laws and regulations.


Contribution

Contributions to SwaggerSpy are welcome! Feel free to submit issues, feature requests, or pull requests to help improve this tool.


About the Author

SwaggerSpy is developed and maintained by Alisson Moretto (UndeadSec)

I'm a passionate cyber threat intelligence pro who loves sharing insights and crafting cybersecurity tools.


TODO

Regular Expressions Enhancement
  • [ ] Review and improve existing regular expressions.
  • [ ] Ensure that regular expressions adhere to best practices.
  • [ ] Check for any potential optimizations in the regex patterns.
  • [ ] Test regular expressions with various input scenarios for accuracy.
  • [ ] Document any complex or non-trivial regex patterns for better understanding.
  • [ ] Explore opportunities to modularize or break down complex patterns.
  • [ ] Verify the regular expressions against the latest specifications or requirements.
  • [ ] Update documentation to reflect any changes made to the regular expressions.

License

SwaggerSpy is licensed under the MIT License. See the LICENSE file for details.


Thanks

Special thanks to @Liodeus for providing project inspiration through swaggerHole.



Uscrapper - Powerful OSINT Webscraper For Personal Data Collection

By: Zion3R


Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various personal information from a website. It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on the page, and supports multithreading to make this process faster. Uscrapper 2.0 is equipped with advanced anti-web-scraping bypass modules and supports web crawling to scrape from various sublinks within the same domain. The tool also provides an option to generate a report containing the extracted details.
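
A bare-bones sketch of that extraction idea, pulling emails from both mailto: links and plain page text (illustrative only, not Uscrapper's actual patterns):

import re

EMAIL = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
MAILTO = re.compile(r'href=["\']mailto:([^"\'?]+)')

def extract_emails(html: str) -> set[str]:
    """Collect emails from hyperlinked (mailto:) and non-hyperlinked text."""
    linked = set(MAILTO.findall(html))
    plain = set(EMAIL.findall(html))
    return linked | plain

html = '<a href="mailto:press@example.com">Press</a> or sales@example.com'
print(extract_emails(html))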


Extracted Details:

Uscrapper extracts the following details from the provided website:

  • Email Addresses: Displays email addresses found on the website.
  • Social Media Links: Displays links to various social media platforms found on the website.
  • Author Names: Displays the names of authors associated with the website.
  • Geolocations: Displays geolocation information associated with the website.
  • Non-Hyperlinked Details: Displays non-hyperlinked details found on the website, including email addresses, phone numbers, and usernames.

What's New?

Uscrapper 2.0:

  • Introduced multiple modules to bypass anti-web-scraping techniques.
  • Introduced Crawl and Scrape: an advanced crawl-and-scrape module to scrape websites from within.
  • Implemented multithreading to make these processes faster.

Installation Steps:

git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/ 
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems

Usage:

To run Uscrapper, use the following command-line syntax:

python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]


Arguments:

  • -h, --help: Show the help message and exit.
  • -u URL, --url URL: Specify the URL of the website to extract details from.
  • -c INT, --crawl INT: Specify the number of links to crawl.
  • -t INT, --threads INT: Specify the number of threads to use while crawling and scraping.
  • -O, --generate-report: Generate a report file containing the extracted details.
  • -ns, --nonstrict: Display non-strict usernames during extraction.

Note:

  • Uscrapper relies on web scraping techniques to extract information from websites. Make sure to use it responsibly and in compliance with the website's terms of service and applicable laws.

  • The accuracy and completeness of the extracted details depend on the structure and content of the website being analyzed.

  • To bypass some anti-web-scraping methods we have used Selenium, which can make the overall process slower (see the sketch below).
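
For context, fetching a page through a headless browser looks roughly like this (a minimal Selenium sketch, not Uscrapper's actual module):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # render without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")
    html = driver.page_source  # full DOM after JavaScript has run
finally:
    driver.quit()

print(len(html), "bytes of rendered HTML")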

Contribution:

Want a new feature to be added?

  • Make a pull request with all the necessary details and it will be merged after a review.
  • You can contribute by making the regular expressions more efficient and accurate, or by suggesting some more features that can be added.


Pantheon - Insecure Camera Parser

By: Zion3R


Pantheon is a GUI application that allows users to display information regarding network cameras in various countries as well as an integrated live-feed for non-protected cameras.

Functionalities

Pantheon allows users to execute an API crawler. There was originally functionality without the use of any APIs (like Insecam), but Google's TOS kept getting in the way of the original scraping mechanism.


Installation

  1. git clone https://github.com/josh0xA/Pantheon.git
  2. cd Pantheon
  3. pip3 install -r requirements.txt
    Execution: python3 pantheon.py
  • Note: I will later add a GUI installer to make it fully independent of a CLI.

Windows

  • You can just follow the steps above or download the official package here.
  • Note: the PE binary of Pantheon was put together using PyInstaller, so Windows Defender might get a bit upset.

Ubuntu

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/ubuntu_install.sh
  • ./distros/ubuntu_install.sh

Debian and Kali Linux

  • First, complete steps 1, 2 and 3 listed above.
  • chmod +x distros/debian-kali_install.sh
  • ./distros/debian-kali_install.sh

MacOS

  • The regular installation steps above should suffice. If not, open up an issue.

Usage

(Enter) on a selected IP:Port to establish a Pantheon webview of the camera. (Use this at your own risk)

(Left-click) on a selected IP:Port to view the geolocation of the camera.
(Right-click) on a selected IP:Port to view the HTTP data of the camera (Ctrl+Left-click for Mac).

Adjust the map as you please to see the markers.

  • Also note that this app is far from perfect and not every link that shows up is a live-feed, some are login pages (Do NOT attempt to login).

Ethical Notice

The developer of this program, Josh Schiavone, is not responsible for misuse of this data gathering tool. Pantheon simply provides information that can be indexed by any modern search engine. Do not try to establish unauthorized access to live feeds that are password protected - that is illegal. Furthermore, if you do choose to use Pantheon to view a live feed, do so at your own risk. Pantheon was developed for educational purposes only. For further information, please visit: https://joshschiavone.com/panth_info/panth_ethical_notice.html

Licence

MIT License
Copyright (c) Josh Schiavone



CloakQuest3r - Uncover The True IP Address Of Websites Safeguarded By Cloudflare

By: Zion3R


CloakQuest3r is a powerful Python tool meticulously crafted to uncover the true IP address of websites safeguarded by Cloudflare, a widely adopted web security and performance enhancement service. Its core mission is to accurately discern the actual IP address of web servers that are concealed behind Cloudflare's protective shield. Subdomain scanning is employed as a key technique in this pursuit. This tool is an invaluable resource for penetration testers, security professionals, and web administrators seeking to perform comprehensive security assessments and identify vulnerabilities that may be obscured by Cloudflare's security measures.


Key Features:

  • Real IP Detection: CloakQuest3r excels in the art of discovering the real IP address of web servers employing Cloudflare's services. This crucial information is paramount for conducting comprehensive penetration tests and ensuring the security of web assets.

  • Subdomain Scanning: Subdomain scanning is harnessed as a fundamental component in the process of finding the real IP address. It aids in the identification of the actual server responsible for hosting the website and its associated subdomains.

  • Threaded Scanning: To enhance efficiency and expedite the real IP detection process, CloakQuest3r utilizes threading. This feature enables scanning of a substantial list of subdomains without significantly extending the execution time.

  • Detailed Reporting: The tool provides comprehensive output, including the total number of subdomains scanned, the total number of subdomains found, and the time taken for the scan. Any real IP addresses unveiled during the process are also presented, facilitating in-depth analysis and penetration testing.

With CloakQuest3r, you can confidently evaluate website security, unveil hidden vulnerabilities, and secure your web assets by disclosing the true IP address concealed behind Cloudflare's protective layers.
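
The underlying idea can be sketched as: resolve candidate subdomains and flag any A record that falls outside Cloudflare's published IP ranges (a simplified illustration with a subset of the ranges, not the tool's exact logic):

import ipaddress
import socket

# Subset of Cloudflare's published IPv4 ranges (illustrative, not exhaustive).
CLOUDFLARE_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "104.16.0.0/13", "172.64.0.0/13", "173.245.48.0/20", "162.158.0.0/15",
)]

def behind_cloudflare(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in CLOUDFLARE_RANGES)

for sub in ("www", "mail", "dev"):
    host = f"{sub}.example.com"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        continue  # subdomain does not resolve
    if not behind_cloudflare(ip):
        print(f"possible origin IP: {host} -> {ip}")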

Limitation

- Still in the development phase; sometimes it can't detect the real IP.

- CloakQuest3r combines multiple indicators to uncover real IP addresses behind Cloudflare. While subdomain scanning is a part of the process, we do not assume that all subdomains' A records point to the target host. The tool is designed to provide valuable insights but may not work in every scenario. We welcome any specific suggestions for improvement.

1. False Negatives: CloakQuest3r may not always accurately identify the real IP address behind Cloudflare, particularly for websites with complex network configurations or strict security measures.

2. Dynamic Environments: Websites' infrastructure and configurations can change over time. The tool may not capture these changes, potentially leading to outdated information.

3. Subdomain Variation: While the tool scans subdomains, it doesn't guarantee that all subdomains' A records will point to the primary host. Some subdomains may also be protected by Cloudflare.

This tool is a Proof of Concept and is for Educational Purposes Only.

How to Use:

  1. Run CloakQuest3r with a single command-line argument: the target domain you want to analyze.

     git clone https://github.com/spyboy-productions/CloakQuest3r.git
    cd CloakQuest3r
    pip3 install -r requirements.txt
    python cloakquest3r.py example.com
  2. The tool will check if the website is using Cloudflare. If not, it will inform you that subdomain scanning is unnecessary.

  3. If Cloudflare is detected, CloakQuest3r will scan for subdomains and identify their real IP addresses.

  4. You will receive detailed output, including the number of subdomains scanned, the total number of subdomains found, and the time taken for the scan.

  5. Any real IP addresses found will be displayed, allowing you to conduct further analysis and penetration testing.

CloakQuest3r simplifies the process of assessing website security by providing a clear, organized, and informative report. Use it to enhance your security assessments, identify potential vulnerabilities, and secure your web assets.

Run It Online:

Run it online on replit.com: https://replit.com/@spyb0y/CloakQuest3r



Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, and with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc), from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments to be performed on collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term, following the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if there are globals defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term, following the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

from porchpirate import porchpirate

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Other Examples

Other library usage examples can be located in the examples directory, which contains the following examples:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


OSINT-Framework - OSINT Framework

By: Zion3R


OSINT framework focused on gathering information from free tools or resources. The intention is to help people find free OSINT resources. Some of the sites included might require registration or offer more data for $$$, but you should be able to get at least a portion of the available information for no cost.

I originally created this framework with an information security point of view. Since then, the response from other fields and disciplines has been incredible. I would love to be able to include any other OSINT resources, especially from fields outside of infosec. Please let me know about anything that might be missing!

Please visit the framework at the link below and good hunting!


https://osintframework.com

Legend

(T) - Indicates a link to a tool that must be installed and run locally
(D) - Google Dork, for more information: Google Hacking
(R) - Requires registration
(M) - Indicates a URL that contains the search term and the URL itself must be edited manually

For Update Notifications

Follow me on Twitter: @jnordine - https://twitter.com/jnordine
Watch or star the project on Github: https://github.com/lockfale/osint-framework

Suggestions, Comments, Feedback

Feedback or new tool suggestions are extremely welcome! Please feel free to submit a pull request or open an issue on GitHub, or reach out on Twitter.

Contribute with a GitHub Pull Request

For new resources, please ensure that the site is available for public and free use.

  1. Update the arf.json file in the format shown below. If this isn't the first entry for a folder, add a comma to the last closing brace of the previous entry.
  2. Submit a pull request!
  3. Thank you!

    OSINT Framework Website

    https://osintframework.com

    Happy Hunting!



    Poastal - The Email OSINT Tool

    By: Zion3R


    Poastal is an email OSINT tool that provides valuable information on any email address. With Poastal, you can easily input an email address and it will quickly answer several questions, providing you with crucial information.


    Features

    • Determine the name of the person who has the email.
    • Check if the email is deliverable or not.
    • Find out if the email is disposable or not.
    • Identify if the email is considered spam.
    • Check if the email is registered on popular platforms such as Facebook, Twitter, Snapchat, Parler, Rumble, MeWe, Imgur, Adobe, Wordpress, and Duolingo.

    Usage

    Make sure you have the requirements installed.

    pip install -r requirements.txt

    Navigate to the backend folder and run poastal.py to start the Flask app, which listens on port 8080.

    python poastal.py

    Open index.html in the root directory to use the UI.

    Enter an email address and see the results.

    Test with example@gmail.com.

    There's a new GitHub module.

    If you open up github.py you'll see a section that asks you to replace it with your own API key.

    Feedback

    I hope you find Poastal to be a valuable tool for your OSINT investigations. If you have any feedback or suggestions on how we can improve Poastal, please let me know. I'm always looking for ways to improve this tool to better serve the OSINT community.



    Holehe - Tool To Check If The Mail Is Used On Different Sites Like Twitter, Instagram And Will Retrieve Information On Sites With The Forgotten Password Function

    By: Zion3R

    Holehe Online Version

    Summary

    Efficiently finding registered accounts from emails.

    Holehe checks if an email is attached to an account on sites like Twitter, Instagram, Imgur and more than 120 others.


    Installation

    With PyPI

    pip3 install holehe

    With Github

    git clone https://github.com/megadose/holehe.git
    cd holehe/
    python3 setup.py install

    Quick Start

    Holehe can be run from the CLI and rapidly embedded within existing Python applications.

    CLI Example

    holehe test@gmail.com

    Python Example

    import trio
    import httpx

    from holehe.modules.social_media.snapchat import snapchat


    async def main():
        email = "test@gmail.com"
        out = []
        client = httpx.AsyncClient()

        await snapchat(email, client, out)

        print(out)
        await client.aclose()

    trio.run(main)

    Module Output

    For each module, data is returned in a standard dictionary with the following JSON-equivalent format:

    {
        "name": "example",
        "rateLimit": false,
        "exists": true,
        "emailrecovery": "ex****e@gmail.com",
        "phoneNumber": "0*******78",
        "others": null
    }
    • rateLimit : Lets you know if you've been rate-limited.
    • exists : If an account exists for the email on that service.
    • emailrecovery : Sometimes partially obfuscated recovery emails are returned.
    • phoneNumber : Sometimes partially obfuscated recovery phone numbers are returned.
    • others : Any extra info.

    Rate limit? Change your IP.

    Maltego Transform : Holehe Maltego

    Thank you to:

    Donations

    For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ

    License

    GNU General Public License v3.0

    Built for educational purposes only.

    Modules

    Name Domain Method Frequent Rate Limit
    aboutme about.me register โœ˜
    adobe adobe.com password recovery โœ˜
    amazon amazon.com login โœ˜
    amocrm amocrm.com register โœ˜
    anydo any.do login โœ”
    archive archive.org register โœ˜
    armurerieauxerre armurerie-auxerre.com register โœ˜
    atlassian atlassian.com register โœ˜
    axonaut axonaut.com register โœ˜
    babeshows babeshows.co.uk register โœ˜
    badeggsonline badeggsonline.com register โœ˜
    biosmods bios-mods.com register โœ˜
    biotechnologyforums biotechnologyforums.com register โœ˜
    bitmoji bitmoji.com login โœ˜
    blablacar blablacar.com register โœ”
    blackworldforum blackworldforum.com register โœ”
    blip blip.fm register โœ”
    blitzortung forum.blitzortung.org register โœ˜
    bluegrassrivals bluegrassrivals.com register โœ˜
    bodybuilding bodybuilding.com register โœ˜
    buymeacoffee buymeacoffee.com register โœ”
    cambridgemt discussion.cambridge-mt.com register โœ˜
    caringbridge caringbridge.org register โœ˜
    chinaphonearena chinaphonearena.com register โœ˜
    clashfarmer clashfarmer.com register โœ”
    codecademy codecademy.com register โœ”
    codeigniter forum.codeigniter.com register โœ˜
    codepen codepen.io register โœ˜
    coroflot coroflot.com register โœ˜
    cpaelites cpaelites.com register โœ˜
    cpahero cpahero.com register โœ˜
    cracked_to cracked.to register โœ”
    crevado crevado.com register โœ”
    deliveroo deliveroo.com register โœ”
    demonforums demonforums.net register โœ”
    devrant devrant.com register โœ˜
    diigo diigo.com register โœ˜
    discord discord.com register โœ˜
    docker docker.com register โœ˜
    dominosfr dominos.fr register โœ”
    ebay ebay.com login โœ”
    ello ello.co register โœ˜
    envato envato.com register โœ˜
    eventbrite eventbrite.com login โœ˜
    evernote evernote.com login โœ˜
    fanpop fanpop.com register โœ˜
    firefox firefox.com register โœ˜
    flickr flickr.com login โœ˜
    freelancer freelancer.com register โœ˜
    freiberg drachenhort.user.stunet.tu-freiberg.de register โœ˜
    garmin garmin.com register โœ”
    github github.com register โœ˜
    google google.com register โœ”
    gravatar gravatar.com other โœ˜
    hubspot hubspot.com login โœ˜
    imgur imgur.com register โœ”
    insightly insightly.com login โœ˜
    instagram instagram.com register โœ”
    issuu issuu.com register โœ˜
    koditv forum.kodi.tv register โœ˜
    komoot komoot.com register โœ”
    laposte laposte.fr register โœ˜
    lastfm last.fm register โœ˜
    lastpass lastpass.com register โœ˜
    mail_ru mail.ru password recovery โœ˜
    mybb community.mybb.com register โœ˜
    myspace myspace.com register โœ˜
    nattyornot nattyornotforum.nattyornot.com register โœ˜
    naturabuy naturabuy.fr register โœ˜
    ndemiccreations forum.ndemiccreations.com register โœ˜
    nextpvr forums.nextpvr.com register โœ˜
    nike nike.com register โœ˜
    nimble nimble.com register โœ˜
    nocrm nocrm.io register โœ˜
    nutshell nutshell.com register โœ˜
    odnoklassniki ok.ru password recovery โœ˜
    office365 office365.com other โœ”
    onlinesequencer onlinesequencer.net register โœ˜
    parler parler.com login โœ˜
    patreon patreon.com login โœ”
    pinterest pinterest.com register โœ˜
    pipedrive pipedrive.com register โœ˜
    plurk plurk.com register โœ˜
    pornhub pornhub.com register โœ˜
    protonmail protonmail.ch other โœ˜
    quora quora.com register โœ˜
    rambler rambler.ru register โœ˜
    redtube redtube.com register โœ˜
    replit replit.com register โœ”
    rocketreach rocketreach.co register โœ˜
    samsung samsung.com register โœ˜
    seoclerks seoclerks.com register โœ˜
    sevencups 7cups.com register โœ”
    smule smule.com register โœ”
    snapchat snapchat.com login โœ˜
    soundcloud soundcloud.com register โœ˜
    sporcle sporcle.com register โœ˜
    spotify spotify.com register โœ”
    strava strava.com register โœ˜
    taringa taringa.net register โœ”
    teamleader teamleader.com register โœ˜
    teamtreehouse teamtreehouse.com register โœ˜
    tellonym tellonym.me register โœ˜
    thecardboard thecardboard.org register โœ˜
    therianguide forums.therian-guide.com register โœ˜
    thevapingforum thevapingforum.com register โœ˜
    tumblr tumblr.com register โœ˜
    tunefind tunefind.com register โœ”
    twitter twitter.com register โœ˜
    venmo venmo.com register โœ”
    vivino vivino.com register โœ˜
    voxmedia voxmedia.com register โœ˜
    vrbo vrbo.com register โœ˜
    vsco vsco.co register โœ˜
    wattpad wattpad.com register โœ”
    wordpress wordpress login โœ˜
    xing xing.com register โœ˜
    xnxx xnxx.com register โœ”
    xvideos xvideos.com register โœ˜
    yahoo yahoo.com login โœ”
    zoho zoho.com login โœ”


    Xsubfind3R - A CLI Utility To Find Domain'S Known Subdomains From Curated Passive Online Sources

    By: Zion3R


    xsubfind3r is a command-line interface (CLI) utility to find a domain's known subdomains from curated passive online sources.


    Features

    • Fetches domains from curated passive sources to maximize results.

    • Supports stdin and stdout for easy integration into workflows.

    • Cross-Platform (Windows, Linux & macOS).

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xsubfind3r-<version>-linux-amd64.tar.gz

    TIP: The steps above, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xsubfind3r/releases/download/v<version>/xsubfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xsubfind3r executable.

    ...move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xsubfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xsubfind3r/cmd/xsubfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xsubfind3r.git 
    • Build the utility

       cd xsubfind3r/cmd/xsubfind3r && \
      go build .
    • Move the xsubfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xsubfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xsubfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xsubfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xsubfind3r will work right after installation. However, BeVigil, Chaos, Fullhunt, Github, Intelligence X and Shodan require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xsubfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.3.0
    sources:
        - alienvault
        - anubis
        - bevigil
        - chaos
        - commoncrawl
        - crtsh
        - fullhunt
        - github
        - hackertarget
        - intelx
        - shodan
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        chaos:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39asdsd54bbc1aabb208c9acfb
        fullhunt:
            - 0d9652ce-516c-4315-b589-9b241ee6dc24
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        shodan:
            - AAAAClP1bJJSRMEYJazgwhJKrggRwKA
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xsubfind3r use the -h flag:

    xsubfind3r -h

    help message:


    _ __ _ _ _____
    __ _____ _ _| |__ / _(_)_ __ __| |___ / _ __
    \ \/ / __| | | | '_ \| |_| | '_ \ / _` | |_ \| '__|
    > <\__ \ |_| | |_) | _| | | | | (_| |___) | |
    /_/\_\___/\__,_|_.__/|_| |_|_| |_|\__,_|____/|_| v0.3.0

    USAGE:
    xsubfind3r [OPTIONS]

    INPUT:
    -d, --domain string[] target domains
    -l, --list string target domains' list file path

    SOURCES:
    --sources bool list supported sources
    -u, --sources-to-use string[] comma(,) separated sources to use
    -e, --sources-to-exclude string[] comma(,) separated sources to exclude

    OPTIMIZATION:
    -t, --threads int number of threads (default: 50)

    OUTPUT:
    --no-color bool disable colored output
    -o, --output string output subdomains' file path
    -O, --output-directory string output subdomains' directory path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xsubfind3r/config.yaml)

    Contribution

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Xurlfind3R - A CLI Utility To Find Domain'S Known URLs From Curated Passive Online Sources

    By: Zion3R


    xurlfind3r is a command-line interface (CLI) utility to find a domain's known URLs from curated passive online sources.


    Features

    Installation

    Install release binaries (Without Go Installed)

    Visit the releases page and find the appropriate archive for your operating system and architecture. Download the archive from your browser or copy its URL and retrieve it with wget or curl:

    • ...with wget:

       wget https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz
    • ...or, with curl:

       curl -OL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz

    ...then, extract the binary:

    tar xf xurlfind3r-<version>-linux-amd64.tar.gz

    TIP: The steps above, download and extract, can be combined into a single step with this one-liner:

    curl -sL https://github.com/hueristiq/xurlfind3r/releases/download/v<version>/xurlfind3r-<version>-linux-amd64.tar.gz | tar -xzv

    NOTE: On Windows systems, you should be able to double-click the zip archive to extract the xurlfind3r executable.

    ...move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

    sudo mv xurlfind3r /usr/local/bin/

    NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    Install source (With Go Installed)

    Before you install from source, you need to make sure that Go is installed on your system. You can install Go by following the official instructions for your operating system. For this, we will assume that Go is already installed.

    go install ...

    go install -v github.com/hueristiq/xurlfind3r/cmd/xurlfind3r@latest

    go build ... the development Version

    • Clone the repository

       git clone https://github.com/hueristiq/xurlfind3r.git 
    • Build the utility

       cd xurlfind3r/cmd/xurlfind3r && \
      go build .
    • Move the xurlfind3r binary to somewhere in your PATH. For example, on GNU/Linux and OS X systems:

       sudo mv xurlfind3r /usr/local/bin/

      NOTE: Windows users can follow How to: Add Tool Locations to the PATH Environment Variable in order to add xurlfind3r to their PATH.

    NOTE: While the development version is a good way to take a peek at xurlfind3r's latest features before they get released, be aware that it may have bugs. Officially released versions will generally be more stable.

    Post Installation

    xurlfind3r will work right after installation. However, BeVigil, Github and Intelligence X require API keys to work; URLScan supports an API key but does not require one. The API keys are stored in the $HOME/.hueristiq/xurlfind3r/config.yaml file - created upon first run - which uses the YAML format. Multiple API keys can be specified for each of these sources, from which one will be used.

    Example config.yaml:

    version: 0.2.0
    sources:
        - bevigil
        - commoncrawl
        - github
        - intelx
        - otx
        - urlscan
        - wayback
    keys:
        bevigil:
            - awA5nvpKU3N8ygkZ
        github:
            - d23a554bbc1aabb208c9acfbd2dd41ce7fc9db39
            - asdsd54bbc1aabb208c9acfbd2dd41ce7fc9db39
        intelx:
            - 2.intelx.io:00000000-0000-0000-0000-000000000000
        urlscan:
            - d4c85d34-e425-446e-d4ab-f5a3412acbe8

    Usage

    To display help message for xurlfind3r use the -h flag:

    xurlfind3r -h

    help message:

                     _  __ _           _ _____      
    __ ___ _ _ __| |/ _(_)_ __ __| |___ / _ __
    \ \/ / | | | '__| | |_| | '_ \ / _` | |_ \| '__|
    > <| |_| | | | | _| | | | | (_| |___) | |
    /_/\_\\__,_|_| |_|_| |_|_| |_|\__,_|____/|_| v0.2.0

    USAGE:
    xurlfind3r [OPTIONS]

    TARGET:
    -d, --domain string (sub)domain to match URLs

    SCOPE:
    --include-subdomains bool match subdomain's URLs

    SOURCES:
    -s, --sources bool list sources
    -u, --use-sources string sources to use (default: bevigil,commoncrawl,github,intelx,otx,urlscan,wayback)
    --skip-wayback-robots bool with wayback, skip parsing robots.txt snapshots
    --skip-wayback-source bool with wayback, skip parsing source code snapshots

    FILTER & MATCH:
    -f, --filter string regex to filter URLs
    -m, --match string regex to match URLs

    OUTPUT:
    --no-color bool no color mode
    -o, --output string output URLs file path
    -v, --verbosity string debug, info, warning, error, fatal or silent (default: info)

    CONFIGURATION:
    -c, --configuration string configuration file path (default: ~/.hueristiq/xurlfind3r/config.yaml)

    Examples

    Basic

    xurlfind3r -d hackerone.com --include-subdomains

    Filter Regex

    # filter images
    xurlfind3r -d hackerone.com --include-subdomains -f '^https?://[^/]*?/.*\.(jpg|jpeg|png|gif|bmp)(\?[^\s]*)?$'

    Match Regex

    # match js URLs
    xurlfind3r -d hackerone.com --include-subdomains -m '^https?://[^/]*?/.*\.js(\?[^\s]*)?$'

    Contributing

    Issues and Pull Requests are welcome! Check out the contribution guidelines.

    Licensing

    This utility is distributed under the MIT license.



    Exploring the Dark Side: OSINT Tools and Techniques for Unmasking Dark Web Operations

    On April 5, 2023, the FBI and Dutch National Police announced the takedown of Genesis Market, one of the largest dark web marketplaces. The operation, dubbed "Operation Cookie Monster," resulted in the arrest of 119 people and the seizure of over $1M in cryptocurrency. You can read the FBI's warrant here for details specific to this case. In light of these events, I'd like to discuss how OSINT

    LinkedInDumper - Tool To Dump Company Employees From LinkedIn API

    By: Zion3R

    Python 3 script to dump company employees from the LinkedIn API.

    Description

    LinkedInDumper is a Python 3 script that dumps employee data from the LinkedIn social networking platform.

    The results contain firstname, lastname, position (title), location and a user's profile link. Only 2 API calls are required to retrieve all employees if the company does not have more than 10 employees. Otherwise, we have to paginate through the API results. With the --email-format CLI flag one can define a Python string format to auto-generate email addresses based on the retrieved first and last name.
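
    That format string works exactly like Python's str.format; a quick illustration using the patterns from the help text below:

    first, last = "john", "doe"

    # '{0}.{1}@example.com' -> john.doe@example.com
    print("{0}.{1}@example.com".format(first, last))

    # '{0[0]}.{1}@example.com' -> j.doe@example.com ({0[0]} is the first letter)
    print("{0[0]}.{1}@example.com".format(first, last))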


    Requirements

    LinkedInDumper talks with the unofficial LinkedIn Voyager API, which requires authentication. Therefore, you must have a valid LinkedIn user account. To keep it simple, LinkedInDumper just expects a cookie value provided by you. Doing it this way, even 2FA protected accounts are supported. Furthermore, you are tasked to provide a LinkedIn company URL to dump employees from.

    Retrieving LinkedIn Cookie

    1. Sign into www.linkedin.com and retrieve your li_at session cookie value e.g. via developer tools
    2. Specify the cookie value either persistently in the python script's variable li_at or temporarily during runtime via the CLI flag --cookie

    Retrieving LinkedIn Company URL

    1. Search your target company on Google Search or directly on LinkedIn
    2. The LinkedIn company URL should look something like this: https://www.linkedin.com/company/apple

    Usage

    usage: linkedindumper.py [-h] --url <linkedin-url> [--cookie <cookie>] [--quiet] [--include-private-profiles] [--email-format EMAIL_FORMAT]

    options:
    -h, --help show this help message and exit
    --url <linkedin-url> A LinkedIn company url - https://www.linkedin.com/company/<company>
    --cookie <cookie> LinkedIn 'li_at' session cookie
    --quiet Show employee results only
    --include-private-profiles
    Show private accounts too
    --email-format Python string format for emails; for example:
    [1] john.doe@example.com > '{0}.{1}@example.com'
    [2] j.doe@example.com > '{0[0]}.{1}@example.com'
    [3] jdoe@example.com > '{0[0]}{1}@example.com'
    [4] doe@example.com > '{1}@example.com'
    [5] john@example.com > '{0}@example.com'
    [6] jd@example.com > '{0[0]}{1[0]}@example.com'

    Example 1 - Docker Run

    docker run --rm l4rm4nd/linkedindumper:latest --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Example 2 - Native Python

    # install dependencies
    pip install -r requirements.txt

    python3 linkedindumper.py --url 'https://www.linkedin.com/company/apple' --cookie <cookie> --email-format '{0}.{1}@apple.de'

    Outputs

    The script will return employee data as semi-colon separated values (like CSV):

    [LinkedInDumper ASCII art banner, signed "by LRVT"]

    [i] Company Name: apple
    [i] Company X-ID: 162479
    [i] LN Employees: 1000 employees found
    [i] Dumping Date: 17/10/2022 13:55:06
    [i] Email Format: {0}.{1}@apple.de
    Firstname;Lastname;Email;Position;Gender;Location;Profile
    Katrin;Honauer;katrin.honauer@apple.com;Software Engineer at Apple;N/A;Heidelberg;https://www.linkedin.com/in/katrin-honauer
    Raymond;Chen;raymond.chen@apple.com;Recruiting at Apple;N/A;Austin, Texas Metropolitan Area;https://www.linkedin.com/in/raytherecruiter

    [i] Successfully crawled 2 unique apple employee(s). Hurray ^_-

    Limitations

    LinkedIn will only return the first 1,000 search results when harvesting contact information. You may also need a LinkedIn premium account when you have reached the maximum number of queries allowed for visiting profiles with your freemium LinkedIn account.

    Furthermore, not all employee profiles are public. The results vary depending on the LinkedIn account you use and whether you are connected with some employees of the company to crawl. Therefore, it is sometimes not possible to retrieve the firstname, lastname and profile url of some employee accounts. The script will not display such profiles, as they contain default values such as "LinkedIn" as the firstname and "Member" as the lastname. If you want to include such private profiles, please use the CLI flag --include-private-profiles. Although some accounts may be private, we can obtain the position (title) as well as the location of such accounts. Only firstname, lastname and profile URL are hidden for private LinkedIn accounts.
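
    A sketch of that default-value filter (illustrative, not the script's exact code):

    def is_private(employee: dict) -> bool:
        # Private profiles come back with placeholder names.
        return employee["firstname"] == "LinkedIn" and employee["lastname"] == "Member"

    employees = [
        {"firstname": "Katrin", "lastname": "Honauer"},
        {"firstname": "LinkedIn", "lastname": "Member"},  # private profile
    ]

    # Default behaviour: drop private profiles unless --include-private-profiles is set.
    include_private = False
    visible = [e for e in employees if include_private or not is_private(e)]
    print(visible)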

    Finally, LinkedIn users are free to name their profile however they like. An account name can therefore consist of various things such as salutations, abbreviations, emojis, middle names etc. I tried my best to remove some of this nonsense. However, this is not a complete solution to the general problem. Note that we are not using the official LinkedIn API. This script gathers information from the "unofficial" Voyager API.



    SpiderSuite - Advance Web Spider/Crawler For Cyber Security Professionals

    By: Zion3R


    An advanced cross-platform and multi-feature GUI web spider/crawler for cyber security professionals. Spider Suite can be used for attack surface mapping and analysis. For more information visit SpiderSuite's website.


    Installation and Usage

    SpiderSuite is designed for easy installation and usage, even for first-timers.

    • First, download the package of your choice.

    • Then install the downloaded SpiderSuite package.

    • See the First time crawling with SpiderSuite article for a tutorial on how to get started.

    For complete documentation of SpiderSuite, see the wiki.

    Contributing

    Can you translate?

    Visit SpiderSuite's translation project to make translations to your native language.

    Not a developer?

    You can help by reporting bugs, requesting new features, improving the documentation, sponsoring the project & writing articles.

    For More information see contribution guide.

    Contributors

    Credits

    This product includes software developed by the following open source projects:



    How to Set Up a Threat Hunting and Threat Intelligence Program

    Threat hunting is an essential component of your cybersecurity strategy. Whether you're getting started or in an advanced state, this article will help you ramp up your threat intelligence program. What is Threat Hunting? The cybersecurity industry is shifting from a reactive to a proactive approach. Instead of waiting for cybersecurity alerts and then addressing them, security organizations are

    Seekr - A Multi-Purpose OSINT Toolkit With A Neat Web-Interface


    A multi-purpose toolkit for gathering and managing OSINT data with a neat web interface.


    Introduction

    Seekr is a multi-purpose toolkit for gathering and managing OSINT data with a sleek web interface. The backend is written in Go and offers a wide range of features for data collection, organization, and analysis. Whether you're a researcher, an investigator, or just someone looking to gather information, seekr makes it easy to find and manage the data you need. Give it a try and see how it can streamline your OSINT workflow!

    Check the wiki for setup guide, etc.

    Why use seekr over my current tool?

    Seekr combines note-taking and OSINT in one application and can be used alongside your current tools. It is designed with OSINT in mind and optimized for real-world use cases.

    Key features

    • Database for OSINT targets
    • GitHub to email (see the sketch after this list)
    • Account cards for each person in the database
    • Account discovery integrating with the account cards
    • Predefined commonly used fields in the database
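
    The "GitHub to email" feature refers to a well-known OSINT technique: a user's public push events expose commit author emails through the GitHub REST API. A minimal sketch of that technique in Python, assuming the standard public-events endpoint; how seekr implements this internally is an assumption, not documented here:

    import requests

    def github_emails(username):
        # Public PushEvents include commit author emails
        events = requests.get(
            f"https://api.github.com/users/{username}/events/public",
            timeout=10,
        ).json()
        emails = set()
        for event in events:
            if event.get("type") == "PushEvent":
                for commit in event["payload"].get("commits", []):
                    emails.add(commit["author"]["email"])
        return emails

    print(github_emails("octocat"))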

    Getting Started - Installation

    Windows

    Download the latest exe here

    Linux (stable)

    Download the latest stable binary here

    Linux (unstable)

    To install seekr on Linux, simply run:

    git clone https://github.com/seekr-osint/seekr
    cd seekr
    go run main.go

    Now open the web interface in your browser of choice.

    Run on NixOS

    Seekr is built with NixOS in mind and therefore supports Nix flakes. To run seekr on NixOS, run the following commands:

    nix shell github:seekr-osint/seekr
    seekr

    Integrating seekr into your current workflow

    The project illustrates this with a Mermaid journey diagram:

    journey
        title How to integrate seekr into your current workflow.
        section Initial Research
          Create a person in seekr: 100: seekr
          Simple web research: 100: Known tools
          Account scan: 100: seekr
        section Deeper account investigation
          Investigate the accounts: 100: seekr, Known tools
          Keep notes: 100: seekr
        section Deeper Web research
          Deep web research: 100: Known tools
          Keep notes: 100: seekr
        section Finishing the report
          Export the person with seekr: 100: seekr
          Done.: 100

    Feedback

    We would love to hear from you. Tell us your opinion of seekr and where we need to improve. You can do this by opening an issue, or even by telling others about your experience on your blog or elsewhere.

    Legal Disclaimer

    This tool is intended for legitimate and lawful use only. It is provided for educational and research purposes, and should not be used for any illegal or malicious activities, including doxxing. Doxxing is the practice of researching and broadcasting private or identifying information about an individual, without their consent, and can be illegal. The creators and contributors of this tool will not be held responsible for any misuse or damage caused by this tool. By using this tool, you agree to use it only for lawful purposes and to comply with all applicable laws and regulations. It is the responsibility of the user to ensure compliance with all relevant laws and regulations in the jurisdiction in which they operate. Misuse of this tool may result in criminal and/or civil prosecution.



    Octosuite - Advanced Github OSINT Framework


    A framework for gathering OSINT on GitHub users, repositories and organizations


    Wiki

    Refer to the Wiki for installation instructions, in addition to all other documentation.

    Features

    • Fetches an organization's profile information
    • Fetches an organization's events
    • Returns an organization's repositories
    • Returns an organization's public members
    • Fetches a repository's information
    • Returns a repository's contributors
    • Returns a repository's languages
    • Fetches a repository's stargazers
    • Fetches a repository's forks
    • Fetches a repository's releases
    • Returns a list of files in a specified path of a repository
    • Fetches a user's profile information
    • Returns a user's gists
    • Returns organizations that a user owns/belongs to
    • Fetches a user's events
    • Fetches a list of users followed by the target
    • Fetches a user's followers
    • Checks if user A follows user B
    • Checks if user is a public member of an organizations
    • Returns a user's subscriptions
    • Searches users
    • Searches repositories
    • Searches topics
    • Searches issues
    • Searches commits
    • Automatically logs network activity (.logs folder)
    • User can view, read and delete logs
    • ...And more
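
    Most of the features above map onto plain GitHub REST API endpoints. A minimal sketch of one such lookup (an organization's profile) in Python; this illustrates the kind of data returned, not Octosuite's internal code:

    import requests

    # Unauthenticated requests are rate-limited by GitHub; a token raises the limit
    org = requests.get("https://api.github.com/orgs/github", timeout=10).json()
    print(org["name"], org["public_repos"], org["created_at"])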

    Note

    Octosuite automatically logs network and user activity for each session; the logs are saved by date and time in the .logs folder.



    Aiodnsbrute - DNS Asynchronous Brute Force Utility


    A Python 3.5+ tool that uses asyncio to brute force domain names asynchronously.
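
    The core pattern is straightforward: fire many DNS queries concurrently and collect whatever resolves. A minimal sketch using aiodns (the library credited below), with an asyncio.Semaphore standing in for the tool's --max-tasks bound; this is an illustration of the technique, not aiodnsbrute's actual source:

    import asyncio
    import aiodns

    async def resolve_one(resolver, name):
        try:
            answers = await resolver.query(name, "A")
            print(f"[+] {name} {', '.join(a.host for a in answers)}")
        except aiodns.error.DNSError:
            pass  # NXDOMAIN or timeout: not a live subdomain

    async def brute(domain, words, max_tasks=512):
        resolver = aiodns.DNSResolver()
        sem = asyncio.Semaphore(max_tasks)  # bound concurrency, like -t
        async def bounded(name):
            async with sem:
                await resolve_one(resolver, name)
        await asyncio.gather(*(bounded(f"{w}.{domain}") for w in words))

    asyncio.run(brute("example.com", ["www", "mail", "dev"]))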

    Speed

    It's fast. Benchmarks on small VPS hosts put around 100k DNS resolutions at 1.5-2 minutes. An Amazon M3 box was used to make 1 million requests in just over 3 minutes. Your mileage may vary. It's probably best to avoid using Google's resolvers if you're purely interested in speed.

    DISCLAIMER

    • Your ISP's and home router's DNS servers probably suck. Stick to a VPS with fast resolvers (or set up your own) if you're after speed.
    • WARNING This tool is capable of sending LARGE amounts of DNS traffic. I am not responsible if you DoS someone's DNS servers.

    Installation

    $ pip install aiodnsbrute

    Note: using a virtualenv is highly recommended.

    Alternate install

    Alternatively, you can install the usual way:

    $ git clone https://github.com/blark/aiodnsbrute.git
    $ cd aiodnsbrute
    $ python setup.py install

    Usage

    Get help:

    $ aiodnsbrute --help

    Usage: cli.py [OPTIONS] DOMAIN

    aiodnsbrute is a command line tool for brute forcing domain names
    utilizing Python's asyncio module.

    credit: blark (@markbaseggio)

    Options:
    -w, --wordlist TEXT Wordlist to use for brute force.
    -t, --max-tasks INTEGER Maximum number of tasks to run asynchronously.
    -r, --resolver-file FILENAME A text file containing a list of DNS resolvers
    to use, one per line, comments start with #.
    Default: use system resolvers
    -v, --verbosity Increase output verbosity
    -o, --output [csv|json|off] Output results to DOMAIN.csv/json (extension
    automatically appended when not using -f).
    -f, --outfile FILENAME Output filename. Use '-f -' to send file
    output to stdout overriding normal output.
    --query / --gethostbyname DNS lookup type to use query (default) should
    be faster, but won't return CNAME information.
    --wildcard / --no-wildcard Wildcard detection, enabled by default
    --verify / --no-verify Verify domain name is sane before beginning,
    enabled by default
    --version Show the version and exit.
    --help Show this message and exit.

    Examples

    Run a brute force with some custom options:

    $ aiodnsbrute -w wordlist.txt -vv -t 1024 domain.com

    Run a brute force, suppress normal output and send only JSON to stdout:

    $ aiodnsbrute -f - -o json domain.com

    ...for an advanced pattern, use custom resolvers and pipe output into the awesome jq:

    $ aiodnsbrute -r resolvers.txt -f - -o json google.com | jq '.[] | select(.ip[] | startswith("172."))'

    Wildcard detection enabled by default (--no-wildcard turns it off):

    $ aiodnsbrute foo.com

    [*] Brute forcing foo.com with a maximum of 512 concurrent tasks...
    [*] Using recursive DNS with the following servers: ['50.116.53.5', '50.116.58.5', '50.116.61.5']
    [!] Wildcard response detected, ignoring answers containing ['23.23.86.44']
    [*] Wordlist loaded, proceeding with 1000 DNS requests
    [+] www.foo.com 52.73.176.251, 52.4.225.20
    100%|████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:05<00:00, 140.18records/s]
    [*] Completed, 1 subdomains found
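
    The wildcard detection shown above can be approximated by resolving a label that should not exist: if it resolves anyway, the zone has a wildcard record and those IPs should be ignored. A minimal sketch, again using aiodns; the random-probe approach is an assumption about how such detection generally works, not aiodnsbrute's exact logic:

    import asyncio
    import uuid
    import aiodns

    async def wildcard_ips(domain):
        resolver = aiodns.DNSResolver()
        probe = f"{uuid.uuid4().hex}.{domain}"  # random label, should be NXDOMAIN
        try:
            return {a.host for a in await resolver.query(probe, "A")}
        except aiodns.error.DNSError:
            return set()  # no wildcard configured

    print(asyncio.run(wildcard_ips("example.com")))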

    NEW: use gethostbyname (detects CNAMEs, which can be handy for potential subdomain takeover detection)

    $ aiodnsbrute --gethostbyname domain.com

    Supply a list of resolvers from a file (blank lines and lines starting with # are ignored); specify -r - to read the list from stdin.

    $ aiodnsbrute -r resolvers.txt domain.com

    Thanks

    • Wordlists are from bitquark's dnspop repo (except the 10 mil entry one which I created using his tool).
    • Click for making CLI apps so easy.
    • tqdm powers the pretty progress bar!
    • aiodns for providing the Python async interface to pycares which makes this all possible!

    Notes

    • You might want to run ulimit -n to see how many open files are allowed. You can also increase that number using the same command, e.g. ulimit -n 2048


    Blackbird - An OSINT Tool To Search For Accounts By Username In 101 Social Networks

    Blackbird

    An OSINT tool to search fast for accounts by username across 101 sites.

    The Lockheed SR-71 "Blackbird" is a long-range, high-altitude, Mach 3+ strategic reconnaissance aircraft developed and manufactured by the American aerospace company Lockheed Corporation.

    Disclaimer

    This program, and any previous version of it, is for educational purposes ONLY. Do not use it without permission.
    The usual disclaimer applies, especially the fact that I (P1ngul1n0) am not liable for any
    damages caused by direct or indirect use of the information or functionality provided by these
    programs. The author and any Internet provider bear NO responsibility for content or misuse
    of these programs or any derivatives thereof. By using these programs you accept the fact
    that any damage (data loss, system crash, system compromise, etc.) caused by the use of these
    programs is not P1ngul1n0's responsibility.

    Setup

    Clone the repository

    git clone https://github.com/p1ngul1n0/blackbird
    cd blackbird

    Install requirements

    pip install -r requirements.txt

    Usage

    Search by username

    python blackbird.py -u username

    Run WebServer

    python blackbird.py --web

    Access http://127.0.0.1:5000 on the browser

    Read results file

    python blackbird.py -f username.json

    List supported sites

    python blackbird.py --list-sites

    Supported Social Networks

    1. Facebook
    2. YouTube
    3. Twitter
    4. Telegram
    5. TikTok
    6. Tinder
    7. Instagram
    8. Pinterest
    9. Snapchat
    10. Reddit
    11. Soundcloud
    12. Github
    13. Steam
    14. Linktree
    15. Xbox Gamertag
    16. Twitter Archived
    17. Xvideos
    18. PornHub
    19. Xhamster
    20. Periscope
    21. Ask FM
    22. Vimeo
    23. Twitch
    24. Pastebin
    25. WordPress Profile
    26. WordPress Site
    27. AllMyLinks
    28. Buzzfeed
    29. JsFiddle
    30. Sourceforge
    31. Kickstarter
    32. Smule
    33. Blogspot
    34. Tradingview
    35. Internet Archive
    36. Alura
    37. Behance
    38. MySpace
    39. Disqus
    40. Slideshare
    41. Rumble
    42. Ebay
    43. RedBubble
    44. Kik
    45. Roblox
    46. Armor Games
    47. Fortnite Tracker
    48. Duolingo
    49. Chess
    50. Shopify
    51. Untappd
    52. Last FM
    53. Cash APP
    54. Imgur
    55. Trello
    56. MCUUID Minecraft
    57. Patreon
    58. DockerHub
    59. Kongregate
    60. Vine
    61. Gamespot
    62. Shutterstock
    63. Chaturbate
    64. ProtonMail
    65. TripAdvisor
    66. RapidAPI
    67. HackTheBox
    68. Wikipedia
    69. Buymeacoffe
    70. Arduino
    71. League of Legends Tracker
    72. Lego Ideas
    73. Fiverr
    74. Redtube
    75. Dribble
    76. Packet Storm Security
    77. Ello
    78. Medium
    79. Hackaday
    80. Keybase
    81. HackerOne
    82. BugCrowd
    83. DevPost
    84. OneCompiler
    85. TryHackMe
    86. Lyrics Training
    87. Expo
    88. RAWG
    89. Coroflot
    90. Cloudflare
    91. Wattpad
    92. Mixlr
    93. ImageShack
    94. Freelancer
    95. Dev To
    96. BitBucket
    97. Ko Fi
    98. Flickr
    99. HackerEarth
    100. Spotify
    101. Snapchat Stories

    Supersonic speed

    Blackbird sends asynchronous HTTP requests, which greatly speeds up account discovery.
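
    A minimal sketch of that async pattern in Python with aiohttp, checking one username against several profile-URL templates concurrently; the site list and the 200-means-exists check are illustrative assumptions, not blackbird's data.json:

    import asyncio
    import aiohttp

    SITES = [
        "https://github.com/{username}",
        "https://www.reddit.com/user/{username}",
    ]

    async def check(session, template, username):
        url = template.format(username=username)
        async with session.get(url) as resp:
            if resp.status == 200:  # naive existence check
                print("[+]", url)

    async def main(username):
        async with aiohttp.ClientSession() as session:
            await asyncio.gather(*(check(session, t, username) for t in SITES))

    asyncio.run(main("someuser"))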

    JSON Template

    Blackbird uses JSON as a template to store and read data.

    The data.json file stores all sites that blackbird verifies.

    Params

    • app - Site name
    • url - Request URL (may contain the {username} placeholder, as in the examples below)
    • valid - Python expression that returns True when the user exists
    • id - Unique numeric ID
    • method - HTTP method
    • json - JSON POST body (needs to be escaped; use
      https://codebeautify.org/json-escape-unescape)
    • {username} - Username placeholder (URL or body)
    • response.status - HTTP response status
    • responseContent - Raw response body
    • soup - BeautifulSoup-parsed response body
    • jsonData - JSON response body

    Examples

    GET

    {
        "app": "ExampleAPP1",
        "url": "https://www.example.com/{username}",
        "valid": "response.status == 200",
        "id": 1,
        "method": "GET"
    }

    POST JSON

    {
        "app": "ExampleAPP2",
        "url": "https://www.example.com/user",
        "valid": "jsonData['message']['found'] == True",
        "json": "{{\"type\": \"username\",\"input\": \"{username}\"}}",
        "id": 2,
        "method": "POST"
    }
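
    The "valid" field is a Python expression evaluated against the names documented above (response.status, jsonData, etc.). A hedged illustration of how such an expression could be evaluated; blackbird's actual dispatch code is an assumption here:

    class FakeResponse:
        # Stand-in for an HTTP response object exposing .status
        status = 200

    response = FakeResponse()
    jsonData = {"message": {"found": True}}

    template = {"valid": "jsonData['message']['found'] == True"}
    exists = eval(template["valid"], {"response": response, "jsonData": jsonData})
    print(exists)  # True -> username exists on this site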

    If you have a suggestion for a site to include in the search, make a pull request following the template.

    Contact

    Feel free to contact me on Twitter



    Socialhunter - Crawls The Website And Finds Broken Social Media Links That Can Be Hijacked


    Crawls the given URL and finds broken social media links that can be hijacked. Broken social links may allow an attacker to conduct phishing attacks, and can also damage the company's reputation. Broken social media hijack issues are usually accepted in bug bounty programs.


    Currently, it supports Twitter, Facebook, Instagram and Tiktok without any API keys.


    Installation

    From Binary

    You can download the pre-built binaries from the releases page and run them. For example:

    wget https://github.com/utkusen/socialhunter/releases/download/v0.1.1/socialhunter_0.1.1_Linux_amd64.tar.gz

    tar xzvf socialhunter_0.1.1_Linux_amd64.tar.gz

    ./socialhunter --help

    From Source

    1. Install Go on your system
    2. Run: go get -u github.com/utkusen/socialhunter

    Usage

    socialhunter takes 2 parameters:

    -f : Path of the text file that contains URLs line by line. The crawl function is path-aware. For example, if the URL is https://utkusen.com/blog, it only crawls the pages under /blog path

    -w : The number of workers to run (e.g. -w 10). The default value is 5. You can increase or decrease this by testing your system's capability.
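
    The underlying idea is easy to prototype: collect social-profile links from a page and flag the ones whose profiles no longer exist, since a dead handle may be re-registerable by an attacker. A minimal sketch in Python with requests and BeautifulSoup (socialhunter itself is written in Go; this is an illustration of the technique, not its source):

    import requests
    from bs4 import BeautifulSoup

    SOCIAL = ("twitter.com", "facebook.com", "instagram.com", "tiktok.com")

    def broken_social_links(url):
        page = requests.get(url, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        for a in soup.find_all("a", href=True):
            href = a["href"]
            if any(s in href for s in SOCIAL):
                # A 404 profile is a candidate for hijacking
                if requests.get(href, timeout=10).status_code == 404:
                    print("[possible takeover]", href)

    broken_social_links("https://utkusen.com/blog")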



    โŒ