
AWS, Google, and Azure CLI Tools Could Leak Credentials in Build Logs

New cybersecurity research has found that command-line interface (CLI) tools from Amazon Web Services (AWS) and Google Cloud can expose sensitive credentials in build logs, posing significant risks to organizations. The vulnerability has been codenamed LeakyCLI by cloud security firm Orca. "Some commands on Azure CLI, AWS CLI, and Google Cloud CLI can expose sensitive information in

Porch-Pirate - The Most Comprehensive Postman Recon / OSINT Client And Framework That Facilitates The Automated Discovery And Exploitation Of API Endpoints And Secrets Committed To Workspaces, Collections, Requests, Users And Teams

By: Zion3R


Porch Pirate started as a tool to quickly uncover Postman secrets, and has slowly begun to evolve into a multi-purpose reconnaissance / OSINT framework for Postman. While existing tools are great proofs of concept, they only attempt to identify very specific keywords as "secrets", in very limited locations, with no consideration of recon beyond secrets. We realized we required capabilities that were "secret-agnostic" and flexible enough to capture false positives that still provided offensive value.

Porch Pirate enumerates and presents sensitive results (global secrets, unique headers, endpoints, query parameters, authorization, etc.) from publicly accessible Postman entities, such as:

  • Workspaces
  • Collections
  • Requests
  • Users
  • Teams

Installation

python3 -m pip install porch-pirate

Using the client

The Porch Pirate client can be used to conduct nearly complete reviews of public Postman entities quickly and simply. There are intended workflows and particular keywords that can typically maximize results. These methodologies can be found on our blog: Plundering Postman with Porch Pirate.

Porch Pirate supports the following arguments, which can be applied to collections, workspaces, or users.

  • --globals
  • --collections
  • --requests
  • --urls
  • --dump
  • --raw
  • --curl

Simple Search

porch-pirate -s "coca-cola.com"

Get Workspace Globals

By default, Porch Pirate will display globals from all active and inactive environments if they are defined in the workspace. Provide a -w argument with the workspace ID (found by performing a simple search, or automatic search dump) to extract the workspace's globals, along with other information.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8

Dump Workspace

When an interesting result has been found with a simple search, we can provide the workspace ID to the -w argument with the --dump command to begin extracting information from the workspace and its collections.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --dump

Automatic Search and Globals Extraction

Porch Pirate can be supplied a simple search term followed by the --globals argument. Porch Pirate will dump all relevant workspaces tied to the results discovered in the simple search, but only if globals are defined. This is particularly useful for quickly identifying potentially interesting workspaces to dig into further.

porch-pirate -s "shopify" --globals

Automatic Search Dump

Porch Pirate can be supplied a simple search term followed by the --dump argument. Porch Pirate will dump all relevant workspaces and collections tied to the results discovered in the simple search. This is particularly useful for quickly sifting through potentially interesting results.

porch-pirate -s "coca-cola.com" --dump

Extract URLs from Workspace

A particularly useful way to use Porch Pirate is to extract all URLs from a workspace and export them to another tool for fuzzing.

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --urls
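
To feed those URLs into another tool, the command can be wrapped in a short script. A minimal Python sketch, assuming --urls prints one URL per line (adjust the parsing to what your version actually prints):

import subprocess

workspace_id = "abd6bded-ac31-4dd5-87d6-aa4a399071b8"

# Run porch-pirate and capture stdout; one URL per line is an assumption,
# so adjust the filter if your version prints extra formatting.
result = subprocess.run(
    ["porch-pirate", "-w", workspace_id, "--urls"],
    capture_output=True, text=True, check=True,
)
urls = [line.strip() for line in result.stdout.splitlines()
        if line.strip().startswith("http")]

# Write a target list that a fuzzer or crawler can consume.
with open("urls.txt", "w") as f:
    f.write("\n".join(urls))
print(f"Exported {len(urls)} URLs to urls.txt")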

Automatic URL Extraction

Porch Pirate will recursively extract all URLs from workspaces and their collections related to a simple search term.

porch-pirate -s "coca-cola.com" --urls

Show Collections in a Workspace

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --collections

Show Workspace Requests

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --requests

Show raw JSON

porch-pirate -w abd6bded-ac31-4dd5-87d6-aa4a399071b8 --raw

Show Entity Information

porch-pirate -w WORKSPACE_ID
porch-pirate -c COLLECTION_ID
porch-pirate -r REQUEST_ID
porch-pirate -u USERNAME/TEAMNAME

Convert Request to Curl

Porch Pirate can build curl requests when provided with a request ID for easier testing.

porch-pirate -r 11055256-b1529390-18d2-4dce-812f-ee4d33bffd38 --curl

Use a proxy

porch-pirate -s coca-cola.com --proxy 127.0.0.1:8080

Using as a library

Searching

from porchpirate import porchpirate  # module path assumed from the project's examples

p = porchpirate()
print(p.search('coca-cola.com'))

Get Workspace Collections

p = porchpirate()
print(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))

Dumping a Workspace

import json

from porchpirate import porchpirate

p = porchpirate()
collections = json.loads(p.collections('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
for collection in collections['data']:
    requests = collection['requests']
    for r in requests:
        request_data = p.request(r['id'])
        print(request_data)

Grabbing a Workspace's Globals

p = porchpirate()
print(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba'))
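
The globals come back as JSON, so they can be post-processed. The exact response shape is an assumption here, so this sketch walks the parsed document generically and flags string values whose key paths hint at credentials:

import json

from porchpirate import porchpirate

HINTS = ("key", "token", "secret", "password", "auth")

def flag_secrets(node, path=""):
    """Recursively print string values whose key path looks credential-like."""
    if isinstance(node, dict):
        for k, v in node.items():
            flag_secrets(v, f"{path}.{k}")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            flag_secrets(v, f"{path}[{i}]")
    elif isinstance(node, str) and any(h in path.lower() for h in HINTS):
        print(f"{path} = {node}")

p = porchpirate()
flag_secrets(json.loads(p.workspace_globals('4127fdda-08be-4f34-af0e-a8bdc06efaba')))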

Other Examples

Other library usage examples can be found in the examples directory, which contains the following:

  • dump_workspace.py
  • format_search_results.py
  • format_workspace_collections.py
  • format_workspace_globals.py
  • get_collection.py
  • get_collections.py
  • get_profile.py
  • get_request.py
  • get_statistics.py
  • get_team.py
  • get_user.py
  • get_workspace.py
  • recursive_globals_from_search.py
  • request_to_curl.py
  • search.py
  • search_by_page.py
  • workspace_collections.py


Code Keepers: Mastering Non-Human Identity Management

Identities now transcend human boundaries. Within each line of code and every API call lies a non-human identity. These entities act as programmatic access keys, enabling authentication and facilitating interactions among systems and services, which are essential for every API call, database query, or storage account access. As we depend on multi-factor authentication and passwords to safeguard

URGENT: Upgrade GitLab - Critical Workspace Creation Flaw Allows File Overwrite

GitLab once again released fixes to address a critical security flaw in its Community Edition (CE) and Enterprise Edition (EE) that could be exploited to write arbitrary files while creating a workspace. Tracked as CVE-2024-0402, the vulnerability has a CVSS score of 9.9 out of a maximum of 10. "An issue has been discovered in GitLab CE/EE affecting all versions from 16.0 prior to

Urgent: GitLab Releases Patch for Critical Vulnerabilities - Update ASAP

GitLab has released security updates to address two critical vulnerabilities, including one that could be exploited to take over accounts without requiring any user interaction. Tracked as CVE-2023-7028, the flaw has been awarded the maximum severity of 10.0 on the CVSS scoring system and could facilitate account takeover by sending password reset emails to an unverified email address. The

Cost of a Data Breach Report 2023: Insights, Mitigators and Best Practices

John Hanley of IBM Security shares 4 key findings from the highly acclaimed annual Cost of a Data Breach Report 2023. What is the IBM Cost of a Data Breach Report? The IBM Cost of a Data Breach Report is an annual report that provides organizations with quantifiable information about the financial impacts of breaches. With this data, they can make data-driven decisions about how they implement

Iac-Scan-Runner - Service That Scans Your Infrastructure As Code For Common Vulnerabilities

By: Zion3R


Service that scans your Infrastructure as Code for common vulnerabilities.

Aspect Information
Tool name IaC Scan Runner
Docker image xscanner/runner
PyPI package iac-scan-runner
Documentation docs
Contact us xopera@xlab.si

Purpose and description

The IaC Scan Runner is a REST API service used to scan IaC (Infrastructure as Code) packages and perform various code checks in order to find possible vulnerabilities and improvements. Explore the docs for more info.

Running

This section explains how to run the REST API.

Run with Docker

You can run the REST API using a public xscanner/runner Docker image as follows:

# run IaC Scan Runner REST API in a Docker container and 
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 xscanner/runner

Or you can build the image locally and run it as follows:

# build Docker container (it will take some time) 
$ docker build -t iac-scan-runner .
# run IaC Scan Runner REST API in a Docker container and
# navigate to localhost:8080/swagger or localhost:8080/redoc
$ docker run --name iac-scan-runner -p 8080:80 iac-scan-runner

Run from CLI

To run using the IaC Scan Runner CLI:

# install the CLI
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install iac-scan-runner
# print OpenAPI specification
(.venv) $ iac-scan-runner openapi
# install prerequisites
(.venv) $ iac-scan-runner install
# run IaC Scan Runner REST API
(.venv) $ iac-scan-runner run

Run from source

To run locally from source:

# Export env variables 
export MONGODB_CONNECTION_STRING=mongodb://localhost:27017
export SCAN_PERSISTENCE=enabled
export USER_MANAGEMENT=enabled

# Setup MongoDB
$ docker run --name mongodb -p 27017:27017 mongo

# install prerequisites
$ python3 -m venv .venv && . .venv/bin/activate
(.venv) $ pip install -r requirements.txt
(.venv) $ ./install-checks.sh
# run IaC Scan Runner REST API (add --reload flag to apply code changes on the way)
(.venv) $ uvicorn src.iac_scan_runner.api:app

Usage and examples

This section shows one possible deployment and short examples of how to use the API.

First, we will clone the IaC Scan Runner repository and run the API.

$ git clone https://github.com/xlab-si/iac-scan-runner.git
$ docker compose up

After this is done you can use different API endpoints by calling localhost:8000. You can also navigate to localhost:8000/swagger or localhost:8000/redoc and test all the API endpoints there. In this example, we will use curl for calling API endpoints.

  1. Let's create a project named test.
curl -X 'POST' \
'http://0.0.0.0:8000/project?creator_id=test' \
-H 'accept: application/json' \
-d ''

A project ID will be returned to us. For this example, the project ID is 1e7b2a91-2896-40fd-8d53-83db56088026.

  2. For example, let's say we want to initiate all checks except ansible-lint. Let's disable it.
curl -X 'PUT' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/checks/ansible-lint/disable' \
-H 'accept: application/json'
  3. Now that the project is configured, we can simply choose the files we want to scan and zip them. For IaC Scan Runner to work, files are expected to be compressed archives (usually zip files). In this case the response type will be json, but it is possible to change it to html. Please change YOUR.zip to the path of your file.
curl -X 'POST' \
'http://0.0.0.0:8000/projects/1e7b2a91-2896-40fd-8d53-83db56088026/scan?scan_response_type=json' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'iac=@YOUR.zip;type=application/zip'

That is it.
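
The same three calls are easy to script. Here is a minimal Python sketch using the requests library, mirroring the curl examples above; the base URL and the response shape of the project-creation call are assumptions, so adjust them to your deployment:

import requests

BASE = "http://localhost:8000"  # adjust to your deployment

# 1. Create a project named test; reading the project ID straight from the
#    response body is an assumption about the response shape.
project_id = requests.post(f"{BASE}/project", params={"creator_id": "test"}).json()

# 2. Disable the ansible-lint check for this project.
requests.put(f"{BASE}/projects/{project_id}/checks/ansible-lint/disable",
             headers={"accept": "application/json"})

# 3. Upload a zipped IaC package and request a JSON scan response.
with open("YOUR.zip", "rb") as f:
    scan = requests.post(
        f"{BASE}/projects/{project_id}/scan",
        params={"scan_response_type": "json"},
        files={"iac": ("YOUR.zip", f, "application/zip")},
    )
print(scan.json())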

Extending the scan workflow with new check tools

At a certain point, it might be required to include new check tools within the scan workflow, with the aim of providing wider coverage of IaC standards and project types. This subsection identifies and describes the sequence of steps required for that purpose. For now the steps have to be performed manually as described below, but it is planned to automate this procedure in the future via the API, with a user-friendly interface to aid the user in importing new tools into the catalogue that makes up the scan workflow.

Step 1 – Adding a tool-specific class to the checks directory

First, add a new tool-specific Python class to the checks directory inside IaC Scan Runner's source code: iac-scan-runner/src/iac_scan_runner/checks/new_tool.py

The class of a new tool inherits the existing Check class, which provides a generalization of scan workflow tools. It is necessary to implement the following methods:

  1. def configure(self, config_filename: Optional[str], secret: Optional[SecretStr])
  2. def run(self, directory: str)

The first method provides the tool-specific parameters needed to set it up (such as passwords, client IDs, and tokens), while the second specifies how the tool itself is invoked via API or CLI and returns its raw output.
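
As a hedged illustration of those two methods, a sketch of new_tool.py follows; the Check base-class import path, its constructor signature, and the new-tool CLI it shells out to are all assumptions made for the example:

import subprocess
from typing import Optional

from pydantic import SecretStr

from iac_scan_runner.check import Check  # import path is an assumption


class NewToolCheck(Check):
    """Hypothetical wrapper around a CLI scanner called new-tool."""

    def __init__(self):
        # The base constructor's arguments are an assumption.
        super().__init__("new_tool", "Example check for illustration")

    def configure(self, config_filename: Optional[str], secret: Optional[SecretStr]):
        # Store tool-specific setup parameters (passwords, client IDs, tokens).
        self.config_filename = config_filename
        self.secret = secret

    def run(self, directory: str) -> str:
        # Invoke the tool against the unpacked IaC directory and return
        # its raw output for later summarization.
        result = subprocess.run(["new-tool", "scan", directory],
                                capture_output=True, text=True)
        return result.stdout + result.stderr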

Step 2 – Adding the check tool class instance within the ScanRunner constructor

Once the new class derived from Check is added to the IaC Scan Runner's source code, the source of its main class, ScanRunner, must be modified as well. First import the tool-specific class, then create a new check tool-specific class instance and add it to the dictionary of IaC checks inside def init_checks(self).

  A. Import the check tool class: from iac_scan_runner.checks.tfsec import TfsecCheck
  B. Create a new instance of the check tool object inside init_checks: new_tool = NewToolCheck()
  C. Add it to the self.iac_checks dictionary inside init_checks:

    self.iac_checks = {
        new_tool.name: new_tool,
        …
    }

Step 3 – Adding the check tool to the compatibility matrix inside the Compatibility class

Next, the dictionary representing the compatibility matrix inside the file src/iac_scan_runner/compatibility.py should be extended as well. There are two possible cases: a) a new file type is added as a key, together with a list of relevant tools as its value, or b) the new tool is added to the compatibility list of an existing file type.

    compatibility_matrix = {
        "new_type": ["new_tool_1", "new_tool_2"],
        …
        "old_typeK": ["tool_1", … "tool_N", "new_tool_3"]
    }

Step 4 – Providing support for result summarization

Finally, modify the ResultsSummary class (src/iac_scan_runner/results_summary.py). Specifically, append code to its summarize_outcome method that looks for tool-specific strings which identify whether the check passed or failed. Inside the loop that traverses the compatible checks, include the following if-else structure for each new tool:

        if check == "new_tool":
            if outcome.find("Check pass string") > -1:
                self.outcomes[check]["status"] = "Passed"
                return "Passed"
            else:
                self.outcomes[check]["status"] = "Problems"
                return "Problems"

Contact

You can contact the xOpera team by sending an email to xopera@xlab.si.

Acknowledgement

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101000162 (PIACERE).



SecuSphere - Efficient DevSecOps

By: Zion3R


SecuSphere is a comprehensive DevSecOps platform designed to streamline and enhance your organization's security posture throughout the software development life cycle. Our platform serves as a centralized hub for vulnerability management, security assessments, CI/CD pipeline integration, and fostering DevSecOps practices and culture.


Centralized Vulnerability Management

At the heart of SecuSphere is a powerful vulnerability management system. Our platform collects, processes, and prioritizes vulnerabilities, integrating with a wide array of vulnerability scanners and security testing tools. Risk-based prioritization and automated assignment of vulnerabilities streamline the remediation process, ensuring that your teams tackle the most critical issues first. Additionally, our platform offers robust dashboards and reporting capabilities, allowing you to track and monitor vulnerability status in real-time.

Seamless CI/CD Pipeline Integration

SecuSphere integrates seamlessly with your existing CI/CD pipelines, providing real-time security feedback throughout your development process. Our platform enables automated triggering of security scans and assessments at various stages of your pipeline. Furthermore, SecuSphere enforces security gates to prevent vulnerable code from progressing to production, ensuring that security is built into your applications from the ground up. This continuous feedback loop empowers developers to identify and fix vulnerabilities early in the development cycle.

Comprehensive Security Assessment

SecuSphere offers a robust framework for consuming and analyzing security assessment reports from various CI/CD pipeline stages. Our platform automates the aggregation, normalization, and correlation of security findings, providing a holistic view of your application's security landscape. Intelligent deduplication and false-positive elimination reduce noise in the vulnerability data, ensuring that your teams focus on real threats. Furthermore, SecuSphere integrates with ticketing systems to facilitate the creation and management of remediation tasks.

Cultivating DevSecOps Practices

SecuSphere goes beyond tools and technology to help you drive and accelerate the adoption of DevSecOps principles and practices within your organization. Our platform provides security training and awareness for developers, security, and operations teams, helping to embed security within your development and operations processes. SecuSphere aids in establishing secure coding guidelines and best practices and fosters collaboration and communication between security, development, and operations teams. With SecuSphere, you'll create a culture of shared responsibility for security, enabling you to build more secure, reliable software.

Embrace the power of integrated DevSecOps with SecuSphere – secure your software development, from code to cloud.

Features

  • Vulnerability Management: Collect, process, prioritize, and remediate vulnerabilities from a centralized platform, integrating with various vulnerability scanners and security testing tools.
  • CI/CD Pipeline Integration: Provide real-time security feedback with seamless CI/CD pipeline integration, including automated security scans, security gates, and a continuous feedback loop for developers.
  • Security Assessment: Analyze security assessment reports from various CI/CD pipeline stages with automated aggregation, normalization, correlation of security findings, and intelligent deduplication.
  • DevSecOps Practices: Drive and accelerate the adoption of DevSecOps principles and practices within your team. Benefit from our security training, secure coding guidelines, and collaboration tools.

Dashboard and Reporting

SecuSphere offers built-in dashboards and reporting capabilities that allow you to easily track and monitor the status of vulnerabilities. With our risk-based prioritization and automated assignment features, vulnerabilities are efficiently managed and sent to the relevant teams for remediation.

API and Web Console

SecuSphere provides a comprehensive REST API and Web Console. This allows for greater flexibility and control over your security operations, ensuring you can automate and integrate SecuSphere into your existing systems and workflows as seamlessly as possible.

For more information please refer to our Official Rest API Documentation

Integration with Ticketing Systems

SecuSphere integrates with popular ticketing systems, enabling the creation and management of remediation tasks directly within the platform. This helps streamline your security operations and ensure faster resolution of identified vulnerabilities.

Security Training and Awareness

SecuSphere is not just a tool, it's a comprehensive solution that drives and accelerates the adoption of DevSecOps principles and practices. We provide security training and awareness for developers, security, and operations teams, and aid in establishing secure coding guidelines and best practices.

User Guide

Get started with SecuSphere using our comprehensive user guide.

Installation

You can install SecuSphere by cloning the repository, setting up locally, or using Docker.

Clone the Repository

$ git clone https://github.com/SecurityUniversalOrg/SecuSphere.git

Setup

Local Setup

Navigate to the source directory and run the Python file:

$ cd src/
$ python run.py

Dockerfile Setup

Build and run the Dockerfile in the cicd directory:

$ # From repository root
$ docker build -t secusphere:latest .
$ docker run secusphere:latest

Docker Compose

Use Docker Compose in the ci_cd/iac/ directory:

$ cd ci_cd/iac/
$ docker-compose -f secusphere.yml up

Pull from Docker Hub

Pull the latest version of SecuSphere from Docker Hub and run it:

$ docker pull securityuniversal/secusphere:latest
$ docker run -p 8081:80 -d secusphere:latest

Feedback and Support

We value your feedback and are committed to providing the best possible experience with SecuSphere. If you encounter any issues or have suggestions for improvement, please create an issue in this repository or contact our support team.

Contributing

We welcome contributions to SecuSphere. If you're interested in improving SecuSphere or adding new features, please read our contributing guide.



Noir - An Attack Surface Detector From Source Code

By: Zion3R


Noir is an attack surface detector that works from source code.

Key Features

  • Automatically identify language and framework from source code.
  • Find API endpoints and web pages through code analysis.
  • Load results quickly into proxy tools such as ZAP, Burp Suite, Caido, and more.
  • Provide structured data such as JSON and HAR for identified attack surfaces to enable seamless interaction with other tools, along with command-line samples (e.g., curl or httpie) for easy integration and collaboration.

Available Support Scope

Endpoint's Entities

  • Path
  • Method
  • Param
  • Header
  • Protocol (e.g., ws)

Languages and Frameworks

Language   Framework   URL   Method   Param   Header   WS
Go         Echo        βœ…     βœ…       X       X        X
Python     Django      βœ…     X        X       X        X
Python     Flask       βœ…     X        X       X        X
Ruby       Rails       βœ…     βœ…       βœ…      X        X
Ruby       Sinatra     βœ…     βœ…       βœ…      X        X
Php        -           βœ…     βœ…       βœ…      X        X
Java       Spring      βœ…     βœ…       X       X        X
Java       Jsp         X      X        X       X        X
Crystal    Kemal       βœ…     βœ…       βœ…      X        βœ…
JS         Express     βœ…     βœ…       X       X        X
JS         Next        X      X        X       X        X

Specification

Specification   Format   URL   Method   Param   Header   WS
Swagger         JSON     βœ…     βœ…       βœ…      X        X
Swagger         YAML     βœ…     βœ…       βœ…      X        X

Installation

Homebrew (macOS)

brew tap hahwul/noir
brew install noir

From Sources

# Install Crystal-lang
# https://crystal-lang.org/install/

# Clone this repo
git clone https://github.com/hahwul/noir
cd noir

# Install Dependencies
shards install

# Build
shards build --release --no-debug

# Copy binary
cp ./bin/noir /usr/bin/

Docker (GHCR)

docker pull ghcr.io/hahwul/noir:main

Usage

Usage: noir <flags>
Basic:
-b PATH, --base-path ./app (Required) Set base path
-u URL, --url http://.. Set base url for endpoints
-s SCOPE, --scope url,param Set scope for detection

Output:
-f FORMAT, --format json Set output format [plain/json/markdown-table/curl/httpie]
-o PATH, --output out.txt Write result to file
--set-pvalue VALUE Specifies the value of the identified parameter
--no-color Disable color output
--no-log Displaying only the results

Deliver:
--send-req Send the results to the web request
--send-proxy http://proxy.. Send the results to the web request via http proxy

Technologies:
-t TECHS, --techs rails,php Set technologies to use
--exclude-techs rails,php Specify the technologies to be excluded
--list-techs Show all technologies

Others:
-d, --debug Show debug messages
-v, --version Show version
-h, --help Show help

Example

noir -b . -u https://testapp.internal.domains

JSON Result

noir -b . -u https://testapp.internal.domains -f json
[
...
{
"headers": [],
"method": "POST",
"params": [
{
"name": "article_slug",
"param_type": "json",
"value": ""
},
{
"name": "body",
"param_type": "json",
"value": ""
},
{
"name": "id",
"param_type": "json",
"value": ""
}
],
"protocol": "http",
"url": "https://testapp.internal.domains/comments"
}
]
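
Since the JSON output is structured as shown above, post-processing it is straightforward. Here is a small Python sketch that runs Noir and prints each endpoint with its parameter names; the field names are taken from the sample, and --no-log keeps the output to results only:

import json
import subprocess

# Run noir with JSON output; --no-log suppresses everything but results.
out = subprocess.run(
    ["noir", "-b", ".", "-u", "https://testapp.internal.domains",
     "-f", "json", "--no-log"],
    capture_output=True, text=True, check=True,
).stdout

for endpoint in json.loads(out):
    params = ", ".join(p["name"] for p in endpoint.get("params", []))
    print(f"{endpoint['method']} {endpoint['url']} params=[{params}]")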



Continuous Security Validation with Penetration Testing as a Service (PTaaS)

By: THN

Validate security continuously across your full stack with Pen Testing as a Service. In today's modern security operations center (SOC), it's a battle between the defenders and the cybercriminals. Both are using tools and expertise – however, the cybercriminals have the element of surprise on their side, and a host of tactics, techniques, and procedures (TTPs) that have evolved. These external

The 4 Keys to Building Cloud Security Programs That Can Actually Shift Left

As cloud applications are built, tested and updated, they wind their way through an ever-complex series of different tools and teams. Across hundreds or even thousands of technologies that make up the patchwork quilt of development and cloud environments, security processes are all too often applied in only the final phases of software development.Β  Placing security at the very end of the

ZeusCloud - Open Source Cloud Security

By: Zion3R


ZeusCloud is an open source cloud security platform.

Discover, prioritize, and remediate your risks in the cloud.

  • Build an asset inventory of your AWS accounts.
  • Discover attack paths based on public exposure, IAM, vulnerabilities, and more.
  • Prioritize findings with graphical context.
  • Remediate findings with step by step instructions.
  • Customize security and compliance controls to fit your needs.
  • Meet compliance standards PCI DSS, CIS, SOC 2, and more!

Quick Start

  1. Clone repo: git clone --recurse-submodules git@github.com:Zeus-Labs/ZeusCloud.git
  2. Run: cd ZeusCloud && make quick-deploy
  3. Visit http://localhost:80

Check out our Get Started guide for more details.

A cloud-hosted version is available on special request - email founders@zeuscloud.io to get access!

Sandbox

Play around with our sandbox environment to see how ZeusCloud identifies, prioritizes, and remediates risks in the cloud!

Features

  • Discover Attack Paths - Discover toxic risk combinations an attacker can use to penetrate your environment.
  • Graphical Context - Understand context behind security findings with graphical visualizations.
  • Access Explorer - Visualize who has access to what with an IAM visualization engine.
  • Identify Misconfigurations - Discover the highest risk-of-exploit misconfigurations in your environments.
  • Configurability - Configure which security rules are active, which alerts should be muted, and more.
  • Security as Code - Modify rules or write your own with our extensible security as code approach.
  • Remediation - Follow step by step guides to remediate security findings.
  • Compliance - Ensure your cloud posture is compliant with PCI DSS, CIS benchmarks and more!

Why ZeusCloud?

Cloud usage continues to grow. Companies are shifting more of their workloads from on-prem to the cloud and both adding and expanding new and existing workloads in the cloud. Cloud providers keep increasing their offerings and their complexity. Companies are having trouble keeping track of their security risks as their cloud environment scales and grows more complex. Several high profile attacks have occurred in recent times. Capital One had an S3 bucket breached, Amazon had an unprotected Prime Video server breached, Microsoft had an Azure DevOps server breached, Puma was the victim of ransomware, etc.

We had to take action.

  • We noticed traditional cloud security tools are opaque, confusing, time consuming to set up, and expensive as you scale your cloud environment
  • Cybersecurity vendors inundate security, engineering, and devops teams with non-contextual alerts rather than actionable information
  • ZeusCloud is easy to set up, transparent, and configurable, so you can prioritize the most important risks
  • Best of all, you can use ZeusCloud for free!

Future Roadmap

  • Integrations with vulnerability scanners
  • Integrations with secret scanners
  • Shift-left: Remediate risks earlier in the SDLC with context from your deployments
  • Support for Azure and GCP environments

Contributing

We love contributions of all sizes. What would be most helpful first:

  • Please give us feedback in our Slack.
  • Open a PR (see our instructions below on developing ZeusCloud locally)
  • Submit a feature request or bug report through Github Issues.

Development

Run containers in development mode:

cd frontend && yarn && cd -
docker-compose down && docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --build

Reset neo4j and/or postgres data with the following:

rm -rf .compose/neo4j
rm -rf .compose/postgres

To develop on the frontend, make the code changes and save.

To develop on the backend, run

docker-compose -f docker-compose.dev.yaml --env-file .env.dev up --no-deps --build backend

To access the UI, go to: http://localhost:80.

Security

Please do not run ZeusCloud exposed to the public internet. Use the latest versions of ZeusCloud to get all security related patches. Report any security vulnerabilities to founders@zeuscloud.io.

Open-source vs. cloud-hosted

This repo is freely available under the Apache 2.0 license.

We're working on a cloud-hosted solution which handles deployment and infra management. Contact us at founders@zeuscloud.io for more information!

Special thanks to the amazing Cartography project, which ZeusCloud uses for its asset inventory. Credit to PostHog and Airbyte for inspiration around public-facing materials - like this README!



How to Improve Your API Security Posture

APIs, more formally known as application programming interfaces, empower apps and microservices to communicate and share data. However, this level of connectivity doesn't come without major risks. Hackers can exploit vulnerabilities in APIs to gain unauthorized access to sensitive data or even take control of the entire system. Therefore, it's essential to have a robust API security posture to

What to Look for When Selecting a Static Application Security Testing (SAST) Solution

If you're involved in securing the applications your organization develops, there is no question that Static Application Security Testing (SAST) solutions are an important part of a comprehensive application security strategy. SAST secures software, supports business more securely, cuts down on costs, reduces risk, and speeds time to development, delivery, and deployment of mission-critical

Product Security: Harnessing the Collective Experience and Collaborative Tools in DevSecOps

In the fast-paced cybersecurity landscape, product security takes center stage. DevSecOps swoops in, seamlessly merging security practices into DevOps, empowering teams to tackle challenges. Let's dive into DevSecOps and explore how collaboration can give your team the edge to fight cyber villains. Application security and product security Regrettably, application security teams often intervene

Bearer - Code Security Scanning Tool (SAST) That Discover, Filter And Prioritize Security Risks And Vulnerabilities Leading To Sensitive Data Exposures (PII, PHI, PD)


Discover, filter, and prioritize security risks and vulnerabilities impacting your code.

Bearer is a static application security testing (SAST) tool that scans your source code and analyzes your data flows to discover, filter and prioritize security risks and vulnerabilities leading to sensitive data exposures (PII, PHI, PD).

Currently supporting JavaScript and Ruby stacks.

Code security scanner that natively filters and prioritizes security risks using sensitive data flow analysis.

Bearer provides built-in rules against a common set of security risks and vulnerabilities, known as OWASP Top 10. Here are some practical examples of what those rules look for:

  • Non-filtered user input.
  • Leakage of sensitive data through cookies, internal loggers, third-party logging services, and into analytics environments.
  • Usage of weak encryption libraries or misusage of encryption algorithms.
  • Unencrypted incoming and outgoing communication (HTTP, FTP, SMTP) of sensitive information.
  • Hard-coded secrets and tokens.

And many more.

Bearer is Open Source (see license) and fully customizable, from creating your own rules to component detection (database, API) and data classification.

Bearer also powers our commercial offering, Bearer Cloud, allowing security teams to scale and monitor their application security program using the same engine.

Getting started

Discover your most critical security risks and vulnerabilities in only a few minutes. In this guide, you will install Bearer, run a scan on a local project, and view the results. Let's get started!

Install Bearer

The quickest way to install Bearer is with the install script. It will auto-select the best build for your architecture, defaulting the installation to ./bin and to the latest release version:

curl -sfL https://raw.githubusercontent.com/Bearer/bearer/main/contrib/install.sh | sh

Other install options


Homebrew

Using Bearer's official Homebrew tap:

brew install bearer/tap/bearer

Debian/Ubuntu
$ sudo apt-get install apt-transport-https
$ echo "deb [trusted=yes] https://apt.fury.io/bearer/ /" | sudo tee -a /etc/apt/sources.list.d/fury.list
$ sudo apt-get update
$ sudo apt-get install bearer

RHEL/CentOS

Add repository setting:

$ sudo vim /etc/yum.repos.d/fury.repo
[fury]
name=Gemfury Private Repo
baseurl=https://yum.fury.io/bearer/
enabled=1
gpgcheck=0

Then install with yum:

  $ sudo yum -y update
$ sudo yum -y install bearer

Docker

Bearer is also available as a Docker image on Docker Hub and ghcr.io.

With docker installed, you can run the following command with the appropriate paths in place of the examples.

docker run --rm -v /path/to/repo:/tmp/scan bearer/bearer:latest-amd64 scan /tmp/scan

Additionally, you can use docker compose. Add the following to your docker-compose.yml file and replace the volumes with the appropriate paths for your project:

version: "3"
services:
  bearer:
    platform: linux/amd64
    image: bearer/bearer:latest-amd64
    volumes:
      - /path/to/repo:/tmp/scan

Then, run the docker compose run command to run Bearer with any specified flags:

docker compose run bearer scan /tmp/scan --debug

Binary

Download the archive file for your operating system/architecture from here.

Unpack the archive, and put the binary somewhere in your $PATH (on UNIX-y systems, /usr/local/bin or the like). Make sure it has permission to execute.


Scan your project

The easiest way to try out Bearer is with our example project, Bear Publishing. It simulates a realistic Ruby application with common security flaws. Clone or download it to a convenient location to get started.

git clone https://github.com/Bearer/bear-publishing.git

Now, run the scan command with bearer scan on the project directory:

bearer scan bear-publishing

A progress bar will display the status of the scan.

Once the scan is complete, Bearer will output a security report with details of any rule failures, as well as where in the codebase the infractions happened and why.

By default the scan command uses the SAST scanner; other scanner types are available.

Analyze the report

The security report is an easily digestible view of the security issues detected by Bearer. A report is made up of:

  • The list of rules run against your code.
  • Each detected failure, containing the file location and lines that triggered the rule failure.
  • A stats section with a summary of checks, failures, and warnings.

The Bear Publishing example application will trigger rule failures and output a full report. Here's a section of the output:

...
CRITICAL: Only communicate using SFTP connections.
https://docs.bearer.com/reference/rules/ruby_lang_insecure_ftp

File: bear-publishing/app/services/marketing_export.rb:34

34 Net::FTP.open(
35 'marketing.example.com',
36 'marketing',
37 'password123'
...
41 end


=====================================

56 checks, 10 failures, 6 warnings

CRITICAL: 7
HIGH: 0
MEDIUM: 0
LOW: 3
WARNING: 6

The security report is just one report type available in Bearer.

Additional options for using and configuring the scan command can be found in the scan documentation.

For additional guides and usage tips, view the docs.

FAQs

How do you detect sensitive data flows from the code?

When you run Bearer on your codebase, it discovers and classifies data by identifying patterns in the source code. Specifically, it looks for data types and matches against them. Most importantly, it never views the actual values (it just can’t)β€”but only the code itself.

Bearer assesses 120+ data types from sensitive data categories such as Personal Data (PD), Sensitive PD, Personally identifiable information (PII), and Personal Health Information (PHI). You can view the full list in the supported data types documentation.

In a nutshell, our static code analysis is performed on two levels:

  • Analyzing class names, methods, functions, variables, properties, and attributes, then tying those together to detect data structures and performing variable reconciliation.
  • Analyzing data structure definition files such as OpenAPI, SQL, GraphQL, and Protobuf.

Bearer then passes this over to the classification engine we built to support this very particular discovery process.

If you want to learn more, here is the longer explanation.

When and where to use Bearer?

We recommend running Bearer in your CI to check new PRs automatically for security issues, so your development team has a direct feedback loop to fix issues immediately.

You can also integrate Bearer into your CD, though we recommend making it fail only on high-criticality issues, as the impact on your organization might be significant.

In addition, running Bearer on a scheduled job is a great way to keep track of your security posture and make sure new security issues are found even in projects with low activity.

Supported Languages

Bearer currently supports JavaScript and Ruby and their most commonly used frameworks and libraries. More languages will follow.

What makes Bearer different from any other SAST tools?

SAST tools are known to bury security teams and developers under hundreds of issues with little context and no sense of priority, often requiring security analysts to triage issues. Not Bearer.

The most vulnerable asset today is sensitive data, so we start there and prioritize application security risks and vulnerabilities by assessing sensitive data flows in your code to highlight what is urgent, and what is not.

We believe that by linking security issues with a clear business impact and risk of a data breach, or data leak, we can build better and more robust software, at no extra cost.

In addition, by being Open Source, extendable by design, and built with a great developer UX in mind, we bet you will see the difference for yourself.

How long does it take to scan my code? Is it fast?

It depends on the size of your applications. It can take as little as 20 seconds, up to a few minutes for an extremely large code base. We’ve added an internal caching layer that only looks at delta changes to allow quick, subsequent scans.

Running Bearer should not take more time than running your test suite.

What about false positives?

If you’re familiar with other SAST tools, false positives are always a possibility.

By using the most modern static code analysis techniques and providing a native filtering and prioritizing solution on the most important issues, we believe this problem won’t be a concern when using Bearer.

Get in touch

Thanks for using Bearer. Still have questions?

Contributing

Interested in contributing? We're here for it! For details on how to contribute, setting up your development environment, and our processes, review the contribution guide.

Code of conduct

Everyone interacting with this project is expected to follow the guidelines of our code of conduct.

Security

To report a vulnerability or suspected vulnerability, see our security policy. For any questions, concerns or other security matters, feel free to open an issue or join the Discord Community.



Noseyparker - A Command-Line Program That Finds Secrets And Sensitive Information In Textual Data And Git History


Nosey Parker is a command-line tool that finds secrets and sensitive information in textual data. It is useful both for offensive and defensive security testing.

Key features:

  • It supports scanning files, directories, and the entire history of Git repositories
  • It uses regular expression matching with a set of 95 patterns chosen for high signal-to-noise based on experience and feedback from offensive security engagements
  • It groups matches together that share the same secret, further emphasizing signal over noise
  • It is fast: it can scan at hundreds of megabytes per second on a single core, and is able to scan 100GB of Linux kernel source history in less than 2 minutes on an older MacBook Pro

This open-source version of Nosey Parker is a reimplementation of the internal version that is regularly used in offensive security engagements at Praetorian. The internal version has additional capabilities for false positive suppression and an alternative machine learning-based detection engine. Read more in blog posts here and here.


Building from source

1. (On x86_64) Install the Hyperscan library and headers for your system

On macOS using Homebrew:

brew install hyperscan pkg-config

On Ubuntu 22.04:

apt install libhyperscan-dev pkg-config

1. (On non-x86_64) Build Vectorscan from source

You will need several dependencies, including cmake, boost, ragel, and pkg-config.

Download and extract the source for the 5.4.8 release of Vectorscan:

wget https://github.com/VectorCamp/vectorscan/archive/refs/tags/vectorscan/5.4.8.tar.gz && tar xfz 5.4.8.tar.gz

Build with cmake:

cd vectorscan-vectorscan-5.4.8 && cmake -B build -DCMAKE_BUILD_TYPE=Release . && cmake --build build

Set the HYPERSCAN_ROOT environment variable so that Nosey Parker builds against your from-source build of Vectorscan:

export HYPERSCAN_ROOT="$PWD/build"

Note: The Nosey Parker Dockerfile builds Vectorscan from source and links against that.

2. Install the Rust toolchain

Recommended approach: install from https://rustup.rs

3. Build using Cargo

cargo build --release

This will produce a binary at target/release/noseyparker.

Docker Usage

A prebuilt Docker image is available for the latest release for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:latest

A prebuilt Docker image is available for the most recent commit for x86_64:

docker pull ghcr.io/praetorian-inc/noseyparker:edge

For other architectures (e.g., ARM) you will need to build the Docker image yourself:

docker build -t noseyparker .

Run the Docker image with a mounted volume:

docker run -v "$PWD":/opt/ noseyparker

Note: The Docker image runs noticeably slower than a native binary, particularly on macOS.

Usage quick start

The datastore

Most Nosey Parker commands use a datastore. This is a special directory that Nosey Parker uses to record its findings and maintain its internal state. A datastore will be implicitly created by the scan command if needed. You can also create a datastore explicitly using the datastore init -d PATH command.

Scanning filesystem content for secrets

Nosey Parker has built-in support for scanning files, recursively scanning directories, and scanning the entire history of Git repositories.

For example, if you have a Git clone of CPython locally at cpython.git, you can scan its entire history with the scan command. Nosey Parker will create a new datastore at np.cpython and save its findings there.

$ noseyparker scan --datastore np.cpython cpython.git
Found 28.30 GiB from 18 plain files and 427,712 blobs from 1 Git repos [00:00:04]
Scanning content β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ 100% 28.30 GiB/28.30 GiB [00:00:53]
Scanned 28.30 GiB from 427,730 blobs in 54 seconds (538.46 MiB/s); 4,904/4,904 new matches

Rule                      Distinct Groups   Total Matches
──────────────────────────────────────────────────────────
PEM-Encoded Private Key             1,076           1,192
Generic Secret                        331             478
netrc Credentials                      42           3,201
Generic API Key                         2              31
md5crypt Hash                           1               2

Run the `report` command next to show finding details.

Scanning Git repos by URL, GitHub username, or GitHub organization name

Nosey Parker can also scan Git repos that have not already been cloned to the local filesystem. The --git-url URL, --github-user NAME, and --github-org NAME options to scan allow you to specify repositories of interest.

For example, to scan the Nosey Parker repo itself:

$ noseyparker scan --datastore np.noseyparker --git-url https://github.com/praetorian-inc/noseyparker

For example, to scan accessible repositories belonging to octocat:

$ noseyparker scan --datastore np.noseyparker --github-user octocat

These input specifiers will use an optional GitHub token if available in the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

See noseyparker help scan for more details.
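
To cover several accounts in one run, the documented options can be driven from a short script. A minimal sketch; the target list and datastore naming are arbitrary:

import os
import subprocess

targets = ["octocat", "praetorian-inc"]  # example GitHub accounts

env = os.environ.copy()
# Optionally set NP_GITHUB_TOKEN here (or in the shell) for a higher
# API rate limit, as noted above.

for name in targets:
    subprocess.run(
        ["noseyparker", "scan", "--datastore", f"np.{name}",
         "--github-user", name],
        env=env, check=True,
    )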

Summarizing findings

Nosey Parker prints out a summary of its findings when it finishes scanning. You can also run this step separately:

$ noseyparker summarize --datastore np.cpython

Rule                      Distinct Groups   Total Matches
──────────────────────────────────────────────────────────
PEM-Encoded Private Key             1,076           1,192
Generic Secret                        331             478
netrc Credentials                      42           3,201
Generic API Key                         2              31
md5crypt Hash                           1               2

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

Reporting detailed findings

To see details of Nosey Parker's findings, use the report command. This prints out a text-based report designed for human consumption:

(Note: the example report output is omitted here; the findings it shows are synthetic, invalid secrets.) Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.
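
With --format json, the report can be consumed programmatically. A hedged sketch that tallies findings per rule; the JSON schema (a list of findings with a rule_name field) is an assumption, so inspect the output and adjust the field names:

import json
import subprocess
from collections import Counter

out = subprocess.run(
    ["noseyparker", "report", "--datastore", "np.cpython", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

findings = json.loads(out)
# "rule_name" is an assumed field name; confirm against the actual output.
print(Counter(f.get("rule_name", "unknown") for f in findings))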

Enumerating repositories from GitHub

To list URLs for repositories belonging to GitHub users or organizations, use the github repos list command. This command uses the GitHub REST API to enumerate repositories belonging to one or more users or organizations. For example:

$ noseyparker github repos list --user octocat
https://github.com/octocat/Hello-World.git
https://github.com/octocat/Spoon-Knife.git
https://github.com/octocat/boysenberry-repo-1.git
https://github.com/octocat/git-consortium.git
https://github.com/octocat/hello-worId.git
https://github.com/octocat/linguist.git
https://github.com/octocat/octocat.github.io.git
https://github.com/octocat/test-repo1.git

An optional GitHub Personal Access Token can be provided via the NP_GITHUB_TOKEN environment variable. Providing an access token gives a higher API rate limit and may make additional repositories accessible to you.

Additional output formats are supported, including JSON and JSON lines, via the --format=FORMAT option.

See noseyparker help github for more details.

Getting help

Running the noseyparker binary without arguments prints top-level help and exits. You can get abbreviated help for a particular command by running noseyparker COMMAND -h.

Tip: More detailed help is available with the help command or long-form --help option.

Contributing

Contributions are welcome, particularly new regex rules. Developing new regex rules is detailed in a separate document.

If you are considering making significant code changes, please open an issue first to start discussion.

License

Nosey Parker is licensed under the Apache License, Version 2.0.

Any contribution intentionally submitted for inclusion in Nosey Parker by you, as defined in the Apache 2.0 license, shall be licensed as above, without any additional terms or conditions.



Malicious Python Package Uses Unicode Trickery to Evade Detection and Steal Data

A malicious Python package on the Python Package Index (PyPI) repository has been found to use Unicode as a trick to evade detection and deploy an info-stealing malware. The package in question, namedΒ onyxproxy, was uploaded to PyPI on March 15, 2023, and comes with capabilities to harvest and exfiltrate credentials and other valuable data. It has since been taken down, but not before attracting

Faraday - Open Source Vulnerability Management Platform


Security has two difficult tasks: designing smart ways of getting new information, and keeping track of findings to improve remediation efforts. With Faraday, you may focus on discovering vulnerabilities while we help you with the rest. Just use it in your terminal and get your work organized on the run. Faraday was made to let you take advantage of the available tools in the community in a truly multiuser way.

Faraday aggregates and normalizes the data you load, allowing exploring it into different visualizations that are useful to managers and analysts alike.

To read about the latest features check out the release notes!


Install

Docker-compose

The easiest way to get Faraday up and running is using our docker-compose:

$ wget https://raw.githubusercontent.com/infobyte/faraday/master/docker-compose.yaml
$ docker-compose up

If you want to customize, you can find an example config here.

Docker

You need to have a Postgres running first.

 $ docker run \
-v $HOME/.faraday:/home/faraday/.faraday \
-p 5985:5985 \
-e PGSQL_USER='postgres_user' \
-e PGSQL_HOST='postgres_ip' \
-e PGSQL_PASSWD='postgres_password' \
-e PGSQL_DBNAME='postgres_db_name' \
faradaysec/faraday:latest

PyPi

$ pip3 install faradaysec
$ faraday-manage initdb
$ faraday-server

Binary Packages (Debian/RPM)

You can find the installers on our releases page

$ sudo apt install faraday-server_amd64.deb
# Add your user to the faraday group
$ faraday-manage initdb
$ sudo systemctl start faraday-server

Source

If you want to run directly from this repo, this is the recommended way:

$ pip3 install virtualenv
$ virtualenv faraday_venv
$ source faraday_venv/bin/activate
$ git clone git@github.com:infobyte/faraday.git
$ pip3 install .
$ faraday-manage initdb
$ faraday-server

Check out our documentation for detailed information on how to install Faraday in all of our supported platforms

For more information about the installation, check out our Installation Wiki.

In your browser you can now go to http://localhost:5985 and log in with "faraday" as the username and the password given by the installation process.

Getting Started

Learn about Faraday's holistic approach and rethink vulnerability management.

Integrating faraday in your CI/CD

Setup Bandit and OWASP ZAP in your pipeline

Setup Bandit, OWASP ZAP and SonarQube in your pipeline

Faraday Cli

Faraday-cli is our command-line client, providing easy access to the console tools so you can work in Faraday directly from the terminal!

This is a great way to automate scans, integrate Faraday into your CI/CD pipeline, or just get metrics from a workspace.

$ pip3 install faraday-cli

Check our faraday-cli repo

Check out the documentation here.


Faraday Agents

Faraday Agents Dispatcher is a tool that gives Faraday the ability to run scanners or tools remotely from the platform and get the results.

Plugins

Connect your favorite tools through our plugins. Right now there are more than 80 supported tools, among which you will find:


Missing your favorite one? Create a Pull Request!

There are two Plugin types:

Console plugins which interpret the output of the tools you execute.

$ faraday-cli tool run "nmap www.exampledomain.com"
💻 Processing Nmap command
Starting Nmap 7.80 ( https://nmap.org ) at 2021-02-22 14:13 -03
Nmap scan report for www.exampledomain.com (10.196.205.130)
Host is up (0.17s latency).
rDNS record for 10.196.205.130: 10.196.205.130.bc.example.com
Not shown: 996 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
2222/tcp open EtherNetIP-1
3306/tcp closed mysql
Nmap done: 1 IP address (1 host up) scanned in 11.12 seconds
⬆ Sending data to workspace: test
✔ Done

Report plugins, which allow you to import previously generated artifacts like XMLs and JSONs.

faraday-cli tool report burp.xml

Creating custom plugins is super easy. Read more about Plugins.

API

You can access our API directly; check out the documentation here.



YATAS - A Simple Tool To Audit Your AWS Infrastructure For Misconfiguration Or Potential Security Issues With Plugins Integration


Yet Another Testing & Auditing Solution

The goal of YATAS is to help you create a secure AWS environment without too much hassle. It won't check for every best practice, only the ones that, in my experience, matter most. Please feel free to tell me if you find something that is not covered.


Features

YATAS is a simple and easy to use tool to audit your infrastructure for misconfiguration or potential security issues.

(Screenshots: example output without and with details.)

Installation

brew tap padok-team/tap
brew install yatas
yatas --init

Modify .yatas.yml to your needs.

yatas --install

Installs the plugins you need.

Usage

yatas -h

Flags:

  • --details: Show details of the issues found.
  • --compare: Compare the results of the previous run with the current run and show the differences.
  • --ci: Exit code 1 if there are issues found, 0 otherwise.
  • --resume: Only shows the number of tests passing and failing.
  • --time: Shows the time each test took to run in order to help you find bottlenecks.
  • --init: Creates a .yatas.yml file in the current directory.
  • --install: Installs the plugins you need.
  • --only-failure: Only show the tests that failed.
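
For instance, a CI step could combine the flags above to fail a pipeline on findings while keeping the log short (a sketch, not an official recipe):

# Exits 1 when issues are found, printing only the failing tests
yatas --ci --only-failure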

Plugins

  • AWS Audit (AWS checks): Good practices and security checks
  • Markdown Reports (Reporting): Generates a markdown report

Checks

Ignore results for known issues

You can ignore results of checks by adding the following to your .yatas.yml file:

ignore:
  - id: "AWS_VPC_004"
    regex: true
    values:
      - "VPC Flow Logs are not enabled on vpc-.*"
  - id: "AWS_VPC_003"
    regex: false
    values:
      - "VPC has only one gateway on vpc-08ffec87e034a8953"

Exclude a test

You can exclude a test by adding the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    exclude:
      - AWS_S3_001

Specify which tests to run

To only run a specific test, add the following to your .yatas.yml file:

plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    include:
      - "AWS_VPC_003"
      - "AWS_VPC_004"

Get error logs

You can get the error logs by adding the following to your env variables:

export YATAS_LOG_LEVEL=debug

The available log levels are debug, info, warn, error, fatal, and panic; logging is off by default.

AWS - 63 Checks

AWS Certificate Manager

  • AWS_ACM_001 ACM certificates are valid
  • AWS_ACM_002 ACM certificate expires in more than 90 days
  • AWS_ACM_003 ACM certificates are used

APIGateway

  • AWS_APG_001 ApiGateways logs are sent to Cloudwatch
  • AWS_APG_002 ApiGateways are protected by an ACL
  • AWS_APG_003 ApiGateways have tracing enabled

AutoScaling

  • AWS_ASG_001 Autoscaling maximum capacity is below 80%
  • AWS_ASG_002 Autoscaling groups are in two availability zones

Backup

  • AWS_BAK_001 EC2's Snapshots are encrypted
  • AWS_BAK_002 EC2's snapshots are younger than a day old

Cloudfront

  • AWS_CFT_001 Cloudfronts enforce at least TLS 1.2
  • AWS_CFT_002 Cloudfronts only allow HTTPS or redirect to HTTPS
  • AWS_CFT_003 Cloudfronts queries are logged
  • AWS_CFT_004 Cloudfronts are logging Cookies
  • AWS_CFT_005 Cloudfronts are protected by an ACL

CloudTrail

  • AWS_CLD_001 Cloudtrails are encrypted
  • AWS_CLD_002 Cloudtrails have Global Service Events Activated
  • AWS_CLD_003 Cloudtrails are in multiple regions

COG

  • AWS_COG_001 Cognito allows unauthenticated users

DynamoDB

  • AWS_DYN_001 Dynamodbs are encrypted
  • AWS_DYN_002 DynamoDBs have continuous backup enabled with PITR

EC2

  • AWS_EC2_001 EC2s don't have a public IP
  • AWS_EC2_002 EC2s have the monitoring option enabled

ECR

  • AWS_ECR_001 ECR images are scanned on push
  • AWS_ECR_002 ECRs are encrypted
  • AWS_ECR_003 ECRs tags are immutable

EKS

  • AWS_EKS_001 EKS clusters have logging enabled
  • AWS_EKS_002 EKS clusters have private endpoint or strict public access

LoadBalancer

  • AWS_ELB_001 ELBs have access logs enabled

GuardDuty

  • AWS_GDT_001 GuardDuty is enabled in the account

IAM

  • AWS_IAM_001 IAM Users have 2FA activated
  • AWS_IAM_002 IAM access keys are younger than 90 days
  • AWS_IAM_003 IAM Users can't elevate rights
  • AWS_IAM_004 IAM Users have not used their password for 120 days

Lambda

  • AWS_LMD_001 Lambdas are private
  • AWS_LMD_002 Lambdas are in a security group
  • AWS_LMD_003 Lambdas are free of errors

RDS

  • AWS_RDS_001 RDS are encrypted
  • AWS_RDS_002 RDS are backed up automatically with PITR
  • AWS_RDS_003 RDS have minor versions automatically updated
  • AWS_RDS_004 RDS aren't publicly accessible
  • AWS_RDS_005 RDS logs are exported to cloudwatch
  • AWS_RDS_006 RDS have the deletion protection enabled
  • AWS_RDS_007 Aurora Clusters have minor versions automatically updated
  • AWS_RDS_008 Aurora RDS are backed up automatically with PITR
  • AWS_RDS_009 Aurora RDS have the deletion protection enabled
  • AWS_RDS_010 Aurora RDS are encrypted
  • AWS_RDS_011 Aurora RDS logs are exported to cloudwatch
  • AWS_RDS_012 Aurora RDS aren't publicly accessible

S3 Bucket

  • AWS_S3_001 S3 buckets are encrypted
  • AWS_S3_002 S3 buckets are not global but in one zone
  • AWS_S3_003 S3 buckets are versioned
  • AWS_S3_004 S3 buckets have a retention policy
  • AWS_S3_005 S3 buckets have public access block enabled

Volume

  • AWS_VOL_001 EC2's volumes are encrypted
  • AWS_VOL_002 EC2s are using GP3
  • AWS_VOL_003 EC2s have snapshots
  • AWS_VOL_004 EC2's volumes are unused

VPC

  • AWS_VPC_001 VPC CIDRs are bigger than /20
  • AWS_VPC_002 VPC can't be in the same account
  • AWS_VPC_003 VPCs only have one Gateway
  • AWS_VPC_004 VPC Flow Logs are activated
  • AWS_VPC_005 VPCs have at least 2 subnets

How to create a new plugin?

You'd like to add a new plugin? Then simply visit yatas-plugin and follow the instructions.



Legitify - Detect And Remediate Misconfigurations And Security Risks Across All Your GitHub Assets


Strengthen the security posture of your GitHub organization!
Detect and remediate misconfigurations, security and compliance issues across all your GitHub assets with ease


Installation

  1. You can download the latest legitify release from https://github.com/Legit-Labs/legitify/releases; each archive contains:
  • Legitify binary for the desired platform
  • Built-in policies provided by Legit Security
  2. Or build from source with the following steps:
git clone git@github.com:Legit-Labs/legitify.git
go run main.go analyze ...

Provenance

To enhance the software supply chain security of legitify's users, as of v0.1.6, every legitify release contains a SLSA Level 3 Provenance document.
The provenance document refers to all artifacts in the release, as well as the generated docker image.
You can use SLSA framework's official verifier to verify the provenance.
Example usage for the darwin_arm64 architecture and the v0.1.6 release:

VERSION=0.1.6
ARCH=darwin_arm64
./slsa-verifier verify-artifact --source-branch main --builder-id 'https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.2.2' --source-uri "git+https://github.com/Legit-Labs/legitify" --provenance-path multiple.intoto.jsonl ./legitify_${VERSION}_${ARCH}.tar.gz

Requirements

  1. To get the most out of legitify, you need to be an owner of at least one GitHub organization. Otherwise, you can still use the tool if you're an admin of at least one repository inside an organization, in which case you'll be able to see only repository-related policies results.
  2. legitify requires a GitHub personal access token (PAT) to analyze your resources successfully, which can be either provided as an argument (-t) or as an environment variable ($GITHUB_ENV). The PAT needs the following scopes for full analysis:
admin:org, read:enterprise, admin:org_hook, read:org, repo, read:repo_hook

See Creating a Personal Access Token for more information.
Fine-grained personal access tokens are currently not supported because they do not support GitHub's GraphQL API (https://github.blog/2022-10-18-introducing-fine-grained-personal-access-tokens-for-github/).

Usage

LEGITIFY_TOKEN=<your_token> legitify analyze

By default, legitify will check the policies against all your resources (organizations, repositories, members, actions).

You can control which resources will be analyzed with command-line flags namespace and org:

  • --namespace (-n): will analyze policies that relate to the specified resources
  • --org: will limit the analysis to the specified organizations
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

The above command will test organization and member policies against org1 and org2.

GitHub Enterprise Support

You can run legitify against a GitHub Enterprise instance if you set the endpoint URL in the environment variable SERVER_URL:

export SERVER_URL="https://github.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --org org1,org2 --namespace organization,member

GitLab Cloud/Server Support

To run legitify against GitLab Cloud, set the scm flag to gitlab (--scm gitlab); to run against GitLab Server, you also need to provide SERVER_URL:

export SERVER_URL="https://gitlab.example.com/"
LEGITIFY_TOKEN=<your_token> legitify analyze --namespace organization --scm gitlab

Namespaces

Namespaces in legitify are resources that are collected and run against the policies. Currently, the following namespaces are supported:

  1. organization - organization level policies (e.g., "Two-Factor Authentication Is Not Enforced for the Organization")
  2. actions - organization GitHub Actions policies (e.g., "GitHub Actions Runs Are Not Limited To Verified Actions")
  3. member - organization members policies (e.g., "Stale Admin Found")
  4. repository - repository level policies (e.g., "Code Review By At Least Two Reviewers Is Not Enforced")
  5. runner_group - runner group policies (e.g., "runner can be used by public repositories")

By default, legitify will analyze all namespaces. You can limit the analysis to selected ones by passing the --namespace flag followed by a comma-separated list of namespaces.

Output Options

By default, legitify will output the results in a human-readable format. This includes the list of policy violations listed by severity, as well as a summary table that is sorted by namespace.

Output Formats

Using the --output-format (-f) flag, legitify supports outputting the results in the following formats:

  1. human-readable - Human-readable text (default).
  2. json - Standard JSON.

Output Schemes

Using the --output-scheme flag, legitify supports outputting the results in different grouping schemes. Note: --output-format=json must be specified to output non-default schemes.

  1. flattened - No grouping; a flat listing of the policies, each with its violations (default).
  2. group-by-namespace - Group the policies by their namespace.
  3. group-by-resource - Group the policies by their resource, e.g., a specific organization/repository.
  4. group-by-severity - Group the policies by their severity.
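
For example, to combine both flags and get JSON output grouped by severity:

LEGITIFY_TOKEN=<your_token> legitify analyze --output-format json --output-scheme group-by-severity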

Output Destinations

  • --output-file - full path of the output file (default: no output file, prints to stdout).
  • --error-file - full path of the error logs (default: ./error.log).

Coloring

When outputting in a human-readable format, legitify supports the conventional --color[=when] flag, which has the following options:

  • auto - colored output if stdout is a terminal, uncolored otherwise (default).
  • always - colored output regardless of the output destination.
  • none - uncolored output regardless of the output destination.

Misc

  • Use the --failed-only flag to filter out passed/skipped checks from the result.

Scorecard Support

scorecard is an OSSF open-source project:

Scorecards is an automated tool that assesses a number of important heuristics ("checks") associated with software security and assigns each check a score of 0-10. You can use these scores to understand specific areas to improve in order to strengthen the security posture of your project. You can also assess the risks that dependencies introduce, and make informed decisions about accepting these risks, evaluating alternative solutions, or working with the maintainers to make improvements.

legitify supports running scorecard for all of the organization's repositories, enforcing score policies and showing the results using the --scorecard flag:

  • no - do not run scorecard (default).
  • yes - run scorecard and employ a policy that alerts on each repo score below 7.0.
  • verbose - run scorecard, employ a policy that alerts on each repo score below 7.0, and embed its output to legitify's output.

legitify runs the following scorecard checks:

Check                   Public Repository   Private Repository
Security-Policy         V
CII-Best-Practices      V
Fuzzing                 V
License                 V
Signed-Releases         V
Branch-Protection       V                   V
Code-Review             V                   V
Contributors            V                   V
Dangerous-Workflow      V                   V
Dependency-Update-Tool  V                   V
Maintained              V                   V
Pinned-Dependencies     V                   V
SAST                    V                   V
Token-Permissions       V                   V
Vulnerabilities         V                   V
Webhooks                V                   V

Policies

legitify comes with a set of policies in the policies/github directory. These policies are documented here.

In addition, you can use the --policies-path (-p) flag to specify a custom directory for OPA policies.
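
For example (the directory name is illustrative):

LEGITIFY_TOKEN=<your_token> legitify analyze --policies-path ./my-policies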

Contribution

Thank you for considering contributing to Legitify! We encourage and appreciate any kind of contribution. Here are some resources to help you get started:



nuvola - Tool To Dump And Perform Automatic And Manual Security Analysis On Aws Environments Configurations And Services

nuvola (with the lowercase n) is a tool to dump and perform automatic and manual security analysis on AWS environments configurations and services using predefined, extensible and custom rules created using a simple Yaml syntax.

The general idea behind this project is to create an abstracted digital twin of a cloud platform. For a more concrete example: nuvola mirrors the approach BloodHound takes for Active Directory analysis, but applied to cloud environments (at the moment only AWS).

The usage of a graph database also increases the possibility of finding different and innovative attack paths and can be used as an offline, centralised and lightweight digital twin.


Quick Start

Requirements

  • docker-compose installed
  • an AWS account configured for use with awscli, with access to read the cloud resources, ideally in ReadOnly mode (the policy arn:aws:iam::aws:policy/ReadOnlyAccess is fine)
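
As an illustration of that ReadOnly setup, a dedicated profile could be prepared with the standard AWS CLI; the user name below is illustrative, and the profile name matches the -profile flag used in the Usage section:

aws iam attach-user-policy \
  --user-name nuvola-ro \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
aws configure --profile default_RO   # then enter that user's access keys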

Setup

  1. Clone the repository:
git clone --depth=1 https://github.com/primait/nuvola.git; cd nuvola
  2. Create and edit, if required, the .env file to set your DB username/password/URL:
cp .env_example .env;
  3. Start the Neo4j docker instance:
make start
  4. Build the tool:
make build

Usage

  1. First, dump all the supported AWS services configurations and load the data into the Neo4j database:
./nuvola dump -profile default_RO -outputdir ~/DumpDumpFolder -format zip
  2. To import a previously executed dump operation into the Neo4j database:
./nuvola assess -import ~/DumpDumpFolder/nuvola-default_RO_20220901.zip
  3. To only perform static assessments on the data loaded into the Neo4j database using the predefined ruleset:
./nuvola assess
  4. Or use Neo4j Browser to manually explore the digital twin.

About nuvola

To get started with nuvola and its database schema, check out the nuvola Wiki.

No data is sent or shared with Prima Assicurazioni.

How to contribute

  • reporting bugs and issues
  • reporting new improvements
  • reviewing issues and pull requests
  • fixing bugs and issues
  • creating new rules
  • improving the overall quality


License

nuvola uses graph theory to reveal possible attack paths and security misconfigurations on cloud environments.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this repository and program. If not, see http://www.gnu.org/licenses/.



Cicd-Goat - A Deliberately Vulnerable CI/CD Environment


Deliberately vulnerable CI/CD environment. Hack CI/CD pipelines, capture the flags.

Created by Cider Security.

Description

The CI/CD Goat project allows engineers and security practitioners to learn and practice CI/CD security through a set of 10 challenges, enacted against a real, full-blown CI/CD environment. The scenarios are of varying difficulty levels, with each scenario focusing on one primary attack vector.

The challenges cover the Top 10 CI/CD Security Risks, including Insufficient Flow Control Mechanisms, PPE (Poisoned Pipeline Execution), Dependency Chain Abuse, PBAC (Pipeline-Based Access Controls), and more.
The different challenges are inspired by Alice in Wonderland; each one is themed after a different character.


The project’s environment is based on Docker images and can be run locally. These images are:

  1. Gitea (minimal git server)
  2. Jenkins
  3. Jenkins agent
  4. LocalStack (cloud service emulator that runs in a single container)
  5. Lighttpd
  6. CTFd (Capture The Flag framework).

The images are configured to interconnect in a way that creates fully functional pipelines.

Download & Run

There's no need to clone the repository.

Linux & Mac

curl -o cicd-goat/docker-compose.yaml --create-dirs https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml
cd cicd-goat && docker-compose up -d

Windows (Powershell)

mkdir cicd-goat; cd cicd-goat
curl -o docker-compose.yaml https://raw.githubusercontent.com/cider-security-research/cicd-goat/main/docker-compose.yaml
(Get-Content docker-compose.yaml) -replace "bridge","nat" | Set-Content docker-compose.yaml
docker-compose up -d

Usage

Instructions

  • Spoiler alert! Avoid browsing the repository files as they contain spoilers.
  • To configure your git client for accessing private repositories, we suggest cloning using the HTTP URL.
  • In each challenge, find the flag - in the format of flag# (e.g. flag2), or another format if mentioned specifically.
  • Each challenge stands on its own. Do not use access gained in one challenge to solve another challenge.
  • If needed, use the hints on CTFd.
  • There is no need to exploit CVEs.
  • No need to hijack admin accounts of Gitea or Jenkins (named "admin" or "red-queen").

Take the challenge

  1. After starting the containers, it might take up to 5 minutes until the containers' configuration process is complete (see the progress-check commands after this list).

  2. Login to CTFd at http://localhost:8000 to view the challenges:

    • Username: alice
    • Password: alice
  3. Hack:

  4. Insert the flags on CTFd and find out if you got it right.
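
While waiting for the setup from step 1, generic Docker Compose commands can show whether the environment is still configuring (these are standard commands, not project-specific):

docker-compose ps        # container status
docker-compose logs -f   # follow the setup logs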

Troubleshooting

  • If Gitea shows a blank page, refresh the page.
  • When forking a repository, don't change the name of the forked repository.

Solutions

Warning: Spoilers!

🙈

See Solutions.

Contributing

Development

  1. Clone the repository.

  2. Rename .git folders to make them usable:

    python3 rename.py git
  3. Install testing dependencies:

    pip3 install pipenv==2022.8.30
    pipenv install --deploy
  4. Run the development environment to experiment with new changes:

    rm -rf tmp tmp-ctfd/
    cp -R ctfd/data/ tmp-ctfd/
    docker-compose -f docker-compose-dev.yaml up -d
  5. Make the desired changes:

    • All services except CTFd are completely configured as code so desired changes should be made to the files in the appropriate folders.
    • To make changes in CTFd, use the admin credentials.
  6. Shutdown the environment, move changes made in CTFd and rebuild it:

    docker-compose -f docker-compose-dev.yaml down
    ./apply.sh # save CTFd changes
    docker-compose -f docker-compose-dev.yaml up -d --build
  7. Run tests:

    pytest tests/
  8. Rename .git folders to allow push:

    python3 rename.py notgit
  9. Commit and push!

Checklist

Follow the checklist below to add a challenge:

  1. CTFd:
    1. Write challenge description.
    2. Choose category according to difficulty level.
    3. Make sure the challenge is visible and has value according to difficulty.
    4. Write hints in order of usage.
    5. Add a flag. Make sure to select if it's case-insensitive.
  2. Gitea:
    1. Configure a new repository in gitea.yaml.
    2. Create the repository under gitea/repositories. Use an open-source repository that uses the MIT license as a template for the challenge repository.
  3. Jenkins:
    1. Configure Jenkins and add new jobdsl files in the casc.yaml file.
    2. Make sure jobs don't run periodically. Jobs should be triggered by events / polling.
    3. Validate that the new challenge doesn't interfere with other challenges.
  4. Make sure the flag is not accessible when solving other challenges.
  5. Write tests.
  6. Write the solution.
  7. Update README.md if needed.
  8. In order to run the CI, make sure you have a CircleCI account and that you’ve clicked β€œSet Up Project” on your fork of the project.


secureCodeBox (SCB) - Continuous Secure Delivery Out Of The Box


secureCodeBox is a Kubernetes-based, modularized toolchain for continuous security scans of your software project. Its goal is to orchestrate and easily automate a bunch of security-testing tools out of the box.


For additional documentation, please have a look at our documentation website.

Purpose of this Project

The typical way to ensure application security is to hire a security specialist (aka penetration tester) at some point in your project to check the application for security bugs and vulnerabilities. Usually, this check is done at a later stage of the project and has two major drawbacks:

  1. Nowadays, a lot of projects do continuous delivery, which means the developers deploy new versions multiple times each day. The penetration tester is only able to check a single snapshot, but some further commits could introduce new security issues. To ensure ongoing application security, the penetration tester should also continuously test the application. Unfortunately, such an approach is rarely financially feasible.
  2. Due to a typically time-boxed analysis, the penetration tester has to focus on trivial security issues (low-hanging fruit) and therefore will probably not address the serious, non-obvious ones.

With the secureCodeBox we provide a toolchain for continuous scanning of applications to find the low-hanging fruit issues early in the development process and free the resources of the penetration tester to concentrate on the major security issues.



The purpose of secureCodeBox is not to replace penetration testers or make them obsolete. We strongly recommend having experienced penetration testers run extensive tests on all your applications.

Important note: The secureCodeBox is not a simple one-click solution! You must have a deep understanding of security and of how to configure the scanners. Furthermore, you also need to understand the scan results and how to interpret them.

There is a German article about Security DevOps – Angreifern (immer) einen Schritt voraus (roughly: "Security DevOps – (always) one step ahead of attackers") in the software engineering journal OBJEKTSpektrum.

Quickstart

You can find resources to help you get started on our documentation website, including instructions on how to install the secureCodeBox and guides to help you run your first scans with it.
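
As an orientation, installation is Helm-based; the sketch below follows the general shape of the official docs, but the chart repository URL and the release/namespace names are assumptions you should verify against the documentation website:

helm repo add secureCodeBox https://charts.securecodebox.io
helm install securecodebox-operator secureCodeBox/operator \
  --namespace securecodebox-system --create-namespace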

Architecture Overview


Upgrading

For the steps required for upgrading your secureCodeBox installation, see Upgrading.

License

Code of secureCodeBox is licensed under the Apache License 2.0.

Community

You are welcome; please join us on...

👋

secureCodeBox is an official OWASP project.

Author Information

Sponsored and maintained by iteratec GmbH - secureCodeBox.io



Wrongsecrets - Examples With How To Not Use Secrets


Welcome to the OWASP WrongSecrets p0wnable app. With this app, we have packed various examples of how not to store your secrets. These can help you realize whether your secret management is OK. The challenge is to find all the different secrets by means of various tools and techniques.

Can you solve all 16 challenges?


Support

Need support? Contact us via OWASP Slack (for which you sign up here), file a PR, file an issue, or use discussions. Please note that this is an OWASP volunteer-based project, so it might take a little while before we respond.

Basic docker exercises

Can be used for challenges 1-4, 8, 12-15

For the basic Docker exercises, you currently require Docker.

You can then run the app with:

docker run -p 8080:8080 jeroenwillemsen/wrongsecrets:1.4.0-no-vault

Now you can try to find the secrets by solving the challenges offered at http://localhost:8080.

Note that these challenges are still very basic, and so are their explanations. Feel free to file a PR to make them look better ;-).

Running these on Heroku

You can test them out at https://wrongsecrets.herokuapp.com/ as well! But please understand that we offer NO guarantees that this works. Given that we run on the Heroku free tier, please do not fuzz it and/or try to bring it down: you would be spoiling it for others who want to test-drive it.

Deploying the app under your own heroku account

  1. Sign up to Heroku and log in to your account
  2. Click the button below and follow the instructions

Basic K8s exercise

Can be used for challenges 1-6, 8, 12-16

Minikube based

Make sure you have the following installed:

The K8S setup currently is based on using Minikube for local fun:

minikube start
kubectl apply -f k8s/secrets-config.yml
kubectl apply -f k8s/secrets-secret.yml
kubectl apply -f k8s/secret-challenge-deployment.yml
while [[ $(kubectl get pods -l app=secret-challenge -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for secret-challenge" && sleep 2; done
kubectl expose deployment secret-challenge --type=LoadBalancer --port=8080
minikube service secret-challenge

Now you can use the provided IP address and port to further play with the K8s variant (instead of localhost).

k8s based

Want to run vanilla on your own k8s? Use the commands below:

kubectl apply -f k8s/secrets-config.yml
kubectl apply -f k8s/secrets-secret.yml
kubectl apply -f k8s/secret-challenge-deployment.yml
while [[ $(kubectl get pods -l app=secret-challenge -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for secret-challenge" && sleep 2; done
kubectl port-forward \
  $(kubectl get pod -l app=secret-challenge -o jsonpath="{.items[0].metadata.name}") \
  8080:8080

Now you can use the provided IP address and port to further play with the K8s variant (instead of localhost).

Vault exercises with minikube

Can be used for challenges 1-8, 12-16. Make sure you have the following installed:

Run ./k8s-vault-minkube-start.sh; when the script is done, the challenges will be waiting for you at http://localhost:8080. This will allow you to run challenges 1-8, 12-15.

If you stopped the k8s-vault-minikube-start.sh script and want to resume the port forward, run k8s-vault-minikube-resume.sh. This is needed because running the start script again would replace the secret in the vault without updating the secret-challenge application with the new secret.

Cloud Challenges

Can be used for challenges 1-16

READ THIS: Given that the exercises below contain IAM privilege-escalation exercises, never run them on an account that is related to your production environment or that can influence your account-wide resources.

Running WrongSecrets in AWS

Follow the steps in the README in the AWS subfolder.

Running WrongSecrets in GCP

Follow the steps in the README in the GCP subfolder.

Running WrongSecrets in Azure

Follow the steps in the README in the Azure subfolder.

Running Challenge15 in your own cloud only

When you want to include your own Canarytokens for your cloud-deployment, do the following:

  1. Fork the project.
  2. Make sure you use the GCP ingress or AWS ingress scripts to generate an ingress for your project.
  3. Go to canarytokens.org and select AWS Keys, in the webHook URL field add <your-domain-created-at-step1>/canaries/tokencallback.
  4. Encrypt the received credentials so that Challenge15 can decrypt them again.
  5. Commit the unencrypted and encrypted materials to Git and then commit again without the decrypted materials.
  6. Adapt the hints of Challenge 15 in your fork to point to your fork.
  7. Create a container and push it to your registry
  8. Override the K8s definition files for either AWS or GCP.

Do you want to play without guidance?

Each challenge has a Show hints button and a What's wrong? button. These buttons simplify the challenges and give explanations to the reader. However, the explanations can spoil the fun if you want to treat this as a hacking exercise. Therefore, you can disable them by overriding the following settings in your env:

  • hints_enabled=false will turn off the Show hints button.
  • reason_enabled=false will turn off the What's wrong? explanation button.
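
With the Docker variant from earlier, for example, these settings could be injected as environment variables (a sketch; Spring's relaxed binding is assumed to pick them up):

docker run -p 8080:8080 \
  -e hints_enabled=false -e reason_enabled=false \
  jeroenwillemsen/wrongsecrets:1.4.0-no-vault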

Special thanks & Contributors

Leaders:

Top contributors:

Testers:

Special mentions for helping out:

Help Wanted

You can help us in the following ways:

  • Star us
  • Share this app with others
  • Of course, we can always use your help to get more flavors of "wrongly" configured secrets in, to spread awareness! We would love some help with other cloud providers, like Alibaba or Tencent Cloud, for instance. Do you miss something other than a cloud provider as an example? File an issue or create a PR! See our guide on contributing for more details. Contributors will be listed in releases, in the "Special thanks & Contributors" section, and in the web app.

Use OWASP WrongSecrets as a secret detection benchmark

As tons of secret detection tools are coming up for both Docker and Git, we are creating a benchmark testbed for them. Want to know if your tool detects everything? We will keep track of the embedded secrets in this issue and have a branch in which we put additional secrets for your tool to detect. The branch will contain a Docker container generation script with which you can eventually test your container secret scanning.

Notes on development

For development on a local machine, use the local profile: ./mvnw spring-boot:run -Dspring-boot.run.profiles=local

If you want to test against Vault without K8s, start Vault locally with:

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_API_ADDR='http://127.0.0.1:8200'
vault server -dev

and in your next terminal, do (with the token from the previous commands):

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='<TOKENHERE>'
vault token create -id="00000000-0000-0000-0000-000000000000" -policy="root"
vault kv put secret/secret-challenge vaultpassword.password="$(openssl rand -base64 16)"

Now use the local-vault profile to do your development.

./mvnw spring-boot:run -Dspring-boot.run.profiles=local,local-vault

If you want to dev without a Vault instance, use additionally the without-vault profile to do your development:

./mvnw spring-boot:run -Dspring-boot.run.profiles=local,without-vault

Want to push a container? See .github/scripts/docker-create-and-push.sh for a script that generates and pushes all containers. Do not forget to rebuild the app before composing the container.

Dependency management

We have CycloneDX and OWASP Dependency-Check integrated to check dependencies for vulnerabilities. You can run OWASP Dependency-Check by calling mvn dependency-check:aggregate, and mvn cyclonedx:makeBom to have CycloneDX create an SBOM.
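
Spelled out with the Maven wrapper used elsewhere in this README (assuming the plugins are configured in the project's pom.xml):

./mvnw dependency-check:aggregate   # OWASP Dependency-Check vulnerability report
./mvnw cyclonedx:makeBom            # CycloneDX SBOM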

Automatic reload during development

To make changes load faster, we added spring-dev-tools to the Maven project. To enable this in IntelliJ automatically, make sure:

  • Under Compiler -> Automatically build project is enabled, and
  • Under Advanced settings -> Allow auto-make to start even if developed application is currently running is enabled.

You can also manually invoke Build -> Recompile on the file you just changed; this will also force a reload of the application.

How to add a Challenge

Follow the steps below on adding a challenge:

  1. First make sure that you have an Issue reported for which a challenge is really wanted.
  2. Add the new challenge in the org.owasp.wrongsecrets.challenges folder. Make sure you add an explanation in src/main/resources/explanations and refer to it from your new Challenge class.
  3. Add a unit and integration test to show that your challenge is working.
  4. Don't forget to add @Order annotation to your challenge ;-).

If you want to move existing cloud challenges to another cloud: extend the Challenge classes in the org.owasp.wrongsecrets.challenges.cloud package and make sure you add the required Terraform in a folder identifying the separate cloud. Make sure that the environment is added to org.owasp.wrongsecrets.RuntimeEnvironment. Collaborate with the others at the project to get your container running so you can test in the cloud account.


