Crawlector (a combination of "Crawler" and "Detector") is a threat-hunting framework designed for scanning websites for malicious objects.
Note-1: The framework was first presented at the No Hat conference in Bergamo, Italy on October 22nd, 2022 (Slides, YouTube Recording). Also, it was presented for the second time at the AVAR conference, in Singapore, on December 2nd, 2022.
Note-2: The accompanying tool EKFiddle2Yara (a tool that takes EKFiddle rules and converts them into Yara rules), mentioned in the talk, was also released at both conferences.
This feature checks every scanned page against a list of known-malicious URLs. The framework can either query the list of malicious URLs from the URLHaus server (configuration: url_list_web) or read it from a file on disk (configuration: url_list_file); if the latter is specified, it takes precedence over the former.
It works by searching the content of every page against all URL entries in url_list_web or url_list_file, checking for all occurrences. Additionally, upon a match, and if the configuration option check_url_api is set to true, Crawlector sends a POST request to the API URL set in the url_api configuration option, which returns a JSON object with extra information about the matching URL. Such information includes urlh_status (e.g., online, offline, unknown), urlh_threat (e.g., malware_download), urlh_tags (e.g., elf, Mozi), and urlh_reference (e.g., https://urlhaus.abuse.ch/url/1116455/). This information is included in the log file cl_mlog_<current_date><current_time><(pm|am)>.csv (check below) only if check_url_api is set to true. Otherwise, the log file includes the columns urlh_url (list of matching malicious URLs) and urlh_hit (number of occurrences for every matching malicious URL), conditional on check_url being set to true.
The URLHaus feature can be disabled in its entirety by setting the configuration option check_url to false.
It is important to note that this feature can slow scanning considerably, given the huge number of malicious URLs (~130 million entries at the time of this writing) that need to be checked, and the time it takes to get extra information from the URLHaus server (if the option check_url_api is set to true).
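As a concrete sketch, the URLHaus-related options described above might look like this in cl_config.ini (the values and comments are illustrative placeholders; only the option names come from the text above):

```ini
; URLHaus scanning (illustrative values, not Crawlector defaults)
check_url     = true                                      ; master switch for the URLHaus feature
url_list_web  = https://urlhaus.abuse.ch/downloads/text/  ; online URL list
url_list_file = urlhaus_list.txt                          ; local list; takes precedence if set
check_url_api = true                                      ; on a match, POST to url_api for extra details
url_api       = https://urlhaus-api.abuse.ch/v1/url/
```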
It is very important that you familiarize yourself with the configuration file cl_config.ini before running any session. All of the sections and parameters are documented in the configuration file itself.
The Yara offline scanning feature is a standalone option: if enabled, Crawlector executes only this feature, irrespective of other enabled features. The same is true for the crawling for domains'/sites' digital certificates feature. Either way, it is recommended that you disable all unused features in the configuration file.
Regardless of the logging destination (log_to_file or log_to_cons), if a Yara rule references only a module's attributes (e.g., PE, ELF, Hash), then Crawlector will display only the rule's name upon a match, excluding offset and length data.

To visit/scan a website, the list of URLs must be stored in text files, in the directory "cl_sites".
Crawlector accepts three types of URLs:
[a-zA-Z0-9_-]{1,128} = <url>
<id>[depth:<0|1>-><\d+>,total:<\d+>,sleep:<\d+>] = <url>
For example,
mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com
which is equivalent to: mfmokbel[d:1->3,t:10,s:0] = https://www.mfmokbel.com
where <id> := [a-zA-Z0-9_-]{1,128}
depth, total and sleep, can also be replaced with their shortened versions d, t and s, respectively.
With these settings, Crawlector scans a total of 40 (10 + (10*3)) URLs.

Note 1: A type 3 URL can be turned into a type 1 URL by setting the configuration parameter live_crawler to false, in the spider section of the configuration file.
Note 2: Empty lines and lines that start with ";" or "//" are ignored.
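Putting the URL formats together, a file under cl_sites might look like this (hypothetical IDs and a placeholder domain):

```text
; one-off visit (type 1)
my_site = https://www.example.com

// spidered visit (type 3): depth 1->3, total 10, sleep 0
my_spider[d:1->3,t:10,s:0] = https://www.example.com
```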
The spider functionality is what gives Crawlector the capability to find additional links on the targeted page. The Spider supports the following features:

- Only type 3 URLs trigger the Spider functionality.
- URLs matching the patterns in the exclude_url config option are excluded from spidering. For example, *.zip|*.exe|*.rar|*.7z|*.pdf|*.bat|*.db
- Only URLs matching the patterns in the include_url config option are followed. For example, */checkout/*|*/products/*
- The exclude_https config option (documented in cl_config.ini).
- add_ext_links: follows external links found on the page. This feature honours the exclude_url and include_url config options.
- ext_links_only: restricts spidering to external links only. This feature honours the exclude_url and include_url config options.

The site_ranking section in the configuration file provides some options to alter how the site-ranking CSV file is to be read.
The site section provides the capability to expand on a given site by attempting to find all available top-level domains (TLDs) and/or subdomains for the same domain. If found, new TLDs/subdomains are checked like any other domain.

- This feature uses the rapid_api_key option in the configuration file.
- With find_tlds enabled, in addition to the Omnisint Labs API TLD results, the framework attempts to find other active/registered domains by going through every TLD entry, either in the tlds_file or the tlds_url.
- If tlds_url is set, it should point to a URL that hosts TLDs, each one on a new line (lines that start with ';', '#' or '//' are ignored).
- tlds_file holds the filename that contains the list of TLDs (same format as for tlds_url; only the TLD is present, excluding the '.', e.g., "com", "org").
- If tlds_file is set, it takes precedence over tlds_url.
- tld_dl_time_out sets the maximum timeout for the dnslookup function when checking whether the domain in question resolves.
- tld_use_connect enables connecting to the domain in question over a list of ports, defined in the option tlds_connect_ports.
- tlds_connect_ports accepts a comma-separated list of ports, or a list of ranges, such as 25-40,90-100,80,443,8443 (range start and end are inclusive).
- tld_con_time_out sets the maximum timeout for the connect function.
- tld_con_use_ssl enables/disables the use of SSL when attempting to connect to the domain.
- If save_to_file_subd is set to true, discovered subdomains are saved to "\expanded\exp_subdomain_<pm|am>.txt".
- If save_to_file_tld is set to true, discovered domains are saved to "\expanded\exp_tld_<pm|am>.txt".
- If exit_here is set to true, Crawlector bails out after executing this [site] function, irrespective of other enabled options; found sites won't be crawled/spidered.

Only text files under the directory cl_sites are allowed.

Open for pull requests and issues. Comments and suggestions are greatly appreciated.
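As an aside, the tlds_connect_ports syntax described above (comma-separated ports and inclusive ranges) can be expanded with logic like the following. This is a hypothetical sketch in Go, not Crawlector's actual implementation; parsePorts is a made-up name:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePorts expands a comma-separated list of ports and inclusive
// ranges (e.g. "25-40,90-100,80,443,8443") into individual port numbers.
func parsePorts(spec string) ([]int, error) {
	var ports []int
	for _, part := range strings.Split(spec, ",") {
		part = strings.TrimSpace(part)
		if start, end, ok := strings.Cut(part, "-"); ok {
			lo, err := strconv.Atoi(start)
			if err != nil {
				return nil, err
			}
			hi, err := strconv.Atoi(end)
			if err != nil {
				return nil, err
			}
			// Range bounds are inclusive, matching the config semantics.
			for p := lo; p <= hi; p++ {
				ports = append(ports, p)
			}
		} else {
			p, err := strconv.Atoi(part)
			if err != nil {
				return nil, err
			}
			ports = append(ports, p)
		}
	}
	return ports, nil
}

func main() {
	ports, err := parsePorts("25-40,90-100,80,443,8443")
	fmt.Println(len(ports), err) // 30 <nil>
}
```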
Mohamad Mokbel (@MFMokbel)
Nowadays, "cybersecurity" is the buzzword du jour, infiltrating every organization, invited or not. Furthermore, this is the case around the world, where an increasing proportion of all services now have an online presence, prompting businesses to reconsider the security of their systems. This, however, is not news to Cisco, as we anticipated it and were prepared to serve and assist clients worldwide.
Secure Cloud Analytics, part of the Cisco Threat Detection and Response (TD&R) portfolio, is an industry-leading tool for tackling core Network Detection and Response (NDR) use cases. These workflows focus primarily on threat detection and on how security teams can recognize the most critical issues around hunting and forensic investigations to improve their mean time to respond.
Over the last year, the product team worked tirelessly to strengthen the NDR offering. New telemetry sources, more advanced detections, and observations supplement the context of essential infrastructure aspects as well as usability and interoperability improvements. Additionally, the long-awaited solution Cisco Telemetry Broker is now available, providing a richer SecOps experience across the product.
As part of our innovation story on alerting capabilities, Secure Cloud Analytics now features new detections tied to the MITRE ATT&CK framework, such as Worm Propagation, Suspicious User Agent, and Azure OAuth Bypass.
Additionally, various new roles and observations were added to Secure Cloud Analytics to improve and extend user alerts, which are foundational pieces of our detections. Alerts now include a direct link to AWS assets and their VPC, as well as direct access to Azure Security Groups, enabling further investigation through simplified workflows. In addition, the public cloud providers are now included in coverage reports that provide a gap analysis to determine which accounts are covered. Alert Details offers new device information, such as host names, subnets, and role metrics that emphasize detection techniques. To better configure alerts, we are adding telemetry to gain contextual reference on their priority. Furthermore, the ingest process has grown more robust thanks to data from the Talos intelligence feed and ISE.
The highly anticipated SecureX integration is now available in a single click, with no API credentials required and smooth interaction between the two platforms. Most importantly, Secure Cloud Analytics alerts may now be configured to automatically publish as incidents to the SecureX Incident Manager. The Talos Intelligence Watchlist Hits Alert is on by default due to its prominence among the many alert types.
Among other enhancements to graphs and visualizations, the Encrypted Traffic widget allows for an hourly breakdown of data. Simultaneously, the Device Report contains traffic data for a specific timestamp, which may be downloaded as a CSV. Furthermore, the Event Viewer now displays bi-directional session traffic to provide even more context to Secure Cloud Analytics flows, as well as additional columns to help with telemetry log comprehension: Cloud Account, Cloud Region, Cloud VPC, Sensor, and Exporter.
On-premises sensors now provide additional telemetry on the overview page and a dedicated page where users can look further into the telemetry flowing through them in Sensor Health. To optimize the Secure Cloud Analytics deployment and improve the user experience, sensors may now be deleted from the interface.
Regarding telemetry, Cisco Telemetry Broker can now serve as a sensor in Secure Cloud Analytics, so users can identify and respond to threats faster with additional context sent to Secure Cloud Analytics. In addition, there will soon be support for other telemetry types besides IPFIX and NetFlow.
As we can see from the vast number of new additions to Secure Cloud Analytics, the product team has been working hard to understand the latest market trends, listen to customers' requests, and build one of the finest SaaS products in the NDR industry segment. The efforts strongly underline how Secure Cloud Analytics can solve some of the most important challenges in the NDR space around visibility, fidelity of alerts, and deployment complexity by providing a cloud-hosted platform that offers insights on on-premises and cloud environments simultaneously, from the same dashboard. Learn more about the new features that allow Secure Cloud Analytics users to detect, analyze, and respond to the most critical threats to their company much more quickly.
We'd love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!
Threatest is a Go framework for testing threat detection end-to-end.
Threatest allows you to detonate an attack technique, and verify that the alert you expect was generated in your favorite security platform.
Read the announcement blog post: https://securitylabs.datadoghq.com/articles/threatest-end-to-end-testing-threat-detection/
A detonator describes how and where an attack technique is executed.
Supported detonators:
An alert matcher is a platform-specific integration that can check if an expected alert was triggered.
Supported alert matchers:
Each detonation is assigned a UUID. This UUID is reflected in the detonation and used to ensure that the matched alert corresponds exactly to this detonation.
The way this is done depends on the detonator; for instance, Stratus Red Team and the AWS Detonator inject it in the user-agent; the SSH detonator uses a parent process containing the UUID.
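The correlation mechanism can be sketched as follows. This is a hypothetical illustration in Go; newDetonationID, taggedUserAgent, and alertMatchesDetonation are made-up names, not Threatest's API:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"strings"
)

// newDetonationID returns a random token standing in for the UUID that
// Threatest assigns to each detonation (illustrative sketch only).
func newDetonationID() string {
	b := make([]byte, 16)
	rand.Read(b)
	return hex.EncodeToString(b)
}

// taggedUserAgent embeds the detonation ID in a user-agent string, in the
// spirit of how Stratus Red Team and the AWS detonator tag their requests.
func taggedUserAgent(id string) string {
	return fmt.Sprintf("threatest/%s", id)
}

// alertMatchesDetonation reports whether an alert payload references the
// detonation ID, tying a matched alert back to exactly one detonation.
func alertMatchesDetonation(payload, id string) bool {
	return strings.Contains(payload, id)
}

func main() {
	id := newDetonationID()
	ua := taggedUserAgent(id)
	fmt.Println(alertMatchesDetonation("user-agent: "+ua, id)) // true
}
```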
See examples for complete usage examples.
```go
threatest := Threatest()

threatest.Scenario("AWS console login").
	WhenDetonating(StratusRedTeamTechnique("aws.initial-access.console-login-without-mfa")).
	Expect(DatadogSecuritySignal("AWS Console login without MFA").WithSeverity("medium")).
	WithTimeout(15 * time.Minute)

assert.NoError(t, threatest.Run())
```
```go
ssh, _ := NewSSHCommandExecutor("test-box", "", "")

threatest := Threatest()

threatest.Scenario("curl to metadata service").
	WhenDetonating(NewCommandDetonator(ssh, "curl http://169.254.169.254 --connect-timeout 1")).
	Expect(DatadogSecuritySignal("EC2 Instance Metadata Service Accessed via Network Utility"))

assert.NoError(t, threatest.Run())
```
This repository is a documentation of my adventures with Stratus Red Team - a tool for adversary emulation for the cloud.
Stratus Red Team is "Atomic Red Team for the cloud", allowing you to emulate offensive attack techniques in a granular and self-contained manner.
We run the attacks covered in the Stratus Red Team repository one by one on our AWS account. To monitor them, we use CloudTrail and CloudWatch for logging and ingest these logs into SumoLogic for further analysis.
Attack | Description | Link |
---|---|---|
aws.credential-access.ec2-get-password-data | Retrieve EC2 Password Data | Link |
aws.credential-access.ec2-steal-instance-credentials | Steal EC2 Instance Credentials | Link |
aws.credential-access.secretsmanager-retrieve-secrets | Retrieve a High Number of Secrets Manager secrets | Link |
aws.credential-access.ssm-retrieve-securestring-parameters | Retrieve And Decrypt SSM Parameters | Link |
aws.defense-evasion.cloudtrail-delete | Delete CloudTrail Trail | Link |
aws.defense-evasion.cloudtrail-event-selectors | Disable CloudTrail Logging Through Event Selectors | Link |
aws.defense-evasion.cloudtrail-lifecycle-rule | CloudTrail Logs Impairment Through S3 Lifecycle Rule | Link |
aws.defense-evasion.cloudtrail-stop | Stop CloudTrail Trail | Link |
aws.defense-evasion.organizations-leave | Attempt to Leave the AWS Organization | Link |
aws.defense-evasion.vpc-remove-flow-logs | Remove VPC Flow Logs | Link |
aws.discovery.ec2-enumerate-from-instance | Execute Discovery Commands on an EC2 Instance | Link |
aws.discovery.ec2-download-user-data | Download EC2 Instance User Data | TBD |
aws.exfiltration.ec2-security-group-open-port-22-ingress | Open Ingress Port 22 on a Security Group | Link |
aws.exfiltration.ec2-share-ami | Exfiltrate an AMI by Sharing It | Link |
aws.exfiltration.ec2-share-ebs-snapshot | Exfiltrate EBS Snapshot by Sharing It | Link |
aws.exfiltration.rds-share-snapshot | Exfiltrate RDS Snapshot by Sharing | Link |
aws.exfiltration.s3-backdoor-bucket-policy | Backdoor an S3 Bucket via its Bucket Policy | Link |
aws.persistence.iam-backdoor-role | Backdoor an IAM Role | Link |
aws.persistence.iam-backdoor-user | Create an Access Key on an IAM User | TBD |
aws.persistence.iam-create-admin-user | Create an administrative IAM User | TBD |
aws.persistence.iam-create-user-login-profile | Create a Login Profile on an IAM User | TBD |
aws.persistence.lambda-backdoor-function | Backdoor Lambda Function Through Resource-Based Policy | TBD |