FreshRSS

Yesterday β€” May 17th 2024

Accessing Secure Client Cloud Management after the SecureX EoL

Secure Client Management capabilities aren’t going away with the SecureX EOL, the functionality is simply migrating to the Cisco Security Cloud Control service.
Before yesterday

(Cyber) Risk = Probability of Occurrence x Damage

Here’s How to Enhance Your Cyber Resilience with CVSS In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim to enhance vulnerability assessment for both industry and the public. This latest version introduces additional metrics like safety and automation to address criticism of lacking granularity while

SHQ Response Platform and Risk Centre to Enable Management and Analysts Alike

In the last decade, there has been a growing disconnect between front-line analysts and senior management in IT and Cybersecurity. Well-documented challenges facing modern analysts revolve around a high volume of alerts, false positives, poor visibility of technical environments, and analysts spending too much time on manual tasks. The Impact of Alert Fatigue and False Positives  Analysts

What's the Right EDR for You?

A guide to finding the right endpoint detection and response (EDR) solution for your business’ unique needs. Cybersecurity has become an ongoing battle between hackers and small- and mid-sized businesses. Though perimeter security measures like antivirus and firewalls have traditionally served as the frontlines of defense, the battleground has shifted to endpoints. This is why endpoint

Researchers Uncover 'LLMjacking' Scheme Targeting Cloud-Hosted AI Models

Cybersecurity researchers have discovered a novel attack that employs stolen cloud credentials to target cloud-hosted large language model (LLM) services with the goal of selling access to other threat actors. The attack technique has been codenamed LLMjacking by the Sysdig Threat Research Team. "Once initial access was obtained, they exfiltrated cloud credentials and gained

Empowering Cybersecurity with AI: The Future of Cisco XDR

Learn how the Cisco AI Assistant in XDR adds powerful functionality to Cisco XDR that increases defenders' efficiency and accuracy.

How to Conduct Advanced Static Analysis in a Malware Sandbox

Sandboxes are synonymous with dynamic malware analysis. They help to execute malicious files in a safe virtual environment and observe their behavior. However, they also offer plenty of value in terms of static analysis. See these five scenarios where a sandbox can prove to be a useful tool in your investigations. Detecting Threats in PDFs PDF files are frequently exploited by threat actors to

Synergizing Advanced Identity Threat Detection & Response Solutions

In an ever-evolving digital landscape, cybersecurity has become the cornerstone of organizational success. With the proliferation of sophisticated cyber threats, businesses must adopt a multi-layered… Read more on Cisco Blogs

Human vs. Non-Human Identity in SaaS

In today's rapidly evolving SaaS environment, the focus is on human users. This is one of the most compromised areas in SaaS security management and requires strict governance of user roles and permissions, monitoring of privileged users, their level of activity (dormant, active, hyperactive), their type (internal/external), whether they are joiners, movers, or leavers, and more. Not

To win against cyber attackers at Super Bowl LVIII, the NFL turns to Cisco XDR

On Sunday, February 11, over 160 million viewers from around the globe watched Super Bowl LVIII, making it one of the most viewed annual sporting events. It is also a good bet that a record number of… Read more on Cisco Blogs

How to Use Tines's SOC Automation Capability Matrix

Created by John Tuckner and the team at workflow and automation platform Tines, the SOC Automation Capability Matrix (SOC ACM) is a set of techniques designed to help security operations teams understand their automation capabilities and respond more effectively to incidents. A customizable, vendor-agnostic tool featuring lists of automation opportunities, it's been shared

Why Are Compromised Identities the Nightmare to IR Speed and Efficiency?

Incident response (IR) is a race against time. You engage your internal or external team because there's enough evidence that something bad is happening, but you’re still blind to the scope, the impact, and the root cause. The common set of IR tools and practices provides IR teams with the ability to discover malicious files and outbound network connections. However, the identity aspect - namely

Wazuh in the Cloud Era: Navigating the Challenges of Cybersecurity

Cloud computing has innovated how organizations operate and manage IT operations, such as data storage, application deployment, networking, and overall resource management. The cloud offers scalability, adaptability, and accessibility, enabling businesses to achieve sustainable growth. However, adopting cloud technologies into your infrastructure presents various cybersecurity risks and

Unifying Security Tech Beyond the Stack: Integrating SecOps with Managed Risk and Strategy

Cybersecurity is an infinite journey in a digital landscape that never ceases to change. According to Ponemon Institute1, β€œonly 59% of organizations say their cybersecurity strategy has changed over the past two years.” This stagnation in strategy adaptation can be traced back to several key issues. Talent Retention Challenges: The cybersecurity field is rapidly advancing, requiring a

Building a Robust Threat Intelligence with Wazuh

Threat intelligence refers to gathering, processing, and analyzing cyber threats, along with proactive defensive measures aimed at strengthening security. It enables organizations to gain a comprehensive insight into historical, present, and anticipated threats, providing context about the constantly evolving threat landscape. Importance of threat intelligence in the cybersecurity ecosystem

Scaling Security Operations with Automation

In an increasingly complex and fast-paced digital landscape, organizations strive to protect themselves from various security threats. However, limited resources often hinder security teams when combatting these threats, making it difficult to keep up with the growing number of security incidents and alerts. Implementing automation throughout security operations helps security teams alleviate

Stop Identity Attacks: Discover the Key to Early Threat Detection

Identity and Access Management (IAM) systems are a staple to ensure only authorized individuals or entities have access to specific resources in order to protect sensitive information and secure business assets. But did you know that today over 80% of attacks now involve identity, compromised credentials or bypassing the authentication mechanism? Recent breaches at MGM and Caesars have

How to Handle Retail SaaS Security on Cyber Monday

If forecasters are right, over the course of today, consumers will spend $13.7 billion. Just about every click, sale, and engagement will be captured by a CRM platform. Inventory applications will trigger automated re-orders; communication tools will send automated email and text messages confirming sales and sharing shipping information. SaaS applications supporting retail efforts

Three Ways Varonis Helps You Fight Insider Threats

What do basketball teams, government agencies, and car manufacturers have in common? Each one has been breached, having confidential, proprietary, or private information stolen and exposed by insiders. In each case, the motivations and methods varied, but the risk remained the same: insiders have access to too much data with too few controls. Insider threats continue to prove difficult for

Top 5 Marketing Tech SaaS Security Challenges

Effective marketing operations today are driven by the use of Software-as-a-Service (SaaS) applications. Marketing apps such as Salesforce, Hubspot, Outreach, Asana, Monday, and Box empower marketing teams, agencies, freelancers, and subject matter experts to collaborate seamlessly on campaigns and marketing initiatives. These apps serve as the digital command centers for marketing

Crawlector - Threat Hunting Framework Designed For Scanning Websites For Malicious Objects

By: Zion3R


Crawlector (a portmanteau of Crawler and Detector) is a threat hunting framework designed for scanning websites for malicious objects.

Note-1: The framework was first presented at the No Hat conference in Bergamo, Italy on October 22nd, 2022 (Slides, YouTube Recording). Also, it was presented for the second time at the AVAR conference, in Singapore, on December 2nd, 2022.

Note-2: The accompanying tool EKFiddle2Yara (a tool that takes EKFiddle rules and converts them into Yara rules), mentioned in the talk, was also released at both conferences.


Features

  • Supports spidering websites to find additional links for scanning (up to 2 levels only)
  • Integrates Yara as a backend engine for rule scanning
  • Supports online and offline scanning
  • Supports crawling for domains/sites digital certificates
  • Supports querying URLhaus for finding malicious URLs on the page
  • Supports hashing the page's content with TLSH (Trend Micro Locality Sensitive Hash), and other standard cryptographic hash functions such as md5, sha1, sha256, and ripemd128, among others
    • TLSH won't return a value if the page size is less than 50 bytes or if the data doesn't contain a sufficient amount of randomness
  • Supports querying the rating and category of every URL
  • Supports expanding on a given site, by attempting to find all available TLDs and/or subdomains for the same domain
    • This feature uses the Omnisint Labs API (this site is down as of March 10, 2023) and RapidAPI APIs
    • TLD expansion implementation is native
    • This feature along with the rating and categorization, provides the capability to find scam/phishing/malicious domains for the original domain
  • Supports domain resolution (IPv4 and IPv6)
  • Saves scanned website pages for later scanning (pages can be saved zip-compressed)
  • The entirety of the framework’s settings is controlled via a single customizable configuration file
  • All scanning sessions are saved into a well-structured CSV file with a plethora of information about the website being scanned, in addition to information about the Yara rules that have triggered
  • All HTTP(S) communications are proxy-aware
  • One executable
  • Written in C++

URLHaus Scanning & API Integration

This checks every page being scanned for known malicious URLs. The framework can query the list of malicious URLs either from the URLHaus server (configuration: url_list_web) or from a file on disk (configuration: url_list_file); if the latter is specified, it takes precedence over the former.

It works by searching the content of every page against all URL entries in url_list_web or url_list_file, checking for all occurrences. Additionally, upon a match, and if the configuration option check_url_api is set to true, Crawlector will send a POST request to the API URL set in the url_api configuration option, which returns a JSON object with extra information about a matching URL. Such information includes urlh_status (ex., online, offline, unknown), urlh_threat (ex., malware_download), urlh_tags (ex., elf, Mozi), and urlh_reference (ex., https://urlhaus.abuse.ch/url/1116455/). This information is included in the log file cl_mlog_<current_date><current_time><(pm|am)>.csv (see below) only if check_url_api is set to true. Otherwise, the log file will include the columns urlh_url (list of matching malicious URLs) and urlh_hit (number of occurrences for every matching malicious URL), conditional on whether check_url is set to true.

The URLHaus feature can be disabled entirely by setting the configuration option check_url to false.

It is important to note that this feature can slow scanning considerably, given the huge number of malicious URLs (~130 million entries at the time of this writing) that need to be checked, and the time it takes to fetch extra information from the URLHaus server (if the option check_url_api is set to true).
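Conceptually, the per-page matching step described above works like this. This is a Python sketch of the logic only; Crawlector itself is written in C++, and match_malicious_urls is a hypothetical name, not part of the tool:

```python
# Illustrative sketch of the URLHaus matching logic: search the page body
# for every known-malicious URL and record the number of occurrences.

def match_malicious_urls(page_content, url_list):
    """Return {malicious_url: occurrence_count} for URLs found in the page."""
    hits = {}
    for url in url_list:
        count = page_content.count(url)
        if count:
            hits[url] = count
    return hits

page = "<a href='http://evil.example/mozi.m'>x</a> http://evil.example/mozi.m"
known_bad = ["http://evil.example/mozi.m", "http://other.example/elf"]
print(match_malicious_urls(page, known_bad))
# {'http://evil.example/mozi.m': 2}
```

In the real tool, the resulting hit counts map onto the urlh_url and urlh_hit log columns; the optional API lookup would then run once per matching URL.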

Files and Folders Structures

  1. \cl_sites
    • this is where the list of sites to be visited or crawled is stored.
    • supports multiple files and directories.
  2. \crawled
    • where all crawled/spidered URLs are saved to a text file.
  3. \certs
    • where all domains/sites digital certificates are stored (in .der format).
  4. \results
    • where visited websites are saved.
  5. \pg_cache
    • program cache for sites that are not part of the spider functionality.
  6. \cl_cache
    • crawler cache for sites that are part of the spider functionality.
  7. \yara_rules
    • this is where all Yara rules are stored. All rules that exist in this directory will be loaded by the engine, parsed, validated, and evaluated before execution.
  8. cl_config.ini
    • this file contains all the configuration parameters that can be adjusted to influence the behavior of the framework.
  9. cl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about visited websites
    • date, time, the status of Yara scanning, list of fired Yara rules with the offsets and lengths of each of the matches, id, URL, HTTP status code, connection status, HTTP headers, page size, the path to a saved page on disk, and other columns related to URLHaus results.
    • file name is unique per session.
  10. cl_offl_mlog_<current_date><current_time><(pm|am)>.csv
    • log file that contains information about files scanned offline.
    • list of fired Yara rules with the offsets and lengths of the matches, and path to a saved page on disk.
    • file name is unique per session.
  11. cl_certs_<current_date><current_time><(pm|am)>.csv
    • log file that contains a plethora of information about found digital certificates
  12. \expanded\exp_subdomain_<pm|am>.txt
    • contains discovered subdomains (part of the [site] section)
  13. \expanded\exp_tld_<pm|am>.txt
    • contains discovered domains (part of the [site] section)

Configuration File (cl_config.ini)

It is very important that you familiarize yourself with the configuration file cl_config.ini before running any session. All of the sections and parameters are documented in the configuration file itself.

The Yara offline scanning feature is a standalone option, meaning that, if enabled, Crawlector will execute only this feature, irrespective of other enabled features. The same is true for the crawling for domains/sites digital certificates feature. Either way, it is recommended that you disable all unused features in the configuration file.

  • Depending on the configuration settings (log_to_file or log_to_cons), if a Yara rule references only a module's attributes (ex., PE, ELF, Hash, etc...), then Crawlector will display only the rule's name upon a match, excluding offset and length data.

Sites Format Pattern

To visit/scan a website, the list of URLs must be stored in text files, in the directory β€œcl_sites”.

Crawlector accepts three types of URLs:

  1. Type 1: one URL per line
    • Crawlector will assign a unique name to every URL, derived from the URL hostname
  2. Type 2: one URL per line, with a unique name [a-zA-Z0-9_-]{1,128} = <url>
  3. Type 3: for the spider functionality, a unique format is used. One URL per line is as follows:

<id>[depth:<0|1>-><\d+>,total:<\d+>,sleep:<\d+>] = <url>

For example,

mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com

which is equivalent to: mfmokbel[d:1->3,t:10,s:0] = https://www.mfmokbel.com

where, <id> := [a-zA-Z0-9_-]{1,128}

depth, total and sleep, can also be replaced with their shortened versions d, t and s, respectively.

  • depth: the spider supports going two levels deep for finding additional URLs (this is a design decision).
  • A value of 0 indicates a depth of level 1, with the value that comes after the β€œ->” ignored.
  • A depth of level-1 is controlled by the total parameter. So, first, the spider tries to find that many additional URLs off of the specified URL.
  • The value after the β€œ->” represents the maximum number of URLs to spider for each of the URLs found (as per the total parameter value).
  • A value of 1, indicates a depth of level 2, with the value that comes after the β€œ->” representing the maximum number of URLs to find, for every URL found per the total parameter. For clarification, and as shown in the example above, first, the spider will look for 10 URLs (as specified in the total parameter), and then, each of those found URLs will be spidered up to a max of 3 URLs; therefore, and in the best-case scenario, we would end up with 40 (10 + (10*3)) URLs.
  • The sleep parameter takes an integer value representing the number of milliseconds to sleep between every HTTP request.
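The Type 3 format and its best-case URL arithmetic can be sketched as follows. This is a Python illustration only; Crawlector is written in C++, and parse_site_line is a hypothetical helper, not part of the tool:

```python
import re

# Hypothetical parser for the Type 3 line format described above:
#   <id>[depth:<0|1>-><n>,total:<n>,sleep:<n>] = <url>
# The shortened keys d, t, and s are accepted as well.
LINE_RE = re.compile(
    r"^(?P<id>[a-zA-Z0-9_-]{1,128})"
    r"\[(?:depth|d):(?P<depth>[01])->(?P<per_url>\d+),"
    r"(?:total|t):(?P<total>\d+),"
    r"(?:sleep|s):(?P<sleep>\d+)\]\s*=\s*(?P<url>\S+)$"
)

def parse_site_line(line):
    m = LINE_RE.match(line.strip())
    if not m:
        raise ValueError("not a Type 3 line: " + line)
    d = {k: (v if k in ("id", "url") else int(v)) for k, v in m.groupdict().items()}
    # Best case: `total` level-1 URLs, each spidered up to `per_url` more
    # when depth is 1; with depth 0 the value after "->" is ignored.
    d["max_urls"] = d["total"] + d["total"] * d["per_url"] if d["depth"] == 1 else d["total"]
    return d

info = parse_site_line("mfmokbel[depth:1->3,total:10,sleep:0] = https://www.mfmokbel.com")
print(info["max_urls"])  # 40, i.e. 10 + 10*3
```

The same regex accepts the shortened form, so mfmokbel[d:1->3,t:10,s:0] = https://www.mfmokbel.com parses identically.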

Note 1: A Type 3 URL can be turned into a Type 1 URL by setting the configuration parameter live_crawler to false, in the spider section of the configuration file.

Note 2: Empty lines and lines that start with β€œ;” or β€œ//” are ignored.

The Spider Functionality

The spider functionality is what gives Crawlector the capability to find additional links on the targeted page. The spider supports the following features:

  • The URL has to be of Type 3 for the spider functionality to work
  • You may specify a list of wildcarded patterns (pipe-delimited) to prevent spidering matching URLs via the exclude_url config option. For example, *.zip|*.exe|*.rar|*.7z|*.pdf|*.bat|*.db
  • You may specify a list of wildcarded patterns (pipe-delimited) to spider only URLs that match the pattern via the include_url config option. For example, */checkout/*|*/products/*
  • You may exclude HTTPS URLs via the config option exclude_https
  • You may account for outbound/external links as well, for the main page only, via the config option add_ext_links. This feature honours the exclude_url and include_url config options.
  • You may account for outbound/external links of the main page only, excluding all other URLs, via the config option ext_links_only. This feature honours the exclude_url and include_url config options.
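The include/exclude filtering above behaves like shell-style wildcard matching over the full URL. A Python sketch of that rule (url_allowed is a hypothetical helper; Crawlector implements this natively in C++):

```python
from fnmatch import fnmatch

# Sketch of the pipe-delimited wildcard filters described above.
# exclude_url and include_url mirror the configuration option names;
# exclusion wins, and an include list restricts spidering to matches only.

def url_allowed(url, exclude_url="", include_url=""):
    excludes = [p for p in exclude_url.split("|") if p]
    includes = [p for p in include_url.split("|") if p]
    if any(fnmatch(url, p) for p in excludes):
        return False
    if includes:
        return any(fnmatch(url, p) for p in includes)
    return True

print(url_allowed("https://a.example/setup.exe", exclude_url="*.zip|*.exe|*.rar"))  # False
print(url_allowed("https://a.example/products/1", include_url="*/checkout/*|*/products/*"))  # True
```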

Site Ranking Functionality

  • This is for checking the ranking of a website
  • You give it a CSV file with a list of websites and their rankings
  • Services that provide website ranking lists include Alexa top-1m (discontinued as of May 2022), Cisco Umbrella, Majestic, Quantcast, Farsight and Tranco, among others
  • CSV file format (2 columns only): the first column holds the ranking, and the second column holds the domain name
  • If a cell contains quoted data, it'll be automatically dequoted
  • Line breaks aren't allowed in quoted text
  • Leading and trailing spaces are trimmed from cells read
  • Empty and comment lines are skipped
  • The section site_ranking in the configuration file provides some options to alter how the CSV file is to be read
  • The performance of this query is dependent on the number of records in the CSV file
  • Crawlector compares every entry in the CSV file against the domain being investigated, and not the other way around
  • Only the registered/pay-level domain is compared
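The CSV handling rules above (dequoting, trimming, skipping empty and comment lines) can be sketched as follows. This is a Python illustration only; Crawlector implements this natively in C++, and load_ranking is a hypothetical name:

```python
# Sketch of reading a 2-column ranking CSV: rank,domain with optional
# quoting, trimmed cells, and blank/comment lines skipped.

def load_ranking(text):
    ranking = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith((";", "#", "//")):
            continue
        rank, domain = (cell.strip().strip('"') for cell in line.split(",", 1))
        ranking[domain] = int(rank)
    return ranking

sample = '1,google.com\n"2" , "youtube.com"\n\n; comment\n3,facebook.com\n'
print(load_ranking(sample))
# {'google.com': 1, 'youtube.com': 2, 'facebook.com': 3}
```

A lookup against the domain being investigated would then iterate over these entries, comparing only the registered/pay-level domain, as noted above.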

Finding TLDs and Subdomains - [site] Section

  • The site section provides the capability to expand on a given site, by attempting to find all available top-level domains (TLDs) and/or subdomains for the same domain. If found, new tlds/subdomains will be checked like any other domain
  • This feature uses the Omnisint Labs (https://omnisint.io/) and RapidAPI APIs
  • Omnisint Labs API returns subdomains and tlds, whereas RapidAPI returns only subdomains (the Omnisint Labs API is down as of March 10, 2023, however, the implementation is still available in case the site is back up)
  • For RapidAPI, you need a valid "Domains records" API key that you can request from RapidAPI, and plug it into the key rapid_api_key in the configuration file
  • With find_tlds enabled, in addition to Omnisint Labs API tlds results, the framework attempts to find other active/registered domains by going through every tld entry, either, in the tlds_file or tlds_url
  • If tlds_url is set, it should point to a url that hosts tlds, each one on a new line (lines that start with either of the characters ';', '#' or '//' are ignored)
  • tlds_file, holds the filename that contains the list of tlds (same as for tlds_url; only the tld is present, excluding the '.', for ex., "com", "org")
  • If tlds_file is set, it takes precedence over tlds_url
  • tld_dl_time_out, this is for setting the maximum timeout for the dnslookup function when attempting to check if the domain in question resolves or not
  • tld_use_connect, this option enables the functionality to connect to the domain in question over a list of ports, defined in the option tlds_connect_ports
  • The option tlds_connect_ports accepts a list of ports, comma separated, or a list of ranges, such as 25-40,90-100,80,443,8443 (range start and end are inclusive)
    • tld_con_time_out, this is for setting the maximum timeout for the connect function
  • tld_con_use_ssl, enable/disable the use of ssl when attempting to connect to the domain
  • If save_to_file_subd is set to true, discovered subdomains will be saved to "\expanded\exp_subdomain_<pm|am>.txt"
  • If save_to_file_tld is set to true, discovered domains will be saved to "\expanded\exp_tld_<pm|am>.txt"
  • If exit_here is set to true, then Crawlector bails out after executing this [site] function, irrespective of other enabled options. It means found sites won't be crawled/spidered
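The port-list syntax accepted by tlds_connect_ports can be illustrated with a short sketch; this is a Python rendering of the parsing rule only, not Crawlector code (the tool is C++), and expand_ports is a hypothetical name:

```python
# Expand a tlds_connect_ports value such as "25-40,90-100,80,443,8443"
# into a flat list of ports; range start and end are inclusive.

def expand_ports(spec):
    ports = []
    for part in spec.split(","):
        if "-" in part:
            start, end = (int(x) for x in part.split("-"))
            ports.extend(range(start, end + 1))
        else:
            ports.append(int(part))
    return ports

print(len(expand_ports("25-40,90-100,80,443,8443")))  # 30, i.e. 16 + 11 + 3
```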

Design Considerations

  • A URL page is retrieved by sending a GET request to the server, reading the server response body, and passing it to Yara engine for detection.
  • Some of the GET request attributes are defined in the [default] section in the configuration file, including, the User-Agent and Referer headers, and connection timeout, among other options.
  • Although Crawlector logs a session's data to a CSV file, converting it to an SQL file is recommended for better performance, manipulation and retrieval of the data. This becomes evident when you’re crawling thousands of domains.
  • Repeated domains/urls in the cl_sites are allowed.

Limitations

  • Single threaded
  • Static detection (no dynamic evaluation of a given page's content)
  • No headless browser support, yet!

Third-party libraries used

Contributing

Open for pull requests and issues. Comments and suggestions are greatly appreciated.

Author

Mohamad Mokbel (@MFMokbel)



The New 80/20 Rule for SecOps: Customize Where it Matters, Automate the Rest

There is a seemingly never-ending quest to find the right security tools that offer the right capabilities for your organization. SOC teams tend to spend about a third of their day on events that don’t pose any threat to their organization, and this has accelerated the adoption of automated solutions to take the place of (or augment) inefficient and cumbersome SIEMs. With an estimated 80% of

How to Keep Your Business Running in a Contested Environment

When organizations start incorporating cybersecurity regulations and cyber incident reporting requirements into their security protocols, it's essential for them to establish comprehensive plans for preparation, mitigation, and response to potential threats. At the heart of your business lies your operational technology and critical systems. This places them at the forefront of cybercriminal

Webinar: Identity Threat Detection & Response (ITDR) – Rips in Your Identity Fabric

In today's digital age, SaaS applications have become the backbone of modern businesses. They streamline operations, enhance productivity, and foster innovation. But with great power comes great responsibility. As organizations integrate more SaaS applications into their workflows, they inadvertently open the door to a new era of security threats. The stakes? Your invaluable data and the trust

Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

In today's digital landscape, your business data is more than just numbersβ€”it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic,

How MDR Helps Solve the Cybersecurity Talent Gap

How do you overcome today's talent gap in cybersecurity? This is a crucial issue β€” particularly when you find executive leadership or the board asking pointed questions about your security team's ability to defend the organization against new and current threats. This is why many security leaders find themselves turning to managed security services like MDR (managed detection and response),

How Wazuh Improves IT Hygiene for Cyber Security Resilience

IT hygiene is a security best practice that ensures that digital assets in an organization's environment are secure and running properly. Good IT hygiene includes vulnerability management, security configuration assessments, maintaining asset and system inventories, and comprehensive visibility into the activities occurring in an environment. As technology advances and the tools used by

Why Your Detection-First Security Approach Isn't Working

Stopping new and evasive threats is one of the greatest challenges in cybersecurity. This is among the biggest reasons why attacks increased dramatically in the past year yet again, despite the estimated $172 billion spent on global cybersecurity in 2022. Armed with cloud-based tools and backed by sophisticated affiliate networks, threat actors can develop new and evasive malware more quickly

Protecting your business with Wazuh: The open source security platform

Today, businesses face a variety of security challenges like cyber attacks, compliance requirements, and endpoint security administration. The threat landscape constantly evolves, and it can be overwhelming for businesses to keep up with the latest security trends. Security teams use processes and security solutions to curb these challenges. These solutions include firewalls, antiviruses, data

New ScrubCrypt Crypter Used in Cryptojacking Attacks Targeting Oracle WebLogic

The infamous cryptocurrency miner group called 8220 Gang has been observed using a new crypter called ScrubCrypt to carry out cryptojacking operations. According to Fortinet FortiGuard Labs, the attack chain commences with the successful exploitation of susceptible Oracle WebLogic servers to download a PowerShell script that contains ScrubCrypt. Crypters are a type of software that can encrypt,

Cisco Secure Cloud Analytics – What’s New

Nowadays, β€œcybersecurity” is the buzzword du jour, infiltrating every organization, invited or not. Furthermore, this is the case around the world, where an increasing proportion of all services now have an online presence, prompting businesses to reconsider the security of their systems. This, however, is not news to Cisco, as we anticipated it and were prepared to serve and assist clients worldwide.

Secure Cloud Analytics, part of the Cisco Threat Detection and Response (TD&R) portfolio, is an industry-leading tool for tackling core Network Detection and Response (NDR) use cases. These workflows focus primarily on threat detection and on how security teams may recognize the most critical issues around hunting and forensic investigations to improve their mean time to respond.

Over the last year, the product team worked tirelessly to strengthen the NDR offering. New telemetry sources, more advanced detections, and observations supplement the context of essential infrastructure aspects as well as usability and interoperability improvements. Additionally, the long-awaited solution Cisco Telemetry Broker is now available, providing a richer SecOps experience across the product.

MITRE ATT&CK framework alerting capabilities

As part of our innovation story on alerting capabilities, Secure Cloud Analytics now features new detections tied to the MITRE ATT&CK framework such as Worm Propagation, Suspicious User Agent, and Azure OAuth Bypass.

Additionally, various new roles and observations were added to Secure Cloud Analytics to improve user alerts, which are foundational pieces of our detections. Alerts now include a direct link to AWS assets and their VPC, as well as direct access to Azure Security Groups, enabling further investigation through simplified workflows. In addition, the public cloud providers are now included in coverage reports that provide a gap analysis to determine which accounts are covered. Alert Details offers new device information, such as host names, subnets, and role metrics that emphasize detection techniques. To better configure alerts, we are adding telemetry to gain contextual reference on their priority. Furthermore, the ingest process has grown more robust thanks to data from the Talos intelligence feed and ISE.

NDR: A Force Multiplier to Cisco XDR Strategy

The highly anticipated SecureX integration is now available in a single click, with no API credentials required and smooth interaction between the two platforms. Most importantly, Secure Cloud Analytics alerts may now be configured to automatically publish as incidents to the SecureX Incident Manager. The Talos Intelligence Watchlist Hits Alert is on by default due to its prominence among the many alert types.

Among other enhancements to graphs and visualizations, the Encrypted Traffic widget allows for an hourly breakdown of data. Simultaneously, the Device Report contains traffic data for a specific timestamp, which may be downloaded as a CSV. Furthermore, the Event Viewer now displays bi-directional session traffic to provide even more context to Secure Cloud Analytics flows, as well as additional columns to help with telemetry log comprehension: Cloud Account, Cloud Region, Cloud VPC, Sensor and Exporter.

New Sensor Data to Quickly Detect and Hunt Threats

On-premises sensors now provide additional telemetry on the overview page and a dedicated page where users can look further into the telemetry flowing through them in Sensor Health. To optimize the Secure Cloud Analytics deployment and improve the user experience, sensors may now be deleted from the interface.

Regarding telemetry, Cisco Telemetry Broker can now serve as a sensor in Secure Cloud Analytics, so users can identify and respond to threats faster with additional context sent to Secure Cloud Analytics. In addition, there will soon be support for other telemetry types besides IPFIX and NetFlow.

As we can see from the vast number of new additions to Secure Cloud Analytics, the product team has been working hard to understand the latest market trends, listen to customers’ requests, and build one of the finest SaaS products in the NDR industry segment. These efforts strongly underline how Secure Cloud Analytics can solve some of the most important challenges in the NDR space around visibility, fidelity of alerts, and deployment complexity by providing a cloud-hosted platform that can offer insights into on-premises and cloud environments simultaneously from the same dashboard. Learn more about new features that allow Secure Cloud Analytics to detect, analyze, and respond to the most critical dangers to your company much more quickly.



Threatest - Threatest Is A Go Framework For End-To-End Testing Threat Detection Rules


Threatest is a Go framework for testing threat detection end-to-end.

Threatest allows you to detonate an attack technique, and verify that the alert you expect was generated in your favorite security platform.

Read the announcement blog post: https://securitylabs.datadoghq.com/articles/threatest-end-to-end-testing-threat-detection/


Concepts

Detonators

A detonator describes how and where an attack technique is executed.

Supported detonators:

  • Local command execution
  • SSH command execution
  • Stratus Red Team
  • AWS detonator

Alert matchers

An alert matcher is a platform-specific integration that can check if an expected alert was triggered.

Supported alert matchers:

  • Datadog security signals

Detonation and alert correlation

Each detonation is assigned a UUID. This UUID is reflected in the detonation and used to ensure that the matched alert corresponds exactly to this detonation.

The way this is done depends on the detonator; for instance, Stratus Red Team and the AWS Detonator inject it in the user-agent; the SSH detonator uses a parent process containing the UUID.

Sample usage

See examples for complete usage example.

Testing Datadog Cloud SIEM signals triggered by Stratus Red Team

threatest := Threatest()

threatest.Scenario("AWS console login").
	WhenDetonating(StratusRedTeamTechnique("aws.initial-access.console-login-without-mfa")).
	Expect(DatadogSecuritySignal("AWS Console login without MFA").WithSeverity("medium")).
	WithTimeout(15 * time.Minute)

assert.NoError(t, threatest.Run())

Testing Datadog Cloud Workload Security signals triggered by running commands over SSH

ssh, _ := NewSSHCommandExecutor("test-box", "", "")

threatest := Threatest()

threatest.Scenario("curl to metadata service").
	WhenDetonating(NewCommandDetonator(ssh, "curl http://169.254.169.254 --connect-timeout 1")).
	Expect(DatadogSecuritySignal("EC2 Instance Metadata Service Accessed via Network Utility"))

assert.NoError(t, threatest.Run())


AWS-Threat-Simulation-and-Detection - Playing Around With Stratus Red Team (Cloud Attack Simulation Tool) And SumoLogic


This repository is a documentation of my adventures with Stratus Red Team - a tool for adversary emulation for the cloud.

Stratus Red Team is "Atomic Red Team for the cloud," allowing you to emulate offensive attack techniques in a granular and self-contained manner.


We run the attacks covered in the Stratus Red Team repository one by one on our AWS account. To monitor them, we use CloudTrail and CloudWatch for logging and ingest these logs into SumoLogic for further analysis.

Attack | Description | Link
aws.credential-access.ec2-get-password-data | Retrieve EC2 Password Data | Link
aws.credential-access.ec2-steal-instance-credentials | Steal EC2 Instance Credentials | Link
aws.credential-access.secretsmanager-retrieve-secrets | Retrieve a High Number of Secrets Manager secrets | Link
aws.credential-access.ssm-retrieve-securestring-parameters | Retrieve And Decrypt SSM Parameters | Link
aws.defense-evasion.cloudtrail-delete | Delete CloudTrail Trail | Link
aws.defense-evasion.cloudtrail-event-selectors | Disable CloudTrail Logging Through Event Selectors | Link
aws.defense-evasion.cloudtrail-lifecycle-rule | CloudTrail Logs Impairment Through S3 Lifecycle Rule | Link
aws.defense-evasion.cloudtrail-stop | Stop CloudTrail Trail | Link
aws.defense-evasion.organizations-leave | Attempt to Leave the AWS Organization | Link
aws.defense-evasion.vpc-remove-flow-logs | Remove VPC Flow Logs | Link
aws.discovery.ec2-enumerate-from-instance | Execute Discovery Commands on an EC2 Instance | Link
aws.discovery.ec2-download-user-data | Download EC2 Instance User Data | TBD
aws.exfiltration.ec2-security-group-open-port-22-ingress | Open Ingress Port 22 on a Security Group | Link
aws.exfiltration.ec2-share-ami | Exfiltrate an AMI by Sharing It | Link
aws.exfiltration.ec2-share-ebs-snapshot | Exfiltrate EBS Snapshot by Sharing It | Link
aws.exfiltration.rds-share-snapshot | Exfiltrate RDS Snapshot by Sharing | Link
aws.exfiltration.s3-backdoor-bucket-policy | Backdoor an S3 Bucket via its Bucket Policy | Link
aws.persistence.iam-backdoor-role | Backdoor an IAM Role | Link
aws.persistence.iam-backdoor-user | Create an Access Key on an IAM User | TBD
aws.persistence.iam-create-admin-user | Create an administrative IAM User | TBD
aws.persistence.iam-create-user-login-profile | Create a Login Profile on an IAM User | TBD
aws.persistence.lambda-backdoor-function | Backdoor Lambda Function Through Resource-Based Policy | TBD
Credits

  1. Awesome team at Datadog, Inc. for Stratus Red Team here
  2. Hacking the Cloud AWS
  3. Falcon Force team blog


Threat Detection Software: A Deep Dive

As the threat landscape evolves and multiplies with more advanced attacks than ever, defending against these modern cyber threats is a monumental challenge for almost any organization. Threat detection is about an organization’s ability to accurately identify threats, be it to the network, an endpoint, another asset or an application – including cloud infrastructure and assets. At scale, threat